Handle VMFS5's double indirect pointer for files > 256G #14
Conversation
glandium#12 This provides read support for files > 256 GB, made possible by vSphere 5's addition of double indirect block pointers. It uses a double indirect lookup if the file has a block size of 1 MB and is over the VMFS size threshold for using double indirect blocks. Perhaps there's a cleaner way of determining the use of double indirect addressing from the inode. We may also want to implement a block pointer cache like VMware introduced with this feature; however, given the use cases of this software, it may not be necessary.
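For readers unfamiliar with double indirect addressing, the lookup described above boils down to simple index arithmetic. The sketch below is illustrative only (the function name, and the assumption of 1024 addresses per pointer block, are mine, not the vmfs-tools code): it maps a byte offset to a slot in the inode's primary pointer-block list and a slot within that pointer block.

```c
#include <stdint.h>

/* Illustrative sketch, not the vmfs-tools API: resolving a file offset
 * through a double indirect pointer chain, assuming 1 MB file blocks
 * and ENTRIES_PER_PB addresses per pointer block (assumed value). */
#define BLK_SIZE       (1024ULL * 1024)  /* 1 MB file blocks */
#define ENTRIES_PER_PB 1024ULL           /* assumed entries per pointer block */

/* Map a byte offset to (primary, secondary) pointer-block indices. */
static void double_indirect_index(uint64_t offset,
                                  uint64_t *primary, uint64_t *secondary)
{
    uint64_t blk = offset / BLK_SIZE;    /* which 1 MB file block */
    *primary   = blk / ENTRIES_PER_PB;   /* slot in the inode's PB list */
    *secondary = blk % ENTRIES_PER_PB;   /* slot within that pointer block */
}
```

With these assumed constants, one pointer block covers 1 GB of file data, so an offset at 300 GB lands in primary slot 300.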
Thank you for the patch. You saved my day! A 600 GB VMDK file was copied successfully.
No idea! The pull request has been here a while.
@mlsorensen Thanks so much for the large file support! I have ScaleIO for my VM storage. With your fix I can mount the ScaleIO volume on CentOS, have Linux read the file system, and back up the VMs with Borg! Just FYI: I couldn't get your fork to build with GCC 4.8.5 on CentOS 7.2; using GCC 4.4 worked.
Is there a reason this hasn't been merged? Disks are much larger these days, and I've got several large files I need to copy off. Unfortunately for me, it's also off a 4 G disk, which will require more development work on my part, I fear.
According to the book "Storage Design and Implementation in vSphere 6": files <= 8 KB use "sub-blocks", > 8 KB and <= 256 MB use file blocks, > 256 MB and <= 256 GB use pointer blocks, and > 256 GB and < 64 TB use pointer blocks plus secondary pointer blocks. So I think this patch works as expected. It seems size is the only factor; there's nothing magic in the inode.
Fix tested on a 400 GB VMDK file and it works fine!
please merge |
Tested rescuing a 500 GB image with (g)ddrescue (the disk had some bad sectors): works great. I think this patch should be merged into Linux distros (tested on Debian 11, with a manual merge from the Debian sources).
Dear mlsorensen, your pull request was published about 8 years ago, but it is still relevant! The new Ubuntu and Debian distributions contain an old version of vmfs-tools that does not include your code. You helped me solve the problem with a large VMDK file, thank you so much!
This is a nine-year-old but still useful patch. It enabled me to recover files from a crashed/inoperable ESXi instance. The PR was submitted nine years ago. @mlsorensen @glandium Request to merge.