
trusted.glusterfs.shard.file-size may not be updated correctly #4522

Opened by @chen1195585098

Description:
When features.shard is on, the trusted.glusterfs.shard.file-size xattr may not be updated correctly after a fallocate failure. In that case, removing the file from the mount point leaves its shards behind on the bricks, because the deletion process relies heavily on trusted.glusterfs.shard.file-size to determine how many shards need to be deleted.
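The deletion arithmetic can be sketched as follows. This is a hypothetical illustration, not the shard translator's actual code; it assumes the xattr stores the logical file size as the first big-endian 64-bit integer of a 32-byte value, and that one block lives in the base file with the remainder under .shard:

```python
# Sketch only: decode the shard size xattr shown in this report and derive
# how many shards a deletion based on it would remove.
# ASSUMPTION: trusted.glusterfs.shard.file-size is four big-endian 64-bit
# integers, with the logical file size in the first slot.
import math
import struct

def decode_file_size_xattr(hex_value: str) -> int:
    """Return the logical file size recorded in the 32-byte xattr value."""
    raw = bytes.fromhex(hex_value.removeprefix("0x"))
    size, _a, _b, _c = struct.unpack(">4Q", raw)
    return size

def shards_to_delete(file_size: int, block_size: int) -> int:
    """Shards besides the base file: ceil(size / block) - 1; 0 for an empty file."""
    if file_size == 0:
        return 0
    return math.ceil(file_size / block_size) - 1

block_size = 0x0000000004000000        # 64 MiB, per trusted.glusterfs.shard.block-size
zeroed = "0x" + "00" * 32              # the incorrect value left after the failed fallocate
print(shards_to_delete(decode_file_size_xattr(zeroed), block_size))
```

With the zeroed xattr the computed shard count is 0, so the unlink removes only the base file and every shard is orphaned.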

Reproduce:
Same as #2100.

  1. Create and start a volume with features.shard on.
  2. Mount the volume and allocate more space than is available with fallocate,
    e.g. run fallocate -l 100g /mnt/test.img.
  3. fallocate will fail with a "No space left on device" error.
    After that, check the size of /mnt/test.img and the xattrs of /path-to-brick/test.img.
  4. Run rm -f /mnt/test.img and check whether the shards in the .shard dir are removed.

Example:

fallocate: fallocate failed: No space left on device
Done
[root@localhost glusterfs]# ll -h /glusterfs_mnt/test.img
-rw-r--r-- 1 root root 0 04  19 14:11 /glusterfs_mnt/test.img
[root@localhost glusterfs]# getfattr -d -m . -e hex /gfs/issue1/test.img
getfattr: Removing leading '/' from absolute path names
# file: gfs/issue1/test.img
trusted.afr.dirty=0x000000010000000000000000
trusted.gfid=0x2ab30b5675664c8db5ad6386e146a68a
trusted.gfid2path.0e14da2453a6332a=0x30303030303030302d303030302d303030302d303030302d3030303030303030303030312f746573742e696d67
trusted.glusterfs.mdata=0x0100000000000000000000000068033e940000000019f1a4330000000068033e94000000001a150e6e0000000068033e94000000001a150e6e
trusted.glusterfs.shard.block-size=0x0000000004000000
trusted.glusterfs.shard.file-size=0x0000000000000000000000000000000000000000000000000000000000000000 # incorrect

# shards were generated.
[root@localhost glusterfs]# ls /gfs/issue1/.shard|grep 2ab30b56|wc -l
1599
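The 1599 shards on the brick match what a correctly recorded 100 GiB file would produce. A quick arithmetic cross-check, assuming one 64 MiB block lives in the base file and the rest under .shard:

```python
# Cross-check of the shard count observed on the brick (illustrative arithmetic,
# assuming the base file holds the first block and .shard holds the remainder).
import math

requested = 100 * 2**30      # fallocate -l 100g
block_size = 0x04000000      # 64 MiB, from trusted.glusterfs.shard.block-size
expected_shards = math.ceil(requested / block_size) - 1
print(expected_shards)  # 1599
```

So the shards for the full requested size were created, but the file-size xattr was never updated to reflect them.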

[root@localhost glusterfs]# LC_ALL=us.utf-8 df -h /glusterfs_mnt/
Filesystem       Size  Used Avail Use% Mounted on
localhost:issue   15G   15G     0 100% /glusterfs_mnt
[root@localhost glusterfs]# du -sh /gfs/issue1/
15G	/gfs/issue1/

When I rm -f test.img, the base file is indeed removed; however, all the shards are left in the .shard dir and the brick space is not freed.

[root@localhost glusterfs]# rm -f /glusterfs_mnt/test.img
[root@localhost glusterfs]# LC_ALL=us.utf-8 df -h /glusterfs_mnt/
Filesystem       Size  Used Avail Use% Mounted on
localhost:issue   15G   15G     0 100% /glusterfs_mnt
[root@localhost glusterfs]# du -sh /gfs/issue1/
15G	/gfs/issue1/
