Please recommend the right disk size for booting up a new Base node from scratch.
The documented disk sizes are inconsistent: the repo says 4 TB, while the docs say 2.5 TB.
The current mainnet snapshot is about 3.4 TB compressed. If someone downloads the archive first and then extracts it, they would need at least 7 TB of disk: ~3.5 TB for the downloaded archive plus at least another ~3.5 TB for the extracted data.
Does that mean that, to do this efficiently and future-proof the setup, we have to provision a server with a 10 TB disk? Or am I doing something wrong?
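The disk math above can be sketched out explicitly. The sizes and the ~1.5x expansion ratio below are assumptions for illustration, not measured figures; actual snapshot sizes and compression ratios vary:

```shell
# Hypothetical sizes in GB, assuming a 3.4 TB compressed snapshot and
# roughly 1.5x expansion on extraction (actual ratios vary by snapshot).
compressed_gb=3400
extracted_gb=$((compressed_gb * 3 / 2))   # ~5100 GB once unpacked

# Download-then-extract: archive and extracted data exist on disk at once.
two_step_gb=$((compressed_gb + extracted_gb))

# Streaming (download piped into tar): only the extracted data hits disk.
stream_gb=$extracted_gb

echo "two-step: ${two_step_gb} GB, streaming: ${stream_gb} GB"
```

Under those assumptions, streaming the archive straight into `tar` cuts the peak disk requirement by the full size of the compressed snapshot.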
It's simpler to download and extract in a single streamed step, so the compressed archive never touches disk:
export LATEST=$(curl -s https://base-snapshots-mainnet-archive.s3.amazonaws.com/latest) && wget https://base-snapshots-mainnet-archive.s3.amazonaws.com/${LATEST} -O - | tar -xzvf - -C /data/base
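A minimal local demo of the same streaming pattern, using a tiny throwaway tarball instead of the real snapshot (all paths here are temporary stand-ins): piping the archive directly into `tar` means no intermediate `.tar.gz` copy is ever written to the destination disk.

```shell
# Build a tiny tarball in a temp dir, then stream-extract it via a pipe,
# mirroring "wget ... -O - | tar -xzvf - -C /data/base".
workdir=$(mktemp -d)
mkdir -p "$workdir/src" "$workdir/dest"
echo "hello" > "$workdir/src/file.txt"
tar -czf "$workdir/snapshot.tar.gz" -C "$workdir/src" .

# cat stands in for the network download; tar reads the stream from stdin.
cat "$workdir/snapshot.tar.gz" | tar -xzf - -C "$workdir/dest"

result=$(cat "$workdir/dest/file.txt")
echo "$result"
rm -rf "$workdir"
```

The only disk space consumed at the destination is the extracted data itself, which is why the streamed one-liner avoids the ~7 TB two-copy requirement.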