This is an n8n community node for Mega.nz S4 object storage service. It provides S3-compatible operations for managing buckets and objects in your Mega S4 account.
n8n is a fair-code licensed workflow automation platform that lets you host and automate your workflows locally or in the cloud.
- List Buckets - Get all buckets in your account
- Create Bucket - Create new storage buckets
- Delete Bucket - Remove empty buckets
- Head Bucket - Check if bucket exists and is accessible
- Get Location - Retrieve the region where a bucket is located
- List Objects - Browse objects in a bucket with pagination support
- Upload Object - Upload files or text data to buckets
- Download Object - Retrieve objects as binary data
- Delete Object - Remove single objects
- Delete Multiple - Batch delete multiple objects
- Head Object - Get object metadata without downloading
- Copy Object - Copy objects between buckets or within the same bucket
- ✅ AWS SDK v3 for reliable S3-compatible operations
- ✅ Multiple region support (Amsterdam, Luxembourg, Montreal, Vancouver)
- ✅ Binary data handling for file uploads/downloads
- ✅ Pagination for large object lists
- ✅ Input validation (bucket names, object keys)
- ✅ Custom metadata support
- ✅ Read-only ACL operations (GetBucketAcl, GetObjectAcl)
- ✅ Bucket policy management for access control
- ℹ️ All objects use STANDARD storage class (Mega S4 default)
📺 Prefer video? Watch our step-by-step installation tutorial
Follow the installation guide in the n8n community nodes documentation.
```bash
npm install @nskha/n8n-nodes-mega
```
You can install community nodes directly in n8n Cloud through the Settings → Community Nodes menu.
To use this node, you need Mega S4 API credentials:
- Log in to your Mega account
- Navigate to S4 settings (FM → Object storage → Keys)
- Generate or locate your:
- Access Key ID
- Secret Access Key
- Select your preferred region:
- `eu-central-1` - Amsterdam (default)
- `eu-central-2` - Luxembourg
- `ca-central-1` - Montreal
- `ca-west-1` - Vancouver
In n8n, go to Credentials → New → Mega S4 API and enter:
| Field | Required | Description |
|---|---|---|
| Access Key ID | Yes | Your Mega S4 access key |
| Secret Access Key | Yes | Your Mega S4 secret key |
| Region | Yes | The region for your S4 service |
| Custom S3 Endpoint | No | Override default endpoint (advanced) |
| Force Path Style | No | Use path-style URLs (default: true, recommended for Mega S4) |
- Add the Mega S4 node to your workflow
- Connect your credentials
- Select:
- Resource: Bucket
- Operation: List
- Execute the node
Returns an array of all buckets with names and creation dates.
- Add the Mega S4 node after a node that outputs binary data (e.g., HTTP Request, Read Binary File)
- Configure:
- Resource: Object
- Operation: Upload
- Bucket Name: `my-bucket`
- Object Key: `folder/myfile.pdf`
- Binary Data: `true`
- Binary Property: `data` (default)
- Execute the node
The file will be uploaded to the specified bucket and path.
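Under the hood the node performs this upload with AWS SDK v3. If you need the equivalent call outside n8n, a minimal sketch looks like this; the endpoint, credentials, bucket, and file names are placeholders you would replace with values from your Mega S4 settings:

```typescript
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';
import { readFile } from 'node:fs/promises';

// Placeholder configuration: copy the real endpoint and keys from your Mega S4 settings.
const s3 = new S3Client({
  region: 'eu-central-1',
  endpoint: 'https://s3.eu-central-1.s4.mega.io',
  forcePathStyle: true,
  credentials: { accessKeyId: 'YOUR_ACCESS_KEY_ID', secretAccessKey: 'YOUR_SECRET_ACCESS_KEY' },
});

await s3.send(
  new PutObjectCommand({
    Bucket: 'my-bucket',
    Key: 'folder/myfile.pdf',
    Body: await readFile('./myfile.pdf'),
    ContentType: 'application/pdf',
    Metadata: { source: 'n8n' }, // optional custom metadata
  }),
);
```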
- Add the Mega S4 node to your workflow
- Configure:
- Resource: Object
- Operation: Download
- Bucket Name: `my-bucket`
- Object Key: `folder/myfile.pdf`
- Binary Property: `data` (output property name)
- Execute the node
The file will be downloaded and available as binary data for the next node.
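The download is the mirror-image `GetObjectCommand`. A sketch, reusing the `s3` client configured in the upload example above:

```typescript
import { S3Client, GetObjectCommand } from '@aws-sdk/client-s3';

declare const s3: S3Client; // configured as in the upload sketch above

const { Body, ContentType, ContentLength } = await s3.send(
  new GetObjectCommand({ Bucket: 'my-bucket', Key: 'folder/myfile.pdf' }),
);
// In Node.js the Body is a stream; the SDK provides helpers to consume it.
const bytes = await Body!.transformToByteArray();
console.log(ContentType, ContentLength, bytes.byteLength);
```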
- Add the Mega S4 node
- Configure:
- Resource: Object
- Operation: List
- Bucket Name: `my-bucket`
- Return All: `false`
- Limit: `50`
- Additional Fields → Prefix: `documents/2024/`
- Execute the node
Returns up to 50 objects whose keys start with `documents/2024/`.
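This listing corresponds to a `ListObjectsV2Command` with a prefix and a key limit. A sketch, again reusing the client from the upload example:

```typescript
import { S3Client, ListObjectsV2Command } from '@aws-sdk/client-s3';

declare const s3: S3Client; // configured as in the upload sketch above

const { Contents = [] } = await s3.send(
  new ListObjectsV2Command({
    Bucket: 'my-bucket',
    Prefix: 'documents/2024/',
    MaxKeys: 50, // mirrors the node's "Limit" field
  }),
);
for (const object of Contents) {
  console.log(object.Key, object.Size, object.LastModified);
}
```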
- Add the Mega S4 node
- Configure:
- Resource: Object
- Operation: Copy
- Source Bucket: `source-bucket`
- Source Object Key: `old-location/file.txt`
- Destination Bucket: `destination-bucket`
- Destination Object Key: `new-location/file.txt`
- Execute the node
The object will be copied to the new location.
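The copy maps to a `CopyObjectCommand`, where the source is passed as `bucket/key`. A sketch with the client from the upload example:

```typescript
import { S3Client, CopyObjectCommand } from '@aws-sdk/client-s3';

declare const s3: S3Client; // configured as in the upload sketch above

await s3.send(
  new CopyObjectCommand({
    Bucket: 'destination-bucket',
    Key: 'new-location/file.txt',
    CopySource: 'source-bucket/old-location/file.txt',
    MetadataDirective: 'COPY', // keep the source object's metadata
  }),
);
```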
Lists all buckets in your Mega S4 account. No parameters required.
Returns: Array of buckets with name and creationDate.
Creates a new bucket.
Parameters:
- `bucketName` (required): Unique bucket name (3-63 characters, lowercase, no spaces)
Note: Access control is managed via bucket policies, not ACLs.
Deletes an empty bucket.
Parameters:
- `bucketName` (required): Name of the bucket to delete
Note: Bucket must be empty. Use "Delete Multiple Objects" first if needed.
Checks if a bucket exists and is accessible.
Parameters:
- `bucketName` (required): Name of the bucket to check
Returns: `exists: true` if the bucket is accessible; an error if it is not found or you lack permission.
Gets the region where a bucket is located.
Parameters:
- `bucketName` (required): Name of the bucket
Returns: region (e.g., eu-central-1)
Lists objects in a bucket with pagination support.
Parameters:
- `bucketName` (required): Name of the bucket
- `returnAll` (optional): Return all objects or limit results
- `limit` (optional): Max number of objects to return (default: 50)
- `prefix` (optional): Filter objects by prefix (e.g., `folder/`)
- `delimiter` (optional): Group keys (e.g., `/` for folder simulation; see the sketch below)
- `startAfter` (optional): Start listing after this key
Returns: Array of objects with key, size, lastModified, etag, etc.
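When a `delimiter` is set, keys that share a prefix up to the delimiter are collapsed into `CommonPrefixes`, which is how folder-style browsing works on a flat keyspace. A sketch (client as in the upload example):

```typescript
import { S3Client, ListObjectsV2Command } from '@aws-sdk/client-s3';

declare const s3: S3Client; // configured as in the upload sketch above

// List "folders" and files directly under documents/ without descending deeper.
const { CommonPrefixes = [], Contents = [] } = await s3.send(
  new ListObjectsV2Command({ Bucket: 'my-bucket', Prefix: 'documents/', Delimiter: '/' }),
);
console.log('folders:', CommonPrefixes.map((p) => p.Prefix));
console.log('files:', Contents.map((o) => o.Key));
```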
Uploads a file or text data to a bucket.
Parameters:
- `bucketName` (required): Destination bucket
- `objectKey` (required): Path/name for the object (e.g., `folder/file.txt`)
- `binaryData` (optional): Upload binary data (`true`) or text (`false`)
- `binaryPropertyName` (required if binary): Property containing file data
- `textData` (required if not binary): Text content to upload
- `contentType` (optional): MIME type (e.g., `image/png`)
- `metadata` (optional): Custom key-value metadata
Returns: success, etag, size, etc.
Note: All objects automatically use STANDARD storage class. Access control is managed via bucket policies.
Downloads an object from a bucket.
Parameters:
- `bucketName` (required): Source bucket
- `objectKey` (required): Path/name of the object
- `binaryPropertyName` (optional): Output property name (default: `data`)
Returns: Object metadata in JSON + binary data in specified property.
Deletes a single object.
Parameters:
- `bucketName` (required): Bucket containing the object
- `objectKey` (required): Path/name of the object to delete
Returns: success, deleteMarker, versionId
Deletes multiple objects in one operation (batch delete).
Parameters:
- `bucketName` (required): Bucket containing the objects
- `objectKeys` (required): Comma-separated list of object keys
- `quiet` (optional): Only return errors, not successful deletions
Returns: Arrays of deleted and errors with counts.
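For reference, the batch delete is a single `DeleteObjectsCommand`; a sketch with the client from the upload example:

```typescript
import { S3Client, DeleteObjectsCommand } from '@aws-sdk/client-s3';

declare const s3: S3Client; // configured as in the upload sketch above

const { Deleted = [], Errors = [] } = await s3.send(
  new DeleteObjectsCommand({
    Bucket: 'my-bucket',
    Delete: {
      Objects: [{ Key: 'folder/a.txt' }, { Key: 'folder/b.txt' }],
      Quiet: false, // set to true to return only errors
    },
  }),
);
console.log(`deleted: ${Deleted.length}, errors: ${Errors.length}`);
```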
Gets object metadata without downloading the object.
Parameters:
- `bucketName` (required): Bucket containing the object
- `objectKey` (required): Path/name of the object
Returns: contentType, size, etag, lastModified, metadata, etc.
Copies an object to a new location.
Parameters:
- `sourceBucket` (required): Source bucket
- `sourceKey` (required): Source object path/name
- `destinationBucket` (required): Destination bucket (can be the same as the source)
- `destinationKey` (required): Destination path/name
- `metadataDirective` (optional): `COPY` (keep source metadata) or `REPLACE`
Returns: success, etag, lastModified
Mega S4 is S3-compatible but does not support all AWS S3 features. This node avoids unsupported features and provides alternatives where applicable.
| Feature | Status | Alternative |
|---|---|---|
| ACL (write operations) | Not supported | Use Bucket Policies |
| Storage class selection | Not supported | Automatic STANDARD |
| Server-side encryption | Not supported | Client-side encryption |
| Object lock/retention | Not supported | N/A |
| Versioning | Not supported | N/A |
| Location constraint | Not supported | Configure via endpoint/region |
| Checksum algorithm | Not supported | Use ETags |
| Presigned POST | Not supported | Use Presigned PUT (see the sketch after this table) |
| x-amz-grant-* headers | Not supported | Use Bucket Policies |
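Because presigned POST is unavailable, browser or external-client uploads can use a presigned PUT URL instead. A minimal sketch with the AWS request presigner (client configured as in the upload example; bucket and key are placeholders):

```typescript
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';

declare const s3: S3Client; // configured as in the upload sketch above

// The holder of this URL can PUT the object without credentials for 15 minutes.
const url = await getSignedUrl(
  s3,
  new PutObjectCommand({ Bucket: 'my-bucket', Key: 'uploads/report.pdf' }),
  { expiresIn: 900 },
);
console.log(url);
```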
Supported access control
- GetBucketAcl, GetObjectAcl (read-only)
- Bucket policies: GetBucketPolicy, PutBucketPolicy, DeleteBucketPolicy
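Since write ACLs are not supported, access control is expressed through bucket policies. The example below is only illustrative; it assumes S4 accepts standard S3 policy syntax, and the bucket name and prefix are placeholders:

```typescript
import { S3Client, PutBucketPolicyCommand } from '@aws-sdk/client-s3';

declare const s3: S3Client; // configured as in the upload sketch above

// Illustrative policy: allow anonymous reads on objects under reports/.
const policy = {
  Version: '2012-10-17',
  Statement: [
    {
      Sid: 'PublicReadForReports',
      Effect: 'Allow',
      Principal: '*',
      Action: ['s3:GetObject'],
      Resource: ['arn:aws:s3:::my-bucket/reports/*'],
    },
  ],
};

await s3.send(new PutBucketPolicyCommand({ Bucket: 'my-bucket', Policy: JSON.stringify(policy) }));
```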
See the full list: Mega S4 API Limitations
The node performs comprehensive validation:
Bucket names:
- Must be 3-63 characters long
- Lowercase letters, numbers, hyphens, and dots only
- Must start and end with a letter or number
- Cannot be formatted as an IP address
Object keys:
- Must not be empty
- Cannot exceed 1024 characters
- Cannot contain consecutive slashes (`//`)
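As an illustration only (this is not the node's exact code), the rules above can be approximated like this:

```typescript
// Simplified sketch of the validation rules described above.
function isValidBucketName(name: string): boolean {
  if (name.length < 3 || name.length > 63) return false;
  if (!/^[a-z0-9][a-z0-9.-]*[a-z0-9]$/.test(name)) return false; // lowercase letters, digits, hyphens, dots
  if (/^\d{1,3}(\.\d{1,3}){3}$/.test(name)) return false; // must not look like an IP address
  return true;
}

function isValidObjectKey(key: string): boolean {
  if (key.length === 0 || key.length > 1024) return false;
  if (key.includes('//')) return false; // no consecutive slashes
  return true;
}
```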
Errors include:
- Validation errors: Clear explanation of what's wrong with input
- S3 API errors: HTTP status codes and S3 error codes
- Network errors: Connection issues, timeouts
- Permission errors: Access denied messages
All errors include the item index for multi-item operations.
- n8n-workflow: Node execution framework
- AWS SDK v3: `@aws-sdk/client-s3` for S3-compatible operations
- TypeScript: Strict typing for reliability
The node is built with a modular architecture:
- `Mega.node.ts` - Main node class
- `operators.ts` - Resource and operation definitions
- `fields.ts` - Parameter definitions
- `methods.ts` - Operation handler implementations
- `execute.ts` - Execution orchestrator
- `interfaces.ts` - TypeScript interfaces
- `GenericFunctions.ts` - S3 client and helper functions
Mega S4 is S3-compatible and uses AWS Signature Version 4 authentication. This node leverages the official AWS SDK v3, ensuring compatibility and reliability.
- Node.js >= 18.17.0
- npm or pnpm
- n8n instance for testing
```bash
# Clone the repository
git clone https://github.com/Automations-Project/n8n-nodes-mega.git
cd n8n-nodes-mega

# Install dependencies
npm install

# Build the node
npm run build

# Lint and format
npm run lint
npm run format
```

```bash
# Build and install to local n8n (Windows)
npm run start:dev-windows

# Watch mode for development
npm run dev
```

```
n8n-nodes-mega/
├── credentials/
│ ├── MegaApi.credentials.ts
│ └── mega.svg
├── nodes/
│ └── Mega/
│ ├── Mega.node.ts
│ ├── Mega.node.json
│ ├── mega.svg
│ ├── operators.ts
│ ├── fields.ts
│ ├── methods.ts
│ ├── execute.ts
│ ├── interfaces.ts
│ └── GenericFunctions.ts
├── package.json
├── tsconfig.json
└── README.md
```
Ensure your bucket name follows S3 naming rules:
- 3-63 characters
- Lowercase only
- No spaces, underscores, or special characters (except hyphens and dots)
- Verify your Access Key ID and Secret Access Key are correct
- Check that your Mega S4 account has permissions for the operation
- Ensure the bucket/object exists in the selected region
- Confirm the bucket name is spelled correctly (case-sensitive)
- Check if the bucket is in the selected region
- Try using the "Head Bucket" operation to test access
- Ensure the previous node outputs binary data
- Check that `binaryPropertyName` matches the property name from the previous node
- Verify the binary data property exists in the input
If operations fail:
- Try using a custom endpoint in credentials
- Verify the region in your Mega S4 account settings
- Check Mega's status page for service availability
Contributions are welcome! Please:
- Fork the repository
- Create a feature branch
- Make your changes with tests
- Submit a pull request
- n8n Community Forum: community.n8n.io
- Issues: GitHub Issues
- Mega Support: mega.io/help
VibeCoded with ❤️ for the n8n & Mega community
