Description
Apache Airflow Provider(s)
amazon
Versions of Apache Airflow Providers
apache-airflow-providers-amazon==9.18.0 (issue exists in all versions)
Apache Airflow version
3.0.0 (also affects 2.x)
Operating System
Debian/Ubuntu-based containers (Astronomer Runtime, official Airflow images)
Deployment
Astronomer
Deployment details
Any deployment where /bin/sh is dash (default on Debian/Ubuntu) rather than bash.
What happened
EksPodOperator fails with 401 Unauthorized when connecting to EKS clusters from Debian/Ubuntu-based containers.
The root cause is in airflow/providers/amazon/aws/hooks/eks.py line 83, where the COMMAND template uses source:
source {credentials_file}
source is a bash builtin, not a POSIX command. On Debian/Ubuntu, /bin/sh is dash, which doesn't have source:
$ sh -c 'source /dev/null'
sh: 1: source: not found
The POSIX-compliant equivalent is . (dot):
$ sh -c '. /dev/null'
(no error, exit status 0)
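For context, the shell snippet that the hook templates into the generated kubeconfig looks roughly like the sketch below. This is an approximation for illustration only: the source line is quoted from eks.py, while the surrounding structure, the placeholder names, and the eks_get_token module path are assumptions.

    # Rough sketch of the COMMAND template in eks.py (not the exact provider code).
    # The generated kubeconfig runs this through /bin/sh, which is dash on Debian/Ubuntu.
    COMMAND = """
    source {credentials_file}   # bash-only builtin; dash aborts with "source: not found"
    {python_executable} -m airflow.providers.amazon.aws.utils.eks_get_token --cluster-name {eks_cluster_name}
    """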
Error output:
ApiException: (401)
Reason: Unauthorized
HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Unauthorized","reason":"Unauthorized","code":401}
Debug showing the actual failure:
Shell return code: 1
Shell stderr: sh: 2: source: not found
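For anyone else debugging this, the failure can be surfaced with a few lines of Python. This is a hypothetical snippet, not part of the provider; the credentials file path is a placeholder.

    import subprocess

    # Run the same kind of command the generated kubeconfig uses, through /bin/sh.
    # On Debian/Ubuntu, /bin/sh is dash, which has no "source" builtin.
    result = subprocess.run(
        ["sh", "-c", "source /tmp/eks_credentials.env"],  # placeholder path
        capture_output=True,
        text=True,
    )
    print("Shell return code:", result.returncode)  # nonzero under dash
    print("Shell stderr:", result.stderr.strip())   # e.g. "sh: 1: source: not found"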
What you think should happen instead
The operator should use POSIX-compliant . instead of source so it works on all shells.
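Sketch of the proposed change (the source line is quoted from eks.py as above; the variable names here are illustrative):

    # Current line in the COMMAND template (bash-only builtin, fails under dash):
    CURRENT_LINE = "source {credentials_file}"
    # Proposed POSIX-compliant replacement (same behavior in dash, bash, and busybox sh):
    PROPOSED_LINE = ". {credentials_file}"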
How to reproduce
- Deploy Airflow on a Debian/Ubuntu-based container (e.g., Astronomer Runtime, official Airflow Docker image)
- Configure an EksPodOperator task to connect to an EKS cluster (a minimal DAG sketch follows this list)
- Run the DAG
- Observe 401 Unauthorized error
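A minimal DAG that exercises this code path could look like the sketch below. Cluster name, namespace, and image are placeholders; AWS connection and IAM/RBAC setup are omitted.

    import pendulum
    from airflow import DAG
    from airflow.providers.amazon.aws.operators.eks import EksPodOperator

    with DAG(
        dag_id="eks_pod_operator_repro",  # placeholder
        start_date=pendulum.datetime(2025, 1, 1, tz="UTC"),
        schedule=None,
        catchup=False,
    ):
        # The operator builds a kubeconfig whose exec command contains the
        # "source {credentials_file}" line, so the task fails with 401 when /bin/sh is dash.
        run_pod = EksPodOperator(
            task_id="run_pod",
            cluster_name="my-eks-cluster",  # placeholder
            pod_name="eks-pod-operator-repro",
            namespace="default",
            image="amazon/aws-cli:latest",
            cmds=["sh", "-c", "echo hello"],
            get_logs=True,
        )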
Verify your environment is affected:
docker exec -it <container> sh -c 'source /dev/null'
Output: sh: 1: source: not found
Verify the shell:
docker exec -it <container> ls -la /bin/sh
Output: /bin/sh -> dash
Anything else
Why this passes local testing
When running locally with ~/.aws credentials mounted into the container, the bug is masked:
- source {credentials_file} fails silently
- eks_get_token.py falls back to boto3's default credential chain
- Finds credentials in ~/.aws/credentials → token generation succeeds
In production/cloud environments (Astronomer, MWAA, etc.):
- No ~/.aws directory exists
- Credentials are only available via the temporary file that the shell failed to source
- eks_get_token.py has no credentials → 401 Unauthorized
This makes the bug difficult to catch during local development.
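The masking effect is easy to demonstrate with boto3's default credential chain (illustrative snippet, not part of the provider):

    import boto3

    # eks_get_token.py falls back to boto3's default credential chain when the
    # "source" step has silently failed.
    creds = boto3.Session().get_credentials()
    if creds:
        # Local dev: ~/.aws/credentials (or env vars) are picked up, so the bug stays hidden.
        print("Credentials resolved from the default chain; token generation succeeds.")
    else:
        # Cloud deployment with no ~/.aws: nothing to fall back to, so the EKS API returns 401.
        print("No credentials available; expect 401 Unauthorized.")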
There's also a secondary issue: AUTHENTICATION_API_VERSION is set to client.authentication.k8s.io/v1alpha1 (line 39), an API version that has since been deprecated and removed from Kubernetes (see https://kubernetes.io/docs/reference/access-authn-authz/authentication/#client-go-credential-plugins). This should be updated to v1beta1.
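Sketch of that change (constant name and line number as referenced above):

    # eks.py, line 39
    # AUTHENTICATION_API_VERSION = "client.authentication.k8s.io/v1alpha1"  # deprecated/removed upstream
    AUTHENTICATION_API_VERSION = "client.authentication.k8s.io/v1beta1"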
Are you willing to submit PR?
- Yes I am willing to submit a PR!
Code of Conduct
- I agree to follow this project's Code of Conduct