Description
I'm trying to build the ztunnel image from source using Wolfi tooling.
In Istio, we already have the base image (https://github.com/istio/istio/blob/master/docker/iptables.yaml), and we put the built ztunnel binary on top of it.
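For reference, here is a minimal sketch of how I'm layering the binary on top of the base image. The image names/tags are hypothetical placeholders of mine, and the /usr/local/bin/ztunnel path is my assumption about the upstream image layout:
$ cargo build --release
$ cat <<'EOF' | docker build -t ztunnel:dev -f - .
# Base image built from istio's docker/iptables.yaml (hypothetical name/tag)
FROM my-registry/iptables-base:dev
# Put the freshly built binary on top of the base layer
COPY target/release/ztunnel /usr/local/bin/ztunnel
ENTRYPOINT ["/usr/local/bin/ztunnel"]
EOF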
The following are the apk packages that we pull into the distroless variant of the image; I copied the same list into the ztunnel package as well.
$ syft istio/ztunnel:1.24.0-distroless
✔ Loaded image istio/ztunnel:1.24.0-distroless
✔ Parsed image sha256:52c18372cb344fb7c9c07cc3117db1cbcbe28592fff116b47fee291a41f66d1c
✔ Cataloged contents a859dea8dd51b56f4a97af72010499fac182f0e3d1576b40d3512a06e04369a1
├── ✔ Packages [12 packages]
├── ✔ File digests [192 files]
├── ✔ File metadata [192 locations]
└── ✔ Executables [146 executables]
NAME                    VERSION       TYPE
ca-certificates-bundle  20240705-r0   apk
glibc                   2.40-r2       apk
glibc-locale-posix      2.40-r2       apk
ip6tables               1.8.10-r4     apk
iptables                1.8.10-r4     apk
ld-linux                2.40-r2       apk
libgcc                  14.2.0-r3     apk
libmnl                  1.0.5-r4      apk
libnetfilter_conntrack  1.0.9-r4      apk
libnfnetlink            1.0.2-r4      apk
libnftnl                1.2.7-r0      apk
wolfi-baselayout        20230201-r15  apk
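To confirm my locally built image pulls in the same package set, I diff the two catalogs (ztunnel:dev is a hypothetical tag for my local build):
$ diff <(syft istio/ztunnel:1.24.0-distroless -o json | jq -r '.artifacts[].name' | sort) \
       <(syft ztunnel:dev -o json | jq -r '.artifacts[].name' | sort)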
I've included all these dependencies as runtime dependencies and built a package out of it (wolfi-dev/os#34028).
I was able to build the image with it, but after installing it following the Helm instructions, I'm running into the issue below.
The main log lines to look at are the following:
2024-11-13T11:47:57.726255Z debug hyper_util::client::legacy::connect::dns:xds{id=110}:resolve{host=istiod.istio-system.svc} resolving host="istiod.istio-system.svc"
2024-11-13T11:47:57.727082Z debug hyper_util::client::legacy::connect::http:xds{id=110} connecting to 10.96.92.78:15012
2024-11-13T11:47:57.727186Z debug hyper_util::client::legacy::connect::http:xds{id=110} connected to 10.96.92.78:15012
2024-11-13T11:47:57.727197Z debug rustls::client::hs:xds{id=110} No cached session for DnsName("istiod.istio-system.svc")
2024-11-13T11:47:57.727258Z debug rustls::client::hs:xds{id=110} Not resuming any session
2024-11-13T11:47:57.728934Z debug rustls::client::hs:xds{id=110} Using ciphersuite TLS13_AES_128_GCM_SHA256
2024-11-13T11:47:57.728948Z debug rustls::client::tls13:xds{id=110} Not resuming
2024-11-13T11:47:57.729018Z debug rustls::client::tls13:xds{id=110} TLS1.3 encrypted extensions: [Protocols([ProtocolName(6832)])]
2024-11-13T11:47:57.729025Z debug rustls::client::hs:xds{id=110} ALPN protocol is Some(b"h2")
2024-11-13T11:47:57.729032Z debug rustls::client::tls13:xds{id=110} Got CertificateRequest CertificateRequestPayloadTls13 { context: , extensions: [Unknown(UnknownExtension { typ: StatusRequest, payload: }), Unknown(UnknownExtension { typ: SCT, payload: }), SignatureAlgorithms([RSA_PSS_SHA256, ECDSA_NISTP256_SHA256, ED25519, RSA_PSS_SHA384, RSA_PSS_SHA512, RSA_PKCS1_SHA256, RSA_PKCS1_SHA384, RSA_PKCS1_SHA512, ECDSA_NISTP384_SHA384, ECDSA_NISTP521_SHA512, RSA_PKCS1_SHA1, ECDSA_SHA1_Legacy]), AuthorityNames([DistinguishedName(301831163014060355040a130d636c75737465722e6c6f63616c)])] }
2024-11-13T11:47:57.729039Z debug rustls::client::common:xds{id=110} Client auth requested but no cert/sigscheme available
2024-11-13T11:47:57.729136Z warn xds::client:xds{id=110} XDS client connection error: gRPC connection error connecting to https://istiod.istio-system.svc:15012: status: Unknown, message: "client error (Connect)", source: invalid peer certificate: UnknownIssuer, retrying in 15s
From my understanding, the client connects successfully but then fails to verify istiod's certificate (invalid peer certificate: UnknownIssuer). The AuthorityNames hex in the CertificateRequest decodes to O=cluster.local, i.e. the in-cluster Istio CA.
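One way to inspect which certificate istiod actually presents on 15012 (a diagnostic sketch; the port comes from the service listing further below):
$ kubectl -n istio-system port-forward svc/istiod 15012:15012 &
$ openssl s_client -connect localhost:15012 -showcerts </dev/null 2>/dev/null \
    | openssl x509 -noout -issuer -subject
The issuer printed here should match what the ztunnel build trusts; if it doesn't, that would explain the UnknownIssuer error.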
Environment details:
- kind cluster on an x86 Linux machine
Things I've tried:
- Searching for similar issues in the repo, I found that updating an env var is supposed to fix this:
kubectl set env -n istio-system deploy/istiod ISTIOD_CUSTOM_HOST=localhost
This didn't work for me.
- The good news is that when I swap the image back to istio/ztunnel, everything starts working, which means I'm doing something wrong on my side. Maybe I need to pass TLS feature flags at build time (see the sketch after this list); as of now, I'm only running
cargo build --release
- Some issues mentioned that the istiod service should be running, and I validated that it is:
$ kubectl get svc -n istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istiod ClusterIP 10.96.92.78 <none> 15010/TCP,15012/TCP,443/TCP,15014/TCP 57m
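On the TLS-flags hunch: I don't know which Cargo features the upstream build enables, so the build line below is a sketch with a hypothetical <tls-feature> placeholder rather than a known-good command. Listing the features ztunnel actually defines first seems like the safer move:
$ cargo metadata --no-deps --format-version 1 | jq '.packages[] | select(.name == "ztunnel") | .features'
$ cargo build --release --no-default-features --features <tls-feature>   # <tls-feature>: placeholder, pick from the list above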
I'd appreciate more guidance here on how to build the image from source, and on whether I missed something in the deployment config. Thank you!!