Add support for importing all jsonnet files in a directory #1375
Hi 🙂 Sorry for the late reply. I'm not completely sure how your restricted-environment setup for Tanka differs from your "normal" setup. We also have clusters with specific characteristics, but we've gone a slightly different way: the clusters themselves are defined in another system and then exported as JSON. That file can then simply be imported into the inline-env process, where we filter for the clusters we need. Something we've done recently is make the processing of these "raw meta files" more specific to the use cases we need to cover. For your setup, wouldn't it be easier to have some kind of pre-processing step before Tanka that generates a JSON file with the data that is needed, so that you can just import that one file?
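For illustration, the export-then-filter approach described above could look roughly like this (a hypothetical sketch; the file name `clusters.json` and the field names are assumptions, not part of any real setup):

```jsonnet
// environments.jsonnet -- hypothetical sketch of the export-then-filter approach.
// Assumes a pre-processing step has already exported cluster metadata to
// clusters.json; the file name and fields (name, region) are illustrative.
local clusters = import 'clusters.json';

{
  // keep only the clusters this inline environment should render
  wanted:: [c for c in clusters if c.region == 'eu-west-1'],
}
```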
I also posted this on the PR, but this issue seems better suited for the discussion.
That was my brief canned response as to why it would not be a good idea to implement this only for Tanka. The discussion has been had more broadly on the linked Google Group. That said, it is easy enough to generate a file with imports for a given directory:

```jsonnet
// generate.jsonnet
function(ls=(importstr '/dev/stdin'))
  std.join(
    '\n',
    ['{']
    + [
      " '%(file)s': import './%(file)s'," % { file: file }
      for file in std.split(ls, '\n')
      if std.endsWith(file, '.libsonnet')
    ]
    + ['}']
  )
```

Then call that with:

```shell
ls | jsonnet -S generate.jsonnet
```
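For example, given a directory containing two hypothetical files `a.libsonnet` and `b.libsonnet`, the generator above would emit a file along these lines, which can then be imported as a single object:

```jsonnet
{
 'a.libsonnet': import './a.libsonnet',
 'b.libsonnet': import './b.libsonnet',
}
```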
Reading the issue description in more depth, I'd love to see some Jsonnet code that shows the process, to fully understand why reading a whole directory is needed and why the file can't be generated at the time those files are created.
While this will work for some restricted use cases, this approach has a number of downsides:
So yes, there are workarounds for this, but they are all hacks in one way or another, so I figured it would be worth investigating whether a native implementation would be accepted into the Tanka project. As covered in the referenced MR, my proposed change doesn't actually modify the way Jsonnet's native import handling functions; it only provides a mechanism for doing a separate evaluation of files outside of the current Jsonnet context and passing the resulting JSON blob to Tanka's Jsonnet VM. This is extremely similar to how Tanka's Helm chart support works. My thinking was that if it was OK to relax the import semantics for Helm charts (one could also write similar Jsonnet/shell scripts to render Helm charts for import by Tanka), then it would be OK to relax them in a nearly identical fashion to help smooth over some of the rough edges currently present with inline environments.
My company (singlestore.com) has an extremely large and diverse Kubernetes infrastructure which is managed by Tanka. At the time of this writing, we maintain 1,830 separate Tanka environments, comprising 61 unique environments deployed across 96 Kubernetes clusters.
To avoid mass duplication of environment configuration, we use Tanka inline environments to render the configs for each cluster/environment. Our inline environment code is roughly structured as follows:
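The original structure snippet was not preserved here; a minimal hypothetical sketch of such an inline-environment layout (every file name and field below is illustrative, not the author's actual code) might look like:

```jsonnet
// main.jsonnet -- hypothetical sketch of an inline-environment layout;
// all names here are assumptions made for illustration.
local clusters = import 'clusters/meta.json';  // assumed list of cluster descriptors
local app = import 'lib/app.libsonnet';        // assumed shared application library

{
  // one tanka.dev/Environment per cluster/app combination
  ['%s-%s' % [c.name, a]]: {
    apiVersion: 'tanka.dev/v1alpha1',
    kind: 'Environment',
    metadata: { name: '%s/%s' % [c.name, a] },
    spec: { apiServer: c.apiServer, namespace: a },
    data: app.render(c, a),
  }
  for c in clusters
  for a in c.apps
}
```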
This approach has served us well, but we have run into some issues that we would like to address:
- We cannot dynamically compute the `import` call for the cluster's config, as Jsonnet has no functionality for dynamically computing imports.

To resolve this, I am proposing the addition of a new Jsonnet native function that is capable of importing and evaluating all files in a given directory.
This way:
The only downside to this approach is that the rendered object doesn't support late binding with directly imported resources, but this is fine for our use case, as we can make sure the cluster config resources are rendered separately. I considered doing this with raw JSON instead, but that is less ideal for us: we have a bunch of standardized attribute names defined in a constants libsonnet file, and we wouldn't be able to use those if our cluster configs were stored as JSON.
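A rough sketch of how such a native function might be called from an inline environment (the name `importDirJson` and its signature are hypothetical assumptions for illustration; the actual interface is whatever the linked PR defines):

```jsonnet
// Hypothetical usage sketch -- the native function name and signature are
// assumptions, not the PR's real API.
local importDir = std.native('importDirJson');

// Evaluate every file under clusters/ and get back a JSON object keyed by
// file name, so the right cluster config can be selected at runtime.
local clusters = importDir('clusters/');

function(cluster) clusters['%s.libsonnet' % cluster]
```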
I put together a PR for this (#1374) and verified that it works as expected. If this change is accepted, I will happily add the necessary library support to jsonnet-libs.