diff --git a/.github/workflows/dependabot-auto-merge.yml b/.github/workflows/dependabot-auto-merge.yml index 4cc0f9e76..c6327b2af 100644 --- a/.github/workflows/dependabot-auto-merge.yml +++ b/.github/workflows/dependabot-auto-merge.yml @@ -12,7 +12,7 @@ jobs: steps: - name: Dependabot metadata id: metadata - uses: dependabot/fetch-metadata@v1.6.0 + uses: dependabot/fetch-metadata@v2.1.0 with: github-token: "${{ secrets.GITHUB_TOKEN }}" - name: Enable auto-merge for Dependabot PRs diff --git a/.github/workflows/main.yml b/.github/workflows/main.yml index bce46313d..66eb3b896 100644 --- a/.github/workflows/main.yml +++ b/.github/workflows/main.yml @@ -1,10 +1,5 @@ name: Build and publish doc - -on: - push: - branches: - - main - +on: push jobs: build: runs-on: ubuntu-latest @@ -42,6 +37,7 @@ jobs: ./build.sh nav ssb ci-nais dev-nais tenant publish: + if: github.ref == 'refs/heads/main' needs: build runs-on: ubuntu-latest strategy: @@ -65,7 +61,7 @@ jobs: key: ${{ github.sha }} - name: Sync documentation to bucket - uses: actions-hub/gcloud@466.0.0 + uses: actions-hub/gcloud@474.0.0 env: PROJECT_ID: not-used APPLICATION_CREDENTIALS: ${{ secrets.NAIS_DOC_SA }} @@ -74,7 +70,7 @@ jobs: cli: gsutil - name: Invalidate cache - uses: actions-hub/gcloud@466.0.0 + uses: actions-hub/gcloud@474.0.0 env: PROJECT_ID: not-used APPLICATION_CREDENTIALS: ${{ secrets.NAIS_DOC_SA }} diff --git a/build.sh b/build.sh index 0f4a0177d..777f45992 100755 --- a/build.sh +++ b/build.sh @@ -1,16 +1,17 @@ #! /bin/bash +set -e rm -rf ./out ./docs-base mkdir -p ./out ./docs-base # Copy documentation to base folder as we need to use docs as a staging folder -cp -r ./docs/* ./docs-base +cp -ra ./docs/. ./docs-base/ -for TENANT in $@; - do +for TENANT in $@; + do rm -rf ./docs - cp -r ./docs-base ./docs - cp -rf ./tenants/$TENANT/* ./docs - TENANT=$TENANT poetry run mkdocs build --no-strict -d out/$TENANT + cp -ra ./docs-base/. 
./docs/ + cp -rf ./tenants/$TENANT/* ./docs || true + TENANT=$TENANT poetry run mkdocs build --strict -d out/$TENANT done diff --git a/docs/.pages b/docs/.pages new file mode 100644 index 000000000..1dd8887fe --- /dev/null +++ b/docs/.pages @@ -0,0 +1,17 @@ +nav: +- Home: README.md +- explanations +- tutorials +- "": "" +- workloads +- build +- auth +- observability +- persistence +- security +- Other services: services +- "": "" +- operate +- "": "" +- tags.md +- ... diff --git a/docs/README.md b/docs/README.md index 0dca5da4c..d7ae2e674 100644 --- a/docs/README.md +++ b/docs/README.md @@ -1,58 +1,48 @@ --- +title: NAIS Developer Documentation hide: - feedback - footer + - toc --- - - -# NAIS Documentation +# :wave: **Welcome to the NAIS developer documentation!** -
-- :seedling: **Getting Started?** +Here you'll find _explanations_[^1], _how-to guides_[^2], and _references_[^3] for all things NAIS. - --- - - Initially it's a good idea to learn about [what NAIS is](./explanation/nais.md) to get an idea of the fundamentals. - - Once you're familiar with the basic concepts, move on to the [Hello NAIS](./tutorial/hello-nais/hello-nais-1.md) tutorial to get your first app running. +### :seedling: **New to NAIS?** -
+Start here to get an idea of the fundamentals.
-- :rocket: **Tutorials** - - --- - - Learning-oriented lessons that take you through a series of steps to complete a project. Most useful when you want to get started with NAIS. - - [:octicons-arrow-right-24: Tutorials](tutorial/README.md) - -- :dart: **How-to guides** - - --- - Practical step-by-step guides to help you achieve a specific goal. Most useful when you're trying to get something done. +- :ok_hand: [**What is NAIS?**](explanations/nais.md) +- :student: [**Your first application; Hello NAIS**](tutorials/hello-nais.md) - [:octicons-arrow-right-24: How-to guides](how-to-guides/README.md) - -- :bulb: **Explanation** - - --- - - Big-picture explanations of higher-level concepts. Most useful when you want to understand how NAIS works. - - [:octicons-arrow-right-24: Explanations](explanation/README.md) +
-- :computer: **Reference** +### :technologist: **Already familiar with NAIS?** - --- +What can we help you with today? - Reference documentation for the NAIS platform. Most useful when you need to look up details about a specific feature. +
- [:octicons-arrow-right-24: Reference](reference/README.md) +- :package: [**Run your code**](workloads/README.md) +- :rocket: [**Build and deploy your code**](build/README.md) +- :open_file_folder: [**Store your data**](persistence/README.md) +- :closed_lock_with_key: [**Auth your users and workloads**](auth/README.md) +- :telescope: [**Gain insight into your workloads**](observability/README.md) +- :wrench: [**Manage your workloads and services**](operate/README.md) +- :heavy_plus_sign: [**Explore the rest of NAIS**](services/README.md)
+ +[^1]: :bulb: [_explanations_](tags.md#explanation) present higher-level concepts. Most useful when you want to understand how NAIS works. +[^2]: :dart: [_how-to guides_](tags.md#how-to) help you achieve a specific goal. Most useful when you're trying to get something done. +[^3]: :books: [_references_](tags.md#reference) contain technical descriptions and specifications. Most useful when you need to look up details about a specific feature. diff --git a/docs/auth/.pages b/docs/auth/.pages new file mode 100644 index 000000000..8d080d5f7 --- /dev/null +++ b/docs/auth/.pages @@ -0,0 +1,5 @@ +nav: +- README.md +- 💡 Explanations: explanations +- 📚 Reference: reference +- ... diff --git a/docs/auth/README.md b/docs/auth/README.md new file mode 100644 index 000000000..3f84142e3 --- /dev/null +++ b/docs/auth/README.md @@ -0,0 +1,96 @@ +--- +tags: [auth, explanation] +description: Services and addons to support authentication and authorization in your applications. +--- + +# Authentication and authorization + +NAIS helps your applications [log in users](#logging-in-users), [validate inbound requests](#validating-inbound-requests) and [make authenticated outbound requests](#making-outbound-requests) using the following identity providers: + +
+ +- [**Azure AD**][Azure AD] (aka Entra ID) + + For employees and internal services. + +- [**ID-porten**][ID-porten] + + For Norwegian citizens. + +- [**TokenX**][TokenX] + + For internal applications acting on-behalf-of ID-porten citizens. + +- [**Maskinporten**][Maskinporten] + + For machine-to-machine communication between organizations or businesses. + +
+ +Your application may have multiple use cases that can require a combination of services. + +See the different scenarios below to identify which service(s) you need for your application, and follow the links to the respective service for more details. + +## Logging in users + +Depending on who your users are, you can use the following services to log them in: + +:person_standing: Log in employees :octicons-arrow-right-24: [Azure AD](../security/auth/azure-ad/sidecar.md) + +:person_standing: Log in citizens :octicons-arrow-right-24: [ID-porten] + +## Validating inbound requests + +...from applications acting + +```mermaid +graph TD + B1[on-behalf-of] + B2[as themselves] + + B1 --> |citizens| TokenX[TokenX] + B1 --> |employees| AAD_machine[Azure AD] + + + B2 --> |internally| AAD_machine[Azure AD] + B2 --> |externally| Maskinporten[Maskinporten] +``` + +The graph above can also be described as: + +:material-server::person_standing: Validate requests from application on behalf of employee :octicons-arrow-right-24: [Azure AD] + +:material-server::person_standing: Validate requests from application on behalf of citizen :octicons-arrow-right-24: [TokenX] + +:material-server: Validate requests from internal application :octicons-arrow-right-24: [Azure AD] + +:material-server: Validate requests from external application :octicons-arrow-right-24: [Maskinporten] + +## Making outbound requests + +```mermaid +graph TD + B1[on-behalf-of] + B2[as application] + + B1 --> |citizens| TokenX[TokenX] + B1 --> |employees| AAD_machine[Azure AD] + + B2 --> |internally| AAD_machine[Azure AD] + B2 --> |externally| Maskinporten[Maskinporten] +``` + +The graph above can also be described as: + +:material-server::person_standing: Make requests on behalf of employee :octicons-arrow-right-24: [Azure AD] + +:material-server::person_standing: Make requests on behalf of citizen :octicons-arrow-right-24: [TokenX] + +:material-server: Make requests to internal API :octicons-arrow-right-24: [Azure 
AD] + +:material-server: Make requests to external API :octicons-arrow-right-24: [Maskinporten] + +[Azure AD]: ../security/auth/azure-ad/README.md +[ID-porten]: idporten/README.md +[TokenX]: tokenx/README.md +[Maskinporten]: maskinporten/README.md diff --git a/docs/security/auth/concepts.md b/docs/auth/explanations/README.md similarity index 94% rename from docs/security/auth/concepts.md rename to docs/auth/explanations/README.md index 8210d3757..73ada82b7 100644 --- a/docs/security/auth/concepts.md +++ b/docs/auth/explanations/README.md @@ -1,4 +1,8 @@ -# Concepts +--- +tags: [auth, explanation] +--- + +# Auth concepts This page describes basic concepts and glossary commonly referred to when working with authentication and authorization. @@ -24,19 +28,19 @@ There are multiple ways of obtaining such a grant, depending on the use case: **Internal applications** -- [Machine-to-machine with Azure AD](azure-ad/usage.md#oauth-20-client-credentials-grant) +- [Machine-to-machine with Azure AD](../../security/auth/azure-ad/usage.md#oauth-20-client-credentials-grant) **Employee-facing applications** -- [On-behalf-of an end-user with Azure AD](azure-ad/usage.md#oauth-20-on-behalf-of-grant) +- [On-behalf-of an end-user with Azure AD](../../security/auth/azure-ad/usage.md#oauth-20-on-behalf-of-grant) **External applications** -- [Machine-to-machine with Maskinporten](maskinporten/client.md) +- [Machine-to-machine with Maskinporten](../maskinporten/README.md) **Citizen-facing applications** -- [On-behalf-of an end-user with TokenX](tokenx.md) +- [On-behalf-of an end-user with TokenX](../tokenx/README.md) ### OpenID Connect @@ -45,8 +49,8 @@ It is used to authenticate end users. 
The platform provides opt-in sidecars that implement OpenID Connect: -- [Sidecar for Azure AD](azure-ad/sidecar.md) (employee-facing applications) -- [Sidecar for ID-porten](idporten.md) (citizen-facing applications) +- [Sidecar for Azure AD](../../security/auth/azure-ad/sidecar.md) (employee-facing applications) +- [Sidecar for ID-porten](../idporten/README.md) (citizen-facing applications) Due to the complexity involved in implementing and maintaining such clients, we recommend that your applications use these sidecars when possible. @@ -68,10 +72,10 @@ Similar terms such as _authorization server_ (AS) or _OpenID provider_ (OP) are Providers that the platform supports provisioning for: -- [Azure AD](azure-ad/README.md) -- [ID-porten](idporten.md) -- [Maskinporten](maskinporten/README.md) -- [TokenX](tokenx.md) +- [Azure AD](../../security/auth/azure-ad/README.md) +- [ID-porten](../idporten/README.md) +- [Maskinporten](../maskinporten/README.md) +- [TokenX](../tokenx/README.md) #### Well-Known URL / Metadata Document @@ -192,9 +196,9 @@ identifier is generally not considered to be confidential. The client ID for your client is injected at runtime as an environment variable. See the respective identity provider page for details: -- [Azure AD](azure-ad/usage.md#runtime-variables-credentials) -- [ID-porten](idporten.md#runtime-variables-credentials) -- [TokenX](tokenx.md#runtime-variables-credentials) +- [Azure AD](../../security/auth/azure-ad/usage.md#runtime-variables-credentials) +- [ID-porten](../idporten/reference/README.md#runtime-variables-credentials) +- [TokenX](../tokenx/README.md#runtime-variables-credentials) #### Client Authentication @@ -424,7 +428,7 @@ Validation should always be performed before granting access to any [resource se Use well-known and widely used libraries and frameworks that take care of most of the heavy lifting for you. -See [libraries and frameworks](development.md#libraries-and-frameworks) for a non-comprehensive list. 
+See [libraries and frameworks](../reference/README.md#libraries-and-frameworks-for-validating-and-acquiring-tokens) for a non-comprehensive list. #### Signature Validation @@ -465,10 +469,10 @@ Most libraries will have implementations to automatically validate these de fact See the individual identity provider pages for specific validation related to each provider: -- [Azure AD](azure-ad/usage.md#token-validation) -- [ID-porten](idporten.md#token-validation) -- [Maskinporten](maskinporten/scopes.md#3-validate-tokens) -- [TokenX](tokenx.md#token-validation) +- [Azure AD](../../security/auth/azure-ad/usage.md#token-validation) +- [ID-porten](../idporten/how-to/secure.md#validate-token-in-authorization-header) +- [Maskinporten](../maskinporten/how-to/secure.md#validate-tokens) +- [TokenX](../tokenx/README.md#token-validation) --- diff --git a/docs/auth/idporten/.pages b/docs/auth/idporten/.pages new file mode 100644 index 000000000..5c3d5eb56 --- /dev/null +++ b/docs/auth/idporten/.pages @@ -0,0 +1,6 @@ +title: ID-porten +nav: +- README.md +- 🎯 How-To: how-to +- 📚 Reference: reference +- ... diff --git a/docs/auth/idporten/README.md b/docs/auth/idporten/README.md new file mode 100644 index 000000000..9db0c10d7 --- /dev/null +++ b/docs/auth/idporten/README.md @@ -0,0 +1,25 @@ +--- +tags: [auth, idporten, services, explanation] +--- + +# ID-porten + +{%- if tenant() == "nav" %} +!!! warning "Availability" + + This functionality is only available in the [Google Cloud Platform](../../workloads/reference/environments.md#google-cloud-platform-gcp) environments. +{%- endif %} + +[ID-porten](https://docs.digdir.no/docs/idporten/) is the standard authentication service used by Norwegian citizens to access public services. + +If you have a citizen-facing application that requires authentication, you will need to integrate with ID-porten. 
+ +NAIS simplifies this by providing: + +- :books: [A login endpoint](reference/README.md#login-endpoint) that handles the authentication flow with ID-porten +- :books: [A logout endpoint](reference/README.md#logout-endpoint) that triggers single-logout with ID-porten +- :books: [Session management](../../security/auth/wonderwall.md#5-sessions) + +Your application is left with the responsibility to verify that inbound requests have valid tokens. + +:dart: Learn how to [secure your application with ID-porten](how-to/secure.md). diff --git a/docs/auth/idporten/how-to/secure.md b/docs/auth/idporten/how-to/secure.md new file mode 100644 index 000000000..cc6609edf --- /dev/null +++ b/docs/auth/idporten/how-to/secure.md @@ -0,0 +1,93 @@ +--- +tags: [idporten, how-to] +--- + +# Secure your application with ID-porten + +This how-to guides you through the steps required to ensure that only citizens authenticated with [ID-porten](../README.md) can access your application. + +1. [Configure your application](#configure-your-application) +1. [Handle inbound requests](#handle-inbound-requests) + +## Prerequisites + +- Your application is [exposed to the appropriate audience](../../../workloads/application/how-to/expose.md). + +## Configure your application + +```yaml title="app.yaml" +spec: + idporten: + enabled: true + sidecar: + enabled: true +``` + +See the [NAIS application reference](../../../workloads/application/reference/application-spec.md#idportensidecar) for the complete specifications with all possible options. + +Now that your application is configured, you will need to handle inbound requests in your application code. + +## Handle inbound requests + +As long as the citizen is authenticated, the `Authorization` header includes their `access_token` as a [Bearer token](../../explanations/README.md#bearer-token). + +Your application is responsible for verifying that this token is present and valid. 
To do so, follow these steps: + +### Handle missing or empty `Authorization` header + +If the `Authorization` header is missing or empty, the citizen is unauthenticated. + +Redirect the citizen to the [login endpoint] provided by NAIS: + +``` +https:///oauth2/login +``` + +### Validate token in `Authorization` header + +If the `Authorization` header is present, validate the token. +If invalid, redirect the citizen to the [login endpoint] provided by NAIS: + +``` +https:///oauth2/login +``` + +To validate the token, start by validating the [signature and standard time-related claims](../../explanations/README.md#token-validation). +Additionally, perform the following validations: + +**Issuer Validation** + +Validate that the `iss` claim has a value that is equal to either: + +1. the `IDPORTEN_ISSUER` [environment variable](../reference/README.md#runtime-variables-credentials), or +2. the `issuer` property from the [metadata discovery document](../../explanations/README.md#well-known-url-metadata-document). + The document is found at the endpoint pointed to by the `IDPORTEN_WELL_KNOWN_URL` environment variable. + +**Audience Validation** + +Validate that the `aud` claim is equal to the `IDPORTEN_AUDIENCE` environment variable. + +**Signature Validation** + +Validate that the token is signed with ID-porten's public key published at the JWKS endpoint. +This endpoint URI can be found in one of two ways: + +1. the `IDPORTEN_JWKS_URI` environment variable, or +2. the `jwks_uri` property from the metadata discovery document. + The document is found at the endpoint pointed to by the `IDPORTEN_WELL_KNOWN_URL` environment variable. + +**Claims Validation** + +[Other claims](../reference/README.md#claims) may be present in the token. Validation of these claims is optional. + +!!! tip "Recommended JavaScript Library" + + See that helps with token validation and exchange in JavaScript applications. 
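The issuer, audience, and expiry checks above can be sketched in Python. This is a minimal, claims-only illustration — the helper names are ours, and it deliberately does not verify the signature, which must be done first with a proper JOSE library against the keys published at `IDPORTEN_JWKS_URI`:

```python
import base64
import json
import time


def _decode_segment(segment: str) -> dict:
    # JWT segments are base64url without padding; re-add it before decoding.
    padding = "=" * (-len(segment) % 4)
    return json.loads(base64.urlsafe_b64decode(segment + padding))


def check_idporten_claims(token: str, expected_issuer: str, expected_audience: str) -> bool:
    """Claims-only checks: issuer, audience and expiry.

    NOTE: this does NOT verify the signature -- do that first with a JOSE
    library and ID-porten's JWKS before trusting any claim in the token.
    """
    try:
        _header, payload, _signature = token.split(".")
        claims = _decode_segment(payload)
    except ValueError:
        return False  # malformed token or payload
    if claims.get("iss") != expected_issuer:
        return False
    if claims.get("aud") != expected_audience:
        return False
    if time.time() >= claims.get("exp", 0):
        return False
    return True
```

In practice, `expected_issuer` and `expected_audience` would come from the `IDPORTEN_ISSUER` and `IDPORTEN_AUDIENCE` environment variables.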
+ +## Related pages + +:dart: Learn how to [consume other APIs on behalf of a citizen](../../tokenx/how-to/consume.md) + +:books: [ID-porten reference](../reference/README.md) + +[login endpoint]: ../reference/README.md#login-endpoint diff --git a/docs/auth/idporten/reference/README.md b/docs/auth/idporten/reference/README.md new file mode 100644 index 000000000..d928d22af --- /dev/null +++ b/docs/auth/idporten/reference/README.md @@ -0,0 +1,122 @@ +--- +tags: [idporten, reference] +--- + +# ID-porten reference + +## Claims + +Notable claims in tokens from ID-porten: + +- `acr` (**Authentication Context Class Reference**) + - The [security level](#security-levels) used for authenticating the end-user. +- `pid` (**personidentifikator**) + - The Norwegian national ID number (fødselsnummer/d-nummer) of the authenticated end user. + +For a complete list of claims, see the [Access Token Reference in ID-porten](https://docs.digdir.no/docs/idporten/oidc/oidc_protocol_access_token#by-value--self-contained-access-token). + +## Endpoints + +NAIS provides the following endpoints to help you integrate with ID-porten: + +### Login endpoint + +This endpoint handles the authentication flow with ID-porten. It is available at: + +```http +https:///oauth2/login +``` + +To log in a citizen, redirect them to this endpoint. +By default, they will be redirected back to the matching context path for your application's ingress: + +- `/` for `https://.nav.no` +- `/path` for `https://nav.no/path` + +To override the path, use the `redirect` parameter and specify a different path: + +``` +https:///oauth2/login?redirect=/some/path +``` + +If you include query parameters, ensure that they are URL encoded. + +### Logout endpoint + +This endpoint triggers single-logout with ID-porten. It is available at: + +```http +https:///oauth2/logout +``` + +To log out a citizen, redirect them to this endpoint. 
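Both endpoints are plain redirects; the main pitfall when building the URLs is forgetting to URL-encode query parameters such as `redirect`. A small illustrative sketch (the helper name and example host are ours):

```python
from urllib.parse import urlencode


def oauth2_url(ingress: str, endpoint: str, **params: str) -> str:
    """Build a NAIS-provided oauth2 endpoint URL with URL-encoded query parameters."""
    query = f"?{urlencode(params)}" if params else ""
    return f"{ingress}/oauth2/{endpoint}{query}"


# e.g. a login redirect back to a specific path:
login = oauth2_url("https://app.example.nav.no", "login", redirect="/some/path")
```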
+ +## Locales + +ID-porten supports a few different locales for the user interface during authentication. + +Valid values shown below: + +| Value | Description | +|:------|:------------------| +| `nb` | Norwegian Bokmål | +| `nn` | Norwegian Nynorsk | +| `en` | English | +| `se` | Sámi | + +Set the query parameter `locale` when redirecting the user to login: + +``` +https:///oauth2/login?locale=en +``` + +## Runtime variables & credentials + +Your application will automatically be injected with environment variables at runtime. + +| Name | Description | +|:--------------------------|:---------------------------------------------------------------------------------------------------------------------------| +| `IDPORTEN_AUDIENCE` | The expected [audience](../../explanations/README.md#token-validation) for access tokens from ID-porten. | +| `IDPORTEN_WELL_KNOWN_URL` | The URL for ID-porten's [OIDC metadata discovery document](../../explanations/README.md#well-known-url-metadata-document). | +| `IDPORTEN_ISSUER` | `issuer` from the [metadata discovery document](../../explanations/README.md#issuer). | +| `IDPORTEN_JWKS_URI` | `jwks_uri` from the [metadata discovery document](../../explanations/README.md#jwks-endpoint-public-keys). | + +These variables are used to :dart: [secure your application with ID-porten](../how-to/secure.md). + +## Security levels + +ID-porten classifies different user authentication methods into [security levels of assurance](https://docs.digdir.no/docs/idporten/oidc/oidc_protocol_id_token#acr-values). +This is reflected in the `acr` claim for the user's JWTs issued by ID-porten. + +Valid values, in increasing order of assurance levels: + +| Value | Description | Notes | +|:---------------------------|:-----------------------------------------------------------------|:-----------------------| +| `idporten-loa-substantial` | a substantial level of assurance, e.g. 
MinID | Also known as `Level3` | +| `idporten-loa-high` | a high level of assurance, e.g. BankID, Buypass, Commfides, etc. | Also known as `Level4` | + +To configure a default value for _all_ login requests: + +```yaml title="app.yaml" hl_lines="6" +spec: + idporten: + enabled: true + sidecar: + enabled: true + level: idporten-loa-high +``` + +The default value is `idporten-loa-high`. + +NAIS ensures that the user's authentication level matches or exceeds the level configured by the application. +If lower, the user is considered unauthenticated. + +For runtime control of the value, set the query parameter `level` when redirecting the user to login: + +``` +https:///oauth2/login?level=idporten-loa-high +``` + +## Spec + +See the [:books: NAIS application reference](../../../workloads/application/reference/application-spec.md#idporten). diff --git a/docs/auth/maskinporten/.pages b/docs/auth/maskinporten/.pages new file mode 100644 index 000000000..f092fbf61 --- /dev/null +++ b/docs/auth/maskinporten/.pages @@ -0,0 +1,5 @@ +nav: +- README.md +- 🎯 How-To: how-to +- 📚 Reference: reference +- ... diff --git a/docs/auth/maskinporten/README.md b/docs/auth/maskinporten/README.md new file mode 100644 index 000000000..e679f1706 --- /dev/null +++ b/docs/auth/maskinporten/README.md @@ -0,0 +1,35 @@ +--- +tags: [auth, maskinporten, services, explanation] +description: > + Enabling service-to-service authorization between organizations and businesses using Maskinporten. +--- + +# Maskinporten + +[Maskinporten](https://docs.digdir.no/maskinporten_overordnet.html) is a service provided by DigDir used to authorize access to APIs between organizations or businesses. + +NAIS provides support for declarative registration and configuration of Maskinporten resources. 
+These cover two distinct use cases: + +## Consume an API + +To consume an external API secured with Maskinporten, you'll need to acquire a [token](../explanations/README.md#tokens): + +```mermaid +graph LR + Consumer["Application"] --1. request token---> Maskinporten + Maskinporten --2. return token---> Consumer + Consumer --3. use token---> API["External API"] +``` + +:dart: Learn how to [consume an external API using Maskinporten](how-to/consume.md) + +## Secure your API + +To secure your API with Maskinporten, you'll need to define _permissions_ (also known as _scopes_) and grant consumers access to these. + +Once configured, your consumers can acquire a token from Maskinporten to [consume your API](#consume-an-api). + +Your application code must verify inbound requests by validating the included tokens. + +:dart: Learn how to [secure your API using Maskinporten](how-to/secure.md) diff --git a/docs/auth/maskinporten/how-to/consume.md b/docs/auth/maskinporten/how-to/consume.md new file mode 100644 index 000000000..53ba322b2 --- /dev/null +++ b/docs/auth/maskinporten/how-to/consume.md @@ -0,0 +1,209 @@ +--- +tags: [maskinporten, how-to] +--- + +# Consume external API using Maskinporten + +This how-to guides you through the steps required to consume an API secured with [Maskinporten](../README.md): + +1. [Declare the scopes that you want to consume](#declare-consumer-scopes) +2. [Acquire tokens from Maskinporten](#acquire-token) +3. [Consume the API using the token](#consume-api) + +## Declare consumer scopes + +Declare all the scopes that you want to consume in your application's NAIS manifest so that your application is granted access to them: + +```yaml hl_lines="5-7" title="nais.yaml" +spec: + maskinporten: + enabled: true + scopes: + consumes: + - name: "skatt:some.scope" + - name: "nav:some/other/scope" +``` + +The scopes themselves are defined and owned by the external API provider. The exact scope values must be exchanged out-of-band. 
+ +{%- if tenant() == "nav" %} +???+ warning "Ensure that organization has access to scopes" + + Make sure that the provider has granted NAV (organization number `889640782`) access to any scopes that you wish to consume. + + Provisioning of client will fail otherwise. + +???+ warning "Use webproxy for outbound network connectivity from on-premises environments" + + If you're on-premises, you must enable and use [`webproxy`](../../../workloads/application/reference/application-spec.md#webproxy) to access Maskinporten. + +{%- endif %} + +## Acquire token + +To acquire a token from Maskinporten, you will need to create a [client assertion](../../explanations/README.md#client-assertion). + +### Create client assertion + +The client assertion is a JWT that consists of a **header**, a **payload** and a **signature**. + +The **header** should consist of the following parameters: + +| Parameter | Value | Description | +|:----------|:-----------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| **`kid`** | `` | The key identifier of the [private JWK](../../explanations/README.md#private-keys) used to sign the assertion. The private key is found in the [`MASKINPORTEN_CLIENT_JWK` environment variable][variables-ref]. | +| **`typ`** | `JWT` | Represents the type of this JWT. Set this to `JWT`. | +| **`alg`** | `RS256` | Represents the cryptographic algorithm used to secure the JWT. Set this to `RS256`. | + +The **payload** should have the following claims: + +| Claim | Example Value | Description | +|:------------|:---------------------------------------|:----------------------------------------------------------------------------------------------------------| +| **`aud`** | `https://test.maskinporten.no/` | The _audience_ of the token. Set to the [`MASKINPORTEN_ISSUER` environment variable][variables-ref]. 
| +| **`iss`** | `60dea49a-255b-48b5-b0c0-0974ac1c0b53` | The _issuer_ of the token. Set to the [`MASKINPORTEN_CLIENT_ID` environment variable][variables-ref]. | +| **`scope`** | `nav:test/api` | `scope` is a whitespace-separated list of scopes that you want in the issued token from Maskinporten. | +| **`iat`** | `1698435010` | `iat` stands for _issued at_. Set to now. | +| **`exp`** | `1698435070` | `exp` is the _expiration time_. Between 1 and 120 seconds after now. Typically 30 seconds is fine | +| **`jti`** | `2d1a343c-6e7d-4ace-ae47-4e77bcb52db9` | The _JWT ID_ of the token. Used to uniquely identify a token. Set this to a unique value such as an UUID. | + +If the API provider requires the use of an [audience-restricted token](https://docs.digdir.no/maskinporten_func_audience_restricted_tokens.html), you must also include the following claim: + +| Claim | Example Value | Description | +|:---------------|:--------------------------------|:----------------------------------------------------------------------------------------------------------------------------------| +| **`resource`** | `https://api.some-provider.no/` | Target audience for the token returned by Maskinporten. The exact value is defined by the API provider and exchanged out-of-band. | + +Finally, create a **signature** for the client assertion. + +???+ example "Example Code for Creating a Client Assertion" + + The sample code below shows how to create and sign a client assertion in a few different languages: + + === "Kotlin" + + Minimal example code for creating a client assertion in Kotlin, using [Nimbus JOSE + JWT](https://connect2id.com/products/nimbus-jose-jwt). 
+ + ```kotlin linenums="1" + import com.nimbusds.jose.* + import com.nimbusds.jose.crypto.* + import com.nimbusds.jose.jwk.* + import com.nimbusds.jwt.* + import java.time.Instant + import java.util.Date + import java.util.UUID + + val clientId: String = System.getenv("MASKINPORTEN_CLIENT_ID") + val clientJwk: String = System.getenv("MASKINPORTEN_CLIENT_JWK") + val issuer: String = System.getenv("MASKINPORTEN_ISSUER") + val scope: String = "nav:test/api" + val rsaKey: RSAKey = RSAKey.parse(clientJwk) + val signer: RSASSASigner = RSASSASigner(rsaKey.toPrivateKey()) + + val header: JWSHeader = JWSHeader.Builder(JWSAlgorithm.RS256) + .keyID(rsaKey.keyID) + .type(JOSEObjectType.JWT) + .build() + + val now: Date = Date.from(Instant.now()) + val expiration: Date = Date.from(Instant.now().plusSeconds(60)) + val claims: JWTClaimsSet = JWTClaimsSet.Builder() + .issuer(clientId) + .audience(issuer) + .issueTime(now) + .claim("scope", scope) + .expirationTime(expiration) + .jwtID(UUID.randomUUID().toString()) + .build() + + val jwtAssertion: String = SignedJWT(header, claims) + .apply { sign(signer) } + .serialize() + ``` + + === "Python" + + Minimal example code for creating a client assertion in Python, using [PyJWT](https://github.com/jpadilla/pyjwt). 
+ + ```python linenums="1" + import json, jwt, os, uuid + from datetime import datetime, timezone, timedelta + from jwt.algorithms import RSAAlgorithm + + issuer = os.getenv('MASKINPORTEN_ISSUER') + jwk = os.getenv('MASKINPORTEN_CLIENT_JWK') + client_id = os.getenv('MASKINPORTEN_CLIENT_ID') + + header = { + "kid": json.loads(jwk)['kid'] + } + + payload = { + "aud": issuer, + "iss": client_id, + "scope": "nav:test/api", + "iat": datetime.now(tz=timezone.utc), + "exp": datetime.now(tz=timezone.utc)+timedelta(minutes=1), + "jti": str(uuid.uuid4()) + } + + private_key = RSAAlgorithm.from_jwk(jwk) + jwtAssertion = jwt.encode(payload, private_key, "RS256", header) + ``` + +### Request token from Maskinporten + +**Request** + +The token request is an HTTP POST request. +It should have the `Content-Type` set to `application/x-www-form-urlencoded` + +The body of the request should contain the following parameters: + +| Parameter | Value | Description | +|:-------------|:----------------------------------------------|:-------------------------------------------------------------------------------------------| +| `grant_type` | `urn:ietf:params:oauth:grant-type:jwt-bearer` | Type of grant the client is sending. Always `urn:ietf:params:oauth:grant-type:jwt-bearer`. | +| `assertion` | `eyJraWQ...` | The client assertion itself. It should be unique and only used once. | + +Send the request to the `token_endpoint`, i.e. the URL found in the [`MASKINPORTEN_TOKEN_ENDPOINT`][variables-ref] environment variable: + +```http +POST ${MASKINPORTEN_TOKEN_ENDPOINT} HTTP/1.1 +Content-Type: application/x-www-form-urlencoded + +grant_type=urn:ietf:params:oauth:grant-type:jwt-bearer& +assertion=eY... +``` + +**Response** + +Maskinporten will respond with a JSON object that contains the access token. + +```json +{ + "access_token": "eyJraWQ...", + "expires_in": 3599, + ... 
+} +``` + +???+ tip "Cache your tokens" + + The `expires_in` field in the response indicates the lifetime of the token in seconds. + + Use this field to cache and reuse the token to minimize network latency impact. + +See the [Maskinporten token documentation](https://docs.digdir.no/docs/Maskinporten/maskinporten_protocol_token) for more details. + +## Consume API + +Once you have acquired the token, you can finally consume the external API. + +Use the token in the `Authorization` header as a [Bearer token](../../explanations/README.md#bearer-token): + +```http +GET /resource HTTP/1.1 + +Host: api.example.com +Authorization: Bearer eyJraWQ... +``` + +[variables-ref]: ../reference/README.md#variables-for-acquiring-tokens diff --git a/docs/auth/maskinporten/how-to/secure.md b/docs/auth/maskinporten/how-to/secure.md new file mode 100644 index 000000000..ba0935c35 --- /dev/null +++ b/docs/auth/maskinporten/how-to/secure.md @@ -0,0 +1,107 @@ +--- +tags: [maskinporten, how-to] +--- + +# Secure your API using Maskinporten + +This how-to guides you through the steps required to secure your API using [Maskinporten](../README.md): + +1. [Define the scopes that you want to expose to other organizations](#define-scopes) +2. [Grant access to scopes for other organizations](#grant-access-to-consumers) +3. [Validate tokens in requests from external consumers](#validate-tokens) + +## Prerequisites + +- [Expose your application](../../../workloads/application/how-to/expose.md) to consumers at a publicly accessible domain. + +## Define scopes + +A _scope_ represents a permission that a given consumer has access to. 
+
+Declare all the scopes that you want to expose in your application's NAIS manifest:
+
+```yaml title="nais.yaml" hl_lines="5-11"
+spec:
+  maskinporten:
+    enabled: true
+    scopes:
+      exposes:
+        - name: "some.scope.read"
+          enabled: true
+          product: "arbeid"
+        - name: "some.scope.write"
+          enabled: true
+          product: "arbeid"
+```
+
+See the [scope naming reference](../reference/README.md#scope-naming) for details on naming scopes.
+
+See the [NAIS application reference](../../../workloads/application/reference/application-spec.md#maskinportenscopesexposes) for the complete specifications with all possible options.
+
+## Grant access to consumers
+
+Grant the external consumer access to the scopes by specifying their organization number:
+
+```yaml title="nais.yaml" hl_lines="8-9"
+spec:
+  maskinporten:
+    enabled: true
+    scopes:
+      exposes:
+        - name: "some.scope.read"
+          ...
+          consumers:
+            - orgno: "123456789"
+```
+
+Now that you have configured the scopes in Maskinporten, consumers can request tokens with these scopes.
+You will need to validate these tokens in your application.
+
+## Validate tokens
+
+Verify incoming requests from the external consumer(s) by validating the [Bearer token](../../explanations/README.md#bearer-token) in the `Authorization` header.
+
+Always validate the [signature and standard time-related claims](../../explanations/README.md#token-validation).
+Additionally, perform the following validations:
+
+**Issuer Validation**
+
+Validate that the `iss` claim has a value that is equal to either:
+
+1. the [`MASKINPORTEN_ISSUER`][variables-ref] environment variable, or
+2. the `issuer` property from the [metadata discovery document](../../explanations/README.md#well-known-url-metadata-document).
+   The document is found at the endpoint pointed to by the `MASKINPORTEN_WELL_KNOWN_URL` environment variable.
+
+**Scope Validation**
+
+Validate that the `scope` claim contains the expected scope(s).
+The `scope` claim is a string that contains a whitespace-separated list of scopes. + +The semantics and authorization that a scope represents is up to you to define and enforce in your application code. + +**Audience Validation** + +The `aud` claim is not included by default in Maskinporten tokens and does not need to be validated. +It is only included if the consumer has requested an [audience-restricted token](https://docs.digdir.no/maskinporten_func_audience_restricted_tokens.html). + +Only validate the `aud` claim if you want to require your consumers to use audience-restricted tokens. +The expected audience value is up to you to define and must be communicated to your consumers. +The value must be an absolute URI (such as `https://some-provider.no` or `https://some-provider.no/api`). + +**Signature Validation** + +Validate that the token is signed with a public key published at the JWKS endpoint. +This endpoint URI can be found in one of two ways: + +1. the [`MASKINPORTEN_JWKS_URI`][variables-ref] environment variable, or +2. the `jwks_uri` property from the metadata discovery document. + The document is found at the endpoint pointed to by the [`MASKINPORTEN_WELL_KNOWN_URL`][variables-ref] environment variable. + +**Other Token Claims** + +Other claims may be present in the token. +Validation of these claims is optional. + +See the [Access Token Reference in Maskinporten](https://docs.digdir.no/docs/Maskinporten/maskinporten_protocol_token#the-access-token) for a list of all claims. 
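Put together, the checks above can be sketched in Python with [PyJWT](https://github.com/jpadilla/pyjwt), which the consume guide already uses. This is an illustrative sketch, not an official NAIS helper; the required scope `nav:arbeid:some.scope.read` is a hypothetical value:

```python
import os


def has_required_scope(claims: dict, required: str) -> bool:
    # `scope` is a whitespace-separated string of scopes.
    return required in claims.get("scope", "").split()


def validate_maskinporten_token(token: str, required_scope: str) -> dict:
    # PyJWT is imported lazily so the pure scope check above has no dependencies.
    import jwt
    from jwt import PyJWKClient

    # The signing key is fetched from the JWKS endpoint; the issuer is matched
    # against the environment variable injected at runtime.
    signing_key = PyJWKClient(
        os.environ["MASKINPORTEN_JWKS_URI"]
    ).get_signing_key_from_jwt(token)
    claims = jwt.decode(
        token,
        signing_key.key,
        algorithms=["RS256"],
        issuer=os.environ["MASKINPORTEN_ISSUER"],
        options={"require": ["iss", "iat", "exp"]},
    )
    if not has_required_scope(claims, required_scope):
        raise PermissionError(f"token is missing required scope {required_scope!r}")
    return claims
```

Validating the issuer against the metadata discovery document instead of `MASKINPORTEN_ISSUER` works the same way: fetch the document once at startup and read its `issuer` property.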
+ +[variables-ref]: ../reference/README.md#variables-for-validating-tokens diff --git a/docs/auth/maskinporten/reference/README.md b/docs/auth/maskinporten/reference/README.md new file mode 100644 index 000000000..7b713abb0 --- /dev/null +++ b/docs/auth/maskinporten/reference/README.md @@ -0,0 +1,156 @@ +--- +tags: [maskinporten, reference] +--- + +# Maskinporten reference + +## Runtime Variables & Credentials + +Your application will automatically be injected with environment variables at runtime. + +### Variables for acquiring tokens + +These variables are used to [:dart: consume an external API](../how-to/consume.md). + +| Name | Description | +|:------------------------------|:------------------------------------------------------------------------------------------------------------------------| +| `MASKINPORTEN_CLIENT_ID` | [Client ID](../../explanations/README.md#client-id) that uniquely identifies the client in Maskinporten. | +| `MASKINPORTEN_CLIENT_JWK` | [Private JWK](../../explanations/README.md#private-keys) (RSA) for the client. | +| `MASKINPORTEN_SCOPES` | Whitespace-separated string of scopes registered for the client. | +| `MASKINPORTEN_WELL_KNOWN_URL` | The well-known URL for the [metadata discovery document](../../explanations/README.md#well-known-url-metadata-document) | +| `MASKINPORTEN_ISSUER` | `issuer` from the [metadata discovery document](../../explanations/README.md#issuer). | +| `MASKINPORTEN_TOKEN_ENDPOINT` | `token_endpoint` from the [metadata discovery document](../../explanations/README.md#token-endpoint). | + +### Variables for validating tokens + +These variables are used to [:dart: secure your API](../how-to/secure.md). 
+ +| Name | Description | +|:------------------------------|:------------------------------------------------------------------------------------------------------------------------| +| `MASKINPORTEN_WELL_KNOWN_URL` | The well-known URL for the [metadata discovery document](../../explanations/README.md#well-known-url-metadata-document) | +| `MASKINPORTEN_ISSUER` | `issuer` from the [metadata discovery document](../../explanations/README.md#issuer). | +| `MASKINPORTEN_JWKS_URI` | `jwks_uri` from the [metadata discovery document](../../explanations/README.md#jwks-endpoint-public-keys). | + +## Scope Naming + +All scopes within Maskinporten consist of a _prefix_ and a _subscope_: + +```text +scope := : +``` + +For example: + +```text +scope := nav:trygdeopplysninger +``` + +{%- if tenant() == "nav" %} +### Prefix + +The _prefix_ is set to `nav` for all scopes. +{%- endif %} + +### Subscope + +A _subscope_ should describe the resource to be exposed as accurately as possible. +It consists of three parts; _product_, _separator_ and _name_: + +```text +subscope := +``` + +**Product** + +A product should be a _logical grouping_ of the resource, such as `trygdeopplysninger` or `pensjon`. + +**Separator** + +The default separator is `:`. If `name` contains `/`, the default separator is instead `/`. + +If the [`separator` field](../../../workloads/application/reference/application-spec.md#maskinportenscopesexposesseparator) is configured, it will override the default separator. + +**Name** + +The _name_ may also be _postfixed_ to separate between access levels. +For instance, you could separate between `write` access: + +```text +name := trygdeopplysninger.write +``` + +...and `read` access: + +```text +name := trygdeopplysninger.read +``` + +### Example scope + +=== "Without forward slash" + + If **name** does not contain any `/` (forward slash), the **separator** is set to `:` (colon). 
+ + For the following scope: + + ```yaml title="nais.yaml" hl_lines="5-11" + spec: + maskinporten: + enabled: true + scopes: + exposes: + - name: "some.scope.read" + enabled: true + product: "arbeid" + ``` + + - **product** is set to `arbeid` + - **name** is set to `some.scope.read` + + The subscope is then: + + ```text + subscope := arbeid:some.scope.read + ``` + + which results in the fully qualified scope: + + ```text + scope := nav:arbeid:some.scope.read + ``` + +=== "With forward slash" + + If **name** contains a `/` (forward slash), the **separator** is set to `/` (forward slash). + + For the following scope: + + ```yaml title="nais.yaml" hl_lines="5-11" + spec: + maskinporten: + enabled: true + scopes: + exposes: + - name: "some/scope.read" + enabled: true + product: "arbeid" + ``` + + - **product** is set to `arbeid` + - **name** is set to `some/scope.read` + + The subscope is then: + + ```text + subscope := arbeid/some/scope.read + ``` + + which results in the fully qualified scope: + + ```text + scope := nav:arbeid/some/scope.read + ``` + +## Spec + +See the [:books: NAIS application reference](../../../workloads/application/reference/application-spec.md#maskinporten). diff --git a/docs/security/auth/development.md b/docs/auth/reference/README.md similarity index 70% rename from docs/security/auth/development.md rename to docs/auth/reference/README.md index c8bc10eef..b6d89e4ae 100644 --- a/docs/security/auth/development.md +++ b/docs/auth/reference/README.md @@ -1,11 +1,15 @@ -# Development +--- +tags: [auth, reference] +--- -## Mocking +# Auth reference + +## Libraries for mocking - - - a wrapper around the above mock server -## Libraries and Frameworks +## Libraries and frameworks for validating and acquiring tokens Below is a list of some well-known and widely used libraries for handling OAuth, OpenID Connect, and token validation. 
@@ -20,18 +24,17 @@ Below is a list of some well-known and widely used libraries for handling OAuth, ### JavaScript - -- - - -See also for a non-comprehensive list for many various languages. +See also for a non-comprehensive list for many other various languages. ## Token Generators -In many cases, you want to locally develop and test against a secured API in the development environments. +In some cases, you want to locally develop and test against a secured API in the development environments. You will need a token to access said API. See the respective identity provider pages for details on acquiring such tokens: -- [Azure AD](azure-ad/usage.md#token-generator) -- [TokenX](tokenx.md#token-generator) +- [Azure AD](../../security/auth/azure-ad/usage.md#token-generator) +- [TokenX](../tokenx/reference/README.md#token-generator) diff --git a/docs/auth/tokenx/.pages b/docs/auth/tokenx/.pages new file mode 100644 index 000000000..d249ceb0f --- /dev/null +++ b/docs/auth/tokenx/.pages @@ -0,0 +1,6 @@ +title: TokenX +nav: +- README.md +- 🎯 How-To: how-to +- 📚 Reference: reference +- ... diff --git a/docs/auth/tokenx/README.md b/docs/auth/tokenx/README.md new file mode 100644 index 000000000..68901b5c4 --- /dev/null +++ b/docs/auth/tokenx/README.md @@ -0,0 +1,39 @@ +--- +tags: [auth, tokenx, services, explanation] +--- + +# TokenX + +TokenX is NAIS' own implementation of OAuth 2.0 Token Exchange. + +This allows internal applications to act on behalf of a citizen that originally authenticated with [ID-porten](../idporten/README.md), +while maintaining the [zero trust](../../workloads/explanations/zero-trust.md) security model between applications throughout a request chain. + +NAIS provides support for declarative registration and configuration of TokenX resources. 
+These cover two distinct use cases: + +## Consume an API + +To consume an API secured with TokenX on behalf of a citizen, you'll need to exchange the inbound [token](../explanations/README.md#tokens) for a new token. + +The new token preserves the citizen's identity context and is only valid for the specific API you want to access. + +```mermaid +graph LR + Consumer -->|1. citizen token| A + A["Your API"] -->|2. exchange token| TokenX + TokenX -->|3. new token for Other API| A + A -->|4. use token| O[Other API] +``` + +:dart: Learn how to [consume an API using TokenX](how-to/consume.md) + +## Secure your API + +To secure your API with TokenX, you'll need to grant consumers access to your application. + +Once configured, your consumers can exchange a token with TokenX to [consume your API](#consume-an-api). + +Your application code must verify inbound requests by validating the included tokens. + +:dart: Learn how to [secure your API using TokenX](how-to/secure.md) diff --git a/docs/auth/tokenx/how-to/consume.md b/docs/auth/tokenx/how-to/consume.md new file mode 100644 index 000000000..fcd9fad97 --- /dev/null +++ b/docs/auth/tokenx/how-to/consume.md @@ -0,0 +1,154 @@ +--- +tags: [tokenx, how-to] +--- + +# Consume API using TokenX + +This how-to guides you through the steps required to consume an API secured with [TokenX](../README.md): + +1. [Configure your application](#configure-your-application) +1. [Exchange token with TokenX](#exchange-token) +1. [Consume the API using the token](#consume-api) + +## Prerequisites + +- The API you're consuming has [granted access to your application](secure.md#grant-access-to-consumers) + +## Configure your application + +- Enable TokenX in your application: + + ```yaml title="app.yaml" + spec: + tokenx: + enabled: true + ``` + +- Depending on how you communicate with the API you're consuming, [configure the appropriate outbound access policies](../../../workloads/how-to/access-policies.md). 
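For example, if the API you are consuming is another NAIS application in a different namespace, the two configuration steps above might combine into something like this (`other-api` and `other-team` are placeholder names):

```yaml title="app.yaml"
spec:
  tokenx:
    enabled: true
  accessPolicy:
    outbound:
      rules:
        - application: other-api   # placeholder: the app you consume
          namespace: other-team    # omit if it runs in your own namespace
```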
+ +## Exchange token + +### Create client assertion + +To perform a token exchange, your application must authenticate itself. +To do so, create a [client assertion](../../explanations/README.md#client-assertion). + +Sign the client assertion with your applications [private key](../../explanations/README.md#private-keys) contained within the [`TOKEN_X_PRIVATE_JWK`][variables-ref] environment variable. + +The assertion must contain the following claims: + +| Claim | Example Value | Description | +|:----------|:-------------------------------------------------|:----------------------------------------------------------------------------------------------------------------| +| **`sub`** | `dev-gcp:my-team:app-a` | The _subject_ of the token. Set to the [`TOKEN_X_CLIENT_ID` environment variable][variables-ref]. | +| **`iss`** | `dev-gcp:my-team:app-a` | The _issuer_ of the token. Set to the [`TOKEN_X_CLIENT_ID` environment variable][variables-ref]. | +| **`aud`** | `https://tokenx.dev-gcp.nav.cloud.nais.io/token` | The _audience_ of the token. Set to the [`TOKEN_X_TOKEN_ENDPOINT` environment variable][variables-ref]. | +| **`jti`** | `83c580a6-b479-426d-876b-267aa9848e2f` | The _JWT ID_ of the token. Used to uniquely identify a token. Set this to a UUID or similar. | +| **`nbf`** | `1597783152` | `nbf` stands for _not before_. Set to now. | +| **`iat`** | `1597783152` | `iat` stands for _issued at_. Set to now. | +| **`exp`** | `1597783182` | `exp` is the _expiration time_ of the token. Between 1 and 120 seconds after now. Typically 30 seconds is fine. 
| + +Additionally, the headers of the assertion must contain the following parameters: + +| Parameter | Value | Description | +|:----------|:---------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------| +| **`kid`** | `93ad09a5-70bc-4858-bd26-5ff4a0c5f73f` | The key identifier of the key used to sign the assertion. This identifier is available in the JWK found in [`TOKEN_X_PRIVATE_JWK`][variables-ref]. | +| **`typ`** | `JWT` | Represents the type of this JWT. Set this to `JWT`. | +| **`alg`** | `RS256` | Represents the cryptographic algorithm used to secure the JWT. Set this to `RS256`. | + +??? example "Example Client Assertion Values" + + ```json title="Header" + { + "kid": "93ad09a5-70bc-4858-bd26-5ff4a0c5f73f", + "typ": "JWT", + "alg": "RS256" + } + ``` + + ```json title="Payload" + { + "sub": "prod-gcp:namespace-gcp:gcp-app", + "aud": "https://tokenx.dev-gcp.nav.cloud.nais.io/token", + "nbf": 1592508050, + "iss": "prod-gcp:namespace-gcp:gcp-app", + "exp": 1592508171, + "iat": 1592508050, + "jti": "fd9717d3-6889-4b22-89b8-2626332abf14" + } + ``` + +### Create and perform exchange request + +Now that you have a client assertion, we can use this to exchange the inbound token you received from your consumer. + +Create a POST request with the following required parameters: + +| Parameter | Value | Comment | +|:------------------------|:-----------------------------------------------------------|:-----------------------------------------------------------------------------------------------| +| `grant_type` | `urn:ietf:params:oauth:grant-type:token-exchange` | | +| `client_assertion_type` | `urn:ietf:params:oauth:client-assertion-type:jwt-bearer` | | +| `client_assertion` | The [client assertion](#create-client-assertion). | Token that authenticates your application. It should be unique and only used once. 
| | +| `subject_token_type` | `urn:ietf:params:oauth:token-type:jwt` | | +| `subject_token` | The inbound citizen token, either from ID-porten or TokenX | Token that should be exchanged. | +| `audience` | The identifier for the target application | Value follows naming scheme `::`, e.g. `prod-gcp:namespace1:app1` | + +Send the request to the `token_endpoint`, i.e. the URL found in the [`TOKEN_X_TOKEN_ENDPOINT`][variables-ref] environment variable. + +```http title="Example request" +POST ${TOKEN_X_TOKEN_ENDPOINT} HTTP/1.1 +Content-Type: application/x-www-form-urlencoded + +grant_type=urn:ietf:params:oauth:grant-type:token-exchange& +client_assertion_type=urn:ietf:params:oauth:client-assertion-type:jwt-bearer& +client_assertion=eY...............& +subject_token_type=urn:ietf:params:oauth:token-type:jwt& +subject_token=eY...............& +audience=prod-gcp:namespace1:app1 +``` + +#### Success response + +TokenX responds with a JSON object that contains the new access token: + +```json title="Success response body" +{ + "access_token" : "eyJraWQiOi..............", + "expires_in" : 899, + ... +} +``` + +???+ tip "Cache your tokens" + + The `expires_in` field in the response indicates the lifetime of the token in seconds. + + Use this field to cache and reuse the token to minimize network latency impact. + + A safe cache key is `key = sha256($subject_token + $audience)`. + +#### Error response + +If the exchange request is invalid, you will receive a structured error as specified in +[RFC 8693, Section 2.2.2](https://www.rfc-editor.org/rfc/rfc8693.html#name-error-response): + +```json title="Error response body" +{ + "error_description" : "token exchange audience is invalid", + "error" : "invalid_request" +} +``` + +## Consume API + +Once you have acquired the token, you can finally consume the target API. 
+ +Use the token in the `Authorization` header as a [Bearer token](../../explanations/README.md#bearer-token): + +```http +GET /resource HTTP/1.1 + +Host: api.example.com +Authorization: Bearer eyJraWQ... +``` + +[variables-ref]: ../reference/README.md#variables-for-acquiring-tokens diff --git a/docs/auth/tokenx/how-to/generate.md b/docs/auth/tokenx/how-to/generate.md new file mode 100644 index 000000000..2984cfb84 --- /dev/null +++ b/docs/auth/tokenx/how-to/generate.md @@ -0,0 +1,33 @@ +--- +tags: [tokenx, how-to] +--- + +# Generate a token for development + +This how-to guides you through the steps required to generate a token that you can use against an [API secured with TokenX](secure.md) in the development environments. + +## Grant access + +[Grant access](secure.md#grant-access-to-consumers) to the token generator service: + +```yaml title="app.yaml" +spec: + tokenx: + enabled: true + accessPolicy: + inbound: + rules: + - application: tokenx-token-generator + namespace: aura + cluster: dev-gcp +``` + +## Generate token + +1. Visit in your browser. + - Replace `` with the intended _audience_ of the token, in this case the API application. + - The audience value must be on the form of `::` + - For example: `dev-gcp:my-team:my-app` +2. You will be redirected to log in at ID-porten (if not already logged in). +3. After logging in, you should be redirected back to the token generator and presented with a JSON response containing an `access_token`. +4. Use the `access_token` as a [Bearer token](../../explanations/README.md#bearer-token) for calls to your API application. diff --git a/docs/auth/tokenx/how-to/secure.md b/docs/auth/tokenx/how-to/secure.md new file mode 100644 index 000000000..cf51d190a --- /dev/null +++ b/docs/auth/tokenx/how-to/secure.md @@ -0,0 +1,74 @@ +--- +tags: [tokenx, how-to] +--- + +# Secure your application with TokenX + +This how-to guides you through the steps required to secure your API using [TokenX](../README.md): + +1. 
[Grant access to your consumers](#grant-access-to-consumers)
+1. [Validate tokens in requests from consumers](#validate-tokens)
+
+## Grant access to consumers
+
+Specify inbound access policies to authorize your consumers:
+
+```yaml title="app.yaml"
+spec:
+  tokenx:
+    enabled: true
+  accessPolicy:
+    inbound:
+      rules:
+        - application: app-1 # same namespace and cluster
+
+        - application: app-2 # same cluster
+          namespace: team-a
+
+        - application: app-3
+          namespace: team-b
+          cluster: prod-gcp
+```
+
+The above configuration authorizes the following applications:
+
+* application `app-1` running in the same namespace and same cluster as your application
+* application `app-2` running in the namespace `team-a` in the same cluster
+* application `app-3` running in the namespace `team-b` in the cluster `prod-gcp`
+
+Now that you have granted access to your consumers, they can exchange tokens for new tokens that target your application.
+You will need to validate these tokens in your application.
+
+## Validate tokens
+
+Verify incoming requests from consumers by validating the [Bearer token](../../explanations/README.md#bearer-token) in the `Authorization` header.
+
+Always validate the [signature and standard time-related claims](../../explanations/README.md#token-validation).
+Additionally, perform the following validations:
+
+**Issuer Validation**
+
+Validate that the `iss` claim has a value that is equal to either:
+
+1. the [`TOKEN_X_ISSUER`][variables-ref] environment variable, or
+2. the `issuer` property from the [metadata discovery document](../../explanations/README.md#well-known-url-metadata-document).
+   The document is found at the endpoint pointed to by the `TOKEN_X_WELL_KNOWN_URL` environment variable.
+
+**Audience Validation**
+
+Validate that the `aud` claim is equal to [`TOKEN_X_CLIENT_ID`][variables-ref].
+
+**Signature Validation**
+
+Validate that the token is signed with a public key published at the JWKS endpoint.
+This endpoint URI can be found in one of two ways: + +1. the [`TOKEN_X_JWKS_URI`][variables-ref] environment variable, or +2. the `jwks_uri` property from the metadata discovery document. + The document is found at the endpoint pointed to by the [`TOKEN_X_WELL_KNOWN_URL`][variables-ref] environment variable. + +## See also + +- [TokenX claims reference](../reference/README.md#claims) + +[variables-ref]: ../reference/README.md#variables-for-validating-tokens diff --git a/docs/auth/tokenx/reference/README.md b/docs/auth/tokenx/reference/README.md new file mode 100644 index 000000000..dc5c1a692 --- /dev/null +++ b/docs/auth/tokenx/reference/README.md @@ -0,0 +1,59 @@ +--- +tags: [tokenx, reference] +--- + +# TokenX reference + +## Claims + +In addition to the [standard claims](../../explanations/README.md#claims-validation), tokens from TokenX include the following claims: + +| Claim | Description | +|:------------|:----------------------------------------------------------------------------------------------------------------------------------| +| `idp` | The original [`issuer`](../../explanations/README.md#issuer) of the subject token | +| `client_id` | The consumer's [`client_id`](../../explanations/README.md#client-id). Follows the naming scheme `::` | + +Other claims such as `pid` are copied verbatim from the [original token issued by ID-porten](../../idporten/reference/README.md#claims). + +### Claim Mappings + +Some claims are mapped to a different value for legacy/compatibility reasons. + +The table below shows the claim mappings: + +| Claim | Original Value | Mapped Value | +|:------|:---------------------------|:--------------| +| `acr` | `idporten-loa-substantial` | `Level3` | +| `acr` | `idporten-loa-high` | `Level4` | + +The mappings will be removed at some point in the future. +If you're using the `acr` claim in any way, check for both the original and mapped values. 
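One way to handle this in application code is a small equivalence check. The helper below is hypothetical (not part of any NAIS library) and simply treats the original and mapped values as interchangeable:

```python
# Legacy mapped values and the idporten-loa values they correspond to.
ACR_EQUIVALENTS = {
    "idporten-loa-substantial": {"idporten-loa-substantial", "Level3"},
    "Level3": {"idporten-loa-substantial", "Level3"},
    "idporten-loa-high": {"idporten-loa-high", "Level4"},
    "Level4": {"idporten-loa-high", "Level4"},
}


def acr_matches(token_acr: str, required_acr: str) -> bool:
    """True if the token's `acr` denotes the required level, in either notation."""
    return token_acr in ACR_EQUIVALENTS.get(required_acr, {required_acr})
```

Once the mappings are removed, the helper degenerates to a plain string comparison and can be deleted.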
+ +## Runtime Variables & Credentials + +Your application will automatically be injected with environment variables at runtime. + +### Variables for acquiring tokens + +These variables are used to [:dart: consume an API](../how-to/consume.md): + +| Name | Description | +|:-------------------------|:----------------------------------------------------------------------------------------------------------| +| `TOKEN_X_CLIENT_ID` | [Client ID](../../explanations/README.md#client-id) that uniquely identifies the application in TokenX. | +| `TOKEN_X_PRIVATE_JWK` | [Private JWK](../../explanations/README.md#private-keys) containing an RSA key belonging to client. | +| `TOKEN_X_TOKEN_ENDPOINT` | `token_endpoint` from the [metadata discovery document](../../explanations/README.md#token-endpoint). | + +### Variables for validating tokens + +These variables are used to [:dart: secure your API](../how-to/secure.md): + +| Name | Description | +|:-------------------------|:----------------------------------------------------------------------------------------------------------------------| +| `TOKEN_X_CLIENT_ID` | [Client ID](../../explanations/README.md#client-id) that uniquely identifies the application in TokenX. | +| `TOKEN_X_WELL_KNOWN_URL` | The URL for Tokendings' [metadata discovery document](../../explanations/README.md#well-known-url-metadata-document). | +| `TOKEN_X_ISSUER` | `issuer` from the [metadata discovery document](../../explanations/README.md#issuer). | +| `TOKEN_X_JWKS_URI` | `jwks_uri` from the [metadata discovery document](../../explanations/README.md#jwks-endpoint-public-keys). | + +## Spec + +See the [:books: NAIS application reference](../../../workloads/application/reference/application-spec.md#tokenx). diff --git a/docs/build/.pages b/docs/build/.pages new file mode 100644 index 000000000..c30c7a0df --- /dev/null +++ b/docs/build/.pages @@ -0,0 +1,5 @@ +title: Build and deploy +nav: +- README.md +- 🎯 How-To: how-to +- ... 
diff --git a/docs/build/README.md b/docs/build/README.md new file mode 100644 index 000000000..e846b409c --- /dev/null +++ b/docs/build/README.md @@ -0,0 +1,25 @@ +--- +tags: [build, deploy, explanation, services] +--- + +# Build and deploy + +To make your application available to others, you need to build and deploy it. + +NAIS attempts to make this as simple as possible by providing a set of composable [GitHub Actions](https://docs.github.com/en/actions). + +Use these actions to compose your own build and deploy pipeline through [Github Actions workflows](https://docs.github.com/en/actions/using-workflows). + +## GitHub Actions + +:books: [nais/docker-build-push](https://github.com/nais/docker-build-push) + +:books: [nais/deploy](https://github.com/nais/deploy/tree/master/actions/deploy) + +See the respective GitHub Action links for detailed configuration options. + +## What's next + +:dart: [Build and deploy with Github Actions](how-to/build-and-deploy.md) + +:dart: [Set up auto-merge with Dependabot](how-to/dependabot-auto-merge.md) diff --git a/docs/how-to-guides/github-action.md b/docs/build/how-to/build-and-deploy.md similarity index 66% rename from docs/how-to-guides/github-action.md rename to docs/build/how-to/build-and-deploy.md index 4d90461ac..ff395a10d 100644 --- a/docs/how-to-guides/github-action.md +++ b/docs/build/how-to/build-and-deploy.md @@ -1,20 +1,24 @@ +--- +tags: [build, deploy, how-to] +--- + # Build and deploy with Github Actions This how-to guide shows you how to build and deploy your application using [Github Actions](https://help.github.com/en/actions/automating-your-workflow-with-github-actions) and the NAIS deploy action. -## 0. 
Prerequisites +## Prerequisites -- You're part of a [NAIS team](./team.md) +- You're part of a [NAIS team](../../operate/how-to/create-team.md) - A Github repository where the NAIS team has access -- The repository contains a valid [workload manifest](../explanation/workloads/README.md) +- The repository contains a valid [workload manifest](../../workloads/README.md) -## 1. Authorize your Github repository for deployment +## Authorize your Github repository for deployment -1. Open [NAIS console](https://console.<>.cloud.nais.io) in your browser and select your team. +1. Open [NAIS Console](https://console.<>.cloud.nais.io) in your browser and select your team. 2. Select the `Repositories` tab 3. Find the repository you want to deploy from, and click `Authorize` -## 2. Create a Github workflow +## Create a Github workflow !!! note If you require a more advanced workflow, or already have one. Just copy the relevant parts from the example below. @@ -52,6 +56,14 @@ This how-to guide shows you how to build and deploy your application using [Gith ``` This example workflow is a minimal example that builds, signs, and pushes your container image to the image registry. -It then deploys the [app.yaml](../reference/application-spec.md), injecting the image tag from the previous step. +It then deploys the [app.yaml](../../workloads/application/reference/application-spec.md), injecting the image tag from the previous step. When this file is pushed to the `main` branch, the workflow will be triggered and you are all set. + +!!! info "Google Artifact Registry (GAR)" + + The [nais/docker-build-push GitHub action](https://github.com/nais/docker-build-push) builds and pushes images to the _Google Artifact Registry_ (GAR). + + This is a registry managed by NAIS and is the recommended way to store your container images for use in workloads on NAIS. + + We keep the last 10 versions for each image regardless of age. Versions older than 90 days are automatically deleted. 
diff --git a/docs/build/how-to/dependabot-auto-merge.md b/docs/build/how-to/dependabot-auto-merge.md new file mode 100644 index 000000000..6587331fc --- /dev/null +++ b/docs/build/how-to/dependabot-auto-merge.md @@ -0,0 +1,129 @@ +--- +tags: [build, deploy, how-to] +--- + +# Dependabot with auto-merge + +[working-with-dependabot]: https://docs.github.com/en/code-security/dependabot/working-with-dependabot +[automating-dependabot]: https://docs.github.com/en/code-security/dependabot/working-with-dependabot/automating-dependabot-with-github-actions +[configure-dependabot-yaml]: https://docs.github.com/en/code-security/dependabot/dependabot-version-updates/configuration-options-for-the-dependabot.yml-file +[github-cli]: https://cli.github.com/ + +[Dependabot][working-with-dependabot] is a security tool offered by GitHub. +Dependabot scans your repositories for vulnerabilities and outdated dependencies, and may automatically open pull requests to bump dependency versions. +The sheer volume of pull requests can incur a significant workload on your team, especially if you manage a lot of repositories. + +By completing this guide, Dependabot will automatically fix your insecure or outdated dependencies, and the changes will automatically get merged into your main branch. + +## Prerequisites + +* [GitHub command-line interface][github-cli] installed. + +## Enable Dependabot + +The contents of this file will depend on your project requirements. Do not use this file as-is. +Please see [dependabot.yaml configuration syntax][configure-dependabot-yaml] for detailed instructions on how to configure Dependabot. + +!!! 
note ".github/dependabot.yaml"
+
+    ```yaml
+    version: 2
+    updates:
+    - package-ecosystem: "github-actions"
+      directory: "/"
+      schedule:
+        interval: "daily"
+        time: "10:05"
+        timezone: "Europe/Oslo"
+    - package-ecosystem: "docker"
+      directory: "/"
+      schedule:
+        interval: "daily"
+        time: "10:05"
+        timezone: "Europe/Oslo"
+    ```
+
+## GitHub workflow for auto-merging Dependabot pull requests
+
+This workflow triggers when Dependabot opens a pull request.
+All minor and patch-level changes are automatically merged.
+Major version bumps need manual merging.
+Additionally, all GitHub Actions workflow version bumps will be merged automatically, even if they are major bumps.
+
+See also [Automating Dependabot with GitHub Actions][automating-dependabot].
+
+!!! note ".github/workflows/dependabot-auto-merge.yaml"
+
+    ```yaml
+    name: Dependabot auto-merge
+    on: pull_request
+
+    permissions:
+      contents: write
+      pull-requests: write
+
+    jobs:
+      dependabot:
+        runs-on: ubuntu-latest
+        if: ${{ github.actor == 'dependabot[bot]' }}
+        steps:
+          - name: Dependabot metadata
+            id: metadata
+            uses: dependabot/fetch-metadata@v2
+            with:
+              github-token: "${{ secrets.GITHUB_TOKEN }}"
+          - name: Auto-merge changes from Dependabot
+            if: steps.metadata.outputs.update-type != 'version-update:semver-major' || steps.metadata.outputs.package-ecosystem == 'github_actions'
+            run: gh pr merge --auto --squash "$PR_URL"
+            env:
+              PR_URL: ${{ github.event.pull_request.html_url }}
+              GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+    ```
+
+## Enable branch protection and auto-merge on repository
+
+Change the working directory to your git repository, then run the script below.
+Otherwise, the workflow above might not work as expected.
+
+If you prefer, you can instead use GitHub's web frontend to configure auto-merge and branch protection.
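The merge condition in the workflow above can be read as a small predicate: merge automatically unless the update is a semver-major bump, except for GitHub Actions updates, which are merged regardless. Here is the same logic sketched in shell — the function name is ours, and the argument values mirror the `fetch-metadata` outputs used in the `if:` expression:

```shell
# Mirrors the workflow's `if:` expression: succeeds (exit 0) when the
# Dependabot pull request should be auto-merged.
should_auto_merge() {
  update_type="$1"  # e.g. "version-update:semver-patch"
  ecosystem="$2"    # e.g. "docker" or "github_actions"
  [ "$update_type" != "version-update:semver-major" ] || [ "$ecosystem" = "github_actions" ]
}

should_auto_merge "version-update:semver-patch" "docker" && echo "patch bump: auto-merge"
should_auto_merge "version-update:semver-major" "docker" || echo "major bump: manual review"
should_auto_merge "version-update:semver-major" "github_actions" && echo "major actions bump: auto-merge"
```

The last case is why a major bump of a GitHub Actions dependency still merges automatically.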
See GitHub docs for +[enable auto-merge](https://docs.github.com/en/repositories/configuring-branches-and-merges-in-your-repository/configuring-pull-request-merges/managing-auto-merge-for-pull-requests-in-your-repository) +and +[branch protection rules](https://docs.github.com/en/repositories/configuring-branches-and-merges-in-your-repository/managing-protected-branches/managing-a-branch-protection-rule). + +!!! note "enforce_branch_protection.sh" + + ```bash + #!/bin/bash + # adapted from https://github.com/navikt/dagpenger/blob/master/bin/enforce_branch_protection.sh + + # Get the current repository information + repo_url=$(git remote get-url origin) + repo_name=$(basename -s .git "$repo_url") + owner=$(echo "$repo_url" | awk -F"(/|:)" '{print $2}') + + # Determine the name of the main branch + main_branch=$(git symbolic-ref --short HEAD 2>/dev/null || git branch -l --no-color | grep -E '^[*]' | sed 's/^[* ] //') + + # Configure branch protection, and require tests to pass before merging. + # Match the list of checks up against repository workflows. + echo '{ "required_status_checks": { "strict": true, "checks": [ { "context": "test" } ] }, "enforce_admins": false, "required_pull_request_reviews": null, "required_conversation_resolution": true, "restrictions": null }' | \ + gh api repos/"$owner"/"$repo_name"/branches/"$main_branch"/protection \ + --method PUT \ + --silent \ + --header "Accept: application/vnd.github.v3+json" \ + --input - + + # Enable auto-merge on repository + echo '{ "allow_auto_merge": true, "delete_branch_on_merge": true }' | gh api repos/"$owner"/"$repo_name" \ + --method PATCH \ + --silent \ + --header "Accept: application/vnd.github.v3+json" \ + --input - + + if [ $? 
-eq 0 ]; then + echo "Branch protection configured for $owner/$repo_name on branch $main_branch" + else + echo "Failed to configure branch protection for $owner/$repo_name on branch $main_branch" + fi + ``` diff --git a/tenants/nav/how-to-guides/oci-migration.md b/docs/build/how-to/oci-migration.md similarity index 90% rename from tenants/nav/how-to-guides/oci-migration.md rename to docs/build/how-to/oci-migration.md index dcb319ce2..7193aa825 100644 --- a/tenants/nav/how-to-guides/oci-migration.md +++ b/docs/build/how-to/oci-migration.md @@ -1,3 +1,7 @@ +--- +tags: [build, deploy, how-to] +--- + # How to migrate from GitHub Container Registry (GHCR) to Google Artifact Registry (GAR) ## Migrate @@ -119,9 +123,6 @@ as `needs.build.outputs.image` VAR: image=${{ needs.build.outputs.image }} ``` -If you wish to refer to the image later, please consult the guide -on [Image registry](https://doc.nais.io/guides/application/#step-6-push-your-image-to-our-image-registry). - #### NAIS Salsa (SLSA - Supply Chain Levels for Software Artifacts) [SLSA](https://slsa.dev/) is short for Supply chain Levels for Software Artifacts pronounced salsa. @@ -130,8 +131,8 @@ enhancing integrity, and securing both packages and infrastructure within our pr The `nais/docker-build-push` action automatically signs your image with [cosign](https://github.com/sigstore/cosign), and uploads the attestation, the result can be reviewed in your team tab -in [NAIS Console](https://console.nav.cloud.nais.io). -For more information about Salsa, please refer to [NAIS Salsa](https://docs.nais.io/security/salsa/salsa/). +in [NAIS Console](https://console.<>.cloud.nais.io). +For more information about Salsa, please refer to [NAIS Salsa](../../services/salsa.md). #### Workflow permissions @@ -163,7 +164,7 @@ to [The GitHub Blog Post](https://github.blog/changelog/2021-04-20-github-action The use of `secrets.NAIS_DEPLOY_APIKEY` is deprecated, and will be removed in the future. 
For more information on how to authorize your workflow, please refer -to [Authorize your workflow](https://docs.nais.io/deployment/github-action/?h=deploy#1-authorize-your-github-repository-for-deployment) +to [Authorize your workflow](build-and-deploy.md#1-authorize-your-github-repository-for-deployment) ### Finalized workflow @@ -203,7 +204,7 @@ jobs: id-token: write steps: - uses: actions/checkout@v4 - - uses: nais/deploy/actions/deploy@v1 + - uses: nais/deploy/actions/deploy@v2 env: CLUSTER: target-cluster # Replace RESOURCE: nais.yaml @@ -217,6 +218,4 @@ will not run unless the `build` job is successful. That is why we can reference ## Related documentation -To get more details about deploying to NAIS, please -refer: [Deploy an application](https://doc.nais.io/guides/application/#deploying-an-application) -and [Deploy with GitHub Actions](https://doc.nais.io/deployment/?h=deploy+to#nais-deploy) +- :dart: [Build and deploy with Github Actions](build-and-deploy.md) diff --git a/docs/explanation/.pages b/docs/explanation/.pages deleted file mode 100644 index a11838b18..000000000 --- a/docs/explanation/.pages +++ /dev/null @@ -1,9 +0,0 @@ -title: Explanations -nav: - - nais.md - - team.md - - workloads - - zero-trust.md - - under-the-hood.md - - naisdevice.md - - ... diff --git a/docs/explanation/README.md b/docs/explanation/README.md deleted file mode 100644 index 15702ad18..000000000 --- a/docs/explanation/README.md +++ /dev/null @@ -1,3 +0,0 @@ -# Explanations - -Big-picture explanations of higher-level concepts. Most useful when you want to understand how NAIS works. 
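The finalized workflow described in the OCI migration guide above passes the image between jobs via job outputs, which is why the `deploy` job can reference `needs.build.outputs.image`. A reduced sketch of that wiring — job names and file paths follow the guide's examples, the team name is a placeholder:

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      id-token: write
    outputs:
      image: ${{ steps.docker-build-push.outputs.image }} # exposed to downstream jobs
    steps:
      - uses: actions/checkout@v4
      - uses: nais/docker-build-push@v0
        id: docker-build-push
        with:
          team: my-team # placeholder
  deploy:
    needs: build # deploy only runs if build succeeds
    runs-on: ubuntu-latest
    permissions:
      id-token: write
    steps:
      - uses: actions/checkout@v4
      - uses: nais/deploy/actions/deploy@v2
        env:
          CLUSTER: target-cluster # replace
          RESOURCE: nais.yaml
          VAR: image=${{ needs.build.outputs.image }}
```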
diff --git a/docs/explanation/database/opensearch.md b/docs/explanation/database/opensearch.md deleted file mode 100644 index e69de29bb..000000000 diff --git a/docs/explanation/naisdevice.md b/docs/explanation/naisdevice.md deleted file mode 100644 index c97aa8110..000000000 --- a/docs/explanation/naisdevice.md +++ /dev/null @@ -1,9 +0,0 @@ -# naisdevice - -naisdevice is a mechanism provided by NAIS, that lets you connect to services not available on the public internet from your machine. - -Examples of such services are: - -- Access to the NAIS cluster with kubectl -- Applications on internal domains -- Internal NAIS services such as [console](https://console.<>.cloud.nais.io). diff --git a/docs/explanation/restoring.md b/docs/explanation/restoring.md deleted file mode 100644 index 156ff1e2a..000000000 --- a/docs/explanation/restoring.md +++ /dev/null @@ -1,5 +0,0 @@ -# Restoring resources - -Every second hour, we backup of all the workload and resource definitions in the environments. - -If you mess up something in your namespace and need something restored, contact the NAIS team on Slack and we will help you out. diff --git a/docs/explanation/team.md b/docs/explanation/team.md deleted file mode 100644 index 2a4d32ed5..000000000 --- a/docs/explanation/team.md +++ /dev/null @@ -1,20 +0,0 @@ -# What is a team? - -Everything in NAIS is organized around the concept of a team. -Nothing in NAIS is owned by an individual; the team as a whole owns the [workloads](./workloads/README.md) built by the team, as well as all provisioned resources. This is to ensure that everything can continue to operate even if someone leaves. - -A NAIS team doesn't necessarily map directly to the organizational team unit, and (usually) consists of purely technical personnel developing and operating on the same set of products or services. The reason for this is that being member of a NAIS team will grant you access to all the workloads and provisioned resources that the team owns. 
To reduce the attack surface, it's a good idea to limit access to the people that actually need it. - -## The anatomy of a team - -A team has two different roles, `owner` and `member`. -A team has at least one `owner`, and can have multiple `members`. The `owners` have permission to add and remove `members`, as well as changing the roles of the `members`. -You can be a member and owner of multiple teams. - -## What does a NAIS team provide? - -When you [create a team](../how-to-guides/team.md), the following will be provisioned for you: - -- An isolated area for your team's workload and resources in each environment (e.g. dev and prod) -- A GitHub team with the same name in your GitHub organization. The members of your NAIS team will be synchronized with the GitHub team. -- Roles and permissions to access the teams workloads and resources. diff --git a/docs/explanation/under-the-hood.md b/docs/explanation/under-the-hood.md deleted file mode 100644 index a2ab72b02..000000000 --- a/docs/explanation/under-the-hood.md +++ /dev/null @@ -1,54 +0,0 @@ -# Under the hood -In this explanation, we will go through some of the underlying technologies we use to provide NAIS. - -## Environment - -### Runtime implementation -Each environment is its own [Kubernetes](https://kubernetes.io) cluster using [GKE](https://cloud.google.com/kubernetes-engine?hl=en). -Inside each environment, every team has their own [namespace](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/), which is only accessible by the members of the team. - -### Workload isolation -All workloads are deployed in a team namespace and every workload is isolated from _all_ other workloads by utilizing [Kubernetes network policies](https://kubernetes.io/docs/concepts/services-networking/network-policies/) unless [explicitly allowed](./zero-trust.md). - -## GCP resources (CloudSQL, Cloud Storage, BigQuery, etc.) 
-When resources, such as a database, is requested, it is provisioned in a separate GCP project that is dedicated to _this_ team for _this_ environment. -As with the team's namespace, the team's project is only accessible by the members of the team. - -Example NAIS environment: -```mermaid -graph LR -subgraph GCP - subgraph NAIS-dev cluster - subgraph team-a-ns[Team A namespace] - team-a-app[App A] - end - - subgraph team-b-ns[Team B namespace] - team-b-app[App B] - end - - subgraph team-c-ns[Team C namespace] - team-c-app[App C] - end - end - - subgraph team-a-project[A-dev project] - team-a-db[Database A] - end - - subgraph team-b-project[B-dev project] - team-b-db[Database B] - end - - subgraph team-c-project[C-dev project] - team-c-db[Database C] - end -end - -team-a-app --> team-a-db -team-b-app --> team-b-db -team-c-app --> team-c-db -``` - -In the example above, we have three teams, `A`, `B` and `C`. -Each team has their own namespace in the `dev` cluster, and when they request a database, it is provisioned in their own `team-dev` project. \ No newline at end of file diff --git a/docs/explanation/workloads/README.md b/docs/explanation/workloads/README.md deleted file mode 100644 index 68149844b..000000000 --- a/docs/explanation/workloads/README.md +++ /dev/null @@ -1,11 +0,0 @@ -# Workloads - -The main purpose of NAIS is enabling the developer to run the code they write. - -Below is a list of the different kinds we support. - -## NAIS application -A [NAIS application](./application.md) is a used for long-running processes such as a API. - -## NAIS job -A [NAIS job](./job.md) is used for tasks meant to complete and then exit. This can either run as a one-off task or on a schedule. diff --git a/docs/explanation/workloads/application.md b/docs/explanation/workloads/application.md deleted file mode 100644 index c4c5b19e2..000000000 --- a/docs/explanation/workloads/application.md +++ /dev/null @@ -1,11 +0,0 @@ -# Application - -!!! 
warning - This explanation is incomplete - -A [NAIS application](../../reference/application-example.md) lets you run one or more instances of a container image. - -An application is defined by its [application manifest](../../reference/application-spec.md), which is a YAML file that describes how the application should be run and what resources it needs. - -Once the application manifest is applied, NAIS will set up your application as specified. If you've requested resources, NAIS will provision and configure your application to use those resources. - diff --git a/docs/explanation/workloads/job.md b/docs/explanation/workloads/job.md deleted file mode 100644 index 4ce53a3a2..000000000 --- a/docs/explanation/workloads/job.md +++ /dev/null @@ -1,7 +0,0 @@ -# NAIS Job - -!!! warning - This explanation is incomplete - -A NAIS Job is used for tasks meant to complete and then exit. This can either run as a one-off task or on a schedule, like a [cron job](https://en.wikipedia.org/wiki/Cron). - diff --git a/docs/explanation/zero-trust.md b/docs/explanation/zero-trust.md deleted file mode 100644 index dc53248b0..000000000 --- a/docs/explanation/zero-trust.md +++ /dev/null @@ -1,42 +0,0 @@ -# Zero trust - -NAIS embraces the [zero trust](https://en.wikipedia.org/wiki/Zero_trust_security_model) security model, where the core principle is to "never trust, always verify". - -In NAIS every [workload](./workloads/README.md) is isolated by default - which means that it is not able to make _any_ outbound requests or receive _any_ incoming traffic unless explicitly defined. This includes traffic inside your namespace, in the same environment as well as to and from the Internet. -In order to control traffic to and from your workload, you need to define [access policies](../how-to-guides/access-policies.md). - -For the native NAIS services - the platform takes care of this for you. 
For example, when you have a [database](../how-to-guides/persistence/postgres.md), the access policies required to reach the database will be created automatically. - -## Example - -Consider a simple application which consists of a frontend and a backend, where naturally the frontend needs to communicate with the backend. - -This communication is denied by default as indicated by the red arrow. -![access-policy-1](../assets/access-policy-1.png) - -In order to fix this, the frontend needs to allow outbound traffic to the backend by adding the following access policy. - -```yaml -spec: - accessPolicy: - outbound: - - application: backend -``` - -![access-policy-2](../assets/access-policy-2.png) - -However - the frontend is still not allowed to make any requests to the backend. -The missing piece of the puzzle is adding an inbound policy to the backend like so: - -```yaml -spec: - accessPolicy: - inbound: - - application: frontend -``` - -![access-policy-3](../assets/access-policy-3.png) - -Now that both applications has explicitly declared their policies, the communication is allowed. - -See more about [how to define access policies](../how-to-guides/access-policies.md) diff --git a/docs/explanations/.pages b/docs/explanations/.pages new file mode 100644 index 000000000..64d371136 --- /dev/null +++ b/docs/explanations/.pages @@ -0,0 +1,4 @@ +nav: +- nais.md +- team.md +- under-the-hood.md diff --git a/docs/explanation/nais.md b/docs/explanations/nais.md similarity index 93% rename from docs/explanation/nais.md rename to docs/explanations/nais.md index d57f322a8..89e5a2bf4 100644 --- a/docs/explanation/nais.md +++ b/docs/explanations/nais.md @@ -1,3 +1,7 @@ +--- +tags: [explanation, nais] +--- + # What is NAIS? NAIS is a platform aiming to provide you with the technical capabilities you need to develop and run software in a safe and enjoyable way. 
@@ -8,7 +12,7 @@ In order to support this idea, we aim to provide you with functionality that jus
 You can think of the provided functionality as building blocks, where you as a developer can select the ones that fit your specific needs.
 
-The fundamental building block provided by NAIS is a robust and secure runtime environment for your [workloads](./workloads/README.md).
+The fundamental building block provided by NAIS is a robust and secure runtime environment for your [workloads](../workloads/README.md).
 
 When your workload is up and running, it’s crucial to be able to observe how it’s doing. Here the platform provides you with the tooling you need to log, emit metrics and run traces.
diff --git a/docs/explanations/team.md b/docs/explanations/team.md new file mode 100644 index 000000000..72cb568a9 --- /dev/null +++ b/docs/explanations/team.md @@ -0,0 +1,30 @@ +--- +tags: [explanation, nais, team] +--- +
+# What is a team?
+
+Everything in NAIS is organized around the concept of a _team_.
+
+A NAIS team should consist of technical personnel involved with developing and operating the team's workloads and resources.
+
+Being a member of a team grants you full access to the team's workloads and provisioned resources.
+Limit access to the people who actually need it, according to the _principle of least privilege_.
+
+## The anatomy of a team
+
+A team consists of one or more _users_. The team has at least one `owner` and can have multiple `members`.
+
+An `owner` can add, remove, and change the roles of other users on the team.
+
+A user can be part of multiple teams.
+
+## What happens when you create a team?
+
+When you [create a team](../operate/how-to/create-team.md), the following will be provisioned for you:
+
+- An isolated area for your team's workloads and resources in each environment (e.g. `dev` and `prod`)
+- A GitHub team with the same name in your GitHub organization. The members of your NAIS team will be synchronized with the GitHub team.
+- Roles and permissions to access the team's workloads and resources.
+
+The creator of a team is automatically granted `owner` privileges for the team.
diff --git a/docs/explanations/under-the-hood.md b/docs/explanations/under-the-hood.md new file mode 100644 index 000000000..23a1e25bc --- /dev/null +++ b/docs/explanations/under-the-hood.md @@ -0,0 +1,75 @@ +--- +tags: [explanation, nais] +--- +
+# Under the hood
+
+In this explanation, we will go through some of the underlying technologies we use to provide NAIS.
+
+## Environment
+
+### Runtime implementation
+
+Each _environment_ is its own [Kubernetes :octicons-link-external-16:](https://kubernetes.io) cluster using [Google Kubernetes Engine (GKE) :octicons-link-external-16:](https://cloud.google.com/kubernetes-engine?hl=en).
+
+Inside each environment, every team has their own [namespace :octicons-link-external-16:](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/).
+
+A namespace can contain one or more [workloads](../workloads/README.md).
+Only members of the team have access to the namespace and its resources.
+
+```mermaid
+graph LR
+  subgraph env-dev[dev environment]
+    subgraph ns-dev[team namespace]
+      app[App]
+      job[Job]
+    end
+  end
+```
+
+In the example above, the team has an application and a job running in the `dev` environment.
+
+### Workload isolation
+
+All workloads are deployed in a team namespace.
+
+Every workload is isolated from _all_ other workloads with [Kubernetes network policies :octicons-link-external-16:](https://kubernetes.io/docs/concepts/services-networking/network-policies/).
+
+Access is denied by default, unless [explicitly allowed](../workloads/explanations/zero-trust.md).
+
+## Google Cloud Platform (GCP) resources
+
+Each team has a dedicated [GCP project :octicons-link-external-16:](https://cloud.google.com/resource-manager/docs/creating-managing-projects) for _each_ environment.
+
+When your workload requests resources, e.g.
a bucket, it will be provisioned in the team's project for the matching environment. + +```mermaid +graph LR + subgraph env-dev["dev environment"] + subgraph ns-dev[team namespace] + app-dev[App] + end + end + + subgraph project-dev[team project dev] + bucket-dev[Bucket] + end + + subgraph env-prod["prod environment"] + subgraph ns-prod[team namespace] + app-prod[App] + end + end + + subgraph project-prod[team project prod] + bucket-prod[Bucket] + end + +app-dev--> bucket-dev +app-prod--> bucket-prod +``` + +In the example above, the team has an application running in the `dev` environment. +When the application requests a bucket, it is provisioned in the team's `dev` project. + +Equivalently for the `prod` environment, the bucket is provisioned in the team's `prod` project. diff --git a/docs/how-to-guides/.pages b/docs/how-to-guides/.pages deleted file mode 100644 index 74f970ed7..000000000 --- a/docs/how-to-guides/.pages +++ /dev/null @@ -1 +0,0 @@ -title: How-to guides diff --git a/docs/how-to-guides/README.md b/docs/how-to-guides/README.md deleted file mode 100644 index 3294dfc74..000000000 --- a/docs/how-to-guides/README.md +++ /dev/null @@ -1,4 +0,0 @@ -# How-to guides - -Practical step-by-step guides to help you achieve a specific goal. Most useful when you're trying to get something done. 
- diff --git a/docs/how-to-guides/command-line-access/.pages b/docs/how-to-guides/command-line-access/.pages deleted file mode 100644 index 2a7835815..000000000 --- a/docs/how-to-guides/command-line-access/.pages +++ /dev/null @@ -1 +0,0 @@ -title: Command line access diff --git a/docs/how-to-guides/command-line-access/troubleshooting.md b/docs/how-to-guides/command-line-access/troubleshooting.md deleted file mode 100644 index abee3b466..000000000 --- a/docs/how-to-guides/command-line-access/troubleshooting.md +++ /dev/null @@ -1,8 +0,0 @@ -## Troubleshooting - -## Mac: GCP auth plugin has been removed -If you get `error: The gcp auth plugin has been removed` after you have updated the kubeconfig, you might be missing kubelogin. - -Run ```which kubelogin``` in a terminal - -Install kubelogin if the output is empty, (Follow the instructions in [kubelogins documentation](https://azure.github.io/kubelogin/install.html)) \ No newline at end of file diff --git a/docs/how-to-guides/communicating-inside-environment.md b/docs/how-to-guides/communicating-inside-environment.md deleted file mode 100644 index 410378a97..000000000 --- a/docs/how-to-guides/communicating-inside-environment.md +++ /dev/null @@ -1,24 +0,0 @@ -# Communicating inside the environment - -This guide will show you how to communicate with other applications inside the same environment. - -## 0. Prerequisites -- Working [access policies](./access-policies.md) for the applications you want to communicate with. - -## 1. Identify the endpoint you want to communicate with - -To identity the endpoint of the workload we are communicating with, we need to know it's `name` and what `namespace` it's running in. - -If the workload you are calling is in the same namespace, you can reach it by calling it's name directly using HTTP like so: - -```plaintext -http:// -``` - -If the workload is running in another team's namespace, you need to specify the namespace as well: - -```plaintext -http://. 
-``` - -With this endpoint, you can now call the workload using HTTP from your own workload. diff --git a/docs/how-to-guides/nais-cli/.pages b/docs/how-to-guides/nais-cli/.pages deleted file mode 100644 index f987054a1..000000000 --- a/docs/how-to-guides/nais-cli/.pages +++ /dev/null @@ -1 +0,0 @@ -title: NAIS CLI diff --git a/docs/how-to-guides/naisdevice/.pages b/docs/how-to-guides/naisdevice/.pages deleted file mode 100644 index 0dc4264a2..000000000 --- a/docs/how-to-guides/naisdevice/.pages +++ /dev/null @@ -1 +0,0 @@ -title: naisdevice diff --git a/docs/how-to-guides/naisdevice/install.md b/docs/how-to-guides/naisdevice/install.md deleted file mode 100644 index 84c22cd0e..000000000 --- a/docs/how-to-guides/naisdevice/install.md +++ /dev/null @@ -1,102 +0,0 @@ -# Install - -## Device-specific installation steps - -=== "macOS" - - 1. [Install Homebrew](https://brew.sh/) unless you already have it. - - Homebrew makes it possible to install and maintain apps using the terminal app on your Mac. - - 1. Open terminal (Use ` + ` to find `Terminal.app`) and add the nais tap by typing or pasting the text below and press ``. - - Adding the nais tap lets Homebrew know where to get and update files from. Do not worry about where it will be installed, we got you covered. - - ```bash - brew tap nais/tap - ``` - - 1. When the tap is added, you are ready to install naisdevice, by typing or pasting the following in terminal and press ``. - - ```bash - brew install naisdevice-tenant - ``` - - 1. You will be asked for your local device account's password to finish the installation. - - 1. Turn on your freshly installed `naisdevice` app. - - 1. Use ` + ` to find your `naisdevice.app` and press ``. - 1. Follow the [instructions to connect your _nais_ device](#connect-naisdevice-through-tasksys-tray-icon). - -=== "Windows" - - #### Install using Scoop - - 1. Install [Scoop](https://scoop.sh) unless you already have it. 
- - Scoop makes it possible to install and maintain programs from the command line. - - 1. Use the following command in the command line to add the nais bucket to let Scoop know where to get and update files from. Do not worry about where it will be installed, we got you covered. - - ```powershell - scoop bucket add nais https://github.com/nais/scoop-bucket - ``` - - 1. When the bucket is added, you are ready to install naisdevice, by typing the following in the command line: - - ```powershell - scoop install naisdevice-tenant - ``` - - (you will be asked for administrator access to run the installer) - 1. Start _naisdevice_ from the _Start menu_ - -=== "Manual" - - 1. [Download and install naisdevice-tenant.exe](https://github.com/nais/device/releases/latest) - (you will be asked for administrator access when you run the installer) - 1. Start _naisdevice_ from the _Start menu_ - -=== "Ubuntu" - - !!! warning - - Using Gnome DE on latest Ubuntu LTS - only supported variant atm - - 1. Add the nais PPA repo: - - ``` - NAIS_GPG_KEY="/etc/apt/keyrings/nav_nais_gar.asc" - curl -sfSL "https://europe-north1-apt.pkg.dev/doc/repo-signing-key.gpg" | sudo dd of="$NAIS_GPG_KEY" - echo "deb [arch=amd64 signed-by=$NAIS_GPG_KEY] https://europe-north1-apt.pkg.dev/projects/nais-io nais-ppa main" | sudo tee /etc/apt/sources.list.d/nav_nais_gar.list - sudo apt update - ``` - - **NOTE** curl is not installed in a "fresh" ubuntu: - - ``` - sudo apt install curl - ``` - - 1. Install the naisdevice package: - - ``` - sudo apt install naisdevice-tenant - ``` - - 1. Turn on your freshly installed `naisdevice` application. - 1. Find `naisdevice` in your application menu, or use the `naisdevice` command in a terminal to start the application. - 2. Follow the [instructions to connect your _nais_ device](#connect-naisdevice-through-tasksys-tray-icon). 
- - ### Connect naisdevice through task/sys -tray icon - - ![A macOS systray exemplifying a red-colored `naisdevice` icon.](../assets/naisdevice-systray-icon.svg) - - When you have opened naisdevice, you may be concerned that nothing happened. The little naisdevice icon has appeared in your Systray (where all your small program icons are located - see above picture for how it looks on Mac): - - 1. Find your `naisdevice` icon (pictured above - though it should not be red at first attempted connection). - - Can't find the icon? Make sure it is installed (See [macOS](#macos-installation), [Windows](#windows-installation) or [Ubuntu](#ubuntu-installation)) - 1. Left-click it and select `Connect`. - 1. Left-click the `naisdevice` icon again and click `Connect`. - You might need to allow ~20 seconds to pass before clicking `Connect` turns your `naisdevice` icon green. diff --git a/docs/how-to-guides/persistence/.pages b/docs/how-to-guides/persistence/.pages deleted file mode 100644 index f96c6e5a3..000000000 --- a/docs/how-to-guides/persistence/.pages +++ /dev/null @@ -1,4 +0,0 @@ -nav: - - buckets - - kafka - - ... diff --git a/docs/how-to-guides/persistence/bigquery/.pages b/docs/how-to-guides/persistence/bigquery/.pages deleted file mode 100644 index 94581998e..000000000 --- a/docs/how-to-guides/persistence/bigquery/.pages +++ /dev/null @@ -1 +0,0 @@ -title: BigQuery diff --git a/docs/how-to-guides/persistence/kafka/create.md b/docs/how-to-guides/persistence/kafka/create.md deleted file mode 100644 index 28e3cfcf6..000000000 --- a/docs/how-to-guides/persistence/kafka/create.md +++ /dev/null @@ -1,33 +0,0 @@ -# Create a Kafka topic -This guide will show you how to create a Kafka topic - -## 0. 
Creating topics - -???+ note ".nais/topic.yaml" - ```yaml hl_lines="4-5 7 9 11-14" - apiVersion: kafka.nais.io/v1 - kind: Topic - metadata: - name: - namespace: - labels: - team: - spec: - pool: # TODO: link to available tenant pools - acl: - - team: - application: - access: readwrite # read, write, readwrite - ``` -See the [Kafka topic reference](../../../reference/kafka-topic-spec.md) for a complete list of available options. - -## 1. Grant access to the topic for other applications (optional) -See [manage access](manage-acl.md) for how to grant access to your topic. - -## 2. Apply the Topic resource -=== "Automatically" - Add the file to your application repository to deploy with [NAIS github action](../../github-action.md). -=== "Manually" - ```bash - kubectl apply -f ./nais/topic.yaml --namespace= --context= - ``` diff --git a/docs/how-to-guides/persistence/opensearch/.pages b/docs/how-to-guides/persistence/opensearch/.pages deleted file mode 100644 index 2854cc404..000000000 --- a/docs/how-to-guides/persistence/opensearch/.pages +++ /dev/null @@ -1 +0,0 @@ -title: OpenSearch diff --git a/docs/how-to-guides/secrets/.pages b/docs/how-to-guides/secrets/.pages deleted file mode 100644 index b86614c21..000000000 --- a/docs/how-to-guides/secrets/.pages +++ /dev/null @@ -1,4 +0,0 @@ -nav: - - console.md - - workload.md - - ... diff --git a/docs/how-to-guides/workload-crud-operations.md b/docs/how-to-guides/workload-crud-operations.md deleted file mode 100644 index 1d1120784..000000000 --- a/docs/how-to-guides/workload-crud-operations.md +++ /dev/null @@ -1,77 +0,0 @@ -# Workload CRUD-operations - -This guide shows you how to perform CRUD-operations on your workload. - -## 0. Prerequisites -- [Command-line access to the cluster](./command-line-access/setup.md) -- [Member of a NAIS team](../explanation/team.md) -- [Workload spec](../explanation/workloads/README.md) - -=== "Application" - - ## 1. 
Create/apply the application spec - - ```shell - kubectl apply -f nais.yaml --namespace= --context= - ``` - - Verify that the application was successfully created by running `describe` on the Application: - - ```shell - kubectl describe app - ``` - - The events will tell you if the application was successfully created or not. - - - ## 2. Read/list your applications - - ```shell - kubectl get application --namespace= --context= - ``` - - ## 3. Update/edit your application - - ```shell - kubectl edit application --namespace= --context= - ``` - - ## 4. Delete your application - - ```shell - kubectl delete application --namespace= --context= - ``` - -=== "Naisjob" - - ## 1. Create/apply the naisjob spec - - ```shell - kubectl apply -f nais.yaml --namespace= --context= - ``` - - Verify that the naisjob was successfully created by running `describe` on the Naisjob: - - ```shell - kubectl describe naisjob - ``` - - The events will tell you if the naisjob was successfully created or not. - - ## 2. Read/list your naisjobs - - ```shell - kubectl get naisjob --namespace= --context= - ``` - - ## 3. Update/edit your naisjob - - ```shell - kubectl edit naisjob --namespace= --context= - ``` - - ## 4. Delete your naisjob - - ```shell - kubectl delete naisjob --namespace= --context= - ``` diff --git a/docs/observability/.pages b/docs/observability/.pages new file mode 100644 index 000000000..f092fbf61 --- /dev/null +++ b/docs/observability/.pages @@ -0,0 +1,5 @@ +nav: +- README.md +- 🎯 How-To: how-to +- 📚 Reference: reference +- ... diff --git a/docs/explanation/observability/README.md b/docs/observability/README.md similarity index 93% rename from docs/explanation/observability/README.md rename to docs/observability/README.md index a69b0f1ea..aa2d4fb46 100644 --- a/docs/explanation/observability/README.md +++ b/docs/observability/README.md @@ -4,8 +4,9 @@ description: >- This page describes the different options and how to use them. 
search: boost: 1 -tags: [explanation] +tags: [explanation, observability] --- + # Observability Building and deploying applications is only half the battle. The other half is to be able to observe what's going on in your application. This is where observability comes in. @@ -41,7 +42,7 @@ graph NAIS provides a new way to get started with observability. By enabling auto-instrumentation, you can get started with observability without having to write any code. This is the easiest way to get started with observability, as it requires little to no effort on the part of the team developing the application. -[:dart: Get started with auto-instrumentation](../../how-to-guides/observability/auto-instrumentation.md) +[:dart: Get started with auto-instrumentation](../observability/how-to/auto-instrumentation.md) ## Metrics @@ -51,7 +52,7 @@ We use the [OpenMetrics][openmetrics] format for metrics. This is a text-based f [openmetrics]: https://openmetrics.io/ -[:bulb: Learn more about metrics](./metrics.md) +[:bulb: Learn more about metrics](metrics/README.md) ### Prometheus @@ -67,7 +68,7 @@ graph LR Prometheus --GET /metrics--> Application ``` -[:simple-prometheus: Access Prometheus here](./metrics.md#prometheus-environments) +[:simple-prometheus: Access Prometheus here](metrics/README.md#prometheus-environments) ### Grafana @@ -92,7 +93,7 @@ graph LR Router --> C[Elastic / Kibana] ``` -[:bulb: Learn more about logs](./logging.md) +[:bulb: Learn more about logs](logging/README.md) ## Traces @@ -109,7 +110,7 @@ graph LR Tempo --> Grafana ``` -[:bulb: Learn more about tracing](./tracing.md) +[:bulb: Learn more about tracing](tracing/README.md) ## Alerts @@ -126,7 +127,7 @@ graph LR Alertmanager --> Slack ``` -[:bulb: Learn more about alerts](./alerting.md) +[:bulb: Learn more about alerts](alerting/README.md) ## Learning more diff --git a/docs/observability/alerting/.pages b/docs/observability/alerting/.pages new file mode 100644 index 000000000..f092fbf61 --- /dev/null +++ 
b/docs/observability/alerting/.pages @@ -0,0 +1,5 @@ +nav: +- README.md +- 🎯 How-To: how-to +- 📚 Reference: reference +- ... diff --git a/docs/explanation/observability/alerting.md b/docs/observability/alerting/README.md similarity index 98% rename from docs/explanation/observability/alerting.md rename to docs/observability/alerting/README.md index 47e57f096..c4be449ce 100644 --- a/docs/explanation/observability/alerting.md +++ b/docs/observability/alerting/README.md @@ -1,8 +1,9 @@ --- description: >- Alerting is a crucial part of observability, and it's the first step in knowing when something is wrong with your application. -tags: [explanation] +tags: [explanation, alerting, observability, services] --- + # Alerting @@ -79,4 +80,4 @@ Consider the following attributes when setting up alerts: * https://cloud.google.com/blog/products/management-tools/practical-guide-to-setting-slos * https://cloud.google.com/blog/products/management-tools/good-relevance-and-outcomes-for-alerting-and-monitoring -* https://sre.google/workbook/implementing-slos/ \ No newline at end of file +* https://sre.google/workbook/implementing-slos/ diff --git a/docs/how-to-guides/observability/alerts/grafana.md b/docs/observability/alerting/how-to/grafana.md similarity index 85% rename from docs/how-to-guides/observability/alerts/grafana.md rename to docs/observability/alerting/how-to/grafana.md index cbe40fcab..9032f85ad 100644 --- a/docs/how-to-guides/observability/alerts/grafana.md +++ b/docs/observability/alerting/how-to/grafana.md @@ -1,7 +1,8 @@ --- description: Learn how to create an alert for your application in Grafana. -tags: [guide, grafana] +tags: [how-to, observability, alerting, grafana] --- + # Create alert in Grafana @@ -10,21 +11,21 @@ This guide shows you how to create an alert for your application in Grafana. Whi [howto-prometheus-alert]: ./prometheus-basic.md -## 0. Prerequisites +## Prerequisites You will need to have an application that exposes metrics. 
If you don't have one, you can follow the [Instrument Your Application][howto-instrument-application] guide. -You will need some basic knowledge of [PromQL](../../../reference/observability/metrics/promql.md) to create alert conditions. +You will need some basic knowledge of [PromQL](../../metrics/reference/promql.md) to create alert conditions. -[howto-instrument-application]: ../metrics/expose.md +[howto-instrument-application]: ../../metrics/how-to/expose.md -## 1. Create a new Alert rule +## Create a new Alert rule 1. Open [Grafana](<>) and navigate to "Alerting" > "[Alert rules](<>)" in the left-hand menu. 2. Click on "+ New alert rule" button. 3. Give your alert a descriptive name, you will choose a folder for it later. -## 2. Define query and alert condition +## Define query and alert condition Define queries and/or expressions and then choose one of them as the alert rule condition. This is the threshold that an alert rule must meet or exceed in order to fire. @@ -32,7 +33,7 @@ Define queries and/or expressions and then choose one of them as the alert rule 2. Write a PromQL query that returns the metric you want to alert on. For example, `http_requests_total{}`. 3. In the "Expression" field, write the condition that should trigger the alert. Choose the operator and the threshold value. For example, `IS ABOVE 100`. -## 3. Set evaluation behavior +## Set evaluation behavior Define how the alert rule is evaluated. @@ -40,7 +41,7 @@ Define how the alert rule is evaluated. 2. Leave "Evaluation group" empty unless you want to group this alert with others. 3. Set "Pending period" to the amount of time the alert condition must be met before the alert is triggered. -## 4. Add annotations +## Add annotations Add annotations to provide more context in your alert notifications. @@ -52,11 +53,11 @@ Add annotations to provide more context in your alert notifications. * `action` - Describes the action that should be taken when the alert is triggered. 
* `consequence` - Describes the observed consequence from the user's perspective. -## 5. Configure notifications +## Configure notifications Add custom labels to change the way your notifications are routed. * `app` - The application name. * `env` - The environment name. * `team` - The team name. -* `severity` - The severity of the alert. \ No newline at end of file +* `severity` - The severity of the alert. diff --git a/docs/how-to-guides/observability/alerts/prometheus-advanced.md b/docs/observability/alerting/how-to/prometheus-advanced.md similarity index 97% rename from docs/how-to-guides/observability/alerts/prometheus-advanced.md rename to docs/observability/alerting/how-to/prometheus-advanced.md index d859e42bd..500800923 100644 --- a/docs/how-to-guides/observability/alerts/prometheus-advanced.md +++ b/docs/observability/alerting/how-to/prometheus-advanced.md @@ -1,16 +1,17 @@ --- description: Advanced guide to customized Prometheus alerts -tags: [guide, prometheus] +tags: [how-to, alerting, observability, prometheus] --- + # Customize Prometheus alerts This guide will show you how to customize Prometheus alerts for your team. This is useful if you want to experiment with formatting, use a different webhook, or have a different set of labels for your alerts. -## 0. Prerequisites +## Prerequisites Each team namespace will have a default `AlertmanagerConfig` which will pick up alerts labeled `namespace: `. If you want to change anything about alerting for your team, e.g. the formatting of alerts, webhook used, ..., you can create a similar `AlertmanagerConfig` which is configured for different labels. -## 1. Create PrometheusRule +## Create PrometheusRule Remember that these matchers will match against every alert in the cluster, so be sure to use values that will be unique for your team. In your `PrometheusRule` also include the label `alert_type: custom` to be sure the default configuration doesn't pick up your alert. 
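Putting the advice above together, a sketch of such a `PrometheusRule` could look like the following. This is an illustration only: the namespace, alert name, and the `my_team_alerts` label are hypothetical, and the label must match the matchers you configure in your own `AlertmanagerConfig`:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: my-team-custom-alerts   # hypothetical name
  namespace: my-team            # hypothetical team namespace
spec:
  groups:
    - name: my-team-custom
      rules:
        - alert: MyAppHighErrorRate   # hypothetical alert
          expr: sum(rate(http_requests_total{status=~"5.."}[5m])) > 1
          for: 5m
          labels:
            alert_type: custom        # keeps the default AlertmanagerConfig from picking this up
            my_team_alerts: "true"    # unique label your custom AlertmanagerConfig matches on
          annotations:
            summary: "my-app is returning too many 5xx responses"
```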
@@ -47,7 +48,7 @@ For more information about `slackConfigs` and other possibilities see the [Prometh alert_type: custom ``` -## 3. Create AlertmanagerConfig +## Create AlertmanagerConfig You can use an `AlertmanagerConfig` if you need to change the default Slack layout, need another channel than the default channel, need another set of colors etc. diff --git a/docs/how-to-guides/observability/alerts/prometheus-basic.md b/docs/observability/alerting/how-to/prometheus-basic.md similarity index 84% rename from docs/how-to-guides/observability/alerts/prometheus-basic.md rename to docs/observability/alerting/how-to/prometheus-basic.md index 62191c3d2..c751201c7 100644 --- a/docs/how-to-guides/observability/alerts/prometheus-basic.md +++ b/docs/observability/alerting/how-to/prometheus-basic.md @@ -1,20 +1,21 @@ --- description: Create alerts for your application using Prometheus. -tags: [guide, prometheus] +tags: [how-to, observability, alerting, prometheus] --- + # Create alert with Prometheus This guide shows you how to create alerts for your application. -## 0. Prerequisites +## Prerequisites -- Your application serves [metrics](../metrics/expose.md) +- Your application serves [metrics](../../metrics/how-to/expose.md) You can define alerts by using Kubernetes resources (`PrometheusRule`), as well as directly in Grafana (GUI based). You will have a separate alertmanager for each environment available at `https://alertmanager..<>.cloud.nais.io/` -## 1. Create PrometheusRule +## Create PrometheusRule We use native [Prometheus alert rules](https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/), and let [Alertmanager](https://prometheus.io/docs/alerting/latest/alertmanager/) handle the notifications. @@ -49,18 +50,19 @@ You can define alerts by creating a `PrometheusRule` resource in your teams name severity: critical ``` -## 2. 
Activate the alert +## Activate the alert === "Automatically" - Add the file to your application repository, alongside `nais.yaml` to deploy with [NAIS github action](../../github-action.md). -=== "Manually" + Add the file to your application repository, alongside `nais.yaml` to deploy with [NAIS github action](../../../build/how-to/build-and-deploy.md). +=== "Manually" ```bash kubectl apply -f ./nais/alert.yaml ``` -## 3. Verify your alert +## Verify your alert You can see the alerts in the Alertmanager at `https://alertmanager..<>.cloud.nais.io/` and the defined rules in Prometheus at `https://prometheus..<>.cloud.nais.io/rules` -## 4. Disable resolved (Optional) +## Disable resolved (Optional) A message will be automatically sent when the alert is resolved. In some cases this message may be unnecessary and can be disabled by adding the label `send_resolved: "false"`: @@ -71,4 +73,4 @@ labels: send_resolved: "false" ``` -Learn how to write good alerts [here](../../../explanation/observability/alerting.md) +Learn how to write good alerts [here](../README.md) diff --git a/docs/reference/observability/alerts/prometheusrule.md b/docs/observability/alerting/reference/prometheusrule.md similarity index 91% rename from docs/reference/observability/alerts/prometheusrule.md rename to docs/observability/alerting/reference/prometheusrule.md index 648732fee..efb64b90e 100644 --- a/docs/reference/observability/alerts/prometheusrule.md +++ b/docs/observability/alerting/reference/prometheusrule.md @@ -1,16 +1,17 @@ --- description: PrometheusRule resource specification for defining alerts in Prometheus. -tags: [reference, prometheus] +tags: [reference, observability, alerting, prometheus] --- + # Prometheus Alerting Rule Reference [Prometheus alerts][prometheus-alerting-rule] are defined in a `PrometheusRule` resource. This resource is part of the [Prometheus Operator][prometheus-operator] and is used to define alerts that should be sent to the Alertmanager. 
[Alertmanager][alertmanager] is a component of the Prometheus project that handles alerts sent by client applications such as the Prometheus server. It takes care of deduplicating, grouping, and routing them to the correct Slack channel. -Prometheus alerts are defined using the [PromQL](../metrics/promql.md) query language. The query language is used to specify when an alert should fire, and the `PrometheusRule` resource is used to specify the alert and its properties. +Prometheus alerts are defined using the [PromQL](../../metrics/reference/promql.md) query language. The query language is used to specify when an alert should fire, and the `PrometheusRule` resource is used to specify the alert and its properties. -Prometheus alerts are sent to the team's Slack channel configured in [nais teams](../../../explanation/team.md) when the alert fires. +Prometheus alerts are sent to the team's Slack channel configured in [Console](../../../operate/console.md) when the alert fires. ```mermaid graph LR diff --git a/docs/explanation/observability/frontend.md b/docs/observability/frontend/README.md similarity index 97% rename from docs/explanation/observability/frontend.md rename to docs/observability/frontend/README.md index 60964cb32..92de3e3a1 100644 --- a/docs/explanation/observability/frontend.md +++ b/docs/observability/frontend/README.md @@ -2,8 +2,9 @@ description: >- NAIS offers observability tooling for frontend applications. This page describes how to use these offerings. -tags: [explanation] +tags: [explanation, observability, services] --- + # Frontend apps !!! info "Status: Alpha" @@ -129,7 +130,7 @@ context.with(trace.setSpan(context.active(), span), () => { When you deploy your frontend as a NAIS application, the telemetry collector URL can be automatically configured. -To use this feature, you must specify [frontend.mountPath.generatedConfig](../../reference/application-spec.md#frontendgeneratedconfigmountpath) in your `nais.yaml`. 
+To use this feature, you must specify [frontend.mountPath.generatedConfig](../../workloads/application/reference/application-spec.md#frontendgeneratedconfigmountpath) in your `nais.yaml`. A JavaScript file will be created at the specified path in your pod file system, and contains the appropriate configuration. Additionally, the environment variable `NAIS_FRONTEND_TELEMETRY_COLLECTOR_URL` will be set in your pod. diff --git a/docs/how-to-guides/observability/auto-instrumentation.md b/docs/observability/how-to/auto-instrumentation.md similarity index 85% rename from docs/how-to-guides/observability/auto-instrumentation.md rename to docs/observability/how-to/auto-instrumentation.md index fc33fbb30..0b9edb0b3 100644 --- a/docs/how-to-guides/observability/auto-instrumentation.md +++ b/docs/observability/how-to/auto-instrumentation.md @@ -1,10 +1,11 @@ --- description: Get started with auto-instrumentation for your applications with OpenTelemetry data for Tracing, Metrics and Logs using the OpenTelemetry Agent. -tags: [guide, tracing] +tags: [how-to, tracing, observability] --- + # Get started with auto-instrumentation -This guide will explain how to get started with auto-instrumentation your applications with OpenTelemetry data for [Tracing](../../explanation/observability/tracing.md), [Metrics](../../explanation/observability/metrics.md) and [Logs](../../explanation/observability/logging.md) using the OpenTelemetry Agent. +This guide will explain how to get started with auto-instrumenting your applications with OpenTelemetry data for [Tracing](../tracing/README.md), [Metrics](../metrics/README.md) and [Logs](../logging/README.md) using the OpenTelemetry Agent. The main benefit of auto-instrumentation is that it requires little to no effort on the part of the team developing the application while providing insight into popular libraries, frameworks and external services such as PostgreSQL, Redis, Kafka and HTTP clients. 
@@ -33,7 +34,7 @@ spec: observability: autoInstrumentation: enabled: true - runtime: node + runtime: nodejs ``` ## Enable auto-instrumentation for Python applications @@ -64,4 +65,4 @@ spec: ## Resources -[:computer: OpenTelemetry Auto-Instrumentation Configuration Reference](../../reference/observability/auto-config.md) +[:books: OpenTelemetry Auto-Instrumentation Configuration Reference](../reference/auto-config.md) diff --git a/docs/observability/logging/.pages b/docs/observability/logging/.pages new file mode 100644 index 000000000..f092fbf61 --- /dev/null +++ b/docs/observability/logging/.pages @@ -0,0 +1,5 @@ +nav: +- README.md +- 🎯 How-To: how-to +- 📚 Reference: reference +- ... diff --git a/docs/explanation/observability/logging.md b/docs/observability/logging/README.md similarity index 94% rename from docs/explanation/observability/logging.md rename to docs/observability/logging/README.md index df9e09c54..4da0b137b 100644 --- a/docs/explanation/observability/logging.md +++ b/docs/observability/logging/README.md @@ -1,8 +1,9 @@ --- description: >- Logs are a way to understand what is happening in your application. They are usually text-based and are often used for debugging. Since the format of logs is usually not standardized, it can be difficult to query and aggregate logs and thus we recommend using metrics for dashboards and alerting. -tags: [explanation] +tags: [explanation, logging, observability, services] --- + # Logging ## Purpose of logs @@ -35,7 +36,7 @@ Grafana Loki is a log aggregation system inspired by Prometheus and integrated w Loki is designed to be used in conjunction with metrics and tracing to provide a complete picture of an application's performance. Without the other two, it can be perceived as more cumbersome to use than a traditional logging system. 
-[:dart: Get started with Grafana Loki](../../how-to-guides/observability/logs/loki.md) +[:dart: Get started with Grafana Loki](how-to/loki.md) {% if tenant() == "nav" %} @@ -45,5 +46,5 @@ Loki is designed to be used in conjunction with metrics and tracing to provide a Kibana is a tool for visualizing and analyzing logs. It is part of the Elastic Stack and is widely used for log analysis and visualization in NAV. Kibana Elastic is supported by atom. -[:dart: Get started with Kibana](../../how-to-guides/observability/logs/kibana.md) +[:dart: Get started with Kibana](how-to/kibana.md) {% endif %} diff --git a/docs/how-to-guides/observability/logs/disable.md b/docs/observability/logging/how-to/disable.md similarity index 95% rename from docs/how-to-guides/observability/logs/disable.md rename to docs/observability/logging/how-to/disable.md index 1362af06c..6aa64d5ff 100644 --- a/docs/how-to-guides/observability/logs/disable.md +++ b/docs/observability/logging/how-to/disable.md @@ -1,7 +1,8 @@ --- description: Disable log storage for a specific application -tags: [guide] +tags: [how-to, logging, observability] --- + # Disable persistent application logs This guide will help you disable persistent log storage for an application. This is useful if you have an application whose logs are not useful or are causing too much noise in the logs. diff --git a/docs/how-to-guides/observability/logs/kubectl.md b/docs/observability/logging/how-to/kubectl.md similarity index 60% rename from docs/how-to-guides/observability/logs/kubectl.md rename to docs/observability/logging/how-to/kubectl.md index ccb74fbbd..72df97fea 100644 --- a/docs/how-to-guides/observability/logs/kubectl.md +++ b/docs/observability/logging/how-to/kubectl.md @@ -1,17 +1,18 @@ --- description: View logs from the command line using kubectl. 
-tags: [guide, kubectl] +tags: [how-to, logging, observability, command-line] --- + # View logs from the command line This guide will show you how to view logs from the command line using `kubectl`. -## 0. Prerequisites +## Prerequisites -- You have installed the [kubectl](../../command-line-access.md) command-line tool. -- You have access to the [team](../../team.md) where the application is running. +- You have installed the [kubectl](../../../operate/how-to/command-line-access.md) command-line tool. +- You have access to the [team](../../../explanations/team.md) where the application is running. -## 1. Find the pod name +## Find the pod name You can view logs for a specific pod. First, you need to find the name of the pod you want to view logs for. @@ -21,7 +22,7 @@ List all pods in the namespace: kubectl get pods -n ``` -## 2. View logs +## View logs View logs for a specific pod: diff --git a/docs/how-to-guides/observability/logs/loki.md b/docs/observability/logging/how-to/loki.md similarity index 81% rename from docs/how-to-guides/observability/logs/loki.md rename to docs/observability/logging/how-to/loki.md index ec52baa18..dbbc056f6 100644 --- a/docs/how-to-guides/observability/logs/loki.md +++ b/docs/observability/logging/how-to/loki.md @@ -1,7 +1,8 @@ --- description: Get started with Grafana Loki, a log aggregation system that is integrated with Grafana and inspired by Prometheus. -tags: [guide, loki] +tags: [how-to, logging, observability, loki] --- + # Get started with Grafana Loki This guide will help you get started with Grafana Loki, a log aggregation system that is integrated with Grafana and inspired by Prometheus. 
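To make the enabling step concrete, a minimal `nais.yaml` fragment might look like the sketch below. The exact field shape is an assumption here; check the application spec reference for the authoritative definition:

```yaml
spec:
  observability:
    logging:
      destinations:
        - id: loki   # assumption: "loki" is a valid destination id; ships app logs to Grafana Loki
```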
@@ -27,7 +28,7 @@ Grafana Loki can be enabled by setting the list of logging destinations in your Grafana Loki is integrated directly with Grafana, and you can access your logs either by adding a Logs Panel to your dashboard or by clicking on the "[Explore](<>)" link on the left-hand side of the Grafana UI and selecting one of the Loki data sources (one for each environment). -Grafana Loki has a query language called [LogQL](../../../reference/observability/logs/logql.md) that you can use to search for logs. LogQL is a simplified version of PromQL, and you can use LogQL to search for logs by message, by field, or by a combination of both. +Grafana Loki has a query language called [LogQL](../reference/logql.md) that you can use to search for logs. LogQL is a simplified version of PromQL, and you can use LogQL to search for logs by message, by field, or by a combination of both. To get you started we suggest using the query builder mode when writing your first LogQL queries. The query builder mode is a graphical interface that helps you build LogQL queries by selecting labels and fields from your logs. @@ -35,4 +36,4 @@ To get you started we suggest using the query builder mode when writing your fir ## Further reading -- [LogQL reference](../../../reference/observability/logs/logql.md) \ No newline at end of file +- [LogQL reference](../reference/logql.md) diff --git a/docs/reference/observability/logs/logql.md b/docs/observability/logging/reference/logql.md similarity index 96% rename from docs/reference/observability/logs/logql.md rename to docs/observability/logging/reference/logql.md index 918977e11..f50c5e51d 100644 --- a/docs/reference/observability/logs/logql.md +++ b/docs/observability/logging/reference/logql.md @@ -1,10 +1,10 @@ --- description: LogQL reference documentation for querying logs in Grafana Loki. -tags: [reference, loki] +tags: [reference, loki, logging] --- # LogQL Reference -LogQL is the query language used in Grafana Loki to query logs. 
It is a powerful query language that allows you to filter, aggregate, and search for logs and should be familiar to anyone who has used SQL or [PromQL](../metrics/promql.md). +LogQL is the query language used in Grafana Loki to query logs. It is a powerful query language that allows you to filter, aggregate, and search for logs and should be familiar to anyone who has used SQL or [PromQL](../../metrics/reference/promql.md). Where LogQL differs from PromQL is its trailing pipeline syntax, or log pipeline. A log pipeline is a set of stage expressions that are chained together and applied to the selected log streams. Each expression can filter out, parse, or mutate log lines and their respective labels. diff --git a/docs/observability/metrics/.pages new file mode 100644 index 000000000..f092fbf61 --- /dev/null +++ b/docs/observability/metrics/.pages @@ -0,0 +1,5 @@ +nav: +- README.md +- 🎯 How-To: how-to +- 📚 Reference: reference +- ... diff --git a/docs/explanation/observability/metrics.md b/docs/observability/metrics/README.md similarity index 93% rename from docs/explanation/observability/metrics.md rename to docs/observability/metrics/README.md index 4ef987dfc..c12956af9 100644 --- a/docs/explanation/observability/metrics.md +++ b/docs/observability/metrics/README.md @@ -1,10 +1,11 @@ --- description: Metrics are a way to measure the state of your application and can be used to create alerts in Prometheus and dashboards in Grafana. -tags: [explanation] +tags: [explanation, metrics, observability, services] --- + # Metrics -See [how to](../../how-to-guides/observability/metrics/expose.md) set up metrics. +See [how to](how-to/expose.md) set up metrics. Metrics are a way to measure the state of your application from within and something that is built into a microservice architecture from the very beginning. 
We suggest you start with the basics, that is defining what is interesting for your team to track in terms of service health and level of service quality. @@ -24,13 +25,13 @@ graph LR ``` [openmetrics]: https://openmetrics.io/ -[nais-manifest-prometheus]: ../../reference/application-spec.md#prometheus +[nais-manifest-prometheus]: ../../workloads/application/reference/application-spec.md#prometheus All applications that have Prometheus scraping enabled will show up in the [default Grafana dashboard](https://grafana.<>.cloud.nais.io/d/000000283/nais-app-dashbord), or you can create your own. ## Metric naming -For metric names we use the internet-standard [Prometheus naming conventions](https://prometheus.io/docs/practices/naming/): +For metric names we use the Internet standard [Prometheus naming conventions](https://prometheus.io/docs/practices/naming/): - Metric names should have a (single-word) application prefix relevant to the domain the metric belongs to. - Metric names should be nouns in **snake_case**; do not use verbs. @@ -70,7 +71,7 @@ You should, as a developer, that build metrics into your application have solid NAIS clusters come with a set of metrics that are available for all applications. Many of these relate to Kubernetes and include metrics like CPU and memory usage, number of pods, etc. You can find a comprehensive list in the [kube-state-metrics documentation](https://github.com/kubernetes/kube-state-metrics/blob/master/docs/README.md). -Our ingress controller also exposes metrics about the number of requests, response times, etc. You can find a comprehensive list in our [ingress documentation](../../reference//ingress.md#ingress-metrics). +Our ingress controller also exposes metrics about the number of requests, response times, etc. You can find a comprehensive list in our [ingress documentation](../../workloads/reference/ingress.md#ingress-metrics). 
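The scraping setup mentioned above is enabled in the application manifest. A minimal sketch, assuming the `prometheus` block from the application spec:

```yaml
spec:
  prometheus:
    enabled: true    # ask NAIS to scrape this application
    path: /metrics   # assumption: the path where your app serves OpenMetrics
```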
## Debugging metrics @@ -90,4 +91,4 @@ List of Prometheus environments: {% else %} * <> * <> -{% endif %} \ No newline at end of file +{% endif %} diff --git a/docs/how-to-guides/observability/metrics/dashboard.md b/docs/observability/metrics/how-to/dashboard.md similarity index 98% rename from docs/how-to-guides/observability/metrics/dashboard.md rename to docs/observability/metrics/how-to/dashboard.md index 4a72449cc..d0a7bac81 100644 --- a/docs/how-to-guides/observability/metrics/dashboard.md +++ b/docs/observability/metrics/how-to/dashboard.md @@ -1,7 +1,8 @@ --- description: Create a dashboard in Grafana for your application -tags: [guide, grafana] +tags: [how-to, grafana, observability, metrics] --- + # Create a dashboard in Grafana This guide shows you how to create a dashboard in Grafana for your application. Grafana is a popular open-source visualization and analytics platform that allows you to query, visualize, alert on, and understand your metrics no matter where they are stored. diff --git a/docs/how-to-guides/observability/metrics/expose.md b/docs/observability/metrics/how-to/expose.md similarity index 79% rename from docs/how-to-guides/observability/metrics/expose.md rename to docs/observability/metrics/how-to/expose.md index 4e4a0dbdd..b2ef98930 100644 --- a/docs/how-to-guides/observability/metrics/expose.md +++ b/docs/observability/metrics/how-to/expose.md @@ -1,12 +1,12 @@ --- description: Expose metrics from your application -tags: [guide, prometheus] +tags: [how-to, metrics, prometheus, observability] --- # Expose metrics from your application This guide will show you how to expose metrics from your application, and how to configure Prometheus to scrape them. -See further [explanations on metrics](../../../explanation/observability/metrics.md) for more details +See further [explanations on metrics](../README.md) for more details ## 1. Add metric to your application @@ -14,7 +14,7 @@ Most languages have a Prometheus client library available. 
See [Prometheus clien Once instrumented, your application must serve these metrics using HTTP on a given `path` (e.g. `/metrics`). -## 2. Enable metrics in [manifest](../../../reference/application-spec.md) +## 2. Enable metrics in [manifest](../../../workloads/application/reference/application-spec.md) ```yaml spec: diff --git a/docs/how-to-guides/observability/metrics/push.md b/docs/observability/metrics/how-to/push.md similarity index 93% rename from docs/how-to-guides/observability/metrics/push.md rename to docs/observability/metrics/how-to/push.md index 1d6bb387a..a0bb4df5f 100644 --- a/docs/how-to-guides/observability/metrics/push.md +++ b/docs/observability/metrics/how-to/push.md @@ -1,6 +1,6 @@ --- description: Push metrics to Prometheus -tags: [guide, prometheus] +tags: [how-to, observability, metrics, prometheus] --- # Push metrics to Prometheus @@ -9,7 +9,7 @@ This is typically used in NAIS jobs, which by it's nature often is short-lived a Prometheus has further explanations and examples [here](https://prometheus.io/docs/instrumenting/pushing/) -## 1. Add pushgateway and accessPolicy to your manifest +## Add pushgateway and accessPolicy to your manifest ???+ note ".nais/naisjob.yaml" @@ -34,7 +34,7 @@ Prometheus has further explanations and examples [here](https://prometheus.io/do namespace: nais-system ``` -## 2. Send metrics to your application +## Send metrics to your application ```java package io.prometheus.client.it.pushgateway; diff --git a/docs/observability/metrics/reference/globals.md b/docs/observability/metrics/reference/globals.md new file mode 100644 index 000000000..7d0dcf0a1 --- /dev/null +++ b/docs/observability/metrics/reference/globals.md @@ -0,0 +1,67 @@ +--- +tags: [reference, metrics] +--- + +# Global Metrics + +This is a list of metrics exposed by nais for all applications. 
+ +## Common Labels + +The following labels are common to most metrics: + +| Label | Description | +| ----------- | ------------------------------------------------------------------------------------------------------------------- | +| `namespace` | The Kubernetes namespace in which the application is running. | +| `pod` | The name of the pod in which the application is running. This will be unique for each instance of your application. | +| `container` | The name of the container in which the application is running. This will be the same as your application name. | +| `service` | The name of the service in which the application is running. | +| `node` | The name of the node on which the application is running. | +| `image` | The name of the container image in which the application is running. | + +## Memory and CPU Usage + +### Memory Usage + +The `container_memory_usage_bytes` metric tracks the amount of memory used by a container. + +```promql +container_memory_usage_bytes{container="my-application", ...} +``` + +### CPU Usage + +The `container_cpu_usage_seconds_total` metric tracks the amount of CPU time used by a container. + +```promql +container_cpu_usage_seconds_total{container="my-application", ...} +``` + +Since this metric is cumulative, you can calculate the rate of CPU usage by using the `rate` function. + +```promql +rate(container_cpu_usage_seconds_total{container="my-application", ...}[5m]) +``` + +### Limits and Requests + +The `kube_pod_container_resource_limits` and `kube_pod_container_resource_requests` metrics track the resource limits and requests for a container. 
+ +```promql +kube_pod_container_resource_limits{resource="memory", container="my-application", ...} +kube_pod_container_resource_requests{resource="memory", container="my-application", ...} +kube_pod_container_resource_limits{resource="cpu", container="my-application", ...} +kube_pod_container_resource_requests{resource="cpu", container="my-application", ...} +``` + +### Out of Memory (OOMKilled) + +Out of memory restarts occur when a container exceeds its memory limit and the kernel kills the process to free up memory. OOM kills can be caused by a variety of factors, including memory leaks, excessive memory usage, and insufficient memory limits. + +The `kube_pod_container_status_terminated_reason` metric tracks the number of times a container has been terminated for various reasons, including OOM kills. + +```promql +kube_pod_container_status_terminated_reason{reason="OOMKilled", ...} +``` + +Other reasons for container termination include `Error`, `Completed`, and `ContainerCannotRun`. diff --git a/docs/reference/observability/metrics/grafana.md b/docs/observability/metrics/reference/grafana.md similarity index 87% rename from docs/reference/observability/metrics/grafana.md rename to docs/observability/metrics/reference/grafana.md index 4d40d2c8b..ab09e54df 100644 --- a/docs/reference/observability/metrics/grafana.md +++ b/docs/observability/metrics/reference/grafana.md @@ -1,5 +1,5 @@ --- -tags: [reference, grafana] +tags: [reference, grafana, metrics] --- # Grafana Glossary @@ -10,7 +10,7 @@ This glossary contains terms and concepts related to Grafana. A dashboard is a collection of panels arranged in a grid layout. Each panel is a single visualization or a single query result. You can add multiple panels to a dashboard to create a complete view of your application. 
-* [Guide: Create a dashboard in Grafana](../../../how-to-guides/observability/metrics/dashboard.md)
+* [Guide: Create a dashboard in Grafana](../how-to/dashboard.md)
* [Grafana docs: Dashboards](https://grafana.com/docs/grafana/latest/dashboards/)

## Panel

@@ -38,4 +38,4 @@ A visualization is a graphical representation of your data. Grafana supports a v

Grafana has a built-in alerting system that allows you to set up alerts for your metrics. You can create alert rules that are evaluated at regular intervals, and when the alert rule conditions are met, Grafana will send notifications to the configured notification channels.

-* [Guide: Alerting in Grafana](../../../how-to-guides/observability/alerts/grafana.md) \ No newline at end of file
+* [Guide: Alerting in Grafana](../../alerting/how-to/grafana.md)
diff --git a/docs/reference/metrics.md b/docs/observability/metrics/reference/metrics.md
similarity index 80%
rename from docs/reference/metrics.md
rename to docs/observability/metrics/reference/metrics.md
index 6368fd110..4bd8e57d7 100644
--- a/docs/reference/metrics.md
+++ b/docs/observability/metrics/reference/metrics.md
@@ -1,9 +1,13 @@
+---
+tags: [reference, metrics, prometheus]
+---
+
# Metrics reference

## Retention

When using Prometheus the retention is 30 days.
-If you need data stored longer than what Prometheus support, we recommend using [BigQuery](../how-to-guides/persistence/bigquery/create.md).
+If you need data stored longer than what Prometheus supports, we recommend using [BigQuery](../../../persistence/bigquery/README.md).
Then you have full control of the database and retention.
## Accessing prometheus diff --git a/docs/reference/observability/metrics/otel.md b/docs/observability/metrics/reference/otel.md similarity index 50% rename from docs/reference/observability/metrics/otel.md rename to docs/observability/metrics/reference/otel.md index 8a67be841..3a5255492 100644 --- a/docs/reference/observability/metrics/otel.md +++ b/docs/observability/metrics/reference/otel.md @@ -1,3 +1,7 @@ +--- +tags: [reference, otel, observability, metrics] +--- + # OpenTelemetry Metrics This is a list of metrics exported by the OpenTelemetry SDKs and auto-instrumentation libraries. @@ -10,15 +14,33 @@ The OpenTelemetry SDKs and auto-instrumentation libraries export the following g | ------------- | ------------------------------------------------------------------------------------------------------ | | `target_info` | Information about the target service, such as the service name, service namespace, and container name. | +Target information is exported as a set of labels, including: + +| Label Name | Description | Example Value | +| ------------------------- | ------------------------------------------------ | ------------------------------------------------------------- | +| `os_description` | The operating system description. | `Linux 6.1.58+` | +| `os_type` | The operating system type. | `Linux` | +| `process_command_args` | The command arguments used to start the process. | `["/usr/lib/jvm/java-21-openjdk/bin/java","-jar","/app.jar"]` | +| `process_runtime_name` | The runtime name. | `OpenJDK Runtime Environment` | +| `process_runtime_version` | The runtime version. | `21.0.2+13-LTS` | +| `telemetry_sdk_language` | The telemetry SDK language. | `java` | +| `telemetry_sdk_version` | The telemetry SDK version. 
| `1.36.0` | + ## HTTP Server Metrics The OpenTelemetry SDKs and auto-instrumentation libraries export the following metrics for HTTP servers: -| Metric Name | Description | -| ------------------------------------------ | ------------------------------------------------------ | -| `http_server_duration_milliseconds_bucket` | Duration of HTTP server requests, in milliseconds. | -| `http_server_duration_milliseconds_count` | Count of HTTP server requests, by duration. | -| `http_server_duration_milliseconds_sum` | Sum of HTTP server request durations, in milliseconds. | +| Metric Name | Description | +| --------------------------------------------- | ----------------------------------------------------------- | +| `http_server_request_duration_seconds_bucket` | Duration of HTTP server requests, in seconds (java) | +| `http_server_duration_milliseconds_bucket` | Duration of HTTP server requests, in milliseconds (node.js) | + +## HTTP Client Metrics + +| Metric Name | Description | +| --------------------------------------------- | ----------------------------------------------------------- | +| `http_client_request_duration_seconds_bucket` | Duration of HTTP client requests, in seconds (java) | +| `http_client_duration_milliseconds_bucket` | Duration of HTTP client requests, in milliseconds (node.js) | ## Database Client Connection Metrics diff --git a/docs/reference/observability/metrics/promql.md b/docs/observability/metrics/reference/promql.md similarity index 89% rename from docs/reference/observability/metrics/promql.md rename to docs/observability/metrics/reference/promql.md index 5479bd5ec..479ea8f7c 100644 --- a/docs/reference/observability/metrics/promql.md +++ b/docs/observability/metrics/reference/promql.md @@ -1,13 +1,14 @@ --- description: PromQL reference documentation for querying metrics in Prometheus. 
-tags: [reference, prometheus] +tags: [reference, prometheus, metrics] --- + # PromQL Reference PromQL is a query language for Prometheus monitoring system. It allows you to select and aggregate time series data in real time. PromQL is used to [create dashboards in Grafana][howto-grafana-dashboard], and to [create alerts with Alertmanager][howto-alertmanager-alerts]. -[howto-grafana-dashboard]: ../../../how-to-guides/observability/metrics/dashboard.md -[howto-alertmanager-alerts]: ../../../how-to-guides/observability/alerts/prometheus-basic.md +[howto-grafana-dashboard]: ../how-to/dashboard.md +[howto-alertmanager-alerts]: ../../alerting/how-to/prometheus-basic.md ## Basic Syntax diff --git a/docs/observability/reference/auto-config.md b/docs/observability/reference/auto-config.md new file mode 100644 index 000000000..91654d4fa --- /dev/null +++ b/docs/observability/reference/auto-config.md @@ -0,0 +1,82 @@ +--- +tags: [observability, reference] +--- + +# OpenTelemetry Auto-Instrumentation Configuration + +When you enable [auto-instrumentation](../how-to/auto-instrumentation.md) in your application the following OpenTelemetry configuration will become available to your application as environment variables: + +| Variable | Example Value | +|--------------------------------------|-----------------------------------------------------------------------------------------------| +| `OTEL_SERVICE_NAME` | `my-application` | +| `OTEL_EXPORTER_OTLP_ENDPOINT` | `http://opentelemetry-collector.nais-system:4317` | +| `OTEL_EXPORTER_OTLP_PROTOCOL` | `grpc` | +| `OTEL_EXPORTER_OTLP_INSECURE` | `true` | +| `OTEL_PROPAGATORS` | `tracecontext,baggage` | +| `OTEL_TRACES_SAMPLER` | `parentbased_always_on` | +| `OTEL_LOGS_EXPORTER` | `none` | +| `OTEL_RESOURCE_ATTRIBUTES_POD_NAME` | `my-application-777787df6d-pw9mq` | +| `OTEL_RESOURCE_ATTRIBUTES_NODE_NAME` | `gke-node-abc123` | +| `OTEL_RESOURCE_ATTRIBUTES` | 
`service.name=my-application,service.namespace=my-team,k8s.container.name=my-application,...` | + +!!! tip + Do not hardcode these values in your application. OpenTelemetry SDKs and auto-instrumentation libraries will automatically pick up these environment variables and use them to configure the SDK. + +## Agent Logs + +You can enable logging for the OpenTelemetry Auto-Instrumentation by setting the `OTEL_LOGS_EXPORTER` environment variable to `otlp`. This will intercept all logs produced by the application and send them to the OpenTelemetry Collector. + +```shell +spec: + env: + - name: OTEL_LOGS_EXPORTER + value: otlp +``` + +!!! warning + Enabling logging for the OpenTelemetry Auto-Instrumentation will send all logs to the OpenTelemetry Collector including logs from other libraries and frameworks such as log4j, logback, and slf4j. This should not be enabled if you are using Secure Logs. + +## Agent Versions + +The OpenTelemetry Agent is used to automatically instrument your application. The agent is responsible for collecting telemetry data and sending it to the OpenTelemetry Collector. + +| Language | Agent Version | SDK Version | +| -------- | ------------- | ----------- | +| Java | 2.2.0 | 1.36.0 | +| Node.js | 0.51.0 | 1.24.0 | +| Python | 0.44b0 | 1.23.0 | + +## Java Agent + +The OpenTelemetry Java Agent is a Java agent that automatically instruments your Java application. The agent is responsible for collecting telemetry data and sending it to the OpenTelemetry Collector. + +It is attached to your JVM automatically at startup using the [`JAVA_TOOL_OPTIONS`](https://docs.oracle.com/javase/8/docs/technotes/guides/troubleshoot/envvars002.html) environment variable. + +### Supported libraries, frameworks, application servers, and JVMs + +The OpenTelemetry Java Agent supports many popular libraries, frameworks, application servers, and JVMs. 
A full list of supported libraries and frameworks can be found on the open-telemetry/opentelemetry-java-instrumentation repository. + +* [:octicons-link-external-24: OpenTelemetry Instrumentation for Java Supported Libraries and Frameworks](https://github.com/open-telemetry/opentelemetry-java-instrumentation/blob/main/docs/supported-libraries.md) + +### Advanced Configuration + +When using the OpenTelemetry Java SDK and Agent (auto-instrumentation), the following additional environment variables are available: + +| Variable | Description | Example Value | +| ----------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------- | ---------------------------------- | +| `OTEL_JAVAAGENT_EXCLUDE_CLASSES` | Suppresses all instrumentation for specific classes, format is "my.package.MyClass,my.package2.*" | `my.package.MyClass,my.package2.*` | +| `OTEL_INSTRUMENTATION_SPRING_BOOT_ACTUATOR_AUTOCONFIGURE_ENABLED` | Enables or disables the Spring Boot Actuator auto-configuration instrumentation | `false` | +| `OTEL_INSTRUMENTATION_MICROMETER_ENABLED` | Enables or disables the Micrometer instrumentation | `false` | +| `OTEL_INSTRUMENTATION_COMMON_EXPERIMENTAL_CONTROLLER_TELEMETRY_ENABLED` | Enables or disables controller span instrumentation | `false` | +| `OTEL_INSTRUMENTATION_COMMON_EXPERIMENTAL_VIEW_TELEMETRY_ENABLED` | Enables or disables view span instrumentation | `false` | + +* [:octicons-link-external-24: Advanced Java configuration options](https://github.com/open-telemetry/opentelemetry-java-instrumentation/blob/main/docs/advanced-configuration-options.md) + +## More OpenTelemetry Configuration + +A full list of environment variables that can be used to configure the OpenTelemetry SDK can be found here: + +* [:octicons-link-external-24: General SDK 
Configuration](https://opentelemetry.io/docs/specs/otel/configuration/sdk-environment-variables/#general-sdk-configuration) +* [:octicons-link-external-24: OTLP Exporter Configuration](https://opentelemetry.io/docs/languages/sdk-configuration/otlp-exporter/) + +[OTLP is the OpenTelemetry Protocol](https://opentelemetry.io/docs/specs/otel/protocol/exporter/), and is the protocol used to send telemetry data to Prometheus, Grafana Tempo, and Grafana Loki. diff --git a/docs/reference/glossary.md b/docs/observability/reference/glossary.md similarity index 97% rename from docs/reference/glossary.md rename to docs/observability/reference/glossary.md index 07cffff10..97a964523 100644 --- a/docs/reference/glossary.md +++ b/docs/observability/reference/glossary.md @@ -1,4 +1,8 @@ -# A nais glossary +--- +tags: [observability, reference] +--- + +# Observability Glossary ## Observability diff --git a/docs/observability/tracing/.pages b/docs/observability/tracing/.pages new file mode 100644 index 000000000..f092fbf61 --- /dev/null +++ b/docs/observability/tracing/.pages @@ -0,0 +1,5 @@ +nav: +- README.md +- 🎯 How-To: how-to +- 📚 Reference: reference +- ... diff --git a/docs/explanation/observability/tracing.md b/docs/observability/tracing/README.md similarity index 95% rename from docs/explanation/observability/tracing.md rename to docs/observability/tracing/README.md index 4bef69fbf..8f8f9aa1e 100644 --- a/docs/explanation/observability/tracing.md +++ b/docs/observability/tracing/README.md @@ -1,7 +1,7 @@ --- description: >- Application Performance Monitoring or tracing using Grafana Tempo on NAIS. 
-tags: [explanation, tracing] +tags: [explanation, observability, tracing, services] --- # Distributed Tracing @@ -48,7 +48,7 @@ The preferred way to get started with tracing is to enable auto-instrumentation This is the easiest way to get started with tracing, as it requires little to no effort on the part of the team developing the application and provides instrumentation for popular libraries, frameworks and external services such as PostgreSQL, Redis, Kafka and HTTP clients. -[:dart: Get started with auto-instrumentation](../../how-to-guides/observability/auto-instrumentation.md) +[:dart: Get started with auto-instrumentation](../how-to/auto-instrumentation.md) ### The hard way: Manual instrumentation @@ -56,7 +56,7 @@ If you want more control over how your application is instrumented, you can manu To get the correct configuration for you can still use the auto-instrumentation configuration, but set the `runtime` to `sdk` as this will only set up the OpenTelemetry configuration, without injecting the OpenTelemetry Agent. 
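
As a sketch of what this looks like in the application manifest (the `observability.autoInstrumentation` field names are assumptions here; verify them against the application spec reference):

```yaml
spec:
  observability:
    autoInstrumentation:
      enabled: true
      # "sdk" only sets up the OpenTelemetry environment variables,
      # without attaching the OpenTelemetry Agent
      runtime: sdk
```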
-[:dart: Get started with manual-instrumentation](../../how-to-guides/observability/auto-instrumentation.md#enable-auto-instrumentation-for-other-applications) +[:dart: Get started with manual-instrumentation](../how-to/auto-instrumentation.md#enable-auto-instrumentation-for-other-applications) ### OpenTelemetry SDKs @@ -114,7 +114,7 @@ The easiest way to get started with Tempo is to use the *Explore view* in Grafan [:simple-grafana: Open Grafana Explore][grafana-explore] -[:dart: Get started with Grafana Tempo](../../how-to-guides/observability/tracing/tempo.md) +[:dart: Get started with Grafana Tempo](how-to/tempo.md) ![Grafana Tempo](../../assets/grafana-tempo.png) diff --git a/docs/how-to-guides/observability/tracing/context-propagation.md b/docs/observability/tracing/how-to/context-propagation.md similarity index 88% rename from docs/how-to-guides/observability/tracing/context-propagation.md rename to docs/observability/tracing/how-to/context-propagation.md index aae017b64..a4737f8cb 100644 --- a/docs/how-to-guides/observability/tracing/context-propagation.md +++ b/docs/observability/tracing/how-to/context-propagation.md @@ -1,12 +1,13 @@ --- description: Learn how to propagate trace context across process boundaries in a few common scenarios. -tags: [guide, tracing] +tags: [how-to, tracing, observability] --- + # Trace context propagation Each Span carries a Context that includes metadata about the trace (like a unique trace identifier and span identifier) and any other data you choose to include. This context is propagated across process boundaries, allowing all the work that's part of a single trace to be linked together, even if it spans multiple services. -This guide explains how to propagate trace context across process boundaries in a few common scenarios. If you are using [auto-instrumentation](../auto-instrumentation.md), trace context propagation is already handled for you. 
+This guide explains how to propagate trace context across process boundaries in a few common scenarios. If you are using [auto-instrumentation](../../how-to/auto-instrumentation.md), trace context propagation is already handled for you. [:octicons-link-external-24: OpenTelemetry Context Propagation on opentelemetry.io](https://opentelemetry.io/docs/concepts/context-propagation/) @@ -15,4 +16,4 @@ This guide explains how to propagate trace context across process boundaries in When a service makes an HTTP request to another service, it should include the trace context in the request headers. The receiving service can then use this context to create a new Span that's part of the same trace. OpenTelemetry provides a standard for how trace context should be propagated in HTTP requests, called the [W3C Trace Context](https://www.w3.org/TR/trace-context/) standard. * [OpenTelemetry Setup in Spring Boot Application](https://opentelemetry.io/docs/languages/java/automatic/spring-boot) -* [OpenTelemetry Setup in Ktor Application](https://github.com/open-telemetry/opentelemetry-java-instrumentation/tree/main/instrumentation/ktor/ktor-2.0/library) \ No newline at end of file +* [OpenTelemetry Setup in Ktor Application](https://github.com/open-telemetry/opentelemetry-java-instrumentation/tree/main/instrumentation/ktor/ktor-2.0/library) diff --git a/docs/how-to-guides/observability/tracing/correlate-traces-logs.md b/docs/observability/tracing/how-to/correlate-traces-logs.md similarity index 90% rename from docs/how-to-guides/observability/tracing/correlate-traces-logs.md rename to docs/observability/tracing/how-to/correlate-traces-logs.md index 0c5898463..0cd6766b6 100644 --- a/docs/how-to-guides/observability/tracing/correlate-traces-logs.md +++ b/docs/observability/tracing/how-to/correlate-traces-logs.md @@ -1,24 +1,24 @@ --- description: Learn how to correlate traces with logs in Grafana Tempo. 
-tags: [guide, tracing] +tags: [how-to, tracing, observability] --- # Correlate traces and logs This guide will explain how to correlate traces with logs in Grafana Tempo. This is only necessary if you are not using auto-instrumentation with OpenTelemetry Agent. If you are using auto-instrumentation, logs are automatically correlated with traces. -## Step 1: Configure Tracing +## Configure Tracing First you need to configure OpenTelemetry tracing in your application. The easiest way to get started with tracing is to enable auto-instrumentation for your application. This will automatically collect traces and send them to the correct place using the OpenTelemetry Agent or you can use the OpenTelemetry SDK to manually instrument your application. -[:dart: Get started with auto-instrumentation](../auto-instrumentation.md) +[:dart: Get started with auto-instrumentation](../../how-to/auto-instrumentation.md) -## Step 2: Enable logging to Grafana Loki +## Enable logging to Grafana Loki In order to use the Grafana Tempo log correlation feature, you need to send your logs to Grafana Loki. -[:dart: Enable logging to Grafana Loki](../logs/loki.md#enable-logging-to-grafana-loki) +[:dart: Enable logging to Grafana Loki](../../logging/how-to/loki.md#enable-logging-to-grafana-loki) -## Step 3: Include trace information in your logs +## Include trace information in your logs The final step is to include trace information in your logs. This will allow Grafana Tempo to look up logs that are associated with a trace. @@ -79,10 +79,9 @@ The final step is to include trace information in your logs. This will allow Gra ``` -## 4. Profit +## Profit Now that you have tracing and logging set up, you can use Grafana Tempo to correlate traces and logs. When you view a trace in Grafana Tempo, you can see the logs that are associated with that trace. This makes it easy to understand what happened in your application and troubleshoot issues. 
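
With trace IDs in your logs, the correlation also works in the opposite direction: you can find every log line belonging to one trace directly in Grafana Loki. A minimal sketch, where the label name and the JSON log format are assumptions about your setup:

```logql
{service_name="my-application"} | json | trace_id="4bf92f3577b34da6a3ce929d0e0e4736"
```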
![Correlate traces and logs](../../../assets/grafana-tempo-logs.png) -[:arrow_backward: Back to the list of guides](../index.md) \ No newline at end of file diff --git a/docs/how-to-guides/observability/tracing/tempo.md b/docs/observability/tracing/how-to/tempo.md similarity index 92% rename from docs/how-to-guides/observability/tracing/tempo.md rename to docs/observability/tracing/how-to/tempo.md index 1bab075e1..9396778f6 100644 --- a/docs/how-to-guides/observability/tracing/tempo.md +++ b/docs/observability/tracing/how-to/tempo.md @@ -1,10 +1,14 @@ +--- +tags: [how-to, tracing, tempo, observability] +--- + # Get started with Grafana Tempo Grafana Tempo is an open-source, easy-to-use, high-scale, and cost-effective distributed tracing backend that stores and queries traces in a way that is easy to understand and use. It is fully integrated with Grafana, allowing you to visualize and query traces in the same interface as your metrics, and logs. Since NAIS does not collect application trace data automatically, you need to enable tracing in your application. The preferred way to get started with tracing is to enable auto-instrumentation for your application. This will automatically collect traces and send them to the correct place using the OpenTelemetry Agent. -[:dart: Get started with auto-instrumentation](../auto-instrumentation.md) +[:dart: Get started with auto-instrumentation](../../how-to/auto-instrumentation.md) Once you have traces being collected, you can visualize and query them in Grafana using the Grafana Tempo data source. To get started with Tempo, you can use the Explore view in Grafana, which provides a user-friendly interface for querying and visualizing traces. 
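
For example, a query in the Explore view that finds slow requests handled by a single service might look like this (the service name is a placeholder):

```traceql
{ resource.service.name = "my-application" && duration > 500ms }
```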
@@ -34,7 +38,7 @@ TraceQL provides a powerful and flexible way to query trace data, and it is desi -[:computer: Learn more about TraceQL query language](../../../reference/observability/tracing/traceql.md) +[:books: Learn more about TraceQL query language](../reference/traceql.md) ## Understanding trace data @@ -48,5 +52,5 @@ A red circle next to a span indicates that the span has an error. You can click Traces in nais follows the OpenTelemetry Semantic Conventions, which provides a standard for naming and structuring trace data. This makes it easier to understand and use trace data, as you can rely on a consistent structure across all traces. -[:computer: Learn more about OpenTelemetry Trace Semantic Conventions](../../../reference/observability/tracing/trace-semconv.md) +[:books: Learn more about OpenTelemetry Trace Semantic Conventions](../reference/trace-semconv.md) diff --git a/docs/reference/observability/tracing/trace-semconv.md b/docs/observability/tracing/reference/trace-semconv.md similarity index 76% rename from docs/reference/observability/tracing/trace-semconv.md rename to docs/observability/tracing/reference/trace-semconv.md index f3648708f..110b5a448 100644 --- a/docs/reference/observability/tracing/trace-semconv.md +++ b/docs/observability/tracing/reference/trace-semconv.md @@ -1,3 +1,7 @@ +--- +tags: [reference, otel, observability, tracing] +--- + # OpenTelemetry Trace Semantic Conventions OpenTelemetry Trace Semantic Conventions can be found at [opentelemetry.io](https://opentelemetry.io/docs/specs/semconv/general/trace/). 
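
As a brief illustration of what the conventions standardize, an HTTP server span typically carries attributes like the following (attribute names per the current semantic conventions; the values are made up):

```yaml
# Illustrative attributes on an HTTP server span
http.request.method: GET
url.path: /api/orders
http.response.status_code: 200
server.address: my-application.my-team
```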
diff --git a/docs/reference/observability/tracing/traceql.md b/docs/observability/tracing/reference/traceql.md similarity index 97% rename from docs/reference/observability/tracing/traceql.md rename to docs/observability/tracing/reference/traceql.md index 6e9de4adb..003cc9d59 100644 --- a/docs/reference/observability/tracing/traceql.md +++ b/docs/observability/tracing/reference/traceql.md @@ -1,10 +1,10 @@ --- description: TraceQL reference documentation for querying traces in Grafana Tempo. -tags: [reference, tempo] +tags: [reference, tempo, tracing] --- # TraceQL Reference -TraceQL is the query language used in Grafana Tempo to query traces. It is a powerful query language that allows you to filter, aggregate, and search for traces and should be familiar to anyone who has used SQL or [PromQL](../metrics/promql.md). +TraceQL is the query language used in Grafana Tempo to query traces. It is a powerful query language that allows you to filter, aggregate, and search for traces and should be familiar to anyone who has used SQL or [PromQL](../../metrics/reference/promql.md). Where TraceQL differs from PromQL is it's trailing pipeline syntax, or trace pipeline. A trace pipeline is a set of stage expressions that are chained together and applied to the selected trace data. Each expression can filter out, parse, or mutate trace spans and their respective labels. diff --git a/docs/operate/.pages b/docs/operate/.pages new file mode 100644 index 000000000..40247fcc3 --- /dev/null +++ b/docs/operate/.pages @@ -0,0 +1,7 @@ +nav: +- README.md +- 🎯 How-To: how-to +- Console: console.md +- CLI: cli +- naisdevice: naisdevice +- ... diff --git a/docs/operate/README.md b/docs/operate/README.md new file mode 100644 index 000000000..ff199b723 --- /dev/null +++ b/docs/operate/README.md @@ -0,0 +1,18 @@ +--- +tags: [explanation, operate] +--- + +# Manage your workloads and services + +This section covers how to manage your workloads and services on the NAIS platform. 
+It describes the different options available, and how to use them.
+
+Most of the management tasks can be done through the [NAIS Console](console.md), which is a web-based interface.
+
+Some more advanced use cases still require you to interact with the platform using the Kubernetes CLI, `kubectl`.
+
+## Related pages
+
+:bulb: [Learn more about Console](console.md)
+
+:dart: [Setup command line access](how-to/command-line-access.md)
diff --git a/docs/operate/cli/.pages b/docs/operate/cli/.pages
new file mode 100644
index 000000000..68abb12bf
--- /dev/null
+++ b/docs/operate/cli/.pages
@@ -0,0 +1,5 @@
+title: nais-cli
+nav:
+  - ...
+  - 🎯 How-To: how-to
+  - 📚 Reference: reference
diff --git a/docs/operate/cli/README.md b/docs/operate/cli/README.md
new file mode 100644
index 000000000..0c6f7bbd7
--- /dev/null
+++ b/docs/operate/cli/README.md
@@ -0,0 +1,30 @@
+# nais-cli
+
+nais-cli is a CLI application that provides some useful commands and utilities for interacting with the NAIS platform.
+
+## Prerequisites
+
+- [naisdevice](../naisdevice/README.md)
+
+## Installation
+
+- [Install nais-cli](how-to/install.md)
+
+## Usage
+
+See available subcommands under the Reference section in the navigation sidebar.
+
+!!! warning "Flag ordering"
+
+    nais-cli requires all flags to appear **before** arguments. Otherwise, the flags will be interpreted as arguments.
+ + :white_check_mark: OK: + ```shell + nais start --topics events appname teamname + ``` + + :x: Not OK: + + ```shell + nais start appname teamname --topics events + ``` diff --git a/docs/how-to-guides/nais-cli/install.md b/docs/operate/cli/how-to/install.md similarity index 95% rename from docs/how-to-guides/nais-cli/install.md rename to docs/operate/cli/how-to/install.md index c4cc13800..b89d6cad6 100644 --- a/docs/how-to-guides/nais-cli/install.md +++ b/docs/operate/cli/how-to/install.md @@ -1,4 +1,8 @@ -# Install +--- +tags: [command-line, how-to] +--- + +# Install nais-cli === "macOS" diff --git a/docs/how-to-guides/nais-cli/troubleshooting.md b/docs/operate/cli/how-to/troubleshooting.md similarity index 95% rename from docs/how-to-guides/nais-cli/troubleshooting.md rename to docs/operate/cli/how-to/troubleshooting.md index 7b690ae65..e056c3a92 100644 --- a/docs/how-to-guides/nais-cli/troubleshooting.md +++ b/docs/operate/cli/how-to/troubleshooting.md @@ -1,4 +1,8 @@ -# Troubleshooting +--- +tags: [command-line, how-to] +--- + +# Troubleshooting nais-cli ## `Could not create process with command` diff --git a/docs/reference/cli/device.md b/docs/operate/cli/reference/device.md similarity index 94% rename from docs/reference/cli/device.md rename to docs/operate/cli/reference/device.md index f3f558258..82705193f 100644 --- a/docs/reference/cli/device.md +++ b/docs/operate/cli/reference/device.md @@ -1,6 +1,10 @@ +--- +tags: [command-line, reference] +--- + # device command -The device command can be used to connect to, disconnect from, and view the connection status of [naisdevice](../../explanation/naisdevice.md). +The device command can be used to connect to, disconnect from, and view the connection status of [naisdevice](../../naisdevice/README.md). Currently, the command requires the processes `naisdevice-agent` and `naisdevice-helper` to run, both of which can be run by starting naisdevice. 
## connect diff --git a/docs/reference/cli/kubeconfig.md b/docs/operate/cli/reference/kubeconfig.md similarity index 96% rename from docs/reference/cli/kubeconfig.md rename to docs/operate/cli/reference/kubeconfig.md index 2b5b19920..95cdde707 100644 --- a/docs/reference/cli/kubeconfig.md +++ b/docs/operate/cli/reference/kubeconfig.md @@ -1,3 +1,7 @@ +--- +tags: [command-line, reference] +--- + # kubeconfig command Create a kubeconfig file for connecting to available clusters for you. diff --git a/docs/reference/cli/postgres.md b/docs/operate/cli/reference/postgres.md similarity index 99% rename from docs/reference/cli/postgres.md rename to docs/operate/cli/reference/postgres.md index fa144c1d6..408940605 100644 --- a/docs/reference/cli/postgres.md +++ b/docs/operate/cli/reference/postgres.md @@ -1,3 +1,7 @@ +--- +tags: [command-line, reference] +--- + # postgres command The postgres command can be used to connect to a cloudsql postgres database with your personal user diff --git a/docs/reference/cli/validate.md b/docs/operate/cli/reference/validate.md similarity index 92% rename from docs/reference/cli/validate.md rename to docs/operate/cli/reference/validate.md index 4b7cf91e3..22ccc85b8 100644 --- a/docs/reference/cli/validate.md +++ b/docs/operate/cli/reference/validate.md @@ -1,3 +1,7 @@ +--- +tags: [command-line, reference] +--- + # validate command ```bash @@ -30,9 +34,9 @@ See the [templating section](#templating) for examples. 
The following resource kinds are supported: -- [Application](../application-example.md) -- [Naisjob](../naisjob-example.md) -- [Topic (Kafka)](../kafka-topic-example.md) +- [Application](../../../workloads/application/reference/application-example.md) +- [Naisjob](../../../workloads/job/reference/naisjob-example.md) +- [Topic (Kafka)](../../../persistence/kafka/reference/kafka-topic-example.md) ## Templating diff --git a/docs/operate/console.md b/docs/operate/console.md new file mode 100644 index 000000000..b58f352be --- /dev/null +++ b/docs/operate/console.md @@ -0,0 +1,13 @@ +--- +tags: [console, explanation, operate] +--- + +# Console + +NAIS Console is a web-based interface for managing your workloads and services on the NAIS platform. It aims to provide a user-friendly way to interact with the platform, without needing to use the command line. + +It tries to provide you with insight into what you've got running on the platform. Is everything running as expected? Are you utilizing the resources you allocate effectively? + +The Console is designed to be self-service, meaning that you can manage your workloads and services without needing to involve the NAIS team. This includes creating, updating, and deleting workloads, as well as managing secrets and other resources. + +Access Console at [console.<>.cloud.nais.io](https://console.<>.cloud.nais.io). diff --git a/docs/how-to-guides/command-line-access/setup.md b/docs/operate/how-to/command-line-access.md similarity index 63% rename from docs/how-to-guides/command-line-access/setup.md rename to docs/operate/how-to/command-line-access.md index af5527668..013aed8ea 100644 --- a/docs/how-to-guides/command-line-access/setup.md +++ b/docs/operate/how-to/command-line-access.md @@ -1,22 +1,25 @@ -# Command line access +--- +tags: [command-line, how-to, operate] +--- + +# Setup command line access This guide shows you how to set up command line tools for accessing NAIS clusters -## Setup -### 0. 
Prerequisites +## Prerequisites -- [naisdevice](../naisdevice/install.md) installed -- [nais-cli](../nais-cli/install.md) installed +- [naisdevice](../naisdevice/how-to/install.md) installed +- [nais-cli](../cli/how-to/install.md) installed -### 1. Install gcloud +## Install gcloud -Follow Googles instructions on how to install [Gcloud](https://cloud.google.com/sdk/docs/install) for your OS +Follow Google's instructions on how to install [gcloud](https://cloud.google.com/sdk/docs/install) for your OS -### 2. Install kubectl +## Install kubectl Follow the instruction to install [kubectl](https://kubernetes.io/docs/tasks/tools/) for your OS -### 3. Authenticate using gcloud +## Authenticate using gcloud ```shell gcloud auth login --update-adc @@ -24,10 +27,17 @@ gcloud auth login --update-adc This will open your browser. Follow the instructions to authenticate using the email from your organization. + When successfully authenticated, you will be shown "You are now authenticated with the gcloud CLI!" in your browser. You can now close the browser window. -### 4. Generate kubeconfig file +You will also need to install a plugin in order to authenticate to the Kubernetes clusters: + +```shell +gcloud components install gke-gcloud-auth-plugin +``` + +## Generate kubeconfig file Use nais-cli to generate the kubeconfig file that grants access to the NAIS clusters. @@ -37,7 +47,7 @@ nais kubeconfig A successful run will output how many clusters and where the kubeconfig file is written to. -### 5. Verify access +## Verify access ```shell kubectl --context '' get ns @@ -48,5 +58,3 @@ If you are unsure about which clusters are available, you can list them with: ```shell kubectl config get-clusters ``` - -If you experience any issues, please refer to the [troubleshooting](./troubleshooting.md) guide.
diff --git a/docs/how-to-guides/team.md b/docs/operate/how-to/create-team.md similarity index 61% rename from docs/how-to-guides/team.md rename to docs/operate/how-to/create-team.md index 5714bdef4..d3a3080b6 100644 --- a/docs/how-to-guides/team.md +++ b/docs/operate/how-to/create-team.md @@ -1,18 +1,22 @@ -# Create NAIS team +--- +tags: [team, how-to, operate] +--- + +# Create a NAIS team This how-to guide shows you how to create a NAIS team. -## 0. Prerequisites +## Prerequisites -- [naisdevice installed](./naisdevice/install.md), to be able to access Console. +- [naisdevice installed](../naisdevice/how-to/install.md), to be able to access Console. -## 1. Create your team +## Create your team -1. Open [Console](https://console.<>.cloud.nais.io/) in your browser, and autenticate. +1. Open [Console](https://console.<>.cloud.nais.io/) in your browser, and authenticate. 2. Click on "Create" 3. Choose a team name, add a description of the team and the Slack channel that is mentioned in the prerequisites. 4. Click "Create" Your team will now be created, and you will be the owner. -Read more about what resources are created for your team [here](../explanation/team.md). +Read more about what resources are created for your team [here](../../explanations/team.md). diff --git a/docs/operate/naisdevice/.pages b/docs/operate/naisdevice/.pages new file mode 100644 index 000000000..b43a840a6 --- /dev/null +++ b/docs/operate/naisdevice/.pages @@ -0,0 +1,4 @@ +nav: +- README.md +- 🎯 How-To: how-to +- ... diff --git a/docs/operate/naisdevice/README.md b/docs/operate/naisdevice/README.md new file mode 100644 index 000000000..2c64fe4df --- /dev/null +++ b/docs/operate/naisdevice/README.md @@ -0,0 +1,20 @@ +--- +tags: [naisdevice, explanation, operate] +--- + +# naisdevice + +naisdevice is a mechanism that lets you connect to services not available on the public internet from your machine. 
+ +Examples of such services are: + +- Access to the NAIS cluster with [kubectl](../how-to/command-line-access.md) +- Applications on [internal domains](../../workloads/reference/environments.md) +- Internal NAIS services such as [Console](../console.md). + +{% if tenant() == "nav" %} + +Before connecting, your machine needs to meet certain requirements. These requirements are enforced by a third-party service called [Kolide](https://kolide.com/) that is installed alongside naisdevice. + +Kolide will notify you through Slack when something is wrong with your machine, and will guide you through the process of fixing it. +{% endif %} diff --git a/tenants/nav/how-to-guides/naisdevice/install.md b/docs/operate/naisdevice/how-to/install.md similarity index 51% rename from tenants/nav/how-to-guides/naisdevice/install.md rename to docs/operate/naisdevice/how-to/install.md index d6c66f31d..d44457a6a 100644 --- a/tenants/nav/how-to-guides/naisdevice/install.md +++ b/docs/operate/naisdevice/how-to/install.md @@ -1,20 +1,40 @@ +--- +tags: [naisdevice, how-to] +--- + # Install naisdevice +{% if tenant() == "nav" %} + +!!! warning + + To make sure you are using naisdevice as securely as possible, make sure you are a member of the [Slack channel #naisdevice](https://nav-it.slack.com/archives/C013XV66XHB). Important information will be published there. This is also where you find us if you need any help. + ## Prerequisites - [Install the Kolide agent](./install-kolide.md). +!!! note + + On first time connection you will be presented with soft policies (aka. Do's & Don'ts) + +{% endif %} + ## Device-specific installation steps === "macOS" + {% if tenant() == "nav" %} + The Kolide agent will be added to your Slack app, and let you know when there are recommended updates or security issues you need to address - and how to address them. They have been vetted by the NAIS team and should be followed to keep your device safe. - 2.
[Install Homebrew](https://brew.sh/) unless you already have it. + {% endif %} + + 1. [Install Homebrew](https://brew.sh/) unless you already have it. Homebrew makes it possible to install and maintain apps using the terminal app on your Mac. - 3. Open terminal (Use ` + ` to find `Terminal.app`) and add the nais tap by typing or pasting the text below and press ``. + 1. Open terminal (Use ` + ` to find `Terminal.app`) and add the nais tap by typing or pasting the text below and press ``. Adding the nais tap lets Homebrew know where to get and update files from. Do not worry about where it will be installed, we got you covered. @@ -22,57 +42,98 @@ brew tap nais/tap ``` - 4. When the tap is added, you are ready to install naisdevice, by typing or pasting the following in terminal and press ``. + 1. When the tap is added, you are ready to install naisdevice, by typing or pasting the following in terminal and press ``. + + {% if tenant() == "nav" %} ```bash brew install naisdevice ``` - 5. You will be asked for your local device account's password to finish the installation. - 1. The password is not accepted unless you have administrator privileges, so you need to get that first. - 2. If you're running a NAV Mac: Open your `Privileges.app` (Use ` + ` to find the `Privileges.app` and request privileges. When this is done, you can enter your password in terminal. The privileges last 10 minutes. The limited time is due to security reasons, because we know many of us forget to turn it off afterwards. + 1. You will be asked for your local device account's password to finish the installation. + 1. The password is not accepted unless you have administrator privileges, so you need to get that first. + 1. If you're running a NAV Mac: Open your `Privileges.app` (Use ` + ` to find the `Privileges.app` and request privileges. When this is done, you can enter your password in terminal. The privileges last 10 minutes. 
The limited time is due to security reasons, because we know many of us forget to turn it off afterwards. - 6. Turn on your freshly installed `naisdevice` app. - 1. Use ` + ` to find your `naisdevice.app` and press ``. - 2. Follow the [instructions to connect your _nais_ device](#connect-naisdevice-through-tasksys-tray-icon). + {% else %} - 7. If you need to connect to anything running in K8s cluster, remember to [update your kubeconfig](#connecting-to-nais-clusters) + ```bash + brew install naisdevice-tenant + ``` + + {% endif %} + + 1. You will be asked for your local device account's password to finish the installation. + 1. Turn on your freshly installed `naisdevice` app. + 1. Use ` + ` to find your `naisdevice.app` and press ``. + 1. Follow the [instructions to connect your _nais_ device](#connect-naisdevice-through-tasksys-tray-icon). === "Windows" #### Install using Scoop + {% if tenant() == "nav" %} + The Kolide agent will be added to your Slack app, and let you know when there are recommended updates or security issues you need to address - and how to address them. They have been vetted by the NAIS team and should be followed to keep your device safe. + {% endif %} + 1. Install [Scoop](https://scoop.sh) unless you already have it. Scoop makes it possible to install and maintain programs from the command line. 1. Use the following command in the command line to add the nais bucket to let Scoop know where to get and update files from. Do not worry about where it will be installed, we got you covered. + ```powershell scoop bucket add nais https://github.com/nais/scoop-bucket ``` + 1. When the bucket is added, you are ready to install naisdevice, by typing the following in the command line: + + {% if tenant() == "nav" %} + ```powershell scoop install naisdevice ``` + + {% else %} + + ```powershell + scoop install naisdevice-tenant + ``` + + {% endif %} + (you will be asked for administrator access to run the installer) - 1. 
If you need to connect to anything running in K8s cluster, remember to [update your kubeconfig](#connecting-to-nais-clusters) 1. Start _naisdevice_ from the _Start menu_ - ### Manual installation +=== "Manual" + + {% if tenant() == "nav" %} - 1. [Install Kolide agent](install-kolide.md). The Kolide agent will be added to your Slack app, and let you know when there are recommended updates or security issues you need to address - and how to address them. They have been vetted by the NAIS team and should be followed to keep your device safe. - 2. [Download and install naisdevice.exe](https://github.com/nais/device/releases/latest) + [Download and install naisdevice.exe](https://github.com/nais/device/releases/latest) + + {% else %} + + [Download and install naisdevice-tenant.exe](https://github.com/nais/device/releases/latest) + + {% endif %} + (you will be asked for administrator access when you run the installer) - 3. Start _naisdevice_ from the _Start menu_ + + 1. Start _naisdevice_ from the _Start menu_ === "Ubuntu" + !!! warning + + Only the Gnome desktop environment on the latest Ubuntu LTS is supported at the moment + 1. Add the nais PPA repo: + ``` NAIS_GPG_KEY="/etc/apt/keyrings/nav_nais_gar.asc" curl -sfSL "https://europe-north1-apt.pkg.dev/doc/repo-signing-key.gpg" | sudo dd of="$NAIS_GPG_KEY" @@ -80,19 +141,48 @@ sudo apt update ``` + **NOTE:** curl is not installed on a fresh Ubuntu; install it first: + + ``` + sudo apt install curl + ``` + 1. Install the naisdevice package: + + {% if tenant() == "nav" %} + ``` sudo apt install naisdevice ``` + + {% else %} + + ``` + sudo apt install naisdevice-tenant + ``` + + {% endif %} + 1. Turn on your freshly installed `naisdevice` application. - 1. Find `naisdevice` in your application menu, or use the `naisdevice` command in a terminal to start the application. - 2. Follow the [instructions to connect your _nais_ device](#connect-naisdevice-through-tasksys-tray-icon). - 1.
Remember to [update your kubeconfig](./install-tenant.md#connecting-to-nais-clusters). + 1. Find `naisdevice` in your application menu, or use the `naisdevice` command in a terminal to start the application. + 2. Follow the [instructions to connect your _nais_ device](#connect-naisdevice-through-tasksys-tray-icon). + +{% if tenant() == "nav" %} !!! warning - To make sure you are using naisdevice as securely as possible, make sure you are a member of the [Slack channel #naisdevice](https://nav-it.slack.com/archives/C013XV66XHB). Important information will be published there. This also where you find us, if you need any help. + On first time connection you will be presented with soft policies (aka. Do's & Don'ts) -!!! note + +{% endif %} - On first time connection you will be presented with soft policies (aka. Do's & Don'ts) +## Connect naisdevice through task/sys-tray icon + +![A macOS systray exemplifying a red-colored `naisdevice` icon.](../../../assets/naisdevice-systray-icon.svg) + +When you have opened naisdevice, you may be concerned that nothing happened. The little naisdevice icon has appeared in your Systray (where all your small program icons are located - see above picture for how it looks on Mac): + +1. Find your `naisdevice` icon (pictured above - though it should not be red at first attempted connection). +- Can't find the icon? Make sure it is installed (See [macOS](#macos-installation), [Windows](#windows-installation) or [Ubuntu](#ubuntu-installation)) +1. Left-click it and select `Connect`. +1. Left-click the `naisdevice` icon again and click `Connect`. +You might need to allow ~20 seconds to pass before clicking `Connect` turns your `naisdevice` icon green.
diff --git a/docs/how-to-guides/naisdevice/troubleshooting.md b/docs/operate/naisdevice/how-to/troubleshooting.md similarity index 91% rename from docs/how-to-guides/naisdevice/troubleshooting.md rename to docs/operate/naisdevice/how-to/troubleshooting.md index ba7876331..77a0dcae6 100644 --- a/docs/how-to-guides/naisdevice/troubleshooting.md +++ b/docs/operate/naisdevice/how-to/troubleshooting.md @@ -1,4 +1,8 @@ -# Troubleshooting +--- +tags: [naisdevice, how-to] +--- + +# Troubleshooting naisdevice - Browser does not open after you click connect and naisdevice - Restart your default browser. diff --git a/docs/operate/naisdevice/how-to/uninstall.md b/docs/operate/naisdevice/how-to/uninstall.md new file mode 100644 index 000000000..eae7220ac --- /dev/null +++ b/docs/operate/naisdevice/how-to/uninstall.md @@ -0,0 +1,95 @@ +--- +tags: [naisdevice, how-to] +--- + +# Uninstall naisdevice + +## OS-specific Uninstall steps + +{% if tenant() == "nav" %} + +### macOS uninstall + +1. Stop and remove Kolide and the related launch mechanisms + ```zsh + sudo /bin/launchctl unload /Library/LaunchDaemons/com.kolide-k2.launcher.plist + sudo /bin/rm -f /Library/LaunchDaemons/com.kolide-k2.launcher.plist + ``` +2. Delete files, configuration and binaries + ```zsh + sudo /bin/rm -rf /usr/local/kolide-k2 + sudo /bin/rm -rf /etc/kolide-k2 + sudo /bin/rm -rf /var/kolide-k2 + ``` +3. Uninstall the naisdevice Homebrew cask + ```bash + brew uninstall --force naisdevice + ``` + +### Windows uninstall + +1. Uninstall _Kolide_ from Apps & Features +2. (Optionally) Uninstall _WireGuard_ from Apps & Features +3. Uninstall naisdevice + * Installed with Scoop + ```powershell + scoop uninstall naisdevice + ``` + * Installed manually + * Uninstall _naisdevice_ from Apps & Features + +### Ubuntu uninstall + +1. Stop and remove Kolide and the related launch mechanisms + ```bash + sudo systemctl stop launcher.kolide-k2.service + sudo systemctl disable launcher.kolide-k2.service + ``` +2. 
Uninstall Kolide program files + ```bash + sudo apt remove launcher-kolide-k2 + ``` +3. Delete Kolide files & caches + ```bash + sudo rm -r /{etc,var}/kolide-k2 + ``` +4. Uninstall the naisdevice deb package + ```bash + sudo apt remove naisdevice + ``` + +## OS-agnostic uninstall steps + +When the program has been removed from your device, let an admin know in the [#naisdevice](https://nav-it.slack.com/archives/C013XV66XHB) Slack channel. +This is necessary so that the record of your device can be purged from our Kolide systems. + +{% else %} + +### macOS uninstall + +Uninstall the naisdevice Homebrew cask + +```bash +brew uninstall --force naisdevice +``` + +### Windows uninstall + +1. (Optionally) Uninstall _WireGuard_ from Apps & Features +2. Uninstall naisdevice + * Installed with Scoop + ```powershell + scoop uninstall naisdevice + ``` + * Installed manually + * Uninstall _naisdevice_ from Apps & Features + +### Ubuntu uninstall + +Uninstall the naisdevice deb package + +```bash +sudo apt remove naisdevice +``` + +{% endif %} diff --git a/docs/how-to-guides/naisdevice/update.md b/docs/operate/naisdevice/how-to/update.md similarity index 94% rename from docs/how-to-guides/naisdevice/update.md rename to docs/operate/naisdevice/how-to/update.md index dc0ad7c15..bcd95dc45 100644 --- a/docs/how-to-guides/naisdevice/update.md +++ b/docs/operate/naisdevice/how-to/update.md @@ -1,3 +1,7 @@ +--- +tags: [naisdevice, how-to] +--- + # Updating naisdevice === "macOS" diff --git a/docs/persistence/.pages b/docs/persistence/.pages new file mode 100644 index 000000000..49e08c087 --- /dev/null +++ b/docs/persistence/.pages @@ -0,0 +1,4 @@ +nav: +- README.md +- 💡 Explanations: explanations +- ... diff --git a/docs/persistence/README.md b/docs/persistence/README.md new file mode 100644 index 000000000..1110e1807 --- /dev/null +++ b/docs/persistence/README.md @@ -0,0 +1,117 @@ +--- +description: >- + NAIS offers several storage solutions for storing data.
This page describes + the different options and how to use them. +tags: [persistence, explanation] +--- + +# Persistent Data Overview + +In this section we will discuss how to work with persistent data in your +applications and the different options available to you. + +Persistent data is data that is stored on disk and survives application +restarts. This is in contrast to ephemeral data which is stored in memory +and is lost when the application is restarted. + +## Responsibilities + +The team is responsible for any data that is stored in the various storage +options that are available through the platform. You can read more in the +[Data Responsibilities](explanations/responsibilities.md) section. + +## Availability + +Some of the storage options are only available from certain environments. Make +sure to check what storage options are available in your environment in the +[Storage Comparison](#storage-comparison) section below. + +## What should I choose? + +Sequence of questions to ask yourself when choosing the right storage option. +Choose wisely. + +```mermaid +graph TD + F[I need caching!] --> REDIS[Redis] + A[I got data!] --> B[Is it structured?] + B --> |Yes| C[Is it events?] + B --> |No| D[Is it files?] + C --> |Yes| Kafka + C --> |No| E[Is it analytical?] + D --> |Yes| GCS[Cloud Storage] + D --> |No| Opensearch[OpenSearch] + E --> |Yes| GBQ[BigQuery] + E --> |No| GCSQL[Cloud SQL] + + click REDIS "#redis" + click GBQ "#bigquery" + click GCS "#cloud-storage-buckets" + click GCSQL "#cloud-sql" + click Kafka "#kafka" + click Opensearch "#opensearch" +``` + +## Storage Comparison + +Below is a list of the different storage options available to you. 
+ +| Name | Type | Availability | Backup | +|-----------------------------------------|-------------|:------------:|:------:| +| [Kafka](#kafka) | Streaming | All | Yes* | +| [Cloud Storage](#cloud-storage-buckets) | Object | GCP | Yes* | +| [Cloud SQL](#cloud-sql) | Relational | GCP | Yes | +| [BigQuery](#bigquery) | Relational | GCP | Yes* | +| [OpenSearch](#opensearch) | Document | GCP | Yes | +| [Redis](#redis) | Key/Value | GCP | Yes | + +\* Data is highly available and fault-tolerant but not backed up if deleted by +mistake. + +## Kafka + +Kafka is a streaming platform that is used for storing and processing data. It +is a very powerful tool that can be used for a wide variety of use cases. It is +also a very complex tool that requires a lot of knowledge to use effectively. + +[:bulb: Learn more about Kafka](./kafka/README.md) + +## Cloud Storage (Buckets) + +Cloud Storage is a service that provides object storage. It is a very simple +service that is easy to use and provides a lot of flexibility. It is a good +choice for storing data that is not relational in nature. + +[:bulb: Learn more about Cloud Storage](./buckets/README.md) + +## Cloud SQL + +Cloud SQL is a PostgreSQL relational database service that is provided by Google +Cloud Platform. It is a good choice for storing data that is relational in +nature. + +[:bulb: Learn more about Cloud SQL](./postgres/README.md) + +## BigQuery + +BigQuery is a service that provides a relational database that is optimized for +analytical workloads. It is a good choice for storing data that is relational in +nature. + +[:bulb: Learn more about Google BigQuery](./bigquery/README.md) + +## OpenSearch + +OpenSearch is a document database that is used for storing and searching data. +It is a good choice for storing data that is not relational in nature. +OpenSearch offers a drop-in replacement for Elasticsearch. 
+ +[:bulb: Learn more about OpenSearch](./opensearch/README.md) + +## Redis + +Redis is a key value database that is used for storing and querying data. It is +a good choice for storing data that is not relational in nature and often used +for caching. + +[:bulb: Learn more about Redis](./redis/README.md) diff --git a/docs/persistence/bigquery/.pages b/docs/persistence/bigquery/.pages new file mode 100644 index 000000000..99efe087e --- /dev/null +++ b/docs/persistence/bigquery/.pages @@ -0,0 +1,5 @@ +title: BigQuery +nav: +- README.md +- 🎯 How-To: how-to +- ... diff --git a/docs/explanation/database/bigquery.md b/docs/persistence/bigquery/README.md similarity index 78% rename from docs/explanation/database/bigquery.md rename to docs/persistence/bigquery/README.md index 141df7a55..637d8d07b 100644 --- a/docs/explanation/database/bigquery.md +++ b/docs/persistence/bigquery/README.md @@ -1,3 +1,7 @@ +--- +tags: [persistence, bigquery, explanation, services] +--- + # Google Cloud BigQuery Dataset Google Cloud BigQuery is a service that provides a relational database that is optimized for analytical workloads. It is a good choice for storing data that is relational in nature. @@ -8,17 +12,17 @@ started with BigQuery for your applications. # NAIS Application yaml manifest options -Full documentation of all available options can be found over at: [`spec.gcp.bigQueryDatasets[]`](../../reference/application-spec.md#gcpbigquerydatasets). +Full documentation of all available options can be found over at: [`spec.gcp.bigQueryDatasets[]`](../../workloads/application/reference/application-spec.md#gcpbigquerydatasets). Example of an application using a `nais.yaml` provisioned BigQuery Dataset can be found here: [testapp](https://github.com/nais/testapp/blob/master/pkg/bigquery/bigquery.go). 
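For orientation, a dataset request in the application manifest can be sketched as below. This is an illustrative fragment only: the dataset name, description and permission value are placeholders, and the authoritative field list is the application spec linked above.

```yaml
# Illustrative sketch of a nais.yaml fragment requesting a BigQuery dataset.
# All names and values are placeholders; consult the application spec for the schema.
spec:
  gcp:
    bigQueryDatasets:
      - name: my_dataset        # must be unique within the team's GCP project
        description: Dataset for my application
        permission: READWRITE   # access level granted to the workload
```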
## Caveats to be aware of === "Automatic Deletion" - Once a BigQuery Dataset is provisioned, it will not be automatically deleted - unless one explicitly sets [`spec.gcp.bigQueryDatasets[].cascadingDelete`](../../reference/application-spec.md#gcpbigquerydatasetscascadingdelete) to `true`. + Once a BigQuery Dataset is provisioned, it will not be automatically deleted - unless one explicitly sets [`spec.gcp.bigQueryDatasets[].cascadingDelete`](../../workloads/application/reference/application-spec.md#gcpbigquerydatasetscascadingdelete) to `true`. Clean up is done by deleting application resource and deleting the BigQuery instance directly in [console.cloud.google.com](https://console.cloud.google.com/bigquery).
- When there exist no tables in the specified BigQuery Dataset, deleting the "nais application" will delete the whole BigQuery Dataset, even if [`spec.gcp.bigQueryDatasets[].cascadingDelete`](../../reference/application-spec.md#gcpbigquerydatasetscascadingdelete) is set to `false`. + When no tables exist in the specified BigQuery Dataset, deleting the "nais application" will delete the whole BigQuery Dataset, even if [`spec.gcp.bigQueryDatasets[].cascadingDelete`](../../workloads/application/reference/application-spec.md#gcpbigquerydatasetscascadingdelete) is set to `false`. === "Unique names" The name of your Dataset must be unique within your team's GCP project. === "Updates/Immutability" @@ -30,4 +34,4 @@ Example of an application using a `nais.yaml` provisioned BigQuery Dataset can b ## Example with all configuration options -See [full example](../../reference/application-example.md). \ No newline at end of file +See [full example](../../workloads/application/reference/application-example.md). diff --git a/docs/how-to-guides/persistence/bigquery/connect.md b/docs/persistence/bigquery/how-to/connect.md similarity index 92% rename from docs/how-to-guides/persistence/bigquery/connect.md rename to docs/persistence/bigquery/how-to/connect.md index 069bff53d..df2494315 100644 --- a/docs/how-to-guides/persistence/bigquery/connect.md +++ b/docs/persistence/bigquery/how-to/connect.md @@ -1,3 +1,7 @@ +--- +tags: [how-to, bigquery] +--- + # Using BigQuery from your application When connecting your BigQuery client you need to specify the project ID and the dataset ID.
diff --git a/docs/how-to-guides/persistence/bigquery/create.md b/docs/persistence/bigquery/how-to/create.md similarity index 76% rename from docs/how-to-guides/persistence/bigquery/create.md rename to docs/persistence/bigquery/how-to/create.md index c036e3761..253cd7735 100644 --- a/docs/how-to-guides/persistence/bigquery/create.md +++ b/docs/persistence/bigquery/how-to/create.md @@ -1,6 +1,10 @@ +--- +tags: [how-to, bigquery] +--- + # Create an instance of BigQuery -Below you'll se a minimal working example for a NAIS Application manifest. +Below is a minimal working example for a NAIS Application manifest. ## 1. Create dataset ???+ note ".nais/app.yaml" diff --git a/docs/persistence/buckets/.pages b/docs/persistence/buckets/.pages new file mode 100644 index 000000000..f092fbf61 --- /dev/null +++ b/docs/persistence/buckets/.pages @@ -0,0 +1,5 @@ +nav: +- README.md +- 🎯 How-To: how-to +- 📚 Reference: reference +- ... diff --git a/docs/persistence/buckets/README.md b/docs/persistence/buckets/README.md new file mode 100644 index 000000000..ee98686e7 --- /dev/null +++ b/docs/persistence/buckets/README.md @@ -0,0 +1,10 @@ +--- +tags: [bucket, services, explanation] +--- + +# Buckets + +A bucket is a storage container for objects. +Objects are files that contain data, such as documents, images, videos, and application code. + +NAIS supports provisioning and managing buckets in Google Cloud Storage (GCS) for use by your applications. diff --git a/docs/how-to-guides/persistence/buckets/create.md b/docs/persistence/buckets/how-to/create.md similarity index 60% rename from docs/how-to-guides/persistence/buckets/create.md rename to docs/persistence/buckets/how-to/create.md index 22d1694a0..d231825ca 100644 --- a/docs/how-to-guides/persistence/buckets/create.md +++ b/docs/persistence/buckets/how-to/create.md @@ -1,12 +1,16 @@ -# Create +--- +tags: [how-to, bucket] +--- + +# Create a bucket This guide will show you how to create a Google Cloud Storage bucket. -## 0. 
Add the bucket to the NAIS application manifest +## Add the bucket to the NAIS application manifest You create the bucket through the NAIS application manifest. -!!!+ note "Naming" +!!! warning "Use a globally unique name" Bucket names must be globally unique across the entire Google infrastructure. @@ -29,11 +33,11 @@ spec: withState: ANY ``` -`retentionPeriodDays` and `lifecycleCondition` are for neccessary for [backup](../../../reference/bucket-backup.md). +`retentionPeriodDays` and `lifecycleCondition` are necessary for [backup](../reference/README.md). -## 1. Deploy your manifest +## Deploy your manifest -Deploy your manifest either using [NAIS deploy action](../../github-action.md), or manually: +Deploy your manifest either using [NAIS deploy action](../../../build/how-to/build-and-deploy.md), or manually: ```bash kubectl apply -f diff --git a/docs/how-to-guides/persistence/buckets/delete.md b/docs/persistence/buckets/how-to/delete.md similarity index 89% rename from docs/how-to-guides/persistence/buckets/delete.md rename to docs/persistence/buckets/how-to/delete.md index 26f762a31..4ab602c82 100644 --- a/docs/how-to-guides/persistence/buckets/delete.md +++ b/docs/persistence/buckets/how-to/delete.md @@ -1,8 +1,11 @@ +--- +tags: [bucket, how-to] +--- # Deleting a bucket Delete unused buckets to avoid incurring unnecessary costs. A bucket is deleted by enabling cascading deletion, and deleting the application. -## 1. Enable cascading/automatic deletion +## Enable cascading/automatic deletion For deletion of the application to automatically delete the bucket, set `cascadingDelete` to `true` in your NAIS application spesification. Don't worry, the bucket won't be deleted if it contains files. @@ -25,7 +28,7 @@ spec: numNewerVersions: 2 withState: ANY ``` -## 2. Delete your application +## Delete your application Delete your application resource.
diff --git a/docs/reference/bucket-backup.md b/docs/persistence/buckets/reference/README.md similarity index 89% rename from docs/reference/bucket-backup.md rename to docs/persistence/buckets/reference/README.md index 990f2e18b..09c613d5a 100644 --- a/docs/reference/bucket-backup.md +++ b/docs/persistence/buckets/reference/README.md @@ -1,8 +1,15 @@ -# Backup of a bucket +--- +title: Buckets reference +tags: [reference, bucket] +--- + +# Buckets reference + +## Backup of a bucket There is no automatic backup enabled for buckets. I.e. the files can not be restored in the event of accidental deletion or storage system failure. -## Specification +### Specification * `retentionPeriodDays` is set in number of days, if not set; no retention policy will be set and files can be deleted by application or manually. diff --git a/docs/persistence/explanations/responsibilities.md b/docs/persistence/explanations/responsibilities.md new file mode 100644 index 000000000..8a37dece8 --- /dev/null +++ b/docs/persistence/explanations/responsibilities.md @@ -0,0 +1,61 @@ +--- +tags: [explanation, persistence] +description: >- + This page aims to clarify the responsibilities as they relate to data storage + using NAIS and GCP. Depending on which infrastructure the data is stored on, + the responsibilities look slightly different. +--- + +# Responsibilities + +It is important to understand the responsibilities of the different parties involved when working with data in NAIS. This page aims to clarify the responsibilities as they relate to data storage on-prem and in GCP. Depending on which infrastructure the data is stored on, the responsibilities look slightly different. + +## The Platform + +The platform does not manage the underlying infrastructure or run the data storage service provided by the platform. +These are provided by NAV, Google or Aiven. +NAIS is responsible for setting up the infrastructure and data storage service according to the specifications provided by the application.
+ +The platform team is responsible for the following: + +* Provisioning and maintaining underlying infrastructure +* Tooling and automation to make it easy to use the platform +* Providing documentation and support for the platform + +The platform team is **not** responsible for the application itself, nor the data stored in the provided data storage services. + +??? note "List of data processors" + +{% if tenant() == "nav" %} + | Infrastructure | Data processor | + |----------------|----------------| + | On-premise | NAV (ITIP) | + | Cloud Storage | Google | + | Cloud SQL | Google | + | BigQuery | Google | + | Kafka | Aiven | + | OpenSearch | Aiven | +{% else %} + | Infrastructure | Data processor | + |----------------|----------------| + | Cloud Storage | Google | + | Cloud SQL | Google | + | BigQuery | Google | + | Kafka | Aiven | + | OpenSearch | Aiven | +{% endif %} + +## The Team + +At the end of the day, the team is responsible for its own data and how it is managed. This includes compliance with data policies (e.g. GDPR or archiving), ensuring disaster recovery (aided by tooling and interfaces supplied by the platform) and daily operations. + +### Team Checklist for Data Storage + +Here is a simple checklist for what the teams should think about related to how and where the data is stored: + +{% if tenant() == "nav" %} +* [x] Update [Behandlingskatalogen](https://behandlingskatalog.nais.adeo.no) where data is stored. +{% endif %} +* [x] Is the data storage in compliance with data policies (GDPR, PII, etc.)? +* [x] What is the SLA for the data storage? +* [x] What is the backup strategy for the data storage? diff --git a/docs/persistence/kafka/.pages b/docs/persistence/kafka/.pages new file mode 100644 index 000000000..f092fbf61 --- /dev/null +++ b/docs/persistence/kafka/.pages @@ -0,0 +1,5 @@ +nav: +- README.md +- 🎯 How-To: how-to +- 📚 Reference: reference +- ...
diff --git a/docs/explanation/kafka.md b/docs/persistence/kafka/README.md similarity index 89% rename from docs/explanation/kafka.md rename to docs/persistence/kafka/README.md index a5fca41df..891ccbdc2 100644 --- a/docs/explanation/kafka.md +++ b/docs/persistence/kafka/README.md @@ -1,5 +1,29 @@ +--- +description: >- + Kafka is a distributed streaming platform that can be used to publish and + subscribe to streams of records. It is a good alternative to synchronous + communication between services if you need to decouple services. +tags: [kafka, explanation, persistence, services] +--- + # Kafka +NAIS offers Kafka as a managed service through Aiven. + +Start using Kafka by [creating a `Topic` resource](how-to/create.md) in one of our Kubernetes clusters. + +A `Topic` belongs to one of the Kafka _pools_. +A pool is a highly available, replicated Kafka cluster running at Aiven. +After the topic is created, relevant users are added to the topic's access control list (ACL). + +To get started with Kafka in your application, see [accessing topics from an application](how-to/access.md). + +!!! note "Backups and recovery" + + Kafka as a system is highly durable, and is designed to be able to keep your data safe in the event of a failure. + This requires a properly configured replication factor for your topic, and that your clients use the appropriate strategy when sending messages and committing offsets. + Even so, our recommendation is that Kafka should not be the master of your data, and you should have the ability to restore your data from some other system. + ## What happens on deploy? When you deploy an application that requests access to Kafka, Naiserator will create an `AivenApplication` resource in the cluster. 
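The backups-and-recovery note added above hinges on client configuration: durability requires the producer to wait for all in-sync replicas and to retry safely. As a non-normative sketch, a durability-oriented producer configuration could be assembled from the platform-injected `KAFKA_*` environment variables like this. The config keys follow librdkafka/confluent-kafka naming and are an assumption about your client library; other clients use different key names.

```python
import os


def durable_producer_config() -> dict:
    """Producer settings aimed at at-least-once durability on Aiven Kafka.

    Assumptions: env var names are those injected by the platform when
    `kafka.pool` is set; config keys follow librdkafka (confluent-kafka)
    naming and will differ for other client libraries.
    """
    return {
        # Connection and mutual TLS, from platform-provided secrets.
        "bootstrap.servers": os.environ["KAFKA_BROKERS"],
        "security.protocol": "SSL",
        "ssl.ca.location": os.environ["KAFKA_CA_PATH"],
        "ssl.certificate.location": os.environ["KAFKA_CERTIFICATE_PATH"],
        "ssl.key.location": os.environ["KAFKA_PRIVATE_KEY_PATH"],
        # Durability: wait for all in-sync replicas before acknowledging,
        # and deduplicate retried sends broker-side.
        "acks": "all",
        "enable.idempotence": True,
    }
```

This dict could then be passed to, for example, `confluent_kafka.Producer(durable_producer_config())`; pair it with a topic replication factor greater than one for the guarantees described above.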
diff --git a/docs/how-to-guides/persistence/kafka/access.md b/docs/persistence/kafka/how-to/access.md similarity index 64% rename from docs/how-to-guides/persistence/kafka/access.md rename to docs/persistence/kafka/how-to/access.md index b41f4d923..d3de5d612 100644 --- a/docs/how-to-guides/persistence/kafka/access.md +++ b/docs/persistence/kafka/how-to/access.md @@ -1,12 +1,16 @@ +--- +tags: [how-to, kafka] +--- + # Accessing topics from an application This guide shows you how to access Kafka topics from your application. -## 0. Prerequisites +## Prerequisites -You need an existing topic to access. See [Create a Kafka topic](./create.md) for how to create a topic. +You need an existing topic to access. See [Create a Kafka topic](create.md) for how to create a topic. -## 1. Enable access to the relevant pool in your workload definition +## Enable access to the relevant pool in your workload definition ???+ note ".nais/app.yaml" @@ -20,14 +24,16 @@ You need an existing topic to access. See [Create a Kafka topic](./create.md) fo team: spec: kafka: - pool: # TODO: link to available tenant pools + pool: ``` -## 2. Grant access to the topic +Select a `pool` from one of the [available pools](../reference/pools.md). + +## Grant access to the topic The owner of the topic must [grant your application access to the topic](manage-acl.md). -## 3. Configure your application +## Configure your application Aiven has written several articles on how to configure your application. We use SSL, so ignore the SASL-SSL examples: @@ -37,12 +43,13 @@ We use SSL, so ignore the SASL-SSL examples: - [Node.js](https://docs.aiven.io/docs/products/kafka/howto/connect-with-nodejs.html) - [Go](https://docs.aiven.io/docs/products/kafka/howto/connect-with-go.html) -For all available fields and configuration options, see the [kafka reference](../../../reference/kafka.md). 
-We recommend following the [application design guidelines](../../../explanation/kafka.md#application-design-guidelines) for how to configure your application. +For all available environment variables, see the [reference](../reference/environment-variables.md). + +We recommend following the [application design guidelines](../README.md#application-design-guidelines) for how to configure your application. -## 3. Apply the application +## Apply the application === "Automatically" - Add the file to your application repository to deploy with [NAIS github action](../../github-action.md). + Add the file to your application repository to deploy with [NAIS github action](../../../build/how-to/build-and-deploy.md). === "Manually" ```bash kubectl apply -f ./nais/app.yaml --namespace= --context= diff --git a/docs/persistence/kafka/how-to/create.md b/docs/persistence/kafka/how-to/create.md new file mode 100644 index 000000000..4c6fb9958 --- /dev/null +++ b/docs/persistence/kafka/how-to/create.md @@ -0,0 +1,51 @@ +--- +tags: [how-to, kafka] +--- + +# Create a Kafka topic + +This guide will show you how to create a Kafka topic. + +The fully qualified topic name is the name of the `Topic` resource prefixed with your team namespace: + +``` +. +``` + +This name is also set in the `.status.fullyQualifiedName` field on your Topic resource once the Topic is synchronized to Aiven. + +## Creating topics + +???+ note ".nais/topic.yaml" + ```yaml hl_lines="4-5 7 9 11-14" + apiVersion: kafka.nais.io/v1 + kind: Topic + metadata: + name: + namespace: + labels: + team: + spec: + pool: + acl: + - team: + application: + access: readwrite # read, write, readwrite + ``` + +Select a `pool` from one of the [available pools](../reference/pools.md). + +See the [Kafka topic reference](../reference/kafka-topic-spec.md) for a complete list of available options. + +## Grant access to the topic for other applications (optional) + +See [manage access](manage-acl.md) for how to grant access to your topic. 
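The naming rule in the new create guide (the fully qualified topic name is the `Topic` name prefixed with the team namespace, and ACL `access` must be `read`, `write`, or `readwrite`) can be illustrated with a small sketch. The helper names below are invented for illustration and are not part of any NAIS tooling.

```python
# Valid values for spec.acl[].access, per the Topic spec.
VALID_ACCESS = {"read", "write", "readwrite"}


def fully_qualified_topic_name(namespace: str, topic_name: str) -> str:
    """Mirror the platform's naming rule: the topic is exposed on Kafka as
    the Topic resource name prefixed with the team namespace, which is also
    the value written to .status.fullyQualifiedName."""
    return f"{namespace}.{topic_name}"


def validate_access(access: str) -> str:
    """Reject ACL access values the Topic spec does not accept."""
    if access not in VALID_ACCESS:
        raise ValueError(
            f"invalid access {access!r}; expected one of {sorted(VALID_ACCESS)}"
        )
    return access
```

For example, a `Topic` named `mytopic` in the `myteam` namespace is addressed as `myteam.mytopic` by producers and consumers.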
+ +## Apply the Topic resource + +=== "Automatically" + Add the file to your application repository to deploy with [NAIS github action](../../../build/how-to/build-and-deploy.md). +=== "Manually" + ```bash + kubectl apply -f ./nais/topic.yaml --namespace= --context= + ``` diff --git a/docs/how-to-guides/persistence/kafka/delete.md b/docs/persistence/kafka/how-to/delete.md similarity index 85% rename from docs/how-to-guides/persistence/kafka/delete.md rename to docs/persistence/kafka/how-to/delete.md index fa577f26c..bf9963991 100644 --- a/docs/how-to-guides/persistence/kafka/delete.md +++ b/docs/persistence/kafka/how-to/delete.md @@ -1,9 +1,13 @@ +--- +tags: [how-to, kafka] +--- + # Delete Kafka topic and data !!! warning Permanent deletes are irreversible. Enable this feature only as a step to completely remove your data. -## 0. Enable data deletion +## Enable data deletion When a `Topic` resource is deleted from a Kubernetes cluster, the Kafka topic is still retained, and the data kept intact. If you need to remove data and start from scratch, you must add the following annotation to your `Topic` resource: ???+ note ".nais/topic.yaml" @@ -18,16 +22,16 @@ When a `Topic` resource is deleted from a Kubernetes cluster, the Kafka topic is ``` When this annotation is in place, deleting the topic resource from Kubernetes will also delete the Kafka topic and all of its data. -## 1. Apply the Topic resource +## Apply the Topic resource === "Automatically" - Add the file to your application repository to deploy with [NAIS github action](../../github-action.md). + Add the file to your application repository to deploy with [NAIS github action](../../../build/how-to/build-and-deploy.md). === "Manually" ```bash kubectl apply -f ./nais/topic.yaml --namespace= --context= ``` -## 2. 
Delete the topic resource +## Delete the topic resource ```bash kubectl delete -f ./nais/topic.yaml --namespace= --context= ``` diff --git a/docs/how-to-guides/persistence/kafka/internal.md b/docs/persistence/kafka/how-to/internal.md similarity index 73% rename from docs/how-to-guides/persistence/kafka/internal.md rename to docs/persistence/kafka/how-to/internal.md index db32c57de..6546d9c4f 100644 --- a/docs/how-to-guides/persistence/kafka/internal.md +++ b/docs/persistence/kafka/how-to/internal.md @@ -1,7 +1,11 @@ +--- +tags: [how-to, kafka] +--- + # Using Kafka Streams with internal topics This guide will show you how to use Kafka Streams with internal topics. -## 1. Enable Kafka Streams in your application +## Enable Kafka Streams in your application ???+ note ".nais/app.yaml" ```yaml hl_lines="11" @@ -14,16 +18,21 @@ This guide will show you how to use Kafka Streams with internal topics. team: spec: kafka: - pool: # TODO: link to available tenant pools + pool: streams: true ``` -## 2. Configure your application + +Select a `pool` from one of the [available pools](../reference/pools.md). + +## Configure your application + When you do this you **must** configure Kafka Streams by setting the property `application.id` to a value that starts with the value of the env var `KAFKA_STREAMS_APPLICATION_ID`, which will be injected into your pod automatically. -## 3. Apply the application +## Apply the application + === "Automatically" - Add the file to your application repository to deploy with [NAIS github action](../../github-action.md). + Add the file to your application repository to deploy with [NAIS github action](../../../build/how-to/build-and-deploy.md). 
=== "Manually" ```bash kubectl apply -f ./nais/app.yaml --namespace= --context= diff --git a/docs/how-to-guides/persistence/kafka/manage-acl.md b/docs/persistence/kafka/how-to/manage-acl.md similarity index 85% rename from docs/how-to-guides/persistence/kafka/manage-acl.md rename to docs/persistence/kafka/how-to/manage-acl.md index be34e9ac4..9f25d4c53 100644 --- a/docs/how-to-guides/persistence/kafka/manage-acl.md +++ b/docs/persistence/kafka/how-to/manage-acl.md @@ -1,18 +1,22 @@ +--- +tags: [how-to, kafka] +--- + # Manage access This guide will show you how to manage access to your topic -## 0. Prerequisites +## Prerequisites -- [An existing topic](./create.md) to manage access to. +- [An existing topic](create.md) to manage access to. -## 1. Add ACLs to your topic +## Add ACLs to your topic !!! info It is possible to use simple wildcards (`*`) in both team and application names, which matches any character any number of times. Be aware that due to the way ACLs are generated and length limits, the ends of long names can be cut, eliminating any wildcards at the end. -Example of varoius ACLs: +Example of various ACLs: ???+ note ".nais/topic.yaml" @@ -44,9 +48,9 @@ Example of varoius ACLs: ... ``` -## 2. Apply the Topic resource +## Apply the Topic resource === "Automatically" - Add the file to your application repository to deploy with [NAIS github action](../../github-action.md). + Add the file to your application repository to deploy with [NAIS github action](../../../build/how-to/build-and-deploy.md). 
=== "Manually" ```bash kubectl apply -f ./nais/topic.yaml --namespace= --context= diff --git a/docs/how-to-guides/persistence/kafka/monitoring.md b/docs/persistence/kafka/how-to/monitoring.md similarity index 98% rename from docs/how-to-guides/persistence/kafka/monitoring.md rename to docs/persistence/kafka/how-to/monitoring.md index c4dd15566..7c8b7b974 100644 --- a/docs/how-to-guides/persistence/kafka/monitoring.md +++ b/docs/persistence/kafka/how-to/monitoring.md @@ -1,3 +1,7 @@ +--- +tags: [how-to, kafka] +--- + # Kafka metrics This guide will show you how to monitor your Kafka topics with Grafana. diff --git a/docs/how-to-guides/persistence/kafka/remove-access.md b/docs/persistence/kafka/how-to/remove-access.md similarity index 78% rename from docs/how-to-guides/persistence/kafka/remove-access.md rename to docs/persistence/kafka/how-to/remove-access.md index 5f64b9534..a73b68410 100644 --- a/docs/how-to-guides/persistence/kafka/remove-access.md +++ b/docs/persistence/kafka/how-to/remove-access.md @@ -1,10 +1,14 @@ +--- +tags: [how-to, kafka] +--- + # Remove access to Kafka This guide will show you how to remove your application's access to a Kafka topic. -## 1. Remove ACLs from the topic +## Remove ACLs from the topic [Remove the ACL](manage-acl.md) that grants your application access to the topic. -## 2. Remove the kafka resource from your application +## Remove the kafka resource from your application ???+ note ".nais/app.yaml" ```yaml hl_lines="9-10" @@ -20,9 +24,9 @@ This guide will show you how to remove your application's access to a Kafka topi pool: # TODO: link to available tenant pools ``` -## 3. Apply the application +## Apply the application === "Automatically" - Add the file to your application repository to deploy with [NAIS github action](../../github-action.md). + Add the file to your application repository to deploy with [NAIS github action](../../../build/how-to/build-and-deploy.md). 
=== "Manually" ```bash kubectl apply -f ./nais/app.yaml --namespace= --context= diff --git a/docs/how-to-guides/persistence/kafka/schema-operations.md b/docs/persistence/kafka/how-to/schema-operations.md similarity index 98% rename from docs/how-to-guides/persistence/kafka/schema-operations.md rename to docs/persistence/kafka/how-to/schema-operations.md index df94a339c..ccd3a9e39 100644 --- a/docs/how-to-guides/persistence/kafka/schema-operations.md +++ b/docs/persistence/kafka/how-to/schema-operations.md @@ -1,3 +1,7 @@ +--- +tags: [how-to, kafka] +--- + # Avro and schema This guide will show you how to do various schema operations on your Kafka topics. diff --git a/docs/reference/kafka.md b/docs/persistence/kafka/reference/environment-variables.md similarity index 56% rename from docs/reference/kafka.md rename to docs/persistence/kafka/reference/environment-variables.md index 01449918c..152b06f7e 100644 --- a/docs/reference/kafka.md +++ b/docs/persistence/kafka/reference/environment-variables.md @@ -1,11 +1,13 @@ -# Kafka +--- +tags: [kafka, reference] +--- -## Environment variables for Kafka +# Environment variables for Kafka These variables are made available to your application when Kafka is enabled. | Variable name | Description | -| :------------------------------- | :------------------------------------------------------------------------- | +|:---------------------------------|:---------------------------------------------------------------------------| | `KAFKA_BROKERS` | Comma-separated list of HOST:PORT pairs to Kafka brokers | | `KAFKA_SCHEMA_REGISTRY` | URL to schema registry | | `KAFKA_SCHEMA_REGISTRY_USER` | Username to use with schema registry | @@ -20,19 +22,3 @@ These variables are made available to your application when Kafka is enabled. 
| `KAFKA_KEYSTORE_PATH` | PKCS\#12 keystore for use with Java clients, as file | | `KAFKA_TRUSTSTORE_PATH` | JKS truststore for use with Java clients, as file | | `AIVEN_SECRET_UPDATED` | A timestamp of when the secret was created | - - -## Authentication and authorization - -The NAIS platform will generate new credentials when your applications is deployed. Kafka requires TLS client certificates for authentication. Make sure your Kafka and/or TLS library can do client certificate authentication, and that you can specify a custom CA certificate for server validation. - -## Readiness and liveness - -Making proper use of liveness and readiness probes can help with many situations. -If producing or consuming Kafka messages are a vital part of your application, you should consider failing one or both probes if you have trouble with Kafka connectivity. -Depending on your application, failing liveness might be the proper course of action. -This will make sure your application is restarted when it is experiencing problems, which might help. - -In other cases, failing just the readiness probe will allow your application to continue running, attempting to move forward without being killed. -Failing readiness will be most helpful during deployment, where the old instances will keep running until the new are ready. -If the new instances are not able to connect to Kafka, keeping the old ones until the problem is resolved will allow your application to continue working. 
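The readiness guidance in the section removed above (fail readiness when Kafka connectivity is lost, so old instances keep serving during a bad rollout) can be sketched as a small connectivity tracker. Everything here is illustrative: the class, method names, and the 60-second silence threshold are invented, and the clock is injectable only to make the sketch testable.

```python
import time


class KafkaHealth:
    """Track the last successful Kafka operation so a readiness endpoint can
    start failing once connectivity has been silent for too long.

    Illustrative sketch: threshold and names are arbitrary choices, not a
    platform API.
    """

    def __init__(self, max_silence_seconds: float = 60.0, clock=time.monotonic):
        self._max_silence = max_silence_seconds
        self._clock = clock
        self._last_ok = clock()

    def record_success(self) -> None:
        # Call after each successful poll, send, or offset commit.
        self._last_ok = self._clock()

    def is_ready(self) -> bool:
        # Ready while the last success falls within the silence window.
        return (self._clock() - self._last_ok) <= self._max_silence
```

A readiness HTTP handler would then return 200 while `is_ready()` is true and 503 otherwise; whether to wire the same signal into liveness depends on whether a restart is likely to help, as the removed text discusses.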
diff --git a/docs/reference/kafka-topic-example.md b/docs/persistence/kafka/reference/kafka-topic-example.md similarity index 96% rename from docs/reference/kafka-topic-example.md rename to docs/persistence/kafka/reference/kafka-topic-example.md index 145b1e425..b8099c6f8 100644 --- a/docs/reference/kafka-topic-example.md +++ b/docs/persistence/kafka/reference/kafka-topic-example.md @@ -1,3 +1,7 @@ +--- +tags: [kafka, reference] +--- + # NAIS Topic example YAML team-b-db team-c-app --> team-c-db ``` -Teams are managed in [NAIS Teams](../explanation/team.md). - ## Secure Software Development Lifecycle (Secure SDLC) NAIS is not a complete solution for secure software development lifecycle. It is @@ -115,7 +114,7 @@ provided by NAIS: NAIS provides a secure way to store [secrets] for use by applications. These secrets are encrypted at rest and are only accessible by applications and members of a given team. -[secrets]: ../explanation/secrets.md +[secrets]: ../services/secrets/README.md #### External dependencies @@ -138,7 +137,7 @@ Software Artifacts (SLSA)][slsa]. This means that every application is deployed using a secure supply chain that ensures that the application is deployed in a secure manner. -[slsa]: salsa/README.md +[slsa]: ../services/salsa.md #### Security Policies @@ -175,14 +174,14 @@ The following security policies are enforced by NAIS: Developer access control in NAIS is backed by [Google Cloud IAM][google-iam] and [Kubernetes RBAC][kubernetes-rbac]. The platform is responsible for setting up the necessary roles and permissions in Google Cloud IAM and Kubernetes RBAC -according to the teams registered in [NAIS Teams](../explanation/team.md) by the +according to the teams registered in [NAIS Teams](../explanations/team.md) by the developers. Tools and services provided by the platform to the developers are exposed securely in two ways: 1. 
On a private network only accessible to authenticated users over -[naisdevice](../explanation/naisdevice.md) to trusted devices using secure +[naisdevice](../operate/naisdevice/README.md) to trusted devices using secure [WireGuard][wireguard] VPN tunnels. 1. On a public network behind [Identity Aware Proxy (IAP)][google-iap] which @@ -196,7 +195,7 @@ ensures that all developers are authenticated with their personal user accounts. #### User authentication User authentication and authorization ("authnz") is the responsibility of each -application running on the plattform. Authorization (i.e. "who is allowed to see +application running on the platform. Authorization (i.e. "who is allowed to see and do what under which circumstances") is part of the business logic for each domain, so it makes sense that it is handled by the teams in their apps. Doing authnz right is complicated, so the plattform offers a few tools and services to @@ -205,25 +204,25 @@ assist. We recommend using OIDC to authenticate humans. The platform will (given a few lines of configuration) automatically provision clients at our main identity providers [Azure AD](auth/azure-ad/README.md) (for employees) and -[ID-porten](auth/idporten.md) (for the public). The secrets associated with +[ID-porten](../auth/idporten/README.md) (for the public). The secrets associated with these clients are handled behind the scenes and rotated regularly. To ease handling the OIDC flows we offer our "OIDC as a sidecar" named [Wonderwall](auth/wonderwall.md). For service to service-communication further down in the call chain we offer our own implementation of the "OAuth2 Token Exchange" standard named -[TokenX](auth/tokenx.md). Using TokenX eliminates the need for shared "service +[TokenX](../auth/tokenx/README.md). Using TokenX eliminates the need for shared "service users" with long-lived credentials and wide permissions. For machine to machine-communication between government agencies "Maskinporten" is widely used. 
The platform offers [the same type of -support](auth/maskinporten/README.md) for integrating with Maskinporten as we do +support](../auth/maskinporten/README.md) for integrating with Maskinporten as we do for the other OIDC/OAuth uses cases mentioned above. #### Network security Network security in NAIS is achieved by [Access -Policy](../how-to-guides/access-policies.md) that is backed by [Kubernetes +Policy](../workloads/how-to/access-policies.md) that is backed by [Kubernetes Network Policies][kubernetes-network-policies]. This controls traffic between pods in the same cluster as well as outgoing traffic from the cluster. @@ -243,11 +242,11 @@ Distributed Denial of Service (DDoS) protection. #### Logging and monitoring Application logs are collected and stored in -[Kibana](../explanation/observability/logging.md) together with infrastructure +[Kibana](../observability/logging/README.md) together with infrastructure components running in the cluster. Application metrics are collected and stored in -[Prometheus](../explanation/observability/metrics.md) together with infrastructure +[Prometheus](../observability/metrics/README.md) together with infrastructure components running in the cluster. Infrastructure, network flow logs and IAM audit logs are available from diff --git a/docs/security/auth/.pages b/docs/security/auth/.pages index 9cf044701..4996faf75 100644 --- a/docs/security/auth/.pages +++ b/docs/security/auth/.pages @@ -1,9 +1,6 @@ title: Auth nav: - - concepts.md - - development.md - - azure-ad - - idporten.md - - tokenx.md - - maskinporten - - ... +- README.md +- azure-ad +- wonderwall.md +- ... 
diff --git a/docs/security/auth/README.md b/docs/security/auth/README.md index f3f0ac86a..f6bb895bf 100644 --- a/docs/security/auth/README.md +++ b/docs/security/auth/README.md @@ -1,14 +1,15 @@ --- +tags: [auth, explanation] description: Services and addons to support authentication (AuthN) & authorization (AuthZ) --- # Authentication and Authorization -[OpenID Connect (OIDC)](concepts.md#openid-connect) and [OAuth 2.0](concepts.md#oauth-20) are the preferred specifications to provide end user authentication and ensure secure service-to-service communication for applications running on the platform. +[OpenID Connect (OIDC)](../../auth/explanations/README.md#openid-connect) and [OAuth 2.0](../../auth/explanations/README.md#oauth-20) are the preferred specifications to provide end user authentication and ensure secure service-to-service communication for applications running on the platform. -In short, OpenID Connect is used to delegate end user authentication to a third party, while the OAuth 2.0 protocol can provide signed [tokens](concepts.md#tokens) for service-to-service communication. +In short, OpenID Connect is used to delegate end user authentication to a third party, while the OAuth 2.0 protocol can provide signed [tokens](../../auth/explanations/README.md#tokens) for service-to-service communication. -See the [concepts](concepts.md) pages for an introduction to basic concepts and terms that are referred to throughout this documentation. +See the [concepts](../../auth/explanations/README.md) pages for an introduction to basic concepts and terms that are referred to throughout this documentation. ## How do I sign in or authenticate end users? @@ -16,7 +17,7 @@ This is a usually handled by a server-side component (backend-for-frontend) that **Citizen-facing applications** -Use the [OpenID Connect Authorization Code Flow in ID-porten](idporten.md). +Use the [OpenID Connect Authorization Code Flow in ID-porten](../../auth/idporten/README.md). 
**Employee-facing applications** @@ -26,7 +27,7 @@ Use the [OpenID Connect Authorization Code Flow in Azure AD](azure-ad/usage.md#o ## How do I perform requests on behalf of end-users? -The application receives requests from other [clients](concepts.md#client), authenticated with [Bearer tokens](concepts.md#bearer-token). +The application receives requests from other [clients](../../auth/explanations/README.md#client), authenticated with [Bearer tokens](../../auth/explanations/README.md#bearer-token). An end user initiates these request chains. The application performs requests to other downstream APIs on behalf of this end user. @@ -35,11 +36,11 @@ We must acquire new tokens for each unique downstream API that we need to access The new tokens should: 1. Propagate the original end user's identity -2. Be scoped to the correct downstream API with the correct [`aud` / audience claim](concepts.md#claims-validation) +2. Be scoped to the correct downstream API with the correct [`aud` / audience claim](../../auth/explanations/README.md#claims-validation) **Citizen-facing applications** -Use the [OAuth 2.0 Token Exchange Grant (TokenX)](tokenx.md). +Use the [OAuth 2.0 Token Exchange Grant (TokenX)](../../auth/tokenx/README.md). **Employee-facing applications** @@ -63,15 +64,15 @@ Use the [OAuth 2.0 Client Credentials Grant in Azure AD](azure-ad/usage.md#oauth **External** -Use the [OAuth 2.0 JWT Authorization Grant in Maskinporten](maskinporten/client.md). +Use the [OAuth 2.0 JWT Authorization Grant in Maskinporten](../../auth/maskinporten/README.md). --- ## How do I validate tokens? -The application receives requests from other clients, authenticated with [Bearer tokens](concepts.md#bearer-token). +The application receives requests from other clients, authenticated with [Bearer tokens](../../auth/explanations/README.md#bearer-token). The tokens contain information about the application that performed the request. 
The tokens will also contain information about the original end user, if any. -[Validate the tokens](concepts.md#token-validation) before granting access to the API resource. +[Validate the tokens](../../auth/explanations/README.md#token-validation) before granting access to the API resource. diff --git a/docs/security/auth/azure-ad/.pages b/docs/security/auth/azure-ad/.pages index 5aae50a6d..184e21eb3 100644 --- a/docs/security/auth/azure-ad/.pages +++ b/docs/security/auth/azure-ad/.pages @@ -1,7 +1,8 @@ title: Azure AD nav: - - configuration.md - - usage.md - - sidecar.md - - faq-troubleshooting.md - - ... +- README.md +- configuration.md +- usage.md +- sidecar.md +- faq-troubleshooting.md +- ... diff --git a/docs/security/auth/azure-ad/README.md b/docs/security/auth/azure-ad/README.md index 997638a20..c4105325d 100644 --- a/docs/security/auth/azure-ad/README.md +++ b/docs/security/auth/azure-ad/README.md @@ -1,5 +1,6 @@ --- title: Azure AD +tags: [auth, azure-ad, services] description: Enabling authentication and authorization in internal web applications. --- diff --git a/docs/security/auth/azure-ad/configuration.md b/docs/security/auth/azure-ad/configuration.md index b26f25dbe..ba0d1971e 100644 --- a/docs/security/auth/azure-ad/configuration.md +++ b/docs/security/auth/azure-ad/configuration.md @@ -1,10 +1,13 @@ +--- +tags: [azure-ad] +--- # Configuration ## Spec Minimal example below. -See the complete specification in the [NAIS manifest](../../../reference/application-spec.md#azure). +See the complete specification in the [NAIS manifest](../../../workloads/application/reference/application-spec.md#azure). === "nais.yaml" ```yaml @@ -31,7 +34,7 @@ Azure AD is an external service. The platform automatically configures outbound You do _not_ have to explicitly configure outbound access to Azure AD yourselves in GCP. -If you're on-premises however, you must enable and use [`webproxy`](../../../reference/application-spec.md#webproxy). 
+If you're on-premises however, you must enable and use [`webproxy`](../../../workloads/application/reference/application-spec.md#webproxy). ## Access Policy @@ -116,7 +119,7 @@ Consumers using the [on-behalf-of flow](usage.md#oauth-20-on-behalf-of-grant) wi #### Fine-Grained Group-Based Access Control -If you need more fine-grained access controls, you will need to handle authorization in your application by using the `groups` claim found in the user's [JWT](../concepts.md#jwt). +If you need more fine-grained access controls, you will need to handle authorization in your application by using the `groups` claim found in the user's [JWT](../../../auth/explanations/README.md#jwt). The `groups` claim in user tokens contains a list of [group object IDs](README.md#group-identifier) if and only if: diff --git a/docs/security/auth/azure-ad/faq-troubleshooting.md b/docs/security/auth/azure-ad/faq-troubleshooting.md index f8ec2531d..f63899bc8 100644 --- a/docs/security/auth/azure-ad/faq-troubleshooting.md +++ b/docs/security/auth/azure-ad/faq-troubleshooting.md @@ -1,3 +1,6 @@ +--- +tags: [azure-ad] +--- # FAQ / Troubleshooting This page lists common problems and solutions when authenticating with Azure AD. diff --git a/docs/security/auth/azure-ad/sidecar.md b/docs/security/auth/azure-ad/sidecar.md index 88930d340..17fba4a47 100644 --- a/docs/security/auth/azure-ad/sidecar.md +++ b/docs/security/auth/azure-ad/sidecar.md @@ -1,5 +1,6 @@ --- title: Sidecar +tags: [azure-ad, sidecar] description: Reverse-proxy that handles automatic authentication and login/logout flows for Azure AD. --- @@ -9,7 +10,7 @@ The Azure AD sidecar is a reverse proxy that provides functionality to perform A !!! warning "Availability" - The sidecar is only available in the [Google Cloud Platform](../../../reference/environments.md#google-cloud-platform-gcp) clusters. 
+ The sidecar is only available in the [Google Cloud Platform](../../../workloads/reference/environments.md#google-cloud-platform-gcp) clusters. ## Spec @@ -30,7 +31,7 @@ Minimal example below. enabled: true ``` -See the [NAIS manifest reference](../../../reference/application-spec.md#azuresidecar) for the complete specification. +See the [NAIS manifest reference](../../../workloads/application/reference/application-spec.md#azuresidecar) for the complete specification. The above example will provision a unique Azure AD application and enable a sidecar that uses said application. @@ -41,14 +42,14 @@ For configuration of the Azure AD application itself, see [the Configuration pag !!! info "Prerequisites" - If you're unfamiliar with Azure AD, review the [core concepts](README.md#concepts). - - Ensure that you define at least one [ingress](../../../reference/application-spec.md#ingresses) for your application. + - Ensure that you define at least one [ingress](../../../workloads/application/reference/application-spec.md#ingresses) for your application. - Ensure that you configure [user access](configuration.md#users) for your application. **Users are not granted access by default**. Try out a basic user flow: 1. Visit your application's login endpoint (`https:///oauth2/login`) to trigger a login. 2. After logging in, you should be redirected back to your application. -3. All further requests to your application should now have an `Authorization` header with the user's access token as a [Bearer token](../concepts.md#bearer-token) +3. All further requests to your application should now have an `Authorization` header with the user's access token as a [Bearer token](../../../auth/explanations/README.md#bearer-token) 4. Visit your application's logout endpoint (`https:///oauth2/logout`) to trigger a logout. 5. You will be redirected to Azure AD for logout, and then back to your application's ingress. 6. Success! 
@@ -57,7 +58,7 @@ Try out a basic user flow: ### Token Validation -The sidecar attaches an `Authorization` header with the user's `access_token` as a [Bearer token](../concepts.md#bearer-token), as long as the user is authenticated. +The sidecar attaches an `Authorization` header with the user's `access_token` as a [Bearer token](../../../auth/explanations/README.md#bearer-token), as long as the user is authenticated. It is your responsibility to **validate the token** before granting access to resources. diff --git a/docs/security/auth/azure-ad/usage.md b/docs/security/auth/azure-ad/usage.md index be289697f..3a86af596 100644 --- a/docs/security/auth/azure-ad/usage.md +++ b/docs/security/auth/azure-ad/usage.md @@ -1,3 +1,6 @@ +--- +tags: [azure-ad] +--- # Usage ## Use Cases @@ -35,7 +38,7 @@ graph LR Your application receives requests from a user. These requests contain the user's token, known as the _subject token_. -The token has an audience (`aud`) [claim](../concepts.md#claims-validation) equal to _your own_ client ID. +The token has an audience (`aud`) [claim](../../../auth/explanations/README.md#claims-validation) equal to _your own_ client ID. To access a downstream API _on-behalf-of_ the user, we need a token [scoped](README.md#scopes) to the downstream API. That is, the token's audience must be equal to the _downstream API's_ client ID. @@ -85,7 +88,7 @@ The same principles apply if your application is a downstream API itself and nee - The new token has an audience equal to the downstream API. Your application does not need to validate this token. -3. Consume the downstream API by using the new token as a [Bearer token](../concepts.md#bearer-token). The downstream API [validates the token](#token-validation) and returns a response. +3. Consume the downstream API by using the new token as a [Bearer token](../../../auth/explanations/README.md#bearer-token). The downstream API [validates the token](#token-validation) and returns a response. 4. 
Repeat step 2 and 3 for each unique API that your application consumes. 5. The downstream API(s) may continue the call chain starting from step 1. @@ -144,7 +147,7 @@ That is, the token's audience must be equal to the _downstream API's_ client ID. - The new token has an audience equal to the downstream API. Your application does not need to validate this token. -2. Consume downstream API by using the token as a [Bearer token](../concepts.md#bearer-token). The downstream API [validates the token](#token-validation) and returns a response. +2. Consume downstream API by using the token as a [Bearer token](../../../auth/explanations/README.md#bearer-token). The downstream API [validates the token](#token-validation) and returns a response. 3. Repeat step 1 and 2 for each unique API that your application consumes. 4. The downstream API(s) may continue the call chain by starting from step 1. @@ -163,15 +166,15 @@ These variables are used for acquiring tokens using the [client credentials gran | Name | Description | |:-------------------------------------|:-----------------------------------------------------------------------------------------------------------------| -| `AZURE_APP_CLIENT_ID` | [Client ID](../concepts.md#client-id) that uniquely identifies the application in Azure AD. | -| `AZURE_APP_CLIENT_SECRET` | [Client secret](../concepts.md#client-secret) for the application in Azure AD. | -| `AZURE_APP_JWK` | Optional. [Private JWK](../concepts.md#private-keys) (RSA) for the application. | -| `AZURE_OPENID_CONFIG_TOKEN_ENDPOINT` | `token_endpoint` from the [metadata discovery document](../concepts.md#token-endpoint). | -| `AZURE_APP_WELL_KNOWN_URL` | The well-known URL for the [metadata discovery document](../concepts.md#well-known-url-metadata-document). | +| `AZURE_APP_CLIENT_ID` | [Client ID](../../../auth/explanations/README.md#client-id) that uniquely identifies the application in Azure AD. 
| +| `AZURE_APP_CLIENT_SECRET` | [Client secret](../../../auth/explanations/README.md#client-secret) for the application in Azure AD. | +| `AZURE_APP_JWK` | Optional. [Private JWK](../../../auth/explanations/README.md#private-keys) (RSA) for the application. | +| `AZURE_OPENID_CONFIG_TOKEN_ENDPOINT` | `token_endpoint` from the [metadata discovery document](../../../auth/explanations/README.md#token-endpoint). | +| `AZURE_APP_WELL_KNOWN_URL` | The well-known URL for the [metadata discovery document](../../../auth/explanations/README.md#well-known-url-metadata-document). | `AZURE_APP_WELL_KNOWN_URL` is optional if you're using `AZURE_OPENID_CONFIG_TOKEN_ENDPOINT` directly. -`AZURE_APP_JWK` contains a private key that can be used to sign JWT [_client assertions_](../concepts.md#client-assertion). +`AZURE_APP_JWK` contains a private key that can be used to sign JWT [_client assertions_](../../../auth/explanations/README.md#client-assertion). This is an alternative client authentication method that can be used instead of _client secrets_. For further details, see Microsoft's documentation on [certificate credentials](https://learn.microsoft.com/en-us/azure/active-directory/develop/certificate-credentials). The `aud` claim in the JWT assertions should be set to the value of the `AZURE_OPENID_CONFIG_TOKEN_ENDPOINT` environment variable. @@ -182,18 +185,18 @@ These variables are used for [token validation](#token-validation): | Name | Description | |:-------------------------------|:----------------------------------------------------------------------------------------------------------------| -| `AZURE_APP_CLIENT_ID` | [Client ID](../concepts.md#client-id) that uniquely identifies the application in Azure AD. | -| `AZURE_OPENID_CONFIG_ISSUER` | `issuer` from the [metadata discovery document](../concepts.md#issuer). | -| `AZURE_OPENID_CONFIG_JWKS_URI` | `jwks_uri` from the [metadata discovery document](../concepts.md#jwks-endpoint-public-keys). 
| -| `AZURE_APP_WELL_KNOWN_URL` | The well-known URL for the [metadata discovery document](../concepts.md#well-known-url-metadata-document). | +| `AZURE_APP_CLIENT_ID` | [Client ID](../../../auth/explanations/README.md#client-id) that uniquely identifies the application in Azure AD. | +| `AZURE_OPENID_CONFIG_ISSUER` | `issuer` from the [metadata discovery document](../../../auth/explanations/README.md#issuer). | +| `AZURE_OPENID_CONFIG_JWKS_URI` | `jwks_uri` from the [metadata discovery document](../../../auth/explanations/README.md#jwks-endpoint-public-keys). | +| `AZURE_APP_WELL_KNOWN_URL` | The well-known URL for the [metadata discovery document](../../../auth/explanations/README.md#well-known-url-metadata-document). | `AZURE_APP_WELL_KNOWN_URL` is optional if you're using `AZURE_OPENID_CONFIG_ISSUER` and `AZURE_OPENID_CONFIG_JWKS_URI` directly. ## Token Validation -Verify incoming requests by validating the [Bearer token](../concepts.md#bearer-token) in the `Authorization` header. +Verify incoming requests by validating the [Bearer token](../../../auth/explanations/README.md#bearer-token) in the `Authorization` header. -Always validate the [signature and standard time-related claims](../concepts.md#token-validation). +Always validate the [signature and standard time-related claims](../../../auth/explanations/README.md#token-validation). Additionally, perform the following validations: **Issuer Validation** @@ -201,7 +204,7 @@ Additionally, perform the following validations: Validate that the `iss` claim has a value that is equal to either: 1. the `AZURE_OPENID_CONFIG_ISSUER` [environment variable](#variables-for-validating-tokens), or -2. the `issuer` property from the [metadata discovery document](../concepts.md#well-known-url-metadata-document). +2. the `issuer` property from the [metadata discovery document](../../../auth/explanations/README.md#well-known-url-metadata-document). 
The document is found at the endpoint pointed to by the `AZURE_APP_WELL_KNOWN_URL` environment variable. **Audience Validation** @@ -225,7 +228,7 @@ Validation of these claims is optional. Notable claims: - `azp` (**authorized party**) - - The [client ID](../concepts.md#client-id) of the application that requested the token (this would be your consumer). + - The [client ID](../../../auth/explanations/README.md#client-id) of the application that requested the token (this would be your consumer). - `azp_name` (**authorized party name**) - The value of this claim is the (human-readable) [name](README.md#client-name) of the consumer application that requested the token. - `groups` (**groups**) @@ -271,12 +274,12 @@ Tokens in NAV are v2.0 tokens. ## Local Development -See also the [development overview](../development.md) page. +See also the [development overview](../../../auth/reference/README.md) page. ### Token Generator In many cases, you want to locally develop and test against a secured API in the development environments. -To do so, you need a [token](../concepts.md#bearer-token) to access said API. +To do so, you need a [token](../../../auth/explanations/README.md#bearer-token) to access said API. Use to generate tokens in the development environments. @@ -315,5 +318,5 @@ Then: - For example: `dev-gcp:aura:my-app` 2. You will be redirected to log in at Azure AD (if not already logged in). 3. After logging in, you should be redirected back to the token generator and presented with a JSON response containing an `access_token`. -4. Use the `access_token` as a [Bearer token](../concepts.md#bearer-token) for calls to your API application. +4. Use the `access_token` as a [Bearer token](../../../auth/explanations/README.md#bearer-token) for calls to your API application. 5. Success! 
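The issuer and audience checks described above can be sketched in Python. This is an illustrative sketch only: it assumes the token payload has already been decoded, and signature verification against the keys served at `AZURE_OPENID_CONFIG_JWKS_URI` remains a separate, mandatory step (typically handled by a JOSE library):

```python
import os
import time


def validate_token_claims(claims: dict) -> bool:
    """Check issuer, audience and expiry on a decoded Azure AD token payload.

    Signature verification against AZURE_OPENID_CONFIG_JWKS_URI is a
    separate, mandatory step not shown here.
    """
    # iss must match the issuer from the metadata discovery document
    if claims.get("iss") != os.environ["AZURE_OPENID_CONFIG_ISSUER"]:
        return False
    # aud must contain our own client ID
    aud = claims.get("aud")
    audiences = aud if isinstance(aud, list) else [aud]
    if os.environ["AZURE_APP_CLIENT_ID"] not in audiences:
        return False
    # reject expired tokens (exp is seconds since the Unix epoch)
    return claims.get("exp", 0) > time.time()
```

In practice a JWT library performs these checks for you; the sketch only makes explicit what must be enforced.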
diff --git a/docs/security/auth/idporten.md b/docs/security/auth/idporten.md deleted file mode 100644 index f2951d2d7..000000000 --- a/docs/security/auth/idporten.md +++ /dev/null @@ -1,203 +0,0 @@ ---- -description: Reverse-proxy that handles automatic authentication and login/logout flow public-facing authentication using ID-porten. ---- - -# ID-porten - -[ID-porten](https://docs.digdir.no/docs/idporten/) is a common log-in system used for logging into Norwegian public e-services for citizens. - -NAIS provides a _sidecar_ that integrates with ID-porten, so that you can easily and securely log in and authenticate citizen end-users. - -!!! warning "Availability" - The sidecar is only available in the [Google Cloud Platform](../../reference/environments.md#google-cloud-platform-gcp) clusters. - -## Spec - -!!! danger "Port Configuration" - The sidecar will occupy and use the ports `7564` and `7565`. - - Ensure that you do **not** bind to these ports from your application as they will be overridden. - -Minimal example: - -=== "nais.yaml" - ```yaml - spec: - idporten: - enabled: true - sidecar: - enabled: true - level: idporten-loa-high # optional, default value shown - locale: nb # optional, default value shown - ``` - -See the [NAIS manifest reference](../../reference/application-spec.md#idportensidecar) for the complete specification. - -Ensure that you also define at least one [ingress](../../reference/application-spec.md#ingresses) for your application. - -## Network Connectivity - -ID-porten is an external service. -Outbound access to the ID-porten hosts is automatically configured by the platform. - -You do _not_ have to explicitly configure outbound access to ID-porten yourselves when using the sidecar. - -## Usage - -Try out a basic user flow: - -1. Visit your application's login endpoint (`https:///oauth2/login`) to trigger a login. -2. After logging in, you should be redirected back to your application. -3. 
All further requests to your application should now have an `Authorization` header with the user's access token as a [Bearer token](concepts.md#bearer-token) -4. Visit your application's logout endpoint (`https:///oauth2/logout`) to trigger a logout. -5. You will be redirected to ID-porten for logout, and then back to a preconfigured logout page. -6. Success! - -**See [Wonderwall](wonderwall.md#usage) for further usage details.** - -### Runtime Variables & Credentials - -Your application will automatically be injected with both environment variables and files at runtime. -You can use whichever is most convenient for your application. - -The files are available at the following path: `/var/run/secrets/nais.io/idporten/` - -| Name | Description | -|:--------------------------|:--------------------------------------------------------------------------------------------------------------| -| `IDPORTEN_AUDIENCE` | The expected [audience](concepts.md#token-validation) for access tokens from ID-porten. | -| `IDPORTEN_WELL_KNOWN_URL` | The URL for ID-porten's [OIDC metadata discovery document](concepts.md#well-known-url-metadata-document). | -| `IDPORTEN_ISSUER` | `issuer` from the [metadata discovery document](concepts.md#issuer). | -| `IDPORTEN_JWKS_URI` | `jwks_uri` from the [metadata discovery document](concepts.md#jwks-endpoint-public-keys). | - -These variables are used for [token validation](#token-validation). - -### Security Levels - -ID-porten classifies different user authentication methods into [security levels of assurance](https://docs.digdir.no/docs/idporten/oidc/oidc_protocol_id_token#acr-values). -This is reflected in the `acr` claim for the user's JWTs issued by ID-porten. - -Valid values, in increasing order of assurance levels: - -| Value | Description | Notes | -|:---------------------------|:-----------------------------------------------------------------|:-----------------------| -| `idporten-loa-substantial` | a substantial level of assurance, e.g. 
MinID | Also known as `Level3` | -| `idporten-loa-high` | a high level of assurance, e.g. BankID, Buypass, Commfides, etc. | Also known as `Level4` | - -To configure a default value for _all_ login requests: - -=== "nais.yaml" - ```yaml hl_lines="6" - spec: - idporten: - enabled: true - sidecar: - enabled: true - level: idporten-loa-high - ``` - -**If unspecified, the sidecar will use `idporten-loa-high` as the default value.** - -The sidecar automatically validates and enforces the user's authentication level **matches or exceeds** the application's configured level. -The user's session is marked as unauthenticated if the level is _lower_ than the configured level. - -Example: - -* If the application requires authentication on the `idporten-loa-substantial` level, the sidecar will allow sessions with a level of `idporten-loa-high`. -* The inverse is rejected. That is, applications expecting `idporten-loa-high` authentication will have the sidecar mark sessions at `acr=idporten-loa-substantial` as unauthenticated. - -For runtime control of the value, set the query parameter `level` when redirecting the user to login: - -``` -https:///oauth2/login?level=idporten-loa-high -``` - -### Locales - -ID-porten supports a few different locales for the user interface during authentication. 
- -Valid values shown below: - -| Value | Description | -|:------|:------------------| -| `nb` | Norwegian Bokmål | -| `nn` | Norwegian Nynorsk | -| `en` | English | -| `se` | Sámi | - -To configure a default value for _all_ requests: - -=== "nais.yaml" - ```yaml hl_lines="6" - spec: - idporten: - enabled: true - sidecar: - enabled: true - locale: en - ``` - -**If unspecified, the sidecar will use `nb` as the default value.** - -For runtime control of the value, set the query parameter `locale` when redirecting the user to login: - -``` -https:///oauth2/login?locale=en -``` - -## Token Validation - -The sidecar attaches an `Authorization` header with the user's `access_token` as a [Bearer token](concepts.md#bearer-token), as long as the user is authenticated. - -It is your responsibility to **validate the token** before granting access to resources. - -For any endpoint that requires authentication; **deny access** if the request does not contain a valid Bearer token. - -Always validate the [signature and standard time-related claims](concepts.md#token-validation). -Additionally, perform the following validations: - -**Issuer Validation** - -Validate that the `iss` claim has a value that is equal to either: - -1. the `IDPORTEN_ISSUER` [environment variable](#runtime-variables-credentials), or -2. the `issuer` property from the [metadata discovery document](concepts.md#well-known-url-metadata-document). - The document is found at the endpoint pointed to by the `IDPORTEN_WELL_KNOWN_URL` environment variable. - -**Audience Validation** - -Validate that the `aud` claim is equal to the `IDPORTEN_AUDIENCE` environment variable. - -**Signature Validation** - -Validate that the token is signed with a public key published at the JWKS endpoint. -This endpoint URI can be found in one of two ways: - -1. the `IDPORTEN_JWKS_URI` environment variable, or -2. the `jwks_uri` property from the metadata discovery document. 
- The document is found at the endpoint pointed to by the `IDPORTEN_WELL_KNOWN_URL` environment variable. - -### Other Token Claims - -Other claims may be present in the token. -Validation of these claims is optional. - -Notable claims: - -- `acr` (**Authentication Context Class Reference**) - - The [security level](#security-levels) used for authenticating the end-user. -- `pid` (**personidentifikator**) - - The Norwegian national ID number (fødselsnummer/d-nummer) of the authenticated end user. - -For a complete list of claims, see the [Access Token Reference in ID-porten](https://docs.digdir.no/docs/idporten/oidc/oidc_protocol_access_token#by-value--self-contained-access-token). - -## Next Steps - -The access token provided by the sidecar should only be accepted and used by your application. - -To access other applications, you need a token scoped to the target application. - -For ID-porten, use the [token exchange grant (TokenX)](tokenx.md#exchanging-a-token) to exchange the token for a new token. - -!!! tip "Recommended: JavaScript Library" - - See for an opinionated JavaScript library for token validation and exchange. diff --git a/docs/security/auth/maskinporten/README.md b/docs/security/auth/maskinporten/README.md deleted file mode 100644 index 685c2d5da..000000000 --- a/docs/security/auth/maskinporten/README.md +++ /dev/null @@ -1,45 +0,0 @@ ---- -description: > - Enabling service-to-service authentication with external agencies using Maskinporten. ---- - -# Maskinporten - -[Maskinporten](https://docs.digdir.no/maskinporten_overordnet.html) is a service provided by DigDir that allows API providers to securely enforce server-to-server authorization of their exposed APIs using OAuth 2.0 JWT grants. - -The NAIS platform provides support for declarative registration of Maskinporten resources. These cover two distinct use cases: - -## For API Consumers - -A _client_ allows your application to integrate with Maskinporten to acquire access tokens. 
-These tokens authenticate your application when consuming external APIs that require Maskinporten tokens. - -```mermaid -graph LR - Consumer["API Consumer"] --1. request token---> Maskinporten - Maskinporten --2. issue token---> Consumer - Consumer --3. use token---> API["External API"] -``` - -[:octicons-arrow-right-24: Get started with a Maskinporten Client](client.md) - -## For API Providers - -A _scope_ represents a permission that a given consumer has access to. -In Maskinporten, you can define scopes and grant other organizations access to these scopes. - -```mermaid -graph LR - Provider["API Provider"] --define scope---> Maskinporten -``` - -When a consumer requests a token for a given scope, Maskinporten will enforce authorization checks and only issue a token if the consumer has access to the scope. - -```mermaid -graph LR - Consumer["External Consumer"] --1. request token---> Maskinporten - Maskinporten --2. issue token---> Consumer - Consumer --3. use token---> Provider["API Provider"] -``` - -[:octicons-arrow-right-24: Get started with Maskinporten Scopes](scopes.md) diff --git a/docs/security/auth/maskinporten/client.md b/docs/security/auth/maskinporten/client.md deleted file mode 100644 index 418beeffb..000000000 --- a/docs/security/auth/maskinporten/client.md +++ /dev/null @@ -1,258 +0,0 @@ -# Maskinporten Client - -The NAIS platform provides support for declarative provisioning of Maskinporten clients. - -A [client](../concepts.md#client) allows your application to integrate with Maskinporten to acquire access tokens. -These tokens authenticate your application when consuming external APIs that require Maskinporten tokens.
- -## Spec - -```yaml title="nais.yaml" -spec: - maskinporten: - enabled: true - scopes: - consumes: - - name: "skatt:some.scope" - - name: "nav:some/other/scope" - - # required for on-premises only - webproxy: true -``` - -See the [NAIS manifest reference](../../../reference/application-spec.md#maskinporten) for the complete specification. - -## Network Connectivity - -Maskinporten is an external service. -The platform automatically configures outbound access to the Maskinporten hosts. - -You do _not_ have to explicitly configure outbound access to Maskinporten yourselves in GCP. - -If you're on-premises however, you must enable and use [`webproxy`](../../../reference/application-spec.md#webproxy). - -## Runtime Variables & Credentials - -Your application will automatically be injected with both environment variables and files at runtime. -You can use whichever is most convenient for your application. - -The files are available at the following path: `/var/run/secrets/nais.io/maskinporten/` - -| Name | Description | -|:------------------------------|:----------------------------------------------------------------------------------------------------------| -| `MASKINPORTEN_CLIENT_ID` | [Client ID](../concepts.md#client-id) that uniquely identifies the client in Maskinporten. | -| `MASKINPORTEN_CLIENT_JWK` | [Private JWK](../concepts.md#private-keys) (RSA) for the client. | -| `MASKINPORTEN_SCOPES` | Whitespace-separated string of scopes registered for the client. | -| `MASKINPORTEN_WELL_KNOWN_URL` | The well-known URL for the [metadata discovery document](../concepts.md#well-known-url-metadata-document) | -| `MASKINPORTEN_ISSUER` | `issuer` from the [metadata discovery document](../concepts.md#issuer). | -| `MASKINPORTEN_TOKEN_ENDPOINT` | `token_endpoint` from the [metadata discovery document](../concepts.md#token-endpoint). | - -These variables are used when acquiring tokens. 
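To illustrate, the injected variables can be read straight from the environment; `MASKINPORTEN_SCOPES` is a whitespace-separated string, so splitting it yields the individual scopes. A sketch, assuming the variables have been injected by the platform:

```python
import os


def maskinporten_config() -> dict:
    """Collect the injected Maskinporten runtime variables.

    MASKINPORTEN_SCOPES is whitespace-separated, so splitting it yields
    the individual scopes registered for the client.
    """
    return {
        "client_id": os.environ["MASKINPORTEN_CLIENT_ID"],
        "token_endpoint": os.environ["MASKINPORTEN_TOKEN_ENDPOINT"],
        "scopes": os.environ["MASKINPORTEN_SCOPES"].split(),
    }
```

The same values are also available as files under `/var/run/secrets/nais.io/maskinporten/` if that is more convenient.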
- -## Getting Started - -To consume external APIs, you will need to do three things: - -1. Declare the scopes that you want to consume -2. Acquire tokens from Maskinporten -3. Consume the API using the token - -### 1. Declare Consumer Scopes - -Declare all the scopes that you want to consume in your application's NAIS manifest so that the client is granted access to them: - -```yaml hl_lines="5-7" title="nais.yaml" -spec: - maskinporten: - enabled: true - scopes: - consumes: - - name: "skatt:some.scope" - - name: "nav:some/other/scope" -``` - -The scopes themselves are defined and owned by the external API provider. The exact scope values must thus be exchanged out-of-band. - -Make sure that the provider has granted **NAV** (organization number `889640782`) consumer access to any scopes that you wish to consume. -Provisioning of client will fail otherwise. - -### 2. Acquire Token - -To acquire a token from Maskinporten, you will need to create a JWT grant. - -A JWT grant is a [JWT](../concepts.md#jwt) that is used to [authenticate your client](../concepts.md#client-assertion) with Maskinporten. -The token is signed using a [private key](../concepts.md#private-keys) belonging to your client. - -#### 2.1. Create JWT Grant - -The JWT consists of a **header**, a **payload** and a **signature**. - -The **header** should consist of the following parameters: - -| Parameter | Value | Description | -|:----------|:-----------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| **`kid`** | `` | The key identifier of the [private JWK](../concepts.md#private-keys) used to sign the assertion. The private key is found in the `MASKINPORTEN_CLIENT_JWK` [environment variable](#runtime-variables-credentials). | -| **`typ`** | `JWT` | Represents the type of this JWT. 
Set this to `JWT`. | -| **`alg`** | `RS256` | Represents the cryptographic algorithm used to secure the JWT. Set this to `RS256`. | - -The **payload** should have the following claims: - -| Claim | Example Value | Description | -|:------------|:---------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| **`aud`** | `https://test.maskinporten.no/` | The _audience_ of the token. Set this to the Maskinporten `issuer`, i.e. [`MASKINPORTEN_ISSUER`](#runtime-variables-credentials). | -| **`iss`** | `60dea49a-255b-48b5-b0c0-0974ac1c0b53` | The _issuer_ of the token. Set this to your `client_id`, i.e. [`MASKINPORTEN_CLIENT_ID`](#runtime-variables-credentials). | -| **`scope`** | `nav:test/api` | `scope` is a whitespace-separated list of scopes that you want in the issued token from Maskinporten. | -| **`iat`** | `1698435010` | `iat` stands for _issued at_. Timestamp (seconds after Epoch) for when the JWT was issued or created. | -| **`exp`** | `1698435070` | `exp` is the _expiration time_. Timestamp (seconds after Epoch) for when the JWT is no longer valid. This **must** be less than **120** seconds after `iat`. That is, the maximum lifetime of the token must be no greater than **120 seconds**. | -| **`jti`** | `2d1a343c-6e7d-4ace-ae47-4e77bcb52db9` | The _JWT ID_ of the token. Used to uniquely identify a token. Set this to a UUID or similar. | - -The JWT grant should be unique and only used once. That is, every token request to Maskinporten should have a unique JWT grant: - -- Set the JWT ID (`jti`) claim to a unique value, such as an UUID. -- Set the JWT expiry (`exp`) claim so that the lifetime of the token is reasonably low: - - The _maximum_ lifetime allowed is 120 seconds. - - A lifetime between 10-30 seconds should be fine for most situations. 
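Putting the claim table and the uniqueness rules together, the grant payload can be assembled as follows. This is a hypothetical standard-library sketch only; signing the result with the private JWK from `MASKINPORTEN_CLIENT_JWK` requires a JOSE library, as the full examples that follow show:

```python
import os
import time
import uuid


def jwt_grant_claims(scope: str) -> dict:
    """Assemble the payload claims for a Maskinporten JWT grant.

    The grant must still be signed with the private key from
    MASKINPORTEN_CLIENT_JWK before it can be sent to Maskinporten.
    """
    now = int(time.time())
    return {
        "aud": os.environ["MASKINPORTEN_ISSUER"],    # the audience is Maskinporten itself
        "iss": os.environ["MASKINPORTEN_CLIENT_ID"], # your client issues the grant
        "scope": scope,
        "iat": now,
        "exp": now + 30,           # short lifetime, well under the 120-second maximum
        "jti": str(uuid.uuid4()),  # unique ID so each grant is used only once
    }
```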
- -If the API provider requires the use of an [audience-restricted token](https://docs.digdir.no/maskinporten_func_audience_restricted_tokens.html), you must also include the following claim: - -| Claim | Example Value | Description | -|:---------------|:--------------------------------|:----------------------------------------------------------------------------------------------------------------------------------| -| **`resource`** | `https://api.some-provider.no/` | Target audience for the token returned by Maskinporten. The exact value is defined by the API provider and exchanged out-of-band. | - -Finally, a **signature** is created by hashing the header and payload, and then signing the hash using your client's private key. - -??? example "Example Code for Creating a JWT Grant" - - The sample code below shows how to create and sign a JWT grant in a few different languages: - - === "Kotlin" - - Minimal example code for creating a JWT grant in Kotlin, using [Nimbus JOSE + JWT](https://connect2id.com/products/nimbus-jose-jwt). 
- - ```kotlin linenums="1" - import com.nimbusds.jose.* - import com.nimbusds.jose.crypto.* - import com.nimbusds.jose.jwk.* - import com.nimbusds.jwt.* - import java.time.Instant - import java.util.Date - import java.util.UUID - - val clientId: String = System.getenv("MASKINPORTEN_CLIENT_ID") - val clientJwk: String = System.getenv("MASKINPORTEN_CLIENT_JWK") - val issuer: String = System.getenv("MASKINPORTEN_ISSUER") - val scope: String = "nav:test/api" - val rsaKey: RSAKey = RSAKey.parse(clientJwk) - val signer: RSASSASigner = RSASSASigner(rsaKey.toPrivateKey()) - - val header: JWSHeader = JWSHeader.Builder(JWSAlgorithm.RS256) - .keyID(rsaKey.keyID) - .type(JOSEObjectType.JWT) - .build() - - val now: Date = Date.from(Instant.now()) - val expiration: Date = Date.from(Instant.now().plusSeconds(60)) - val claims: JWTClaimsSet = JWTClaimsSet.Builder() - .issuer(clientId) - .audience(issuer) - .issueTime(now) - .claim("scope", scope) - .expirationTime(expiration) - .jwtID(UUID.randomUUID().toString()) - .build() - - val jwtGrant: String = SignedJWT(header, claims) - .apply { sign(signer) } - .serialize() - ``` - - === "Python" - - Minimal example code for creating a JWT grant in Python, using [PyJWT](https://github.com/jpadilla/pyjwt). - - ```python linenums="1" - import json, jwt, os, uuid - from datetime import datetime, timezone, timedelta - from jwt.algorithms import RSAAlgorithm - - issuer = os.getenv('MASKINPORTEN_ISSUER') - jwk = os.getenv('MASKINPORTEN_CLIENT_JWK') - client_id = os.getenv('MASKINPORTEN_CLIENT_ID') - - header = { - "kid": json.loads(jwk)['kid'] - } - - payload = { - "aud": issuer, - "iss": client_id, - "scope": "nav:test/api", - "iat": datetime.now(tz=timezone.utc), - "exp": datetime.now(tz=timezone.utc)+timedelta(minutes=1), - "jti": str(uuid.uuid4()) - } - - private_key = RSAAlgorithm.from_jwk(jwk) - grant = jwt.encode(payload, private_key, "RS256", header) - ``` - -#### 2.2. 
Request Token from Maskinporten - -**Request** - -The token request is an HTTP POST request. -It should have the `Content-Type` set to `application/x-www-form-urlencoded` - -The body of the request should contain the following parameters: - -| Parameter | Value | Description | -|:-------------|:----------------------------------------------|:-------------------------------------------------------------------------------------------| -| `grant_type` | `urn:ietf:params:oauth:grant-type:jwt-bearer` | Type of grant the client is sending. Always `urn:ietf:params:oauth:grant-type:jwt-bearer`. | -| `assertion` | `eyJraWQ...` | The JWT grant itself. | - -Send the request to the `token_endpoint`, i.e. [`MASKINPORTEN_TOKEN_ENDPOINT`](#runtime-variables-credentials): - -```http -POST ${MASKINPORTEN_TOKEN_ENDPOINT} HTTP/1.1 -Content-Type: application/x-www-form-urlencoded - -grant_type=urn:ietf:params:oauth:grant-type:jwt-bearer& -assertion=eY... -``` - -**Response** - -Maskinporten will respond with a JSON object. - -```json -{ - "access_token": "eyJraWQ...", - "token_type": "Bearer", - "expires_in": 3599, - "scope": "nav:test/api" -} -``` - -| Parameter | Description | -|:------------------------|:---------------------------------------------------------------------------------------------------------------------| -| `access_token` | The access token that you may use to consume an external API. | -| `token_type` | The token type. Should always be `Bearer`. | -| `expires_in` | The lifetime of the token in seconds. Cache and reuse the token until it expires to minimize network latency impact. | -| `scope` | A list of scopes issued in the access token. | - -See the [Maskinporten token documentation](https://docs.digdir.no/docs/Maskinporten/maskinporten_protocol_token) for more details. - -### 3. Consume API - -Once you have acquired the token, you can finally consume the external API. 
- -Use the token in the `Authorization` header as a [Bearer token](../concepts.md#bearer-token): - -```http -GET /resource HTTP/1.1 - -Host: api.example.com -Authorization: Bearer eyJraWQ... -``` - -Success! diff --git a/docs/security/auth/maskinporten/scopes.md b/docs/security/auth/maskinporten/scopes.md deleted file mode 100644 index 94ff31b14..000000000 --- a/docs/security/auth/maskinporten/scopes.md +++ /dev/null @@ -1,252 +0,0 @@ -# Maskinporten Scopes - -A _scope_ represents a permission that a given consumer has access to. -In Maskinporten, you can define scopes and grant other organizations access to these scopes. - -As an API provider, you are fully responsible for defining the granularity of access and authorization associated with a given scope. - -An external consumer that has been granted access to your scopes may then acquire an `access_token` using a Maskinporten client that belongs to their organization. -[Clients registered via NAIS](client.md) belong to NAV and may only be used by NAV. - -## Spec - -Example configuration: - -```yaml title="nais.yaml" -spec: - maskinporten: - enabled: true - scopes: - exposes: - - name: "some.scope.read" - enabled: true - product: "arbeid" - consumers: - - orgno: "123456789" -``` - -See the [NAIS manifest](../../../reference/application-spec.md#maskinporten) for the complete specification. - -## Network Connectivity - -Maskinporten is an external service. -The platform automatically configures outbound access to the Maskinporten hosts. - -You do _not_ have to explicitly configure outbound access to Maskinporten yourselves in GCP. - -## Runtime Variables & Credentials - -Your application will automatically be injected with both environment variables and files at runtime. -You can use whichever is most convenient for your application. 
- -The files are available at the following path: `/var/run/secrets/nais.io/maskinporten/` - -| Name | Description | -|:------------------------------|:--------------------------------------------------------------------------------------------------------------| -| `MASKINPORTEN_WELL_KNOWN_URL` | The well-known URL for the [metadata discovery document](../concepts.md#well-known-url-metadata-document) | -| `MASKINPORTEN_ISSUER` | `issuer` from the [metadata discovery document](../concepts.md#issuer). | -| `MASKINPORTEN_JWKS_URI` | `jwks_uri` from the [metadata discovery document](../concepts.md#jwks-endpoint-public-keys). | - -These variables are used when validating tokens issued by Maskinporten. - -## Getting Started - -As an API provider, you will need to do three things: - -1. Define the scopes that you want to expose to other organizations -2. Expose your application to the external consumers -3. Validate tokens in requests from external consumers - -### 1. Define Scopes - -Declare all the scopes that you want to expose in your application's NAIS manifest: - -```yaml title="nais.yaml" hl_lines="5-11" -spec: - maskinporten: - enabled: true - scopes: - exposes: - - name: "some.scope.read" - enabled: true - product: "arbeid" - - name: "some.scope.write" - enabled: true - product: "arbeid" -``` - -Grant the external consumer access to the scopes by specifying their organization number: - -```yaml title="nais.yaml" hl_lines="8-9" -spec: - maskinporten: - enabled: true - scopes: - exposes: - - name: "some.scope.read" - ... - consumers: - - orgno: "123456789" -``` - -### 2. Expose Application - -Expose your application to the consumer(s) at a publicly accessible ingress. - -### 3. Validate Tokens - -Verify incoming requests from the external consumer(s) by validating the [Bearer token](../concepts.md#bearer-token) in the `Authorization` header. - -Always validate the [signature and standard time-related claims](../concepts.md#token-validation). 
-Additionally, perform the following validations: - -**Issuer Validation** - -Validate that the `iss` claim has a value that is equal to either: - -1. the `MASKINPORTEN_ISSUER` [environment variable](#runtime-variables-credentials), or -2. the `issuer` property from the [metadata discovery document](../concepts.md#well-known-url-metadata-document). - The document is found at the endpoint pointed to by the `MASKINPORTEN_WELL_KNOWN_URL` environment variable. - -**Scope Validation** - -Validate that the `scope` claim contains the expected scope(s). -The `scope` claim is a string that contains a whitespace-separated list of scopes. - -Continuing from the previous examples, you would validate that the `scope` claim contains at least one of: - -- `nav:arbeid:some.scope.read` or -- `nav:arbeid:some.scope.write` - -**Audience Validation** - -The `aud` claim is not included by default in Maskinporten tokens and does not need to be validated. -It is only included if the consumer has requested an [audience-restricted token](https://docs.digdir.no/maskinporten_func_audience_restricted_tokens.html). - -Only validate the `aud` claim if you want to require your consumers to use audience-restricted tokens. -The expected audience value is up to you to define and must be communicated to your consumers. -The value must be an absolute URI (such as `https://some-provider.no` or `https://some-provider.no/api`). - -**Signature Validation** - -Validate that the token is signed with a public key published at the JWKS endpoint. -This endpoint URI can be found in one of two ways: - -1. the `MASKINPORTEN_JWKS_URI` environment variable, or -2. the `jwks_uri` property from the metadata discovery document. - The document is found at the endpoint pointed to by the `MASKINPORTEN_WELL_KNOWN_URL` environment variable. - -## Other Token Claims - -Other claims may be present in the token. -Validation of these claims is optional. 
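Stepping back to the issuer and scope validations described above: both amount to a few lines of code. A minimal sketch in Python, assuming the token's signature and time-related claims have already been verified elsewhere and the claims decoded into a dict (the function and constant names are illustrative):

```python
# Scopes this API expects, following the earlier examples
EXPECTED_SCOPES = {"nav:arbeid:some.scope.read", "nav:arbeid:some.scope.write"}

def validate_maskinporten_claims(claims: dict, expected_issuer: str) -> bool:
    """Issuer and scope checks only; signature and exp/iat/nbf are
    assumed to be validated elsewhere (e.g. by a JOSE library)."""
    if claims.get("iss") != expected_issuer:
        return False
    # `scope` is a single string containing a whitespace-separated list
    granted = set(claims.get("scope", "").split())
    return bool(granted & EXPECTED_SCOPES)
```

Here `expected_issuer` would typically be read from the `MASKINPORTEN_ISSUER` environment variable.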
- -See the [Access Token Reference in Maskinporten](https://docs.digdir.no/docs/Maskinporten/maskinporten_protocol_token#the-access-token) for a list of all claims. - -## Scope Naming - -All scopes within Maskinporten consist of a **prefix** and a **subscope**: - -```text -scope := <prefix>:<subscope> -``` - -For example: - -```text -scope := nav:trygdeopplysninger -``` - -The **prefix** for all scopes provisioned through NAIS will always be `nav`. - -A **subscope** should describe the resource to be exposed as accurately as possible. -It consists of three parts: **product**, **separator** and **name**: - -```text -subscope := <product><separator><name> -``` - -The **name** may also be _postfixed_ to separate between access levels. -For instance, you could separate between `write` access: - -```text -name := trygdeopplysninger.write -``` - -...and `read` access: - -```text -name := trygdeopplysninger.read -``` - -Absence of a postfix should generally be treated as strictly `read` access. - -=== "Example scope" - - If **name** does not contain any `/` (forward slash), the **separator** is set to `:` (colon). - - For the following scope: - - ```yaml title="nais.yaml" hl_lines="5-11" - spec: - maskinporten: - enabled: true - scopes: - exposes: - - name: "some.scope.read" - enabled: true - product: "arbeid" - ``` - - - **product** is set to `arbeid` - - **name** is set to `some.scope.read` - - The subscope is then: - - ```text - subscope := arbeid:some.scope.read - ``` - - which results in the scope: - - ```text - scope := nav:arbeid:some.scope.read - ``` - -=== "Example scope with forward slash" - - If **name** contains a `/` (forward slash), the **separator** is set to `/` (forward slash). 
- - For the following scope: - - ```yaml title="nais.yaml" hl_lines="5-11" - spec: - maskinporten: - enabled: true - scopes: - exposes: - - name: "some/scope.read" - enabled: true - product: "arbeid" - ``` - - - **product** is set to `arbeid` - - **name** is set to `some/scope.read` - - The subscope is then: - - ```text - subscope := arbeid/some/scope.read - ``` - - which results in the scope: - - ```text - scope := nav:arbeid/some/scope.read - ``` - -## Delegation of scopes - -[Delegation of scopes](https://docs.digdir.no/docs/Maskinporten/maskinporten_func_delegering) is not supported. - -If you need a scope with delegation, please see the [IaC repository](https://github.com/navikt/nav-maskinporten). diff --git a/docs/security/auth/tokenx.md b/docs/security/auth/tokenx.md deleted file mode 100644 index 459f8d5da..000000000 --- a/docs/security/auth/tokenx.md +++ /dev/null @@ -1,342 +0,0 @@ ---- -description: Enabling zero trust on the application layer ---- - -# TokenX - -TokenX is our implementation of OAuth 2.0 Token Exchange. - -For each hop in a request chain, a token is exchanged for a new token. The new token is scoped to a single target application and contains the original end-user identity. - -There are primarily two distinct cases where one must use TokenX: - -1. You have a user-facing app using [ID-porten](idporten.md) that should perform calls to another app on behalf of a user. -2. You have an app receiving tokens issued from TokenX and need to call another app while still propagating the original user context. - -## Configuration - -### Spec - -See the [NAIS manifest](../../reference/application-spec.md#tokenxenabled). 
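Circling back to the Maskinporten scope-naming rules above (a `nav` prefix, and a separator that flips from `:` to `/` whenever the name contains a forward slash): they can be condensed into a small helper. A sketch; the function name is illustrative:

```python
def build_scope(product: str, name: str, prefix: str = "nav") -> str:
    """Compose a full Maskinporten scope from product and name.
    The separator is ':' unless the name contains '/', in which case
    it is '/' (matching the two tabbed examples above)."""
    separator = "/" if "/" in name else ":"
    return f"{prefix}:{product}{separator}{name}"
```

`build_scope("arbeid", "some.scope.read")` yields `nav:arbeid:some.scope.read`, matching the first worked example.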
- -### Getting Started - -=== "nais.yaml" - ```yaml - spec: - tokenx: - enabled: true - accessPolicy: - inbound: - rules: - - application: app-2 - - application: app-3 - namespace: team-a - - application: app-4 - namespace: team-b - cluster: prod-gcp - ``` - -### Access Policies - -In order for other applications to acquire a token targeting your application, you **must explicitly** specify inbound access policies that authorize these other applications. - -Thus, the access policies define _authorization_ on the application layer, and are enforced by Tokendings on token exchange operations. - -For example: - -```yaml -spec: - tokenx: - enabled: true - accessPolicy: - inbound: - rules: - - application: app-1 - - application: app-2 - namespace: team-a - - application: app-3 - namespace: team-b - cluster: prod-gcp -``` - -The above configuration authorizes the following applications: - -* application `app-1` running in the **same namespace** and **same cluster** as your application -* application `app-2` running in the namespace `team-a` in the **same cluster** -* application `app-3` running in the namespace `team-b` in the cluster `prod-gcp` - -## Usage - -### Runtime Variables & Credentials - -Your application will automatically be injected with both environment variables and files at runtime. You can use whichever is most convenient for your application. - -The files are available at the following path: `/var/run/secrets/nais.io/jwker/`. - -#### Variables for Exchanging Tokens - -These variables are used for [client authentication](tokenx.md#client-authentication) and [exchanging tokens](tokenx.md#exchanging-a-token): - -| Name | Description | -|:-------------------------|:---------------------------------------------------------------------------------------------------------| -| `TOKEN_X_CLIENT_ID` | [Client ID](concepts.md#client-id) that uniquely identifies the application in TokenX. 
| -| `TOKEN_X_PRIVATE_JWK` | [Private JWK](concepts.md#private-keys) containing an RSA key belonging to client. | -| `TOKEN_X_TOKEN_ENDPOINT` | `token_endpoint` from the [metadata discovery document](concepts.md#token-endpoint). | - -#### Variables for Validating Tokens - -These variables are used for [token validation](tokenx.md#token-validation): - -| Name | Description | -|:-------------------------|:-----------------------------------------------------------------------------------------------------| -| `TOKEN_X_CLIENT_ID` | [Client ID](concepts.md#client-id) that uniquely identifies the application in TokenX. | -| `TOKEN_X_WELL_KNOWN_URL` | The URL for Tokendings' [metadata discovery document](concepts.md#well-known-url-metadata-document). | -| `TOKEN_X_ISSUER` | `issuer` from the [metadata discovery document](concepts.md#issuer). | -| `TOKEN_X_JWKS_URI` | `jwks_uri` from the [metadata discovery document](concepts.md#jwks-endpoint-public-keys). | - -### Client Authentication - -Your application **must** authenticate itself with Tokendings when attempting to perform token exchanges. To do so, you must create a [client assertion](concepts.md#client-assertion). - -Create a [JWT](concepts.md#jwt) that is signed by your application using the [private key](concepts.md#private-keys) contained within [`TOKEN_X_PRIVATE_JWK`](tokenx.md#variables-for-exchanging-tokens). - -The assertion **must** contain the following claims: - -| Claim | Example Value | Description | -|:----------|:-------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| **`sub`** | `dev-gcp:aura:app-a` | The _subject_ of the token. Must be set to your application's own `client_id`, i.e. [`TOKEN_X_CLIENT_ID`](tokenx.md#variables-for-exchanging-tokens). 
| -| **`iss`** | `dev-gcp:aura:app-a` | The _issuer_ of the token. Must be set to your application's own `client_id`, i.e. [`TOKEN_X_CLIENT_ID`](tokenx.md#variables-for-exchanging-tokens). | -| **`aud`** | `https://tokenx.dev-gcp.nav.cloud.nais.io/token` | The _audience_ of the token. Must be set to Tokendings' `token_endpoint`, i.e. [`TOKEN_X_TOKEN_ENDPOINT`](tokenx.md#variables-for-exchanging-tokens). | -| **`jti`** | `83c580a6-b479-426d-876b-267aa9848e2f` | The _JWT ID_ of the token. Used to uniquely identify a token. Set this to a UUID or similar. | -| **`nbf`** | `1597783152` | `nbf` stands for _not before_. It identifies the time \(seconds after Epoch\) before which the JWT MUST NOT be accepted for processing. | -| **`iat`** | `1597783152` | `iat` stands for _issued at_. It identifies the time \(seconds after Epoch\) in which the JWT was issued \(or created\). | -| **`exp`** | `1597783272` | `exp` is the _expiration time_ \(seconds after Epoch\) of the token. This **must** not be more than **120** seconds after `nbf` and `iat`. That is, the maximum lifetime of the token must be no greater than **120 seconds**. | - -Additionally, the headers of the assertion must contain the following parameters: - -| Parameter | Value | Description | -|:----------|:---------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| **`kid`** | `93ad09a5-70bc-4858-bd26-5ff4a0c5f73f` | The key identifier of the key used to sign the assertion. This identifier is available in the JWK found in [`TOKEN_X_PRIVATE_JWK`](tokenx.md#variables-for-exchanging-tokens). | -| **`typ`** | `JWT` | Represents the type of this JWT. Set this to `JWT`. | -| **`alg`** | `RS256` | Represents the cryptographic algorithm used to secure the JWT. Set this to `RS256`. | - -The assertion should be unique and only used once. 
That is, every request to Tokendings should contain a unique client assertion: - -- Set the JWT ID (`jti`) claim to a unique value, such as a UUID. -- Set the JWT expiry (`exp`) claim so that the lifetime of the token is reasonably low: - - The _maximum_ lifetime allowed is 120 seconds. - - A lifetime between 10 and 30 seconds should be fine for most situations. - -#### Example Client Assertion Values - -**Header** - -```json -{ - "kid": "93ad09a5-70bc-4858-bd26-5ff4a0c5f73f", - "typ": "JWT", - "alg": "RS256" -} -``` - -**Payload** - -```json -{ - "sub": "prod-gcp:namespace-gcp:gcp-app", - "aud": "https://tokenx.dev-gcp.nav.cloud.nais.io/token", - "nbf": 1592508050, - "iss": "prod-gcp:namespace-gcp:gcp-app", - "exp": 1592508171, - "iat": 1592508050, - "jti": "fd9717d3-6889-4b22-89b8-2626332abf14" -} -``` - -### Exchanging a token - -To acquire a properly scoped token for a given target application, you must exchange an existing _subject token_ (i.e. a token that contains a subject, in this case a citizen end-user). - -Tokendings will then issue an `access_token` in JWT format, based on the parameters set in the token request. The token can then be used as a [**Bearer token**](concepts.md#bearer-token) in the Authorization header when calling your target API on behalf of the aforementioned subject. - -#### Prerequisites - -* You have a _subject token_ in the form of an `access_token` issued by one of the following providers: - - [ID-porten](idporten.md) - - Tokendings -* You have a [client assertion](tokenx.md#client-authentication) that _authenticates_ your application. - -#### Exchange Request - -The following denotes the required parameters needed to perform an exchange request. 
- -| Parameter | Value | Comment | -|:------------------------|:---------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| `grant_type` | `urn:ietf:params:oauth:grant-type:token-exchange` | The identifier of the OAuth 2.0 grant to use, in this case the OAuth 2.0 Token Exchange grant. This grant allows applications to exchange one token for a new one containing much of the same information while still being correctly "scoped" in terms of OAuth. | -| `client_assertion_type` | `urn:ietf:params:oauth:client-assertion-type:jwt-bearer` | Identifies the type of _assertion_ the client/application will use to authenticate itself to Tokendings, in this case a JWT. | -| `client_assertion` | A serialized JWT identifying the calling app | The [client assertion](tokenx.md#client-authentication); a JWT signed by the calling client/application used to identify said client/application. | -| `subject_token_type` | `urn:ietf:params:oauth:token-type:jwt` | Identifies the type of token that will be exchanged with a new one, in this case a JWT. | -| `subject_token` | A serialized JWT, the token that should be exchanged | The actual token \(JWT\) containing the signed-in user. Should be an `access_token`. | -| `audience` | The identifier of the app you wish to use the token for | Identifies the intended audience for the resulting token, i.e. the target app you request a token for. This value shall be the `client_id` of the target app using the naming scheme `<cluster>:<namespace>:<app>`, e.g. `prod-fss:namespace1:app1` | - -Send the request to the `token_endpoint`, i.e. [`TOKEN_X_TOKEN_ENDPOINT`](tokenx.md#variables-for-exchanging-tokens). 
- -???+ example - ```http - POST /token HTTP/1.1 - Host: tokenx.prod-gcp.nav.cloud.nais.io - Content-Type: application/x-www-form-urlencoded - - grant_type=urn:ietf:params:oauth:grant-type:token-exchange& - client_assertion_type=urn:ietf:params:oauth:client-assertion-type:jwt-bearer& - client_assertion=eY...............& - subject_token_type=urn:ietf:params:oauth:token-type:jwt& - subject_token=eY...............& - audience=prod-fss:namespace1:app1 - ``` - -#### Exchange Response - -Tokendings will respond with a JSON object - -???+ example - ```json - { - "access_token" : "eyJraWQiOi..............", - "issued_token_type" : "urn:ietf:params:oauth:token-type:access_token", - "token_type" : "Bearer", - "expires_in" : 899 - } - ``` - -The `expires_in` field denotes the lifetime of the token in seconds. - -Cache and reuse the token until it expires to minimize network latency impact. - -A safe cache key is `key = sha256($subject_token + $audience)`. - -#### Exchange Error Response - -If the exchange request is invalid, Tokendings will respond with a structured error, as specified in -[RFC 8693, Section 2.2.2](https://www.rfc-editor.org/rfc/rfc8693.html#name-error-response): - -???+ example - ```json - { - "error_description" : "token exchange audience is invalid", - "error" : "invalid_request" - } - ``` - -### Token Validation - -If your app is a [resource server / API](concepts.md#resource-server) and receives a token from another application, it is **your responsibility** to [validate the token](concepts.md#token-validation) intended for your application. - -Configure your app with the OAuth 2.0 Authorization Server Metadata found at the well-known endpoint, [`TOKEN_X_WELL_KNOWN_URL`](tokenx.md#variables-for-validating-tokens). 
-Alternatively, use the resolved values from said endpoint for convenience: - -- [`TOKEN_X_ISSUER`](tokenx.md#variables-for-validating-tokens) -- [`TOKEN_X_JWKS_URI`](tokenx.md#variables-for-validating-tokens) - -#### Signature Verification - -* The token should be signed with the `RS256` algorithm (defined in JWT header). Tokens not matching this algorithm should be rejected. -* Verify that the signature is correct. - * The issuer's signing keys can be retrieved from the JWK Set (JWKS) at the `jwks_uri`, i.e. [`TOKEN_X_JWKS_URI`](tokenx.md#variables-for-validating-tokens). - * The `kid` attribute in the token header is thus a reference to a key contained within the JWK Set. - * The token signature should be verified against the public key in the matching JWK. - -#### Claims - -The following claims are by default provided in the issued token and should explicitly be validated: - -* `iss` \(**issuer**\): The issuer of the token **must match exactly** with the Tokendings issuer URI ([`TOKEN_X_ISSUER`](tokenx.md#variables-for-validating-tokens)). -* `aud` \(**audience**\): The intended audience for the token, **must match** your application's `client_id` ([`TOKEN_X_CLIENT_ID`](tokenx.md#variables-for-validating-tokens)). -* `exp` \(**expiration time**\): Expiration time, i.e. tokens received after this date **must be rejected**. -* `nbf` \(**not before time**\): The token cannot be used before this time, i.e. if the token is issued in the "future" (outside "reasonable" clock skew) it **must be rejected**. -* `iat` \(**issued at time**\): The time at which the token has been issued. **Must be before `exp`**. -* `sub` \(**subject**\): If applicable, used in user centric access control. This represents a unique identifier for the user. - -Other non-standard claims (with some exceptions, see the [claim mappings](#claim-mappings) section) in the token are copied verbatim from the original token issued by `idp` (the original issuer of the subject token). 
-For example, the claim used for the personal identifier (_personidentifikator_) for tokens issued by ID-porten is `pid`. - -#### Claim Mappings - -Some claims are mapped to a different value for legacy/compatibility reasons, depending on the original issuer (`idp`). - -The table below shows the claim mappings: - -| Claim | Original Value | Mapped Value | -|:------|:---------------------------|:--------------| -| `acr` | `idporten-loa-substantial` | `Level3` | -| `acr` | `idporten-loa-high` | `Level4` | - -This currently only affects tokens from ID-porten, i.e. `idp=https://test.idporten.no` or `idp=https://idporten.no`. - -The mappings will be removed at some point in the future. -If you're using the `acr` claim in any way, check for both the original and mapped values. - -#### Example Token (exchanged from ID-porten) - -The following example shows the claims of a token issued by Tokendings, where the exchanged subject token is issued by [ID-porten](idporten.md): - -???+ example - ```json - { - "at_hash": "x6lQGCdbMX62p1VHeDsFBA", - "sub": "HmjqfL7....", - "amr": [ - "BankID" - ], - "iss": "https://tokenx.prod-gcp.nav.cloud.nais.io", - "pid": "12345678910", - "locale": "nb", - "client_id": "prod-gcp:team-a:app-a", - "sid": "DASgLATSjYTp__ylaVbskHy66zWiplQrGDAYahvwk1k", - "aud": "prod-fss:team-b:app-b", - "acr": "Level4", - "nbf": 1597783152, - "idp": "https://idporten.no", - "auth_time": 1611926877, - "exp": 1597783452, - "iat": 1597783152, - "jti": "97f580a6-b479-426d-876b-267aa9848e2e" - } - ``` - -## Local Development - -See also the [development overview](development.md) page. - -### Token Generator - -In many cases, you want to locally develop and test against a secured API in the development environments. -To do so, you need a [token](concepts.md#bearer-token) to access said API. - -Use the token generator service to generate tokens in the development environments. - -#### Prerequisites - -1. The API application must be configured with [TokenX enabled](#configuration). -2. 
Pre-authorize the token generator service by adding it to the API application's [access policy](#access-policies): - ```yaml - spec: - accessPolicy: - inbound: - rules: - - application: tokenx-token-generator - namespace: aura - cluster: dev-gcp - ``` - -#### Getting a token - -1. Visit the token generator in your browser. - - Replace `<audience>` with the intended _audience_ of the token, in this case the API application. - - The audience value must be in the form `<cluster>:<namespace>:<app>` - - For example: `dev-gcp:aura:my-app` -2. You will be redirected to log in at ID-porten (if not already logged in). -3. After logging in, you should be redirected back to the token generator and presented with a JSON response containing an `access_token`. -4. Use the `access_token` as a [Bearer token](concepts.md#bearer-token) for calls to your API application. -5. Success! diff --git a/docs/security/auth/wonderwall.md b/docs/security/auth/wonderwall.md index 04fca4f1d..d44b8edc3 100644 --- a/docs/security/auth/wonderwall.md +++ b/docs/security/auth/wonderwall.md @@ -1,12 +1,13 @@ --- title: Wonderwall +tags: [auth, sidecar, services] description: Sidecar for authentication --- # Wonderwall (sidecar for authentication) [Wonderwall](https://github.com/nais/wonderwall) is an application that implements an OpenID Connect (OIDC) -[Relying Party](concepts.md#client) in a way that makes it easy to plug into Kubernetes +[Relying Party](../../auth/explanations/README.md#client) in a way that makes it easy to plug into Kubernetes as a _sidecar_. As such, this is OIDC as a sidecar, or OaaS, or to explain the joke: @@ -14,11 +15,11 @@ As such, this is OIDC as a sidecar, or OaaS, or to explain the joke: > _Oasis - Wonderwall_ !!! warning "Availability" - Wonderwall is only available in the [Google Cloud Platform](../../reference/environments.md#google-cloud-platform-gcp) environments. + Wonderwall is only available in the [Google Cloud Platform](../../workloads/reference/environments.md#google-cloud-platform-gcp) environments. 
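Step 4 of the token generator guide above boils down to setting a single header. A sketch using only the standard library; the URL is a placeholder:

```python
from urllib.request import Request

def with_bearer(url: str, access_token: str) -> Request:
    # Attach the access_token as a Bearer token in the Authorization
    # header, as described in step 4 of the guide above.
    return Request(url, headers={"Authorization": f"Bearer {access_token}"})
```

The resulting `Request` can then be passed to `urllib.request.urlopen` (or the same header set on any other HTTP client).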
## Overview -Wonderwall is a reverse-proxy that sits in front of your application. +Wonderwall is a reverse proxy that sits in front of your application. All incoming requests to your application are intercepted and proxied by the sidecar. If the user does _not_ have a valid session with the sidecar, requests will be proxied as-is without modifications to the application: @@ -33,7 +34,7 @@ graph LR ``` To obtain a local session, the user must be redirected to the `/oauth2/login` endpoint. -This will initiate the [OpenID Connect Authorization Code Flow](concepts.md#openid-connect): +This will initiate the [OpenID Connect Authorization Code Flow](../../auth/explanations/README.md#openid-connect): ```mermaid graph LR @@ -67,7 +68,7 @@ graph LR ``` All authenticated requests that are forwarded to the application will now contain the user's `access_token`. -The token is sent as a [Bearer token](concepts.md#bearer-token) in the `Authorization` header. +The token is sent as a [Bearer token](../../auth/explanations/README.md#bearer-token) in the `Authorization` header. This applies as long as the [session is not _expired_ or _inactive_](#5-sessions): ```mermaid @@ -89,7 +90,7 @@ application's ingress. ## Endpoints -The sidecar provides these endpoints under your application's [ingress](../../reference/ingress.md): +The sidecar provides these endpoints under your application's [ingress](../../workloads/reference/ingress.md): | Path | Description | Details | |--------------------------------|----------------------------------------------------------------------------|----------------------------------------------| @@ -110,7 +111,7 @@ Wonderwall currently supports the following [identity providers][identity provid For citizen end-users. 
- [:octicons-arrow-right-24: Read more about ID-porten](idporten.md) + [:octicons-arrow-right-24: Read more about ID-porten](../../auth/idporten/README.md) - :octicons-server-24:{ .lg .middle } **Azure AD** @@ -134,7 +135,7 @@ Minimal configuration examples below: enabled: true ``` - [:octicons-arrow-right-24: See the NAIS manifest reference for the complete specification](../../reference/application-spec.md#idportensidecar). +[:octicons-arrow-right-24: See the NAIS manifest reference for the complete specification](../../workloads/application/reference/application-spec.md#idportensidecar). === "Azure AD" @@ -147,7 +148,7 @@ Minimal configuration examples below: enabled: true ``` - [:octicons-arrow-right-24: See the NAIS manifest reference for the complete specification](../../reference/application-spec.md#azuresidecar). +[:octicons-arrow-right-24: See the NAIS manifest reference for the complete specification](../../workloads/application/reference/application-spec.md#azuresidecar). ## Usage @@ -251,9 +252,9 @@ Ensure that your frontend handles the `HTTP 401` response and redirects the user Autologin will by default match all paths for your application's ingresses, except the following: - `/oauth2/*` -- [`spec.prometheus.path`](../../reference/application-spec.md#prometheuspath), if defined -- [`spec.liveness.path`](../../reference/application-spec.md#livenesspath), if defined -- [`spec.readiness.path`](../../reference/application-spec.md#readinesspath), if defined +- [`spec.prometheus.path`](../../workloads/application/reference/application-spec.md#prometheuspath), if defined +- [`spec.liveness.path`](../../workloads/application/reference/application-spec.md#livenesspath), if defined +- [`spec.readiness.path`](../../workloads/application/reference/application-spec.md#readinesspath), if defined You can define additional paths or patterns to be excluded: @@ -364,7 +365,7 @@ The same restrictions and caveats apply here. ### 3. 
Token Validation -The sidecar attaches an `Authorization` header with the user's `access_token` as a [Bearer token](concepts.md#bearer-token), as long as the user has an [_active_ session](#5-sessions): +The sidecar attaches an `Authorization` header with the user's `access_token` as a [Bearer token](../../auth/explanations/README.md#bearer-token), as long as the user has an [_active_ session](#5-sessions): ``` GET /resource @@ -381,7 +382,7 @@ See the specific identity provider pages for further details on token validation === "ID-porten" - [:octicons-arrow-right-24: Read more about Token Validation for ID-porten](idporten.md#token-validation) + [:octicons-arrow-right-24: Read more about Token Validation for ID-porten](../../auth/idporten/how-to/secure.md#validate-token-in-authorization-header) === "Azure AD" @@ -643,7 +644,7 @@ To access other applications, you exchange the token to a new token that is corr === "ID-porten" - For ID-porten, use the [token exchange grant (TokenX)](tokenx.md#exchanging-a-token) to exchange the token. + For ID-porten, use [TokenX](../../auth/tokenx/how-to/consume.md) to exchange the token. === "Azure AD" @@ -653,4 +654,4 @@ To access other applications, you exchange the token to a new token that is corr See for an opinionated JavaScript library for token validation and exchange. -[identity provider]: concepts.md#identity-provider +[identity provider]: ../../auth/explanations/README.md#identity-provider diff --git a/docs/services/.pages b/docs/services/.pages new file mode 100644 index 000000000..ca34aa680 --- /dev/null +++ b/docs/services/.pages @@ -0,0 +1,6 @@ +nav: +- README.md +- CDN: cdn +- secrets +- feature-toggling.md +- ... diff --git a/docs/services/README.md b/docs/services/README.md new file mode 100644 index 000000000..107ad534e --- /dev/null +++ b/docs/services/README.md @@ -0,0 +1,12 @@ +# Other services + +This section covers the rest of the NAIS functionality that didn't fit into any other categories. 
+ +It includes services such as: + +- [CDN](cdn/README.md) +- [Secrets](secrets/README.md) +- [Feature toggling](./feature-toggling.md) +- [Anti-virus](antivirus.md) +- [Salsa](salsa.md) +- [Leader election](leader-election/README.md) diff --git a/docs/security/antivirus.md b/docs/services/antivirus.md similarity index 96% rename from docs/security/antivirus.md rename to docs/services/antivirus.md index 050cd8652..339f50a38 100644 --- a/docs/security/antivirus.md +++ b/docs/services/antivirus.md @@ -1,5 +1,6 @@ --- description: Antivirus scanning of files and urls using ClamAV. +tags: [explanation, services] --- # Anti-Virus Scanning @@ -29,7 +30,7 @@ See [ClamAV documentation][clamav-docs] and [ClamAV REST API][clamav-api] for mo ## Access Policy -When using ClamAV on GCP, remember to add an [outbound access policy](../how-to-guides/access-policies.md): +When using ClamAV on GCP, remember to add an [outbound access policy](../workloads/how-to/access-policies.md): ```yaml apiVersion: "nais.io/v1alpha1" diff --git a/docs/services/cdn/.pages b/docs/services/cdn/.pages new file mode 100644 index 000000000..f092fbf61 --- /dev/null +++ b/docs/services/cdn/.pages @@ -0,0 +1,5 @@ +nav: +- README.md +- 🎯 How-To: how-to +- 📚 Reference: reference +- ... diff --git a/docs/explanation/cdn.md b/docs/services/cdn/README.md similarity index 92% rename from docs/explanation/cdn.md rename to docs/services/cdn/README.md index e7f09b360..a5b861c95 100644 --- a/docs/explanation/cdn.md +++ b/docs/services/cdn/README.md @@ -1,6 +1,5 @@ --- -tags: -- CDN +tags: [cdn, explanation, services] --- # Content Delivery Network (CDN) @@ -12,9 +11,6 @@ NAIS offers CDN as a service through Google Cloud CDN. The CDN is used for serving static content such as HTML files, JavaScript libraries and files, stylesheets, and images. -Assets are deployed by uploading them to a bucket using a GitHub -action. - ```mermaid graph LR {%- if tenant() == "nav" %} @@ -28,7 +24,8 @@ action. 
LB-->|/team3/*|BucketC ``` -CDN publish workflow: +Assets are deployed by uploading them to a bucket using a GitHub +action: ```mermaid graph LR @@ -78,10 +75,8 @@ Among many others: {% endif %} +## Related pages +:dart: Learn how to [upload assets to the CDN](how-to/upload-assets.md) - -## What's next? - -- :dart: Learn how to [upload assets to the CDN](../how-to-guides/cdn.md) -- :computer: [CDN Reference documentation](../reference/cdn.md) +:books: [CDN Reference documentation](reference/README.md) diff --git a/docs/how-to-guides/cdn.md b/docs/services/cdn/how-to/upload-assets.md similarity index 84% rename from docs/how-to-guides/cdn.md rename to docs/services/cdn/how-to/upload-assets.md index 95cd81a1c..3af6af19b 100644 --- a/docs/how-to-guides/cdn.md +++ b/docs/services/cdn/how-to/upload-assets.md @@ -1,25 +1,24 @@ --- -tags: -- CDN +tags: [cdn, how-to] --- # Upload assets to the CDN -This how-to guide shows you how to upload assets to the [CDN](../explanation/cdn.md). +This how-to guide shows you how to upload assets to the [CDN](../README.md). -## 0. Prerequisites +## Prerequisites -- A [NAIS team](team.md). +- A [NAIS team](../../../explanations/team.md). - A GitHub repository that the team has access to. -- The repository needs to have a [GitHub workflow](github-action.md#2-create-a-github-workflow) that builds the assets you want to upload. +- The repository needs to have a [GitHub workflow](../../../build/README.md) that builds the assets you want to upload. -## 1. Authorize repository for upload +## Authorize repository for upload 1. Open [NAIS console](https://console.<>.cloud.nais.io) in your browser and select your team. 2. Select the `Repositories` tab 3. Find the repository you want to deploy from, and click `Authorize` -## 2. 
Upload assets with the CDN action +## Upload assets with the CDN action {% if tenant() == "nav" %} ???+ note "SPA deploy" @@ -67,9 +66,9 @@ jobs: shell: bash ``` -For more information on the inputs and outputs of the action, see the [CDN Reference](../reference/cdn.md). +For more information on the inputs and outputs of the action, see the [CDN Reference](../reference/README.md). -## 3. Use the uploaded assets +## Use the uploaded assets The assets from the CDN will be available at diff --git a/docs/reference/cdn.md b/docs/services/cdn/reference/README.md similarity index 88% rename from docs/reference/cdn.md rename to docs/services/cdn/reference/README.md index b45cd6905..0a842e497 100644 --- a/docs/reference/cdn.md +++ b/docs/services/cdn/reference/README.md @@ -1,15 +1,15 @@ --- -tags: - - CDN +title: Content Delivery Network Reference +tags: [cdn, reference] --- -# Content Delivery Network (CDN) +# Content Delivery Network Reference -This is the reference documentation for the [CDN](../explanation/cdn.md) service. +This is the reference documentation for the [CDN](../README.md) service. ## How-to guides -- :dart: [Upload assets to the CDN](../how-to-guides/cdn.md) +:dart: [Upload assets to the CDN](../how-to/upload-assets.md) ## CDN Deploy Action diff --git a/docs/explanation/feature-toggling.md b/docs/services/feature-toggling.md similarity index 99% rename from docs/explanation/feature-toggling.md rename to docs/services/feature-toggling.md index 66767a6db..6c371a909 100644 --- a/docs/explanation/feature-toggling.md +++ b/docs/services/feature-toggling.md @@ -1,3 +1,7 @@ +--- +tags: [explanation, services] +--- + # Feature Toggling ## What is feature toggling? diff --git a/docs/services/leader-election/.pages b/docs/services/leader-election/.pages new file mode 100644 index 000000000..b43a840a6 --- /dev/null +++ b/docs/services/leader-election/.pages @@ -0,0 +1,4 @@ +nav: +- README.md +- 🎯 How-To: how-to +- ... 
diff --git a/docs/explanation/leader-election.md b/docs/services/leader-election/README.md similarity index 89% rename from docs/explanation/leader-election.md rename to docs/services/leader-election/README.md index 1d40dbc3c..b45fb95a0 100644 --- a/docs/explanation/leader-election.md +++ b/docs/services/leader-election/README.md @@ -1,3 +1,7 @@ +--- +tags: [leader-election, sidecar, explanation, services] +--- + # Leader Election With leader election it is possible to have one responsible pod. @@ -7,7 +11,7 @@ This is done by asking the [elector container](https://github.com/nais/elector) The leader election configuration does not control which pod the external service requests will be routed to. ## Elector sidecar -When you [enable leader election](../how-to-guides/leader-election.md), NAIS will inject an elector container as a sidecar into your pod. +When you [enable leader election](how-to/enable.md), NAIS will inject an elector container as a sidecar into your pod. When you have the `elector` container running in your pod, you can make a HTTP GET to the URL set in environment variable `$ELECTOR_PATH` to see which pod is the leader. diff --git a/docs/how-to-guides/leader-election.md b/docs/services/leader-election/how-to/enable.md similarity index 82% rename from docs/how-to-guides/leader-election.md rename to docs/services/leader-election/how-to/enable.md index 25648bbea..530548855 100644 --- a/docs/how-to-guides/leader-election.md +++ b/docs/services/leader-election/how-to/enable.md @@ -1,8 +1,12 @@ +--- +tags: [leader-election, how-to] +--- + # Enable Leader Election This guide will show you how to enable leader election for your application. -## 0. 
Enable leader election in [manifest](../reference/application-spec.md#leaderelection) +## Enable leader election in [manifest](../../../workloads/application/reference/application-spec.md#leaderelection) ???+ note ".nais/app.yaml" @@ -11,7 +15,7 @@ This guide will show you how to enable leader election for your application. leaderElection: true ``` -## 1. Using leader election in your application +## Using leader election in your application === "java" diff --git a/docs/security/salsa/README.md b/docs/services/salsa.md similarity index 98% rename from docs/security/salsa/README.md rename to docs/services/salsa.md index fb6b99dfb..8852fdc93 100644 --- a/docs/security/salsa/README.md +++ b/docs/services/salsa.md @@ -1,5 +1,6 @@ --- description: Github action that helps to secure supply chain for software artifacts. +tags: [explanation, services] --- # Salsa @@ -79,7 +80,7 @@ You can conduct searches within projects using the following tags: Below is a screenshot of a project utilizing the dependency graph within Dependency-Track: -![Dependency Graph](../../assets/salsa-graph.png) +![Dependency Graph](../assets/salsa-graph.png) [Dependency-Track](https://dependencytrack.org/) has a ton of features so check out the [documentation](https://docs.dependencytrack.org/) for more information. diff --git a/docs/services/secrets/.pages b/docs/services/secrets/.pages new file mode 100644 index 000000000..f092fbf61 --- /dev/null +++ b/docs/services/secrets/.pages @@ -0,0 +1,5 @@ +nav: +- README.md +- 🎯 How-To: how-to +- 📚 Reference: reference +- ... 
diff --git a/docs/explanation/secrets.md b/docs/services/secrets/README.md similarity index 74% rename from docs/explanation/secrets.md rename to docs/services/secrets/README.md index 20aa2925c..f842515a4 100644 --- a/docs/explanation/secrets.md +++ b/docs/services/secrets/README.md @@ -1,6 +1,10 @@ +--- +tags: [secrets, explanation, services] +--- + # Secrets -A secret is a piece of sensitive information that is used in a [workload](workloads/README.md). +A secret is a piece of sensitive information that is used in a [workload](../../workloads/README.md). This can be a password, an API key, or any other information that should not be exposed to the public. Secrets are kept separate from the codebase and configuration files that are usually stored in version control. @@ -21,14 +25,14 @@ There are two types of secrets on the NAIS platform: - :technologist: **User-defined secrets** --- - _User-defined secrets_ are managed by you and your [team](team.md). + _User-defined secrets_ are managed by you and your [team](../../explanations/team.md). - These are typically used for integrating with third-party services or APIs that are not provided by NAIS, such as Slack or GitHub. - User-defined secrets can also be used to store sensitive information specific to your application, such as encryption keys or other private configuration. -## What's next? 
+## Related pages + +:dart: Learn how to [create and manage a secret in Console](how-to/console.md) -- :dart: Learn how to [create and manage a secret in Console](../how-to-guides/secrets/console.md) -- :dart: Learn how to [use a secret in your workload](../how-to-guides/secrets/workload.md) -- :computer: See the [reference for secrets](../reference/secrets.md) for more technical details +:dart: Learn how to [use a secret in your workload](how-to/workload.md) diff --git a/docs/how-to-guides/secrets/console.md b/docs/services/secrets/how-to/console.md similarity index 88% rename from docs/how-to-guides/secrets/console.md rename to docs/services/secrets/how-to/console.md index e0eab5826..cee3aea9f 100644 --- a/docs/how-to-guides/secrets/console.md +++ b/docs/services/secrets/how-to/console.md @@ -1,10 +1,14 @@ +--- +tags: [secrets, how-to, console] +--- + # Get started with secrets in Console -This how-to guide shows you how to create and manage a [secret](../../explanation/secrets.md) in the NAIS Console. +This how-to guide shows you how to create and manage a [secret](../README.md) in the NAIS Console. ## Prerequisites -- You're part of a [NAIS team](../team.md) +- You're part of a [NAIS team](../../../explanations/team.md) ## List secrets diff --git a/docs/how-to-guides/secrets/workload.md b/docs/services/secrets/how-to/workload.md similarity index 88% rename from docs/how-to-guides/secrets/workload.md rename to docs/services/secrets/how-to/workload.md index 474a42065..4c858619b 100644 --- a/docs/how-to-guides/secrets/workload.md +++ b/docs/services/secrets/how-to/workload.md @@ -1,18 +1,22 @@ +--- +tags: [workloads, how-to, secrets] +--- + # Use a secret in your workload -This how-to guide shows you how to reference and use a [secret](../../explanation/secrets.md) -in your [workload](../../explanation/workloads/README.md). +This how-to guide shows you how to reference and use a [secret](../README.md) +in your [workload](../../../workloads/README.md). 
A secret can be made available as environment variables or files, or both. ## Prerequisites -- You're part of a [NAIS team](../team.md) +- You're part of a [NAIS team](../../../explanations/team.md) - You have previously [created a secret](console.md#create-a-secret) for your team - A Github repository where the NAIS team has access - The repository contains a valid workload manifest (`nais.yaml`) -## Expose secret as environment variables +## Expose secret as environment variables 1. Add a reference to the secret in the workload's `nais.yaml` manifest. diff --git a/docs/services/secrets/reference/README.md b/docs/services/secrets/reference/README.md new file mode 100644 index 000000000..fa71c543d --- /dev/null +++ b/docs/services/secrets/reference/README.md @@ -0,0 +1,56 @@ +--- +title: Secrets Reference +tags: [secrets, reference] +--- + +# Secrets Reference + +This is the reference documentation for [secrets](../README.md) on the NAIS platform. + +## Console + +Visit [NAIS Console :octicons-link-external-16:](https://console.<>.cloud.nais.io) to find and manage your team's user-defined secrets. + +## How-To Guides + +:dart: [Get started with secrets in Console](../how-to/console.md) + +:dart: [Use a secret in your workload](../how-to/workload.md) + +## Workloads + +Use a secret in your [workload](../../../workloads/README.md) by referencing it in your `nais.yaml` manifest. + +The secret can be made available as environment variables or files.
+ +### Environment Variables + +```yaml +spec: + envFrom: + - secret: +``` + +See also: + +:books: [Application reference][application] + +:books: [NaisJob reference][naisjob] + +### Files + +```yaml +spec: + filesFrom: + - secret: + mountPath: /var/run/secrets/ +``` + +See also: + +:books: [Application reference][application] + +:books: [NaisJob reference][naisjob] + +[application]: ../../../workloads/application/reference/application-spec.md#envfromsecret +[naisjob]: ../../../workloads/job/reference/naisjob-spec.md#envfromsecret diff --git a/docs/tags.md b/docs/tags.md new file mode 100644 index 000000000..98e6010a6 --- /dev/null +++ b/docs/tags.md @@ -0,0 +1,3 @@ +# Tags + + diff --git a/docs/tutorial/.pages b/docs/tutorial/.pages deleted file mode 100644 index bddf57479..000000000 --- a/docs/tutorial/.pages +++ /dev/null @@ -1 +0,0 @@ -title: Tutorials diff --git a/docs/tutorial/README.md b/docs/tutorial/README.md deleted file mode 100644 index f8325f5c1..000000000 --- a/docs/tutorial/README.md +++ /dev/null @@ -1,20 +0,0 @@ ---- -ᴴₒᴴₒᴴₒ: false -hide: - - feedback - - footer ---- - -# NAIS Documentation -Learning-oriented lessons that take you through a series of steps to complete a project. Most useful when you want to get started with NAIS. - -
-- :octicons-rocket-24:{ .lg .middle } **1. Hello NAIS** - - --- - - This tutorial will take you through the process of getting a simple web application up and running on NAIS. No previous experience with NAIS is required. - - [:octicons-arrow-right-24: Hello NAIS](./hello-nais/hello-nais-1.md) - -
diff --git a/docs/tutorial/hello-nais/hello-nais-1.md b/docs/tutorial/hello-nais/hello-nais-1.md deleted file mode 100644 index e7ff142ab..000000000 --- a/docs/tutorial/hello-nais/hello-nais-1.md +++ /dev/null @@ -1,53 +0,0 @@ ---- -tags: [tutorial] ---- -# Part 1 - Create application - -This tutorial will take you through the process of getting a simple application up and running on NAIS. - -## Prerequisites - -- You have a GitHub account connected to your GitHub organization (e.g. `navikt`) -- [naisdevice installed](../../how-to-guides/naisdevice/install.md) -- [Member of a NAIS team](../../explanation/team.md) -- [GitHub CLI installed](https://cli.github.com/) - -???+ note "Conventions" - - Throughout this guide, we will use the following conventions: - - - `` - The name of your NAIS application (e.g. `joannas-first`) - - `` - The name of your NAIS team (e.g. `onboarding`) - - `` - Your GitHub organization (e.g. `navikt`) - - `` - The name of the environment you want to deploy to (e.g. `dev`) - - **NB!** Choose names with *lowercase* letters, numbers and dashes only. - -## 1. Create your own GitHub repository - -Create your own repo using the [nais/hello-nais](https://github.com/nais/hello-nais/) as a template. - -You create a new repository through either the [GitHub UI](https://github.com/new?template_name=hello-nais&template_owner=nais) or through the GitHub CLI: - -```bash -gh repo create / --template nais/hello-nais --private --clone -``` - -```bash -cd -``` - -## 2. Grant your team access to your repository - -Open your repository: - -```bash -gh repo view --web -``` - -Click on `Settings` -> `Collaborators and teams` -> `Add teams`. - -Select your team, and grant them the `Write` role. - -You have now successfully created your own application repository and granted your team access to it. -In the next steps we will have a closer look at the files needed to make this application NAIS! 
diff --git a/docs/tutorial/hello-nais/hello-nais-2.md b/docs/tutorial/hello-nais/hello-nais-2.md deleted file mode 100644 index b3487b89e..000000000 --- a/docs/tutorial/hello-nais/hello-nais-2.md +++ /dev/null @@ -1,101 +0,0 @@ ---- -tags: [tutorial] ---- -# Part 2 - Make it NAIS - -In the previous step, we created a repository for our application. -This part of the tutorial will show how to make your application NAIS. - -For this to happen, we need three files. - -### 1. Dockerfile - -This describes the system your application will be running on. -It includes the base image, and the commands needed to build your application. -This is the payload you are requesting NAIS to run. -We have created this file for you, as there are no changes needed for this tutorial. Check it out. - -### 2. Application manifest - -This file describes your application to the NAIS platform so that it can run it correctly and provision the resources it needs. - -Create a file called `app.yaml` in a `.nais`-folder. - -```bash -mkdir .nais -touch .nais/app.yaml -``` - -Add the following content to the file, and insert the appropriate values in the placeholders on the highlighted lines: - -???+ note ".nais/app.yaml" - - ```yaml hl_lines="6-8 11" - apiVersion: nais.io/v1alpha1 - kind: Application - - metadata: - labels: - team: - name: - namespace: - spec: - ingresses: - - https://..<>.cloud.nais.io - image: {{image}} - port: 8080 - ttl: 3h - replicas: - max: 1 - min: 1 - resources: - requests: - cpu: 50m - memory: 32Mi - ``` - -### 3. GitHub Actions workflow - -GitHub Actions uses the `Dockerfile` from step 1 and the `app.yaml` from step 2. to build and deploy your application to NAIS. - -Create a file called `main.yaml` in a `.github/workflows`-folder. 
- -```bash -mkdir -p .github/workflows -touch .github/workflows/main.yaml -``` - -Add the following content to the file, and insert the appropriate values in the placeholders on the highlighted lines: -???+ note ".github/workflows/main.yaml" - - ```yaml hl_lines="19 25" - name: Build and deploy - on: - push: - branches: - - main - jobs: - build_and_deploy: - name: Build, push and deploy - runs-on: ubuntu-latest - permissions: - contents: read - id-token: write - steps: - - uses: actions/checkout@v4 - - name: Push docker image to GAR - uses: nais/docker-build-push@v0 - id: docker-build-push - with: - team: # Replace - identity_provider: ${{ secrets.NAIS_WORKLOAD_IDENTITY_PROVIDER }} # Provided as Organization Secret - project_id: ${{ vars.NAIS_MANAGEMENT_PROJECT_ID }} # Provided as Organization Variable - - name: Deploy to NAIS - uses: nais/deploy/actions/deploy@v2 - env: - CLUSTER: # Replace - RESOURCE: .nais/app.yaml # This points to the file we created in the previous step - VAR: image=${{ steps.docker-build-push.outputs.image }} - ``` - -Excellent! We're now ready to deploy. :octicons-rocket-24: diff --git a/docs/tutorial/hello-nais/hello-nais-3.md b/docs/tutorial/hello-nais/hello-nais-3.md deleted file mode 100644 index c01ef1280..000000000 --- a/docs/tutorial/hello-nais/hello-nais-3.md +++ /dev/null @@ -1,46 +0,0 @@ ---- -tags: [tutorial] ---- -# Part 3 - Ship it - -Previously we've made our application and created the required files for deployment. -In this part of the tutorial we will deploy our application to NAIS. - -## 1. Authorize the repository for deployment - -This is required for the GitHub Actions workflow to be able to deploy your application. - -Visit [Console](https://console.<>.cloud.nais.io). Select your team, and visit the `Repositories` tab. -Find your repository and click `Authorize`. - -??? note "Repository not visible?" 
- Normally these permissions are automatically synchronized every 15 minutes, but if you don't see the repository here, force synchronization by clicking the `Refresh` button. - -## 2. Commit and push your changes - -Now that we have added the required files, it's time to commit and push them to GitHub. - - -```bash -git add . -git commit -m "FEAT: Add nais app manifest and github workflow" -git push origin main -``` - -## 3. Observe the GitHub Actions workflow - -When pushed, the GitHub Actions workflow will automatically start. You can observe the workflow by running the following command: -=== "CLI" - ```bash - gh run watch - ``` -=== "GitHub Web" - ```bash - gh repo view --web # click the "actions" tab when redirected to github.com - ``` - -## 4. Visit your application -On successful completion, we can view our application at `https://..<>.cloud.nais.io` - -Congratulations! You have now successfully deployed your first application to NAIS! -The next and most important step is to clean up after ourselves. diff --git a/docs/tutorial/hello-nais/hello-nais-4.md b/docs/tutorial/hello-nais/hello-nais-4.md deleted file mode 100644 index f955ee400..000000000 --- a/docs/tutorial/hello-nais/hello-nais-4.md +++ /dev/null @@ -1,24 +0,0 @@ ---- -tags: [tutorial] ---- -# Part 4 - Clean up - -During this tutorial we have - -- created a github repository -- added the required files for deployment -- deployed our application to NAIS - -Now it's time to clean up after ourselves. - -## 1. 
Delete your repository - -When you are finished with this guide you can delete your repository: - -=== "GitHub UI" - Visit your repository on GitHub, click `Settings` -> `Delete this repository` -> `I understand the consequences, delete this repository` - -=== "GitHub CLI" - ```bash - gh repo delete / - ``` diff --git a/docs/tutorials/.pages new file mode 100644 index 000000000..ed02d4e7b --- /dev/null +++ b/docs/tutorials/.pages @@ -0,0 +1,4 @@ +nav: +- README.md +- Hello NAIS: hello-nais.md +- ... \ No newline at end of file diff --git a/docs/tutorials/README.md new file mode 100644 index 000000000..3a00ed8b2 --- /dev/null +++ b/docs/tutorials/README.md @@ -0,0 +1,21 @@ +--- +tags: [tutorial] +hide: + - feedback + - footer +--- + +# Tutorials + +Tutorials are lessons that focus on _learning by doing_. + +These give you hands-on experience with various parts of NAIS, where we carefully guide you each step of the way. + +
+- :octicons-rocket-24:{ .lg .middle } [**Hello NAIS**](./hello-nais.md) + + --- + + Get your first application up and running on NAIS. + +
diff --git a/docs/tutorials/hello-nais.md b/docs/tutorials/hello-nais.md new file mode 100644 index 000000000..a8a7b16ba --- /dev/null +++ b/docs/tutorials/hello-nais.md @@ -0,0 +1,225 @@ +--- +tags: [tutorial] +--- +# :wave: Hello NAIS + +This tutorial will take you through the process of getting a simple application up and running on NAIS. + +## Prerequisites + +- You have a GitHub account connected to your GitHub organization (e.g. `navikt`) +- [naisdevice installed](../operate/naisdevice/how-to/install.md) +- [Member of a NAIS team](../explanations/team.md) +- [GitHub CLI installed](https://cli.github.com/) + +???+ note "Conventions" + + Throughout this guide, we will use the following conventions: + + - `` - The name of your NAIS application (e.g. `joannas-first`) + - `` - The name of your NAIS team (e.g. `onboarding`) + - `` - Your GitHub organization (e.g. `navikt`) + - `` - The name of the environment you want to deploy to (e.g. `dev`) + + **NB!** Choose names with *lowercase* letters, numbers and dashes only. + +## :gear: Setup + +### Create your own GitHub repository + +Create your own repo using the [nais/hello-nais](https://github.com/nais/hello-nais/) as a template. + +You create a new repository through either the [GitHub UI](https://github.com/new?template_name=hello-nais&template_owner=nais) or through the GitHub CLI: + +```bash +gh repo create / --template nais/hello-nais --private --clone +``` + +```bash +cd +``` + +### Grant your team access to your repository + +Open your repository: + +```bash +gh repo view --web +``` + +Click on `Settings` -> `Collaborators and teams` -> `Add teams`. + +Select your team, and grant them the `Admin` role. + +You have now successfully created your own application repository and granted your team access to it. +In the next steps we will have a closer look at the files needed to make this application NAIS! + +For this to happen, we need three files. 
+ +### Dockerfile + +This describes the system your application will be running on. +It includes the base image, and the commands needed to build your application. +This is the payload you are requesting NAIS to run. +We have created this file for you, as there are no changes needed for this tutorial. Check it out. + +### Application manifest + +This file describes your application to the NAIS platform so that it can run it correctly and provision the resources it needs. + +Create a file called `app.yaml` in a `.nais`-folder. + +```bash +mkdir .nais +touch .nais/app.yaml +``` + +Add the following content to the file, and insert the appropriate values in the placeholders on the highlighted lines: + +???+ note ".nais/app.yaml" + + ```yaml hl_lines="6-8 11" + apiVersion: nais.io/v1alpha1 + kind: Application + + metadata: + labels: + team: + name: + namespace: + spec: + ingresses: + - https://..<>.cloud.nais.io + image: {{image}} + port: 8080 + ttl: 3h + replicas: + max: 1 + min: 1 + resources: + requests: + cpu: 50m + memory: 32Mi + ``` + +### GitHub Actions workflow + +GitHub Actions uses the `Dockerfile` and the `app.yaml` from the previous steps to build and deploy your application to NAIS. + +Create a file called `main.yaml` in a `.github/workflows`-folder. 
+ +```bash +mkdir -p .github/workflows +touch .github/workflows/main.yaml +``` + +Add the following content to the file, and insert the appropriate values in the placeholders on the highlighted lines: +???+ note ".github/workflows/main.yaml" + + ```yaml hl_lines="19 25" + name: Build and deploy + on: + push: + branches: + - main + jobs: + build_and_deploy: + name: Build, push and deploy + runs-on: ubuntu-latest + permissions: + contents: read + id-token: write + steps: + - uses: actions/checkout@v4 + - name: Push docker image to GAR + uses: nais/docker-build-push@v0 + id: docker-build-push + with: + team: # Replace + identity_provider: ${{ secrets.NAIS_WORKLOAD_IDENTITY_PROVIDER }} # Provided as Organization Secret + project_id: ${{ vars.NAIS_MANAGEMENT_PROJECT_ID }} # Provided as Organization Variable + - name: Deploy to NAIS + uses: nais/deploy/actions/deploy@v2 + env: + CLUSTER: # Replace + RESOURCE: .nais/app.yaml # This points to the file we created in the previous step + VAR: image=${{ steps.docker-build-push.outputs.image }} + ``` + +Excellent! We're now ready to deploy :rocket: + +## :ship: Ship it + +Previously we've made our application and created the required files for deployment. +In this part of the tutorial we will deploy our application to NAIS. + +### Authorize the repository for deployment + +This is required for the GitHub Actions workflow to be able to deploy your application. + +Visit [Console](https://console.<>.cloud.nais.io). Select your team, and visit the `Repositories` tab. +Find your repository and click `Authorize`. + +!!! note "Repository not visible?" + + Normally these permissions are automatically synchronized every 15 minutes, but if you don't see the repository here, force synchronization by clicking the `Synchronize team` button under the `Settings` panel. + +### Commit and push your changes + +Now that we have added the required files, it's time to commit and push them to GitHub. + + +```bash +git add . 
+git commit -m "FEAT: Add nais app manifest and github workflow" +git push origin main +``` + +### Observe the GitHub Actions workflow + +When pushed, the GitHub Actions workflow will automatically start. You can observe the workflow by running the following command: + +=== "CLI" + + ```bash + gh run watch + ``` + +=== "GitHub Web" + + - Visit your repository on [GitHub](https://github.com). + - Navigate to `Actions`. + - Select the latest workflow run. + +### Visit your application +On successful completion, we can view our application at `https://..<>.cloud.nais.io` + +Congratulations! You have now successfully deployed your first application to NAIS! +The next and most important step is to clean up after ourselves. + +## :broom: Clean up + +During this tutorial we have + +- created a github repository +- added the required files for deployment +- deployed our application to NAIS + +Now it's time to clean up after ourselves. + +### Delete your repository + +When you are finished with this guide you can delete your repository: + +=== "CLI" + + ```bash + gh repo delete / + ``` + +=== "GitHub Web" + + - Visit your repository on [GitHub](https://github.com). + - Navigate to `Settings`. + - At the bottom of the page, click on `Delete this repository` + - Confirm the deletion diff --git a/docs/workloads/.pages b/docs/workloads/.pages new file mode 100644 index 000000000..7e362ff91 --- /dev/null +++ b/docs/workloads/.pages @@ -0,0 +1,8 @@ +nav: +- README.md +- 💡 Explanations: explanations +- 🎯 How-To: how-to +- 📚 Reference: reference +- application +- job +- ... diff --git a/docs/workloads/README.md b/docs/workloads/README.md new file mode 100644 index 000000000..6ab37d388 --- /dev/null +++ b/docs/workloads/README.md @@ -0,0 +1,35 @@ +--- +tags: [workloads, explanation] +--- + +# Workloads + +A core functionality of NAIS is enabling you to run the code you write. + +We support two types of workloads, _applications_ and _jobs_. + +
+ +- **Application** + + --- + An _application_ is used for long-running processes such as an API. + + [:bulb: Learn more about applications](application/README.md) + +- **Job** + + --- + A _job_ is used for one-off or scheduled tasks meant to complete and then exit. + + [:bulb: Learn more about jobs](job/README.md) + +
+ +## Related pages + +[:bulb: The workload runtime environment](explanations/environment.md) + +[:bulb: Good practices for your workloads](explanations/good-practices.md) + +[:bulb: Zero trust on NAIS](explanations/zero-trust.md) diff --git a/docs/workloads/application/.pages b/docs/workloads/application/.pages new file mode 100644 index 000000000..41ff25cdd --- /dev/null +++ b/docs/workloads/application/.pages @@ -0,0 +1,6 @@ +nav: +- README.md +- 💡 Explanations: explanations +- 🎯 How-To: how-to +- 📚 Reference: reference +- ... diff --git a/docs/workloads/application/README.md b/docs/workloads/application/README.md new file mode 100644 index 000000000..c71d30857 --- /dev/null +++ b/docs/workloads/application/README.md @@ -0,0 +1,27 @@ +--- +tags: [application, explanation, workloads, services] +--- + +# Application + +A NAIS application lets you run one or more instances of a container image. + +An application is defined by its application manifest, which is a YAML file that describes how the application should be run and what resources it needs. + +Once the application manifest is applied, NAIS will set up your application as specified. If you've requested resources, NAIS will provision and configure your application to use those resources. 
+ +## Related pages + +[:bulb: Learn more about exposing your application](explanations/expose.md) + +[:dart: Create an application](how-to/create.md) + +[:dart: Expose an application](how-to/expose.md) + +[:dart: Set up access policies for your application](../how-to/access-policies.md) + +[:dart: Communicate with another application](../how-to/communication.md) + +[:books: Complete application example](reference/application-example.md) + +[:books: Application specification](reference/application-spec.md) diff --git a/docs/workloads/application/explanations/expose.md new file mode 100644 index 000000000..1c08dc569 --- /dev/null +++ b/docs/workloads/application/explanations/expose.md @@ -0,0 +1,23 @@ +--- +tags: [application, explanation] +--- + +# Exposing your application + +What good is an application if no one can reach it? + +NAIS tries to make it easy to expose your application to the correct audience. +Your audience may be other applications within the same environment, or it may be humans or machines on the outside. + +If your audience is other applications within the same environment, they can [communicate directly](../../how-to/communication.md) with each other provided you have defined the necessary [access policies](../../how-to/access-policies.md). +See the [zero trust](../../explanations/zero-trust.md) explanation for more information. + +If you want to present your application to someone or something outside the environment, you have to expose it using an ingress. +An ingress is simply an entrypoint into your application, defined by a URL. The domain of the URL controls from where your application can be reached. +There are different domains available in each environment; see the full [list of available domains for each cluster](../../reference/environments.md). + +You can have multiple ingresses for the same application, using the same or different domains. 
+ +If you only want to expose a subset of your application, or you are on a shared domain, you can specify a path for each individual ingress. + +For practical instructions, see the [how-to guide for exposing an application](../how-to/expose.md). diff --git a/docs/workloads/application/how-to/create.md new file mode 100644 index 000000000..b42de1903 --- /dev/null +++ b/docs/workloads/application/how-to/create.md @@ -0,0 +1,65 @@ +--- +tags: [application, how-to] +--- + +# Create application + +This how-to guide will show you how to create a NAIS manifest for your [application](../README.md). + +## Setup + +Inside your application repository, create a `.nais`-folder. + +```bash +cd +mkdir .nais +``` + +Create a file called `app.yaml` in the `.nais`-folder. + +```bash +touch .nais/app.yaml +``` + +## Define your application + +Below is a basic example of an application manifest. + +Add the following content to the file, and insert the appropriate values in the placeholders on the highlighted lines: + +???+ note ".nais/app.yaml" + + ```yaml hl_lines="5-7 9" + apiVersion: nais.io/v1alpha1 + kind: Application + metadata: + labels: + team: + name: + namespace: + spec: + image: {{image}} # Placeholder variable to be replaced by the CI/CD pipeline + port: 8080 + replicas: + max: 4 + min: 2 + resources: + requests: + cpu: 200m + memory: 128Mi + ``` + +This application manifest represents a very basic daemon application. +You will likely want to add more configuration to your application manifest based on your needs. + +## Related pages + +:dart: [Build and deploy your application to NAIS](../../../build/how-to/build-and-deploy.md). + +:dart: [Expose your application](./expose.md). + +:books: [Application spec reference](../reference/application-spec.md). + +:books: [Full Application example](../reference/application-example.md). + +:bulb: [Good practices for NAIS workloads](../../explanations/good-practices.md). 
diff --git a/docs/workloads/application/how-to/delete.md b/docs/workloads/application/how-to/delete.md new file mode 100644 index 000000000..6ebd02720 --- /dev/null +++ b/docs/workloads/application/how-to/delete.md @@ -0,0 +1,24 @@ +--- +tags: [application, how-to] +--- + +# Delete your application + +## Prerequisites + +- You're part of a [NAIS team](../../../explanations/team.md) + +## Delete your application + +1. Open [NAIS console :octicons-link-external-16:](https://console.<>.cloud.nais.io) in your browser +2. Select your team +3. Select the `Apps` tab +4. Select the application that you want to delete +5. Click the `Delete` button + + You will be prompted to confirm the deletion. + If you have any resources connected to your application, these will also be listed. + + Once confirmed, the application will be permanently deleted. + +6. Make sure that you also remove any related workflows and manifests in your Git repository. diff --git a/docs/workloads/application/how-to/expose.md b/docs/workloads/application/how-to/expose.md new file mode 100644 index 000000000..5d9ff71bf --- /dev/null +++ b/docs/workloads/application/how-to/expose.md @@ -0,0 +1,27 @@ +--- +tags: [application, how-to] +--- + +# Expose an application + +This guide will show you how to [expose your application to the correct audience](../explanations/expose.md). + +## Select audience + +Select the correct audience from the available [domains in your environment](../../reference/environments.md). + +## Define ingress + +Specify the desired hostname for your application in the [application manifest](../reference/application-spec.md#ingresses): + +```yaml hl_lines="4-5" title=".nais/app.yaml" +apiVersion: nais.io/v1alpha1 +kind: Application +spec: + ingresses: + - https://. +``` + +!!! tip "Specific paths" + + You can optionally specify a path for each individual ingress to only expose a subset of your application. 
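To illustrate the path tip in the expose guide above, a single application could combine a dedicated hostname with a path-restricted ingress on a shared domain. This is a minimal sketch; the hostnames below are hypothetical placeholders, not real NAIS domains:

```yaml
# Hypothetical sketch: two ingresses for one application.
# The first exposes the whole application on a dedicated hostname;
# the second exposes only the /api subset on a shared domain.
spec:
  ingresses:
    - https://my-app.dev.example.io
    - https://shared.dev.example.io/my-app/api
```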
diff --git a/docs/reference/application-example.md b/docs/workloads/application/reference/application-example.md similarity index 96% rename from docs/reference/application-example.md rename to docs/workloads/application/reference/application-example.md index 451f21f7b..03aebbc76 100644 --- a/docs/reference/application-example.md +++ b/docs/workloads/application/reference/application-example.md @@ -1,3 +1,7 @@ +--- +tags: [application, reference] +--- + # NAIS Application example YAML |Yes| B[Are they in the same namespace?] + A --> |No| Ingress[🎯 Expose through ingress] + B --> |Yes| InternalSameNS[🎯 Allow access in same namespace] + B --> |No| InternalOtherNS[🎯 Allow access from other namespaces] +``` + +If you define an [ingress](../reference/ingress.md), your application will be available to a given audience based on the domain. +**Access policies have no effect on ingress traffic**. This means that all traffic through an ingress is implicitly allowed. +Your workload is thus responsible for verifying requests if exposed through an ingress. + +## Outbound traffic + +For outbound traffic, you can allow access to other workloads in the same environment, or to external endpoints: + +```mermaid +graph TD + A[Is the service you want to call in the same environment?] + A --> |Yes| B[Are they in the same namespace?] + A --> |No| Internet[🎯 Allow access to external endpoint] + B --> |Yes| InternalSameNS[🎯 Allow access to same namespace] + B --> |No| InternalOtherNS[🎯 Allow access to other namespaces] +``` + +For the native NAIS services - the platform takes care of this for you. For example, when you have a [database](../../persistence/postgres/README.md), the access policies required to reach the database will be created automatically. + +## Example + +Consider a simple application which consists of a frontend and a backend, where naturally the frontend needs to communicate with the backend. + +This communication is denied by default as indicated by the red arrow. 
+![access-policy-1](../../assets/access-policy-1.png)
+
+In order to fix this, the frontend needs to allow outbound traffic to the backend by adding the following access policy.
+
+```yaml
+spec:
+  accessPolicy:
+    outbound:
+      - application: backend
+```
+
+![access-policy-2](../../assets/access-policy-2.png)
+
+However, the frontend is still not allowed to make any requests to the backend.
+The missing piece of the puzzle is adding an inbound policy to the backend like so:
+
+```yaml
+spec:
+  accessPolicy:
+    inbound:
+      - application: frontend
+```
+
+![access-policy-3](../../assets/access-policy-3.png)
+
+Now that both applications have explicitly declared their policies, the communication is allowed.
+
+See more about [how to define access policies](../how-to/access-policies.md).
diff --git a/docs/how-to-guides/access-policies.md b/docs/workloads/how-to/access-policies.md
similarity index 92%
rename from docs/how-to-guides/access-policies.md
rename to docs/workloads/how-to/access-policies.md
index 7cb83182c..7f4f2476e 100644
--- a/docs/how-to-guides/access-policies.md
+++ b/docs/workloads/how-to/access-policies.md
@@ -1,6 +1,10 @@
-# Access policies
+---
+tags: [workloads, how-to]
+---
 
-This guide will show you how to define [access policies](../explanation/zero-trust.md) for your [workload](../explanation/workloads/README.md).
+# Set up access policies
+
+This guide will show you how to define [access policies](../explanations/zero-trust.md) for your [workload](../README.md).
 
 ## Receive requests from workloads in the same namespace
 
@@ -62,7 +66,7 @@ For app `` to be able to receive incoming requests from ``
 
 ```mermaid
 graph LR
-  accTitle: Receive requests from other app in the another namespace
+  accTitle: Receive requests from other app in another namespace
   accDescr: The diagram shows two applications in different namespaces, and . Application is allowing requests from .
 
   ANOTHER-APP--"✅"-->MY-APP
@@ -111,7 +115,7 @@ For app `` to be able to send requests to `` in the same n
     end
 ```
 
-## Send requests to other app in the another namespace
+## Send requests to other app in another namespace
 
 For app `` to be able to send requests to `` in ``, this specification is needed for ``:
 
@@ -185,3 +189,5 @@ For app `` to be able to send requests outside of the environment, this
     end
   end
 ```
+
+See the [access policy reference](../reference/access-policies.md) for a list of default external endpoints.
diff --git a/docs/workloads/how-to/communication.md b/docs/workloads/how-to/communication.md
new file mode 100644
index 000000000..76c2d2c6b
--- /dev/null
+++ b/docs/workloads/how-to/communication.md
@@ -0,0 +1,42 @@
+---
+tags: [workloads, how-to]
+---
+
+# Communicate inside the environment
+
+This guide will show you how to communicate with other workloads inside the same environment.
+
+## Prerequisites
+
+- Working [access policies](access-policies.md) for the workloads you want to communicate with.
+
+## Identify the endpoint you want to communicate with
+
+To identify the endpoint of the workload we are communicating with, we need to know its `name` and what `namespace` it's running in.
+
+If the workload you are calling is in the same namespace, you can reach it by calling its name directly using HTTP like so:
+
+```plaintext
+http://
+```
+
+If the workload is running in another team's namespace, you need to specify the namespace as well:
+
+```plaintext
+http://.
+```
+
+With this endpoint, you can now call the workload using HTTP from your own workload.
+
+{% if tenant() == "nav" %}
+
+!!! 
info "Note for on-prem" + If your workload has [webproxy](../application/reference/application-spec.md#webproxy) enabled, you should use the full hostname for all service discovery calls: + + ```text + http://..svc.nais.local + ``` + + This is to ensure that your workload does not attempt to perform these in-cluster calls through the proxy, as the environment variable `NO_PROXY` includes `*.local`. + +{% endif %} diff --git a/docs/workloads/job/.pages b/docs/workloads/job/.pages new file mode 100644 index 000000000..f092fbf61 --- /dev/null +++ b/docs/workloads/job/.pages @@ -0,0 +1,5 @@ +nav: +- README.md +- 🎯 How-To: how-to +- 📚 Reference: reference +- ... diff --git a/docs/workloads/job/README.md b/docs/workloads/job/README.md new file mode 100644 index 000000000..8552364cd --- /dev/null +++ b/docs/workloads/job/README.md @@ -0,0 +1,23 @@ +--- +tags: [job, explanation, workloads, services] +--- + +# NAIS job + +A NAIS job is used for tasks meant to complete and then exit. This can either run as a one-off task or on a schedule, like a [cron job](https://en.wikipedia.org/wiki/Cron). + +A job is defined by its job manifest, which is a YAML file that describes how the job should be run and what resources it needs. + +Once the job manifest is applied, NAIS will set up your job as specified. If you've requested resources, NAIS will provision and configure your job to use those resources. 
+
+## Related pages
+
+[:dart: Create a job](how-to/create.md)
+
+[:dart: Set up access policies for your workload](../how-to/access-policies.md)
+
+[:dart: Communicate with another workload](../how-to/communication.md)
+
+[:books: Complete job example](reference/naisjob-example.md)
+
+[:books: Job specification](reference/naisjob-spec.md)
diff --git a/docs/workloads/job/how-to/create.md b/docs/workloads/job/how-to/create.md
new file mode 100644
index 000000000..f3b8ecad3
--- /dev/null
+++ b/docs/workloads/job/how-to/create.md
@@ -0,0 +1,57 @@
+---
+tags: [job, how-to]
+---
+
+# Create job
+
+This how-to guide will show you how to create a NAIS manifest for your [job](../README.md).
+
+## Setup
+
+Inside your job repository, create a `.nais`-folder.
+
+```bash
+cd 
+mkdir .nais
+```
+
+Create a file called `job.yaml` in the `.nais`-folder.
+
+```bash
+touch .nais/job.yaml
+```
+
+## Define your job
+
+Below is a basic example of a job manifest.
+
+Add the following content to the file, and insert the appropriate values in the placeholders on the highlighted lines:
+
+???+ note ".nais/job.yaml"
+
+    ```yaml hl_lines="5-7 9"
+    apiVersion: nais.io/v1
+    kind: Naisjob
+    metadata:
+      labels:
+        team: 
+      name: 
+      namespace: 
+    spec:
+      schedule: "0 * * * *" # Runs every hour
+      image: {{image}} # Placeholder variable to be replaced by the CI/CD pipeline
+      resources:
+        requests:
+          cpu: 200m
+          memory: 128Mi
+    ```
+
+This job manifest will run your code every hour. If you want to run your job only once, you can remove the `schedule` field.
+
+## Related pages
+
+:dart: [Build and deploy your job to NAIS](../../../build/how-to/build-and-deploy.md).
+
+:books: [Job spec reference](../reference/naisjob-spec.md).
+
+:books: [Complete job example](../reference/naisjob-example.md).
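The `schedule` field uses standard cron syntax (minute, hour, day of month, month, day of week). To make the `"0 * * * *"` example concrete, here is a simplified, illustrative matcher; real cron implementations also support ranges, steps, and lists, which this sketch deliberately omits:

```python
# Illustrative sketch of how the first two cron fields are interpreted.
# Only "*" and plain numbers are understood here, which is enough to show
# why "0 * * * *" fires at minute 0 of every hour.

def matches(schedule: str, minute: int, hour: int) -> bool:
    """Check minute/hour against the first two fields of a cron expression."""
    minute_field, hour_field, *_ = schedule.split()

    def field_ok(field: str, value: int) -> bool:
        return field == "*" or int(field) == value

    return field_ok(minute_field, minute) and field_ok(hour_field, hour)

# "0 * * * *": runs once an hour, on the hour.
print(matches("0 * * * *", minute=0, hour=13))   # True
print(matches("0 * * * *", minute=30, hour=13))  # False
```

For a one-off task, omitting `schedule` entirely makes the job run once when it is applied.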
diff --git a/docs/workloads/job/how-to/delete.md b/docs/workloads/job/how-to/delete.md new file mode 100644 index 000000000..fb9f7fdf5 --- /dev/null +++ b/docs/workloads/job/how-to/delete.md @@ -0,0 +1,24 @@ +--- +tags: [job, how-to] +--- + +# Delete your job + +## Prerequisites + +- You're part of a [NAIS team](../../../explanations/team.md) + +## Delete your job + +1. Open [NAIS console :octicons-link-external-16:](https://console.<>.cloud.nais.io) in your browser +2. Select your team +3. Select the `Job` tab +4. Select the job that you want to delete +5. Click the `Delete` button + + You will be prompted to confirm the deletion. + If you have any resources connected to your job, these will also be listed. + + Once confirmed, the job will be permanently deleted. + +6. Make sure that you also remove any related workflows and manifests in your Git repository. diff --git a/docs/reference/naisjob-example.md b/docs/workloads/job/reference/naisjob-example.md similarity index 96% rename from docs/reference/naisjob-example.md rename to docs/workloads/job/reference/naisjob-example.md index f0a146f15..aa430f8d5 100644 --- a/docs/reference/naisjob-example.md +++ b/docs/workloads/job/reference/naisjob-example.md @@ -1,3 +1,7 @@ +--- +tags: [job, reference] +--- + # NAIS Job example YAML B - - A--"✅"-->C - - B--"❌"-->A - - C--"❌"-->A - - subgraph accesspolicy-a[Access Policy A] - A[Application A] - end - - subgraph accesspolicy-b[Access Policy B] - B[Application B] - end - - subgraph accesspolicy-c[Access Policy C] - C[Application C] - end -``` - -## Limitations - -The access policy currently have the following limits: - -1. **No support for UDP**. This is a limitation of Kubernetes, and will be fixed in the future. -2. **Only for pod-to-pod communication**. The Access policies only apply when communicating internally within the cluster using [service discovery](../clusters/service-discovery.md), and not through [ingress traffic](ingress.md). -3. **Only available in GCP**. 
Network policies are only applied in GCP clusters. However, inbound rules for authorization in the context of [_TokenX_](../security/auth/tokenx.md) or [_Azure AD_](../security/auth/azure-ad/README.md) apply to all clusters. - -## Access Policy vs. Ingress - -Access policies are only applied to pod-to-pod communication, and not to ingress traffic. This means that you can still use ingress to expose your application to the internet, and use access policies to control which applications can communicate with your application. - -Outbound requests to ingresses are regarded as external hosts, even if these ingresses exist in the same cluster. - -## Inbound rules - -Inbound rules specifies what other applications _in the same cluster_ your application receives traffic from. - -### Receive requests from other app in the same namespace - -For app `app-a` to be able to receive incoming requests from `app-b` in the same cluster and the same namespace, this specification is needed for `app-a`: - -=== "nais.yaml" - - ```yaml - apiVersion: "nais.io/v1alpha1" - kind: "Application" - metadata: - name: app-a - ... - spec: - ... - accessPolicy: - inbound: - rules: - - application: app-b - ``` - -=== "visualization" - - ```mermaid - graph LR - accTitle: Receive requests from other app in the same namespace - accDescr: The diagram shows two applications in the same namespace, A and B. Application A is allowed to receive requests from B. - - app-b--"✅"-->app-a - - subgraph mynamespace - app-a - app-b - end - ``` - -### Receive requests from other app in the another namespace - -For app `app-a` to be able to receive incoming requests from `app-b` in the same cluster but another namespace \(`othernamespace`\), this specification is needed for `app-a`: - -=== "nais.yaml" - - ```yaml - apiVersion: "nais.io/v1alpha1" - kind: "Application" - metadata: - name: app-a - ... - spec: - ... 
- accessPolicy: - inbound: - rules: - - application: app-b - namespace: othernamespace - ``` - -=== "visualization" - - ```mermaid - graph LR - accTitle: Receive requests from other app in the another namespace - accDescr: The diagram shows two applications in different namespaces, A and B. Application A is allowed to receive requests from B. - - app-b--"✅"-->app-a - - subgraph mynamespace - app-a - end - - subgraph othernamespace - app-b - end - ``` - -## Outbound rules - -`spec.accessPolicy.outbound.rules` specifies which applications _in the same cluster_ you allow your application to send requests to. To open for external applications, use the field `spec.accessPolicy.outbound.external`. - -### Send requests to other app in the same namespace - -For app `app-a` to be able to send requests to `app-b` in the same cluster and the same namespace, this specification is needed for `app-a`: - -=== "nais.yaml" - - ```yaml - apiVersion: "nais.io/v1alpha1" - kind: "Application" - metadata: - name: app-a - ... - spec: - ... - accessPolicy: - outbound: - rules: - - application: app-b - ``` - -=== "visualization" - - ```mermaid - graph LR - accTitle: Send requests to other app in the same namespace - accDescr: The diagram shows two applications in the same namespace, A and B. Application A is allowed to send requests to B. - - app-a--"✅"-->app-b - - subgraph mynamespace - app-a - app-b - end - ``` - -### Send requests to other app in the another namespace - -For app `app-a` to be able to send requests requests to `app-b` in the same cluster but in another namespace \(`othernamespace`\), this specification is needed for `app-a`: - -=== "nais.yaml" - - ```yaml - apiVersion: "nais.io/v1alpha1" - kind: "Application" - metadata: - name: app-a - ... - spec: - ... 
- accessPolicy: - outbound: - rules: - - application: app-b - namespace: othernamespace - ``` - -=== "visualization" - - ```mermaid - graph LR - accTitle: Send requests to other app in the another namespace - accDescr: The diagram shows two applications in different namespaces, A and B. Application A is allowed to send requests to B. - - app-a--"✅"-->app-b - - subgraph mynamespace - app-a - end - - subgraph othernamespace - app-b - end - ``` - -### External services - -External services are services that are not running in the same cluster, but are reachable from the cluster. This could be services running in other clusters, or services running in the same cluster but outside the cluster network. - -Since this is not a native feature of Kubernetes Network Policies, we are leveraging Linkerd's [Service Profiles](https://linkerd.io/2/features/service-profiles/) to achieve this. - -In order to send requests to services outside of the cluster, `external.host` configuration is needed: - -=== "nais.yaml" - - ```yaml - apiVersion: "nais.io/v1alpha1" - kind: "Application" - metadata: - name: app-a - ... - spec: - ... - accessPolicy: - outbound: - external: - - host: www.external-application.com - ``` - -=== "visualization" - - ```mermaid - graph LR - accTitle: External services - accDescr: The diagram shows an application, A, that is allowed to send requests to an external service. 
- - app-a--"✅"-->www.external-application.com - - subgraph cluster - subgraph mynamespace - app-a - end - end - ``` - -Default hosts that are added and accessible for every application: - -| Host / service | Port | Protocol | -|-----------------------------|------|-----------| -| `kube-dns` | 53 | UDP / TCP | -| `metadata.google.internal` | 80 | TCP | -| `private.googleapis.com` | 443 | TCP | -| `login.microsoftonline.com` | 443 | TCP | -| `graph.microsoft.com` | 443 | TCP | -| `aivencloud.com` | 443 | TCP | -| `unleash.nais.io` | 443 | TCP | - -#### Global Service Entries - -There are some services that are automatically added to the mesh in [dev-gcp](https://github.com/navikt/nais-yaml/blob/master/vars/dev-gcp.yaml) and [prod-gcp](https://github.com/navikt/nais-yaml/blob/master/vars/prod-gcp.yaml) (search for `global_serviceentries`). - -## Advanced: Resources created by Naiserator - -The previous application manifest examples will create Kubernetes Network Policies. - -### Kubernetes Network Policy - -#### Default policy - -Every app created will have this default network policy that allows traffic to Linkerd and kube-dns. -It also allows incoming traffic from the Linkerd control plane and from tap and prometheus in the linkerd-viz namespace. This is what enables monitoring via the linkerd dashboard. -These policies will be created for every app, also those who don't have any access policies specified. 
- -```yaml -apiVersion: extensions/v1beta1 -kind: NetworkPolicy -metadata: - labels: - app: appname - team: teamname - name: appname - namespace: teamname -spec: - egress: - - to: - - namespaceSelector: - matchLabels: - linkerd.io/is-control-plane: "true" - - namespaceSelector: {} - podSelector: - matchLabels: - k8s-app: kube-dns - - ipBlock: - cidr: 0.0.0.0/0 - except: - - 10.6.0.0/15 - - 172.16.0.0/12 - - 192.168.0.0/16 - ingress: - - from: - - namespaceSelector: - matchLabels: - linkerd.io/is-control-plane: "true" - - from: - - namespaceSelector: - matchLabels: - linkerd.io/extension: viz - podSelector: - matchLabels: - component: tap - - from: - - namespaceSelector: - matchLabels: - linkerd.io/extension: viz - podSelector: - matchLabels: - component: prometheus - podSelector: - matchLabels: - app: appname - policyTypes: - - Ingress - - Egress - -``` - -#### Kubernetes network policies - -The applications specified in `spec.accessPolicy.inbound.rules` and `spec.accessPolicy.outbound.rules` will append these fields to the default Network Policy: - -```yaml -apiVersion: extensions/v1beta1 -kind: NetworkPolicy -... -spec: - egress: - - to: - ... - - namespaceSelector: - matchLabels: - kubernetes.io/metadata.name: othernamespace - podSelector: - matchLabels: - app: app-b - - podSelector: - matchLabels: - app: app-b - - from: - - namespaceSelector: - matchLabels: - kubernetes.io/metadata.name: othernamespace - podSelector: - matchLabels: - app: app-b - - podSelector: - matchLabels: - app: app-b - podSelector: - matchLabels: - app: appname - policyTypes: - - Egress - - Ingress -``` - -If you are working directly with Kubernetes Network Policies, we are recommending the Cilium Policy Editor which can be found at [editor.cilium.io](https://editor.cilium.io/). 
diff --git a/old/nais-application/ingress.md b/old/nais-application/ingress.md index b24125e4e..0665b5e54 100644 --- a/old/nais-application/ingress.md +++ b/old/nais-application/ingress.md @@ -1,6 +1,6 @@ # Ingress traffic -Ingress traffic is traffic that is directed to your application from the internet. This is done by configuring the [`ingresses`][nais-ingress] block in your NAIS application yaml manifest with the domains you want your application to receive traffic from. +Ingress traffic is traffic that is directed to your application from the Internet. This is done by configuring the [`ingresses`][nais-ingress] block in your NAIS application yaml manifest with the domains you want your application to receive traffic from. [nais-ingress]: https://doc.nais.io/nais-application/application/#ingresses diff --git a/old/observability/metrics.md b/old/observability/metrics.md index 3200c78c5..30942536b 100644 --- a/old/observability/metrics.md +++ b/old/observability/metrics.md @@ -167,7 +167,7 @@ Then you have full control of the database and retention. ## Metric naming -For metric names we use the internet-standard [Prometheus naming conventions](https://prometheus.io/docs/practices/naming/): +For metric names we use the Internet standard [Prometheus naming conventions](https://prometheus.io/docs/practices/naming/): * Metric names should have a (single-word) application prefix relevant to the domain the metric belongs to. * Metric names should be nouns in **snake_case**; do not use verbs. 
diff --git a/old/persistence/open-search.md b/old/persistence/open-search.md index 734d1c94c..55b09a66e 100644 --- a/old/persistence/open-search.md +++ b/old/persistence/open-search.md @@ -71,7 +71,7 @@ Once the resource is added to the cluster, some additional fields are filled in | field | | |-------------------------|-------------------------------------------------------------------------------------------------------| -| `projectVpcId` | Ensures the instance is connected to the correct project VPC and is not available on public internet. | +| `projectVpcId` | Ensures the instance is connected to the correct project VPC and is not available on public Internet. | | `tags` | Adds tags to the instance used for tracking billing in Aiven. | | `cloudName` | Where the OpenSearch instance should run. | | `terminationProtection` | Protects the instance against unintended termination. Must be set to `false` before deletion. | diff --git a/old/persistence/redis.md b/old/persistence/redis.md index 61cab3ef7..35f4c897e 100644 --- a/old/persistence/redis.md +++ b/old/persistence/redis.md @@ -104,7 +104,7 @@ Once the resource is added to the cluster, some additional fields are filled in | field | | | ----------------------- | ----------------------------------------------------------------------------------------------------- | -| `projectVpcId` | Ensures the instance is connected to the correct project VPC and is not available on public internet. | +| `projectVpcId` | Ensures the instance is connected to the correct project VPC and is not available on public Internet. | | `tags` | Adds tags to the instance used for tracking billing in Aiven. | | `cloudName` | Where the Redis instance should run. | | `terminationProtection` | Protects the instance against unintended termination. Must be set to `false` before deletion. 
| diff --git a/poetry.lock b/poetry.lock index ea6e1ed10..8d5693650 100644 --- a/poetry.lock +++ b/poetry.lock @@ -1,4 +1,4 @@ -# This file is automatically @generated by Poetry 1.7.1 and should not be changed by hand. +# This file is automatically @generated by Poetry 1.8.2 and should not be changed by hand. [[package]] name = "babel" @@ -224,24 +224,24 @@ pyparsing = {version = ">=2.4.2,<3.0.0 || >3.0.0,<3.0.1 || >3.0.1,<3.0.2 || >3.0 [[package]] name = "idna" -version = "3.6" +version = "3.7" description = "Internationalized Domain Names in Applications (IDNA)" optional = false python-versions = ">=3.5" files = [ - {file = "idna-3.6-py3-none-any.whl", hash = "sha256:c05567e9c24a6b9faaa835c4821bad0590fbb9d5779e7caa6e1cc4978e7eb24f"}, - {file = "idna-3.6.tar.gz", hash = "sha256:9ecdbbd083b06798ae1e86adcbfe8ab1479cf864e4ee30fe4e46a003d12491ca"}, + {file = "idna-3.7-py3-none-any.whl", hash = "sha256:82fee1fc78add43492d3a1898bfa6d8a904cc97d8427f683ed8e798d07761aa0"}, + {file = "idna-3.7.tar.gz", hash = "sha256:028ff3aadf0609c1fd278d8ea3089299412a7a8b9bd005dd08b9f8285bcb5cfc"}, ] [[package]] name = "jinja2" -version = "3.1.3" +version = "3.1.4" description = "A very fast and expressive template engine." optional = false python-versions = ">=3.7" files = [ - {file = "Jinja2-3.1.3-py3-none-any.whl", hash = "sha256:7d6d50dd97d52cbc355597bd845fabfbac3f551e1f99619e39a35ce8c370b5fa"}, - {file = "Jinja2-3.1.3.tar.gz", hash = "sha256:ac8bd6544d4bb2c9792bf3a159e80bba8fda7f07e81bc3aed565432d5925ba90"}, + {file = "jinja2-3.1.4-py3-none-any.whl", hash = "sha256:bc5dd2abb727a5319567b7a813e6a2e7318c39f4f487cfe6c89c6f9c7d25197d"}, + {file = "jinja2-3.1.4.tar.gz", hash = "sha256:4a3aee7acbbe7303aede8e9648d13b8bf88a429282aa6122a993f0ac800cb369"}, ] [package.dependencies] @@ -252,13 +252,13 @@ i18n = ["Babel (>=2.7)"] [[package]] name = "markdown" -version = "3.5.2" +version = "3.6" description = "Python implementation of John Gruber's Markdown." 
optional = false python-versions = ">=3.8" files = [ - {file = "Markdown-3.5.2-py3-none-any.whl", hash = "sha256:d43323865d89fc0cb9b20c75fc8ad313af307cc087e84b657d9eec768eddeadd"}, - {file = "Markdown-3.5.2.tar.gz", hash = "sha256:e1ac7b3dc550ee80e602e71c1d168002f062e49f1b11e26a36264dafd4df2ef8"}, + {file = "Markdown-3.6-py3-none-any.whl", hash = "sha256:48f276f4d8cfb8ce6527c8f79e2ee29708508bf4d40aa410fbc3b4ee832c850f"}, + {file = "Markdown-3.6.tar.gz", hash = "sha256:ed4f41f6daecbeeb96e576ce414c41d2d876daa9a16cb35fa8ed8c2ddfad0224"}, ] [package.extras] @@ -393,13 +393,13 @@ wcmatch = ">=7" [[package]] name = "mkdocs-build-plantuml-plugin" -version = "1.9.0" +version = "1.10.0" description = "An MkDocs plugin to call plantuml locally or remote" optional = false python-versions = ">=3.2" files = [ - {file = "mkdocs-build-plantuml-plugin-1.9.0.tar.gz", hash = "sha256:b0468a2024741ff5c39d5307e34e54e11225ee0da378ef2bc35ed0e0f1cfc99d"}, - {file = "mkdocs_build_plantuml_plugin-1.9.0-py3-none-any.whl", hash = "sha256:ba738b698b69dfba8fbd49f92336a17cbe9df16c7b4d384b85528058301ec0c3"}, + {file = "mkdocs-build-plantuml-plugin-1.10.0.tar.gz", hash = "sha256:81614189fefd1627607775645c9498d6dfeb2566329dc5739bcb92ffd1e6cc71"}, + {file = "mkdocs_build_plantuml_plugin-1.10.0-py3-none-any.whl", hash = "sha256:56620b948fd422ce7d71da1dbedd606fe5db7ad3421014b4643dc9f4ff70cf51"}, ] [package.dependencies] @@ -424,13 +424,13 @@ requests = "*" [[package]] name = "mkdocs-git-revision-date-localized-plugin" -version = "1.2.4" +version = "1.2.5" description = "Mkdocs plugin that enables displaying the localized date of the last git modification of a markdown file." 
optional = false python-versions = ">=3.8" files = [ - {file = "mkdocs-git-revision-date-localized-plugin-1.2.4.tar.gz", hash = "sha256:08fd0c6f33c8da9e00daf40f7865943113b3879a1c621b2bbf0fa794ffe997d3"}, - {file = "mkdocs_git_revision_date_localized_plugin-1.2.4-py3-none-any.whl", hash = "sha256:1f94eb510862ef94e982a2910404fa17a1657ecf29f45a07b0f438c00767fc85"}, + {file = "mkdocs_git_revision_date_localized_plugin-1.2.5-py3-none-any.whl", hash = "sha256:d796a18b07cfcdb154c133e3ec099d2bb5f38389e4fd54d3eb516a8a736815b8"}, + {file = "mkdocs_git_revision_date_localized_plugin-1.2.5.tar.gz", hash = "sha256:0c439816d9d0dba48e027d9d074b2b9f1d7cd179f74ba46b51e4da7bb3dc4b9b"}, ] [package.dependencies] @@ -462,13 +462,13 @@ test = ["mkdocs-include-markdown-plugin", "mkdocs-macros-test", "mkdocs-material [[package]] name = "mkdocs-material" -version = "9.5.12" +version = "9.5.18" description = "Documentation that simply works" optional = false python-versions = ">=3.8" files = [ - {file = "mkdocs_material-9.5.12-py3-none-any.whl", hash = "sha256:d6f0c269f015e48c76291cdc79efb70f7b33bbbf42d649cfe475522ebee61b1f"}, - {file = "mkdocs_material-9.5.12.tar.gz", hash = "sha256:5f69cef6a8aaa4050b812f72b1094fda3d079b9a51cf27a247244c03ec455e97"}, + {file = "mkdocs_material-9.5.18-py3-none-any.whl", hash = "sha256:1e0e27fc9fe239f9064318acf548771a4629d5fd5dfd45444fd80a953fe21eb4"}, + {file = "mkdocs_material-9.5.18.tar.gz", hash = "sha256:a43f470947053fa2405c33995f282d24992c752a50114f23f30da9d8d0c57e62"}, ] [package.dependencies] @@ -597,17 +597,17 @@ windows-terminal = ["colorama (>=0.4.6)"] [[package]] name = "pymdown-extensions" -version = "10.7" +version = "10.8.1" description = "Extension pack for Python Markdown." 
optional = false python-versions = ">=3.8" files = [ - {file = "pymdown_extensions-10.7-py3-none-any.whl", hash = "sha256:6ca215bc57bc12bf32b414887a68b810637d039124ed9b2e5bd3325cbb2c050c"}, - {file = "pymdown_extensions-10.7.tar.gz", hash = "sha256:c0d64d5cf62566f59e6b2b690a4095c931107c250a8c8e1351c1de5f6b036deb"}, + {file = "pymdown_extensions-10.8.1-py3-none-any.whl", hash = "sha256:f938326115884f48c6059c67377c46cf631c733ef3629b6eed1349989d1b30cb"}, + {file = "pymdown_extensions-10.8.1.tar.gz", hash = "sha256:3ab1db5c9e21728dabf75192d71471f8e50f216627e9a1fa9535ecb0231b9940"}, ] [package.dependencies] -markdown = ">=3.5" +markdown = ">=3.6" pyyaml = "*" [package.extras] @@ -830,13 +830,13 @@ files = [ [[package]] name = "requests" -version = "2.31.0" +version = "2.32.0" description = "Python HTTP for Humans." optional = false -python-versions = ">=3.7" +python-versions = ">=3.8" files = [ - {file = "requests-2.31.0-py3-none-any.whl", hash = "sha256:58cd2187c01e70e6e26505bca751777aa9f2ee0b7f4300988b709f44e013003f"}, - {file = "requests-2.31.0.tar.gz", hash = "sha256:942c5a758f98d790eaed1a29cb6eefc7ffb0d1cf7af05c3d2791656dbd6ad1e1"}, + {file = "requests-2.32.0-py3-none-any.whl", hash = "sha256:f2c3881dddb70d056c5bd7600a4fae312b2a300e39be6a118d30b90bd27262b5"}, + {file = "requests-2.32.0.tar.gz", hash = "sha256:fa5490319474c82ef1d2c9bc459d3652e3ae4ef4c4ebdd18a21145a47ca4b6b8"}, ] [package.dependencies] @@ -960,4 +960,4 @@ bracex = ">=2.1.1" [metadata] lock-version = "2.0" python-versions = "^3.11" -content-hash = "49c02eabe31011d174c8d596bb23c38364d2b6742d578e5cc3d56ceccc7f4b68" +content-hash = "54dcc8456dcf3adbfb23fb98c246dc556cf8e48f8f677c4073b248ee5cbb6307" diff --git a/pyproject.toml b/pyproject.toml index 05ae7fb17..9803b26a0 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -8,10 +8,10 @@ authors = [] python = "^3.11" mkdocs = "^1.5.3" Pygments = "^2.17.2" -pymdown-extensions = "^10.2" -mkdocs-material = "^9.5.12" -mkdocs-build-plantuml-plugin = "^1.9.0" 
-mkdocs-git-revision-date-localized-plugin = "^1.2.1" +pymdown-extensions = "^10.8" +mkdocs-material = "^9.5.18" +mkdocs-build-plantuml-plugin = "^1.10.0" +mkdocs-git-revision-date-localized-plugin = "^1.2.5" mkdocs-redirects = "^1.2.1" mkdocs-git-committers-plugin-2 = "^2.3.0" mkdocs-macros-plugin = "^1.0.5" diff --git a/tenants/nav/explanation/migrating-to-gcp.md b/tenants/nav/explanation/migrating-to-gcp.md deleted file mode 100644 index ba74af03d..000000000 --- a/tenants/nav/explanation/migrating-to-gcp.md +++ /dev/null @@ -1,204 +0,0 @@ -# Migrating to GCP - -## Why migrate our application\(s\)? - -* Access to self-service [Google-managed buckets](../how-to-guides/persistence/buckets/create.md) and [Postgres databases](../how-to-guides/persistence/postgres.md). -* Access to Google Cloud features. -* [Zero Trust security model](zero-trust.md) instead of FSS/SBS zone model. -* Cost efficient and future proof. - -## Prerequisites - -* The team needs to update their ROS and PVK analysis to migrate to GCP. -* Read this [roles and responsibilites](../reference/legal/roles-responsibilities.md) - -### Security - -Our GCP clusters use a zero trust security model, implying that the application must specify both incoming and outgoing connections in order to receive or send traffic at all. This is expressed using [access policies](../how-to-guides/access-policies.md). - -### Privacy - -Google is cleared to be a data processor for personally identifiable information \(PII\) at NAV. However, before your team moves any applications or data to GCP the following steps should be taken: - -1. Verify that you have a valid and up-to-date PVK for your application. This document should be [tech stack agnostic](../reference/legal/app-pvk.md) and as such does not need to be changed to reflect the move to GCP. -2. If the application stores any data in GCP, update [Behandlingskatalogen](https://behandlingskatalog.nais.adeo.no/) to reflect that Google is a data processor. 
- -### ROS - -The ROS analysis for the team's applications need to be updated to reflect any changes in platform components used. For example, if your team has any specific measures implemented to mitigate risks related to "Kode 6 / 7 users", you should consider if these measures still apply on the new infrastructure or if you want to initiate any additional measures. When updating the ROS, please be aware that the GCP components you are most likely to use have already undergone [risk assessment by the nais team](../reference/legal/nais-ros.md) and that you can refer to these ROS documents in your own risk assessment process. - -## FAQ - -### What do we have to change? - -???+ faq "Answer" - - * Cluster name: All references to cluster name. \(Logs, grafana, deploy, etc.\) - * Secrets: are now stored as native secrets in the cluster, rather than externally in Vault. - * Namespace: If your application is in the `default` namespace, you will have to move to team namespace - * Storage: Use `GCS-buckets` instead of `s3` in GCP. Buckets, and access to them, are expressed in your [application manifest](../nais-application/example.md) - * Ingress: There are some domains that are available both on-prem and in GCP, but some differ, make sure to verify before you move. - * Postgres: A new database \(and access to it\) is automatically configured when expressing `sqlInstance` in your [application manifest](../nais-application/example.md) - - We're currently investigating the possibility of using on-prem databases during a migration window. - - * PVK: Update your existing PVK to include cloud - - [GCP compared to on-premises](migrating-to-gcp.md#gcp-compared-to-on-premises) summarizes the differences and how that may apply to your application. - -### What should we change? - -???+ faq "Answer" - - - Use [TokenX](../security/auth/tokenx.md) instead of API-GW. 
- - If using automatically configured [Google-managed buckets](../persistence/buckets.md) or [postgres](../persistence/postgres.md), use the [Google APIs](https://cloud.google.com/storage/docs/reference/libraries). - -### What do we not need to change? - -???+ faq "Answer" - - You do not have to make any changes to your application code. Ingresses work the same way, although some domains overlap and others are exclusive. Logging, secure logging, metrics and alerts work the same way. - -### What can we do now to ease migration to GCP later? - -???+ faq "Answer" - - - Make sure your PVK is up to date. - - Deploy your application to your team's namespace instead of `default`, as the latter is not available in GCP. - - Use a token auth flow between your applications: either [TokenX](../security/auth/tokenx.md) or the [AAD on-behalf-of or AAD client_credentials flow](../security/auth/azure-ad/README.md), depending on your use case. This allows for a more seamless migration of your applications. E.g. if you have two apps in FSS, you can migrate one without the other. - -### What about PVK? - -???+ faq "Answer" - - A PVK is not a unique requirement for GCP, so all applications should already have one. See [about security and privacy when using platform services](../README.md#about-security-and-privacy-when-using-platform-services) for details. - -### How do we migrate our database? - -???+ faq "Answer" - - See [Migrating databases to GCP](./migrating-databases-to-gcp.md). - -### Why is there no Vault in GCP? - -???+ faq "Answer" - - There is native functionality in GCP that overlaps with many of the use cases that Vault has covered on-prem. Using these mechanisms removes the need to deal with these secrets at all. Introducing team namespaces allows the teams to manage their own secrets in their own namespaces without the need for IAC and manual routines.
For other secrets that are not used by the application during runtime, you can use the [Secret Manager](../security/secrets/google-secrets-manager.md) in each team's GCP project. - -### How do we migrate from Vault to Secrets Manager? - -???+ faq "Answer" - - See the [Secrets Manager documentation](../security/secrets/google-secrets-manager.md). - -### How do we migrate from filestorage to buckets? - -???+ faq "Answer" - - Add a bucket to your application spec. Copy the data out of the filestore using [s3cmd](https://s3tools.org/s3cmd) and into the bucket using [gsutil](https://cloud.google.com/storage/docs/gsutil). - -### What are the plans for cloud migration in NAV? - -???+ faq "Answer" - - Both SBS clusters are now retired. NAV's strategic goal is to shut off all on-prem datacenters by the end of 2023. - -### What can we do in our GCP project? - -???+ faq "Answer" - - The teams' GCP projects are primarily used for automatically generated resources \(buckets and postgres\). We're working on extending the service offering. However, additional access may be granted if required by the team. - -### How long does it take to migrate? - -???+ faq "Answer" - - A minimal application without any external requirements only has to change a single configuration parameter when deploying, and can be migrated in 5 minutes. [GCP compared to on-premises](migrating-to-gcp.md#gcp-compared-to-on-premises) summarizes the differences and how they may apply to your application. - - -### We have personally identifiable and/or sensitive data in our application, and we heard about the Privacy Shield invalidation. Can we still use GCP?
- -???+ faq "Answer" - - **Yes.** [NAV's evaluation of our Data Processor Agreement with Google post-Schrems II](https://navno.sharepoint.com/:w:/r/sites/Skytjenesterforvaltningsregime/_layouts/15/Doc.aspx?sourcedoc=%7BA9562232-BB00-40CB-930D-4EF254A5AD7F%7D&file=2020-10-10%20GCP%20-%20behandling%20og%20avtaler.docx&action=default&mobileredirect=true) is that it still protects us and is valid for use **given that data is stored and processed in data centers located within the EU/EEA**. If your team uses resources provisioned through NAIS, this is guaranteed by the nais team. If your team uses any other GCP services, the team is responsible for ensuring that only resources within the EU/EEA are used \(as well as for evaluating the risk of using these services\). - - See [Laws and regulations/Application PVK](../legal/app-pvk.md) for details. - -### How do I reach an application found on-premises from my application in GCP? - -???+ faq "Answer" - - The application _on-premises_ should generally fulfill the following requirements: - - 1. Be secured with [OAuth 2.0](../security/auth/README.md). That is, either: - - a. [TokenX](../security/auth/tokenx.md), or - - b. [Azure AD](../security/auth/azure-ad/README.md) - 2. Be exposed to GCP using a special ingress: - - `https://.dev-fss-pub.nais.io` - - `https://.prod-fss-pub.nais.io` - - The application _on-premises_ must then: - - 1. Add the ingress created above to the list of ingresses: - - ```yaml - spec: - ingresses: - - https://.-fss-pub.nais.io - ``` - - 2. If secured with OAuth 2.0, ensure that the application also has set up inbound access policies: - - a. [Access Policies for TokenX](../security/auth/tokenx.md#access-policies) - - b. [Access Policies for Azure AD](../security/auth/azure-ad/configuration.md#pre-authorization) - - The application _in GCP_ must then: - - 1.
Add the above hosts to their [outbound external access policies](../nais-application/access-policy.md#external-services): - - ```yaml - spec: - accessPolicy: - outbound: - external: - - host: .-fss-pub.nais.io - ``` - -### How do I reach an application found on GCP from my application on-premises? - -???+ faq "Answer" - - The application in GCP must be exposed on a [matching ingress](gcp.md#accessing-the-application): - - | ingress | reachable from zone | - | :--- | :--- | - | `.intern.dev.nav.no` | `dev-fss` | - | `.intern.nav.no` | `prod-fss` | - | `.nav.no` | internet, i.e. all clusters | - - The application on-premises should _not_ have to use webproxy to reach these ingresses. - -## GCP compared to on-premises - -| Feature | on-prem | gcp | Comment | -|:--------------------------|:-----------|:----------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| Deploy | yes | yes | different clustername when deploying | -| Logging | yes | yes | different clustername in logs.adeo.no | -| Metrics | yes | yes | same mechanism, different datasource | -| Nais app dashboard | yes | yes | new and improved in GCP | -| Alerts | yes | yes | identical | -| Secure logs | yes | yes | different clustername in logs.adeo.no | -| Kafka | yes | yes | identical | -| Secrets | Vault | Secret manager | | -| Team namespaces | yes | yes | | -| Shared namespaces | yes | no | Default namespace not available for teams in GCP | -| Health checks | yes | yes | identical | -| Ingress | yes | yes | See [environments overview](../reference/environments.md) for available domains | -| Storage | Ceph | Buckets | | -| Postgres | yes \(IAC\) | yes \(self-service\) | | -| Laptop access | yes | yes | | -| domain: dev.intern.nav.no | | yes \(Automatic\) | Wildcard DNS points to GCP load balancer | -| Access to FSS services | | yes | 
Identical \(either API-gw or TokenX\) -| NAV truststore | yes | yes | | -| PVK required | yes | yes | amend to cover storage in cloud | -| Security | Zone Model | [zero-trust](./zero-trust.md) | | - diff --git a/tenants/nav/explanation/naisdevice.md b/tenants/nav/explanation/naisdevice.md deleted file mode 100644 index fd64379f0..000000000 --- a/tenants/nav/explanation/naisdevice.md +++ /dev/null @@ -1,12 +0,0 @@ -# naisdevice - -naisdevice is a mechanism provided by NAIS that lets you connect to services not available on the public internet from your machine. - -Examples of such services are: -- Access to the NAIS cluster with kubectl -- Applications on internal domains -- Internal NAIS services such as [console](https://console.<>.cloud.nais.io). - -In order to be able to connect, your machine needs to meet certain requirements. These requirements are enforced by a third-party service called [Kolide](https://kolide.com/) that is installed alongside naisdevice. - -Kolide uses Slack to communicate with you when something is wrong with your machine, guiding you through the process of fixing it. diff --git a/tenants/nav/how-to-guides/observability/metrics/grafana-from-infoscreen.md b/tenants/nav/how-to-guides/observability/metrics/grafana-from-infoscreen.md deleted file mode 100644 index 817a98cb0..000000000 --- a/tenants/nav/how-to-guides/observability/metrics/grafana-from-infoscreen.md +++ /dev/null @@ -1,41 +0,0 @@ ---- -description: How to show Grafana on an info-screen -tags: [guide, grafana] ---- -# Show Grafana on info-screen - -This how-to shows you how to show Grafana on an info-screen. - -## 1. Get service account - -In order to run Grafana from a big screen, you will need a Grafana service account. - -You get this by contacting us in the [#nais](https://nav-it.slack.com/archives/C5KUST8N6) channel on Slack. - -## 2.
Access Grafana - -=== "using browser extension" -To add the service account credentials to the header of your requests, you can use the [Modify Header Value](https://mybrowseraddon.com/modify-header-value.html) browser extension available for Chrome and Firefox. - -Set the following configuration in the extension: - -| Field | Value | -| ------------ | --------------------------------------------------------- | -| URL | `https://grafana-infoskjerm.<>.cloud.nais.io/*` | -| Domain | ✅ | -| Header name | `Authorization` | -| Add | ✅ | -| Header value | `Bearer ` | -| State | Active | - -=== "using token manually" - - Access `https://grafana-infoskjerm.<>.cloud.nais.io` with the service account credentials as a Bearer token: - - ```code - Authorization: Bearer - ``` - - !!! note - - Make sure you spell the `Authorization` header name correctly and that the value starts with `Bearer` \(capital B\) followed by the token. Replace `` with your service account token value. If done incorrectly, Grafana will reload constantly. diff --git a/tenants/nav/how-to-guides/persistence/influxdb/create.md b/tenants/nav/how-to-guides/persistence/influxdb/create.md deleted file mode 100644 index 92c3e9a5d..000000000 --- a/tenants/nav/how-to-guides/persistence/influxdb/create.md +++ /dev/null @@ -1,13 +0,0 @@ -!!! info "Disclaimer" - We [discourage use of Aiven InfluxDB](/explanation/database/influxdb) for new use cases and don't support InfluxDB as software. [BigQuery](/explanation/database/bigquery) might be a better fit for many use cases. Questions about Aiven and provisioning can be directed to #nais on Slack. - -# 1. Get an InfluxDB instance - -We use an IaC repo to provision InfluxDB instances. Head over to [aiven-iac](https://github.com/navikt/aiven-iac#influxdb) to learn how to get your own instance. -(TODO: NAV ONLY) -# 2. Retention policies -The default database is created with a default retention policy of 30 days. You might want to adjust this by e.g.
creating a new default retention policy with 1 year retention: - -``` -create retention policy "365d" on "defaultdb" duration 365d replication 1 shard duration 1w default -``` \ No newline at end of file diff --git a/tenants/nav/how-to-guides/persistence/influxdb/datasource.md b/tenants/nav/how-to-guides/persistence/influxdb/datasource.md deleted file mode 100644 index bf33a2a3e..000000000 --- a/tenants/nav/how-to-guides/persistence/influxdb/datasource.md +++ /dev/null @@ -1,15 +0,0 @@ -# Datasource in Grafana - -Let us know in [#nais](https://nav-it.slack.com/archives/C5KUST8N6) if you want your InfluxDB to be exposed in Grafana. -This means that everyone can access your data. -(TODO: Nav ONLY) -# Access from laptop - -With Naisdevice you have access to the _aiven-prod_ gateway. -This is a JITA (just in time access) gateway, so you need to describe why you need access, but access is granted automatically. - -``` -influx -username avnadmin -password foo -host influx-instancename-nav-dev.aivencloud.com -port 26482 -ssl - ``` - -PS: Remember to use an InfluxDB CLI older than v2, for example v1.8.3. \ No newline at end of file diff --git a/tenants/nav/how-to-guides/persistence/migrating-databases-to-gcp.md b/tenants/nav/how-to-guides/persistence/migrating-databases-to-gcp.md deleted file mode 100644 index 6364eba01..000000000 --- a/tenants/nav/how-to-guides/persistence/migrating-databases-to-gcp.md +++ /dev/null @@ -1,161 +0,0 @@ -# Migrating databases to GCP - -## Migrating databases to GCP postgreSQL - -Suggested patterns for moving on-prem databases to GCP postgreSQL. - -Disclaimer: These are options for migrations to GCP postgreSQL. Others may work better for your team. - -## Prerequisites - -The team needs to update their ROS and PVK analysis to migrate to GCP. Refer to the ROS and PVK section under [Google Cloud Platform clusters](gcp.md). - -See database creation in GCP in [Google Cloud Platform persistence](../persistence/postgres.md).
- -## Migration paths available - -### From on-premise Oracle - -#### Replication migration using migration application - -Create a simple migration application that supports writing data to the new database. Using requests sent to this application you can populate the new postgreSQL database. - -Rewrite the oracle DDL scripts to postgreSQL. If your oracle database contains specific oracle procedures or functions that do not exist in postgreSQL, they will have to be recreated in some other way. There are tools available to help ease this rewrite, for example [ora2pg](http://ora2pg.darold.net/start.html). Create the postgreSQL database in GCP, start deploying the application to GCP with the empty database, and let flyway \(or other database versioning software\) create the DDLs. - -Create the migration app as a container in the same pod as the database application \(this is to avoid permission issues using the same database\). This migration application only handles the data transfer from the oracle database to postgreSQL in GCP. - -Examples: - -* [PAM Stillingsregistrering API Migration](https://github.com/navikt/pam-stillingsregistrering-api-migration/#pam-stillingsregistrering-api-migration) (documentation and code) -* [PAM AD Migration](https://github.com/navikt/pam-ad-migration) (no documentation, just code) -* [Rekrutteringsbistand migration](https://github.com/navikt/rekrutteringsbistand-kandidat-api-migrering) (not using JPA for easier code and less memory-intensive data handling, but as-is only suitable for on-premise migration) - -Trigger the migration from the command line \(or use another form of trigger\) and read the data from a feed or kafka.
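The transfer step of such a migration application can be sketched as a small transform-and-insert loop. A minimal sketch in Python (all names are illustrative, not taken from the linked examples; a real app would execute the statements against the GCP database with a driver such as psycopg2 and commit in batches):

```python
import json

def to_insert(table: str, row: dict) -> tuple[str, list]:
    """Build a parameterized INSERT for one row received from the feed."""
    columns = sorted(row)
    placeholders = ", ".join(["%s"] * len(columns))
    sql = f'INSERT INTO {table} ({", ".join(columns)}) VALUES ({placeholders})'
    return sql, [row[c] for c in columns]

def handle_feed_message(message: str, table: str) -> tuple[str, list]:
    """One feed (or Kafka) message becomes one INSERT against the new database."""
    return to_insert(table, json.loads(message))

sql, params = handle_feed_message('{"id": 1, "title": "stilling"}', "ad")
print(sql)  # INSERT INTO ad (id, title) VALUES (%s, %s)
```

Because the loop is driven by the feed offset, it can be stopped and restarted at any point, which is what makes the live-synchronization approach above possible.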
- -Pros: - -* No downtime -* Live synchronization between on-premise and GCP -* Migration controlled entirely by the team -* Migration can be stopped and restarted at any moment - -Cons: - -* Can be slow if large amounts of data are to be transferred; if this is the case, use kafka for the streaming process instead -* Can be tricky for complex databases - -!!! note - This procedure is also valid for on-premise postgreSQL migration, and even simpler as no rewrite is necessary. - -### From on-premise postgreSQL - -#### Migration using pg\_dump - -This method is suitable for applications that can have the database in read-only mode, or applications that allow for some downtime. It requires that the database instance and DDLs are created up front \(i.e. deploy your application in GCP and let flyway create the DDLs\): - -Use a docker container image with psql and cloudsdk: [GCP migration image](https://github.com/navikt/gcp-migrering). This image lets you do all the following actions from one place. You can either use the manual steps described below or the migration_data.sh script in the repository. - -In any case you need to create a secret in your namespace containing the Google SA you want to use to do the migration. Add the token to the secret.yaml file and apply it in your namespace. - -Deploy the pod into an on-premise cluster that can connect to the database - -```shell -kubectl apply -f https://raw.githubusercontent.com/navikt/gcp-migrering/main/gcloud.yaml -``` - -`exec` into that pod - -```shell -kubectl exec -it gcloud -- /bin/bash -``` - -Log in to gcloud with your own NAV-account - -```shell -gcloud auth login -``` - -Configure the project id \(find the project id with `gcloud projects list --filter team`\) - -```shell -gcloud config set project -``` - -Create a GCP bucket. You will need the `roles/storage.admin` IAM role for the required operations on the bucket.
- -```shell -gsutil mb -l europe-north1 gs:// -``` - -Find the GCP service account e-mail \(the instance id is specified in your `nais.yaml` file\) - -```shell -gcloud sql instances describe | yq r - serviceAccountEmailAddress -``` - -Set the objectAdmin role for the bucket \(with the previous e-mail\) - -```shell -gsutil iam ch serviceAccount::objectAdmin gs:/// -``` - -Use `pg_dump` to create the dump file. Notes: - -- Make sure that you stop writes to database before running `pg_dump`. -- Get a [database user from Vault](https://github.com/navikt/utvikling/blob/main/docs/teknisk/Vault.md#--hente-ut-postgresql-credentials-til-en-utvikler). -- If the database in GCP already has the `flyway_schema_history` table, - you might want to exclude the equivalent table in the dump by using the `--exclude-table=flyway_schema_history` option. - -```shell -pg_dump \ - -h \ - -d \ - -U \ - --format=plain --no-owner --no-acl --data-only -Z 9 > dump.sql.gz -``` - -Copy the dump file to GCP bucket - -```shell -gsutil -o GSUtil:parallel_composite_upload_threshold=150M -h "Content-Type:application/x-gzip" cp dump.sql.gz gs:/// -``` - -Import the dump into the GCP postgreSQL database. Notes: - -- You need the `roles/cloudsql.admin` IAM role in order to perform the import. -- The `user` in the command below should be a GCP SQL Instance user. -- If the GCP Postgres database has any existing tables or sequences, make sure that the `user` has all required grants for these. - -```shell -gcloud sql import sql gs:///dump.sql.gz \ - --database= \ - --user= -``` - -Verify that the application is behaving as expected and that the data in the new database is correct. Finally we need to switch loadbalancer to route to the GCP application instead of the on-premise equivalent. 
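One way to do the verification step above is to compare per-table row counts between the on-premise and GCP databases before switching traffic. A rough sketch of the comparison logic in Python (the counts themselves would come from `SELECT count(*)` queries against each database; the table names below are made up):

```python
def diff_counts(source: dict[str, int], target: dict[str, int]) -> dict[str, tuple]:
    """Return tables whose row counts differ, or that exist on only one side."""
    mismatches = {}
    for table in source.keys() | target.keys():
        s, t = source.get(table), target.get(table)
        if s != t:
            mismatches[table] = (s, t)
    return mismatches

onprem = {"users": 1042, "orders": 58211}
gcp = {"users": 1042, "orders": 58199}  # e.g. writes were not fully stopped
print(diff_counts(onprem, gcp))         # {'orders': (58211, 58199)}
```

An empty result is a reasonable sanity check before the load balancer switch; a non-empty one usually means writes were not stopped before `pg_dump` ran.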
- -Delete the bucket in GCP after migration is complete - -```shell -gsutil -m rm -r gs:// -gsutil rb gs:// -``` - -Pros: - -* Easy and relatively fast migration path -* No need for separate migration application or streams to populate database - -Cons: - -* Requires downtime for the application, or at least no writes to database -* Requires node with access to on-premise database and GCP buckets - -#### Replication migration using migration application - -Same procedure as for Oracle. - -#### Replication migration using pgbouncer - -Not available as of now. - diff --git a/tenants/nav/reference/legal/app-pvk.md b/tenants/nav/legal/app-pvk.md similarity index 57% rename from tenants/nav/reference/legal/app-pvk.md rename to tenants/nav/legal/app-pvk.md index cfac968e0..f16c2f0bc 100644 --- a/tenants/nav/reference/legal/app-pvk.md +++ b/tenants/nav/legal/app-pvk.md @@ -1,7 +1,7 @@ # Application privacy impact assessments (PVK) -Before deploying any application that processes and/or stores data to nais, a privacy assessment \(PVK\) must be conducted. This PVK should be kept reasonably up-to-date through the life cycle of the application. Refer to the [PVK process documentation for details](https://navno.sharepoint.com/sites/intranett-personvern/SitePages/PVK.aspx) \(NAV internal link\). +Before deploying any application that processes and/or stores data to nais, a privacy assessment (PVK) must be conducted. This PVK should be kept reasonably up-to-date through the life cycle of the application. Refer to the [PVK process documentation for details](https://navno.sharepoint.com/sites/intranett-personvern/SitePages/PVK.aspx) \(NAV internal link\). -Ideally, the language in your PVK should be tech stack agnostic, meaning there will not be any need to change this document as components are changed or the application is [moved between on-premises and cloud clusters](../clusters/migrating-to-gcp.md). 
Any technology specific considerations belong in [the risk assessment](app-ros.md). +Ideally, the language in your PVK should be tech stack agnostic, meaning there will not be any need to change this document as components are changed or the application is [moved between on-premises and cloud clusters](../workloads/explanations/migrating-to-gcp.md). Any technology specific considerations belong in [the risk assessment](app-ros.md). For migrating applications to the cloud, a document has been prepared to aid the PVK process internal for NAV [NAIS i sky – personvern og sikkerhet. internal for NAV](https://navno.sharepoint.com/:w:/s/Skystrategi817/EcVERNkDfLlIt6s0hjSIxoQBGnyufj1ZEZABLtJ-wIdbNg?e=4%3AOj6D2G&at=9&CID=fc0f7ebb-69d8-4ceb-33f4-2778fa0dfd50) diff --git a/tenants/nav/reference/legal/app-ros.md b/tenants/nav/legal/app-ros.md similarity index 100% rename from tenants/nav/reference/legal/app-ros.md rename to tenants/nav/legal/app-ros.md diff --git a/tenants/nav/reference/legal/arkivloven.md b/tenants/nav/legal/arkivloven.md similarity index 100% rename from tenants/nav/reference/legal/arkivloven.md rename to tenants/nav/legal/arkivloven.md diff --git a/tenants/nav/reference/legal/dpa/README.md b/tenants/nav/legal/dpa/README.md similarity index 100% rename from tenants/nav/reference/legal/dpa/README.md rename to tenants/nav/legal/dpa/README.md diff --git a/tenants/nav/reference/legal/dpa/aiven-dpa.md b/tenants/nav/legal/dpa/aiven-dpa.md similarity index 78% rename from tenants/nav/reference/legal/dpa/aiven-dpa.md rename to tenants/nav/legal/dpa/aiven-dpa.md index 0b2a503b0..a20305800 100644 --- a/tenants/nav/reference/legal/dpa/aiven-dpa.md +++ b/tenants/nav/legal/dpa/aiven-dpa.md @@ -4,5 +4,5 @@ NAV has, based on the Schrems II verdict that invalidated The EU-US Privacy Shie Other relevant documents: -* [Risk analysis related to the Kafka service and underlaying platform](https://apps.powerapps.com/play/f8517640-ea01-46e2-9c09-be6b05013566?ID=190) +* [Risk 
analysis related to the Kafka service and underlying platform](https://apps.powerapps.com/play/f8517640-ea01-46e2-9c09-be6b05013566?ID=190) diff --git a/tenants/nav/reference/legal/dpa/azure-dpa.md b/tenants/nav/legal/dpa/azure-dpa.md similarity index 100% rename from tenants/nav/reference/legal/dpa/azure-dpa.md rename to tenants/nav/legal/dpa/azure-dpa.md diff --git a/tenants/nav/reference/legal/dpa/gcp-dpa.md b/tenants/nav/legal/dpa/gcp-dpa.md similarity index 100% rename from tenants/nav/reference/legal/dpa/gcp-dpa.md rename to tenants/nav/legal/dpa/gcp-dpa.md diff --git a/tenants/nav/reference/legal/nais-pvk.md b/tenants/nav/legal/nais-pvk.md similarity index 95% rename from tenants/nav/reference/legal/nais-pvk.md rename to tenants/nav/legal/nais-pvk.md index 36688eb08..591b526f6 100644 --- a/tenants/nav/reference/legal/nais-pvk.md +++ b/tenants/nav/legal/nais-pvk.md @@ -6,5 +6,5 @@ The exception to the data agnostic rule is data that the platform itself is the Thus, the only personally identifiable information (PII) that the platform processes are access logs for developers at NAV. A PVK for this data processing has been conducted. -For details on what each team's responsibilites are with respect to data processing on the platform, refer to [roles and responsibilities](./roles-responsibilities.md). +For details on what each team's responsibilities are with respect to data processing on the platform, refer to [roles and responsibilities](roles-responsibilities.md).
diff --git a/tenants/nav/reference/legal/nais-ros.md b/tenants/nav/legal/nais-ros.md similarity index 86% rename from tenants/nav/reference/legal/nais-ros.md rename to tenants/nav/legal/nais-ros.md index b045a236e..d357fb8e9 100644 --- a/tenants/nav/reference/legal/nais-ros.md +++ b/tenants/nav/legal/nais-ros.md @@ -12,6 +12,7 @@ The nais team has conducted the following risk assessments: * [Aiven ElasticSearch](https://apps.powerapps.com/play/f8517640-ea01-46e2-9c09-be6b05013566?ID=515) * [Google Secret Manager](https://apps.powerapps.com/play/f8517640-ea01-46e2-9c09-be6b05013566?ID=538) * [Dataplattform (including BigQuery)](https://apps.powerapps.com/play/f8517640-ea01-46e2-9c09-be6b05013566?ID=607) +* [Bruk av ansatt.nav.no](https://apps.powerapps.com/play/f8517640-ea01-46e2-9c09-be6b05013566?ID=1670) -\(Note that all risk assessments link to the NAV internal tool TryggNok.\) +(Note that all risk assessments link to the NAV internal tool TryggNok.) diff --git a/tenants/nav/reference/legal/roles-responsibilities.md b/tenants/nav/legal/roles-responsibilities.md similarity index 100% rename from tenants/nav/reference/legal/roles-responsibilities.md rename to tenants/nav/legal/roles-responsibilities.md diff --git a/tenants/nav/how-to-guides/observability/logs/access-secure-logs.md b/tenants/nav/observability/logging/how-to/access-secure-logs.md similarity index 83% rename from tenants/nav/how-to-guides/observability/logs/access-secure-logs.md rename to tenants/nav/observability/logging/how-to/access-secure-logs.md index 30bb2bb65..b407d0425 100644 --- a/tenants/nav/how-to-guides/observability/logs/access-secure-logs.md +++ b/tenants/nav/observability/logging/how-to/access-secure-logs.md @@ -1,11 +1,11 @@ --- -tags: [guide] +tags: [how-to, logging] --- # Access secure logs Once everything is configured, your secure logs will be sent to the `tjenestekall-*` index in kibana. 
To gain access to these logs, you need to do the following: -## 1 Create an AD-group +## Create an AD-group To make sure you gain access to the proper logs, you need an AD-group connected to the nais-team. So the first thing you do is create this group. @@ -27,17 +27,16 @@ Den må inn i Remedy. Enheter i Nav som skal ha tilgang: . E.g (2990 - IT-AVDELINGEN) ``` -![ticket](../../assets/jira_secure_log.png) +![ticket](../../../assets/jira_secure_log.png) -## 2 Connect the AD group to your team in Kibana +## Connect the AD group to your team in Kibana -The logs your apps produces are linked with your [nais-team](../../team.md). +The logs your apps produce are linked with your [NAIS team](../../../explanations/team.md). Administrators of Kibana will create a role for your team with read rights to those logs. -Whoever is in the AD-group (created in step 1) will get the Kibana role, and can thus read all logs produced by apps belonging to the nais-team. +Whoever is in the AD-group (created in step 1) will get the Kibana role, and can thus read all logs produced by apps belonging to the team. +Ask for this in the [#kibana](https://nav-it.slack.com/archives/C7T8QHXD3) Slack channel; provide the name of the AD-group and the name of your team in the message. -Ask in the [#atom](https://nav-it.slack.com/archives/C7TQ25L9J) Slack channel to connect the AD-group (created in step 1) to your nais-team. - -## 3 Put people into the AD-group +## Put people into the AD-group This must be done by "identansvarlig". For NAV-IT employees, this is `nav.it.identhandtering@nav.no`. Send them an email and ask for access with a CC to whoever is your superior.
diff --git a/tenants/nav/how-to-guides/observability/logs/audit-logs.md b/tenants/nav/observability/logging/how-to/audit-logs.md similarity index 91% rename from tenants/nav/how-to-guides/observability/logs/audit-logs.md rename to tenants/nav/observability/logging/how-to/audit-logs.md index 01dd38b70..c41dce7c2 100644 --- a/tenants/nav/how-to-guides/observability/logs/audit-logs.md +++ b/tenants/nav/observability/logging/how-to/audit-logs.md @@ -1,3 +1,7 @@ +--- +tags: [how-to, logging] + + # Audit logs Most applications where a user processes data related to another user need to log audit statements, detailing which user did what action on which subject. These logs need to follow a specific format and be accessible by ArcSight. diff --git a/tenants/nav/how-to-guides/observability/logs/enable-secure-logs.md b/tenants/nav/observability/logging/how-to/enable-secure-logs.md similarity index 87% rename from tenants/nav/how-to-guides/observability/logs/enable-secure-logs.md rename to tenants/nav/observability/logging/how-to/enable-secure-logs.md index 4e07ade22..cbc783c22 100644 --- a/tenants/nav/how-to-guides/observability/logs/enable-secure-logs.md +++ b/tenants/nav/observability/logging/how-to/enable-secure-logs.md @@ -1,5 +1,5 @@ --- -tags: [guide] +tags: [how-to, logging] --- # Enable secure logs @@ -11,7 +11,13 @@ Some applications have logs with information that should not be stored with the This guide contains a deprecated syntax for enabling secure logs. With the new syntax all logs will be sent to secure logs when enabled and will not require any special log configuration. -## 1. Enabling secure logs [manifest](../../../reference/application-spec.md) +## Prerequisites + +If your NAIS team has already at any point produced secure logs, you can skip this step. + +If your team has never before produced secure logs, before enabling them for the first time, give a warning in the [#kibana](https://nav-it.slack.com/archives/C7T8QHXD3) Slack channel.
There are some things that need to be adjusted before a new team can start sending. Remember to include the name of your NAIS team in the message. + +## Enabling secure logs [manifest](../../../workloads/application/reference/application-spec.md) ???+ note ".nais/app.yaml" @@ -21,7 +27,7 @@ Some applications have logs with information that should not be stored with the enabled: true ``` -## 2. Set log rotation +## Set log rotation With secure logs enabled a directory `/secure-logs/` will be mounted in the application container. Every `*.log` file in this directory will be monitored and the content transferred to Elasticsearch. Make sure that these files are readable for the log shipper \(the process runs as uid/gid 1065\). @@ -50,7 +56,7 @@ Log files should be in JSON format as the normal application logs. Here is an ex ``` -## 3. Configure log shipping +## Configure log shipping Example configuration selecting which logs go to secure logs @@ -96,7 +102,7 @@ Example configuration selecting which logs go to secure logs ``` -## 4. Use secure logs in application +## Use secure logs in application Using the Logback config below you can log to secure logs by writing Kotlin-code like this: @@ -113,7 +119,7 @@ log.info("Non-sensitive data here") // Logging to non-secure app logs ``` See doc on [Logback filters](https://logback.qos.ch/manual/filters.html#evaluatorFilter) and [markers](https://www.slf4j.org/api/org/slf4j/MarkerFactory.html) -See [Example log configuration](../../../reference/logs-example.md) for further configuration examples. +See [Example log configuration](#example-log-configuration) for further configuration examples. 
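As a sketch of how an application might render one JSON-formatted secure-log line destined for a `*.log` file under `/secure-logs/` (the field names follow common Logstash conventions and are an assumption here, not a format mandated by the platform):

```python
import json
from datetime import datetime, timezone

def secure_log_line(message: str, level: str = "INFO") -> str:
    """Render one log entry as a single JSON line (field names are assumptions)."""
    entry = {
        "@timestamp": datetime.now(timezone.utc).isoformat(),
        "level": level,
        "message": message,
    }
    return json.dumps(entry)

# A real application would append such lines to e.g. /secure-logs/secure.log,
# which the log shipper (running as uid/gid 1065) then picks up.
print(secure_log_line("sensitive payload goes here"))
```

In a JVM application this rendering is what the Logback encoder above does for you; the sketch only illustrates the one-JSON-object-per-line shape the shipper expects.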
### Non-JSON logs diff --git a/tenants/nav/how-to-guides/observability/logs/kibana.md b/tenants/nav/observability/logging/how-to/kibana.md similarity index 84% rename from tenants/nav/how-to-guides/observability/logs/kibana.md rename to tenants/nav/observability/logging/how-to/kibana.md index fe8195c0d..734a32990 100644 --- a/tenants/nav/how-to-guides/observability/logs/kibana.md +++ b/tenants/nav/observability/logging/how-to/kibana.md @@ -1,6 +1,6 @@ --- description: This guide will help you get started with Kibana. -tags: [guide, kibana] +tags: [how-to, logging, kibana] --- # Get started with Elastic Kibana @@ -38,6 +38,6 @@ When you open Kibana you are prompted to select a workspace, select "Nav Logs" t Once the page loads you will see an empty page with a search bar. This is the query bar, and it is used to search for logs. You can use the query bar to search for logs by message, by field, or by a combination of both. -The query language is called [Kibana Query Language](../../../reference/observability/logs/kql.md) (`KQL`). KQL is a simplified version of Lucene query syntax. You can use KQL to search for logs by message, by field, or by a combination of both. +The query language is called [Kibana Query Language](../reference/kql.md) (`KQL`). KQL is a simplified version of Lucene query syntax. You can use KQL to search for logs by message, by field, or by a combination of both. There is also a time picker in the upper right corner of the page. You can use the time picker to select a time range to search for logs. The default time range is the last 15 minutes. If no logs show up, try to increase the time range.
diff --git a/tenants/nav/reference/observability/logs/kql.md b/tenants/nav/observability/logging/reference/kql.md similarity index 98% rename from tenants/nav/reference/observability/logs/kql.md rename to tenants/nav/observability/logging/reference/kql.md index e3f8a4c47..547b17805 100644 --- a/tenants/nav/reference/observability/logs/kql.md +++ b/tenants/nav/observability/logging/reference/kql.md @@ -1,7 +1,7 @@ --- title: KQL Reference description: Kibana Query Language (KQL) Reference for filtering data in Kibana. -tags: [reference, kibana] +tags: [reference, logging, kibana] --- # Kibana Query Language (KQL) Reference @@ -42,4 +42,4 @@ The following fields are common to all logs and can be used in your `KQL` query: | `message: "my message" OR level: "ERROR"` | Search for logs with the message "my message" or the level "ERROR" | | `message: "my message" AND NOT level: "ERROR"` | Search for logs with the message "my message" and not the level "ERROR" | | `message: "my message" AND level: "ERROR" AND NOT level: "WARN"` | Search for logs with the message "my message" and the level "ERROR" and not the level "WARN" | -| `message: "my message" AND level: "ERROR" OR level: "WARN"` | Search for logs with the message "my message" and the level "ERROR" or the level "WARN" | \ No newline at end of file +| `message: "my message" AND level: "ERROR" OR level: "WARN"` | Search for logs with the message "my message" and the level "ERROR" or the level "WARN" | diff --git a/tenants/nav/observability/metrics/how-to/grafana-from-infoscreen.md b/tenants/nav/observability/metrics/how-to/grafana-from-infoscreen.md new file mode 100644 index 000000000..63e86cd0d --- /dev/null +++ b/tenants/nav/observability/metrics/how-to/grafana-from-infoscreen.md @@ -0,0 +1,29 @@ +--- +description: How to show Grafana on an infoscreen +tags: [how-to, metrics, grafana] +--- +# Show Grafana on infoscreen + +To access Grafana from an infoscreen, some additional steps are required. 
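The steps below boil down to attaching an `Authorization: Bearer` header to every request towards Grafana. A stdlib-only Python sketch of that header, useful for verifying a token from the command line before configuring a browser; the URL and token below are placeholders:

```python
import urllib.request

# Placeholders -- substitute your tenant's Grafana URL and a real token.
GRAFANA_URL = "https://grafana.example.cloud.nais.io/api/health"
TOKEN = "glsa_example_token"

req = urllib.request.Request(
    GRAFANA_URL,
    headers={"Authorization": f"Bearer {TOKEN}"},
)

# Uncomment to actually call the endpoint once a real token is in place:
# with urllib.request.urlopen(req) as resp:
#     print(resp.status)
```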
+ +## Create service account token + +1. Find your team's service account in [Grafana](https://grafana.<>.cloud.nais.io/org/serviceaccounts). +1. Click on `Add token`, and set the desired expiration. +1. Copy the token value, and use it in the following step. + +## Access Grafana from infoscreen browser + +To add the service account credentials to the header of your requests, you can use the [Modify Header Value](https://mybrowseraddon.com/modify-header-value.html) browser extension available for Chrome and Firefox. + +Set the following configuration in the extension: + +| Field | Value | +| ------------ | --------------------------------------------------------- | +| URL | `https://grafana-infoskjerm.<>.cloud.nais.io/*` | +| Domain | ✅ | +| Header name | `Authorization` | +| Add | ✅ | +| Header value | `Bearer ` | +| State | Active | + diff --git a/tenants/nav/reference/cli/aiven.md b/tenants/nav/operate/cli/reference/aiven.md similarity index 88% rename from tenants/nav/reference/cli/aiven.md rename to tenants/nav/operate/cli/reference/aiven.md index 010f62a70..22b285655 100644 --- a/tenants/nav/reference/cli/aiven.md +++ b/tenants/nav/operate/cli/reference/aiven.md @@ -1,3 +1,7 @@ +--- +tags: [command-line, reference] +--- + # aiven command The aiven command can be used to create a AivenApplication and extract credentials. @@ -35,7 +39,7 @@ nais aiven create service username namespace ``` | Argument | Required | Description | -| --------- | -------- | ------------------------------------------------------------ | +|-----------|----------|--------------------------------------------------------------| | service | Yes | Service to use, Kafka or OpenSearch supported. | | username | Yes | Preferred username. | | namespace | Yes | Kubernetes namespace where AivenApplication will be created. 
| @@ -46,11 +50,11 @@ nais aiven create service username namespace nais aiven create -p nav-prod -s some-unique-secretname -e 10 kafka username namespace ``` -| Flag | Required | Short | Default | Description | -| ----------- | -------- | ----- | ------------------------------- | ------------------------------------------------ | -| pool | No | -p | nav-dev | [Kafka pool](../../persistence/kafka/README.md). | -| secret-name | No | -s | namespace-username-randomstring | Preferred secret-name. | -| expire | No | -e | 1 | Time in days the secret should be valid. | +| Flag | Required | Short | Default | Description | +|-------------|----------|-------|---------------------------------|-----------------------------------------------------| +| pool | No | -p | nav-dev | [Kafka pool](../../../persistence/kafka/README.md). | +| secret-name | No | -s | namespace-username-randomstring | Preferred secret-name. | +| expire | No | -e | 1 | Time in days the secret should be valid. | ### OpenSearch @@ -61,12 +65,12 @@ nais aiven create -i instance -a read -s some-unique-secretname -e 10 opensearch In OpenSearch, the username in the command is not related to the actual OpenSearch username, but used for internal purposes to identify the request. This is because the usernames on OpenSearch instances are pre-defined as of now, one for each possible access level. -| Flag | Required | Short | Default | Description | -| ----------- | -------- | ----- | ------------------------------- | ---------------------------------------------------------------------- | -| access | No | -a | read | One of: admin, read, write, readwrite. | -| instance | Yes | -i | | Name of the [instance](../../persistence/open-search.md#get-your-own). | -| secret-name | No | -s | namespace-username-randomstring | Preferred secret-name. | -| expire | No | -e | 1 | Time in days the secret should be valid. 
| +| Flag | Required | Short | Default | Description | +|-------------|----------|-------|---------------------------------|---------------------------------------------------------------------------| +| access | No | -a | read | One of: admin, read, write, readwrite. | +| instance | Yes | -i | | Name of the [instance](../../../persistence/opensearch/how-to/create.md). | +| secret-name | No | -s | namespace-username-randomstring | Preferred secret-name. | +| expire | No | -e | 1 | Time in days the secret should be valid. | ## get @@ -75,7 +79,7 @@ nais aiven get service secret-name namespace ``` | Argument | Required | Description | -| ----------- | -------- | ------------------------------------------------------ | +|-------------|----------|--------------------------------------------------------| | service | Yes | Service to use, Kafka or OpenSearch supported. | | secret-name | Yes | Default secret-name or flag `-s` in `create` command. | | namespace | Yes | Kubernetes namespace for the created AivenApplication. | diff --git a/tenants/nav/reference/cli/device.md b/tenants/nav/operate/cli/reference/device.md similarity index 87% rename from tenants/nav/reference/cli/device.md rename to tenants/nav/operate/cli/reference/device.md index 5dadcd665..39054cbe4 100644 --- a/tenants/nav/reference/cli/device.md +++ b/tenants/nav/operate/cli/reference/device.md @@ -1,6 +1,10 @@ +--- +tags: [command-line, reference] +--- + # device command -The device command can be used to connect to, disconnect from, and view the connection status of [naisdevice](../../explanation/naisdevice.md). +The device command can be used to connect to, disconnect from, and view the connection status of [naisdevice](../../naisdevice/README.md). Currently, the command requires the processes `naisdevice-agent` and `naisdevice-helper` to run, both of which can be run by starting naisdevice. 
## connect @@ -32,7 +36,7 @@ nais device status ``` | Flag | Required | Short | Default | Description | -| ------ | -------- | ----- | ------- | -------------------------------------------- | +|--------|----------|-------|---------|----------------------------------------------| | quiet | No | -q | false | Only print connection status. | | output | No | -o | yaml | Specify one of yaml or json as output format | @@ -50,7 +54,7 @@ nais device jita my-privileged-access-gateway ``` | Argument | Required | Description | -| -------- | -------- | ------------------------------------------------- | +|----------|----------|---------------------------------------------------| | gateway | Yes | The desired gateway to establish a connection to. | !!! tip "Which gateways require just-in-time access?" @@ -84,6 +88,6 @@ nais device config set AutoConnect true ``` | Argument | Required | Description | -| -------- | -------- | ------------------------------------------------------------------------------------------------------------------------------------ | +|----------|----------|--------------------------------------------------------------------------------------------------------------------------------------| | setting | Yes | The setting to adjust. Must be one of `[autoconnect, certrenewal]`, case insensitive. | | value | Yes | The value to set. Must be one of `[true, false]`, or anything [`strconv.ParseBool`](https://pkg.go.dev/strconv#ParseBool) can parse. | diff --git a/tenants/nav/operate/naisdevice/.pages b/tenants/nav/operate/naisdevice/.pages new file mode 100644 index 000000000..d6b4f5d21 --- /dev/null +++ b/tenants/nav/operate/naisdevice/.pages @@ -0,0 +1,5 @@ +nav: +- README.md +- 💡 Explanations: explanations +- 🎯 How-To: how-to +- ... 
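The `value` column above points to Go's [`strconv.ParseBool`](https://pkg.go.dev/strconv#ParseBool), which accepts exactly `1, t, T, TRUE, true, True, 0, f, F, FALSE, false, False`. A small Python mirror of that acceptance set, for reference:

```python
# The exact string sets accepted by Go's strconv.ParseBool.
_TRUE = {"1", "t", "T", "TRUE", "true", "True"}
_FALSE = {"0", "f", "F", "FALSE", "false", "False"}

def parse_bool(value: str) -> bool:
    """Return the boolean for a ParseBool-style string, or raise ValueError."""
    if value in _TRUE:
        return True
    if value in _FALSE:
        return False
    raise ValueError(f"invalid boolean value: {value!r}")
```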
diff --git a/tenants/nav/reference/naisdevice/jita.md b/tenants/nav/operate/naisdevice/explanations/jita.md similarity index 71% rename from tenants/nav/reference/naisdevice/jita.md rename to tenants/nav/operate/naisdevice/explanations/jita.md index 639fc0d20..82341354a 100644 --- a/tenants/nav/reference/naisdevice/jita.md +++ b/tenants/nav/operate/naisdevice/explanations/jita.md @@ -1,3 +1,7 @@ +--- +tags: [naisdevice, jita, explanation] +--- + # Just In Time Access (JITA) ## Providing access to sensitive information @@ -7,10 +11,8 @@ When you start naisdevice you will not be automatically connected to all the ava !!! info The gateways currently requiring JITA are `aiven-prod`, `onprem-k8s-prod` and `postgres-prod`. - Access can be requested via the naisdevice menu or through the [nais cli](../cli/commands/device.md#jita). + Access can be requested via the naisdevice menu or through the [nais cli](../../cli/reference/device.md#jita). Once authenticated you will be presented with a form where you have to supply a short reason for why access is needed and for how long. You will then be granted access for the requested amount of time instantly and automatically. -![A screenshow shows the part of the form, where you supply the short reason for why access is needed and for how long. The label of the input field is "Reason". A slider below allows you to select the duraction.](../assets/jita_portal.png) - - +![A screenshot that shows the part of the form, where you supply the short reason for why access is needed and for how long. The label of the input field is "Reason". 
A slider below allows you to select the duration.](../../../assets/jita_portal.png) diff --git a/tenants/nav/how-to-guides/naisdevice/install-kolide.md b/tenants/nav/operate/naisdevice/how-to/install-kolide.md similarity index 98% rename from tenants/nav/how-to-guides/naisdevice/install-kolide.md rename to tenants/nav/operate/naisdevice/how-to/install-kolide.md index 5342d765a..7e45366a0 100644 --- a/tenants/nav/how-to-guides/naisdevice/install-kolide.md +++ b/tenants/nav/operate/naisdevice/how-to/install-kolide.md @@ -1,3 +1,7 @@ +--- +tags: [naisdevice, how-to] +--- + # Install Kolide ### How to Install Kolide agent diff --git a/tenants/nav/how-to-guides/naisdevice/troubleshooting.md b/tenants/nav/operate/naisdevice/how-to/troubleshooting.md similarity index 93% rename from tenants/nav/how-to-guides/naisdevice/troubleshooting.md rename to tenants/nav/operate/naisdevice/how-to/troubleshooting.md index 2dc39e796..0fdec098b 100644 --- a/tenants/nav/how-to-guides/naisdevice/troubleshooting.md +++ b/tenants/nav/operate/naisdevice/how-to/troubleshooting.md @@ -1,4 +1,8 @@ -# Troubleshooting +--- +tags: [naisdevice, how-to] +--- + +# Troubleshooting naisdevice - _naisdevice cannot connect, yet `/msg @Kolide status` is happy!_ - Disconnect and re-connect `naisdevice` =\)!
diff --git a/tenants/nav/how-to-guides/naisdevice/uninstall-kolide.md b/tenants/nav/operate/naisdevice/how-to/uninstall-kolide.md similarity index 97% rename from tenants/nav/how-to-guides/naisdevice/uninstall-kolide.md rename to tenants/nav/operate/naisdevice/how-to/uninstall-kolide.md index 794fde776..30844d5c1 100644 --- a/tenants/nav/how-to-guides/naisdevice/uninstall-kolide.md +++ b/tenants/nav/operate/naisdevice/how-to/uninstall-kolide.md @@ -1,3 +1,7 @@ +--- +tags: [naisdevice, how-to] +--- + # Uninstall Kolide === "macOS" diff --git a/tenants/nav/persistence/influxdb/.pages b/tenants/nav/persistence/influxdb/.pages new file mode 100644 index 000000000..f092fbf61 --- /dev/null +++ b/tenants/nav/persistence/influxdb/.pages @@ -0,0 +1,5 @@ +nav: +- README.md +- 🎯 How-To: how-to +- 📚 Reference: reference +- ... diff --git a/tenants/nav/persistence/influxdb/README.md b/tenants/nav/persistence/influxdb/README.md new file mode 100644 index 000000000..1413d0298 --- /dev/null +++ b/tenants/nav/persistence/influxdb/README.md @@ -0,0 +1,56 @@ +--- +title: InfluxDB +tags: [influxdb, persistence, explanation] +--- + +# InfluxDB + +!!! warning "Deprecated" + + During 2021 Aiven informed us that they would probably stop supporting InfluxDB at some point in the next couple of years, but that no final decision was made. + For that reason, we discouraged use of Aiven InfluxDB and recommended that teams instead build a solution based around BigQuery for these kinds of business metrics. + + At the start of 2023, Aiven informed us that dropping InfluxDB was no longer on the roadmap, and that InfluxDB support would continue for the foreseeable future. + However, Aiven is still only supporting InfluxDB 1.8, and they have no plans to allow upgrading to InfluxDB 2 because of licensing issues. + + For that reason, we still discourage use of Aiven InfluxDB for new use cases. + For many use cases, the BigQuery alternative might be a better fit.
+ + See the end of this document for a description of [the BigQuery alternative](#suggested-alternative). + +## Getting started + +As only a few teams need an InfluxDB instance, we use an IaC repo to provision each instance. +Head over to [aiven-iac](https://github.com/navikt/aiven-iac#influxdb) to learn how to get your own instance. + +:dart: [Create an InfluxDB instance](how-to/create.md) + +:dart: [Access InfluxDB from an application](how-to/access.md) + +## Support + +We do not offer support for InfluxDB itself, but questions about Aiven and provisioning can be directed to [#nais](https://nav-it.slack.com/archives/C5KUST8N6) on Slack. + +## Suggested alternative + +Team Digihot has spent some time piloting a concept that uses BigQuery and Metabase as a replacement for InfluxDB and Grafana. +They are very satisfied with the solution, and we have concluded that this is a viable replacement going forward. +In their case, all applications that sent data to InfluxDB also used Kafka, so their solution is based around Kafka. +Depending on the situation and use case, it would also be possible to send data to BigQuery directly from the applications. + +Once the data is in BigQuery, you can use Metabase to create dashboards or data products. + +```mermaid +graph LR + accTitle: From Kafka to Metabase via BigQuery + accDescr: The diagram shows how data flows from the producers to Metabase. Producers with a Kafka client send data to a Kafka rapid; a BigQuery sink river (BigQuery client) writes it to BigQuery, which Metabase reads from. Non-Kafka apps can send data directly to BigQuery. + + P1[Producer 1<br>Kafka Client] --> K + P2[Producer 2<br>Kafka Client] --> K + + K[Kafka Rapid] --> BQSR + BQSR[BigQuery sink river<br>BigQuery Client] --> BQ + E[Non-Kafka App<br>BigQuery Client] --> BQ + + BQ[BigQuery] --> M[Metabase] + ``` diff --git a/tenants/nav/how-to-guides/persistence/influxdb/access.md b/tenants/nav/persistence/influxdb/how-to/access.md similarity index 83% rename from tenants/nav/how-to-guides/persistence/influxdb/access.md rename to tenants/nav/persistence/influxdb/how-to/access.md index 53faf4ade..73dce25f7 100644 --- a/tenants/nav/how-to-guides/persistence/influxdb/access.md +++ b/tenants/nav/persistence/influxdb/how-to/access.md @@ -1,7 +1,10 @@ +--- +tags: [influxdb, how-to] +--- # Access from NAIS-app -You need to specify the InfluxDB instance to get access from an application. See [nais.yaml-reference](/reference/application-spec#influxinstance). +You need to specify the InfluxDB instance to get access from an application. See [nais.yaml-reference](../../../workloads/application/reference/application-spec.md#influxinstance). When an application requesting an InfluxDB instance is deployed, credentials will be provided as environment variables. There is only one user for Influxdb, with complete access. diff --git a/tenants/nav/persistence/influxdb/how-to/create.md b/tenants/nav/persistence/influxdb/how-to/create.md new file mode 100644 index 000000000..dee737e5b --- /dev/null +++ b/tenants/nav/persistence/influxdb/how-to/create.md @@ -0,0 +1,11 @@ +--- +tags: [influxdb, how-to] +--- + +# Create an InfluxDB instance + +!!! info "Disclaimer" + + We discourage use of [InfluxDB](../README.md) for new use cases. [BigQuery](../../bigquery/README.md) might be a better fit for many use cases. Questions about Aiven and provisioning can be directed to #nais on Slack. + +We use an IaC repo to provision InfluxDB instances. Head over to [aiven-iac](https://github.com/navikt/aiven-iac#influxdb) to learn how to get your own instance.
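For the direct-to-BigQuery path mentioned above, an application mainly needs to shape each business metric as a flat row matching its table schema. A sketch of that shaping step in Python; the schema, event name, and table reference are hypothetical:

```python
import json
from datetime import datetime, timezone

def metric_row(event: str, value: float) -> dict:
    """Shape one business metric as a flat row (hypothetical schema)."""
    return {
        "event": event,
        "value": value,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }

row = metric_row("applications_processed", 42.0)

# With the google-cloud-bigquery client, the row could then be streamed in,
# e.g. bigquery.Client().insert_rows_json("project.dataset.table", [row])
print(json.dumps(row))
```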
diff --git a/tenants/nav/persistence/influxdb/reference/README.md b/tenants/nav/persistence/influxdb/reference/README.md new file mode 100644 index 000000000..7e0b48e84 --- /dev/null +++ b/tenants/nav/persistence/influxdb/reference/README.md @@ -0,0 +1,29 @@ +--- +title: InfluxDB reference +tags: [influxdb, reference] +--- + +# InfluxDB reference + +## Retention policies +The default database is created with a default retention policy of 30 days. You might want to adjust this, e.g. by creating a new default retention policy with 1 year retention: + +``` +create retention policy "365d" on "defaultdb" duration 365d replication 1 shard duration 1w default +``` + +## Datasource in Grafana + +Let us know in [#nais](https://nav-it.slack.com/archives/C5KUST8N6) if you want your InfluxDB to be exposed in Grafana. +This means that everyone can access your data. + +## Access from laptop + +With Naisdevice you have access to the _aiven-prod_ gateway. +This is a JITA (just in time access) gateway, so you need to provide a short reason, but access is granted automatically. + +``` +influx -username avnadmin -password foo -host influx-instancename-nav-dev.aivencloud.com -port 26482 -ssl +``` + +PS: Remember to use an InfluxDB CLI older than v2, for example v1.8.3. diff --git a/tenants/nav/how-to-guides/persistence/kafka/access-from-non-nais.md b/tenants/nav/persistence/kafka/how-to/access-from-non-nais.md similarity index 80% rename from tenants/nav/how-to-guides/persistence/kafka/access-from-non-nais.md rename to tenants/nav/persistence/kafka/how-to/access-from-non-nais.md index 7419cbc19..1aa9f0d3b 100644 --- a/tenants/nav/how-to-guides/persistence/kafka/access-from-non-nais.md +++ b/tenants/nav/persistence/kafka/how-to/access-from-non-nais.md @@ -1,8 +1,12 @@ +--- +tags: [kafka, how-to] +--- + # Accessing topics from an application outside NAIS This guide will show you how to access a [Kafka topic](create.md) from an application outside NAIS clusters. -## 1. 
Enable access to the relevant pool in your [manifest](../../nais-application/application.md) +## Enable access to the relevant pool in your manifest ???+ note ".nais/aivenapp.yaml" @@ -22,21 +26,21 @@ This guide will show you how to access a [Kafka topic](create.md) from an applic !!! info The secretName must be a valid [DNS label](https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#dns-label-names), and must be unique within the namespace. -## 2. Apply the AivenApplication +## Apply the AivenApplication === "Automatically" - Add the file to your application repository, alongside `nais.yaml` to deploy with [NAIS github action](../../cicd/github-action.md). + Add the file to your application repository, alongside `nais.yaml` to deploy with [NAIS github action](../../../build/how-to/build-and-deploy.md). === "Manually" ```bash kubectl apply -f ./nais/aivenapp.yaml --namespace= --context= ``` -## 3. Extract the value of the generated secret +## Extract the value of the generated secret ```bash kubectl get secret --namespace --contect -o jsonpath='{.data}' ``` Make the values available to your application. -## 4. Grant access to the topic +## Grant access to the topic The owner of the topic must [grant your application access to the topic](manage-acl.md). 
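The `{.data}` map printed by `kubectl` contains base64-encoded values, so they must be decoded before handing them to the application. A small Python sketch; the sample key and value are made up, and real secrets contain several keys including certificates:

```python
import base64
import json

def decode_secret_data(jsonpath_output: str) -> dict:
    """Decode the base64-encoded values of a secret's '{.data}' map."""
    data = json.loads(jsonpath_output)
    return {key: base64.b64decode(value).decode() for key, value in data.items()}

# Hypothetical sample of what `kubectl get secret ... -o jsonpath='{.data}'` prints:
sample = '{"KAFKA_BROKERS": "aG9zdDpwb3J0"}'
print(decode_secret_data(sample))  # {'KAFKA_BROKERS': 'host:port'}
```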
diff --git a/tenants/nav/how-to-guides/persistence/kafka/renew-credentials-for-non-nais.md b/tenants/nav/persistence/kafka/how-to/renew-credentials-for-non-nais.md similarity index 90% rename from tenants/nav/how-to-guides/persistence/kafka/renew-credentials-for-non-nais.md rename to tenants/nav/persistence/kafka/how-to/renew-credentials-for-non-nais.md index 0667eedde..8a7ddd63b 100644 --- a/tenants/nav/how-to-guides/persistence/kafka/renew-credentials-for-non-nais.md +++ b/tenants/nav/persistence/kafka/how-to/renew-credentials-for-non-nais.md @@ -1,3 +1,7 @@ +--- +tags: [kafka, how-to] +--- + # Renew credentials for non-NAIS applications Eventually the credentials created in [Accessing topics from an application outside NAIS](access-from-non-nais.md) will expire. @@ -5,18 +9,18 @@ Well in advance of this, Aiven will issue a notification to the technical contac When it is time to renew the credentials, follow these steps: -## 1. Edit the AivenApplication resource +## Edit the AivenApplication resource You need to change the `.spec.secretName` field in the `AivenApplication` resource you used to create the credentials in the first place. Make a note of the current value, and change it to something suitable. You can use any valid name you want, but make sure it is different from the old name. -## 2. Wait for a new secret to appear +## Wait for a new secret to appear When you save/apply the changed secret name, new credentials are generated. When complete, a secret with the requested name will become available in the cluster. -## 3. Extract updated credentials +## Extract updated credentials Extract the credentials from the newly created secret, in the same way as you originally did when you first created the `AivenApplication` resource. @@ -26,7 +30,7 @@ kubectl get secret --namespace --contect .dev-fss-pub.nais.io` + - `https://.prod-fss-pub.nais.io` + + The application _on-premises_ must then: + + 1. 
Add the ingress created above to the list of ingresses: + + ```yaml + spec: + ingresses: + - https://.-fss-pub.nais.io + ``` + + 2. If secured with OAuth 2.0, ensure that the application also has set up inbound access policies: + - a. [Access Policies for TokenX][tokenx-access] + - b. [Access Policies for Azure AD][azure-ad-access] + +The application _in GCP_ must then: + +1. Add the above hosts to their [outbound external access policies][access-policies]: + + ```yaml + spec: + accessPolicy: + outbound: + external: + - host: .-fss-pub.nais.io + ``` + +### How do I reach an application in GCP from my application on-premises? + +???+ faq "Answer" + + The application in GCP must be exposed on a [matching ingress][environments]: + + | ingress | reachable from zone | + | :--- | :--- | + | `.intern.dev.nav.no` | `dev-fss` | + | `.intern.nav.no` | `prod-fss` | + | `.nav.no` | internet, i.e. all clusters | + + The application on-premises should _not_ have to use webproxy to reach these ingresses.
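The ingress table in the answer above can be expressed as a suffix lookup, e.g. for a deploy-time sanity check. A sketch; note that the most specific suffix must be matched first because the domains nest:

```python
# Zone reachability per ingress domain, mirroring the table above.
# Insertion order matters: the longest (most specific) suffix comes first.
ZONE_BY_SUFFIX = {
    ".intern.dev.nav.no": "dev-fss",
    ".intern.nav.no": "prod-fss",
    ".nav.no": "internet",  # reachable from all clusters
}

def reachable_from(host: str) -> str:
    """Return which zone an ingress host is reachable from."""
    for suffix, zone in ZONE_BY_SUFFIX.items():
        if host.endswith(suffix):
            return zone
    raise ValueError(f"unknown domain: {host}")
```

Dict iteration order in Python is insertion order, so listing the longest suffix first keeps the match specific.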
+ +## GCP compared to on-premises + +| Feature | on-prem | gcp | Comment | +|:--------------------------|:-----------|:-------------------|:----------------------------------------------------------------| +| Deploy | yes | yes | different clustername when deploying | +| Logging | yes | yes | different clustername in logs.adeo.no | +| Metrics | yes | yes | same mechanism, different datasource | +| Nais app dashboard | yes | yes | new and improved in GCP | +| Alerts | yes | yes | identical | +| Secure logs | yes | yes | different clustername in logs.adeo.no | +| Kafka | yes | yes | identical | +| Secrets | Vault | Console secrets | | +| Team namespaces | yes | yes | | +| Shared namespaces | yes | no | Default namespace not available for teams in GCP | +| Health checks | yes | yes | identical | +| Ingress | yes | yes | See [environments overview][environments] for available domains | +| Storage | Ceph | Buckets | | +| Postgres | yes (IAC) | yes (self-service) | | +| Laptop access | yes | yes | | +| domain: dev.intern.nav.no | | yes (Automatic) | Wildcard DNS points to GCP load balancer | +| Access to FSS services | | yes | Identical (either API-gw or TokenX) | +| NAV truststore | yes | yes | | +| PVK required | yes | yes | amend to cover storage in cloud | +| Security | Zone Model | [zero-trust] | | + +[zero-trust]: zero-trust.md +[nais-yaml]: ../../workloads/application/reference/application-example.md +[buckets]: ../../persistence/buckets/README.md +[postgres]: ../../persistence/postgres/README.md +[migrate-database]: ../../persistence/postgres/how-to/migrating-databases-to-gcp.md +[environments]: ../reference/environments.md +[secrets]: ../../services/secrets/README.md +[auth]: ../../security/auth/README.md +[tokenx]: ../../auth/tokenx/README.md +[tokenx-access]: ../../auth/tokenx/how-to/secure.md#grant-access +[azure-ad]: ../../security/auth/azure-ad/README.md +[azure-ad-access]: ../../security/auth/azure-ad/configuration.md#pre-authorization +[access-policies]: 
../how-to/access-policies.md +[roles-responsibilites]: ../../legal/roles-responsibilities.md +[pvk]: ../../legal/app-pvk.md +[ros]: ../../legal/nais-ros.md diff --git a/tenants/ssb/reference/environments.md b/tenants/ssb/reference/environments.md deleted file mode 100644 index 83f1921a0..000000000 --- a/tenants/ssb/reference/environments.md +++ /dev/null @@ -1,31 +0,0 @@ -# Available environments - -This is a overview over the different environments and available domains. - -We also enumerate the external IPs used by the environments, so that you can provide them to services that require IP allow-listing. - -### staging - -#### Domains - -| domain | accessible from | description | -| :--- | :--- | :--- | -| external.staging.ssb.cloud.nais.io | internet | ingress for applications exposed to internet. URLs containing `/metrics`, `/actuator` or `/internal` are blocked. | -| staging.ssb.cloud.nais.io | [naisdevice](../explanation/naisdevice.md) | ingress for internal applications | - -#### External/outbound IPs - -- 34.88.116.202 - -### prod - -#### Domains - -| domain | accessible from | description | -| :--- | :--- | :--- | -| external.prod.ssb.cloud.nais.io | internet | ingress for applications exposed to internet. URLs containing `/metrics`, `/actuator` or `/internal` are blocked. | -| prod.ssb.cloud.nais.io | [naisdevice](../explanation/naisdevice.md) | ingress for internal applications | - -#### External/outbound IPs - -- 34.88.47.40