Process changes to docs from: repo: EnterpriseDB/cloud-native-postgres ref: refs/tags/v1.26.0-rc3 #6806

Open · wants to merge 1 commit into base: develop

@@ -3,6 +3,8 @@ title: 'Connecting from an application'
originalFilePath: 'src/applications.md'
---



Applications are supposed to work with the services created by EDB Postgres for Kubernetes
in the same Kubernetes cluster.

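For illustration only, a minimal sketch of an application Deployment pointing at the cluster's read/write service; the cluster name `cluster-example`, the application image, and the `cluster-example-app` secret below are placeholders and assumptions:

```yaml
# Hypothetical sketch: an application in the same namespace connecting to the
# "-rw" service of a cluster named "cluster-example" (placeholder), reusing the
# application user secret that the operator generates for that cluster.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:latest   # placeholder image
          env:
            - name: PGHOST
              value: cluster-example-rw
            - name: PGDATABASE
              value: app
            - name: PGUSER
              valueFrom:
                secretKeyRef:
                  name: cluster-example-app
                  key: username
            - name: PGPASSWORD
              valueFrom:
                secretKeyRef:
                  name: cluster-example-app
                  key: password
```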
11 changes: 7 additions & 4 deletions product_docs/docs/postgres_for_kubernetes/1/architecture.mdx
@@ -3,6 +3,8 @@ title: 'Architecture'
originalFilePath: 'src/architecture.md'
---



!!! Hint
For a deeper understanding, we recommend reading our article on the CNCF
blog post titled ["Recommended Architectures for PostgreSQL in Kubernetes"](https://www.cncf.io/blog/2023/09/29/recommended-architectures-for-postgresql-in-kubernetes/),
@@ -138,7 +140,7 @@ the [replica cluster feature](replica_cluster.md)).
![Example of a Kubernetes architecture with only 2 data centers](./images/k8s-architecture-2-az.png)

!!! Hint
If you are at en early stage of your Kubernetes journey, please share this
If you are at an early stage of your Kubernetes journey, please share this
document with your infrastructure team. The two data centers setup might
be simply the result of a "lift-and-shift" transition to Kubernetes
from a traditional bare-metal or VM based infrastructure, and the benefits
@@ -414,9 +416,10 @@ This is typically triggered by:
declarative configuration, enabling you to automate these procedures as part of
your Infrastructure as Code (IaC) process, including GitOps.

The designated primary in the above example is fed via WAL streaming
(`primary_conninfo`), with fallback option for file-based WAL shipping through
the `restore_command` and `barman-cloud-wal-restore`.
In the example above, the designated primary receives WAL updates via streaming
replication (`primary_conninfo`). As a fallback, it can retrieve WAL segments
from an object store using file-based WAL shipping—for instance, with the
Barman Cloud plugin through `restore_command` and `barman-cloud-wal-restore`.
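As a rough, hypothetical sketch of such a setup (cluster names, host, and the credentials secret are placeholders, and the object store stanza assumes the native `barmanObjectStore` support rather than the plugin):

```yaml
# Hypothetical sketch of a replica cluster whose designated primary streams WAL
# from "cluster-dc-a" (primary_conninfo) and can fall back to fetching WAL
# segments from an object store (restore_command). All names are placeholders.
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-dc-b
spec:
  instances: 3
  replica:
    enabled: true
    source: cluster-dc-a
  bootstrap:
    recovery:
      source: cluster-dc-a
  externalClusters:
    - name: cluster-dc-a
      # WAL streaming (primary_conninfo)
      connectionParameters:
        host: cluster-dc-a-rw.default.svc
        user: streaming_replica
        dbname: postgres
      password:
        name: cluster-dc-a-replication
        key: password
      # File-based WAL shipping fallback (restore_command)
      barmanObjectStore:
        destinationPath: s3://backups/cluster-dc-a/
        s3Credentials:
          accessKeyId:
            name: backup-creds
            key: ACCESS_KEY_ID
          secretAccessKey:
            name: backup-creds
            key: ACCESS_SECRET_KEY
```

With the Barman Cloud plugin, the object store would instead be referenced through the plugin's own configuration rather than this stanza.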

EDB Postgres for Kubernetes allows you to define topologies with multiple replica clusters.
You can also define replica clusters with a lower number of replicas, and then
17 changes: 11 additions & 6 deletions product_docs/docs/postgres_for_kubernetes/1/backup.mdx
@@ -3,6 +3,14 @@ title: 'Backup'
originalFilePath: 'src/backup.md'
---



!!! Warning
With the deprecation of native Barman Cloud support in EDB Postgres for Kubernetes in
favor of the Barman Cloud Plugin, this page—and the backup and recovery
documentation—may undergo changes before the official release of version
1.26.0.

PostgreSQL natively provides first class backup and recovery capabilities based
on file system level (physical) copy. These have been successfully used for
more than 15 years in mission critical production databases, helping
@@ -30,7 +38,9 @@ The WAL archive can only be stored on object stores at the moment.
On the other hand, EDB Postgres for Kubernetes supports two ways to store physical base backups:

- on [object stores](backup_barmanobjectstore.md), as tarballs - optionally
  compressed
  compressed:
  - Using the Barman Cloud plugin
  - Natively via `.spec.backup.barmanObjectStore` (*deprecated, to be removed in EDB Postgres for Kubernetes 1.28*; see the sketch after this list)
- on [Kubernetes Volume Snapshots](backup_volumesnapshot.md), if supported by
  the underlying storage class

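For orientation, a hedged sketch of the deprecated native object store configuration named above; the bucket path, retention, and credential secret names are placeholders:

```yaml
# Hypothetical sketch of the deprecated native object store configuration under
# .spec.backup.barmanObjectStore. Path and secret names are placeholders.
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-example
spec:
  instances: 3
  backup:
    barmanObjectStore:
      destinationPath: s3://my-bucket/cluster-example/
      s3Credentials:
        accessKeyId:
          name: backup-creds
          key: ACCESS_KEY_ID
        secretAccessKey:
          name: backup-creds
          key: ACCESS_SECRET_KEY
    retentionPolicy: "30d"
```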
@@ -44,11 +54,6 @@ On the other hand, EDB Postgres for Kubernetes supports two ways to store physic
the supported [Container Storage Interface (CSI) drivers](https://kubernetes-csi.github.io/docs/drivers.html)
that provide snapshotting capabilities.

!!! Info
Starting with version 1.25, EDB Postgres for Kubernetes includes experimental support for
backup and recovery using plugins, such as the
[Barman Cloud plugin](https://github.com/cloudnative-pg/plugin-barman-cloud).

## WAL archive

The WAL archive in PostgreSQL is at the heart of **continuous backup**, and it
@@ -3,6 +3,14 @@ title: 'Backup on object stores'
originalFilePath: 'src/backup_barmanobjectstore.md'
---



!!! Warning
With the deprecation of native Barman Cloud support in EDB Postgres for Kubernetes in
favor of the Barman Cloud Plugin, this page—and the backup and recovery
documentation—may undergo changes before the official release of version
1.26.0.

EDB Postgres for Kubernetes natively supports **online/hot backup** of PostgreSQL
clusters through continuous physical backup and WAL archiving on an object
store. This means that the database is always up (no downtime required)
@@ -96,7 +104,10 @@ algorithms via `barman-cloud-backup` (for backups) and

- bzip2
- gzip
- lz4
- snappy
- xz
- zstd

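A hedged snippet showing where these algorithms are selected (assuming the native `barmanObjectStore` stanza; the destination path and algorithm choices are placeholders):

```yaml
# Illustrative snippet only: compression for base backups (data) and for WAL
# archiving (wal), set inside the object store configuration of a Cluster.
spec:
  backup:
    barmanObjectStore:
      destinationPath: s3://my-bucket/cluster-example/
      data:
        compression: zstd
      wal:
        compression: gzip
```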
The compression settings for backups and WALs are independent. See the
[DataBackupConfiguration](https://pkg.go.dev/github.com/cloudnative-pg/barman-cloud/pkg/api#DataBackupConfiguration) and
@@ -3,4 +3,6 @@ title: 'Backup and Recovery'
originalFilePath: 'src/backup_recovery.md'
---



[Backup](backup.md) and [recovery](recovery.md) are in two separate sections.
@@ -3,6 +3,8 @@ title: 'Backup on volume snapshots'
originalFilePath: 'src/backup_volumesnapshot.md'
---



!!! Warning
As noted in the [backup document](backup.md), a cold snapshot explicitly
set to target the primary will result in the primary being fenced for
@@ -60,6 +62,8 @@ volumes of a given storage class, and managed as `VolumeSnapshot` and

## How to configure Volume Snapshot backups



EDB Postgres for Kubernetes allows you to configure a given Postgres cluster for Volume
Snapshot backups through the `backup.volumeSnapshot` stanza.

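A minimal sketch, assuming a `VolumeSnapshotClass` named `csi-snapclass` is available from your CSI driver (the class name and storage size are placeholders):

```yaml
# Hypothetical sketch: enabling Volume Snapshot backups for a cluster.
# The VolumeSnapshotClass name and storage size are placeholders.
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-example
spec:
  instances: 3
  storage:
    size: 10Gi
  backup:
    volumeSnapshot:
      className: csi-snapclass
```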
@@ -245,7 +249,97 @@ referenced in the `.spec.backup.volumeSnapshot.className` option.
Please refer to the [Kubernetes documentation on Volume Snapshot Classes](https://kubernetes.io/docs/concepts/storage/volume-snapshot-classes/)
for details on this standard behavior.

## Example
## Backup Volume Snapshot Deadlines

EDB Postgres for Kubernetes supports backups using the volume snapshot method. In some
environments, volume snapshots may encounter temporary issues that can be
retried.

The `backup.k8s.enterprisedb.io/volumeSnapshotDeadline` annotation defines how long,
in minutes, EDB Postgres for Kubernetes should continue retrying recoverable errors
before marking the backup as failed.

You can add the `backup.k8s.enterprisedb.io/volumeSnapshotDeadline` annotation to both
`Backup` and `ScheduledBackup` resources. For `ScheduledBackup` resources, this
annotation is automatically inherited by any `Backup` resources created from
the schedule.

If not specified, the default retry deadline is **10 minutes**.

### Error Handling

When a retryable error occurs during a volume snapshot operation:

1. EDB Postgres for Kubernetes records the time of the first error.
2. The system retries the operation every **10 seconds**.
3. If the error persists beyond the specified deadline (or the default 10
minutes), the backup is marked as **failed**.

### Retryable Errors

EDB Postgres for Kubernetes treats the following types of errors as retryable:

- **Server timeout errors** (HTTP 408, 429, 500, 502, 503, 504)
- **Conflicts** (optimistic locking errors)
- **Internal errors**
- **Context deadline exceeded errors**
- **Timeout errors from the CSI snapshot controller**

### Examples

You can add the annotation to a `ScheduledBackup` resource as follows:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: ScheduledBackup
metadata:
  name: daily-backup-schedule
  annotations:
    backup.k8s.enterprisedb.io/volumeSnapshotDeadline: "20"
spec:
  schedule: "0 0 0 * * *" # daily at midnight (the schedule includes a leading seconds field)
  backupOwnerReference: self
  method: volumeSnapshot
  # other configuration...
```

When you define a `ScheduledBackup` with the annotation, any `Backup` resources
created from this schedule automatically inherit the specified timeout value.

In the following example, all backups created from the schedule will have a
30-minute timeout for retrying recoverable snapshot errors.

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: ScheduledBackup
metadata:
  name: weekly-backup
  annotations:
    backup.k8s.enterprisedb.io/volumeSnapshotDeadline: "30"
spec:
  schedule: "0 0 0 * * 0" # Weekly backup on Sunday
  method: volumeSnapshot
  cluster:
    name: my-postgresql-cluster
```

Alternatively, you can add the annotation directly to a `Backup` resource:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Backup
metadata:
  name: my-backup
  annotations:
    backup.k8s.enterprisedb.io/volumeSnapshotDeadline: "15"
spec:
  method: volumeSnapshot
  # other backup configuration...
```

## Example of Volume Snapshot Backup



The following example shows how to configure volume snapshot base backups on an
EKS cluster on AWS using the `ebs-sc` storage class and the `csi-aws-vsc`
@@ -3,6 +3,8 @@ title: 'Before You Start'
originalFilePath: 'src/before_you_start.md'
---



Before we get started, it is essential to go over some terminology that is
specific to Kubernetes and PostgreSQL.

@@ -3,6 +3,8 @@ title: 'Benchmarking'
originalFilePath: 'src/benchmarking.md'
---



The CNP kubectl plugin provides an easy way for benchmarking a PostgreSQL deployment in Kubernetes using EDB Postgres for Kubernetes.

Benchmarking is focused on two aspects:
@@ -177,7 +179,7 @@ It will:
3. Create a fio deployment composed of a single Pod, which will run fio on
the PVC, create graphs after completing the benchmark and start serving the
generated files with a webserver. We use the
[`fio-tools`](https://github.com/wallnerryan/fio-tools`) image for that.
[`fio-tools`](https://github.com/wallnerryan/fio-tools) image for that.

The Pod created by the deployment will be ready when it starts serving the
results. You can forward the port of the pod created by the deployment
12 changes: 7 additions & 5 deletions product_docs/docs/postgres_for_kubernetes/1/bootstrap.mdx
@@ -3,6 +3,8 @@ title: 'Bootstrap'
originalFilePath: 'src/bootstrap.md'
---



!!! Note
When referring to "PostgreSQL cluster" in this section, the same
concepts apply to both PostgreSQL and EDB Postgres Advanced Server, unless
@@ -621,7 +623,7 @@ file on the source PostgreSQL instance:
host replication streaming_replica all md5
```

The following manifest creates a new PostgreSQL 17.4 cluster,
The following manifest creates a new PostgreSQL 17.5 cluster,
called `target-db`, using the `pg_basebackup` bootstrap method
to clone an external PostgreSQL cluster defined as `source-db`
(in the `externalClusters` array). As you can see, the `source-db`
@@ -636,7 +638,7 @@ metadata:
  name: target-db
spec:
  instances: 3
  imageName: quay.io/enterprisedb/postgresql:17.4
  imageName: quay.io/enterprisedb/postgresql:17.5

  bootstrap:
    pg_basebackup:
@@ -656,7 +658,7 @@ spec:
```

All the requirements must be met for the clone operation to work, including
the same PostgreSQL version (in our case 17.4).
the same PostgreSQL version (in our case 17.5).

#### TLS certificate authentication

@@ -671,7 +673,7 @@ in the same Kubernetes cluster.
This example can be easily adapted to cover an instance that resides
outside the Kubernetes cluster.

The manifest defines a new PostgreSQL 17.4 cluster called `cluster-clone-tls`,
The manifest defines a new PostgreSQL 17.5 cluster called `cluster-clone-tls`,
which is bootstrapped using the `pg_basebackup` method from the `cluster-example`
external cluster. The host is identified by the read/write service
in the same cluster, while the `streaming_replica` user is authenticated
@@ -686,7 +688,7 @@ metadata:
  name: cluster-clone-tls
spec:
  instances: 3
  imageName: quay.io/enterprisedb/postgresql:17.4
  imageName: quay.io/enterprisedb/postgresql:17.5

  bootstrap:
    pg_basebackup:
@@ -3,6 +3,8 @@ title: 'Certificates'
originalFilePath: 'src/certificates.md'
---



EDB Postgres for Kubernetes was designed to natively support TLS certificates.
To set up a cluster, the operator requires:

@@ -53,6 +55,11 @@ expiration (within a 90-day validity period).
certificates not controlled by EDB Postgres for Kubernetes must be re-issued following the
renewal process.

When generating certificates, the operator assumes that the Kubernetes
cluster's DNS zone is set to `cluster.local` by default. This behavior can be
customized by setting the `KUBERNETES_CLUSTER_DOMAIN` environment variable. A
convenient alternative is to use the [operator's configuration capability](operator_conf.md).
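For example, a hedged sketch of setting it through the operator configuration ConfigMap; the ConfigMap name and namespace below are assumptions that depend on how the operator was installed:

```yaml
# Hypothetical example: overriding the DNS zone used when generating
# certificates. ConfigMap name and namespace depend on your installation.
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgresql-operator-controller-manager-config
  namespace: postgresql-operator-system
data:
  KUBERNETES_CLUSTER_DOMAIN: cluster.example.local
```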

### Server certificates

#### Server CA secret
@@ -3,6 +3,8 @@ title: 'Instance pod configuration'
originalFilePath: 'src/cluster_conf.md'
---



## Projected volumes

EDB Postgres for Kubernetes supports mounting custom files inside the Postgres pods through