177 changes: 33 additions & 144 deletions content/consul/v1.21.x/content/docs/manage/scale/autopilot.mdx
@@ -1,51 +1,26 @@
---
layout: docs
page_title: Consul autopilot
page_title: Consul Autopilot
description: >-
Use Autopilot features to monitor the Raft cluster, introduce stable servers, and clean up dead servers.
---

# Consul autopilot
# Consul Autopilot

This page describes Consul autopilot, which supports automatic, operator-friendly management of Consul
servers. It includes cleanup of dead servers, monitoring the state of the Raft
cluster, and stable server introduction.
This page describes Consul Autopilot, which supports automatic, operator-friendly management of Consul servers. Autopilot helps maintain the health and stability of the Consul server cluster by monitoring server health, introducing stable servers, and cleaning up dead servers. Furthermore, Consul Enterprise customers can leverage two additional Autopilot features, namely redundancy zones and automated upgrades to enhance datacenter resiliency and simplify operations.
Suggested change
This page describes Consul Autopilot, which supports automatic, operator-friendly management of Consul servers. Autopilot helps maintain the health and stability of the Consul server cluster by monitoring server health, introducing stable servers, and cleaning up dead servers. Furthermore, Consul Enterprise customers can leverage two additional Autopilot features, namely redundancy zones and automated upgrades to enhance datacenter resiliency and simplify operations.
This page describes Consul Autopilot, a set of features that provide operator-friendly management automations for Consul servers.


To use autopilot features (with the exception of dead server cleanup), the
[`raft_protocol`](/consul/docs/reference/agent/configuration-file/raft#raft_protocol)
setting in the Consul agent configuration must be set to 3 or higher on all
servers. In Consul `0.8` this setting defaults to 2; in Consul `1.0` it will
default to 3. For more information, check the [Version Upgrade
section](/consul/docs/upgrade/version-specific) on Raft protocol
versions in Consul `1.0`.
## Overview

In this tutorial you will learn how Consul tracks the stability of servers, how
to tune those conditions, and get some details on the other autopilot's features.
Autopilot includes the following features:
Suggested change
Autopilot includes the following features:
Consul autopilot helps you maintain the health and stability of the Consul server cluster. It includes the following features:

Moved the sentence so that there's less at the top and more before the list.

- [Server health checking](#server-health-checking)
- [Server stabilization time](#server-stabilization-time)
- [Dead server cleanup](#dead-server-cleanup)
- [Redundancy zones (only available in Consul Enterprise)](#redundancy-zones)
- [Automated upgrades (only available in Consul Enterprise)](#automated-upgrades)

- Server Stabilization
- Dead server cleanup
- Redundancy zones (only available in Consul Enterprise)
- Automated upgrades (only available in Consul Enterprise)
### Default configuration
Suggested change
### Default configuration
## Default configuration


Note, in this tutorial we are using examples from a Consul `1.7` datacenter, we
are starting with Autopilot enabled by default.

## Default configuration

The configuration of Autopilot is loaded by the leader from the agent's
[autopilot settings](/consul/docs/reference/agent/configuration-file/general#autopilot)
when initially bootstrapping the datacenter. Since autopilot and its features
are already enabled, you only need to update the configuration to disable them.

All Consul servers should have Autopilot and its features either enabled or
disabled to ensure consistency across servers in case of a failure.
Additionally, Autopilot must be enabled to use any of the features, but the
features themselves can be configured independently. Meaning you can enable or
disable any of the features separately, at any time.

You can check the default values using the `consul operator` CLI command or
using the [`/v1/operator/autopilot`
endpoint](/consul/api-docs/operator/autopilot)
You can check the default Autopilot values using the `consul operator` CLI command or using the [`/v1/operator/autopilot` endpoint](/consul/api-docs/operator/autopilot)
Suggested change
You can check the default Autopilot values using the `consul operator` CLI command or using the [`/v1/operator/autopilot` endpoint](/consul/api-docs/operator/autopilot)
To check the default autopilot values, use the `consul operator` CLI command or the [`/v1/operator/autopilot` endpoint](/consul/api-docs/operator/autopilot).


<Tabs>
<Tab heading="CLI command">
@@ -90,36 +65,20 @@ $ curl http://127.0.0.1:8500/v1/operator/autopilot/configuration
</Tab>
</Tabs>

After the tabs, include a table that lists the value names, descriptions, types, etc. Example:

| Autopilot setting  | Description        | Type    | Default value |
| :----------------: | :------------------| :-----: | :-----------: |
| CleanupDeadServers | Consul servers xxx | Boolean | `true`        |

### Autopilot and Consul snapshots
Changes to the Autopilot configuration are persisted in the Raft database maintained by the Consul servers. This means that Autopilot configuration will be included in the Consul snapshot data.
Suggested change
Changes to the Autopilot configuration are persisted in the Raft database maintained by the Consul servers. This means that Autopilot configuration will be included in the Consul snapshot data.
Consul servers maintain changes to the autopilot configuration in the Raft database. As a result, autopilot configurations are included in the Consul snapshot data.


Changes to the autopilot configuration are persisted in the Raft database
maintained by the Consul servers. This means that autopilot configuration will
be included in the Consul snapshot data. Any snapshot taken prior to autopilot
configuration changes will contain the old configuration, and should be
considered unsafe to restore since they will remove the change and cause
unpredictable behaviors for the automations that might rely on the new
configuration.
## Workflow
Suggested change
## Workflow

Unnecessary based on how the page is already formatted.


We recommend that you take a snapshot after any changes to the autopilot
configuration, and consider that as the last safe point in time to roll-back in
case a restore is needed.
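
For example, you can create that safe restore point with the `consul snapshot save` command. A minimal sketch; the file name and the index in the output are illustrative:

```shell-session
$ consul snapshot save autopilot-backup.snap
```

```plaintext hideClipboard
Saved and verified snapshot to index 1234
```
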
### Server health checking
Suggested change
### Server health checking
## Server health checking


## Server health checking

An internal health check runs on the leader to track the stability of servers.

A server is considered healthy if all of the following conditions are true.
An internal health check runs on the leader to track the stability of servers. A server is considered healthy if all of the following conditions are true.

- It has a SerfHealth status of 'Alive'.
- The time since its last contact with the current leader is below
`LastContactThreshold` (that by default is `200ms`).
- The time since its last contact with the current leader is below `LastContactThreshold` (that by default is `200ms`).
Suggested change
- The time since its last contact with the current leader is below `LastContactThreshold` (that by default is `200ms`).
- The time since its last contact with the current leader is below `LastContactThreshold`. The default value is `200ms`.

- Its latest Raft term matches the leader's term.
- The number of Raft log entries it trails the leader by does not exceed
`MaxTrailingLogs` (that by default is `250`).
- The number of Raft log entries it trails the leader by does not exceed `MaxTrailingLogs` (that by default is `250`).
Suggested change
- The number of Raft log entries it trails the leader by does not exceed `MaxTrailingLogs` (that by default is `250`).
- The number of Raft log entries it trails the leader by does not exceed `MaxTrailingLogs`. The default value is `250`.


The status of these health checks can be viewed through the
`/v1/operator/autopilot/health` HTTP endpoint, with a top level `Healthy` field
indicating the overall status of the datacenter:
The status of these health checks can be viewed through the `/v1/operator/autopilot/health` HTTP endpoint, with a top level `Healthy` field indicating the overall status of the datacenter:
Suggested change
The status of these health checks can be viewed through the `/v1/operator/autopilot/health` HTTP endpoint, with a top level `Healthy` field indicating the overall status of the datacenter:
To return the status of these health checks, use the `/v1/operator/autopilot/health` HTTP endpoint. The `Healthy` field at the top indicates the overall status of the datacenter:


```shell-session
$ curl localhost:8500/v1/operator/autopilot/health | jq .
@@ -172,31 +131,19 @@ $ curl localhost:8500/v1/operator/autopilot/health | jq .

## Server stabilization time

When a new server is added to the datacenter, there is a waiting period where it
must be healthy and stable for a certain amount of time before being promoted to
a full, voting member. This is defined by the `ServerStabilizationTime`
autopilot's parameter and by default is 10 seconds.
When a new server is added to the datacenter, there is a waiting period where it must be healthy and stable for a certain amount of time before being promoted to a full, voting member. This is defined by the `ServerStabilizationTime`, Autopilot's parameter, and by default is 10 seconds.
Suggested change
When a new server is added to the datacenter, there is a waiting period where it must be healthy and stable for a certain amount of time before being promoted to a full, voting member. This is defined by the `ServerStabilizationTime`, Autopilot's parameter, and by default is 10 seconds.
When a new server joins the datacenter, there is an initial waiting period where it must stay healthy and stable before it can become a voting member. This duration is configured by the `ServerStabilizationTime` parameter. By default it is 10 seconds.


In case your configuration require a different amount of time for the node to
get ready, for example in case you have some extra VM checks at startup that
might affect node resource availability, you can tune the parameter and assign
it a different duration.
In case your configuration requires a different amount of time, you can tune the parameter and assign it a different duration.
Suggested change
In case your configuration requires a different amount of time, you can tune the parameter and assign it a different duration.
If you need a different amount of time, you can tune the parameter to set a different duration. The following example extends the waiting period to 15 seconds:


```shell-session
$ consul operator autopilot set-config -server-stabilization-time=15s
```

```plaintext hideClipboard
Configuration updated!
```

Use the `get-config` command to check the configuration.

```shell-session
$ consul operator autopilot get-config
```

```plaintext hideClipboard
CleanupDeadServers = true
LastContactThreshold = 200ms
MaxTrailingLogs = 250
@@ -207,92 +154,34 @@ DisableUpgradeMigration = false
UpgradeVersionTag = ""
```
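
The `MinQuorum` value in this output is also worth tuning: it sets a floor below which autopilot never removes servers. A sketch, assuming a five-server datacenter where cleanup should stop at three voters:

```shell-session
$ consul operator autopilot set-config -min-quorum=3
```

```plaintext hideClipboard
Configuration updated!
```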

## Dead server cleanup

If autopilot is disabled, it will take 72 hours for dead servers to be
automatically reaped or an operator must write a script to `consul force-leave`.
If another server failure occurred it could jeopardize the quorum, even if the
failed Consul server had been automatically replaced. Autopilot helps prevent
these kinds of outages by quickly removing failed servers as soon as a
replacement Consul server comes online. When servers are removed by the cleanup
process they will enter the "left" state.

With Autopilot's dead server cleanup enabled, dead servers will periodically be
cleaned up and removed from the Raft peer set to prevent them from interfering
with the quorum size and leader elections. The cleanup process will also be
automatically triggered whenever a new server is successfully added to the
datacenter.

We suggest leaving the feature enabled to avoid introducing manual steps in
the Consul management to make sure the faulty nodes are not remaining in the
Raft pool for too long without the need for manual pruning. In test scenarios or
in environments where you want to delegate the faulty node pruning to an
external tool or system you can disable the dead server cleanup feature using
the `consul operator` command.

```shell-session
$ consul operator autopilot set-config -cleanup-dead-servers=false
```

```plaintext hideClipboard
Configuration updated!
```
### Dead server cleanup

Use the `get-config` command to check the configuration.
If Autopilot is disabled, it will take 72 hours for dead servers to be automatically reaped, or an operator must manually issue the `consul force-leave <dead-server-name>` command. If another server failure occurred it could jeopardize the quorum, even if the failed Consul server had been automatically replaced. Autopilot helps prevent these kinds of outages by quickly removing failed servers as soon as a replacement Consul server comes online. When servers are removed by the cleanup process they will enter the "left" state.
Suggested change
If Autopilot is disabled, it will take 72 hours for dead servers to be automatically reaped, or an operator must manually issue the `consul force-leave <dead-server-name>` command. If another server failure occurred it could jeopardize the quorum, even if the failed Consul server had been automatically replaced. Autopilot helps prevent these kinds of outages by quickly removing failed servers as soon as a replacement Consul server comes online. When servers are removed by the cleanup process they will enter the "left" state.
When autopilot is disabled, it takes 72 hours for Consul to automatically reap dead servers. The alternative would be for an operator to manually issue the `consul force-leave <dead-server-name>` command for each dead server.
In this situation, another server failure could jeopardize the cluster's quorum. The Consul cluster still considers the missing server a member of the datacenter, even if the failed Consul server was automatically replaced.
Autopilot helps prevent these kinds of failures from becoming outages. It quickly removes failed servers as soon as a replacement Consul server comes online. When servers are removed by the cleanup process, they enter the "left" state and are not considered for the datacenter's quorum.


```shell-session
$ consul operator autopilot get-config
```
With Autopilot's dead server cleanup enabled, dead servers will periodically be cleaned up and removed from the Raft peer set to prevent them from interfering with the quorum size and leader elections. The cleanup process will also be automatically triggered whenever a new server is successfully added to the datacenter.
Suggested change
With Autopilot's dead server cleanup enabled, dead servers will periodically be cleaned up and removed from the Raft peer set to prevent them from interfering with the quorum size and leader elections. The cleanup process will also be automatically triggered whenever a new server is successfully added to the datacenter.
Autopilot also triggers the cleanup process automatically whenever a new server successfully joins the datacenter.


```plaintext hideClipboard
CleanupDeadServers = false
LastContactThreshold = 200ms
MaxTrailingLogs = 250
MinQuorum = 0
ServerStabilizationTime = 10s
RedundancyZoneTag = ""
DisableUpgradeMigration = false
UpgradeVersionTag = ""
```
We suggest leaving the feature enabled to avoid faulty nodes remaining in the Raft pool for too long without the need for manual pruning. In test scenarios or in environments you can disable the faulty node pruning by using the `consul operator autopilot set-config -cleanup-dead-servers=false` command.
Suggested change
We suggest leaving the feature enabled to avoid faulty nodes remaining in the Raft pool for too long without the need for manual pruning. In test scenarios or in environments you can disable the faulty node pruning by using the `consul operator autopilot set-config -cleanup-dead-servers=false` command.
We recommend leaving autopilot enabled to avoid issues with faulty nodes that require manual pruning. In test scenarios and dev environments you can disable the faulty node pruning with the `consul operator autopilot set-config -cleanup-dead-servers=false` command.


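Disabling and verifying the setting looks like the following:

```shell-session
$ consul operator autopilot set-config -cleanup-dead-servers=false
```

```plaintext hideClipboard
Configuration updated!
```

You can confirm the new value afterwards with `consul operator autopilot get-config`.
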
## Enterprise features
Consul Enterprise customer can take advantage of two more features of autopilot
to further strengthen and automate Consul operations.
To further strengthen and automate Consul operations, there are two more Autopilot features in Consul Enterprise that customers can take advantage of to improve their datacenter resiliency and to simplify operations.
Suggested change
To further strengthen and automate Consul operations, there are two more Autopilot features in Consul Enterprise that customers can take advantage of to improve their datacenter resiliency and to simplify operations.

To simplify and make the page more manageable, let's keep the features at the H2 level


### Redundancy zones
Suggested change
### Redundancy zones
## Redundancy zones (Enterprise)


Consul’s redundancy zones provide high availability in the case of server
failure through the Enterprise feature of autopilot. Autopilot allows you to add
read replicas to your datacenter that will be promoted to the "voting" status in
case of voting server failure.
Consul’s redundancy zones provide high availability in the case of server failure through the Enterprise feature of Autopilot. Autopilot allows you to add read replicas to your datacenter that will be promoted to the "voting" status in case of voting server failure.
Suggested change
Consul’s redundancy zones provide high availability in the case of server failure through the Enterprise feature of Autopilot. Autopilot allows you to add read replicas to your datacenter that will be promoted to the "voting" status in case of voting server failure.
Redundancy zones provide high availability in case of server failure. With Consul Enterprise, autopilot helps you create redundancy zones by adding read replicas to your datacenter that will be promoted to the "voting" status if a voting server fails.


You can use this tutorial to implement isolated failure domains such as AWS
Availability Zones (AZ) to obtain redundancy within an AZ without having to
sustain the overhead of a large quorum.
You can utilise redundancy zones to implement isolated failure domains such as AWS Availability Zones (AZ), and therefore to obtain redundancy within an AZ without having to sustain the overhead of a large quorum.
Suggested change
You can utilise redundancy zones to implement isolated failure domains such as AWS Availability Zones (AZ), and therefore to obtain redundancy within an AZ without having to sustain the overhead of a large quorum.
You can set up redundancy zones to implement isolated failure domains. For example, deploying a server and a read replica in each AWS Availability Zone (AZ) provides additional protection against failure within a region.


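A minimal sketch of the wiring, assuming a node metadata key named `zone` that each server sets to its availability zone (the flag values shown are illustrative):

```shell-session
# Each server advertises its zone through node metadata.
$ consul agent -server -node-meta 'zone:us-east-1a' [other flags]
# Tell autopilot which metadata key defines the redundancy zones.
$ consul operator autopilot set-config -redundancy-zone-tag=zone
```
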
Check [provide fault tolerance with redundancy zones](/consul/tutorials/operate-consul/redundancy-zones)
to learn more on the functionality.
Check [provide fault tolerance with redundancy zones](/consul/docs/manage/scale/redundancy-zone) to learn more on the functionality.
Suggested change
Check [provide fault tolerance with redundancy zones](/consul/docs/manage/scale/redundancy-zone) to learn more on the functionality.
To learn more, refer to [provide fault tolerance with redundancy zones](/consul/docs/manage/scale/redundancy-zone).


### Automated upgrades

Consul’s automatic upgrades provide a simplified way to upgrade existing Consul
datacenter. This functionally is provided through the Enterprise feature of
autopilot. Autopilot allows you to add new servers directly to the datacenter
and waits until you have enough servers running the new version to perform a
leadership change and demote the old servers as "non-voters".
Consul’s automatic upgrades provide a simplified way to upgrade existing Consul datacenter. This functionally is provided through the Enterprise feature of Autopilot. Autopilot allows you to add new servers directly to the datacenter and waits until you have enough servers running the new version to perform a leadership change and demote the old servers as "non-voters".
Suggested change
Consul’s automatic upgrades provide a simplified way to upgrade existing Consul datacenter. This functionally is provided through the Enterprise feature of Autopilot. Autopilot allows you to add new servers directly to the datacenter and waits until you have enough servers running the new version to perform a leadership change and demote the old servers as "non-voters".
Automatic upgrades are an Enterprise feature that helps you upgrade an existing Consul datacenter. With autopilot, you can add new servers running a new Consul version directly to the datacenter. Then, when you have enough servers running the new version, autopilot performs a leadership change and demotes the old servers to "non-voters".


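If you prefer to manage the version migration manually, you can switch off the automatic demotion logic. A sketch, assuming you coordinate the rollout yourself:

```shell-session
$ consul operator autopilot set-config -disable-upgrade-migration=true
```

```plaintext hideClipboard
Configuration updated!
```
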
Check [automate upgrades with Consul Enterprise](/consul/tutorials/datacenter-operations/upgrade-automation)
to learn more on the functionality.
Check [automate upgrades with Consul Enterprise](/consul/docs/upgrade/automated) to learn more on the functionality.
Suggested change
Check [automate upgrades with Consul Enterprise](/consul/docs/upgrade/automated) to learn more on the functionality.
To learn more, refer to [automate upgrades with Consul Enterprise](/consul/docs/upgrade/automated).


## Next steps

In this tutorial you got an overview of the autopilot features and got examples
on how and when tune the default values.
To read further about [Autopilot](/consul/docs/manage/scale/autopilot) functionality, check the [read replicas](/consul/docs/manage/scale/read-replica) and [redundancy zones](/consul/docs/manage/scale/redundancy-zone) pages.
Suggested change
To read further about [Autopilot](/consul/docs/manage/scale/autopilot) functionality, check the [read replicas](/consul/docs/manage/scale/read-replica) and [redundancy zones](/consul/docs/manage/scale/redundancy-zone) pages.
To learn more about the autopilot features described on this page, refer to [read replicas](/consul/docs/manage/scale/read-replica) and [redundancy zones](/consul/docs/manage/scale/redundancy-zone).


To learn more about the Autopilot settings you did not configure in this tutorial,
[last_contact_threshold](/consul/docs/reference/agent/configuration-file/general#last_contact_threshold)
and
[max_trailing_logs](/consul/docs/reference/agent/configuration-file/general#max_trailing_logs),
either read the agent configuration documentation or use the help flag with the
operator autopilot `consul operator autopilot set-config -h`.
To learn more about operational Autopilot settings regarding stability, check the [last_contact_threshold](/consul/docs/reference/agent/configuration-file/bootstrap#last_contact_threshold) and [max_trailing_logs](/consul/docs/reference/agent/configuration-file/bootstrap#max_trailing_logs) parameters in the Consul agent configuration documentation.
Suggested change
To learn more about operational Autopilot settings regarding stability, check the [last_contact_threshold](/consul/docs/reference/agent/configuration-file/bootstrap#last_contact_threshold) and [max_trailing_logs](/consul/docs/reference/agent/configuration-file/bootstrap#max_trailing_logs) parameters in the Consul agent configuration documentation.
For agent specifications related to autopilot settings for stability, refer to the [last_contact_threshold](/consul/docs/reference/agent/configuration-file/bootstrap#last_contact_threshold) and [max_trailing_logs](/consul/docs/reference/agent/configuration-file/bootstrap#max_trailing_logs) parameters in the Consul agent configuration documentation.

@@ -77,13 +77,10 @@ Consul server agents are an important part of Consul’s architecture. This sect

Consul servers can be deployed on a few different runtimes:

- **HashiCorp Cloud Platform (HCP) Consul (Managed)**. These Consul servers are deployed in a hosted environment managed by HCP. To get started with HCP Consul servers in Kubernetes or VM deployments, refer to the [Deploy HCP Consul tutorial](/consul/tutorials/get-started-hcp/hcp-gs-deploy).
- **VMs or bare metal servers (Self-managed)**. To get started with Consul on VMs or bare metal servers, refer to the [Deploy Consul server tutorial](/consul/tutorials/get-started-vms/virtual-machine-gs-deploy). For a full list of configuration options, refer to [Agents Overview](/consul/docs/fundamentals/agent).
- **Kubernetes (Self-managed)**. To get started with Consul on Kubernetes, refer to the [Deploy Consul on Kubernetes tutorial](/consul/tutorials/get-started-kubernetes/kubernetes-gs-deploy).
- **Other container environments, including Docker, Rancher, and Mesos (Self-managed)**.

@include 'alerts/hcp-dedicated-eol.mdx'

When operating Consul at scale, self-managed VM or bare metal server deployments offer the most flexibility. Some Consul Enterprise features that can enhance fault tolerance and read scalability, such as [redundancy zones](/consul/docs/manage/scale/redundancy-zone) and [read replicas](/consul/docs/manage/scale/read-replica), are not available to server agents on Kubernetes runtimes. To learn more, refer to [Consul Enterprise feature availability by runtime](/consul/docs/enterprise#feature-availability-by-runtime).

### Number of Consul servers
@@ -327,7 +324,7 @@ Enterprise customers might also rely on [automated backups](/consul/docs/manage/

We do not recommend automated scaling of Consul server nodes based on load or usage unless it is coupled by some logic that prevents the cluster from losing quorum.

One way to improve your datacenter resiliency and to leverage automatic scaling is to use [read replicas](/consul/docs/manage/scale/read-replica) and [redundancy zones](/consul/docs/manage/scale/redundancy-zone).
One way to improve your datacenter resiliency and to leverage automatic scaling is to use [autopilot](/consul/docs/manage/scale/autopilot), [read replicas](/consul/docs/manage/scale/read-replica) and [redundancy zones](/consul/docs/manage/scale/redundancy-zone).
Suggested change
One way to improve your datacenter resiliency and to leverage automatic scaling is to use [autopilot](/consul/docs/manage/scale/autopilot), [read replicas](/consul/docs/manage/scale/read-replica) and [redundancy zones](/consul/docs/manage/scale/redundancy-zone).
One way to improve your datacenter resiliency and to leverage automatic scaling is to use [autopilot](/consul/docs/manage/scale/autopilot), as well as Enterprise features such as [read replicas](/consul/docs/manage/scale/read-replica) and [redundancy zones](/consul/docs/manage/scale/redundancy-zone).


These features provide support for read-heavy workload periods without risking the stability of the overall cluster.
