-
-- Casts from `BYTES` to `STRING` have been changed and now work the same way as in PostgreSQL. New functions `encode()` and `decode()` are available to replace the former functionality. [#18843](https://github.com/cockroachdb/cockroach/pull/18843)
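-
-For illustration, the replacement functions round-trip a `BYTES` value through `STRING`; the `'hex'` format shown here is one of the PostgreSQL-compatible formats:
-
-~~~ sql
-SELECT encode(b'\x01\x02', 'hex');   -- '0102' (STRING)
-SELECT decode('0102', 'hex');        -- b'\x01\x02' (BYTES)
-~~~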
-
-
-### General Changes
-
-- CockroachDB now requires Go 1.9. [#18459](https://github.com/cockroachdb/cockroach/pull/18459)
-- Release binaries now link against `libtinfo` dynamically. Building CockroachDB from source now requires `libtinfo` (or `ncurses`) development packages. [#18979](https://github.com/cockroachdb/cockroach/pull/18979)
-- Building the web UI now requires Node version 6 and Yarn version 0.22.0 or newer. [#18830](https://github.com/cockroachdb/cockroach/pull/18830)
-- Most dependencies have been updated to their latest versions. [#17490](https://github.com/cockroachdb/cockroach/pull/17490)
-- Release docker images are now based on Debian 8.9. [#18748](https://github.com/cockroachdb/cockroach/pull/18748)
-
-
-### SQL Language Changes
-
-- `DROP DATABASE` now defaults to `CASCADE`, restoring the 1.0 (and PostgreSQL-compatible) behavior. [#19182](https://github.com/cockroachdb/cockroach/pull/19182)
-- The `INET` column type and related functions are now supported. [#18171](https://github.com/cockroachdb/cockroach/pull/18171) [#18585](https://github.com/cockroachdb/cockroach/pull/18585)
-- The `ANY`, `SOME`, and `ALL` functions now support subquery and tuple operands. [#18094](https://github.com/cockroachdb/cockroach/pull/18094) [#19266](https://github.com/cockroachdb/cockroach/pull/19266)
-- `current_schemas(false)` behaves more consistently with PostgreSQL. [#18108](https://github.com/cockroachdb/cockroach/pull/18108)
-- `SET CLUSTER SETTING` now supports prepared statement placeholders. [#18377](https://github.com/cockroachdb/cockroach/pull/18377)
-- `SHOW CLUSTER SETTINGS` is now only available to `root`. [#19031](https://github.com/cockroachdb/cockroach/pull/19031)
-- A new cluster setting `cloudstorage.gs.default.key` can be used to store authentication credentials to be used by `BACKUP` and `RESTORE`. [#19018](https://github.com/cockroachdb/cockroach/pull/19018)
-- The `RESTORE DATABASE` statement is now supported. [#19182](https://github.com/cockroachdb/cockroach/pull/19182)
-- `IMPORT` now reports progress incrementally. [#18677](https://github.com/cockroachdb/cockroach/pull/18677)
-- `IMPORT` now supports the `into_db` option. [#18899](https://github.com/cockroachdb/cockroach/pull/18899)
-- The `date_trunc()` function is now available. [#19297](https://github.com/cockroachdb/cockroach/pull/19297)
-- New function `gen_random_uuid()` is equivalent to `uuid_v4()` but returns type `UUID` instead of `BYTES` (see the example after this list). [#19379](https://github.com/cockroachdb/cockroach/pull/19379)
-- The `extract` function now works with `TIMESTAMP WITH TIME ZONE` in addition to plain `TIMESTAMP` and `DATE`. [#19045](https://github.com/cockroachdb/cockroach/pull/19045)
-- `TIMESTAMP WITH TIME ZONE` values are now printed in the correct session time zone. [#19081](https://github.com/cockroachdb/cockroach/pull/19081)
-- PostgreSQL compatibility updates: the `pg_namespace.aclitem` column has been renamed to `nspacl`; `pg_class` now has a `relpersistence` column; the new functions `pg_encoding_to_char`, `pg_get_viewdef`, and `pg_get_keywords` are available; the `pg_tablespace` table is now available; the type name `"char"` (with quotes) is recognized as an alias for `CHAR`; and the `server_version_num` session variable is now available. [#18530](https://github.com/cockroachdb/cockroach/pull/18530) [#18618](https://github.com/cockroachdb/cockroach/pull/18618) [#19127](https://github.com/cockroachdb/cockroach/pull/19127) [#19150](https://github.com/cockroachdb/cockroach/pull/19150) [#19405](https://github.com/cockroachdb/cockroach/pull/19405)
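-
-A quick sketch of the two new functions mentioned above, `gen_random_uuid()` and `date_trunc()` (the output shown in comments is illustrative):
-
-~~~ sql
-SELECT gen_random_uuid();                                      -- a UUID value (uuid_v4() returns BYTES)
-SELECT date_trunc('month', TIMESTAMP '2017-10-26 14:30:00');   -- '2017-10-01 00:00:00'
-~~~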
-
-
-### Command-Line Interface Changes
-
-- A new flag `--temp-dir` can be used to set the location of temporary files (defaults to a subdirectory of the first store). [#18544](https://github.com/cockroachdb/cockroach/pull/18544)
-- Many bugs in the interactive SQL shell have been fixed by switching to `libedit` for command-line input. The `normalize_history` option has been removed. [#18531](https://github.com/cockroachdb/cockroach/pull/18531) [#19125](https://github.com/cockroachdb/cockroach/pull/19125)
-- New command `cockroach load show` displays information about available backups. [#18434](https://github.com/cockroachdb/cockroach/pull/18434)
-- `cockroach node status` and `cockroach node ls` no longer show nodes that are decommissioned and dead. [#18270](https://github.com/cockroachdb/cockroach/pull/18270)
-- The `cockroach node decommission` command now has less noisy output. [#18458](https://github.com/cockroachdb/cockroach/pull/18458)
-
-
-### Bug Fixes
-
-- Fixed issues when `meta2` ranges split, lifting the ~64TB cluster size limitation. [#18709](https://github.com/cockroachdb/cockroach/pull/18709) [#18970](https://github.com/cockroachdb/cockroach/pull/18970)
-- More errors now return the same error codes as PostgreSQL. [#19103](https://github.com/cockroachdb/cockroach/pull/19103)
-- `ROLLBACK` can no longer return a "transaction aborted" error. [#19167](https://github.com/cockroachdb/cockroach/pull/19167)
-- Fixed a panic in `SHOW TRACE FOR SELECT COUNT(*)`. [#19006](https://github.com/cockroachdb/cockroach/pull/19006)
-- Escaped backslashes are now supported in `regexp_replace` substitution strings. [#19168](https://github.com/cockroachdb/cockroach/pull/19168)
-- `extract(quarter FROM ts)` now works correctly (see the example after this list). [#19298](https://github.com/cockroachdb/cockroach/pull/19298)
-- The node liveness system is now more robust on a heavily-loaded cluster. [#19279](https://github.com/cockroachdb/cockroach/pull/19279)
-- Added debug logging when attempting to commit a non-existent intent. [#17580](https://github.com/cockroachdb/cockroach/pull/17580)
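-
-For instance, the fixed `quarter` extraction now returns the expected value:
-
-~~~ sql
-SELECT extract(quarter FROM TIMESTAMP '2017-11-21 10:00:00');   -- 4
-~~~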
-
-
-### Performance Improvements
-
-- New cluster setting `timeseries.resolution_10s.storage_duration` can be used to reduce the storage used by built-in monitoring (see the example after this list). [#18632](https://github.com/cockroachdb/cockroach/pull/18632)
-- Foreign key checks are now performed in batches. [#18730](https://github.com/cockroachdb/cockroach/pull/18730)
-- Raft ready processing is now batched, increasing performance of uncontended single-range write workloads. [#19056](https://github.com/cockroachdb/cockroach/pull/19056) [#19164](https://github.com/cockroachdb/cockroach/pull/19164)
-- The leaseholder cache is now sharded to improve concurrency and uses less memory. [#17987](https://github.com/cockroachdb/cockroach/pull/17987) [#18443](https://github.com/cockroachdb/cockroach/pull/18443)
-- Finding split keys is now more efficient. [#18649](https://github.com/cockroachdb/cockroach/pull/18649) [#18718](https://github.com/cockroachdb/cockroach/pull/18718)
-- `STDDEV` and `VARIANCE` aggregations can now be parallelized by the distributed SQL engine. [#18520](https://github.com/cockroachdb/cockroach/pull/18520)
-- Store statistics are now updated immediately after rebalancing. [#18425](https://github.com/cockroachdb/cockroach/pull/18425) [#19115](https://github.com/cockroachdb/cockroach/pull/19115)
-- Raft truncation is now faster. [#18706](https://github.com/cockroachdb/cockroach/pull/18706)
-- Replica rebalancing is now prioritized over lease rebalancing. [#17595](https://github.com/cockroachdb/cockroach/pull/17595)
-- `IMPORT` and `RESTORE` are more efficient. [#19070](https://github.com/cockroachdb/cockroach/pull/19070)
-- Restoring a backup no longer creates an extra empty range. [#19052](https://github.com/cockroachdb/cockroach/pull/19052)
-- Improved performance of type checking. [#19078](https://github.com/cockroachdb/cockroach/pull/19078)
-- The replica allocator now avoids adding new replicas that it would immediately try to undo. [#18364](https://github.com/cockroachdb/cockroach/pull/18364)
-- Improved performance of the SQL parser. [#19068](https://github.com/cockroachdb/cockroach/pull/19068)
-- Cached strings used for stats reporting in prepared statements. [#19240](https://github.com/cockroachdb/cockroach/pull/19240)
-- Reduced command queue contention during intent resolution. [#19093](https://github.com/cockroachdb/cockroach/pull/19093)
-- Transactions that do not use the client-directed retry protocol and experience retry errors are now more likely to detect those errors early instead of at commit time. [#18858](https://github.com/cockroachdb/cockroach/pull/18858)
-- Commands that have already exceeded their deadline are now dropped before proposal. [#19380](https://github.com/cockroachdb/cockroach/pull/19380)
-- Reduced the encoded size of some internal protocol buffers, reducing disk write amplification. [#18689](https://github.com/cockroachdb/cockroach/pull/18689) [#18834](https://github.com/cockroachdb/cockroach/pull/18834) [#18835](https://github.com/cockroachdb/cockroach/pull/18835) [#18828](https://github.com/cockroachdb/cockroach/pull/18828) [#18910](https://github.com/cockroachdb/cockroach/pull/18910) [#18950](https://github.com/cockroachdb/cockroach/pull/18950)
-- Reduced memory allocations and GC overhead. [#18914](https://github.com/cockroachdb/cockroach/pull/18914) [#18927](https://github.com/cockroachdb/cockroach/pull/18927) [#18928](https://github.com/cockroachdb/cockroach/pull/18928) [#19136](https://github.com/cockroachdb/cockroach/pull/19136) [#19246](https://github.com/cockroachdb/cockroach/pull/19246)
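-
-A sketch of the storage-duration setting mentioned above (the seven-day value here is only an example, not a recommendation):
-
-~~~ sql
-SET CLUSTER SETTING timeseries.resolution_10s.storage_duration = '168h';
-SHOW CLUSTER SETTING timeseries.resolution_10s.storage_duration;
-~~~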
diff --git a/src/current/_includes/releases/v2.0/v1.2-alpha.20171113.md b/src/current/_includes/releases/v2.0/v1.2-alpha.20171113.md
deleted file mode 100644
index 2922a6c1546..00000000000
--- a/src/current/_includes/releases/v2.0/v1.2-alpha.20171113.md
+++ /dev/null
@@ -1,123 +0,0 @@
-
-
-- Redefined `NaN` comparisons to be compatible with PostgreSQL. `NaN` is now equal to itself and sorts before all other non-NULL values (see the example below). [#19144](https://github.com/cockroachdb/cockroach/pull/19144)
-
-- It is no longer possible to [drop a user](https://www.cockroachlabs.com/docs/v2.0/drop-user) with grants; the user's grants must first be [revoked](https://www.cockroachlabs.com/docs/v2.0/revoke). [#19095](https://github.com/cockroachdb/cockroach/pull/19095)
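-
-To make the new `NaN` semantics concrete:
-
-~~~ sql
-SELECT 'NaN'::FLOAT = 'NaN'::FLOAT;                             -- now true
-SELECT x FROM (VALUES (1.0::FLOAT), ('NaN'::FLOAT)) AS t(x)
-  ORDER BY x;                                                   -- NaN sorts first
-~~~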
-
-
-### Build Changes
-
-- Fixed compilation on the 64-bit ARM architecture. [#19795](https://github.com/cockroachdb/cockroach/pull/19795)
-
-- NodeJS 6+ and Yarn 1.0+ are now required to build CockroachDB. [#18349](https://github.com/cockroachdb/cockroach/pull/18349)
-
-
-### SQL Language Changes
-
-- [`SHOW GRANTS`](https://www.cockroachlabs.com/docs/v2.0/show-grants) (no user specified) and `SHOW GRANTS FOR <user>` are now supported. The former lists all grants for all users on all databases and tables; the latter does so for the specified user. [#19095](https://github.com/cockroachdb/cockroach/pull/19095)
-
-- [`SHOW GRANTS`](https://www.cockroachlabs.com/docs/v2.0/show-grants) statements now report the database name for tables. [#19095](https://github.com/cockroachdb/cockroach/pull/19095)
-
-- [`CREATE USER`](https://www.cockroachlabs.com/docs/v2.0/create-user) statements are no longer included in the results of [`SHOW QUERIES`](https://www.cockroachlabs.com/docs/v2.0/show-queries) statements. [#19095](https://github.com/cockroachdb/cockroach/pull/19095)
-
-- The new `ALTER USER ... WITH PASSWORD ...` statement can be used to change a user's password. [#19095](https://github.com/cockroachdb/cockroach/pull/19095)
-
-- [`CREATE USER IF NOT EXISTS`](https://www.cockroachlabs.com/docs/v2.0/create-user) is now supported (illustrated after this list). [#19095](https://github.com/cockroachdb/cockroach/pull/19095)
-
-- New [foreign key constraints](https://www.cockroachlabs.com/docs/v2.0/foreign-key) without an action specified for `ON DELETE` or `ON UPDATE` now default to `NO ACTION`, and existing foreign key constraints are now considered to have both `ON UPDATE` and `ON DELETE` actions set to `NO ACTION` even if `RESTRICT` was specified at the time of creation. To set an existing foreign key constraint's action to `RESTRICT`, the constraint must be dropped and recreated.
-
- Note that `NO ACTION` and `RESTRICT` are currently equivalent and will remain so until options for deferring constraint checking are added. [#19416](https://github.com/cockroachdb/cockroach/pull/19416)
-
-- Added more columns to [`information_schema.table_constraints`](https://www.cockroachlabs.com/docs/v2.0/information-schema#table_constraints). [#19466](https://github.com/cockroachdb/cockroach/pull/19466)
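-
-A brief sketch of the new user-management statements above, using a hypothetical user `maxroach`:
-
-~~~ sql
-CREATE USER IF NOT EXISTS maxroach;
-ALTER USER maxroach WITH PASSWORD 's3cr3t';
-SHOW GRANTS FOR maxroach;
-~~~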
-
-
-### Command-Line Interface Changes
-
-- On node startup, the location for temporary files, as defined by the `--temp-dir` flag, is printed to the standard output. [#19272](https://github.com/cockroachdb/cockroach/pull/19272)
-
-
-### Admin UI Changes
-
-- [Decommissioned nodes](https://www.cockroachlabs.com/docs/v2.0/remove-nodes) no longer cause warnings about staggered versions. [#19547](https://github.com/cockroachdb/cockroach/pull/19547)
-
-
-### Bug Fixes
-
-- Fixed a bug causing redundant log messages when running [`SHOW TRACE FOR`](https://www.cockroachlabs.com/docs/v2.0/show-trace). [#19468](https://github.com/cockroachdb/cockroach/pull/19468)
-
-- [`DROP INDEX IF EXISTS`](https://www.cockroachlabs.com/docs/v2.0/drop-index) now behaves properly when not using the `table@idx` syntax (see the example after this list). [#19390](https://github.com/cockroachdb/cockroach/pull/19390)
-
-- Fixed a double close of the merge joiner output. [#19794](https://github.com/cockroachdb/cockroach/pull/19794)
-
-- Fixed a panic caused by placeholders in `PREPARE` statements. [#19636](https://github.com/cockroachdb/cockroach/pull/19636)
-
-- Improved error messages about Raft progress in the replicate queue. [#19593](https://github.com/cockroachdb/cockroach/pull/19593)
-
-- The [`cockroach dump`](https://www.cockroachlabs.com/docs/v2.0/sql-dump) command now properly supports [`ARRAY`](https://www.cockroachlabs.com/docs/v2.0/array) values. [#19498](https://github.com/cockroachdb/cockroach/pull/19498)
-
-- Fixed range splitting to work when the first row of a range is larger than half the configured range size. [#19339](https://github.com/cockroachdb/cockroach/pull/19339)
-
-- Reduced unnecessary log messages when a cluster becomes temporarily unbalanced, for example, when a new node joins. [#19494](https://github.com/cockroachdb/cockroach/pull/19494)
-
-- Using [`DELETE`](https://www.cockroachlabs.com/docs/v2.0/delete) without `WHERE` and `RETURNING` inside `[...]` no longer causes a panic. [#19822](https://github.com/cockroachdb/cockroach/pull/19822)
-
-- SQL comparisons using the `ANY`, `SOME`, or `ALL` [operators](https://www.cockroachlabs.com/docs/v2.0/functions-and-operators#operators) with sub-queries and cast expressions work properly again. [#19801](https://github.com/cockroachdb/cockroach/pull/19801)
-
-- On macOS, the built-in SQL shell ([`cockroach sql`](https://www.cockroachlabs.com/docs/v2.0/use-the-built-in-sql-client)) once again properly supports window resizing and suspend-to-background. [#19429](https://github.com/cockroachdb/cockroach/pull/19429)
-
-- Silenced an overly verbose log message. [#19504](https://github.com/cockroachdb/cockroach/pull/19504)
-
-- Fixed a bug preventing large, distributed queries that overflow onto disk from completing. [#19689](https://github.com/cockroachdb/cockroach/pull/19689)
-
-- It is not possible to `EXECUTE` inside `PREPARE` statements or alongside other `EXECUTE` statements; attempting to do so no longer causes a panic. [#19809](https://github.com/cockroachdb/cockroach/pull/19809) [#19720](https://github.com/cockroachdb/cockroach/pull/19720)
-
-- The admin UI now works when a different `--advertise-host` is used. [#19426](https://github.com/cockroachdb/cockroach/pull/19426)
-
-- An improperly typed subquery used with `IN` no longer panics. [#19858](https://github.com/cockroachdb/cockroach/pull/19858)
-
-- It is now possible to [`RESTORE`](https://www.cockroachlabs.com/docs/v2.0/restore) using an incremental [`BACKUP`](https://www.cockroachlabs.com/docs/v2.0/backup) taken after a table was dropped. [#19601](https://github.com/cockroachdb/cockroach/pull/19601)
-
-- Fixed a bug that caused the crash reporting setting to always be disabled. [#19554](https://github.com/cockroachdb/cockroach/pull/19554)
-
-- Prevented occasional crashes when the server is shut down during startup. [#19591](https://github.com/cockroachdb/cockroach/pull/19591)
-
-- Prevented a potential Gossip deadlock on cluster startup. [#19493](https://github.com/cockroachdb/cockroach/pull/19493)
-
-- Improved error handling during splits. [#19448](https://github.com/cockroachdb/cockroach/pull/19448)
-
-- Some I/O errors now cause the server to shut down. [#19447](https://github.com/cockroachdb/cockroach/pull/19447)
-
-- Improved resiliency to S3 quota limits by retrying some operations during [`BACKUP`](https://www.cockroachlabs.com/docs/v2.0/backup)/[`RESTORE`](https://www.cockroachlabs.com/docs/v2.0/restore)/[`IMPORT`](https://www.cockroachlabs.com/docs/v2.0/import).
-
-- Executing [`TRUNCATE`](https://www.cockroachlabs.com/docs/v2.0/truncate) on a table with self-referential foreign key constraints no longer creates broken foreign key backward references. [#19322](https://github.com/cockroachdb/cockroach/issues/19322)
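-
-For example, with a hypothetical index `users_name_idx`, both forms of the statement now behave as expected:
-
-~~~ sql
-DROP INDEX IF EXISTS users_name_idx;         -- previously misbehaved without the table@idx qualification
-DROP INDEX IF EXISTS users@users_name_idx;   -- equivalent qualified form
-~~~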
-
-
-### Performance Improvements
-
-- Improved memory usage for certain queries that use limits at multiple levels. [#19682](https://github.com/cockroachdb/cockroach/pull/19682)
-
-- Eliminated some redundant Raft messages, improving write performance for some workloads by up to 30%. [#19540](https://github.com/cockroachdb/cockroach/pull/19540)
-
-- Trimmed the wire size of various RPCs. [#18930](https://github.com/cockroachdb/cockroach/pull/18930)
-
-- Table leases are now acquired in the background when frequently used, removing a jump in latency when they expire. [#19005](https://github.com/cockroachdb/cockroach/pull/19005)
-
-
-### Enterprise Edition Changes
-
-- When an enterprise [`RESTORE`](https://www.cockroachlabs.com/docs/v2.0/restore) fails or is canceled, partially restored data is now properly cleaned up. [#19578](https://github.com/cockroachdb/cockroach/pull/19578)
-
-- A placeholder is now added during long-running [`BACKUP`](https://www.cockroachlabs.com/docs/v2.0/backup) and [`IMPORT`](https://www.cockroachlabs.com/docs/v2.0/import) jobs to protect against concurrent operations accidentally using their target. [#19713](https://github.com/cockroachdb/cockroach/pull/19713)
-
-
-### Doc Updates
-
-- New RFCs:
- - [Inverted indexes](https://github.com/cockroachdb/cockroach/blob/master/docs/RFCS/20171020_inverted_indexes.md) [#18992](https://github.com/cockroachdb/cockroach/pull/18992)
- - [`JSONB` encoding](https://github.com/cockroachdb/cockroach/blob/master/docs/RFCS/20171005_jsonb_encoding.md) [#19062](https://github.com/cockroachdb/cockroach/pull/19062)
- - [SQL Sequences](https://github.com/cockroachdb/cockroach/blob/master/docs/RFCS/20171102_sql_sequences.md) [#19196](https://github.com/cockroachdb/cockroach/pull/19196)
- - [Interleaved table JOINs](https://github.com/cockroachdb/cockroach/blob/master/docs/RFCS/20171025_interleaved_table_joins.md) [#19028](https://github.com/cockroachdb/cockroach/pull/19028)
- - [SQL consistency check command](https://github.com/cockroachdb/cockroach/blob/master/docs/RFCS/20171025_scrub_sql_consistency_check_command.md) [#18675](https://github.com/cockroachdb/cockroach/pull/18675)
-- Documented how to [increase the system-wide file descriptors limit on Linux](https://www.cockroachlabs.com/docs/v2.0/recommended-production-settings#file-descriptors-limit). [#2139](https://github.com/cockroachdb/docs/pull/2139)
-- Clarified that multiple transaction options in a single [`SET TRANSACTION`](https://www.cockroachlabs.com/docs/v2.0/set-transaction#set-isolation-priority) statement can be space-separated as well as comma-separated. [#2139](https://github.com/cockroachdb/docs/pull/2139)
-- Added `e'\\x` to the list of supported [hexadecimal-encoded byte array literal](https://www.cockroachlabs.com/docs/v2.0/sql-constants#hexadecimal-encoded-byte-array-literals) formats. [#2134](https://github.com/cockroachdb/docs/pull/2134)
-- Clarified the FAQ on [auto-generating unique row IDs](https://www.cockroachlabs.com/docs/v2.0/sql-faqs#how-do-i-auto-generate-unique-row-ids-in-cockroachdb). [#2128](https://github.com/cockroachdb/docs/pull/2128)
-- Corrected the aliases and allowed widths of various [`INT`](https://www.cockroachlabs.com/docs/v1.1/int) types. [#2116](https://github.com/cockroachdb/docs/pull/2116)
-- Corrected the description of the `--host` flag in our insecure [cloud deployment tutorials](https://www.cockroachlabs.com/docs/v1.1/manual-deployment). [#2117](https://github.com/cockroachdb/docs/pull/2117)
-- Minor improvements to the [CockroachDB Architecture Overview](https://www.cockroachlabs.com/docs/v1.1/architecture/overview) page. [#2103](https://github.com/cockroachdb/docs/pull/2103) [#2104](https://github.com/cockroachdb/docs/pull/2104) [#2105](https://github.com/cockroachdb/docs/pull/2105)
diff --git a/src/current/_includes/releases/v2.0/v1.2-alpha.20171204.md b/src/current/_includes/releases/v2.0/v1.2-alpha.20171204.md
deleted file mode 100644
index 03dbbef7fcb..00000000000
--- a/src/current/_includes/releases/v2.0/v1.2-alpha.20171204.md
+++ /dev/null
@@ -1,87 +0,0 @@
-
-
-- CockroachDB now uses RocksDB version 5.9.0. [#20070](https://github.com/cockroachdb/cockroach/pull/20070)
-
-
-### Build Changes
-
-- Restored compatibility with older x86 CPUs that do not support SSE4.2 extensions. [#19909](https://github.com/cockroachdb/cockroach/issues/19909)
-
-
-### SQL Language Changes
-
-- The `TIME` data type is now supported. [#19923](https://github.com/cockroachdb/cockroach/pull/19923)
-- The [`IMPORT`](https://www.cockroachlabs.com/docs/v2.0/import) command now tolerates empty CSV files and supports `201` and `204` return codes from HTTP storage. [#19861](https://github.com/cockroachdb/cockroach/pull/19861) [#20027](https://github.com/cockroachdb/cockroach/pull/20027)
-- [`nodelocal://`](https://www.cockroachlabs.com/docs/v2.0/import#import-file-urls) paths in `IMPORT` now resolve relative to the "extern" subdirectory of the first store directory, configurable via the new `--external-io-dir` flag. [#19865](https://github.com/cockroachdb/cockroach/pull/19865)
-- Added `AWS_ENDPOINT` and `AWS_REGION` parameters in S3 URIs to specify the AWS endpoint or region for [`IMPORT`](https://www.cockroachlabs.com/docs/v2.0/import). The endpoint can be any S3-compatible service. [#19860](https://github.com/cockroachdb/cockroach/pull/19860)
-- For compatibility with PostgreSQL:
- - The `time zone` [session variable](https://www.cockroachlabs.com/docs/v2.0/set-vars) (with a space) has been renamed `timezone` (without a space), and `SET TIMEZONE` and `SHOW TIMEZONE` are now supported alongside the existing `SET TIME ZONE` and `SHOW TIME ZONE` syntax. Also, `SET TIMEZONE =` can now be used as an alternative to `SET TIMEZONE TO`. [#19931](https://github.com/cockroachdb/cockroach/pull/19931)
- - The `transaction_read_only` [session variable](https://www.cockroachlabs.com/docs/v2.0/set-vars) is now supported. It is always set to `off`. [#19971](https://github.com/cockroachdb/cockroach/pull/19971)
- - The `transaction isolation level`, `transaction priority`, and `transaction status` [session variables](https://www.cockroachlabs.com/docs/v2.0/set-vars) have been renamed `transaction_isolation`, `transaction_priority`, and `transaction_status`. [#20264](https://github.com/cockroachdb/cockroach/pull/20264)
-- [`SHOW TRACE FOR SELECT`](https://www.cockroachlabs.com/docs/v2.0/show-trace) now supports `AS OF SYSTEM TIME`. [#20162](https://github.com/cockroachdb/cockroach/pull/20162)
-- Added the `system.table_statistics` table for maintaining statistics about columns or groups of columns. These statistics will eventually be used by the query optimizer. [#20072](https://github.com/cockroachdb/cockroach/pull/20072)
-- The [`UPDATE`](https://www.cockroachlabs.com/docs/v2.0/update) and [`DELETE`](https://www.cockroachlabs.com/docs/v2.0/delete) statements now support `ORDER BY` and `LIMIT` clauses (see the example after this list). [#20069](https://github.com/cockroachdb/cockroach/pull/20069)
- - For `UPDATE`, this is a MySQL extension that can help with updating the primary key of a table (`ORDER BY`) and control the maximum size of write transactions (`LIMIT`).
- - For `DELETE`, the `ORDER BY` clause constrains the deletion order, the output of its `LIMIT` clause (if any), and the result order of its `RETURNING` clause (if any).
-- On table creation, [`DEFAULT`](https://www.cockroachlabs.com/docs/v2.0/default-value) expressions no longer get evaluated. [#20031](https://github.com/cockroachdb/cockroach/pull/20031)
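-
-A sketch of the new `ORDER BY`/`LIMIT` support in `UPDATE` and `DELETE`, assuming a table `events(id INT PRIMARY KEY, ts TIMESTAMP)`:
-
-~~~ sql
-DELETE FROM events ORDER BY ts ASC LIMIT 100 RETURNING id;      -- bounded, ordered deletion
-UPDATE events SET id = id + 1000000 ORDER BY id DESC LIMIT 10;  -- shift primary keys without self-conflicts
-~~~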
-
-
-### Command-Line Interface Changes
-
-- The [`cockroach node status`](https://www.cockroachlabs.com/docs/v2.0/view-node-details) command now indicates if a node is dead. [#20192](https://github.com/cockroachdb/cockroach/pull/20192)
-- The new `--external-io-dir` flag in [`cockroach start`](https://www.cockroachlabs.com/docs/v2.0/start-a-node) can be used to configure the location of [`nodelocal://`](https://www.cockroachlabs.com/docs/v2.0/import#import-file-urls) paths in `BACKUP`, `RESTORE`, and `IMPORT`. [#19725](https://github.com/cockroachdb/cockroach/pull/19725)
-
-
-### Admin UI Changes
-
-- Updated time series axis labels to show the correct byte units. [#19870](https://github.com/cockroachdb/cockroach/pull/19870)
-- Added a cluster overview page showing current capacity usage, node liveness, and replication status. [#19657](https://github.com/cockroachdb/cockroach/pull/19657)
-
-
-### Bug Fixes
-
-- Fixed how column modifiers interact with [`ARRAY`](https://www.cockroachlabs.com/docs/v2.0/array) values. [#19499](https://github.com/cockroachdb/cockroach/pull/19499)
-- Enabled an RPC-saving optimization when the `--advertise-host` is used. [#20006](https://github.com/cockroachdb/cockroach/pull/20006)
-- It is now possible to [drop a column](https://www.cockroachlabs.com/docs/v2.0/drop-column) that is referenced as a [foreign key](https://www.cockroachlabs.com/docs/v2.0/foreign-key) when it is the only column in that reference. [#19772](https://github.com/cockroachdb/cockroach/pull/19772)
-- Fixed a panic involving the use of the `IN` operator and improperly typed subqueries. [#19858](https://github.com/cockroachdb/cockroach/pull/19858)
-- Fixed a spurious panic about divergence of on-disk and in-memory state. [#19867](https://github.com/cockroachdb/cockroach/pull/19867)
-- Fixed a bug allowing duplicate columns in primary indexes. [#20238](https://github.com/cockroachdb/cockroach/pull/20238)
-- Fixed a bug with `NaN`s and `Infinity`s in `EXPLAIN` outputs. [#20233](https://github.com/cockroachdb/cockroach/pull/20233)
-- Fixed a possible crash due to statements finishing execution after the client connection has been closed. [#20175](https://github.com/cockroachdb/cockroach/pull/20175)
-- Fixed a correctness bug when using distributed SQL engine sorted merge joins. [#20090](https://github.com/cockroachdb/cockroach/pull/20090)
-- Fixed a bug excluding some trace data from [`SHOW TRACE FOR `](https://www.cockroachlabs.com/docs/v2.0/show-trace). [#20081](https://github.com/cockroachdb/cockroach/pull/20081)
-- Fixed a case in which ambiguous errors were treated as unambiguous and led to inappropriate retries. [#20073](https://github.com/cockroachdb/cockroach/pull/20073)
-- Fixed a bug leading to incorrect results for some queries with `IN` constraints. [#20036](https://github.com/cockroachdb/cockroach/pull/20036)
-- Fixed the encoding of indexes that use [`STORING`](https://www.cockroachlabs.com/docs/v2.0/create-index#store-columns) columns. [#20001](https://github.com/cockroachdb/cockroach/pull/20001)
-- [`IMPORT`](https://www.cockroachlabs.com/docs/v2.0/import) checkpoints are now correctly cleaned up. [#20211](https://github.com/cockroachdb/cockroach/pull/20211)
-- Fixed a bug that could cause system overload during cleanup of large transactions. [#19538](https://github.com/cockroachdb/cockroach/pull/19538)
-- On macOS, the built-in SQL shell ([`cockroach sql`](https://www.cockroachlabs.com/docs/v2.0/use-the-built-in-sql-client)) once again properly supports window resizing. [#20148](https://github.com/cockroachdb/cockroach/pull/20148), [#20153](https://github.com/cockroachdb/cockroach/pull/20153)
-- Window functions using `PARTITION BY` with multiple columns now work properly (see the example after this list). [#20151](https://github.com/cockroachdb/cockroach/pull/20151)
-- Fixed a bug preventing the deletion of chains of two or more foreign key references. [#20050](https://github.com/cockroachdb/cockroach/pull/20050)
-- Prometheus vars are now written outside the metrics lock. [#20194](https://github.com/cockroachdb/cockroach/pull/20194)
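-
-For example, a multi-column partition such as the following now evaluates correctly (the table and columns are hypothetical):
-
-~~~ sql
-SELECT name,
-       rank() OVER (PARTITION BY city, state ORDER BY population DESC)
-FROM places;
-~~~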
-
-
-### Enterprise Edition Changes
-
-- Enterprise [`BACKUP`s](https://www.cockroachlabs.com/docs/v2.0/backup) no longer automatically include the `system.users` and `system.descriptor` tables. [#19975](https://github.com/cockroachdb/cockroach/pull/19975)
-- Added `AWS_ENDPOINT` and `AWS_REGION` parameters in S3 URIs to specify the AWS endpoint or region for [`BACKUP`](https://www.cockroachlabs.com/docs/v2.0/backup)/[`RESTORE`](https://www.cockroachlabs.com/docs/v2.0/restore). The endpoint can be any S3-compatible service (see the example after this list). [#19860](https://github.com/cockroachdb/cockroach/pull/19860)
-- `RESTORE DATABASE` is now allowed only when the backup contains a whole database. [#20023](https://github.com/cockroachdb/cockroach/pull/20023)
-- Fixed `RESTORE` being resumed with `skip_missing_foreign_keys` specified. [#20092](https://github.com/cockroachdb/cockroach/pull/20092)
-- [`BACKUP`](https://www.cockroachlabs.com/docs/v2.0/backup)/[`RESTORE`](https://www.cockroachlabs.com/docs/v2.0/restore) jobs now support `201` and `204` return codes from HTTP storage. [#20027](https://github.com/cockroachdb/cockroach/pull/20027)
-- [`BACKUP`](https://www.cockroachlabs.com/docs/v2.0/backup) now checks that all interleaved tables are included (as required by `RESTORE`). [#20206](https://github.com/cockroachdb/cockroach/pull/20206)
-- Marked `revision_history` [`BACKUP`](https://www.cockroachlabs.com/docs/v2.0/backup)/[`RESTORE`](https://www.cockroachlabs.com/docs/v2.0/restore) as experimental. [#20164](https://github.com/cockroachdb/cockroach/pull/20164)
-- [`nodelocal://`](https://www.cockroachlabs.com/docs/v2.0/import#import-file-urls) paths in `BACKUP`/`RESTORE` now resolve relative to the "extern" subdirectory of the first store directory, configurable via the new `--external-io-dir` flag. [#19865](https://github.com/cockroachdb/cockroach/pull/19865)
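-
-A sketch of the new S3 URI parameters (the bucket, credentials, and endpoint below are placeholders):
-
-~~~ sql
-BACKUP DATABASE bank TO
-  's3://acme-backups/bank?AWS_ACCESS_KEY_ID=key&AWS_SECRET_ACCESS_KEY=secret&AWS_ENDPOINT=https://s3.example.com';
-~~~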
-
-
-### Doc Updates
-
-- In conjunction with beta-level support for the C# (.NET) Npgsql driver, added a tutorial on [building a C# app with CockroachDB](https://www.cockroachlabs.com/docs/v2.0/build-a-csharp-app-with-cockroachdb). [#2236](https://github.com/cockroachdb/docs/pull/2236)
-- Improved Kubernetes guidance:
- - Added a tutorial on [orchestrating a secure CockroachDB cluster with Kubernetes](https://www.cockroachlabs.com/docs/v2.0/orchestrate-cockroachdb-with-kubernetes), improved the tutorial for [insecure orchestrations](https://www.cockroachlabs.com/docs/v2.0/orchestrate-cockroachdb-with-kubernetes-insecure), and added a [local cluster tutorial using `minikube`](https://www.cockroachlabs.com/docs/v2.0/orchestrate-a-local-cluster-with-kubernetes-insecure). [#2147](https://github.com/cockroachdb/docs/pull/2147)
- - Updated the StatefulSet configurations to support rolling upgrades, and added [initial documentation](https://github.com/cockroachdb/cockroach/tree/master/cloud/kubernetes#doing-a-rolling-upgrade-to-a-different-cockroachdb-version). [#19995](https://github.com/cockroachdb/cockroach/pull/19995)
-- Added performance best practices for [`INSERT`](https://www.cockroachlabs.com/docs/v2.0/insert#performance-best-practices) and [`UPSERT`](https://www.cockroachlabs.com/docs/v2.0/upsert#considerations) statements. [#2199](https://github.com/cockroachdb/docs/pull/2199)
-- Documented how to use the `timeseries.resolution_10s.storage_duration` cluster setting to [truncate timeseries data sooner than the default 30 days](https://www.cockroachlabs.com/docs/v2.0/operational-faqs#why-is-disk-usage-increasing-despite-lack-of-writes). [#2210](https://github.com/cockroachdb/docs/pull/2210)
-- Clarified the treatment of `NULL` values in [`SELECT` statements with an `ORDER BY` clause](https://www.cockroachlabs.com/docs/v2.0/select-clause#sorting-and-limiting-query-results). [#2237](https://github.com/cockroachdb/docs/pull/2237)
-
-
-### New RFCs
-
-- [`SELECT FOR UPDATE`](https://github.com/cockroachdb/cockroach/blob/master/docs/RFCS/20171024_select_for_update.md) [#19577](https://github.com/cockroachdb/cockroach/pull/19577)
-- [SQL Optimizer Statistics](https://github.com/cockroachdb/cockroach/blob/master/docs/RFCS/20170908_sql_optimizer_statistics.md) [#18399](https://github.com/cockroachdb/cockroach/pull/18399)
-- [SCRUB Index and Physical Check Implementation](https://github.com/cockroachdb/cockroach/blob/master/docs/RFCS/20171120_scrub_index_and_physical_implementation.md) [#19327](https://github.com/cockroachdb/cockroach/pull/19327)
diff --git a/src/current/_includes/releases/v2.0/v1.2-alpha.20171211.md b/src/current/_includes/releases/v2.0/v1.2-alpha.20171211.md
deleted file mode 100644
index d3bef7a3995..00000000000
--- a/src/current/_includes/releases/v2.0/v1.2-alpha.20171211.md
+++ /dev/null
@@ -1,57 +0,0 @@
-
-
-- Alpha and beta releases are now published as [Docker images under the name cockroachdb/cockroach-unstable](https://hub.docker.com/r/cockroachdb/cockroach-unstable/). [#20331](https://github.com/cockroachdb/cockroach/pull/20331)
-
-
-### SQL Language Changes
-
-- The protocol statement tag for [`CREATE TABLE ... AS ...`](https://www.cockroachlabs.com/docs/v2.0/create-table-as) is now [`SELECT`](https://www.cockroachlabs.com/docs/v2.0/select-clause), like in PostgreSQL. [#20268](https://github.com/cockroachdb/cockroach/pull/20268)
-- OIDs can now be compared with inequality operators. [#20367](https://github.com/cockroachdb/cockroach/pull/20367)
-- The [`CANCEL JOB`](https://www.cockroachlabs.com/docs/v2.0/cancel-job) statement now supports canceling [`IMPORT`](https://www.cockroachlabs.com/docs/v2.0/import) jobs. [#20343](https://github.com/cockroachdb/cockroach/pull/20343)
-- [`PAUSE JOB`](https://www.cockroachlabs.com/docs/v2.0/pause-job)/[`RESUME JOB`](https://www.cockroachlabs.com/docs/v2.0/resume-job)/[`CANCEL JOB`](https://www.cockroachlabs.com/docs/v2.0/cancel-job) statements can now be used within SQL [transactions](https://www.cockroachlabs.com/docs/v2.0/transactions). [#20185](https://github.com/cockroachdb/cockroach/pull/20185)
-- Added a cache for the internal `system.table_statistics` table. [#20212](https://github.com/cockroachdb/cockroach/pull/20212)
-- The `intervalstyle` [session variable](https://www.cockroachlabs.com/docs/v2.0/set-vars) is now supported for PostgreSQL compatibility. [#20274](https://github.com/cockroachdb/cockroach/pull/20274)
-- The [`SHOW [KV] TRACE`](https://www.cockroachlabs.com/docs/v2.0/show-trace) statement now properly extracts file/line number information when analyzing traces produced in debug mode. Also, the new `SHOW COMPACT [KV] TRACE` statement provides a more compact view on the same data. [#20093](https://github.com/cockroachdb/cockroach/pull/20093)
-- Some queries using `IS NOT NULL` conditions are now better optimized. [#20436](https://github.com/cockroachdb/cockroach/pull/20436)
-- [Views](https://www.cockroachlabs.com/docs/v2.0/views) now support `LIMIT` and `ORDER BY`. [#20246](https://github.com/cockroachdb/cockroach/pull/20246)
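-
-For example, a view can now carry its own ordering and row limit (assuming a table `orders` exists):
-
-~~~ sql
-CREATE VIEW recent_orders AS
-  SELECT id, total FROM orders ORDER BY placed_at DESC LIMIT 10;
-~~~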
-
-
-### Command-Line Interface Changes
-
-- Reduced temporary disk space usage for the `cockroach debug compact` command. [#20460](https://github.com/cockroachdb/cockroach/pull/20460)
-- The [`cockroach node status`](https://www.cockroachlabs.com/docs/v2.0/view-node-details) and [`cockroach node ls`](https://www.cockroachlabs.com/docs/v2.0/view-node-details) commands now support a timeout. [#20308](https://github.com/cockroachdb/cockroach/pull/20308)
-
-
-### Admin UI Changes
-
-- The Admin UI now sets the `Last-Modified` header when serving assets to permit browser caching. This improves page load times, especially on slow connections. [#20429](https://github.com/cockroachdb/cockroach/pull/20429)
-
-
-### Bug Fixes
-
-- Removed the possibility of OOM errors during distributed [`IMPORT`](https://www.cockroachlabs.com/docs/v2.0/import) from CSV files. [#20506](https://github.com/cockroachdb/cockroach/pull/20506)
-- Fixed a crash triggered by some corner-case queries containing [`ORDER BY`](https://www.cockroachlabs.com/docs/v2.0/query-order). [#20489](https://github.com/cockroachdb/cockroach/pull/20489)
-- Added missing Distributed SQL flows to the exported `sql.distsql.flows.active` and `sql.distsql.flows.total` metrics and the "Active Flows for Distributed SQL Queries" admin UI graph. [#20503](https://github.com/cockroachdb/cockroach/pull/20503)
-- Fixed an issue with stale buffer data when using the binary format for [`ARRAY`](https://www.cockroachlabs.com/docs/v2.0/array) values. [#20461](https://github.com/cockroachdb/cockroach/pull/20461)
-- The [`cockroach sql`](https://www.cockroachlabs.com/docs/v2.0/use-the-built-in-sql-client) shell now better reports the number of rows inserted by a [`CREATE TABLE ... AS ...`](https://www.cockroachlabs.com/docs/v2.0/create-table-as) statement. Note, however, that the results are still formatted incorrectly if the `CREATE TABLE ... AS ...` statement creates zero rows in the new table. [#20268](https://github.com/cockroachdb/cockroach/pull/20268)
-- Self-referencing tables can now reference a non-primary index without manually adding an index on the referencing column. [#20325](https://github.com/cockroachdb/cockroach/pull/20325)
-- Fixed an issue where spans for descending indexes were displayed incorrectly, and updated `NOT NULL` tokens from `#` to `!NULL`. [#20318](https://github.com/cockroachdb/cockroach/pull/20318)
-- Fixed `BACKUP` jobs to correctly resume in all conditions. [#20185](https://github.com/cockroachdb/cockroach/pull/20185)
-- Fixed various race conditions with jobs. [#20185](https://github.com/cockroachdb/cockroach/pull/20185)
-- It is no longer possible to use conflicting `AS OF SYSTEM TIME` clauses in different parts of a query. [#20267](https://github.com/cockroachdb/cockroach/pull/20267)
-- Fixed a panic caused by dependency cycles with `cockroach dump`. [#20255](https://github.com/cockroachdb/cockroach/pull/20255)
-- Prevented context cancellation during lease acquisition from leaking to coalesced requests. [#20424](https://github.com/cockroachdb/cockroach/pull/20424)
-
-
-### Performance Improvements
-
-- Improved handling of `IS NULL` conditions. [#20366](https://github.com/cockroachdb/cockroach/pull/20366)
-- Improved p99 latencies for garbage collection of previous versions of a key, when there are many versions. [#20373](https://github.com/cockroachdb/cockroach/pull/20373)
-- Smoothed out disk usage under very write heavy workloads by syncing to disk more frequently. [#20352](https://github.com/cockroachdb/cockroach/pull/20352)
-- Improved garbage collection of very large [transactions](https://www.cockroachlabs.com/docs/v2.0/transactions) and large volumes of abandoned write intents. [#20396](https://github.com/cockroachdb/cockroach/pull/20396)
-- Improved table scans and seeks on interleaved parent tables by skipping interleaved children rows at the end of a scan. [#20235](https://github.com/cockroachdb/cockroach/pull/20235)
-- Replaced the interval tree structure in `TimestampCache` with an arena-backed concurrent skiplist. This reduces global locking and garbage collection pressure, improving average and tail latencies. [#20300](https://github.com/cockroachdb/cockroach/pull/20300)
-
-
-### Doc Updates
-
-- Added an [introduction to CockroachDB video](https://www.cockroachlabs.com/docs/v2.0/). [#2234](https://github.com/cockroachdb/docs/pull/2234)
-- Clarified that we have tested the PostgreSQL-compatible drivers and ORMs featured in our documentation enough to claim **beta-level** support. This means that applications using advanced or obscure features of a driver or ORM may encounter incompatibilities. If you encounter problems, please [open an issue](https://github.com/cockroachdb/cockroach/issues/new) with details to help us make progress toward full support. [#2235](https://github.com/cockroachdb/docs/pull/2235)
diff --git a/src/current/_includes/releases/v2.0/v2.0-alpha.20171218.md b/src/current/_includes/releases/v2.0/v2.0-alpha.20171218.md
deleted file mode 100644
index 52d01146370..00000000000
--- a/src/current/_includes/releases/v2.0/v2.0-alpha.20171218.md
+++ /dev/null
@@ -1,26 +0,0 @@
-
-
-- Added support for read-only [transactions](https://www.cockroachlabs.com/docs/v2.0/transactions) via PostgreSQL-compatible syntax (see the example after this list). [#20547](https://github.com/cockroachdb/cockroach/pull/20547)
- - `SET SESSION CHARACTERISTICS AS TRANSACTION READ ONLY/READ WRITE`
- - `SET TRANSACTION READ ONLY/READ WRITE`
- - `SET default_transaction_read_only`
- - `SET transaction_read_only`
-- For compatibility with PostgreSQL, the return type of the `date_trunc(STRING,TIME)` [function](https://www.cockroachlabs.com/docs/v2.0/functions-and-operators) was changed from `TIME` to `INTERVAL`, and the return type of the `date_trunc(STRING,DATE)` function was changed from `DATE` to `TIMESTAMPTZ`. [#20467](https://github.com/cockroachdb/cockroach/pull/20467)
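-
-A minimal sketch of the read-only transaction syntax listed above (the `accounts` table is hypothetical):
-
-~~~ sql
-BEGIN;
-SET TRANSACTION READ ONLY;
-SELECT count(*) FROM accounts;   -- reads are permitted
-COMMIT;                          -- writes inside the transaction would have errored
-~~~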
-
-
-### Bug Fixes
-
-- Fixed a bug preventing CockroachDB from starting when the filesystem generates a `lost+found` directory in the Cockroach data directory. [#20565](https://github.com/cockroachdb/cockroach/pull/20565)
-- Fixed the over-counting of memory usage by aggregations. [#20585](https://github.com/cockroachdb/cockroach/pull/20585)
-- Fixed a panic when using the `date_trunc(STRING,TIMESTAMP)` or `date_trunc(STRING,DATE)` function during queries that run with the distributed execution engine. [#20467](https://github.com/cockroachdb/cockroach/pull/20467)
-- Fixed a bug where the `date_trunc(STRING,TIMESTAMP)` function would return a `TIMESTAMPTZ` value. [#20467](https://github.com/cockroachdb/cockroach/pull/20467)
-- Fixed a race condition that would result in some queries hanging after cancellation. [#20088](https://github.com/cockroachdb/cockroach/pull/20088)
-- Fixed a bug allowing [privileges](https://www.cockroachlabs.com/docs/v2.0/privileges) to be granted to non-existent users. [#20438](https://github.com/cockroachdb/cockroach/pull/20438)
-
-
-### Performance Improvements
-
-- Queries that use inequalities using tuples (e.g., `(a,b,c) < (x,y,z)`) are now slightly better optimized. [#20484](https://github.com/cockroachdb/cockroach/pull/20484)
-- `IS DISTINCT FROM` and `IS NOT DISTINCT FROM` clauses are now smarter about using available indexes. [#20346](https://github.com/cockroachdb/cockroach/pull/20346)
diff --git a/src/current/_includes/releases/v2.0/v2.0-alpha.20180116.md b/src/current/_includes/releases/v2.0/v2.0-alpha.20180116.md
deleted file mode 100644
index 6a5ff4ff30b..00000000000
--- a/src/current/_includes/releases/v2.0/v2.0-alpha.20180116.md
+++ /dev/null
@@ -1,124 +0,0 @@
-
-## {{ include.release }}
-
-Release Date: {{ include.release_date | date: "%B %-d, %Y" }}
-
-{{site.data.alerts.callout_danger}}A bug that could trigger an assertion failure was discovered in this
-release. Bringing up a node too soon after the assertion fired could introduce consistency problems, so this release has been withdrawn.{{site.data.alerts.end}}
-
-{% comment %}
-
-### Backwards-Incompatible Changes
-
-- Removed the obsolete `kv.gc.batch_size` [cluster setting](https://www.cockroachlabs.com/docs/v2.0/cluster-settings). [#21070](https://github.com/cockroachdb/cockroach/pull/21070)
-- Removed the `COCKROACH_METRICS_SAMPLE_INTERVAL` environment variable. Users that relied on it should reduce the value for the `timeseries.resolution_10s.storage_duration` [cluster setting](https://www.cockroachlabs.com/docs/v2.0/cluster-settings) instead. [#20810](https://github.com/cockroachdb/cockroach/pull/20810)
-
-
-### General Changes
-
-- CockroachDB now proactively rebalances data when the diversity of the localities that a given range is located on can be improved. [#19489](https://github.com/cockroachdb/cockroach/pull/19489)
-- Clusters are now initialized with default `.meta` and `.liveness` replication zones with lower GC TTL configurations. [#17628](https://github.com/cockroachdb/cockroach/pull/17628)
-
-
-### SQL Language Changes
-
-- The new `SHOW CREATE SEQUENCE` statement shows the [`CREATE SEQUENCE`](https://www.cockroachlabs.com/docs/v2.0/create-sequence) statement that would create a carbon copy of the specified sequence. [#21208](https://github.com/cockroachdb/cockroach/pull/21208)
-- The [`DROP COLUMN`](https://www.cockroachlabs.com/docs/v2.0/drop-column) statement now drops [`CHECK`](https://www.cockroachlabs.com/docs/v2.0/check) constraints. [#21203](https://github.com/cockroachdb/cockroach/pull/21203)
-- The `pg_sequence_parameters()` built-in function is now supported. [#21069](https://github.com/cockroachdb/cockroach/pull/21069)
-- `ON DELETE CASCADE` foreign key constraints are now fully supported and memory bounded. [#20064](https://github.com/cockroachdb/cockroach/pull/20064) [#20706](https://github.com/cockroachdb/cockroach/pull/20706)
-- For improved troubleshooting, more complete and useful details are now reported to clients when SQL errors are encountered. [#19793](https://github.com/cockroachdb/cockroach/pull/19793)
-- The new `SHOW SYNTAX` statement allows clients to analyze arbitrary SQL syntax server-side and retrieve either the (pretty-printed) syntax decomposition of the string or the details of the syntax error, if any. This statement is intended for use in the CockroachDB interactive SQL shell. [#19793](https://github.com/cockroachdb/cockroach/pull/19793)
-- Enhanced type checking of subqueries in order to generalize subquery support. As a side-effect, fixed a crash with subquery edge cases such as `SELECT (SELECT (1, 2)) IN (SELECT (1, 2))`. [#21076](https://github.com/cockroachdb/cockroach/pull/21076)
-- Single-use common table expressions are now supported (see the example after this list). [#20359](https://github.com/cockroachdb/cockroach/pull/20359)
-- Statement sources with no output columns are now disallowed. [#20998](https://github.com/cockroachdb/cockroach/pull/20998)
-- `WHERE` predicates that simplify to NULL no longer perform table scans. [#21067](https://github.com/cockroachdb/cockroach/pull/21067)
-- The experimental `CREATE ROLE`, `DROP ROLE`, and `SHOW ROLES` statements are now supported. [#21020](https://github.com/cockroachdb/cockroach/pull/21020) [#20980](https://github.com/cockroachdb/cockroach/pull/20980)
-- Improved the output of `EXPLAIN` to show the plan tree structure. [#20697](https://github.com/cockroachdb/cockroach/pull/20697)
-- `OUTER` interleaved joins are now supported. [#20963](https://github.com/cockroachdb/cockroach/pull/20963)
-- Added the `rolreplication` and `rolbypassrls` columns to the `pg_catalog.pg_roles` table. [#20397](https://github.com/cockroachdb/cockroach/pull/20397)
-- [`ARRAY`](https://www.cockroachlabs.com/docs/v2.0/array) values can now be cast to their own type. [#19816](https://github.com/cockroachdb/cockroach/pull/19816)
-- The `||` operator is now supported for `JSONB`. [#20689](https://github.com/cockroachdb/cockroach/pull/20689)
-- The `CASCADE` option is now required to drop an index that is used by a `UNIQUE` constraint. [#20837](https://github.com/cockroachdb/cockroach/pull/20837)
-- The `BOOL` type now matches PostgreSQL's list of accepted formats. [#20833](https://github.com/cockroachdb/cockroach/pull/20833)
-- The `sql_safe_updates` session variable now defaults to `false` unless the shell is truly interactive (using `cockroach sql`, `-e` not specified, standard input not redirected) and `--unsafe-updates` is not specified. Previously, `sql_safe_updates` would always default to `true` unless `--unsafe-updates` was specified. [#20805](https://github.com/cockroachdb/cockroach/pull/20805)
-- The `errexit` client-side option now defaults to `false` only if the shell is truly interactive, not only when the input is not redirected as previously. [#20805](https://github.com/cockroachdb/cockroach/pull/20805)
-- The `display_format` client-side option now defaults to `pretty` in every case where the output goes to a terminal, not only when the input is not redirected as previously. [#20805](https://github.com/cockroachdb/cockroach/pull/20805)
-- The `check_syntax` and `smart_prompt` client-side options, together with the interactive line editor, are only enabled if the session is interactive and output goes to a terminal. [#20805](https://github.com/cockroachdb/cockroach/pull/20805)
-- Table aliases are now permitted in `RETURNING` clauses. [#20808](https://github.com/cockroachdb/cockroach/pull/20808)
-- Added the `SERIAL2`, `SERIAL4`, and `SERIAL8` aliases for the [`SERIAL`](https://www.cockroachlabs.com/docs/v2.0/serial) type. [#20776](https://github.com/cockroachdb/cockroach/pull/20776)
-- NULL values are now supported in `COLLATE` expressions. [#20795](https://github.com/cockroachdb/cockroach/pull/20795)
-- The new `crdb_internal.node_executable_version()` built-in function simplifies rolling upgrades. [#20292](https://github.com/cockroachdb/cockroach/pull/20292)
-- The `json_pretty()`, `json_extract_path()`, `jsonb_extract_path()`, `json_object()`, and `asJSON()` built-in functions are now supported. [#20702](https://github.com/cockroachdb/cockroach/pull/20702) [#20520](https://github.com/cockroachdb/cockroach/pull/20520) [#21015](https://github.com/cockroachdb/cockroach/pull/21015) [#20234](https://github.com/cockroachdb/cockroach/pull/20234)
-- The `DISTINCT ON` clause is now supported for `SELECT` statements. [#20463](https://github.com/cockroachdb/cockroach/pull/20463)
-- For compatibility with PostgreSQL and related tools:
- - Parsing of the `COMMENT ON` syntax is now allowed. [#21063](https://github.com/cockroachdb/cockroach/pull/21063)
- - The following built-in functions are now supported: `pg_catalog.pg_trigger()`, `pg_catalog.pg_rewrite()`, `pg_catalog.pg_operator()`, `pg_catalog.pg_user_mapping()`, `pg_catalog.foreign_data_wrapper()`, `pg_get_constraintdef()`, `inet_client_addr()`, `inet_client_port()`, `inet_server_addr()`, `inet_server_port()`. [#21065](https://github.com/cockroachdb/cockroach/pull/21065) [#20788](https://github.com/cockroachdb/cockroach/pull/20788)
- - Missing columns have been added to `information_schema.columns`, and the `pg_catalog.pg_user()` virtual table has been added. [#20788](https://github.com/cockroachdb/cockroach/pull/20788)
- - A string cast to `regclass` is interpreted as a possibly qualified name like `db.name`. [#20788](https://github.com/cockroachdb/cockroach/pull/20788)
- - Rendered columns for built-in functions are now titled by the name of the built-in function. [#20820](https://github.com/cockroachdb/cockroach/pull/20820)
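-
-A sketch of a single-use common table expression, as mentioned above (each CTE may be referenced once; the `orders` table is hypothetical):
-
-~~~ sql
-WITH recent AS (
-  SELECT * FROM orders WHERE placed_at > now() - INTERVAL '1 day'
-)
-SELECT count(*) FROM recent;
-~~~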
-
-
-### Command-Line Changes
-
-- Client `cockroach` commands that use SQL (`cockroach sql`, `cockroach node ls`, etc.) now print a warning if the server is running an older version of CockroachDB than the client. Also, this and other warning messages are now clearly indicated with the "warning:" prefix. [#20935](https://github.com/cockroachdb/cockroach/pull/20935)
-- Client-side syntax checking performed by [`cockroach sql`](https://www.cockroachlabs.com/docs/v2.0/use-the-built-in-sql-client) when the `check_syntax` option is enabled has been enhanced for forward-compatibility with later CockroachDB versions. [#21119](https://github.com/cockroachdb/cockroach/pull/21119)
-- The `?` [client-side command](https://www.cockroachlabs.com/docs/v2.0/use-the-built-in-sql-client#sql-shell-commands) of `cockroach sql` now prints out a description of each option. [#21119](https://github.com/cockroachdb/cockroach/pull/21119)
-- The `--unsafe-updates` flag of `cockroach sql` was renamed to `--safe-updates`. The default behavior is unchanged: the previous flag defaulted to `false`; the new flag defaults to `true`. [#20935](https://github.com/cockroachdb/cockroach/pull/20935)
-- The `cockroach sql` command no longer fails when the server is running a version of CockroachDB that does not support the `sql_safe_updates` session variable. [#20935](https://github.com/cockroachdb/cockroach/pull/20935)
-
-
-### Admin UI Changes
-
-- Added graphs of node liveness heartbeat latency, an important internal signal of health, to the **Distributed** dashboard. [#21002](https://github.com/cockroachdb/cockroach/pull/21002)
-- **Capacity Used** is now shown as "-" instead of 100% when the UI cannot load the real data from the server. [#20824](https://github.com/cockroachdb/cockroach/pull/20824)
-- Removed a redundant rendering of the GC pause time from the **CPU Time** graph. [#20802](https://github.com/cockroachdb/cockroach/pull/20802)
-- The **Databases** page now reports table sizes that are better approximations to actual disk space usage. [#20627](https://github.com/cockroachdb/cockroach/pull/20627)
-- Added a system table to allow operators to designate geographic coordinates for any locality. This is for use with upcoming cluster visualization functionality in the Admin UI. [#19652](https://github.com/cockroachdb/cockroach/pull/19652)
-
-
-### Bug Fixes
-
-- Fixed the `debug compact` command to compact all sstables. [#21293](https://github.com/cockroachdb/cockroach/pull/21293)
-- Fixed tuple equality to evaluate correctly in the presence of NULL elements. [#21230](https://github.com/cockroachdb/cockroach/pull/21230)
-- Fixed a bug where the temporary directory was being wiped on failed CockroachDB restart, causing importing and DistSQL queries to fail. [#20854](https://github.com/cockroachdb/cockroach/pull/20854)
-- The "JSON" column in the output of `EXPLAIN(DISTSQL)` is now properly hidden by default. It can be shown using `SELECT *, JSON FROM [EXPLAIN(DISTSQL) ...]`. [#21154](https://github.com/cockroachdb/cockroach/pull/21154)
-- `EXPLAIN` queries with placeholders no longer panic. [#21168](https://github.com/cockroachdb/cockroach/pull/21168)
-- The `--safe-updates` flag of `cockroach sql` can now be used effectively in non-interactive sessions. [#20935](https://github.com/cockroachdb/cockroach/pull/20935)
-- Fixed a bug where non-matching interleaved rows were being inner-joined with their parent rows. [#20938](https://github.com/cockroachdb/cockroach/pull/20938)
-- Fixed an issue where seemingly irrelevant error messages were being returned for certain `INSERT` statements. [#20841](https://github.com/cockroachdb/cockroach/pull/20841)
-- Crash details are now properly copied to the log file even when a node was started with `--logtostderr` as well as in other circumstances when crash details could be lost previously. [#20839](https://github.com/cockroachdb/cockroach/pull/20839)
-- It is no longer possible to log in as a non-existent user in insecure mode. [#20800](https://github.com/cockroachdb/cockroach/pull/20800)
-- The `BIGINT` type alias is now correctly shown when using `SHOW CREATE TABLE`. [#20798](https://github.com/cockroachdb/cockroach/pull/20798)
-- Fixed a scenario where a range that is too big to snapshot can lose availability even with a majority of nodes alive. [#20589](https://github.com/cockroachdb/cockroach/pull/20589)
-- Fixed `BETWEEN SYMMETRIC`, which was incorrectly considered an alias for `BETWEEN`. Per the SQL99 specification, `BETWEEN SYMMETRIC` is like `BETWEEN`, except that its arguments are automatically swapped if they would specify an empty range (see the example after this list). [#20747](https://github.com/cockroachdb/cockroach/pull/20747)
-- Fixed a replica corruption that could occur if a process crashed in the middle of a range split. [#20704](https://github.com/cockroachdb/cockroach/pull/20704)
-- Fixed an issue with the formatting of unicode values in string arrays. [#20657](https://github.com/cockroachdb/cockroach/pull/20657)
-- Fixed detection and proper handling of certain variations of network partitions using server-side RPC keepalive in addition to client-side RPC keepalive. [#20707](https://github.com/cockroachdb/cockroach/pull/20707)
-- Prevented RPC connections between nodes with incompatible versions. [#20587](https://github.com/cockroachdb/cockroach/pull/20587)
-- Dangling intents are now eagerly cleaned up when `AmbiguousResultErrors` are seen. [#20628](https://github.com/cockroachdb/cockroach/pull/20628)
-- Fixed the return type signature of the JSON `#>>` operator and `array_positions()` built-in function. [#20524](https://github.com/cockroachdb/cockroach/pull/20524)
-- Fixed an issue where escaped characters like `A` and `\` in `LIKE`/`ILIKE` patterns were not handled properly. [#20600](https://github.com/cockroachdb/cockroach/pull/20600)
-- Fixed an issue with `(NOT) (I)LIKE` pattern matching on `_...%` and `%..._` returning incorrect results. [#20600](https://github.com/cockroachdb/cockroach/pull/20600)
-- Fixed a spelling bug that caused a type specified as `DOUBLE PRECISION` to erroneously display as a float. [#20727](https://github.com/cockroachdb/cockroach/pull/20727)
-- Fixed a crash caused by null collated strings. [#20637](https://github.com/cockroachdb/cockroach/pull/20637)
-
-
Performance Improvements
-
-- Improved the efficiency of scans with joins and certain complex `WHERE` clauses containing tuple equality. [#21288](https://github.com/cockroachdb/cockroach/pull/21288)
-- Improved the efficiency of scans for certain `WHERE` clauses. [#21217](https://github.com/cockroachdb/cockroach/pull/21217)
-- Reduced per-row overhead in DistSQL query execution. [#21251](https://github.com/cockroachdb/cockroach/pull/21251)
-- Added support for distributed execution of [`UNION`](https://www.cockroachlabs.com/docs/v2.0/set-operations#union-combine-two-queries) queries. [#21175](https://github.com/cockroachdb/cockroach/pull/21175)
-- Improved performance for aggregation and distinct operations by arena allocating "bucket" storage. [#21160](https://github.com/cockroachdb/cockroach/pull/21160)
-- Distributed execution of `UNION ALL` queries is now supported. [#20742](https://github.com/cockroachdb/cockroach/pull/20742)
-- Reduced the fixed overhead of commands sent through Raft by 40% by only sending lease sequence numbers instead of sending the entire lease structure. [#20953](https://github.com/cockroachdb/cockroach/pull/20953)
-- When tables are dropped, the space is now reclaimed in a more timely fashion. [#20607](https://github.com/cockroachdb/cockroach/pull/20607)
-- Increased the speed of `EXCEPT` and merge joins by avoiding an unnecessary allocation. [#20759](https://github.com/cockroachdb/cockroach/pull/20759)
-- Improved rebalancing to make thrashing back and forth between nodes much less likely, including when localities have very different numbers of nodes. [#20709](https://github.com/cockroachdb/cockroach/pull/20709)
-- Improved performance of `DISTINCT` queries by avoiding an unnecessary allocation. [#20755](https://github.com/cockroachdb/cockroach/pull/20755) [#20750](https://github.com/cockroachdb/cockroach/pull/20750)
-- Significantly improved the efficiency of `DROP TABLE` and `TRUNCATE`. [#20601](https://github.com/cockroachdb/cockroach/pull/20601)
-- Improved performance of low-level row manipulation routines. [#20688](https://github.com/cockroachdb/cockroach/pull/20688)
-- Raft followers now write to their disks in parallel with the leader. [#19229](https://github.com/cockroachdb/cockroach/pull/19229)
-- Significantly reduced the overhead of SQL memory accounting. [#20590](https://github.com/cockroachdb/cockroach/pull/20590)
-- Equality joins on the entire interleave prefix between parent and (not necessarily direct) child interleaved tables are now faster. [#19853](https://github.com/cockroachdb/cockroach/pull/19853)
-
-
Doc Updates
-
-- Added a tutorial on using our Kubernetes-orchestrated AWS CloudFormation template for easy deployment and testing of CockroachDB. [#2356](https://github.com/cockroachdb/docs/pull/2356)
-- Added docs on the [`TIME`](https://www.cockroachlabs.com/docs/v2.0/time) data type. [#2336](https://github.com/cockroachdb/docs/pull/2336)
-- Added guidance on [reducing or disabling the storage of timeseries data](https://www.cockroachlabs.com/docs/v2.0/operational-faqs#can-i-reduce-or-disable-the-storage-of-timeseries-data-new-in-v2-0). [#2361](https://github.com/cockroachdb/docs/pull/2361)
-- Added docs on the [`CREATE SEQUENCE`](https://www.cockroachlabs.com/docs/v2.0/create-sequence), [`ALTER SEQUENCE`](https://www.cockroachlabs.com/docs/v2.0/alter-sequence), and [`DROP SEQUENCE`](https://www.cockroachlabs.com/docs/v2.0/drop-sequence) statements. [#2292](https://github.com/cockroachdb/docs/pull/2292)
-- Improved the font and coloring of code samples. [#2323](https://github.com/cockroachdb/docs/pull/2323)
-{% endcomment %}
diff --git a/src/current/_includes/releases/v2.0/v2.0-alpha.20180122.md b/src/current/_includes/releases/v2.0/v2.0-alpha.20180122.md
deleted file mode 100644
index c67e61f4c2e..00000000000
--- a/src/current/_includes/releases/v2.0/v2.0-alpha.20180122.md
+++ /dev/null
@@ -1,290 +0,0 @@
-
{{ include.release }}
-
-Release Date: {{ include.release_date | date: "%B %-d, %Y" }}
-
-{{site.data.alerts.callout_danger}}A bug that could trigger range splits in a tight loop was discovered in this release, so this release has been withdrawn.{{site.data.alerts.end}}
-
-
Backwards-Incompatible Changes
-
-- Removed the obsolete `kv.gc.batch_size` [cluster setting](https://www.cockroachlabs.com/docs/v2.0/cluster-settings). [#21070](https://github.com/cockroachdb/cockroach/pull/21070)
-- Removed the `COCKROACH_METRICS_SAMPLE_INTERVAL` environment variable. Users that relied on it should reduce the value for the `timeseries.resolution_10s.storage_duration` [cluster setting](https://www.cockroachlabs.com/docs/v2.0/cluster-settings) instead. [#20810](https://github.com/cockroachdb/cockroach/pull/20810)
-
-
General Changes
-
-- CockroachDB now proactively rebalances data when the diversity of the localities that a given range is located on can be improved. [#19489](https://github.com/cockroachdb/cockroach/pull/19489)
-- Clusters are now initialized with default `.meta` and `.liveness` replication zones with lower GC TTL configurations. [#17628](https://github.com/cockroachdb/cockroach/pull/17628)
-- CockroachDB now uses gRPC version 1.9.1. [#21398]
-
-
Build Changes
-
-- The build system now enforces a minimum version of Go, rather than enforcing a specific version of Go. Since the Go 1.x series has strict backward-compatibility guarantees, the old rule was unnecessarily restrictive. [#21426]
-
-
SQL Language Changes
-
-- The new `SHOW CREATE SEQUENCE` statement shows the [`CREATE SEQUENCE`](https://www.cockroachlabs.com/docs/v2.0/create-sequence) statement that would create a carbon copy of the specified sequence. [#21208](https://github.com/cockroachdb/cockroach/pull/21208)
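-
-    A minimal sketch (hypothetical sequence name):
-
-    ~~~ sql
-    CREATE SEQUENCE my_seq START 1 INCREMENT 2;
-    SHOW CREATE SEQUENCE my_seq;
-    ~~~
-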
-- The [`DROP COLUMN`](https://www.cockroachlabs.com/docs/v2.0/drop-column) statement now drops [`CHECK`](https://www.cockroachlabs.com/docs/v2.0/check) constraints. [#21203](https://github.com/cockroachdb/cockroach/pull/21203)
-- The `pg_sequence_parameters()` built-in function is now supported. [#21069](https://github.com/cockroachdb/cockroach/pull/21069)
-- `ON DELETE CASCADE` foreign key constraints are now fully supported and memory bounded. [#20064](https://github.com/cockroachdb/cockroach/pull/20064) [#20706](https://github.com/cockroachdb/cockroach/pull/20706)
-- `ON UPDATE CASCADE` foreign key constraints are now fully supported. [#21329]
-- For improved troubleshooting, more complete and useful details are now reported to clients when SQL errors are encountered. [#19793](https://github.com/cockroachdb/cockroach/pull/19793)
-- The new `SHOW SYNTAX` statement allows clients to analyze arbitrary SQL syntax server-side and retrieve either the (pretty-printed) syntax decomposition of the string or the details of the syntax error, if any. This statement is intended for use in the CockroachDB interactive SQL shell. [#19793](https://github.com/cockroachdb/cockroach/pull/19793)
-- Enhanced type checking of subqueries in order to generalize subquery support. As a side-effect, fixed a crash with subquery edge cases such as `SELECT (SELECT (1, 2)) IN (SELECT (1, 2))`. [#21076](https://github.com/cockroachdb/cockroach/pull/21076)
-- Single-use common table expressions are now supported. [#20359](https://github.com/cockroachdb/cockroach/pull/20359)
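-
-    For example, a common table expression referenced exactly once (hypothetical table and column names):
-
-    ~~~ sql
-    WITH recent_orders AS (
-        SELECT * FROM orders WHERE placed_at > now() - INTERVAL '1 day'
-    )
-    SELECT count(*) FROM recent_orders;
-    ~~~
-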
-- Statement sources with no output columns are now disallowed. [#20998](https://github.com/cockroachdb/cockroach/pull/20998)
-- `WHERE` predicates that simplify to `NULL` no longer perform table scans. [#21067](https://github.com/cockroachdb/cockroach/pull/21067)
-- The experimental `CREATE ROLE`, `DROP ROLE`, `SHOW ROLES`, `GRANT`, and `REVOKE` statements are now supported as part of [role-based access control](https://github.com/cockroachdb/cockroach/blob/master/docs/RFCS/20171220_sql_role_based_access_control.md). [#21020](https://github.com/cockroachdb/cockroach/pull/21020) [#20980](https://github.com/cockroachdb/cockroach/pull/20980) [#21341]
-- Improved the output of `EXPLAIN` to show the plan tree structure. [#20697](https://github.com/cockroachdb/cockroach/pull/20697)
-- `OUTER` interleaved joins are now supported. [#20963](https://github.com/cockroachdb/cockroach/pull/20963)
-- Added the `rolreplication` and `rolbypassrls` columns to the `pg_catalog.pg_roles` table. [#20397](https://github.com/cockroachdb/cockroach/pull/20397)
-- [`ARRAY`](https://www.cockroachlabs.com/docs/v2.0/array) values can now be cast to their own type. [#19816](https://github.com/cockroachdb/cockroach/pull/19816)
-- The `||` operator is now supported for `JSONB`. [#20689](https://github.com/cockroachdb/cockroach/pull/20689)
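-
-    For instance:
-
-    ~~~ sql
-    SELECT '{"a": 1}'::JSONB || '{"b": 2}'::JSONB;  -- {"a": 1, "b": 2}
-    ~~~
-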
-- The `CASCADE` option is now required to drop an index that is used by a `UNIQUE` constraint. [#20837](https://github.com/cockroachdb/cockroach/pull/20837)
-- The `BOOL` type now matches PostgreSQL's list of accepted formats. [#20833](https://github.com/cockroachdb/cockroach/pull/20833)
-- The `sql_safe_updates` session variable now defaults to `false` unless the shell is truly interactive (using `cockroach sql`, `-e` not specified, standard input not redirected) and `--unsafe-updates` is not specified. Previously, `sql_safe_updates` would always default to `true` unless `--unsafe-updates` was specified. [#20805](https://github.com/cockroachdb/cockroach/pull/20805)
-- The `errexit` client-side option now defaults to `false` only if the shell is truly interactive, not only when the input is not redirected as previously. [#20805](https://github.com/cockroachdb/cockroach/pull/20805)
-- The `display_format` client-side option now defaults to `pretty` in every case where the output goes to a terminal, not only when the input is not redirected as previously. [#20805](https://github.com/cockroachdb/cockroach/pull/20805)
-- The `check_syntax` and `smart_prompt` client-side options, together with the interactive line editor, are only enabled if the session is interactive and output goes to a terminal. [#20805](https://github.com/cockroachdb/cockroach/pull/20805)
-- Table aliases are now permitted in `RETURNING` clauses. [#20808](https://github.com/cockroachdb/cockroach/pull/20808)
-- Added the `SERIAL2`, `SERIAL4`, and `SERIAL8` aliases for the [`SERIAL`](https://www.cockroachlabs.com/docs/v2.0/serial) type. [#20776](https://github.com/cockroachdb/cockroach/pull/20776)
-- `NULL` values are now supported in `COLLATE` expressions. [#20795](https://github.com/cockroachdb/cockroach/pull/20795)
-- The new `crdb_internal.node_executable_version()` built-in function simplifies rolling upgrades. [#20292](https://github.com/cockroachdb/cockroach/pull/20292)
-- The `json_pretty()`, `json_extract_path()`, `jsonb_extract_path()`, `json_object()`, `asJSON()`, `jsonb_set()`, `json_build_object()`, `json_strip_nulls()`, and `json_each{_text}()` built-in functions are now supported. [#20702](https://github.com/cockroachdb/cockroach/pull/20702) [#20520](https://github.com/cockroachdb/cockroach/pull/20520) [#21015](https://github.com/cockroachdb/cockroach/pull/21015) [#20234](https://github.com/cockroachdb/cockroach/pull/20234) [#21010] [#21019] [#21335] [#21044]
-- The `DISTINCT ON` clause is now supported for `SELECT` statements. [#20463](https://github.com/cockroachdb/cockroach/pull/20463)
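-
-    For example, to keep one row per customer, choosing the most recent order (hypothetical table):
-
-    ~~~ sql
-    SELECT DISTINCT ON (customer_id) customer_id, placed_at
-    FROM orders
-    ORDER BY customer_id, placed_at DESC;
-    ~~~
-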
-- The output of [`EXPLAIN`](https://www.cockroachlabs.com/docs/v2.0/explain) is now enhanced when applied to statements containing scalar sub-queries. [#21305]
-- The `array_agg()` built-in function is now supported in distributed queries. [#21475]
-- Removed the limit on transaction keys. There was formerly a limit of 100,000 keys. [#21078]
-- Improved the error message when a column reference is ambiguous. [#21361]
-- The new `SHOW EXPERIMENTAL_REPLICA TRACE` statement executes a query and returns which nodes served reads and writes. [#21349]
-- Multiplication between [`INTERVAL`](https://www.cockroachlabs.com/docs/v2.0/interval) and [`FLOAT`](https://www.cockroachlabs.com/docs/v2.0/float) values, and between `INTERVAL` and [`DECIMAL`](https://www.cockroachlabs.com/docs/v2.0/decimal) values, is now fully supported. [#21292]
-- Reduced the size of entries stored in the `system.rangelog` table by not storing empty JSON fields. [#21318]
-- The `ALTER TABLE SCATTER` statement now randomizes the locations of the leases for all ranges in the referenced table or index. [#21431]
-- When using HTTPS storage for [`IMPORT`](https://www.cockroachlabs.com/docs/v2.0/import), a custom CA for HTTPS storage providers can now be specified via the `cloudstorage.http.custom_ca` cluster setting. This is also used when accessing custom S3 export storage endpoints. [#21358] [#21404]
-- Storage of timeseries data within the cluster can be disabled by setting the new `timeseries.storage.enabled` cluster setting to false. This is recommended only if you exclusively use a third-party tool such as Prometheus for timeseries monitoring. See this [FAQ](https://www.cockroachlabs.com/docs/v2.0/operational-faqs#can-i-reduce-or-disable-the-storage-of-timeseries-data-new-in-v2-0) for more details. [#21314]
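-
-    For example:
-
-    ~~~ sql
-    SET CLUSTER SETTING timeseries.storage.enabled = false;
-    ~~~
-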
-- For compatibility with PostgreSQL and related tools:
- - Parsing of the `COMMENT ON` syntax is now allowed. [#21063](https://github.com/cockroachdb/cockroach/pull/21063)
- - The following built-in functions are now supported: `pg_catalog.pg_trigger()`, `pg_catalog.pg_rewrite()`, `pg_catalog.pg_operator()`, `pg_catalog.pg_user_mapping()`, `pg_catalog.foreign_data_wrapper()`, `pg_get_constraintdef()`, `inet_client_addr()`, `inet_client_port()`, `inet_server_addr()`, `inet_server_port()`. [#21065](https://github.com/cockroachdb/cockroach/pull/21065) [#20788](https://github.com/cockroachdb/cockroach/pull/20788)
- - Missing columns have been added to `information_schema.columns`, and the `pg_catalog.pg_user()` virtual table has been added. [#20788](https://github.com/cockroachdb/cockroach/pull/20788)
- - A string cast to `regclass` is interpreted as a possibly qualified name like `db.name`. [#20788](https://github.com/cockroachdb/cockroach/pull/20788)
- - Rendered columns for built-in functions are now titled by the name of the built-in function. [#20820](https://github.com/cockroachdb/cockroach/pull/20820)
-
-
Command-Line Changes
-
-- Client `cockroach` commands that use SQL (`cockroach sql`, `cockroach node ls`, etc.) now print a warning if the server is running an older version of CockroachDB than the client. Also, this and other warning messages are now clearly indicated with the "warning:" prefix. [#20935](https://github.com/cockroachdb/cockroach/pull/20935)
-- Client-side syntax checking performed by [`cockroach sql`](https://www.cockroachlabs.com/docs/v2.0/use-the-built-in-sql-client) when the `check_syntax` option is enabled has been enhanced for forward-compatibility with later CockroachDB versions. [#21119](https://github.com/cockroachdb/cockroach/pull/21119)
-- The `?` [client-side command](https://www.cockroachlabs.com/docs/v2.0/use-the-built-in-sql-client#sql-shell-commands) of `cockroach sql` now prints out a description of each option. [#21119](https://github.com/cockroachdb/cockroach/pull/21119)
-- The `--unsafe-updates` flag of `cockroach sql` was renamed to `--safe-updates`. The default behavior is unchanged: the previous flag defaulted to `false`; the new flag defaults to `true`. [#20935](https://github.com/cockroachdb/cockroach/pull/20935)
-- The `cockroach sql` command no longer fails when the server is running a version of CockroachDB that does not support the `sql_safe_updates` session variable. [#20935](https://github.com/cockroachdb/cockroach/pull/20935)
-
-
Admin UI Changes
-
-- Added graphs of node liveness heartbeat latency, an important internal signal of health, to the **Distributed** dashboard. [#21002](https://github.com/cockroachdb/cockroach/pull/21002)
-- **Capacity Used** is now shown as "-" instead of 100% when the UI cannot load the real data from the server. [#20824](https://github.com/cockroachdb/cockroach/pull/20824)
-- Removed a redundant rendering of the GC pause time from the **CPU Time** graph. [#20802](https://github.com/cockroachdb/cockroach/pull/20802)
-- The **Databases** page now reports table sizes that are better approximations to actual disk space usage. [#20627](https://github.com/cockroachdb/cockroach/pull/20627)
-- Added a system table to allow operators to designate geographic coordinates for any locality. This is for use with upcoming cluster visualization functionality in the Admin UI. [#19652](https://github.com/cockroachdb/cockroach/pull/19652)
-- When a new version of CockroachDB is available, the Admin UI now links you to the documentation explaining how to upgrade your cluster. [#19718]
-- Job descriptions are now expandable. [#21333]
-
-
Bug Fixes
-
-- Fixed the `debug compact` command to compact all sstables. [#21293](https://github.com/cockroachdb/cockroach/pull/21293)
-- Fixed tuple equality to evaluate correctly in the presence of `NULL` elements. [#21230](https://github.com/cockroachdb/cockroach/pull/21230)
-- Fixed a bug where the temporary directory was being wiped on failed CockroachDB restart, causing importing and DistSQL queries to fail. [#20854](https://github.com/cockroachdb/cockroach/pull/20854)
-- The "JSON" column in the output of `EXPLAIN(DISTSQL)` is now properly hidden by default. It can be shown using `SELECT *, JSON FROM [EXPLAIN(DISTSQL) ...]`. [#21154](https://github.com/cockroachdb/cockroach/pull/21154)
-- `EXPLAIN` queries with placeholders no longer panic. [#21168](https://github.com/cockroachdb/cockroach/pull/21168)
-- The `--safe-updates` flag of `cockroach sql` can now be used effectively in non-interactive sessions. [#20935](https://github.com/cockroachdb/cockroach/pull/20935)
-- Fixed a bug where non-matching interleaved rows were being inner-joined with their parent rows. [#20938](https://github.com/cockroachdb/cockroach/pull/20938)
-- Fixed an issue where seemingly irrelevant error messages were being returned for certain `INSERT` statements. [#20841](https://github.com/cockroachdb/cockroach/pull/20841)
-- Crash details are now properly copied to the log file even when a node was started with `--logtostderr`, as well as in other circumstances where they could previously be lost. [#20839](https://github.com/cockroachdb/cockroach/pull/20839)
-- It is no longer possible to log in as a non-existent user in insecure mode. [#20800](https://github.com/cockroachdb/cockroach/pull/20800)
-- The `BIGINT` type alias is now correctly shown when using `SHOW CREATE TABLE`. [#20798](https://github.com/cockroachdb/cockroach/pull/20798)
-- Fixed a scenario where a range that is too big to snapshot can lose availability even with a majority of nodes alive. [#20589](https://github.com/cockroachdb/cockroach/pull/20589)
-- Fixed `BETWEEN SYMMETRIC`, which was incorrectly considered an alias for `BETWEEN`. Per the SQL99 specification, `BETWEEN SYMMETRIC` is like `BETWEEN`, except that its arguments are automatically swapped if they would specify an empty range. [#20747](https://github.com/cockroachdb/cockroach/pull/20747)
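-
-    As a quick illustration of the corrected semantics:
-
-    ~~~ sql
-    SELECT 2 BETWEEN 3 AND 1;            -- false: the range (3, 1) is empty
-    SELECT 2 BETWEEN SYMMETRIC 3 AND 1;  -- true: the bounds are swapped first
-    ~~~
-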
-- Fixed a replica corruption that could occur if a process crashed in the middle of a range split. [#20704](https://github.com/cockroachdb/cockroach/pull/20704)
-- Fixed an issue with the formatting of unicode values in string arrays. [#20657](https://github.com/cockroachdb/cockroach/pull/20657)
-- Fixed detection and proper handling of certain variations of network partitions using server-side RPC keepalive in addition to client-side RPC keepalive. [#20707](https://github.com/cockroachdb/cockroach/pull/20707)
-- Prevented RPC connections between nodes with incompatible versions. [#20587](https://github.com/cockroachdb/cockroach/pull/20587)
-- Dangling intents are now eagerly cleaned up when `AmbiguousResultErrors` are seen. [#20628](https://github.com/cockroachdb/cockroach/pull/20628)
-- Fixed the return type signature of the JSON `#>>` operator and `array_positions()` built-in function. [#20524](https://github.com/cockroachdb/cockroach/pull/20524)
-- Fixed an issue where escaped characters like `A` and `\` in `LIKE`/`ILIKE` patterns were not handled properly. [#20600](https://github.com/cockroachdb/cockroach/pull/20600)
-- Fixed an issue with `(NOT) (I)LIKE` pattern matching on `_...%` and `%..._` returning incorrect results. [#20600](https://github.com/cockroachdb/cockroach/pull/20600)
-- Fixed a spelling bug that caused a type specified as `DOUBLE PRECISION` to erroneously display as a float. [#20727](https://github.com/cockroachdb/cockroach/pull/20727)
-- Fixed a crash caused by null collated strings. [#20637](https://github.com/cockroachdb/cockroach/pull/20637)
-- Fixed a problem that could cause spurious garbage collection activity, in particular after dropping a table. [#21407]
-- Fixed incorrect logic in lease rebalancing that prevented leases from being transferred. [#21430]
-- `INSERT`/`UPDATE`/`DELETE ... RETURNING` statements used with the "statement data source" syntax (e.g., `SELECT * FROM [INSERT INTO ... RETURNING ...]`) no longer prematurely commit the transaction, which caused errors for the higher-level query. [#20847]
-- Fixed a crash caused by a column being backfilled in an index constraint. [#21308]
-- Fixed a bug in which ranges could get stuck if the uncommitted Raft log grew too large. [#21356]
-- The `setval()` built-in function no longer lets you set a value outside the `MAXVALUE`/`MINVALUE` of a [`SEQUENCE`](https://www.cockroachlabs.com/docs/v2.0/create-sequence). [#20973]
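-
-    A sketch of the new behavior (hypothetical sequence name):
-
-    ~~~ sql
-    CREATE SEQUENCE seq MAXVALUE 100;
-    SELECT setval('seq', 1000);  -- now an error: 1000 exceeds MAXVALUE 100
-    ~~~
-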
-
-
Performance Improvements
-
-- Improved the efficiency of scans with joins and certain complex `WHERE` clauses containing tuple equality. [#21288](https://github.com/cockroachdb/cockroach/pull/21288)
-- Improved the efficiency of scans for certain `WHERE` clauses. [#21217](https://github.com/cockroachdb/cockroach/pull/21217)
-- Reduced per-row overhead in DistSQL query execution. [#21251](https://github.com/cockroachdb/cockroach/pull/21251)
-- Added support for distributed execution of [`UNION`](https://www.cockroachlabs.com/docs/v2.0/selection-queries#union-combine-two-queries) queries. [#21175](https://github.com/cockroachdb/cockroach/pull/21175)
-- Improved performance for aggregation and distinct operations by arena allocating "bucket" storage. [#21160](https://github.com/cockroachdb/cockroach/pull/21160)
-- Distributed execution of `UNION ALL` queries is now supported. [#20742](https://github.com/cockroachdb/cockroach/pull/20742)
-- Reduced the fixed overhead of commands sent through Raft by 40% by only sending lease sequence numbers instead of sending the entire lease structure. [#20953](https://github.com/cockroachdb/cockroach/pull/20953)
-- When tables are dropped, the space is now reclaimed in a more timely fashion. [#20607](https://github.com/cockroachdb/cockroach/pull/20607)
-- Increased the speed of `EXCEPT` and merge joins by avoiding an unnecessary allocation. [#20759](https://github.com/cockroachdb/cockroach/pull/20759)
-- Improved rebalancing to make thrashing back and forth between nodes much less likely, including when localities have very different numbers of nodes. [#20709](https://github.com/cockroachdb/cockroach/pull/20709)
-- Improved performance of `DISTINCT` queries by avoiding an unnecessary allocation. [#20755](https://github.com/cockroachdb/cockroach/pull/20755) [#20750](https://github.com/cockroachdb/cockroach/pull/20750)
-- Significantly improved the efficiency of `DROP TABLE` and `TRUNCATE`. [#20601](https://github.com/cockroachdb/cockroach/pull/20601)
-- Improved performance of low-level row manipulation routines. [#20688](https://github.com/cockroachdb/cockroach/pull/20688)
-- Raft followers now write to their disks in parallel with the leader. [#19229](https://github.com/cockroachdb/cockroach/pull/19229)
-- Significantly reduced the overhead of SQL memory accounting. [#20590](https://github.com/cockroachdb/cockroach/pull/20590)
-- Equality joins on the entire interleave prefix between parent and (not necessarily direct) child interleaved tables are now faster. [#19853](https://github.com/cockroachdb/cockroach/pull/19853)
-- Improved the performance of scans that need to look at non-indexed columns. [#21459]
-- Improved the performance of all scans that encounter a large number of versions per key, and improved low-level reverse scan throughput by almost 4x when there are only a few versions per key. [#21438]
-- Queries on tables with many columns are now more efficient. [#21450]
-- Queries that only need to read part of a table with many columns are now more efficient. [#21450]
-- Improved low-level scan performance by 15-20% by disabling redundant checksums. [#21395]
-- Re-implemented low-level scan operations in C++, doubling performance for scans of contiguous keys/rows. [#21395]
-- Reduced the occurrence of ambiguous errors when a node is down. [#21376]
-- Sped up DistSQL query execution by "fusing" processors executing on the same node. [#21254]
-
-
Enterprise Edition Changes
-
-- [Incremental backups](https://www.cockroachlabs.com/docs/v2.0/backup#incremental-backups) of a database after a table or index has been added are now supported. [#21170]
-- When using HTTPS storage for [`BACKUP`](https://www.cockroachlabs.com/docs/v2.0/backup) or [`RESTORE`](https://www.cockroachlabs.com/docs/v2.0/restore), a custom CA for HTTPS storage providers can now be specified via the `cloudstorage.http.custom_ca` cluster setting. This is also used when accessing custom S3 export storage endpoints. [#21358] [#21404]
-- Bulk writes are now synced to disk periodically to ensure more predictable performance. [#20449]
-
-
Doc Updates
-
-- Added [best practices for optimizing SQL performance](https://www.cockroachlabs.com/docs/v2.0/performance-best-practices-overview) in CockroachDB. [#2243](https://github.com/cockroachdb/docs/pull/2243)
-- Added more detailed [clock synchronization guidance per cloud provider](https://www.cockroachlabs.com/docs/v2.0/recommended-production-settings#clock-synchronization). [#2295](https://github.com/cockroachdb/docs/pull/2295)
-- Added a tutorial on using our Kubernetes-orchestrated AWS CloudFormation template for easy deployment and testing of CockroachDB. [#2356](https://github.com/cockroachdb/docs/pull/2356)
-- Added docs on the [`TIME`](https://www.cockroachlabs.com/docs/v2.0/time) data type. [#2336](https://github.com/cockroachdb/docs/pull/2336)
-- Added guidance on [reducing or disabling the storage of timeseries data](https://www.cockroachlabs.com/docs/v2.0/operational-faqs#can-i-reduce-or-disable-the-storage-of-timeseries-data-new-in-v2-0). [#2361](https://github.com/cockroachdb/docs/pull/2361)
-- Added docs on the [`CREATE SEQUENCE`](https://www.cockroachlabs.com/docs/v2.0/create-sequence), [`ALTER SEQUENCE`](https://www.cockroachlabs.com/docs/v2.0/alter-sequence), and [`DROP SEQUENCE`](https://www.cockroachlabs.com/docs/v2.0/drop-sequence) statements. [#2292](https://github.com/cockroachdb/docs/pull/2292)
-- Various improvements to the docs on the [`IMPORT`](https://www.cockroachlabs.com/docs/v2.0/import), [`BACKUP`](https://www.cockroachlabs.com/docs/v2.0/backup), and [`RESTORE`](https://www.cockroachlabs.com/docs/v2.0/restore) statements. [#2340](https://github.com/cockroachdb/docs/pull/2340)
-- Improved the styling of code samples and page tocs. [#2323](https://github.com/cockroachdb/docs/pull/2323) [#2371](https://github.com/cockroachdb/docs/pull/2371)
-
-
-
-
Contributors
-
-This release includes 111 merged PRs by 33 authors. We would like to thank the following contributors from the CockroachDB community:
-
-- Yang Yuting
-- Dmitry Saveliev
-- Jincheng Li
-- Mohamed Elqdusy
-- 何羿宏
-- louishust
-
-
-
-- CockroachDB now uses gRPC version 1.9.2. [#21600][#21600]
-
-
Enterprise Changes
-
-- Failed [`RESTORE`](https://www.cockroachlabs.com/docs/v2.0/restore) cleanup no longer causes the [`RESTORE`](https://www.cockroachlabs.com/docs/v2.0/restore) job to perpetually loop if external storage fails or is removed. [#21559][#21559]
-- Non-transactional [`BACKUP`](https://www.cockroachlabs.com/docs/v2.0/backup) and [`RESTORE`](https://www.cockroachlabs.com/docs/v2.0/restore) statements are now disallowed inside transactions. [#21488][#21488]
-
-
SQL Language Changes
-
-- Reduced the size of `system.rangelog` entries to save disk space. [#21410][#21410]
-- Prevented adding both a cascading referential constraint action and a check constraint to a column. [#21690][#21690]
-- Added `json_array_length` function that returns the number of elements in the outermost `JSON` or `JSONB` array. [#21611][#21611]
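-
-    For example, only the outermost array is counted:
-
-    ~~~ sql
-    SELECT json_array_length('[1, 2, [3, 4]]');  -- 3
-    ~~~
-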
-- Added `referential_constraints` table to the [`information_schema`](https://www.cockroachlabs.com/docs/v2.0/information-schema). The `referential_constraints` table contains all foreign key constraints in the current database. [#21615][#21615]
-- Replaced `BOOL` columns in the [`information_schema`](https://www.cockroachlabs.com/docs/v2.0/information-schema) with `STRING` columns to conform to the SQL specification. [#21612][#21612]
-- Added support for inverted indexes for `JSON`. [#20941][#20941]
-- [`DROP SEQUENCE`](https://www.cockroachlabs.com/docs/v2.0/drop-sequence) cannot drop a `SEQUENCE` that is currently in use. [#21364][#21364]
-- The [`IMPORT`](https://www.cockroachlabs.com/docs/v2.0/import) statement no longer requires the `temp` directory and no longer creates a `RESTORE` job. Additionally, the `TRANSFORM_ONLY` option for [`IMPORT`](https://www.cockroachlabs.com/docs/v2.0/import) has been renamed to `TRANSFORM` and now takes an argument specifying the target directory. [#21490][#21490]
-
-
Command-Line Changes
-
-- `cockroach start --background` now returns earlier for nodes awaiting the `cockroach init` command, facilitating use of automated scripts. [#21682][#21682]
-- Command-line utilities that print results with the `pretty` format now use consistent horizontal alignment for every row of the result. [#18491][#18491]
-
-
Admin UI Changes
-
-- The **Command Queue** debug page now displays errors correctly. [#21529][#21529]
-- The **Problem Ranges** debug page now displays all problem ranges for the cluster. [#21522][#21522]
-
-
Bug Fixes
-
-- Fixed a crash caused by a `NULL` placeholder in comparison expressions. [#21705][#21705]
-- The [`EXPLAIN (VERBOSE)`](https://www.cockroachlabs.com/docs/v2.0/explain#verbose-option) output now correctly shows the columns and properties of each query node (instead of incorrectly showing the columns and properties of the `root`). [#21527][#21527]
-- Added a mechanism to recompute range stats automatically over time to reflect changes in the underlying logic. [#21345][#21345]
-
-
Performance Improvements
-
-- Multiple ranges can now split at the same time, improving our ability to handle hotspot workloads. [#21673][#21673]
-- Improved performance for queries that do not read any columns from the key component of the row. [#21571][#21571]
-- Improved performance of scans by reducing efforts for non-required columns. [#21572][#21572]
-- Improved efficiency of the key decoding operation. [#21498][#21498]
-- Sped up the performance of low-level delete operations. [#21507][#21507]
-- Prevented the jobs table from growing excessively large during jobs table updates. [#21575][#21575]
-
-
-
-
Contributors
-
-This release includes 110 merged PRs by 31 authors. We would like to thank the following contributors from the CockroachDB community:
-
-- Constantine Peresypkin
-- 何羿宏
-
-Special thanks to first-time contributors Andrew Kimball, Nathaniel Stewart, Constantine Peresypkin and Paul Bardea.
-
-
-
-Release Date: {{ include.release_date | date: "%B %-d, %Y" }}
-
-{{site.data.alerts.callout_danger}}A bug that could cause transactional anomalies was introduced in this release, so this release has been withdrawn.{{site.data.alerts.end}}
-
-
Enterprise Edition Changes
-
-- [Sequences](https://www.cockroachlabs.com/docs/v2.0/create-sequence) are now supported in enterprise [`BACKUP`](https://www.cockroachlabs.com/docs/v2.0/backup)/[`RESTORE`](https://www.cockroachlabs.com/docs/v2.0/restore) jobs.
-
- This changes how sequences are stored in the key-value storage layer, so existing sequences must be dropped and recreated. Since a sequence cannot be dropped while it is being used in a column's `DEFAULT` expression, those expressions must be dropped before the sequence is dropped, and recreated after the sequence is recreated. The `setval` function can be used to set the value of a sequence to what it was previously. [#21684][#21684]
-
-
SQL Language Changes
-
-- Casts between [array types](https://www.cockroachlabs.com/docs/v2.0/array) are now allowed when a cast between the parameter types is allowed. [#22338][#22338]
-- Scalar functions can now be used in `FROM` clauses. [#22314][#22314]
-- Added privilege checks on [sequences](https://www.cockroachlabs.com/docs/v2.0/create-sequence). [#22284][#22284]
-- The `ON DELETE SET DEFAULT`, `ON UPDATE SET DEFAULT`, `ON DELETE SET NULL`, and `ON UPDATE SET NULL` foreign key constraint actions are now fully supported. [#22220][#22220] [#21767][#21767] [#21716][#21716]
-- JSON inverted indexes can now be specified in a [`CREATE TABLE`](https://www.cockroachlabs.com/docs/v2.0/create-table) statement. [#22217][#22217]
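-
-    A minimal sketch (hypothetical table and index names):
-
-    ~~~ sql
-    CREATE TABLE docs (
-        id INT PRIMARY KEY,
-        payload JSONB,
-        INVERTED INDEX idx_payload (payload)
-    );
-    ~~~
-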
-- When a node is gracefully shut down, planning queries are avoided and distributed queries are allowed the amount of time specified by the new `server.drain_max_wait` [cluster setting](https://www.cockroachlabs.com/docs/v2.0/cluster-settings) before the node is drained and stopped. [#20450][#20450]
-- [Collated strings](https://www.cockroachlabs.com/docs/v2.0/collate) are now supported in [`IMPORT`](https://www.cockroachlabs.com/docs/v2.0/import) jobs. [#21859][#21859]
-- The new `SHOW GRANTS ON ROLE` statement and the `pg_catalog.pg_auth_members` table list role memberships. [#22205][#22205] [#21780][#21780]
-- Role memberships are now considered in permission checks. [#21820][#21820]
-
-
Command-Line Changes
-
-- [Debug commands](https://www.cockroachlabs.com/docs/v2.0/debug-zip) now open RocksDB in read-only mode. This makes them faster and able to run in parallel. [#21778][#21778]
-- The [`cockroach dump`](https://www.cockroachlabs.com/docs/v2.0/sql-dump) command now outputs `CREATE SEQUENCE` statements before the `CREATE TABLE` statements that use them. [#21774][#21774]
-- For better compatibility with `psql`'s extended format, the table formatter `records` now properly indicates line continuations in multi-line rows. [#22325][#22325]
-- The [`cockroach sql`](https://www.cockroachlabs.com/docs/v2.0/use-the-built-in-sql-client) client-side option `show_times` is now always enabled when output goes to a terminal, not just when `display_format` is set to `pretty`. [#22326][#22326]
-- When formatting `cockroach sql` results with `--format=sql`, the row count is now printed in a SQL comment at the end. [#22327][#22327]
-- When formatting `cockroach sql` results with `--format=csv` or `--format=tsv`, result rows that contain special characters are now quoted properly. [#19306][#19306]
-
-
Admin UI Changes
-
-- Added an icon to indicate when descriptions in the **Jobs** table are shortened and expandable. [#22221][#22221]
-- Added "compaction queue" graphs to the **Queues** dashboard. [#22218][#22218]
-- Added Raft snapshot queue metrics to the **Queue** dashboard. [#22210][#22210]
-- When there are dead nodes, they are now shown before live nodes on the **Nodes List** page. [#22222][#22222]
-- Links to documentation in the Admin UI now point to the docs for v2.0. [#21894][#21894]
-
-
Bug Fixes
-
-- Errors from DDL statements sent by a client as part of a transaction, but in a different query string than the final commit, are no longer silently swallowed. [#21829][#21829]
-- Fixed a bug in cascading foreign key actions. [#21799][#21799]
-- Tabular results where the column labels contain newline characters are now rendered properly. [#19306][#19306]
-- Fixed a bug that prevented long descriptions in the Admin UI **Jobs** table from being collapsed after being expanded. [#22221][#22221]
-- Fixed a bug that prevented using `SHOW GRANTS` with a grantee but no targets. [#21864][#21864]
-- Fixed a panic with certain queries involving the `REGCLASS` type. [#22310][#22310]
-- Fixed the behavior and types of the `encode()` and `decode()` functions. [#22230][#22230]
-- Fixed a bug that prevented passing the same tuple for `FROM` and `TO` in `ALTER TABLE ... SCATTER`. [#21830][#21830]
-- Fixed a regression that caused certain queries using `LIKE` or `SIMILAR TO` with an indexed column to be slow. [#21842][#21842]
-- Fixed a stack overflow in the code for shutting down a server when out of disk space. [#21768][#21768]
-- Fixed Windows release builds. [#21793][#21793]
-- Fixed an issue with the wire-formatting of `BYTES` arrays. [#21712][#21712]
-- Fixed a bug that could lead to a node crashing and needing to be reinitialized. [#21771][#21771]
-- When a database is created, dropped, or renamed, the SQL session is blocked until the effects of the operation are visible to future queries in that session. [#21900][#21900]
-- Fixed a bug where healthy nodes could appear as "Suspect" in the Admin UI if the web browser's local clock was skewed. [#22237][#22237]
-
-
Performance Improvements
-
-- Significantly reduced the likelihood of serializable restarts seen by clients due to concurrent workloads. [#21140][#21140]
-- Reduced disruption from nodes recovering from network partitions. [#22316][#22316]
-- Improved the performance of scans by copying less data in memory. [#22309][#22309]
-- Slightly improved the performance of low-level scan operations. [#22244][#22244]
-- When a range grows too large, writes are now backpressured until the range is successfully able to split. This prevents unbounded range growth and improves a cluster's ability to stay healthy under hotspot workloads. [#21777][#21777]
-- The `information_schema` and `pg_catalog` databases are now faster to query. [#21609][#21609]
-- Reduced the write amplification of Raft replication. [#20647][#20647]
-
-
Doc Updates
-
-- Added [cloud-specific hardware recommendations](https://www.cockroachlabs.com/docs/v2.0/recommended-production-settings#cloud-specific-recommendations). [#2312](https://github.com/cockroachdb/docs/pull/2312)
-- Added a [detailed listing of SQL standard features with CockroachDB's level of support](https://www.cockroachlabs.com/docs/v2.0/sql-feature-support). [#2442](https://github.com/cockroachdb/docs/pull/2442)
-- Added docs on the [`INET`](https://www.cockroachlabs.com/docs/v2.0/inet) data type. [#2439](https://github.com/cockroachdb/docs/pull/2439)
-- Added docs on the [`SHOW CREATE SEQUENCE`](https://www.cockroachlabs.com/docs/v2.0/show-create-sequence) statement. [#2406](https://github.com/cockroachdb/docs/pull/2406)
-
-
-
-
Contributors
-
-This release includes 133 merged PRs by 28 authors. We would like to thank the contributors from the CockroachDB community, especially first-time contributor pocockn.
-
-
-
-Release Date: {{ include.release_date | date: "%B %-d, %Y" }}
-
-This week's release includes:
-
- - Improved support for large delete statements.
- - Reduced disruption during upgrades and restarts.
- - Reduced occurrence of serializable transaction restarts.
-
-{{site.data.alerts.callout_danger}}This release has a bug that may result in incorrect results for certain queries using JOIN and ORDER BY. This bug will be fixed in next week's beta.{{site.data.alerts.end}}
-
-
Backwards-Incompatible Changes
-
-- [Sequences](https://www.cockroachlabs.com/docs/v2.0/create-sequence) are now supported in enterprise [`BACKUP`](https://www.cockroachlabs.com/docs/v2.0/backup)/[`RESTORE`](https://www.cockroachlabs.com/docs/v2.0/restore) jobs.
-
- This changes how sequences are stored in the key-value storage layer, so existing sequences must be dropped and recreated. Since a sequence cannot be dropped while it is being used in a column's [`DEFAULT`](https://www.cockroachlabs.com/docs/v2.0/default-value) expression, those expressions must be dropped before the sequence is dropped, and recreated after the sequence is recreated. The `setval()` function can be used to set the value of a sequence to what it was previously. [#21684][#21684]
-
-- Positive [constraints in replication zone configs](https://www.cockroachlabs.com/docs/v2.0/configure-replication-zones#replication-constraints) no longer work. Any existing positive constraints will be ignored. This change should not impact existing deployments since positive constraints have not been documented or supported for some time. [#22906][#22906]
-
-
Build Changes
-
-- CockroachDB now builds with Go 1.9.4 and higher. [#22608][#22608]
-
-
General Changes
-
-- [Diagnostics reports](https://www.cockroachlabs.com/docs/v2.0/diagnostics-reporting) now include information about changed settings and statistics on types of errors encountered during SQL execution. [#22705][#22705], [#22693][#22693], [#22948][#22948]
-
-
Enterprise Edition Changes
-
-- Revision history [`BACKUP`](https://www.cockroachlabs.com/docs/v2.0/backup)/[`RESTORE`](https://www.cockroachlabs.com/docs/v2.0/restore) is no longer considered experimental. [#22679][#22679]
-- Revision history [`BACKUP`](https://www.cockroachlabs.com/docs/v2.0/backup)/[`RESTORE`](https://www.cockroachlabs.com/docs/v2.0/restore) now handles schema changes. [#21717][#21717]
-- CockroachDB now checks that a backup actually contains the requested restore time. [#22659][#22659]
-- Improved [`BACKUP`](https://www.cockroachlabs.com/docs/v2.0/backup)'s handling of tables after [`TRUNCATE`](https://www.cockroachlabs.com/docs/v2.0/truncate). [#21895][#21895]
-- Ensured that only the backups created by the same cluster can be used in incremental backups. [#22474][#22474]
-- Avoided extra internal copying of files during [`RESTORE`](https://www.cockroachlabs.com/docs/v2.0/restore). [#22281][#22281]
-- Added a geographical map to the homepage of the Admin UI enterprise version, showing the location of nodes and localities in the cluster. The map is annotated with several top-level metrics: storage capacity used, queries per second, and current CPU usage, as well as the liveness status of nodes in the cluster. [#22763][#22763]
-
-
SQL Language Changes
-
-- The type determined for constant `NULL` expressions has been renamed to `unknown` for better compatibility with PostgreSQL. [#23150][#23150]
-- Deleting multiple rows at once now consumes less memory. [#23013][#23013]
-- Attempts to modify virtual schemas with DDL statements now fail with a clearer error message. [#23041][#23041]
-- The new `SHOW SCHEMAS` statement lists the valid virtual schemas alongside the physical schema `public`. [#23041][#23041]
-- CockroachDB now recognizes the special syntax `SET SCHEMA` as an alias for `SET search_path` for better compatibility with PostgreSQL. [#23041][#23041]
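-
-    A sketch of the equivalence (the schema name is illustrative):
-
-    ~~~ sql
-    SET SCHEMA 'public';
-    -- behaves like:
-    SET search_path = 'public';
-    ~~~
-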
-- `current_role()` and `current_catalog()` are supported as aliases for the `current_user()` and `current_database()` [built-in functions](https://www.cockroachlabs.com/docs/v2.0/functions-and-operators) for better compatibility with PostgreSQL. [#23041][#23041]
-- CockroachDB now returns the correct error code for division by zero. [#22948][#22948]
-- The GC of table data after a [`DROP TABLE`](https://www.cockroachlabs.com/docs/v2.0/drop-table) statement now respects changes to the GC TTL interval specified in the zone config. [#22903][#22903]
-- The full names of tables/views/sequences are now properly logged in the system event log. [#22848][#22848]
-- CockroachDB now recognizes the syntax `db.public.tbl` in addition to `db.tbl` for better compatibility with PostgreSQL. The handling of the session variable `search_path`, as well as that of the built-in functions `current_schemas()` and `current_schema()`, is now closer to that of PostgreSQL. Thus `SHOW TABLES FROM` can now inspect the tables of a specific schema (for example, `SHOW TABLES FROM db.public` or `SHOW TABLES FROM db.pg_catalog`). `SHOW GRANTS` also shows the schema of the databases and tables. [#22753][#22753]
-- Users can now configure auditing per table and per access mode with `ALTER TABLE`. [#22534][#22534]
-- SQL execution logs enabled by the [cluster setting](https://www.cockroachlabs.com/docs/v2.0/cluster-settings) `sql.trace.log_statement_execute` now go to a separate log file. This is an experimental feature meant to aid troubleshooting CockroachDB. [#22534][#22534]
-- Added the `string_to_array()` [built-in function](https://www.cockroachlabs.com/docs/v2.0/functions-and-operators). [#22391][#22391]
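-
-    For example:
-
-    ~~~ sql
-    SELECT string_to_array('a,b,c', ',');  -- {a,b,c}
-    ~~~
-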
-- Added the `constraint_column_usage` table and roles-related tables to the [`information_schema`](https://www.cockroachlabs.com/docs/v2.0/information-schema) database. [#22323][#22323] [#22242][#22242]
-- [`IMPORT`](https://www.cockroachlabs.com/docs/v2.0/import) no longer requires the experimental setting. [#22531][#22531]
-- Computed columns and `CHECK` constraints now correctly report column names in the case of a type error. [#22500][#22500]
-- The output of [`JSON`](https://www.cockroachlabs.com/docs/v2.0/jsonb) data now matches that of PostgreSQL. [#22393][#22393]
-- Allowed [`IMPORT`](https://www.cockroachlabs.com/docs/v2.0/import) jobs to be paused. `IMPORT` jobs now correctly resume instead of being abandoned if the coordinator goes down. [#22291][#22291]
-- Removed the `into_db` option in [`IMPORT`](https://www.cockroachlabs.com/docs/v2.0/import). The database is now specified as part of the table name. [#21813][#21813]
-- Changed computed column syntax and improved related error messages. [#22429][#22429]
-- Implemented additional [`INET`](https://www.cockroachlabs.com/docs/v2.0/inet) column type operators such as `contains` and `contained by`, binary operations, and addition/subtraction.
-- Implemented the following operators for [`INET`](https://www.cockroachlabs.com/docs/v2.0/inet) column types: `<<`, `<<=`, `>>`, `>>=`, `&&`, `+`, `-`, `^`, `|`, `&`. These operators are compatible with PostgreSQL 10 and are described in Table 9.36 of the PostgreSQL documentation. [#21437][#21437]
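-
-    A couple of examples of the containment operators (hypothetical addresses):
-
-    ~~~ sql
-    SELECT '192.168.1.5'::INET << '192.168.1.0/24'::INET;  -- true: is contained by
-    SELECT '192.168.1.0/24'::INET >> '192.168.1.5'::INET;  -- true: contains
-    ~~~
-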
-- CockroachDB now properly rejects incorrectly-cased SQL function names with an error. [#22365][#22365]
-- Allowed [`DEFAULT`](https://www.cockroachlabs.com/docs/v2.0/default-value) expressions in the `CREATE TABLE` of an [`IMPORT`](https://www.cockroachlabs.com/docs/v2.0/import) CSV. The expressions are not evaluated (data in the CSV is still required to be present). This change only allows them to be part of the table definition. [#22307][#22307]
-- Added the `#-` [operator](https://www.cockroachlabs.com/docs/v2.0/functions-and-operators) for `JSON`. [#22375][#22375]
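-
-    A quick sketch, assuming PostgreSQL-compatible path semantics:
-
-    ~~~ sql
-    SELECT '{"a": {"b": 1}, "c": 2}'::JSONB #- ARRAY['a','b'];  -- {"a": {}, "c": 2}
-    ~~~
-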
-- The `SET transaction_isolation` statement is now supported for better PostgreSQL compatibility. [#22389][#22389]
-- Allowed creation of computed columns. [#21823][#21823]
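-
-    A minimal sketch, assuming the `AS (...) STORED` form with hypothetical names:
-
-    ~~~ sql
-    CREATE TABLE people (
-        first_name STRING,
-        last_name  STRING,
-        full_name  STRING AS (concat(first_name, ' ', last_name)) STORED
-    );
-    ~~~
-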
-- Avoided extra internal copying of files during [`IMPORT`](https://www.cockroachlabs.com/docs/v2.0/import). [#22281][#22281]
-- Casts between [array types](https://www.cockroachlabs.com/docs/v2.0/array) are now allowed when a cast between the parameter types is allowed. [#22338][#22338]
-- Scalar functions can now be used in `FROM` clauses. [#22314][#22314]
-- Added privilege checks on [sequences](https://www.cockroachlabs.com/docs/v2.0/create-sequence). [#22284][#22284]
-- The `ON DELETE SET DEFAULT`, `ON UPDATE SET DEFAULT`, `ON DELETE SET NULL`, and `ON UPDATE SET NULL` [foreign key constraint actions](https://www.cockroachlabs.com/docs/v2.0/foreign-key#foreign-key-actions-new-in-v2-0) are now fully supported. [#22220][#22220] [#21767][#21767] [#21716][#21716]
-- The `ON DELETE CASCADE` and `ON UPDATE CASCADE` [foreign key constraint actions](https://www.cockroachlabs.com/docs/v2.0/foreign-key#foreign-key-actions-new-in-v2-0) can now also contain `CHECK` constraints. [#22535][#22535]
-- JSON inverted indexes can now be specified in a [`CREATE TABLE`](https://www.cockroachlabs.com/docs/v2.0/create-table) statement. [#22217][#22217]
-- When a node is gracefully shut down, planning queries are avoided and distributed queries are allowed the amount of time specified by the new `server.drain_max_wait` [cluster setting](https://www.cockroachlabs.com/docs/v2.0/cluster-settings) before the node is drained and stopped. [#20450][#20450]
-- [Collated strings](https://www.cockroachlabs.com/docs/v2.0/collate) are now supported in [`IMPORT`](https://www.cockroachlabs.com/docs/v2.0/import) jobs. [#21859][#21859]
-- The new `SHOW GRANTS ON ROLE` statement and the `pg_catalog.pg_auth_members` table list role memberships. [#22205][#22205] [#21780][#21780]
-- Role memberships are now considered in permission checks. [#21820][#21820]
-
-
Command-Line Changes
-
-- [Replication zone constraints](https://www.cockroachlabs.com/docs/v2.0/configure-replication-zones#replication-constraints) can now be specified on a per-replica basis, meaning you can configure some replicas in a zone's ranges to follow one set of constraints and other replicas to follow other constraints. [#22906][#22906]
-- Per-replica constraints no longer have to add up to the total number of replicas in a range. If you do not specify all the replicas, then the remaining replicas will be allowed on any store. [#23081][#23081]
-- [`cockroach sql`](https://www.cockroachlabs.com/docs/v2.0/use-the-built-in-sql-client) now reminds the user about `SET database = ...` and `CREATE DATABASE` if started with no current database set. [#23089][#23089]
-- Error messages displayed while connecting to a server with an incompatible version have been improved. [#22709][#22709]
-- The `--cache` and `--max-sql-memory` flags of [`cockroach start`](https://www.cockroachlabs.com/docs/v2.0/start-a-node) now also support decimal notation to support a fraction of total available RAM size, e.g., `--cache=.25` is equivalent to `--cache=25%`. This simplifies integration with system management tools. [#22460][#22460]
-- When printing tabular results as CSV or TSV, no final row count is emitted. This is intended to increase interoperability with external tools. [#20835][#20835]
-- The `pretty` formatter does not introduce special unicode characters in multi-line table cells, for better compatibility with certain clients. To disambiguate multi-line cells from multiple single-line cells, a user can use `WITH ORDINALITY` to add a row numbering column. [#22324][#22324]
-- Allowed specification of arbitrary RocksDB options. [#22401][#22401]
-- [Debug commands](https://www.cockroachlabs.com/docs/v2.0/debug-zip) now open RocksDB in read-only mode. This makes them faster and able to run in parallel. [#21778][#21778]
-- The [`cockroach dump`](https://www.cockroachlabs.com/docs/v2.0/sql-dump) command now outputs `CREATE SEQUENCE` statements before the `CREATE TABLE` statements that use them. [#21774][#21774]
-- For better compatibility with `psql`'s extended format, the table formatter `records` now properly indicates line continuations in multi-line rows. [#22325][#22325]
-- The [`cockroach sql`](https://www.cockroachlabs.com/docs/v2.0/use-the-built-in-sql-client) client-side option `show_times` is now always enabled when output goes to a terminal, not just when `display_format` is set to `pretty`. [#22326][#22326]
-- When formatting [`cockroach sql`](https://www.cockroachlabs.com/docs/v2.0/use-the-built-in-sql-client) results with `--format=sql`, the row count is now printed in a SQL comment at the end. [#22327][#22327]
-- When formatting [`cockroach sql`](https://www.cockroachlabs.com/docs/v2.0/use-the-built-in-sql-client) results with `--format=csv` or `--format=tsv`, result rows that contain special characters are now quoted properly. [#19306][#19306]
-
-
Admin UI Changes
-
-- [Decommissioned nodes](https://www.cockroachlabs.com/docs/v2.0/remove-nodes) are no longer included in cluster stats aggregates. [#22711][#22711]
-- Time series metrics dashboards now show their own title rather than the generic "Cluster Overview". [#22746][#22746]
-- The URLs for Admin UI pages have been reorganized to provide more consistent structure to the site. Old links will redirect to the new location of the page. [#22746][#22746]
-- Nodes being decommissioned are now included in the total nodes count until they are completely decommissioned. [#22690][#22690]
-- Added new graphs for monitoring activity of the time series system. [#22672][#22672]
-- Disk usage for time series data is now visible on the **Databases** page. [#22398][#22398]
-- Added a `ui-clean` task. [#22552][#22552]
-- Added an icon to indicate when descriptions in the **Jobs** table are shortened and expandable. [#22221][#22221]
-- Added "compaction queue" graphs to the **Queues** dashboard. [#22218][#22218]
-- Added Raft snapshot queue metrics to the **Queue** dashboard. [#22210][#22210]
-- Dead nodes are now displayed before live nodes on the **Nodes List** page. [#22222][#22222]
-- Links to documentation in the Admin UI now point to the docs for v2.0. [#21894][#21894]
-
-
Bug Fixes
-
-- Fixed an issue where Admin UI graph tooltips would continue to display zero values for nodes which had long been decommissioned. [#22626][#22626]
-- Fixed an issue where Admin UI graphs would occasionally have a persistent "dip" at the leading edge of data. [#22570][#22570]
-- Fixed an issue where viewing Admin UI graphs for very long time spans (e.g., 1 month) could cause excessive memory usage. [#22392][#22392]
-- Fixed padding on the **Node Diagnostics** page of the Admin UI. [#23019][#23019]
-- Corrected the title of the decommissioned node list, which was mistakenly updated to say "Decommissioning". [#22703][#22703]
-- Fixed a bug in [`cockroach dump`](https://www.cockroachlabs.com/docs/v2.0/sql-dump) output with `SEQUENCES`. [#22619][#22619]
-- Fixed a bug that created uneven distribution of data (or failures in some cases) during [`IMPORT`](https://www.cockroachlabs.com/docs/v2.0/import) of tables without an explicit primary key. [#22542][#22542]
-- Fixed a bug that could prevent disk space from being reclaimed. [#23153][#23153]
-- [Replication zone configs](https://www.cockroachlabs.com/docs/v2.0/configure-replication-zones) no longer accept negative numbers as input. [#23081][#23081]
-- Fixed the occasional selection of sub-optimal rebalance targets. [#23081][#23081]
-- [`cockroach dump`](https://www.cockroachlabs.com/docs/v2.0/sql-dump) is now able to dump sequences with non-default parameters. [#23062][#23062]
-- [Arrays](https://www.cockroachlabs.com/docs/v2.0/array) now support the `IS [NOT] DISTINCT FROM` operators. [#23090][#23090]
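-
-    For example:
-
-    ~~~ sql
-    SELECT ARRAY[1,2] IS DISTINCT FROM NULL;            -- true
-    SELECT ARRAY[1,2] IS NOT DISTINCT FROM ARRAY[1,2];  -- true
-    ~~~
-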
-- [`SHOW TABLES`](https://www.cockroachlabs.com/docs/v2.0/show-tables) is now again able to inspect virtual schemas. [#23041][#23041]
-- The special form of [`CREATE TABLE .. AS`](https://www.cockroachlabs.com/docs/v2.0/create-table-as) now properly supports placeholders in the subquery. [#23046][#23046]
-- Fixed a bug where ranges could get stuck in an infinite "removal pending" state and would refuse to accept new writes. [#23024][#23024]
-- Fixed incorrect index constraints on primary key columns on unique indexes. [#23003][#23003]
-- Fixed a panic when upgrading quickly from v1.0.x to v2.0.x. [#22971][#22971]
-- Fixed a bug that prevented joins on interleaved tables with certain layouts from working. [#22935][#22935]
-- The service latency tracked for SQL statements now includes the wait time of the execute message in the input queue. [#22881][#22881]
-- The conversion from [`INTERVAL`](https://www.cockroachlabs.com/docs/v2.0/interval) to [`FLOAT`](https://www.cockroachlabs.com/docs/v2.0/float) now properly returns the number of seconds in the interval. [#22894][#22894]
-- Fixed incorrect query results when the `WHERE` condition contains `IN` expressions where the right-hand side tuple contains `NULL`s. [#22735][#22735]
-- Fixed incorrect handling for `IS (NOT) DISTINCT FROM` when either side is a tuple that contains `NULL`. [#22718][#22718]
-- Fixed incorrect evaluation of `IN` expressions where the left-hand side is a tuple, and some of the tuples on either side contain `NULL`. [#22718][#22718]
-- Expressions stored in check constraints and computed columns are now stored de-qualified so that they no longer refer to a specific database or table. [#22667][#22667]
-- Fixed a bug where reusing addresses of decommissioned nodes could cause issues with Admin UI graphs. [#22614][#22614]
-- [`IMPORT`](https://www.cockroachlabs.com/docs/v2.0/import) jobs can no longer be started if the target table already exists. [#22627][#22627]
-- Computed columns can no longer be added to a table after table creation. [#22653][#22653]
-- [`UPSERT`](https://www.cockroachlabs.com/docs/v2.0/upsert) into a table with computed columns is now allowed. [#22517][#22517]
-- Computed columns are now correctly disallowed from being foreign key references. [#22511][#22511]
-- Various primitives that expect table names as arguments now properly reject invalid table names. [#22577][#22577]
-- `AddSSTable` no longer accidentally destroys files in the log on success. [#22551][#22551]
-- `IsDistinctFrom` with `NULL` placeholder no longer returns incorrect results. [#22433][#22433]
-- Fixed a bug that caused incorrect results for joins where columns that are constrained to be equal have different types. [#22549][#22549]
-- Implemented additional safeguards against RPC connections between nodes that belong to different clusters. [#22518][#22518]
-- The [`/health` endpoint](https://www.cockroachlabs.com/docs/v2.0/monitoring-and-alerting) now returns a node as unhealthy when draining or decommissioning. [#22502][#22502]
-- Aggregates that take null arguments now return the correct results. [#22507][#22507]
-- Fixed empty plan columns of `sequenceSelectNode`. [#22495][#22495]
-- Disallowed any inserts into computed columns. [#22470][#22470]
-- Tables with computed columns now produce a meaningful dump. [#22402][#22402]
-- [`cockroach sql`](https://www.cockroachlabs.com/docs/v2.0/use-the-built-in-sql-client) no longer produces an error when an empty statement is entered at the interactive prompt. [#22449][#22449]
-- The `pg_typeof()` function now returns the correct type for the output of `UNION ALL` even when the left sub-select has a `NULL` column. [#22438][#22438]
-- Literal casts now work correctly for all fixed-length types. [#22397][#22397]
-- Errors from DDL statements sent by a client as part of a transaction, but in a different query string than the final commit, are no longer silently swallowed. [#21829][#21829]
-- Fixed a bug in cascading foreign key actions. [#21799][#21799]
-- Tabular results where the column labels contain newline characters are now rendered properly. [#19306][#19306]
-- Fixed a bug that prevented long descriptions in the Admin UI **Jobs** table from being collapsed after being expanded. [#22221][#22221]
-- Fixed a bug that prevented using [`SHOW GRANTS`](https://www.cockroachlabs.com/docs/v2.0/show-grants) with a grantee but no targets. [#21864][#21864]
-- Fixed a panic with certain queries involving the `REGCLASS` type. [#22310][#22310]
-- Fixed the behavior and types of the `encode()` and `decode()` functions. [#22230][#22230]
-- Fixed a bug that prevented passing the same tuple for `FROM` and `TO` in `ALTER TABLE ... SCATTER`. [#21830][#21830]
-- Fixed a regression that caused certain queries using `LIKE` or `SIMILAR TO` with an indexed column to be slow. [#21842][#21842]
-- Fixed a stack overflow in the code for shutting down a server when out of disk space. [#21768][#21768]
-- Fixed Windows release builds. [#21793][#21793]
-- Fixed an issue with the wire-formatting of [`BYTES`](https://www.cockroachlabs.com/docs/v2.0/bytes) arrays. [#21712][#21712]
-- Fixed a bug that could lead to a node crashing and needing to be reinitialized. [#21771][#21771]
-- When a database is created, dropped, or renamed, the SQL session is blocked until the effects of the operation are visible to future queries in that session. [#21900][#21900]
-- Fixed a bug where healthy nodes could appear as "Suspect" in the Admin UI if the web browser's local clock was skewed. [#22237][#22237]
-- Fixed bugs when running DistSQL queries across mixed-version (1.1.x and 2.0-alpha) clusters. [#22897][#22897]
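-
-The following examples sketch the corrected three-valued logic for tuples containing `NULL`; the expected results assume standard PostgreSQL semantics:
-
-~~~ sql
-SELECT (1, NULL) IS DISTINCT FROM (1, NULL); -- false: NULL is not distinct from NULL
-SELECT (1, 2) IN ((1, 2), (NULL, 4));        -- true: an exact match exists
-SELECT (1, 2) IN ((1, NULL), (3, 4));        -- NULL: the NULL makes the comparison unknown
-~~~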
-
-
Performance Improvements
-
-- Improved a cluster's ability to continue operating when nearly out of disk space on most nodes. [#21866][#21866]
-- Disk space is more aggressively freed up when the disk is almost full. [#22235][#22235]
-- Experimentally enabled lookup joins for some queries, speeding up joins where the right side of the join is much larger than the left. [#22674][#22674]
-- Distributed execution of `INTERSECT` and `EXCEPT` queries is now supported. [#22442][#22442]
-- Reduced cancellation time of DistSQL aggregation queries. [#22684][#22684]
-- Unnecessary value checksums are no longer computed, speeding up database writes. [#22487][#22487]
-- Reduced unnecessary logging in the storage layer. [#22516][#22516]
-- Improved the performance of distributed SQL queries. [#22471][#22471]
-- Distributed execution of `INTERSECT ALL` and `EXCEPT ALL` queries is now supported. [#21896][#21896]
-- Allowed `-` in usernames, but not as the first character. [#22728][#22728]
-- A `COMMIT` reporting an error generated by a previous parallel statement (i.e., `RETURNING NOTHING`) no longer leaves the connection in an aborted transaction state. Instead, the transaction is considered completed, and a `ROLLBACK` is not necessary; see the sketch after this list. [#22683][#22683]
-- Significantly reduced the likelihood of serializable restarts seen by clients due to concurrent workloads. [#21140][#21140]
-- Reduced disruption from nodes recovering from network partitions. [#22316][#22316]
-- Improved the performance of scans by copying less data in memory. [#22309][#22309]
-- Slightly improved the performance of low-level scan operations. [#22244][#22244]
-- When a range grows too large, writes are now backpressured until the range is successfully able to split. This prevents unbounded range growth and improves a cluster's ability to stay healthy under hotspot workloads. [#21777][#21777]
-- The `information_schema` and `pg_catalog` databases are now faster to query. [#21609][#21609]
-- Reduced the write amplification of Raft replication. [#20647][#20647]
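-
-As a sketch of the parallel statement behavior noted above (the `kv` table is hypothetical), statements marked `RETURNING NOTHING` execute in parallel, and any error they produce now surfaces at `COMMIT` with the transaction considered completed:
-
-~~~ sql
-BEGIN;
-INSERT INTO kv VALUES (1, 'a') RETURNING NOTHING;
-INSERT INTO kv VALUES (2, 'b') RETURNING NOTHING;
-COMMIT; -- reports any deferred error; no ROLLBACK is needed afterward
-~~~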
-
-
Doc Updates
-
-- Added [cloud-specific hardware recommendations](https://www.cockroachlabs.com/docs/v2.0/recommended-production-settings#cloud-specific-recommendations). [#2312](https://github.com/cockroachdb/docs/pull/2312)
-- Added a [detailed listing of SQL standard features with CockroachDB's level of support](https://www.cockroachlabs.com/docs/v2.0/sql-feature-support). [#2442](https://github.com/cockroachdb/docs/pull/2442)
-- Added docs on the [`INET`](https://www.cockroachlabs.com/docs/v2.0/inet) data type. [#2439](https://github.com/cockroachdb/docs/pull/2439)
-- Added docs on the [`SHOW CREATE SEQUENCE`](https://www.cockroachlabs.com/docs/v2.0/show-create-sequence) statement. [#2406](https://github.com/cockroachdb/docs/pull/2406)
-- Clarified that [password creation](https://www.cockroachlabs.com/docs/v2.0/create-and-manage-users#user-authentication) is only supported in secure clusters. [#2567](https://github.com/cockroachdb/docs/pull/2567)
-- Added docs on [production monitoring tools](https://www.cockroachlabs.com/docs/v2.0/monitoring-and-alerting) and the critical events and metrics to alert on. [#2564](https://github.com/cockroachdb/docs/pull/2564)
-- Added docs on the [`JSONB`](https://www.cockroachlabs.com/docs/v2.0/jsonb) data type. [#2561](https://github.com/cockroachdb/docs/pull/2561)
-- Added docs on the `BETWEEN SYMMETRIC` [operator](https://www.cockroachlabs.com/docs/v2.0/functions-and-operators#operators). [#2551](https://github.com/cockroachdb/docs/pull/2551)
-- Updated docs on [supporting castings for `ARRAY` values](https://www.cockroachlabs.com/docs/v2.0/array#supported-casting-conversionnew-in-v2-0). [#2549](https://github.com/cockroachdb/docs/pull/2549)
-- Various improvements to docs on the [built-in SQL client](https://www.cockroachlabs.com/docs/v2.0/use-the-built-in-sql-client). [#2544](https://github.com/cockroachdb/docs/pull/2544)
-
-
-
-
Contributors
-
-This release includes 430 merged PRs by 37 authors. We would like to thank all contributors from the CockroachDB community, with special thanks to first-time contributors noonan, Mark Wistrom, pocockn, and 何羿宏.
-
-
-
-Release Date: {{ include.release_date | date: "%B %-d, %Y" }}
-
-In this release, we’ve enhanced our debug pages to support graphing custom metrics, improved handling of large deletes, and fixed several bugs.
-
-- Custom graph debug page [#23227](https://github.com/cockroachdb/cockroach/pull/23227)
-- Improve handling of large deletes [#23289](https://github.com/cockroachdb/cockroach/pull/23289)
-
-
Build Changes
-
-- Binaries are now built with Go 1.10 by default. [#23311][#23311]
-
-
General Changes
-
-- [Logging data](https://www.cockroachlabs.com/docs/v2.0/debug-and-error-logs) is now flushed to files every second to aid troubleshooting and monitoring. Synchronization to disk is performed separately every 30 seconds. [#23231][#23231]
-- Disabling [diagnostics reporting](https://www.cockroachlabs.com/docs/v1.1/diagnostics-reporting) also disables new version notification checks. [#23007][#23007]
-- Removed the `diagnostics.reporting.report_metrics` [cluster setting](https://www.cockroachlabs.com/docs/v2.0/cluster-settings), which is duplicative with the `COCKROACH_SKIP_ENABLING_DIAGNOSTIC_REPORTING` environment variable. [#23007][#23007]
-- All internal error messages are now logged when logging is set to a high enough verbosity. [#23127][#23127]
-
-
SQL Language Changes
-
-- Improved handling of large [`DELETE`](https://www.cockroachlabs.com/docs/v2.0/delete) statements. They now either complete or fail with an error indicating that the transaction is too large; see the batching sketch after this list. [#23289][#23289]
-- The `pg_catalog` virtual tables, as well as the special casts `::regproc` and `::regclass`, can now only be queried by clients that [set a current database](https://www.cockroachlabs.com/docs/v2.0/set-vars). [#23148][#23148]
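-
-For deletes that would still exceed the transaction size limit, batching remains a reasonable workaround. A minimal sketch with a hypothetical `events` table, repeated until zero rows are affected:
-
-~~~ sql
-DELETE FROM events
-WHERE id IN (SELECT id FROM events WHERE created_at < '2017-01-01' LIMIT 1000);
-~~~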
-
-
Command-Line Changes
-
-- When a node spends more than 30 seconds waiting for an `init` command or to join a cluster, a help message is now printed to `stdout`. [#23181][#23181]
-
-
Admin UI Changes
-
-- Added a new debug page that allows users to create a "custom" graph displaying any combination of metrics. [#23227][#23227]
-- In the geographical node map on the homepage of the Admin UI (enterprise version), node components now link to the **Node Details** page. [#23283][#23283]
-- Removed the **Keys Written per Second per Store** graph. [#23303][#23303]
-- Added the **Lead Transferee** field to the **Range Debug** page. [#23241][#23241]
-
-
Bug Fixes
-
-- Fixed a correctness bug where some `ORDER BY` queries would not return the correct results. [#23541][#23541]
-- The Admin UI no longer hangs after a node's network configuration has changed. [#23348][#23348]
-- The binary format for [`JSONB`](https://www.cockroachlabs.com/docs/v2.0/jsonb) values is now supported. [#23215][#23215]
-- A node now waits in an unready state for the length of time specified by the `server.shutdown.drain_wait` cluster setting before draining. This helps ensure that load balancers will not send client traffic to a node about to be drained. [#23319][#23319]
-- Fixed a panic when using `UPSERT ... RETURNING` with `UNION`. [#23317][#23317]
-- Prevented disruptions in performance when gracefully shutting down a node. [#23300][#23300]
-- Hardened the [cluster version upgrade](https://www.cockroachlabs.com/docs/v2.0/upgrade-cockroach-version) mechanism. Rapid upgrades through more than two versions could sometimes fail recoverably. [#23287][#23287]
-- Fixed a deadlock when tables are rapidly created or dropped. [#23288][#23288]
-- Fixed a small memory leak in [`IMPORT`](https://www.cockroachlabs.com/docs/v2.0/import) CSV. [#23259][#23259]
-- Prevented a panic in DistSQL under certain error conditions. [#23201][#23201]
-- Added a [readiness endpoint](https://www.cockroachlabs.com/docs/v2.0/monitoring-and-alerting#health-ready-1) (`/health?ready=1`) for better integration with load balancers. [#23247][#23247]
-- Fixed a zero QPS scenario when gracefully shutting down a node. [#23246][#23246]
-- Secondary log files (e.g., the SQL execution log) are now periodically flushed to disk, in addition to the flush that occurs naturally when single log files are full (`--log-file-max-size`) and when the process terminates gracefully. Log file rotation is also now properly active for these files. [#23231][#23231]
-- Previously, the `ranges` column in the `node status` command output only included ranges whose Raft leader was on the node in question. It now includes the count of all ranges on the node, regardless of where the Raft leader is. [#23180][#23180]
-- Fixed a panic caused by empty `COCKROACH_UPDATE_CHECK_URL` or `COCKROACH_USAGE_REPORT_URL` environment variables. [#23008][#23008]
-- Prevented stale reads caused by the system clock moving backwards while the `cockroach` process is not running. [#23122][#23122]
-- Corrected the handling of cases where a replica fails to retrieve the last processed timestamp of a queue. [#23127][#23127]
-- Fixed a bug where the liveness status would not always display correctly on the single-node page in the Admin UI. [#23193][#23193]
-- Fixed a bug that showed incorrect descriptions on the **Jobs** page in the Admin UI. [#23256][#23256]
-
-
Doc Updates
-
-- Updated the [GCE deployment tutorial](https://www.cockroachlabs.com/docs/v2.0/deploy-cockroachdb-on-google-cloud-platform) with guidance on increasing the backend timeout setting for TCP Proxy load balancers. [#2687](https://github.com/cockroachdb/docs/pull/2687)
-- Documented [read refreshing](https://www.cockroachlabs.com/docs/v2.0/architecture/transaction-layer#read-refreshing) in the CockroachDB architecture documentation. [#2684](https://github.com/cockroachdb/docs/pull/2684)
-- Updated the explanation of [automatic retries](https://www.cockroachlabs.com/docs/v2.0/transactions#automatic-retries). [#2680](https://github.com/cockroachdb/docs/pull/2680)
-- Documented changes to the [built-in replication zone](https://www.cockroachlabs.com/docs/v2.0/configure-replication-zones#list-the-pre-configured-replication-zones). [#2677](https://github.com/cockroachdb/docs/pull/2677)
-- Updated the [`information_schema`](https://www.cockroachlabs.com/docs/v2.0/information-schema) documentation to cover new views. [#2675](https://github.com/cockroachdb/docs/pull/2675) [#2673](https://github.com/cockroachdb/docs/pull/2673) [#2672](https://github.com/cockroachdb/docs/pull/2672) [#2668](https://github.com/cockroachdb/docs/pull/2668) [#2662](https://github.com/cockroachdb/docs/pull/2662) [#2654](https://github.com/cockroachdb/docs/pull/2654) [#2637](https://github.com/cockroachdb/docs/pull/2637)
-- Clarified the target of the [`cockroach init`](https://www.cockroachlabs.com/docs/v2.0/initialize-a-cluster) command. [#2670](https://github.com/cockroachdb/docs/pull/2670)
-- Added details about [how to monitor clock offsets](https://www.cockroachlabs.com/docs/v2.0/operational-faqs#how-can-i-tell-how-well-node-clocks-are-synchronized). [#2663](https://github.com/cockroachdb/docs/pull/2663)
-- Documented how to [perform a rolling upgrade on a Kubernetes-orchestrated cluster](https://www.cockroachlabs.com/docs/v2.0/orchestrate-cockroachdb-with-kubernetes). [#2661](https://github.com/cockroachdb/docs/pull/2661)
-- Updated the [Azure deployment tutorial](https://www.cockroachlabs.com/docs/v2.0/deploy-cockroachdb-on-microsoft-azure) with correct network security rule guidance. [#2653](https://github.com/cockroachdb/docs/pull/2653)
-- Improved the documentation of the [`cockroach node status`](https://www.cockroachlabs.com/docs/v2.0/view-node-details) command. [#2639](https://github.com/cockroachdb/docs/pull/2639)
-- Clarified that the [`DROP COLUMN`](https://www.cockroachlabs.com/docs/v2.0/drop-column) statement now drops `CHECK` constraints. [#2638](https://github.com/cockroachdb/docs/pull/2638)
-- Added details about [disk space usage after deletes and select performance on deleted rows](https://www.cockroachlabs.com/docs/v2.0/delete#disk-space-usage-after-deletes). [#2635](https://github.com/cockroachdb/docs/pull/2635)
-- Clarified that [`DROP INDEX .. CASCADE`](https://www.cockroachlabs.com/docs/v2.0/drop-index) is required to drop a `UNIQUE` index. [#2633](https://github.com/cockroachdb/docs/pull/2633)
-- Updated the [`EXPLAIN`](https://www.cockroachlabs.com/docs/v2.0/explain) documentation to identify all explainable statements, cover the new output structure, and better explain the contents of the `Ordering` column. [#2632](https://github.com/cockroachdb/docs/pull/2632) [#2682](https://github.com/cockroachdb/docs/pull/2682)
-- Defined "range lease" in the [CockroachDB architecture overview](https://www.cockroachlabs.com/docs/v2.0/architecture/overview#terms). [#2625](https://github.com/cockroachdb/docs/pull/2625)
-- Added an FAQ on [preparing for planned node maintenance](https://www.cockroachlabs.com/docs/v2.0/operational-faqs#how-do-i-prepare-for-planned-node-maintenance). [#2600](https://github.com/cockroachdb/docs/pull/2600)
-
-
-
-Release Date: {{ include.release_date | date: "%B %-d, %Y" }}
-
-In this release, we’ve improved CockroachDB’s ability to run in orchestrated environments and closed several Postgres capability gaps.
-
-- Better [`/health`](https://www.cockroachlabs.com/docs/v2.0/monitoring-and-alerting) behavior to support orchestrated deployments. [#23551][#23551]
-
-
SQL Language Changes
-
-- `NO CYCLE` and `CACHE 1` are now supported options during [sequence creation](https://www.cockroachlabs.com/docs/v2.0/create-sequence). [#23518][#23518]
-- `ISNULL` and `NOTNULL` are now accepted as alternatives to `IS NULL` and `IS NOT NULL`. [#23518][#23518]
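-
-A minimal sketch of both additions; `CACHE 1` and `NO CYCLE` match the default behavior, so they parse without changing semantics:
-
-~~~ sql
-CREATE SEQUENCE seq CACHE 1 NO CYCLE;
-SELECT nextval('seq');
-SELECT 1 ISNULL, 1 NOTNULL; -- equivalent to 1 IS NULL, 1 IS NOT NULL
-~~~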
-
-
Command-Line Changes
-
-- Renamed the `server.drain_max_wait` [cluster setting](https://www.cockroachlabs.com/docs/v2.0/cluster-settings) to `server.shutdown.query_wait`. [#23629][#23629]
-- The HAProxy config generated by [`cockroach gen haproxy`](https://www.cockroachlabs.com/docs/v2.0/generate-cockroachdb-resources) has been extended with readiness checks. [#23590][#23590]
-
-
Admin UI Changes
-
-- The **Logs** page now shows a spinner instead of "No Data" while logs are loading. [#23556][#23556]
-- The **Logs** page now uses a monospace font, renders newlines, and packs log lines together more tightly. [#23556][#23556]
-- Moved the **Clock Offset** graph from the **Distributed** dashboard to the [Runtime dashboard](https://www.cockroachlabs.com/docs/v2.0/admin-ui-runtime-dashboard). The graph now displays each node's clock offset independently rather than in aggregate. [#23627][#23627]
-- When [reporting anonymous usage details](https://www.cockroachlabs.com/docs/v2.0/diagnostics-reporting), locality tier names are now redacted. [#23588][#23588]
-- In the Node Map, the entire node component is now clickable, not just its visible elements. [#23536][#23536]
-- The Node Map now shows how long a node has been dead. [#23404][#23404]
-- Correct liveness status text now displays on nodes in the Node Map. [#23404][#23404]
-
-
Bug Fixes
-
-- Fixed a bug in which the usable capacity of nodes was not added up correctly in the Admin UI. [#23695][#23695]
-- An [`ARRAY`](https://www.cockroachlabs.com/docs/v2.0/array) can now be used with the PostgreSQL binary format. [#23467][#23467]
-- Fixed a panic when a query would incorrectly attempt to use an aggregation or window function in [`ON CONFLICT DO UPDATE`](https://www.cockroachlabs.com/docs/v2.0/insert#update-values-on-conflict). [#23658][#23658]
-- [`CREATE TABLE AS`](https://www.cockroachlabs.com/docs/v2.0/create-table-as) can now be used with [scalar subqueries](https://www.cockroachlabs.com/docs/v2.0/scalar-expressions#scalar-subqueries). [#23470][#23470]
-- Connection attempts to former members of the cluster and the associated spurious log messages are now prevented. [#23605][#23605]
-- Fixed a panic when executing `INSERT INTO ... SELECT` queries where the number of columns targeted for insertion does not match the number of columns returned by the [`SELECT`](https://www.cockroachlabs.com/docs/v2.0/select-clause). [#23642][#23642]
-- Reduced the risk that a node in the process of crashing could serve inconsistent data. [#23616][#23616]
-- Fixed a correctness bug where some `ORDER BY` queries would not return the correct results under concurrent transactional load. [#23602][#23602]
-- `RETURNING NOTHING` now properly detects [parallel statement execution](https://www.cockroachlabs.com/docs/v2.0/parallel-statement-execution) opportunities against statements that contain data-modifying statements in subqueries. [#23524][#23524]
-- The [`/health` HTTP endpoint](https://www.cockroachlabs.com/docs/v2.0/monitoring-and-alerting#health-endpoints) is now accessible before a node has successfully become part of an initialized cluster, meaning that it now accurately reflects the health of the process rather than the ability of the process to serve queries. This has been the intention all along, but it didn't work until the node had joined a cluster or had `cockroach init` run on it. [#23551][#23551]
-- Fixed a panic that could occur with certain types of casts. [#23535][#23535]
-- Prevented a hang while crashing when `stderr` is blocked. [#23484][#23484]
-- Fixed panics related to distributed execution of queries with `REGCLASS` casts. [#23482][#23482]
-- Fixed a panic with computed columns. [#23435][#23435]
-- Prevented potential consistency issues when a node is stopped and restarted in rapid succession. [#23339][#23339]
-- [Decommissioning a node](https://www.cockroachlabs.com/docs/v2.0/remove-nodes) that has already been terminated now works in all cases. Success previously depended on whether the gateway node "remembered" the absent decommissioned node. [#23378][#23378]
-
-
Build Changes
-
-- [Go 1.10](https://golang.org/dl/) is now the minimum required version necessary to build CockroachDB. [#23494][#23494]
-
-
Doc Updates
-
-- Documented the [`SPLIT AT`](https://www.cockroachlabs.com/docs/v2.0/split-at) statement, which forces a key-value layer range split at a specified row in a table or index. [#2704](https://github.com/cockroachdb/docs/pull/2704)
-- Various updates to the [`IMPORT`](https://www.cockroachlabs.com/docs/v2.0/import) documentation. [#2676](https://github.com/cockroachdb/docs/pull/2676)
-- Various updates to the [`SHOW TRACE`](https://www.cockroachlabs.com/docs/v2.0/show-trace) documentation. [#2674](https://github.com/cockroachdb/docs/pull/2674)
-- Clarified the upgrade path for [rolling upgrades](https://www.cockroachlabs.com/docs/v2.0/upgrade-cockroach-version). [#2627](https://github.com/cockroachdb/docs/pull/2627)
-- Added more detailed documentation on [ordering query results](https://www.cockroachlabs.com/docs/v2.0/query-order) with `ORDER BY`. [#2658](https://github.com/cockroachdb/docs/pull/2658)
-- Documented [inverted indexes](https://www.cockroachlabs.com/docs/v2.0/inverted-indexes) for [`JSONB`](https://www.cockroachlabs.com/docs/v2.0/jsonb) data. [#2608](https://github.com/cockroachdb/docs/pull/2608)
-
-
General Changes
-
-- A CockroachDB process now flushes its logs upon receiving `SIGHUP` instead of `SIGUSR1` as it did previously. This change simplifies the automation of process monitoring, testing, and backup tools. [#23783][#23783]
-- Information about [zone config](https://www.cockroachlabs.com/docs/v2.0/configure-replication-zones) usage is now included in [diagnostic reports](https://www.cockroachlabs.com/docs/v2.0/diagnostics-reporting). [#23750][#23750]
-
-
Enterprise Edition Changes
-
-- Added the `cloudstorage.timeout` [cluster setting](https://www.cockroachlabs.com/docs/v2.0/cluster-settings) for import/export operations. [#23776][#23776]
-
-
SQL Language Changes
-
-- SQL features introduced in CockroachDB v2.0 cannot be used in clusters that are not [fully upgraded to v2.0](https://www.cockroachlabs.com/docs/v2.0/upgrade-cockroach-version). [#24013][#24013]
-- Added an `escape` option to the `encode()` and `decode()` [built-in functions](https://www.cockroachlabs.com/docs/v2.0/functions-and-operators); see the sketch after this list. [#23781][#23781]
-- Introduced a series of PostgreSQL-compatible, privilege-related [built-in functions](https://www.cockroachlabs.com/docs/v2.0/functions-and-operators). [#23839][#23839]
-- Added the `pg_language` table to the `pg_catalog` virtual schema. [#23839][#23839]
-- Added the `anyarray` type to the `pg_type` virtual schema. [#23836][#23836]
-- [Retryable errors](https://www.cockroachlabs.com/docs/v2.0/transactions#error-handling) on schema change operations are now less likely to be returned to clients; more operations are retried internally. [#24050][#24050]
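-
-A sketch of the new `escape` option for `encode()` and `decode()`; the exact output shown assumes PostgreSQL-style octal escaping:
-
-~~~ sql
-SELECT encode(b'hello\x01', 'escape'); -- BYTES -> STRING, e.g. 'hello\001'
-SELECT decode('hello\001', 'escape');  -- STRING -> BYTES
-~~~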
-
-
Command-Line Changes
-
-- Client commands now report a warning if the connection URL is specified via the `--url` flag as well as via another command-line flag. If you use the `--url` flag, other flags can fill in pieces missing from the URL.
-- Added per-node heap profiles to the [`debug zip`](https://www.cockroachlabs.com/docs/v2.0/debug-zip) command output. [#23858][#23858]
-
-
Admin UI Changes
-
-- More debug pages are now locked down by the `server.remote_debugging.mode` [cluster setting](https://www.cockroachlabs.com/docs/v2.0/cluster-settings). [#24022][#24022]
-- The **Network Diagnostics** report no longer crashes when latencies are very small or when the cluster has only a single node. [#23868][#23868]
-- Fixed the flicker in the **Node Map** as data is being reloaded. [#23757][#23757]
-- Fixed the text overflowing past the table cell boundaries on the **Jobs** page. [#23748][#23748]
-- Updated the labels for the **Snapshots** graph on the **Replication** dashboard to be more specific. [#23742][#23742]
-- Fixed a bug where graphs would not display on clusters with large numbers of nodes. [#24045][#24045]
-- [Decommissioned nodes](https://www.cockroachlabs.com/docs/v2.0/remove-nodes) no longer appear in the node selection dropdown on the **Metrics** page. [#23800][#23800]
-- Fixed a condition where a persistent trailing dip could appear in graphs for longer time periods. [#23874][#23874]
-
-
Bug Fixes
-
-- Redacted string values in debug API responses. [#24070][#24070]
-- Old replicas are now garbage collected in a more timely fashion after a node has been offline for a long time (this bug only exists in recent v2.0 alpha/beta releases, not in v1.1). [#24066][#24066]
-- Fixed a bug where some [inverted index](https://www.cockroachlabs.com/docs/v2.0/inverted-indexes) queries could return incorrect results. [#23968][#23968]
-- Fixed the behavior of the `@>` operator with arrays and scalars. [#23969][#23969]
-- [Inverted indexes](https://www.cockroachlabs.com/docs/v2.0/inverted-indexes) can no longer be hinted for inappropriate queries. [#23989][#23989]
-- Enforced minimum privileges for the `admin` role. [#23935][#23935]
-- A panic is now avoided if the SQL audit log directory does not exist when the node is started. [#23928][#23928]
-- The PostgreSQL `USING GIN` syntax is now supported. [#23910][#23910]
-- Fixed a bug where [`INSERT`](https://www.cockroachlabs.com/docs/v2.0/insert)/[`DELETE`](https://www.cockroachlabs.com/docs/v2.0/delete)/[`UPDATE`](https://www.cockroachlabs.com/docs/v2.0/update)/[`UPSERT`](https://www.cockroachlabs.com/docs/v2.0/upsert) statements could lose updates if run using `WITH` or the `[ ... ]` syntax. [#23895][#23895]
-- Made sure that all built-in functions have a unique Postgres OID for compatibility. [#23880][#23880]
-- Fixed an error message generated by the experimental SCRUB feature. [#23845][#23845]
-- Fixed a bug where [`CREATE VIEW`](https://www.cockroachlabs.com/docs/v2.0/create-view) after [`ALTER TABLE ADD COLUMN`](https://www.cockroachlabs.com/docs/v2.0/add-column) would fail to register the dependency on the newly added column. [#23845][#23845]
-- Fixed crashes or incorrect results when combining an `OUTER JOIN` with a `VALUES` clause that contains only `NULL` values on a column (or other subqueries which result in a `NULL` column). [#23838][#23838]
-- Fixed a rare nil-pointer exception in rebalance target selection. [#23807][#23807]
-- The [`cockroach zone set`](https://www.cockroachlabs.com/docs/v2.0/configure-replication-zones) command will now automatically retry if it encounters an error while setting zone configurations. [#23782][#23782]
-- Fixed a bug where closing a connection in the middle of executing a query sometimes crashed the server. [#23761][#23761]
-- Fixed a bug where expressions could be mistakenly considered equal, despite their types being different. [#23722][#23722]
-- Fixed a bug where the `RANGE COUNT` metric on the Cluster Overview page of the Admin UI could significantly undercount the number of ranges. [#23746][#23746]
-- The client URL reported by [`cockroach start`](https://www.cockroachlabs.com/docs/v2.0/start-a-node) no longer includes the `application_name` option. [#23894][#23894]
-
-
Doc Updates
-
-- Added a local cluster tutorial demonstrating [JSON Support](https://www.cockroachlabs.com/docs/v2.0/demo-json-support). [#2716](https://github.com/cockroachdb/docs/pull/2716)
-- Added full documentation for the [`VALIDATE CONSTRAINT`](https://www.cockroachlabs.com/docs/v2.0/validate-constraint) statement. [#2730](https://github.com/cockroachdb/docs/pull/2730)
-
-
-
-
Contributors
-
-This release includes 64 merged PRs by 23 authors. We would like to thank all contributors from the CockroachDB community, with special thanks to first-time contributor Bob Vawter.
-
-
-
-Release Date: {{ include.release_date | date: "%B %-d, %Y" }}
-
-This is the first release candidate for CockroachDB v2.0. All known bugs have either been fixed or pushed to a future release, with large bugs documented as [known limitations](https://www.cockroachlabs.com/docs/v2.0/known-limitations).
-
-- Improved the **Node Map** to provide guidance when an enterprise license or additional configuration is required. [#24271][#24271]
-- Bug fixes and stability improvements.
-
-
Admin UI Changes
-
-- Improved the **Node Map** to provide guidance when an enterprise license or additional configuration is required. [#24271][#24271]
-- Added the available storage capacity to the **Cluster Overview** metrics. [#24254][#24254]
-
-
Bug Fixes
-
-- Fixed a bug in [`RESTORE`](https://www.cockroachlabs.com/docs/v2.0/restore) that could lead to missing rows if the `RESTORE` was interrupted. [#24089][#24089]
-- New nodes running CockroachDB v2.0 can now join clusters that contain nodes running v1.1. [#24257][#24257]
-- Fixed a crash in [`cockroach zone ls`](https://www.cockroachlabs.com/docs/v2.0/configure-replication-zones) that would happen if a table with a zone config on it had been deleted but not yet garbage collected. (This was broken in v2.0 alphas, not in v1.1.) [#24180][#24180]
-- Fixed a bug where zooming on the **Node Map** could break after zooming out to the maximum extent. [#24183][#24183]
-- Fixed a crash while performing rolling restarts. [#24260][#24260]
-- Fixed a bug where [privileges](https://www.cockroachlabs.com/docs/v2.0/privileges) were sometimes set incorrectly after upgrading from an older release. [#24393][#24393]
-
-
-
-
Contributors
-
-This release includes 11 merged PRs by 10 authors. We would like to thank all contributors from the CockroachDB community, with special thanks to first-time contributor Vijay Karthik.
-
-
-
-Release Date: {{ include.release_date | date: "%B %-d, %Y" }}
-
-With the release of CockroachDB v2.0, we’ve made significant performance improvements, expanded our PostgreSQL compatibility by adding support for JSON (among other types), and provided functionality for managing multi-regional clusters in production.
-
-- Read more about these changes in the [v2.0 blog post](https://www.cockroachlabs.com/blog/cockroachdb-2-0-release/).
-- Check out a [summary of the most significant user-facing changes](#v2-0-0-summary).
-- Then [upgrade to CockroachDB v2.0](https://www.cockroachlabs.com/docs/v2.0/upgrade-cockroach-version).
-
-
Summary
-
-This section summarizes the most significant user-facing changes in v2.0.0. For a complete list of features and changes, including bug fixes and performance improvements, see the [release notes]({% link releases/index.md %}#testing-releases) for previous testing releases.
-
-- [Enterprise Features](#v2-0-0-enterprise-features)
-- [Core Features](#v2-0-0-core-features)
-- [Backward-Incompatible Changes](#v2-0-0-backward-incompatible-changes)
-- [Known Limitations](#v2-0-0-known-limitations)
-- [Documentation Updates](#v2-0-0-documentation-updates)
-
-
-
-
Enterprise Features
-
-These new features require an [enterprise license](https://www.cockroachlabs.com/docs/v2.0/enterprise-licensing). You can [register for a 30-day trial license](https://www.cockroachlabs.com/get-cockroachdb/enterprise/).
-
-Feature | Description
---------|------------
-[Table Partitioning](https://www.cockroachlabs.com/docs/v2.0/partitioning) | Table partitioning gives you row-level control of how and where your data is stored. This feature can be used to keep data close to users, thereby reducing latency, or to store infrequently-accessed data on slower and cheaper storage, thereby reducing costs.
-[Node Map](https://www.cockroachlabs.com/docs/v2.0/enable-node-map) | The **Node Map** in the Admin UI visualizes the geographical configuration of a multi-region cluster by plotting the node localities on a world map. This feature provides real-time cluster metrics, with the ability to drill down to individual nodes to monitor and troubleshoot cluster health and performance.
-[Role-Based Access Control](https://www.cockroachlabs.com/docs/v2.0/roles) | Roles simplify access control by letting you assign SQL privileges to groups of users rather than to individuals.
-[Point-in-time Backup/Restore](https://www.cockroachlabs.com/docs/v2.0/restore#point-in-time-restore-new-in-v2-0) (Beta) | Data can now be restored as it existed at a specific point in time within the [revision history of a backup](https://www.cockroachlabs.com/docs/v2.0/backup#backups-with-revision-history-new-in-v2-0).<br><br>This is a **beta** feature. It is currently undergoing continued testing. Please [file a GitHub issue](https://www.cockroachlabs.com/docs/v2.0/file-an-issue) with us if you identify a bug.
-
-
Core Features
-
-These new features are freely available in the core version and do not require an enterprise license.
-
-
SQL
-
-Feature | Description
---------|------------
-[JSON Support](https://www.cockroachlabs.com/docs/v2.0/demo-json-support) | The [`JSONB`](https://www.cockroachlabs.com/docs/v2.0/jsonb) data type and [inverted indexes](https://www.cockroachlabs.com/docs/v2.0/inverted-indexes) give you the flexibility to store and efficiently query semi-structured data.
-[Sequences](https://www.cockroachlabs.com/docs/v2.0/create-sequence) | Sequences generate sequential integers according to defined rules. They are generally used for creating numeric primary keys.
-[SQL Audit Logging](https://www.cockroachlabs.com/docs/v2.0/sql-audit-logging) (Experimental) | SQL audit logging gives you detailed information about queries being executed against your system. This feature is especially useful when you want to log all queries that are run against a table containing personally identifiable information (PII).<br><br>This is an **experimental** feature. Its interface and output are subject to change.
-[Common Table Expressions](https://www.cockroachlabs.com/docs/v2.0/common-table-expressions) | Common Table Expressions (CTEs) simplify the definition and use of subqueries. They can be used in combination with [`SELECT` clauses](https://www.cockroachlabs.com/docs/v2.0/select-clause) and [`INSERT`](https://www.cockroachlabs.com/docs/v2.0/insert), [`DELETE`](https://www.cockroachlabs.com/docs/v2.0/delete), [`UPDATE`](https://www.cockroachlabs.com/docs/v2.0/update) and [`UPSERT`](https://www.cockroachlabs.com/docs/v2.0/upsert) statements.
-[Computed Columns](https://www.cockroachlabs.com/docs/v2.0/computed-columns) | Computed columns store data generated from other columns by an expression that's included in the column definition. They are especially useful in combination with [table partitioning](https://www.cockroachlabs.com/docs/v2.0/partitioning), [`JSONB`](https://www.cockroachlabs.com/docs/v2.0/jsonb) columns, and [secondary indexes](https://www.cockroachlabs.com/docs/v2.0/indexes).
-[Foreign Key Actions](https://www.cockroachlabs.com/docs/v2.0/foreign-key#foreign-key-actions-new-in-v2-0) | The `ON UPDATE` and `ON DELETE` foreign key actions control what happens to a constrained column when the column it's referencing (the foreign key) is deleted or updated.
-[Virtual Schemas](https://www.cockroachlabs.com/docs/v2.0/sql-name-resolution#logical-schemas-and-namespaces) | For PostgreSQL compatibility, CockroachDB now supports a three-level structure for names: database name > virtual schema name > object name. The new [`SHOW SCHEMAS`](https://www.cockroachlabs.com/docs/v2.0/show-schemas) statement can be used to list all virtual schemas for a given database.
-[`IMPORT`](https://www.cockroachlabs.com/docs/v2.0/import) | The `IMPORT` statement now imports tabular data in a fully distributed fashion, and import jobs can now be [paused](https://www.cockroachlabs.com/docs/v2.0/pause-job), [resumed](https://www.cockroachlabs.com/docs/v2.0/resume-job), and [cancelled](https://www.cockroachlabs.com/docs/v2.0/cancel-job).
-[`INET`](https://www.cockroachlabs.com/docs/v2.0/inet) | The `INET` data type stores an IPv4 or IPv6 address.
-[`TIME`](https://www.cockroachlabs.com/docs/v2.0/time) | The `TIME` data type stores the time of day without a time zone.
-
-
Operations
-
-Feature | Description
---------|------------
-[Node Readiness Endpoint](https://www.cockroachlabs.com/docs/v2.0/monitoring-and-alerting#health-ready-1) | The new `/health?ready=1` endpoint returns an `HTTP 503 Service Unavailable` status response code with an error when a node is being decommissioned or is in the process of shutting down and is therefore not able to accept SQL connections and execute queries. This is especially useful for making sure [load balancers](https://www.cockroachlabs.com/docs/v2.0/recommended-production-settings#load-balancing) do not direct traffic to nodes that are live but not "ready", which is a necessary check during [rolling upgrades](https://www.cockroachlabs.com/docs/v2.0/upgrade-cockroach-version).
-[Node Decommissioning](https://www.cockroachlabs.com/docs/v2.0/remove-nodes) | Nodes that have been decommissioned and stopped no longer appear in Admin UI and command-line interface metrics.
-[Per-Replica Constraints in Replication Zones](https://www.cockroachlabs.com/docs/v2.0/configure-replication-zones#scope-of-constraints) | When defining a replication zone, unique constraints can be defined for each affected replica, meaning you can effectively pick the exact location of each replica.
-[Replication Zone for "Liveness" Range](https://www.cockroachlabs.com/docs/v2.0/configure-replication-zones#create-a-replication-zone-for-a-system-range) | Clusters now come with a pre-defined replication zone for the "liveness" range, which contains the authoritative information about which nodes are live at any given time.
-[Timeseries Data Controls](https://www.cockroachlabs.com/docs/v2.0/operational-faqs#can-i-reduce-or-disable-the-storage-of-timeseries-data-new-in-v2-0) | It is now possible to reduce the amount of timeseries data stored by a CockroachDB cluster or to disable the storage of timeseries data entirely. The latter is recommended only when using a third-party tool such as Prometheus for timeseries monitoring.
-
-
Backward-Incompatible Changes
-
-Change | Description
--------|------------
-Replication Zones | [Positive replication zone constraints](https://www.cockroachlabs.com/docs/v2.0/configure-replication-zones#replication-constraints) no longer work. Any existing positive constraints will be ignored. This change should not impact existing deployments since positive constraints have not been documented or supported for some time.
-Casts from `BYTES` to `STRING` | Casting between these types now works the same way as in PostgreSQL. New functions `encode()` and `decode()` are available to replace the former functionality.
-`NaN` Comparisons | `NaN` comparisons have been redefined to be compatible with PostgreSQL. `NaN` is now equal to itself and sorts before all other non-NULL values.
-[`DROP USER`](https://www.cockroachlabs.com/docs/v2.0/drop-user) | It is no longer possible to drop a user with grants; the user's grants must first be [revoked](https://www.cockroachlabs.com/docs/v2.0/revoke).
-[Cluster Settings](https://www.cockroachlabs.com/docs/v2.0/cluster-settings) | The obsolete `kv.gc.batch_size` cluster setting has been removed.
-Environment Variables | The `COCKROACH_METRICS_SAMPLE_INTERVAL` environment variable has been removed. Users that relied on it should reduce the value for the `timeseries.resolution_10s.storage_duration` [cluster setting](https://www.cockroachlabs.com/docs/v2.0/cluster-settings) instead.
-[Sequences](https://www.cockroachlabs.com/docs/v2.0/create-sequence) | As of the [v1.2-alpha.20171113](#v1-2-alpha-20171113) release, how sequences are stored in the key-value layer changed. Sequences created prior to that release must therefore be dropped and recreated. Since a sequence cannot be dropped while it is being used in a column's [`DEFAULT`](https://www.cockroachlabs.com/docs/v2.0/default-value) expression, those expressions must be dropped before the sequence is dropped, and recreated after the sequence is recreated. The `setval()` function can be used to set the value of a sequence to what it was previously. A sketch follows this table.
-[Reserved Keywords](https://www.cockroachlabs.com/docs/v2.0/sql-grammar#reserved_keyword) | `ROLE`, `VIRTUAL`, and `WORK` have been added as reserved keywords and are no longer allowed as [identifiers](https://www.cockroachlabs.com/docs/v2.0/keywords-and-identifiers).
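-
-A minimal sketch of the sequence recreation described in the table above, using a hypothetical table `t`, column `id`, and sequence `seq`:
-
-~~~ sql
-ALTER TABLE t ALTER COLUMN id DROP DEFAULT;  -- drop the expression that uses the sequence
-DROP SEQUENCE seq;
-CREATE SEQUENCE seq;
-SELECT setval('seq', 1000);                  -- restore the sequence's previous value
-ALTER TABLE t ALTER COLUMN id SET DEFAULT nextval('seq');
-~~~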
-
-
Known Limitations
-
-For information about limitations we've identified in CockroachDB v2.0, with suggested workarounds where applicable, see [Known Limitations](https://www.cockroachlabs.com/docs/v2.0/known-limitations).
-
-
Documentation Updates
-
-Topic | Description
-------|------------
-[Production Checklist](https://www.cockroachlabs.com/docs/v2.0/recommended-production-settings) | This topic now provides cloud-specific hardware, security, load balancing, monitoring and alerting, and clock synchronization recommendations as well as expanded cluster topology guidance. Related [deployment tutorials](https://www.cockroachlabs.com/docs/v2.0/manual-deployment) have been enhanced with much of this information as well.
-[Monitoring and Alerting](https://www.cockroachlabs.com/docs/v2.0/monitoring-and-alerting) | This new topic explains available tools for monitoring the overall health and performance of a cluster and critical events and metrics to alert on.
-[Common Errors](https://www.cockroachlabs.com/docs/v2.0/common-errors) | This new topic helps you understand and resolve errors you might encounter, including retryable and ambiguous errors for transactions.
-[SQL Performance](https://www.cockroachlabs.com/docs/v2.0/performance-best-practices-overview) | This new topic provides best practices for optimizing SQL performance in CockroachDB.
-[SQL Standard Comparison](https://www.cockroachlabs.com/docs/v2.0/sql-feature-support) | This new topic lists which SQL standard features are supported, partially-supported, and unsupported by CockroachDB.
-[Selection Queries](https://www.cockroachlabs.com/docs/v2.0/selection-queries) | This new topic explains the function and syntax of queries and operations involved in reading and processing data in CockroachDB, alongside more detailed information about [ordering query results](https://www.cockroachlabs.com/docs/v2.0/query-order), [limiting query results](https://www.cockroachlabs.com/docs/v2.0/limit-offset), [subqueries](https://www.cockroachlabs.com/docs/v2.0/subqueries), and [join expressions](https://www.cockroachlabs.com/docs/v2.0/joins).
-
-
General Changes
-
-- The new `server.clock.persist_upper_bound_interval` [cluster setting](https://www.cockroachlabs.com/docs/v2.0/cluster-settings) can be used to guarantee monotonic wall time across server restarts; see the example after this list. [#24624][#24624]
-- The new `server.clock.forward_jump_check_enabled` [cluster setting](https://www.cockroachlabs.com/docs/v2.0/cluster-settings) can be used to cause nodes to panic on clock jumps. [#24606][#24606]
-- Prevented execution errors reporting a missing `libtinfo.so.5` on Linux systems. [#24531][#24531]
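-
-For example, where `500ms` is an illustrative value rather than a recommendation:
-
-~~~ sql
-SET CLUSTER SETTING server.clock.persist_upper_bound_interval = '500ms';
-~~~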
-
-
Enterprise Edition Changes
-
-- It is now possible to [`RESTORE`](https://www.cockroachlabs.com/docs/v2.0/restore) views when using the `into_db` option. [#24590][#24590] {% comment %}doc{% endcomment %}
-- The new `jobs.registry.leniency` [cluster setting](https://www.cockroachlabs.com/docs/v2.0/cluster-settings) can be used to allow long-running [`IMPORT`](https://www.cockroachlabs.com/docs/v2.0/import) jobs to survive temporary node saturation. [#24505][#24505]
-- Relaxed the limitation on using [`BACKUP`](https://www.cockroachlabs.com/docs/v2.0/backup) in a mixed version cluster. [#24515][#24515]
-
-
SQL Language Changes
-
-- Improved the error message returned on object creation when no current database is set or only invalid schemas are in the `search_path`. [#24812][#24812]
-- The `current_schema()` and `current_schemas()` [built-in functions](https://www.cockroachlabs.com/docs/v2.0/functions-and-operators) now only consider valid schemas, like PostgreSQL does. [#24758][#24758]
-- The experimental SQL features `SHOW TESTING_RANGES` and `ALTER ... TESTING_RELOCATE` have been renamed [`SHOW EXPERIMENTAL_RANGES`](https://www.cockroachlabs.com/docs/v2.0/show-experimental-ranges) and `ALTER ... EXPERIMENTAL_RELOCATE`. [#24699][#24699]
-- `ROLE`, `VIRTUAL`, and `WORK` are no longer reserved keywords and can again be used as unrestricted names. [#24665][#24665] [#24549][#24549]
-
-
Command-Line Changes
-
-- When [`cockroach gen haproxy`](https://www.cockroachlabs.com/docs/v2.0/generate-cockroachdb-resources) is run, if an `haproxy.cfg` file already exists in the current directory, it now gets fully overwritten instead of potentially resulting in an unusable config. [#24336][#24336] {% comment %}doc{% endcomment %}
-
-
Bug Fixes
-
-- Fixed a bug when using fractional units (e.g., `0.5GiB`) for the `--cache` and `--sql-max-memory` flags of [`cockroach start`](https://www.cockroachlabs.com/docs/v2.0/start-a-node). [#24388][#24388]
-- Fixed the handling of role membership lookups within transactions. [#24334][#24334]
-- Fixed a bug causing some lookup join queries to report incorrect type errors. [#24825][#24825]
-- `ALTER INDEX ... RENAME` can now be used on the primary index. [#24777][#24777]
-- Fixed a panic involving [inverted index](https://www.cockroachlabs.com/docs/v2.0/inverted-indexes) queries using the `->` operator. [#24596][#24596]
-- Fixed a panic involving [inverted index](https://www.cockroachlabs.com/docs/v2.0/inverted-indexes) queries over `NULL`. [#24566][#24566]
-- Fixed a bug preventing [inverted index](https://www.cockroachlabs.com/docs/v2.0/inverted-indexes) queries that have a root with a single entry or element but multiple children overall. [#24376][#24376]
-- [`JSONB`](https://www.cockroachlabs.com/docs/v2.0/jsonb) values can now be cast to [`STRING`](https://www.cockroachlabs.com/docs/v2.0/string) values. [#24553][#24553]
-- Prevented executing distributed SQL operations on draining nodes. [#23916][#23916]
-- Fixed a panic caused by a `WHERE` condition that requires a column to equal a specific value and at the same time equal another column. [#24517][#24517]
-- Fixed a panic caused by passing a `Name` type to `has_database_privilege()`. [#24270][#24270]
-- Fixed a bug causing index backfills to fail in a loop after exceeding the GC TTL of their source table. [#24427][#24427]
-- Fixed a panic caused by null config zones in diagnostics reporting. [#24526][#24526]
-
-
Performance Improvements
-
-- Some [`SELECT`s](https://www.cockroachlabs.com/docs/v2.0/select-clause) with limits no longer require a second low-level scan, resulting in much faster execution. [#24796][#24796]
-
-
General Changes
-
-- The header of new log files generated by [`cockroach start`](https://www.cockroachlabs.com/docs/v2.0/start-a-node) will now include the cluster ID once it has been determined. [#24982][#24982]
-- The cluster ID is now reported with tag `[config]` in the first log file, not only when log files are rotated. [#24982][#24982]
-- Stopped spamming the server logs with "error closing gzip response writer" messages. [#25108][#25108]
-
-
SQL Language Changes
-
-- Added more ways to specify an index name for statements that require one (e.g., `DROP INDEX`, `ALTER INDEX ... RENAME`, etc.), improving PostgreSQL compatibility; see the sketch after this list. [#24817][#24817] {% comment %}doc{% endcomment %}
-- Clarified the error message produced upon accessing a virtual schema with no database prefix (e.g., when `database` is not set). [#24809][#24809]
-- `STORED` is no longer a reserved keyword and can again be used as an unrestricted name for databases, tables and columns. [#24864][#24864] {% comment %}doc{% endcomment %}
-- Errors detected by `SHOW SYNTAX` are now tracked internally like other SQL errors. [#24900][#24900]
-- [`IMPORT`](https://www.cockroachlabs.com/docs/v2.0/import) now supports hex-encoded byte literals for [`BYTES`](https://www.cockroachlabs.com/docs/v2.0/bytes) columns. [#25063][#25063] {% comment %}doc{% endcomment %}
-- [Collated strings](https://www.cockroachlabs.com/docs/v2.0/collate) can now be used in `WHERE` clauses on indexed columns. [#25175][#25175] {% comment %}doc{% endcomment %}
-- The Level and Type columns of [`EXPLAIN (VERBOSE)`](https://www.cockroachlabs.com/docs/v2.0/explain) results are now hidden; if they are needed, they can be `SELECT`ed explicitly. [#25206][#25206] {% comment %}doc{% endcomment %}
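-
-A sketch of index name forms that should now be accepted, using a hypothetical table `t` and index `t_a_idx`; the `table@index` form was already supported:
-
-~~~ sql
-ALTER INDEX t@t_a_idx RENAME TO t_a_idx2; -- CockroachDB's table@index form
-DROP INDEX t_a_idx2;                      -- PostgreSQL-style, by index name alone
-~~~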
-
-
Bug Fixes
-
-- It is once again possible to use a simply qualified table name in qualified stars (e.g., `SELECT mydb.kv.* FROM kv`) for compatibility with CockroachDB v1.x. [#24842][#24842]
-- Fixed a scenario in which a node could deadlock while starting up. [#24831][#24831]
-- Ranges in partitioned tables now properly split to respect their configured maximum size. [#24912][#24912]
-- Some kinds of schema changes that previously retried in a permanent error loop now correctly fail. [#25015][#25015]
-- When [adding a column](https://www.cockroachlabs.com/docs/v2.0/add-column), CockroachDB now verifies that the column is referenced by no more than one foreign key. Existing tables with a column that is used by multiple foreign key constraints should be manually changed to have at most one foreign key per column. [#25079][#25079]
-- CockroachDB now properly reports an error when using the internal-only functions `final_variance()` and `final_stddev()` instead of causing a crash. [#25218][#25218]
-- The `constraint_schema` column in `information_schema.constraint_column_usage` now displays the constraint's schema instead of its catalog. [#25220][#25220]
-- Fixed a panic caused by certain queries containing `OFFSET` and `ORDER BY`. [#25238][#25238]
-- `BEGIN; RELEASE SAVEPOINT` now returns an error instead of causing a crash. [#25251][#25251]
-- Fixed a rare `segfault` that occurred when reading from an invalid memory location returned from C++. [#25361][#25361]
-- Fixed a bug with `IS DISTINCT FROM` not returning `NULL` values that pass the condition in some cases. [#25339][#25339]
-- Restarting a CockroachDB server on Windows no longer fails due to file system locks in the store directory. [#25439][#25439]
-- Prevented the consistency checker from deadlocking. This would previously manifest itself as a steady number of replicas queued for consistency checking on one or more nodes and would resolve by restarting the affected nodes. [#25474][#25474]
-- Fixed problems with [`IMPORT`](https://www.cockroachlabs.com/docs/v2.0/import) sometimes failing after node decommissioning. [#25307][#25307]
-- Fixed a bug causing `PREPARE` to hang when run in the same transaction as a `CREATE TABLE` statement. [#24874][#24874]
-
-
Build Changes
-
-- Build metadata, like the commit SHA and build time, is properly injected into the binary when using Go 1.10 and building from a symlink. [#25062][#25062]
-
-
Doc Updates
-
-- Improved the documentation of the `now()`, `current_time()`, `current_date()`, `current_timestamp()`, `clock_timestamp()`, `statement_timestamp()`, `cluster_logical_timestamp()`, and `age()` [built-in functions](https://www.cockroachlabs.com/docs/v2.0/functions-and-operators). [#25383][#25383] [#25145][#25145]
-
-
-
-
Contributors
-
-This release includes 42 merged PRs by 16 authors. We would like to thank the following contributors from the CockroachDB community:
-
-- Garvit Juniwal
-- Jingguo Yao
-
-
General Changes
-
-- The new `compactor.threshold_bytes` and `compactor.max_record_age` [cluster settings](https://www.cockroachlabs.com/docs/v2.0/cluster-settings) can be used to configure the compactor. [#25458][#25458]
-- The new `cluster.preserve_downgrade_option` [cluster setting](https://www.cockroachlabs.com/docs/v2.0/cluster-settings) makes it possible to preserve the option to downgrade after [performing a rolling upgrade to v2.1](https://www.cockroachlabs.com/docs/stable/upgrade-cockroach-version). [#25811][#25811]
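-
-For example, a sketch of preserving the option to downgrade before a rolling upgrade to v2.1; the value names the version to which downgrading remains possible:
-
-~~~ sql
-SET CLUSTER SETTING cluster.preserve_downgrade_option = '2.0';
-~~~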
-
-
SQL Language Changes
-
-- Prevented [`DROP TABLE`](https://www.cockroachlabs.com/docs/v2.0/drop-table) from using too much CPU. [#25852][#25852]
-
-
Command-Line Changes
-
-- The [`cockroach sql`](https://www.cockroachlabs.com/docs/v2.0/use-the-built-in-sql-client) command no longer prompts for a password when a certificate is provided. [#26232][#26232]
-- The [`cockroach quit`](https://www.cockroachlabs.com/docs/v2.0/stop-a-node) command now prints warning messages to the standard error stream, not to standard output. [#26163][#26163]
-
-
Bug Fixes
-
-- Prevented the internal gossip network from being partitioned by making it much less likely that nodes in the network could forget about each other. [#25521][#25521]
-- Prevented spurious `BudgetExceededErrors` for some queries that read a lot of JSON data from disk. [#25719][#25719]
-- Fixed a crash in some cases when using a `GROUP BY` with `HAVING`. [#25654][#25654]
-- Fixed a crash caused by inserting data into a table with [computed columns](https://www.cockroachlabs.com/docs/v2.0/computed-columns) that reference other columns that weren't present in the `INSERT` statement. [#25807][#25807]
-- [`UPSERT`](https://www.cockroachlabs.com/docs/v2.0/upsert) is now properly able to write `NULL` values to every column in tables containing more than one [column family](https://www.cockroachlabs.com/docs/v2.0/column-families). [#26181][#26181]
-- Fixed a bug where a long-running query running from one day to the next would not always produce the right value for `current_date()`. [#26413][#26413]
-- Fixed a bug where [`cockroach quit`](https://www.cockroachlabs.com/docs/v2.0/stop-a-node) would erroneously fail even though the node already successfully shut down. [#26163][#26163]
-- Rows larger than 8192 bytes are now supported by the "copy from" protocol. [#26641][#26641]
-- Trying to "copy from stdin" into a table that doesn't exist no longer drops the connection. [#26641][#26641]
-- Previously, expired compactions could stay in the queue forever. Now, they are removed when they expire. [#26659][#26659]
-
-
Performance Improvements
-
-- The performance impact of dropping a large table has been substantially reduced. [#26615][#26615]
-
-
Doc Updates
-
-- Documented [special syntax forms](https://www.cockroachlabs.com/docs/v2.0/functions-and-operators#special-syntax-forms) of built-in SQL functions and [conditional and function-like operators](https://www.cockroachlabs.com/docs/v2.0/functions-and-operators#conditional-and-function-like-operators), and updated the [SQL operator order of precedence](https://www.cockroachlabs.com/docs/v2.0/functions-and-operators#operators). [#3192][#3192]
-- Added best practices on [understanding and avoiding transaction contention](https://www.cockroachlabs.com/docs/v2.0/performance-best-practices-overview#understanding-and-avoiding-transaction-contention) and a related [FAQ](https://www.cockroachlabs.com/docs/v2.0/operational-faqs#why-would-increasing-the-number-of-nodes-not-result-in-more-operations-per-second). [#3156][#3156]
-- Improved the documentation of [`AS OF SYSTEM TIME`](https://www.cockroachlabs.com/docs/v2.0/as-of-system-time). [#3155][#3155]
-- Expanded the [manual deployment](https://www.cockroachlabs.com/docs/v2.0/manual-deployment) guides to cover running a sample workload against a cluster. [#3149][#3149]
-- Added FAQs on [generating unique, slowly increasing sequential numbers](https://www.cockroachlabs.com/docs/v2.0/sql-faqs#how-do-i-generate-unique-slowly-increasing-sequential-numbers-in-cockroachdb) and [the differences between `UUID`, sequences, and `unique_rowid()`](https://www.cockroachlabs.com/docs/v2.0/sql-faqs#what-are-the-differences-between-uuid-sequences-and-unique_rowid). [#3104][#3104]
-
-
-
-- [`CHECK`](https://www.cockroachlabs.com/docs/v2.0/check) constraints are now checked when updating a conflicting row in [`INSERT ... ON CONFLICT DO UPDATE`](https://www.cockroachlabs.com/docs/v2.0/insert#update-values-on-conflict) statements (see the sketch after this list). [#26699][#26699] {% comment %}doc{% endcomment %}
-- An error is now returned to the user instead of panicking when trying to add a column with a [`UNIQUE`](https://www.cockroachlabs.com/docs/v2.0/unique) constraint when that column's type is not indexable. [#26728][#26728] {% comment %}doc{% endcomment %}
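-
-For the `CHECK` constraint change above, a minimal sketch (table and column names are illustrative):
-
-~~~ sql
-CREATE TABLE kv (
-  k INT PRIMARY KEY,
-  v INT CHECK (v > 0)
-);
-
-INSERT INTO kv VALUES (1, 10);
-
--- The conflicting row is updated, and the CHECK constraint is now
--- enforced: this statement fails instead of writing v = -1.
-INSERT INTO kv VALUES (1, 20) ON CONFLICT (k) DO UPDATE SET v = -1;
-~~~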
-
-
Command-Line Changes
-
-- CockroachDB now computes the correct number of replicas on down nodes. Therefore, when [decommissioning nodes](https://www.cockroachlabs.com/docs/v2.0/remove-nodes) via the [`cockroach node decommission`](https://www.cockroachlabs.com/docs/v2.0/view-node-details) command, the `--wait=all` option no longer hangs indefinitely when there are down nodes. As a result, the `--wait=live` option is no longer necessary and has been deprecated. [#27158][#27158]
-
-
Bug Fixes
-
-- Fixed a typo on the **Node Map** screen of the Admin UI. [#27129][#27129]
-- Fixed a rare crash on node [decommissioning](https://www.cockroachlabs.com/docs/v2.0/remove-nodes). [#26717][#26717]
-- Joins across two [interleaved tables](https://www.cockroachlabs.com/docs/v2.0/interleave-in-parent) no longer return incorrect results under certain circumstances when the equality columns aren't all part of the interleaved columns. [#26832][#26832]
-- Successes of time series maintenance queue operations are no longer counted as errors in the **Metrics** dashboard of the Admin UI. [#26820][#26820]
-- Prevented a situation in which ranges repeatedly fail to perform a split. [#26944][#26944]
-- Fixed a crash that could occur when distributed `LIMIT` queries were run on a cluster with at least one unhealthy node. [#26953][#26953]
-- Failed [`IMPORT`](https://www.cockroachlabs.com/docs/v2.0/import)s now begin cleaning up partially imported data immediately and do so more quickly. [#26986][#26986]
-- Alleviated a scenario in which a large number of uncommitted Raft commands could cause memory pressure at startup time. [#27024][#27024]
-- The pg-specific syntax `SET transaction_isolation` now supports settings other than `SNAPSHOT`. This bug did not affect the standard SQL `SET TRANSACTION ISOLATION LEVEL`. [#27047][#27047]
-- The `DISTINCT ON` clause is now reported properly in statement statistics. [#27222][#27222]
-- Fixed a crash when trying to plan certain `UNION ALL` queries. [#27233][#27233]
-- Commands are now abandoned earlier once a deadline has been reached. [#27215][#27215]
-- Fixed a panic in [`IMPORT`](https://www.cockroachlabs.com/docs/v2.0/import) when creating a table using a sequence operation (e.g., `nextval()`) in a column's [DEFAULT](https://www.cockroachlabs.com/docs/v2.0/default-value) expression. [#27294][#27294]
-
-
Doc Updates
-
-- Added a tutorial on [benchmarking CockroachDB with TPC-C](https://www.cockroachlabs.com/docs/v2.0/performance-benchmarking-with-tpc-c). [#3281][#3281]
-- Added `systemd` configs and instructions to [deployment tutorials](https://www.cockroachlabs.com/docs/v2.0/manual-deployment). [#3268][#3268]
-- Updated the [Kubernetes tutorials](https://www.cockroachlabs.com/docs/v2.0/orchestrate-cockroachdb-with-kubernetes) to reflect that pods aren't "Ready" before init. [#3291][#3291]
-
-
-
-
Contributors
-
-This release includes 22 merged PRs by 17 authors. We would like to thank the following contributors from the CockroachDB community, with special thanks to first-time contributor Emmanuel.
-
-- Emmanuel
-- neeral
-
-
-
-- The binary Postgres wire format is now supported for [`INTERVAL`](https://www.cockroachlabs.com/docs/v2.0/interval) values. [#28135][#28135] {% comment %}doc{% endcomment %}
-
-
Bug fixes
-
-- [`IMPORT`](https://www.cockroachlabs.com/docs/v2.0/import) no longer silently converts `\r\n` characters in CSV files into `\n`. [#28225][#28225]
-- Fixed a bug that could cause the row following a deleted row to be skipped during [`BACKUP`](https://www.cockroachlabs.com/docs/v2.0/backup). [#28196][#28196]
-- Limited the size of "batch groups" when committing a batch to RocksDB to avoid rare scenarios in which multi-gigabyte batch groups are created, which can cause a server to run out of memory when replaying the RocksDB log at startup. [#28009][#28009]
-- Prevented unbounded growth of the Raft log caused by a loss of quorum. [#27868][#27868]
-- CockroachDB now properly reports an error when a query attempts to use `ORDER BY` within a function argument list, which is an unsupported feature. [#25147][#25147]
-
-
Doc updates
-
-- Added a [Performance Tuning tutorial](https://www.cockroachlabs.com/docs/v2.0/performance-tuning) that demonstrates essential techniques for getting fast reads and writes in CockroachDB, starting with a single-region deployment and expanding into multiple regions. [#3378][#3378]
-- Expanded the [Production Checklist](https://www.cockroachlabs.com/docs/v2.0/recommended-production-settings#networking) to cover a detailed explanation of network flags and scenarios and updated [production deployment tutorials](https://www.cockroachlabs.com/docs/v2.0/manual-deployment) to encourage the use of `--advertise-host` on node start. [#3352][#3352]
-- Expanded the [Kubernetes tutorials](https://www.cockroachlabs.com/docs/v2.0/orchestrate-cockroachdb-with-kubernetes) to include setting up monitoring and alerting with Prometheus and Alertmanager. [#3370][#3370]
-- Updated the [OpenSSL certificate tutorial](https://www.cockroachlabs.com/docs/v2.0/create-security-certificates-openssl) to allow multiple node certificates with the same subject. [#3423][#3423]
-
-
-
-
Contributors
-
-This release includes 9 merged PRs by 7 authors. We would like to thank the following contributor from the CockroachDB community:
-
-- neeral
-
-
-
-- Fixed a vulnerability in which TLS certificates were not validated correctly for internal RPC interfaces. This vulnerability could allow an unauthenticated user with network access to read and write to the cluster. [#30821](https://github.com/cockroachdb/cockroach/issues/30821)
-
-
Command-line changes
-
-- The [`cockroach zone`](https://www.cockroachlabs.com/docs/v2.0/configure-replication-zones) command is now compatible with CockroachDB v2.1. However, note that `cockroach zone` is also *deprecated* in CockroachDB v2.1 in favor of `ALTER ... CONFIGURE ZONE` and `SHOW ZONE CONFIGURATION` statements to update and view replication zones. [#29632][#29632]
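-
-A minimal sketch of the replacement statements in v2.1 (the zone and `num_replicas` value are illustrative):
-
-~~~ sql
-ALTER RANGE default CONFIGURE ZONE USING num_replicas = 5;
-SHOW ZONE CONFIGURATION FOR RANGE default;
-~~~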
-
-
Bug fixes
-
-- The [**Jobs** page](https://www.cockroachlabs.com/docs/v2.0/admin-ui-jobs-page) now sorts by **Creation Time** by default instead of by **User**. [#30429][#30429]
-- Fixed out-of-memory errors caused by very large raft logs. [#28398][#28398] [#28526][#28526]
-- Fixed a rare scenario where the value written for one system key was seen when another system key was read, leading to the violation of internal invariants. [#28798][#28798]
-- Fixed a memory leak when contended queries time out. [#29100][#29100]
-- Fixed a bug causing index creation to fail under rare circumstances. [#29203][#29203]
-- Fixed a panic that occurred when not all values were present in a composite foreign key. [#30154][#30154]
-- The `ON DELETE CASCADE` and `ON UPDATE CASCADE` [foreign key actions](https://www.cockroachlabs.com/docs/v2.0/foreign-key#foreign-key-actions-new-in-v2-0) no longer cascade through `NULL`s. [#30129][#30129]
-- Fixed the occasional improper processing of the `WITH` operand with `IMPORT`/`BACKUP`/`RESTORE` and [common table expressions](https://www.cockroachlabs.com/docs/v2.0/common-table-expressions). [#30199][#30199]
-- Transaction size limit errors are no longer returned for transactions that have already committed. [#30309][#30309]
-- Fixed a potential infinite loop when the merge joiner encountered an error or cancellation. [#30380][#30380]
-- This release includes the following fixes to the [`cockroach sql`](https://www.cockroachlabs.com/docs/v2.0/use-the-built-in-sql-client) command:
- - The command now properly prints a warning when a `?` character is mistakenly used to receive contextual help in a non-interactive session, instead of crashing. [#28325][#28325]
- - The command now works properly even when the `TERM` environment variable is not set. [#28614][#28614]
- - The command is now properly able to customize the prompt with `~/.editrc` on Linux. [#28614][#28614]
- - The command once again supports copy-pasting special Unicode characters from other documents. [#28614][#28614]
-
-
Performance improvements
-
-- Greatly improved the performance of catching up followers that are behind when Raft logs are large. [#28526][#28526]
-
-
-
-- Fixed a security vulnerability in which data could be leaked from a cluster, or tampered with in a cluster, in secure mode. [#30823][#30823]
-- Fixed a bug where queries could get stuck for seconds or minutes, usually following node restarts. [#31350][#31350]
-- CockroachDB no longer crashes due to a `SIGTRAP` error soon after startup on macOS Mojave. [#31522][#31522]
-- Fixed a bug causing transactions to unnecessarily hit a "too large" error. [#31827][#31827]
-- Fixed a bug causing transactions to appear partially committed. Occasionally, CockroachDB claimed to have failed to commit a transaction when some (or all) of its writes were actually persisted. [#32223][#32223]
-- Fixed a bug where entry application on Raft followers could fall behind entry application on the leader, causing stalls during splits. [#32601][#32601]
-- CockroachDB now properly rejects queries that use an invalid function (e.g., an aggregation) in the `SET` clause of an [`UPDATE`](https://www.cockroachlabs.com/docs/v2.0/update) statement. [#32507][#32507]
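-
-A minimal sketch of the now-rejected pattern (table and column names are illustrative):
-
-~~~ sql
-CREATE TABLE t (a INT PRIMARY KEY, b INT);
-
--- Aggregates such as sum() are not valid in an UPDATE's SET clause;
--- this statement now returns an error instead of being accepted.
-UPDATE t SET b = sum(a) WHERE a = 1;
-~~~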
-
-
Build changes
-
-- CockroachDB can now be built from source on macOS 10.14 (Mojave). [#31310][#31310]
-
-
- **Metric Name:** How the system refers to this metric, e.g., `sql.bytesin`.
-
- **Downsampler:** Combines the individual datapoints over a longer period into a single datapoint. One data point is stored every ten seconds, but for queries over long time spans the backend lowers the resolution of the returned data, perhaps returning only one data point for every minute, five minutes, or even an entire hour in the case of the 30-day view. Options: `AVG` (the average value over the time period), `MIN` (the lowest value seen), `MAX` (the highest value seen), and `SUM` (the sum of all values seen).
-
- **Aggregator:** Combines data points from different nodes. The same operations are available as for the Downsampler: `AVG`, `MIN`, `MAX`, and `SUM`.
-
- **Rate:** Determines how to display the rate of change during the selected time period. Options: `Normal` (the actual recorded value), `Rate` (the rate of change of the value per second), and `Non-negative Rate` (the rate of change, with negative values replaced by 0). Many of the tracked stats are monotonically increasing counters, so each sample is just the total value of that counter; the rate of change of the counter represents the rate of events being counted, which is usually what you want to graph. For example, a counter that reads 100 at one sample and 130 ten seconds later graphs as a rate of 3 events per second. `Non-negative Rate` is needed because the counters are stored in memory: if a node restarts, its counters reset to zero (whereas normally they only increase), which would otherwise produce a spurious negative rate.
-
- **Source:** The set of nodes being queried, which is either the entire cluster or a single, named node.
-
- **Per Node:** If checked, the chart will show a line for each node's value of this metric.
-
-
-
-
diff --git a/src/current/_includes/v2.0/app/BasicSample.java b/src/current/_includes/v2.0/app/BasicSample.java
deleted file mode 100644
index 5fcac022e4e..00000000000
--- a/src/current/_includes/v2.0/app/BasicSample.java
+++ /dev/null
@@ -1,54 +0,0 @@
-import java.sql.*;
-import java.util.Properties;
-
-/*
- Download the Postgres JDBC driver jar from https://jdbc.postgresql.org.
-
- Then, compile and run this example like so:
-
- $ export CLASSPATH=.:/path/to/postgresql.jar
- $ javac BasicSample.java && java BasicSample
-*/
-
-public class BasicSample {
- public static void main(String[] args)
- throws ClassNotFoundException, SQLException {
-
- // Load the Postgres JDBC driver.
- Class.forName("org.postgresql.Driver");
-
- // Connect to the "bank" database.
- Properties props = new Properties();
- props.setProperty("user", "maxroach");
- props.setProperty("sslmode", "require");
- props.setProperty("sslrootcert", "certs/ca.crt");
- props.setProperty("sslkey", "certs/client.maxroach.pk8");
- props.setProperty("sslcert", "certs/client.maxroach.crt");
-
- Connection db = DriverManager
- .getConnection("jdbc:postgresql://127.0.0.1:26257/bank", props);
-
- try {
- // Create the "accounts" table.
- db.createStatement()
- .execute("CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT)");
-
- // Insert two rows into the "accounts" table.
- db.createStatement()
- .execute("INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250)");
-
- // Print out the balances.
- System.out.println("Initial balances:");
- ResultSet res = db.createStatement()
- .executeQuery("SELECT id, balance FROM accounts");
- while (res.next()) {
- System.out.printf("\taccount %s: %s\n",
- res.getInt("id"),
- res.getInt("balance"));
- }
- } finally {
- // Close the database connection.
- db.close();
- }
- }
-}
diff --git a/src/current/_includes/v2.0/app/TxnSample.java b/src/current/_includes/v2.0/app/TxnSample.java
deleted file mode 100644
index dd0851a56cf..00000000000
--- a/src/current/_includes/v2.0/app/TxnSample.java
+++ /dev/null
@@ -1,147 +0,0 @@
-import java.sql.*;
-import java.util.Properties;
-
-/*
- Download the Postgres JDBC driver jar from https://jdbc.postgresql.org.
-
- Then, compile and run this example like so:
-
- $ export CLASSPATH=.:/path/to/postgresql.jar
- $ javac TxnSample.java && java TxnSample
-*/
-
-// Ambiguous whether the transaction committed or not.
-class AmbiguousCommitException extends SQLException{
- public AmbiguousCommitException(Throwable cause) {
- super(cause);
- }
-}
-
-class InsufficientBalanceException extends Exception {}
-
-class AccountNotFoundException extends Exception {
- public int account;
- public AccountNotFoundException(int account) {
- this.account = account;
- }
-}
-
-// A simple interface that provides a retryable lambda expression.
-interface RetryableTransaction {
- public void run(Connection conn)
- throws SQLException, InsufficientBalanceException,
- AccountNotFoundException, AmbiguousCommitException;
-}
-
-public class TxnSample {
- public static RetryableTransaction transferFunds(int from, int to, int amount) {
- return new RetryableTransaction() {
- public void run(Connection conn)
- throws SQLException, InsufficientBalanceException,
- AccountNotFoundException, AmbiguousCommitException {
-
- // Check the current balance.
- ResultSet res = conn.createStatement()
- .executeQuery("SELECT balance FROM accounts WHERE id = "
- + from);
- if(!res.next()) {
- throw new AccountNotFoundException(from);
- }
-
- int balance = res.getInt("balance");
-            if(balance < amount) {
- throw new InsufficientBalanceException();
- }
-
- // Perform the transfer.
- conn.createStatement()
- .executeUpdate("UPDATE accounts SET balance = balance - "
- + amount + " where id = " + from);
- conn.createStatement()
- .executeUpdate("UPDATE accounts SET balance = balance + "
- + amount + " where id = " + to);
- }
- };
- }
-
- public static void retryTransaction(Connection conn, RetryableTransaction tx)
- throws SQLException, InsufficientBalanceException,
- AccountNotFoundException, AmbiguousCommitException {
-
- Savepoint sp = conn.setSavepoint("cockroach_restart");
- while(true) {
- boolean releaseAttempted = false;
- try {
- tx.run(conn);
- releaseAttempted = true;
- conn.releaseSavepoint(sp);
- break;
- }
- catch(SQLException e) {
- String sqlState = e.getSQLState();
-
- // Check if the error code indicates a SERIALIZATION_FAILURE.
- if(sqlState.equals("40001")) {
- // Signal the database that we will attempt a retry.
- conn.rollback(sp);
- } else if(releaseAttempted) {
- throw new AmbiguousCommitException(e);
- } else {
- throw e;
- }
- }
- }
- conn.commit();
- }
-
- public static void main(String[] args)
- throws ClassNotFoundException, SQLException {
-
- // Load the Postgres JDBC driver.
- Class.forName("org.postgresql.Driver");
-
- // Connect to the 'bank' database.
- Properties props = new Properties();
- props.setProperty("user", "maxroach");
- props.setProperty("sslmode", "require");
- props.setProperty("sslrootcert", "certs/ca.crt");
- props.setProperty("sslkey", "certs/client.maxroach.pk8");
- props.setProperty("sslcert", "certs/client.maxroach.crt");
-
- Connection db = DriverManager
- .getConnection("jdbc:postgresql://127.0.0.1:26257/bank", props);
-
-
- try {
- // We need to turn off autocommit mode to allow for
- // multi-statement transactions.
- db.setAutoCommit(false);
-
- // Perform the transfer. This assumes the 'accounts'
- // table has already been created in the database.
- RetryableTransaction transfer = transferFunds(1, 2, 100);
- retryTransaction(db, transfer);
-
- // Check balances after transfer.
- db.setAutoCommit(true);
- ResultSet res = db.createStatement()
- .executeQuery("SELECT id, balance FROM accounts");
- while (res.next()) {
- System.out.printf("\taccount %s: %s\n", res.getInt("id"),
- res.getInt("balance"));
- }
-
- } catch(InsufficientBalanceException e) {
- System.out.println("Insufficient balance");
- } catch(AccountNotFoundException e) {
- System.out.println("No users in the table with id " + e.account);
- } catch(AmbiguousCommitException e) {
- System.out.println("Ambiguous result encountered: " + e);
- } catch(SQLException e) {
- System.out.println("SQLException encountered:" + e);
- } finally {
- // Close the database connection.
- db.close();
- }
- }
-}
diff --git a/src/current/_includes/v2.0/app/activerecord-basic-sample.rb b/src/current/_includes/v2.0/app/activerecord-basic-sample.rb
deleted file mode 100644
index f1d35e1de3a..00000000000
--- a/src/current/_includes/v2.0/app/activerecord-basic-sample.rb
+++ /dev/null
@@ -1,48 +0,0 @@
-require 'active_record'
-require 'activerecord-cockroachdb-adapter'
-require 'pg'
-
-# Connect to CockroachDB through ActiveRecord.
-# In Rails, this configuration would go in config/database.yml as usual.
-ActiveRecord::Base.establish_connection(
- adapter: 'cockroachdb',
- username: 'maxroach',
- database: 'bank',
- host: 'localhost',
- port: 26257,
- sslmode: 'require',
- sslrootcert: 'certs/ca.crt',
- sslkey: 'certs/client.maxroach.key',
- sslcert: 'certs/client.maxroach.crt'
-)
-
-
-# Define the Account model.
-# In Rails, this would go in app/models/ as usual.
-class Account < ActiveRecord::Base
- validates :id, presence: true
- validates :balance, presence: true
-end
-
-# Define a migration for the accounts table.
-# In Rails, this would go in db/migrate/ as usual.
-class Schema < ActiveRecord::Migration[5.0]
- def change
- create_table :accounts, force: true do |t|
- t.integer :balance
- end
- end
-end
-
-# Run the schema migration by hand.
-# In Rails, this would be done via rake db:migrate as usual.
-Schema.new.change()
-
-# Create two accounts, inserting two rows into the accounts table.
-Account.create(id: 1, balance: 1000)
-Account.create(id: 2, balance: 250)
-
-# Retrieve accounts and print out the balances
-Account.all.each do |acct|
- puts "#{acct.id} #{acct.balance}"
-end
diff --git a/src/current/_includes/v2.0/app/basic-sample.c b/src/current/_includes/v2.0/app/basic-sample.c
deleted file mode 100644
index e69de29bb2d..00000000000
diff --git a/src/current/_includes/v2.0/app/basic-sample.clj b/src/current/_includes/v2.0/app/basic-sample.clj
deleted file mode 100644
index b139d27b8e1..00000000000
--- a/src/current/_includes/v2.0/app/basic-sample.clj
+++ /dev/null
@@ -1,31 +0,0 @@
-(ns test.test
- (:require [clojure.java.jdbc :as j]
- [test.util :as util]))
-
-;; Define the connection parameters to the cluster.
-(def db-spec {:subprotocol "postgresql"
- :subname "//localhost:26257/bank"
- :user "maxroach"
- :password ""})
-
-(defn test-basic []
- ;; Connect to the cluster and run the code below with
- ;; the connection object bound to 'conn'.
- (j/with-db-connection [conn db-spec]
-
- ;; Insert two rows into the "accounts" table.
- (j/insert! conn :accounts {:id 1 :balance 1000})
- (j/insert! conn :accounts {:id 2 :balance 250})
-
- ;; Print out the balances.
- (println "Initial balances:")
- (->> (j/query conn ["SELECT id, balance FROM accounts"])
- (map println)
- doall)
-
- ;; The database connection is automatically closed by with-db-connection.
- ))
-
-
-(defn -main [& args]
- (test-basic))
diff --git a/src/current/_includes/v2.0/app/basic-sample.cpp b/src/current/_includes/v2.0/app/basic-sample.cpp
deleted file mode 100644
index 0cdb6f65bfd..00000000000
--- a/src/current/_includes/v2.0/app/basic-sample.cpp
+++ /dev/null
@@ -1,41 +0,0 @@
-// Build with g++ -std=c++11 basic-sample.cpp -lpq -lpqxx
-
-#include <cassert>
-#include <functional>
-#include <iostream>
-#include <stdexcept>
-#include <string>
-#include <pqxx/pqxx>
-
-using namespace std;
-
-int main() {
- try {
- // Connect to the "bank" database.
- pqxx::connection c("postgresql://maxroach@localhost:26257/bank");
-
- pqxx::nontransaction w(c);
-
- // Create the "accounts" table.
- w.exec("CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT)");
-
- // Insert two rows into the "accounts" table.
- w.exec("INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250)");
-
- // Print out the balances.
- cout << "Initial balances:" << endl;
- pqxx::result r = w.exec("SELECT id, balance FROM accounts");
- for (auto row : r) {
-      cout << row[0].as<int>() << ' ' << row[1].as<int>() << endl;
- }
-
-      w.commit(); // Note this doesn't do anything
- // for a nontransaction, but is still required.
- }
- catch (const exception &e) {
- cerr << e.what() << endl;
- return 1;
- }
- cout << "Success" << endl;
- return 0;
-}
diff --git a/src/current/_includes/v2.0/app/basic-sample.cs b/src/current/_includes/v2.0/app/basic-sample.cs
deleted file mode 100644
index 487ab7ba67c..00000000000
--- a/src/current/_includes/v2.0/app/basic-sample.cs
+++ /dev/null
@@ -1,49 +0,0 @@
-using System;
-using System.Data;
-using Npgsql;
-
-namespace Cockroach
-{
- class MainClass
- {
- static void Main(string[] args)
- {
- var connStringBuilder = new NpgsqlConnectionStringBuilder();
- connStringBuilder.Host = "localhost";
- connStringBuilder.Port = 26257;
- connStringBuilder.Username = "maxroach";
- connStringBuilder.Database = "bank";
- Simple(connStringBuilder.ConnectionString);
- }
-
- static void Simple(string connString)
- {
- using(var conn = new NpgsqlConnection(connString))
- {
- conn.Open();
-
- // Create the "accounts" table.
- new NpgsqlCommand("CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT)", conn).ExecuteNonQuery();
-
- // Insert two rows into the "accounts" table.
- using(var cmd = new NpgsqlCommand())
- {
- cmd.Connection = conn;
- cmd.CommandText = "UPSERT INTO accounts(id, balance) VALUES(@id1, @val1), (@id2, @val2)";
- cmd.Parameters.AddWithValue("id1", 1);
- cmd.Parameters.AddWithValue("val1", 1000);
- cmd.Parameters.AddWithValue("id2", 2);
- cmd.Parameters.AddWithValue("val2", 250);
- cmd.ExecuteNonQuery();
- }
-
- // Print out the balances.
- System.Console.WriteLine("Initial balances:");
- using(var cmd = new NpgsqlCommand("SELECT id, balance FROM accounts", conn))
- using(var reader = cmd.ExecuteReader())
- while (reader.Read())
- Console.Write("\taccount {0}: {1}\n", reader.GetValue(0), reader.GetValue(1));
- }
- }
- }
-}
diff --git a/src/current/_includes/v2.0/app/basic-sample.go b/src/current/_includes/v2.0/app/basic-sample.go
deleted file mode 100644
index 6e22c858dbb..00000000000
--- a/src/current/_includes/v2.0/app/basic-sample.go
+++ /dev/null
@@ -1,46 +0,0 @@
-package main
-
-import (
- "database/sql"
- "fmt"
- "log"
-
- _ "github.com/lib/pq"
-)
-
-func main() {
- // Connect to the "bank" database.
- db, err := sql.Open("postgres",
- "postgresql://maxroach@localhost:26257/bank?ssl=true&sslmode=require&sslrootcert=certs/ca.crt&sslkey=certs/client.maxroach.key&sslcert=certs/client.maxroach.crt")
- if err != nil {
- log.Fatal("error connecting to the database: ", err)
- }
- defer db.Close()
-
- // Create the "accounts" table.
- if _, err := db.Exec(
- "CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT)"); err != nil {
- log.Fatal(err)
- }
-
- // Insert two rows into the "accounts" table.
- if _, err := db.Exec(
- "INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250)"); err != nil {
- log.Fatal(err)
- }
-
- // Print out the balances.
- rows, err := db.Query("SELECT id, balance FROM accounts")
- if err != nil {
- log.Fatal(err)
- }
- defer rows.Close()
- fmt.Println("Initial balances:")
- for rows.Next() {
- var id, balance int
- if err := rows.Scan(&id, &balance); err != nil {
- log.Fatal(err)
- }
- fmt.Printf("%d %d\n", id, balance)
- }
-}
diff --git a/src/current/_includes/v2.0/app/basic-sample.js b/src/current/_includes/v2.0/app/basic-sample.js
deleted file mode 100644
index 4e86cb2cbca..00000000000
--- a/src/current/_includes/v2.0/app/basic-sample.js
+++ /dev/null
@@ -1,63 +0,0 @@
-var async = require('async');
-var fs = require('fs');
-var pg = require('pg');
-
-// Connect to the "bank" database.
-var config = {
- user: 'maxroach',
- host: 'localhost',
- database: 'bank',
- port: 26257,
- ssl: {
- ca: fs.readFileSync('certs/ca.crt')
- .toString(),
- key: fs.readFileSync('certs/client.maxroach.key')
- .toString(),
- cert: fs.readFileSync('certs/client.maxroach.crt')
- .toString()
- }
-};
-
-// Create a pool.
-var pool = new pg.Pool(config);
-
-pool.connect(function (err, client, done) {
-
- // Close communication with the database and exit.
- var finish = function () {
- done();
- process.exit();
- };
-
- if (err) {
- console.error('could not connect to cockroachdb', err);
- finish();
- }
- async.waterfall([
- function (next) {
- // Create the 'accounts' table.
- client.query('CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT);', next);
- },
- function (results, next) {
- // Insert two rows into the 'accounts' table.
- client.query('INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250);', next);
- },
- function (results, next) {
- // Print out account balances.
- client.query('SELECT id, balance FROM accounts;', next);
- },
- ],
- function (err, results) {
- if (err) {
- console.error('Error inserting into and selecting from accounts: ', err);
- finish();
- }
-
- console.log('Initial balances:');
- results.rows.forEach(function (row) {
- console.log(row);
- });
-
- finish();
- });
-});
diff --git a/src/current/_includes/v2.0/app/basic-sample.php b/src/current/_includes/v2.0/app/basic-sample.php
deleted file mode 100644
index 4edae09b12a..00000000000
--- a/src/current/_includes/v2.0/app/basic-sample.php
+++ /dev/null
@@ -1,20 +0,0 @@
-<?php
-try {
-  $dbh = new PDO('pgsql:host=localhost;port=26257;dbname=bank;sslmode=require;sslrootcert=certs/ca.crt;sslkey=certs/client.maxroach.key;sslcert=certs/client.maxroach.crt',
-    'maxroach', null, array(
-      PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
- PDO::ATTR_EMULATE_PREPARES => true,
- PDO::ATTR_PERSISTENT => true
- ));
-
- $dbh->exec('INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250)');
-
- print "Account balances:\r\n";
- foreach ($dbh->query('SELECT id, balance FROM accounts') as $row) {
- print $row['id'] . ': ' . $row['balance'] . "\r\n";
- }
-} catch (Exception $e) {
- print $e->getMessage() . "\r\n";
- exit(1);
-}
-?>
diff --git a/src/current/_includes/v2.0/app/basic-sample.py b/src/current/_includes/v2.0/app/basic-sample.py
deleted file mode 100644
index edf1b2617d0..00000000000
--- a/src/current/_includes/v2.0/app/basic-sample.py
+++ /dev/null
@@ -1,37 +0,0 @@
-# Import the driver.
-import psycopg2
-
-# Connect to the "bank" database.
-conn = psycopg2.connect(
- database='bank',
- user='maxroach',
- sslmode='require',
- sslrootcert='certs/ca.crt',
- sslkey='certs/client.maxroach.key',
- sslcert='certs/client.maxroach.crt',
- port=26257,
- host='localhost'
-)
-
-# Make each statement commit immediately.
-conn.set_session(autocommit=True)
-
-# Open a cursor to perform database operations.
-cur = conn.cursor()
-
-# Create the "accounts" table.
-cur.execute("CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT)")
-
-# Insert two rows into the "accounts" table.
-cur.execute("INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250)")
-
-# Print out the balances.
-cur.execute("SELECT id, balance FROM accounts")
-rows = cur.fetchall()
-print('Initial balances:')
-for row in rows:
- print([str(cell) for cell in row])
-
-# Close the database connection.
-cur.close()
-conn.close()
diff --git a/src/current/_includes/v2.0/app/basic-sample.rb b/src/current/_includes/v2.0/app/basic-sample.rb
deleted file mode 100644
index 93f0dc3d20c..00000000000
--- a/src/current/_includes/v2.0/app/basic-sample.rb
+++ /dev/null
@@ -1,31 +0,0 @@
-# Import the driver.
-require 'pg'
-
-# Connect to the "bank" database.
-conn = PG.connect(
- user: 'maxroach',
- dbname: 'bank',
- host: 'localhost',
- port: 26257,
- sslmode: 'require',
- sslrootcert: 'certs/ca.crt',
- sslkey:'certs/client.maxroach.key',
- sslcert:'certs/client.maxroach.crt'
-)
-
-# Create the "accounts" table.
-conn.exec('CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT)')
-
-# Insert two rows into the "accounts" table.
-conn.exec('INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250)')
-
-# Print out the balances.
-puts 'Initial balances:'
-conn.exec('SELECT id, balance FROM accounts') do |res|
- res.each do |row|
- puts row
- end
-end
-
-# Close communication with the database.
-conn.close()
diff --git a/src/current/_includes/v2.0/app/basic-sample.rs b/src/current/_includes/v2.0/app/basic-sample.rs
deleted file mode 100644
index f381d500028..00000000000
--- a/src/current/_includes/v2.0/app/basic-sample.rs
+++ /dev/null
@@ -1,22 +0,0 @@
-extern crate postgres;
-
-use postgres::{Connection, TlsMode};
-
-fn main() {
- let conn = Connection::connect("postgresql://maxroach@localhost:26257/bank", TlsMode::None)
- .unwrap();
-
- // Insert two rows into the "accounts" table.
- conn.execute(
- "INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250)",
- &[],
- ).unwrap();
-
- // Print out the balances.
- println!("Initial balances:");
- for row in &conn.query("SELECT id, balance FROM accounts", &[]).unwrap() {
- let id: i64 = row.get(0);
- let balance: i64 = row.get(1);
- println!("{} {}", id, balance);
- }
-}
diff --git a/src/current/_includes/v2.0/app/before-you-begin.md b/src/current/_includes/v2.0/app/before-you-begin.md
deleted file mode 100644
index dfb97226414..00000000000
--- a/src/current/_includes/v2.0/app/before-you-begin.md
+++ /dev/null
@@ -1,8 +0,0 @@
-1. [Install CockroachDB](install-cockroachdb.html).
-2. Start up a [secure](secure-a-cluster.html) or [insecure](start-a-local-cluster.html) local cluster.
-3. Choose the instructions that correspond to whether your cluster is secure or insecure:
-
-
-
-
-
diff --git a/src/current/_includes/v2.0/app/common-steps.md b/src/current/_includes/v2.0/app/common-steps.md
deleted file mode 100644
index 76dfe6a008c..00000000000
--- a/src/current/_includes/v2.0/app/common-steps.md
+++ /dev/null
@@ -1,36 +0,0 @@
-## Step 2. Start a single-node cluster
-
-For the purpose of this tutorial, you need only one CockroachDB node running in insecure mode:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach start \
---insecure \
---store=hello-1 \
---host=localhost
-~~~
-
-## Step 3. Create a user
-
-In a new terminal, as the `root` user, use the [`cockroach user`](create-and-manage-users.html) command to create a new user, `maxroach`.
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach user set maxroach --insecure
-~~~
-
-## Step 4. Create a database and grant privileges
-
-As the `root` user, use the [built-in SQL client](use-the-built-in-sql-client.html) to create a `bank` database.
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql --insecure -e 'CREATE DATABASE bank'
-~~~
-
-Then [grant privileges](grant.html) to the `maxroach` user.
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql --insecure -e 'GRANT ALL ON DATABASE bank TO maxroach'
-~~~
diff --git a/src/current/_includes/v2.0/app/create-maxroach-user-and-bank-database.md b/src/current/_includes/v2.0/app/create-maxroach-user-and-bank-database.md
deleted file mode 100644
index e887162f380..00000000000
--- a/src/current/_includes/v2.0/app/create-maxroach-user-and-bank-database.md
+++ /dev/null
@@ -1,32 +0,0 @@
-Start the [built-in SQL client](use-the-built-in-sql-client.html):
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql --certs-dir=certs
-~~~
-
-In the SQL shell, issue the following statements to create the `maxroach` user and `bank` database:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE USER IF NOT EXISTS maxroach;
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE DATABASE bank;
-~~~
-
-Give the `maxroach` user the necessary permissions:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> GRANT ALL ON DATABASE bank TO maxroach;
-~~~
-
-Exit the SQL shell:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> \q
-~~~
diff --git a/src/current/_includes/v2.0/app/gorm-basic-sample.go b/src/current/_includes/v2.0/app/gorm-basic-sample.go
deleted file mode 100644
index d18948b80b2..00000000000
--- a/src/current/_includes/v2.0/app/gorm-basic-sample.go
+++ /dev/null
@@ -1,41 +0,0 @@
-package main
-
-import (
- "fmt"
- "log"
-
- // Import GORM-related packages.
- "github.com/jinzhu/gorm"
- _ "github.com/jinzhu/gorm/dialects/postgres"
-)
-
-// Account is our model, which corresponds to the "accounts" database table.
-type Account struct {
- ID int `gorm:"primary_key"`
- Balance int
-}
-
-func main() {
- // Connect to the "bank" database as the "maxroach" user.
- const addr = "postgresql://maxroach@localhost:26257/bank?ssl=true&sslmode=require&sslrootcert=certs/ca.crt&sslkey=certs/client.maxroach.key&sslcert=certs/client.maxroach.crt"
- db, err := gorm.Open("postgres", addr)
- if err != nil {
- log.Fatal(err)
- }
- defer db.Close()
-
- // Automatically create the "accounts" table based on the Account model.
- db.AutoMigrate(&Account{})
-
- // Insert two rows into the "accounts" table.
- db.Create(&Account{ID: 1, Balance: 1000})
- db.Create(&Account{ID: 2, Balance: 250})
-
- // Print out the balances.
- var accounts []Account
- db.Find(&accounts)
- fmt.Println("Initial balances:")
- for _, account := range accounts {
- fmt.Printf("%d %d\n", account.ID, account.Balance)
- }
-}
diff --git a/src/current/_includes/v2.0/app/hibernate-basic-sample/Sample.java b/src/current/_includes/v2.0/app/hibernate-basic-sample/Sample.java
deleted file mode 100644
index ed36ae15ad3..00000000000
--- a/src/current/_includes/v2.0/app/hibernate-basic-sample/Sample.java
+++ /dev/null
@@ -1,64 +0,0 @@
-package com.cockroachlabs;
-
-import org.hibernate.Session;
-import org.hibernate.SessionFactory;
-import org.hibernate.cfg.Configuration;
-
-import javax.persistence.Column;
-import javax.persistence.Entity;
-import javax.persistence.Id;
-import javax.persistence.Table;
-import javax.persistence.criteria.CriteriaQuery;
-
-public class Sample {
- // Create a SessionFactory based on our hibernate.cfg.xml configuration
- // file, which defines how to connect to the database.
- private static final SessionFactory sessionFactory =
- new Configuration()
- .configure("hibernate.cfg.xml")
- .addAnnotatedClass(Account.class)
- .buildSessionFactory();
-
- // Account is our model, which corresponds to the "accounts" database table.
- @Entity
- @Table(name="accounts")
- public static class Account {
- @Id
- @Column(name="id")
- public long id;
-
- @Column(name="balance")
- public long balance;
-
- // Convenience constructor.
- public Account(int id, int balance) {
- this.id = id;
- this.balance = balance;
- }
-
- // Hibernate needs a default (no-arg) constructor to create model objects.
- public Account() {}
- }
-
- public static void main(String[] args) throws Exception {
- Session session = sessionFactory.openSession();
-
- try {
- // Insert two rows into the "accounts" table.
- session.beginTransaction();
- session.save(new Account(1, 1000));
- session.save(new Account(2, 250));
- session.getTransaction().commit();
-
- // Print out the balances.
- CriteriaQuery query = session.getCriteriaBuilder().createQuery(Account.class);
- query.select(query.from(Account.class));
- for (Account account : session.createQuery(query).getResultList()) {
- System.out.printf("%d %d\n", account.id, account.balance);
- }
- } finally {
- session.close();
- sessionFactory.close();
- }
- }
-}
diff --git a/src/current/_includes/v2.0/app/hibernate-basic-sample/build.gradle b/src/current/_includes/v2.0/app/hibernate-basic-sample/build.gradle
deleted file mode 100644
index 36f33d73fe6..00000000000
--- a/src/current/_includes/v2.0/app/hibernate-basic-sample/build.gradle
+++ /dev/null
@@ -1,16 +0,0 @@
-group 'com.cockroachlabs'
-version '1.0'
-
-apply plugin: 'java'
-apply plugin: 'application'
-
-mainClassName = 'com.cockroachlabs.Sample'
-
-repositories {
- mavenCentral()
-}
-
-dependencies {
- compile 'org.hibernate:hibernate-core:5.2.4.Final'
- compile 'org.postgresql:postgresql:42.2.2.jre7'
-}
diff --git a/src/current/_includes/v2.0/app/hibernate-basic-sample/hibernate-basic-sample.tgz b/src/current/_includes/v2.0/app/hibernate-basic-sample/hibernate-basic-sample.tgz
deleted file mode 100644
index d97232aa172..00000000000
Binary files a/src/current/_includes/v2.0/app/hibernate-basic-sample/hibernate-basic-sample.tgz and /dev/null differ
diff --git a/src/current/_includes/v2.0/app/hibernate-basic-sample/hibernate.cfg.xml b/src/current/_includes/v2.0/app/hibernate-basic-sample/hibernate.cfg.xml
deleted file mode 100644
index 2213cc85ea5..00000000000
--- a/src/current/_includes/v2.0/app/hibernate-basic-sample/hibernate.cfg.xml
+++ /dev/null
@@ -1,21 +0,0 @@
-<?xml version='1.0' encoding='utf-8'?>
-<!DOCTYPE hibernate-configuration PUBLIC
-        "-//Hibernate/Hibernate Configuration DTD 3.0//EN"
-        "http://www.hibernate.org/dtd/hibernate-configuration-3.0.dtd">
-<hibernate-configuration>
-    <session-factory>
-        <!-- Database connection settings -->
-        <property name="hibernate.connection.driver_class">org.postgresql.Driver</property>
-        <property name="hibernate.dialect">org.hibernate.dialect.PostgreSQLDialect</property>
-        <property name="hibernate.connection.url"><![CDATA[jdbc:postgresql://127.0.0.1:26257/bank?ssl=true&sslmode=require&sslrootcert=certs/ca.crt&sslkey=certs/client.maxroach.pk8&sslcert=certs/client.maxroach.crt]]></property>
-        <property name="hibernate.connection.username">maxroach</property>
-
-        <!-- Create the database schema on startup -->
-        <property name="hibernate.hbm2ddl.auto">create</property>
-
-        <!-- Echo all executed SQL to stdout -->
-        <property name="hibernate.show_sql">true</property>
-        <property name="hibernate.format_sql">true</property>
-    </session-factory>
-</hibernate-configuration>
diff --git a/src/current/_includes/v2.0/app/insecure/BasicSample.java b/src/current/_includes/v2.0/app/insecure/BasicSample.java
deleted file mode 100644
index 001d38feb48..00000000000
--- a/src/current/_includes/v2.0/app/insecure/BasicSample.java
+++ /dev/null
@@ -1,51 +0,0 @@
-import java.sql.*;
-import java.util.Properties;
-
-/*
- Download the Postgres JDBC driver jar from https://jdbc.postgresql.org.
-
- Then, compile and run this example like so:
-
- $ export CLASSPATH=.:/path/to/postgresql.jar
- $ javac BasicSample.java && java BasicSample
-*/
-
-public class BasicSample {
- public static void main(String[] args)
- throws ClassNotFoundException, SQLException {
-
- // Load the Postgres JDBC driver.
- Class.forName("org.postgresql.Driver");
-
- // Connect to the "bank" database.
- Properties props = new Properties();
- props.setProperty("user", "maxroach");
- props.setProperty("sslmode", "disable");
-
- Connection db = DriverManager
- .getConnection("jdbc:postgresql://127.0.0.1:26257/bank", props);
-
- try {
- // Create the "accounts" table.
- db.createStatement()
- .execute("CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT)");
-
- // Insert two rows into the "accounts" table.
- db.createStatement()
- .execute("INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250)");
-
- // Print out the balances.
- System.out.println("Initial balances:");
- ResultSet res = db.createStatement()
- .executeQuery("SELECT id, balance FROM accounts");
- while (res.next()) {
- System.out.printf("\taccount %s: %s\n",
- res.getInt("id"),
- res.getInt("balance"));
- }
- } finally {
- // Close the database connection.
- db.close();
- }
- }
-}
diff --git a/src/current/_includes/v2.0/app/insecure/TxnSample.java b/src/current/_includes/v2.0/app/insecure/TxnSample.java
deleted file mode 100644
index 11021ec0e71..00000000000
--- a/src/current/_includes/v2.0/app/insecure/TxnSample.java
+++ /dev/null
@@ -1,145 +0,0 @@
-import java.sql.*;
-import java.util.Properties;
-
-/*
- Download the Postgres JDBC driver jar from https://jdbc.postgresql.org.
-
- Then, compile and run this example like so:
-
- $ export CLASSPATH=.:/path/to/postgresql.jar
- $ javac TxnSample.java && java TxnSample
-*/
-
-// Ambiguous whether the transaction committed or not.
-class AmbiguousCommitException extends SQLException{
- public AmbiguousCommitException(Throwable cause) {
- super(cause);
- }
-}
-
-class InsufficientBalanceException extends Exception {}
-
-class AccountNotFoundException extends Exception {
- public int account;
- public AccountNotFoundException(int account) {
- this.account = account;
- }
-}
-
-// A simple interface that provides a retryable lambda expression.
-interface RetryableTransaction {
- public void run(Connection conn)
- throws SQLException, InsufficientBalanceException,
- AccountNotFoundException, AmbiguousCommitException;
-}
-
-public class TxnSample {
- public static RetryableTransaction transferFunds(int from, int to, int amount) {
- return new RetryableTransaction() {
- public void run(Connection conn)
- throws SQLException, InsufficientBalanceException,
- AccountNotFoundException, AmbiguousCommitException {
-
- // Check the current balance.
- ResultSet res = conn.createStatement()
- .executeQuery("SELECT balance FROM accounts WHERE id = "
- + from);
- if(!res.next()) {
- throw new AccountNotFoundException(from);
- }
-
- int balance = res.getInt("balance");
-            if(balance < amount) {
- throw new InsufficientBalanceException();
- }
-
- // Perform the transfer.
- conn.createStatement()
- .executeUpdate("UPDATE accounts SET balance = balance - "
- + amount + " where id = " + from);
- conn.createStatement()
- .executeUpdate("UPDATE accounts SET balance = balance + "
- + amount + " where id = " + to);
- }
- };
- }
-
- public static void retryTransaction(Connection conn, RetryableTransaction tx)
- throws SQLException, InsufficientBalanceException,
- AccountNotFoundException, AmbiguousCommitException {
-
- Savepoint sp = conn.setSavepoint("cockroach_restart");
- while(true) {
- boolean releaseAttempted = false;
- try {
- tx.run(conn);
- releaseAttempted = true;
- conn.releaseSavepoint(sp);
- }
- catch(SQLException e) {
- String sqlState = e.getSQLState();
-
- // Check if the error code indicates a SERIALIZATION_FAILURE.
- if(sqlState.equals("40001")) {
- // Signal the database that we will attempt a retry.
- conn.rollback(sp);
- continue;
- } else if(releaseAttempted) {
- throw new AmbiguousCommitException(e);
- } else {
- throw e;
- }
- }
- break;
- }
- conn.commit();
- }
-
- public static void main(String[] args)
- throws ClassNotFoundException, SQLException {
-
- // Load the Postgres JDBC driver.
- Class.forName("org.postgresql.Driver");
-
- // Connect to the 'bank' database.
- Properties props = new Properties();
- props.setProperty("user", "maxroach");
- props.setProperty("sslmode", "disable");
-
- Connection db = DriverManager
- .getConnection("jdbc:postgresql://127.0.0.1:26257/bank", props);
-
-
- try {
- // We need to turn off autocommit mode to allow for
- // multi-statement transactions.
- db.setAutoCommit(false);
-
- // Perform the transfer. This assumes the 'accounts'
- // table has already been created in the database.
- RetryableTransaction transfer = transferFunds(1, 2, 100);
- retryTransaction(db, transfer);
-
- // Check balances after transfer.
- db.setAutoCommit(true);
- ResultSet res = db.createStatement()
- .executeQuery("SELECT id, balance FROM accounts");
- while (res.next()) {
- System.out.printf("\taccount %s: %s\n", res.getInt("id"),
- res.getInt("balance"));
- }
-
- } catch(InsufficientBalanceException e) {
- System.out.println("Insufficient balance");
- } catch(AccountNotFoundException e) {
- System.out.println("No users in the table with id " + e.account);
- } catch(AmbiguousCommitException e) {
- System.out.println("Ambiguous result encountered: " + e);
- } catch(SQLException e) {
- System.out.println("SQLException encountered:" + e);
- } finally {
- // Close the database connection.
- db.close();
- }
- }
-}
diff --git a/src/current/_includes/v2.0/app/insecure/activerecord-basic-sample.rb b/src/current/_includes/v2.0/app/insecure/activerecord-basic-sample.rb
deleted file mode 100644
index 601838ee789..00000000000
--- a/src/current/_includes/v2.0/app/insecure/activerecord-basic-sample.rb
+++ /dev/null
@@ -1,44 +0,0 @@
-require 'active_record'
-require 'activerecord-cockroachdb-adapter'
-require 'pg'
-
-# Connect to CockroachDB through ActiveRecord.
-# In Rails, this configuration would go in config/database.yml as usual.
-ActiveRecord::Base.establish_connection(
- adapter: 'cockroachdb',
- username: 'maxroach',
- database: 'bank',
- host: 'localhost',
- port: 26257,
- sslmode: 'disable'
-)
-
-# Define the Account model.
-# In Rails, this would go in app/models/ as usual.
-class Account < ActiveRecord::Base
- validates :id, presence: true
- validates :balance, presence: true
-end
-
-# Define a migration for the accounts table.
-# In Rails, this would go in db/migrate/ as usual.
-class Schema < ActiveRecord::Migration[5.0]
- def change
- create_table :accounts, force: true do |t|
- t.integer :balance
- end
- end
-end
-
-# Run the schema migration by hand.
-# In Rails, this would be done via rake db:migrate as usual.
-Schema.new.change()
-
-# Create two accounts, inserting two rows into the accounts table.
-Account.create(id: 1, balance: 1000)
-Account.create(id: 2, balance: 250)
-
-# Retrieve accounts and print out the balances
-Account.all.each do |acct|
- puts "#{acct.id} #{acct.balance}"
-end
diff --git a/src/current/_includes/v2.0/app/insecure/basic-sample.go b/src/current/_includes/v2.0/app/insecure/basic-sample.go
deleted file mode 100644
index 6a647f51641..00000000000
--- a/src/current/_includes/v2.0/app/insecure/basic-sample.go
+++ /dev/null
@@ -1,44 +0,0 @@
-package main
-
-import (
- "database/sql"
- "fmt"
- "log"
-
- _ "github.com/lib/pq"
-)
-
-func main() {
- // Connect to the "bank" database.
- db, err := sql.Open("postgres", "postgresql://maxroach@localhost:26257/bank?sslmode=disable")
- if err != nil {
- log.Fatal("error connecting to the database: ", err)
- }
-
- // Create the "accounts" table.
- if _, err := db.Exec(
- "CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT)"); err != nil {
- log.Fatal(err)
- }
-
- // Insert two rows into the "accounts" table.
- if _, err := db.Exec(
- "INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250)"); err != nil {
- log.Fatal(err)
- }
-
- // Print out the balances.
- rows, err := db.Query("SELECT id, balance FROM accounts")
- if err != nil {
- log.Fatal(err)
- }
- defer rows.Close()
- fmt.Println("Initial balances:")
- for rows.Next() {
- var id, balance int
- if err := rows.Scan(&id, &balance); err != nil {
- log.Fatal(err)
- }
- fmt.Printf("%d %d\n", id, balance)
- }
-}
diff --git a/src/current/_includes/v2.0/app/insecure/basic-sample.js b/src/current/_includes/v2.0/app/insecure/basic-sample.js
deleted file mode 100644
index f89ea020a74..00000000000
--- a/src/current/_includes/v2.0/app/insecure/basic-sample.js
+++ /dev/null
@@ -1,55 +0,0 @@
-var async = require('async');
-var fs = require('fs');
-var pg = require('pg');
-
-// Connect to the "bank" database.
-var config = {
- user: 'maxroach',
- host: 'localhost',
- database: 'bank',
- port: 26257
-};
-
-// Create a pool.
-var pool = new pg.Pool(config);
-
-pool.connect(function (err, client, done) {
-
- // Close communication with the database and exit.
- var finish = function () {
- done();
- process.exit();
- };
-
- if (err) {
- console.error('could not connect to cockroachdb', err);
- finish();
- }
- async.waterfall([
- function (next) {
- // Create the 'accounts' table.
- client.query('CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT);', next);
- },
- function (results, next) {
- // Insert two rows into the 'accounts' table.
- client.query('INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250);', next);
- },
- function (results, next) {
- // Print out account balances.
- client.query('SELECT id, balance FROM accounts;', next);
- },
- ],
- function (err, results) {
- if (err) {
- console.error('Error inserting into and selecting from accounts: ', err);
- finish();
- }
-
- console.log('Initial balances:');
- results.rows.forEach(function (row) {
- console.log(row);
- });
-
- finish();
- });
-});
diff --git a/src/current/_includes/v2.0/app/insecure/basic-sample.php b/src/current/_includes/v2.0/app/insecure/basic-sample.php
deleted file mode 100644
index db5a26e3111..00000000000
--- a/src/current/_includes/v2.0/app/insecure/basic-sample.php
+++ /dev/null
@@ -1,20 +0,0 @@
-<?php
-try {
-  $dbh = new PDO('pgsql:host=localhost;port=26257;dbname=bank;sslmode=disable',
-    'maxroach', null, array(
-      PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
- PDO::ATTR_EMULATE_PREPARES => true,
- PDO::ATTR_PERSISTENT => true
- ));
-
- $dbh->exec('INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250)');
-
- print "Account balances:\r\n";
- foreach ($dbh->query('SELECT id, balance FROM accounts') as $row) {
- print $row['id'] . ': ' . $row['balance'] . "\r\n";
- }
-} catch (Exception $e) {
- print $e->getMessage() . "\r\n";
- exit(1);
-}
-?>
diff --git a/src/current/_includes/v2.0/app/insecure/basic-sample.py b/src/current/_includes/v2.0/app/insecure/basic-sample.py
deleted file mode 100644
index db023a19e33..00000000000
--- a/src/current/_includes/v2.0/app/insecure/basic-sample.py
+++ /dev/null
@@ -1,34 +0,0 @@
-# Import the driver.
-import psycopg2
-
-# Connect to the "bank" database.
-conn = psycopg2.connect(
- database='bank',
- user='maxroach',
- sslmode='disable',
- port=26257,
- host='localhost'
-)
-
-# Make each statement commit immediately.
-conn.set_session(autocommit=True)
-
-# Open a cursor to perform database operations.
-cur = conn.cursor()
-
-# Create the "accounts" table.
-cur.execute("CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT)")
-
-# Insert two rows into the "accounts" table.
-cur.execute("INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250)")
-
-# Print out the balances.
-cur.execute("SELECT id, balance FROM accounts")
-rows = cur.fetchall()
-print('Initial balances:')
-for row in rows:
- print([str(cell) for cell in row])
-
-# Close the database connection.
-cur.close()
-conn.close()
diff --git a/src/current/_includes/v2.0/app/insecure/basic-sample.rb b/src/current/_includes/v2.0/app/insecure/basic-sample.rb
deleted file mode 100644
index 904460381f6..00000000000
--- a/src/current/_includes/v2.0/app/insecure/basic-sample.rb
+++ /dev/null
@@ -1,28 +0,0 @@
-# Import the driver.
-require 'pg'
-
-# Connect to the "bank" database.
-conn = PG.connect(
- user: 'maxroach',
- dbname: 'bank',
- host: 'localhost',
- port: 26257,
- sslmode: 'disable'
-)
-
-# Create the "accounts" table.
-conn.exec('CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT)')
-
-# Insert two rows into the "accounts" table.
-conn.exec('INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250)')
-
-# Print out the balances.
-puts 'Initial balances:'
-conn.exec('SELECT id, balance FROM accounts') do |res|
- res.each do |row|
- puts row
- end
-end
-
-# Close communication with the database.
-conn.close()
diff --git a/src/current/_includes/v2.0/app/insecure/create-maxroach-user-and-bank-database.md b/src/current/_includes/v2.0/app/insecure/create-maxroach-user-and-bank-database.md
deleted file mode 100644
index 3c7859f0d8d..00000000000
--- a/src/current/_includes/v2.0/app/insecure/create-maxroach-user-and-bank-database.md
+++ /dev/null
@@ -1,32 +0,0 @@
-Start the [built-in SQL client](use-the-built-in-sql-client.html):
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql --insecure
-~~~
-
-In the SQL shell, issue the following statements to create the `maxroach` user and `bank` database:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE USER IF NOT EXISTS maxroach;
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE DATABASE bank;
-~~~
-
-Give the `maxroach` user the necessary permissions:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> GRANT ALL ON DATABASE bank TO maxroach;
-~~~
-
-Exit the SQL shell:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> \q
-~~~
diff --git a/src/current/_includes/v2.0/app/insecure/gorm-basic-sample.go b/src/current/_includes/v2.0/app/insecure/gorm-basic-sample.go
deleted file mode 100644
index b8529962c2b..00000000000
--- a/src/current/_includes/v2.0/app/insecure/gorm-basic-sample.go
+++ /dev/null
@@ -1,41 +0,0 @@
-package main
-
-import (
- "fmt"
- "log"
-
- // Import GORM-related packages.
- "github.com/jinzhu/gorm"
- _ "github.com/jinzhu/gorm/dialects/postgres"
-)
-
-// Account is our model, which corresponds to the "accounts" database table.
-type Account struct {
- ID int `gorm:"primary_key"`
- Balance int
-}
-
-func main() {
- // Connect to the "bank" database as the "maxroach" user.
- const addr = "postgresql://maxroach@localhost:26257/bank?sslmode=disable"
- db, err := gorm.Open("postgres", addr)
- if err != nil {
- log.Fatal(err)
- }
- defer db.Close()
-
- // Automatically create the "accounts" table based on the Account model.
- db.AutoMigrate(&Account{})
-
- // Insert two rows into the "accounts" table.
- db.Create(&Account{ID: 1, Balance: 1000})
- db.Create(&Account{ID: 2, Balance: 250})
-
- // Print out the balances.
- var accounts []Account
- db.Find(&accounts)
- fmt.Println("Initial balances:")
- for _, account := range accounts {
- fmt.Printf("%d %d\n", account.ID, account.Balance)
- }
-}
diff --git a/src/current/_includes/v2.0/app/insecure/hibernate-basic-sample/Sample.java b/src/current/_includes/v2.0/app/insecure/hibernate-basic-sample/Sample.java
deleted file mode 100644
index ed36ae15ad3..00000000000
--- a/src/current/_includes/v2.0/app/insecure/hibernate-basic-sample/Sample.java
+++ /dev/null
@@ -1,64 +0,0 @@
-package com.cockroachlabs;
-
-import org.hibernate.Session;
-import org.hibernate.SessionFactory;
-import org.hibernate.cfg.Configuration;
-
-import javax.persistence.Column;
-import javax.persistence.Entity;
-import javax.persistence.Id;
-import javax.persistence.Table;
-import javax.persistence.criteria.CriteriaQuery;
-
-public class Sample {
- // Create a SessionFactory based on our hibernate.cfg.xml configuration
- // file, which defines how to connect to the database.
- private static final SessionFactory sessionFactory =
- new Configuration()
- .configure("hibernate.cfg.xml")
- .addAnnotatedClass(Account.class)
- .buildSessionFactory();
-
- // Account is our model, which corresponds to the "accounts" database table.
- @Entity
- @Table(name="accounts")
- public static class Account {
- @Id
- @Column(name="id")
- public long id;
-
- @Column(name="balance")
- public long balance;
-
- // Convenience constructor.
- public Account(int id, int balance) {
- this.id = id;
- this.balance = balance;
- }
-
- // Hibernate needs a default (no-arg) constructor to create model objects.
- public Account() {}
- }
-
- public static void main(String[] args) throws Exception {
- Session session = sessionFactory.openSession();
-
- try {
- // Insert two rows into the "accounts" table.
- session.beginTransaction();
- session.save(new Account(1, 1000));
- session.save(new Account(2, 250));
- session.getTransaction().commit();
-
- // Print out the balances.
- CriteriaQuery query = session.getCriteriaBuilder().createQuery(Account.class);
- query.select(query.from(Account.class));
- for (Account account : session.createQuery(query).getResultList()) {
- System.out.printf("%d %d\n", account.id, account.balance);
- }
- } finally {
- session.close();
- sessionFactory.close();
- }
- }
-}
diff --git a/src/current/_includes/v2.0/app/insecure/hibernate-basic-sample/build.gradle b/src/current/_includes/v2.0/app/insecure/hibernate-basic-sample/build.gradle
deleted file mode 100644
index 36f33d73fe6..00000000000
--- a/src/current/_includes/v2.0/app/insecure/hibernate-basic-sample/build.gradle
+++ /dev/null
@@ -1,16 +0,0 @@
-group 'com.cockroachlabs'
-version '1.0'
-
-apply plugin: 'java'
-apply plugin: 'application'
-
-mainClassName = 'com.cockroachlabs.Sample'
-
-repositories {
- mavenCentral()
-}
-
-dependencies {
- compile 'org.hibernate:hibernate-core:5.2.4.Final'
- compile 'org.postgresql:postgresql:42.2.2.jre7'
-}
diff --git a/src/current/_includes/v2.0/app/insecure/hibernate-basic-sample/hibernate-basic-sample.tgz b/src/current/_includes/v2.0/app/insecure/hibernate-basic-sample/hibernate-basic-sample.tgz
deleted file mode 100644
index 6da6fc86925..00000000000
Binary files a/src/current/_includes/v2.0/app/insecure/hibernate-basic-sample/hibernate-basic-sample.tgz and /dev/null differ
diff --git a/src/current/_includes/v2.0/app/insecure/hibernate-basic-sample/hibernate.cfg.xml b/src/current/_includes/v2.0/app/insecure/hibernate-basic-sample/hibernate.cfg.xml
deleted file mode 100644
index 6da90ad06ab..00000000000
--- a/src/current/_includes/v2.0/app/insecure/hibernate-basic-sample/hibernate.cfg.xml
+++ /dev/null
@@ -1,20 +0,0 @@
-<?xml version='1.0' encoding='utf-8'?>
-<!DOCTYPE hibernate-configuration PUBLIC
-  "-//Hibernate/Hibernate Configuration DTD 3.0//EN"
-  "http://www.hibernate.org/dtd/hibernate-configuration-3.0.dtd">
-<hibernate-configuration>
- <session-factory>
- <!-- Connection settings. -->
- <property name="hibernate.connection.driver_class">org.postgresql.Driver</property>
- <property name="hibernate.dialect">org.hibernate.dialect.PostgreSQL94Dialect</property>
- <property name="hibernate.connection.url">jdbc:postgresql://127.0.0.1:26257/bank?sslmode=disable</property>
- <property name="hibernate.connection.username">maxroach</property>
- <!-- Create the "accounts" table automatically from the annotated model. -->
- <property name="hibernate.hbm2ddl.auto">create</property>
- <!-- Echo executed SQL to stdout. -->
- <property name="hibernate.show_sql">true</property>
- <property name="hibernate.format_sql">true</property>
- </session-factory>
-</hibernate-configuration>
diff --git a/src/current/_includes/v2.0/app/insecure/sequelize-basic-sample.js b/src/current/_includes/v2.0/app/insecure/sequelize-basic-sample.js
deleted file mode 100644
index ca92b98e375..00000000000
--- a/src/current/_includes/v2.0/app/insecure/sequelize-basic-sample.js
+++ /dev/null
@@ -1,35 +0,0 @@
-var Sequelize = require('sequelize-cockroachdb');
-
-// Connect to CockroachDB through Sequelize.
-var sequelize = new Sequelize('bank', 'maxroach', '', {
- dialect: 'postgres',
- port: 26257,
- logging: false
-});
-
-// Define the Account model for the "accounts" table.
-var Account = sequelize.define('accounts', {
- id: { type: Sequelize.INTEGER, primaryKey: true },
- balance: { type: Sequelize.INTEGER }
-});
-
-// Create the "accounts" table.
-Account.sync({force: true}).then(function() {
- // Insert two rows into the "accounts" table.
- return Account.bulkCreate([
- {id: 1, balance: 1000},
- {id: 2, balance: 250}
- ]);
-}).then(function() {
- // Retrieve accounts.
- return Account.findAll();
-}).then(function(accounts) {
- // Print out the balances.
- accounts.forEach(function(account) {
- console.log(account.id + ' ' + account.balance);
- });
- process.exit(0);
-}).catch(function(err) {
- console.error('error: ' + err.message);
- process.exit(1);
-});
diff --git a/src/current/_includes/v2.0/app/insecure/sqlalchemy-basic-sample.py b/src/current/_includes/v2.0/app/insecure/sqlalchemy-basic-sample.py
deleted file mode 100644
index 696350f0915..00000000000
--- a/src/current/_includes/v2.0/app/insecure/sqlalchemy-basic-sample.py
+++ /dev/null
@@ -1,35 +0,0 @@
-from __future__ import print_function
-from sqlalchemy import create_engine, Column, Integer
-from sqlalchemy.ext.declarative import declarative_base
-from sqlalchemy.orm import sessionmaker
-
-Base = declarative_base()
-
-# The Account class corresponds to the "accounts" database table.
-class Account(Base):
- __tablename__ = 'accounts'
- id = Column(Integer, primary_key=True)
- balance = Column(Integer)
-
-# Create an engine to communicate with the database. The "cockroachdb://" prefix
-# for the engine URL indicates that we are connecting to CockroachDB.
-engine = create_engine('cockroachdb://maxroach@localhost:26257/bank',
- connect_args = {
- 'sslmode' : 'disable'
- })
-Session = sessionmaker(bind=engine)
-
-# Automatically create the "accounts" table based on the Account class.
-Base.metadata.create_all(engine)
-
-# Insert two rows into the "accounts" table.
-session = Session()
-session.add_all([
- Account(id=1, balance=1000),
- Account(id=2, balance=250),
-])
-session.commit()
-
-# Print out the balances.
-for account in session.query(Account):
- print(account.id, account.balance)
diff --git a/src/current/_includes/v2.0/app/insecure/txn-sample.go b/src/current/_includes/v2.0/app/insecure/txn-sample.go
deleted file mode 100644
index 2c0cd1b6da6..00000000000
--- a/src/current/_includes/v2.0/app/insecure/txn-sample.go
+++ /dev/null
@@ -1,51 +0,0 @@
-package main
-
-import (
- "context"
- "database/sql"
- "fmt"
- "log"
-
- "github.com/cockroachdb/cockroach-go/crdb"
-)
-
-func transferFunds(tx *sql.Tx, from int, to int, amount int) error {
- // Read the balance.
- var fromBalance int
- if err := tx.QueryRow(
- "SELECT balance FROM accounts WHERE id = $1", from).Scan(&fromBalance); err != nil {
- return err
- }
-
- if fromBalance < amount {
- return fmt.Errorf("insufficient funds")
- }
-
- // Perform the transfer.
- if _, err := tx.Exec(
- "UPDATE accounts SET balance = balance - $1 WHERE id = $2", amount, from); err != nil {
- return err
- }
- if _, err := tx.Exec(
- "UPDATE accounts SET balance = balance + $1 WHERE id = $2", amount, to); err != nil {
- return err
- }
- return nil
-}
-
-func main() {
- db, err := sql.Open("postgres", "postgresql://maxroach@localhost:26257/bank?sslmode=disable")
- if err != nil {
- log.Fatal("error connecting to the database: ", err)
- }
- defer db.Close()
-
- // Run a transfer in a transaction.
- err = crdb.ExecuteTx(context.Background(), db, nil, func(tx *sql.Tx) error {
- return transferFunds(tx, 1 /* from acct# */, 2 /* to acct# */, 100 /* amount */)
- })
- if err == nil {
- fmt.Println("Success")
- } else {
- log.Fatal("error: ", err)
- }
-}
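For reference, the `crdb.ExecuteTx` helper above implements the same savepoint-based retry protocol that the JavaScript, PHP, Python, and Ruby samples in this set spell out by hand. A rough Go sketch of that protocol, for illustration only (this is not the library's actual source, and it assumes `lib/pq`'s error type for detecting SQLSTATE 40001):

~~~ go
package sketch

import (
	"database/sql"

	"github.com/lib/pq"
)

// executeTxSketch retries fn using the SAVEPOINT cockroach_restart
// protocol: begin, set the savepoint, run fn; on a retryable error
// (SQLSTATE 40001), roll back to the savepoint and re-run fn; on
// success, release the savepoint and commit.
func executeTxSketch(db *sql.DB, fn func(*sql.Tx) error) error {
	tx, err := db.Begin()
	if err != nil {
		return err
	}
	if _, err := tx.Exec("SAVEPOINT cockroach_restart"); err != nil {
		tx.Rollback()
		return err
	}
	for {
		err := fn(tx)
		if err == nil {
			// RELEASE can itself return a retryable error, so check it too.
			if _, err = tx.Exec("RELEASE SAVEPOINT cockroach_restart"); err == nil {
				return tx.Commit()
			}
		}
		if pqErr, ok := err.(*pq.Error); ok && pqErr.Code == "40001" {
			if _, err := tx.Exec("ROLLBACK TO SAVEPOINT cockroach_restart"); err == nil {
				continue
			}
		}
		tx.Rollback()
		return err
	}
}
~~~

In real code, prefer the `crdb` package itself over hand-rolling this loop.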
diff --git a/src/current/_includes/v2.0/app/insecure/txn-sample.js b/src/current/_includes/v2.0/app/insecure/txn-sample.js
deleted file mode 100644
index c44309b01a2..00000000000
--- a/src/current/_includes/v2.0/app/insecure/txn-sample.js
+++ /dev/null
@@ -1,146 +0,0 @@
-var async = require('async');
-var fs = require('fs');
-var pg = require('pg');
-
-// Connect to the bank database.
-
-var config = {
- user: 'maxroach',
- host: 'localhost',
- database: 'bank',
- port: 26257
-};
-
-// Wrapper for a transaction. This automatically re-calls "op" with
-// the client as an argument as long as the database server asks for
-// the transaction to be retried.
-
-function txnWrapper(client, op, next) {
- client.query('BEGIN; SAVEPOINT cockroach_restart', function (err) {
- if (err) {
- return next(err);
- }
-
- var released = false;
- async.doWhilst(function (done) {
- var handleError = function (err) {
- // If we got an error, see if it's a retryable one
- // and, if so, restart.
- if (err.code === '40001') {
- // Signal the database that we'll retry.
- return client.query('ROLLBACK TO SAVEPOINT cockroach_restart', done);
- }
- // A non-retryable error; break out of the
- // doWhilst with an error.
- return done(err);
- };
-
- // Attempt the work.
- op(client, function (err) {
- if (err) {
- return handleError(err);
- }
- var opResults = arguments;
-
- // If we reach this point, release and commit.
- client.query('RELEASE SAVEPOINT cockroach_restart', function (err) {
- if (err) {
- return handleError(err);
- }
- released = true;
- return done.apply(null, opResults);
- });
- });
- },
- function () {
- return !released;
- },
- function (err) {
- if (err) {
- client.query('ROLLBACK', function () {
- next(err);
- });
- } else {
- var txnResults = arguments;
- client.query('COMMIT', function (err) {
- if (err) {
- return next(err);
- } else {
- return next.apply(null, txnResults);
- }
- });
- }
- });
- });
-}
-
-// The transaction we want to run.
-
-function transferFunds(client, from, to, amount, next) {
- // Check the current balance.
- client.query('SELECT balance FROM accounts WHERE id = $1', [from], function (err, results) {
- if (err) {
- return next(err);
- } else if (results.rows.length === 0) {
- return next(new Error('account not found in table'));
- }
-
- var acctBal = results.rows[0].balance;
- if (acctBal >= amount) {
- // Perform the transfer.
- async.waterfall([
- function (next) {
- // Subtract amount from account 1.
- client.query('UPDATE accounts SET balance = balance - $1 WHERE id = $2', [amount, from], next);
- },
- function (updateResult, next) {
- // Add amount to account 2.
- client.query('UPDATE accounts SET balance = balance + $1 WHERE id = $2', [amount, to], next);
- },
- function (updateResult, next) {
- // Fetch account balances after updates.
- client.query('SELECT id, balance FROM accounts', function (err, selectResult) {
- next(err, selectResult ? selectResult.rows : null);
- });
- }
- ], next);
- } else {
- next(new Error('insufficient funds'));
- }
- });
-}
-
-// Create a pool.
-var pool = new pg.Pool(config);
-
-pool.connect(function (err, client, done) {
- // Closes communication with the database and exits.
- var finish = function () {
- done();
- process.exit();
- };
-
- if (err) {
- console.error('could not connect to cockroachdb', err);
- finish();
- }
-
- // Execute the transaction.
- txnWrapper(client,
- function (client, next) {
- transferFunds(client, 1, 2, 100, next);
- },
- function (err, results) {
- if (err) {
- console.error('error performing transaction', err);
- finish();
- }
-
- console.log('Balances after transfer:');
- results.forEach(function (result) {
- console.log(result);
- });
-
- finish();
- });
-});
diff --git a/src/current/_includes/v2.0/app/insecure/txn-sample.php b/src/current/_includes/v2.0/app/insecure/txn-sample.php
deleted file mode 100644
index e060d311cc3..00000000000
--- a/src/current/_includes/v2.0/app/insecure/txn-sample.php
+++ /dev/null
@@ -1,71 +0,0 @@
-<?php
-
-function transferMoney($dbh, $from, $to, $amount) {
-  try {
-    $dbh->beginTransaction();
- // This savepoint allows us to retry our transaction.
- $dbh->exec("SAVEPOINT cockroach_restart");
- } catch (Exception $e) {
- throw $e;
- }
-
- while (true) {
- try {
- $stmt = $dbh->prepare(
- 'UPDATE accounts SET balance = balance + :deposit ' .
- 'WHERE id = :account AND (:deposit > 0 OR balance + :deposit >= 0)');
-
- // First, withdraw the money from the old account (if possible).
- $stmt->bindValue(':account', $from, PDO::PARAM_INT);
- $stmt->bindValue(':deposit', -$amount, PDO::PARAM_INT);
- $stmt->execute();
- if ($stmt->rowCount() == 0) {
- print "source account does not exist or is underfunded\r\n";
- return;
- }
-
- // Next, deposit into the new account (if it exists).
- $stmt->bindValue(':account', $to, PDO::PARAM_INT);
- $stmt->bindValue(':deposit', $amount, PDO::PARAM_INT);
- $stmt->execute();
- if ($stmt->rowCount() == 0) {
- print "destination account does not exist\r\n";
- return;
- }
-
- // Attempt to release the savepoint (which is really the commit).
- $dbh->exec('RELEASE SAVEPOINT cockroach_restart');
- $dbh->commit();
- return;
- } catch (PDOException $e) {
- if ($e->getCode() != '40001') {
- // Non-recoverable error. Rollback and bubble error up the chain.
- $dbh->rollBack();
- throw $e;
- } else {
- // Cockroach transaction retry code. Rollback to the savepoint and
- // restart.
- $dbh->exec('ROLLBACK TO SAVEPOINT cockroach_restart');
- }
- }
- }
-}
-
-try {
- $dbh = new PDO('pgsql:host=localhost;port=26257;dbname=bank;sslmode=disable',
- 'maxroach', null, array(
- PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
- PDO::ATTR_EMULATE_PREPARES => true,
- ));
-
- transferMoney($dbh, 1, 2, 10);
-
- print "Account balances after transfer:\r\n";
- foreach ($dbh->query('SELECT id, balance FROM accounts') as $row) {
- print $row['id'] . ': ' . $row['balance'] . "\r\n";
- }
-} catch (Exception $e) {
- print $e->getMessage() . "\r\n";
- exit(1);
-}
-?>
diff --git a/src/current/_includes/v2.0/app/insecure/txn-sample.py b/src/current/_includes/v2.0/app/insecure/txn-sample.py
deleted file mode 100644
index 2ea05a85704..00000000000
--- a/src/current/_includes/v2.0/app/insecure/txn-sample.py
+++ /dev/null
@@ -1,73 +0,0 @@
-# Import the driver.
-import psycopg2
-import psycopg2.errorcodes
-
-# Connect to the cluster.
-conn = psycopg2.connect(
- database='bank',
- user='maxroach',
- sslmode='disable',
- port=26257,
- host='localhost'
-)
-
-def onestmt(conn, sql):
- with conn.cursor() as cur:
- cur.execute(sql)
-
-
-# Wrapper for a transaction.
-# This automatically re-calls "op" with the open transaction as an argument
-# as long as the database server asks for the transaction to be retried.
-def run_transaction(conn, op):
- with conn:
- onestmt(conn, "SAVEPOINT cockroach_restart")
- while True:
- try:
- # Attempt the work.
- op(conn)
-
- # If we reach this point, commit.
- onestmt(conn, "RELEASE SAVEPOINT cockroach_restart")
- break
-
- except psycopg2.OperationalError as e:
- if e.pgcode != psycopg2.errorcodes.SERIALIZATION_FAILURE:
- # A non-retryable error; report this up the call stack.
- raise e
- # Signal the database that we'll retry.
- onestmt(conn, "ROLLBACK TO SAVEPOINT cockroach_restart")
-
-
-# The transaction we want to run.
-def transfer_funds(txn, frm, to, amount):
- with txn.cursor() as cur:
-
- # Check the current balance.
- cur.execute("SELECT balance FROM accounts WHERE id = " + str(frm))
- from_balance = cur.fetchone()[0]
- if from_balance < amount:
- raise "Insufficient funds"
-
- # Perform the transfer.
- cur.execute("UPDATE accounts SET balance = balance - %s WHERE id = %s",
- (amount, frm))
- cur.execute("UPDATE accounts SET balance = balance + %s WHERE id = %s",
- (amount, to))
-
-
-# Execute the transaction.
-run_transaction(conn, lambda conn: transfer_funds(conn, 1, 2, 100))
-
-
-with conn:
- with conn.cursor() as cur:
- # Check account balances.
- cur.execute("SELECT id, balance FROM accounts")
- rows = cur.fetchall()
- print('Balances after transfer:')
- for row in rows:
- print([str(cell) for cell in row])
-
-# Close communication with the database.
-conn.close()
diff --git a/src/current/_includes/v2.0/app/insecure/txn-sample.rb b/src/current/_includes/v2.0/app/insecure/txn-sample.rb
deleted file mode 100644
index 416efb9e24d..00000000000
--- a/src/current/_includes/v2.0/app/insecure/txn-sample.rb
+++ /dev/null
@@ -1,49 +0,0 @@
-# Import the driver.
-require 'pg'
-
-# Wrapper for a transaction.
-# This automatically re-calls "op" with the open transaction as an argument
-# as long as the database server asks for the transaction to be retried.
-def run_transaction(conn)
- conn.transaction do |txn|
- txn.exec('SAVEPOINT cockroach_restart')
- while
- begin
- # Attempt the work.
- yield txn
-
- # If we reach this point, commit.
- txn.exec('RELEASE SAVEPOINT cockroach_restart')
- break
- rescue PG::TRSerializationFailure
- txn.exec('ROLLBACK TO SAVEPOINT cockroach_restart')
- end
- end
- end
-end
-
-def transfer_funds(txn, from, to, amount)
- txn.exec_params('SELECT balance FROM accounts WHERE id = $1', [from]) do |res|
- res.each do |row|
- raise 'insufficient funds' if Integer(row['balance']) < amount
- end
- end
- txn.exec_params('UPDATE accounts SET balance = balance - $1 WHERE id = $2', [amount, from])
- txn.exec_params('UPDATE accounts SET balance = balance + $1 WHERE id = $2', [amount, to])
-end
-
-# Connect to the "bank" database.
-conn = PG.connect(
- user: 'maxroach',
- dbname: 'bank',
- host: 'localhost',
- port: 26257,
- sslmode: 'disable'
-)
-
-run_transaction(conn) do |txn|
- transfer_funds(txn, 1, 2, 100)
-end
-
-# Close communication with the database.
-conn.close()
diff --git a/src/current/_includes/v2.0/app/project.clj b/src/current/_includes/v2.0/app/project.clj
deleted file mode 100644
index 41efc324b59..00000000000
--- a/src/current/_includes/v2.0/app/project.clj
+++ /dev/null
@@ -1,7 +0,0 @@
-(defproject test "0.1"
- :description "CockroachDB test"
- :url "http://cockroachlabs.com/"
- :dependencies [[org.clojure/clojure "1.8.0"]
- [org.clojure/java.jdbc "0.6.1"]
- [org.postgresql/postgresql "9.4.1211"]]
- :main test.test)
diff --git a/src/current/_includes/v2.0/app/see-also-links.md b/src/current/_includes/v2.0/app/see-also-links.md
deleted file mode 100644
index 90f06751e13..00000000000
--- a/src/current/_includes/v2.0/app/see-also-links.md
+++ /dev/null
@@ -1,9 +0,0 @@
-You might also be interested in using a local cluster to explore the following CockroachDB benefits:
-
-- [Client Connection Parameters](connection-parameters.html)
-- [Data Replication](demo-data-replication.html)
-- [Fault Tolerance & Recovery](demo-fault-tolerance-and-recovery.html)
-- [Automatic Rebalancing](demo-automatic-rebalancing.html)
-- [Cross-Cloud Migration](demo-automatic-cloud-migration.html)
-- [Follow-the-Workload](demo-follow-the-workload.html)
-- [Automated Operations](orchestrate-a-local-cluster-with-kubernetes-insecure.html)
diff --git a/src/current/_includes/v2.0/app/sequelize-basic-sample.js b/src/current/_includes/v2.0/app/sequelize-basic-sample.js
deleted file mode 100644
index d87ff2ca5a5..00000000000
--- a/src/current/_includes/v2.0/app/sequelize-basic-sample.js
+++ /dev/null
@@ -1,62 +0,0 @@
-var Sequelize = require('sequelize-cockroachdb');
-var fs = require('fs');
-
-// Connect to CockroachDB through Sequelize.
-var sequelize = new Sequelize('bank', 'maxroach', '', {
- dialect: 'postgres',
- port: 26257,
- logging: false,
- dialectOptions: {
- ssl: {
- ca: fs.readFileSync('certs/ca.crt')
- .toString(),
- key: fs.readFileSync('certs/client.maxroach.key')
- .toString(),
- cert: fs.readFileSync('certs/client.maxroach.crt')
- .toString()
- }
- }
-});
-
-// Define the Account model for the "accounts" table.
-var Account = sequelize.define('accounts', {
- id: {
- type: Sequelize.INTEGER,
- primaryKey: true
- },
- balance: {
- type: Sequelize.INTEGER
- }
-});
-
-// Create the "accounts" table.
-Account.sync({
- force: true
- })
- .then(function () {
- // Insert two rows into the "accounts" table.
- return Account.bulkCreate([{
- id: 1,
- balance: 1000
- },
- {
- id: 2,
- balance: 250
- }
- ]);
- })
- .then(function () {
- // Retrieve accounts.
- return Account.findAll();
- })
- .then(function (accounts) {
- // Print out the balances.
- accounts.forEach(function (account) {
- console.log(account.id + ' ' + account.balance);
- });
- process.exit(0);
- })
- .catch(function (err) {
- console.error('error: ' + err.message);
- process.exit(1);
- });
diff --git a/src/current/_includes/v2.0/app/sqlalchemy-basic-sample.py b/src/current/_includes/v2.0/app/sqlalchemy-basic-sample.py
deleted file mode 100644
index 0b32b18bd27..00000000000
--- a/src/current/_includes/v2.0/app/sqlalchemy-basic-sample.py
+++ /dev/null
@@ -1,38 +0,0 @@
-from __future__ import print_function
-from sqlalchemy import create_engine, Column, Integer
-from sqlalchemy.ext.declarative import declarative_base
-from sqlalchemy.orm import sessionmaker
-
-Base = declarative_base()
-
-# The Account class corresponds to the "accounts" database table.
-class Account(Base):
- __tablename__ = 'accounts'
- id = Column(Integer, primary_key=True)
- balance = Column(Integer)
-
-# Create an engine to communicate with the database. The "cockroachdb://" prefix
-# for the engine URL indicates that we are connecting to CockroachDB.
-engine = create_engine('cockroachdb://maxroach@localhost:26257/bank',
- connect_args = {
- 'sslmode' : 'require',
- 'sslrootcert': 'certs/ca.crt',
- 'sslkey':'certs/client.maxroach.key',
- 'sslcert':'certs/client.maxroach.crt'
- })
-Session = sessionmaker(bind=engine)
-
-# Automatically create the "accounts" table based on the Account class.
-Base.metadata.create_all(engine)
-
-# Insert two rows into the "accounts" table.
-session = Session()
-session.add_all([
- Account(id=1, balance=1000),
- Account(id=2, balance=250),
-])
-session.commit()
-
-# Print out the balances.
-for account in session.query(Account):
- print(account.id, account.balance)
diff --git a/src/current/_includes/v2.0/app/txn-sample.clj b/src/current/_includes/v2.0/app/txn-sample.clj
deleted file mode 100644
index 75ee7b4ba62..00000000000
--- a/src/current/_includes/v2.0/app/txn-sample.clj
+++ /dev/null
@@ -1,43 +0,0 @@
-(ns test.test
- (:require [clojure.java.jdbc :as j]
- [test.util :as util]))
-
-;; Define the connection parameters to the cluster.
-(def db-spec {:subprotocol "postgresql"
- :subname "//localhost:26257/bank"
- :user "maxroach"
- :password ""})
-
-;; The transaction we want to run.
-(defn transferFunds
- [txn from to amount]
-
- ;; Check the current balance.
- (let [fromBalance (->> (j/query txn ["SELECT balance FROM accounts WHERE id = ?" from])
- (mapv :balance)
- (first))]
- (when (< fromBalance amount)
- (throw (Exception. "Insufficient funds"))))
-
- ;; Perform the transfer.
- (j/execute! txn [(str "UPDATE accounts SET balance = balance - " amount " WHERE id = " from)])
- (j/execute! txn [(str "UPDATE accounts SET balance = balance + " amount " WHERE id = " to)]))
-
-(defn test-txn []
- ;; Connect to the cluster and run the code below with
- ;; the connection object bound to 'conn'.
- (j/with-db-connection [conn db-spec]
-
- ;; Execute the transaction within an automatic retry block;
- ;; the transaction object is bound to 'txn'.
- (util/with-txn-retry [txn conn]
- (transferFunds txn 1 2 100))
-
- ;; Execute a query outside of an automatic retry block.
- (println "Balances after transfer:")
- (->> (j/query conn ["SELECT id, balance FROM accounts"])
- (map println)
- (doall))))
-
-(defn -main [& args]
- (test-txn))
diff --git a/src/current/_includes/v2.0/app/txn-sample.cpp b/src/current/_includes/v2.0/app/txn-sample.cpp
deleted file mode 100644
index dcdf0ca973d..00000000000
--- a/src/current/_includes/v2.0/app/txn-sample.cpp
+++ /dev/null
@@ -1,76 +0,0 @@
-// Build with g++ -std=c++11 txn-sample.cpp -lpq -lpqxx
-
-#include <cassert>
-#include <functional>
-#include <iostream>
-#include <stdexcept>
-#include <string>
-#include <pqxx/pqxx>
-
-using namespace std;
-
-void transferFunds(
- pqxx::dbtransaction *tx, int from, int to, int amount) {
- // Read the balance.
- pqxx::result r = tx->exec(
- "SELECT balance FROM accounts WHERE id = " + to_string(from));
- assert(r.size() == 1);
- int fromBalance = r[0][0].as<int>();
-
- if (fromBalance < amount) {
- throw domain_error("insufficient funds");
- }
-
- // Perform the transfer.
- tx->exec("UPDATE accounts SET balance = balance - "
- + to_string(amount) + " WHERE id = " + to_string(from));
- tx->exec("UPDATE accounts SET balance = balance + "
- + to_string(amount) + " WHERE id = " + to_string(to));
-}
-
-
-// ExecuteTx runs fn inside a transaction and retries it as needed.
-// On non-retryable failures, the transaction is aborted and rolled
-// back; on success, the transaction is committed.
-//
-// For more information about CockroachDB's transaction model see
-// https://cockroachlabs.com/docs/transactions.html.
-//
-// NOTE: the supplied exec closure should not have external side
-// effects beyond changes to the database.
-void executeTx(
- pqxx::connection *c, function<void (pqxx::dbtransaction *)> fn) {
- pqxx::work tx(*c);
- while (true) {
- try {
- pqxx::subtransaction s(tx, "cockroach_restart");
- fn(&s);
- s.commit();
- break;
- } catch (const pqxx::pqxx_exception& e) {
- // Swallow "transaction restart" errors; the transaction will be retried.
- // Unfortunately libpqxx doesn't give us access to the error code, so we
- // do string matching to identify retriable errors.
- if (string(e.base().what()).find("restart transaction:") == string::npos) {
- throw;
- }
- }
- }
- tx.commit();
-}
-
-int main() {
- try {
- pqxx::connection c("postgresql://maxroach@localhost:26257/bank");
-
- executeTx(&c, [](pqxx::dbtransaction *tx) {
- transferFunds(tx, 1, 2, 100);
- });
- }
- catch (const exception &e) {
- cerr << e.what() << endl;
- return 1;
- }
- cout << "Success" << endl;
- return 0;
-}
diff --git a/src/current/_includes/v2.0/app/txn-sample.cs b/src/current/_includes/v2.0/app/txn-sample.cs
deleted file mode 100644
index d0824aaa42c..00000000000
--- a/src/current/_includes/v2.0/app/txn-sample.cs
+++ /dev/null
@@ -1,119 +0,0 @@
-using System;
-using System.Data;
-using Npgsql;
-
-namespace Cockroach
-{
- class MainClass
- {
- static void Main(string[] args)
- {
- var connStringBuilder = new NpgsqlConnectionStringBuilder();
- connStringBuilder.Host = "localhost";
- connStringBuilder.Port = 26257;
- connStringBuilder.Username = "maxroach";
- connStringBuilder.Database = "bank";
- TxnSample(connStringBuilder.ConnectionString);
- }
-
- static void TransferFunds(NpgsqlConnection conn, NpgsqlTransaction tran, int from, int to, int amount)
- {
- int balance = 0;
- using(var cmd = new NpgsqlCommand(String.Format("SELECT balance FROM accounts WHERE id = {0}", from), conn, tran))
- using(var reader = cmd.ExecuteReader())
- {
- if (reader.Read())
- {
- balance = reader.GetInt32(0);
- }
- else
- {
- throw new DataException(String.Format("Account id={0} not found", from));
- }
- }
- if (balance < amount)
- {
- throw new DataException(String.Format("Insufficient balance in account id={0}", from));
- }
- using(var cmd = new NpgsqlCommand(String.Format("UPDATE accounts SET balance = balance - {0} where id = {1}", amount, from), conn, tran))
- {
- cmd.ExecuteNonQuery();
- }
- using(var cmd = new NpgsqlCommand(String.Format("UPDATE accounts SET balance = balance + {0} where id = {1}", amount, to), conn, tran))
- {
- cmd.ExecuteNonQuery();
- }
- }
-
- static void TxnSample(string connString)
- {
- using(var conn = new NpgsqlConnection(connString))
- {
- conn.Open();
-
- // Create the "accounts" table.
- new NpgsqlCommand("CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT)", conn).ExecuteNonQuery();
-
- // Insert two rows into the "accounts" table.
- using(var cmd = new NpgsqlCommand())
- {
- cmd.Connection = conn;
- cmd.CommandText = "UPSERT INTO accounts(id, balance) VALUES(@id1, @val1), (@id2, @val2)";
- cmd.Parameters.AddWithValue("id1", 1);
- cmd.Parameters.AddWithValue("val1", 1000);
- cmd.Parameters.AddWithValue("id2", 2);
- cmd.Parameters.AddWithValue("val2", 250);
- cmd.ExecuteNonQuery();
- }
-
- // Print out the balances.
- System.Console.WriteLine("Initial balances:");
- using(var cmd = new NpgsqlCommand("SELECT id, balance FROM accounts", conn))
- using(var reader = cmd.ExecuteReader())
- while (reader.Read())
- Console.Write("\taccount {0}: {1}\n", reader.GetValue(0), reader.GetValue(1));
-
- try
- {
- using(var tran = conn.BeginTransaction())
- {
- tran.Save("cockroach_restart");
- while (true)
- {
- try
- {
- TransferFunds(conn, tran, 1, 2, 100);
- tran.Commit();
- break;
- }
- catch (NpgsqlException e)
- {
- // Check if the error code indicates a SERIALIZATION_FAILURE.
- if (e.ErrorCode == 40001)
- {
- // Signal the database that we will attempt a retry.
- tran.Rollback("cockroach_restart");
- }
- else
- {
- throw;
- }
- }
- }
- }
- }
- catch (DataException e)
- {
- Console.WriteLine(e.Message);
- }
-
- // Now print out the results.
- Console.WriteLine("Final balances:");
- using(var cmd = new NpgsqlCommand("SELECT id, balance FROM accounts", conn))
- using(var reader = cmd.ExecuteReader())
- while (reader.Read())
- Console.Write("\taccount {0}: {1}\n", reader.GetValue(0), reader.GetValue(1));
- }
- }
- }
-}
diff --git a/src/current/_includes/v2.0/app/txn-sample.go b/src/current/_includes/v2.0/app/txn-sample.go
deleted file mode 100644
index fc15275abca..00000000000
--- a/src/current/_includes/v2.0/app/txn-sample.go
+++ /dev/null
@@ -1,53 +0,0 @@
-package main
-
-import (
- "context"
- "database/sql"
- "fmt"
- "log"
-
- "github.com/cockroachdb/cockroach-go/crdb"
-)
-
-func transferFunds(tx *sql.Tx, from int, to int, amount int) error {
- // Read the balance.
- var fromBalance int
- if err := tx.QueryRow(
- "SELECT balance FROM accounts WHERE id = $1", from).Scan(&fromBalance); err != nil {
- return err
- }
-
- if fromBalance < amount {
- return fmt.Errorf("insufficient funds")
- }
-
- // Perform the transfer.
- if _, err := tx.Exec(
- "UPDATE accounts SET balance = balance - $1 WHERE id = $2", amount, from); err != nil {
- return err
- }
- if _, err := tx.Exec(
- "UPDATE accounts SET balance = balance + $1 WHERE id = $2", amount, to); err != nil {
- return err
- }
- return nil
-}
-
-func main() {
- db, err := sql.Open("postgres",
- "postgresql://maxroach@localhost:26257/bank?ssl=true&sslmode=require&sslrootcert=certs/ca.crt&sslkey=certs/client.maxroach.key&sslcert=certs/client.maxroach.crt")
- if err != nil {
- log.Fatal("error connecting to the database: ", err)
- }
- defer db.Close()
-
- // Run a transfer in a transaction.
- err = crdb.ExecuteTx(context.Background(), db, nil, func(tx *sql.Tx) error {
- return transferFunds(tx, 1 /* from acct# */, 2 /* to acct# */, 100 /* amount */)
- })
- if err == nil {
- fmt.Println("Success")
- } else {
- log.Fatal("error: ", err)
- }
-}
diff --git a/src/current/_includes/v2.0/app/txn-sample.js b/src/current/_includes/v2.0/app/txn-sample.js
deleted file mode 100644
index 1eebaacad30..00000000000
--- a/src/current/_includes/v2.0/app/txn-sample.js
+++ /dev/null
@@ -1,154 +0,0 @@
-var async = require('async');
-var fs = require('fs');
-var pg = require('pg');
-
-// Connect to the bank database.
-
-var config = {
- user: 'maxroach',
- host: 'localhost',
- database: 'bank',
- port: 26257,
- ssl: {
- ca: fs.readFileSync('certs/ca.crt')
- .toString(),
- key: fs.readFileSync('certs/client.maxroach.key')
- .toString(),
- cert: fs.readFileSync('certs/client.maxroach.crt')
- .toString()
- }
-};
-
-// Wrapper for a transaction. This automatically re-calls "op" with
-// the client as an argument as long as the database server asks for
-// the transaction to be retried.
-
-function txnWrapper(client, op, next) {
- client.query('BEGIN; SAVEPOINT cockroach_restart', function (err) {
- if (err) {
- return next(err);
- }
-
- var released = false;
- async.doWhilst(function (done) {
- var handleError = function (err) {
- // If we got an error, see if it's a retryable one
- // and, if so, restart.
- if (err.code === '40001') {
- // Signal the database that we'll retry.
- return client.query('ROLLBACK TO SAVEPOINT cockroach_restart', done);
- }
- // A non-retryable error; break out of the
- // doWhilst with an error.
- return done(err);
- };
-
- // Attempt the work.
- op(client, function (err) {
- if (err) {
- return handleError(err);
- }
- var opResults = arguments;
-
- // If we reach this point, release and commit.
- client.query('RELEASE SAVEPOINT cockroach_restart', function (err) {
- if (err) {
- return handleError(err);
- }
- released = true;
- return done.apply(null, opResults);
- });
- });
- },
- function () {
- return !released;
- },
- function (err) {
- if (err) {
- client.query('ROLLBACK', function () {
- next(err);
- });
- } else {
- var txnResults = arguments;
- client.query('COMMIT', function (err) {
- if (err) {
- return next(err);
- } else {
- return next.apply(null, txnResults);
- }
- });
- }
- });
- });
-}
-
-// The transaction we want to run.
-
-function transferFunds(client, from, to, amount, next) {
- // Check the current balance.
- client.query('SELECT balance FROM accounts WHERE id = $1', [from], function (err, results) {
- if (err) {
- return next(err);
- } else if (results.rows.length === 0) {
- return next(new Error('account not found in table'));
- }
-
- var acctBal = results.rows[0].balance;
- if (acctBal >= amount) {
- // Perform the transfer.
- async.waterfall([
- function (next) {
- // Subtract amount from account 1.
- client.query('UPDATE accounts SET balance = balance - $1 WHERE id = $2', [amount, from], next);
- },
- function (updateResult, next) {
- // Add amount to account 2.
- client.query('UPDATE accounts SET balance = balance + $1 WHERE id = $2', [amount, to], next);
- },
- function (updateResult, next) {
- // Fetch account balances after updates.
- client.query('SELECT id, balance FROM accounts', function (err, selectResult) {
- next(err, selectResult ? selectResult.rows : null);
- });
- }
- ], next);
- } else {
- next(new Error('insufficient funds'));
- }
- });
-}
-
-// Create a pool.
-var pool = new pg.Pool(config);
-
-pool.connect(function (err, client, done) {
- // Closes communication with the database and exits.
- var finish = function () {
- done();
- process.exit();
- };
-
- if (err) {
- console.error('could not connect to cockroachdb', err);
- finish();
- }
-
- // Execute the transaction.
- txnWrapper(client,
- function (client, next) {
- transferFunds(client, 1, 2, 100, next);
- },
- function (err, results) {
- if (err) {
- console.error('error performing transaction', err);
- finish();
- }
-
- console.log('Balances after transfer:');
- results.forEach(function (result) {
- console.log(result);
- });
-
- finish();
- });
-});
diff --git a/src/current/_includes/v2.0/app/txn-sample.php b/src/current/_includes/v2.0/app/txn-sample.php
deleted file mode 100644
index 363dbcd73cd..00000000000
--- a/src/current/_includes/v2.0/app/txn-sample.php
+++ /dev/null
@@ -1,71 +0,0 @@
-<?php
-
-function transferMoney($dbh, $from, $to, $amount) {
-  try {
-    $dbh->beginTransaction();
- // This savepoint allows us to retry our transaction.
- $dbh->exec("SAVEPOINT cockroach_restart");
- } catch (Exception $e) {
- throw $e;
- }
-
- while (true) {
- try {
- $stmt = $dbh->prepare(
- 'UPDATE accounts SET balance = balance + :deposit ' .
- 'WHERE id = :account AND (:deposit > 0 OR balance + :deposit >= 0)');
-
- // First, withdraw the money from the old account (if possible).
- $stmt->bindValue(':account', $from, PDO::PARAM_INT);
- $stmt->bindValue(':deposit', -$amount, PDO::PARAM_INT);
- $stmt->execute();
- if ($stmt->rowCount() == 0) {
- print "source account does not exist or is underfunded\r\n";
- return;
- }
-
- // Next, deposit into the new account (if it exists).
- $stmt->bindValue(':account', $to, PDO::PARAM_INT);
- $stmt->bindValue(':deposit', $amount, PDO::PARAM_INT);
- $stmt->execute();
- if ($stmt->rowCount() == 0) {
- print "destination account does not exist\r\n";
- return;
- }
-
- // Attempt to release the savepoint (which is really the commit).
- $dbh->exec('RELEASE SAVEPOINT cockroach_restart');
- $dbh->commit();
- return;
- } catch (PDOException $e) {
- if ($e->getCode() != '40001') {
- // Non-recoverable error. Rollback and bubble error up the chain.
- $dbh->rollBack();
- throw $e;
- } else {
- // Cockroach transaction retry code. Rollback to the savepoint and
- // restart.
- $dbh->exec('ROLLBACK TO SAVEPOINT cockroach_restart');
- }
- }
- }
-}
-
-try {
- $dbh = new PDO('pgsql:host=localhost;port=26257;dbname=bank;sslmode=require;sslrootcert=certs/ca.crt;sslkey=certs/client.maxroach.key;sslcert=certs/client.maxroach.crt',
- 'maxroach', null, array(
- PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
- PDO::ATTR_EMULATE_PREPARES => true,
- ));
-
- transferMoney($dbh, 1, 2, 10);
-
- print "Account balances after transfer:\r\n";
- foreach ($dbh->query('SELECT id, balance FROM accounts') as $row) {
- print $row['id'] . ': ' . $row['balance'] . "\r\n";
- }
-} catch (Exception $e) {
- print $e->getMessage() . "\r\n";
- exit(1);
-}
-?>
diff --git a/src/current/_includes/v2.0/app/txn-sample.py b/src/current/_includes/v2.0/app/txn-sample.py
deleted file mode 100644
index d4c86a36cc8..00000000000
--- a/src/current/_includes/v2.0/app/txn-sample.py
+++ /dev/null
@@ -1,76 +0,0 @@
-# Import the driver.
-import psycopg2
-import psycopg2.errorcodes
-
-# Connect to the cluster.
-conn = psycopg2.connect(
- database='bank',
- user='maxroach',
- sslmode='require',
- sslrootcert='certs/ca.crt',
- sslkey='certs/client.maxroach.key',
- sslcert='certs/client.maxroach.crt',
- port=26257,
- host='localhost'
-)
-
-def onestmt(conn, sql):
- with conn.cursor() as cur:
- cur.execute(sql)
-
-
-# Wrapper for a transaction.
-# This automatically re-calls "op" with the open transaction as an argument
-# as long as the database server asks for the transaction to be retried.
-def run_transaction(conn, op):
- with conn:
- onestmt(conn, "SAVEPOINT cockroach_restart")
- while True:
- try:
- # Attempt the work.
- op(conn)
-
- # If we reach this point, commit.
- onestmt(conn, "RELEASE SAVEPOINT cockroach_restart")
- break
-
- except psycopg2.OperationalError as e:
- if e.pgcode != psycopg2.errorcodes.SERIALIZATION_FAILURE:
- # A non-retryable error; report this up the call stack.
- raise e
- # Signal the database that we'll retry.
- onestmt(conn, "ROLLBACK TO SAVEPOINT cockroach_restart")
-
-
-# The transaction we want to run.
-def transfer_funds(txn, frm, to, amount):
- with txn.cursor() as cur:
-
- # Check the current balance.
- cur.execute("SELECT balance FROM accounts WHERE id = " + str(frm))
- from_balance = cur.fetchone()[0]
- if from_balance < amount:
- raise "Insufficient funds"
-
- # Perform the transfer.
- cur.execute("UPDATE accounts SET balance = balance - %s WHERE id = %s",
- (amount, frm))
- cur.execute("UPDATE accounts SET balance = balance + %s WHERE id = %s",
- (amount, to))
-
-
-# Execute the transaction.
-run_transaction(conn, lambda conn: transfer_funds(conn, 1, 2, 100))
-
-
-with conn:
- with conn.cursor() as cur:
- # Check account balances.
- cur.execute("SELECT id, balance FROM accounts")
- rows = cur.fetchall()
- print('Balances after transfer:')
- for row in rows:
- print([str(cell) for cell in row])
-
-# Close communication with the database.
-conn.close()
diff --git a/src/current/_includes/v2.0/app/txn-sample.rb b/src/current/_includes/v2.0/app/txn-sample.rb
deleted file mode 100644
index 1c3e028fdf7..00000000000
--- a/src/current/_includes/v2.0/app/txn-sample.rb
+++ /dev/null
@@ -1,52 +0,0 @@
-# Import the driver.
-require 'pg'
-
-# Wrapper for a transaction.
-# This automatically re-calls "op" with the open transaction as an argument
-# as long as the database server asks for the transaction to be retried.
-def run_transaction(conn)
- conn.transaction do |txn|
- txn.exec('SAVEPOINT cockroach_restart')
- while
- begin
- # Attempt the work.
- yield txn
-
- # If we reach this point, commit.
- txn.exec('RELEASE SAVEPOINT cockroach_restart')
- break
- rescue PG::TRSerializationFailure
- txn.exec('ROLLBACK TO SAVEPOINT cockroach_restart')
- end
- end
- end
-end
-
-def transfer_funds(txn, from, to, amount)
- txn.exec_params('SELECT balance FROM accounts WHERE id = $1', [from]) do |res|
- res.each do |row|
- raise 'insufficient funds' if Integer(row['balance']) < amount
- end
- end
- txn.exec_params('UPDATE accounts SET balance = balance - $1 WHERE id = $2', [amount, from])
- txn.exec_params('UPDATE accounts SET balance = balance + $1 WHERE id = $2', [amount, to])
-end
-
-# Connect to the "bank" database.
-conn = PG.connect(
- user: 'maxroach',
- dbname: 'bank',
- host: 'localhost',
- port: 26257,
- sslmode: 'require',
- sslrootcert: 'certs/ca.crt',
- sslkey:'certs/client.maxroach.key',
- sslcert:'certs/client.maxroach.crt'
-)
-
-run_transaction(conn) do |txn|
- transfer_funds(txn, 1, 2, 100)
-end
-
-# Close communication with the database.
-conn.close()
diff --git a/src/current/_includes/v2.0/app/txn-sample.rs b/src/current/_includes/v2.0/app/txn-sample.rs
deleted file mode 100644
index e2282c56ea1..00000000000
--- a/src/current/_includes/v2.0/app/txn-sample.rs
+++ /dev/null
@@ -1,59 +0,0 @@
-extern crate postgres;
-
-use postgres::{Connection, TlsMode, Result};
-use postgres::transaction::Transaction;
-use self::postgres::error::T_R_SERIALIZATION_FAILURE;
-
-/// Runs op inside a transaction and retries it as needed.
-/// On non-retryable failures, the transaction is aborted and
-/// rolled back; on success, the transaction is committed.
-fn execute_txn<T, F>(conn: &Connection, mut op: F) -> Result<T>
-where
-    F: FnMut(&Transaction) -> Result<T>,
-{
- let txn = conn.transaction()?;
- loop {
- let sp = txn.savepoint("cockroach_restart")?;
- match op(&sp).and_then(|t| sp.commit().map(|_| t)) {
- Err(ref err) if err.as_db()
- .map(|e| e.code == T_R_SERIALIZATION_FAILURE)
- .unwrap_or(false) => {},
- r => break r,
- }
- }.and_then(|t| txn.commit().map(|_| t))
-}
-
-fn transfer_funds(txn: &Transaction, from: i64, to: i64, amount: i64) -> Result<()> {
- // Read the balance.
- let from_balance: i64 = txn.query("SELECT balance FROM accounts WHERE id = $1", &[&from])?
- .get(0)
- .get(0);
-
- assert!(from_balance >= amount);
-
- // Perform the transfer.
- txn.execute(
- "UPDATE accounts SET balance = balance - $1 WHERE id = $2",
- &[&amount, &from],
- )?;
- txn.execute(
- "UPDATE accounts SET balance = balance + $1 WHERE id = $2",
- &[&amount, &to],
- )?;
- Ok(())
-}
-
-fn main() {
- let conn = Connection::connect("postgresql://maxroach@localhost:26257/bank", TlsMode::None)
- .unwrap();
-
- // Run a transfer in a transaction.
- execute_txn(&conn, |txn| transfer_funds(txn, 1, 2, 100)).unwrap();
-
- // Check account balances after the transaction.
- for row in &conn.query("SELECT id, balance FROM accounts", &[]).unwrap() {
- let id: i64 = row.get(0);
- let balance: i64 = row.get(1);
- println!("{} {}", id, balance);
- }
-}
diff --git a/src/current/_includes/v2.0/app/util.clj b/src/current/_includes/v2.0/app/util.clj
deleted file mode 100644
index d040affe794..00000000000
--- a/src/current/_includes/v2.0/app/util.clj
+++ /dev/null
@@ -1,38 +0,0 @@
-(ns test.util
- (:require [clojure.java.jdbc :as j]
- [clojure.walk :as walk]))
-
-(defn txn-restart-err?
- "Takes an exception and returns true if it is a CockroachDB retry error."
- [e]
- (when-let [m (.getMessage e)]
- (condp instance? e
- java.sql.BatchUpdateException
- (and (re-find #"getNextExc" m)
- (txn-restart-err? (.getNextException e)))
-
- org.postgresql.util.PSQLException
- (= (.getSQLState e) "40001") ; 40001 is the code returned by CockroachDB retry errors.
-
- false)))
-
-;; Wrapper for a transaction.
-;; This automatically invokes the body again as long as the database server
-;; asks the transaction to be retried.
-
-(defmacro with-txn-retry
- "Wrap an evaluation within a CockroachDB retry block."
- [[txn c] & body]
- `(j/with-db-transaction [~txn ~c]
- (loop []
- (j/execute! ~txn ["savepoint cockroach_restart"])
- (let [res# (try (let [r# (do ~@body)]
- {:ok r#})
- (catch java.sql.SQLException e#
- (if (txn-restart-err? e#)
- {:retry true}
- (throw e#))))]
- (if (:retry res#)
- (do (j/execute! ~txn ["rollback to savepoint cockroach_restart"])
- (recur))
- (:ok res#))))))
diff --git a/src/current/_includes/v2.0/computed-columns/jsonb.md b/src/current/_includes/v2.0/computed-columns/jsonb.md
deleted file mode 100644
index bd37ecdaad7..00000000000
--- a/src/current/_includes/v2.0/computed-columns/jsonb.md
+++ /dev/null
@@ -1,35 +0,0 @@
-In this example, let's create a table with a `JSONB` column and a computed column:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE student_profiles (
- id STRING PRIMARY KEY AS (profile->>'id') STORED,
- profile JSONB
-);
-~~~
-
-Then, insert a few rows of data:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> INSERT INTO student_profiles (profile) VALUES
- ('{"id": "d78236", "name": "Arthur Read", "age": "16", "school": "PVPHS", "credits": 120, "sports": "none"}'),
- ('{"name": "Buster Bunny", "age": "15", "id": "f98112", "school": "THS", "credits": 67, "clubs": "MUN"}'),
- ('{"name": "Ernie Narayan", "school" : "Brooklyn Tech", "id": "t63512", "sports": "Track and Field", "clubs": "Chess"}');
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM student_profiles;
-~~~
-~~~
-+--------+---------------------------------------------------------------------------------------------------------------------+
-| id | profile |
-+--------+---------------------------------------------------------------------------------------------------------------------+
-| d78236 | {"age": "16", "credits": 120, "id": "d78236", "name": "Arthur Read", "school": "PVPHS", "sports": "none"} |
-| f98112 | {"age": "15", "clubs": "MUN", "credits": 67, "id": "f98112", "name": "Buster Bunny", "school": "THS"} |
-| t63512 | {"clubs": "Chess", "id": "t63512", "name": "Ernie Narayan", "school": "Brooklyn Tech", "sports": "Track and Field"} |
-+--------+---------------------------------------------------------------------------------------------------------------------+
-~~~
-
-The primary key `id` is computed as a field from the `profile` column.
diff --git a/src/current/_includes/v2.0/computed-columns/partitioning.md b/src/current/_includes/v2.0/computed-columns/partitioning.md
deleted file mode 100644
index 3785cbe9f8c..00000000000
--- a/src/current/_includes/v2.0/computed-columns/partitioning.md
+++ /dev/null
@@ -1,53 +0,0 @@
-{{site.data.alerts.callout_info}}Partitioning is an enterprise feature. To request and enable a trial or full enterprise license, see <a href="enterprise-licensing.html">Enterprise Licensing</a>.{{site.data.alerts.end}}
-
-In this example, let's create a table with geo-partitioning and a computed column:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE user_locations (
- locality STRING AS (CASE
- WHEN country IN ('ca', 'mx', 'us') THEN 'north_america'
- WHEN country IN ('au', 'nz') THEN 'australia'
- END) STORED,
- id SERIAL,
- name STRING,
- country STRING,
- PRIMARY KEY (locality, id))
- PARTITION BY LIST (locality)
- (PARTITION north_america VALUES IN ('north_america'),
- PARTITION australia VALUES IN ('australia'));
-~~~
-
-Then, insert a few rows of data:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> INSERT INTO user_locations (name, country) VALUES
- ('Leonard McCoy', 'us'),
- ('Uhura', 'nz'),
- ('Spock', 'ca'),
- ('James Kirk', 'us'),
- ('Scotty', 'mx'),
- ('Hikaru Sulu', 'us'),
- ('Pavel Chekov', 'au');
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM user_locations;
-~~~
-~~~
-+---------------+--------------------+---------------+---------+
-| locality | id | name | country |
-+---------------+--------------------+---------------+---------+
-| australia | 333153890100609025 | Uhura | nz |
-| australia | 333153890100772865 | Pavel Chekov | au |
-| north_america | 333153890100576257 | Leonard McCoy | us |
-| north_america | 333153890100641793 | Spock | ca |
-| north_america | 333153890100674561 | James Kirk | us |
-| north_america | 333153890100707329 | Scotty | mx |
-| north_america | 333153890100740097 | Hikaru Sulu | us |
-+---------------+--------------------+---------------+---------+
-~~~
-
-The `locality` column is computed from the `country` column.
diff --git a/src/current/_includes/v2.0/computed-columns/secondary-index.md b/src/current/_includes/v2.0/computed-columns/secondary-index.md
deleted file mode 100644
index 242b5d6c7f2..00000000000
--- a/src/current/_includes/v2.0/computed-columns/secondary-index.md
+++ /dev/null
@@ -1,63 +0,0 @@
-In this example, let's create a table with a computed column and an index on that column:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE gymnastics (
- id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
- athlete STRING,
- vault DECIMAL,
- bars DECIMAL,
- beam DECIMAL,
- floor DECIMAL,
- combined_score DECIMAL AS (vault + bars + beam + floor) STORED,
- INDEX total (combined_score DESC)
- );
-~~~
-
-Then, insert a few rows of data:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> INSERT INTO gymnastics (athlete, vault, bars, beam, floor) VALUES
- ('Simone Biles', 15.933, 14.800, 15.300, 15.800),
- ('Gabby Douglas', 0, 15.766, 0, 0),
- ('Laurie Hernandez', 15.100, 0, 15.233, 14.833),
- ('Madison Kocian', 0, 15.933, 0, 0),
- ('Aly Raisman', 15.833, 0, 15.000, 15.366);
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM gymnastics;
-~~~
-~~~
-+--------------------------------------+------------------+--------+--------+--------+--------+----------------+
-| id | athlete | vault | bars | beam | floor | combined_score |
-+--------------------------------------+------------------+--------+--------+--------+--------+----------------+
-| 3fe11371-6a6a-49de-bbef-a8dd16560fac | Aly Raisman | 15.833 | 0 | 15.000 | 15.366 | 46.199 |
-| 56055a70-b4c7-4522-909b-8f3674b705e5 | Madison Kocian | 0 | 15.933 | 0 | 0 | 15.933 |
-| 69f73fd1-da34-48bf-aff8-71296ce4c2c7 | Gabby Douglas | 0 | 15.766 | 0 | 0 | 15.766 |
-| 8a7b730b-668d-4845-8d25-48bda25114d6 | Laurie Hernandez | 15.100 | 0 | 15.233 | 14.833 | 45.166 |
-| b2c5ca80-21c2-4853-9178-b96ce220ea4d | Simone Biles | 15.933 | 14.800 | 15.300 | 15.800 | 61.833 |
-+--------------------------------------+------------------+--------+--------+--------+--------+----------------+
-~~~
-
-Now, let's run a query using the secondary index:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT athlete, combined_score FROM gymnastics ORDER BY combined_score DESC;
-~~~
-~~~
-+------------------+----------------+
-| athlete | combined_score |
-+------------------+----------------+
-| Simone Biles | 61.833 |
-| Aly Raisman | 46.199 |
-| Laurie Hernandez | 45.166 |
-| Madison Kocian | 15.933 |
-| Gabby Douglas | 15.766 |
-+------------------+----------------+
-~~~
-
-The athlete with the highest combined score of 61.833 is Simone Biles.
diff --git a/src/current/_includes/v2.0/computed-columns/simple.md b/src/current/_includes/v2.0/computed-columns/simple.md
deleted file mode 100644
index 056ad70ecc7..00000000000
--- a/src/current/_includes/v2.0/computed-columns/simple.md
+++ /dev/null
@@ -1,37 +0,0 @@
-In this example, let's create a simple table with a computed column:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE names (
- id INT PRIMARY KEY,
- first_name STRING,
- last_name STRING,
- full_name STRING AS (CONCAT(first_name, ' ', last_name)) STORED
- );
-~~~
-
-Then, insert a few rows of data:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> INSERT INTO names (id, first_name, last_name) VALUES
- (1, 'Lola', 'McDog'),
- (2, 'Carl', 'Kimball'),
- (3, 'Ernie', 'Narayan');
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM names;
-~~~
-~~~
-+----+------------+-------------+----------------+
-| id | first_name | last_name | full_name |
-+----+------------+-------------+----------------+
-| 1 | Lola | McDog | Lola McDog |
-| 2 | Carl | Kimball | Carl Kimball |
-| 3 | Ernie | Narayan | Ernie Narayan |
-+----+------------+-------------+----------------+
-~~~
-
-The `full_name` column is computed from the `first_name` and `last_name` columns without the need to define a [view](views.html).
diff --git a/src/current/_includes/v2.0/faq/auto-generate-unique-ids.html b/src/current/_includes/v2.0/faq/auto-generate-unique-ids.html
deleted file mode 100644
index 0ffcbb03c3e..00000000000
--- a/src/current/_includes/v2.0/faq/auto-generate-unique-ids.html
+++ /dev/null
@@ -1,87 +0,0 @@
-To auto-generate unique row IDs, use the [`UUID`](uuid.html) column with the `gen_random_uuid()` [function](functions-and-operators.html#id-generation-functions) as the [default value](default-value.html):
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE t1 (id UUID PRIMARY KEY DEFAULT gen_random_uuid(), name STRING);
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> INSERT INTO t1 (name) VALUES ('a'), ('b'), ('c');
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM t1;
-~~~
-
-~~~
-+--------------------------------------+------+
-| id | name |
-+--------------------------------------+------+
-| 60853a85-681d-4620-9677-946bbfdc8fbc | c |
-| 77c9bc2e-76a5-4ebc-80c3-7ad3159466a1 | b |
-| bd3a56e1-c75e-476c-b221-0da9d74d66eb | a |
-+--------------------------------------+------+
-(3 rows)
-~~~
-
-Alternatively, you can use the [`BYTES`](bytes.html) column with the `uuid_v4()` function as the default value instead:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE t2 (id BYTES PRIMARY KEY DEFAULT uuid_v4(), name STRING);
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> INSERT INTO t2 (name) VALUES ('a'), ('b'), ('c');
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM t2;
-~~~
-
-~~~
-+---------------------------------------------------+------+
-| id | name |
-+---------------------------------------------------+------+
-| "\x9b\x10\xdc\x11\x9a\x9cGB\xbd\x8d\t\x8c\xf6@vP" | a |
-| "\xd9s\xd7\x13\n_L*\xb0\x87c\xb6d\xe1\xd8@" | c |
-| "\uac74\x1dd@B\x97\xac\x04N&\x9eBg\x86" | b |
-+---------------------------------------------------+------+
-(3 rows)
-~~~
-
-In either case, generated IDs will be 128-bit, large enough for there to be virtually no chance of generating non-unique values. Also, once the table grows beyond a single key-value range (more than 64MB by default), new IDs will be scattered across all of the table's ranges and, therefore, likely across different nodes. This means that multiple nodes will share in the load.
-
-If it is important for generated IDs to be stored in the same key-value range, you can use an [integer type](int.html) with the `unique_rowid()` [function](functions-and-operators.html#id-generation-functions) as the default value, either explicitly or via the [`SERIAL` pseudo-type](serial.html):
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE t3 (id INT PRIMARY KEY DEFAULT unique_rowid(), name STRING);
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> INSERT INTO t3 (name) VALUES ('a'), ('b'), ('c');
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM t3;
-~~~
-
-~~~
-+--------------------+------+
-| id | name |
-+--------------------+------+
-| 293807573840855041 | a |
-| 293807573840887809 | b |
-| 293807573840920577 | c |
-+--------------------+------+
-(3 rows)
-~~~
-
-Upon insert, the `unique_rowid()` function generates a default value from the timestamp and ID of the node executing the insert. Such time-ordered values are likely to be globally unique except in cases where a very large number of IDs (100,000+) are generated per node per second.
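As a rough intuition for why such IDs are time-ordered, consider a toy generator that packs a timestamp and a node ID into a single integer. The 48/16-bit split below is purely illustrative; it is not `unique_rowid()`'s actual encoding:

~~~ go
package main

import (
	"fmt"
	"time"
)

// toyRowID composes a 64-bit ID from the current time and a node ID:
// IDs from one node are non-decreasing, and IDs from different nodes
// differ in the low bits. Illustrative layout only.
func toyRowID(nodeID uint16) uint64 {
	micros := uint64(time.Now().UnixNano() / 1000)
	return micros<<16 | uint64(nodeID)
}

func main() {
	for i := 0; i < 3; i++ {
		fmt.Println(toyRowID(1))
	}
}
~~~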
diff --git a/src/current/_includes/v2.0/faq/clock-synchronization-effects.md b/src/current/_includes/v2.0/faq/clock-synchronization-effects.md
deleted file mode 100644
index d86fb8dc238..00000000000
--- a/src/current/_includes/v2.0/faq/clock-synchronization-effects.md
+++ /dev/null
@@ -1,15 +0,0 @@
-CockroachDB requires moderate levels of clock synchronization to preserve data consistency. For this reason, when a node detects that its clock is out of sync with at least half of the other nodes in the cluster by 80% of the maximum offset allowed (500ms by default, so a threshold of 400ms), it spontaneously shuts down. While [serializable consistency](https://en.wikipedia.org/wiki/Serializability) is maintained regardless of clock skew, skew outside the configured clock offset bounds can result in violations of single-key linearizability between causally dependent transactions. It's therefore important to prevent clocks from drifting too far by running [NTP](http://www.ntp.org/) or other clock synchronization software on each node.
-
-The one rare case to note is when a node's clock suddenly jumps beyond the maximum offset before the node detects it. Although extremely unlikely, this could occur, for example, when running CockroachDB inside a VM and the VM hypervisor decides to migrate the VM to different hardware with a different time. In this case, there can be a small window of time between when the node's clock becomes unsynchronized and when the node spontaneously shuts down. During this window, it would be possible for a client to read stale data and write data derived from stale reads.
-
-For guidance on synchronizing clocks, see the tutorial for your deployment environment:
-
-Environment | Featured Approach
-------------|---------------------
-[On-Premises](deploy-cockroachdb-on-premises.html#step-1-synchronize-clocks) | Use NTP with Google's external NTP service.
-[AWS](deploy-cockroachdb-on-aws.html#step-3-synchronize-clocks) | Use the Amazon Time Sync Service.
-[Azure](deploy-cockroachdb-on-microsoft-azure.html#step-3-synchronize-clocks) | Disable Hyper-V time synchronization and use NTP with Google's external NTP service.
-[Digital Ocean](deploy-cockroachdb-on-digital-ocean.html#step-2-synchronize-clocks) | Use NTP with Google's external NTP service.
-[GCE](deploy-cockroachdb-on-google-cloud-platform.html#step-3-synchronize-clocks) | Use NTP with Google's internal NTP service.
-
-{{site.data.alerts.callout_info}}In most cases, we recommend Google's external NTP service because it handles "smearing" the leap second. If you use a different NTP service that doesn't smear the leap second, you must configure client-side smearing manually and do so in the same way on each machine.{{site.data.alerts.end}}
diff --git a/src/current/_includes/v2.0/faq/clock-synchronization-monitoring.html b/src/current/_includes/v2.0/faq/clock-synchronization-monitoring.html
deleted file mode 100644
index 7fb82e4d188..00000000000
--- a/src/current/_includes/v2.0/faq/clock-synchronization-monitoring.html
+++ /dev/null
@@ -1,8 +0,0 @@
-As explained in more detail [in our monitoring documentation](monitoring-and-alerting.html#prometheus-endpoint), each CockroachDB node exports a wide variety of metrics at `http://<host>:<http-port>/_status/vars` in the format used by the popular Prometheus timeseries database. Two of these metrics export how close each node's clock is to the clock of all other nodes:
-
-Metric | Definition
--------|-----------
-`clock_offset_meannanos` | The mean difference between the node's clock and other nodes' clocks in nanoseconds
-`clock_offset_stddevnanos` | The standard deviation of the difference between the node's clock and other nodes' clocks in nanoseconds
-
-As described in [the above answer](#what-happens-when-node-clocks-are-not-properly-synchronized), a node will shut down if the mean offset of its clock from the other nodes' clocks exceeds 80% of the maximum offset allowed. It's recommended to monitor the `clock_offset_meannanos` metric and alert if it's approaching the 80% threshold of your cluster's configured max offset.
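A small probe can automate that alert by scraping the endpoint above. A rough sketch, assuming the default 500ms maximum offset (an 80% threshold of 400ms, or 4e8 nanoseconds) and a node serving HTTP on `localhost:8080`:

~~~ go
package main

import (
	"bufio"
	"fmt"
	"log"
	"math"
	"net/http"
	"strconv"
	"strings"
)

func main() {
	const maxOffsetNanos = 500e6         // default --max-offset: 500ms
	const alertAt = 0.8 * maxOffsetNanos // nodes shut down at 80% of max offset

	resp, err := http.Get("http://localhost:8080/_status/vars")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	scanner := bufio.NewScanner(resp.Body)
	for scanner.Scan() {
		line := scanner.Text()
		// Keep only the mean-offset metric; skip comments and other metrics.
		if !strings.HasPrefix(line, "clock_offset_meannanos") {
			continue
		}
		fields := strings.Fields(line)
		offset, err := strconv.ParseFloat(fields[len(fields)-1], 64)
		if err != nil {
			log.Fatal(err)
		}
		if math.Abs(offset) > alertAt {
			fmt.Printf("WARNING: mean clock offset %.0f ns exceeds 80%% of the max offset\n", offset)
		} else {
			fmt.Printf("mean clock offset OK: %.0f ns\n", offset)
		}
	}
}
~~~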
diff --git a/src/current/_includes/v2.0/faq/differences-between-numberings.md b/src/current/_includes/v2.0/faq/differences-between-numberings.md
deleted file mode 100644
index 741ec4f8066..00000000000
--- a/src/current/_includes/v2.0/faq/differences-between-numberings.md
+++ /dev/null
@@ -1,11 +0,0 @@
-
-| Property | UUID generated with `uuid_v4()` | INT generated with `unique_rowid()` | Sequences |
-|--------------------------------------|-----------------------------------------|-----------------------------------------------|--------------------------------|
-| Size | 16 bytes | 8 bytes | 1 to 8 bytes |
-| Ordering properties | Unordered | Highly time-ordered | Highly time-ordered |
-| Performance cost at generation | Small, scalable | Small, scalable | Variable, can cause contention |
-| Value distribution | Uniformly distributed (128 bits) | Contains time and space (node ID) components | Dense, small values |
-| Data locality | Maximally distributed | Values generated close in time are co-located | Highly local |
-| `INSERT` latency when used as key | Small, insensitive to concurrency | Small, but increases with concurrent INSERTs | Higher |
-| `INSERT` throughput when used as key | Highest | Limited by max throughput on 1 node | Limited by max throughput on 1 node |
-| Read throughput when used as key | Highest (maximal parallelism) | Limited | Limited |
diff --git a/src/current/_includes/v2.0/faq/planned-maintenance.md b/src/current/_includes/v2.0/faq/planned-maintenance.md
deleted file mode 100644
index c9fbb49266a..00000000000
--- a/src/current/_includes/v2.0/faq/planned-maintenance.md
+++ /dev/null
@@ -1,22 +0,0 @@
-By default, if a node stays offline for more than 5 minutes, the cluster will consider it dead and will rebalance its data to other nodes. Before temporarily stopping nodes for planned maintenance (e.g., upgrading system software), if you expect any nodes to be offline for longer than 5 minutes, you can prevent the cluster from unnecessarily rebalancing data off the nodes by increasing the `server.time_until_store_dead` [cluster setting](cluster-settings.html) to match the estimated maintenance window.
-
-For example, let's say you plan to perform maintenance on a group of servers, and the nodes running on those servers may be offline for up to 15 minutes as a result. Before shutting down the nodes, you would change the `server.time_until_store_dead` cluster setting as follows:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SET CLUSTER SETTING server.time_until_store_dead = '15m0s';
-~~~
-
-After completing the maintenance work and [restarting the nodes](start-a-node.html), you would then change the setting back to its default:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SET CLUSTER SETTING server.time_until_store_dead = '5m0s';
-~~~
-
-It's also important to ensure that load balancers do not send client traffic to a node about to be shut down, even if it will only be down for a few seconds. If you find that your load balancer's health check is not always recognizing a node as unready before the node shuts down, you can increase the `server.shutdown.drain_wait` setting, which tells the node to wait in an unready state for the specified duration. For example:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SET CLUSTER SETTING server.shutdown.drain_wait = '10s';
-~~~
diff --git a/src/current/_includes/v2.0/faq/sequential-numbers.md b/src/current/_includes/v2.0/faq/sequential-numbers.md
deleted file mode 100644
index ee5bd96d9c4..00000000000
--- a/src/current/_includes/v2.0/faq/sequential-numbers.md
+++ /dev/null
@@ -1,7 +0,0 @@
-Sequential numbers can be generated in CockroachDB using the `unique_rowid()` built-in function or using [SQL sequences](create-sequence.html). However, note the following considerations:
-
-- Unless you need roughly-ordered numbers, we recommend using [`UUID`](uuid.html) values instead. See the [previous
-FAQ](#how-do-i-auto-generate-unique-row-ids-in-cockroachdb) for details.
-- [Sequences](create-sequence.html) produce **unique** values. However, not all values are guaranteed to be produced (e.g., when a transaction is canceled after it consumes a value) and the values may be slightly reordered (e.g., when a transaction that
-consumes a lower sequence number commits after a transaction that consumes a higher number).
-- For maximum performance, avoid using sequences or `unique_rowid()` to generate row IDs or indexed columns. Values generated in these ways are logically close to each other and can cause contention on a small number of data ranges during inserts. Instead, prefer [`UUID`](uuid.html) identifiers.
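-
-For illustration, here is a minimal sketch of the recommended `UUID` approach (the `users` table and its columns are hypothetical):
-
-~~~ sql
-> CREATE TABLE users (id UUID PRIMARY KEY DEFAULT gen_random_uuid(), name STRING);
-~~~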
diff --git a/src/current/_includes/v2.0/faq/sequential-transactions.md b/src/current/_includes/v2.0/faq/sequential-transactions.md
deleted file mode 100644
index 684f2ce5d2a..00000000000
--- a/src/current/_includes/v2.0/faq/sequential-transactions.md
+++ /dev/null
@@ -1,19 +0,0 @@
-Most use cases that ask for a strong time-based write ordering can be solved with other, more distribution-friendly
-solutions instead. For example, CockroachDB's [time travel queries (`AS OF SYSTEM
-TIME`)](https://www.cockroachlabs.com/blog/time-travel-queries-select-witty_subtitle-the_future/) support the following:
-
-- Paginating through all the changes to a table or dataset
-- Determining the order of changes to data over time
-- Determining the state of data at some point in the past
-- Determining the changes to data between two points of time
-
-Note that the values generated by `unique_rowid()`, described in the previous FAQ entries, also provide an approximate time ordering.
-
-However, if your application absolutely requires strong time-based write ordering, it is possible to create a strictly monotonic counter in CockroachDB that increases over time as follows:
-
-- Initially: `CREATE TABLE cnt(val INT PRIMARY KEY); INSERT INTO cnt(val) VALUES(1);`
-- In each transaction: `INSERT INTO cnt(val) SELECT max(val)+1 FROM cnt RETURNING val;`
-
-This will cause [`INSERT`](insert.html) transactions to conflict with each other and effectively force the transactions to commit one at a time throughout the cluster, which in turn guarantees the values generated in this way are strictly increasing over time without gaps. The caveat is that performance is severely limited as a result.
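-
-As a sketch, each transaction needing an ordered value would first run the counter statement and then use the value it returns (the `events` table below is hypothetical):
-
-~~~ sql
-> BEGIN;
-> INSERT INTO cnt(val) SELECT max(val)+1 FROM cnt RETURNING val;
-> INSERT INTO events(seq, payload) VALUES (4, 'example'); -- use the value returned above
-> COMMIT;
-~~~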
-
-If you find yourself interested in this problem, please [contact us](support-resources.html) and describe your situation. We would be glad to help you find alternative solutions and possibly extend CockroachDB to better match your needs.
diff --git a/src/current/_includes/v2.0/faq/simulate-key-value-store.html b/src/current/_includes/v2.0/faq/simulate-key-value-store.html
deleted file mode 100644
index 4772fa5358c..00000000000
--- a/src/current/_includes/v2.0/faq/simulate-key-value-store.html
+++ /dev/null
@@ -1,13 +0,0 @@
-CockroachDB is a distributed SQL database built on a transactional and strongly-consistent key-value store. Although it is not possible to access the key-value store directly, you can mirror direct access using a "simple" table of two columns, with one set as the primary key:
-
-~~~ sql
-> CREATE TABLE kv (k INT PRIMARY KEY, v BYTES);
-~~~
-
-When such a "simple" table has no indexes or foreign keys, [`INSERT`](insert.html)/[`UPSERT`](upsert.html)/[`UPDATE`](update.html)/[`DELETE`](delete.html) statements translate to key-value operations with minimal overhead (single digit percent slowdowns). For example, the following `UPSERT` to add or replace a row in the table would translate into a single key-value Put operation:
-
-~~~ sql
-> UPSERT INTO kv VALUES (1, b'hello');
-~~~
-
-This SQL table approach also offers you a well-defined query language, a known transaction model, and the flexibility to add more columns to the table if the need arises.
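-
-For example, if the need arises, a column can be added later without recreating the table (a minimal sketch):
-
-~~~ sql
-> ALTER TABLE kv ADD COLUMN created_at TIMESTAMP;
-~~~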
diff --git a/src/current/_includes/v2.0/faq/sql-query-logging.md b/src/current/_includes/v2.0/faq/sql-query-logging.md
deleted file mode 100644
index b6bb14a900b..00000000000
--- a/src/current/_includes/v2.0/faq/sql-query-logging.md
+++ /dev/null
@@ -1,63 +0,0 @@
-There are several ways to log SQL queries. The type of logging you use will depend on your requirements.
-
-- For per-table audit logs, turn on [SQL audit logs](#sql-audit-logs).
-- For system troubleshooting and performance optimization, turn on [cluster-wide execution logs](#cluster-wide-execution-logs).
-- For local testing, turn on [per-node execution logs](#per-node-execution-logs).
-
-### SQL audit logs
-
-{% include {{ page.version.version }}/misc/experimental-warning.md %}
-
-SQL audit logging is useful if you want to log all queries that are run against specific tables.
-
-- For a tutorial, see [SQL Audit Logging](sql-audit-logging.html).
-
-- For SQL reference documentation, see [`ALTER TABLE ... EXPERIMENTAL_AUDIT`](experimental-audit.html).
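-
-As a quick sketch, auditing could be enabled on a (hypothetical) `customers` table as follows; see the reference documentation above for the full syntax and options:
-
-~~~ sql
-> ALTER TABLE customers EXPERIMENTAL_AUDIT SET READ WRITE;
-~~~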
-
-### Cluster-wide execution logs
-
-For production clusters, the best way to log all queries is to turn on the [cluster-wide setting](cluster-settings.html) `sql.trace.log_statement_execute`:
-
-~~~ sql
-> SET CLUSTER SETTING sql.trace.log_statement_execute = true;
-~~~
-
-With this setting on, each node of the cluster writes all SQL queries it executes to its log file. When you no longer need to log queries, you can turn the setting back off:
-
-~~~ sql
-> SET CLUSTER SETTING sql.trace.log_statement_execute = false;
-~~~
-
-### Per-node execution logs
-
-Alternatively, if you are testing CockroachDB locally and want to log queries executed just by a specific node, you can either pass a CLI flag at node startup, or execute a SQL function on a running node.
-
-To log queries from the CLI, pass the `--vmodule` flag to the [`cockroach start`](start-a-node.html) command when starting the node. For example, to start a single node locally and log all SQL queries it executes, you'd run:
-
-~~~ shell
-$ cockroach start --insecure --host=localhost --vmodule=exec_log=2
-~~~
-
-From the SQL prompt on a running node, execute the `crdb_internal.set_vmodule()` [function](functions-and-operators.html):
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT crdb_internal.set_vmodule('exec_log=2');
-~~~
-
-This will result in the following output:
-
-~~~
-+---------------------------+
-| crdb_internal.set_vmodule |
-+---------------------------+
-|                         0 |
-+---------------------------+
-(1 row)
-~~~
-
-Once the logging is enabled, all of the node's queries will be written to the [CockroachDB log file](debug-and-error-logs.html) as follows:
-
-~~~
-I180402 19:12:28.112957 394661 sql/exec_log.go:173 [n1,client=127.0.0.1:50155,user=root] exec "psql" {} "SELECT version()" {} 0.795 1 ""
-~~~
diff --git a/src/current/_includes/v2.0/faq/when-to-interleave-tables.html b/src/current/_includes/v2.0/faq/when-to-interleave-tables.html
deleted file mode 100644
index a65196ad693..00000000000
--- a/src/current/_includes/v2.0/faq/when-to-interleave-tables.html
+++ /dev/null
@@ -1,5 +0,0 @@
-You're most likely to benefit from interleaved tables when:
-
- - Your tables form a [hierarchy](interleave-in-parent.html#interleaved-hierarchy)
- - Queries maximize the [benefits of interleaving](interleave-in-parent.html#benefits)
- - Queries do not suffer too greatly from interleaving's [tradeoffs](interleave-in-parent.html#tradeoffs)
diff --git a/src/current/_includes/v2.0/json/json-sample.go b/src/current/_includes/v2.0/json/json-sample.go
deleted file mode 100644
index ecba73acc55..00000000000
--- a/src/current/_includes/v2.0/json/json-sample.go
+++ /dev/null
@@ -1,79 +0,0 @@
-package main
-
-import (
- "database/sql"
- "fmt"
- "io/ioutil"
- "net/http"
- "time"
-
- _ "github.com/lib/pq"
-)
-
-func main() {
- db, err := sql.Open("postgres", "user=maxroach dbname=jsonb_test sslmode=disable port=26257")
- if err != nil {
- panic(err)
- }
-
-	// The Reddit API wants us to tell it where to start from. For the first
-	// request we just say "null" to mean "from the start"; subsequent requests
-	// use the value received from the last call.
- after := "null"
-
- for i := 0; i < 300; i++ {
- after, err = makeReq(db, after)
- if err != nil {
- panic(err)
- }
- // Reddit limits to 30 requests per minute, so do not do any more than that.
- time.Sleep(2 * time.Second)
- }
-}
-
-func makeReq(db *sql.DB, after string) (string, error) {
- // First, make a request to reddit using the appropriate "after" string.
- client := &http.Client{}
-	req, err := http.NewRequest("GET", fmt.Sprintf("https://www.reddit.com/r/programming.json?after=%s", after), nil)
-	if err != nil {
-		return "", err
-	}
-
-	req.Header.Add("User-Agent", `Go`)
-
-	resp, err := client.Do(req)
-	if err != nil {
-		return "", err
-	}
-	// Make sure the response body is released once we're done with it.
-	defer resp.Body.Close()
-
- res, err := ioutil.ReadAll(resp.Body)
- if err != nil {
- return "", err
- }
-
-	// We've gotten back our JSON from reddit, so we can use a couple of SQL
-	// tricks to accomplish multiple things at once.
- // The JSON reddit returns looks like this:
- // {
- // "data": {
- // "children": [ ... ]
- // },
- // "after": ...
- // }
- // We structure our query so that we extract the `children` field, and then
- // expand that and insert each individual element into the database as a
- // separate row. We then return the "after" field so we know how to make the
- // next request.
- r, err := db.Query(`
- INSERT INTO jsonb_test.programming (posts)
- SELECT json_array_elements($1->'data'->'children')
- RETURNING $1->'data'->'after'`,
- string(res))
- if err != nil {
- return "", err
- }
-
-	// Since we did a RETURNING, we need to grab the result of our query.
-	defer r.Close()
-	var newAfter string
-	if !r.Next() {
-		return "", r.Err()
-	}
-	if err := r.Scan(&newAfter); err != nil {
-		return "", err
-	}
-
- return newAfter, nil
-}
diff --git a/src/current/_includes/v2.0/json/json-sample.py b/src/current/_includes/v2.0/json/json-sample.py
deleted file mode 100644
index 68b7fd1ef37..00000000000
--- a/src/current/_includes/v2.0/json/json-sample.py
+++ /dev/null
@@ -1,44 +0,0 @@
-import json
-import psycopg2
-import requests
-import time
-
-conn = psycopg2.connect(database="jsonb_test", user="maxroach", host="localhost", port=26257)
-conn.set_session(autocommit=True)
-cur = conn.cursor()
-
-# The Reddit API wants us to tell it where to start from. For the first
-# request we just say "null" to mean "from the start"; subsequent requests
-# use the value received from the last call.
-url = "https://www.reddit.com/r/programming.json"
-after = {"after": "null"}
-
-for n in range(300):
- # First, make a request to reddit using the appropriate "after" string.
- req = requests.get(url, params=after, headers={"User-Agent": "Python"})
-
- # Decode the JSON and set "after" for the next request.
- resp = req.json()
- after = {"after": str(resp['data']['after'])}
-
- # Convert the JSON to a string to send to the database.
- data = json.dumps(resp)
-
- # The JSON reddit returns looks like this:
- # {
- # "data": {
- # "children": [ ... ]
- # },
- # "after": ...
- # }
- # We structure our query so that we extract the `children` field, and then
- # expand that and insert each individual element into the database as a
- # separate row.
- cur.execute("""INSERT INTO jsonb_test.programming (posts)
- SELECT json_array_elements(%s->'data'->'children')""", (data,))
-
- # Reddit limits to 30 requests per minute, so do not do any more than that.
- time.sleep(2)
-
-cur.close()
-conn.close()
diff --git a/src/current/_includes/v2.0/known-limitations/cte-by-name.md b/src/current/_includes/v2.0/known-limitations/cte-by-name.md
deleted file mode 100644
index d33a6f8c7e8..00000000000
--- a/src/current/_includes/v2.0/known-limitations/cte-by-name.md
+++ /dev/null
@@ -1,10 +0,0 @@
-It is currently not possible to refer to a [common table expression](common-table-expressions.html) by name more than once.
-
-For example, the following query is invalid because the CTE `a` is
-referred to twice:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> WITH a AS (VALUES (1), (2), (3))
- SELECT * FROM a, a;
-~~~
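-
-A possible workaround is to define a second CTE with the same body and refer to each one only once (a minimal sketch):
-
-{% include copy-clipboard.html %}
-~~~ sql
-> WITH a AS (VALUES (1), (2), (3)),
-       b AS (VALUES (1), (2), (3))
-  SELECT * FROM a, b;
-~~~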
diff --git a/src/current/_includes/v2.0/known-limitations/cte-in-set-expression.md b/src/current/_includes/v2.0/known-limitations/cte-in-set-expression.md
deleted file mode 100644
index 6c5e1dbbd56..00000000000
--- a/src/current/_includes/v2.0/known-limitations/cte-in-set-expression.md
+++ /dev/null
@@ -1,12 +0,0 @@
-{{site.data.alerts.callout_info}}
-Resolved as of v2.1.
-{{site.data.alerts.end}}
-
-It is not yet possible to use a [common table expression](common-table-expressions.html) defined outside of a [set expression](selection-queries.html#set-operations) in the right operand of a set operator, for example:
-
-~~~ sql
-> WITH a AS (SELECT 1)
- SELECT * FROM users UNION SELECT * FROM a; -- "a" used on the right, not yet supported.
-~~~
-
-For `UNION`, you can work around this limitation by swapping the operands. For the other set operators, you can inline the definition of the CTE inside the right operand.
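-
-For example, the swapped form of the query above (a sketch) is supported:
-
-~~~ sql
-> WITH a AS (SELECT 1)
-  SELECT * FROM a UNION SELECT * FROM users;
-~~~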
diff --git a/src/current/_includes/v2.0/known-limitations/cte-in-values-clause.md b/src/current/_includes/v2.0/known-limitations/cte-in-values-clause.md
deleted file mode 100644
index 3e20d578673..00000000000
--- a/src/current/_includes/v2.0/known-limitations/cte-in-values-clause.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{{site.data.alerts.callout_info}}
-Resolved as of v2.1.
-{{site.data.alerts.end}}
-
-It is not yet possible to use a [common table expression](common-table-expressions.html) defined outside of a `VALUES` clause in a [subquery](subqueries.html) inside the [`VALUES`](selection-queries.html#values-clause) clause, for example:
-
-~~~ sql
-> WITH a AS (...) VALUES ((SELECT * FROM a));
-~~~
diff --git a/src/current/_includes/v2.0/known-limitations/cte-with-dml.md b/src/current/_includes/v2.0/known-limitations/cte-with-dml.md
deleted file mode 100644
index 0cbd18ea484..00000000000
--- a/src/current/_includes/v2.0/known-limitations/cte-with-dml.md
+++ /dev/null
@@ -1,29 +0,0 @@
-{{site.data.alerts.callout_info}}
-Resolved as of v2.1.
-{{site.data.alerts.end}}
-
-If a [common table expression](common-table-expressions.html) containing a data-modifying statement is not referred to
-by the top-level query, either directly or indirectly, the
-data-modifying statement will not be executed at all.
-
-For example, the following query does not insert any row, because the CTE `a` is not used:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> WITH a AS (INSERT INTO t(x) VALUES (1), (2), (3))
- SELECT * FROM b;
-~~~
-
-Also, the following query does not insert any row, even though the CTE `a` is used, because
-the other CTE that uses `a` is itself not used:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> WITH a AS (INSERT INTO t(x) VALUES (1), (2), (3)),
- b AS (SELECT * FROM a)
- SELECT * FROM c;
-~~~
-
-To determine whether a modification will actually take place, use
-[`EXPLAIN`](explain.html) and check whether the desired data
-modification is part of the final plan for the overall query.
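-
-For example, a sketch using the hypothetical tables above:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> EXPLAIN WITH a AS (INSERT INTO t(x) VALUES (1), (2), (3)),
-       b AS (SELECT * FROM a)
-  SELECT * FROM c;
-~~~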
diff --git a/src/current/_includes/v2.0/known-limitations/cte-with-view.md b/src/current/_includes/v2.0/known-limitations/cte-with-view.md
deleted file mode 100644
index 1e82ab9ee2a..00000000000
--- a/src/current/_includes/v2.0/known-limitations/cte-with-view.md
+++ /dev/null
@@ -1,5 +0,0 @@
-{{site.data.alerts.callout_info}}
-Resolved as of v2.1.
-{{site.data.alerts.end}}
-
-It is not yet possible to use a [common table expression](common-table-expressions.html) inside the selection query used to [define a view](create-view.html).
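-
-One workaround is to inline the CTE's body into the view's selection query. For example (a minimal sketch), instead of `CREATE VIEW v AS WITH a AS (SELECT 1) SELECT * FROM a;`, write:
-
-~~~ sql
-> CREATE VIEW v AS SELECT 1;
-~~~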
diff --git a/src/current/_includes/v2.0/known-limitations/dump-cyclic-foreign-keys.md b/src/current/_includes/v2.0/known-limitations/dump-cyclic-foreign-keys.md
deleted file mode 100644
index 4e3c43644ea..00000000000
--- a/src/current/_includes/v2.0/known-limitations/dump-cyclic-foreign-keys.md
+++ /dev/null
@@ -1 +0,0 @@
-The [`cockroach dump`](sql-dump.html) command will successfully create a dump file for a table with a [foreign key](foreign-key.html) reference to itself, or for a set of tables with a cyclic foreign key dependency (e.g., `a` depends on `b`, which depends on `a`). That dump file, however, can only be executed after manually editing the output to remove the foreign key definitions from the `CREATE TABLE` statements and adding them as `ALTER TABLE ... ADD CONSTRAINT` statements after the `INSERT` statements.
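-
-The manual edit amounts to removing each foreign key from its `CREATE TABLE` statement and re-adding it after the `INSERT` statements with something like the following sketch (tables `a` and `b` and their columns are hypothetical):
-
-~~~ sql
-> ALTER TABLE a ADD CONSTRAINT fk_b FOREIGN KEY (b_id) REFERENCES b (id);
-~~~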
diff --git a/src/current/_includes/v2.0/known-limitations/node-map.md b/src/current/_includes/v2.0/known-limitations/node-map.md
deleted file mode 100644
index 863f09c3ac2..00000000000
--- a/src/current/_includes/v2.0/known-limitations/node-map.md
+++ /dev/null
@@ -1,8 +0,0 @@
-You cannot assign latitude/longitude coordinates to localities if the components of your localities have the same name. For example, consider the following partial configuration:
-
-| Node | Region | Datacenter |
-| ------ | ------ | ------ |
-| Node1 | us-east | datacenter-1 |
-| Node2 | us-west | datacenter-1 |
-
-In this case, if you try to set the latitude/longitude coordinates to the datacenter level of the localities, you will get the "primary key exists" error and the **Node Map** will not be displayed. You can, however, set the latitude/longitude coordinates to the region components of the localities, and the **Node Map** will be displayed.
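-
-For example, coordinates could instead be assigned at the region level with statements like the following sketch (coordinates are illustrative):
-
-~~~ sql
-> INSERT into system.locations VALUES ('region', 'us-east', 37.478397, -76.453077);
-> INSERT into system.locations VALUES ('region', 'us-west', 43.804133, -120.554201);
-~~~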
diff --git a/src/current/_includes/v2.0/known-limitations/partitioning-with-placeholders.md b/src/current/_includes/v2.0/known-limitations/partitioning-with-placeholders.md
deleted file mode 100644
index b3c3345200d..00000000000
--- a/src/current/_includes/v2.0/known-limitations/partitioning-with-placeholders.md
+++ /dev/null
@@ -1 +0,0 @@
-When defining a [table partition](partitioning.html), either during table creation or table alteration, it is not possible to use placeholders in the `PARTITION BY` clause.
diff --git a/src/current/_includes/v2.0/known-limitations/system-range-replication.md b/src/current/_includes/v2.0/known-limitations/system-range-replication.md
deleted file mode 100644
index 4bfe40ba1a2..00000000000
--- a/src/current/_includes/v2.0/known-limitations/system-range-replication.md
+++ /dev/null
@@ -1 +0,0 @@
-Changes to the [`.default` cluster-wide replication zone](configure-replication-zones.html#edit-the-default-replication-zone) are not automatically applied to existing replication zones, including those for important internal data. For the cluster as a whole to remain available, the "system ranges" for this internal data must always retain a majority of their replicas. Therefore, if you increase the default replication factor, be sure to also [increase the replication factor for important internal data](configure-replication-zones.html#create-a-replication-zone-for-a-system-range).
diff --git a/src/current/_includes/v2.0/metric-names.md b/src/current/_includes/v2.0/metric-names.md
deleted file mode 100644
index 7eebed323d8..00000000000
--- a/src/current/_includes/v2.0/metric-names.md
+++ /dev/null
@@ -1,246 +0,0 @@
-Name | Help
------|-----
-`addsstable.applications` | Number of SSTable ingestions applied (i.e., applied by Replicas)
-`addsstable.copies` | Number of SSTable ingestions that required copying files during application
-`addsstable.proposals` | Number of SSTable ingestions proposed (i.e., sent to Raft by lease holders)
-`build.timestamp` | Build information
-`capacity.available` | Available storage capacity
-`capacity.reserved` | Capacity reserved for snapshots
-`capacity.used` | Used storage capacity
-`capacity` | Total storage capacity
-`clock-offset.meannanos` | Mean clock offset with other nodes in nanoseconds
-`clock-offset.stddevnanos` | Std dev clock offset with other nodes in nanoseconds
-`compactor.compactingnanos` | Number of nanoseconds spent compacting ranges
-`compactor.compactions.failure` | Number of failed compaction requests sent to the storage engine
-`compactor.compactions.success` | Number of successful compaction requests sent to the storage engine
-`compactor.suggestionbytes.compacted` | Number of logical bytes compacted from suggested compactions
-`compactor.suggestionbytes.queued` | Number of logical bytes in suggested compactions in the queue
-`compactor.suggestionbytes.skipped` | Number of logical bytes in suggested compactions which were not compacted
-`distsender.batches.partial` | Number of partial batches processed
-`distsender.batches` | Number of batches processed
-`distsender.errors.notleaseholder` | Number of NotLeaseHolderErrors encountered
-`distsender.rpc.sent.local` | Number of local RPCs sent
-`distsender.rpc.sent.nextreplicaerror` | Number of RPCs sent due to per-replica errors
-`distsender.rpc.sent` | Number of RPCs sent
-`exec.error` | Number of batch KV requests that failed to execute on this node
-`exec.latency` | Latency in nanoseconds of batch KV requests executed on this node
-`exec.success` | Number of batch KV requests executed successfully on this node
-`gcbytesage` | Cumulative age of non-live data in seconds
-`gossip.bytes.received` | Number of received gossip bytes
-`gossip.bytes.sent` | Number of sent gossip bytes
-`gossip.connections.incoming` | Number of active incoming gossip connections
-`gossip.connections.outgoing` | Number of active outgoing gossip connections
-`gossip.connections.refused` | Number of refused incoming gossip connections
-`gossip.infos.received` | Number of received gossip Info objects
-`gossip.infos.sent` | Number of sent gossip Info objects
-`intentage` | Cumulative age of intents in seconds
-`intentbytes` | Number of bytes in intent KV pairs
-`intentcount` | Count of intent keys
-`keybytes` | Number of bytes taken up by keys
-`keycount` | Count of all keys
-`lastupdatenanos` | Time in nanoseconds since Unix epoch at which bytes/keys/intents metrics were last updated
-`leases.epoch` | Number of replica leaseholders using epoch-based leases
-`leases.error` | Number of failed lease requests
-`leases.expiration` | Number of replica leaseholders using expiration-based leases
-`leases.success` | Number of successful lease requests
-`leases.transfers.error` | Number of failed lease transfers
-`leases.transfers.success` | Number of successful lease transfers
-`livebytes` | Number of bytes of live data (keys plus values)
-`livecount` | Count of live keys
-`liveness.epochincrements` | Number of times this node has incremented its liveness epoch
-`liveness.heartbeatfailures` | Number of failed node liveness heartbeats from this node
-`liveness.heartbeatlatency` | Node liveness heartbeat latency in nanoseconds
-`liveness.heartbeatsuccesses` | Number of successful node liveness heartbeats from this node
-`liveness.livenodes` | Number of live nodes in the cluster (will be 0 if this node is not itself live)
-`node-id` | Node ID with labels for advertised RPC and HTTP addresses
-`queue.consistency.pending` | Number of pending replicas in the consistency checker queue
-`queue.consistency.process.failure` | Number of replicas which failed processing in the consistency checker queue
-`queue.consistency.process.success` | Number of replicas successfully processed by the consistency checker queue
-`queue.consistency.processingnanos` | Nanoseconds spent processing replicas in the consistency checker queue
-`queue.gc.info.abortspanconsidered` | Number of AbortSpan entries old enough to be considered for removal
-`queue.gc.info.abortspangcnum` | Number of AbortSpan entries fit for removal
-`queue.gc.info.abortspanscanned` | Number of transactions present in the AbortSpan scanned from the engine
-`queue.gc.info.intentsconsidered` | Number of 'old' intents
-`queue.gc.info.intenttxns` | Number of associated distinct transactions
-`queue.gc.info.numkeysaffected` | Number of keys with GC'able data
-`queue.gc.info.pushtxn` | Number of attempted pushes
-`queue.gc.info.resolvesuccess` | Number of successful intent resolutions
-`queue.gc.info.resolvetotal` | Number of attempted intent resolutions
-`queue.gc.info.transactionspangcaborted` | Number of GC'able entries corresponding to aborted txns
-`queue.gc.info.transactionspangccommitted` | Number of GC'able entries corresponding to committed txns
-`queue.gc.info.transactionspangcpending` | Number of GC'able entries corresponding to pending txns
-`queue.gc.info.transactionspanscanned` | Number of entries in transaction spans scanned from the engine
-`queue.gc.pending` | Number of pending replicas in the GC queue
-`queue.gc.process.failure` | Number of replicas which failed processing in the GC queue
-`queue.gc.process.success` | Number of replicas successfully processed by the GC queue
-`queue.gc.processingnanos` | Nanoseconds spent processing replicas in the GC queue
-`queue.raftlog.pending` | Number of pending replicas in the Raft log queue
-`queue.raftlog.process.failure` | Number of replicas which failed processing in the Raft log queue
-`queue.raftlog.process.success` | Number of replicas successfully processed by the Raft log queue
-`queue.raftlog.processingnanos` | Nanoseconds spent processing replicas in the Raft log queue
-`queue.raftsnapshot.pending` | Number of pending replicas in the Raft repair queue
-`queue.raftsnapshot.process.failure` | Number of replicas which failed processing in the Raft repair queue
-`queue.raftsnapshot.process.success` | Number of replicas successfully processed by the Raft repair queue
-`queue.raftsnapshot.processingnanos` | Nanoseconds spent processing replicas in the Raft repair queue
-`queue.replicagc.pending` | Number of pending replicas in the replica GC queue
-`queue.replicagc.process.failure` | Number of replicas which failed processing in the replica GC queue
-`queue.replicagc.process.success` | Number of replicas successfully processed by the replica GC queue
-`queue.replicagc.processingnanos` | Nanoseconds spent processing replicas in the replica GC queue
-`queue.replicagc.removereplica` | Number of replica removals attempted by the replica gc queue
-`queue.replicate.addreplica` | Number of replica additions attempted by the replicate queue
-`queue.replicate.pending` | Number of pending replicas in the replicate queue
-`queue.replicate.process.failure` | Number of replicas which failed processing in the replicate queue
-`queue.replicate.process.success` | Number of replicas successfully processed by the replicate queue
-`queue.replicate.processingnanos` | Nanoseconds spent processing replicas in the replicate queue
-`queue.replicate.purgatory` | Number of replicas in the replicate queue's purgatory, awaiting allocation options
-`queue.replicate.rebalancereplica` | Number of replica rebalancer-initiated additions attempted by the replicate queue
-`queue.replicate.removedeadreplica` | Number of dead replica removals attempted by the replicate queue (typically in response to a node outage)
-`queue.replicate.removereplica` | Number of replica removals attempted by the replicate queue (typically in response to a rebalancer-initiated addition)
-`queue.replicate.transferlease` | Number of range lease transfers attempted by the replicate queue
-`queue.split.pending` | Number of pending replicas in the split queue
-`queue.split.process.failure` | Number of replicas which failed processing in the split queue
-`queue.split.process.success` | Number of replicas successfully processed by the split queue
-`queue.split.processingnanos` | Nanoseconds spent processing replicas in the split queue
-`queue.tsmaintenance.pending` | Number of pending replicas in the time series maintenance queue
-`queue.tsmaintenance.process.failure` | Number of replicas which failed processing in the time series maintenance queue
-`queue.tsmaintenance.process.success` | Number of replicas successfully processed by the time series maintenance queue
-`queue.tsmaintenance.processingnanos` | Nanoseconds spent processing replicas in the time series maintenance queue
-`raft.commandsapplied` | Count of Raft commands applied
-`raft.enqueued.pending` | Number of pending outgoing messages in the Raft Transport queue
-`raft.heartbeats.pending` | Number of pending heartbeats and responses waiting to be coalesced
-`raft.process.commandcommit.latency` | Latency histogram in nanoseconds for committing Raft commands
-`raft.process.logcommit.latency` | Latency histogram in nanoseconds for committing Raft log entries
-`raft.process.tickingnanos` | Nanoseconds spent in store.processRaft() processing replica.Tick()
-`raft.process.workingnanos` | Nanoseconds spent in store.processRaft() working
-`raft.rcvd.app` | Number of MsgApp messages received by this store
-`raft.rcvd.appresp` | Number of MsgAppResp messages received by this store
-`raft.rcvd.dropped` | Number of dropped incoming Raft messages
-`raft.rcvd.heartbeat` | Number of (coalesced, if enabled) MsgHeartbeat messages received by this store
-`raft.rcvd.heartbeatresp` | Number of (coalesced, if enabled) MsgHeartbeatResp messages received by this store
-`raft.rcvd.prevote` | Number of MsgPreVote messages received by this store
-`raft.rcvd.prevoteresp` | Number of MsgPreVoteResp messages received by this store
-`raft.rcvd.prop` | Number of MsgProp messages received by this store
-`raft.rcvd.snap` | Number of MsgSnap messages received by this store
-`raft.rcvd.timeoutnow` | Number of MsgTimeoutNow messages received by this store
-`raft.rcvd.transferleader` | Number of MsgTransferLeader messages received by this store
-`raft.rcvd.vote` | Number of MsgVote messages received by this store
-`raft.rcvd.voteresp` | Number of MsgVoteResp messages received by this store
-`raft.ticks` | Number of Raft ticks queued
-`raftlog.behind` | Number of Raft log entries followers on other stores are behind
-`raftlog.truncated` | Number of Raft log entries truncated
-`range.adds` | Number of range additions
-`range.raftleadertransfers` | Number of raft leader transfers
-`range.removes` | Number of range removals
-`range.snapshots.generated` | Number of generated snapshots
-`range.snapshots.normal-applied` | Number of applied snapshots
-`range.snapshots.preemptive-applied` | Number of applied preemptive snapshots
-`range.splits` | Number of range splits
-`ranges.unavailable` | Number of ranges with fewer live replicas than needed for quorum
-`ranges.underreplicated` | Number of ranges with fewer live replicas than the replication target
-`ranges` | Number of ranges
-`rebalancing.writespersecond` | Number of keys written (i.e., applied by raft) per second to the store, averaged over a large time period as used in rebalancing decisions
-`replicas.commandqueue.combinedqueuesize` | Number of commands in all CommandQueues combined
-`replicas.commandqueue.combinedreadcount` | Number of read-only commands in all CommandQueues combined
-`replicas.commandqueue.combinedwritecount` | Number of read-write commands in all CommandQueues combined
-`replicas.commandqueue.maxoverlaps` | Largest number of overlapping commands seen when adding to any CommandQueue
-`replicas.commandqueue.maxreadcount` | Largest number of read-only commands in any CommandQueue
-`replicas.commandqueue.maxsize` | Largest number of commands in any CommandQueue
-`replicas.commandqueue.maxtreesize` | Largest number of intervals in any CommandQueue's interval tree
-`replicas.commandqueue.maxwritecount` | Largest number of read-write commands in any CommandQueue
-`replicas.leaders_not_leaseholders` | Number of replicas that are Raft leaders whose range lease is held by another store
-`replicas.leaders` | Number of raft leaders
-`replicas.leaseholders` | Number of lease holders
-`replicas.quiescent` | Number of quiesced replicas
-`replicas.reserved` | Number of replicas reserved for snapshots
-`replicas` | Number of replicas
-`requests.backpressure.split` | Number of backpressured writes waiting on a Range split
-`requests.slow.commandqueue` | Number of requests that have been stuck for a long time in the command queue
-`requests.slow.distsender` | Number of requests that have been stuck for a long time in the dist sender
-`requests.slow.lease` | Number of requests that have been stuck for a long time acquiring a lease
-`requests.slow.raft` | Number of requests that have been stuck for a long time in raft
-`rocksdb.block.cache.hits` | Count of block cache hits
-`rocksdb.block.cache.misses` | Count of block cache misses
-`rocksdb.block.cache.pinned-usage` | Bytes pinned by the block cache
-`rocksdb.block.cache.usage` | Bytes used by the block cache
-`rocksdb.bloom.filter.prefix.checked` | Number of times the bloom filter was checked
-`rocksdb.bloom.filter.prefix.useful` | Number of times the bloom filter helped avoid iterator creation
-`rocksdb.compactions` | Number of table compactions
-`rocksdb.flushes` | Number of table flushes
-`rocksdb.memtable.total-size` | Current size of memtable in bytes
-`rocksdb.num-sstables` | Number of rocksdb SSTables
-`rocksdb.read-amplification` | Number of disk reads per query
-`rocksdb.table-readers-mem-estimate` | Memory used by index and filter blocks
-`round-trip-latency` | Distribution of round-trip latencies with other nodes in nanoseconds
-`security.certificate.expiration.ca` | Expiration timestamp in seconds since Unix epoch for the CA certificate. 0 means no certificate or error.
-`security.certificate.expiration.node` | Expiration timestamp in seconds since Unix epoch for the node certificate. 0 means no certificate or error.
-`sql.bytesin` | Number of sql bytes received
-`sql.bytesout` | Number of sql bytes sent
-`sql.conns` | Number of active sql connections
-`sql.ddl.count` | Number of SQL DDL statements
-`sql.delete.count` | Number of SQL DELETE statements
-`sql.distsql.exec.latency` | Latency in nanoseconds of DistSQL statement execution
-`sql.distsql.flows.active` | Number of distributed SQL flows currently active
-`sql.distsql.flows.total` | Number of distributed SQL flows executed
-`sql.distsql.queries.active` | Number of distributed SQL queries currently active
-`sql.distsql.queries.total` | Number of distributed SQL queries executed
-`sql.distsql.select.count` | Number of DistSQL SELECT statements
-`sql.distsql.service.latency` | Latency in nanoseconds of DistSQL request execution
-`sql.exec.latency` | Latency in nanoseconds of SQL statement execution
-`sql.insert.count` | Number of SQL INSERT statements
-`sql.mem.current` | Current sql statement memory usage
-`sql.mem.distsql.current` | Current sql statement memory usage for distsql
-`sql.mem.distsql.max` | Memory usage per sql statement for distsql
-`sql.mem.max` | Memory usage per sql statement
-`sql.mem.session.current` | Current sql session memory usage
-`sql.mem.session.max` | Memory usage per sql session
-`sql.mem.txn.current` | Current sql transaction memory usage
-`sql.mem.txn.max` | Memory usage per sql transaction
-`sql.misc.count` | Number of other SQL statements
-`sql.query.count` | Number of SQL queries
-`sql.select.count` | Number of SQL SELECT statements
-`sql.service.latency` | Latency in nanoseconds of SQL request execution
-`sql.txn.abort.count` | Number of SQL transaction ABORT statements
-`sql.txn.begin.count` | Number of SQL transaction BEGIN statements
-`sql.txn.commit.count` | Number of SQL transaction COMMIT statements
-`sql.txn.rollback.count` | Number of SQL transaction ROLLBACK statements
-`sql.update.count` | Number of SQL UPDATE statements
-`sys.cgo.allocbytes` | Current bytes of memory allocated by cgo
-`sys.cgo.totalbytes` | Total bytes of memory allocated by cgo, but not released
-`sys.cgocalls` | Total number of cgo calls
-`sys.cpu.sys.ns` | Total system cpu time in nanoseconds
-`sys.cpu.sys.percent` | Current system cpu percentage
-`sys.cpu.user.ns` | Total user cpu time in nanoseconds
-`sys.cpu.user.percent` | Current user cpu percentage
-`sys.fd.open` | Process open file descriptors
-`sys.fd.softlimit` | Process open FD soft limit
-`sys.gc.count` | Total number of GC runs
-`sys.gc.pause.ns` | Total GC pause in nanoseconds
-`sys.gc.pause.percent` | Current GC pause percentage
-`sys.go.allocbytes` | Current bytes of memory allocated by go
-`sys.go.totalbytes` | Total bytes of memory allocated by go, but not released
-`sys.goroutines` | Current number of goroutines
-`sys.rss` | Current process RSS
-`sys.uptime` | Process uptime in seconds
-`sysbytes` | Number of bytes in system KV pairs
-`syscount` | Count of system KV pairs
-`timeseries.write.bytes` | Total size in bytes of metric samples written to disk
-`timeseries.write.errors` | Total errors encountered while attempting to write metrics to disk
-`timeseries.write.samples` | Total number of metric samples written to disk
-`totalbytes` | Total number of bytes taken up by keys and values including non-live data
-`tscache.skl.read.pages` | Number of pages in the read timestamp cache
-`tscache.skl.read.rotations` | Number of page rotations in the read timestamp cache
-`tscache.skl.write.pages` | Number of pages in the write timestamp cache
-`tscache.skl.write.rotations` | Number of page rotations in the write timestamp cache
-`txn.abandons` | Number of abandoned KV transactions
-`txn.aborts` | Number of aborted KV transactions
-`txn.autoretries` | Number of automatic retries to avoid serializable restarts
-`txn.commits1PC` | Number of committed one-phase KV transactions
-`txn.commits` | Number of committed KV transactions (including 1PC)
-`txn.durations` | KV transaction durations in nanoseconds
-`txn.restarts.deleterange` | Number of restarts due to a forwarded commit timestamp and a DeleteRange command
-`txn.restarts.possiblereplay` | Number of restarts due to possible replays of command batches at the storage layer
-`txn.restarts.serializable` | Number of restarts due to a forwarded commit timestamp and isolation=SERIALIZABLE
-`txn.restarts.writetooold` | Number of restarts due to a concurrent writer committing first
-`txn.restarts` | Number of restarted KV transactions
-`valbytes` | Number of bytes taken up by values
-`valcount` | Count of all values
diff --git a/src/current/_includes/v2.0/misc/available-capacity-metric.md b/src/current/_includes/v2.0/misc/available-capacity-metric.md
deleted file mode 100644
index 11511de2d37..00000000000
--- a/src/current/_includes/v2.0/misc/available-capacity-metric.md
+++ /dev/null
@@ -1 +0,0 @@
-If you are running multiple nodes on a single machine (not recommended in production) and didn't specify the maximum allocated storage capacity for each node using the [`--store`](start-a-node.html#store) flag, the capacity metrics in the Admin UI are incorrect. This is because when multiple nodes are running on a single machine, the machine's hard disk is treated as an available store for each node, while in reality, only one hard disk is available for all nodes. The total available capacity is then calculated as the hard disk size multiplied by the number of nodes on the machine.
diff --git a/src/current/_includes/v2.0/misc/aws-locations.md b/src/current/_includes/v2.0/misc/aws-locations.md
deleted file mode 100644
index 8b073c1f230..00000000000
--- a/src/current/_includes/v2.0/misc/aws-locations.md
+++ /dev/null
@@ -1,18 +0,0 @@
-| Location | SQL Statement |
-| ------ | ------ |
-| US East (N. Virginia) | `INSERT into system.locations VALUES ('region', 'us-east-1', 37.478397, -76.453077)`|
-| US East (Ohio) | `INSERT into system.locations VALUES ('region', 'us-east-2', 40.417287, -82.907123)` |
-| US West (N. California) | `INSERT into system.locations VALUES ('region', 'us-west-1', 38.837522, -120.895824)` |
-| US West (Oregon) | `INSERT into system.locations VALUES ('region', 'us-west-2', 43.804133, -120.554201)` |
-| Canada (Central) | `INSERT into system.locations VALUES ('region', 'ca-central-1', 56.130366, -106.346771)` |
-| EU (Frankfurt) | `INSERT into system.locations VALUES ('region', 'eu-central-1', 50.110922, 8.682127)` |
-| EU (Ireland) | `INSERT into system.locations VALUES ('region', 'eu-west-1', 53.142367, -7.692054)` |
-| EU (London) | `INSERT into system.locations VALUES ('region', 'eu-west-2', 51.507351, -0.127758)` |
-| EU (Paris) | `INSERT into system.locations VALUES ('region', 'eu-west-3', 48.856614, 2.352222)` |
-| Asia Pacific (Tokyo) | `INSERT into system.locations VALUES ('region', 'ap-northeast-1', 35.689487, 139.691706)` |
-| Asia Pacific (Seoul) | `INSERT into system.locations VALUES ('region', 'ap-northeast-2', 37.566535, 126.977969)` |
-| Asia Pacific (Osaka-Local) | `INSERT into system.locations VALUES ('region', 'ap-northeast-3', 34.693738, 135.502165)` |
-| Asia Pacific (Singapore) | `INSERT into system.locations VALUES ('region', 'ap-southeast-1', 1.352083, 103.819836)` |
-| Asia Pacific (Sydney) | `INSERT into system.locations VALUES ('region', 'ap-southeast-2', -33.86882, 151.209296)` |
-| Asia Pacific (Mumbai) | `INSERT into system.locations VALUES ('region', 'ap-south-1', 19.075984, 72.877656)` |
-| South America (São Paulo) | `INSERT into system.locations VALUES ('region', 'sa-east-1', -23.55052, -46.633309)` |
diff --git a/src/current/_includes/v2.0/misc/azure-locations.md b/src/current/_includes/v2.0/misc/azure-locations.md
deleted file mode 100644
index 7119ff8b7cb..00000000000
--- a/src/current/_includes/v2.0/misc/azure-locations.md
+++ /dev/null
@@ -1,30 +0,0 @@
-| Location | SQL Statement |
-| -------- | ------------- |
-| eastasia (East Asia) | `INSERT into system.locations VALUES ('region', 'eastasia', 22.267, 114.188)` |
-| southeastasia (Southeast Asia) | `INSERT into system.locations VALUES ('region', 'southeastasia', 1.283, 103.833)` |
-| centralus (Central US) | `INSERT into system.locations VALUES ('region', 'centralus', 41.5908, -93.6208)` |
-| eastus (East US) | `INSERT into system.locations VALUES ('region', 'eastus', 37.3719, -79.8164)` |
-| eastus2 (East US 2) | `INSERT into system.locations VALUES ('region', 'eastus2', 36.6681, -78.3889)` |
-| westus (West US) | `INSERT into system.locations VALUES ('region', 'westus', 37.783, -122.417)` |
-| northcentralus (North Central US) | `INSERT into system.locations VALUES ('region', 'northcentralus', 41.8819, -87.6278)` |
-| southcentralus (South Central US) | `INSERT into system.locations VALUES ('region', 'southcentralus', 29.4167, -98.5)` |
-| northeurope (North Europe) | `INSERT into system.locations VALUES ('region', 'northeurope', 53.3478, -6.2597)` |
-| westeurope (West Europe) | `INSERT into system.locations VALUES ('region', 'westeurope', 52.3667, 4.9)` |
-| japanwest (Japan West) | `INSERT into system.locations VALUES ('region', 'japanwest', 34.6939, 135.5022)` |
-| japaneast (Japan East) | `INSERT into system.locations VALUES ('region', 'japaneast', 35.68, 139.77)` |
-| brazilsouth (Brazil South) | `INSERT into system.locations VALUES ('region', 'brazilsouth', -23.55, -46.633)` |
-| australiaeast (Australia East) | `INSERT into system.locations VALUES ('region', 'australiaeast', -33.86, 151.2094)` |
-| australiasoutheast (Australia Southeast) | `INSERT into system.locations VALUES ('region', 'australiasoutheast', -37.8136, 144.9631)` |
-| southindia (South India) | `INSERT into system.locations VALUES ('region', 'southindia', 12.9822, 80.1636)` |
-| centralindia (Central India) | `INSERT into system.locations VALUES ('region', 'centralindia', 18.5822, 73.9197)` |
-| westindia (West India) | `INSERT into system.locations VALUES ('region', 'westindia', 19.088, 72.868)` |
-| canadacentral (Canada Central) | `INSERT into system.locations VALUES ('region', 'canadacentral', 43.653, -79.383)` |
-| canadaeast (Canada East) | `INSERT into system.locations VALUES ('region', 'canadaeast', 46.817, -71.217)` |
-| uksouth (UK South) | `INSERT into system.locations VALUES ('region', 'uksouth', 50.941, -0.799)` |
-| ukwest (UK West) | `INSERT into system.locations VALUES ('region', 'ukwest', 53.427, -3.084)` |
-| westcentralus (West Central US) | `INSERT into system.locations VALUES ('region', 'westcentralus', 40.890, -110.234)` |
-| westus2 (West US 2) | `INSERT into system.locations VALUES ('region', 'westus2', 47.233, -119.852)` |
-| koreacentral (Korea Central) | `INSERT into system.locations VALUES ('region', 'koreacentral', 37.5665, 126.9780)` |
-| koreasouth (Korea South) | `INSERT into system.locations VALUES ('region', 'koreasouth', 35.1796, 129.0756)` |
-| francecentral (France Central) | `INSERT into system.locations VALUES ('region', 'francecentral', 46.3772, 2.3730)` |
-| francesouth (France South) | `INSERT into system.locations VALUES ('region', 'francesouth', 43.8345, 2.1972)` |
diff --git a/src/current/_includes/v2.0/misc/basic-terms.md b/src/current/_includes/v2.0/misc/basic-terms.md
deleted file mode 100644
index cd067ce00f0..00000000000
--- a/src/current/_includes/v2.0/misc/basic-terms.md
+++ /dev/null
@@ -1,9 +0,0 @@
-Term | Definition
------|------------
-**Cluster** | Your CockroachDB deployment, which acts as a single logical application.
-**Node** | An individual machine running CockroachDB. Many nodes join together to create your cluster.
-**Range** | CockroachDB stores all user data (tables, indexes, etc.) and almost all system data in a giant sorted map of key-value pairs. This keyspace is divided into "ranges", contiguous chunks of the keyspace, so that every key can always be found in a single range.<br><br>From a SQL perspective, a table and its secondary indexes initially map to a single range, where each key-value pair in the range represents a single row in the table (also called the primary index because the table is sorted by the primary key) or a single row in a secondary index. As soon as a range reaches 64 MiB in size, it splits into two ranges. This process continues as the table and its indexes continue growing.
-**Replica** | CockroachDB replicates each range (3 times by default) and stores each replica on a different node.
-**Leaseholder** | For each range, one of the replicas holds the "range lease". This replica, referred to as the "leaseholder", is the one that receives and coordinates all read and write requests for the range.<br><br>Unlike writes, read requests access the leaseholder and send the results to the client without needing to coordinate with any of the other range replicas. This reduces the network round trips involved and is possible because the leaseholder is guaranteed to be up-to-date, since all write requests also go to the leaseholder.
-**Raft Leader** | For each range, one of the replicas is the "leader" for write requests. Via the [Raft consensus protocol](/docs/v2.0/architecture/replication-layer.html#raft), this replica ensures that a majority of replicas (the leader and enough followers) agree, based on their Raft logs, before committing the write. The Raft leader is almost always the same replica as the leaseholder.
-**Raft Log** | For each range, a time-ordered log of writes to the range that its replicas have agreed on. This log exists on-disk with each replica and is the range's source of truth for consistent replication.
diff --git a/src/current/_includes/v2.0/misc/beta-warning.md b/src/current/_includes/v2.0/misc/beta-warning.md
deleted file mode 100644
index 505ce8b03dd..00000000000
--- a/src/current/_includes/v2.0/misc/beta-warning.md
+++ /dev/null
@@ -1 +0,0 @@
-{{site.data.alerts.callout_danger}} This is a beta feature. It is currently undergoing continued testing. Please file a GitHub issue with us if you identify a bug. {{site.data.alerts.end}}
diff --git a/src/current/_includes/v2.0/misc/diagnostics-callout.html b/src/current/_includes/v2.0/misc/diagnostics-callout.html
deleted file mode 100644
index a969a8cf152..00000000000
--- a/src/current/_includes/v2.0/misc/diagnostics-callout.html
+++ /dev/null
@@ -1 +0,0 @@
-{{site.data.alerts.callout_info}}By default, each node of a CockroachDB cluster periodically shares anonymous usage details with Cockroach Labs. For an explanation of the details that get shared and how to opt-out of reporting, see Diagnostics Reporting.{{site.data.alerts.end}}
diff --git a/src/current/_includes/v2.0/misc/experimental-warning.md b/src/current/_includes/v2.0/misc/experimental-warning.md
deleted file mode 100644
index c6f3283bc8a..00000000000
--- a/src/current/_includes/v2.0/misc/experimental-warning.md
+++ /dev/null
@@ -1,3 +0,0 @@
-{{site.data.alerts.callout_danger}}
-This is an experimental feature. The interface and output are subject to change.
-{{site.data.alerts.end}}
diff --git a/src/current/_includes/v2.0/misc/external-urls.md b/src/current/_includes/v2.0/misc/external-urls.md
deleted file mode 100644
index ae91b9b76ef..00000000000
--- a/src/current/_includes/v2.0/misc/external-urls.md
+++ /dev/null
@@ -1,24 +0,0 @@
-~~~
-[scheme]://[host]/[path]?[parameters]
-~~~
-
-| Location | scheme | host | parameters |
-|----------|--------|------|------------|
-| Amazon S3 | `s3` | Bucket name | `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY` |
-| Azure | `azure` | Container name | `AZURE_ACCOUNT_KEY`, `AZURE_ACCOUNT_NAME` |
-| Google Cloud [1](#considerations) | `gs` | Bucket name | `AUTH` (optional): can be `default` or `implicit` |
-| HTTP [2](#considerations) | `http` | Remote host | N/A |
-| NFS/Local [3](#considerations) | `nodelocal` | File system location | N/A |
-| S3-compatible services [4](#considerations) | `s3` | Bucket name | `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_REGION`, `AWS_ENDPOINT` |
-
-#### Considerations
-
-- 1 If the `AUTH` parameter is `implicit`, all GCS connections use Google's [default authentication strategy](https://cloud.google.com/docs/authentication/production#providing_credentials_to_your_application). If the `AUTH` parameter is `default`, the `cloudstorage.gs.default.key` [cluster setting](cluster-settings.html) must be set to the contents of a [service account file](https://cloud.google.com/docs/authentication/production#obtaining_and_providing_service_account_credentials_manually) which will be used during authentication. If the `AUTH` parameter is not specified, the `cloudstorage.gs.default.key` setting will be used if it is non-empty, otherwise the `implicit` behavior is used.
-
-- 2 You can easily create your own HTTP server with [Caddy or nginx](create-a-file-server.html). A custom root CA can be appended to the system's default CAs by setting the `cloudstorage.http.custom_ca` [cluster setting](cluster-settings.html), which will be used when verifying certificates from HTTPS URLs.
-
-- 3 The file system backup location on the NFS drive is relative to the path specified by the `--external-io-dir` flag set while [starting the node](start-a-node.html). If the flag is set to `disabled`, then imports from local directories and NFS drives are disabled.
-
-- 4 A custom root CA can be appended to the system's default CAs by setting the `cloudstorage.http.custom_ca` [cluster setting](cluster-settings.html), which will be used when verifying certificates from an S3-compatible service.
-
-- The location parameters often contain special characters that need to be URI-encoded. Use JavaScript's [encodeURIComponent](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/encodeURIComponent) function or Go language's [url.QueryEscape](https://golang.org/pkg/net/url/#QueryEscape) function to URI-encode the parameters. Other languages provide similar functions to URI-encode special characters.
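-
-As a sketch, a fully formed Amazon S3 URL with URI-encoded parameters might look as follows in a `BACKUP` statement (bucket, path, and credentials are hypothetical):
-
-~~~ sql
-> BACKUP DATABASE test TO 's3://acme-co/backups/test?AWS_ACCESS_KEY_ID=AKIAXXXXXXXX&AWS_SECRET_ACCESS_KEY=url%2Fencoded%2Bsecret';
-~~~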
diff --git a/src/current/_includes/v2.0/misc/gce-locations.md b/src/current/_includes/v2.0/misc/gce-locations.md
deleted file mode 100644
index 22ba06c81e2..00000000000
--- a/src/current/_includes/v2.0/misc/gce-locations.md
+++ /dev/null
@@ -1,17 +0,0 @@
-| Location | SQL Statement |
-| ------ | ------ |
-| us-east1 (South Carolina) | `INSERT into system.locations VALUES ('region', 'us-east1', 33.836082, -81.163727)` |
-| us-east4 (N. Virginia) | `INSERT into system.locations VALUES ('region', 'us-east4', 37.478397, -76.453077)` |
-| us-central1 (Iowa) | `INSERT into system.locations VALUES ('region', 'us-central1', 42.032974, -93.581543)` |
-| us-west1 (Oregon) | `INSERT into system.locations VALUES ('region', 'us-west1', 43.804133, -120.554201)` |
-| northamerica-northeast1 (Montreal) | `INSERT into system.locations VALUES ('region', 'northamerica-northeast1', 56.130366, -106.346771)` |
-| europe-west1 (Belgium) | `INSERT into system.locations VALUES ('region', 'europe-west1', 50.44816, 3.81886)` |
-| europe-west3 (Frankfurt) | `INSERT into system.locations VALUES ('region', 'europe-west3', 50.110922, 8.682127)` |
-| europe-west4 (Netherlands) | `INSERT into system.locations VALUES ('region', 'europe-west4', 53.4386, 6.8355)` |
-| europe-west2 (London) | `INSERT into system.locations VALUES ('region', 'europe-west2', 51.507351, -0.127758)` |
-| asia-east1 (Taiwan) | `INSERT into system.locations VALUES ('region', 'asia-east1', 24.0717, 120.5624)` |
-| asia-northeast1 (Tokyo) | `INSERT into system.locations VALUES ('region', 'asia-northeast1', 35.689487, 139.691706)` |
-| asia-southeast1 (Singapore) | `INSERT into system.locations VALUES ('region', 'asia-southeast1', 1.352083, 103.819836)` |
-| australia-southeast1 (Sydney) | `INSERT into system.locations VALUES ('region', 'australia-southeast1', -33.86882, 151.209296)` |
-| asia-south1 (Mumbai) | `INSERT into system.locations VALUES ('region', 'asia-south1', 19.075984, 72.877656)` |
-| southamerica-east1 (São Paulo) | `INSERT into system.locations VALUES ('region', 'southamerica-east1', -23.55052, -46.633309)` |
diff --git a/src/current/_includes/v2.0/misc/linux-binary-prereqs.md b/src/current/_includes/v2.0/misc/linux-binary-prereqs.md
deleted file mode 100644
index b470603bb66..00000000000
--- a/src/current/_includes/v2.0/misc/linux-binary-prereqs.md
+++ /dev/null
@@ -1 +0,0 @@
-The CockroachDB binary for Linux requires glibc and libncurses, which are found by default on nearly all Linux distributions, with Alpine as the notable exception.
diff --git a/src/current/_includes/v2.0/misc/logging-flags.md b/src/current/_includes/v2.0/misc/logging-flags.md
deleted file mode 100644
index 06af86228ee..00000000000
--- a/src/current/_includes/v2.0/misc/logging-flags.md
+++ /dev/null
@@ -1,9 +0,0 @@
-Flag | Description
------|------------
-`--log-dir` | Enable logging to files and write logs to the specified directory.<br><br>Setting `--log-dir` to a blank directory (`--log-dir=`) disables logging to files. Do not use `--log-dir=""`; this creates a new directory named `""` and stores log files in that directory.
-`--log-dir-max-size` | After the log directory reaches the specified size, delete the oldest log file. The flag's argument takes standard file sizes, such as `--log-dir-max-size=1GiB`.<br><br>**Default:** 100MiB
-`--log-file-max-size` | After logs reach the specified size, begin writing logs to a new file. The flag's argument takes standard file sizes, such as `--log-file-max-size=2MiB`.<br><br>**Default:** 10MiB
-`--log-file-verbosity` | Only write messages to log files if they are at or above the specified [severity level](debug-and-error-logs.html#severity-levels), such as `--log-file-verbosity=WARNING`. **Requires** logging to files.<br><br>**Default:** `INFO`
-`--logtostderr` | Enable logging to `stderr` for messages at or above the specified [severity level](debug-and-error-logs.html#severity-levels), such as `--logtostderr=ERROR`.<br><br>If you use this flag without specifying the severity level (e.g., `cockroach start --logtostderr`), it prints messages of *all* severities to `stderr`.<br><br>Setting `--logtostderr=NONE` disables logging to `stderr`.
-`--no-color` | Do not colorize `stderr`. Possible values: `true` or `false`.<br><br>When set to `false`, messages logged to `stderr` are colorized based on [severity level](debug-and-error-logs.html#severity-levels).<br><br>**Default:** `false`
-`--sql-audit-dir` | New in v2.0: If non-empty, create a SQL audit log in this directory. By default, SQL audit logs are written in the same directory as the other logs generated by CockroachDB. For more information, see [SQL Audit Logging](sql-audit-logging.html).
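-
-Taken together, a node might be started with capped file logging and only errors mirrored to `stderr` (an illustrative sketch; the directory and sizes are placeholders):
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach start --insecure \
---log-dir=/var/log/cockroach \
---log-file-max-size=2MiB \
---logtostderr=ERROR \
---background
-~~~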
diff --git a/src/current/_includes/v2.0/misc/remove-user-callout.html b/src/current/_includes/v2.0/misc/remove-user-callout.html
deleted file mode 100644
index 086d27509fc..00000000000
--- a/src/current/_includes/v2.0/misc/remove-user-callout.html
+++ /dev/null
@@ -1 +0,0 @@
-Removing a user does not remove that user's privileges. Therefore, to prevent a future user with an identical username from inheriting an old user's privileges, it's important to revoke a user's privileges before or after removing the user.
diff --git a/src/current/_includes/v2.0/misc/schema-change-view-job.md b/src/current/_includes/v2.0/misc/schema-change-view-job.md
deleted file mode 100644
index 1e9b4a7444e..00000000000
--- a/src/current/_includes/v2.0/misc/schema-change-view-job.md
+++ /dev/null
@@ -1 +0,0 @@
-Whenever you initiate a schema change, CockroachDB registers it as a job, which you can view with [`SHOW JOBS`](show-jobs.html).
diff --git a/src/current/_includes/v2.0/orchestration/initialize-cluster-insecure.md b/src/current/_includes/v2.0/orchestration/initialize-cluster-insecure.md
deleted file mode 100644
index 1b374e6dbf9..00000000000
--- a/src/current/_includes/v2.0/orchestration/initialize-cluster-insecure.md
+++ /dev/null
@@ -1,40 +0,0 @@
-1. Use our [`cluster-init.yaml`](https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cluster-init.yaml) file to perform a one-time initialization that joins the nodes into a single cluster:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl create -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cluster-init.yaml
- ~~~
-
- ~~~
- job "cluster-init" created
- ~~~
-
-2. Confirm that cluster initialization has completed successfully. The job
- should be considered successful and the CockroachDB pods should soon be
- considered `Ready`:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl get job cluster-init
- ~~~
-
- ~~~
- NAME DESIRED SUCCESSFUL AGE
- cluster-init 1 1 2m
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl get pods
- ~~~
-
- ~~~
- NAME READY STATUS RESTARTS AGE
- cockroachdb-0 1/1 Running 0 3m
- cockroachdb-1 1/1 Running 0 3m
- cockroachdb-2 1/1 Running 0 3m
- ~~~
-
-{{site.data.alerts.callout_success}}
-The StatefulSet configuration sets all CockroachDB nodes to log to `stderr`, so if you ever need access to a pod/node's logs to troubleshoot, use `kubectl logs <podname>` rather than checking the log on the persistent volume.
-{{site.data.alerts.end}}
diff --git a/src/current/_includes/v2.0/orchestration/kubernetes-limitations.md b/src/current/_includes/v2.0/orchestration/kubernetes-limitations.md
deleted file mode 100644
index 00c6c0fdd21..00000000000
--- a/src/current/_includes/v2.0/orchestration/kubernetes-limitations.md
+++ /dev/null
@@ -1,7 +0,0 @@
-#### Kubernetes version
-
-Kubernetes 1.18 or higher is required in order to use our most up-to-date configuration files. Earlier Kubernetes releases do not support some of the options used in our configuration files. If you need to run on an older version of Kubernetes, we have kept around configuration files that work on older Kubernetes releases in the versioned subdirectories of [https://github.com/cockroachdb/cockroach/tree/master/cloud/kubernetes](https://github.com/cockroachdb/cockroach/tree/master/cloud/kubernetes) (e.g., [v1.7](https://github.com/cockroachdb/cockroach/tree/master/cloud/kubernetes/v1.7)).
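-
-To confirm which version your client and cluster are running (a quick sanity check):
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ kubectl version
-~~~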
-
-#### Storage
-
-At this time, orchestrations of CockroachDB with Kubernetes use external persistent volumes that are often replicated by the provider. Because CockroachDB already replicates data automatically, this additional layer of replication is unnecessary and can negatively impact performance. High-performance use cases on a private Kubernetes cluster may want to consider using [local volumes](https://kubernetes.io/docs/concepts/storage/volumes/#local).
diff --git a/src/current/_includes/v2.0/orchestration/kubernetes-prometheus-alertmanager.md b/src/current/_includes/v2.0/orchestration/kubernetes-prometheus-alertmanager.md
deleted file mode 100644
index bcea917fd34..00000000000
--- a/src/current/_includes/v2.0/orchestration/kubernetes-prometheus-alertmanager.md
+++ /dev/null
@@ -1,189 +0,0 @@
-Despite CockroachDB's various [built-in safeguards against failure](high-availability.html), it is critical to actively monitor the overall health and performance of a cluster running in production and to create alerting rules that promptly send notifications when there are events that require investigation or intervention.
-
-### Configure Prometheus
-
-Every node of a CockroachDB cluster exports granular timeseries metrics formatted for easy integration with [Prometheus](https://prometheus.io/), an open source tool for storing, aggregating, and querying timeseries data. This section shows you how to orchestrate Prometheus as part of your Kubernetes cluster and pull these metrics into Prometheus for external monitoring.
-
-This guidance is based on [CoreOS's Prometheus Operator](https://github.com/coreos/prometheus-operator/blob/master/Documentation/user-guides/getting-started.md), which allows a Prometheus instance to be managed using built-in Kubernetes concepts.
-
-
-{{site.data.alerts.callout_info}}
-Before starting, make sure the email address associated with your Google Cloud account is part of the `cluster-admin` RBAC group, as shown in [Step 1. Start Kubernetes](#step-1-start-kubernetes).
-{{site.data.alerts.end}}
-
-
-1. From your local workstation, edit the `cockroachdb` service to add the `prometheus: cockroachdb` label:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl label svc cockroachdb prometheus=cockroachdb
- ~~~
-
- ~~~
- service "cockroachdb" labeled
- ~~~
-
-    This ensures that there is a Prometheus job and monitoring data only for the `cockroachdb` service, not for the `cockroachdb-public` service.
-
-2. Install [CoreOS's Prometheus Operator](https://raw.githubusercontent.com/coreos/prometheus-operator/release-0.20/bundle.yaml):
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl apply -f https://raw.githubusercontent.com/coreos/prometheus-operator/release-0.20/bundle.yaml
- ~~~
-
- ~~~
- clusterrolebinding "prometheus-operator" created
- clusterrole "prometheus-operator" created
- serviceaccount "prometheus-operator" created
- deployment "prometheus-operator" created
- ~~~
-
-3. Confirm that the `prometheus-operator` has started:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl get deploy prometheus-operator
- ~~~
-
- ~~~
- NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
- prometheus-operator 1 1 1 1 1m
- ~~~
-
-4. Use our [`prometheus.yaml`](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/prometheus/prometheus.yaml) file to create the various objects necessary to run a Prometheus instance:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl apply -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/prometheus/prometheus.yaml
- ~~~
-
- ~~~
- clusterrole "prometheus" created
- clusterrolebinding "prometheus" created
- servicemonitor "cockroachdb" created
- prometheus "cockroachdb" created
- ~~~
-
-5. Access the Prometheus UI locally and verify that CockroachDB is feeding data into Prometheus:
-
- 1. Port-forward from your local machine to the pod running Prometheus:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl port-forward prometheus-cockroachdb-0 9090
- ~~~
-
- 2. Go to http://localhost:9090 in your browser.
-
- 3. To verify that each CockroachDB node is connected to Prometheus, go to **Status > Targets**. The screen should look like this:
-
-
-
-    4. To verify that data is being collected, go to **Graph**, enter the `sys_uptime` variable in the field, click **Execute**, and then click the **Graph** tab. The screen should look like this:
-
-
-
- {{site.data.alerts.callout_success}}
- Prometheus auto-completes CockroachDB time series metrics for you, but if you want to see a full listing, with descriptions, port-forward as described in {% if page.secure == true %}[Access the Admin UI](#step-6-access-the-admin-ui){% else %}[Access the Admin UI](#step-5-access-the-admin-ui){% endif %} and then point your browser to http://localhost:8080/_status/vars.
-
- For more details on using the Prometheus UI, see their [official documentation](https://prometheus.io/docs/introduction/getting_started/).
- {{site.data.alerts.end}}
-
-### Configure Alertmanager
-
-Active monitoring helps you spot problems early, but it is also essential to send notifications when there are events that require investigation or intervention. This section shows you how to use [Alertmanager](https://prometheus.io/docs/alerting/alertmanager/) and CockroachDB's starter [alerting rules](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/prometheus/alert-rules.yaml) to do this.
-
-1. Download our `alertmanager-config.yaml` configuration file.
-
-2. Edit the `alertmanager-config.yaml` file to [specify the desired receivers for notifications](https://prometheus.io/docs/alerting/configuration/). Initially, the file contains a placeholder web hook.
-
-3. Add this configuration to the Kubernetes cluster as a secret, renaming it to `alertmanager.yaml` and labelling it to make it easier to find:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl create secret generic alertmanager-cockroachdb --from-file=alertmanager.yaml=alertmanager-config.yaml
- ~~~
-
- ~~~
- secret "alertmanager-cockroachdb" created
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl label secret alertmanager-cockroachdb app=cockroachdb
- ~~~
-
- ~~~
- secret "alertmanager-cockroachdb" labeled
- ~~~
-
- {{site.data.alerts.callout_danger}}
-    The name of the secret, `alertmanager-cockroachdb`, must match the name used in the `alertmanager.yaml` file. If they differ, the Alertmanager instance will start without configuration, and nothing will happen.
- {{site.data.alerts.end}}
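-
-    To double-check that the secret exists with the expected name and label, you can list secrets by label (a sketch):
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ kubectl get secrets -l app=cockroachdb
-    ~~~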
-
-4. Use our [`alertmanager.yaml`](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/prometheus/alertmanager.yaml) file to create the various objects necessary to run an Alertmanager instance, including a ClusterIP service so that Prometheus can forward alerts:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl apply -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/prometheus/alertmanager.yaml
- ~~~
-
- ~~~
- alertmanager "cockroachdb" created
- service "alertmanager-cockroachdb" created
- ~~~
-
-5. Verify that Alertmanager is running:
-
- 1. Port-forward from your local machine to the pod running Alertmanager:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl port-forward alertmanager-cockroachdb-0 9093
- ~~~
-
- 2. Go to http://localhost:9093 in your browser. The screen should look like this:
-
-
-
-6. Ensure that the Alertmanagers are visible to Prometheus by opening http://localhost:9090/status. The screen should look like this:
-
-
-
-7. Add CockroachDB's starter [alerting rules](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/prometheus/alert-rules.yaml):
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl apply -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/prometheus/alert-rules.yaml
- ~~~
-
- ~~~
- prometheusrule "prometheus-cockroachdb-rules" created
- ~~~
-
-8. Ensure that the rules are visible to Prometheus by opening http://localhost:9090/rules. The screen should look like this:
-
-
-
-9. Verify that the example alert is firing by opening http://localhost:9090/alerts. The screen should look like this:
-
-
-
-10. To remove the example alert:
-
- 1. Use the `kubectl edit` command to open the rules for editing:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl edit prometheusrules prometheus-cockroachdb-rules
- ~~~
-
- 2. Remove the `dummy.rules` block and save the file:
-
- ~~~
- - name: rules/dummy.rules
- rules:
- - alert: TestAlertManager
- expr: vector(1)
- ~~~
diff --git a/src/current/_includes/v2.0/orchestration/kubernetes-scale-cluster.md b/src/current/_includes/v2.0/orchestration/kubernetes-scale-cluster.md
deleted file mode 100644
index 75c6b278ac2..00000000000
--- a/src/current/_includes/v2.0/orchestration/kubernetes-scale-cluster.md
+++ /dev/null
@@ -1,17 +0,0 @@
-The Kubernetes cluster we created contains 3 nodes that pods can be run on. To ensure that you do not have two pods on the same node (as recommended in our [production best practices](recommended-production-settings.html)), you need to add a new node and then edit your StatefulSet configuration to add another pod.
-
-1. Add a worker node:
- - On GKE, [resize your cluster](https://cloud.google.com/kubernetes-engine/docs/how-to/resizing-a-cluster).
- - On GCE, resize your [Managed Instance Group](https://cloud.google.com/compute/docs/instance-groups/).
- - On AWS, resize your [Auto Scaling Group](https://docs.aws.amazon.com/autoscaling/latest/userguide/as-manual-scaling.html).
-
-2. Use the `kubectl scale` command to add a pod to your StatefulSet:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl scale statefulset cockroachdb --replicas=4
- ~~~
-
- ~~~
- statefulset "cockroachdb" scaled
- ~~~
diff --git a/src/current/_includes/v2.0/orchestration/kubernetes-simulate-failure.md b/src/current/_includes/v2.0/orchestration/kubernetes-simulate-failure.md
deleted file mode 100644
index e3b2fd5c080..00000000000
--- a/src/current/_includes/v2.0/orchestration/kubernetes-simulate-failure.md
+++ /dev/null
@@ -1,28 +0,0 @@
-Based on the `replicas: 3` line in the StatefulSet configuration, Kubernetes ensures that three pods/nodes are running at all times. When a pod/node fails, Kubernetes automatically creates another pod/node with the same network identity and persistent storage.
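-
-You can confirm the desired and current replica counts at any time (a sketch):
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ kubectl get statefulset cockroachdb
-~~~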
-
-To see this in action:
-
-1. Terminate one of the CockroachDB nodes:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl delete pod cockroachdb-2
- ~~~
-
- ~~~
- pod "cockroachdb-2" deleted
- ~~~
-
-2. In the Admin UI, the **Summary** panel will soon show one node as **Suspect**. As Kubernetes auto-restarts the node, watch how the node once again becomes healthy.
-
-3. Back in the terminal, verify that the pod was automatically restarted:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl get pod cockroachdb-2
- ~~~
-
- ~~~
- NAME READY STATUS RESTARTS AGE
- cockroachdb-2 1/1 Running 0 12s
- ~~~
diff --git a/src/current/_includes/v2.0/orchestration/kubernetes-upgrade-cluster.md b/src/current/_includes/v2.0/orchestration/kubernetes-upgrade-cluster.md
deleted file mode 100644
index 25fd2eb716a..00000000000
--- a/src/current/_includes/v2.0/orchestration/kubernetes-upgrade-cluster.md
+++ /dev/null
@@ -1,43 +0,0 @@
-As new versions of CockroachDB are released, it's strongly recommended to upgrade to newer versions in order to pick up bug fixes, performance improvements, and new features. The [general CockroachDB upgrade documentation](upgrade-cockroach-version.html) provides best practices for how to prepare for and execute upgrades of CockroachDB clusters, but the mechanism of actually stopping and restarting processes in Kubernetes is somewhat special.
-
-Kubernetes knows how to carry out a safe rolling upgrade process of the CockroachDB nodes. When you tell it to change the Docker image used in the CockroachDB StatefulSet, Kubernetes will go one-by-one, stopping a node, restarting it with the new image, and waiting for it to be ready to receive client requests before moving on to the next one. For more information, see [the Kubernetes documentation](https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#updating-statefulsets).
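-
-You can also follow the rolling restart as it progresses (a sketch; `kubectl rollout status` supports StatefulSets on recent Kubernetes releases):
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ kubectl rollout status statefulset cockroachdb
-~~~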
-
-1. Kick off the process by changing the desired Docker image. Pick the version that you want to upgrade to, then run the following command, replacing "VERSION" with your desired new version:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl patch statefulset cockroachdb --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/image", "value":"cockroachdb/cockroach:VERSION"}]'
- ~~~
-
- ~~~
- statefulset "cockroachdb" patched
- ~~~
-
-2. If you then check the status of your cluster's pods, you should see one of them being restarted:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl get pods
- ~~~
-
- ~~~
- NAME READY STATUS RESTARTS AGE
- cockroachdb-0 1/1 Running 0 2m
- cockroachdb-1 1/1 Running 0 2m
- cockroachdb-2 1/1 Running 0 2m
- cockroachdb-3 0/1 Terminating 0 1m
- ~~~
-
-3. This will continue until all of the pods have restarted and are running the new image. To check the image of each pod and determine whether they've all been upgraded, run:
-
- {% include copy-clipboard.html %}
- ~~~ shell
-    $ kubectl get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}{end}'
- ~~~
-
- ~~~
- cockroachdb-0 cockroachdb/cockroach:{{page.release_info.version}}
- cockroachdb-1 cockroachdb/cockroach:{{page.release_info.version}}
- cockroachdb-2 cockroachdb/cockroach:{{page.release_info.version}}
- cockroachdb-3 cockroachdb/cockroach:{{page.release_info.version}}
- ~~~
diff --git a/src/current/_includes/v2.0/orchestration/monitor-cluster.md b/src/current/_includes/v2.0/orchestration/monitor-cluster.md
deleted file mode 100644
index 4db8e9058e0..00000000000
--- a/src/current/_includes/v2.0/orchestration/monitor-cluster.md
+++ /dev/null
@@ -1,28 +0,0 @@
-To access the cluster's [Admin UI](admin-ui-overview.html):
-
-1. Port-forward from your local machine to one of the pods:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl port-forward cockroachdb-0 8080
- ~~~
-
- ~~~
- Forwarding from 127.0.0.1:8080 -> 8080
- ~~~
-
- {{site.data.alerts.callout_info}}The port-forward command must be run on the same machine as the web browser in which you want to view the Admin UI. If you have been running these commands from a cloud instance or other non-local shell, you will not be able to view the UI without configuring kubectl locally and running the above port-forward command on your local machine.{{site.data.alerts.end}}
-
-{% if page.secure == true %}
-
-2. Go to https://localhost:8080.
-
-{% else %}
-
-2. Go to http://localhost:8080.
-
-{% endif %}
-
-3. In the UI, verify that the cluster is running as expected:
- - Click **View nodes list** on the right to ensure that all nodes successfully joined the cluster.
- - Click the **Databases** tab on the left to verify that `bank` is listed.
diff --git a/src/current/_includes/v2.0/orchestration/start-cluster.md b/src/current/_includes/v2.0/orchestration/start-cluster.md
deleted file mode 100644
index 90d820c0c6c..00000000000
--- a/src/current/_includes/v2.0/orchestration/start-cluster.md
+++ /dev/null
@@ -1,103 +0,0 @@
-{% if page.secure == true %}
-
-From your local workstation, use our [`cockroachdb-statefulset-secure.yaml`](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/cockroachdb-statefulset-secure.yaml) file to create the StatefulSet that automatically creates 3 pods, each with a CockroachDB node running inside it:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ kubectl create -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cockroachdb-statefulset-secure.yaml
-~~~
-
-~~~
-serviceaccount "cockroachdb" created
-role "cockroachdb" created
-clusterrole "cockroachdb" created
-rolebinding "cockroachdb" created
-clusterrolebinding "cockroachdb" created
-service "cockroachdb-public" created
-service "cockroachdb" created
-poddisruptionbudget "cockroachdb-budget" created
-statefulset "cockroachdb" created
-~~~
-
-Alternatively, if you'd rather start with a configuration file that has been customized for performance:
-
-1. Download our [performance version of `cockroachdb-statefulset-secure.yaml`](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/performance/cockroachdb-statefulset-secure.yaml):
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ curl -O https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/performance/cockroachdb-statefulset-secure.yaml
- ~~~
-
-2. Modify the file wherever there is a `TODO` comment.
-
-3. Use the file to create the StatefulSet and start the cluster:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl create -f cockroachdb-statefulset-secure.yaml
- ~~~
-
-{% else %}
-
-1. From your local workstation, use our [`cockroachdb-statefulset.yaml`](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/cockroachdb-statefulset.yaml) file to create the StatefulSet that automatically creates 3 pods, each with a CockroachDB node running inside it:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl create -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cockroachdb-statefulset.yaml
- ~~~
-
- ~~~
- service "cockroachdb-public" created
- service "cockroachdb" created
- poddisruptionbudget "cockroachdb-budget" created
- statefulset "cockroachdb" created
- ~~~
-
- Alternatively, if you'd rather start with a configuration file that has been customized for performance:
-
- 1. Download our [performance version of `cockroachdb-statefulset-insecure.yaml`](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/performance/cockroachdb-statefulset-insecure.yaml):
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ curl -O https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/performance/cockroachdb-statefulset-insecure.yaml
- ~~~
-
- 2. Modify the file wherever there is a `TODO` comment.
-
- 3. Use the file to create the StatefulSet and start the cluster:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl create -f cockroachdb-statefulset-insecure.yaml
- ~~~
-
-2. Confirm that three pods are `Running` successfully. Note that they will not
- be considered `Ready` until after the cluster has been initialized:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl get pods
- ~~~
-
- ~~~
- NAME READY STATUS RESTARTS AGE
- cockroachdb-0 0/1 Running 0 2m
- cockroachdb-1 0/1 Running 0 2m
- cockroachdb-2 0/1 Running 0 2m
- ~~~
-
-3. Confirm that the persistent volumes and corresponding claims were created successfully for all three pods:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl get persistentvolumes
- ~~~
-
- ~~~
- NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM REASON AGE
- pvc-52f51ecf-8bd5-11e6-a4f4-42010a800002 1Gi RWO Delete Bound default/datadir-cockroachdb-0 26s
- pvc-52fd3a39-8bd5-11e6-a4f4-42010a800002 1Gi RWO Delete Bound default/datadir-cockroachdb-1 27s
- pvc-5315efda-8bd5-11e6-a4f4-42010a800002 1Gi RWO Delete Bound default/datadir-cockroachdb-2 27s
- ~~~
-
-{% endif %}
diff --git a/src/current/_includes/v2.0/orchestration/start-kubernetes.md b/src/current/_includes/v2.0/orchestration/start-kubernetes.md
deleted file mode 100644
index 131890e81ba..00000000000
--- a/src/current/_includes/v2.0/orchestration/start-kubernetes.md
+++ /dev/null
@@ -1,79 +0,0 @@
-Choose whether you want to orchestrate CockroachDB with Kubernetes using the hosted Google Kubernetes Engine (GKE) service or manually on Google Compute Engine (GCE) or AWS. The instructions below will change slightly depending on your choice.
-
-
-
-
-
-
-
-
-
-1. Complete the **Before You Begin** steps described in the [Google Kubernetes Engine Quickstart](https://cloud.google.com/kubernetes-engine/docs/quickstart) documentation.
-
- This includes installing `gcloud`, which is used to create and delete Kubernetes Engine clusters, and `kubectl`, which is the command-line tool used to manage Kubernetes from your workstation.
-
- {{site.data.alerts.callout_success}}
- The documentation offers the choice of using Google's Cloud Shell product or using a local shell on your machine. Choose to use a local shell if you want to be able to view the CockroachDB Admin UI using the steps in this guide.
- {{site.data.alerts.end}}
-
-2. From your local workstation, start the Kubernetes cluster:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ gcloud container clusters create cockroachdb
- ~~~
-
- ~~~
- Creating cluster cockroachdb...done.
- ~~~
-
- This creates GKE instances and joins them into a single Kubernetes cluster named `cockroachdb`.
-
- The process can take a few minutes, so do not move on to the next step until you see a `Creating cluster cockroachdb...done` message and details about your cluster.
-
-3. Get the email address associated with your Google Cloud account:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ gcloud info | grep Account
- ~~~
-
- ~~~
- Account: [your.google.cloud.email@example.org]
- ~~~
-
- {{site.data.alerts.callout_danger}}
-    This command returns your email address in all lowercase. However, in the next step, you must enter the address with the exact capitalization. For example, if your address is YourName@example.com, you must use YourName@example.com and not yourname@example.com.
- {{site.data.alerts.end}}
-
-4. [Create the RBAC roles](https://cloud.google.com/kubernetes-engine/docs/how-to/role-based-access-control#prerequisites_for_using_role-based_access_control) CockroachDB needs for running on GKE, using the address from the previous step:
-
- {% include copy-clipboard.html %}
- ~~~ shell
-    $ kubectl create clusterrolebinding $USER-cluster-admin-binding --clusterrole=cluster-admin --user=<your.google.cloud.email@example.org>
- ~~~
-
- ~~~
- clusterrolebinding "cluster-admin-binding" created
- ~~~
-
-
-
-
-
-From your local workstation, install prerequisites and start a Kubernetes cluster as described in the [Running Kubernetes on Google Compute Engine](https://v1-18.docs.kubernetes.io/docs/setup/production-environment/turnkey/gce/) documentation.
-
-The process includes:
-
-- Creating a Google Cloud Platform account, installing `gcloud`, and other prerequisites.
-- Downloading and installing the latest Kubernetes release.
-- Creating GCE instances and joining them into a single Kubernetes cluster.
-- Installing `kubectl`, the command-line tool used to manage Kubernetes from your workstation.
-
-
-
-
-
-From your local workstation, install prerequisites and start a Kubernetes cluster as described in the [Running Kubernetes on AWS EC2](https://v1-18.docs.kubernetes.io/docs/setup/production-environment/turnkey/aws/) documentation.
-
-
diff --git a/src/current/_includes/v2.0/orchestration/stop-kubernetes.md b/src/current/_includes/v2.0/orchestration/stop-kubernetes.md
deleted file mode 100644
index 264eba07fa8..00000000000
--- a/src/current/_includes/v2.0/orchestration/stop-kubernetes.md
+++ /dev/null
@@ -1,28 +0,0 @@
-
-
- {{site.data.alerts.callout_danger}}If you stop Kubernetes without first deleting the persistent volumes, they will still exist in your cloud project.{{site.data.alerts.end}}
diff --git a/src/current/_includes/v2.0/orchestration/test-cluster-insecure.md b/src/current/_includes/v2.0/orchestration/test-cluster-insecure.md
deleted file mode 100644
index 52396b848ad..00000000000
--- a/src/current/_includes/v2.0/orchestration/test-cluster-insecure.md
+++ /dev/null
@@ -1,45 +0,0 @@
-1. Launch a temporary interactive pod and start the [built-in SQL client](use-the-built-in-sql-client.html) inside it:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl run cockroachdb -it --image=cockroachdb/cockroach --rm --restart=Never \
- -- sql --insecure --host=cockroachdb-public
- ~~~
-
-2. Run some basic [CockroachDB SQL statements](learn-cockroachdb-sql.html):
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > CREATE DATABASE bank;
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > CREATE TABLE bank.accounts (id INT PRIMARY KEY, balance DECIMAL);
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > INSERT INTO bank.accounts VALUES (1, 1000.50);
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > SELECT * FROM bank.accounts;
- ~~~
-
- ~~~
- +----+---------+
- | id | balance |
- +----+---------+
- | 1 | 1000.5 |
- +----+---------+
- (1 row)
- ~~~
-
-3. Exit the SQL shell and delete the temporary pod:
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > \q
- ~~~
diff --git a/src/current/_includes/v2.0/performance/tuning.py b/src/current/_includes/v2.0/performance/tuning.py
deleted file mode 100644
index 248daec2488..00000000000
--- a/src/current/_includes/v2.0/performance/tuning.py
+++ /dev/null
@@ -1,56 +0,0 @@
-#!/usr/bin/env python
-
-import argparse
-import psycopg2
-import time
-
-parser = argparse.ArgumentParser(
- description="test performance of statements against movr database")
-parser.add_argument("--host", required=True,
- help="ip address of one of the CockroachDB nodes")
-parser.add_argument("--statement", required=True,
- help="statement to execute")
-parser.add_argument("--repeat", type=int,
- help="number of times to repeat the statement", default = 20)
-parser.add_argument("--times",
- help="print time for each repetition of the statement", action="store_true")
-parser.add_argument("--cumulative",
- help="print cumulative time for all repetitions of the statement", action="store_true")
-args = parser.parse_args()
-
-conn = psycopg2.connect(database='movr', user='root', host=args.host, port=26257)
-conn.set_session(autocommit=True)
-cur = conn.cursor()
-
-times = list()
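-# Run the statement args.repeat times; print the result set on the first run only, and record each run's latency in milliseconds.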
-for n in range(args.repeat):
- start = time.time()
- statement = args.statement
- cur.execute(statement)
- if n < 1:
- if cur.description is not None:
- colnames = [desc[0] for desc in cur.description]
- print("")
- print("Result:")
- print(colnames)
- rows = cur.fetchall()
- for row in rows:
- print([str(cell) for cell in row])
- end = time.time()
- times.append((end - start)* 1000)
-
-cur.close()
-conn.close()
-
-print("")
-if args.times:
- print("Times (milliseconds):")
- print(times)
- print("")
-print("Average time (milliseconds):")
-print(float(sum(times))/len(times))
-print("")
-if args.cumulative:
- print("Cumulative time (milliseconds):")
- print(sum(times))
- print("")
diff --git a/src/current/_includes/v2.0/prod-deployment/insecure-initialize-cluster.md b/src/current/_includes/v2.0/prod-deployment/insecure-initialize-cluster.md
deleted file mode 100644
index 5d1384c8467..00000000000
--- a/src/current/_includes/v2.0/prod-deployment/insecure-initialize-cluster.md
+++ /dev/null
@@ -1,12 +0,0 @@
-On your local machine, complete the node startup process and have them join together as a cluster:
-
-1. [Install CockroachDB](install-cockroachdb.html) on your local machine, if you haven't already.
-
-2. Run the [`cockroach init`](initialize-a-cluster.html) command, with the `--host` flag set to the address of any node:
-
- {% include copy-clipboard.html %}
- ~~~ shell
-    $ cockroach init --insecure --host=<address of any node>
- ~~~
-
- Each node then prints helpful details to the [standard output](start-a-node.html#standard-output), such as the CockroachDB version, the URL for the admin UI, and the SQL URL for clients.
diff --git a/src/current/_includes/v2.0/prod-deployment/insecure-recommendations.md b/src/current/_includes/v2.0/prod-deployment/insecure-recommendations.md
deleted file mode 100644
index e6f7fc0b9fe..00000000000
--- a/src/current/_includes/v2.0/prod-deployment/insecure-recommendations.md
+++ /dev/null
@@ -1,15 +0,0 @@
-- If you plan to use CockroachDB in production, carefully review the [Production Checklist](recommended-production-settings.html).
-
-- Consider using a [secure cluster](manual-deployment.html) instead. Using an insecure cluster comes with risks:
- - Your cluster is open to any client that can access any node's IP addresses.
- - Any user, even `root`, can log in without providing a password.
- - Any user, connecting as `root`, can read or write any data in your cluster.
- - There is no network encryption or authentication, and thus no confidentiality.
-
-- Decide how you want to access your Admin UI:
-
- Access Level | Description
- -------------|------------
- Partially open | Set a firewall rule to allow only specific IP addresses to communicate on port `8080`.
- Completely open | Set a firewall rule to allow all IP addresses to communicate on port `8080`.
- Completely closed | Set a firewall rule to disallow all communication on port `8080`. In this case, a machine with SSH access to a node could use an SSH tunnel to access the Admin UI.
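-
- In the completely closed case, an SSH tunnel might look like this (a sketch; the username and address are placeholders):
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ ssh -N -L 8080:localhost:8080 <username>@<node address>
- ~~~
-
- You can then view the Admin UI at http://localhost:8080 on your local machine.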
diff --git a/src/current/_includes/v2.0/prod-deployment/insecure-requirements.md b/src/current/_includes/v2.0/prod-deployment/insecure-requirements.md
deleted file mode 100644
index 52640254763..00000000000
--- a/src/current/_includes/v2.0/prod-deployment/insecure-requirements.md
+++ /dev/null
@@ -1,5 +0,0 @@
-- You must have [SSH access]({{page.ssh-link}}) to each machine. This is necessary for distributing and starting CockroachDB binaries.
-
-- Your network configuration must allow TCP communication on the following ports:
- - `26257` for intra-cluster and client-cluster communication
- - `8080` to expose your Admin UI
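-
-    A quick reachability check from another machine might look like this (a sketch, assuming netcat is installed; the address is a placeholder):
-
-    ~~~ shell
-    $ nc -zv <node address> 26257
-    $ nc -zv <node address> 8080
-    ~~~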
diff --git a/src/current/_includes/v2.0/prod-deployment/insecure-scale-cluster.md b/src/current/_includes/v2.0/prod-deployment/insecure-scale-cluster.md
deleted file mode 100644
index 66df603eb03..00000000000
--- a/src/current/_includes/v2.0/prod-deployment/insecure-scale-cluster.md
+++ /dev/null
@@ -1,120 +0,0 @@
-You can start the nodes manually or automate the process using [systemd](https://www.freedesktop.org/wiki/Software/systemd/).
-
-
-
-
-
-
-
-
-
-For each additional node you want to add to the cluster, complete the following steps:
-
-1. SSH to the machine where you want the node to run.
-
-2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, and extract the binary:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \
- | tar -xz
- ~~~
-
-3. Copy the binary into the `PATH`:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/
- ~~~
-
- If you get a permissions error, prefix the command with `sudo`.
-
-4. Run the [`cockroach start`](start-a-node.html) command just like you did for the initial nodes:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach start --insecure \
-    --host=<node address> \
-    --locality=<key-value pairs> \
-    --cache=.25 \
-    --max-sql-memory=.25 \
-    --join=<node1 address>:26257,<node2 address>:26257,<node3 address>:26257 \
- --background
- ~~~
-
-5. Update your load balancer to recognize the new node.
-
-
-
-
-
-For each additional node you want to add to the cluster, complete the following steps:
-
-1. SSH to the machine where you want the node to run. Ensure you are logged in as the `root` user.
-
-2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, and extract the binary:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \
- | tar -xz
- ~~~
-
-3. Copy the binary into the `PATH`:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/
- ~~~
-
- If you get a permissions error, prefix the command with `sudo`.
-
-4. Create the Cockroach directory:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ mkdir /var/lib/cockroach
- ~~~
-
-5. Create a Unix user named `cockroach`:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ useradd cockroach
- ~~~
-
-6. Change the ownership of the `cockroach` directory to the user `cockroach`:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ chown cockroach /var/lib/cockroach
- ~~~
-
-7. Download the [sample configuration template](https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/{{ page.version.version }}/prod-deployment/insecurecockroachdb.service):
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ wget -qO- https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/{{ page.version.version }}/prod-deployment/insecurecockroachdb.service
- ~~~
-
- Alternatively, you can create the file yourself and copy the script into it:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- {% include {{ page.version.version }}/prod-deployment/insecurecockroachdb.service %}
- ~~~
-
-    Save the file in the `/etc/systemd/system/` directory.
-
-8. Customize the sample configuration template for your deployment:
-
- Specify values for the following flags in the sample configuration template:
-
- Flag | Description
- -----|------------
-    `--host` | Specifies the hostname or IP address to listen on for intra-cluster and client communication, as well as to identify the node in the Admin UI. If it is a hostname, it must be resolvable from all nodes, and if it is an IP address, it must be routable from all nodes.<br><br>If you want the node to listen on multiple interfaces, leave `--host` empty.<br><br>If you want the node to communicate with other nodes on an internal address (e.g., within a private network) while listening on all interfaces, leave `--host` empty and set the `--advertise-host` flag to the internal address.
- `--join` | Identifies the address and port of 3-5 of the initial nodes of the cluster.
-
-9. Repeat these steps for each additional node that you want in your cluster.
-
-
diff --git a/src/current/_includes/v2.0/prod-deployment/insecure-start-nodes.md b/src/current/_includes/v2.0/prod-deployment/insecure-start-nodes.md
deleted file mode 100644
index 046a80d37c0..00000000000
--- a/src/current/_includes/v2.0/prod-deployment/insecure-start-nodes.md
+++ /dev/null
@@ -1,149 +0,0 @@
-You can start the nodes manually or automate the process using [systemd](https://www.freedesktop.org/wiki/Software/systemd/).
-
-
-
-
-
-
-
-
-
-For each initial node of your cluster, complete the following steps:
-
-{{site.data.alerts.callout_info}}After completing these steps, nodes will not yet be live. They will complete the startup process and join together to form a cluster as soon as the cluster is initialized in the next step.{{site.data.alerts.end}}
-
-1. SSH to the machine where you want the node to run.
-
-2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, and extract the binary:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \
- | tar -xz
- ~~~
-
-3. Copy the binary into the `PATH`:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/
- ~~~
-
- If you get a permissions error, prefix the command with `sudo`.
-
-4. Run the [`cockroach start`](start-a-node.html) command:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- --insecure \
-    --advertise-host=<node1 address> \
-    --join=<node1 address>:26257,<node2 address>:26257,<node3 address>:26257 \
- --cache=.25 \
- --max-sql-memory=.25 \
- --background
- ~~~
-
- This command primes the node to start, using the following flags:
-
- Flag | Description
- -----|------------
- `--insecure` | Indicates that the cluster is insecure, with no network encryption or authentication.
-    `--advertise-host` | Specifies the IP address or hostname to tell other nodes to use. This value must route to an IP address the node is listening on (with `--host` unspecified, the node listens on all IP addresses).<br><br>In some networking scenarios, you may need to use `--advertise-host` and/or `--host` differently. For more details, see [Networking](recommended-production-settings.html#networking).
- `--join` | Identifies the address and port of 3-5 of the initial nodes of the cluster. These addresses should match the addresses that the target nodes are advertising.
- `--cache` `--max-sql-memory` | Increases the node's cache and temporary SQL memory size to 25% of available system memory to improve read performance and increase capacity for in-memory SQL processing. For more details, see [Cache and SQL Memory Size](recommended-production-settings.html#cache-and-sql-memory-size).
- `--background` | Starts the node in the background so you gain control of the terminal to issue more commands.
-
- When deploying across multiple datacenters, or when there is otherwise high latency between nodes, it is recommended to set `--locality` as well. It is also required to use certain enterprise features. For more details, see [Locality](start-a-node.html#locality).
-
- For other flags not explicitly set, the command uses default values. For example, the node stores data in `--store=cockroach-data`, binds internal and client communication to `--port=26257`, and binds Admin UI HTTP requests to `--http-port=8080`. To set these options manually, see [Start a Node](start-a-node.html).
-
-5. Repeat these steps for each additional node that you want in your cluster.
-
-
-
-
-
-For each initial node of your cluster, complete the following steps:
-
-{{site.data.alerts.callout_info}}After completing these steps, nodes will not yet be live. They will complete the startup process and join together to form a cluster as soon as the cluster is initialized in the next step.{{site.data.alerts.end}}
-
-1. SSH to the machine where you want the node to run. Ensure you are logged in as the `root` user.
-
-2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, and extract the binary:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \
- | tar -xz
- ~~~
-
-3. Copy the binary into the `PATH`:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/
- ~~~
-
- If you get a permissions error, prefix the command with `sudo`.
-
-4. Create the Cockroach directory:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ mkdir /var/lib/cockroach
- ~~~
-
-5. Create a Unix user named `cockroach`:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ useradd cockroach
- ~~~
-
-6. Change the ownership of the `cockroach` directory to the user `cockroach`:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ chown cockroach /var/lib/cockroach
- ~~~
-
-7. Download the [sample configuration template](https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/{{ page.version.version }}/prod-deployment/insecurecockroachdb.service) and save the file in the `/etc/systemd/system/` directory:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ wget -qO- https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/{{ page.version.version }}/prod-deployment/insecurecockroachdb.service
- ~~~
-
- Alternatively, you can create the file yourself and copy the script into it:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- {% include {{ page.version.version }}/prod-deployment/insecurecockroachdb.service %}
- ~~~
-
-8. In the sample configuration template, specify values for the following flags:
-
- Flag | Description
- -----|------------
-    `--advertise-host` | Specifies the IP address or hostname to tell other nodes to use. This value must route to an IP address the node is listening on (with `--host` unspecified, the node listens on all IP addresses).<br><br>In some networking scenarios, you may need to use `--advertise-host` and/or `--host` differently. For more details, see [Networking](recommended-production-settings.html#networking).
- `--join` | Identifies the address and port of 3-5 of the initial nodes of the cluster. These addresses should match the addresses that the target nodes are advertising.
-
- When deploying across multiple datacenters, or when there is otherwise high latency between nodes, it is recommended to set `--locality` as well. It is also required to use certain enterprise features. For more details, see [Locality](start-a-node.html#locality).
-
- For other flags not explicitly set, the command uses default values. For example, the node stores data in `--store=cockroach-data`, binds internal and client communication to `--port=26257`, and binds Admin UI HTTP requests to `--http-port=8080`. To set these options manually, see [Start a Node](start-a-node.html).
-
-9. Start the CockroachDB cluster:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ systemctl start insecurecockroachdb
- ~~~
-
-10. Repeat these steps for each additional node that you want in your cluster.
-
-{{site.data.alerts.callout_info}}
-`systemd` handles node restarts in case of node failure. To stop a node without `systemd` restarting it, run `systemctl stop insecurecockroachdb`.
-{{site.data.alerts.end}}
-
-
diff --git a/src/current/_includes/v2.0/prod-deployment/insecure-test-cluster.md b/src/current/_includes/v2.0/prod-deployment/insecure-test-cluster.md
deleted file mode 100644
index 1c926379fda..00000000000
--- a/src/current/_includes/v2.0/prod-deployment/insecure-test-cluster.md
+++ /dev/null
@@ -1,48 +0,0 @@
-CockroachDB replicates and distributes data for you behind-the-scenes and uses a [Gossip protocol](https://en.wikipedia.org/wiki/Gossip_protocol) to enable each node to locate data across the cluster.
-
-To test this, use the [built-in SQL client](use-the-built-in-sql-client.html) locally as follows:
-
-1. On your local machine, launch the built-in SQL client, with the `--host` flag set to the address of any node:
-
- {% include copy-clipboard.html %}
- ~~~ shell
-    $ cockroach sql --insecure --host=<address of any node>
- ~~~
-
-2. Create an `insecurenodetest` database:
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > CREATE DATABASE insecurenodetest;
- ~~~
-
-3. Use `\q` or `ctrl-d` to exit the SQL shell.
-
-4. Launch the built-in SQL client, with the `--host` flag set to the address of a different node:
-
- {% include copy-clipboard.html %}
- ~~~ shell
-    $ cockroach sql --insecure --host=<address of different node>
- ~~~
-
-5. View the cluster's databases, which will include `insecurenodetest`:
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > SHOW DATABASES;
- ~~~
-
- ~~~
- +--------------------+
- | Database |
- +--------------------+
- | crdb_internal |
- | information_schema |
- | insecurenodetest |
- | pg_catalog |
- | system |
- +--------------------+
- (5 rows)
- ~~~
-
-6. Use `\q` or `ctrl-d` to exit the SQL shell.
diff --git a/src/current/_includes/v2.0/prod-deployment/insecure-test-load-balancing.md b/src/current/_includes/v2.0/prod-deployment/insecure-test-load-balancing.md
deleted file mode 100644
index e4369b54410..00000000000
--- a/src/current/_includes/v2.0/prod-deployment/insecure-test-load-balancing.md
+++ /dev/null
@@ -1,41 +0,0 @@
-CockroachDB offers a pre-built `workload` binary for Linux that includes several load generators for simulating client traffic against your cluster. This step features CockroachDB's version of the [TPC-C](http://www.tpc.org/tpcc/) workload.
-
-{{site.data.alerts.callout_success}}For comprehensive guidance on benchmarking CockroachDB with TPC-C, see our Performance Benchmarking white paper.{{site.data.alerts.end}}
-
-1. SSH to the machine where you want to run the sample TPC-C workload.
-
- This should be a machine that is not running a CockroachDB node.
-
-2. Download `workload` and make it executable:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ wget https://edge-binaries.cockroachdb.com/cockroach/workload.LATEST ; chmod 755 workload.LATEST
- ~~~
-
-3. Rename and copy `workload` into the `PATH`:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cp -i workload.LATEST /usr/local/bin/workload
- ~~~
-
-4. Start the TPC-C workload, pointing it at the IP address of the load balancer:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ workload run tpcc \
- --drop \
- --init \
- --duration=20m \
- --tolerate-errors \
- "postgresql://root@tpcc options, use workload run tpcc --help. For details about other load generators included in workload, use workload run --help.
-
-5. To monitor the load generator's progress, open the [Admin UI](admin-ui-access-and-navigate.html) by pointing a browser to the address in the `admin` field in the standard output of any node on startup.
-
- Since the load generator is pointed at the load balancer, the connections will be evenly distributed across nodes. To verify this, click **Metrics** on the left, select the **SQL** dashboard, and then check the **SQL Connections** graph. You can use the **Graph** menu to filter the graph for specific nodes.
diff --git a/src/current/_includes/v2.0/prod-deployment/insecurecockroachdb.service b/src/current/_includes/v2.0/prod-deployment/insecurecockroachdb.service
deleted file mode 100644
index 1bab823eea7..00000000000
--- a/src/current/_includes/v2.0/prod-deployment/insecurecockroachdb.service
+++ /dev/null
@@ -1,16 +0,0 @@
-[Unit]
-Description=Cockroach Database cluster node
-Requires=network.target
-[Service]
-Type=notify
-WorkingDirectory=/var/lib/cockroach
-ExecStart=/usr/local/bin/cockroach start --insecure --advertise-host=<node address> --join=<node1 address>:26257,<node2 address>:26257,<node3 address>:26257 --cache=.25 --max-sql-memory=.25
-TimeoutStopSec=60
-Restart=always
-RestartSec=10
-StandardOutput=syslog
-StandardError=syslog
-SyslogIdentifier=cockroach
-User=cockroach
-[Install]
-WantedBy=default.target
diff --git a/src/current/_includes/v2.0/prod-deployment/monitor-cluster.md b/src/current/_includes/v2.0/prod-deployment/monitor-cluster.md
deleted file mode 100644
index cb8185eac19..00000000000
--- a/src/current/_includes/v2.0/prod-deployment/monitor-cluster.md
+++ /dev/null
@@ -1,3 +0,0 @@
-Despite CockroachDB's various [built-in safeguards against failure](high-availability.html), it is critical to actively monitor the overall health and performance of a cluster running in production and to create alerting rules that promptly send notifications when there are events that require investigation or intervention.
-
-For details about available monitoring options and the most important events and metrics to alert on, see [Monitoring and Alerting](monitoring-and-alerting.html).
diff --git a/src/current/_includes/v2.0/prod-deployment/prod-see-also.md b/src/current/_includes/v2.0/prod-deployment/prod-see-also.md
deleted file mode 100644
index 9dc661f6dfc..00000000000
--- a/src/current/_includes/v2.0/prod-deployment/prod-see-also.md
+++ /dev/null
@@ -1,7 +0,0 @@
-- [Production Checklist](recommended-production-settings.html)
-- [Manual Deployment](manual-deployment.html)
-- [Orchestrated Deployment](orchestration.html)
-- [Monitoring and Alerting](monitoring-and-alerting.html)
-- [Performance Benchmarking](performance-benchmarking-with-tpc-c.html)
-- [Performance Tuning](performance-tuning.html)
-- [Local Deployment](start-a-local-cluster.html)
diff --git a/src/current/_includes/v2.0/prod-deployment/secure-generate-certificates.md b/src/current/_includes/v2.0/prod-deployment/secure-generate-certificates.md
deleted file mode 100644
index 4d821e21063..00000000000
--- a/src/current/_includes/v2.0/prod-deployment/secure-generate-certificates.md
+++ /dev/null
@@ -1,144 +0,0 @@
-You can use either `cockroach cert` commands or [`openssl` commands](create-security-certificates-openssl.html) to generate security certificates. This section features the `cockroach cert` commands.
-
-Locally, you'll need to [create the following certificates and keys](create-security-certificates.html):
-
-- A certificate authority (CA) key pair (`ca.crt` and `ca.key`).
-- A node key pair for each node, issued to its IP addresses and any common names the machine uses, as well as to the IP addresses and common names for machines running load balancers.
-- A client key pair for the `root` user. You'll use this to run a sample workload against the cluster as well as some `cockroach` client commands from your local machine.
-
-{{site.data.alerts.callout_success}}Before beginning, it's useful to collect each of your machine's internal and external IP addresses, as well as any server names you want to issue certificates for.{{site.data.alerts.end}}
-
-1. [Install CockroachDB](install-cockroachdb.html) on your local machine, if you haven't already.
-
-2. Create two directories:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ mkdir certs
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ mkdir my-safe-directory
- ~~~
- - `certs`: You'll generate your CA certificate and all node and client certificates and keys in this directory and then upload some of the files to your nodes.
- - `my-safe-directory`: You'll generate your CA key in this directory and then reference the key when generating node and client certificates. After that, you'll keep the key safe and secret; you will not upload it to your nodes.
-
-3. Create the CA certificate and key:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach cert create-ca \
- --certs-dir=certs \
- --ca-key=my-safe-directory/ca.key
- ~~~
-
-4. Create the certificate and key for the first node, issued to all common names you might use to refer to the node as well as to the load balancer instances:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach cert create-node \
-    <node1 internal IP address> \
-    <node1 external IP address> \
-    <node1 hostname> \
-    <other common names for node1> \
-    localhost \
-    127.0.0.1 \
-    <load balancer IP address> \
-    <load balancer hostname> \
-    <other common names for load balancer instances> \
- --certs-dir=certs \
- --ca-key=my-safe-directory/ca.key
- ~~~
-
-5. Upload certificates to the first node:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- # Create the certs directory:
-    $ ssh <username>@<node1 address> "mkdir certs"
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- # Upload the CA certificate and node certificate and key:
- $ scp certs/ca.crt \
- certs/node.crt \
- certs/node.key \
-    <username>@<node1 address>:~/certs
- ~~~
-
-6. Delete the local copy of the node certificate and key:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ rm certs/node.crt certs/node.key
- ~~~
-
-    {{site.data.alerts.callout_info}}This is necessary because the certificates and keys for additional nodes will also be named `node.crt` and `node.key`. As an alternative to deleting these files, you can run the next `cockroach cert create-node` commands with the `--overwrite` flag.{{site.data.alerts.end}}
-
-7. Create the certificate and key for the second node, issued to all common names you might use to refer to the node as well as to the load balancer instances:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach cert create-node \
- <node2 internal IP address> \
- <node2 external IP address> \
- <node2 hostname> \
- <other common names for node2> \
- localhost \
- 127.0.0.1 \
- <load balancer IP address> \
- <load balancer hostname> \
- <other common names for load balancer instances> \
- --certs-dir=certs \
- --ca-key=my-safe-directory/ca.key
- ~~~
-
-8. Upload certificates to the second node:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- # Create the certs directory:
- $ ssh <username>@<node2 address> "mkdir certs"
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- # Upload the CA certificate and node certificate and key:
- $ scp certs/ca.crt \
- certs/node.crt \
- certs/node.key \
- <username>@<node2 address>:~/certs
- ~~~
-
-9. Repeat steps 6-8 for each additional node.
-
-10. Create a client certificate and key for the `root` user:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach cert create-client \
- root \
- --certs-dir=certs \
- --ca-key=my-safe-directory/ca.key
- ~~~
-
-11. Upload certificates to the machine where you will run a sample workload:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- # Create the certs directory:
- $ ssh <username>@<workload address> "mkdir certs"
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- # Upload the CA certificate and client certificate and key:
- $ scp certs/ca.crt \
- certs/client.root.crt \
- certs/client.root.key \
- <username>@<workload address>:~/certs
- ~~~
-
- In later steps, you'll also use the `root` user's certificate to run [`cockroach`](cockroach-commands.html) client commands from your local machine. If you might also want to run `cockroach` client commands directly on a node (e.g., for local debugging), you'll need to copy the `root` user's certificate and key to that node as well.
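-
- For example, copying the `root` user's certificate and key to the first node would look like this sketch (`<username>` and `<node1 address>` are placeholders, as above):
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ scp certs/client.root.crt \
- certs/client.root.key \
- <username>@<node1 address>:~/certs
- ~~~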
diff --git a/src/current/_includes/v2.0/prod-deployment/secure-initialize-cluster.md b/src/current/_includes/v2.0/prod-deployment/secure-initialize-cluster.md
deleted file mode 100644
index 9ae863063bf..00000000000
--- a/src/current/_includes/v2.0/prod-deployment/secure-initialize-cluster.md
+++ /dev/null
@@ -1,15 +0,0 @@
-On your local machine, run the [`cockroach init`](initialize-a-cluster.html) command to complete the node startup process and have them join together as a cluster:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach init --certs-dir=certs --host=<address of any node>
-~~~
-
-This command requires the following flags:
-
-Flag | Description
------|------------
-`--certs-dir` | Specifies the directory where you placed the `ca.crt` file and the `client.root.crt` and `client.root.key` files for the `root` user.
-`--host` | Specifies the address of any node in the cluster.
-
-After running this command, each node prints helpful details to the [standard output](start-a-node.html#standard-output), such as the CockroachDB version, the URL for the admin UI, and the SQL URL for clients.
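-
-As a quick check after initialization, you can also query node status from your local machine (an illustrative sketch; `<address of any node>` is a placeholder):
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach node status --certs-dir=certs --host=<address of any node>
-~~~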
diff --git a/src/current/_includes/v2.0/prod-deployment/secure-recommendations.md b/src/current/_includes/v2.0/prod-deployment/secure-recommendations.md
deleted file mode 100644
index 79d077ee84d..00000000000
--- a/src/current/_includes/v2.0/prod-deployment/secure-recommendations.md
+++ /dev/null
@@ -1,9 +0,0 @@
-- If you plan to use CockroachDB in production, carefully review the [Production Checklist](recommended-production-settings.html).
-
-- Decide how you want to access your Admin UI:
-
- Access Level | Description
- -------------|------------
- Partially open | Set a firewall rule to allow only specific IP addresses to communicate on port `8080`.
- Completely open | Set a firewall rule to allow all IP addresses to communicate on port `8080`.
- Completely closed | Set a firewall rule to disallow all communication on port `8080`. In this case, a machine with SSH access to a node could use an SSH tunnel to access the Admin UI.
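-
- For example, a partially open setup on a Linux host might use `ufw` (an illustrative sketch; the allowed CIDR is an assumption, and your cloud provider's firewall tooling may differ):
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ sudo ufw allow from 10.0.0.0/8 to any port 8080 proto tcp
- ~~~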
diff --git a/src/current/_includes/v2.0/prod-deployment/secure-requirements.md b/src/current/_includes/v2.0/prod-deployment/secure-requirements.md
deleted file mode 100644
index f4a9beb1209..00000000000
--- a/src/current/_includes/v2.0/prod-deployment/secure-requirements.md
+++ /dev/null
@@ -1,7 +0,0 @@
-- You must have [CockroachDB installed](install-cockroachdb.html) locally. This is necessary for generating and managing your deployment's certificates.
-
-- You must have [SSH access]({{page.ssh-link}}) to each machine. This is necessary for distributing and starting CockroachDB binaries.
-
-- Your network configuration must allow TCP communication on the following ports:
- - `26257` for intra-cluster and client-cluster communication
- - `8080` to expose your Admin UI
diff --git a/src/current/_includes/v2.0/prod-deployment/secure-scale-cluster.md b/src/current/_includes/v2.0/prod-deployment/secure-scale-cluster.md
deleted file mode 100644
index 4638d6b7500..00000000000
--- a/src/current/_includes/v2.0/prod-deployment/secure-scale-cluster.md
+++ /dev/null
@@ -1,128 +0,0 @@
-You can start the nodes manually or automate the process using [systemd](https://www.freedesktop.org/wiki/Software/systemd/).
-
-<div class="filters filters-big clearfix">
-  <button class="filter-button" data-scope="manual">Manual</button>
-  <button class="filter-button" data-scope="systemd">systemd</button>
-</div>
-
-<section class="filter-content" markdown="1" data-scope="manual">
-
-For each additional node you want to add to the cluster, complete the following steps:
-
-1. SSH to the machine where you want the node to run.
-
-2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, and extract the binary:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \
- | tar -xz
- ~~~
-
-3. Copy the binary into the `PATH`:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/
- ~~~
-
- If you get a permissions error, prefix the command with `sudo`.
-
-4. Run the [`cockroach start`](start-a-node.html) command just like you did for the initial nodes:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- --certs-dir=certs \
- --host=<node address> \
- --locality=<key-value pairs> \
- --cache=.25 \
- --max-sql-memory=.25 \
- --join=<node1 address>:26257,<node2 address>:26257,<node3 address>:26257 \
- --background
- ~~~
-
-5. Update your load balancer to recognize the new node.
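-
- For example, if you use HAProxy, this typically means adding a `server` line for the new node to the relevant backend in `haproxy.cfg` and reloading the service (an illustrative sketch; the server name and address are assumptions about your setup):
-
- {% include copy-clipboard.html %}
- ~~~
- server cockroach4 <node4 address>:26257 check port 8080
- ~~~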
-
-</section>
-
-<section class="filter-content" markdown="1" data-scope="systemd">
-
-
-For each additional node you want to add to the cluster, complete the following steps:
-
-1. SSH to the machine where you want the node to run. Ensure you are logged in as the `root` user.
-
-2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, and extract the binary:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \
- | tar -xz
- ~~~
-
-3. Copy the binary into the `PATH`:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/
- ~~~
-
- If you get a permissions error, prefix the command with `sudo`.
-
-4. Create the `cockroach` directory:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ mkdir /var/lib/cockroach
- ~~~
-
-5. Create a Unix user named `cockroach`:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ useradd cockroach
- ~~~
-
-6. Move the `certs` directory to the `cockroach` directory:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ mv certs /var/lib/cockroach/
- ~~~
-
-7. Change the ownership of the `cockroach` directory to the user `cockroach`:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ chown -R cockroach:cockroach /var/lib/cockroach
- ~~~
-
-8. Download the [sample configuration template](https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/{{ page.version.version }}/prod-deployment/securecockroachdb.service):
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ sudo wget -qO /etc/systemd/system/securecockroachdb.service https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/{{ page.version.version }}/prod-deployment/securecockroachdb.service
- ~~~
-
- Alternatively, you can create the file yourself and copy the script into it:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- {% include {{ page.version.version }}/prod-deployment/securecockroachdb.service %}
- ~~~
-
- In either case, make sure the file ends up at `/etc/systemd/system/securecockroachdb.service`.
-
-9. Customize the sample configuration template for your deployment:
-
- Specify values for the following flags:
-
- Flag | Description
- -----|------------
- `--host` | Specifies the hostname or IP address to listen on for intra-cluster and client communication, as well as to identify the node in the Admin UI. If it is a hostname, it must be resolvable from all nodes, and if it is an IP address, it must be routable from all nodes.<br><br>If you want the node to listen on multiple interfaces, leave `--host` empty.<br><br>If you want the node to communicate with other nodes on an internal address (e.g., within a private network) while listening on all interfaces, leave `--host` empty and set the `--advertise-host` flag to the internal address.
- `--join` | Identifies the address and port of 3-5 of the initial nodes of the cluster.
-
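- After customizing the template, reload systemd and start the service (a sketch; this assumes the unit file was saved as `securecockroachdb.service`, as above):
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ systemctl daemon-reload
- $ systemctl start securecockroachdb
- ~~~
-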
-10. Repeat these steps for each additional node that you want in your cluster.
-
-</section>
diff --git a/src/current/_includes/v2.0/prod-deployment/secure-start-nodes.md b/src/current/_includes/v2.0/prod-deployment/secure-start-nodes.md
deleted file mode 100644
index e843e84d951..00000000000
--- a/src/current/_includes/v2.0/prod-deployment/secure-start-nodes.md
+++ /dev/null
@@ -1,156 +0,0 @@
-You can start the nodes manually or automate the process using [systemd](https://www.freedesktop.org/wiki/Software/systemd/).
-
-<div class="filters filters-big clearfix">
-  <button class="filter-button" data-scope="manual">Manual</button>
-  <button class="filter-button" data-scope="systemd">systemd</button>
-</div>
-
-<section class="filter-content" markdown="1" data-scope="manual">
-
-For each initial node of your cluster, complete the following steps:
-
-{{site.data.alerts.callout_info}}After completing these steps, nodes will not yet be live. They will complete the startup process and join together to form a cluster as soon as the cluster is initialized in the next step.{{site.data.alerts.end}}
-
-1. SSH to the machine where you want the node to run.
-
-2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, and extract the binary:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \
- | tar -xz
- ~~~
-
-3. Copy the binary into the `PATH`:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/
- ~~~
-
- If you get a permissions error, prefix the command with `sudo`.
-
-4. Run the [`cockroach start`](start-a-node.html) command:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- --certs-dir=certs \
- --advertise-host=<node address> \
- --join=<node1 address>:26257,<node2 address>:26257,<node3 address>:26257 \
- --cache=.25 \
- --max-sql-memory=.25 \
- --background
- ~~~
-
- This command primes the node to start, using the following flags:
-
- Flag | Description
- -----|------------
- `--certs-dir` | Specifies the directory where you placed the `ca.crt` file and the `node.crt` and `node.key` files for the node.
- `--advertise-host` | Specifies the IP address or hostname to tell other nodes to use. This value must route to an IP address the node is listening on (with `--host` unspecified, the node listens on all IP addresses).<br><br>In some networking scenarios, you may need to use `--advertise-host` and/or `--host` differently. For more details, see [Networking](recommended-production-settings.html#networking).
- `--join` | Identifies the address and port of 3-5 of the initial nodes of the cluster. These addresses should match the addresses that the target nodes are advertising.
- `--cache` `--max-sql-memory` | Increases the node's cache and temporary SQL memory size to 25% of available system memory to improve read performance and increase capacity for in-memory SQL processing. For more details, see [Cache and SQL Memory Size](recommended-production-settings.html#cache-and-sql-memory-size).
- `--background` | Starts the node in the background so you gain control of the terminal to issue more commands.
-
- When deploying across multiple datacenters, or when there is otherwise high latency between nodes, it is recommended to set `--locality` as well. It is also required to use certain enterprise features. For more details, see [Locality](start-a-node.html#locality).
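-
- For example, a `--locality` flag might look like this (an illustrative sketch; the key-value pairs are assumptions about your topology):
-
- {% include copy-clipboard.html %}
- ~~~
- --locality=region=us-east,datacenter=us-east-1
- ~~~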
-
- For other flags not explicitly set, the command uses default values. For example, the node stores data in `--store=cockroach-data`, binds internal and client communication to `--port=26257`, and binds Admin UI HTTP requests to `--http-port=8080`. To set these options manually, see [Start a Node](start-a-node.html).
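-
- To set any of these explicitly, add the corresponding flags to the `cockroach start` command above (a sketch; the store path here is an arbitrary example, not a recommendation):
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- --certs-dir=certs \
- --advertise-host=<node address> \
- --join=<node1 address>:26257,<node2 address>:26257,<node3 address>:26257 \
- --store=/mnt/ssd01/cockroach-data \
- --port=26257 \
- --http-port=8080 \
- --background
- ~~~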
-
-5. Repeat these steps for each additional node that you want in your cluster.
-
-</section>
-
-<section class="filter-content" markdown="1" data-scope="systemd">
-
-For each initial node of your cluster, complete the following steps:
-
-{{site.data.alerts.callout_info}}After completing these steps, nodes will not yet be live. They will complete the startup process and join together to form a cluster as soon as the cluster is initialized in the next step.{{site.data.alerts.end}}
-
-1. SSH to the machine where you want the node to run. Ensure you are logged in as the `root` user.
-
-2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, and extract the binary:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \
- | tar -xz
- ~~~
-
-3. Copy the binary into the `PATH`:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/
- ~~~
-
- If you get a permissions error, prefix the command with `sudo`.
-
-4. Create the `cockroach` directory:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ mkdir /var/lib/cockroach
- ~~~
-
-5. Create a Unix user named `cockroach`:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ useradd cockroach
- ~~~
-
-6. Move the `certs` directory to the `cockroach` directory:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ mv certs /var/lib/cockroach/
- ~~~
-
-7. Change the ownership of the `cockroach` directory to the user `cockroach`:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ chown -R cockroach:cockroach /var/lib/cockroach
- ~~~
-
-8. Download the [sample configuration template](https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/{{ page.version.version }}/prod-deployment/securecockroachdb.service) and save the file in the `/etc/systemd/system/` directory:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ sudo wget -qO /etc/systemd/system/securecockroachdb.service https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/{{ page.version.version }}/prod-deployment/securecockroachdb.service
- ~~~
-
- Alternatively, you can create the file yourself and copy the script into it:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- {% include {{ page.version.version }}/prod-deployment/securecockroachdb.service %}
- ~~~
-
-9. In the sample configuration template, specify values for the following flags:
-
- Flag | Description
- -----|------------
- `--advertise-host` | Specifies the IP address or hostname to tell other nodes to use. This value must route to an IP address the node is listening on (with `--host` unspecified, the node listens on all IP addresses).<br><br>In some networking scenarios, you may need to use `--advertise-host` and/or `--host` differently. For more details, see [Networking](recommended-production-settings.html#networking).
- `--join` | Identifies the address and port of 3-5 of the initial nodes of the cluster. These addresses should match the addresses that the target nodes are advertising.
-
- When deploying across multiple datacenters, or when there is otherwise high latency between nodes, it is recommended to set `--locality` as well. It is also required to use certain enterprise features. For more details, see [Locality](start-a-node.html#locality).
-
- For other flags not explicitly set, the command uses default values. For example, the node stores data in `--store=cockroach-data`, binds internal and client communication to `--port=26257`, and binds Admin UI HTTP requests to `--http-port=8080`. To set these options manually, see [Start a Node](start-a-node.html).
-
-10. Start the CockroachDB cluster:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ systemctl start securecockroachdb
- ~~~
-
-11. Repeat these steps for each additional node that you want in your cluster.
-
-{{site.data.alerts.callout_info}}
-`systemd` handles node restarts in case of node failure. To stop a node without `systemd` restarting it, run `systemctl stop securecockroachdb`.
-{{site.data.alerts.end}}
-
-</section>
diff --git a/src/current/_includes/v2.0/prod-deployment/secure-test-cluster.md b/src/current/_includes/v2.0/prod-deployment/secure-test-cluster.md
deleted file mode 100644
index 7b897f362a5..00000000000
--- a/src/current/_includes/v2.0/prod-deployment/secure-test-cluster.md
+++ /dev/null
@@ -1,55 +0,0 @@
-CockroachDB replicates and distributes data for you behind the scenes and uses a [Gossip protocol](https://en.wikipedia.org/wiki/Gossip_protocol) to enable each node to locate data across the cluster.
-
-To test this, use the [built-in SQL client](use-the-built-in-sql-client.html) locally as follows:
-
-1. On your local machine, launch the built-in SQL client:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --certs-dir=certs --host=<address of any node>
- ~~~
-
- This command requires the following flags:
-
- Flag | Description
- -----|------------
- `--certs-dir` | Specifies the directory where you placed the `ca.crt` file and the `client.root.crt` and `client.root.key` files for the `root` user.
- `--host` | Specifies the address of any node in the cluster.
-
-2. Create a `securenodetest` database:
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > CREATE DATABASE securenodetest;
- ~~~
-
-3. Use `\q` or **CTRL-C** to exit the SQL shell.
-
-4. Launch the built-in SQL client against a different node:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --certs-dir=certs --host=<address of any other node>
- ~~~
-
-5. View the cluster's databases, which will include `securenodetest`:
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > SHOW DATABASES;
- ~~~
-
- ~~~
- +--------------------+
- | Database |
- +--------------------+
- | crdb_internal |
- | information_schema |
- | securenodetest |
- | pg_catalog |
- | system |
- +--------------------+
- (5 rows)
- ~~~
-
-6. Use `\q` or **CTRL-C** to exit the SQL shell.
diff --git a/src/current/_includes/v2.0/prod-deployment/secure-test-load-balancing.md b/src/current/_includes/v2.0/prod-deployment/secure-test-load-balancing.md
deleted file mode 100644
index dad1a55e835..00000000000
--- a/src/current/_includes/v2.0/prod-deployment/secure-test-load-balancing.md
+++ /dev/null
@@ -1,41 +0,0 @@
-CockroachDB offers a pre-built `workload` binary for Linux that includes several load generators for simulating client traffic against your cluster. This step features CockroachDB's version of the [TPC-C](http://www.tpc.org/tpcc/) workload.
-
-{{site.data.alerts.callout_success}}For comprehensive guidance on benchmarking CockroachDB with TPC-C, see our Performance Benchmarking white paper.{{site.data.alerts.end}}
-
-1. SSH to the machine where you want to run the sample TPC-C workload.
-
- This should be a machine that is not running a CockroachDB node, and it should already have a `certs` directory containing `ca.crt`, `client.root.crt`, and `client.root.key` files.
-
-2. Download `workload` and make it executable:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ wget https://edge-binaries.cockroachdb.com/cockroach/workload.LATEST ; chmod 755 workload.LATEST
- ~~~
-
-3. Rename and copy `workload` into the `PATH`:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cp -i workload.LATEST /usr/local/bin/workload
- ~~~
-
-4. Start the TPC-C workload, pointing it at the IP address of the load balancer and the location of the `ca.crt`, `client.root.crt`, and `client.root.key` files:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ workload run tpcc \
- --drop \
- --init \
- --duration=20m \
- --tolerate-errors \
- "postgresql://root@tpcc options, use workload run tpcc --help. For details about other load generators included in workload, use workload run --help.
-
-5. To monitor the load generator's progress, open the [Admin UI](admin-ui-access-and-navigate.html) by pointing a browser to the address in the `admin` field in the standard output of any node on startup.
-
- Since the load generator is pointed at the load balancer, the connections will be evenly distributed across nodes. To verify this, click **Metrics** on the left, select the **SQL** dashboard, and then check the **SQL Connections** graph. You can use the **Graph** menu to filter the graph for specific nodes.
diff --git a/src/current/_includes/v2.0/prod-deployment/securecockroachdb.service b/src/current/_includes/v2.0/prod-deployment/securecockroachdb.service
deleted file mode 100644
index 7ab88217783..00000000000
--- a/src/current/_includes/v2.0/prod-deployment/securecockroachdb.service
+++ /dev/null
@@ -1,16 +0,0 @@
-[Unit]
-Description=Cockroach Database cluster node
-Requires=network.target
-[Service]
-Type=notify
-WorkingDirectory=/var/lib/cockroach
-ExecStart=/usr/local/bin/cockroach start --certs-dir=certs --advertise-host=<node address> --join=<node1 address>:26257,<node2 address>:26257,<node3 address>:26257 --cache=.25 --max-sql-memory=.25
-TimeoutStopSec=60
-Restart=always
-RestartSec=10
-StandardOutput=syslog
-StandardError=syslog
-SyslogIdentifier=cockroach
-User=cockroach
-[Install]
-WantedBy=default.target
diff --git a/src/current/_includes/v2.0/prod-deployment/synchronize-clocks.md b/src/current/_includes/v2.0/prod-deployment/synchronize-clocks.md
deleted file mode 100644
index 5257e7a9640..00000000000
--- a/src/current/_includes/v2.0/prod-deployment/synchronize-clocks.md
+++ /dev/null
@@ -1,173 +0,0 @@
-CockroachDB requires moderate levels of [clock synchronization](recommended-production-settings.html#clock-synchronization) to preserve data consistency. For this reason, when a node detects that its clock is out of sync with at least half of the other nodes in the cluster by 80% of the maximum offset allowed (500ms by default, i.e., a 400ms threshold), it spontaneously shuts down. This avoids the risk of consistency anomalies, but it's best to prevent clocks from drifting too far in the first place by running clock synchronization software on each node.
-
-{% if page.title contains "Digital Ocean" or page.title contains "On-Premises" %}
-
-[`ntpd`](http://doc.ntp.org/) should keep offsets in the single-digit milliseconds, so that software is featured here, but other methods of clock synchronization are suitable as well.
-
-1. SSH to the first machine.
-
-2. Disable `timesyncd`, which tends to be active by default on some Linux distributions:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ sudo timedatectl set-ntp no
- ~~~
-
- Verify that `timesyncd` is off:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ timedatectl
- ~~~
-
- Look for `Network time on: no` or `NTP enabled: no` in the output.
-
-3. Install the `ntp` package:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ sudo apt-get install ntp
- ~~~
-
-4. Stop the NTP daemon:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ sudo service ntp stop
- ~~~
-
-5. Sync the machine's clock with Google's NTP service:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ sudo ntpd -b time.google.com
- ~~~
-
- To make this change permanent, in the `/etc/ntp.conf` file, remove or comment out any lines starting with `server` or `pool` and add the following lines:
-
- {% include copy-clipboard.html %}
- ~~~
- server time1.google.com iburst
- server time2.google.com iburst
- server time3.google.com iburst
- server time4.google.com iburst
- ~~~
-
- Restart the NTP daemon:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ sudo service ntp start
- ~~~
-
- {{site.data.alerts.callout_info}}We recommend Google's external NTP service because they handle "smearing" the leap second. If you use a different NTP service that doesn't smear the leap second, you must configure client-side smearing manually and do so in the same way on each machine.{{site.data.alerts.end}}
-
-6. Verify that the machine is using a Google NTP server:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ sudo ntpq -p
- ~~~
-
- The active NTP server will be marked with an asterisk.
-
-7. Repeat these steps for each machine where a CockroachDB node will run.
-
-{% elsif page.title contains "Google" %}
-
-Compute Engine instances are preconfigured to use [NTP](http://www.ntp.org/), which should keep offsets in the single-digit milliseconds. However, Google can’t predict how external NTP services, such as `pool.ntp.org`, will handle the leap second. Therefore, you should:
-
-- [Configure each GCE instance to use Google's internal NTP service](https://cloud.google.com/compute/docs/instances/configure-ntp#configure_ntp_for_your_instances).
-- If you plan to run a hybrid cluster across GCE and other cloud providers or environments, [configure the non-GCE machines to use Google's external NTP service](deploy-cockroachdb-on-digital-ocean.html#step-2-synchronize-clocks).
-
-{% elsif page.title contains "AWS" %}
-
-Amazon provides the [Amazon Time Sync Service](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html), which uses a fleet of satellite-connected and atomic reference clocks in each AWS Region to deliver accurate current time readings. The service also smears the leap second.
-
-- If you plan to run your entire cluster on AWS, [configure each AWS instance to use the internal Amazon Time Sync Service](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html#configure-amazon-time-service).
-- However, if you plan to run a hybrid cluster across AWS and other cloud providers or environments, [configure all machines to use Google's external NTP service](deploy-cockroachdb-on-digital-ocean.html#step-2-synchronize-clocks), which is comparably accurate and also handles "smearing" the leap second.
-
-{% elsif page.title contains "Azure" %}
-
-[`ntpd`](http://doc.ntp.org/) should keep offsets in the single-digit milliseconds, so that software is featured here. However, to run `ntpd` properly on Azure VMs, it's necessary to first unbind the Time Synchronization device used by the Hyper-V technology running Azure VMs; this device aims to synchronize time between the VM and its host operating system but has been known to cause problems.
-
-1. SSH to the first machine.
-
-2. Find the ID of the Hyper-V Time Synchronization device:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ curl -O https://raw.githubusercontent.com/torvalds/linux/master/tools/hv/lsvmbus
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ python lsvmbus -vv | grep -w "Time Synchronization" -A 3
- ~~~
-
- ~~~
- VMBUS ID 12: Class_ID = {9527e630-d0ae-497b-adce-e80ab0175caf} - [Time Synchronization]
- Device_ID = {2dd1ce17-079e-403c-b352-a1921ee207ee}
- Sysfs path: /sys/bus/vmbus/devices/2dd1ce17-079e-403c-b352-a1921ee207ee
- Rel_ID=12, target_cpu=0
- ~~~
-
-3. Unbind the device, using the `Device_ID` from the previous command's output:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ echo <Device_ID> | sudo tee /sys/bus/vmbus/drivers/hv_util/unbind
- ~~~
-
-4. Install the `ntp` package:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ sudo apt-get install ntp
- ~~~
-
-5. Stop the NTP daemon:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ sudo service ntp stop
- ~~~
-
-6. Sync the machine's clock with Google's NTP service:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ sudo ntpd -b time.google.com
- ~~~
-
- To make this change permanent, in the `/etc/ntp.conf` file, remove or comment out any lines starting with `server` or `pool` and add the following lines:
-
- {% include copy-clipboard.html %}
- ~~~
- server time1.google.com iburst
- server time2.google.com iburst
- server time3.google.com iburst
- server time4.google.com iburst
- ~~~
-
- Restart the NTP daemon:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ sudo service ntp start
- ~~~
-
- {{site.data.alerts.callout_info}}We recommend Google's NTP service because they handle "smearing" the leap second. If you use a different NTP service that doesn't smear the leap second, be sure to configure client-side smearing in the same way on each machine.{{site.data.alerts.end}}
-
-7. Verify that the machine is using a Google NTP server:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ sudo ntpq -p
- ~~~
-
- The active NTP server will be marked with an asterisk.
-
-8. Repeat these steps for each machine where a CockroachDB node will run.
-
-{% endif %}
diff --git a/src/current/_includes/v2.0/prod-deployment/use-cluster.md b/src/current/_includes/v2.0/prod-deployment/use-cluster.md
deleted file mode 100644
index fc2224fd384..00000000000
--- a/src/current/_includes/v2.0/prod-deployment/use-cluster.md
+++ /dev/null
@@ -1,11 +0,0 @@
-Now that your deployment is working, you can:
-
-1. [Implement your data model](sql-statements.html).
-2. [Create users](create-and-manage-users.html) and [grant them privileges](grant.html).
-3. [Connect your application](install-client-drivers.html). Be sure to connect your application to the load balancer, not to a CockroachDB node.
-
-You may also want to adjust the way the cluster replicates data. For example, by default, a multi-node cluster replicates all data 3 times; you can change this replication factor or create additional rules for replicating individual databases, tables, and rows differently. For more information, see [Configure Replication Zones](configure-replication-zones.html).
-
-{{site.data.alerts.callout_danger}}
-When running a cluster of 5 nodes or more, it's safest to [increase the replication factor for important internal data](configure-replication-zones.html#create-a-replication-zone-for-a-system-range) to 5, even if you do not do so for user data. For the cluster as a whole to remain available, the ranges for this internal data must always retain a majority of their replicas.
-{{site.data.alerts.end}}
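-
-For example, in this version's CLI, raising the replication factor for a system range could look like the following sketch (an illustration only; verify the zone names and `cockroach zone set` syntax against the linked page, and note that `<address of any node>` is a placeholder):
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ echo 'num_replicas: 5' | cockroach zone set .meta --certs-dir=certs --host=<address of any node> -f -
-~~~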
diff --git a/src/current/_includes/v2.0/sql/connection-parameters-with-url.md b/src/current/_includes/v2.0/sql/connection-parameters-with-url.md
deleted file mode 100644
index 59c24c6450d..00000000000
--- a/src/current/_includes/v2.0/sql/connection-parameters-with-url.md
+++ /dev/null
@@ -1,2 +0,0 @@
-{% include {{ page.version.version }}/sql/connection-parameters.md %}
- `--url` | A [connection URL](connection-parameters.html#connect-using-a-url) to use instead of the other arguments.<br><br>**Env Variable:** `COCKROACH_URL`<br>**Default:** no URL
diff --git a/src/current/_includes/v2.0/sql/connection-parameters.md b/src/current/_includes/v2.0/sql/connection-parameters.md
deleted file mode 100644
index 2e74255dcc4..00000000000
--- a/src/current/_includes/v2.0/sql/connection-parameters.md
+++ /dev/null
@@ -1,7 +0,0 @@
-Flag | Description
------|------------
-`--host` | The server host to connect to. This can be the address of any node in the cluster.<br><br>**Env Variable:** `COCKROACH_HOST`<br>**Default:** `localhost`
-`--port` `-p` | The server port to connect to.<br><br>**Env Variable:** `COCKROACH_PORT`<br>**Default:** `26257`
-`--user` `-u` | The [SQL user](create-and-manage-users.html) that will own the client session.<br><br>**Env Variable:** `COCKROACH_USER`<br>**Default:** `root`
-`--insecure` | Use an insecure connection.<br><br>**Env Variable:** `COCKROACH_INSECURE`<br>**Default:** `false`
-`--certs-dir` | The path to the [certificate directory](create-security-certificates.html) containing the CA and client certificates and client key.<br><br>**Env Variable:** `COCKROACH_CERTS_DIR`<br>**Default:** `${HOME}/.cockroach-certs/`
\ No newline at end of file
diff --git a/src/current/_includes/v2.0/sql/diagrams/add_column.html b/src/current/_includes/v2.0/sql/diagrams/add_column.html
deleted file mode 100644
index f59fd135d0e..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/add_column.html
+++ /dev/null
@@ -1,52 +0,0 @@
-
diff --git a/src/current/_includes/v2.0/sql/diagrams/add_constraint.html b/src/current/_includes/v2.0/sql/diagrams/add_constraint.html
deleted file mode 100644
index a8f3b1c9c61..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/add_constraint.html
+++ /dev/null
@@ -1,38 +0,0 @@
-
diff --git a/src/current/_includes/v2.0/sql/diagrams/alter_column.html b/src/current/_includes/v2.0/sql/diagrams/alter_column.html
deleted file mode 100644
index 1c77dc193ef..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/alter_column.html
+++ /dev/null
@@ -1,56 +0,0 @@
-
diff --git a/src/current/_includes/v2.0/sql/diagrams/alter_sequence_options.html b/src/current/_includes/v2.0/sql/diagrams/alter_sequence_options.html
deleted file mode 100644
index ee56ccdaee6..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/alter_sequence_options.html
+++ /dev/null
@@ -1,63 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.0/sql/diagrams/alter_table_partition_by.html b/src/current/_includes/v2.0/sql/diagrams/alter_table_partition_by.html
deleted file mode 100644
index 073c8794394..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/alter_table_partition_by.html
+++ /dev/null
@@ -1,81 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.0/sql/diagrams/alter_user_password.html b/src/current/_includes/v2.0/sql/diagrams/alter_user_password.html
deleted file mode 100644
index 0e014933d1b..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/alter_user_password.html
+++ /dev/null
@@ -1,31 +0,0 @@
-
diff --git a/src/current/_includes/v2.0/sql/diagrams/alter_view.html b/src/current/_includes/v2.0/sql/diagrams/alter_view.html
deleted file mode 100644
index 2e481fa60aa..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/alter_view.html
+++ /dev/null
@@ -1,36 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.0/sql/diagrams/backup.html b/src/current/_includes/v2.0/sql/diagrams/backup.html
deleted file mode 100644
index 1974cb5bcb0..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/backup.html
+++ /dev/null
@@ -1,73 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.0/sql/diagrams/begin_transaction.html b/src/current/_includes/v2.0/sql/diagrams/begin_transaction.html
deleted file mode 100644
index ee2372d9861..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/begin_transaction.html
+++ /dev/null
@@ -1,47 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.0/sql/diagrams/cancel_job.html b/src/current/_includes/v2.0/sql/diagrams/cancel_job.html
deleted file mode 100644
index f61ff0cfa79..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/cancel_job.html
+++ /dev/null
@@ -1,19 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.0/sql/diagrams/cancel_query.html b/src/current/_includes/v2.0/sql/diagrams/cancel_query.html
deleted file mode 100644
index 6cc33a38466..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/cancel_query.html
+++ /dev/null
@@ -1,19 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.0/sql/diagrams/check_column_level.html b/src/current/_includes/v2.0/sql/diagrams/check_column_level.html
deleted file mode 100644
index 59eec3e3c15..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/check_column_level.html
+++ /dev/null
@@ -1,70 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.0/sql/diagrams/check_table_level.html b/src/current/_includes/v2.0/sql/diagrams/check_table_level.html
deleted file mode 100644
index 6066d637220..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/check_table_level.html
+++ /dev/null
@@ -1,60 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.0/sql/diagrams/col_qualification.html b/src/current/_includes/v2.0/sql/diagrams/col_qualification.html
deleted file mode 100644
index 8b9b2d4fa1d..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/col_qualification.html
+++ /dev/null
@@ -1,132 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.0/sql/diagrams/column_def.html b/src/current/_includes/v2.0/sql/diagrams/column_def.html
deleted file mode 100644
index 284e8dc5838..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/column_def.html
+++ /dev/null
@@ -1,23 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.0/sql/diagrams/commit_transaction.html b/src/current/_includes/v2.0/sql/diagrams/commit_transaction.html
deleted file mode 100644
index 12914f3e1cb..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/commit_transaction.html
+++ /dev/null
@@ -1,17 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.0/sql/diagrams/create_database.html b/src/current/_includes/v2.0/sql/diagrams/create_database.html
deleted file mode 100644
index c621b08e138..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/create_database.html
+++ /dev/null
@@ -1,42 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.0/sql/diagrams/create_index.html b/src/current/_includes/v2.0/sql/diagrams/create_index.html
deleted file mode 100644
index dc0479dab14..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/create_index.html
+++ /dev/null
@@ -1,91 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.0/sql/diagrams/create_inverted_index.html b/src/current/_includes/v2.0/sql/diagrams/create_inverted_index.html
deleted file mode 100644
index 266281c12c1..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/create_inverted_index.html
+++ /dev/null
@@ -1,64 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.0/sql/diagrams/create_role.html b/src/current/_includes/v2.0/sql/diagrams/create_role.html
deleted file mode 100644
index 3c9c43dedf3..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/create_role.html
+++ /dev/null
@@ -1,28 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.0/sql/diagrams/create_sequence.html b/src/current/_includes/v2.0/sql/diagrams/create_sequence.html
deleted file mode 100644
index 4363cc0b087..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/create_sequence.html
+++ /dev/null
@@ -1,58 +0,0 @@
-
diff --git a/src/current/_includes/v2.0/sql/diagrams/create_table.html b/src/current/_includes/v2.0/sql/diagrams/create_table.html
deleted file mode 100644
index 456c9f64ab7..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/create_table.html
+++ /dev/null
@@ -1,67 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.0/sql/diagrams/create_table_as.html b/src/current/_includes/v2.0/sql/diagrams/create_table_as.html
deleted file mode 100644
index dbf1028099a..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/create_table_as.html
+++ /dev/null
@@ -1,50 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.0/sql/diagrams/create_user.html b/src/current/_includes/v2.0/sql/diagrams/create_user.html
deleted file mode 100644
index 1dc78bb289a..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/create_user.html
+++ /dev/null
@@ -1,39 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.0/sql/diagrams/create_view.html b/src/current/_includes/v2.0/sql/diagrams/create_view.html
deleted file mode 100644
index 044db4c888c..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/create_view.html
+++ /dev/null
@@ -1,38 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.0/sql/diagrams/default_value_column_level.html b/src/current/_includes/v2.0/sql/diagrams/default_value_column_level.html
deleted file mode 100644
index 0ba9afca9c4..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/default_value_column_level.html
+++ /dev/null
@@ -1,64 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.0/sql/diagrams/delete.html b/src/current/_includes/v2.0/sql/diagrams/delete.html
deleted file mode 100644
index d79cbd6e082..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/delete.html
+++ /dev/null
@@ -1,66 +0,0 @@
-
diff --git a/src/current/_includes/v2.0/sql/diagrams/drop_column.html b/src/current/_includes/v2.0/sql/diagrams/drop_column.html
deleted file mode 100644
index 384f5219d9d..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/drop_column.html
+++ /dev/null
@@ -1,43 +0,0 @@
-
diff --git a/src/current/_includes/v2.0/sql/diagrams/drop_constraint.html b/src/current/_includes/v2.0/sql/diagrams/drop_constraint.html
deleted file mode 100644
index 77cea230ccd..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/drop_constraint.html
+++ /dev/null
@@ -1,45 +0,0 @@
-
diff --git a/src/current/_includes/v2.0/sql/diagrams/drop_database.html b/src/current/_includes/v2.0/sql/diagrams/drop_database.html
deleted file mode 100644
index 038eb0befc1..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/drop_database.html
+++ /dev/null
@@ -1,31 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.0/sql/diagrams/drop_index.html b/src/current/_includes/v2.0/sql/diagrams/drop_index.html
deleted file mode 100644
index 2dd8b3636ee..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/drop_index.html
+++ /dev/null
@@ -1,31 +0,0 @@
-
diff --git a/src/current/_includes/v2.0/sql/diagrams/drop_role.html b/src/current/_includes/v2.0/sql/diagrams/drop_role.html
deleted file mode 100644
index 0037ebf56ce..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/drop_role.html
+++ /dev/null
@@ -1,25 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.0/sql/diagrams/drop_sequence.html b/src/current/_includes/v2.0/sql/diagrams/drop_sequence.html
deleted file mode 100644
index 6507f7dec30..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/drop_sequence.html
+++ /dev/null
@@ -1,34 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.0/sql/diagrams/drop_table.html b/src/current/_includes/v2.0/sql/diagrams/drop_table.html
deleted file mode 100644
index 18ad4fdd502..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/drop_table.html
+++ /dev/null
@@ -1,34 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.0/sql/diagrams/drop_user.html b/src/current/_includes/v2.0/sql/diagrams/drop_user.html
deleted file mode 100644
index 57c3db991b9..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/drop_user.html
+++ /dev/null
@@ -1,28 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.0/sql/diagrams/drop_view.html b/src/current/_includes/v2.0/sql/diagrams/drop_view.html
deleted file mode 100644
index d95db116000..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/drop_view.html
+++ /dev/null
@@ -1,34 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.0/sql/diagrams/experimental_audit.html b/src/current/_includes/v2.0/sql/diagrams/experimental_audit.html
deleted file mode 100644
index 46cc527074a..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/experimental_audit.html
+++ /dev/null
@@ -1,39 +0,0 @@
-
diff --git a/src/current/_includes/v2.0/sql/diagrams/explain.html b/src/current/_includes/v2.0/sql/diagrams/explain.html
deleted file mode 100644
index 89ca35dd0fa..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/explain.html
+++ /dev/null
@@ -1,40 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.0/sql/diagrams/family_def.html b/src/current/_includes/v2.0/sql/diagrams/family_def.html
deleted file mode 100644
index 1dda01d9e79..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/family_def.html
+++ /dev/null
@@ -1,30 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.0/sql/diagrams/foreign_key_column_level.html b/src/current/_includes/v2.0/sql/diagrams/foreign_key_column_level.html
deleted file mode 100644
index a963e586425..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/foreign_key_column_level.html
+++ /dev/null
@@ -1,75 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.0/sql/diagrams/foreign_key_table_level.html b/src/current/_includes/v2.0/sql/diagrams/foreign_key_table_level.html
deleted file mode 100644
index 2eb3498af46..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/foreign_key_table_level.html
+++ /dev/null
@@ -1,85 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.0/sql/diagrams/grant_privileges.html b/src/current/_includes/v2.0/sql/diagrams/grant_privileges.html
deleted file mode 100644
index da7f44e5160..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/grant_privileges.html
+++ /dev/null
@@ -1,74 +0,0 @@
-
diff --git a/src/current/_includes/v2.0/sql/diagrams/grant_roles.html b/src/current/_includes/v2.0/sql/diagrams/grant_roles.html
deleted file mode 100644
index f8eee0dc766..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/grant_roles.html
+++ /dev/null
@@ -1,34 +0,0 @@
-
diff --git a/src/current/_includes/v2.0/sql/diagrams/import.html b/src/current/_includes/v2.0/sql/diagrams/import.html
deleted file mode 100644
index 4528fe2a3e2..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/import.html
+++ /dev/null
@@ -1,72 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.0/sql/diagrams/index_def.html b/src/current/_includes/v2.0/sql/diagrams/index_def.html
deleted file mode 100644
index 7808b2e4800..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/index_def.html
+++ /dev/null
@@ -1,85 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.0/sql/diagrams/insert.html b/src/current/_includes/v2.0/sql/diagrams/insert.html
deleted file mode 100644
index 81576677379..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/insert.html
+++ /dev/null
@@ -1,81 +0,0 @@
-
diff --git a/src/current/_includes/v2.0/sql/diagrams/interleave.html b/src/current/_includes/v2.0/sql/diagrams/interleave.html
deleted file mode 100644
index 09bb9c35b5b..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/interleave.html
+++ /dev/null
@@ -1,69 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.0/sql/diagrams/joined_table.html b/src/current/_includes/v2.0/sql/diagrams/joined_table.html
deleted file mode 100644
index 68b66314702..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/joined_table.html
+++ /dev/null
@@ -1,100 +0,0 @@
-
diff --git a/src/current/_includes/v2.0/sql/diagrams/limit_clause.html b/src/current/_includes/v2.0/sql/diagrams/limit_clause.html
deleted file mode 100644
index 98d5114a88e..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/limit_clause.html
+++ /dev/null
@@ -1,38 +0,0 @@
-
diff --git a/src/current/_includes/v2.0/sql/diagrams/not_null_column_level.html b/src/current/_includes/v2.0/sql/diagrams/not_null_column_level.html
deleted file mode 100644
index 52e17e9d57d..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/not_null_column_level.html
+++ /dev/null
@@ -1,59 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.0/sql/diagrams/offset_clause.html b/src/current/_includes/v2.0/sql/diagrams/offset_clause.html
deleted file mode 100644
index d6dc4873ee5..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/offset_clause.html
+++ /dev/null
@@ -1,26 +0,0 @@
-
diff --git a/src/current/_includes/v2.0/sql/diagrams/on_conflict.html b/src/current/_includes/v2.0/sql/diagrams/on_conflict.html
deleted file mode 100644
index 7a64a45547b..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/on_conflict.html
+++ /dev/null
@@ -1,107 +0,0 @@
-
diff --git a/src/current/_includes/v2.0/sql/diagrams/opt_interleave.html b/src/current/_includes/v2.0/sql/diagrams/opt_interleave.html
deleted file mode 100644
index 5825c01b310..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/opt_interleave.html
+++ /dev/null
@@ -1,33 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.0/sql/diagrams/pause_job.html b/src/current/_includes/v2.0/sql/diagrams/pause_job.html
deleted file mode 100644
index 2726666933a..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/pause_job.html
+++ /dev/null
@@ -1,19 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.0/sql/diagrams/primary_key_column_level.html b/src/current/_includes/v2.0/sql/diagrams/primary_key_column_level.html
deleted file mode 100644
index f938b641654..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/primary_key_column_level.html
+++ /dev/null
@@ -1,59 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.0/sql/diagrams/primary_key_table_level.html b/src/current/_includes/v2.0/sql/diagrams/primary_key_table_level.html
deleted file mode 100644
index db8ece49c39..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/primary_key_table_level.html
+++ /dev/null
@@ -1,63 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.0/sql/diagrams/release_savepoint.html b/src/current/_includes/v2.0/sql/diagrams/release_savepoint.html
deleted file mode 100644
index 194ce6573ca..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/release_savepoint.html
+++ /dev/null
@@ -1,19 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.0/sql/diagrams/rename_column.html b/src/current/_includes/v2.0/sql/diagrams/rename_column.html
deleted file mode 100644
index 2d275bc9de7..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/rename_column.html
+++ /dev/null
@@ -1,44 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.0/sql/diagrams/rename_database.html b/src/current/_includes/v2.0/sql/diagrams/rename_database.html
deleted file mode 100644
index ce9ddd3ddba..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/rename_database.html
+++ /dev/null
@@ -1,30 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.0/sql/diagrams/rename_index.html b/src/current/_includes/v2.0/sql/diagrams/rename_index.html
deleted file mode 100644
index 82ed2e90255..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/rename_index.html
+++ /dev/null
@@ -1,33 +0,0 @@
-
diff --git a/src/current/_includes/v2.0/sql/diagrams/rename_sequence.html b/src/current/_includes/v2.0/sql/diagrams/rename_sequence.html
deleted file mode 100644
index a564d9db425..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/rename_sequence.html
+++ /dev/null
@@ -1,36 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.0/sql/diagrams/rename_table.html b/src/current/_includes/v2.0/sql/diagrams/rename_table.html
deleted file mode 100644
index 316c56482eb..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/rename_table.html
+++ /dev/null
@@ -1,36 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.0/sql/diagrams/reset_csetting.html b/src/current/_includes/v2.0/sql/diagrams/reset_csetting.html
deleted file mode 100644
index 49e120ffc69..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/reset_csetting.html
+++ /dev/null
@@ -1,22 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.0/sql/diagrams/reset_session.html b/src/current/_includes/v2.0/sql/diagrams/reset_session.html
deleted file mode 100644
index 0a47ec52d49..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/reset_session.html
+++ /dev/null
@@ -1,19 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.0/sql/diagrams/restore.html b/src/current/_includes/v2.0/sql/diagrams/restore.html
deleted file mode 100644
index 4aec1b4819f..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/restore.html
+++ /dev/null
@@ -1,67 +0,0 @@
-
diff --git a/src/current/_includes/v2.0/sql/diagrams/resume_job.html b/src/current/_includes/v2.0/sql/diagrams/resume_job.html
deleted file mode 100644
index 2aa93c46cb5..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/resume_job.html
+++ /dev/null
@@ -1,19 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.0/sql/diagrams/revoke_privileges.html b/src/current/_includes/v2.0/sql/diagrams/revoke_privileges.html
deleted file mode 100644
index a6f9a1dee8e..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/revoke_privileges.html
+++ /dev/null
@@ -1,74 +0,0 @@
-
diff --git a/src/current/_includes/v2.0/sql/diagrams/revoke_roles.html b/src/current/_includes/v2.0/sql/diagrams/revoke_roles.html
deleted file mode 100644
index a30aee75474..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/revoke_roles.html
+++ /dev/null
@@ -1,34 +0,0 @@
-
diff --git a/src/current/_includes/v2.0/sql/diagrams/rollback_transaction.html b/src/current/_includes/v2.0/sql/diagrams/rollback_transaction.html
deleted file mode 100644
index c34d5d12047..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/rollback_transaction.html
+++ /dev/null
@@ -1,22 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.0/sql/diagrams/savepoint.html b/src/current/_includes/v2.0/sql/diagrams/savepoint.html
deleted file mode 100644
index 9b7dc70608b..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/savepoint.html
+++ /dev/null
@@ -1,16 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.0/sql/diagrams/select.html b/src/current/_includes/v2.0/sql/diagrams/select.html
deleted file mode 100644
index 9f743234e06..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/select.html
+++ /dev/null
@@ -1,38 +0,0 @@
-
diff --git a/src/current/_includes/v2.0/sql/diagrams/select_clause.html b/src/current/_includes/v2.0/sql/diagrams/select_clause.html
deleted file mode 100644
index 88dc35507df..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/select_clause.html
+++ /dev/null
@@ -1,53 +0,0 @@
-
diff --git a/src/current/_includes/v2.0/sql/diagrams/set_cluster_setting.html b/src/current/_includes/v2.0/sql/diagrams/set_cluster_setting.html
deleted file mode 100644
index b6554c7be52..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/set_cluster_setting.html
+++ /dev/null
@@ -1,36 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.0/sql/diagrams/set_operation.html b/src/current/_includes/v2.0/sql/diagrams/set_operation.html
deleted file mode 100644
index aa0e63023dc..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/set_operation.html
+++ /dev/null
@@ -1,32 +0,0 @@
-
diff --git a/src/current/_includes/v2.0/sql/diagrams/set_transaction.html b/src/current/_includes/v2.0/sql/diagrams/set_transaction.html
deleted file mode 100644
index 14d8b19a019..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/set_transaction.html
+++ /dev/null
@@ -1,68 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.0/sql/diagrams/set_var.html b/src/current/_includes/v2.0/sql/diagrams/set_var.html
deleted file mode 100644
index 96bb04e7cf6..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/set_var.html
+++ /dev/null
@@ -1,33 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.0/sql/diagrams/show_backup.html b/src/current/_includes/v2.0/sql/diagrams/show_backup.html
deleted file mode 100644
index 0f4f4e2c379..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/show_backup.html
+++ /dev/null
@@ -1,19 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.0/sql/diagrams/show_cluster_setting.html b/src/current/_includes/v2.0/sql/diagrams/show_cluster_setting.html
deleted file mode 100644
index d575106689f..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/show_cluster_setting.html
+++ /dev/null
@@ -1,34 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.0/sql/diagrams/show_columns.html b/src/current/_includes/v2.0/sql/diagrams/show_columns.html
deleted file mode 100644
index 7b47a3b3123..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/show_columns.html
+++ /dev/null
@@ -1,22 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.0/sql/diagrams/show_constraints.html b/src/current/_includes/v2.0/sql/diagrams/show_constraints.html
deleted file mode 100644
index 9c520ae9bc6..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/show_constraints.html
+++ /dev/null
@@ -1,25 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.0/sql/diagrams/show_create_sequence.html b/src/current/_includes/v2.0/sql/diagrams/show_create_sequence.html
deleted file mode 100644
index 6a2437fb077..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/show_create_sequence.html
+++ /dev/null
@@ -1,19 +0,0 @@
-
diff --git a/src/current/_includes/v2.0/sql/diagrams/show_create_table.html b/src/current/_includes/v2.0/sql/diagrams/show_create_table.html
deleted file mode 100644
index ee1abe260ad..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/show_create_table.html
+++ /dev/null
@@ -1,22 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.0/sql/diagrams/show_create_view.html b/src/current/_includes/v2.0/sql/diagrams/show_create_view.html
deleted file mode 100644
index 6d730290564..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/show_create_view.html
+++ /dev/null
@@ -1,22 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.0/sql/diagrams/show_databases.html b/src/current/_includes/v2.0/sql/diagrams/show_databases.html
deleted file mode 100644
index 487bfc4e629..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/show_databases.html
+++ /dev/null
@@ -1,14 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.0/sql/diagrams/show_grants.html b/src/current/_includes/v2.0/sql/diagrams/show_grants.html
deleted file mode 100644
index 92a7932dc22..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/show_grants.html
+++ /dev/null
@@ -1,61 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.0/sql/diagrams/show_index.html b/src/current/_includes/v2.0/sql/diagrams/show_index.html
deleted file mode 100644
index 3014183c521..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/show_index.html
+++ /dev/null
@@ -1,28 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.0/sql/diagrams/show_jobs.html b/src/current/_includes/v2.0/sql/diagrams/show_jobs.html
deleted file mode 100644
index b59d4d176d0..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/show_jobs.html
+++ /dev/null
@@ -1,14 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.0/sql/diagrams/show_queries.html b/src/current/_includes/v2.0/sql/diagrams/show_queries.html
deleted file mode 100644
index 26376243dac..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/show_queries.html
+++ /dev/null
@@ -1,20 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.0/sql/diagrams/show_ranges.html b/src/current/_includes/v2.0/sql/diagrams/show_ranges.html
deleted file mode 100644
index 268530ff8f4..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/show_ranges.html
+++ /dev/null
@@ -1,32 +0,0 @@
-
diff --git a/src/current/_includes/v2.0/sql/diagrams/show_roles.html b/src/current/_includes/v2.0/sql/diagrams/show_roles.html
deleted file mode 100644
index fd508395e0b..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/show_roles.html
+++ /dev/null
@@ -1,14 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.0/sql/diagrams/show_schemas.html b/src/current/_includes/v2.0/sql/diagrams/show_schemas.html
deleted file mode 100644
index efa07764533..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/show_schemas.html
+++ /dev/null
@@ -1,22 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.0/sql/diagrams/show_sessions.html b/src/current/_includes/v2.0/sql/diagrams/show_sessions.html
deleted file mode 100644
index 3b2aa5b16ee..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/show_sessions.html
+++ /dev/null
@@ -1,20 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.0/sql/diagrams/show_tables.html b/src/current/_includes/v2.0/sql/diagrams/show_tables.html
deleted file mode 100644
index 570e6222172..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/show_tables.html
+++ /dev/null
@@ -1,22 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.0/sql/diagrams/show_trace.html b/src/current/_includes/v2.0/sql/diagrams/show_trace.html
deleted file mode 100644
index ffa4c89a33e..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/show_trace.html
+++ /dev/null
@@ -1,31 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.0/sql/diagrams/show_users.html b/src/current/_includes/v2.0/sql/diagrams/show_users.html
deleted file mode 100644
index 7c33b7f00b4..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/show_users.html
+++ /dev/null
@@ -1,14 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.0/sql/diagrams/show_var.html b/src/current/_includes/v2.0/sql/diagrams/show_var.html
deleted file mode 100644
index fb7ec6f4ce8..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/show_var.html
+++ /dev/null
@@ -1,20 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.0/sql/diagrams/simple_select_clause.html b/src/current/_includes/v2.0/sql/diagrams/simple_select_clause.html
deleted file mode 100644
index 4eeeeae5b59..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/simple_select_clause.html
+++ /dev/null
@@ -1,107 +0,0 @@
-
diff --git a/src/current/_includes/v2.0/sql/diagrams/sort_clause.html b/src/current/_includes/v2.0/sql/diagrams/sort_clause.html
deleted file mode 100644
index dbac057629e..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/sort_clause.html
+++ /dev/null
@@ -1,55 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.0/sql/diagrams/split_index_at.html b/src/current/_includes/v2.0/sql/diagrams/split_index_at.html
deleted file mode 100644
index 51daee7e3c7..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/split_index_at.html
+++ /dev/null
@@ -1,35 +0,0 @@
-
diff --git a/src/current/_includes/v2.0/sql/diagrams/split_table_at.html b/src/current/_includes/v2.0/sql/diagrams/split_table_at.html
deleted file mode 100644
index a694595b9b5..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/split_table_at.html
+++ /dev/null
@@ -1,30 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.0/sql/diagrams/stmt_block.html b/src/current/_includes/v2.0/sql/diagrams/stmt_block.html
deleted file mode 100644
index a1b877445f7..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/stmt_block.html
+++ /dev/null
@@ -1,13814 +0,0 @@
-
diff --git a/src/current/_includes/v2.0/sql/diagrams/table.html b/src/current/_includes/v2.0/sql/diagrams/table.html
deleted file mode 100644
index e69de29bb2d..00000000000
diff --git a/src/current/_includes/v2.0/sql/diagrams/table_clause.html b/src/current/_includes/v2.0/sql/diagrams/table_clause.html
deleted file mode 100644
index 97691481d76..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/table_clause.html
+++ /dev/null
@@ -1,15 +0,0 @@
-
diff --git a/src/current/_includes/v2.0/sql/diagrams/table_constraint.html b/src/current/_includes/v2.0/sql/diagrams/table_constraint.html
deleted file mode 100644
index ac37f0f1eac..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/table_constraint.html
+++ /dev/null
@@ -1,120 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.0/sql/diagrams/table_ref.html b/src/current/_includes/v2.0/sql/diagrams/table_ref.html
deleted file mode 100644
index e3164a6e2a9..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/table_ref.html
+++ /dev/null
@@ -1,85 +0,0 @@
-
diff --git a/src/current/_includes/v2.0/sql/diagrams/truncate.html b/src/current/_includes/v2.0/sql/diagrams/truncate.html
deleted file mode 100644
index 06cb91a310c..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/truncate.html
+++ /dev/null
@@ -1,28 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.0/sql/diagrams/unique_column_level.html b/src/current/_includes/v2.0/sql/diagrams/unique_column_level.html
deleted file mode 100644
index c7c178e9351..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/unique_column_level.html
+++ /dev/null
@@ -1,59 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.0/sql/diagrams/unique_table_level.html b/src/current/_includes/v2.0/sql/diagrams/unique_table_level.html
deleted file mode 100644
index e77a972161a..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/unique_table_level.html
+++ /dev/null
@@ -1,63 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.0/sql/diagrams/update.html b/src/current/_includes/v2.0/sql/diagrams/update.html
deleted file mode 100644
index 7ead70594b4..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/update.html
+++ /dev/null
@@ -1,118 +0,0 @@
-
diff --git a/src/current/_includes/v2.0/sql/diagrams/upsert.html b/src/current/_includes/v2.0/sql/diagrams/upsert.html
deleted file mode 100644
index b4d7987ddfe..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/upsert.html
+++ /dev/null
@@ -1,71 +0,0 @@
-
diff --git a/src/current/_includes/v2.0/sql/diagrams/validate_constraint.html b/src/current/_includes/v2.0/sql/diagrams/validate_constraint.html
deleted file mode 100644
index d470d8dd98f..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/validate_constraint.html
+++ /dev/null
@@ -1,36 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.0/sql/diagrams/values_clause.html b/src/current/_includes/v2.0/sql/diagrams/values_clause.html
deleted file mode 100644
index 34f78e982b4..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/values_clause.html
+++ /dev/null
@@ -1,27 +0,0 @@
-
diff --git a/src/current/_includes/v2.0/sql/diagrams/with_clause.html b/src/current/_includes/v2.0/sql/diagrams/with_clause.html
deleted file mode 100644
index 0f746306ae3..00000000000
--- a/src/current/_includes/v2.0/sql/diagrams/with_clause.html
+++ /dev/null
@@ -1,71 +0,0 @@
-
diff --git a/src/current/_includes/v2.0/sql/function-special-forms.md b/src/current/_includes/v2.0/sql/function-special-forms.md
deleted file mode 100644
index bb4b06bbe39..00000000000
--- a/src/current/_includes/v2.0/sql/function-special-forms.md
+++ /dev/null
@@ -1,27 +0,0 @@
-| Special form                                              | Equivalent to                                |
-|-----------------------------------------------------------|----------------------------------------------|
-| `CURRENT_CATALOG`                                         | `current_catalog()`                          |
-| `CURRENT_DATE`                                            | `current_date()`                             |
-| `CURRENT_ROLE`                                            | `current_user()`                             |
-| `CURRENT_SCHEMA`                                          | `current_schema()`                           |
-| `CURRENT_TIMESTAMP`                                       | `current_timestamp()`                        |
-| `CURRENT_TIME`                                            | `current_time()`                             |
-| `CURRENT_USER`                                            | `current_user()`                             |
-| `EXTRACT(<part> FROM <value>)`                            | `extract("<part>", <value>)`                 |
-| `EXTRACT_DURATION(<part> FROM <value>)`                   | `extract_duration("<part>", <value>)`        |
-| `OVERLAY(<text1> PLACING <text2> FROM <int1> FOR <int2>)` | `overlay(<text1>, <text2>, <int1>, <int2>)`  |
-| `OVERLAY(<text1> PLACING <text2> FROM <int>)`             | `overlay(<text1>, <text2>, <int>)`           |
-| `POSITION(<text1> IN <text2>)`                            | `strpos(<text2>, <text1>)`                   |
-| `SESSION_USER`                                            | `current_user()`                             |
-| `SUBSTRING(<text> FOR <int1> FROM <int2>)`                | `substring(<text>, <int2>, <int1>)`          |
-| `SUBSTRING(<text> FOR <int>)`                             | `substring(<text>, 1, <int>)`                |
-| `SUBSTRING(<text> FROM <int1> FOR <int2>)`                | `substring(<text>, <int1>, <int2>)`          |
-| `SUBSTRING(<text> FROM <int>)`                            | `substring(<text>, <int>)`                   |
-| `TRIM(<text1> FROM <text2>)`                              | `btrim(<text2>, <text1>)`                    |
-| `TRIM(<text1>, <text2>)`                                  | `btrim(<text1>, <text2>)`                    |
-| `TRIM(FROM <text>)`                                       | `btrim(<text>)`                              |
-| `TRIM(LEADING <text1> FROM <text2>)`                      | `ltrim(<text2>, <text1>)`                    |
-| `TRIM(LEADING FROM <text>)`                               | `ltrim(<text>)`                              |
-| `TRIM(TRAILING <text1> FROM <text2>)`                     | `rtrim(<text2>, <text1>)`                    |
-| `TRIM(TRAILING FROM <text>)`                              | `rtrim(<text>)`                              |
-| `USER`                                                    | `current_user()`                             |
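As a quick illustration of the mapping above (a minimal sketch; the two statements are interchangeable):

~~~ sql
> SELECT SUBSTRING('CockroachDB' FROM 1 FOR 9);  -- special form
> SELECT substring('CockroachDB', 1, 9);         -- equivalent function call; both return 'Cockroach'
~~~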
diff --git a/src/current/_includes/v2.0/sql/settings/settings.md b/src/current/_includes/v2.0/sql/settings/settings.md
deleted file mode 100644
index 67a6dab2e4a..00000000000
--- a/src/current/_includes/v2.0/sql/settings/settings.md
+++ /dev/null
@@ -1,52 +0,0 @@
-| SETTING | TYPE | DEFAULT | DESCRIPTION |
-|-----------------------------------------------------|-------------------|------------|-------------------------------------------------------------------------------------------------------------------------------------------------|
-| `cloudstorage.gs.default.key` | string | `` | if set, JSON key to use during Google Cloud Storage operations |
-| `cloudstorage.http.custom_ca` | string | `` | custom root CA (appended to system's default CAs) for verifying certificates when interacting with HTTPS storage |
-| `cluster.organization` | string | `` | organization name |
-| `debug.panic_on_failed_assertions` | boolean | `false` | panic when an assertion fails rather than reporting |
-| `diagnostics.reporting.enabled` | boolean | `true` | enable reporting diagnostic metrics to cockroach labs |
-| `diagnostics.reporting.interval` | duration | `1h0m0s` | interval at which diagnostics data should be reported |
-| `diagnostics.reporting.send_crash_reports` | boolean | `true` | send crash and panic reports |
-| `kv.allocator.lease_rebalancing_aggressiveness` | float | `1E+00` | set greater than 1.0 to rebalance leases toward load more aggressively, or between 0 and 1.0 to be more conservative about rebalancing leases |
-| `kv.allocator.load_based_lease_rebalancing.enabled` | boolean | `true` | set to enable rebalancing of range leases based on load and latency |
-| `kv.allocator.range_rebalance_threshold` | float | `5E-02` | minimum fraction away from the mean a store's range count can be before it is considered overfull or underfull |
-| `kv.allocator.stat_based_rebalancing.enabled` | boolean | `false` | set to enable rebalancing of range replicas based on write load and disk usage |
-| `kv.allocator.stat_rebalance_threshold` | float | `2E-01` | minimum fraction away from the mean a store's stats (like disk usage or writes per second) can be before it is considered overfull or underfull |
-| `kv.bulk_io_write.max_rate` | byte size | `8.0 EiB` | the rate limit (bytes/sec) to use for writes to disk on behalf of bulk io ops |
-| `kv.bulk_sst.sync_size` | byte size | `2.0 MiB` | threshold after which non-Rocks SST writes must fsync (0 disables) |
-| `kv.raft.command.max_size` | byte size | `64 MiB` | maximum size of a raft command |
-| `kv.raft_log.synchronize` | boolean | `true` | set to true to synchronize on Raft log writes to persistent storage |
-| `kv.range.backpressure_range_size_multiplier` | float | `2E+00` | multiple of range_max_bytes that a range is allowed to grow to without splitting before writes to that range are blocked, or 0 to disable |
-| `kv.range_descriptor_cache.size` | integer | `1000000` | maximum number of entries in the range descriptor and leaseholder caches |
-| `kv.snapshot_rebalance.max_rate` | byte size | `2.0 MiB` | the rate limit (bytes/sec) to use for rebalance snapshots |
-| `kv.snapshot_recovery.max_rate` | byte size | `8.0 MiB` | the rate limit (bytes/sec) to use for recovery snapshots |
-| `kv.transaction.max_intents_bytes` | integer | `256000` | maximum number of bytes used to track write intents in transactions |
-| `kv.transaction.max_refresh_spans_bytes` | integer | `256000` | maximum number of bytes used to track refresh spans in serializable transactions |
-| `rocksdb.min_wal_sync_interval` | duration | `0s` | minimum duration between syncs of the RocksDB WAL |
-| `server.consistency_check.interval` | duration | `24h0m0s` | the time between range consistency checks; set to 0 to disable consistency checking |
-| `server.declined_reservation_timeout` | duration | `1s` | the amount of time to consider the store throttled for up-replication after a reservation was declined |
-| `server.failed_reservation_timeout` | duration | `5s` | the amount of time to consider the store throttled for up-replication after a failed reservation call |
-| `server.remote_debugging.mode` | string | `local` | set to enable remote debugging, localhost-only or disable (any, local, off) |
-| `server.shutdown.drain_wait` | duration | `0s` | the amount of time a server waits in an unready state before proceeding with the rest of the shutdown process |
-| `server.shutdown.query_wait` | duration | `10s` | the server will wait for at least this amount of time for active queries to finish |
-| `server.time_until_store_dead` | duration | `5m0s` | the time after which if there is no new gossiped information about a store, it is considered dead |
-| `server.web_session_timeout` | duration | `168h0m0s` | the duration that a newly created web session will be valid |
-| `sql.defaults.distsql` | enumeration | `1` | Default distributed SQL execution mode [off = 0, auto = 1, on = 2] |
-| `sql.distsql.distribute_index_joins` | boolean | `true` | if set, for index joins we instantiate a join reader on every node that has a stream; if not set, we use a single join reader |
-| `sql.distsql.interleaved_joins.enabled` | boolean | `true` | if set we plan interleaved table joins instead of merge joins when possible |
-| `sql.distsql.merge_joins.enabled` | boolean | `true` | if set, we plan merge joins when possible |
-| `sql.distsql.temp_storage.joins` | boolean | `true` | set to true to enable use of disk for distributed sql joins |
-| `sql.distsql.temp_storage.sorts` | boolean | `true` | set to true to enable use of disk for distributed sql sorts |
-| `sql.distsql.temp_storage.workmem` | byte size | `64 MiB` | maximum amount of memory in bytes a processor can use before falling back to temp storage |
-| `sql.metrics.statement_details.dump_to_logs` | boolean | `false` | dump collected statement statistics to node logs when periodically cleared |
-| `sql.metrics.statement_details.enabled` | boolean | `true` | collect per-statement query statistics |
-| `sql.metrics.statement_details.threshold` | duration | `0s` | minimum execution time to cause statistics to be collected |
-| `sql.trace.log_statement_execute` | boolean | `false` | set to true to enable logging of executed statements |
-| `sql.trace.session_eventlog.enabled` | boolean | `false` | set to true to enable session tracing |
-| `sql.trace.txn.enable_threshold` | duration | `0s` | duration beyond which all transactions are traced (set to 0 to disable) |
-| `timeseries.resolution_10s.storage_duration` | duration | `720h0m0s` | the amount of time to store timeseries data |
-| `timeseries.storage.enabled` | boolean | `true` | if set, periodic timeseries data is stored within the cluster; disabling is not recommended unless you are storing the data elsewhere |
-| `trace.debug.enable` | boolean | `false` | if set, traces for recent requests can be seen in the /debug page |
-| `trace.lightstep.token` | string | `` | if set, traces go to Lightstep using this token |
-| `trace.zipkin.collector` | string | `` | if set, traces go to the given Zipkin instance (example: '127.0.0.1:9411'); ignored if trace.lightstep.token is set. |
-| `version`                                           | custom validation | `2.0`      | set the active cluster version in the format '<major>.<minor>'. |
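As a usage sketch for the table above: settings are read with `SHOW CLUSTER SETTING` and changed with `SET CLUSTER SETTING`. The setting name comes from the table; the value shown is illustrative, not a recommendation.

~~~ sql
> SHOW CLUSTER SETTING server.time_until_store_dead;
> SET CLUSTER SETTING server.time_until_store_dead = '10m0s';
~~~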
diff --git a/src/current/_includes/v2.0/start-in-docker/mac-linux-steps.md b/src/current/_includes/v2.0/start-in-docker/mac-linux-steps.md
deleted file mode 100644
index e8715c0dd48..00000000000
--- a/src/current/_includes/v2.0/start-in-docker/mac-linux-steps.md
+++ /dev/null
@@ -1,160 +0,0 @@
-## Before you begin
-
-If you have not already installed the official CockroachDB Docker image, go to [Install CockroachDB](install-cockroachdb.html) and follow the instructions under **Use Docker**.
-
-## Step 1. Create a bridge network
-
-Since you'll be running multiple Docker containers on a single host, with one CockroachDB node per container, you need to create what Docker refers to as a [bridge network](https://docs.docker.com/engine/userguide/networking/#/a-bridge-network). The bridge network will enable the containers to communicate as a single cluster while keeping them isolated from external networks.
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ docker network create -d bridge roachnet
-~~~
-
-We've used `roachnet` as the network name here and in subsequent steps, but feel free to give your network any name you like.
-
-## Step 2. Start the first node
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ docker run -d \
---name=roach1 \
---hostname=roach1 \
---net=roachnet \
--p 26257:26257 -p 8080:8080 \
--v "${PWD}/cockroach-data/roach1:/cockroach/cockroach-data" \
-{{page.release_info.docker_image}}:{{page.release_info.version}} start --insecure
-~~~
-
-This command creates a container and starts the first CockroachDB node inside it. Let's look at each part:
-
-- `docker run`: The Docker command to start a new container.
-- `-d`: This flag runs the container in the background so you can continue the next steps in the same shell.
-- `--name`: The name for the container. This is optional, but a custom name makes it significantly easier to reference the container in other commands, for example, when opening a Bash session in the container or stopping the container.
-- `--hostname`: The hostname for the container. You will use this to join other containers/nodes to the cluster.
-- `--net`: The bridge network for the container to join. See step 1 for more details.
-- `-p 26257:26257 -p 8080:8080`: These flags map the default port for inter-node and client-node communication (`26257`) and the default port for HTTP requests to the Admin UI (`8080`) from the container to the host. This enables inter-container communication and makes it possible to call up the Admin UI from a browser.
-- `-v "${PWD}/cockroach-data/roach1:/cockroach/cockroach-data"`: This flag mounts a host directory as a data volume. This means that data and logs for this node will be stored in `${PWD}/cockroach-data/roach1` on the host and will persist after the container is stopped or deleted. For more details, see Docker's Bind Mounts topic.
-- `{{page.release_info.docker_image}}:{{page.release_info.version}} start --insecure`: The CockroachDB command to [start a node](start-a-node.html) in the container in insecure mode.
-
-## Step 3. Add nodes to the cluster
-
-At this point, your cluster is live and operational. With just one node, you can already connect a SQL client and start building out your database. In real deployments, however, you'll always want 3 or more nodes to take advantage of CockroachDB's [automatic replication](demo-data-replication.html), [rebalancing](demo-automatic-rebalancing.html), and [fault tolerance](demo-fault-tolerance-and-recovery.html) capabilities.
-
-To simulate a real deployment, scale your cluster by adding two more nodes:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ docker run -d \
---name=roach2 \
---hostname=roach2 \
---net=roachnet \
--v "${PWD}/cockroach-data/roach2:/cockroach/cockroach-data" \
-{{page.release_info.docker_image}}:{{page.release_info.version}} start --insecure --join=roach1
-~~~
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ docker run -d \
---name=roach3 \
---hostname=roach3 \
---net=roachnet \
--v "${PWD}/cockroach-data/roach3:/cockroach/cockroach-data" \
-{{page.release_info.docker_image}}:{{page.release_info.version}} start --insecure --join=roach1
-~~~
-
-These commands add two more containers and start CockroachDB nodes inside them, joining them to the first node. There are only a few differences to note from step 2:
-
-- `-v`: This flag mounts a host directory as a data volume. Data and logs for these nodes will be stored in `${PWD}/cockroach-data/roach2` and `${PWD}/cockroach-data/roach3` on the host and will persist after the containers are stopped or deleted.
-- `--join`: This flag joins the new nodes to the cluster, using the first container's `hostname`. Otherwise, all [`cockroach start`](start-a-node.html) defaults are accepted. Note that since each node is in a unique container, using identical default ports won’t cause conflicts.
-
-## Step 4. Test the cluster
-
-Now that you've scaled to 3 nodes, you can use any node as a SQL gateway to the cluster. To demonstrate this, use the `docker exec` command to start the [built-in SQL shell](use-the-built-in-sql-client.html) in the first container:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ docker exec -it roach1 ./cockroach sql --insecure
-~~~
-
-~~~
-# Welcome to the cockroach SQL interface.
-# All statements must be terminated by a semicolon.
-# To exit: CTRL + D.
-~~~
-
-Run some basic [CockroachDB SQL statements](learn-cockroachdb-sql.html):
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE DATABASE bank;
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE bank.accounts (id INT PRIMARY KEY, balance DECIMAL);
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> INSERT INTO bank.accounts VALUES (1, 1000.50);
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM bank.accounts;
-~~~
-
-~~~
-+----+---------+
-| id | balance |
-+----+---------+
-| 1 | 1000.5 |
-+----+---------+
-(1 row)
-~~~
-
-Exit the SQL shell on node 1:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> \q
-~~~
-
-Then start the SQL shell in the second container:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ docker exec -it roach2 ./cockroach sql --insecure
-~~~
-
-~~~
-# Welcome to the cockroach SQL interface.
-# All statements must be terminated by a semicolon.
-# To exit: CTRL + D.
-~~~
-
-Now run the same `SELECT` query:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM bank.accounts;
-~~~
-
-~~~
-+----+---------+
-| id | balance |
-+----+---------+
-| 1 | 1000.5 |
-+----+---------+
-(1 row)
-~~~
-
-As you can see, node 1 and node 2 behaved identically as SQL gateways.
-
-When you're done, exit the SQL shell on node 2:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> \q
-~~~
diff --git a/src/current/images/v2.0/2automated-scaling-repair.png b/src/current/images/v2.0/2automated-scaling-repair.png
deleted file mode 100644
index 2402db24d75..00000000000
Binary files a/src/current/images/v2.0/2automated-scaling-repair.png and /dev/null differ
diff --git a/src/current/images/v2.0/2distributed-transactions.png b/src/current/images/v2.0/2distributed-transactions.png
deleted file mode 100644
index 52fc2d11943..00000000000
Binary files a/src/current/images/v2.0/2distributed-transactions.png and /dev/null differ
diff --git a/src/current/images/v2.0/2go-implementation.png b/src/current/images/v2.0/2go-implementation.png
deleted file mode 100644
index e5729f51cfb..00000000000
Binary files a/src/current/images/v2.0/2go-implementation.png and /dev/null differ
diff --git a/src/current/images/v2.0/2open-source.png b/src/current/images/v2.0/2open-source.png
deleted file mode 100644
index b2a936d8d29..00000000000
Binary files a/src/current/images/v2.0/2open-source.png and /dev/null differ
diff --git a/src/current/images/v2.0/2simplified-deployments.png b/src/current/images/v2.0/2simplified-deployments.png
deleted file mode 100644
index 15576d1ae5d..00000000000
Binary files a/src/current/images/v2.0/2simplified-deployments.png and /dev/null differ
diff --git a/src/current/images/v2.0/2strong-consistency.png b/src/current/images/v2.0/2strong-consistency.png
deleted file mode 100644
index 571dc01761d..00000000000
Binary files a/src/current/images/v2.0/2strong-consistency.png and /dev/null differ
diff --git a/src/current/images/v2.0/CockroachDB_Training_Wide.png b/src/current/images/v2.0/CockroachDB_Training_Wide.png
deleted file mode 100644
index 0844c2b50e0..00000000000
Binary files a/src/current/images/v2.0/CockroachDB_Training_Wide.png and /dev/null differ
diff --git a/src/current/images/v2.0/Parallel_Statement_Execution_Error_Mismatch.png b/src/current/images/v2.0/Parallel_Statement_Execution_Error_Mismatch.png
deleted file mode 100644
index f60360c9598..00000000000
Binary files a/src/current/images/v2.0/Parallel_Statement_Execution_Error_Mismatch.png and /dev/null differ
diff --git a/src/current/images/v2.0/Parallel_Statement_Hybrid_Execution.png b/src/current/images/v2.0/Parallel_Statement_Hybrid_Execution.png
deleted file mode 100644
index a4edf85dc02..00000000000
Binary files a/src/current/images/v2.0/Parallel_Statement_Hybrid_Execution.png and /dev/null differ
diff --git a/src/current/images/v2.0/Parallel_Statement_Normal_Execution.png b/src/current/images/v2.0/Parallel_Statement_Normal_Execution.png
deleted file mode 100644
index df63ab1da01..00000000000
Binary files a/src/current/images/v2.0/Parallel_Statement_Normal_Execution.png and /dev/null differ
diff --git a/src/current/images/v2.0/Sequential_Statement_Execution.png b/src/current/images/v2.0/Sequential_Statement_Execution.png
deleted file mode 100644
index 99c47c51664..00000000000
Binary files a/src/current/images/v2.0/Sequential_Statement_Execution.png and /dev/null differ
diff --git a/src/current/images/v2.0/admin-ui-cluster-overview-panel.png b/src/current/images/v2.0/admin-ui-cluster-overview-panel.png
deleted file mode 100644
index ee906077ee8..00000000000
Binary files a/src/current/images/v2.0/admin-ui-cluster-overview-panel.png and /dev/null differ
diff --git a/src/current/images/v2.0/admin-ui-custom-chart-debug-00.png b/src/current/images/v2.0/admin-ui-custom-chart-debug-00.png
deleted file mode 100644
index 7e94d83da20..00000000000
Binary files a/src/current/images/v2.0/admin-ui-custom-chart-debug-00.png and /dev/null differ
diff --git a/src/current/images/v2.0/admin-ui-node-components.png b/src/current/images/v2.0/admin-ui-node-components.png
deleted file mode 100644
index 2ed730ff80c..00000000000
Binary files a/src/current/images/v2.0/admin-ui-node-components.png and /dev/null differ
diff --git a/src/current/images/v2.0/admin-ui-node-list.png b/src/current/images/v2.0/admin-ui-node-list.png
deleted file mode 100644
index 9820b63c12a..00000000000
Binary files a/src/current/images/v2.0/admin-ui-node-list.png and /dev/null differ
diff --git a/src/current/images/v2.0/admin-ui-node-map-after-license.png b/src/current/images/v2.0/admin-ui-node-map-after-license.png
deleted file mode 100644
index fa47a7b579f..00000000000
Binary files a/src/current/images/v2.0/admin-ui-node-map-after-license.png and /dev/null differ
diff --git a/src/current/images/v2.0/admin-ui-node-map-before-license.png b/src/current/images/v2.0/admin-ui-node-map-before-license.png
deleted file mode 100644
index f352e214868..00000000000
Binary files a/src/current/images/v2.0/admin-ui-node-map-before-license.png and /dev/null differ
diff --git a/src/current/images/v2.0/admin-ui-node-map-complete.png b/src/current/images/v2.0/admin-ui-node-map-complete.png
deleted file mode 100644
index 46b1c38d4bf..00000000000
Binary files a/src/current/images/v2.0/admin-ui-node-map-complete.png and /dev/null differ
diff --git a/src/current/images/v2.0/admin-ui-node-map-navigation.gif b/src/current/images/v2.0/admin-ui-node-map-navigation.gif
deleted file mode 100644
index 67ce2dc009c..00000000000
Binary files a/src/current/images/v2.0/admin-ui-node-map-navigation.gif and /dev/null differ
diff --git a/src/current/images/v2.0/admin-ui-node-map.png b/src/current/images/v2.0/admin-ui-node-map.png
deleted file mode 100644
index c1e0b83a3dc..00000000000
Binary files a/src/current/images/v2.0/admin-ui-node-map.png and /dev/null differ
diff --git a/src/current/images/v2.0/admin-ui-region-component.png b/src/current/images/v2.0/admin-ui-region-component.png
deleted file mode 100644
index c36a362d107..00000000000
Binary files a/src/current/images/v2.0/admin-ui-region-component.png and /dev/null differ
diff --git a/src/current/images/v2.0/admin-ui-single-node.gif b/src/current/images/v2.0/admin-ui-single-node.gif
deleted file mode 100644
index f60d25b0e2a..00000000000
Binary files a/src/current/images/v2.0/admin-ui-single-node.gif and /dev/null differ
diff --git a/src/current/images/v2.0/admin-ui-time-range.gif b/src/current/images/v2.0/admin-ui-time-range.gif
deleted file mode 100644
index c28807b9a1b..00000000000
Binary files a/src/current/images/v2.0/admin-ui-time-range.gif and /dev/null differ
diff --git a/src/current/images/v2.0/admin_ui.png b/src/current/images/v2.0/admin_ui.png
deleted file mode 100644
index 33bce5efcdd..00000000000
Binary files a/src/current/images/v2.0/admin_ui.png and /dev/null differ
diff --git a/src/current/images/v2.0/admin_ui_capacity.png b/src/current/images/v2.0/admin_ui_capacity.png
deleted file mode 100644
index 1e9085851af..00000000000
Binary files a/src/current/images/v2.0/admin_ui_capacity.png and /dev/null differ
diff --git a/src/current/images/v2.0/admin_ui_clock_offset.png b/src/current/images/v2.0/admin_ui_clock_offset.png
deleted file mode 100644
index 2f4b3051282..00000000000
Binary files a/src/current/images/v2.0/admin_ui_clock_offset.png and /dev/null differ
diff --git a/src/current/images/v2.0/admin_ui_cpu_time.png b/src/current/images/v2.0/admin_ui_cpu_time.png
deleted file mode 100644
index 3e81817ca38..00000000000
Binary files a/src/current/images/v2.0/admin_ui_cpu_time.png and /dev/null differ
diff --git a/src/current/images/v2.0/admin_ui_database_grants_view.png b/src/current/images/v2.0/admin_ui_database_grants_view.png
deleted file mode 100644
index ad18cc34ce6..00000000000
Binary files a/src/current/images/v2.0/admin_ui_database_grants_view.png and /dev/null differ
diff --git a/src/current/images/v2.0/admin_ui_database_tables_view.png b/src/current/images/v2.0/admin_ui_database_tables_view.png
deleted file mode 100644
index 27acc8b8efb..00000000000
Binary files a/src/current/images/v2.0/admin_ui_database_tables_view.png and /dev/null differ
diff --git a/src/current/images/v2.0/admin_ui_events.png b/src/current/images/v2.0/admin_ui_events.png
deleted file mode 100644
index 3d3a4738c78..00000000000
Binary files a/src/current/images/v2.0/admin_ui_events.png and /dev/null differ
diff --git a/src/current/images/v2.0/admin_ui_file_descriptors.png b/src/current/images/v2.0/admin_ui_file_descriptors.png
deleted file mode 100644
index 42187c9878d..00000000000
Binary files a/src/current/images/v2.0/admin_ui_file_descriptors.png and /dev/null differ
diff --git a/src/current/images/v2.0/admin_ui_hovering.gif b/src/current/images/v2.0/admin_ui_hovering.gif
deleted file mode 100644
index 1795471051f..00000000000
Binary files a/src/current/images/v2.0/admin_ui_hovering.gif and /dev/null differ
diff --git a/src/current/images/v2.0/admin_ui_jobs_page.png b/src/current/images/v2.0/admin_ui_jobs_page.png
deleted file mode 100644
index a9f07a785a3..00000000000
Binary files a/src/current/images/v2.0/admin_ui_jobs_page.png and /dev/null differ
diff --git a/src/current/images/v2.0/admin_ui_memory_usage.png b/src/current/images/v2.0/admin_ui_memory_usage.png
deleted file mode 100644
index ffc2c515616..00000000000
Binary files a/src/current/images/v2.0/admin_ui_memory_usage.png and /dev/null differ
diff --git a/src/current/images/v2.0/admin_ui_node_count.png b/src/current/images/v2.0/admin_ui_node_count.png
deleted file mode 100644
index d5c103fc868..00000000000
Binary files a/src/current/images/v2.0/admin_ui_node_count.png and /dev/null differ
diff --git a/src/current/images/v2.0/admin_ui_nodes_page.png b/src/current/images/v2.0/admin_ui_nodes_page.png
deleted file mode 100644
index 495ff14eea0..00000000000
Binary files a/src/current/images/v2.0/admin_ui_nodes_page.png and /dev/null differ
diff --git a/src/current/images/v2.0/admin_ui_overview_dashboard.png b/src/current/images/v2.0/admin_ui_overview_dashboard.png
deleted file mode 100644
index f1ef539a293..00000000000
Binary files a/src/current/images/v2.0/admin_ui_overview_dashboard.png and /dev/null differ
diff --git a/src/current/images/v2.0/admin_ui_ranges.png b/src/current/images/v2.0/admin_ui_ranges.png
deleted file mode 100644
index 316186bb4a3..00000000000
Binary files a/src/current/images/v2.0/admin_ui_ranges.png and /dev/null differ
diff --git a/src/current/images/v2.0/admin_ui_replica_quiescence.png b/src/current/images/v2.0/admin_ui_replica_quiescence.png
deleted file mode 100644
index 663dbfb097e..00000000000
Binary files a/src/current/images/v2.0/admin_ui_replica_quiescence.png and /dev/null differ
diff --git a/src/current/images/v2.0/admin_ui_replica_snapshots.png b/src/current/images/v2.0/admin_ui_replica_snapshots.png
deleted file mode 100644
index 56146c7f775..00000000000
Binary files a/src/current/images/v2.0/admin_ui_replica_snapshots.png and /dev/null differ
diff --git a/src/current/images/v2.0/admin_ui_replicas.png b/src/current/images/v2.0/admin_ui_replicas.png
deleted file mode 100644
index 8ee31eed675..00000000000
Binary files a/src/current/images/v2.0/admin_ui_replicas.png and /dev/null differ
diff --git a/src/current/images/v2.0/admin_ui_replicas_migration.png b/src/current/images/v2.0/admin_ui_replicas_migration.png
deleted file mode 100644
index 6e08c5a3a5b..00000000000
Binary files a/src/current/images/v2.0/admin_ui_replicas_migration.png and /dev/null differ
diff --git a/src/current/images/v2.0/admin_ui_replicas_migration2.png b/src/current/images/v2.0/admin_ui_replicas_migration2.png
deleted file mode 100644
index f7183689f20..00000000000
Binary files a/src/current/images/v2.0/admin_ui_replicas_migration2.png and /dev/null differ
diff --git a/src/current/images/v2.0/admin_ui_replicas_migration3.png b/src/current/images/v2.0/admin_ui_replicas_migration3.png
deleted file mode 100644
index b7d9fd39760..00000000000
Binary files a/src/current/images/v2.0/admin_ui_replicas_migration3.png and /dev/null differ
diff --git a/src/current/images/v2.0/admin_ui_replicas_per_node.png b/src/current/images/v2.0/admin_ui_replicas_per_node.png
deleted file mode 100644
index a6a662c6f32..00000000000
Binary files a/src/current/images/v2.0/admin_ui_replicas_per_node.png and /dev/null differ
diff --git a/src/current/images/v2.0/admin_ui_replicas_per_store.png b/src/current/images/v2.0/admin_ui_replicas_per_store.png
deleted file mode 100644
index 2036c392fc8..00000000000
Binary files a/src/current/images/v2.0/admin_ui_replicas_per_store.png and /dev/null differ
diff --git a/src/current/images/v2.0/admin_ui_service_latency_99_percentile.png b/src/current/images/v2.0/admin_ui_service_latency_99_percentile.png
deleted file mode 100644
index 7e14805d21d..00000000000
Binary files a/src/current/images/v2.0/admin_ui_service_latency_99_percentile.png and /dev/null differ
diff --git a/src/current/images/v2.0/admin_ui_sql_byte_traffic.png b/src/current/images/v2.0/admin_ui_sql_byte_traffic.png
deleted file mode 100644
index 9f077b25259..00000000000
Binary files a/src/current/images/v2.0/admin_ui_sql_byte_traffic.png and /dev/null differ
diff --git a/src/current/images/v2.0/admin_ui_sql_connections.png b/src/current/images/v2.0/admin_ui_sql_connections.png
deleted file mode 100644
index 7cda5614e49..00000000000
Binary files a/src/current/images/v2.0/admin_ui_sql_connections.png and /dev/null differ
diff --git a/src/current/images/v2.0/admin_ui_sql_queries.png b/src/current/images/v2.0/admin_ui_sql_queries.png
deleted file mode 100644
index 94ed02d88ae..00000000000
Binary files a/src/current/images/v2.0/admin_ui_sql_queries.png and /dev/null differ
diff --git a/src/current/images/v2.0/admin_ui_summary_panel.png b/src/current/images/v2.0/admin_ui_summary_panel.png
deleted file mode 100644
index 5eaa9b18439..00000000000
Binary files a/src/current/images/v2.0/admin_ui_summary_panel.png and /dev/null differ
diff --git a/src/current/images/v2.0/admin_ui_transactions.png b/src/current/images/v2.0/admin_ui_transactions.png
deleted file mode 100644
index 5131ecc6b2d..00000000000
Binary files a/src/current/images/v2.0/admin_ui_transactions.png and /dev/null differ
diff --git a/src/current/images/v2.0/after-decommission1.png b/src/current/images/v2.0/after-decommission1.png
deleted file mode 100644
index 945ec05f974..00000000000
Binary files a/src/current/images/v2.0/after-decommission1.png and /dev/null differ
diff --git a/src/current/images/v2.0/after-decommission2.png b/src/current/images/v2.0/after-decommission2.png
deleted file mode 100644
index fbb041d2c14..00000000000
Binary files a/src/current/images/v2.0/after-decommission2.png and /dev/null differ
diff --git a/src/current/images/v2.0/automated-operations1.png b/src/current/images/v2.0/automated-operations1.png
deleted file mode 100644
index 64c6e51616c..00000000000
Binary files a/src/current/images/v2.0/automated-operations1.png and /dev/null differ
diff --git a/src/current/images/v2.0/before-decommission1.png b/src/current/images/v2.0/before-decommission1.png
deleted file mode 100644
index 91627545b22..00000000000
Binary files a/src/current/images/v2.0/before-decommission1.png and /dev/null differ
diff --git a/src/current/images/v2.0/before-decommission2.png b/src/current/images/v2.0/before-decommission2.png
deleted file mode 100644
index 063efeb6326..00000000000
Binary files a/src/current/images/v2.0/before-decommission2.png and /dev/null differ
diff --git a/src/current/images/v2.0/cloudformation_admin_ui_live_node_count.png b/src/current/images/v2.0/cloudformation_admin_ui_live_node_count.png
deleted file mode 100644
index fce52a39034..00000000000
Binary files a/src/current/images/v2.0/cloudformation_admin_ui_live_node_count.png and /dev/null differ
diff --git a/src/current/images/v2.0/cloudformation_admin_ui_replicas.png b/src/current/images/v2.0/cloudformation_admin_ui_replicas.png
deleted file mode 100644
index 9327b1004e4..00000000000
Binary files a/src/current/images/v2.0/cloudformation_admin_ui_replicas.png and /dev/null differ
diff --git a/src/current/images/v2.0/cloudformation_admin_ui_sql_queries.png b/src/current/images/v2.0/cloudformation_admin_ui_sql_queries.png
deleted file mode 100644
index 843d94b30f0..00000000000
Binary files a/src/current/images/v2.0/cloudformation_admin_ui_sql_queries.png and /dev/null differ
diff --git a/src/current/images/v2.0/cluster-status-after-decommission1.png b/src/current/images/v2.0/cluster-status-after-decommission1.png
deleted file mode 100644
index 35d96fef0d5..00000000000
Binary files a/src/current/images/v2.0/cluster-status-after-decommission1.png and /dev/null differ
diff --git a/src/current/images/v2.0/cluster-status-after-decommission2.png b/src/current/images/v2.0/cluster-status-after-decommission2.png
deleted file mode 100644
index e420e202aa6..00000000000
Binary files a/src/current/images/v2.0/cluster-status-after-decommission2.png and /dev/null differ
diff --git a/src/current/images/v2.0/decommission-multiple1.png b/src/current/images/v2.0/decommission-multiple1.png
deleted file mode 100644
index 30c90280f7c..00000000000
Binary files a/src/current/images/v2.0/decommission-multiple1.png and /dev/null differ
diff --git a/src/current/images/v2.0/decommission-multiple2.png b/src/current/images/v2.0/decommission-multiple2.png
deleted file mode 100644
index d93abcd4acb..00000000000
Binary files a/src/current/images/v2.0/decommission-multiple2.png and /dev/null differ
diff --git a/src/current/images/v2.0/decommission-multiple3.png b/src/current/images/v2.0/decommission-multiple3.png
deleted file mode 100644
index 3a1d17176de..00000000000
Binary files a/src/current/images/v2.0/decommission-multiple3.png and /dev/null differ
diff --git a/src/current/images/v2.0/decommission-multiple4.png b/src/current/images/v2.0/decommission-multiple4.png
deleted file mode 100644
index 854c4ba50c9..00000000000
Binary files a/src/current/images/v2.0/decommission-multiple4.png and /dev/null differ
diff --git a/src/current/images/v2.0/decommission-multiple5.png b/src/current/images/v2.0/decommission-multiple5.png
deleted file mode 100644
index 3a8621e956b..00000000000
Binary files a/src/current/images/v2.0/decommission-multiple5.png and /dev/null differ
diff --git a/src/current/images/v2.0/decommission-multiple6.png b/src/current/images/v2.0/decommission-multiple6.png
deleted file mode 100644
index 168ba907be1..00000000000
Binary files a/src/current/images/v2.0/decommission-multiple6.png and /dev/null differ
diff --git a/src/current/images/v2.0/decommission-multiple7.png b/src/current/images/v2.0/decommission-multiple7.png
deleted file mode 100644
index a52d034cf9a..00000000000
Binary files a/src/current/images/v2.0/decommission-multiple7.png and /dev/null differ
diff --git a/src/current/images/v2.0/decommission-scenario1.1.png b/src/current/images/v2.0/decommission-scenario1.1.png
deleted file mode 100644
index a66389270de..00000000000
Binary files a/src/current/images/v2.0/decommission-scenario1.1.png and /dev/null differ
diff --git a/src/current/images/v2.0/decommission-scenario1.2.png b/src/current/images/v2.0/decommission-scenario1.2.png
deleted file mode 100644
index 9b33855e101..00000000000
Binary files a/src/current/images/v2.0/decommission-scenario1.2.png and /dev/null differ
diff --git a/src/current/images/v2.0/decommission-scenario1.3.png b/src/current/images/v2.0/decommission-scenario1.3.png
deleted file mode 100644
index 4c1175d956b..00000000000
Binary files a/src/current/images/v2.0/decommission-scenario1.3.png and /dev/null differ
diff --git a/src/current/images/v2.0/decommission-scenario2.1.png b/src/current/images/v2.0/decommission-scenario2.1.png
deleted file mode 100644
index 2fa8790c556..00000000000
Binary files a/src/current/images/v2.0/decommission-scenario2.1.png and /dev/null differ
diff --git a/src/current/images/v2.0/decommission-scenario2.2.png b/src/current/images/v2.0/decommission-scenario2.2.png
deleted file mode 100644
index 391b8e24c0f..00000000000
Binary files a/src/current/images/v2.0/decommission-scenario2.2.png and /dev/null differ
diff --git a/src/current/images/v2.0/decommission-scenario3.1.png b/src/current/images/v2.0/decommission-scenario3.1.png
deleted file mode 100644
index db682df3d78..00000000000
Binary files a/src/current/images/v2.0/decommission-scenario3.1.png and /dev/null differ
diff --git a/src/current/images/v2.0/decommission-scenario3.2.png b/src/current/images/v2.0/decommission-scenario3.2.png
deleted file mode 100644
index 3571bd0b83e..00000000000
Binary files a/src/current/images/v2.0/decommission-scenario3.2.png and /dev/null differ
diff --git a/src/current/images/v2.0/decommission-scenario3.3.png b/src/current/images/v2.0/decommission-scenario3.3.png
deleted file mode 100644
index 45f61d9bd18..00000000000
Binary files a/src/current/images/v2.0/decommission-scenario3.3.png and /dev/null differ
diff --git a/src/current/images/v2.0/follow-workload-1.png b/src/current/images/v2.0/follow-workload-1.png
deleted file mode 100644
index a58fcb2e5ed..00000000000
Binary files a/src/current/images/v2.0/follow-workload-1.png and /dev/null differ
diff --git a/src/current/images/v2.0/follow-workload-2.png b/src/current/images/v2.0/follow-workload-2.png
deleted file mode 100644
index 47d83c5d4d6..00000000000
Binary files a/src/current/images/v2.0/follow-workload-2.png and /dev/null differ
diff --git a/src/current/images/v2.0/icon_info.svg b/src/current/images/v2.0/icon_info.svg
deleted file mode 100644
index 57aac994733..00000000000
--- a/src/current/images/v2.0/icon_info.svg
+++ /dev/null
@@ -1,4 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/images/v2.0/perf_tuning_concepts1.png b/src/current/images/v2.0/perf_tuning_concepts1.png
deleted file mode 100644
index 3a086a41c26..00000000000
Binary files a/src/current/images/v2.0/perf_tuning_concepts1.png and /dev/null differ
diff --git a/src/current/images/v2.0/perf_tuning_concepts2.png b/src/current/images/v2.0/perf_tuning_concepts2.png
deleted file mode 100644
index d67b8f253f8..00000000000
Binary files a/src/current/images/v2.0/perf_tuning_concepts2.png and /dev/null differ
diff --git a/src/current/images/v2.0/perf_tuning_concepts3.png b/src/current/images/v2.0/perf_tuning_concepts3.png
deleted file mode 100644
index 46d666be55d..00000000000
Binary files a/src/current/images/v2.0/perf_tuning_concepts3.png and /dev/null differ
diff --git a/src/current/images/v2.0/perf_tuning_concepts4.png b/src/current/images/v2.0/perf_tuning_concepts4.png
deleted file mode 100644
index b60b19e01bf..00000000000
Binary files a/src/current/images/v2.0/perf_tuning_concepts4.png and /dev/null differ
diff --git a/src/current/images/v2.0/perf_tuning_movr_schema.png b/src/current/images/v2.0/perf_tuning_movr_schema.png
deleted file mode 100644
index 262adc18b75..00000000000
Binary files a/src/current/images/v2.0/perf_tuning_movr_schema.png and /dev/null differ
diff --git a/src/current/images/v2.0/perf_tuning_multi_region_rebalancing.png b/src/current/images/v2.0/perf_tuning_multi_region_rebalancing.png
deleted file mode 100644
index e5ef7d970cc..00000000000
Binary files a/src/current/images/v2.0/perf_tuning_multi_region_rebalancing.png and /dev/null differ
diff --git a/src/current/images/v2.0/perf_tuning_multi_region_rebalancing_after_partitioning.png b/src/current/images/v2.0/perf_tuning_multi_region_rebalancing_after_partitioning.png
deleted file mode 100644
index 4f358ac05af..00000000000
Binary files a/src/current/images/v2.0/perf_tuning_multi_region_rebalancing_after_partitioning.png and /dev/null differ
diff --git a/src/current/images/v2.0/perf_tuning_multi_region_topology.png b/src/current/images/v2.0/perf_tuning_multi_region_topology.png
deleted file mode 100644
index fe64c322ca0..00000000000
Binary files a/src/current/images/v2.0/perf_tuning_multi_region_topology.png and /dev/null differ
diff --git a/src/current/images/v2.0/perf_tuning_single_region_topology.png b/src/current/images/v2.0/perf_tuning_single_region_topology.png
deleted file mode 100644
index 4dfca364929..00000000000
Binary files a/src/current/images/v2.0/perf_tuning_single_region_topology.png and /dev/null differ
diff --git a/src/current/images/v2.0/raw-status-endpoints.png b/src/current/images/v2.0/raw-status-endpoints.png
deleted file mode 100644
index a893911fa87..00000000000
Binary files a/src/current/images/v2.0/raw-status-endpoints.png and /dev/null differ
diff --git a/src/current/images/v2.0/recovery1.png b/src/current/images/v2.0/recovery1.png
deleted file mode 100644
index 31b74749434..00000000000
Binary files a/src/current/images/v2.0/recovery1.png and /dev/null differ
diff --git a/src/current/images/v2.0/recovery2.png b/src/current/images/v2.0/recovery2.png
deleted file mode 100644
index 83bd7dd66b0..00000000000
Binary files a/src/current/images/v2.0/recovery2.png and /dev/null differ
diff --git a/src/current/images/v2.0/recovery3.png b/src/current/images/v2.0/recovery3.png
deleted file mode 100644
index 44ecc0fec4c..00000000000
Binary files a/src/current/images/v2.0/recovery3.png and /dev/null differ
diff --git a/src/current/images/v2.0/remove-dead-node1.png b/src/current/images/v2.0/remove-dead-node1.png
deleted file mode 100644
index 26569078efd..00000000000
Binary files a/src/current/images/v2.0/remove-dead-node1.png and /dev/null differ
diff --git a/src/current/images/v2.0/replication1.png b/src/current/images/v2.0/replication1.png
deleted file mode 100644
index 1ac6c708f00..00000000000
Binary files a/src/current/images/v2.0/replication1.png and /dev/null differ
diff --git a/src/current/images/v2.0/replication2.png b/src/current/images/v2.0/replication2.png
deleted file mode 100644
index 5db0abed2bd..00000000000
Binary files a/src/current/images/v2.0/replication2.png and /dev/null differ
diff --git a/src/current/images/v2.0/scalability1.png b/src/current/images/v2.0/scalability1.png
deleted file mode 100644
index 9bebd74f1a8..00000000000
Binary files a/src/current/images/v2.0/scalability1.png and /dev/null differ
diff --git a/src/current/images/v2.0/scalability2.png b/src/current/images/v2.0/scalability2.png
deleted file mode 100644
index c15995d4c6f..00000000000
Binary files a/src/current/images/v2.0/scalability2.png and /dev/null differ
diff --git a/src/current/images/v2.0/trace.png b/src/current/images/v2.0/trace.png
deleted file mode 100644
index 4f0fb98a753..00000000000
Binary files a/src/current/images/v2.0/trace.png and /dev/null differ
diff --git a/src/current/releases/v2.0.md b/src/current/releases/v2.0.md
index 4a396107099..45f8b532ca5 100644
--- a/src/current/releases/v2.0.md
+++ b/src/current/releases/v2.0.md
@@ -1,5 +1,5 @@
---
-title: What's New in v2.0
+title: What's New in v2.0
toc: true
toc_not_nested: true
summary: Additions and changes in CockroachDB version v2.0 since version v1.1
@@ -8,16 +8,32 @@ docs_area: releases
keywords: gin, gin index, gin indexes, inverted index, inverted indexes, accelerated index, accelerated indexes
---
-{% assign rel = site.data.releases | where_exp: "rel", "rel.major_version == page.major_version" | sort: "release_date" | reverse %}
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
-{% assign vers = site.data.versions | where_exp: "vers", "vers.major_version == page.major_version" | first %}
+This release is no longer supported. For more information, see our [Release support policy]({% link releases/release-support-policy.md %}).
-{% assign today = "today" | date: "%Y-%m-%d" %}
-
-{% include releases/testing-release-notice.md major_version=vers %}
-
-{% include releases/whats-new-intro.md major_version=vers %}
-
-{% for r in rel %}
-{% include releases/{{ page.major_version }}/{{ r.release_name }}.md release=r.release_name release_date=r.release_date %}
-{% endfor %}
+To download the archived documentation for this release, see [Archived Documentation]({% link releases/archived-documentation.md %}).
\ No newline at end of file
diff --git a/src/current/v2.0/404.md b/src/current/v2.0/404.md
deleted file mode 100755
index 13a69ddde5c..00000000000
--- a/src/current/v2.0/404.md
+++ /dev/null
@@ -1,19 +0,0 @@
----
-title: Page Not Found
-description: "Page not found."
-sitemap: false
-search: exclude
-related_pages: none
-toc: false
----
-
-
-{%comment%}
-
-
-{%endcomment%}
\ No newline at end of file
diff --git a/src/current/v2.0/add-column.md b/src/current/v2.0/add-column.md
deleted file mode 100644
index f1125bf9a7f..00000000000
--- a/src/current/v2.0/add-column.md
+++ /dev/null
@@ -1,148 +0,0 @@
----
-title: ADD COLUMN
-summary: Use the ADD COLUMN statement to add columns to tables.
-toc: true
----
-
-The `ADD COLUMN` [statement](sql-statements.html) is part of `ALTER TABLE` and adds columns to tables.
-
-
-## Synopsis
-
-
- {% include {{ page.version.version }}/sql/diagrams/add_column.html %}
-
-
-## Required Privileges
-
-The user must have the `CREATE` [privilege](privileges.html) on the table.
-
-## Parameters
-
-| Parameter | Description |
-|-----------|-------------|
-| `table_name` | The name of the table to which you want to add the column. |
-| `column_name` | The name of the column you want to add. The column name must follow these [identifier rules](keywords-and-identifiers.html#identifiers) and must be unique within the table but can have the same name as indexes or constraints. |
-| `typename` | The [data type](data-types.html) of the new column. |
-| `col_qualification` | An optional list of column definitions, which may include [column-level constraints](constraints.html), [collation](collate.html), or [column family assignments](column-families.html).<br><br>Note that it is not possible to add a column with the [Foreign Key](foreign-key.html) constraint. As a workaround, you can add the column without the constraint, then use [`CREATE INDEX`](create-index.html) to index the column, and then use [`ADD CONSTRAINT`](add-constraint.html) to add the Foreign Key constraint to the column. |
-
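A minimal sketch of the Foreign Key workaround described in the parameters table, assuming a hypothetical `branches` table whose primary key is `id`:

~~~ sql
> ALTER TABLE accounts ADD COLUMN branch_id INT;  -- add the column without the constraint
> CREATE INDEX ON accounts (branch_id);           -- index the new column
> ALTER TABLE accounts ADD CONSTRAINT fk_branch FOREIGN KEY (branch_id) REFERENCES branches (id);
~~~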
-## Viewing Schema Changes
-
-{% include {{ page.version.version }}/misc/schema-change-view-job.md %}
-
-## Examples
-
-### Add a Single Column
-
-{% include copy-clipboard.html %}
-~~~ sql
-> ALTER TABLE accounts ADD COLUMN names STRING;
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SHOW COLUMNS FROM accounts;
-~~~
-
-~~~
-+-----------+-------------------+-------+---------+-----------+
-| Field | Type | Null | Default | Indices |
-+-----------+-------------------+-------+---------+-----------+
-| id | INT | false | NULL | {primary} |
-| balance | DECIMAL | true | NULL | {} |
-| names | STRING | true | NULL | {} |
-+-----------+-------------------+-------+---------+-----------+
-~~~
-
-### Add Multiple Columns
-
-{% include copy-clipboard.html %}
-~~~ sql
-> ALTER TABLE accounts ADD COLUMN location STRING, ADD COLUMN amount DECIMAL;
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SHOW COLUMNS FROM accounts;
-~~~
-
-~~~
-+-----------+-------------------+-------+---------+-----------+
-| Field | Type | Null | Default | Indices |
-+-----------+-------------------+-------+---------+-----------+
-| id | INT | false | NULL | {primary} |
-| balance | DECIMAL | true | NULL | {} |
-| names | STRING | true | NULL | {} |
-| location | STRING | true | NULL | {} |
-| amount | DECIMAL | true | NULL | {} |
-+-----------+-------------------+-------+---------+-----------+
-
-~~~
-
-### Add a Non-Null Column with a Default Value
-
-{% include copy-clipboard.html %}
-~~~ sql
-> ALTER TABLE accounts ADD COLUMN interest DECIMAL NOT NULL DEFAULT (DECIMAL '1.3');
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SHOW COLUMNS FROM accounts;
-~~~
-~~~
-+-----------+-------------------+-------+---------------------------+-----------+
-| Field | Type | Null | Default | Indices |
-+-----------+-------------------+-------+---------------------------+-----------+
-| id | INT | false | NULL | {primary} |
-| balance | DECIMAL | true | NULL | {} |
-| names | STRING | true | NULL | {} |
-| location | STRING | true | NULL | {} |
-| amount | DECIMAL | true | NULL | {} |
-| interest | DECIMAL | false | ('1.3':::STRING::DECIMAL) | {} |
-+-----------+-------------------+-------+---------------------------+-----------+
-~~~
-
-### Add a Non-Null Column with Unique Values
-
-{% include copy-clipboard.html %}
-~~~ sql
-> ALTER TABLE accounts ADD COLUMN cust_number DECIMAL UNIQUE NOT NULL;
-~~~
-
-### Add a Column with Collation
-
-{% include copy-clipboard.html %}
-~~~ sql
-> ALTER TABLE accounts ADD COLUMN more_names STRING COLLATE en;
-~~~
-
-### Add a Column and Assign it to a Column Family
-
-#### Add a Column and Assign it to a New Column Family
-
-{% include copy-clipboard.html %}
-~~~ sql
-> ALTER TABLE accounts ADD COLUMN location1 STRING CREATE FAMILY new_family;
-~~~
-
-#### Add a Column and Assign it to an Existing Column Family
-
-{% include copy-clipboard.html %}
-~~~ sql
-> ALTER TABLE accounts ADD COLUMN location2 STRING FAMILY existing_family;
-~~~
-
-#### Add a Column and Create a New Column Family if Column Family Does Not Exist
-
-{% include copy-clipboard.html %}
-~~~ sql
-> ALTER TABLE accounts ADD COLUMN new_name STRING CREATE IF NOT EXISTS FAMILY f1;
-~~~
-
-
-## See Also
-- [`ALTER TABLE`](alter-table.html)
-- [Column-level Constraints](constraints.html)
-- [Collation](collate.html)
-- [Column Families](column-families.html)
diff --git a/src/current/v2.0/add-constraint.md b/src/current/v2.0/add-constraint.md
deleted file mode 100644
index f12a6e59a47..00000000000
--- a/src/current/v2.0/add-constraint.md
+++ /dev/null
@@ -1,140 +0,0 @@
----
-title: ADD CONSTRAINT
-summary: Use the ADD CONSTRAINT statement to add constraints to columns.
-toc: true
----
-
-The `ADD CONSTRAINT` [statement](sql-statements.html) is part of `ALTER TABLE` and can add the following [constraints](constraints.html) to columns:
-
-- [Check](check.html)
-- [Foreign Keys](foreign-key.html)
-- [Unique](unique.html)
-
-{{site.data.alerts.callout_info}}
-The Primary Key and Not Null constraints can only be applied through CREATE TABLE. The Default constraint is managed through ALTER COLUMN.{{site.data.alerts.end}}
-
-
-## Synopsis
-
-
- {% include {{ page.version.version }}/sql/diagrams/add_constraint.html %}
-
-
-## Required Privileges
-
-The user must have the `CREATE` [privilege](privileges.html) on the table.
-
-## Parameters
-
-| Parameter | Description |
-|-----------|-------------|
-| `table_name` | The name of the table containing the column you want to constrain. |
-| `constraint_name` | The name of the constraint, which must be unique to its table and follow these [identifier rules](keywords-and-identifiers.html#identifiers). |
-| `constraint_elem` | The [Check](check.html), [Foreign Keys](foreign-key.html), or [Unique](unique.html) constraint you want to add. Adding or changing a Default constraint is done through [`ALTER COLUMN`](alter-column.html). Adding or changing the table's Primary Key is not supported through `ALTER TABLE`; it can only be specified during [table creation](create-table.html#create-a-table-primary-key-defined). |
-
-## Viewing Schema Changes
-
-{% include {{ page.version.version }}/misc/schema-change-view-job.md %}
-
-## Examples
-
-### Add the Unique Constraint
-
-Adding the [Unique constraint](unique.html) requires that all of a column's values be distinct from one another (except for *NULL* values).
-
-{% include copy-clipboard.html %}
-~~~ sql
-> ALTER TABLE orders ADD CONSTRAINT id_customer_unique UNIQUE (id, customer);
-~~~
-
-### Add the Check Constraint
-
-Adding the [Check constraint](check.html) requires that all of a column's values evaluate to `TRUE` for a Boolean expression.
-
-{% include copy-clipboard.html %}
-~~~ sql
-> ALTER TABLE orders ADD CONSTRAINT total_0_check CHECK (total > 0);
-~~~
-
-### Add the Foreign Key Constraint with `CASCADE`
-
-Before you can add the [Foreign Key](foreign-key.html) constraint to columns, the columns must already be indexed. If they are not already indexed, use [`CREATE INDEX`](create-index.html) to index them and only then use the `ADD CONSTRAINT` statement to add the Foreign Key constraint to the columns.
-
-For example, let's say you have two simple tables, `orders` and `customers`:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SHOW CREATE TABLE customers;
-~~~
-
-~~~
-+-----------+-------------------------------------------------+
-| Table | CreateTable |
-+-----------+-------------------------------------------------+
-| customers | CREATE TABLE customers ( |
-| | id INT NOT NULL, |
-| | "name" STRING NOT NULL, |
-| | address STRING NULL, |
-| | CONSTRAINT "primary" PRIMARY KEY (id ASC), |
-| | FAMILY "primary" (id, "name", address) |
-| | ) |
-+-----------+-------------------------------------------------+
-(1 row)
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SHOW CREATE TABLE orders;
-~~~
-
-~~~
-+--------+-------------------------------------------------------------------------------------------------------------+
-| Table | CreateTable |
-+--------+-------------------------------------------------------------------------------------------------------------+
-| orders | CREATE TABLE orders ( |
-| | id INT NOT NULL, |
-| | customer_id INT NULL, |
-| | status STRING NOT NULL, |
-| | CONSTRAINT "primary" PRIMARY KEY (id ASC), |
-| | FAMILY "primary" (id, customer_id, status), |
-| | CONSTRAINT check_status CHECK (status IN ('open':::STRING, 'complete':::STRING, 'cancelled':::STRING)) |
-| | ) |
-+--------+-------------------------------------------------------------------------------------------------------------+
-(1 row)
-~~~
-
-To ensure that each value in the `orders.customer_id` column matches a unique value in the `customers.id` column, you want to add the Foreign Key constraint to `orders.customer_id`. So you first create an index on `orders.customer_id`:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE INDEX ON orders (customer_id);
-~~~
-
-Then you add the Foreign Key constraint.
-
-New in v2.0: You can include a [foreign key action](foreign-key.html#foreign-key-actions-new-in-v2-0) to specify what happens when a foreign key is updated or deleted.
-
-In this example, let's use `ON DELETE CASCADE` (i.e., when a referenced row is deleted, all dependent rows are also deleted).
-
-{{site.data.alerts.callout_danger}}
-`CASCADE` does not list objects it drops or updates, so it should be used cautiously.
-{{site.data.alerts.end}}
-
-{% include copy-clipboard.html %}
-~~~ sql
-> ALTER TABLE orders ADD CONSTRAINT customer_fk FOREIGN KEY (customer_id) REFERENCES customers (id) ON DELETE CASCADE;
-~~~
-
-If you had tried to add the constraint before indexing the column, you would have received an error:
-
-~~~
-pq: foreign key requires an existing index on columns ("customer_id")
-~~~
-
-## See Also
-
-- [Constraints](constraints.html)
-- [Foreign Key Constraint](foreign-key.html)
-- [`ALTER COLUMN`](alter-column.html)
-- [`CREATE TABLE`](create-table.html)
-- [`ALTER TABLE`](alter-table.html)
diff --git a/src/current/v2.0/admin-ui-access-and-navigate.md b/src/current/v2.0/admin-ui-access-and-navigate.md
deleted file mode 100644
index ad210ca29d2..00000000000
--- a/src/current/v2.0/admin-ui-access-and-navigate.md
+++ /dev/null
@@ -1,167 +0,0 @@
----
-title: Access and Navigate the CockroachDB Admin UI
-summary: Learn how to access and navigate the Admin UI.
-toc: true
----
-
-
-## Access the Admin UI
-
-You can access the Admin UI from any node in the cluster.
-
-By default, you can access it via HTTP on port `8080` of the hostname or IP address you configured using the `--host` flag while [starting the node](start-a-node.html#general). For example, `http://<node address>:8080`. If you are running a secure cluster, use `https://<node address>:8080`.
-
-You can also set the CockroachDB Admin UI to a custom port using `--http-port` or a custom hostname using `--http-host` when [starting each node](start-a-node.html). For example, if you set both a custom port and hostname, use `http://<http-host>:<http-port>`. For a secure cluster, use `https://<http-host>:<http-port>`.
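-
-For example, a sketch of starting a node with a custom Admin UI hostname and port (the addresses shown are placeholders, and `--insecure` is assumed only to keep the example short):
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach start --insecure --host=node1.example.com --http-host=admin.example.com --http-port=8081
-~~~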
-
-For additional guidance on accessing the Admin UI in the context of cluster deployment, see [Start a Local Cluster](start-a-local-cluster.html) and [Manual Deployment](manual-deployment.html).
-
-## Navigate the Admin UI
-
-The left-hand navigation bar allows you to navigate to the [Cluster Overview page](admin-ui-access-and-navigate.html), [Cluster metrics dashboards](admin-ui-overview.html), [Databases page](admin-ui-databases-page.html), and [Jobs page](admin-ui-jobs-page.html).
-
-The main panel displays changes for each page:
-
-Page | Main Panel Component
------------|------------
-Cluster Overview | [Node List](admin-ui-access-and-navigate.html#node-list). [Enterprise users](enterprise-licensing.html) can enable and switch to the [Node Map](admin-ui-access-and-navigate.html#node-map-enterprise) view.
-Cluster Metrics | [Time Series graphs](admin-ui-access-and-navigate.html#time-series-graphs).
-Databases | Information about the tables and grants in your [databases](admin-ui-databases-page.html).
-Jobs | Information about all currently active schema changes and backup/restore [jobs](admin-ui-jobs-page.html).
-
-### Cluster Overview Panel
-
-
-
-The **Cluster Overview** panel provides the following metrics:
-
-Metric | Description
---------|----
-Capacity Usage | The storage capacity used as a percentage of the total storage capacity allocated across all nodes, along with the current capacity usage.
-Node Status | The number of [live nodes](admin-ui-access-and-navigate.html#live-nodes), suspect nodes, and [dead nodes](admin-ui-access-and-navigate.html#dead-nodes) in the cluster. A node is considered suspect if its liveness status is unavailable or the node is in the process of decommissioning.
-Replication Status | The total number of ranges in the cluster, the number of [under-replicated ranges](admin-ui-replication-dashboard.html#review-of-cockroachdb-terminology), and the number of [unavailable ranges](admin-ui-replication-dashboard.html#review-of-cockroachdb-terminology). A non-zero number of under-replicated or unavailable ranges indicates an unstable cluster.
-
-### Node List
-
-The **Node List** is the default view on the **Overview** page.
-
-
-#### Live Nodes
-Live nodes are nodes that are online and responding. They are marked with a green dot. If a node is removed or dies, the dot turns yellow to indicate that it is not responding. If the node remains unresponsive for a certain amount of time (5 minutes by default), the node turns red and is moved to the [**Dead Nodes**](#dead-nodes) section, indicating that it is no longer expected to come back.
-
-The following details are shown for each live node:
-
-Column | Description
--------|------------
-ID | The ID of the node.
-Address | The address of the node. You can click on the address to view further details about the node.
-Uptime | How long the node has been running.
-Bytes | The used capacity for the node.
-Replicas | The number of replicas on the node.
-Mem Usage | The memory usage for the node.
-Version | The build tag of the CockroachDB version installed on the node.
-Logs | Click **Logs** to see the logs for the node.
-
-#### Dead Nodes
-
-Nodes are considered dead once they have not responded for a certain amount of time (5 minutes by default). At this point, the automated repair process starts, wherein CockroachDB automatically rebalances replicas from the dead node, using the unaffected replicas as sources. See [Stop a Node](stop-a-node.html#how-it-works) for more information.
-
-The following details are shown for each dead node:
-
-Column | Description
--------|------------
-ID | The ID of the node.
-Address | The address of the node. You can click on the address to view further details about the node.
-Down Since | How long the node has been down.
-
-#### Decommissioned Nodes
-
-New in v1.1: Nodes that have been decommissioned for permanent removal from the cluster are listed in the **Decommissioned Nodes** table.
-
-When you decommission a node, CockroachDB lets the node finish in-flight requests, rejects any new requests, and transfers all range replicas and range leases off the node so that it can be safely shut down. See [Remove Nodes](remove-nodes.html) for more information.
-
-### Node Map (Enterprise)
-
-New in v2.0: The **Node Map** is an [enterprise-only](enterprise-licensing.html) feature that gives you a visual representation of the geographical configuration of your cluster.
-
-
-
-The Node Map consists of the following components:
-
-**Region component**
-
-
-
-**Node component**
-
-
-
-For guidance on enabling and using the node map, see [Enable Node Map](enable-node-map.html).
-
-### Time Series Graphs
-
-The **Cluster Metrics** dashboards display time series graphs that are useful for visualizing and monitoring data trends. To access the time series graphs, click **Metrics** on the left-hand navigation bar.
-
-You can hover over each graph to see actual point-in-time values.
-
-
-
-{{site.data.alerts.callout_info}}By default, CockroachDB stores timeseries metrics for the last 30 days, but you can reduce the interval for timeseries storage. Alternatively, if you are exclusively using a third-party tool such as Prometheus for timeseries monitoring, you can disable timeseries storage entirely. For more details, see this FAQ.
-{{site.data.alerts.end}}
-
-#### Change time range
-
-You can change the time range by clicking on the time window.
-
-
-{{site.data.alerts.callout_info}}The Admin UI shows time in UTC, even if you set a different time zone for your cluster. {{site.data.alerts.end}}
-
-#### View metrics for a single node
-
-By default, the time series panel displays the metrics for the entire cluster. To view the metrics for an individual node, select the node from the **Graph** drop-down list.
-
-
-### Summary Panel
-
-The **Cluster Metrics** dashboards display the **Summary** panel of key metrics. To view the **Summary** panel, click **Metrics** on the left-hand navigation bar.
-
-
-
-The **Summary** panel provides the following metrics:
-
-Metric | Description
---------|----
-Total Nodes | The total number of nodes in the cluster. Decommissioned nodes are not included in this count. You can further drill down into the node details by clicking on [**View nodes list**](#node-list).
-Dead Nodes | The number of [dead nodes](admin-ui-access-and-navigate.html#dead-nodes) in the cluster.
-Capacity Used | The storage capacity used as a percentage of total storage capacity allocated across all nodes.
-Unavailable Ranges | The number of unavailable ranges in the cluster. A non-zero number indicates an unstable cluster.
-Queries per second | The number of SQL queries executed per second.
-P50 Latency | The 50th percentile of service latency. Service latency is calculated as the time between when the cluster receives a query and finishes executing the query. This time does not include returning results to the client.
-P99 Latency | The 99th percentile of service latency.
-
-{{site.data.alerts.callout_info}}
-{% include v2.0/misc/available-capacity-metric.md %}
-{{site.data.alerts.end}}
-
-### Events Panel
-
-The **Cluster Metrics** dashboards display the **Events** panel, which lists the 10 most recent events logged for all nodes across the cluster. To view the **Events** panel, click **Metrics** on the left-hand navigation bar. To see the list of all events, click **View all events** in the **Events** panel.
-
-
-
-The following types of events are listed:
-
-- Database created
-- Database dropped
-- Table created
-- Table dropped
-- Table altered
-- Index created
-- Index dropped
-- View created
-- View dropped
-- Schema change reversed
-- Schema change finished
-- Node joined
-- Node decommissioned
-- Node restarted
-- Cluster setting changed
diff --git a/src/current/v2.0/admin-ui-custom-chart-debug-page.md b/src/current/v2.0/admin-ui-custom-chart-debug-page.md
deleted file mode 100644
index a318e466b0c..00000000000
--- a/src/current/v2.0/admin-ui-custom-chart-debug-page.md
+++ /dev/null
@@ -1,61 +0,0 @@
----
-title: Custom Chart Debug Page
-toc: true
----
-
-New in v2.0: The **Custom Chart** debug page in the Admin UI can be used to create a custom chart showing any combination of over [200 available metrics](#available-metrics).
-
-The definition of the customized dashboard is encoded in the URL. To share the dashboard with someone, send them the URL. Like any other URL, it can be bookmarked or kept in a pinned browser tab.
-
-
-## Getting There
-
-To get to the **Custom Chart** debug page, [open the Admin UI](admin-ui-access-and-navigate.html), and either:
-
-- Open http://localhost:8080/#/debug/chart in your browser (replacing `localhost` and `8080` with your node's host and port).
-
-- Open any node's Admin UI debug page at http://localhost:8080/#/debug in your browser (replacing `localhost` and `8080` with your node's host and port), scroll down to the **UI Debugging** section, and click **Custom Time-Series Chart**.
-
-## Query Options
-
-The dropdown menus above the chart are used to set:
-
-- The time span to chart
-- The units to display
-
-
-
-The table below the chart shows which metrics are being queried, and how they'll be combined and displayed.
-
-Options include:
-
-{% include {{page.version.version}}/admin-ui-custom-chart-debug-page-00.html %}
-
-## Examples
-
-### Query User and System CPU Usage
-
-
-
-To compare system vs. userspace CPU usage, select the following values under **Metric Name**:
-
-+ `sys.cpu.sys.percent`
-+ `sys.cpu.user.percent`
-
-The Y-axis label is the **Count**. A count of 1 represents 100% utilization. The **Aggregator** of **Sum** can show the count to be above 1, which would mean CPU utilization is greater than 100%.
-
-Checking **Per Node** displays statistics for each node, which could show whether an individual node's CPU usage was higher or lower than the average.
-
-## Available Metrics
-
-{{site.data.alerts.callout_info}}
-This list is taken directly from the source code and is subject to change. Some of the metrics listed below are already visible in other areas of the [Admin UI](admin-ui-overview.html).
-{{site.data.alerts.end}}
-
-{% include {{page.version.version}}/metric-names.md %}
-
-## See Also
-
-+ [Troubleshooting Overview](troubleshooting-overview.html)
-+ [Support Resources](support-resources.html)
-+ [Raw Status Endpoints](monitoring-and-alerting.html#raw-status-endpoints)
diff --git a/src/current/v2.0/admin-ui-databases-page.md b/src/current/v2.0/admin-ui-databases-page.md
deleted file mode 100644
index b8a5453f835..00000000000
--- a/src/current/v2.0/admin-ui-databases-page.md
+++ /dev/null
@@ -1,31 +0,0 @@
----
-title: Database Page
-toc: true
----
-
-The **Databases** page of the Admin UI provides details of the databases configured, the tables in each database, and the grants assigned to each user. To view these details, [access the Admin UI](admin-ui-access-and-navigate.html#access-the-admin-ui) and then click **Databases** on the left-hand navigation bar.
-
-
-## Tables View
-
-The **Tables** view shows details of the system tables as well as the tables in your databases. To view these details, [access the Admin UI](admin-ui-access-and-navigate.html#access-the-admin-ui) and then select **Databases** from the left-hand navigation bar.
-
-
-
-The following details are displayed for each table:
-
-Metric | Description
---------|----
-Table Name | The name of the table.
-Size | Approximate total disk size of the table across all replicas.
-Ranges | The number of ranges in the table.
-\# of Columns | The number of columns in the table.
-\# of Indices | The number of indices for the table.
-
-## Grants View
-
-The **Grants** view shows the [privileges](privileges.html) granted to users for each database. To view these details, [access the Admin UI](admin-ui-access-and-navigate.html#access-the-admin-ui), select **Databases** from the left-hand navigation bar, and then select **Grants** from the **View** menu.
-
-For more details about grants and privileges, see [Grants](grant.html).
-
-
diff --git a/src/current/v2.0/admin-ui-jobs-page.md b/src/current/v2.0/admin-ui-jobs-page.md
deleted file mode 100644
index 5d4bc43bd5a..00000000000
--- a/src/current/v2.0/admin-ui-jobs-page.md
+++ /dev/null
@@ -1,25 +0,0 @@
----
-title: Jobs Page
-toc: true
----
-
-New in v1.1: The **Jobs** page of the Admin UI provides details about the backup/restore jobs as well as schema changes performed across all nodes in the cluster. To view these details, [access the Admin UI](admin-ui-access-and-navigate.html#access-the-admin-ui) and then click **Jobs** on the left-hand navigation bar.
-
-
-## Job Details
-
-The **Jobs** table displays the user, description, creation time, and status of each backup and restore job, as well as schema changes performed across all nodes in the cluster.
-
-
-
-If a description is truncated, click the ellipsis to view the job's full description.
-
-## Filter Results
-
-You can filter the results based on the status of the jobs or the type of jobs (backups, restores, or schema changes). You can also choose to view either the latest 50 jobs or all the jobs across all nodes.
-
-Filter By | Description
-----------|------------
-Job Status | From the **Status** menu, select the required status filter.
-Job Type | From the **Type** menu, select **Backups**, **Restores**, **Imports**, or **Schema Changes**.
-Jobs Shown | From the **Show** menu, select **First 50** or **All**.
diff --git a/src/current/v2.0/admin-ui-overview-dashboard.md b/src/current/v2.0/admin-ui-overview-dashboard.md
deleted file mode 100644
index 02262d1683a..00000000000
--- a/src/current/v2.0/admin-ui-overview-dashboard.md
+++ /dev/null
@@ -1,64 +0,0 @@
----
-title: Overview Dashboard
-summary: The Overview dashboard lets you monitor important SQL performance, replication, and storage metrics.
-toc: true
----
-
-The **Overview** dashboard lets you monitor important SQL performance, replication, and storage metrics. To view this dashboard, [access the Admin UI](admin-ui-access-and-navigate.html#access-the-admin-ui) and click **Metrics** on the left-hand navigation bar. The **Overview** dashboard is displayed by default.
-
-
-The **Overview** dashboard displays the following time series graphs:
-
-## SQL Queries
-
-
-
-- In the node view, the SQL Queries graph shows the current moving average, over the last 10 seconds, of the number of `SELECT`/`INSERT`/`UPDATE`/`DELETE` queries per second issued by SQL clients on the node.
-
-- In the cluster view, the graph shows the sum of the per-node averages, that is, an aggregate estimation of the current query load over the cluster, assuming the last 10 seconds of activity per node are representative of this load.
-
-## Service Latency: SQL, 99th percentile
-
-
-
-Service latency is calculated as the time between when the cluster receives a query and finishes executing the query. This time does not include returning results to the client.
-
-- In the node view, the graph shows the 99th [percentile](https://en.wikipedia.org/wiki/Percentile#The_normal_distribution_and_percentiles) of service latency for the node.
-
-- In the cluster view, the graph shows the 99th [percentile](https://en.wikipedia.org/wiki/Percentile#The_normal_distribution_and_percentiles) of service latency across all nodes in the cluster.
-
-## Replicas per Node
-
-
-
-Ranges are subsets of your data, which are replicated to ensure survivability. Ranges are replicated to a configurable number of CockroachDB nodes.
-
-- In the node view, the graph shows the number of range replicas on the selected node.
-
-- In the cluster view, the graph shows the number of range replicas on each node in the cluster.
-
-For details about how to control the number and location of replicas, see [Configure Replication Zones](configure-replication-zones.html).
-
-{{site.data.alerts.callout_info}}The timeseries data used to power the graphs in the Admin UI is stored within the cluster and accumulates for 30 days before it starts getting truncated. As a result, for the first 30 days or so of a cluster's life, you will see a steady increase in disk usage and the number of ranges even if you aren't writing data to the cluster yourself. For more details, see this FAQ.{{site.data.alerts.end}}
-
-## Capacity
-
-
-
-You can monitor the **Capacity** graph to determine when additional storage is needed.
-
-- In the node view, the graph shows the maximum allocated capacity, available storage capacity, and capacity used by CockroachDB for the selected node.
-
-- In the cluster view, the graph shows the maximum allocated capacity, available storage capacity, and capacity used by CockroachDB across all nodes in the cluster.
-
-On hovering over the graph, the values for the following metrics are displayed:
-
-Metric | Description
---------|----
-Capacity | The maximum storage capacity allocated to CockroachDB. You can configure the maximum allocated storage capacity for CockroachDB using the `--store` flag. For more information, see [Start a Node](start-a-node.html#store).
-Available | The free storage capacity available to CockroachDB.
-Used | Disk space used by the data in the CockroachDB store. Note that this value is less than (Capacity - Available) because the Capacity and Available metrics consider the entire disk and all applications on the disk, including CockroachDB, whereas the Used metric tracks only the store's disk usage.
-
-{{site.data.alerts.callout_info}}
-{% include v2.0/misc/available-capacity-metric.md %}
-{{site.data.alerts.end}}
diff --git a/src/current/v2.0/admin-ui-overview.md b/src/current/v2.0/admin-ui-overview.md
deleted file mode 100644
index 00779fbbc6d..00000000000
--- a/src/current/v2.0/admin-ui-overview.md
+++ /dev/null
@@ -1,27 +0,0 @@
----
-title: Admin UI Overview
-summary: Use the Admin UI to monitor and optimize cluster performance.
-toc: false
-key: explore-the-admin-ui.html
----
-
-The CockroachDB Admin UI provides details about your cluster and database configuration, and helps you optimize cluster performance by monitoring the following areas:
-
-Area | Description
---------|----
-[Node Map](enable-node-map.html) | View and monitor the metrics and geographical configuration of your cluster.
-[Cluster Health](admin-ui-access-and-navigate.html#summary-panel) | View essential metrics about the cluster's health, such as the number of live, dead, and suspect nodes, the number of unavailable ranges, and the queries per second and service latency across the cluster.
-[Overview Metrics](admin-ui-overview-dashboard.html) | View important SQL performance, replication, and storage metrics.
-[Runtime Metrics](admin-ui-runtime-dashboard.html) | View metrics about node count, CPU time, and memory usage.
-[SQL Performance](admin-ui-sql-dashboard.html) | View metrics about SQL connections, byte traffic, queries, transactions, and service latency.
-[Storage Utilization](admin-ui-storage-dashboard.html) | View metrics about storage capacity and file descriptors.
-[Replication Details](admin-ui-replication-dashboard.html) | View metrics about how data is replicated across the cluster, such as range status, replicas per store, and replica quiescence.
-[Nodes Details](admin-ui-access-and-navigate.html#summary-panel) | View details of live, dead, and decommissioned nodes.
-[Events](admin-ui-access-and-navigate.html#events-panel) | View a list of recent cluster events.
-[Database Details](admin-ui-databases-page.html) | View details about the system and user databases in the cluster.
-[Jobs Details](admin-ui-jobs-page.html) | View details of the jobs running in the cluster.
-[Custom Chart Debug Page](admin-ui-custom-chart-debug-page.html) | Create a custom dashboard choosing from over 200 available metrics.
-
-The Admin UI also provides details about the way data is **Distributed**, the state of specific **Queues**, and metrics for **Slow Queries**, but these details are largely internal and intended for use by CockroachDB developers.
-
-{{site.data.alerts.callout_info}}By default, the Admin UI shares anonymous usage details with Cockroach Labs. For information about the details shared and how to opt-out of reporting, see Diagnostics Reporting.{{site.data.alerts.end}}
diff --git a/src/current/v2.0/admin-ui-replication-dashboard.md b/src/current/v2.0/admin-ui-replication-dashboard.md
deleted file mode 100644
index 191a031fe8b..00000000000
--- a/src/current/v2.0/admin-ui-replication-dashboard.md
+++ /dev/null
@@ -1,91 +0,0 @@
----
-title: Replication Dashboard
-summary: The Replication dashboard lets you monitor the replication metrics for your cluster.
-toc: true
----
-
-The **Replication** dashboard in the CockroachDB Admin UI enables you to monitor the replication metrics for your cluster. To view this dashboard, [access the Admin UI](admin-ui-access-and-navigate.html#access-the-admin-ui), click **Metrics** on the left-hand navigation bar, and then select **Dashboard** > **Replication**.
-
-
-## Review of CockroachDB terminology
-
-- **Range**: CockroachDB stores all user data and almost all system data in a giant sorted map of key-value pairs. This keyspace is divided into "ranges", contiguous chunks of the keyspace, so that every key can always be found in a single range.
-- **Range Replica:** CockroachDB replicates each range (3 times by default) and stores each replica on a different node.
-- **Range Lease:** For each range, one of the replicas holds the "range lease". This replica, referred to as the "leaseholder", is the one that receives and coordinates all read and write requests for the range.
-- **Under-replicated Ranges:** When a cluster is first initialized, the few default starting ranges will only have a single replica, but as soon as other nodes are available, they will replicate to them until they've reached their desired replication factor, the default being 3. If a range does not have enough replicas, the range is said to be "under-replicated".
-- **Unavailable Ranges:** If a majority of a range's replicas are on nodes that are unavailable, then the entire range is unavailable and will be unable to process queries.
-
-For more details, see [Scalable SQL Made Easy: How CockroachDB Automates Operations](https://www.cockroachlabs.com/blog/automated-rebalance-and-repair/).
-
-## Replication Dashboard
-
-The **Replication** dashboard displays the following time series graphs:
-
-### Ranges
-
-
-
-The **Ranges** graph shows you various details about the status of ranges.
-
-- In the node view, the graph shows details about ranges on the node.
-
-- In the cluster view, the graph shows details about ranges across all nodes in the cluster.
-
-On hovering over the graph, the values for the following metrics are displayed:
-
-Metric | Description
---------|----
-Ranges | The number of ranges.
-Leaders | The number of ranges with leaders. If the number does not match the number of ranges for a long time, troubleshoot your cluster.
-Lease Holders | The number of ranges that have leases.
-Leaders w/o Leases | The number of Raft leaders without leases. If the number is non-zero for a long time, troubleshoot your cluster.
-Unavailable | The number of unavailable ranges. If the number is non-zero for a long time, troubleshoot your cluster.
-Under-replicated | The number of under-replicated ranges.
-
-### Replicas Per Store
-
-
-
-- In the node view, the graph shows the number of range replicas on the store.
-
-- In the cluster view, the graph shows the number of range replicas on each store.
-
-You can [Configure replication zones](configure-replication-zones.html) to set the number and location of replicas. You can monitor the configuration changes using the Admin UI, as described in [Fault tolerance and recovery](demo-fault-tolerance-and-recovery.html).
-
-### Replica Quiescence
-
-
-
-- In the node view, the graph shows the number of replicas on the node.
-
-- In the cluster view, the graph shows the number of replicas across all nodes.
-
-On hovering over the graph, the values for the following metrics are displayed:
-
-Metric | Description
---------|----
-Replicas | The number of replicas.
-Quiescent | The number of replicas that haven't been accessed for a while.
-
-### Snapshots
-
-
-
-Usually the nodes in a [Raft group](architecture/replication-layer.html#raft) stay synchronized by following along the log message by message. However, if a node is far enough behind the log (e.g., if it was offline or is a new node getting up to speed), rather than send all the individual messages that changed the range, the cluster can send it a snapshot of the range and it can start following along from there. Commonly this is done preemptively, when the cluster can predict that a node will need to catch up, but occasionally the Raft protocol itself will request the snapshot.
-
-Metric | Description
--------|------------
-Generated | The number of snapshots created per second.
-Applied (Raft-initiated) | The number of snapshots applied to nodes per second that were initiated within Raft.
-Applied (Preemptive) | The number of snapshots applied to nodes per second that were anticipated ahead of time (e.g., because a node was about to be added to a Raft group).
-Reserved | The number of slots reserved per second for incoming snapshots that will be sent to a node.
-
-### Other Graphs
-
-The **Replication** dashboard shows other time series graphs that are important for CockroachDB developers:
-
-- Leaseholders per Store
-- Logical Bytes per Store
-- Range Operations
-
-For monitoring CockroachDB, it is sufficient to use the [**Ranges**](#ranges), [**Replicas per Store**](#replicas-per-store), and [**Replica Quiescence**](#replica-quiescence) graphs.
diff --git a/src/current/v2.0/admin-ui-runtime-dashboard.md b/src/current/v2.0/admin-ui-runtime-dashboard.md
deleted file mode 100644
index d93e418894f..00000000000
--- a/src/current/v2.0/admin-ui-runtime-dashboard.md
+++ /dev/null
@@ -1,69 +0,0 @@
----
-title: Runtime Dashboard
-toc: true
----
-
-The **Runtime** dashboard in the CockroachDB Admin UI lets you monitor runtime metrics for your cluster, such as node count, memory usage, and CPU time. To view this dashboard, [access the Admin UI](admin-ui-access-and-navigate.html#access-the-admin-ui), click **Metrics** on the left-hand navigation bar, and then select **Dashboard** > **Runtime**.
-
-
-The **Runtime** dashboard displays the following time series graphs:
-
-## Live Node Count
-
-
-
-In the node view as well as the cluster view, the graph shows the number of live nodes in the cluster.
-
-A dip in the graph indicates decommissioned nodes, dead nodes, or nodes that are not responding. To troubleshoot the dip in the graph, refer to the [Summary panel](admin-ui-access-and-navigate.html#summary-panel).
-
-## Memory Usage
-
-
-
-- In the node view, the graph shows the memory in use for the selected node.
-
-- In the cluster view, the graph shows the memory in use across all nodes in the cluster.
-
-On hovering over the graph, the values for the following metrics are displayed:
-
-Metric | Description
---------|----
-RSS | Total memory in use by CockroachDB.
-Go Allocated | Memory allocated by the Go layer.
-Go Total | Total memory managed by the Go layer.
-CGo Allocated | Memory allocated by the C layer.
-CGo Total | Total memory managed by the C layer.
-
-{{site.data.alerts.callout_info}}If Go Total or CGo Total fluctuates or grows steadily over time, contact us.{{site.data.alerts.end}}
-
-## CPU Time
-
-
-
-
-- In the node view, the graph shows the [CPU time](https://en.wikipedia.org/wiki/CPU_time) used by CockroachDB user and system-level operations for the selected node.
-- In the cluster view, the graph shows the [CPU time](https://en.wikipedia.org/wiki/CPU_time) used by CockroachDB user and system-level operations across all nodes in the cluster.
-
-On hovering over the CPU Time graph, the values for the following metrics are displayed:
-
-Metric | Description
---------|----
-User CPU Time | Total CPU seconds per second used by the CockroachDB process across all nodes.
-Sys CPU Time | Total CPU seconds per second used by the system calls made by CockroachDB across all nodes.
-
-## Clock Offset
-
-
-
-- In the node view, the graph shows the mean clock offset of the node against the rest of the cluster.
-- In the cluster view, the graph shows the mean clock offset of each node against the rest of the cluster.
-
-## Other Graphs
-
-The **Runtime** dashboard shows other time series graphs that are important for CockroachDB developers:
-
-- Goroutine Count
-- GC Runs
-- GC Pause Time
-
-For monitoring CockroachDB, it is sufficient to use the [**Live Node Count**](#live-node-count), [**Memory Usage**](#memory-usage), [**CPU Time**](#cpu-time), and [**Clock Offset**](#clock-offset) graphs.
diff --git a/src/current/v2.0/admin-ui-sql-dashboard.md b/src/current/v2.0/admin-ui-sql-dashboard.md
deleted file mode 100644
index 860b6efde12..00000000000
--- a/src/current/v2.0/admin-ui-sql-dashboard.md
+++ /dev/null
@@ -1,68 +0,0 @@
----
-title: SQL Dashboard
-summary: The SQL dashboard lets you monitor the performance of your SQL queries.
-toc: true
----
-
-The **SQL** dashboard in the CockroachDB Admin UI lets you monitor the performance of your SQL queries. To view this dashboard, [access the Admin UI](admin-ui-access-and-navigate.html#access-the-admin-ui), click **Metrics** on the left-hand navigation bar, and then select **Dashboard** > **SQL**.
-
-
-The **SQL** dashboard displays the following time series graphs:
-
-## SQL Connections
-
-
-
-- In the node view, the graph shows the number of connections currently open between the client and the selected node.
-
-- In the cluster view, the graph shows the total number of SQL client connections to all nodes combined.
-
-## SQL Byte Traffic
-
-
-
-The **SQL Byte Traffic** graph helps you correlate SQL query count to byte traffic, especially in bulk data inserts or analytic queries that return data in bulk.
-
-- In the node view, the graph shows the current byte throughput (bytes/second) between all the currently connected SQL clients and the node.
-
-- In the cluster view, the graph shows the aggregate client throughput across all nodes.
-
-## SQL Queries
-
-
-
-- In the node view, the graph shows the current moving average, over the last 10 seconds, of the number of `SELECT`/`INSERT`/`UPDATE`/`DELETE` queries per second issued by SQL clients on the node.
-
-- In the cluster view, the graph shows the sum of the per-node averages, that is, an aggregate estimation of the current query load over the cluster, assuming the last 10 seconds of activity per node are representative of this load.
-
-## Transactions
-
-
-
-- In the node view, the graph separately shows the current moving average, over the last 10 seconds, of the number of opened, committed, aborted, and rolled back transactions per second issued by SQL clients on the node.
-
-- In the cluster view, the graph shows the sum of the per-node averages, that is, an aggregate estimation of the current transactions load over the cluster, assuming the last 10 seconds of activity per node are representative of this load.
-
-If the graph shows excessive aborts or rollbacks, it might indicate issues with the SQL queries. In that case, re-examine queries to lower contention.
-
-## Service Latency
-
-
-
-Service latency is calculated as the time between when the cluster receives a query and finishes executing the query. This time does not include returning results to the client.
-
-- In the node view, the graph displays the 99th [percentile](https://en.wikipedia.org/wiki/Percentile#The_normal_distribution_and_percentiles) of service latency for the selected node.
-
-- In the cluster view, the graph displays the 99th [percentile](https://en.wikipedia.org/wiki/Percentile#The_normal_distribution_and_percentiles) of service latency for each node in the cluster.
-
-## Other Graphs
-
-The **SQL** dashboard shows other time series graphs that are important for CockroachDB developers:
-
-- Execution Latency
-- Active Distributed SQL Queries
-- Active Flows for Distributed SQL Queries
-- Service Latency: DistSQL
-- Schema Changes
-
-For monitoring CockroachDB, it is sufficient to use the [**SQL Connections**](#sql-connections), [**SQL Byte Traffic**](#sql-byte-traffic), [**SQL Queries**](#sql-queries), [**Service Latency**](#service-latency), and [**Transactions**](#transactions) graphs.
diff --git a/src/current/v2.0/admin-ui-storage-dashboard.md b/src/current/v2.0/admin-ui-storage-dashboard.md
deleted file mode 100644
index 0d8e2bbd282..00000000000
--- a/src/current/v2.0/admin-ui-storage-dashboard.md
+++ /dev/null
@@ -1,60 +0,0 @@
----
-title: Storage Dashboard
-summary: The Storage dashboard lets you monitor the storage utilization for your cluster.
-toc: true
----
-
-The **Storage** dashboard in the CockroachDB Admin UI lets you monitor the storage utilization for your cluster. To view this dashboard, [access the Admin UI](admin-ui-access-and-navigate.html#access-the-admin-ui), click **Metrics** on the left-hand navigation bar, and then select **Dashboard** > **Storage**.
-
-
-The **Storage** dashboard displays the following time series graphs:
-
-## Capacity
-
-
-
-You can monitor the **Capacity** graph to determine when additional storage is needed.
-
-- In the node view, the graph shows the maximum allocated capacity, available storage capacity, and capacity used by CockroachDB for the selected node.
-
-- In the cluster view, the graph shows the maximum allocated capacity, available storage capacity, and capacity used by CockroachDB across all nodes in the cluster.
-
-On hovering over the graph, the values for the following metrics are displayed:
-
-Metric | Description
---------|----
-Capacity | The maximum storage capacity allocated to CockroachDB. You can configure the maximum allocated storage capacity for CockroachDB using the `--store` flag. For more information, see [Start a Node](start-a-node.html#store).
-Available | The free storage capacity available to CockroachDB.
-Used | Disk space used by the data in the CockroachDB store. Note that this value is less than (Capacity - Available) because the Capacity and Available metrics consider the entire disk and all applications on the disk, including CockroachDB, whereas the Used metric tracks only the store's disk usage.
-
-{{site.data.alerts.callout_info}}
-{% include v2.0/misc/available-capacity-metric.md %}
-{{site.data.alerts.end}}
-
-## File Descriptors
-
-
-
-- In the node view, the graph shows the number of open file descriptors for that node, compared with the file descriptor limit.
-
-- In the cluster view, the graph shows the number of open file descriptors across all nodes, compared with the file descriptor limit.
-
-If the Open count is almost equal to the Limit count, increase [File Descriptors](recommended-production-settings.html#file-descriptors-limit).
-
-{{site.data.alerts.callout_info}}If you are running multiple nodes on a single machine (not recommended), the file descriptors open on the machine are counted as open on each node. Thus, the Open count displayed in the Admin UI is the machine's actual number of open file descriptors multiplied by the number of nodes, compared with the file descriptor limit.{{site.data.alerts.end}}
-
-For Windows systems, you can ignore the File Descriptors graph because the concept of file descriptors is not applicable to Windows.
-
-## Other Graphs
-
-The **Storage** dashboard shows other time series graphs that are important for CockroachDB developers:
-
-- Live Bytes
-- Log Commit Latency
-- Command Commit Latency
-- RocksDB Read Amplification
-- RocksDB SSTables
-- Time Series Writes
-- Time Series Bytes Written
-
-For monitoring CockroachDB, it is sufficient to use the [**Capacity**](#capacity) and [**File Descriptors**](#file-descriptors) graphs.
diff --git a/src/current/v2.0/alter-column.md b/src/current/v2.0/alter-column.md
deleted file mode 100644
index 1f3d5053f83..00000000000
--- a/src/current/v2.0/alter-column.md
+++ /dev/null
@@ -1,67 +0,0 @@
----
-title: ALTER COLUMN
-summary: Use the ALTER COLUMN statement to set, change, or drop a column's Default constraint or to drop the Not Null constraint.
-toc: true
----
-
-The `ALTER COLUMN` [statement](sql-statements.html) is part of `ALTER TABLE` and sets, changes, or drops a column's [Default constraint](default-value.html) or drops the [Not Null constraint](not-null.html).
-
-{{site.data.alerts.callout_info}}To manage other constraints, see ADD CONSTRAINT and DROP CONSTRAINT.{{site.data.alerts.end}}
-
-
-## Synopsis
-
-
-{% include {{ page.version.version }}/sql/diagrams/alter_column.html %}
-
-
-## Required Privileges
-
-The user must have the `CREATE` [privilege](privileges.html) on the table.
-
-## Parameters
-
-| Parameter | Description |
-|-----------|-------------|
-| `table_name` | The name of the table with the column you want to modify. |
-| `column_name` | The name of the column you want to modify. |
-| `a_expr` | The new [Default Value](default-value.html) you want to use. |
-
-## Viewing Schema Changes
-
-{% include {{ page.version.version }}/misc/schema-change-view-job.md %}
-
-## Examples
-
-### Set or Change a Default Value
-
-Setting the [Default Value constraint](default-value.html) inserts the specified value when data is written to the table without explicitly defining a value for the column. If the column already has a Default Value set, you can use this statement to change it.
-
-The example below inserts the Boolean value `true` whenever you insert data into the `subscriptions` table without defining a value for the `newsletter` column.
-
-{% include copy-clipboard.html %}
-~~~ sql
-> ALTER TABLE subscriptions ALTER COLUMN newsletter SET DEFAULT true;
-~~~
-
-### Remove Default Constraint
-
-If the column has a defined [Default Value](default-value.html), you can remove the constraint, which means the column will no longer insert a value by default if one is not explicitly defined for the column.
-
-{% include copy-clipboard.html %}
-~~~ sql
-> ALTER TABLE subscriptions ALTER COLUMN newsletter DROP DEFAULT;
-~~~
-
-### Remove Not Null Constraint
-
-If the column has the [Not Null constraint](not-null.html) applied to it, you can remove the constraint, which means the column becomes optional and can have *NULL* values written into it.
-
-{% include copy-clipboard.html %}
-~~~ sql
-> ALTER TABLE subscriptions ALTER COLUMN newsletter DROP NOT NULL;
-~~~
-
-## See Also
-
-- [Constraints](constraints.html)
-- [`ADD CONSTRAINT`](add-constraint.html)
-- [`DROP CONSTRAINT`](drop-constraint.html)
-- [`ALTER TABLE`](alter-table.html)
diff --git a/src/current/v2.0/alter-database.md b/src/current/v2.0/alter-database.md
deleted file mode 100644
index 31972f31829..00000000000
--- a/src/current/v2.0/alter-database.md
+++ /dev/null
@@ -1,15 +0,0 @@
----
-title: ALTER DATABASE
-summary: Use the ALTER DATABASE statement to change an existing database.
-toc: false
----
-
-The `ALTER DATABASE` [statement](sql-statements.html) applies a schema change to a database.
-
-{{site.data.alerts.callout_info}}To understand how CockroachDB changes schema elements without requiring table locking or other user-visible downtime, see Online Schema Changes in CockroachDB.{{site.data.alerts.end}}
-
-For information on using `ALTER DATABASE`, see the documents for its relevant subcommands.
-
-Subcommand | Description
------------|------------
-[`RENAME`](rename-database.html) | Change the name of a database.
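-
-For example, a sketch of the `RENAME` subcommand (the `bank` database name is hypothetical):
-
-{% include copy-clipboard.html %}
-~~~ sql
-> ALTER DATABASE bank RENAME TO banking;
-~~~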
diff --git a/src/current/v2.0/alter-index.md b/src/current/v2.0/alter-index.md
deleted file mode 100644
index 46080a55a17..00000000000
--- a/src/current/v2.0/alter-index.md
+++ /dev/null
@@ -1,16 +0,0 @@
----
-title: ALTER INDEX
-summary: Use the ALTER INDEX statement to change an existing index.
-toc: false
----
-
-The `ALTER INDEX` [statement](sql-statements.html) applies a schema change to an index.
-
-{{site.data.alerts.callout_info}}To understand how CockroachDB changes schema elements without requiring table locking or other user-visible downtime, see Online Schema Changes in CockroachDB.{{site.data.alerts.end}}
-
-For information on using `ALTER INDEX`, see the documents for its relevant subcommands.
-
-Subcommand | Description
------------|------------
-[`RENAME`](rename-index.html) | Change the name of an index.
-[`SPLIT AT`](split-at.html) | Force a key-value layer range split at the specified row in the index.
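-
-For example, a sketch of the `RENAME` subcommand (the table and index names are hypothetical):
-
-{% include copy-clipboard.html %}
-~~~ sql
-> ALTER INDEX accounts@name_idx RENAME TO name_index;
-~~~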
diff --git a/src/current/v2.0/alter-sequence.md b/src/current/v2.0/alter-sequence.md
deleted file mode 100644
index e2da8e5ce44..00000000000
--- a/src/current/v2.0/alter-sequence.md
+++ /dev/null
@@ -1,115 +0,0 @@
----
-title: ALTER SEQUENCE
-summary: Use the ALTER SEQUENCE statement to change the name, increment values, and other settings of a sequence.
-toc: true
----
-
-New in v2.0: The `ALTER SEQUENCE` [statement](sql-statements.html) [changes the name](rename-sequence.html), increment values, and other settings of a sequence.
-
-{{site.data.alerts.callout_info}}To understand how CockroachDB changes schema elements without requiring table locking or other user-visible downtime, see Online Schema Changes in CockroachDB.{{site.data.alerts.end}}
-
-
-## Required Privileges
-
-The user must have the `CREATE` [privilege](privileges.html) on the parent database.
-
-## Synopsis
-
-{% include {{ page.version.version }}/sql/diagrams/alter_sequence_options.html %}
-
-## Parameters
-
-
-
- Parameter | Description
------------|------------
-`IF EXISTS` | Modify the sequence only if it exists; if it does not exist, do not return an error.
-`sequence_name` | The name of the sequence you want to modify.
-`INCREMENT` | The new value by which the sequence is incremented. A negative number creates a descending sequence. A positive number creates an ascending sequence.
-`MINVALUE` | The new minimum value of the sequence. Default: `1`.
-`MAXVALUE` | The new maximum value of the sequence. Default: `9223372036854775807`.
-`START` | The value the sequence starts at if you `RESTART` or if the sequence hits the `MAXVALUE` and `CYCLE` is set. (`RESTART` and `CYCLE` are not implemented yet.)
-`CYCLE` | The sequence will wrap around when the sequence value hits the maximum or minimum value. If `NO CYCLE` is set, the sequence will not wrap.
-
-## Examples
-
-### Change the Increment Value of a Sequence
-
-In this example, we're going to change the increment value of a sequence from its current state (i.e., `1`) to `2`.
-
-{% include copy-clipboard.html %}
-~~~ sql
-> ALTER SEQUENCE customer_seq INCREMENT 2;
-~~~
-
-Next, we'll add another record to the table and check that the new record adheres to the new sequence.
-
-{% include copy-clipboard.html %}
-~~~ sql
-> INSERT INTO customer_list (customer, address) VALUES ('Marie', '333 Ocean Ave');
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM customer_list;
-~~~
-
-~~~
-+----+----------+--------------------+
-| id | customer | address |
-+----+----------+--------------------+
-| 1 | Lauren | 123 Main Street |
-| 2 | Jesse | 456 Broad Ave |
-| 3 | Amruta | 9876 Green Parkway |
-| 5 | Marie | 333 Ocean Ave |
-+----+----------+--------------------+
-~~~
-
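-### Change the Maximum Value of a Sequence
-
-The other settings listed under [Parameters](#parameters) can be changed in the same way. For example, this sketch (the cap of `10000` is arbitrary) lowers the sequence's maximum value:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> ALTER SEQUENCE customer_seq MAXVALUE 10000;
-~~~
-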
-### Set the Next Value of a Sequence
-
-In this example, we're going to change the next value of the example sequence (`customer_seq`). Currently, the next value will be `7` (i.e., `5` + `INCREMENT 2`). We will change the next value to `20`.
-
-{{site.data.alerts.callout_info}}You cannot set a value outside the MAXVALUE or MINVALUE of the sequence. {{site.data.alerts.end}}
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT setval('customer_seq', 20, false);
-~~~
-
-~~~
-+--------+
-| setval |
-+--------+
-| 20 |
-+--------+
-~~~
-
-{{site.data.alerts.callout_info}}The setval('seq_name', value, is_called) function in CockroachDB SQL mimics the setval() function in PostgreSQL, but it does not store the is_called flag. Instead, it sets the value to val - increment for false or val for true. {{site.data.alerts.end}}
-
-Let's add another record to the table to check that the new record adheres to the new next value.
-
-{% include copy-clipboard.html %}
-~~~ sql
-> INSERT INTO customer_list (customer, address) VALUES ('Lola', '333 Schermerhorn');
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM customer_list;
-~~~
-
-~~~
-+----+----------+--------------------+
-| id | customer | address |
-+----+----------+--------------------+
-| 1 | Lauren | 123 Main Street |
-| 2 | Jesse | 456 Broad Ave |
-| 3 | Amruta | 9876 Green Parkway |
-| 5 | Marie | 333 Ocean Ave |
-| 20 | Lola | 333 Schermerhorn |
-+----+----------+--------------------+
-~~~
-
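-Following the semantics described in the note above, here is a sketch of the `is_called = true` variant (not run against a cluster): because the sequence now increments by `2`, storing `30` with `true` means the next value generated would be `32`.
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT setval('customer_seq', 30, true);
-~~~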
-
-## See Also
-
-- [`RENAME SEQUENCE`](rename-sequence.html)
-- [`CREATE SEQUENCE`](create-sequence.html)
-- [`DROP SEQUENCE`](drop-sequence.html)
-- [Functions and Operators](functions-and-operators.html)
-- [Other SQL Statements](sql-statements.html)
diff --git a/src/current/v2.0/alter-table.md b/src/current/v2.0/alter-table.md
deleted file mode 100644
index 9fd5ca94786..00000000000
--- a/src/current/v2.0/alter-table.md
+++ /dev/null
@@ -1,32 +0,0 @@
----
-title: ALTER TABLE
-summary: Use the ALTER TABLE statement to change the schema of a table.
-toc: true
----
-
-The `ALTER TABLE` [statement](sql-statements.html) applies a schema change to a table.
-
-{{site.data.alerts.callout_info}}To understand how CockroachDB changes schema elements without requiring table locking or other user-visible downtime, see Online Schema Changes in CockroachDB.{{site.data.alerts.end}}
-
-
-## Subcommands
-
-For information on using `ALTER TABLE`, see the documents for its relevant subcommands.
-
-Subcommand | Description
------------|------------
-[`ADD COLUMN`](add-column.html) | Add columns to tables.
-[`ADD CONSTRAINT`](add-constraint.html) | Add constraints to columns.
-[`ALTER COLUMN`](alter-column.html) | Change or drop a column's [Default constraint](default-value.html) or drop the [Not Null constraint](not-null.html).
-[`DROP COLUMN`](drop-column.html) | Remove columns from tables.
-[`DROP CONSTRAINT`](drop-constraint.html) | Remove constraints from columns.
-[`EXPERIMENTAL_AUDIT`](experimental-audit.html) | Enable per-table audit logs.
-[`PARTITION BY`](partition-by.html) | New in v2.0: Repartition or unpartition a table with partitions ([Enterprise-only](enterprise-licensing.html)).
-[`RENAME COLUMN`](rename-column.html) | Change the names of columns.
-[`RENAME TABLE`](rename-table.html) | Change the names of tables.
-[`SPLIT AT`](split-at.html) | Force a key-value layer range split at the specified row in the table.
-[`VALIDATE CONSTRAINT`](validate-constraint.html) | Check whether values in a column match a [constraint](constraints.html) on the column.
-
-## Viewing Schema Changes
-
-{% include {{ page.version.version }}/misc/schema-change-view-job.md %}
diff --git a/src/current/v2.0/alter-user.md b/src/current/v2.0/alter-user.md
deleted file mode 100644
index bb7dcda941f..00000000000
--- a/src/current/v2.0/alter-user.md
+++ /dev/null
@@ -1,82 +0,0 @@
----
-title: ALTER USER
-summary: The ALTER USER statement can be used to add or change a user's password.
-toc: true
----
-
-New in v2.0: The `ALTER USER` [statement](sql-statements.html) can be used to add or change a [user's](create-and-manage-users.html) password.
-
-{{site.data.alerts.callout_success}}You can also use the cockroach user command to add or change a user's password.{{site.data.alerts.end}}
-
-
-## Considerations
-
-- Password creation and alteration is supported only in secure clusters for non-`root` users.
-
-## Required Privileges
-
-The user must have the `INSERT` and `UPDATE` [privileges](privileges.html) on the `system.users` table.
-
-## Synopsis
-
-
-{% include {{ page.version.version }}/sql/diagrams/alter_user_password.html %}
-
-## Parameters
-
-
-
-Parameter | Description
-----------|-------------
-`name` | The name of the user whose password you want to create or change.
-`password` | Let the user [authenticate their access to a secure cluster](create-user.html#user-authentication) using this new password. Passwords should be entered as a [string literal](sql-constants.html#string-literals). For compatibility with PostgreSQL, a password can also be entered as an [identifier](#change-password-using-an-identifier), although this is discouraged.
-
-## Examples
-
-### Change Password Using a String Literal
-
-{% include copy-clipboard.html %}
-~~~ sql
-> ALTER USER carl WITH PASSWORD 'ilov3beefjerky';
-~~~
-~~~
-ALTER USER 1
-~~~
-
-### Change Password Using an Identifier
-
-The following statement changes the password to `ilov3beefjerky`, as above:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> ALTER USER carl WITH PASSWORD ilov3beefjerky;
-~~~
-
-This is equivalent to the example in the previous section because the password contains only lowercase characters.
-
-In contrast, the following statement changes the password to `thereisnotomorrow`, even though the password in the syntax contains capitals, because identifiers are normalized automatically:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> ALTER USER carl WITH PASSWORD ThereIsNoTomorrow;
-~~~
-
-To preserve case in a password specified using identifier syntax, use double quotes:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> ALTER USER carl WITH PASSWORD "ThereIsNoTomorrow";
-~~~
-
-## See Also
-
-- [`cockroach user` command](create-and-manage-users.html)
-- [`DROP USER`](drop-user.html)
-- [`SHOW USERS`](show-users.html)
-- [`GRANT`](grant.html)
-- [`SHOW GRANTS`](show-grants.html)
-- [Create Security Certificates](create-security-certificates.html)
-- [Other SQL Statements](sql-statements.html)
diff --git a/src/current/v2.0/alter-view.md b/src/current/v2.0/alter-view.md
deleted file mode 100644
index e2594d0e8d7..00000000000
--- a/src/current/v2.0/alter-view.md
+++ /dev/null
@@ -1,76 +0,0 @@
----
-title: ALTER VIEW
-summary: The ALTER VIEW statement changes the name of a view.
-toc: true
----
-
-The `ALTER VIEW` [statement](sql-statements.html) changes the name of a [view](views.html).
-
-{{site.data.alerts.callout_info}}It is not currently possible to change the SELECT statement executed by a view. Instead, you must drop the existing view and create a new view. Also, it is not currently possible to rename a view that other views depend on, but this ability may be added in the future (see this issue).{{site.data.alerts.end}}
-
-
-## Required Privileges
-
-The user must have the `DROP` [privilege](privileges.html) on the view and the `CREATE` privilege on the parent database.
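-
-To check a view's current privileges, you can use [`SHOW GRANTS`](show-grants.html); for example, with the `bank.user_emails` view used in the example below:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SHOW GRANTS ON TABLE bank.user_emails;
-~~~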
-
-## Synopsis
-
-
-{% include {{ page.version.version }}/sql/diagrams/alter_view.html %}
-
-
-## Parameters
-
-Parameter | Description
-----------|------------
-`IF EXISTS` | Rename the view only if a view named `view_name` exists; if one does not exist, do not return an error.
-`view_name` | The name of the view to rename. To find view names, use: `SELECT * FROM information_schema.tables WHERE table_type = 'VIEW';`
-`name` | The new [`name`](sql-grammar.html#name) for the view, which must be unique to its database and follow these [identifier rules](keywords-and-identifiers.html#identifiers).
-
-## Example
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM information_schema.tables WHERE table_type = 'VIEW';
-~~~
-
-~~~
-+---------------+-------------------+--------------------+------------+---------+
-| TABLE_CATALOG | TABLE_SCHEMA | TABLE_NAME | TABLE_TYPE | VERSION |
-+---------------+-------------------+--------------------+------------+---------+
-| def | bank | user_accounts | VIEW | 2 |
-| def | bank | user_emails | VIEW | 1 |
-+---------------+-------------------+--------------------+------------+---------+
-(2 rows)
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> ALTER VIEW bank.user_emails RENAME TO bank.user_email_addresses;
-~~~
-
-~~~
-RENAME VIEW
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM information_schema.tables WHERE table_type = 'VIEW';
-~~~
-
-~~~
-+---------------+-------------------+----------------------+------------+---------+
-| TABLE_CATALOG | TABLE_SCHEMA | TABLE_NAME | TABLE_TYPE | VERSION |
-+---------------+-------------------+----------------------+------------+---------+
-| def | bank | user_accounts | VIEW | 2 |
-| def | bank | user_email_addresses | VIEW | 3 |
-+---------------+-------------------+----------------------+------------+---------+
-(2 rows)
-~~~
-
-## See Also
-
-- [Views](views.html)
-- [`CREATE VIEW`](create-view.html)
-- [`SHOW CREATE VIEW`](show-create-view.html)
-- [`DROP VIEW`](drop-view.html)
diff --git a/src/current/v2.0/architecture/distribution-layer.md b/src/current/v2.0/architecture/distribution-layer.md
deleted file mode 100644
index 91f4a4ca766..00000000000
--- a/src/current/v2.0/architecture/distribution-layer.md
+++ /dev/null
@@ -1,185 +0,0 @@
----
-title: Distribution Layer
-summary: The distribution layer of CockroachDB's architecture provides a unified view of your cluster's data.
-toc: true
----
-
-The Distribution Layer of CockroachDB's architecture provides a unified view of your cluster's data.
-
-{{site.data.alerts.callout_info}}If you haven't already, we recommend reading the Architecture Overview.{{site.data.alerts.end}}
-
-
-## Overview
-
-To make all data in your cluster accessible from any node, CockroachDB stores data in a monolithic sorted map of key-value pairs. This keyspace describes all of the data in your cluster, as well as its location, and is divided into what we call "ranges", contiguous chunks of the keyspace, so that every key can always be found in a single range.
-
-CockroachDB implements a sorted map to enable:
-
- - **Simple lookups**: Because we identify which nodes are responsible for certain portions of the data, queries are able to quickly locate where to find the data they want.
- - **Efficient scans**: By defining the order of data, it's easy to find data within a particular range during a scan.
-
-### Monolithic Sorted Map Structure
-
-The monolithic sorted map comprises two fundamental elements:
-
-- System data, which include **meta ranges** that describe the locations of data in your cluster (among many other cluster-wide and local data elements)
-- User data, which store your cluster's **table data**
-
-#### Meta Ranges
-
-The locations of all ranges in your cluster are stored in a two-level index at the beginning of your key-space, known as meta ranges, where the first level (`meta1`) addresses the second, and the second (`meta2`) addresses data in the cluster. Importantly, every node has information on where to locate the `meta1` range (known as its Range Descriptor, detailed below), and the range is never split.
-
-This meta range structure lets us address up to 4EiB of user data by default: we can address 2^(18 + 18) = 2^36 ranges; each range addresses 2^26 B, and altogether we address 2^(36+26) B = 2^62 B = 4EiB. However, with larger range sizes, it's possible to expand this capacity even further.
-
-Meta ranges are treated mostly like normal ranges and are accessed and replicated just like other elements of your cluster's KV data.
-
-Each node caches values of the `meta2` range it has accessed before, which optimizes access of that data in the future. Whenever a node discovers that its `meta2` cache is invalid for a specific key, the cache is updated by performing a regular read on the `meta2` range.
-
-#### Table Data
-
-After the meta ranges comes the KV data your cluster stores.
-
-Each table and its secondary indexes initially map to a single range, where each key-value pair in the range represents a single row in the table (also called the primary index because the table is sorted by the primary key) or a single row in a secondary index. As soon as a range reaches 64 MiB in size, it splits into two ranges. This process continues as a table and its indexes continue growing. Once a table is split across multiple ranges, it's likely that the table and secondary indexes will be stored in separate ranges. However, a range can still contain data for both the table and a secondary index.
-
-The default 64 MiB range size represents a sweet spot for us: small enough for ranges to move quickly between nodes, but large enough to store a meaningfully contiguous set of data whose keys are more likely to be accessed together. These ranges are then shuffled around your cluster to ensure survivability.
-
-These ranges are replicated (in the aptly named Replication Layer), and have the addresses of each replica stored in the `meta2` range.
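-
-To see how a table's data is divided into ranges (and where each range's Leaseholder is), you can use the experimental `SHOW EXPERIMENTAL_RANGES` statement; the `accounts` table here is hypothetical:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SHOW EXPERIMENTAL_RANGES FROM TABLE accounts;
-~~~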
-
-### Using the Monolithic Sorted Map
-
-When a node receives a request, it looks at the Meta Ranges to find out which node it needs to route the request to by comparing the keys in the request to the keys in its `meta2` range.
-
-These meta ranges are heavily cached, so this is normally handled without having to send an RPC to the node actually containing the `meta2` ranges.
-
-The node then sends those KV operations to the Leaseholder identified in the `meta2` range. However, it's possible that the data has moved, in which case the node that no longer has the data replies to the requesting node, indicating where it's now located. In that case, we go back to the `meta2` range to get more up-to-date information and try again.
-
-### Interactions with Other Layers
-
-In relationship to other layers in CockroachDB, the Distribution Layer:
-
-- Receives requests from the Transaction Layer on the same node.
-- Identifies which nodes should receive the request, and then sends the request to the proper node's Replication Layer.
-
-## Technical Details & Components
-
-### gRPC
-
-gRPC is the software nodes use to communicate with one another. Because the Distribution Layer is the first layer to communicate with other nodes, CockroachDB implements gRPC here.
-
-gRPC requires inputs and outputs to be formatted as protocol buffers (protobufs). To leverage gRPC, CockroachDB implements a protocol-buffer-based API defined in `api.proto`.
-
-For more information about gRPC, see the [official gRPC documentation](http://www.grpc.io/docs/guides/).
-
-### BatchRequest
-
-All KV operation requests are bundled into a [protobuf](https://en.wikipedia.org/wiki/Protocol_Buffers), known as a `BatchRequest`. The `BatchRequest` header identifies the batch's destination, as well as a pointer to the request's transaction record. (On the other side, when a node is replying to a `BatchRequest`, it uses a protobuf––`BatchResponse`.)
-
-This `BatchRequest` is also what's used to send requests between nodes using gRPC, which accepts and sends protocol buffers.
-
-### DistSender
-
-The gateway/coordinating node's `DistSender` receives `BatchRequest`s from its own `TxnCoordSender`. `DistSender` is then responsible for breaking up `BatchRequests` and routing a new set of `BatchRequests` to the nodes that its cached `meta2` ranges identify as containing the data. It will use the cache to send the request to the Leaseholder, but it's also prepared to try the other replicas, in order of "proximity". The replica that the cache says is the Leaseholder is simply moved to the front of the list of replicas to be tried, and then an RPC is sent to all of them, in order.
-
-Requests received by a non-Leaseholder fail with an error pointing at the replica's last known Leaseholder. These requests are retried transparently with the updated lease by the gateway node and never reach the client.
-
-As nodes begin replying to these commands, `DistSender` also aggregates the results in preparation for returning them to the client.
-
-### Meta Range KV Structure
-
-Like all other data in your cluster, meta ranges are structured as KV pairs. Both meta ranges have a similar structure:
-
-~~~
-metaX/successorKey -> LeaseholderAddress, [list of other nodes containing data]
-~~~
-
-Element | Description
---------|------------------------
-`metaX` | The level of meta range. Here we use a simplified `meta1` or `meta2`, but these are actually represented in `cockroach` as `\x02` and `\x03` respectively.
-`successorKey` | The first key *greater* than the key you're scanning for. This makes CockroachDB's scans efficient; it simply scans the keys until it finds a value greater than the key it's looking for, and that is where it finds the relevant data. The `successorKey` for the end of a keyspace is identified as `maxKey`.
-`LeaseholderAddress` | The replica primarily responsible for reads and writes, known as the Leaseholder. The Replication Layer contains more information about [Leases](replication-layer.html#leases).
-
-Here's an example:
-
-~~~
-meta2/M -> node1:26257, node2:26257, node3:26257
-~~~
-
-In this case, the replica on `node1` is the Leaseholder, and nodes 2 and 3 also contain replicas.
-
-#### Example
-
-Let's imagine we have an alphabetically sorted column, which we use for lookups. Here's what the meta ranges would approximately look like:
-
-1. `meta1` contains the address for the nodes containing the `meta2` replicas.
-
- ~~~
- # Points to meta2 range for keys [A-M)
- meta1/M -> node1:26257, node2:26257, node3:26257
-
- # Points to meta2 range for keys [M-Z]
- meta1/maxKey -> node4:26257, node5:26257, node6:26257
- ~~~
-
-2. `meta2` contains addresses for the nodes containing the replicas of each range in the cluster, the first of which is the [Leaseholder](replication-layer.html#leases).
-
- ~~~
- # Contains [A-G)
- meta2/G -> node1:26257, node2:26257, node3:26257
-
- # Contains [G-M)
- meta2/M -> node1:26257, node2:26257, node3:26257
-
-    # Contains [M-Z)
-    meta2/Z -> node4:26257, node5:26257, node6:26257
-
-    # Contains [Z-maxKey)
-    meta2/maxKey -> node4:26257, node5:26257, node6:26257
- ~~~
-
-### Table Data KV Structure
-
-Key-value data, which represents the data in your tables, uses the following structure:
-
-~~~
-/<table Id>/<index id>/<indexed column values> -> <stored / covered column values>
-~~~
-
-The table itself is stored with an `index_id` of 1 for its `PRIMARY KEY` columns, with the rest of the columns in the table considered as stored/covered columns.
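-
-As a hypothetical illustration, a row with primary key `42` in an `accounts` table that was assigned table ID `51` would be stored at a key shaped like:
-
-~~~
-/51/1/42 -> (remaining column values for the row)
-~~~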
-
-### Range Descriptors
-
-Each range in CockroachDB contains metadata, known as a Range Descriptor. A Range Descriptor comprises the following:
-
-- A sequential RangeID
-- The keyspace (i.e., the set of keys) the range contains; for example, the first and last `<indexed column values>` in the Table Data KV Structure above. This determines the `meta2` range's keys.
-- The addresses of nodes containing replicas of the range, with its Leaseholder (which is responsible for its reads and writes) in the first position. This determines the `meta2` range's key's values.
-
-Because Range Descriptors comprise the key-value data of the `meta2` range, each node's `meta2` cache also stores Range Descriptors.
-
-Range Descriptors are updated whenever there are:
-
-- Membership changes to a range's Raft group (discussed in more detail in the [Replication Layer](replication-layer.html#membership-changes-rebalance-repair))
-- Leaseholder changes
-- Range splits
-
-All of these updates to the Range Descriptor occur locally on the range, and then propagate to the `meta2` range.
-
-### Range Splits
-
-By default, CockroachDB attempts to keep ranges/replicas at 64 MiB. Once a range reaches that limit, we split it into two 32 MiB ranges (composed of contiguous key spaces).
-
-During this range split, the node creates a new Raft group containing all of the same members as the range that was split. The fact that there are now two ranges also means that there is a transaction that updates `meta2` with the new keyspace boundaries, as well as the addresses of the nodes using the Range Descriptor.
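-
-You can also force a split manually with [`SPLIT AT`](split-at.html); for example, on a hypothetical `accounts` table keyed by a numeric `id`:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> ALTER TABLE accounts SPLIT AT VALUES (5000);
-~~~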
-
-## Technical Interactions with Other Layers
-
-### Distribution & Transaction Layer
-
-The Distribution Layer's `DistSender` receives `BatchRequests` from its own node's `TxnCoordSender`, housed in the Transaction Layer.
-
-### Distribution & Replication Layer
-
-The Distribution Layer routes `BatchRequests` to nodes containing ranges of data; these requests are ultimately routed to the Raft group leader or Leaseholder, both of which are handled in the Replication Layer.
-
-## What's Next?
-
-Learn how CockroachDB copies data and ensures consistency in the [Replication Layer](replication-layer.html).
diff --git a/src/current/v2.0/architecture/overview.md b/src/current/v2.0/architecture/overview.md
deleted file mode 100644
index 8f26d270298..00000000000
--- a/src/current/v2.0/architecture/overview.md
+++ /dev/null
@@ -1,87 +0,0 @@
----
-title: Architecture Overview
-summary: Learn about the inner-workings of the CockroachDB architecture.
-toc: true
-key: cockroachdb-architecture.html
----
-
-CockroachDB was designed to create the open-source database our developers would want to use: one that is both scalable and consistent. Developers often have questions about how we've achieved this, and this guide sets out to detail the inner-workings of the `cockroach` process as a means of explanation.
-
-However, you definitely do not need to understand the underlying architecture to use CockroachDB. These pages give serious users and database enthusiasts a high-level framework to explain what's happening under the hood.
-
-## Using this Guide
-
-This guide is broken out into pages detailing each layer of CockroachDB. It's recommended to read through the layers sequentially, starting with this overview and then proceeding to the SQL Layer.
-
-If you're looking for a high-level understanding of CockroachDB, you can simply read the **Overview** section of each layer. For more technical detail––for example, if you're interested in [contributing to the project](https://wiki.crdb.io/wiki/spaces/CRDB/pages/73204033/Contributing+to+CockroachDB)––you should read the **Components** sections as well.
-
-{{site.data.alerts.callout_info}}This guide details how CockroachDB is built, but does not explain how you should architect an application using CockroachDB. For help with your own application's architecture using CockroachDB, check out our user documentation.{{site.data.alerts.end}}
-
-## Goals of CockroachDB
-
-CockroachDB was designed in service of the following goals:
-
-- Make life easier for humans. This means being low-touch and highly automated for operators and simple to reason about for developers.
-- Offer industry-leading consistency, even on massively scaled deployments. This means enabling distributed transactions, as well as removing the pain of eventual consistency issues and stale reads.
-- Create an always-on database that accepts reads and writes on all nodes without generating conflicts.
-- Allow flexible deployment in any environment, without tying you to any platform or vendor.
-- Support familiar tools for working with relational data (i.e., SQL).
-
-With the confluence of these features, we hope that CockroachDB lets teams easily build global, scalable, resilient cloud services.
-
-## Glossary
-
-### Terms
-
-It's helpful to understand a few terms before reading our architecture documentation.
-
-{% include {{ page.version.version }}/misc/basic-terms.md %}
-
-### Concepts
-
-CockroachDB heavily relies on the following concepts, so being familiar with them will help you understand what our architecture achieves.
-
-Term | Definition
------|-----------
-**Consistency** | CockroachDB uses "consistency" in both the sense of [ACID semantics](https://en.wikipedia.org/wiki/Consistency_(database_systems)) and the [CAP theorem](https://en.wikipedia.org/wiki/CAP_theorem), albeit less formally than either definition. What we try to express with this term is that your data should be anomaly-free.
-**Consensus** | When a range receives a write, a quorum of nodes containing replicas of the range acknowledge the write. This means your data is safely stored and a majority of nodes agree on the database's current state, even if some of the nodes are offline.<br><br>When a write *doesn't* achieve consensus, forward progress halts to maintain consistency within the cluster.
-**Replication** | Replication involves creating and distributing copies of data, as well as ensuring copies remain consistent. However, there are multiple types of replication: namely, synchronous and asynchronous.<br><br>Synchronous replication requires all writes to propagate to a quorum of copies of the data before being considered committed. To ensure consistency with your data, this is the kind of replication CockroachDB uses.<br><br>Asynchronous replication only requires a single node to receive the write to be considered committed; it's propagated to each copy of the data after the fact. This is more or less equivalent to "eventual consistency", which was popularized by NoSQL databases. This method of replication is likely to cause anomalies and loss of data.
-**Transactions** | A set of operations performed on your database that satisfy the requirements of [ACID semantics](https://en.wikipedia.org/wiki/Database_transaction). This is a crucial component for a consistent system to ensure developers can trust the data in their database.
-**Multi-Active Availability** | Our consensus-based notion of high availability that lets each node in the cluster handle reads and writes for a subset of the stored data (on a per-range basis). This is in contrast to active-passive replication, in which the active node receives 100% of request traffic, as well as active-active replication, in which all nodes accept requests but typically cannot guarantee that reads are both up-to-date and fast.
-
-## Overview
-
-CockroachDB starts running on machines with two commands:
-
-- `cockroach start` with a `--join` flag for all of the initial nodes in the cluster, so the process knows all of the other machines it can communicate with
-- `cockroach init` to perform a one-time initialization of the cluster
-
-Once the `cockroach` process is running, developers interact with CockroachDB through a SQL API, which we've modeled after PostgreSQL. Thanks to the symmetrical behavior of all nodes, you can send SQL requests to any of them; this makes CockroachDB really easy to integrate with load balancers.
-
-After receiving SQL RPCs, nodes convert them into operations that work with our distributed key-value store. As these RPCs start filling your cluster with data, CockroachDB algorithmically starts distributing your data among your nodes, breaking the data up into 64 MiB chunks that we call ranges. Each range is replicated to at least 3 nodes to ensure survivability. This way, if nodes go down, you still have copies of the data available for reads and writes, and the cluster can replicate the data to other nodes.
-
-If a node receives a read or write request it cannot directly serve, it simply finds the node that can handle the request, and communicates with it. This way, you do not need to know where your data lives; CockroachDB tracks it for you and enables symmetric behavior for each node.
-
-Any changes made to the data in a range rely on a consensus algorithm to ensure a majority of its replicas agree to commit the change, ensuring industry-leading isolation guarantees and providing your application consistent reads, regardless of which node you communicate with.
-
-Ultimately, data is written to and read from disk using an efficient storage engine, which is able to keep track of the data's timestamp. This has the benefit of letting us support the SQL standard `AS OF SYSTEM TIME` clause, letting you find historical data for a period of time.
-
-However, while that high-level overview gives you a notion of what CockroachDB does, looking at how the `cockroach` process operates on each of these nodes will give you much greater understanding of our architecture.
-
-### Layers
-
-At the highest level, CockroachDB converts clients' SQL statements into key-value (KV) data, which is distributed among nodes and written to disk. Our architecture is the process by which we accomplish that, manifested as a number of layers, each of which interacts with the layers directly above and below it as a relatively opaque service.
-
-The following pages describe the function each layer performs, but mostly ignore the details of other layers. This description is true to the experience of the layers themselves, which generally treat the other layers as black-box APIs. However, some interactions between layers are *not* clearly articulated here and require an understanding of each layer's function to understand the entire process.
-
-Layer | Order | Purpose
-------|------------|--------
-[SQL](sql-layer.html) | 1 | Translate client SQL queries to KV operations.
-[Transactional](transaction-layer.html) | 2 | Allow atomic changes to multiple KV entries.
-[Distribution](distribution-layer.html) | 3 | Present replicated KV ranges as a single entity.
-[Replication](replication-layer.html) | 4 | Consistently and synchronously replicate KV ranges across many nodes. This layer also enables consistent reads via leases.
-[Storage](storage-layer.html) | 5 | Write and read KV data on disk.
-
-## What's Next?
-
-Begin understanding our architecture by learning how CockroachDB works with applications in the [SQL Layer](sql-layer.html).
diff --git a/src/current/v2.0/architecture/replication-layer.md b/src/current/v2.0/architecture/replication-layer.md
deleted file mode 100644
index 25e2b588a8f..00000000000
--- a/src/current/v2.0/architecture/replication-layer.md
+++ /dev/null
@@ -1,108 +0,0 @@
----
-title: Replication Layer
-summary: The replication layer of CockroachDB's architecture copies data between nodes and ensures consistency between copies.
-toc: true
----
-
-The Replication Layer of CockroachDB's architecture copies data between nodes and ensures consistency between these copies by implementing our consensus algorithm.
-
-{{site.data.alerts.callout_info}}If you haven't already, we recommend reading the Architecture Overview.{{site.data.alerts.end}}
-
-## Overview
-
-High availability requires that your database can tolerate nodes going offline without interrupting service to your application. This means replicating data between nodes to ensure the data remains accessible.
-
-Ensuring consistency with nodes offline, though, is a challenge many databases fail to meet. To solve this problem, CockroachDB uses a consensus algorithm to require that a quorum of replicas agrees on any changes to a range before those changes are committed. Because 3 is the smallest number that can achieve quorum (i.e., 2 out of 3), CockroachDB's high availability (known as Multi-Active Availability) requires 3 nodes.
-
-The number of failures that can be tolerated is equal to *(Replication factor - 1)/2*. For example, with 3x replication, one failure can be tolerated; with 5x replication, two failures, and so on. You can control the replication factor at the cluster, database, and table level using [Replication Zones](../configure-replication-zones.html).
-
-When failures happen, though, CockroachDB automatically realizes nodes have stopped responding and works to redistribute your data to continue maximizing survivability. This process also works the other way around: when new nodes join your cluster, data automatically rebalances onto them, ensuring your load is evenly distributed.
-
-### Interactions with Other Layers
-
-In relationship to other layers in CockroachDB, the Replication Layer:
-
-- Receives requests from and sends responses to the Distribution Layer.
-- Writes accepted requests to the Storage Layer.
-
-## Components
-
-### Raft
-
-Raft is a consensus protocol––an algorithm which makes sure that your data is safely stored on multiple machines, and that those machines agree on the current state even if some of them are temporarily disconnected.
-
-Raft organizes all nodes that contain a replica of a range into a group--unsurprisingly called a Raft Group. Each replica in a Raft Group is either a "leader" or a "follower". The leader, which is elected by Raft and long-lived, coordinates all writes to the Raft Group. It heartbeats followers periodically and keeps their logs replicated. In the absence of heartbeats, followers become candidates after randomized election timeouts and proceed to hold new leader elections.
-
-Once a node receives a `BatchRequest` for a range it contains, it converts those KV operations into Raft commands. Those commands are proposed to the Raft group leader––which is what makes it ideal for the [Leaseholder](#leases) and the Raft leader to be one and the same––and written to the Raft log.
-
-For a great overview of Raft, we recommend [The Secret Lives of Data](http://thesecretlivesofdata.com/raft/).
-
-#### Raft Logs
-
-When writes receive a quorum and are committed by the Raft group leader, they're appended to the Raft log. This provides an ordered set of commands that the replicas agreed on and is essentially the source of truth for consistent replication.
-
-Because this log is treated as serializable, it can be replayed to bring a node from a past state to its current state. This log also lets nodes that temporarily went offline be "caught up" to the current state without needing to receive a copy of the existing data in the form of a snapshot.
-
-### Snapshots
-
-Each replica can be "snapshotted", which copies all of its data as of a specific timestamp (available because of [MVCC](storage-layer.html#mvcc)). This snapshot can be sent to other nodes during a rebalance event to expedite replication.
-
-After loading the snapshot, the node gets up to date by replaying all actions from the Raft group's log that have occurred since the snapshot was taken.
-
-### Leases
-
-A single node in the Raft group acts as the Leaseholder, which is the only node that can serve reads or propose writes to the Raft group leader (both actions are received as `BatchRequests` from [`DistSender`](distribution-layer.html#distsender)).
-
-When serving reads, Leaseholders bypass Raft; for the Leaseholder's writes to have been committed in the first place, they must have already achieved consensus, so a second consensus on the same data is unnecessary. This has the benefit of not incurring networking round trips required by Raft and greatly increases the speed of reads (without sacrificing consistency).
-
-CockroachDB attempts to elect a Leaseholder who is also the Raft group leader, which can also optimize the speed of writes.
-
-If there is no Leaseholder, any node receiving a request will attempt to become the Leaseholder for the range. To prevent two nodes from acquiring the lease, the requester includes a copy of the last valid lease it had; if another node became the Leaseholder, its request is ignored.
-
-#### Co-location with Raft Leadership
-
-The range lease is completely separate from Raft leadership, and so without further efforts, Raft leadership and the Range lease might not be held by the same Replica. However, we can optimize query performance by making the same node both Raft leader and the Leaseholder; it reduces network round trips if the Leaseholder receiving the requests can simply propose the Raft commands to itself, rather than communicating them to another node.
-
-To achieve this, each lease renewal or transfer also attempts to colocate the lease and Raft leadership. In practice, that means that the mismatch is rare and self-corrects quickly.
-
-#### Epoch-Based Leases (Table Data)
-
-To manage leases for table data, CockroachDB implements a notion of "epochs," which are defined as the period between a node joining a cluster and a node disconnecting from a cluster. When the node disconnects, the epoch is considered changed, and the node immediately loses all of its leases.
-
-This mechanism lets us avoid tracking leases for every range, which eliminates a substantial amount of traffic we would otherwise incur. Instead, we assume leases do not expire until a node loses connection.
-
-#### Expiration-Based Leases (Meta & System Ranges)
-
-Your table's meta and system ranges (detailed in the Distribution Layer) are treated as normal key-value data, and therefore have Leases, as well. However, instead of using epochs, they have an expiration-based lease. These leases simply expire at a particular timestamp (typically a few seconds in the future)––however, as long as the node continues proposing Raft commands, it continues to extend the expiration of the lease. If it doesn't, the next node containing a replica of the range that tries to read from or write to the range will become the Leaseholder.
-
-### Membership Changes: Rebalance/Repair
-
-Whenever there are changes to a cluster's number of nodes, the members of Raft groups change and, to ensure optimal survivability and performance, replicas need to be rebalanced. What that looks like varies depending on whether the membership change is nodes being added or going offline.
-
-**Nodes added**: The new node communicates information about itself to other nodes, indicating that it has space available. The cluster then rebalances some replicas onto the new node.
-
-**Nodes going offline**: If a member of a Raft group ceases to respond, after 5 minutes the cluster begins to rebalance by replicating the data that the downed node held to other nodes.
-
-#### Rebalancing Replicas
-
-When CockroachDB detects a membership change, replicas are ultimately moved between nodes.
-
-This is achieved by using a snapshot of a replica from the Leaseholder, and then sending the data to another node over [gRPC](distribution-layer.html#grpc). After the transfer has been completed, the node with the new replica joins that range's Raft group; it then detects that its latest timestamp is behind the most recent entries in the Raft log and it replays all of the actions in the Raft log on itself.
-
-## Interactions with Other Layers
-
-### Replication & Distribution Layers
-
-The Replication Layer receives requests from its and other nodes' `DistSender`. If this node is the Leaseholder for the range, it accepts the requests; if it isn't, it returns an error with a pointer to which node it believes *is* the Leaseholder. These KV requests are then turned into Raft commands.
-
-The Replication layer sends `BatchResponses` back to the Distribution Layer's `DistSender`.
-
-### Replication & Storage Layers
-
-Committed Raft commands are written to the Raft log and ultimately stored on disk through the Storage Layer.
-
-The Leaseholder serves reads from its RocksDB instance, which is in the Storage Layer.
-
-## What's Next?
-
-Learn how CockroachDB reads and writes data from disk in the [Storage Layer](storage-layer.html).
diff --git a/src/current/v2.0/architecture/sql-layer.md b/src/current/v2.0/architecture/sql-layer.md
deleted file mode 100644
index b6d31b6f456..00000000000
--- a/src/current/v2.0/architecture/sql-layer.md
+++ /dev/null
@@ -1,100 +0,0 @@
----
-title: SQL Layer
-summary: The SQL layer of CockroachDB's architecture exposes its SQL API to developers and converts SQL statements into key-value operations.
-toc: true
----
-
-The SQL Layer of CockroachDB's architecture exposes its SQL API to developers and converts SQL statements into key-value operations used by the rest of the database.
-
-{{site.data.alerts.callout_info}}If you haven't already, we recommend reading the Architecture Overview.{{site.data.alerts.end}}
-
-## Overview
-
-Once CockroachDB has been deployed, developers need nothing more than a connection string to the cluster and SQL statements to start working.
-
-Because CockroachDB's nodes all behave symmetrically, developers can send requests to any node (which means CockroachDB works well with load balancers). Whichever node receives the request acts as the "gateway node," as other layers process the request.
-
-When developers send requests to the cluster, they arrive as SQL statements, but data is ultimately written to and read from the storage layer as key-value (KV) pairs. To handle this, the SQL layer converts SQL statements into a plan of KV operations, which it passes along to the Transaction Layer.
-
-### Interactions with Other Layers
-
-In relationship to other layers in CockroachDB, the SQL Layer:
-
-- Sends requests to the Transaction Layer.
-
-## Components
-
-### Relational Structure
-
-Developers experience data stored in CockroachDB in a relational structure, i.e., rows and columns. Sets of rows and columns are organized into tables. Collections of tables are organized into databases. Your cluster can contain many databases.
-
-Because of this structure, CockroachDB provides typical relational features like constraints (e.g., foreign keys). This lets application developers trust that the database will ensure consistent structuring of the application's data; data validation doesn't need to be built into the application logic separately.
-
-### SQL API
-
-CockroachDB implements a large portion of the ANSI SQL standard to manifest its relational structure. You can view [all of the SQL features CockroachDB supports here](../sql-feature-support.html).
-
-Importantly, through the SQL API, we also let developers use ACID-semantic transactions just like they would through any SQL database (`BEGIN`, `END`, `ISOLATION LEVELS`, etc.).
-
-### PostgreSQL Wire Protocol
-
-SQL queries reach your cluster through the PostgreSQL wire protocol. This makes connecting your application to the cluster simple by supporting most PostgreSQL-compatible drivers, as well as many PostgreSQL ORMs, such as GORM (Go) and Hibernate (Java).
-
-### SQL Parser, Planner, Executor
-
-After your node ultimately receives a SQL request from a client, CockroachDB parses the statement, creates a query plan, and then executes the plan.
-
-#### Parsing
-
-Received queries are parsed against our `yacc` file (which describes our supported syntax); parsing converts the string version of each query into an [Abstract Syntax Tree](https://en.wikipedia.org/wiki/Abstract_syntax_tree) (AST).
-
-#### Planning
-
-With the AST, CockroachDB begins [semantic analysis](https://en.wikipedia.org/wiki/Semantic_analysis_(compilers)), which includes checking whether the query is valid, resolving names, eliminating unneeded intermediate computations, and finalizing which data types to use for intermediate results.
-
-At the same time, CockroachDB starts planning the query's execution by generating a tree of `planNodes`. Each `planNode` contains a set of code that uses KV operations; this is ultimately how SQL statements are converted into KV operations.
-
-You can see the `planNodes` a query generates using [`EXPLAIN`](../explain.html).
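-
-For example, with a hypothetical `accounts` table:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> EXPLAIN SELECT * FROM accounts WHERE balance > 100;
-~~~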
-
-#### Executing
-
-`planNodes` are then executed, which begins by communicating with the Transaction Layer.
-
-This step also includes encoding values from your statements, as well as decoding values returned from lower layers.
-
-### Encoding
-
-Though SQL queries are written in parsable strings, lower layers of CockroachDB deal primarily in bytes. This means that at the SQL layer, in query execution, CockroachDB must convert row data from its SQL representation as strings into bytes, and convert bytes returned from lower layers into SQL data that can be passed back to the client.
-
-It's also important––for indexed columns––that this byte encoding preserve the same sort order as the data type it represents. This is because of the way CockroachDB ultimately stores data in a sorted key-value map; storing bytes in the same order as the data it represents lets us efficiently scan KV data.
-
-However, for non-indexed columns (e.g., non-`PRIMARY KEY` columns), CockroachDB instead uses an encoding (known as "value encoding") which consumes less space but does not preserve ordering.
-
-You can find more exhaustive detail in the [Encoding Tech Note](https://github.com/cockroachdb/cockroach/blob/master/docs/tech-notes/encoding.md).
-
-### DistSQL
-
-Because CockroachDB is a distributed database, we've developed a Distributed SQL (DistSQL) optimization tool for some queries, which can dramatically speed up queries that involve many ranges. Though DistSQL's architecture is worthy of its own documentation, this cursory explanation can provide some insight into how it works.
-
-In non-distributed queries, the coordinating node receives all of the rows that match its query, and then performs any computations on the entire data set.
-
-However, for DistSQL-compatible queries, each node does computations on the rows it contains, and then sends the results (instead of the entire rows) to the coordinating node. The coordinating node then aggregates the results from each node, and finally returns a single response to the client.
-
-This dramatically reduces the amount of data brought to the coordinating node, and leverages the well-proven concept of parallel computing, ultimately reducing the time it takes for complex queries to complete. In addition, this processes data on the node that already stores it, which lets CockroachDB handle row-sets that are larger than an individual node's storage.
-
-To run SQL statements in a distributed fashion, we introduce a couple of concepts:
-
-- **Logical plan**: Similar to the AST/`planNode` tree described above, it represents the abstract (non-distributed) data flow through computation stages.
-- **Physical plan**: A physical plan is conceptually a mapping of the logical plan nodes to physical machines running `cockroach`. Logical plan nodes are replicated and specialized depending on the cluster topology. Like `planNodes` above, these components of the physical plan are scheduled and run on the cluster.
-
-You can find much greater detail in the [DistSQL RFC](https://github.com/cockroachdb/cockroach/blob/master/docs/RFCS/20160421_distributed_sql.md).
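-
-Assuming your build exposes the `distsql` session variable (values include `off`, `auto`, and `on`), you can experiment with whether queries are considered for distributed execution:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SET distsql = on;
-~~~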
-
-## Technical Interactions with Other Layers
-
-### SQL & Transaction Layer
-
-KV operations from executed `planNodes` are sent to the Transaction Layer.
-
-## What's Next?
-
-Learn how CockroachDB handles concurrent requests in the [Transaction Layer](transaction-layer.html).
diff --git a/src/current/v2.0/architecture/storage-layer.md b/src/current/v2.0/architecture/storage-layer.md
deleted file mode 100644
index 6e482a94b79..00000000000
--- a/src/current/v2.0/architecture/storage-layer.md
+++ /dev/null
@@ -1,68 +0,0 @@
----
-title: Storage Layer
-summary: The storage layer of CockroachDB's architecture reads and writes data to disk.
-toc: true
----
-
-The Storage Layer of CockroachDB's architecture reads and writes data to disk.
-
-{{site.data.alerts.callout_info}}If you haven't already, we recommend reading the Architecture Overview.{{site.data.alerts.end}}
-
-## Overview
-
-Each CockroachDB node contains at least one `store`, specified when the node starts, which is where the `cockroach` process reads and writes its data on disk.
-
-This data is stored as key-value pairs on disk using RocksDB, which is treated primarily as a black-box API. Internally, each store contains three instances of RocksDB:
-
-- One for the Raft log
-- One for storing temporary Distributed SQL data
-- One for all other data on the node
-
-There is also a block cache shared amongst all of the stores in a node. These stores in turn have a collection of range replicas. More than one replica for a range will never be placed on the same store or even the same node.
-
-### Interactions with Other Layers
-
-In relationship to other layers in CockroachDB, the Storage Layer:
-
-- Serves successful reads and writes from the Replication Layer.
-
-## Components
-
-### RocksDB
-
-CockroachDB uses RocksDB––an embedded key-value store––to read and write data to disk. You can find more information about it on the [RocksDB Basics GitHub page](https://github.com/facebook/rocksdb/wiki/RocksDB-Basics).
-
-RocksDB integrates really well with CockroachDB for a number of reasons:
-
-- Key-value store, which makes mapping to our key-value layer very simple
-- Atomic write batches and snapshots, which give us a subset of transactions
-
-Efficient storage for the keys is guaranteed by the underlying RocksDB engine by means of prefix compression.
-
-### MVCC
-
-CockroachDB relies heavily on [multi-version concurrency control (MVCC)](https://en.wikipedia.org/wiki/Multiversion_concurrency_control) to process concurrent requests and guarantee consistency. Much of this work is done by using [hybrid logical clock (HLC) timestamps](transaction-layer.html#time-hybrid-logical-clocks) to differentiate between versions of data, track commit timestamps, and identify a value's garbage collection expiration. All of this MVCC data is then stored in RocksDB.
-
-Despite being implemented in the Storage Layer, MVCC values are widely used to enforce consistency in the [Transaction Layer](transaction-layer.html). For example, CockroachDB maintains a [Timestamp Cache](transaction-layer.html#timestamp-cache), which stores the timestamp of the last time a key was read. If a write operation occurs at a lower timestamp than the largest value in the Timestamp Cache, it signifies there's a potential anomaly and the transaction must be restarted at a later timestamp.
-
-#### Time-Travel
-
-As described in the [SQL:2011 standard](https://en.wikipedia.org/wiki/SQL:2011#Temporal_support), CockroachDB supports time travel queries (enabled by MVCC).
-
-To do this, all of the schema information also has an MVCC-like model behind it. This lets you perform `SELECT...AS OF SYSTEM TIME`, and CockroachDB actually uses the schema information as of that time to formulate the queries.
-
-Using these tools, you can get consistent data from your database as far back as your garbage collection period.
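-
-For example, to read a hypothetical `accounts` table as it existed at an (illustrative) past timestamp:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM accounts AS OF SYSTEM TIME '2018-01-01 00:00:00';
-~~~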
-
-### Garbage Collection
-
-CockroachDB regularly garbage collects MVCC values to reduce the size of data stored on disk. To do this, we remove old MVCC values when a newer MVCC value exists whose timestamp is older than the garbage collection period. By default, the garbage collection period is 24 hours, but it can be set at the cluster, database, or table level through [Replication Zones](../configure-replication-zones.html).
-
-## Interactions with Other Layers
-
-### Storage & Replication Layers
-
-The Storage Layer commits writes from the Raft log to disk, as well as returns requested data (i.e., reads) to the Replication Layer.
-
-## What's Next?
-
-Now that you've learned about our architecture, [start a local cluster](../install-cockroachdb.html) and start [building an app with CockroachDB](../build-an-app-with-cockroachdb.html).
diff --git a/src/current/v2.0/architecture/transaction-layer.md b/src/current/v2.0/architecture/transaction-layer.md
deleted file mode 100644
index 1e3d7d4e9eb..00000000000
--- a/src/current/v2.0/architecture/transaction-layer.md
+++ /dev/null
@@ -1,197 +0,0 @@
----
-title: Transaction Layer
-summary: The transaction layer of CockroachDB's architecture implements support for ACID transactions by coordinating concurrent operations.
-toc: true
----
-
-The Transaction Layer of CockroachDB's architecture implements support for ACID transactions by coordinating concurrent operations.
-
-{{site.data.alerts.callout_info}}If you haven't already, we recommend reading the Architecture Overview.{{site.data.alerts.end}}
-
-## Overview
-
-Above all else, CockroachDB believes consistency is the most important feature of a database––without it, developers cannot build reliable tools, and businesses suffer from potentially subtle and hard to detect anomalies.
-
-To provide consistency, CockroachDB implements full support for ACID transaction semantics in the Transaction Layer. However, it's important to realize that *all* statements are handled as transactions, including single statements––this is sometimes referred to as "autocommit mode" because it behaves as if every statement is followed by a `COMMIT`.
-
-For code samples of using transactions in CockroachDB, see our documentation on [transactions](../transactions.html#sql-statements).
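-
-As a minimal sketch (using a hypothetical `accounts` table), an explicit transaction simply groups statements between `BEGIN` and `COMMIT`:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> BEGIN;
-> UPDATE accounts SET balance = balance - 100 WHERE id = 1;
-> UPDATE accounts SET balance = balance + 100 WHERE id = 2;
-> COMMIT;
-~~~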
-
-Because CockroachDB enables transactions that can span your entire cluster (including cross-range and cross-table transactions), it ensures correctness through a two-phase transaction protocol with asynchronous cleanup.
-
-### Writes & Reads (Phase 1)
-
-#### Writing
-
-When the Transaction Layer executes write operations, it doesn't directly write values to disk. Instead, it creates two things that help it mediate a distributed transaction:
-
-- A **Transaction Record** stored in the range where the first write occurs, which includes the transaction's current state (which starts as `PENDING`, and ends as either `COMMITTED` or `ABORTED`).
-
-- **Write Intents** for all of a transaction’s writes, which represent a provisional, uncommitted state. These are essentially the same as standard [multi-version concurrency control (MVCC)](storage-layer.html#mvcc) values but also contain a pointer to the Transaction Record stored on the cluster.
-
-As write intents are created, CockroachDB checks for newer committed values. If newer committed values exist, the transaction may be restarted. If existing write intents for the same keys exist, it is resolved as a [transaction conflict](#transaction-conflicts).
-
-If transactions fail for other reasons, such as failing to pass a SQL constraint, the transaction is aborted.
-
-#### Reading
-
-If the transaction has not been aborted, the Transaction Layer begins executing read operations. If a read only encounters standard MVCC values, everything is fine. However, if it encounters any Write Intents, the operation must be resolved as a [transaction conflict](#transaction-conflicts).
-
-### Commits (Phase 2)
-
-CockroachDB checks the running transaction's record to see if it's been `ABORTED`; if it has, it restarts the transaction.
-
-If the transaction passes these checks, it's moved to `COMMITTED`, and CockroachDB reports the transaction's success to the client. At this point, the client is free to begin sending more requests to the cluster.
-
-### Cleanup (Asynchronous Phase 3)
-
-After the transaction has been resolved, all of its Write Intents should be resolved. To do this, the coordinating node––which kept track of all of the keys it wrote––revisits those values and either:
-
-- Resolves the Write Intents to MVCC values by removing the element that points them to the Transaction Record.
-- Deletes the Write Intents.
-
-This is simply an optimization, though. If operations in the future encounter Write Intents, they always check their Transaction Records––any operation can resolve or remove Write Intents by checking the Transaction Record's status.
-
-### Interactions with Other Layers
-
-In relationship to other layers in CockroachDB, the Transaction Layer:
-
-- Receives KV operations from the SQL Layer.
-- Controls the flow of KV operations sent to the Distribution Layer.
-
-## Technical Details & Components
-
-### Time & Hybrid Logical Clocks
-
-In distributed systems, ordering and causality are difficult problems to solve. While it's possible to rely entirely on Raft consensus to maintain serializability, it would be inefficient for reading data. To optimize performance of reads, CockroachDB implements hybrid-logical clocks (HLC) which are composed of a physical component (always close to local wall time) and a logical component (used to distinguish between events with the same physical component). This means that HLC time is always greater than or equal to the wall time. You can find more detail in the [HLC paper](http://www.cse.buffalo.edu/tech-reports/2014-04.pdf).
-
-In terms of transactions, the gateway node picks a timestamp for the transaction using HLC time. Whenever a transaction's timestamp is mentioned, it's an HLC value. This timestamp is used to both track versions of values (through [multiversion concurrency control](storage-layer.html#mvcc)), as well as provide our transactional isolation guarantees.
-
-When nodes send requests to other nodes, they include the timestamp generated by their local HLCs (which includes both physical and logical components). When nodes receive requests, they inform their local HLC of the timestamp supplied with the event by the sender. This is useful in guaranteeing that all data read/written on a node is at a timestamp less than the next HLC time.
-
-This then lets the node primarily responsible for the range (i.e., the Leaseholder) serve reads for data it stores by ensuring the transaction reading the data is at an HLC time greater than the MVCC value it's reading (i.e., the read always happens "after" the write).
-
-#### Max Clock Offset Enforcement
-
-CockroachDB requires moderate levels of clock synchronization to preserve data consistency. For this reason, when a node detects that its clock is out of sync with at least half of the other nodes in the cluster by 80% of the maximum offset allowed (500ms by default), **it crashes immediately**.
-
-While [serializable consistency](https://en.wikipedia.org/wiki/Serializability) is maintained regardless of clock skew, skew outside the configured clock offset bounds can result in violations of single-key linearizability between causally dependent transactions. It's therefore important to prevent clocks from drifting too far by running [NTP](http://www.ntp.org/) or other clock synchronization software on each node.
-
-For more detail about the risks that large clock offsets can cause, see [What happens when node clocks are not properly synchronized?](../operational-faqs.html#what-happens-when-node-clocks-are-not-properly-synchronized)
-
-### Timestamp Cache
-
-To provide serializability, whenever an operation reads a value, we store the operation's timestamp in a timestamp cache, which shows the high-water mark for values being read.
-
-Whenever a write occurs, its timestamp is checked against the timestamp cache. If the timestamp is less than the timestamp cache's latest value, we attempt to push the timestamp for its transaction forward to a later time. In the case of serializable transactions, this might cause them to restart in the second phase of the transaction (see [read refreshing](#read-refreshing)).
-
-### client.Txn and TxnCoordSender
-
-As we mentioned in the SQL layer's architectural overview, CockroachDB converts all SQL statements into key-value (KV) operations, which is how data is ultimately stored and accessed.
-
-All of the KV operations generated from the SQL layer use `client.Txn`, which is the transactional interface for the CockroachDB KV layer––but, as we discussed above, all statements are treated as transactions, so all statements use this interface.
-
-However, `client.Txn` is actually just a wrapper around `TxnCoordSender`, which plays a crucial role in our code base by:
-
-- Dealing with transactions' state. After a transaction is started, `TxnCoordSender` starts asynchronously sending heartbeat messages to that transaction's Transaction Record, which signals that it should be kept alive. If the `TxnCoordSender`'s heartbeating stops, the Transaction Record is moved to the `ABORTED` status.
-- Tracking each written key or key range over the course of the transaction.
-- Clearing the accumulated Write Intents for the transaction when it's committed or aborted. All requests being performed as part of a transaction have to go through the same `TxnCoordSender` to account for all of its Write Intents, which optimizes the cleanup process.
-
-After setting up this bookkeeping, the request is passed to the `DistSender` in the Distribution Layer.
-
-### Transaction Records
-
-When a transaction starts, `TxnCoordSender` writes a Transaction Record to the range containing the first key modified in the transaction. As mentioned above, the Transaction Record provides the system with a source of truth about the status of a transaction.
-
-The Transaction Record expresses one of the following dispositions of a transaction:
-
-- `PENDING`: The initial status of all values, indicating that the Write Intent's transaction is still in progress.
-- `COMMITTED`: Once a transaction has completed, this status indicates that the value can be read.
-- `ABORTED`: If a transaction fails or is aborted by the client, it's moved into this state.
-
-The Transaction Record for a committed transaction remains until all its Write Intents are converted to MVCC values. For an aborted transaction, the Transaction Record can be deleted at any time, which also means that CockroachDB treats missing Transaction Records as if they belong to aborted transactions.
-
-### Write Intents
-
-Values in CockroachDB are not directly written to the storage layer; instead everything is written in a provisional state known as a "Write Intent." These are essentially multi-version concurrency control values (also known as MVCC, which is explained in greater depth in the Storage Layer) with an additional value added to them which identifies the Transaction Record to which the value belongs.
-
-Whenever an operation encounters a Write Intent (instead of an MVCC value), it looks up the status of the Transaction Record to understand how it should treat the Write Intent value.
-
-#### Resolving Write Intent
-
-Whenever an operation encounters a Write Intent for a key, it attempts to "resolve" it, the result of which depends on the Write Intent's Transaction Record:
-
-- `COMMITTED`: The operation reads the Write Intent and converts it to an MVCC value by removing the Write Intent's pointer to the Transaction Record.
-- `ABORTED`: The Write Intent is ignored and deleted.
-- `PENDING`: This signals there is a [transaction conflict](#transaction-conflicts), which must be resolved.
-
-### Isolation Levels
-
-Isolation is an element of [ACID transactions](https://en.wikipedia.org/wiki/ACID), which determines how concurrency is controlled, and ultimately guarantees consistency.
-
-CockroachDB efficiently supports the strongest ANSI transaction isolation level: `SERIALIZABLE`. All other ANSI transaction isolation levels (e.g., `READ UNCOMMITTED`, `READ COMMITTED`, and `REPEATABLE READ`) are automatically upgraded to `SERIALIZABLE`. Weaker isolation levels have historically been used to maximize transaction throughput. However, [recent research](http://www.bailis.org/papers/acidrain-sigmod2017.pdf) has demonstrated that the use of weak isolation levels results in substantial vulnerability to concurrency-based attacks. CockroachDB continues to support an additional non-ANSI isolation level, `SNAPSHOT`, although it is deprecated. Clients can explicitly set a transaction's isolation when starting the transaction (see the example after this list):
-
-- **Serializable Snapshot Isolation** _(Serializable)_ transactions are CockroachDB's default (equivalent to ANSI SQL's `SERIALIZABLE` isolation level, which is the highest of the four standard levels). This isolation level does not allow any anomalies in your data, and is enforced by requiring the client to retry transactions if serializability violations are possible.
-
-- **Snapshot Isolation** _(Snapshot)_ transactions trade correctness in order to avoid retries when serializability violations are possible. This is achieved by always reading at an initial transaction timestamp, but allowing the transaction's commit timestamp to be pushed forward in the event of [transaction conflicts](#transaction-conflicts). Snapshot isolation cannot prevent an anomaly known as [write skew](https://en.wikipedia.org/wiki/Snapshot_isolation).
-
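-For example, a client that wants the deprecated `SNAPSHOT` level can request it explicitly when beginning the transaction (a minimal sketch using the [`BEGIN`](begin-transaction.html) syntax):
-
-~~~ sql
-> BEGIN TRANSACTION ISOLATION LEVEL SNAPSHOT;
-~~~
-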
-### Transaction Conflicts
-
-CockroachDB's transactions can encounter the following types of conflicts that involve running into an intent:
-
-- **Write/Write**, where two `PENDING` transactions create Write Intents for the same key.
-- **Write/Read**, where a read encounters an existing Write Intent with a timestamp lower than its own.
-
-To make this simpler to understand, we'll call the first transaction `TxnA` and the transaction that encounters its Write Intents `TxnB`.
-
-CockroachDB proceeds through the following steps until one of the transactions is aborted, has its timestamp pushed, or enters the `TxnWaitQueue`.
-
-1. If a transaction has an explicit priority set (i.e., `HIGH` or `LOW`), the transaction with the lower priority is aborted (in the write/write case) or has its timestamp pushed (in the write/read case). A sketch of setting an explicit priority follows this list.
-
-2. `TxnB` tries to push `TxnA`'s timestamp forward.
-
- This succeeds only in the case that `TxnA` has snapshot isolation and `TxnB`'s operation is a read. In this case, the [write skew](https://en.wikipedia.org/wiki/Snapshot_isolation) anomaly occurs.
-
-3. `TxnB` enters the `TxnWaitQueue` to wait for `TxnA` to complete.
-
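-As noted in step 1, a client can assign an explicit priority when beginning a transaction (a minimal sketch using the [`BEGIN`](begin-transaction.html) syntax):
-
-~~~ sql
-> BEGIN TRANSACTION PRIORITY HIGH;
-~~~
-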
-Additionally, the following types of conflicts that do not involve running into intents can arise:
-
-- **Write after read**, when a write with a lower timestamp encounters a later read. This is handled through the [Timestamp Cache](#timestamp-cache).
-- **Read within uncertainty window**, when a read encounters a value with a higher timestamp and it's ambiguous whether the value should be considered in the future or in the past of the transaction because of possible *clock skew*. This is handled by attempting to push the transaction's timestamp beyond the uncertain value (see [read refreshing](#read-refreshing)). Note that if the transaction has to be retried, reads never encounter uncertainty issues on any node it previously visited, and there is never any uncertainty on values read from the transaction's gateway node.
-
-### TxnWaitQueue
-
-The `TxnWaitQueue` tracks all transactions that could not push a transaction whose writes they encountered, and must wait for the blocking transaction to complete before they can proceed.
-
-The `TxnWaitQueue`'s structure is a map of blocking transaction IDs to those they're blocking. For example:
-
-~~~
-txnA -> txn1, txn2
-txnB -> txn3, txn4, txn5
-~~~
-
-Importantly, all of this activity happens on a single node: the leader of the Raft group for the range that contains the Transaction Record.
-
-Once the transaction does resolve––by committing or aborting––a signal is sent to the `TxnWaitQueue`, which lets all transactions that were blocked by the resolved transaction begin executing.
-
-Blocked transactions also check the status of their own transaction to ensure they're still active. If the blocked transaction was aborted, it's simply removed from the `TxnWaitQueue`.
-
-If there is a deadlock between transactions (i.e., they're each blocked by each other's Write Intents), one of the transactions is randomly aborted. For example, this would happen if `TxnA` blocked `TxnB` on `key1` while `TxnB` blocked `TxnA` on `key2`.
-
-### Read refreshing
-
-Whenever a transaction's timestamp has been pushed, additional checks are required before it is allowed to commit at the pushed timestamp: any values the transaction previously read must be rechecked to verify that no writes have occurred between the original transaction timestamp and the pushed timestamp. This check prevents serializability violations. It is performed by keeping track of all the transaction's reads and re-validating them with a dedicated `RefreshRequest`. Transactions perform this check at commit time if they've been pushed by a different transaction or by the timestamp cache, or immediately upon encountering a `ReadWithinUncertaintyIntervalError`, before continuing.
-If the refresh succeeds, the transaction is allowed to commit; if it is unsuccessful, the transaction must be retried at the pushed timestamp.
-
-## Technical Interactions with Other Layers
-
-### Transaction & SQL Layer
-
-The Transaction Layer receives KV operations from `planNodes` executed in the SQL Layer.
-
-### Transaction & Distribution Layer
-
-The `TxnCoordSender` sends its KV requests to `DistSender` in the Distribution Layer.
-
-## What's Next?
-
-Learn how CockroachDB presents a unified view of your cluster's data in the [Distribution Layer](distribution-layer.html).
diff --git a/src/current/v2.0/array.md b/src/current/v2.0/array.md
deleted file mode 100644
index 59929d54b19..00000000000
--- a/src/current/v2.0/array.md
+++ /dev/null
@@ -1,217 +0,0 @@
----
-title: ARRAY
-summary: The ARRAY data type stores one-dimensional, 1-indexed, homogeneous arrays of any non-array data types.
-toc: true
----
-
-New in v1.1: The `ARRAY` data type stores one-dimensional, 1-indexed, homogeneous arrays of any non-array [data type](data-types.html).
-
-The `ARRAY` data type is useful for ensuring compatibility with ORMs and other tools. However, if such compatibility is not a concern, it's more flexible to design your schema with normalized tables.
-
-
-{{site.data.alerts.callout_info}} CockroachDB does not support nested arrays, creating database indexes on arrays, or ordering by arrays.{{site.data.alerts.end}}
-
-## Syntax
-
-A value of data type `ARRAY` can be expressed in the following ways:
-
-
-- Appending square brackets (`[]`) to any non-array [data type](data-types.html).
-- Adding the term `ARRAY` to any non-array [data type](data-types.html).
-
-## Size
-
-The size of an `ARRAY` value is variable, but it's recommended to keep values under 1 MB to ensure performance. Above that threshold, [write amplification](https://en.wikipedia.org/wiki/Write_amplification) and other considerations may cause significant performance degradation.
-
-## Examples
-
-{{site.data.alerts.callout_success}}
-For a complete list of array functions built into CockroachDB, see the [documentation on array functions](functions-and-operators.html#array-functions).
-{{site.data.alerts.end}}
-
-### Creating an array column by appending square brackets
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE a (b STRING[]);
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> INSERT INTO a VALUES (ARRAY['sky', 'road', 'car']);
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM a;
-~~~
-
-~~~
-+----------------------+
-| b |
-+----------------------+
-| {"sky","road","car"} |
-+----------------------+
-(1 row)
-~~~
-
-### Creating an array column by adding the term `ARRAY`
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE c (d INT ARRAY);
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> INSERT INTO c VALUES (ARRAY[10,20,30]);
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM c;
-~~~
-
-~~~
-+------------+
-| d |
-+------------+
-| {10,20,30} |
-+------------+
-(1 row)
-~~~
-
-### Accessing an array element using array index
-{{site.data.alerts.callout_info}} Arrays in CockroachDB are 1-indexed. {{site.data.alerts.end}}
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM c;
-~~~
-
-~~~
-+------------+
-| d |
-+------------+
-| {10,20,30} |
-+------------+
-(1 row)
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT d[2] FROM c;
-~~~
-
-~~~
-+------+
-| d[2] |
-+------+
-| 20 |
-+------+
-(1 row)
-~~~
-
-### Appending an element to an array
-
-#### Using the `array_append` function
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM c;
-~~~
-
-~~~
-+------------+
-| d |
-+------------+
-| {10,20,30} |
-+------------+
-(1 row)
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> UPDATE c SET d = array_append(d, 40) WHERE d[3] = 30;
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM c;
-~~~
-
-~~~
-+---------------+
-| d |
-+---------------+
-| {10,20,30,40} |
-+---------------+
-(1 row)
-~~~
-
-#### Using the append (`||`) operator
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM c;
-~~~
-
-~~~
-+---------------+
-| d |
-+---------------+
-| {10,20,30,40} |
-+---------------+
-(1 row)
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> UPDATE c SET d = d || 50 WHERE d[4] = 40;
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM c;
-~~~
-
-~~~
-+------------------+
-| d |
-+------------------+
-| {10,20,30,40,50} |
-+------------------+
-(1 row)
-~~~
-
-## Supported Casting & Conversion New in v2.0
-
-[Casting](data-types.html#data-type-conversions-casts) between `ARRAY` values is supported when the data types of the arrays support casting. For example, it is possible to cast from a `BOOL` array to an `INT` array but not from a `BOOL` array to a `TIMESTAMP` array:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT ARRAY[true,false,true]::INT[];
-~~~
-
-~~~
-+--------------------------------+
-| ARRAY[true, false, |
-| true]::INT[] |
-+--------------------------------+
-| {1,0,1} |
-+--------------------------------+
-(1 row)
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT ARRAY[true,false,true]::TIMESTAMP[];
-~~~
-
-~~~
-pq: invalid cast: bool[] -> TIMESTAMP[]
-~~~
-
-## See Also
-
-[Data Types](data-types.html)
diff --git a/src/current/v2.0/as-of-system-time.md b/src/current/v2.0/as-of-system-time.md
deleted file mode 100644
index 3f710bc2411..00000000000
--- a/src/current/v2.0/as-of-system-time.md
+++ /dev/null
@@ -1,166 +0,0 @@
----
-title: AS OF SYSTEM TIME
-summary: The AS OF SYSTEM TIME clause executes a statement as of a specified time.
-toc: true
----
-
-The `AS OF SYSTEM TIME timestamp` clause causes statements to execute
-using the database contents "as of" a specified time in the past.
-
-This clause can be used to read historical data (also known as "[time
-travel
-queries](https://www.cockroachlabs.com/blog/time-travel-queries-select-witty_subtitle-the_future/)")
-and can also be advantageous for performance as it decreases
-transaction conflicts. For more details, see [SQL Performance Best
-Practices](performance-best-practices-overview.html#use-as-of-system-time-to-decrease-conflicts-with-long-running-queries).
-
-{{site.data.alerts.callout_info}}Historical data is available only within the garbage collection window, which is determined by the ttlseconds field in the replication zone configuration.{{site.data.alerts.end}}
-
-## Synopsis
-
-The `AS OF SYSTEM TIME` clause is supported in multiple SQL contexts,
-including but not limited to:
-
-- In [`SELECT` clauses](select-clause.html), at the very end of the `FROM` sub-clause.
-- In [`BACKUP`](backup.html), after the parameters of the `TO` sub-clause.
-- In [`RESTORE`](restore.html), after the parameters of the `FROM` sub-clause.
-
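-For example, a sketch of the clause in a `BACKUP` statement (assuming an enterprise cluster, a `bank` database, and a Google Cloud Storage bucket, as in the [`BACKUP`](backup.html) examples):
-
-~~~ sql
-> BACKUP DATABASE bank \
-TO 'gs://acme-co-backup/database-bank-2017-03-27-weekly' \
-AS OF SYSTEM TIME '2017-03-26 23:59:00';
-~~~
-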
-Currently, CockroachDB does not support `AS OF SYSTEM TIME` in
-[explicit transactions](transactions.html). This limitation may be
-lifted in the future.
-
-## Parameters
-
-The `timestamp` argument supports the following formats:
-
-Format | Notes
----|---
-[`INT`](int.html) | Nanoseconds since the Unix epoch.
-[`STRING`](string.html) | A [`TIMESTAMP`](timestamp.html) or [`INT`](int.html) number of nanoseconds.
-
-## Examples
-
-### Select Historical Data (Time-Travel)
-
-Imagine this example represents the database's current data:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT name, balance
- FROM accounts
- WHERE name = 'Edna Barath';
-~~~
-~~~
-+-------------+---------+
-| name | balance |
-+-------------+---------+
-| Edna Barath | 750 |
-| Edna Barath | 2200 |
-+-------------+---------+
-~~~
-
-We could instead retrieve the values as they were on October 3, 2016 at 12:45 UTC:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT name, balance
- FROM accounts
- AS OF SYSTEM TIME '2016-10-03 12:45:00'
- WHERE name = 'Edna Barath';
-~~~
-~~~
-+-------------+---------+
-| name | balance |
-+-------------+---------+
-| Edna Barath | 450 |
-| Edna Barath | 2000 |
-+-------------+---------+
-~~~
-
-
-### Using Different Timestamp Formats
-
-Assuming the following statements are run at `2016-01-01 12:00:00`, they would execute as of `2016-01-01 08:00:00`:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM t AS OF SYSTEM TIME '2016-01-01 08:00:00'
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM t AS OF SYSTEM TIME 1451635200000000000
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM t AS OF SYSTEM TIME '1451635200000000000'
-~~~
-
-### Selecting from Multiple Tables
-
-{{site.data.alerts.callout_info}}It is not yet possible to select from multiple tables at different timestamps. The entire query runs at the specified time in the past.{{site.data.alerts.end}}
-
-When selecting over multiple tables in a single `FROM` clause, the `AS
-OF SYSTEM TIME` clause must appear at the very end and applies to the
-entire `SELECT` clause.
-
-For example:
-
-{% include copy-clipboard.html %}
-~~~sql
-> SELECT * FROM t, u, v AS OF SYSTEM TIME '2016-01-01 08:00:00';
-~~~
-
-{% include copy-clipboard.html %}
-~~~sql
-> SELECT * FROM t JOIN u ON t.x = u.y AS OF SYSTEM TIME '2016-01-01 08:00:00';
-~~~
-
-{% include copy-clipboard.html %}
-~~~sql
-> SELECT * FROM (SELECT * FROM t), (SELECT * FROM u) AS OF SYSTEM TIME '2016-01-01 08:00:00';
-~~~
-
-### Using `AS OF SYSTEM TIME` in Subqueries
-
-To enable time travel, the `AS OF SYSTEM TIME` clause must appear in
-at least the top-level statement. It is not valid to use it only in a
-[subquery](subqueries.html).
-
-For example, the following is invalid:
-
-~~~
-SELECT * FROM (SELECT * FROM t AS OF SYSTEM TIME '2016-01-01 08:00:00'), u
-~~~
-
-To facilitate the composition of larger queries from simpler queries,
-CockroachDB allows `AS OF SYSTEM TIME` in sub-queries under the
-following conditions:
-
-- The top level query also specifies `AS OF SYSTEM TIME`.
-- All the `AS OF SYSTEM TIME` clauses specify the same timestamp.
-
-For example:
-
-{% include copy-clipboard.html %}
-~~~sql
-> SELECT * FROM (SELECT * FROM t AS OF SYSTEM TIME '2016-01-01 08:00:00') tp
- JOIN u ON tp.x = u.y
- AS OF SYSTEM TIME '2016-01-01 08:00:00' -- same timestamp as above - OK.
- WHERE x < 123;
-~~~
-
-## See Also
-
-- [Select Historical Data](select-clause.html#select-historical-data-time-travel)
-- [Time-Travel Queries](https://www.cockroachlabs.com/blog/time-travel-queries-select-witty_subtitle-the_future/)
-
-## Tech Note
-
-{{site.data.alerts.callout_info}}Although the following format is supported, it is not intended to be used by most users.{{site.data.alerts.end}}
-
-HLC timestamps can be specified using a [`DECIMAL`](decimal.html). The
-integer part is the wall time in nanoseconds. The fractional part is
-the logical counter, a 10-digit integer. This is the same format as
-produced by the `cluster_logical_timestamp()` function.
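-
-As a sketch, you can observe the current HLC timestamp with the function mentioned above, and (assuming a table `t`) pass a decimal in the same format back to `AS OF SYSTEM TIME`:
-
-~~~ sql
-> SELECT cluster_logical_timestamp();
-> SELECT * FROM t AS OF SYSTEM TIME 1451635200000000000.0000000001;
-~~~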
diff --git a/src/current/v2.0/automated-scaling-and-repair.md b/src/current/v2.0/automated-scaling-and-repair.md
deleted file mode 100644
index 4d708823f30..00000000000
--- a/src/current/v2.0/automated-scaling-and-repair.md
+++ /dev/null
@@ -1,17 +0,0 @@
----
-title: Automated Scaling & Repair
-summary: CockroachDB transparently manages scale with an upgrade path from a single node to hundreds.
-toc: false
----
-
-CockroachDB scales horizontally with minimal operator overhead. You can run it on your local computer, a single server, a corporate development cluster, or a private or public cloud. [Adding capacity](start-a-node.html) is as easy as pointing a new node at the running cluster.
-
-At the key-value level, CockroachDB starts off with a single, empty range. As you put data in, this single range eventually reaches a threshold size (64MB by default). When that happens, the data splits into two ranges, each covering a contiguous segment of the entire key-value space. This process continues indefinitely; as new data flows in, existing ranges continue to split into new ranges, aiming to keep a relatively small and consistent range size.
-
-When your cluster spans multiple nodes (physical machines, virtual machines, or containers), newly split ranges are automatically rebalanced to nodes with more capacity. CockroachDB communicates opportunities for rebalancing using a peer-to-peer [gossip protocol](https://en.wikipedia.org/wiki/Gossip_protocol) by which nodes exchange network addresses, store capacity, and other information.
-
-- Add resources to scale horizontally, with zero hassle and no downtime
-- Self-organizes, self-heals, and automatically rebalances
-- Migrate data seamlessly between clouds
-
-
diff --git a/src/current/v2.0/back-up-data.md b/src/current/v2.0/back-up-data.md
deleted file mode 100644
index e9a02f47c9f..00000000000
--- a/src/current/v2.0/back-up-data.md
+++ /dev/null
@@ -1,26 +0,0 @@
----
-title: Back Up Data
-summary: Learn how to back up and restore a CockroachDB cluster.
-toc: false
----
-
-CockroachDB offers the following methods to back up your cluster's data:
-
-- [`cockroach dump`](sql-dump.html), which is a CLI command to dump/export your database's schema and table data.
-- [`BACKUP`](backup.html) (*[enterprise license](https://www.cockroachlabs.com/pricing/) only*), which is a SQL statement that backs up your cluster to cloud or network file storage.
-
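-For example, a minimal sketch of the `cockroach dump` method (assuming an insecure local cluster and a database named `bank`; the output file name is illustrative):
-
-~~~ shell
-$ cockroach dump bank --insecure > bank-backup.sql
-~~~
-
-The resulting file contains the SQL statements needed to recreate the schema and data.
-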
-### Details
-
-We recommend creating daily backups of your data as an operational best practice.
-
-However, because CockroachDB is designed with high fault tolerance, backups are primarily needed for disaster recovery (i.e., if your cluster loses a majority of its nodes). Isolated issues (such as small-scale node outages) do not require any intervention.
-
-## Restore
-
-For information about restoring your backed up data, see [Restoring Data](restore-data.html).
-
-## See Also
-
-- [Restore Data](restore-data.html)
-- [Use the Built-in SQL Client](use-the-built-in-sql-client.html)
-- [Other Cockroach Commands](cockroach-commands.html)
diff --git a/src/current/v2.0/backup.md b/src/current/v2.0/backup.md
deleted file mode 100644
index 94db064ac17..00000000000
--- a/src/current/v2.0/backup.md
+++ /dev/null
@@ -1,194 +0,0 @@
----
-title: BACKUP
-summary: Back up your CockroachDB cluster to cloud storage services such as AWS S3 or Google Cloud Storage, or to a network file system (NFS).
-toc: true
----
-
-{{site.data.alerts.callout_danger}}The BACKUP feature is only available to enterprise users. For non-enterprise backups, see cockroach dump.{{site.data.alerts.end}}
-
-CockroachDB's `BACKUP` [statement](sql-statements.html) allows you to create full or incremental backups of your cluster's schema and data that are consistent as of a given timestamp. Backups can be with or without [revision history](backup.html#backups-with-revision-history-new-in-v2-0).
-
-Because CockroachDB is designed with high fault tolerance, these backups are designed primarily for disaster recovery (i.e., if your cluster loses a majority of its nodes) through [`RESTORE`](restore.html). Isolated issues (such as small-scale node outages) do not require any intervention.
-
-
-## Functional Details
-
-### Backup Targets
-
-You can back up entire tables (which automatically includes their indexes) or [views](views.html). Backing up a database simply backs up all of its tables and views.
-
-{{site.data.alerts.callout_info}}BACKUP only offers table-level granularity; it does not support backing up subsets of a table.{{site.data.alerts.end}}
-
-### Object Dependencies
-
-Dependent objects must be backed up at the same time as the objects they depend on.
-
-Object | Depends On
--------|-----------
-Table with [foreign key](foreign-key.html) constraints | The table it `REFERENCES`; however, this dependency can be [removed during the restore](restore.html#skip_missing_foreign_keys).
-Table with a [sequence](create-sequence.html) | New in v2.0: The sequence it uses; however, this dependency can be [removed during the restore](restore.html#skip_missing_sequences).
-[Views](views.html) | The tables used in the view's `SELECT` statement.
-[Interleaved tables](interleave-in-parent.html) | The parent table in the [interleaved hierarchy](interleave-in-parent.html#interleaved-hierarchy).
-
-### Users and Privileges
-
-Every backup you create includes `system.users`, which stores your users and their passwords. To restore your users, you must use [this procedure](restore.html#restoring-users-from-system-users-backup).
-
-Restored tables inherit privilege grants from the target database; they do not preserve privilege grants from the backed up table because the restoring cluster may have different users.
-
-Table-level privileges must be [granted to users](grant.html) after the restore is complete.
-
-### Backup Types
-
-CockroachDB offers two types of backups: full and incremental.
-
-#### Full Backups
-
-Full backups contain an unreplicated copy of your data and can always be used to restore your cluster. These files are roughly the size of your data and require greater resources to produce than incremental backups. You can take full backups as of a given timestamp and (optionally) include the available [revision history](backup.html#backups-with-revision-history-new-in-v2-0).
-
-#### Incremental Backups
-
-Incremental backups are smaller and faster to produce than full backups because they contain only the data that has changed since a base set of backups you specify (which must include one full backup, and can include many incremental backups). You can take incremental backups either as of a given timestamp or with full [revision history](backup.html#backups-with-revision-history-new-in-v2-0).
-
-Note the following restrictions:
-
-- Incremental backups can only be created within the garbage collection period of the base backup's most recent timestamp. This is because incremental backups are created by finding which data has been created or modified since the most recent timestamp in the base backup––that timestamp data, though, is deleted by the garbage collection process.
-
- You can configure garbage collection periods using the `ttlseconds` [replication zone setting](configure-replication-zones.html).
-
-- It is not possible to create an incremental backup if one or more tables were [created](create-table.html), [dropped](drop-table.html), or [truncated](truncate.html) after the full backup. In this case, you must create a new [full backup](#full-backups).
-
-### Backups with Revision History New in v2.0
-
-{% include {{ page.version.version }}/misc/beta-warning.md %}
-
-You can create full or incremental backups with revision history:
-
-- Taking full backups with revision history allows you to back up every change made within the garbage collection period leading up to and including the given timestamp.
-- Taking incremental backups with revision history allows you to back up every change made since the last backup and within the garbage collection period leading up to and including the given timestamp. You can take incremental backups with revision history even when your previous full or incremental backups were taken without revision history.
-
-You can configure garbage collection periods using the `ttlseconds` [replication zone setting](configure-replication-zones.html). Taking backups with revision history allows for point-in-time restores within the revision history.
-
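-As a sketch (assuming an insecure local cluster and a database named `bank`; see [Configure Replication Zones](configure-replication-zones.html) for the authoritative syntax), the garbage collection window might be raised to roughly 28 hours like so:
-
-~~~ shell
-$ echo 'gc: {ttlseconds: 100000}' | cockroach zone set bank --insecure -f -
-~~~
-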
-## Performance
-
-The `BACKUP` process minimizes its impact on the cluster's performance by distributing work to all nodes. Each node backs up only a specific subset of the data it stores (the data for which it serves writes; more details about this architectural concept forthcoming), with no two nodes backing up the same data.
-
-For best performance, we also recommend always starting backups with a specific [timestamp](timestamp.html) at least 10 seconds in the past. For example:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> BACKUP...AS OF SYSTEM TIME '2017-06-09 16:13:55.571516+00:00';
-~~~
-
-This improves performance by decreasing the likelihood that the `BACKUP` will be [retried because it contends with other statements/transactions](transactions.html#transaction-retries). However, because `AS OF SYSTEM TIME` returns historical data, your reads might be stale.
-
-## Automating Backups
-
-We recommend automating daily backups of your cluster.
-
-To automate backups, you must have a client send the `BACKUP` statement to the cluster.
-
-Once the backup is complete, your client will receive a `BACKUP` response.
-
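-For example, a minimal sketch of a daily cron entry that uses the built-in SQL client as that client (the schedule, bucket, and database name are illustrative; `%` must be escaped in crontab entries):
-
-~~~ shell
-# Full backup of the bank database every day at 01:00, at a unique path per day.
-0 1 * * * cockroach sql --insecure -e "BACKUP DATABASE bank TO 'gs://acme-co-backup/daily-$(date +\%F)'"
-~~~
-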
-## Viewing and Controlling Backup Jobs
-
-After CockroachDB successfully initiates a backup, it registers the backup as a job, which you can view with [`SHOW JOBS`](show-jobs.html).
-
-After the backup has been initiated, you can control it with [`PAUSE JOB`](pause-job.html), [`RESUME JOB`](resume-job.html), and [`CANCEL JOB`](cancel-job.html).
-
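-For example, a sketch of pausing and then resuming a backup job (the job ID below is a placeholder; use the ID reported by `SHOW JOBS`):
-
-~~~ sql
-> SHOW JOBS;
-> PAUSE JOB 27536791415282;
-> RESUME JOB 27536791415282;
-~~~
-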
-## Synopsis
-
-
- {% include {{ page.version.version }}/sql/diagrams/backup.html %}
-
-
-{{site.data.alerts.callout_info}}The BACKUP statement cannot be used within a transaction.{{site.data.alerts.end}}
-
-## Required Privileges
-
-Only the `root` user can run `BACKUP`.
-
-## Parameters
-
-| Parameter | Description |
-|-----------|-------------|
-| `table_pattern` | The table or [view](views.html) you want to back up. |
-| `name` | The name of the database you want to back up (i.e., create backups of all tables and views in the database).|
-| `destination` | The URL where you want to store the backup. For information about this URL structure, see [Backup File URLs](#backup-file-urls). |
-| `AS OF SYSTEM TIME timestamp` | Back up data as it existed as of [`timestamp`](as-of-system-time.html). The `timestamp` must be more recent than your cluster's last garbage collection (which defaults to occur every 25 hours, but is [configurable per table](configure-replication-zones.html#replication-zone-format)). |
-| `WITH revision_history` | New in v2.0: Create a backup with full [revision history](backup.html#backups-with-revision-history-new-in-v2-0) that records every change made to the cluster within the garbage collection period leading up to and including the given timestamp. |
-| `INCREMENTAL FROM full_backup_location` | Create an incremental backup using the full backup stored at the URL `full_backup_location` as its base. For information about this URL structure, see [Backup File URLs](#backup-file-urls). **Note:** It is not possible to create an incremental backup if one or more tables were [created](create-table.html), [dropped](drop-table.html), or [truncated](truncate.html) after the full backup. In this case, you must create a new [full backup](#full-backups). |
-| `incremental_backup_location` | Create an incremental backup that includes all backups listed at the provided URLs. Lists of incremental backups must be sorted from oldest to newest. The newest incremental backup's timestamp must be within the table's garbage collection period. For information about this URL structure, see [Backup File URLs](#backup-file-urls). For more information about garbage collection, see [Configure Replication Zones](configure-replication-zones.html#replication-zone-format). |
-
-### Backup File URLs
-
-We will use the URL provided to construct a secure API call to the service you specify. The path to each backup must be unique, and the URL for your backup's destination/locations must use the following format:
-
-{% include {{ page.version.version }}/misc/external-urls.md %}
-
-## Examples
-
-Per our guidance in the [Performance](#performance) section, we recommend starting backups from a time at least 10 seconds in the past using [`AS OF SYSTEM TIME`](as-of-system-time.html).
-
-### Backup a Single Table or View
-
-{% include copy-clipboard.html %}
-~~~ sql
-> BACKUP bank.customers \
-TO 'gs://acme-co-backup/database-bank-2017-03-27-weekly' \
-AS OF SYSTEM TIME '2017-03-26 23:59:00';
-~~~
-
-### Backup Multiple Tables
-
-{% include copy-clipboard.html %}
-~~~ sql
-> BACKUP bank.customers, bank.accounts \
-TO 'gs://acme-co-backup/database-bank-2017-03-27-weekly' \
-AS OF SYSTEM TIME '2017-03-26 23:59:00';
-~~~
-
-### Backup an Entire Database
-
-{% include copy-clipboard.html %}
-~~~ sql
-> BACKUP DATABASE bank \
-TO 'gs://acme-co-backup/database-bank-2017-03-27-weekly' \
-AS OF SYSTEM TIME '2017-03-26 23:59:00';
-~~~
-
-### Backup with Revision History New in v2.0
-
-{% include copy-clipboard.html %}
-~~~ sql
-> BACKUP DATABASE bank \
-TO 'gs://acme-co-backup/database-bank-2017-03-27-weekly' \
-AS OF SYSTEM TIME '2017-03-26 23:59:00' WITH revision_history;
-~~~
-
-### Create Incremental Backups
-
-Incremental backups must be based off of full backups you've already created.
-
-{% include copy-clipboard.html %}
-~~~ sql
-> BACKUP DATABASE bank \
-TO 'gs://acme-co-backup/db/bank/2017-03-29-nightly' \
-AS OF SYSTEM TIME '2017-03-28 23:59:00' \
-INCREMENTAL FROM 'gs://acme-co-backup/database-bank-2017-03-27-weekly', 'gs://acme-co-backup/database-bank-2017-03-28-nightly';
-~~~
-
-### Create Incremental Backups with Revision History New in v2.0
-
-{% include copy-clipboard.html %}
-~~~ sql
-> BACKUP DATABASE bank \
-TO 'gs://acme-co-backup/database-bank-2017-03-29-nightly' \
-AS OF SYSTEM TIME '2017-03-28 23:59:00' \
-INCREMENTAL FROM 'gs://acme-co-backup/database-bank-2017-03-27-weekly', 'gs://acme-co-backup/database-bank-2017-03-28-nightly' WITH revision_history;
-~~~
-
-## See Also
-
-- [`RESTORE`](restore.html)
-- [Configure Replication Zones](configure-replication-zones.html)
diff --git a/src/current/v2.0/begin-transaction.md b/src/current/v2.0/begin-transaction.md
deleted file mode 100644
index e8f707211e6..00000000000
--- a/src/current/v2.0/begin-transaction.md
+++ /dev/null
@@ -1,120 +0,0 @@
----
-title: BEGIN
-summary: Initiate a SQL transaction with the BEGIN statement in CockroachDB.
-toc: true
----
-
-The `BEGIN` [statement](sql-statements.html) initiates a [transaction](transactions.html), which either successfully executes all of the statements it contains or none at all.
-
-{{site.data.alerts.callout_danger}}When using transactions, your application should include logic to retry transactions that are aborted to break a dependency cycle between concurrent transactions.{{site.data.alerts.end}}
-
-
-## Synopsis
-
-
-{% include {{ page.version.version }}/sql/diagrams/begin_transaction.html %}
-
-
-## Required Privileges
-
-No [privileges](privileges.html) are required to initiate a transaction. However, privileges are required for each statement within a transaction.
-
-## Aliases
-
-In CockroachDB, the following are aliases for the `BEGIN` statement:
-
-- `BEGIN TRANSACTION`
-- `START TRANSACTION`
-
-The following aliases also exist for [isolation levels](transactions.html#isolation-levels):
-
-- `READ UNCOMMITTED`, `READ COMMITTED`, and `REPEATABLE READ` are aliases for `SERIALIZABLE`
-
-For more information on isolation level aliases, see [Comparison to ANSI SQL Isolation Levels](transactions.html#comparison-to-ansi-sql-isolation-levels).
-
-## Parameters
-
-| Parameter | Description |
-|-----------|-------------|
-| `ISOLATION LEVEL` | By default, transactions in CockroachDB implement the strongest ANSI isolation level: `SERIALIZABLE`. At this isolation level, transactions will never result in anomalies. The `SNAPSHOT` isolation level is still supported for backwards compatibility, but you should avoid using it: it provides little benefit in terms of performance and can result in inconsistent state under certain complex workloads. For more information, see [Transactions: Isolation Levels](transactions.html#isolation-levels). **Default**: `SERIALIZABLE` |
-| `PRIORITY` | If you do not want the transaction to run with `NORMAL` priority, you can set it to `LOW` or `HIGH`. Transactions with higher priority are less likely to need to be retried. For more information, see [Transactions: Priorities](transactions.html#transaction-priorities). **Default**: `NORMAL` |
-
-## Examples
-
-### Begin a Transaction
-
-#### Use Default Settings
-
-Without modifying the `BEGIN` statement, the transaction uses `SERIALIZABLE` isolation and `NORMAL` priority.
-
-~~~ sql
-> BEGIN;
-
-> SAVEPOINT cockroach_restart;
-
-> UPDATE products SET inventory = 0 WHERE sku = '8675309';
-
-> INSERT INTO orders (customer, sku, status) VALUES (1001, '8675309', 'new');
-
-> RELEASE SAVEPOINT cockroach_restart;
-
-> COMMIT;
-~~~
-
-{{site.data.alerts.callout_danger}}This example assumes you're using client-side intervention to handle transaction retries.{{site.data.alerts.end}}
-
-#### Change Isolation Level & Priority
-
-You can set a transaction's isolation level to `SNAPSHOT`, as well as its priority to `LOW` or `HIGH`.
-
-~~~ sql
-> BEGIN ISOLATION LEVEL SNAPSHOT, PRIORITY HIGH;
-
-> SAVEPOINT cockroach_restart;
-
-> UPDATE products SET inventory = 0 WHERE sku = '8675309';
-
-> INSERT INTO orders (customer, sku, status) VALUES (1001, '8675309', 'new');
-
-> RELEASE SAVEPOINT cockroach_restart;
-
-> COMMIT;
-~~~
-
-You can also set a transaction's isolation level and priority with [`SET TRANSACTION`](set-transaction.html).
-
-{{site.data.alerts.callout_danger}}This example assumes you're using client-side intervention to handle transaction retries.{{site.data.alerts.end}}
-
-### Begin a Transaction with Automatic Retries
-
-CockroachDB will [automatically retry](transactions.html#transaction-retries) all transactions that contain both `BEGIN` and `COMMIT` in the same batch. Batching is controlled by your driver or client's behavior; it means that CockroachDB receives all of the statements as a single unit, instead of as a number of separate requests.
-
-From the perspective of CockroachDB, a transaction sent as a batch looks like this:
-
-~~~ sql
-> BEGIN; DELETE FROM customers WHERE id = 1; DELETE FROM orders WHERE customer = 1; COMMIT;
-~~~
-
-However, in your application's code, batched transactions are often just multiple statements sent at once. For example, in Go, this transaction would be sent as a single batch (and automatically retried):
-
-~~~ go
-// A raw string literal (backticks) lets the batched statements span
-// multiple lines; Exec sends them to CockroachDB as one request.
-db.Exec(`
-    BEGIN;
-    DELETE FROM customers WHERE id = 1;
-    DELETE FROM orders WHERE customer = 1;
-    COMMIT;
-`)
-~~~
-
-Issuing statements this way signals to CockroachDB that you do not need to change any of the statements' values if the transaction doesn't immediately succeed, so it can continually retry the transaction until it's accepted.
-
-## See Also
-
-- [Transactions](transactions.html)
-- [`COMMIT`](commit-transaction.html)
-- [`SAVEPOINT`](savepoint.html)
-- [`RELEASE SAVEPOINT`](release-savepoint.html)
-- [`ROLLBACK`](rollback-transaction.html)
diff --git a/src/current/v2.0/bool.md b/src/current/v2.0/bool.md
deleted file mode 100644
index b86a8243b49..00000000000
--- a/src/current/v2.0/bool.md
+++ /dev/null
@@ -1,72 +0,0 @@
----
-title: BOOL
-summary: The BOOL data type stores Boolean values of false or true.
-toc: true
----
-
-The `BOOL` [data type](data-types.html) stores a Boolean value of `false` or `true`.
-
-
-## Aliases
-
-In CockroachDB, `BOOLEAN` is an alias for `BOOL`.
-
-## Syntax
-
-There are two predefined
-[named constants](sql-constants.html#named-constants) for `BOOL`:
-`TRUE` and `FALSE` (the names are case-insensitive).
-
-Alternatively, a Boolean value can be obtained by coercing a numeric
-value: zero is coerced to `FALSE`, and any non-zero value to `TRUE`.
-
-- `CAST(0 AS BOOL)` (false)
-- `CAST(123 AS BOOL)` (true)
-
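-A minimal sketch of both coercions, which return `false` and `true` respectively:
-
-~~~ sql
-> SELECT CAST(0 AS BOOL), CAST(123 AS BOOL);
-~~~
-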
-## Size
-
-A `BOOL` value is 1 byte in width, but the total storage size is likely to be larger due to CockroachDB metadata.
-
-## Examples
-
-~~~ sql
-> CREATE TABLE bool (a INT PRIMARY KEY, b BOOL, c BOOLEAN);
-
-> SHOW COLUMNS FROM bool;
-~~~
-~~~
-+-------+------+-------+---------+
-| Field | Type | Null | Default |
-+-------+------+-------+---------+
-| a | INT | false | NULL |
-| b | BOOL | true | NULL |
-| c | BOOL | true | NULL |
-+-------+------+-------+---------+
-~~~
-~~~ sql
-> INSERT INTO bool VALUES (12345, true, CAST(0 AS BOOL));
-
-> SELECT * FROM bool;
-~~~
-~~~
-+-------+------+-------+
-| a | b | c |
-+-------+------+-------+
-| 12345 | true | false |
-+-------+------+-------+
-~~~
-
-## Supported Casting & Conversion
-
-`BOOL` values can be [cast](data-types.html#data-type-conversions-casts) to any of the following data types:
-
-Type | Details
------|--------
-`INT` | Converts `true` to `1`, `false` to `0`
-`DECIMAL` | Converts `true` to `1`, `false` to `0`
-`FLOAT` | Converts `true` to `1`, `false` to `0`
-`STRING` | ––
-
-## See Also
-
-[Data Types](data-types.html)
diff --git a/src/current/v2.0/build-a-c++-app-with-cockroachdb.md b/src/current/v2.0/build-a-c++-app-with-cockroachdb.md
deleted file mode 100644
index 9bcc68f09a8..00000000000
--- a/src/current/v2.0/build-a-c++-app-with-cockroachdb.md
+++ /dev/null
@@ -1,72 +0,0 @@
----
-title: Build a C++ App with CockroachDB
-summary: Learn how to use CockroachDB from a simple C++ application with a low-level client driver.
-toc: true
-twitter: false
----
-
-This tutorial shows you how to build a simple C++ application with CockroachDB using a PostgreSQL-compatible driver.
-
-We have tested the [C++ libpqxx driver](https://github.com/jtv/libpqxx) enough to claim **beta-level** support, so that driver is featured here. If you encounter problems, please [open an issue](https://github.com/cockroachdb/cockroach/issues/new) with details to help us make progress toward full support.
-
-
-## Before You Begin
-
-Make sure you have already [installed CockroachDB](install-cockroachdb.html).
-
-## Step 1. Install the libpqxx driver
-
-Install the C++ libpqxx driver as described in the [official documentation](https://github.com/jtv/libpqxx).
-
-{% include {{ page.version.version }}/app/common-steps.md %}
-
-## Step 5. Run the C++ code
-
-Now that you have a database and a user, you'll run code to create a table and insert some rows, and then you'll run code to read and update values as an atomic [transaction](transactions.html).
-
-### Basic Statements
-
-First, use the following code to connect as the `maxroach` user and execute some basic SQL statements, creating a table, inserting rows, and reading and printing the rows.
-
-Download the basic-sample.cpp file, or create the file yourself and copy the code into it.
-
-{% include copy-clipboard.html %}
-~~~ cpp
-{% include {{ page.version.version }}/app/basic-sample.cpp %}
-~~~
-
-### Transaction (with retry logic)
-
-Next, use the following code to again connect as the `maxroach` user but this time execute a batch of statements as an atomic transaction to transfer funds from one account to another, where all included statements are either committed or aborted.
-
-{{site.data.alerts.callout_info}}With the default SERIALIZABLE isolation level, CockroachDB may require the client to retry a transaction in case of read/write contention. CockroachDB provides a generic retry function that runs inside a transaction and retries it as needed. You can copy and paste the retry function from here into your code.{{site.data.alerts.end}}
-
-Download the txn-sample.cpp file, or create the file yourself and copy the code into it.
-
-{% include copy-clipboard.html %}
-~~~ cpp
-{% include {{ page.version.version }}/app/txn-sample.cpp %}
-~~~
-
-After running the code, use the [built-in SQL client](use-the-built-in-sql-client.html) to verify that funds were transferred from one account to another:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql --insecure -e 'SELECT id, balance FROM accounts' --database=bank
-~~~
-
-~~~
-+----+---------+
-| id | balance |
-+----+---------+
-| 1 | 900 |
-| 2 | 350 |
-+----+---------+
-(2 rows)
-~~~
-
-## What's Next?
-
-Read more about using the [C++ libpqxx driver](https://github.com/jtv/libpqxx).
-
-{% include {{ page.version.version }}/app/see-also-links.md %}
diff --git a/src/current/v2.0/build-a-clojure-app-with-cockroachdb.md b/src/current/v2.0/build-a-clojure-app-with-cockroachdb.md
deleted file mode 100644
index c0a4406988e..00000000000
--- a/src/current/v2.0/build-a-clojure-app-with-cockroachdb.md
+++ /dev/null
@@ -1,111 +0,0 @@
----
-title: Build a Clojure App with CockroachDB
-summary: Learn how to use CockroachDB from a simple Clojure application with a low-level client driver.
-toc: true
-twitter: false
----
-
-This tutorial shows you how to build a simple Clojure application with CockroachDB using [leiningen](https://leiningen.org/) and a PostgreSQL-compatible driver.
-
-We have tested the [Clojure java.jdbc driver](https://clojure-doc.org/articles/ecosystem/java_jdbc/home/) in conjunction with the [PostgreSQL JDBC driver](https://jdbc.postgresql.org/) enough to claim **beta-level** support, so that combination is featured here. If you encounter problems, please [open an issue](https://github.com/cockroachdb/cockroach/issues/new) with details to help us make progress toward full support.
-
-
-## Before You Begin
-
-Make sure you have already [installed CockroachDB](install-cockroachdb.html).
-
-## Step 1. Install `leiningen`
-
-Install the Clojure `lein` utility as described in its [official documentation](https://leiningen.org/).
-
-{% include {{ page.version.version }}/app/common-steps.md %}
-
-## Step 5. Create a table in the new database
-
-As the `maxroach` user, use the [built-in SQL client](use-the-built-in-sql-client.html) to create an `accounts` table in the new database.
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql --insecure \
---database=bank \
---user=maxroach \
--e 'CREATE TABLE accounts (id INT PRIMARY KEY, balance INT)'
-~~~
-
-## Step 6. Run the Clojure code
-
-Now that you have a database and a user, you'll run code to create a table and insert some rows, and then you'll run code to read and update values as an atomic [transaction](transactions.html).
-
-### Create a basic Clojure/JDBC project
-
-1. Create a new directory `myapp`.
-2. Create a file `myapp/project.clj` and populate it with the following code, or download it directly.
-
- {% include copy-clipboard.html %}
- ~~~ clojure
- {% include {{ page.version.version }}/app/project.clj %}
- ~~~
-
-3. Create a file `myapp/src/test/util.clj` and populate it with the code from this file. Be sure to place the file in the subdirectory `src/test` in your project.
-
-### Basic Statements
-
-First, use the following code to connect as the `maxroach` user and execute some basic SQL statements, inserting rows and reading and printing the rows.
-
-Create a file `myapp/src/test/test.clj` and copy the code below to it, or download it directly. Be sure to rename this file to `test.clj` in the subdirectory `src/test` in your project.
-
-{% include copy-clipboard.html %}
-~~~ clojure
-{% include {{ page.version.version }}/app/basic-sample.clj %}
-~~~
-
-Run with:
-
-{% include copy-clipboard.html %}
-~~~ shell
-lein run
-~~~
-
-### Transaction (with retry logic)
-
-Next, use the following code to again connect as the `maxroach` user but this time execute a batch of statements as an atomic transaction to transfer funds from one account to another, where all included statements are either committed or aborted.
-
-Copy the code below to `myapp/src/test/test.clj` or
-download it directly. Again, preserve the file name `test.clj`.
-
-{{site.data.alerts.callout_info}}With the default SERIALIZABLE isolation level, CockroachDB may require the client to retry a transaction in case of read/write contention. CockroachDB provides a generic retry function that runs inside a transaction and retries it as needed. You can copy and paste the retry function from here into your code.{{site.data.alerts.end}}
-
-{% include copy-clipboard.html %}
-~~~ clojure
-{% include {{ page.version.version }}/app/txn-sample.clj %}
-~~~
-
-Run with:
-
-{% include copy-clipboard.html %}
-~~~ shell
-lein run
-~~~
-
-After running the code, use the [built-in SQL client](use-the-built-in-sql-client.html) to verify that funds were transferred from one account to another:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql --insecure -e 'SELECT id, balance FROM accounts' --database=bank
-~~~
-
-~~~
-+----+---------+
-| id | balance |
-+----+---------+
-| 1 | 900 |
-| 2 | 350 |
-+----+---------+
-(2 rows)
-~~~
-
-## What's Next?
-
-Read more about using the [Clojure java.jdbc driver](https://clojure-doc.org/articles/ecosystem/java_jdbc/home/).
-
-{% include {{ page.version.version }}/app/see-also-links.md %}
diff --git a/src/current/v2.0/build-a-csharp-app-with-cockroachdb.md b/src/current/v2.0/build-a-csharp-app-with-cockroachdb.md
deleted file mode 100644
index 48f4b825ceb..00000000000
--- a/src/current/v2.0/build-a-csharp-app-with-cockroachdb.md
+++ /dev/null
@@ -1,155 +0,0 @@
----
-title: Build a C# (.NET) App with CockroachDB
-summary: Learn how to use CockroachDB from a simple C# (.NET) application with a low-level client driver.
-toc: true
-twitter: true
----
-
-This tutorial shows you how to build a simple C# (.NET) application with CockroachDB using a PostgreSQL-compatible driver.
-
-We have tested the [.NET Npgsql driver](http://www.npgsql.org/) enough to claim **beta-level** support, so that driver is featured here. If you encounter problems, please [open an issue](https://github.com/cockroachdb/cockroach/issues/new) with details to help us make progress toward full support.
-
-
-## Before You Begin
-
-Make sure you have already [installed CockroachDB](install-cockroachdb.html) and the .NET SDK for your OS.
-
-## Step 1. Create a .NET project
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ dotnet new console -o cockroachdb-test-app
-~~~
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cd cockroachdb-test-app
-~~~
-
-The `dotnet` command creates a new app of type `console`. The `-o` parameter creates a directory named `cockroachdb-test-app` where your app will be stored and populates it with the required files. The `cd cockroachdb-test-app` command puts you into the newly created app directory.
-
-## Step 2. Install the Npgsql driver
-
-Install the latest version of the [Npgsql driver](https://www.nuget.org/packages/Npgsql/) into the .NET project using the built-in nuget package manager:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ dotnet add package Npgsql
-~~~
-
-## Step 3. Start a single-node cluster
-
-For the purpose of this tutorial, you need only one CockroachDB node running in insecure mode:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach start \
---insecure \
---store=hello-1 \
---host=localhost
-~~~
-
-## Step 4. Create a user
-
-In a new terminal, as the `root` user, use the [`cockroach user`](create-and-manage-users.html) command to create a new user, `maxroach`.
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach user set maxroach --insecure
-~~~
-
-## Step 5. Create a database and grant privileges
-
-As the `root` user, use the [built-in SQL client](use-the-built-in-sql-client.html) to create a `bank` database.
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql --insecure -e 'CREATE DATABASE bank'
-~~~
-
-Then [grant privileges](grant.html) to the `maxroach` user.
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql --insecure -e 'GRANT ALL ON DATABASE bank TO maxroach'
-~~~
-
-## Step 6. Run the C# code
-
-Now that you have a database and a user, you'll run code to create a table and insert some rows, and then you'll run code to read and update values as an atomic [transaction](transactions.html).
-
-### Basic Statements
-
-Replace the contents of `cockroachdb-test-app/Program.cs` with the following code:
-
-{% include copy-clipboard.html %}
-~~~ csharp
-{% include {{ page.version.version }}/app/basic-sample.cs %}
-~~~
-
-Then run the code to connect as the `maxroach` user and execute some basic SQL statements, creating a table, inserting rows, and reading and printing the rows:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ dotnet run
-~~~
-
-The output should be:
-
-~~~
-Initial balances:
- account 1: 1000
- account 2: 250
-~~~
-
-### Transaction (with retry logic)
-
-Open `cockroachdb-test-app/Program.cs` again and replace the contents with the following code:
-
-{% include copy-clipboard.html %}
-~~~ csharp
-{% include {{ page.version.version }}/app/txn-sample.cs %}
-~~~
-
-Then run the code to again connect as the `maxroach` user but this time execute a batch of statements as an atomic transaction to transfer funds from one account to another, where all included statements are either committed or aborted:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ dotnet run
-~~~
-
-{{site.data.alerts.callout_info}}With the default SERIALIZABLE isolation level, CockroachDB may require the client to retry a transaction in case of read/write contention. CockroachDB provides a generic retry function that runs inside a transaction and retries it as needed. You can copy and paste the retry function from here into your code.{{site.data.alerts.end}}
-
-The output should be:
-
-~~~
-Initial balances:
- account 1: 1000
- account 2: 250
-Final balances:
- account 1: 900
- account 2: 350
-~~~
-
-To verify that funds were transferred from one account to another, use the [built-in SQL client](use-the-built-in-sql-client.html):
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql --insecure -e 'SELECT id, balance FROM accounts' --database=bank
-~~~
-
-~~~
-+----+---------+
-| id | balance |
-+----+---------+
-| 1 | 900 |
-| 2 | 350 |
-+----+---------+
-(2 rows)
-~~~
-
-## What's Next?
-
-Read more about using the [.NET Npgsql driver](http://www.npgsql.org/).
-
-{% include {{ page.version.version }}/app/see-also-links.md %}
diff --git a/src/current/v2.0/build-a-go-app-with-cockroachdb-gorm.md b/src/current/v2.0/build-a-go-app-with-cockroachdb-gorm.md
deleted file mode 100644
index f6452a3b2d3..00000000000
--- a/src/current/v2.0/build-a-go-app-with-cockroachdb-gorm.md
+++ /dev/null
@@ -1,164 +0,0 @@
----
-title: Build a Go App with CockroachDB
-summary: Learn how to use CockroachDB from a simple Go application with the GORM ORM.
-toc: true
-twitter: false
----
-
-This tutorial shows you how to build a simple Go application with CockroachDB using a PostgreSQL-compatible driver or ORM.
-
-We have tested the [Go pq driver](https://godoc.org/github.com/lib/pq) and the [GORM ORM](http://gorm.io) enough to claim **beta-level** support, so those are featured here. If you encounter problems, please [open an issue](https://github.com/cockroachdb/cockroach/issues/new) with details to help us make progress toward full support.
-
-{{site.data.alerts.callout_success}}
-For a more realistic use of GORM with CockroachDB, see our [`examples-orms`](https://github.com/cockroachdb/examples-orms) repository.
-{{site.data.alerts.end}}
-
-## Before you begin
-
-{% include {{page.version.version}}/app/before-you-begin.md %}
-
-## Step 1. Install the GORM ORM
-
-To install [GORM](http://gorm.io), run the following commands:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ go get -u github.com/lib/pq # dependency
-~~~
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ go get -u github.com/jinzhu/gorm
-~~~
-
-## Step 2. Create the `maxroach` user and `bank` database
-
-{% include {{page.version.version}}/app/create-maxroach-user-and-bank-database.md %}
-
-## Step 3. Generate a certificate for the `maxroach` user
-
-Create a certificate and key for the `maxroach` user by running the following command. The code samples will run as this user.
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach cert create-client maxroach --certs-dir=certs --ca-key=my-safe-directory/ca.key
-~~~
-
-## Step 4. Run the Go code
-
-The following code uses the [GORM](http://gorm.io) ORM to map Go-specific objects to SQL operations. Specifically, `db.AutoMigrate(&Account{})` creates an `accounts` table based on the Account model, `db.Create(&Account{})` inserts rows into the table, and `db.Find(&accounts)` selects from the table so that balances can be printed.
-
-Copy the code or
-download it directly.
-
-{% include copy-clipboard.html %}
-~~~ go
-{% include {{ page.version.version }}/app/gorm-basic-sample.go %}
-~~~
-
-Then run the code:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ go run gorm-basic-sample.go
-~~~
-
-The output should be:
-
-~~~
-Initial balances:
-1 1000
-2 250
-~~~
-
-To verify that funds were transferred from one account to another, start the [built-in SQL client](use-the-built-in-sql-client.html):
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql --certs-dir=certs -e 'SELECT id, balance FROM accounts' --database=bank
-~~~
-
-~~~
-+----+---------+
-| id | balance |
-+----+---------+
-| 1 | 1000 |
-| 2 | 250 |
-+----+---------+
-(2 rows)
-~~~
-
-
-## Step 2. Create the `maxroach` user and `bank` database
-
-{% include {{page.version.version}}/app/insecure/create-maxroach-user-and-bank-database.md %}
-
-## Step 3. Run the Go code
-
-The following code uses the [GORM](http://gorm.io) ORM to map Go-specific objects to SQL operations. Specifically, `db.AutoMigrate(&Account{})` creates an `accounts` table based on the Account model, `db.Create(&Account{})` inserts rows into the table, and `db.Find(&accounts)` selects from the table so that balances can be printed.
-
-Copy the code or
-download it directly.
-
-{% include copy-clipboard.html %}
-~~~ go
-{% include {{ page.version.version }}/app/insecure/gorm-basic-sample.go %}
-~~~
-
-Then run the code:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ go run gorm-basic-sample.go
-~~~
-
-The output should be:
-
-~~~
-Initial balances:
-1 1000
-2 250
-~~~
-
-To verify that funds were transferred from one account to another, start the [built-in SQL client](use-the-built-in-sql-client.html):
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql --insecure -e 'SELECT id, balance FROM accounts' --database=bank
-~~~
-
-~~~
-+----+---------+
-| id | balance |
-+----+---------+
-| 1 | 1000 |
-| 2 | 250 |
-+----+---------+
-(2 rows)
-~~~
-
-## What's next?
-
-Read more about using the [GORM ORM](http://gorm.io), or check out a more realistic implementation of GORM with CockroachDB in our [`examples-orms`](https://github.com/cockroachdb/examples-orms) repository.
-
-{% include {{ page.version.version }}/app/see-also-links.md %}
diff --git a/src/current/v2.0/build-a-go-app-with-cockroachdb.md b/src/current/v2.0/build-a-go-app-with-cockroachdb.md
deleted file mode 100644
index 47faf4089a1..00000000000
--- a/src/current/v2.0/build-a-go-app-with-cockroachdb.md
+++ /dev/null
@@ -1,227 +0,0 @@
----
-title: Build a Go App with CockroachDB
-summary: Learn how to use CockroachDB from a simple Go application with the Go pq driver.
-toc: true
-twitter: false
----
-
-This tutorial shows you how to build a simple Go application with CockroachDB using a PostgreSQL-compatible driver or ORM.
-
-We have tested the [Go pq driver](https://godoc.org/github.com/lib/pq) and the [GORM ORM](http://gorm.io) enough to claim **beta-level** support, so those are featured here. If you encounter problems, please [open an issue](https://github.com/cockroachdb/cockroach/issues/new) with details to help us make progress toward full support.
-
-## Before you begin
-
-{% include {{page.version.version}}/app/before-you-begin.md %}
-
-## Step 1. Install the Go pq driver
-
-To install the [Go pq driver](https://godoc.org/github.com/lib/pq), run the following command:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ go get -u github.com/lib/pq
-~~~
-
-
-
-## Step 2. Create the `maxroach` user and `bank` database
-
-{% include {{page.version.version}}/app/create-maxroach-user-and-bank-database.md %}
-
-## Step 3. Generate a certificate for the `maxroach` user
-
-Create a certificate and key for the `maxroach` user by running the following command. The code samples will run as this user.
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach cert create-client maxroach --certs-dir=certs --ca-key=my-safe-directory/ca.key
-~~~
-
-## Step 4. Run the Go code
-
-Now that you have a database and a user, you'll run code to create a table and insert some rows, and then you'll run code to read and update values as an atomic [transaction](transactions.html).
-
-### Basic statements
-
-First, use the following code to connect as the `maxroach` user and execute some basic SQL statements, creating a table, inserting rows, and reading and printing the rows.
-
-Download the `basic-sample.go` file, or create the file yourself and copy the code into it.
-
-{% include copy-clipboard.html %}
-~~~ go
-{% include {{ page.version.version }}/app/basic-sample.go %}
-~~~
-
-Then run the code:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ go run basic-sample.go
-~~~
-
-The output should be:
-
-~~~
-Initial balances:
-1 1000
-2 250
-~~~
-
-### Transaction (with retry logic)
-
-Next, use the following code to again connect as the `maxroach` user, but this time execute a batch of statements as an atomic transaction to transfer funds from one account to another, where all included statements are either committed or aborted.
-
-Download the `txn-sample.go` file, or create the file yourself and copy the code into it.
-
-{% include copy-clipboard.html %}
-~~~ go
-{% include {{ page.version.version }}/app/txn-sample.go %}
-~~~
-
-With the default `SERIALIZABLE` isolation level, CockroachDB may require the [client to retry a transaction](transactions.html#transaction-retries) in case of read/write contention. CockroachDB provides a generic retry function that runs inside a transaction and retries it as needed. For Go, the CockroachDB retry function is in the `crdb` package of the CockroachDB Go client.
-
-To install the [CockroachDB Go client](https://github.com/cockroachdb/cockroach-go), run the following command:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ go get -d github.com/cockroachdb/cockroach-go
-~~~
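-
-At its core, `txn-sample.go` wraps the two `UPDATE` statements of the transfer in the retry function. A minimal sketch of that pattern follows (hedged: the exact `ExecuteTx` signature has varied across versions of the `crdb` package, and the connection string assumes the certificate layout from Step 3):
-
-~~~ go
-package main
-
-import (
-    "context"
-    "database/sql"
-    "fmt"
-
-    "github.com/cockroachdb/cockroach-go/crdb"
-    _ "github.com/lib/pq"
-)
-
-func transferFunds(db *sql.DB, from, to, amount int) error {
-    // ExecuteTx opens a transaction, runs the closure, and retries it
-    // automatically on retryable (serialization) errors.
-    return crdb.ExecuteTx(context.Background(), db, nil, func(tx *sql.Tx) error {
-        if _, err := tx.Exec(
-            "UPDATE accounts SET balance = balance - $1 WHERE id = $2", amount, from); err != nil {
-            return err
-        }
-        _, err := tx.Exec(
-            "UPDATE accounts SET balance = balance + $1 WHERE id = $2", amount, to)
-        return err
-    })
-}
-
-func main() {
-    db, err := sql.Open("postgres",
-        "postgresql://maxroach@localhost:26257/bank?sslmode=verify-full&sslrootcert=certs/ca.crt&sslcert=certs/client.maxroach.crt&sslkey=certs/client.maxroach.key")
-    if err != nil {
-        panic(err)
-    }
-    defer db.Close()
-
-    if err := transferFunds(db, 1, 2, 100); err != nil {
-        panic(err)
-    }
-    fmt.Println("Success")
-}
-~~~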
-
-Then run the code:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ go run txn-sample.go
-~~~
-
-The output should be:
-
-~~~ shell
-Success
-~~~
-
-To verify that funds were transferred from one account to another, use the [built-in SQL client](use-the-built-in-sql-client.html):
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql --certs-dir=certs -e 'SELECT id, balance FROM accounts' --database=bank
-~~~
-
-~~~
-+----+---------+
-| id | balance |
-+----+---------+
-|  1 |     900 |
-|  2 |     350 |
-+----+---------+
-(2 rows)
-~~~
-
-
-
-
-
-## Step 2. Create the `maxroach` user and `bank` database
-
-{% include {{page.version.version}}/app/insecure/create-maxroach-user-and-bank-database.md %}
-
-## Step 3. Run the Go code
-
-Now that you have a database and a user, you'll run code to create a table and insert some rows, and then you'll run code to read and update values as an atomic [transaction](transactions.html).
-
-### Basic statements
-
-First, use the following code to connect as the `maxroach` user and execute some basic SQL statements, creating a table, inserting rows, and reading and printing the rows.
-
-Download the `basic-sample.go` file, or create the file yourself and copy the code into it.
-
-{% include copy-clipboard.html %}
-~~~ go
-{% include {{ page.version.version }}/app/insecure/basic-sample.go %}
-~~~
-
-Then run the code:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ go run basic-sample.go
-~~~
-
-The output should be:
-
-~~~
-Initial balances:
-1 1000
-2 250
-~~~
-
-### Transaction (with retry logic)
-
-Next, use the following code to again connect as the `maxroach` user, but this time execute a batch of statements as an atomic transaction to transfer funds from one account to another, where all included statements are either committed or aborted.
-
-Download the `txn-sample.go` file, or create the file yourself and copy the code into it.
-
-{% include copy-clipboard.html %}
-~~~ go
-{% include {{ page.version.version }}/app/insecure/txn-sample.go %}
-~~~
-
-With the default `SERIALIZABLE` isolation level, CockroachDB may require the [client to retry a transaction](transactions.html#transaction-retries) in case of read/write contention. CockroachDB provides a generic retry function that runs inside a transaction and retries it as needed. For Go, the CockroachDB retry function is in the `crdb` package of the CockroachDB Go client. Clone the library into your `$GOPATH` as follows:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ mkdir -p $GOPATH/src/github.com/cockroachdb
-~~~
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cd $GOPATH/src/github.com/cockroachdb
-~~~
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ git clone git@github.com:cockroachdb/cockroach-go.git
-~~~
-
-Then run the code:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ go run txn-sample.go
-~~~
-
-The output should be:
-
-~~~ shell
-Success
-~~~
-
-To verify that funds were transferred from one account to another, use the [built-in SQL client](use-the-built-in-sql-client.html):
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql --insecure -e 'SELECT id, balance FROM accounts' --database=bank
-~~~
-
-~~~
-+----+---------+
-| id | balance |
-+----+---------+
-|  1 |     900 |
-|  2 |     350 |
-+----+---------+
-(2 rows)
-~~~
-
-
-
-## What's next?
-
-Read more about using the [Go pq driver](https://godoc.org/github.com/lib/pq).
-
-{% include {{ page.version.version }}/app/see-also-links.md %}
diff --git a/src/current/v2.0/build-a-java-app-with-cockroachdb-hibernate.md b/src/current/v2.0/build-a-java-app-with-cockroachdb-hibernate.md
deleted file mode 100644
index 1d209806429..00000000000
--- a/src/current/v2.0/build-a-java-app-with-cockroachdb-hibernate.md
+++ /dev/null
@@ -1,258 +0,0 @@
----
-title: Build a Java App with CockroachDB
-summary: Learn how to use CockroachDB from a simple Java application with the Hibernate ORM.
-toc: true
-twitter: false
----
-
-
-
-
-
-
-This tutorial shows you how to build a simple Java application with CockroachDB using a PostgreSQL-compatible driver or ORM.
-
-We have tested the [Java JDBC driver](https://jdbc.postgresql.org/) and the [Hibernate ORM](http://hibernate.org/) enough to claim **beta-level** support, so those are featured here. If you encounter problems, please [open an issue](https://github.com/cockroachdb/cockroach/issues/new) with details to help us make progress toward full support.
-
-{{site.data.alerts.callout_success}}
-For a more realistic use of Hibernate with CockroachDB, see our [`examples-orms`](https://github.com/cockroachdb/examples-orms) repository.
-{{site.data.alerts.end}}
-
-## Before you begin
-
-{% include {{page.version.version}}/app/before-you-begin.md %}
-
-{{site.data.alerts.callout_danger}}
-The examples on this page assume you are using a Java version <= 9. They do not work with Java 10.
-{{site.data.alerts.end}}
-
-## Step 1. Install the Gradle build tool
-
-This tutorial uses the [Gradle build tool](https://gradle.org/) to get all dependencies for your application, including Hibernate.
-
-To install Gradle on Mac, run the following command:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ brew install gradle
-~~~
-
-To install Gradle on a Debian-based Linux distribution like Ubuntu:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ apt-get install gradle
-~~~
-
-To install Gradle on a Red Hat-based Linux distribution like Fedora:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ dnf install gradle
-~~~
-
-For other ways to install Gradle, see [its official documentation](https://gradle.org/install).
-
-
-
-## Step 2. Create the `maxroach` user and `bank` database
-
-{% include {{page.version.version}}/app/create-maxroach-user-and-bank-database.md %}
-
-## Step 3. Generate a certificate for the `maxroach` user
-
-Create a certificate and key for the `maxroach` user by running the following command. The code samples will run as this user.
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach cert create-client maxroach --certs-dir=certs --ca-key=my-safe-directory/ca.key
-~~~
-
-## Step 4. Convert the key file for use with Java
-
-The private key generated for user `maxroach` by CockroachDB is [PEM encoded](https://tools.ietf.org/html/rfc1421). To read the key in a Java application, you will need to convert it into [PKCS#8 format](https://tools.ietf.org/html/rfc5208), which is the standard key encoding format in Java.
-
-To convert the key to PKCS#8 format, run the following OpenSSL command on the `maxroach` user's key file in the directory where you stored your certificates (`certs` in this example):
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ openssl pkcs8 -topk8 -inform PEM -outform DER -in client.maxroach.key -out client.maxroach.pk8 -nocrypt
-~~~
-
-## Step 5. Run the Java code
-
-Download and extract [hibernate-basic-sample.tgz](https://github.com/cockroachdb/docs/raw/master/_includes/v2.0/app/hibernate-basic-sample/hibernate-basic-sample.tgz), which contains a Java project that includes the following files:
-
-File | Description
------|------------
-[`Sample.java`](https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/v2.0/app/hibernate-basic-sample/Sample.java) | Uses [Hibernate](http://hibernate.org/orm/) to map Java object state to SQL operations. For more information, see [Sample.java](#sample-java).
-[`hibernate.cfg.xml`](https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/v2.0/app/hibernate-basic-sample/hibernate.cfg.xml) | Specifies how to connect to the database and that the database schema will be deleted and recreated each time the app is run. For more information, see [hibernate.cfg.xml](#hibernate-cfg-xml).
-[`build.gradle`](https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/v2.0/app/hibernate-basic-sample/build.gradle) | Used to build and run your app. For more information, see [build.gradle](#build-gradle).
-
-In the `hibernate-basic-sample` directory, build and run the application:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ gradle run
-~~~
-
-Toward the end of the output, you should see:
-
-~~~
-1 1000
-2 250
-~~~
-
-To verify that the table and rows were created successfully, start the [built-in SQL client](use-the-built-in-sql-client.html):
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql --certs-dir=certs --database=bank
-~~~
-
-To check the account balances, issue the following statement:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT id, balance FROM accounts;
-~~~
-
-~~~
-+----+---------+
-| id | balance |
-+----+---------+
-|  1 |    1000 |
-|  2 |     250 |
-+----+---------+
-(2 rows)
-~~~
-
-### Sample.java
-
-The Java code shown below uses the [Hibernate ORM](http://hibernate.org/orm/) to map Java object state to SQL operations. Specifically, this code:
-
-- Creates an `accounts` table in the database based on the `Account` class.
-
-- Inserts rows into the table using `session.save(new Account())`.
-
-- Defines the SQL query for selecting from the table so that balances can be printed using the `CriteriaQuery query` object.
-
-{% include copy-clipboard.html %}
-~~~ java
-{% include {{page.version.version}}/app/hibernate-basic-sample/Sample.java %}
-~~~
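-
-For orientation, the mapped entity at the heart of the sample looks roughly like this (a sketch only; the full class is in `Sample.java`, included above):
-
-~~~ java
-import javax.persistence.Column;
-import javax.persistence.Entity;
-import javax.persistence.Id;
-import javax.persistence.Table;
-
-// Hibernate maps this annotated class to the "accounts" table.
-@Entity
-@Table(name = "accounts")
-public class Account {
-    @Id
-    @Column(name = "id")
-    public long id;
-
-    @Column(name = "balance")
-    public long balance;
-
-    // Hibernate requires a no-argument constructor.
-    public Account() {
-    }
-
-    public Account(long id, long balance) {
-        this.id = id;
-        this.balance = balance;
-    }
-}
-~~~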
-
-### hibernate.cfg.xml
-
-The Hibernate config (in `hibernate.cfg.xml`, shown below) specifies how to connect to the database. Note the [connection URL](connection-parameters.html#connect-using-a-url) that turns on SSL and specifies the location of the security certificates.
-
-{% include copy-clipboard.html %}
-~~~ xml
-{% include {{page.version.version}}/app/hibernate-basic-sample/hibernate.cfg.xml %}
-~~~
-
-### build.gradle
-
-The Gradle build file specifies the dependencies (in this case the Postgres JDBC driver and Hibernate):
-
-{% include copy-clipboard.html %}
-~~~ groovy
-{% include {{page.version.version}}/app/hibernate-basic-sample/build.gradle %}
-~~~
-
-
-
-
-
-## Step 2. Create the `maxroach` user and `bank` database
-
-{% include {{page.version.version}}/app/insecure/create-maxroach-user-and-bank-database.md %}
-
-## Step 3. Run the Java code
-
-Download and extract [hibernate-basic-sample.tgz](https://github.com/cockroachdb/docs/raw/master/_includes/v2.0/app/insecure/hibernate-basic-sample/hibernate-basic-sample.tgz), which contains a Java project that includes the following files:
-
-File | Description
------|------------
-[`Sample.java`](https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/v2.0/app/insecure/hibernate-basic-sample/Sample.java) | Uses [Hibernate](http://hibernate.org/orm/) to map Java object state to SQL operations. For more information, see [Sample.java](#sample-java).
-[`hibernate.cfg.xml`](https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/v2.0/app/insecure/hibernate-basic-sample/hibernate.cfg.xml) | Specifies how to connect to the database and that the database schema will be deleted and recreated each time the app is run. For more information, see [hibernate.cfg.xml](#hibernate-cfg-xml).
-[`build.gradle`](https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/v2.0/app/insecure/hibernate-basic-sample/build.gradle) | Used to build and run your app. For more information, see [build.gradle](#build-gradle).
-
-In the `hibernate-basic-sample` directory, build and run the application:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ gradle run
-~~~
-
-Toward the end of the output, you should see:
-
-~~~
-1 1000
-2 250
-~~~
-
-To verify that the table and rows were created successfully, start the [built-in SQL client](use-the-built-in-sql-client.html):
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql --insecure --database=bank
-~~~
-
-To check the account balances, issue the following statement:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT id, balance FROM accounts;
-~~~
-
-~~~
-+----+---------+
-| id | balance |
-+----+---------+
-|  1 |    1000 |
-|  2 |     250 |
-+----+---------+
-(2 rows)
-~~~
-
-### Sample.java
-
-The Java code shown below uses the [Hibernate ORM](http://hibernate.org/orm/) to map Java object state to SQL operations. Specifically, this code:
-
-- Creates an `accounts` table in the database based on the `Account` class.
-
-- Inserts rows into the table using `session.save(new Account())`.
-
-- Defines the SQL query for selecting from the table so that balances can be printed using the `CriteriaQuery query` object.
-
-{% include copy-clipboard.html %}
-~~~ java
-{% include {{page.version.version}}/app/insecure/hibernate-basic-sample/Sample.java %}
-~~~
-
-### hibernate.cfg.xml
-
-The Hibernate config (in `hibernate.cfg.xml`, shown below) specifies how to connect to the database. Note the [connection URL](connection-parameters.html#connect-using-a-url), which in this case connects to the insecure cluster without SSL certificates.
-
-{% include copy-clipboard.html %}
-~~~ xml
-{% include {{page.version.version}}/app/insecure/hibernate-basic-sample/hibernate.cfg.xml %}
-~~~
-
-### build.gradle
-
-The Gradle build file specifies the dependencies (in this case the Postgres JDBC driver and Hibernate):
-
-{% include copy-clipboard.html %}
-~~~ groovy
-{% include {{page.version.version}}/app/insecure/hibernate-basic-sample/build.gradle %}
-~~~
-
-
-
-## What's next?
-
-Read more about using the [Hibernate ORM](http://hibernate.org/orm/), or check out a more realistic implementation of Hibernate with CockroachDB in our [`examples-orms`](https://github.com/cockroachdb/examples-orms) repository.
-
-{% include {{page.version.version}}/app/see-also-links.md %}
diff --git a/src/current/v2.0/build-a-java-app-with-cockroachdb.md b/src/current/v2.0/build-a-java-app-with-cockroachdb.md
deleted file mode 100644
index cdf240d8942..00000000000
--- a/src/current/v2.0/build-a-java-app-with-cockroachdb.md
+++ /dev/null
@@ -1,264 +0,0 @@
----
-title: Build a Java App with CockroachDB
-summary: Learn how to use CockroachDB from a simple Java application with the JDBC driver.
-toc: true
-twitter: false
----
-
-
-
-
-
-
-This tutorial shows you how to build a simple Java application with CockroachDB using a PostgreSQL-compatible driver or ORM.
-
-We have tested the [Java JDBC driver](https://jdbc.postgresql.org/) and the [Hibernate ORM](http://hibernate.org/) enough to claim **beta-level** support, so those are featured here. If you encounter problems, please [open an issue](https://github.com/cockroachdb/cockroach/issues/new) with details to help us make progress toward full support.
-
-## Before you begin
-
-{% include {{page.version.version}}/app/before-you-begin.md %}
-
-{{site.data.alerts.callout_danger}}
-The examples on this page assume you are using a Java version <= 9. They do not work with Java 10.
-{{site.data.alerts.end}}
-
-## Step 1. Install the Java JDBC driver
-
-Download and set up the Java JDBC driver as described in the [official documentation](https://jdbc.postgresql.org/documentation/setup/).
-
-
-
-## Step 2. Create the `maxroach` user and `bank` database
-
-{% include {{page.version.version}}/app/create-maxroach-user-and-bank-database.md %}
-
-## Step 3. Generate a certificate for the `maxroach` user
-
-Create a certificate and key for the `maxroach` user by running the following command. The code samples will run as this user.
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach cert create-client maxroach --certs-dir=certs --ca-key=my-safe-directory/ca.key
-~~~
-
-## Step 4. Convert the key file for use with Java
-
-The private key generated for user `maxroach` by CockroachDB is [PEM encoded](https://tools.ietf.org/html/rfc1421). To read the key in a Java application, you will need to convert it into [PKCS#8 format](https://tools.ietf.org/html/rfc5208), which is the standard key encoding format in Java.
-
-To convert the key to PKCS#8 format, run the following OpenSSL command on the `maxroach` user's key file in the directory where you stored your certificates:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ openssl pkcs8 -topk8 -inform PEM -outform DER -in client.maxroach.key -out client.maxroach.pk8 -nocrypt
-~~~
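-
-Once converted, the key is referenced from the JDBC connection URL. As a rough illustration (the class name `ConnectSample` is hypothetical; the SSL parameters are those of the PostgreSQL JDBC driver, and the paths assume the `certs` directory used above):
-
-~~~ java
-import java.sql.Connection;
-import java.sql.DriverManager;
-
-public class ConnectSample {
-    public static void main(String[] args) throws Exception {
-        // sslkey must point at the PKCS#8 key produced by the openssl command above.
-        String url = "jdbc:postgresql://localhost:26257/bank"
-            + "?ssl=true"
-            + "&sslmode=require"
-            + "&sslrootcert=certs/ca.crt"
-            + "&sslcert=certs/client.maxroach.crt"
-            + "&sslkey=certs/client.maxroach.pk8";
-        try (Connection conn = DriverManager.getConnection(url, "maxroach", "")) {
-            System.out.println("Connected to " + conn.getCatalog());
-        }
-    }
-}
-~~~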
-
-## Step 5. Run the Java code
-
-Now that you have created a database and set up encryption keys, in this section you will:
-
-- [Create a table and insert some rows](#basic1)
-- [Execute a batch of statements as a transaction](#txn1)
-
-
-
-### Basic example
-
-First, use the following code to connect as the `maxroach` user and execute some basic SQL statements: create a table, insert rows, and read and print the rows.
-
-To run it:
-
-1. Download [`BasicSample.java`](https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/v2.0/app/BasicSample.java), or create the file yourself and copy the code below.
-2. Download [the PostgreSQL JDBC driver](https://jdbc.postgresql.org/download/).
-3. Compile and run the code (adding the PostgreSQL JDBC driver to your classpath):
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ javac -classpath .:/path/to/postgresql.jar BasicSample.java
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ java -classpath .:/path/to/postgresql.jar BasicSample
- ~~~
-
- The output should be:
-
- ~~~
- Initial balances:
- account 1: 1000
- account 2: 250
- ~~~
-
-The contents of [`BasicSample.java`](https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/v2.0/app/BasicSample.java):
-
-{% include copy-clipboard.html %}
-~~~ java
-{% include {{page.version.version}}/app/BasicSample.java %}
-~~~
-
-
-
-### Transaction example (with retry logic)
-
-Next, use the following code to execute a batch of statements as a [transaction](transactions.html) to transfer funds from one account to another.
-
-To run it:
-
-1. Download `TxnSample.java`, or create the file yourself and copy the code below. Note the use of [`SQLException.getSQLState()`](https://docs.oracle.com/javase/tutorial/jdbc/basics/sqlexception.html) instead of `getErrorCode()`.
-2. Compile and run the code (again adding the PostgreSQL JDBC driver to your classpath):
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ javac -classpath .:/path/to/postgresql.jar TxnSample.java
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ java -classpath .:/path/to/postgresql.jar TxnSample
- ~~~
-
- The output should be:
-
- ~~~
- account 1: 900
- account 2: 350
- ~~~
-
-{{site.data.alerts.callout_info}}
-With the default `SERIALIZABLE` isolation level, CockroachDB may require the client to retry a transaction in case of read/write contention. The code sample below shows how to implement retry logic.
-{{site.data.alerts.end}}
-
-{% include copy-clipboard.html %}
-~~~ java
-{% include {{page.version.version}}/app/TxnSample.java %}
-~~~
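-
-In outline, the retry logic in `TxnSample.java` looks like the following (a simplified sketch with a hypothetical `RetrySketch` class; the real sample drives retries with CockroachDB's `SAVEPOINT cockroach_restart` protocol, but the essential test of `getSQLState()` against `40001` is the same):
-
-~~~ java
-import java.sql.Connection;
-import java.sql.SQLException;
-
-public class RetrySketch {
-    // CockroachDB reports retryable conflicts with SQLState 40001.
-    static final String RETRY_SQL_STATE = "40001";
-
-    // Runs work inside a transaction, retrying while the error is retryable.
-    static void runTransaction(Connection conn, SqlWork work) throws SQLException {
-        conn.setAutoCommit(false);
-        while (true) {
-            try {
-                work.run(conn);
-                conn.commit();
-                return;
-            } catch (SQLException e) {
-                conn.rollback();
-                if (!RETRY_SQL_STATE.equals(e.getSQLState())) {
-                    throw e; // not retryable; propagate
-                }
-            }
-        }
-    }
-
-    interface SqlWork {
-        void run(Connection conn) throws SQLException;
-    }
-}
-~~~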
-
-To verify that funds were transferred from one account to another, start the [built-in SQL client](use-the-built-in-sql-client.html):
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql --certs-dir=certs --database=bank
-~~~
-
-To check the account balances, issue the following statement:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT id, balance FROM accounts;
-~~~
-
-~~~
-+----+---------+
-| id | balance |
-+----+---------+
-|  1 |     900 |
-|  2 |     350 |
-+----+---------+
-(2 rows)
-~~~
-
-
-
-
-
-## Step 2. Create the `maxroach` user and `bank` database
-
-{% include {{page.version.version}}/app/insecure/create-maxroach-user-and-bank-database.md %}
-
-## Step 3. Run the Java code
-
-Now that you have created a database, in this section you will:
-
-- [Create a table and insert some rows](#basic2)
-- [Execute a batch of statements as a transaction](#txn2)
-
-
-
-### Basic example
-
-First, use the following code to connect as the `maxroach` user and execute some basic SQL statements, creating a table, inserting rows, and reading and printing the rows.
-
-To run it:
-
-1. Download [`BasicSample.java`](https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/v2.0/app/insecure/BasicSample.java), or create the file yourself and copy the code below.
-2. Download [the PostgreSQL JDBC driver](https://jdbc.postgresql.org/download/).
-3. Compile and run the code (adding the PostgreSQL JDBC driver to your classpath):
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ javac -classpath .:/path/to/postgresql.jar BasicSample.java
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ java -classpath .:/path/to/postgresql.jar BasicSample
- ~~~
-
-The contents of [`BasicSample.java`](https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/v2.0/app/insecure/BasicSample.java):
-
-{% include copy-clipboard.html %}
-~~~ java
-{% include {{page.version.version}}/app/insecure/BasicSample.java %}
-~~~
-
-
-
-### Transaction example (with retry logic)
-
-Next, use the following code to execute a batch of statements as a [transaction](transactions.html) to transfer funds from one account to another.
-
-To run it:
-
-1. Download `TxnSample.java`, or create the file yourself and copy the code below. Note the use of [`SQLException.getSQLState()`](https://docs.oracle.com/javase/tutorial/jdbc/basics/sqlexception.html) instead of `getErrorCode()`.
-2. Compile and run the code (again adding the PostgreSQL JDBC driver to your classpath):
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ javac -classpath .:/path/to/postgresql.jar TxnSample.java
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ java -classpath .:/path/to/postgresql.jar TxnSample
- ~~~
-
-{{site.data.alerts.callout_info}}
-With the default `SERIALIZABLE` isolation level, CockroachDB may require the client to retry a transaction in case of read/write contention. The code sample below shows how to implement retry logic.
-{{site.data.alerts.end}}
-
-{% include copy-clipboard.html %}
-~~~ java
-{% include {{page.version.version}}/app/insecure/TxnSample.java %}
-~~~
-
-To verify that funds were transferred from one account to another, start the [built-in SQL client](use-the-built-in-sql-client.html):
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql --insecure --database=bank
-~~~
-
-To check the account balances, issue the following statement:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT id, balance FROM accounts;
-~~~
-
-~~~
-+----+---------+
-| id | balance |
-+----+---------+
-|  1 |     900 |
-|  2 |     350 |
-+----+---------+
-(2 rows)
-~~~
-
-
-
-## What's next?
-
-Read more about using the [Java JDBC driver](https://jdbc.postgresql.org/).
-
-{% include {{page.version.version}}/app/see-also-links.md %}
diff --git a/src/current/v2.0/build-a-nodejs-app-with-cockroachdb-sequelize.md b/src/current/v2.0/build-a-nodejs-app-with-cockroachdb-sequelize.md
deleted file mode 100644
index 17191e9abb3..00000000000
--- a/src/current/v2.0/build-a-nodejs-app-with-cockroachdb-sequelize.md
+++ /dev/null
@@ -1,166 +0,0 @@
----
-title: Build a Node.js App with CockroachDB
-summary: Learn how to use CockroachDB from a simple Node.js application with the Sequelize ORM.
-toc: true
-twitter: false
----
-
-
-
-
-
-
-This tutorial shows you how to build a simple Node.js application with CockroachDB using a PostgreSQL-compatible driver or ORM.
-
-We have tested the [Node.js pg driver](https://www.npmjs.com/package/pg) and the [Sequelize ORM](https://sequelize.org/) enough to claim **beta-level** support, so those are featured here. If you encounter problems, please [open an issue](https://github.com/cockroachdb/cockroach/issues/new) with details to help us make progress toward full support.
-
-{{site.data.alerts.callout_success}}
-For a more realistic use of Sequelize with CockroachDB, see our [`examples-orms`](https://github.com/cockroachdb/examples-orms) repository.
-{{site.data.alerts.end}}
-
-
-## Before you begin
-
-{% include {{page.version.version}}/app/before-you-begin.md %}
-
-## Step 1. Install the Sequelize ORM
-
-To install Sequelize, as well as a [CockroachDB Node.js package](https://github.com/cockroachdb/sequelize-cockroachdb) that accounts for some minor differences between CockroachDB and PostgreSQL, run the following command:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ npm install sequelize sequelize-cockroachdb
-~~~
-
-
-
-## Step 2. Create the `maxroach` user and `bank` database
-
-{% include {{page.version.version}}/app/create-maxroach-user-and-bank-database.md %}
-
-## Step 3. Generate a certificate for the `maxroach` user
-
-Create a certificate and key for the `maxroach` user by running the following command. The code samples will run as this user.
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach cert create-client maxroach --certs-dir=certs --ca-key=my-safe-directory/ca.key
-~~~
-
-## Step 4. Run the Node.js code
-
-The following code uses the [Sequelize](https://sequelize.org/) ORM to map Node.js-specific objects to SQL operations. Specifically, `Account.sync({force: true})` creates an `accounts` table based on the Account model (or drops and recreates the table if it already exists), `Account.bulkCreate([...])` inserts rows into the table, and `Account.findAll()` selects from the table so that balances can be printed.
-
-Copy the code or
-download it directly.
-
-{% include copy-clipboard.html %}
-~~~ js
-{% include {{ page.version.version }}/app/sequelize-basic-sample.js %}
-~~~
-
-Then run the code:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ node sequelize-basic-sample.js
-~~~
-
-The output should be:
-
-~~~ shell
-1 1000
-2 250
-~~~
-
-To verify that funds were transferred from one account to another, start the [built-in SQL client](use-the-built-in-sql-client.html):
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql --certs-dir=certs -e 'SELECT id, balance FROM accounts' --database=bank
-~~~
-
-~~~
-+----+---------+
-| id | balance |
-+----+---------+
-|  1 |    1000 |
-|  2 |     250 |
-+----+---------+
-(2 rows)
-~~~
-
-
-
-
-
-
-
-## Step 2. Create the `maxroach` user and `bank` database
-
-{% include {{page.version.version}}/app/insecure/create-maxroach-user-and-bank-database.md %}
-
-## Step 3. Run the Node.js code
-
-The following code uses the [Sequelize](https://sequelize.org/) ORM to map Node.js-specific objects to SQL operations. Specifically, `Account.sync({force: true})` creates an `accounts` table based on the Account model (or drops and recreates the table if it already exists), `Account.bulkCreate([...])` inserts rows into the table, and `Account.findAll()` selects from the table so that balances can be printed.
-
-Copy the code or
-download it directly.
-
-{% include copy-clipboard.html %}
-~~~ js
-{% include {{ page.version.version }}/app/insecure/sequelize-basic-sample.js %}
-~~~
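-
-The shape of the sample is roughly as follows (a sketch only, not the full `sequelize-basic-sample.js`; it assumes an insecure local cluster):
-
-~~~ js
-// sequelize-cockroachdb re-exports Sequelize with CockroachDB tweaks applied.
-var Sequelize = require('sequelize-cockroachdb');
-
-var sequelize = new Sequelize('postgres://maxroach@localhost:26257/bank');
-
-// Define the Account model backing the "accounts" table.
-var Account = sequelize.define('accounts', {
-  id: { type: Sequelize.INTEGER, primaryKey: true },
-  balance: { type: Sequelize.INTEGER }
-});
-
-Account.sync({ force: true })            // (re)create the table
-  .then(function () {
-    return Account.bulkCreate([          // insert rows
-      { id: 1, balance: 1000 },
-      { id: 2, balance: 250 }
-    ]);
-  })
-  .then(function () {
-    return Account.findAll();            // read the rows back
-  })
-  .then(function (accounts) {
-    accounts.forEach(function (account) {
-      console.log(account.id + ' ' + account.balance);
-    });
-    process.exit(0);
-  })
-  .catch(function (err) {
-    console.error(err);
-    process.exit(1);
-  });
-~~~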
-
-Then run the code:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ node sequelize-basic-sample.js
-~~~
-
-The output should be:
-
-~~~ shell
-1 1000
-2 250
-~~~
-
-To verify that the table and rows were created successfully, you can again use the [built-in SQL client](use-the-built-in-sql-client.html):
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql --insecure -e 'SHOW TABLES' --database=bank
-~~~
-
-~~~
-+------------+
-| table_name |
-+------------+
-| accounts   |
-+------------+
-(1 row)
-~~~
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql --insecure -e 'SELECT id, balance FROM accounts' --database=bank
-~~~
-
-~~~
-+----+---------+
-| id | balance |
-+----+---------+
-|  1 |    1000 |
-|  2 |     250 |
-+----+---------+
-(2 rows)
-~~~
-
-
-
-## What's next?
-
-Read more about using the [Sequelize ORM](https://sequelize.org/), or check out a more realistic implementation of Sequelize with CockroachDB in our [`examples-orms`](https://github.com/cockroachdb/examples-orms) repository.
-
-{% include {{ page.version.version }}/app/see-also-links.md %}
diff --git a/src/current/v2.0/build-a-nodejs-app-with-cockroachdb.md b/src/current/v2.0/build-a-nodejs-app-with-cockroachdb.md
deleted file mode 100644
index 25815cdb64f..00000000000
--- a/src/current/v2.0/build-a-nodejs-app-with-cockroachdb.md
+++ /dev/null
@@ -1,235 +0,0 @@
----
-title: Build a Node.js App with CockroachDB
-summary: Learn how to use CockroachDB from a simple Node.js application with the Node.js pg driver.
-toc: true
-twitter: false
----
-
-
-
-
-
-
-This tutorial shows you how to build a simple Node.js application with CockroachDB using a PostgreSQL-compatible driver or ORM.
-
-We have tested the [Node.js pg driver](https://www.npmjs.com/package/pg) and the [Sequelize ORM](https://sequelize.org/) enough to claim **beta-level** support, so those are featured here. If you encounter problems, please [open an issue](https://github.com/cockroachdb/cockroach/issues/new) with details to help us make progress toward full support.
-
-
-## Before you begin
-
-{% include {{page.version.version}}/app/before-you-begin.md %}
-
-## Step 1. Install Node.js packages
-
-To let your application communicate with CockroachDB, install the [Node.js pg driver](https://www.npmjs.com/package/pg):
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ npm install pg
-~~~
-
-The example app on this page also requires [`async`](https://www.npmjs.com/package/async):
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ npm install async
-~~~
-
-
-
-## Step 2. Create the `maxroach` user and `bank` database
-
-{% include {{page.version.version}}/app/create-maxroach-user-and-bank-database.md %}
-
-## Step 3. Generate a certificate for the `maxroach` user
-
-Create a certificate and key for the `maxroach` user by running the following command. The code samples will run as this user.
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach cert create-client maxroach --certs-dir=certs --ca-key=my-safe-directory/ca.key
-~~~
-
-## Step 4. Run the Node.js code
-
-Now that you have a database and a user, you'll run code to create a table and insert some rows, and then you'll run code to read and update values as an atomic [transaction](transactions.html).
-
-### Basic statements
-
-First, use the following code to connect as the `maxroach` user and execute some basic SQL statements, creating a table, inserting rows, and reading and printing the rows.
-
-Download the [`basic-sample.js`](https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/v2.0/app/basic-sample.js) file, or create the file yourself and copy the code into it.
-
-{% include copy-clipboard.html %}
-~~~ js
-{% include {{page.version.version}}/app/basic-sample.js %}
-~~~
-
-Then run the code:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ node basic-sample.js
-~~~
-
-The output should be:
-
-~~~
-Initial balances:
-{ id: '1', balance: '1000' }
-{ id: '2', balance: '250' }
-~~~
-
-### Transaction (with retry logic)
-
-Next, use the following code to again connect as the `maxroach` user but this time execute a batch of statements as an atomic transaction to transfer funds from one account to another and then read the updated values, where all included statements are either committed or aborted.
-
-Download the [`txn-sample.js`](https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/v2.0/app/txn-sample.js) file, or create the file yourself and copy the code into it.
-
-{{site.data.alerts.callout_info}}
-With the default `SERIALIZABLE` isolation level, CockroachDB may require the client to retry a transaction in case of read/write contention. The code sample below shows how to implement retry logic.
-{{site.data.alerts.end}}
-
-{% include copy-clipboard.html %}
-~~~ js
-{% include {{page.version.version}}/app/txn-sample.js %}
-~~~
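-
-In outline, the retry logic in the sample looks like the following (a simplified sketch with a hypothetical `runTransaction` helper; the real sample uses the `async` library, but the essential test of `err.code` against `40001` is the same):
-
-~~~ js
-// A simplified retry loop: CockroachDB reports retryable conflicts with
-// SQLSTATE 40001, which node-postgres exposes as err.code.
-function runTransaction(client, op, done) {
-  client.query('BEGIN; SAVEPOINT cockroach_restart', function (err) {
-    if (err) return done(err);
-    (function attempt() {
-      op(client, function (err) {
-        if (err) {
-          if (err.code !== '40001') return done(err); // not retryable
-          // Retryable: roll back to the savepoint and run op again.
-          return client.query('ROLLBACK TO SAVEPOINT cockroach_restart',
-            function (err2) {
-              if (err2) return done(err2);
-              attempt();
-            });
-        }
-        client.query('RELEASE SAVEPOINT cockroach_restart; COMMIT', done);
-      });
-    })();
-  });
-}
-~~~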
-
-Then run the code:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ node txn-sample.js
-~~~
-
-The output should be:
-
-~~~
-Balances after transfer:
-{ id: '1', balance: '900' }
-{ id: '2', balance: '350' }
-~~~
-
-To verify that funds were transferred from one account to another, start the [built-in SQL client](use-the-built-in-sql-client.html):
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql --certs-dir=certs --database=bank
-~~~
-
-To check the account balances, issue the following statement:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT id, balance FROM accounts;
-~~~
-
-~~~
-+----+---------+
-| id | balance |
-+----+---------+
-|  1 |     900 |
-|  2 |     350 |
-+----+---------+
-(2 rows)
-~~~
-
-
-
-
-
-## Step 2. Create the `maxroach` user and `bank` database
-
-{% include {{page.version.version}}/app/insecure/create-maxroach-user-and-bank-database.md %}
-
-## Step 3. Run the Node.js code
-
-Now that you have a database and a user, you'll run code to create a table and insert some rows, and then you'll run code to read and update values as an atomic [transaction](transactions.html).
-
-### Basic statements
-
-First, use the following code to connect as the `maxroach` user and execute some basic SQL statements, creating a table, inserting rows, and reading and printing the rows.
-
-Download the [`basic-sample.js`](https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/v2.0/app/insecure/basic-sample.js) file, or create the file yourself and copy the code into it.
-
-{% include copy-clipboard.html %}
-~~~ js
-{% include {{page.version.version}}/app/insecure/basic-sample.js %}
-~~~
-
-Then run the code:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ node basic-sample.js
-~~~
-
-The output should be:
-
-~~~
-Initial balances:
-{ id: '1', balance: '1000' }
-{ id: '2', balance: '250' }
-~~~
-
-### Transaction (with retry logic)
-
-Next, use the following code to again connect as the `maxroach` user but this time execute a batch of statements as an atomic transaction to transfer funds from one account to another and then read the updated values, where all included statements are either committed or aborted.
-
-Download the [`txn-sample.js`](https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/v2.0/app/insecure/txn-sample.js) file, or create the file yourself and copy the code into it.
-
-{{site.data.alerts.callout_info}}
-With the default `SERIALIZABLE` isolation level, CockroachDB may require the client to retry a transaction in case of read/write contention. The code sample below shows how to implement retry logic.
-{{site.data.alerts.end}}
-
-{% include copy-clipboard.html %}
-~~~ js
-{% include {{page.version.version}}/app/insecure/txn-sample.js %}
-~~~
-
-Then run the code:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ node txn-sample.js
-~~~
-
-The output should be:
-
-~~~
-Balances after transfer:
-{ id: '1', balance: '900' }
-{ id: '2', balance: '350' }
-~~~
-
-To verify that funds were transferred from one account to another, start the [built-in SQL client](use-the-built-in-sql-client.html):
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql --insecure --database=bank
-~~~
-
-To check the account balances, issue the following statement:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT id, balance FROM accounts;
-~~~
-
-~~~
-+----+---------+
-| id | balance |
-+----+---------+
-| 1 | 900 |
-| 2 | 350 |
-+----+---------+
-(2 rows)
-~~~
-
-
-
-## What's next?
-
-Read more about using the [Node.js pg driver](https://www.npmjs.com/package/pg).
-
-{% include {{page.version.version}}/app/see-also-links.md %}
diff --git a/src/current/v2.0/build-a-php-app-with-cockroachdb.md b/src/current/v2.0/build-a-php-app-with-cockroachdb.md
deleted file mode 100644
index fe0bcacc31c..00000000000
--- a/src/current/v2.0/build-a-php-app-with-cockroachdb.md
+++ /dev/null
@@ -1,175 +0,0 @@
----
-title: Build a PHP App with CockroachDB
-summary: Learn how to use CockroachDB from a simple PHP application with a low-level client driver.
-toc: true
-twitter: false
----
-
-This tutorial shows you how to build a simple PHP application with CockroachDB using a PostgreSQL-compatible driver.
-
-We have tested the [php-pgsql driver](https://www.php.net/manual/en/book.pgsql.php) enough to claim **beta-level** support, so that driver is featured here. If you encounter problems, please [open an issue](https://github.com/cockroachdb/cockroach/issues/new) with details to help us make progress toward full support.
-
-## Before you begin
-
-{% include {{page.version.version}}/app/before-you-begin.md %}
-
-## Step 1. Install the php-pgsql driver
-
-Install the php-pgsql driver as described in the [official documentation](https://www.php.net/manual/en/book.pgsql.php).
-
-
-
-## Step 2. Create the `maxroach` user and `bank` database
-
-{% include {{page.version.version}}/app/create-maxroach-user-and-bank-database.md %}
-
-## Step 3. Generate a certificate for the `maxroach` user
-
-Create a certificate and key for the `maxroach` user by running the following command. The code samples will run as this user.
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach cert create-client maxroach --certs-dir=certs --ca-key=my-safe-directory/ca.key
-~~~
-
-## Step 4. Run the PHP code
-
-Now that you have a database and a user, you'll run code to create a table and insert some rows, and then you'll run code to read and update values as an atomic [transaction](transactions.html).
-
-### Basic statements
-
-First, use the following code to connect as the `maxroach` user and execute some basic SQL statements, inserting rows and reading and printing the rows.
-
-Download the `basic-sample.php` file, or create the file yourself and copy the code into it.
-
-{% include copy-clipboard.html %}
-~~~ php
-{% include {{ page.version.version }}/app/basic-sample.php %}
-~~~
-
-The output should be:
-
-~~~ shell
-Account balances:
-1: 1000
-2: 250
-~~~
-
-### Transaction (with retry logic)
-
-Next, use the following code to again connect as the `maxroach` user but this time execute a batch of statements as an atomic transaction to transfer funds from one account to another, where all included statements are either committed or aborted.
-
-Download the `txn-sample.php` file, or create the file yourself and copy the code into it.
-
-{{site.data.alerts.callout_info}}
-With the default `SERIALIZABLE` isolation level, CockroachDB may require the [client to retry a transaction](transactions.html#transaction-retries) in case of read/write contention. CockroachDB provides a generic **retry function** that runs inside a transaction and retries it as needed. You can copy and paste the retry function from here into your code.
-{{site.data.alerts.end}}
-
-{% include copy-clipboard.html %}
-~~~ php
-{% include {{ page.version.version }}/app/txn-sample.php %}
-~~~
-
-The output should be:
-
-~~~ shell
-Account balances after transfer:
-1: 900
-2: 350
-~~~
-
-To verify that funds were transferred from one account to another, use the [built-in SQL client](use-the-built-in-sql-client.html):
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql --certs-dir=certs -e 'SELECT id, balance FROM accounts' --database=bank
-~~~
-
-~~~
-+----+---------+
-| id | balance |
-+----+---------+
-|  1 |     900 |
-|  2 |     350 |
-+----+---------+
-(2 rows)
-~~~
-
-
-
-
-
-## Step 2. Create the `maxroach` user and `bank` database
-
-{% include {{page.version.version}}/app/insecure/create-maxroach-user-and-bank-database.md %}
-
-## Step 3. Run the PHP code
-
-Now that you have a database and a user, you'll run code to create a table and insert some rows, and then you'll run code to read and update values as an atomic [transaction](transactions.html).
-
-### Basic statements
-
-First, use the following code to connect as the `maxroach` user and execute some basic SQL statements, inserting rows and reading and printing the rows.
-
-Download the `basic-sample.php` file, or create the file yourself and copy the code into it.
-
-{% include copy-clipboard.html %}
-~~~ php
-{% include {{ page.version.version }}/app/insecure/basic-sample.php %}
-~~~
-
-The output should be:
-
-~~~ shell
-Account balances:
-1: 1000
-2: 250
-~~~
-
-### Transaction (with retry logic)
-
-Next, use the following code to again connect as the `maxroach` user but this time execute a batch of statements as an atomic transaction to transfer funds from one account to another, where all included statements are either committed or aborted.
-
-Download the `txn-sample.php` file, or create the file yourself and copy the code into it.
-
-{{site.data.alerts.callout_info}}
-With the default `SERIALIZABLE` isolation level, CockroachDB may require the [client to retry a transaction](transactions.html#transaction-retries) in case of read/write contention. CockroachDB provides a generic **retry function** that runs inside a transaction and retries it as needed. You can copy and paste the retry function from here into your code.
-{{site.data.alerts.end}}
-
-{% include copy-clipboard.html %}
-~~~ php
-{% include {{ page.version.version }}/app/insecure/txn-sample.php %}
-~~~
-
-The output should be:
-
-~~~ shell
-Account balances after transfer:
-1: 900
-2: 350
-~~~
-
-To verify that funds were transferred from one account to another, use the [built-in SQL client](use-the-built-in-sql-client.html):
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql --insecure -e 'SELECT id, balance FROM accounts' --database=bank
-~~~
-
-~~~
-+----+---------+
-| id | balance |
-+----+---------+
-|  1 |     900 |
-|  2 |     350 |
-+----+---------+
-(2 rows)
-~~~
-
-
-
-## What's next?
-
-Read more about using the [php-pgsql driver](https://www.php.net/manual/en/book.pgsql.php).
-
-{% include {{ page.version.version }}/app/see-also-links.md %}
diff --git a/src/current/v2.0/build-a-python-app-with-cockroachdb-sqlalchemy.md b/src/current/v2.0/build-a-python-app-with-cockroachdb-sqlalchemy.md
deleted file mode 100644
index 21b0a20c09b..00000000000
--- a/src/current/v2.0/build-a-python-app-with-cockroachdb-sqlalchemy.md
+++ /dev/null
@@ -1,179 +0,0 @@
----
-title: Build a Python App with CockroachDB
-summary: Learn how to use CockroachDB from a simple Python application with the SQLAlchemy ORM.
-toc: true
-twitter: false
----
-
-
-
-
-
-
-This tutorial shows you how to build a simple Python application with CockroachDB using a PostgreSQL-compatible driver or ORM.
-
-We have tested the [Python psycopg2 driver](http://initd.org/psycopg/docs/) and the [SQLAlchemy ORM](https://docs.sqlalchemy.org/en/latest/) enough to claim **beta-level** support, so those are featured here. If you encounter problems, please [open an issue](https://github.com/cockroachdb/cockroach/issues/new) with details to help us make progress toward full support.
-
-{{site.data.alerts.callout_success}}
-For a more realistic use of SQLAlchemy with CockroachDB, see our [`examples-orms`](https://github.com/cockroachdb/examples-orms) repository.
-{{site.data.alerts.end}}
-
-## Before you begin
-
-{% include {{page.version.version}}/app/before-you-begin.md %}
-
-## Step 1. Install the SQLAlchemy ORM
-
-To install SQLAlchemy, as well as a [CockroachDB Python package](https://github.com/cockroachdb/sqlalchemy-cockroachdb) that accounts for some minor differences between CockroachDB and PostgreSQL, run the following command:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ pip install sqlalchemy sqlalchemy-cockroachdb psycopg2
-~~~
-
-{{site.data.alerts.callout_success}}
-In place of psycopg2, you can use any alternative that includes the `psycopg` Python package.
-{{site.data.alerts.end}}
-
-For other ways to install SQLAlchemy, see the [official documentation](http://docs.sqlalchemy.org/en/latest/intro.html#installation-guide).
-
-
-
-## Step 2. Create the `maxroach` user and `bank` database
-
-{% include {{page.version.version}}/app/create-maxroach-user-and-bank-database.md %}
-
-## Step 3. Generate a certificate for the `maxroach` user
-
-Create a certificate and key for the `maxroach` user by running the following command. The code samples will run as this user.
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach cert create-client maxroach --certs-dir=certs --ca-key=my-safe-directory/ca.key
-~~~
-
-## Step 4. Run the Python code
-
-The following code uses the [SQLAlchemy ORM](https://docs.sqlalchemy.org/en/latest/) to map Python-specific objects to SQL operations. Specifically, `Base.metadata.create_all(engine)` creates an `accounts` table based on the Account class, `session.add_all([Account(), ...])` inserts rows into the table, and `session.query(Account)` selects from the table so that balances can be printed.
-
-{{site.data.alerts.callout_info}}
-The `sqlalchemy-cockroachdb` Python package installed earlier is triggered by the `cockroachdb://` prefix in the engine URL. Using `postgres://` to connect to your cluster will not work.
-{{site.data.alerts.end}}
-
-Copy the code or
-download it directly.
-
-{% include copy-clipboard.html %}
-~~~ python
-{% include {{page.version.version}}/app/sqlalchemy-basic-sample.py %}
-~~~
-
-Then run the code:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ python sqlalchemy-basic-sample.py
-~~~
-
-The output should be:
-
-~~~ shell
-1 1000
-2 250
-~~~
-
-To verify that the table and rows were created successfully, start the [built-in SQL client](use-the-built-in-sql-client.html):
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql --certs-dir=certs --database=bank
-~~~
-
-Then, issue the following statement:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT id, balance FROM accounts;
-~~~
-
-~~~
-+----+---------+
-| id | balance |
-+----+---------+
-|  1 |    1000 |
-|  2 |     250 |
-+----+---------+
-(2 rows)
-~~~
-
-
-
-
-
-## Step 2. Create the `maxroach` user and `bank` database
-
-{% include {{page.version.version}}/app/insecure/create-maxroach-user-and-bank-database.md %}
-
-## Step 3. Run the Python code
-
-The following code uses the [SQLAlchemy ORM](https://docs.sqlalchemy.org/en/latest/) to map Python-specific objects to SQL operations. Specifically, `Base.metadata.create_all(engine)` creates an `accounts` table based on the Account class, `session.add_all([Account(), ...])` inserts rows into the table, and `session.query(Account)` selects from the table so that balances can be printed.
-
-{{site.data.alerts.callout_info}}
-The `sqlalchemy-cockroachdb` Python package installed earlier is triggered by the `cockroachdb://` prefix in the engine URL. Using `postgres://` to connect to your cluster will not work.
-{{site.data.alerts.end}}
-
-Copy the code or
-download it directly.
-
-{% include copy-clipboard.html %}
-~~~ python
-{% include {{page.version.version}}/app/insecure/sqlalchemy-basic-sample.py %}
-~~~
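-
-Its essential structure is roughly the following (a sketch only, not the full sample; it assumes an insecure local cluster and the dialect registered by `sqlalchemy-cockroachdb`):
-
-~~~ python
-from sqlalchemy import Column, Integer, create_engine
-from sqlalchemy.ext.declarative import declarative_base
-from sqlalchemy.orm import sessionmaker
-
-Base = declarative_base()
-
-# The Account class maps to the "accounts" table.
-class Account(Base):
-    __tablename__ = 'accounts'
-    id = Column(Integer, primary_key=True)
-    balance = Column(Integer)
-
-# The cockroachdb:// prefix selects the sqlalchemy-cockroachdb dialect.
-engine = create_engine('cockroachdb://maxroach@localhost:26257/bank')
-Base.metadata.create_all(engine)
-
-session = sessionmaker(bind=engine)()
-session.add_all([Account(id=1, balance=1000), Account(id=2, balance=250)])
-session.commit()
-
-for account in session.query(Account):
-    print(account.id, account.balance)
-~~~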
-
-Then run the code:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ python sqlalchemy-basic-sample.py
-~~~
-
-The output should be:
-
-~~~ shell
-1 1000
-2 250
-~~~
-
-To verify that the table and rows were created successfully, start the [built-in SQL client](use-the-built-in-sql-client.html):
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql --insecure --database=bank
-~~~
-
-Then, issue the following statement:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT id, balance FROM accounts;
-~~~
-
-~~~
-+----+---------+
-| id | balance |
-+----+---------+
-|  1 |    1000 |
-|  2 |     250 |
-+----+---------+
-(2 rows)
-~~~
-
-
-
-## What's next?
-
-Read more about using the [SQLAlchemy ORM](https://docs.sqlalchemy.org/en/latest/), or check out a more realistic implementation of SQLAlchemy with CockroachDB in our [`examples-orms`](https://github.com/cockroachdb/examples-orms) repository.
-
-{% include {{page.version.version}}/app/see-also-links.md %}
diff --git a/src/current/v2.0/build-a-python-app-with-cockroachdb.md b/src/current/v2.0/build-a-python-app-with-cockroachdb.md
deleted file mode 100644
index 493683440fa..00000000000
--- a/src/current/v2.0/build-a-python-app-with-cockroachdb.md
+++ /dev/null
@@ -1,231 +0,0 @@
----
-title: Build a Python App with CockroachDB
-summary: Learn how to use CockroachDB from a simple Python application with the psycopg2 driver.
-toc: true
-twitter: false
----
-
-
-
-
-
-
-This tutorial shows you how to build a simple Python application with CockroachDB using a PostgreSQL-compatible driver or ORM.
-
-We have tested the [Python psycopg2 driver](http://initd.org/psycopg/docs/) and the [SQLAlchemy ORM](https://docs.sqlalchemy.org/en/latest/) enough to claim **beta-level** support, so those are featured here. If you encounter problems, please [open an issue](https://github.com/cockroachdb/cockroach/issues/new) with details to help us make progress toward full support.
-
-## Before you begin
-
-{% include {{page.version.version}}/app/before-you-begin.md %}
-
-## Step 1. Install the psycopg2 driver
-
-To install the Python psycopg2 driver, run the following command:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ pip install psycopg2
-~~~
-
-For other ways to install psycopg2, see the [official documentation](http://initd.org/psycopg/docs/install.html).
-
-
-
-## Step 2. Create the `maxroach` user and `bank` database
-
-{% include {{page.version.version}}/app/create-maxroach-user-and-bank-database.md %}
-
-## Step 3. Generate a certificate for the `maxroach` user
-
-Create a certificate and key for the `maxroach` user by running the following command. The code samples will run as this user.
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach cert create-client maxroach --certs-dir=certs --ca-key=my-safe-directory/ca.key
-~~~
-
-## Step 4. Run the Python code
-
-Now that you have a database and a user, you'll run the code shown below to:
-
-- Create a table and insert some rows
-- Read and update values as an atomic [transaction](transactions.html)
-
-### Basic statements
-
-First, use the following code to connect as the `maxroach` user and execute some basic SQL statements, creating a table, inserting rows, and reading and printing the rows.
-
-Download the `basic-sample.py` file, or create the file yourself and copy the code into it.
-
-{% include copy-clipboard.html %}
-~~~ python
-{% include {{page.version.version}}/app/basic-sample.py %}
-~~~
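-
-In outline, the sample does roughly the following (a minimal sketch rather than the full `basic-sample.py`; the connection parameters assume the certificate layout from Step 3):
-
-~~~ python
-import psycopg2
-
-# Connect as maxroach, using the certificates generated in Step 3.
-conn = psycopg2.connect(
-    database='bank',
-    user='maxroach',
-    host='localhost',
-    port=26257,
-    sslmode='verify-full',
-    sslrootcert='certs/ca.crt',
-    sslcert='certs/client.maxroach.crt',
-    sslkey='certs/client.maxroach.key')
-conn.set_session(autocommit=True)
-
-with conn.cursor() as cur:
-    # Create the "accounts" table and insert two rows.
-    cur.execute('CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT)')
-    cur.execute('UPSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250)')
-
-    # Read and print the rows.
-    cur.execute('SELECT id, balance FROM accounts')
-    print('Initial balances:')
-    for row in cur.fetchall():
-        print([str(cell) for cell in row])
-
-conn.close()
-~~~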
-
-Then run the code:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ python basic-sample.py
-~~~
-
-The output should be:
-
-~~~
-Initial balances:
-['1', '1000']
-['2', '250']
-~~~
-
-### Transaction (with retry logic)
-
-Next, use the following code to again connect as the `maxroach` user but this time execute a batch of statements as an atomic transaction to transfer funds from one account to another, where all included statements are either committed or aborted.
-
-Download the `txn-sample.py` file, or create the file yourself and copy the code into it.
-
-{{site.data.alerts.callout_info}}
-With the default `SERIALIZABLE` isolation level, CockroachDB may require the client to retry a transaction in case of read/write contention. CockroachDB provides a generic retry function that runs inside a transaction and retries it as needed. You can copy and paste the retry function from here into your code.
-{{site.data.alerts.end}}
-
-{% include copy-clipboard.html %}
-~~~ python
-{% include {{page.version.version}}/app/txn-sample.py %}
-~~~
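-
-In outline, the retry function in `txn-sample.py` looks like the following (a simplified sketch with hypothetical helper names; the essential test is `pgcode` against `40001`, i.e., `SERIALIZATION_FAILURE`):
-
-~~~ python
-import psycopg2
-import psycopg2.errorcodes
-
-def _exec(conn, stmt):
-    with conn.cursor() as cur:
-        cur.execute(stmt)
-
-def run_transaction(conn, op):
-    # Open the transaction and mark the retry point.
-    _exec(conn, 'SAVEPOINT cockroach_restart')
-    while True:
-        try:
-            op(conn)  # the caller's statements, e.g., the two UPDATEs
-            _exec(conn, 'RELEASE SAVEPOINT cockroach_restart')
-            conn.commit()
-            return
-        except psycopg2.OperationalError as e:
-            if e.pgcode != psycopg2.errorcodes.SERIALIZATION_FAILURE:
-                raise  # not retryable; propagate
-            # Retryable: roll back to the savepoint and try op again.
-            _exec(conn, 'ROLLBACK TO SAVEPOINT cockroach_restart')
-~~~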
-
-Then run the code:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ python txn-sample.py
-~~~
-
-The output should be:
-
-~~~
-Balances after transfer:
-['1', '900']
-['2', '350']
-~~~
-
-To verify that funds were transferred from one account to another, start the [built-in SQL client](use-the-built-in-sql-client.html):
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql --certs-dir=certs --database=bank
-~~~
-
-To check the account balances, issue the following statement:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT id, balance FROM accounts;
-~~~
-
-~~~
-+----+---------+
-| id | balance |
-+----+---------+
-|  1 |     900 |
-|  2 |     350 |
-+----+---------+
-(2 rows)
-~~~
-
-
-
-
-
-## Step 2. Create the `maxroach` user and `bank` database
-
-{% include {{page.version.version}}/app/insecure/create-maxroach-user-and-bank-database.md %}
-
-## Step 3. Run the Python code
-
-Now that you have a database and a user, you'll run the code shown below to:
-
-- Create a table and insert some rows
-- Read and update values as an atomic [transaction](transactions.html)
-
-### Basic statements
-
-First, use the following code to connect as the `maxroach` user and execute some basic SQL statements, creating a table, inserting rows, and reading and printing the rows.
-
-Download the `basic-sample.py` file, or create the file yourself and copy the code into it.
-
-{% include copy-clipboard.html %}
-~~~ python
-{% include {{page.version.version}}/app/insecure/basic-sample.py %}
-~~~
-
-Then run the code:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ python basic-sample.py
-~~~
-
-The output should be:
-
-~~~
-Initial balances:
-['1', '1000']
-['2', '250']
-~~~
-
-### Transaction (with retry logic)
-
-Next, use the following code to again connect as the `maxroach` user but this time execute a batch of statements as an atomic transaction to transfer funds from one account to another, where all included statements are either committed or aborted.
-
-Download the `txn-sample.py` file, or create the file yourself and copy the code into it.
-
-{{site.data.alerts.callout_info}}
-With the default `SERIALIZABLE` isolation level, CockroachDB may require the client to retry a transaction in case of read/write contention. CockroachDB provides a generic retry function that runs inside a transaction and retries it as needed. You can copy and paste the retry function from here into your code.
-{{site.data.alerts.end}}
-
-{% include copy-clipboard.html %}
-~~~ python
-{% include {{page.version.version}}/app/insecure/txn-sample.py %}
-~~~
-
-Then run the code:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ python txn-sample.py
-~~~
-
-The output should be:
-
-~~~
-Balances after transfer:
-['1', '900']
-['2', '350']
-~~~
-
-To verify that funds were transferred from one account to another, start the [built-in SQL client](use-the-built-in-sql-client.html):
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql --insecure --database=bank
-~~~
-
-To check the account balances, issue the following statement:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT id, balance FROM accounts;
-~~~
-
-~~~
-+----+---------+
-| id | balance |
-+----+---------+
-|  1 |     900 |
-|  2 |     350 |
-+----+---------+
-(2 rows)
-~~~
-
-
-
-## What's next?
-
-Read more about using the [Python psycopg2 driver](http://initd.org/psycopg/docs/).
-
-{% include {{page.version.version}}/app/see-also-links.md %}
diff --git a/src/current/v2.0/build-a-ruby-app-with-cockroachdb-activerecord.md b/src/current/v2.0/build-a-ruby-app-with-cockroachdb-activerecord.md
deleted file mode 100644
index 22144ad57f6..00000000000
--- a/src/current/v2.0/build-a-ruby-app-with-cockroachdb-activerecord.md
+++ /dev/null
@@ -1,171 +0,0 @@
----
-title: Build a Ruby App with CockroachDB
-summary: Learn how to use CockroachDB from a simple Ruby application with the ActiveRecord ORM.
-toc: true
-twitter: false
----
-
-
-
-
-
-
-This tutorial shows you how to build a simple Ruby application with CockroachDB using a PostgreSQL-compatible driver or ORM.
-
-We have tested the [Ruby pg driver](https://rubygems.org/gems/pg) and the [ActiveRecord ORM](http://guides.rubyonrails.org/active_record_basics.html) enough to claim **beta-level** support, so those are featured here. If you encounter problems, please [open an issue](https://github.com/cockroachdb/cockroach/issues/new) with details to help us make progress toward full support.
-
-{{site.data.alerts.callout_success}}
-For a more realistic use of ActiveRecord with CockroachDB, see our [`examples-orms`](https://github.com/cockroachdb/examples-orms) repository.
-{{site.data.alerts.end}}
-
-## Before you begin
-
-{% include {{page.version.version}}/app/before-you-begin.md %}
-
-## Step 1. Install the ActiveRecord ORM
-
-To install ActiveRecord as well as the [pg driver](https://rubygems.org/gems/pg) and a [CockroachDB Ruby package](https://github.com/cockroachdb/activerecord-cockroachdb-adapter) that accounts for some minor differences between CockroachDB and PostgreSQL, run the following command:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ gem install activerecord pg activerecord-cockroachdb-adapter
-~~~
-
-{{site.data.alerts.callout_info}}
-The exact command above will vary depending on the desired version of ActiveRecord. Specifically, version 4.2.x of ActiveRecord requires version 0.1.x of the adapter; version 5.1.x of ActiveRecord requires version 0.2.x of the adapter.
-{{site.data.alerts.end}}
-
-
-
-## Step 2. Create the `maxroach` user and `bank` database
-
-{% include {{page.version.version}}/app/create-maxroach-user-and-bank-database.md %}
-
-## Step 3. Generate a certificate for the `maxroach` user
-
-Create a certificate and key for the `maxroach` user by running the following command. The code samples will run as this user.
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach cert create-client maxroach --certs-dir=certs --ca-key=my-safe-directory/ca.key
-~~~
-
-## Step 4. Run the Ruby code
-
-The following code uses the [ActiveRecord](http://guides.rubyonrails.org/active_record_basics.html) ORM to map Ruby-specific objects to SQL operations. Specifically, `Schema.new.change()` creates an `accounts` table based on the Account model (or drops and recreates the table if it already exists), `Account.create()` inserts rows into the table, and `Account.all` selects from the table so that balances can be printed.
-
-Copy the code or
-download it directly.
-
-{% include copy-clipboard.html %}
-~~~ ruby
-{% include {{page.version.version}}/app/activerecord-basic-sample.rb %}
-~~~
-
-Then run the code:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ ruby activerecord-basic-sample.rb
-~~~
-
-The output should be:
-
-~~~ shell
--- create_table(:accounts, {:force=>true})
- -> 0.0361s
-1 1000
-2 250
-~~~
-
-To verify that the table and rows were created successfully, start the [built-in SQL client](use-the-built-in-sql-client.html):
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql --certs-dir=certs --database=bank
-~~~
-
-Then, issue the following statement:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT id, balance FROM accounts;
-~~~
-
-~~~
-+----+---------+
-| id | balance |
-+----+---------+
-| 1 | 1000 |
-| 2 | 250 |
-+----+---------+
-(2 rows)
-~~~
-
-
-
-
-
-## Step 2. Create the `maxroach` user and `bank` database
-
-{% include {{page.version.version}}/app/insecure/create-maxroach-user-and-bank-database.md %}
-
-## Step 3. Run the Ruby code
-
-The following code uses the [ActiveRecord](http://guides.rubyonrails.org/active_record_basics.html) ORM to map Ruby-specific objects to SQL operations. Specifically, `Schema.new.change()` creates an `accounts` table based on the Account model (or drops and recreates the table if it already exists), `Account.create()` inserts rows into the table, and `Account.all` selects from the table so that balances can be printed.
-
-Copy the code or
-download it directly.
-
-{% include copy-clipboard.html %}
-~~~ ruby
-{% include {{page.version.version}}/app/insecure/activerecord-basic-sample.rb %}
-~~~
-
-Then run the code:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ ruby activerecord-basic-sample.rb
-~~~
-
-The output should be:
-
-~~~ shell
--- create_table(:accounts, {:force=>true})
- -> 0.0361s
-1 1000
-2 250
-~~~
-
-To verify that the table and rows were created successfully, start the [built-in SQL client](use-the-built-in-sql-client.html):
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql --insecure --database=bank
-~~~
-
-Then, issue the following statement:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT id, balance FROM accounts;
-~~~
-
-~~~
-+----+---------+
-| id | balance |
-+----+---------+
-| 1 | 1000 |
-| 2 | 250 |
-+----+---------+
-(2 rows)
-~~~
-
-
-
-## What's next?
-
-Read more about using the [ActiveRecord ORM](http://guides.rubyonrails.org/active_record_basics.html), or check out a more realistic implementation of ActiveRecord with CockroachDB in our [`examples-orms`](https://github.com/cockroachdb/examples-orms) repository.
-
-{% include {{page.version.version}}/app/see-also-links.md %}
diff --git a/src/current/v2.0/build-a-ruby-app-with-cockroachdb.md b/src/current/v2.0/build-a-ruby-app-with-cockroachdb.md
deleted file mode 100644
index f461cee9b94..00000000000
--- a/src/current/v2.0/build-a-ruby-app-with-cockroachdb.md
+++ /dev/null
@@ -1,211 +0,0 @@
----
-title: Build a Ruby App with CockroachDB
-summary: Learn how to use CockroachDB from a simple Ruby application with the pg client driver.
-toc: true
-twitter: false
----
-
-
-
-
-
-
-This tutorial shows you how to build a simple Ruby application with CockroachDB using a PostgreSQL-compatible driver or ORM.
-
-We have tested the [Ruby pg driver](https://rubygems.org/gems/pg) and the [ActiveRecord ORM](http://guides.rubyonrails.org/active_record_basics.html) enough to claim **beta-level** support, so those are featured here. If you encounter problems, please [open an issue](https://github.com/cockroachdb/cockroach/issues/new) with details to help us make progress toward full support.
-
-## Before you begin
-
-{% include {{page.version.version}}/app/before-you-begin.md %}
-
-## Step 1. Install the Ruby pg driver
-
-To install the [Ruby pg driver](https://rubygems.org/gems/pg), run the following command:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ gem install pg
-~~~
-
-
-
-## Step 2. Create the `maxroach` user and `bank` database
-
-{% include {{page.version.version}}/app/create-maxroach-user-and-bank-database.md %}
-
-## Step 3. Generate a certificate for the `maxroach` user
-
-Create a certificate and key for the `maxroach` user by running the following command. The code samples will run as this user.
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach cert create-client maxroach --certs-dir=certs --ca-key=my-safe-directory/ca.key
-~~~
-
-## Step 4. Run the Ruby code
-
-Now that you have a database and a user, you'll run code to create a table and insert some rows, and then you'll run code to read and update values as an atomic [transaction](transactions.html).
-
-### Basic statements
-
-The following code connects as the `maxroach` user and executes some basic SQL statements: creating a table, inserting rows, and reading and printing the rows.
-
-Download the basic-sample.rb file, or create the file yourself and copy the code into it.
-
-{% include copy-clipboard.html %}
-~~~ ruby
-{% include {{page.version.version}}/app/basic-sample.rb %}
-~~~
-
-Then run the code:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ ruby basic-sample.rb
-~~~
-
-The output should be:
-
-~~~
-Initial balances:
-{"id"=>"1", "balance"=>"1000"}
-{"id"=>"2", "balance"=>"250"}
-~~~
-
-### Transaction (with retry logic)
-
-Next, use the following code to again connect as the `maxroach` user but this time execute a batch of statements as an atomic transaction to transfer funds from one account to another, where all included statements are either committed or aborted.
-
-Download the txn-sample.rb file, or create the file yourself and copy the code into it.
-
-{{site.data.alerts.callout_info}}
-With the default `SERIALIZABLE` isolation level, CockroachDB may require the client to retry a transaction in case of read/write contention. The code sample below shows how to implement retry logic.
-{{site.data.alerts.end}}
-
-{% include copy-clipboard.html %}
-~~~ ruby
-{% include {{page.version.version}}/app/txn-sample.rb %}
-~~~
-
-Then run the code:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ ruby txn-sample.rb
-~~~
-
-To verify that funds were transferred from one account to another, start the [built-in SQL client](use-the-built-in-sql-client.html):
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql --certs-dir=certs --database=bank
-~~~
-
-To check the account balances, issue the following statement:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT id, balance FROM accounts;
-~~~
-
-~~~
-+----+---------+
-| id | balance |
-+----+---------+
-| 1 | 900 |
-| 2 | 350 |
-+----+---------+
-(2 rows)
-~~~
-
-
-
-
-
-## Step 2. Create the `maxroach` user and `bank` database
-
-{% include {{page.version.version}}/app/insecure/create-maxroach-user-and-bank-database.md %}
-
-## Step 3. Run the Ruby code
-
-Now that you have a database and a user, you'll run code to create a table and insert some rows, and then you'll run code to read and update values as an atomic [transaction](transactions.html).
-
-### Basic statements
-
-The following code connects as the `maxroach` user and executes some basic SQL statements: creating a table, inserting rows, and reading and printing the rows.
-
-Download the basic-sample.rb file, or create the file yourself and copy the code into it.
-
-{% include copy-clipboard.html %}
-~~~ ruby
-{% include {{page.version.version}}/app/insecure/basic-sample.rb %}
-~~~
-
-Then run the code:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ ruby basic-sample.rb
-~~~
-
-The output should be:
-
-~~~
-Initial balances:
-{"id"=>"1", "balance"=>"1000"}
-{"id"=>"2", "balance"=>"250"}
-~~~
-
-### Transaction (with retry logic)
-
-Next, use the following code to again connect as the `maxroach` user but this time execute a batch of statements as an atomic transaction to transfer funds from one account to another, where all included statements are either committed or aborted.
-
-Download the txn-sample.rb file, or create the file yourself and copy the code into it.
-
-{{site.data.alerts.callout_info}}
-With the default `SERIALIZABLE` isolation level, CockroachDB may require the client to retry a transaction in case of read/write contention. The code sample below shows how to implement retry logic.
-{{site.data.alerts.end}}
-
-{% include copy-clipboard.html %}
-~~~ ruby
-{% include {{page.version.version}}/app/insecure/txn-sample.rb %}
-~~~
-
-Then run the code:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ ruby txn-sample.rb
-~~~
-
-To verify that funds were transferred from one account to another, start the [built-in SQL client](use-the-built-in-sql-client.html):
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql --insecure --database=bank
-~~~
-
-To check the account balances, issue the following statement:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT id, balance FROM accounts;
-~~~
-
-~~~
-+----+---------+
-| id | balance |
-+----+---------+
-| 1 | 900 |
-| 2 | 350 |
-+----+---------+
-(2 rows)
-~~~
-
-
-
-## What's next?
-
-Read more about using the [Ruby pg driver](https://rubygems.org/gems/pg).
-
-{% include {{page.version.version}}/app/see-also-links.md %}
diff --git a/src/current/v2.0/build-a-rust-app-with-cockroachdb.md b/src/current/v2.0/build-a-rust-app-with-cockroachdb.md
deleted file mode 100644
index 7cab3fb80ce..00000000000
--- a/src/current/v2.0/build-a-rust-app-with-cockroachdb.md
+++ /dev/null
@@ -1,84 +0,0 @@
----
-title: Build a Rust App with CockroachDB
-summary: Learn how to use CockroachDB from a simple Rust application with a low-level client driver.
-toc: true
-twitter: false
----
-
-This tutorial shows you how to build a simple Rust application with CockroachDB using a PostgreSQL-compatible driver.
-
-We have tested the Rust Postgres driver enough to claim **beta-level** support, so that driver is featured here. If you encounter problems, please [open an issue](https://github.com/cockroachdb/cockroach/issues/new) with details to help us make progress toward full support.
-
-
-## Before You Begin
-
-Make sure you have already [installed CockroachDB](install-cockroachdb.html).
-
-## Step 1. Install the Rust Postgres driver
-
-Install the Rust Postgres driver as described in the official documentation.
-
-{% include {{ page.version.version }}/app/common-steps.md %}
-
-## Step 5. Create a table in the new database
-
-As the `maxroach` user, use the [built-in SQL client](use-the-built-in-sql-client.html) to create an `accounts` table in the new database.
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql --insecure \
---database=bank \
---user=maxroach \
--e 'CREATE TABLE accounts (id INT PRIMARY KEY, balance INT)'
-~~~
-
-## Step 6. Run the Rust code
-
-Now that you have a database, a user, and a table, you'll run code to insert some rows, and then you'll run code to read and update values as an atomic [transaction](transactions.html).
-
-### Basic Statements
-
-First, use the following code to connect as the `maxroach` user and execute some basic SQL statements, inserting rows and reading and printing the rows.
-
-Download the basic-sample.rs file, or create the file yourself and copy the code into it.
-
-{% include copy-clipboard.html %}
-~~~ rust
-{% include {{ page.version.version }}/app/basic-sample.rs %}
-~~~
-
-### Transaction (with retry logic)
-
-Next, use the following code to again connect as the `maxroach` user but this time execute a batch of statements as an atomic transaction to transfer funds from one account to another, where all included statements are either committed or aborted.
-
-Download the txn-sample.rs file, or create the file yourself and copy the code into it.
-
-{{site.data.alerts.callout_info}}With the default SERIALIZABLE isolation level, CockroachDB may require the client to retry a transaction in case of read/write contention. CockroachDB provides a generic retry function that runs inside a transaction and retries it as needed. You can copy and paste the retry function from the code sample below into your code.{{site.data.alerts.end}}
-
-{% include copy-clipboard.html %}
-~~~ rust
-{% include {{ page.version.version }}/app/txn-sample.rs %}
-~~~
-
-After running the code, use the [built-in SQL client](use-the-built-in-sql-client.html) to verify that funds were transferred from one account to another:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql --insecure -e 'SELECT id, balance FROM accounts' --database=bank
-~~~
-
-~~~
-+----+---------+
-| id | balance |
-+----+---------+
-| 1 | 900 |
-| 2 | 350 |
-+----+---------+
-(2 rows)
-~~~
-
-## What's Next?
-
-Read more about using the Rust Postgres driver.
-
-{% include {{ page.version.version }}/app/see-also-links.md %}
diff --git a/src/current/v2.0/build-an-app-with-cockroachdb.md b/src/current/v2.0/build-an-app-with-cockroachdb.md
deleted file mode 100644
index 9e66d914d34..00000000000
--- a/src/current/v2.0/build-an-app-with-cockroachdb.md
+++ /dev/null
@@ -1,24 +0,0 @@
----
-title: Build an App with CockroachDB
-summary: The tutorials in this section show you how to build a simple application with CockroachDB, using PostgreSQL-compatible client drivers and ORMs.
-tags: golang, python, java
-toc: false
-twitter: false
----
-
-The tutorials in this section show you how to build a simple application with CockroachDB using PostgreSQL-compatible client drivers and ORMs.
-
-{{site.data.alerts.callout_info}}We have tested the drivers and ORMs featured here enough to claim beta-level support. This means that applications using advanced or obscure features of a driver or ORM may encounter incompatibilities. If you encounter problems, please open an issue with details to help us make progress toward full support.{{site.data.alerts.end}}
-
-App Language | Featured Driver | Featured ORM
--------------|-----------------|-------------
-Go | [pq](build-a-go-app-with-cockroachdb.html) | [GORM](build-a-go-app-with-cockroachdb-gorm.html)
-Python | [psycopg2](build-a-python-app-with-cockroachdb.html) | [SQLAlchemy](build-a-python-app-with-cockroachdb-sqlalchemy.html)
-Ruby | [pg](build-a-ruby-app-with-cockroachdb.html) | [ActiveRecord](build-a-ruby-app-with-cockroachdb-activerecord.html)
-Java | [jdbc](build-a-java-app-with-cockroachdb.html) | [Hibernate](build-a-java-app-with-cockroachdb-hibernate.html)
-Node.js | [pg](build-a-nodejs-app-with-cockroachdb.html) | [Sequelize](build-a-nodejs-app-with-cockroachdb-sequelize.html)
-C++ | [libpqxx](build-a-c++-app-with-cockroachdb.html) | No ORMs tested
-C# (.NET) | [Npgsql](build-a-csharp-app-with-cockroachdb.html) | No ORMs tested
-Clojure | [java.jdbc](build-a-clojure-app-with-cockroachdb.html) | No ORMs tested
-PHP | [php-pgsql](build-a-php-app-with-cockroachdb.html) | No ORMs tested
-Rust | [postgres](build-a-rust-app-with-cockroachdb.html) | No ORMs tested
diff --git a/src/current/v2.0/bytes.md b/src/current/v2.0/bytes.md
deleted file mode 100644
index b8ac1026b6f..00000000000
--- a/src/current/v2.0/bytes.md
+++ /dev/null
@@ -1,82 +0,0 @@
----
-title: BYTES
-summary: The BYTES data type stores binary strings of variable length.
-toc: true
----
-
-The `BYTES` [data type](data-types.html) stores binary strings of variable length.
-
-
-## Aliases
-
-In CockroachDB, the following are aliases for `BYTES`:
-
-- `BYTEA`
-- `BLOB`
-
-## Syntax
-
-To express a byte array constant, see the section on
-[byte array literals](sql-constants.html#byte-array-literals) for more
-details. For example, the following three are equivalent literals for the same
-byte array: `b'abc'`, `b'\141\142\143'`, `b'\x61\x62\x63'`.
-
-In addition to this syntax, CockroachDB also supports using
-[string literals](sql-constants.html#string-literals), including the
-syntax `'...'`, `e'...'` and `x'....'` in contexts where a byte array
-is otherwise expected.
-
-## Size
-
-The size of a `BYTES` value is variable, but it's recommended to keep values under 1 MB to ensure performance. Above that threshold, [write amplification](https://en.wikipedia.org/wiki/Write_amplification) and other considerations may cause significant performance degradation.
-
-## Example
-
-~~~ sql
-> CREATE TABLE bytes (a INT PRIMARY KEY, b BYTES);
-
-> -- explicitly typed BYTES literals
-> INSERT INTO bytes VALUES (1, b'\141\142\143'), (2, b'\x61\x62\x63'), (3, b'\141\x62\c');
-
-> -- string literal implicitly typed as BYTES
-> INSERT INTO bytes VALUES (4, 'abc');
-
-
-> SELECT * FROM bytes;
-~~~
-~~~
-+---+-----+
-| a | b |
-+---+-----+
-| 1 | abc |
-| 2 | abc |
-| 3 | abc |
-| 4 | abc |
-+---+-----+
-(4 rows)
-~~~
-
-## Supported Conversions
-
-`BYTES` values can be
-[cast](data-types.html#data-type-conversions-casts) explicitly to
-[`STRING`](string.html). The output of the conversion starts with the
-two characters `\`, `x` and the rest of the string is composed by the
-hexadecimal encoding of each byte in the input. For example,
-`x'48AA'::STRING` produces `'\x48AA'`.
-
-`STRING` values can be cast explicitly to `BYTES`. This conversion
-will fail if the hexadecimal digits are not valid, or if there is an
-odd number of them. Two conversion modes are supported, both illustrated in the example after this list:
-
-- If the string starts with the two special characters `\` and `x`
- (e.g., `\xAABB`), the rest of the string is interpreted as a sequence
- of hexadecimal digits. The string is then converted to a byte array
- where each pair of hexadecimal digits is converted to one byte.
-
-- Otherwise, the string is converted to a byte array that contains
- its UTF-8 encoding.
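-
-A quick illustration of both modes, with outputs following the rules above:
-
-~~~ sql
-> SELECT x'48AA'::STRING;  -- '\x48AA'
-
-> SELECT '\x6162'::BYTES;  -- hexadecimal mode: equivalent to b'ab'
-
-> SELECT 'ab'::BYTES;      -- UTF-8 mode: also equivalent to b'ab'
-~~~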
-
-## See Also
-
-[Data Types](data-types.html)
diff --git a/src/current/v2.0/cancel-job.md b/src/current/v2.0/cancel-job.md
deleted file mode 100644
index edd171c3229..00000000000
--- a/src/current/v2.0/cancel-job.md
+++ /dev/null
@@ -1,53 +0,0 @@
----
-title: CANCEL JOB
-summary: The CANCEL JOB statement stops long-running jobs such as imports, backups, and schema changes.
-toc: true
----
-
-New in v1.1: The `CANCEL JOB` [statement](sql-statements.html) lets you stop long-running jobs, which include [`IMPORT`](import.html) jobs and enterprise [`BACKUP`](backup.html) and [`RESTORE`](restore.html) tasks.
-
-
-## Limitations
-
-When an enterprise [`RESTORE`](restore.html) is canceled, partially restored data is properly cleaned up. This can have a minor, temporary impact on cluster performance.
-
-## Required Privileges
-
-By default, only the `root` user can cancel a job.
-
-## Synopsis
-
-
-{% include {{ page.version.version }}/sql/diagrams/cancel_job.html %}
-
-
-## Parameters
-
-Parameter | Description
-----------|------------
-`job_id` | The ID of the job you want to cancel, which can be found with [`SHOW JOBS`](show-jobs.html).
-
-## Examples
-
-### Cancel a Restore
-
-~~~ sql
-> SHOW JOBS;
-~~~
-~~~
-+----------------+---------+-------------------------------------------+...
-| id | type | description |...
-+----------------+---------+-------------------------------------------+...
-| 27536791415282 | RESTORE | RESTORE db.* FROM 'azure://backup/db/tbl' |...
-+----------------+---------+-------------------------------------------+...
-~~~
-~~~ sql
-> CANCEL JOB 27536791415282;
-~~~
-
-## See Also
-
-- [`SHOW JOBS`](show-jobs.html)
-- [`BACKUP`](backup.html)
-- [`RESTORE`](restore.html)
-- [`IMPORT`](import.html)
\ No newline at end of file
diff --git a/src/current/v2.0/cancel-query.md b/src/current/v2.0/cancel-query.md
deleted file mode 100644
index c0d06b34df2..00000000000
--- a/src/current/v2.0/cancel-query.md
+++ /dev/null
@@ -1,79 +0,0 @@
----
-title: CANCEL QUERY
-summary: The CANCEL QUERY statement cancels a running SQL query.
-toc: true
----
-
-New in v1.1: The `CANCEL QUERY` [statement](sql-statements.html) cancels a running SQL query.
-
-
-## Considerations
-
-- Schema changes (statements beginning with `ALTER`) cannot currently be canceled. However, to monitor the progress of schema changes, you can use [`SHOW JOBS`](show-jobs.html).
-- In rare cases where a query is close to completion when a cancellation request is issued, the query may run to completion.
-
-## Required Privileges
-
-The `root` user can cancel any currently active queries, whereas non-`root` users cancel only their own currently active queries.
-
-## Synopsis
-
-
-{% include {{ page.version.version }}/sql/diagrams/cancel_query.html %}
-
-
-## Parameters
-
-Parameter | Description
-----------|------------
-`query_id` | A [scalar expression](scalar-expressions.html) that produces the ID of the query to cancel.<br><br>`CANCEL QUERY` accepts a single query ID. If a subquery is used and returns multiple IDs, the `CANCEL QUERY` statement will therefore fail.
-
-## Response
-
-When a query is successfully canceled, CockroachDB sends a `query execution canceled` error to the client that issued the query.
-
-- If the canceled query was a single, stand-alone statement, no further action is required by the client.
-- If the canceled query was part of a larger, multi-statement [transaction](transactions.html), the client should then issue a [`ROLLBACK`](rollback-transaction.html) statement.
-
-## Examples
-
-### Cancel a Query via the Query ID
-
-In this example, we use the [`SHOW QUERIES`](show-queries.html) statement to get the ID of a query and then pass the ID into the `CANCEL QUERY` statement:
-
-~~~ sql
-> SHOW QUERIES;
-~~~
-
-~~~
-+----------------------------------+---------+----------+----------------------------------+----------------------------------+--------------------+------------------+-------------+-----------+
-| query_id | node_id | username | start | query | client_address | application_name | distributed | phase |
-+----------------------------------+---------+----------+----------------------------------+----------------------------------+--------------------+------------------+-------------+-----------+
-| 14dacc1f9a781e3d0000000000000001 | 2 | mroach | 2017-08-10 14:08:22.878113+00:00 | SELECT * FROM test.kv ORDER BY k | 192.168.0.72:56194 | test_app | false | executing |
-+----------------------------------+---------+----------+----------------------------------+----------------------------------+--------------------+------------------+-------------+-----------+
-| 14dacc206c47a9690000000000000002 | 2 | root | 2017-08-14 19:11:05.309119+00:00 | SHOW CLUSTER QUERIES | 127.0.0.1:50921 | | NULL | preparing |
-+----------------------------------+---------+----------+----------------------------------+----------------------------------+--------------------+------------------+-------------+-----------+
-~~~
-
-~~~ sql
-> CANCEL QUERY '14dacc1f9a781e3d0000000000000001';
-~~~
-
-### Cancel a Query via a Subquery
-
-In this example, we nest a [`SELECT` clause](select-clause.html) that retrieves the ID of a query inside the `CANCEL QUERY` statement:
-
-~~~ sql
-> CANCEL QUERY (SELECT query_id FROM [SHOW CLUSTER QUERIES]
- WHERE client_address = '192.168.0.72:56194'
- AND username = 'mroach'
- AND query = 'SELECT * FROM test.kv ORDER BY k');
-~~~
-
-{{site.data.alerts.callout_info}}CANCEL QUERY accepts a single query ID. If a subquery is used and returns multiple IDs, the CANCEL QUERY statement will therefore fail.{{site.data.alerts.end}}
-
-## See Also
-
-- [Manage Long-Running Queries](manage-long-running-queries.html)
-- [`SHOW QUERIES`](show-queries.html)
-- [SQL Statements](sql-statements.html)
diff --git a/src/current/v2.0/check.md b/src/current/v2.0/check.md
deleted file mode 100644
index 68e137c75c6..00000000000
--- a/src/current/v2.0/check.md
+++ /dev/null
@@ -1,113 +0,0 @@
----
-title: CHECK Constraint
-summary: The CHECK constraint specifies that values for the column in INSERT or UPDATE statements must satisfy a Boolean expression.
-toc: true
----
-
-The `CHECK` [constraint](constraints.html) specifies that values for the column in [`INSERT`](insert.html) or [`UPDATE`](update.html) statements must return `TRUE` or `NULL` for a Boolean expression. If any values return `FALSE`, the entire statement is rejected.
-
-
-## Details
-
-- If you add a `CHECK` constraint to an existing table, added values, along with any updates to current values, are checked. To check the existing rows, use [`VALIDATE CONSTRAINT`](validate-constraint.html), as shown in the example after this list.
-- `CHECK` constraints may be specified at the column or table level and can reference other columns within the table. Internally, all column-level `CHECK` constraints are converted to table-level constraints so they can be handled consistently.
-- You can have multiple `CHECK` constraints on a single column but ideally, for performance optimization, these should be combined using the logical operators. For example:
-
- ~~~ sql
- warranty_period INT CHECK (warranty_period >= 0) CHECK (warranty_period <= 24)
- ~~~
-
- should be specified as:
-
- ~~~ sql
- warranty_period INT CHECK (warranty_period BETWEEN 0 AND 24)
- ~~~
-- When a column with a `CHECK` constraint is dropped, the `CHECK` constraint is also dropped.
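-
-For example, adding a `CHECK` constraint to an existing table and then validating the rows that predate it might look like this (a sketch, assuming a hypothetical `products` table with a `price` column):
-
-~~~ sql
-> ALTER TABLE products ADD CONSTRAINT positive_price CHECK (price > 0);
-
-> ALTER TABLE products VALIDATE CONSTRAINT positive_price;
-~~~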
-
-## Syntax
-
-`CHECK` constraints can be defined at the [table level](#table-level). However, if you only want the constraint to apply to a single column, it can be applied at the [column level](#column-level).
-
-{{site.data.alerts.callout_info}}You can also add the CHECK constraint to existing tables through ADD CONSTRAINT.{{site.data.alerts.end}}
-
-### Column Level
-
-
-{% include {{ page.version.version }}/sql/diagrams/check_column_level.html %}
-
-
-| Parameter | Description |
-|-----------|-------------|
-| `table_name` | The name of the table you're creating. |
-| `column_name` | The name of the constrained column. |
-| `column_type` | The constrained column's [data type](data-types.html). |
-| `check_expr` | An expression that returns a Boolean value; if the expression evaluates to `FALSE`, the value cannot be inserted.|
-| `column_constraints` | Any other column-level [constraints](constraints.html) you want to apply to this column. |
-| `column_def` | Definitions for any other columns in the table. |
-| `table_constraints` | Any table-level [constraints](constraints.html) you want to apply. |
-
-**Example**
-
-~~~ sql
-> CREATE TABLE inventories (
- product_id INT NOT NULL,
- warehouse_id INT NOT NULL,
- quantity_on_hand INT NOT NULL CHECK (quantity_on_hand > 0),
- PRIMARY KEY (product_id, warehouse_id)
- );
-~~~
-
-### Table Level
-
-
-{% include {{ page.version.version }}/sql/diagrams/check_table_level.html %}
-
-
-| Parameter | Description |
-|-----------|-------------|
-| `table_name` | The name of the table you're creating. |
-| `column_def` | Definitions for any other columns in the table. |
-| `name` | The name you want to use for the constraint, which must be unique to its table and follow these [identifier rules](keywords-and-identifiers.html#identifiers). |
-| `check_expr` | An expression that returns a Boolean value; if the expression evaluates to `FALSE`, the value cannot be inserted.|
-| `table_constraints` | Any other table-level [constraints](constraints.html) you want to apply. |
-
-**Example**
-
-~~~ sql
-> CREATE TABLE inventories (
- product_id INT NOT NULL,
- warehouse_id INT NOT NULL,
- quantity_on_hand INT NOT NULL,
- PRIMARY KEY (product_id, warehouse_id),
- CONSTRAINT ok_to_supply CHECK (quantity_on_hand > 0 AND warehouse_id BETWEEN 100 AND 200)
- );
-~~~
-
-## Usage Example
-
-`CHECK` constraints may be specified at the column or table level and can reference other columns within the table. Internally, all column-level `CHECK` constraints are converted to table-level constraints so they can be handled in a consistent fashion.
-
-~~~ sql
-> CREATE TABLE inventories (
- product_id INT NOT NULL,
- warehouse_id INT NOT NULL,
- quantity_on_hand INT NOT NULL CHECK (quantity_on_hand > 0),
- PRIMARY KEY (product_id, warehouse_id)
- );
-
-> INSERT INTO inventories (product_id, warehouse_id, quantity_on_hand) VALUES (1, 2, 0);
-~~~
-~~~
-pq: failed to satisfy CHECK constraint (quantity_on_hand > 0)
-~~~
-
-## See Also
-
-- [Constraints](constraints.html)
-- [`DROP CONSTRAINT`](drop-constraint.html)
-- [Default Value constraint](default-value.html)
-- [Foreign Key constraint](foreign-key.html)
-- [Not Null constraint](not-null.html)
-- [Primary Key constraint](primary-key.html)
-- [Unique constraint](unique.html)
-- [`SHOW CONSTRAINTS`](show-constraints.html)
diff --git a/src/current/v2.0/cluster-settings.md b/src/current/v2.0/cluster-settings.md
deleted file mode 100644
index 89521a4b8e4..00000000000
--- a/src/current/v2.0/cluster-settings.md
+++ /dev/null
@@ -1,44 +0,0 @@
----
-title: Cluster Settings
-summary: Learn about cluster settings that apply to all nodes of a CockroachDB cluster.
-toc: true
----
-
-This page shows you how to view and change CockroachDB's **cluster-wide settings**.
-
-{{site.data.alerts.callout_info}}In contrast to cluster-wide settings, node-level settings apply to a single node. They are defined by flags passed to the cockroach start command when starting a node and cannot be changed without stopping and restarting the node. For more details, see Start a Node.{{site.data.alerts.end}}
-
-
-## Overview
-
-Cluster settings apply to all nodes of a CockroachDB cluster and control, for example, whether or not to share diagnostic details with Cockroach Labs as well as advanced options for debugging and cluster tuning.
-
-They can be updated anytime after a cluster has been started, but only by the `root` user.
-
-## Settings
-
-{{site.data.alerts.callout_danger}}Many cluster settings are intended for tuning CockroachDB internals. Before changing these settings, we strongly encourage you to discuss your goals with Cockroach Labs; otherwise, you use them at your own risk.{{site.data.alerts.end}}
-
-{% include {{ page.version.version }}/sql/settings/settings.md %}
-
-## View Current Cluster Settings
-
-Use the [`SHOW CLUSTER SETTING`](show-cluster-setting.html) statement.
-
-## Change a Cluster Setting
-
-Use the [`SET CLUSTER SETTING`](set-cluster-setting.html) statement.
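-
-For example, to turn off diagnostics reporting and confirm the change (a sketch assuming the `diagnostics.reporting.enabled` setting):
-
-~~~ sql
-> SET CLUSTER SETTING diagnostics.reporting.enabled = false;
-
-> SHOW CLUSTER SETTING diagnostics.reporting.enabled;
-~~~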
-
-Before changing a cluster setting, please note the following:
-
-- Changing a cluster setting is not instantaneous, as the change must be propagated to other nodes in the cluster.
-
-- It's not recommended to change cluster settings while [upgrading to a new version of CockroachDB](upgrade-cockroach-version.html); wait until all nodes have been upgraded and then make the change.
-
-## See Also
-
-- [`SET CLUSTER SETTING`](set-cluster-setting.html)
-- [`SHOW CLUSTER SETTING`](show-cluster-setting.html)
-- [Diagnostics Reporting](diagnostics-reporting.html)
-- [Start a Node](start-a-node.html)
-- [Use the Built-in SQL Client](use-the-built-in-sql-client.html)
diff --git a/src/current/v2.0/cluster-setup-troubleshooting.md b/src/current/v2.0/cluster-setup-troubleshooting.md
deleted file mode 100644
index e20be0b5d5d..00000000000
--- a/src/current/v2.0/cluster-setup-troubleshooting.md
+++ /dev/null
@@ -1,191 +0,0 @@
----
-title: Troubleshoot Cluster Setup
-summary: Learn how to troubleshoot issues with starting CockroachDB clusters
-toc: true
----
-
-If you're having trouble starting or scaling your cluster, this page will help you troubleshoot the issue.
-
-
-## Before You Begin
-
-### Terminology
-
-To use this guide, it's important to understand some of CockroachDB's terminology:
-
- - A **Cluster** acts as a single logical database, but is actually made up of many cooperating nodes.
- - **Nodes** are single instances of the `cockroach` binary running on a machine. It's possible (though atypical) to have multiple nodes running on a single machine.
-
-### Using This Guide
-
-To diagnose issues, we recommend beginning with the simplest scenario and then increasing its complexity until you discover the problem. With that strategy in mind, you should proceed through these troubleshooting steps sequentially.
-
-We also recommend executing these steps in the environment where you want to deploy your CockroachDB cluster. However, if you run into issues you cannot solve, try the same steps in a simpler environment. For example, if you cannot successfully start a cluster using Docker, try deploying CockroachDB in the same environment without using containers.
-
-## Locate Your Issue
-
-Proceed through the following steps until you locate the source of the issue with starting or scaling your CockroachDB cluster.
-
-### 1. Start a Single-Node Cluster
-
-1. Terminate any running `cockroach` processes and remove any old data:
-
- ~~~ shell
- $ pkill -9 cockroach
- $ rm -r testStore
- ~~~
-
-2. Start a single insecure node and log all activity to your terminal:
-
- ~~~ shell
- $ cockroach start --insecure --logtostderr --store=testStore
- ~~~
-
- Errors at this stage potentially include:
- - CPU incompatibility
- - Other services running on port `26257` or `8080` (CockroachDB's default `port` and `http-port` respectively). You can either stop those services or start your node with different ports, specified with the [`--port` and `--http-port`](start-a-node.html#flags-changed-in-v2-0).
-
- If you change the port, you will need to include the `--port=[specified port]` flag in each subsequent `cockroach` command or change the `COCKROACH_PORT` environment variable.
- - Networking issues that prevent the node from communicating with itself on its hostname. You can control the hostname CockroachDB uses with the [`--host` flag](start-a-node.html#flags-changed-in-v2-0).
-
- If you change the host, you will need to include `--host=[specified host]` in each subsequent `cockroach` command.
-
-3. If the node appears to have started successfully, open a new terminal window, and attempt to execute the following SQL statement:
-
- ~~~ shell
- $ cockroach sql --insecure -e "SHOW DATABASES"
- ~~~
-
- You should receive a response that looks similar to this:
-
- ~~~
- +--------------------+
- | Database |
- +--------------------+
- | system |
- +--------------------+
- ~~~
-
- Errors at this stage potentially include:
- - `connection refused`, which indicates you have not included some flag that you used to start the node (e.g., `--port` or `--host`). We have additional troubleshooting steps for this error [here](common-errors.html#connection-refused).
- - The node crashed. You can identify if this is the case by looking for the `cockroach` process through `ps`. If you cannot locate the `cockroach` process (i.e., it crashed), [file an issue](file-an-issue.html).
-
-**Next step**: If you successfully completed these steps, try starting a multi-node cluster.
-
-### 2. Start a Multi-Node Cluster
-
-1. Terminate any running `cockroach` processes and remove any old data on the additional machines:
-
- ~~~ shell
- $ pkill -9 cockroach
- $ rm -r testStore
- ~~~
-
- {{site.data.alerts.callout_info}}If you're running all nodes on the same machine, skip this step. Running this command will kill your first node, making it impossible to proceed.{{site.data.alerts.end}}
-
-2. On each machine, start the CockroachDB node, joining it to the first node:
-
- ~~~ shell
- $ cockroach start --insecure --logtostderr --store=testStore \
- --join=[first node's host]
- ~~~
-
- {{site.data.alerts.callout_info}}If you're running all nodes on the same machine, you will need to change the --port, --http-port, and --store flags. For an example of this, see Start a Local Cluster.{{site.data.alerts.end}}
-
- Errors at this stage potentially include:
- - The same port and host issues from [running a single node](#1-start-a-single-node-cluster).
- - [Networking issues](#networking-troubleshooting)
- - [Nodes not joining the cluster](#node-will-not-join-cluster)
-
-3. Visit the Admin UI on any node at `http://[node host]:8080` and click **Metrics** on the left-hand navigation bar. All nodes in the cluster should be listed and have data replicated onto them.
-
- Errors at this stage potentially include:
- - [Networking issues](#networking-troubleshooting)
- - [Nodes not receiving data](#replication-error-in-a-multi-node-cluster)
-
-**Next step**: If you successfully completed these steps, try [securing your deployment](manual-deployment.html) (*troubleshooting docs for this coming soon*) or reviewing our other [support resources](support-resources.html).
-
-## Troubleshooting Information
-
-Use the information below to resolve issues you encounter when trying to start or scale your cluster.
-
-### Networking Troubleshooting
-
-Most networking-related issues are caused by one of two issues:
-
-- Firewall rules, which require your network administrator to investigate
-
-- Inaccessible hostnames on your nodes, which can be controlled with the `--host` and `--advertise-host` flags on [`cockroach start`](start-a-node.html#flags-changed-in-v2-0)
-
-However, to efficiently troubleshoot the issue, it's important to understand where and why it's occurring. We recommend checking the following network-related issues:
-
-- By default, CockroachDB advertises itself to other nodes using its hostname. If your environment doesn't support DNS or the hostname is not resolvable, your nodes cannot connect to one another. In these cases, you can:
- - Change the hostname each node uses to advertise itself with `--advertise-host`
- - Set `--host=[node's IP address]` if the IP is a valid interface on the machine
-
-- Every node in the cluster should be able to `ping` each other node on the hostnames or IP addresses you use in the `--join`, `--host`, or `--advertise-host` flags.
-
-- Every node should be able to connect to other nodes on the port you're using for CockroachDB (**26257** by default) through `telnet` or `nc`:
- - `telnet [other node host] 26257`
- - `nc [other node host] 26257`
-
-Again, firewalls or hostname issues can cause any of these steps to fail.
-
-### Node Will Not Join Cluster
-
-When joining a node to a cluster, you might receive one of the following errors:
-
-~~~
-no resolvers found; use --join to specify a connected node
-~~~
-
-~~~
-node belongs to cluster {"cluster hash"} but is attempting to connect to a gossip network for cluster {"another cluster hash"}
-~~~
-
-**Solution**: Disassociate the node from the existing directory where you've stored CockroachDB data. For example, you can do either of the following:
-
-- Choose a different directory to store the CockroachDB data:
-
- ~~~ shell
- # Store this node's data in [new directory]
- $ cockroach start [flags] --store=[new directory] --join=[cluster host]:26257
- ~~~
-
-- Remove the existing directory and start a node joining the cluster again:
-
- ~~~ shell
- # Remove the directory
- $ rm -r cockroach-data/
-
- # Start a node joining the cluster
- $ cockroach start [flags] --join=[cluster host]:26257
- ~~~
-
-**Explanation**: When starting a node, the directory you choose to store the data in also contains metadata identifying the cluster the data came from. This causes conflicts when you've already started a node on the server, have quit `cockroach`, and then tried to join another cluster. Because the existing directory's cluster ID doesn't match the new cluster ID, the node cannot join it.
-
-### Replication Error in a Multi-Node Cluster
-
-If data is not being replicated to some nodes in the cluster, we recommend checking out the following:
-
-- Ensure every node but the first was started with the `--join` flag set to the hostname and port of the first node (or any other node that's successfully joined the cluster).
-
- If the flag was not set correctly for a node, shut down the node and restart it with the `--join` flag set correctly. See [Stop a Node](stop-a-node.html) and [Start a Node](start-a-node.html) for more details.
-
-- Nodes might not be able to communicate on their advertised hostnames, even though they're able to connect.
-
- You can try to resolve this by [stopping the nodes](stop-a-node.html), and then [restarting them](start-a-node.html) with the `--advertise-host` flag set to an interface all nodes can access.
-
-- Check the [logs](debug-and-error-logs.html) for each node for further detail, as well as these common errors:
-
- - `connection refused`: [Troubleshoot your network](#networking-troubleshooting).
- - `not connected to cluster` or `node [id] belongs to cluster...`: See [Node Will Not Join Cluster](#node-will-not-join-cluster) on this page.
-
-## Something Else?
-
-Try searching the rest of our docs for answers or using our other [support resources](support-resources.html), including:
-
-- [CockroachDB Community Forum](https://forum.cockroachlabs.com)
-- [CockroachDB Community Slack](https://cockroachdb.slack.com)
-- [StackOverflow](http://stackoverflow.com/questions/tagged/cockroachdb)
-- [CockroachDB Support Portal](https://support.cockroachlabs.com)
diff --git a/src/current/v2.0/cockroach-commands.md b/src/current/v2.0/cockroach-commands.md
deleted file mode 100644
index 66129d05d7f..00000000000
--- a/src/current/v2.0/cockroach-commands.md
+++ /dev/null
@@ -1,44 +0,0 @@
----
-title: Cockroach Commands
-summary: Learn the commands for configuring, starting, and managing a CockroachDB cluster.
-toc: true
----
-
-This page introduces the `cockroach` commands for configuring, starting, and managing a CockroachDB cluster, as well as logging flags that can be set on any command and environment variables that can be used in place of certain flags.
-
-You can run `cockroach help` in your shell to get similar guidance.
-
-
-## Commands
-
-Command | Usage
---------|----
-[`start`](start-a-node.html) | Start a node.
-[`init`](initialize-a-cluster.html) | Initialize a cluster.
-[`cert`](create-security-certificates.html) | Create CA, node, and client certificates.
-[`quit`](stop-a-node.html) | Temporarily stop a node or permanently remove a node.
-[`sql`](use-the-built-in-sql-client.html) | Use the built-in SQL client.
-[`user`](create-and-manage-users.html) | Get, set, list, and remove users.
-[`zone`](configure-replication-zones.html) | Configure the number and location of replicas for specific sets of data.
-[`node`](view-node-details.html) | List node IDs, show their status, decommission nodes for removal, or recommission nodes.
-[`dump`](sql-dump.html) | Back up a table by outputting the SQL statements required to recreate the table and all its rows.
-[`debug zip`](debug-zip.html) | Generate a `.zip` file that can help Cockroach Labs troubleshoot issues with your cluster.
-[`gen`](generate-cockroachdb-resources.html) | Generate manpages, a bash completion file, example SQL data, or an HAProxy configuration file for a running cluster.
-[`version`](view-version-details.html) | Output CockroachDB version details.
-
-## Environment Variables
-
-For many common `cockroach` flags, such as `--port` and `--user`, you can set environment variables once instead of manually passing the flags each time you execute commands.
-
-- To find out which flags support environment variables, see the documentation for each [command](#commands).
-- To output the current configuration of CockroachDB and other environment variables, run `env`.
-- When a node uses environment variables on [startup](start-a-node.html), the variable names are printed to the node's logs; however, the variable values are not.
-
-CockroachDB prioritizes command flags, environment variables, and defaults as follows:
-
-1. If a flag is set for a command, CockroachDB uses it.
-2. If a flag is not set for a command, CockroachDB uses the corresponding environment variable.
-3. If neither the flag nor environment variable is set, CockroachDB uses the default for the flag.
-4. If there's no flag default, CockroachDB gives an error.
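-
-For example, with the `--port` flag unset, the corresponding environment variable takes effect (a sketch assuming a node listening on port 26258):
-
-~~~ shell
-# Set once; subsequent commands connect on port 26258 without --port:
-$ export COCKROACH_PORT=26258
-$ cockroach sql --insecure
-~~~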
-
-For more details, see [Client Connection Parameters](connection-parameters.html).
diff --git a/src/current/v2.0/cockroachdb-in-comparison.md b/src/current/v2.0/cockroachdb-in-comparison.md
deleted file mode 100644
index 73afbbb5f8f..00000000000
--- a/src/current/v2.0/cockroachdb-in-comparison.md
+++ /dev/null
@@ -1,260 +0,0 @@
----
-title: CockroachDB in Comparison
-summary: Learn how CockroachDB compares to other popular databases like PostgreSQL, Cassandra, MongoDB, Google Cloud Spanner, and more.
-tags: mongodb, mysql, dynamodb
-toc: false
-comparison: true
----
-
-This page shows you how key features of CockroachDB stack up against other databases. Hover over features for their intended meanings, and click CockroachDB answers to view related documentation.
-
-
-* In DynamoDB, distributed transactions and ACID semantics across all data in the database, not just per row, require an additional transaction library.
diff --git a/src/current/v2.0/collate.md b/src/current/v2.0/collate.md
deleted file mode 100644
index c2aade174eb..00000000000
--- a/src/current/v2.0/collate.md
+++ /dev/null
@@ -1,124 +0,0 @@
----
-title: COLLATE
-summary: The COLLATE feature lets you sort strings according to language- and country-specific rules.
-toc: true
----
-
-The `COLLATE` feature lets you sort [`STRING`](string.html) values according to language- and country-specific rules, known as collations.
-
-Collated strings are important because different languages have [different rules for alphabetic order](https://en.wikipedia.org/wiki/Alphabetical_order#Language-specific_conventions), especially with respect to accented letters. For example, in German, accented letters are sorted with their unaccented counterparts, while in Swedish they are placed at the end of the alphabet. A collation is a set of rules used for ordering and usually corresponds to a language, though some languages have multiple collations with different rules for sorting; for example, Portuguese has separate collations for Brazilian and European dialects (`pt-BR` and `pt-PT` respectively).
-
-
-## Details
-
-- Operations on collated strings cannot involve strings with a different collation or strings with no collation. However, it is possible to add or overwrite a collation on the fly.
-
-- Use the collation feature only when you need to sort strings by a specific collation. Every time a collated string is constructed or loaded into memory, CockroachDB computes its collation key, whose size is linear in the length of the string, so collated strings require additional resources.
-
-- Collated strings can be considerably larger than the corresponding uncollated strings, depending on the language and the string content. For example, strings containing the character `é` produce larger collation keys in the French locale than in Chinese.
-
-- Collated strings that are indexed require additional disk space as compared to uncollated strings. In case of indexed collated strings, collation keys must be stored in addition to the strings from which they are derived, creating a constant factor overhead.
-
-## Supported Collations
-
-CockroachDB supports the collations provided by Go's [language package](https://godoc.org/golang.org/x/text/language#Tag). The `<collation>` argument is the BCP 47 language tag at the end of each line, immediately preceded by `//`. For example, Afrikaans is supported as the `af` collation.
-
-## SQL Syntax
-
-Collated strings are used as normal strings in SQL, but have a `COLLATE` clause appended to them.
-
-- **Column syntax**: `STRING COLLATE <collation>`. For example:
-
- ~~~ sql
- > CREATE TABLE foo (a STRING COLLATE en PRIMARY KEY);
- ~~~
-
- {{site.data.alerts.callout_info}}You can also use any of the aliases for STRING.{{site.data.alerts.end}}
-
-- **Value syntax**: `<value> COLLATE <collation>`. For example:
-
- ~~~ sql
- > INSERT INTO foo VALUES ('dog' COLLATE en);
- ~~~
-
-## Examples
-
-### Specify Collation for a Column
-
-You can set a default collation for all values in a `STRING` column.
-
-For example, you can set a column's default collation to German (`de`):
-
-~~~ sql
-> CREATE TABLE de_names (name STRING COLLATE de PRIMARY KEY);
-~~~
-
-When inserting values into this column, you must specify the collation for every value:
-
-~~~ sql
-> INSERT INTO de_names VALUES ('Backhaus' COLLATE de), ('Bär' COLLATE de), ('Baz' COLLATE de);
-~~~
-
-The sort will now honor the `de` collation that treats *ä* as *a* in alphabetic sorting:
-
-~~~ sql
-> SELECT * FROM de_names ORDER BY name;
-~~~
-~~~
-+----------+
-| name |
-+----------+
-| Backhaus |
-| Bär |
-| Baz |
-+----------+
-~~~
-
-### Order by Non-Default Collation
-
-You can sort a column using a specific collation instead of its default.
-
-For example, you receive different results if you order results by German (`de`) and Swedish (`sv`) collations:
-
-~~~ sql
-> SELECT * FROM de_names ORDER BY name COLLATE sv;
-~~~
-~~~
-+----------+
-| name |
-+----------+
-| Backhaus |
-| Baz |
-| Bär |
-+----------+
-~~~
-
-### Ad-Hoc Collation Casting
-
-You can cast any string into a collation on the fly.
-
-~~~ sql
-> SELECT 'A' COLLATE de < 'Ä' COLLATE de;
-~~~
-~~~
-true
-~~~
-
-However, you cannot compare values with different collations:
-
-~~~ sql
-> SELECT 'Ä' COLLATE sv < 'Ä' COLLATE de;
-~~~
-~~~
-pq: unsupported comparison operator: <
-~~~
-
-You can also use casting to remove collations from values.
-
-~~~ sql
-> SELECT CAST(name AS STRING) FROM de_names ORDER BY name;
-~~~
-
-## See Also
-
-[Data Types](data-types.html)
diff --git a/src/current/v2.0/column-families.md b/src/current/v2.0/column-families.md
deleted file mode 100644
index e212236a458..00000000000
--- a/src/current/v2.0/column-families.md
+++ /dev/null
@@ -1,89 +0,0 @@
----
-title: Column Families
-summary: A column family is a group of columns in a table that are stored as a single key-value pair in the underlying key-value store.
-toc: true
----
-
-A column family is a group of columns in a table that are stored as a single key-value pair in the [underlying key-value store](architecture/storage-layer.html). Column families reduce the number of keys stored in the key-value store, resulting in improved performance during [`INSERT`](insert.html), [`UPDATE`](update.html), and [`DELETE`](delete.html) operations.
-
-This page explains how CockroachDB organizes columns into families as well as cases in which you might want to manually override the default behavior.
-
-{{site.data.alerts.callout_info}}
-[Secondary indexes](indexes.html) do not respect column families. All secondary indexes store values in a single column family.
-{{site.data.alerts.end}}
-
-## Default Behavior
-
-When a table is created, all columns are stored as a single column family.
-
-This default approach ensures efficient key-value storage and performance in most cases. However, when frequently updated columns are grouped with seldom updated columns, the seldom updated columns are nonetheless rewritten on every update. Especially when the seldom updated columns are large, it's more performant to split them into a distinct family.
-
-## Manual Override
-
-### Assign Column Families on Table Creation
-
-To manually assign a column family on [table creation](create-table.html), use the `FAMILY` keyword.
-
-For example, let's say we want to create a table to store an immutable blob of data (`data BYTES`) with a last accessed timestamp (`last_accessed TIMESTAMP`). Because we know that the blob of data will never get updated, we use the `FAMILY` keyword to break it into a separate column family:
-
-~~~ sql
-> CREATE TABLE test (
- id INT PRIMARY KEY,
- last_accessed TIMESTAMP,
- data BYTES,
- FAMILY f1 (id, last_accessed),
- FAMILY f2 (data)
-);
-
-> SHOW CREATE TABLE test;
-~~~
-
-~~~
-+-------+---------------------------------------------+
-| Table | CreateTable |
-+-------+---------------------------------------------+
-| test | CREATE TABLE test ( |
-| | id INT NOT NULL, |
-| | last_accessed TIMESTAMP NULL, |
-| | data BYTES NULL, |
-| | CONSTRAINT "primary" PRIMARY KEY (id), |
-| | FAMILY f1 (id, last_accessed), |
-| | FAMILY f2 (data) |
-| | ) |
-+-------+---------------------------------------------+
-(1 row)
-~~~
-
-{{site.data.alerts.callout_info}}Columns that are part of the primary index are always assigned to the first column family. If you manually assign primary index columns to a family, it must therefore be the first family listed in the CREATE TABLE statement.{{site.data.alerts.end}}
-
-### Assign Column Families When Adding Columns
-
-When using the [`ALTER TABLE .. ADD COLUMN`](add-column.html) statement to add a column to a table, you can assign the column to a new or existing column family.
-
-- Use the `CREATE FAMILY` keyword to assign a new column to a **new family**. For example, the following would add a `data2 BYTES` column to the `test` table above and assign it to a new column family:
-
- ~~~ sql
- > ALTER TABLE test ADD COLUMN data2 BYTES CREATE FAMILY f3;
- ~~~
-
-- Use the `FAMILY` keyword to assign a new column to an **existing family**. For example, the following would add a `name STRING` column to the `test` table above and assign it to family `f1`:
-
- ~~~ sql
- > ALTER TABLE test ADD COLUMN name STRING FAMILY f1;
- ~~~
-
-- Use the `CREATE IF NOT EXISTS FAMILY` keyword to assign a new column to an **existing family or, if the family doesn't exist, to a new family**. For example, the following would assign the new column to the existing `f1` family; if that family didn't exist, it would create a new family and assign the column to it:
-
- ~~~ sql
- > ALTER TABLE test ADD COLUMN name STRING CREATE IF NOT EXISTS FAMILY f1;
- ~~~
-
-## Compatibility with Past Releases
-
-Using the [`beta-20160714`](../releases/v1.0.html#beta-20160714) release makes your data incompatible with versions earlier than the [`beta-20160629`](../releases/v1.0.html#beta-20160629) release.
-
-## See Also
-
-- [`CREATE TABLE`](create-table.html)
-- [`ADD COLUMN`](add-column.html)
-- [Other SQL Statements](sql-statements.html)
diff --git a/src/current/v2.0/commit-transaction.md b/src/current/v2.0/commit-transaction.md
deleted file mode 100644
index 32d23df6786..00000000000
--- a/src/current/v2.0/commit-transaction.md
+++ /dev/null
@@ -1,68 +0,0 @@
----
-title: COMMIT
-summary: Commit a transaction with the COMMIT statement in CockroachDB.
-toc: true
----
-
-The `COMMIT` [statement](sql-statements.html) commits the current [transaction](transactions.html) or, when using [client-side transaction retries](transactions.html#client-side-transaction-retries), clears the connection to allow new transactions to begin.
-
-When using [client-side transaction retries](transactions.html#client-side-transaction-retries), statements issued after [`SAVEPOINT cockroach_restart`](savepoint.html) are committed when [`RELEASE SAVEPOINT cockroach_restart`](release-savepoint.html) is issued instead of `COMMIT`. However, you must still issue a `COMMIT` statement to clear the connection for the next transaction.
-
-For non-retryable transactions, if statements in the transaction [generated any errors](transactions.html#error-handling), `COMMIT` is equivalent to `ROLLBACK`, which aborts the transaction and discards *all* updates made by its statements.
-
-
-## Synopsis
-
-
-{% include {{ page.version.version }}/sql/diagrams/commit_transaction.html %}
-
-
-## Required Privileges
-
-No [privileges](privileges.html) are required to commit a transaction. However, privileges are required for each statement within a transaction.
-
-## Aliases
-
-In CockroachDB, `END` is an alias for the `COMMIT` statement.
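-
-For example, the following transaction is equivalent to one that ends with `COMMIT`:
-
-~~~ sql
-> BEGIN;
-
-> UPDATE products SET inventory = 0 WHERE sku = '8675309';
-
-> END;
-~~~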
-
-## Example
-
-### Commit a Transaction
-
-How you commit transactions depends on how your application handles [transaction retries](transactions.html#transaction-retries).
-
-#### Client-Side Retryable Transactions
-
-When using [client-side transaction retries](transactions.html#client-side-transaction-retries), statements are committed by [`RELEASE SAVEPOINT cockroach_restart`](release-savepoint.html). `COMMIT` itself only clears the connection for the next transaction.
-
-~~~ sql
-> BEGIN;
-
-> SAVEPOINT cockroach_restart;
-
-> UPDATE products SET inventory = 0 WHERE sku = '8675309';
-
-> INSERT INTO orders (customer, sku, status) VALUES (1001, '8675309', 'new');
-
-> RELEASE SAVEPOINT cockroach_restart;
-
-> COMMIT;
-~~~
-
-{{site.data.alerts.callout_danger}}This example assumes you're using client-side intervention to handle transaction retries.{{site.data.alerts.end}}
-
-#### Automatically Retried Transactions
-
-If you are using transactions that CockroachDB will [automatically retry](transactions.html#automatic-retries) (i.e., all statements sent in a single batch), commit the transaction with `COMMIT`.
-
-~~~ sql
-> BEGIN; UPDATE products SET inventory = 100 WHERE sku = '8675309'; UPDATE products SET inventory = 100 WHERE sku = '8675310'; COMMIT;
-~~~
-
-## See Also
-
-- [Transactions](transactions.html)
-- [`BEGIN`](begin-transaction.html)
-- [`RELEASE SAVEPOINT`](release-savepoint.html)
-- [`ROLLBACK`](rollback-transaction.html)
-- [`SAVEPOINT`](savepoint.html)
diff --git a/src/current/v2.0/common-errors.md b/src/current/v2.0/common-errors.md
deleted file mode 100644
index 59a137aafa4..00000000000
--- a/src/current/v2.0/common-errors.md
+++ /dev/null
@@ -1,194 +0,0 @@
----
-title: Common Errors
-summary: Understand and resolve common error messages written to stderr or logs.
-toc: false
----
-
-This page helps you understand and resolve error messages written to `stderr` or your [logs](debug-and-error-logs.html).
-
-Topic | Message
-------|--------
-Client connection | [`connection refused`](#connection-refused)
-Client connection | [`node is running secure mode, SSL connection required`](#node-is-running-secure-mode-ssl-connection-required)
-Transactions | [`retry transaction`](#retry-transaction)
-Node startup | [`node belongs to cluster <cluster ID> but is attempting to connect to a gossip network for cluster <another cluster ID>`](#node-belongs-to-cluster-cluster-id-but-is-attempting-to-connect-to-a-gossip-network-for-cluster-another-cluster-id)
-Node configuration | [`clock synchronization error: this node is more than 500ms away from at least half of the known nodes`](#clock-synchronization-error-this-node-is-more-than-500ms-away-from-at-least-half-of-the-known-nodes)
-Node configuration | [`open file descriptor limit of <number> is under the minimum required <number>`](#open-file-descriptor-limit-of-number-is-under-the-minimum-required-number)
-Replication | [`replicas failing with "0 of 1 store with an attribute matching []; likely not enough nodes in cluster"`](#replicas-failing-with-0-of-1-store-with-an-attribute-matching-likely-not-enough-nodes-in-cluster)
-Deadline exceeded | [`context deadline exceeded`](#context-deadline-exceeded)
-Ambiguous results | [`result is ambiguous`](#result-is-ambiguous)
-
-## connection refused
-
-This message indicates a client is trying to connect to a node that is either not running or is not listening on the specified interfaces (i.e., hostname or port).
-
-To resolve this issue, do one of the following:
-
-- If the node hasn't yet been started, [start the node](start-a-node.html).
-- If you specified a `--host` flag when starting the node, you must include it with all other [`cockroach` commands](cockroach-commands.html) or change the `COCKROACH_HOST` environment variable.
-- If you specified a `--port` flag when starting the node, you must include it with all other [`cockroach` commands](cockroach-commands.html) or change the `COCKROACH_PORT` environment variable.
-
-If you're not sure what the `--host` and `--port` values might have been, you can look in the node's [logs](debug-and-error-logs.html). If necessary, you can also terminate the `cockroach` process, and then restart the node:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ pkill cockroach
-~~~
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach start [flags]
-~~~
-
-## node is running secure mode, SSL connection required
-
-This message indicates that the cluster is using TLS encryption to protect network communication, and the client is trying to open a connection without using the required TLS certificates.
-
-To resolve this issue, use the [`cockroach cert create-client`](create-security-certificates.html) command to generate a client certificate and key for the user trying to connect. For a secure deployment walkthrough, including generating security certificates and connecting clients, see [Manual Deployment](manual-deployment.html).
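-
-For example, assuming the certificate directory layout from the deployment docs and a hypothetical user named `maxroach`:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach cert create-client maxroach --certs-dir=certs --ca-key=my-safe-directory/ca.key
-~~~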
-
-## retry transaction
-
-Messages with the error code `40001` and the string `retry transaction` indicate that a transaction failed because it conflicted with another concurrent or recent transaction accessing the same data. The transaction needs to be retried by the client. See [client-side transaction retries](transactions.html#client-side-transaction-retries) for more details.
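-
-A minimal sketch of the retry loop, assuming the application reissues its statements whenever it sees error code `40001`:
-
-~~~ sql
-> BEGIN;
-
-> SAVEPOINT cockroach_restart;
-
--- transaction statements here; on a 40001 error, issue
--- ROLLBACK TO SAVEPOINT cockroach_restart; and reissue the statements
-
-> RELEASE SAVEPOINT cockroach_restart;
-
-> COMMIT;
-~~~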
-
-## node belongs to cluster \<cluster ID> but is attempting to connect to a gossip network for cluster \<another cluster ID>
-
-This message usually indicates that a node tried to connect to a cluster, but the node is already a member of a different cluster. This is determined by metadata in the node's data directory. To resolve this issue, do one of the following:
-
-- Choose a different directory to store the CockroachDB data:
-
- ~~~ shell
- $ cockroach start [flags] --store=[new directory] --join=[cluster host]:26257
- ~~~
-
-- Remove the existing directory and start a node joining the cluster again:
-
- ~~~ shell
- $ rm -r cockroach-data/
- ~~~
-
- ~~~ shell
- $ cockroach start [flags] --join=[cluster host]:26257
- ~~~
-
-This message can also occur in the following scenario:
-
-1. The first node of a cluster is started without the `--join` flag.
-2. Subsequent nodes are started with the `--join` flag pointing to the first node.
-3. The first node is stopped and restarted after the node's data directory is deleted or using a new directory. This causes the first node to initialize a new cluster.
-4. The other nodes, still communicating with the first node, notice that their cluster ID and the first node's cluster ID do not match.
-
-To avoid this scenario, update your scripts to use the new, recommended approach to initializing a cluster:
-
-1. Start each initial node of the cluster with the `--join` flag set to addresses of 3 to 5 of the initial nodes.
-2. Run the `cockroach init` command against any node to perform a one-time cluster initialization.
-3. When adding more nodes, start them with the same `--join` flag as used for the initial nodes.
-
-For more guidance, see this [example](start-a-node.html#start-a-multi-node-cluster).
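-
-For example, a minimal insecure three-node startup using this approach (hostnames are placeholders):
-
-~~~ shell
-# On each of the three initial nodes:
-$ cockroach start --insecure --host=<node hostname> \
---join=<node1 hostname>:26257,<node2 hostname>:26257,<node3 hostname>:26257
-
-# Then, once, against any one node:
-$ cockroach init --insecure --host=<node1 hostname>
-~~~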
-
-## open file descriptor limit of \<number> is under the minimum required \<number>
-
-CockroachDB can use a large number of open file descriptors, often more than is available by default. This message indicates that the machine on which a CockroachDB node is running is under CockroachDB's recommended limits.
-
-For more details on CockroachDB's file descriptor limits and instructions on increasing the limit on various platforms, see [File Descriptors Limit](recommended-production-settings.html#file-descriptors-limit).
-
-## replicas failing with "0 of 1 store with an attribute matching []; likely not enough nodes in cluster"
-
-### When running a single-node cluster
-
-When running a single-node CockroachDB cluster, an error about replicas failing will eventually show up in the node's log files, for example:
-
-~~~ shell
-E160407 09:53:50.337328 storage/queue.go:511 [replicate] 7 replicas failing with "0 of 1 store with an attribute matching []; likely not enough nodes in cluster"
-~~~
-
-This happens because CockroachDB expects three nodes by default. If you do not intend to add additional nodes, you can stop this error by updating your default zone configuration to expect only one node:
-
-{% include copy-clipboard.html %}
-~~~ shell
-# Insecure cluster:
-$ cockroach zone set .default --insecure --disable-replication
-~~~
-
-{% include copy-clipboard.html %}
-~~~ shell
-# Secure cluster:
-$ cockroach zone set .default --certs-dir=[path to certs directory] --disable-replication
-~~~
-
-The `--disable-replication` flag automatically reduces the zone's replica count to 1, but you can do this manually as well:
-
-{% include copy-clipboard.html %}
-~~~ shell
-# Insecure cluster:
-$ echo 'num_replicas: 1' | cockroach zone set .default --insecure -f -
-~~~
-
-{% include copy-clipboard.html %}
-~~~ shell
-# Secure cluster:
-$ echo 'num_replicas: 1' | cockroach zone set .default --certs-dir=[path to certs directory] -f -
-~~~
-
-See [Configure Replication Zones](configure-replication-zones.html) for more details.
-
-### When running a multi-node cluster
-
-When running a multi-node CockroachDB cluster, if you see an error like the one above about replicas failing, some nodes might not be able to talk to each other. For recommended actions, see [Cluster Setup Troubleshooting](cluster-setup-troubleshooting.html#replication-error-in-a-multi-node-cluster).
-
-## clock synchronization error: this node is more than 500ms away from at least half of the known nodes
-
-This error indicates that a node has spontaneously shut down because it detected that its clock is out of sync with at least half of the other nodes in the cluster by 80% of the maximum offset allowed (500ms by default). CockroachDB requires moderate levels of [clock synchronization](recommended-production-settings.html#clock-synchronization) to preserve data consistency, so the node shutting down in this way avoids the risk of consistency anomalies.
-
-To prevent this from happening, you should run clock synchronization software on each node. For guidance on synchronizing clocks, see the tutorial for your deployment environment:
-
-Environment | Recommended Approach
-------------|---------------------
-[Manual](deploy-cockroachdb-on-premises.html#step-1-synchronize-clocks) | Use NTP with Google's external NTP service.
-[AWS](deploy-cockroachdb-on-aws.html#step-3-synchronize-clocks) | Use the Amazon Time Sync Service.
-[Azure](deploy-cockroachdb-on-microsoft-azure.html#step-3-synchronize-clocks) | Disable Hyper-V time synchronization and use NTP with Google's external NTP service.
-[Digital Ocean](deploy-cockroachdb-on-digital-ocean.html#step-2-synchronize-clocks) | Use NTP with Google's external NTP service.
-[GCE](deploy-cockroachdb-on-google-cloud-platform.html#step-3-synchronize-clocks) | Use NTP with Google's internal NTP service.
-
-## context deadline exceeded
-
-This message occurs when a component of CockroachDB gives up because it was relying on another component that has not behaved as expected, for example, another node dropped a network connection. To investigate further, look in the node's logs for the primary failure that is the root cause.
-
-## result is ambiguous
-
-In a distributed system, some errors can have ambiguous results. For
-example, if you receive a `connection closed` error while processing a
-`COMMIT` statement, you cannot tell whether the transaction
-successfully committed or not. These errors are possible in any
-database, but CockroachDB is somewhat more likely to produce them than
-other databases because ambiguous results can be caused by failures
-between the nodes of a cluster. These errors are reported with the
-PostgreSQL error code `40003` (`statement_completion_unknown`) and the
-message `result is ambiguous`.
-
-Ambiguous errors can be caused by nodes crashing, network failures, or
-timeouts. If you experience a lot of these errors when things are
-otherwise stable, look for performance issues. Note that ambiguity is
-only possible for the last statement of a transaction (`COMMIT` or
-`RELEASE SAVEPOINT`) or for statements outside a transaction. If a connection drops during a transaction that has not yet tried to commit, the transaction will definitely be aborted.
-
-In general, you should handle ambiguous errors the same way as
-`connection closed` errors. If your transaction is
-[idempotent](https://en.wikipedia.org/wiki/Idempotence#Computer_science_meaning),
-it is safe to retry it on ambiguous errors. `UPSERT` operations are
-typically idempotent, and other transactions can be written to be
-idempotent by verifying the expected state before performing any
-writes. Increment operations such as `UPDATE my_table SET x=x+1 WHERE
-id=$1` are typical examples of operations that cannot easily be made
-idempotent. If your transaction is not idempotent, then you should
-decide whether to retry or not based on whether it would be better for
-your application to apply the transaction twice or return an error to
-the user.
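-
-For example (schema hypothetical), the first statement below is safe to retry on an ambiguous error, while the second is not:
-
-~~~ sql
--- Idempotent: retrying leaves the row in the same end state.
-> UPSERT INTO my_table (id, x) VALUES ($1, 42);
-
--- Not idempotent: a retry after an ambiguous commit could increment twice.
-> UPDATE my_table SET x = x + 1 WHERE id = $1;
-~~~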
-
-## Something Else?
-
-Try searching the rest of our docs for answers or using our other [support resources](support-resources.html), including:
-
-- [CockroachDB Community Forum](https://forum.cockroachlabs.com)
-- [CockroachDB Community Slack](https://cockroachdb.slack.com)
-- [StackOverflow](http://stackoverflow.com/questions/tagged/cockroachdb)
-- [CockroachDB Support Portal](https://support.cockroachlabs.com)
diff --git a/src/current/v2.0/common-table-expressions.md b/src/current/v2.0/common-table-expressions.md
deleted file mode 100644
index 489240b7d65..00000000000
--- a/src/current/v2.0/common-table-expressions.md
+++ /dev/null
@@ -1,178 +0,0 @@
----
-title: Common Table Expressions
-summary: Common Table Expressions (CTEs) simplify the definition and use of subqueries
-toc: true
-toc_not_nested: true
----
-
-
-New in v2.0:
-Common Table Expressions, or CTEs, provide a shorthand name for a
-possibly complex [subquery](subqueries.html) before it is used in a
-larger query context. This improves readability of the SQL code.
-
-CTEs can be used in combination with [`SELECT`
-clauses](select-clause.html) and [`INSERT`](insert.html),
-[`DELETE`](delete.html), [`UPDATE`](update.html) and
-[`UPSERT`](upsert.html) statements.
-
-
-## Synopsis
-
-
-{% include {{ page.version.version }}/sql/diagrams/with_clause.html %}
-
-
-
-## Parameters
-
-Parameter | Description
-----------|------------
-`table_alias_name` | The name to use to refer to the common table expression from the accompanying query or statement.
-`name` | A name for one of the columns in the newly defined common table expression.
-`preparable_stmt` | The statement or subquery to use as a common table expression.
-
-## Overview
-
-A query or statement of the form `WITH x AS y IN z` creates the
-temporary table name `x` for the results of the subquery `y`, to be
-reused in the context of the query `z`.
-
-For example:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> WITH o AS (SELECT * FROM orders WHERE id IN (33, 542, 112))
- SELECT *
- FROM customers AS c, o
- WHERE o.customer_id = c.id;
-~~~
-
-In this example, the `WITH` clause defines the temporary name `o` for
-the subquery over `orders`, and that name becomes a valid table name
-for use in any [table expression](table-expressions.html) of the
-subsequent `SELECT` clause.
-
-This query is equivalent to, but arguably simpler to read than:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT *
- FROM customers AS c, (SELECT * FROM orders WHERE id IN (33, 542, 112)) AS o
- WHERE o.customer_id = c.id;
-~~~
-
-It is also possible to define multiple common table expressions
-simultaneously with a single `WITH` clause, separated by commas. Later
-subqueries can refer to earlier subqueries by name. For example, the
-following query is equivalent to the two examples above:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> WITH o AS (SELECT * FROM orders WHERE id IN (33, 542, 112)),
- results AS (SELECT * FROM customers AS c, o WHERE o.customer_id = c.id)
- SELECT * FROM results;
-~~~
-
-In this example, the second CTE `results` refers to the first CTE `o`
-by name. The final query refers to the CTE `results`.
-
-## Nested `WITH` Clauses
-
-It is possible to use a `WITH` clause in a subquery, or even a `WITH` clause within another `WITH` clause. For example:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> WITH a AS (SELECT * FROM (WITH b AS (SELECT * FROM c)
- SELECT * FROM b))
- SELECT * FROM a;
-~~~
-
-When analyzing [table expressions](table-expressions.html) that
-mention a CTE name, CockroachDB will choose the CTE definition that is
-closest to the table expression. For example:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> WITH a AS (TABLE x),
- b AS (WITH a AS (TABLE y)
- SELECT * FROM a)
- SELECT * FROM b;
-~~~
-
-In this example, the inner subquery `SELECT * FROM a` will select from
-table `y` (closest `WITH` clause), not from table `x`.
-
-## Data Modifying Statements
-
-It is possible to use a data-modifying statement (`INSERT`, `DELETE`,
-etc.) as a common table expression.
-
-For example:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> WITH v AS (INSERT INTO t(x) VALUES (1), (2), (3) RETURNING x)
- SELECT x+1 FROM v;
-~~~
-
-However, the following restriction applies: only `WITH` sub-clauses at
-the top level of a SQL statement can contain data-modifying
-statements. The example above is valid, but the following is not:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT x+1 FROM
- (WITH v AS (INSERT INTO t(x) VALUES (1), (2), (3) RETURNING x)
- SELECT * FROM v);
-~~~
-
-This is not valid because the `WITH` clause that defines an `INSERT`
-common table expression is not at the top level of the query.
-
-{{site.data.alerts.callout_info}}
-If a common table expression contains a data-modifying statement (`INSERT`, `DELETE`, etc.), the modifications are performed fully even if only part of the results are used, e.g., with `LIMIT`. See [Data Writes in Subqueries](subqueries.html#data-writes-in-subqueries) for details.
-{{site.data.alerts.end}}
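-
-For example, assuming table `t` from above, all three rows are inserted even though only one is returned:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> WITH v AS (INSERT INTO t(x) VALUES (1), (2), (3) RETURNING x)
- SELECT * FROM v LIMIT 1;
-~~~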
-
-
-
-## Known Limitations
-
-{{site.data.alerts.callout_info}}
-The following limitations may be lifted
-in a future version of CockroachDB.
-{{site.data.alerts.end}}
-
-
-
-### Referring to a CTE by name more than once
-
-{% include {{ page.version.version }}/known-limitations/cte-by-name.md %}
-
-### Using CTEs with data-modifying statements
-
-{% include {{ page.version.version }}/known-limitations/cte-with-dml.md %}
-
-### Using CTEs with views
-
-{% include {{ page.version.version }}/known-limitations/cte-with-view.md %}
-
-### Using CTEs with `VALUES` clauses
-
-{% include {{ page.version.version }}/known-limitations/cte-in-values-clause.md %}
-
-### Using CTEs with Set Operations
-
-{% include {{ page.version.version }}/known-limitations/cte-in-set-expression.md %}
-
-## See also
-
-- [Subqueries](subqueries.html)
-- [Selection Queries](selection-queries.html)
-- [Table Expressions](table-expressions.html)
-- [`EXPLAIN`](explain.html)
diff --git a/src/current/v2.0/computed-columns.md b/src/current/v2.0/computed-columns.md
deleted file mode 100644
index da6ae2ae50e..00000000000
--- a/src/current/v2.0/computed-columns.md
+++ /dev/null
@@ -1,69 +0,0 @@
----
-title: Computed Columns
-summary: A computed column stores data generated by an expression included in the column definition.
-toc: true
----
-
-New in v2.0: A computed column stores data generated from other columns by a [scalar expression](scalar-expressions.html) included in the column definition.
-
-
-## Why use computed columns?
-
-Computed columns are especially useful when used with [partitioning](partitioning.html), [`JSONB`](jsonb.html) columns, or [secondary indexes](indexes.html).
-
-- **Partitioning** requires that partitions are defined using columns that are a prefix of the [primary key](primary-key.html). In the case of geo-partitioning, some applications will want to collapse the number of possible values in this column, to make certain classes of queries more performant. For example, if a `users` table has `country` and `state` columns, you can make a stored computed column `locality` with a reduced domain for use in partitioning. For more information, see the [partitioning example](#create-a-table-with-geo-partitions-and-a-computed-column) below.
-
-- **JSONB** columns are used for storing semi-structured `JSONB` data. When the table's primary information is stored in `JSONB`, it's useful to index a particular field of the `JSONB` document. In particular, computed columns allow for the following use case: a two-column table with a `PRIMARY KEY` column and a `payload` column, whose primary key is computed as some field from the `payload` column. This alleviates the need to manually separate your primary keys from your JSON blobs. For more information, see the [`JSONB` example](#create-a-table-with-a-jsonb-column-and-a-computed-column) below.
-
-- **Secondary indexes** can be created on computed columns, which is especially useful when a table is frequently sorted. See the [secondary indexes example](#create-a-table-with-a-secondary-index-on-a-computed-column) below.
-
-## Considerations
-
-Computed columns:
-
-- Cannot be added after a table is created. Follow the [GitHub issue](https://github.com/cockroachdb/cockroach/issues/22652) for updates on this limitation.
-- Cannot be used to generate other computed columns.
-- Cannot be a [foreign key](foreign-key.html) reference.
-- Behave like any other column, with the exception that they cannot be written to directly.
-- Are mutually exclusive with [`DEFAULT`](default-value.html).
-
-## Creation
-
-Computed columns can only be added at the time of [table creation](create-table.html). Use the following syntax:
-
-~~~
-column_name <type> AS (<expression>) STORED
-~~~
-
-Parameter | Description
-----------|------------
-`column_name` | The [name/identifier](keywords-and-identifiers.html#identifiers) of the computed column.
-`<type>` | The [data type](data-types.html) of the computed column.
-`<expression>` | The pure [scalar expression](scalar-expressions.html) used to compute column values. Any functions marked as `impure`, such as `now()` or `nextval()`, cannot be used.
-`STORED` | _(Required)_ The computed column is stored alongside other columns.
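-
-For example, a minimal table with a stored computed column (the schema is illustrative):
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE names (
-    first_name STRING,
-    last_name STRING,
-    full_name STRING AS (concat(first_name, ' ', last_name)) STORED
-);
-~~~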
-
-## Examples
-
-### Create a Table with a Computed Column
-
-{% include {{ page.version.version }}/computed-columns/simple.md %}
-
-### Create a Table with Geo-partitions and a Computed Column
-
-{% include {{ page.version.version }}/computed-columns/partitioning.md %} The `locality` values can then be used for geo-partitioning.
-
-### Create a Table with a `JSONB` Column and a Computed Column
-
-{% include {{ page.version.version }}/computed-columns/jsonb.md %}
-
-### Create a Table with a Secondary Index on a Computed Column
-
-{% include {{ page.version.version }}/computed-columns/secondary-index.md %}
-
-## See Also
-
-- [Scalar Expressions](scalar-expressions.html)
-- [Information Schema](information-schema.html)
-- [`CREATE TABLE`](create-table.html)
-- [`JSONB`](jsonb.html)
-- [Define Table Partitions (Enterprise)](partitioning.html)
diff --git a/src/current/v2.0/configure-replication-zones.md b/src/current/v2.0/configure-replication-zones.md
deleted file mode 100644
index 0b03ce55bed..00000000000
--- a/src/current/v2.0/configure-replication-zones.md
+++ /dev/null
@@ -1,823 +0,0 @@
----
-title: Configure Replication Zones
-summary: In CockroachDB, you use replication zones to control the number and location of replicas for specific sets of data.
-keywords: ttl, time to live, availability zone
-toc: true
----
-
-In CockroachDB, you use **replication zones** to control the number and location of replicas for specific sets of data, both when replicas are first added and when they are rebalanced to maintain cluster equilibrium. Initially, there are some special pre-configured replication zones for internal system data along with a default replication zone that applies to the rest of the cluster. You can adjust these pre-configured zones as well as add zones for individual databases, tables and secondary indexes, and rows ([enterprise-only](enterprise-licensing.html)) as needed. For example, you might use the default zone to replicate most data in a cluster normally within a single datacenter, while creating a specific zone to more highly replicate a certain database or table across multiple datacenters and geographies.
-
-This page explains how replication zones work and how to use the `cockroach zone` [command](cockroach-commands.html) to configure them.
-
-{{site.data.alerts.callout_info}}
-Currently, only the `root` user can configure replication zones.
-{{site.data.alerts.end}}
-
-## Replication Zone Levels
-
-### For Table Data
-
-There are five replication zone levels for [**table data**](architecture/distribution-layer.html#table-data) in a cluster, listed from least to most granular:
-
-Level | Description
-------|------------
-Cluster | CockroachDB comes with a pre-configured `.default` replication zone that applies to all table data in the cluster not constrained by a database, table, or row-specific replication zone. This zone can be adjusted but not removed. See [View the Default Replication Zone](#view-the-default-replication-zone) and [Edit the Default Replication Zone](#edit-the-default-replication-zone) for more details.
-Database | You can add replication zones for specific databases. See [Create a Replication Zone for a Database](#create-a-replication-zone-for-a-database) for more details.
-Table | You can add replication zones for specific tables. See [Create a Replication Zone for a Table](#create-a-replication-zone-for-a-table).
-Index ([Enterprise-only](enterprise-licensing.html)) | The [secondary indexes](indexes.html) on a table will automatically use the replication zone for the table. However, with an enterprise license, you can add distinct replication zones for secondary indexes. See [Create a Replication Zone for a Secondary Index](#create-a-replication-zone-for-a-secondary-index) for more details.
-Row ([Enterprise-only](enterprise-licensing.html)) | You can add replication zones for specific rows in a table or secondary index by [defining table partitions](partitioning.html). See [Create a Replication Zone for a Table Partition](#create-a-replication-zone-for-a-table-or-secondary-index-partition-new-in-v2-0) for more details.
-
-### For System Data
-
-In addition, CockroachDB stores internal [**system data**](architecture/distribution-layer.html#monolithic-sorted-map-structure) in what are called system ranges. There are two replication zone levels for this internal system data, listed from least to most granular:
-
-Level | Description
-------|------------
-Cluster | The `.default` replication zone mentioned above also applies to all system ranges not constrained by a more specific replication zone.
-System Range | CockroachDB comes with pre-configured replication zones for the "meta" and "liveness" system ranges. If necessary, you can add replication zones for the "timeseries" range and other "system" ranges as well. See [Create a Replication Zone for a System Range](#create-a-replication-zone-for-a-system-range) for more details.
-
-CockroachDB also comes with a pre-configured replication zone for one internal table, `system.jobs`, which stores metadata about long-running jobs such as schema changes and backups. Historical queries are never run against this table and the rows in it are updated frequently, so the pre-configured zone gives this table a lower-than-default `ttlseconds`.
-
-### Level Priorities
-
-When replicating data, whether table or system, CockroachDB always uses the most granular replication zone available. For example, for a piece of user data:
-
-1. If there's a replication zone for the row, CockroachDB uses it.
-2. If there's no applicable row replication zone and the row is from a secondary index, CockroachDB uses the secondary index replication zone.
-3. If the row isn't from a secondary index or there is no applicable secondary index replication zone, CockroachDB uses the table replication zone.
-4. If there's no applicable table replication zone, CockroachDB uses the database replication zone.
-5. If there's no applicable database replication zone, CockroachDB uses the `.default` cluster-wide replication zone.
-
-{{site.data.alerts.callout_danger}}
-{% include {{page.version.version}}/known-limitations/system-range-replication.md %}
-{{site.data.alerts.end}}
-
-## Replication Zone Format
-
-A replication zone is specified in [YAML](https://en.wikipedia.org/wiki/YAML) format and looks like this:
-
-~~~ yaml
-range_min_bytes: <size-in-bytes>
-range_max_bytes: <size-in-bytes>
-gc:
- ttlseconds: <seconds>
-num_replicas: <number-of-replicas>
-constraints: <json-object-or-array>
-~~~
-
-Field | Description
-------|------------
-`range_min_bytes` | Not yet implemented.
-`range_max_bytes` | The maximum size, in bytes, for a range of data in the zone. When a range reaches this size, CockroachDB will split it into two ranges.<br><br>**Default:** `67108864` (64MiB)
-`ttlseconds` | The number of seconds overwritten values will be retained before garbage collection. Smaller values can save disk space if values are frequently overwritten; larger values increase the range allowed for `AS OF SYSTEM TIME` queries, also known as [Time Travel Queries](select-clause.html#select-historical-data-time-travel).<br><br>It is not recommended to set this below `600` (10 minutes); doing so will cause problems for long-running queries. Also, since all versions of a row are stored in a single range that never splits, it is not recommended to set this so high that all the changes to a row in that time period could add up to more than 64MiB; such oversized ranges could contribute to the server running out of memory or other problems.<br><br>**Default:** `90000` (25 hours)
-`num_replicas` | The number of replicas in the zone.<br><br>**Default:** `3`
-`constraints` | A JSON object or array of required and/or prohibited constraints influencing the location of replicas. See [Types of Constraints](#types-of-constraints) and [Scope of Constraints](#scope-of-constraints) for more details.<br><br>**Default:** No constraints, with CockroachDB locating each replica on a unique node and attempting to spread replicas evenly across localities.
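-
-For example, a complete zone configuration file might look like this (the values are illustrative):
-
-~~~ yaml
-range_min_bytes: 1048576
-range_max_bytes: 67108864
-gc:
- ttlseconds: 86400
-num_replicas: 5
-constraints: [+ssd]
-~~~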
-
-## Replication Constraints
-
-The location of replicas, both when they are first added and when they are rebalanced to maintain cluster equilibrium, is based on the interplay between descriptive attributes assigned to nodes and constraints set in zone configurations.
-
-{{site.data.alerts.callout_success}}For demonstrations of how to set node attributes and replication constraints in different scenarios, see Scenario-based Examples below.{{site.data.alerts.end}}
-
-### Descriptive Attributes Assigned to Nodes
-
-When starting a node with the [`cockroach start`](start-a-node.html) command, you can assign the following types of descriptive attributes:
-
-Attribute Type | Description
----------------|------------
-**Node Locality** | Using the `--locality` flag, you can assign arbitrary key-value pairs that describe the locality of the node. Locality might include country, region, datacenter, rack, etc. The key-value pairs should be ordered from most inclusive to least inclusive (e.g., country before datacenter before rack), and the keys and the order of key-value pairs must be the same on all nodes. It's typically better to include more pairs than fewer. For example:<br><br>`--locality=region=east,datacenter=us-east-1`<br>`--locality=region=west,datacenter=us-west-1`<br><br>CockroachDB attempts to spread replicas evenly across the cluster based on locality, with the order determining the priority. However, locality can be used to influence the location of data replicas in various ways using replication zones.<br><br>When there is high latency between nodes, CockroachDB also uses locality to move range leases closer to the current workload, reducing network round trips and improving read performance. See [Follow-the-workload](demo-follow-the-workload.html) for more details.
-**Node Capability** | Using the `--attrs` flag, you can specify node capability, which might include specialized hardware or number of cores, for example:<br><br>`--attrs=ram:64gb`
-**Store Type/Capability** | Using the `attrs` field of the `--store` flag, you can specify disk type or capability, for example:<br><br>`--store=path=/mnt/ssd01,attrs=ssd`<br>`--store=path=/mnt/hda1,attrs=hdd:7200rpm`
-
-### Types of Constraints
-
-The node-level and store-level descriptive attributes mentioned above can be used as the following types of constraints in replication zones to influence the location of replicas. However, note the following general guidance:
-
-- When locality is the only consideration for replication, it's recommended to set locality on nodes without specifying any constraints in zone configurations. In the absence of constraints, CockroachDB attempts to spread replicas evenly across the cluster based on locality.
-- Required and prohibited constraints are useful in special situations where, for example, data must or must not be stored in a specific country or on a specific type of machine.
-
-Constraint Type | Description | Syntax
-----------------|-------------|-------
-**Required** | When placing replicas, the cluster will consider only nodes/stores with matching attributes or localities. When there are no matching nodes/stores, new replicas will not be added. | `+ssd`
-**Prohibited** | When placing replicas, the cluster will ignore nodes/stores with matching attributes or localities. When there are no alternate nodes/stores, new replicas will not be added. | `-ssd`
-
-### Scope of Constraints
-
-Constraints can be specified such that they apply to all replicas in a zone or such that different constraints apply to different replicas, meaning you can effectively pick the exact location of each replica.
-
-Constraint Scope | Description | Syntax
------------------|-------------|-------
-**All Replicas** | Constraints specified using JSON array syntax apply to all replicas in every range that's part of the replication zone. | `constraints: [+ssd, -region=west]`
-**Per-Replica** | Multiple lists of constraints can be provided in a JSON object, mapping each list of constraints to an integer number of replicas in each range that the constraints should apply to.<br><br>The total number of replicas constrained cannot be greater than the total number of replicas for the zone (`num_replicas`). However, if the total number of replicas constrained is less than the total number of replicas for the zone, the non-constrained replicas will be allowed on any nodes/stores. | `constraints: {"+ssd,-region=west": 2, "+region=east": 1}`
-
-## Node/Replica Recommendations
-
-See [Cluster Topology](recommended-production-settings.html#cluster-topology) recommendations for production deployments.
-
-## Subcommands
-
-Subcommand | Usage
------------|------
-`ls` | List all replication zones.
-`get` | View the YAML contents of a replication zone.
-`set` | Create or edit a replication zone.
-`rm` | Remove a replication zone.
-
-## Synopsis
-
-~~~ shell
-# List all replication zones:
-$ cockroach zone ls
-
-# View the default replication zone for the cluster:
-$ cockroach zone get .default
-
-# View the replication zone for a database:
-$ cockroach zone get <database>
-
-# View the replication zone for a table:
-$ cockroach zone get <database.table>
-
-# View the replication zone for an index:
-$ cockroach zone get <database.table@index>
-
-# View the replication zone for a table or index partition:
-$ cockroach zone get <database.table.partition>
-
-# Edit the default replication zone for the cluster:
-$ cockroach zone set .default --file=<file.yaml>
-
-# Create/edit the replication zone for a database:
-$ cockroach zone set <database> --file=<file.yaml>
-
-# Create/edit the replication zone for a table:
-$ cockroach zone set <database.table> --file=<file.yaml>
-
-# Create/edit the replication zone for an index:
-$ cockroach zone set <database.table@index> --file=<file.yaml>
-
-# Create/edit the replication zone for a table or index partition:
-$ cockroach zone set <database.table.partition> --file=<file.yaml>
-
-# Remove the replication zone for a database:
-$ cockroach zone rm <database>
-
-# Remove the replication zone for a table:
-$ cockroach zone rm <database.table>
-
-# Remove the replication zone for an index:
-$ cockroach zone rm <database.table@index>
-
-# Remove the replication zone for a table or index partition:
-$ cockroach zone rm <database.table.partition>
-
-# View help:
-$ cockroach zone --help
-$ cockroach zone ls --help
-$ cockroach zone get --help
-$ cockroach zone set --help
-$ cockroach zone rm --help
-~~~
-
-## Flags
-
-The `zone` command and subcommands support the following [general-use](#general) and [logging](#logging) flags.
-
-### General
-
-Flag | Description
------|------------
-`--disable-replication` | Disable replication in the zone by setting the zone's replica count to 1. This is equivalent to setting `num_replicas: 1`.
-`--echo-sql` | New in v1.1: Reveal the SQL statements sent implicitly by the command-line utility. For a demonstration, see the [example](#reveal-the-sql-statements-sent-implicitly-by-the-command-line-utility) below.
-`--file`<br>`-f` | The path to the [YAML file](#replication-zone-format) defining the zone configuration. To pass the zone configuration via the standard input, set this flag to `-`.<br><br>This flag is relevant only for the `set` subcommand.
-
-### Client Connection
-
-{% include {{ page.version.version }}/sql/connection-parameters-with-url.md %}
-
-See [Client Connection Parameters](connection-parameters.html) for more details.
-
-Currently, only the `root` user can configure replication zones and the `--database` flag is not effective.
-
-### Logging
-
-By default, the `zone` command logs errors to `stderr`.
-
-If you need to troubleshoot this command's behavior, you can change its [logging behavior](debug-and-error-logs.html).
-
-## Basic Examples
-
-These examples focus on the basic approach and syntax for working with zone configuration. For examples demonstrating how to use constraints, see [Scenario-based Examples](#scenario-based-examples).
-
-### List the Pre-Configured Replication Zones
-
-New in v2.0: Newly created CockroachDB clusters start with some special pre-configured replication zones:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach zone ls --insecure
-~~~
-
-~~~
-.default
-.liveness
-.meta
-system.jobs
-~~~
-
-### View the Default Replication Zone
-
-The cluster-wide replication zone (`.default`) is initially set to replicate data to any three nodes in your cluster, with ranges in each replica splitting once they get larger than 67108864 bytes.
-
-To view the default replication zone, use the `cockroach zone get .default` command with appropriate flags:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach zone get .default --insecure
-~~~
-
-~~~
-.default
-range_min_bytes: 1048576
-range_max_bytes: 67108864
-gc:
- ttlseconds: 86400
-num_replicas: 3
-constraints: []
-~~~
-
-### Edit the Default Replication Zone
-
-{{site.data.alerts.callout_danger}}
-{% include {{page.version.version}}/known-limitations/system-range-replication.md %}
-{{site.data.alerts.end}}
-
-To edit the default replication zone, create a YAML file defining only the values you want to change (other values will be copied from the `.default` zone), and use the `cockroach zone set .default -f <file.yaml>` command with appropriate flags:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cat default_update.yaml
-~~~
-
-~~~
-num_replicas: 5
-~~~
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach zone set .default --insecure -f default_update.yaml
-~~~
-
-~~~
-range_min_bytes: 1048576
-range_max_bytes: 67108864
-gc:
- ttlseconds: 86400
-num_replicas: 5
-constraints: []
-~~~
-
-Alternately, you can pass the YAML content via the standard input:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ echo 'num_replicas: 5' | cockroach zone set .default --insecure -f -
-~~~
-
-### Create a Replication Zone for a Database
-
-To control replication for a specific database, create a YAML file defining only the values you want to change (other values will not be affected), and use the `cockroach zone set <database> -f <file.yaml>` command with appropriate flags:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cat database_zone.yaml
-~~~
-
-~~~
-num_replicas: 5
-~~~
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach zone set db1 --insecure -f database_zone.yaml
-~~~
-
-~~~
-range_min_bytes: 1048576
-range_max_bytes: 67108864
-gc:
- ttlseconds: 86400
-num_replicas: 5
-constraints: []
-~~~
-
-Alternately, you can pass the YAML content via the standard input:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ echo 'num_replicas: 5' | cockroach zone set db1 --insecure -f -
-~~~
-
-### Create a Replication Zone for a Table
-
-To control replication for a specific table, create a YAML file defining only the values you want to change (other values will not be affected), and use the `cockroach zone set <database.table> -f <file.yaml>` command with appropriate flags:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cat table_zone.yaml
-~~~
-
-~~~
-num_replicas: 7
-~~~
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach zone set db1.t1 --insecure -f table_zone.yaml
-~~~
-
-~~~
-range_min_bytes: 1048576
-range_max_bytes: 67108864
-gc:
- ttlseconds: 86400
-num_replicas: 7
-constraints: []
-~~~
-
-Alternately, you can pass the YAML content via the standard input:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ echo 'num_replicas: 7' | cockroach zone set db1.t1 --insecure -f -
-~~~
-
-### Create a Replication Zone for a Secondary Index
-
-{{site.data.alerts.callout_info}}
-This is an [enterprise-only](enterprise-licensing.html) feature.
-{{site.data.alerts.end}}
-
-The [secondary indexes](indexes.html) on a table will automatically use the replication zone for the table. However, with an enterprise license, you can add distinct replication zones for secondary indexes.
-
-To control replication for a specific secondary index, create a YAML file defining only the values you want to change (other values will not be affected), and use the `cockroach zone set <database.table@index> -f <file.yaml>` command with appropriate flags:
-
-{{site.data.alerts.callout_success}}
-To get the name of a secondary index, which you need for the `cockroach zone set` command, use the [`SHOW INDEX`](show-index.html) or [`SHOW CREATE TABLE`](show-create-table.html) statements.
-{{site.data.alerts.end}}
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cat index_zone.yaml
-~~~
-
-~~~
-num_replicas: 7
-~~~
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach zone set db1.table@idx1 \
---insecure \
---host=<address of any node> \
--f index_zone.yaml
-~~~
-
-~~~
-range_min_bytes: 1048576
-range_max_bytes: 67108864
-gc:
- ttlseconds: 86400
-num_replicas: 7
-constraints: []
-~~~
-
-Alternately, you can pass the YAML content via the standard input:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ echo 'num_replicas: 7' | cockroach zone set db1.table@idx1 \
---insecure \
---host=<address of any node> \
--f -
-~~~
-
-### Create a Replication Zone for a Table or Secondary Index Partition New in v2.0
-
-{{site.data.alerts.callout_info}}
-This is an [enterprise-only](enterprise-licensing.html) feature.
-{{site.data.alerts.end}}
-
-To [control replication for table partitions](partitioning.html#replication-zones), create a YAML file defining only the values you want to change (other values will not be affected), and use the `cockroach zone set <database.table.partition> -f <file.yaml>` command with appropriate flags:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cat > australia_zone.yml
-~~~
-
-~~~
-constraints: [+datacenter=au1]
-~~~
-
-Apply zone configurations to corresponding partitions:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach zone set roachlearn.students_by_list.australia \
---insecure \
---host=<address of any node> \
--f australia_zone.yml
-~~~
-
-{{site.data.alerts.callout_success}}
-Since the syntax is the same for defining a replication zone for a table or index partition (`database.table.partition`), give partitions names that communicate what they are partitioning, e.g., `australia_table` vs `australia_idx1`.
-{{site.data.alerts.end}}
-
-### Create a Replication Zone for a System Range
-
-In addition to the databases and tables that are visible via the SQL interface, CockroachDB stores internal data in what are called system ranges. CockroachDB comes with pre-configured replication zones for some of these ranges:
-
-Zone Name | Description
-----------|-----------------|------------
-`.meta` | The "meta" ranges contain the authoritative information about the location of all data in the cluster.
Because historical queries are never run on meta ranges and it is advantageous to keep these ranges smaller for reliable performance, CockroachDB comes with a **pre-configured** `.meta` replication zone giving these ranges a lower-than-default `ttlseconds`.
If your cluster is running in multiple datacenters, it's a best practice to configure the meta ranges to have a copy in each datacenter.
-`.liveness` | New in v2.0: The "liveness" range contains the authoritative information about which nodes are live at any given time.
Just as for "meta" ranges, historical queries are never run on the liveness range, so CockroachDB comes with a **pre-configured** `.liveness` replication zone giving this range a lower-than-default `ttlseconds`.
If this range is unavailable, the entire cluster will be unavailable, so giving it a high replication factor is strongly recommended.
-`.timeseries` | The "timeseries" ranges contain monitoring data about the cluster that powers the graphs in CockroachDB's admin UI. If necessary, you can add a `.timeseries` replication zone to control the replication of this data.
-`.system` | There are system ranges for a variety of other important internal data, including information needed to allocate new table IDs and track the status of a cluster's nodes. If necessary, you can add a `.system` replication zone to control the replication of this data.
-
-To control replication for one of the above sets of system ranges, create a YAML file defining only the values you want to change (other values will not be affected), and use the `cockroach zone set <zone> -f <file.yaml>` command with appropriate flags:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cat meta_zone.yaml
-~~~
-
-~~~
-num_replicas: 7
-~~~
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach zone set .meta --insecure -f meta_zone.yaml
-~~~
-
-~~~
-range_min_bytes: 1048576
-range_max_bytes: 67108864
-gc:
- ttlseconds: 86400
-num_replicas: 7
-constraints: []
-~~~
-
-Alternately, you can pass the YAML content via the standard input:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ echo 'num_replicas: 7' | cockroach zone set .meta --insecure -f -
-~~~
-
-### Reveal the SQL statements sent implicitly by the command-line utility
-
-In this example, we use the `--echo-sql` flag to reveal the SQL statement sent implicitly by the command-line utility:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ echo 'num_replicas: 5' | cockroach zone set .default --insecure --echo-sql -f -
-~~~
-
-~~~
-> BEGIN
-> SAVEPOINT cockroach_restart
-> SELECT config FROM system.zones WHERE id = $1
-> UPSERT INTO system.zones (id, config) VALUES ($1, $2)
-range_min_bytes: 1048576
-range_max_bytes: 67108864
-gc:
- ttlseconds: 90000
-num_replicas: 5
-constraints: []
-> RELEASE SAVEPOINT cockroach_restart
-> COMMIT
-~~~
-
-## Scenario-based Examples
-
-### Even Replication Across Datacenters
-
-**Scenario:**
-
-- You have 6 nodes across 3 datacenters, 2 nodes in each datacenter.
-- You want data replicated 3 times, with replicas balanced evenly across all three datacenters.
-
-**Approach:**
-
-Start each node with its datacenter location specified in the `--locality` flag:
-
-~~~ shell
-# Start the two nodes in datacenter 1:
-$ cockroach start --insecure --host=<node1 hostname> --locality=datacenter=us-1
-$ cockroach start --insecure --host=<node2 hostname> --locality=datacenter=us-1 \
---join=<node1 hostname>:26257
-
-# Start the two nodes in datacenter 2:
-$ cockroach start --insecure --host=<node3 hostname> --locality=datacenter=us-2 \
---join=<node1 hostname>:26257
-$ cockroach start --insecure --host=<node4 hostname> --locality=datacenter=us-2 \
---join=<node1 hostname>:26257
-
-# Start the two nodes in datacenter 3:
-$ cockroach start --insecure --host=<node5 hostname> --locality=datacenter=us-3 \
---join=<node1 hostname>:26257
-$ cockroach start --insecure --host=<node6 hostname> --locality=datacenter=us-3 \
---join=<node1 hostname>:26257
-~~~
-
-There's no need to make zone configuration changes; by default, the cluster is configured to replicate data three times, and even without explicit constraints, the cluster will aim to diversify replicas across node localities.
-
-### Per-Replica Constraints to Specific Datacenters New in v2.0
-
-**Scenario:**
-
-- You have 5 nodes across 5 datacenters in 3 regions, 1 node in each datacenter.
-- You want data replicated 3 times, with a quorum of replicas for a database holding West Coast data centered on the West Coast and a database for nation-wide data replicated across the entire country.
-
-**Approach:**
-
-1. Start each node with its region and datacenter location specified in the `--locality` flag:
-
- ~~~ shell
- # Start the five nodes:
- $ cockroach start --insecure --host=<node1 hostname> --locality=region=us-west1,datacenter=us-west1-a
- $ cockroach start --insecure --host=<node2 hostname> --locality=region=us-west1,datacenter=us-west1-b \
- --join=<node1 hostname>:26257
- $ cockroach start --insecure --host=<node3 hostname> --locality=region=us-central1,datacenter=us-central1-a \
- --join=<node1 hostname>:26257
- $ cockroach start --insecure --host=<node4 hostname> --locality=region=us-east1,datacenter=us-east1-a \
- --join=<node1 hostname>:26257
- $ cockroach start --insecure --host=<node5 hostname> --locality=region=us-east1,datacenter=us-east1-b \
- --join=<node1 hostname>:26257
- ~~~
-
-2. On any node, configure a replication zone for the database used by the West Coast application:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- # Create a YAML file with per-replica constraints:
- $ cat west_app_zone.yaml
- ~~~
-
- ~~~
- constraints: {"+region=us-west1": 2, "+region=us-central1": 1}
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- # Apply the replication zone to the database used by the West Coast application:
- $ cockroach zone set west_app_db --insecure -f west_app_zone.yaml
- ~~~
-
- ~~~
- range_min_bytes: 1048576
- range_max_bytes: 67108864
- gc:
- ttlseconds: 86400
- num_replicas: 3
- constraints: {+region=us-central1: 1, +region=us-west1: 2}
- ~~~
-
- Two of the database's three replicas will be put in `region=us-west1` and its remaining replica will be put in `region=us-central1`. This gives the application the resilience to survive the total failure of any one datacenter while providing low-latency reads and writes on the West Coast because a quorum of replicas are located there.
-
-3. No configuration is needed for the nation-wide database. The cluster is configured to replicate data 3 times and spread them as widely as possible by default. Because the first key-value pair specified in each node's locality is considered the most significant part of each node's locality, spreading data as widely as possible means putting one replica in each of the three different regions.
-
-### Multiple Applications Writing to Different Databases
-
-**Scenario:**
-
-- You have 2 independent applications connected to the same CockroachDB cluster, each application using a distinct database.
-- You have 6 nodes across 2 datacenters, 3 nodes in each datacenter.
-- You want the data for application 1 to be replicated 5 times, with replicas evenly balanced across both datacenters.
-- You want the data for application 2 to be replicated 3 times, with all replicas in a single datacenter.
-
-**Approach:**
-
-1. Start each node with its datacenter location specified in the `--locality` flag:
-
- ~~~ shell
- # Start the three nodes in datacenter 1:
- $ cockroach start --insecure --host=<node1 hostname> --locality=datacenter=us-1
- $ cockroach start --insecure --host=<node2 hostname> --locality=datacenter=us-1 \
- --join=<node1 hostname>:26257
- $ cockroach start --insecure --host=<node3 hostname> --locality=datacenter=us-1 \
- --join=<node1 hostname>:26257
-
- # Start the three nodes in datacenter 2:
- $ cockroach start --insecure --host=<node4 hostname> --locality=datacenter=us-2 \
- --join=<node1 hostname>:26257