diff --git a/src/current/_includes/releases/v2.0/v1.2-alpha.20171026.md b/src/current/_includes/releases/v2.0/v1.2-alpha.20171026.md
deleted file mode 100644
index 13b9fae49ee..00000000000
--- a/src/current/_includes/releases/v2.0/v1.2-alpha.20171026.md
+++ /dev/null
@@ -1,75 +0,0 @@

## {{ include.release }}

Release Date: {{ include.release_date | date: "%B %-d, %Y" }}

### Backwards-Incompatible Changes

- Casts from `BYTES` to `STRING` have been changed and now work the same way as in PostgreSQL. New functions `encode()` and `decode()` are available to replace the former functionality (a usage sketch follows). [#18843](https://github.com/cockroachdb/cockroach/pull/18843)
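
A minimal sketch of the replacement functions, assuming PostgreSQL's `hex` format name:

```sql
SELECT encode(b'hello', 'hex');      -- BYTES -> STRING: '68656c6c6f'
SELECT decode('68656c6c6f', 'hex');  -- STRING -> BYTES: b'hello'
```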

### General Changes

- CockroachDB now requires Go 1.9. [#18459](https://github.com/cockroachdb/cockroach/pull/18459)
- Release binaries now link against `libtinfo` dynamically. Building CockroachDB from source now requires `libtinfo` (or `ncurses`) development packages. [#18979](https://github.com/cockroachdb/cockroach/pull/18979)
- Building the web UI now requires Node version 6 and Yarn version 0.22.0 or newer. [#18830](https://github.com/cockroachdb/cockroach/pull/18830)
- Most dependencies have been updated to their latest versions. [#17490](https://github.com/cockroachdb/cockroach/pull/17490)
- Release Docker images are now based on Debian 8.9. [#18748](https://github.com/cockroachdb/cockroach/pull/18748)

### SQL Language Changes

- `DROP DATABASE` now defaults to `CASCADE`, restoring the 1.0 (and PostgreSQL-compatible) behavior. [#19182](https://github.com/cockroachdb/cockroach/pull/19182)
- The `INET` column type and related functions are now supported (see the sketch after this list). [#18171](https://github.com/cockroachdb/cockroach/pull/18171) [#18585](https://github.com/cockroachdb/cockroach/pull/18585)
- The `ANY`, `SOME`, and `ALL` functions now support subquery and tuple operands. [#18094](https://github.com/cockroachdb/cockroach/pull/18094) [#19266](https://github.com/cockroachdb/cockroach/pull/19266)
- `current_schemas(false)` behaves more consistently with PostgreSQL. [#18108](https://github.com/cockroachdb/cockroach/pull/18108)
- `SET CLUSTER SETTING` now supports prepared statement placeholders. [#18377](https://github.com/cockroachdb/cockroach/pull/18377)
- `SHOW CLUSTER SETTINGS` is now only available to `root`. [#19031](https://github.com/cockroachdb/cockroach/pull/19031)
- A new cluster setting, `cloudstorage.gs.default.key`, can be used to store authentication credentials for use by `BACKUP` and `RESTORE`. [#19018](https://github.com/cockroachdb/cockroach/pull/19018)
- The `RESTORE DATABASE` statement is now supported. [#19182](https://github.com/cockroachdb/cockroach/pull/19182)
- `IMPORT` now reports progress incrementally. [#18677](https://github.com/cockroachdb/cockroach/pull/18677)
- `IMPORT` now supports the `into_db` option. [#18899](https://github.com/cockroachdb/cockroach/pull/18899)
- The `date_trunc()` function is now available. [#19297](https://github.com/cockroachdb/cockroach/pull/19297)
- The new `gen_random_uuid()` function is equivalent to `uuid_v4()` but returns type `UUID` instead of `BYTES`. [#19379](https://github.com/cockroachdb/cockroach/pull/19379)
- The `extract` function now works with `TIMESTAMP WITH TIME ZONE` in addition to plain `TIMESTAMP` and `DATE`. [#19045](https://github.com/cockroachdb/cockroach/pull/19045)
- `TIMESTAMP WITH TIME ZONE` values are now printed in the correct session time zone. [#19081](https://github.com/cockroachdb/cockroach/pull/19081)
- PostgreSQL compatibility updates: the `pg_namespace.aclitem` column has been renamed to `nspacl`; `pg_class` now has a `relpersistence` column; new functions `pg_encoding_to_char`, `pg_get_viewdef`, and `pg_get_keywords` are available; the `pg_tablespace` table is now available; the type name `"char"` (with quotes) is recognized as an alias for `CHAR`; and the `server_version_num` session variable is now available. [#18530](https://github.com/cockroachdb/cockroach/pull/18530) [#18618](https://github.com/cockroachdb/cockroach/pull/18618) [#19127](https://github.com/cockroachdb/cockroach/pull/19127) [#19150](https://github.com/cockroachdb/cockroach/pull/19150) [#19405](https://github.com/cockroachdb/cockroach/pull/19405)
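
A minimal sketch combining two of the additions above, the `INET` type and `gen_random_uuid()`; the `sessions` table and its columns are hypothetical:

```sql
CREATE TABLE sessions (
    id UUID DEFAULT gen_random_uuid() PRIMARY KEY,  -- typed UUID, unlike uuid_v4()'s BYTES
    client_addr INET
);
INSERT INTO sessions (client_addr) VALUES ('192.168.0.1');
```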

### Command-Line Interface Changes

- A new flag `--temp-dir` can be used to set the location of temporary files (defaults to a subdirectory of the first store). [#18544](https://github.com/cockroachdb/cockroach/pull/18544)
- Many bugs in the interactive SQL shell have been fixed by switching to `libedit` for command-line input. The `normalize_history` option has been removed. [#18531](https://github.com/cockroachdb/cockroach/pull/18531) [#19125](https://github.com/cockroachdb/cockroach/pull/19125)
- New command `cockroach load show` displays information about available backups. [#18434](https://github.com/cockroachdb/cockroach/pull/18434)
- `cockroach node status` and `cockroach node ls` no longer show nodes that are decommissioned and dead. [#18270](https://github.com/cockroachdb/cockroach/pull/18270)
- The `cockroach node decommission` command now has less noisy output. [#18458](https://github.com/cockroachdb/cockroach/pull/18458)

### Bug Fixes

- Fixed issues when `meta2` ranges split, lifting the ~64TB cluster size limitation. [#18709](https://github.com/cockroachdb/cockroach/pull/18709) [#18970](https://github.com/cockroachdb/cockroach/pull/18970)
- More errors now return the same error codes as PostgreSQL. [#19103](https://github.com/cockroachdb/cockroach/pull/19103)
- `ROLLBACK` can no longer return a "transaction aborted" error. [#19167](https://github.com/cockroachdb/cockroach/pull/19167)
- Fixed a panic in `SHOW TRACE FOR SELECT COUNT(*)`. [#19006](https://github.com/cockroachdb/cockroach/pull/19006)
- Escaped backslashes are now supported in `regexp_replace` substitution strings. [#19168](https://github.com/cockroachdb/cockroach/pull/19168)
- `extract(quarter FROM ts)` now works correctly (see the example after this list). [#19298](https://github.com/cockroachdb/cockroach/pull/19298)
- The node liveness system is now more robust on a heavily loaded cluster. [#19279](https://github.com/cockroachdb/cockroach/pull/19279)
- Added debug logging when attempting to commit a non-existent intent. [#17580](https://github.com/cockroachdb/cockroach/pull/17580)
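
For instance, extracting the quarter from an October timestamp now returns the expected value:

```sql
SELECT extract(quarter FROM TIMESTAMP '2017-10-26 00:00:00');  -- 4
```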

### Performance Improvements

- The new `timeseries.resolution_10s.storage_duration` cluster setting can be used to reduce the storage used by built-in monitoring. [#18632](https://github.com/cockroachdb/cockroach/pull/18632)
- Foreign key checks are now performed in batches. [#18730](https://github.com/cockroachdb/cockroach/pull/18730)
- Raft ready processing is now batched, increasing performance of uncontended single-range write workloads. [#19056](https://github.com/cockroachdb/cockroach/pull/19056) [#19164](https://github.com/cockroachdb/cockroach/pull/19164)
- The leaseholder cache is now sharded to improve concurrency, and it uses less memory. [#17987](https://github.com/cockroachdb/cockroach/pull/17987) [#18443](https://github.com/cockroachdb/cockroach/pull/18443)
- Finding split keys is now more efficient. [#18649](https://github.com/cockroachdb/cockroach/pull/18649) [#18718](https://github.com/cockroachdb/cockroach/pull/18718)
- `STDDEV` and `VARIANCE` aggregations can now be parallelized by the distributed SQL engine. [#18520](https://github.com/cockroachdb/cockroach/pull/18520)
- Store statistics are now updated immediately after rebalancing. [#18425](https://github.com/cockroachdb/cockroach/pull/18425) [#19115](https://github.com/cockroachdb/cockroach/pull/19115)
- Raft log truncation is now faster. [#18706](https://github.com/cockroachdb/cockroach/pull/18706)
- Replica rebalancing is now prioritized over lease rebalancing. [#17595](https://github.com/cockroachdb/cockroach/pull/17595)
- `IMPORT` and `RESTORE` are more efficient. [#19070](https://github.com/cockroachdb/cockroach/pull/19070)
- Restoring a backup no longer creates an extra empty range. [#19052](https://github.com/cockroachdb/cockroach/pull/19052)
- Improved performance of type checking. [#19078](https://github.com/cockroachdb/cockroach/pull/19078)
- The replica allocator now avoids adding new replicas that it would immediately try to undo. [#18364](https://github.com/cockroachdb/cockroach/pull/18364)
- Improved performance of the SQL parser. [#19068](https://github.com/cockroachdb/cockroach/pull/19068)
- Strings used for statistics reporting are now cached in prepared statements. [#19240](https://github.com/cockroachdb/cockroach/pull/19240)
- Reduced command queue contention during intent resolution. [#19093](https://github.com/cockroachdb/cockroach/pull/19093)
- Transactions that do not use the client-directed retry protocol and experience retry errors are now more likely to detect those errors early instead of at commit time. [#18858](https://github.com/cockroachdb/cockroach/pull/18858)
- Commands that have already exceeded their deadline are now dropped before proposal. [#19380](https://github.com/cockroachdb/cockroach/pull/19380)
- Reduced the encoded size of some internal protocol buffers, reducing disk write amplification. [#18689](https://github.com/cockroachdb/cockroach/pull/18689) [#18834](https://github.com/cockroachdb/cockroach/pull/18834) [#18835](https://github.com/cockroachdb/cockroach/pull/18835) [#18828](https://github.com/cockroachdb/cockroach/pull/18828) [#18910](https://github.com/cockroachdb/cockroach/pull/18910) [#18950](https://github.com/cockroachdb/cockroach/pull/18950)
- Reduced memory allocations and GC overhead. [#18914](https://github.com/cockroachdb/cockroach/pull/18914) [#18927](https://github.com/cockroachdb/cockroach/pull/18927) [#18928](https://github.com/cockroachdb/cockroach/pull/18928) [#19136](https://github.com/cockroachdb/cockroach/pull/19136) [#19246](https://github.com/cockroachdb/cockroach/pull/19246)

diff --git a/src/current/_includes/releases/v2.0/v1.2-alpha.20171113.md b/src/current/_includes/releases/v2.0/v1.2-alpha.20171113.md
deleted file mode 100644
index 2922a6c1546..00000000000
--- a/src/current/_includes/releases/v2.0/v1.2-alpha.20171113.md
+++ /dev/null
@@ -1,123 +0,0 @@

## {{ include.release }}

Release Date: {{ include.release_date | date: "%B %-d, %Y" }}

### Backwards-Incompatible Changes

- Redefined `NaN` comparisons to be compatible with PostgreSQL: `NaN` is now equal to itself and sorts before all other non-`NULL` values (see the sketch after this list). [#19144](https://github.com/cockroachdb/cockroach/pull/19144)
- It is no longer possible to [drop a user](https://www.cockroachlabs.com/docs/v2.0/drop-user) with grants; the user's grants must first be [revoked](https://www.cockroachlabs.com/docs/v2.0/revoke). [#19095](https://github.com/cockroachdb/cockroach/pull/19095)
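
A short sketch of the new `NaN` semantics described above:

```sql
SELECT 'NaN'::FLOAT = 'NaN'::FLOAT;  -- now true: NaN equals itself

-- NaN sorts before all other non-NULL values:
SELECT x
FROM (VALUES (1.0::FLOAT), ('NaN'::FLOAT), (-1.0::FLOAT)) AS t(x)
ORDER BY x;  -- NaN, -1.0, 1.0
```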

### Build Changes

- Fixed compilation on the 64-bit ARM architecture. [#19795](https://github.com/cockroachdb/cockroach/pull/19795)
- NodeJS 6+ and Yarn 1.0+ are now required to build CockroachDB. [#18349](https://github.com/cockroachdb/cockroach/pull/18349)

### SQL Language Changes

- [`SHOW GRANTS`](https://www.cockroachlabs.com/docs/v2.0/show-grants) (no user specified) and `SHOW GRANTS FOR <user>` are now supported. The former lists all grants for all users on all databases and tables; the latter does so for a specified user. [#19095](https://github.com/cockroachdb/cockroach/pull/19095)
- [`SHOW GRANTS`](https://www.cockroachlabs.com/docs/v2.0/show-grants) statements now report the database name for tables. [#19095](https://github.com/cockroachdb/cockroach/pull/19095)
- [`CREATE USER`](https://www.cockroachlabs.com/docs/v2.0/create-user) statements are no longer included in the results of [`SHOW QUERIES`](https://www.cockroachlabs.com/docs/v2.0/show-queries) statements. [#19095](https://github.com/cockroachdb/cockroach/pull/19095)
- The new `ALTER USER ... WITH PASSWORD ...` statement can be used to change a user's password (see the sketch after this list). [#19095](https://github.com/cockroachdb/cockroach/pull/19095)
- [`CREATE USER IF NOT EXISTS`](https://www.cockroachlabs.com/docs/v2.0/create-user) is now supported. [#19095](https://github.com/cockroachdb/cockroach/pull/19095)
- New [foreign key constraints](https://www.cockroachlabs.com/docs/v2.0/foreign-key) without an action specified for `ON DELETE` or `ON UPDATE` now default to `NO ACTION`, and existing foreign key constraints are now considered to have both `ON UPDATE` and `ON DELETE` actions set to `NO ACTION` even if `RESTRICT` was specified at the time of creation. To set an existing foreign key constraint's action to `RESTRICT`, the constraint must be dropped and recreated.

    Note that `NO ACTION` and `RESTRICT` are currently equivalent and will remain so until options for deferring constraint checking are added. [#19416](https://github.com/cockroachdb/cockroach/pull/19416)

- Added more columns to [`information_schema.table_constraints`](https://www.cockroachlabs.com/docs/v2.0/information-schema#table_constraints). [#19466](https://github.com/cockroachdb/cockroach/pull/19466)
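
A minimal sketch of the new user-management statements; the user name and password are hypothetical:

```sql
CREATE USER IF NOT EXISTS maxroach;
ALTER USER maxroach WITH PASSWORD 'hunter2';
SHOW GRANTS FOR maxroach;
```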

### Command-Line Interface Changes

- On node startup, the location for temporary files, as defined by the `--temp-dir` flag, is printed to the standard output. [#19272](https://github.com/cockroachdb/cockroach/pull/19272)

### Admin UI Changes

- [Decommissioned nodes](https://www.cockroachlabs.com/docs/v2.0/remove-nodes) no longer cause warnings about staggered versions. [#19547](https://github.com/cockroachdb/cockroach/pull/19547)

### Bug Fixes

- Fixed a bug causing redundant log messages when running [`SHOW TRACE FOR`](https://www.cockroachlabs.com/docs/v2.0/show-trace). [#19468](https://github.com/cockroachdb/cockroach/pull/19468)
- [`DROP INDEX IF EXISTS`](https://www.cockroachlabs.com/docs/v2.0/drop-index) now behaves properly when not using `table@idx` syntax. [#19390](https://github.com/cockroachdb/cockroach/pull/19390)
- Fixed a double close of the merge joiner output. [#19794](https://github.com/cockroachdb/cockroach/pull/19794)
- Fixed a panic caused by placeholders in `PREPARE` statements. [#19636](https://github.com/cockroachdb/cockroach/pull/19636)
- Improved error messages about Raft progress in the replicate queue. [#19593](https://github.com/cockroachdb/cockroach/pull/19593)
- The [`cockroach dump`](https://www.cockroachlabs.com/docs/v2.0/sql-dump) command now properly supports [`ARRAY`](https://www.cockroachlabs.com/docs/v2.0/array) values. [#19498](https://github.com/cockroachdb/cockroach/pull/19498)
- Fixed range splitting to work when the first row of a range is larger than half the configured range size. [#19339](https://github.com/cockroachdb/cockroach/pull/19339)
- Reduced unnecessary log messages when a cluster becomes temporarily unbalanced, for example, when a new node joins. [#19494](https://github.com/cockroachdb/cockroach/pull/19494)
- Using [`DELETE`](https://www.cockroachlabs.com/docs/v2.0/delete) without `WHERE` and `RETURNING` inside `[...]` no longer causes a panic. [#19822](https://github.com/cockroachdb/cockroach/pull/19822)
- SQL comparisons using the `ANY`, `SOME`, or `ALL` [operators](https://www.cockroachlabs.com/docs/v2.0/functions-and-operators#operators) with subqueries and cast expressions work properly again. [#19801](https://github.com/cockroachdb/cockroach/pull/19801)
- On macOS, the built-in SQL shell ([`cockroach sql`](https://www.cockroachlabs.com/docs/v2.0/use-the-built-in-sql-client)) once again properly supports window resizing and suspend-to-background. [#19429](https://github.com/cockroachdb/cockroach/pull/19429)
- Silenced an overly verbose log message. [#19504](https://github.com/cockroachdb/cockroach/pull/19504)
- Fixed a bug preventing large, distributed queries that overflow onto disk from completing. [#19689](https://github.com/cockroachdb/cockroach/pull/19689)
- It is not possible to `EXECUTE` inside of `PREPARE` statements or alongside other `EXECUTE` statements; attempting to do so no longer causes a panic. [#19809](https://github.com/cockroachdb/cockroach/pull/19809) [#19720](https://github.com/cockroachdb/cockroach/pull/19720)
- The Admin UI now works when a different `--advertise-host` is used. [#19426](https://github.com/cockroachdb/cockroach/pull/19426)
- An improperly typed subquery used with `IN` no longer panics. [#19858](https://github.com/cockroachdb/cockroach/pull/19858)
- It is now possible to [`RESTORE`](https://www.cockroachlabs.com/docs/v2.0/restore) using an incremental [`BACKUP`](https://www.cockroachlabs.com/docs/v2.0/backup) taken after a table was dropped. [#19601](https://github.com/cockroachdb/cockroach/pull/19601)
- Fixed an always-disabled crash reporting setting. [#19554](https://github.com/cockroachdb/cockroach/pull/19554)
- Prevented occasional crashes when the server is shut down during startup. [#19591](https://github.com/cockroachdb/cockroach/pull/19591)
- Prevented a potential Gossip deadlock on cluster startup. [#19493](https://github.com/cockroachdb/cockroach/pull/19493)
- Improved error handling during splits. [#19448](https://github.com/cockroachdb/cockroach/pull/19448)
- Some I/O errors now cause the server to shut down. [#19447](https://github.com/cockroachdb/cockroach/pull/19447)
- Improved resiliency to S3 quota limits by retrying some operations during [`BACKUP`](https://www.cockroachlabs.com/docs/v2.0/backup)/[`RESTORE`](https://www.cockroachlabs.com/docs/v2.0/restore)/[`IMPORT`](https://www.cockroachlabs.com/docs/v2.0/import).
- Executing [`TRUNCATE`](https://www.cockroachlabs.com/docs/v2.0/truncate) on a table with self-referential foreign key constraints no longer creates broken foreign key backward references. [#19322](https://github.com/cockroachdb/cockroach/issues/19322)

### Performance Improvements

- Improved memory usage for certain queries that use limits at multiple levels. [#19682](https://github.com/cockroachdb/cockroach/pull/19682)
- Eliminated some redundant Raft messages, improving write performance for some workloads by up to 30%. [#19540](https://github.com/cockroachdb/cockroach/pull/19540)
- Trimmed the wire size of various RPCs. [#18930](https://github.com/cockroachdb/cockroach/pull/18930)
- Table leases are now acquired in the background when frequently used, removing a jump in latency when they expire. [#19005](https://github.com/cockroachdb/cockroach/pull/19005)

### Enterprise Edition Changes

- When an enterprise [`RESTORE`](https://www.cockroachlabs.com/docs/v2.0/restore) fails or is canceled, partially restored data is now properly cleaned up. [#19578](https://github.com/cockroachdb/cockroach/pull/19578)
- A placeholder is now written during long-running [`BACKUP`](https://www.cockroachlabs.com/docs/v2.0/backup) and [`IMPORT`](https://www.cockroachlabs.com/docs/v2.0/import) jobs to prevent concurrent operations from accidentally using the same destination. [#19713](https://github.com/cockroachdb/cockroach/pull/19713)

### Doc Updates

- New RFCs:
  - [Inverted indexes](https://github.com/cockroachdb/cockroach/blob/master/docs/RFCS/20171020_inverted_indexes.md) [#18992](https://github.com/cockroachdb/cockroach/pull/18992)
  - [`JSONB` encoding](https://github.com/cockroachdb/cockroach/blob/master/docs/RFCS/20171005_jsonb_encoding.md) [#19062](https://github.com/cockroachdb/cockroach/pull/19062)
  - [SQL Sequences](https://github.com/cockroachdb/cockroach/blob/master/docs/RFCS/20171102_sql_sequences.md) [#19196](https://github.com/cockroachdb/cockroach/pull/19196)
  - [Interleaved table JOINs](https://github.com/cockroachdb/cockroach/blob/master/docs/RFCS/20171025_interleaved_table_joins.md) [#19028](https://github.com/cockroachdb/cockroach/pull/19028)
  - [SQL consistency check command](https://github.com/cockroachdb/cockroach/blob/master/docs/RFCS/20171025_scrub_sql_consistency_check_command.md) [#18675](https://github.com/cockroachdb/cockroach/pull/18675)
- Documented how to [increase the system-wide file descriptors limit on Linux](https://www.cockroachlabs.com/docs/v2.0/recommended-production-settings#file-descriptors-limit). [#2139](https://github.com/cockroachdb/docs/pull/2139)
- Clarified that multiple transaction options in a single [`SET TRANSACTION`](https://www.cockroachlabs.com/docs/v2.0/set-transaction#set-isolation-priority) statement can be space-separated as well as comma-separated. [#2139](https://github.com/cockroachdb/docs/pull/2139)
- Added `e'\\x` to the list of supported [hexadecimal-encoded byte array literals](https://www.cockroachlabs.com/docs/v2.0/sql-constants#hexadecimal-encoded-byte-array-literals) formats. [#2134](https://github.com/cockroachdb/docs/pull/2134)
- Clarified the FAQ on [auto-generating unique row IDs](https://www.cockroachlabs.com/docs/v2.0/sql-faqs#how-do-i-auto-generate-unique-row-ids-in-cockroachdb). [#2128](https://github.com/cockroachdb/docs/pull/2128)
- Corrected the aliases and allowed widths of various [`INT`](https://www.cockroachlabs.com/docs/v1.1/int) types. [#2116](https://github.com/cockroachdb/docs/pull/2116)
- Corrected the description of the `--host` flag in our insecure [cloud deployment tutorials](https://www.cockroachlabs.com/docs/v1.1/manual-deployment). [#2117](https://github.com/cockroachdb/docs/pull/2117)
- Minor improvements to the [CockroachDB Architecture Overview](https://www.cockroachlabs.com/docs/v1.1/architecture/overview) page. [#2103](https://github.com/cockroachdb/docs/pull/2103) [#2104](https://github.com/cockroachdb/docs/pull/2104) [#2105](https://github.com/cockroachdb/docs/pull/2105)

diff --git a/src/current/_includes/releases/v2.0/v1.2-alpha.20171204.md b/src/current/_includes/releases/v2.0/v1.2-alpha.20171204.md
deleted file mode 100644
index 03dbbef7fcb..00000000000
--- a/src/current/_includes/releases/v2.0/v1.2-alpha.20171204.md
+++ /dev/null
@@ -1,87 +0,0 @@

## {{ include.release }}

Release Date: {{ include.release_date | date: "%B %-d, %Y" }}

### General Changes

- CockroachDB now uses RocksDB version 5.9.0. [#20070](https://github.com/cockroachdb/cockroach/pull/20070)

### Build Changes

- Restored compatibility with older x86 CPUs that do not support SSE4.2 extensions. [#19909](https://github.com/cockroachdb/cockroach/issues/19909)

### SQL Language Changes

- The `TIME` data type is now supported. [#19923](https://github.com/cockroachdb/cockroach/pull/19923)
- The [`IMPORT`](https://www.cockroachlabs.com/docs/v2.0/import) command now tolerates empty CSV files and supports `201` and `204` return codes from HTTP storage. [#19861](https://github.com/cockroachdb/cockroach/pull/19861) [#20027](https://github.com/cockroachdb/cockroach/pull/20027)
- [`nodelocal://`](https://www.cockroachlabs.com/docs/v2.0/import#import-file-urls) paths in `IMPORT` now default to relative within the "extern" subdirectory of the first store directory, configurable via the new `--external-io-dir` flag. [#19865](https://github.com/cockroachdb/cockroach/pull/19865)
- Added `AWS_ENDPOINT` and `AWS_REGION` parameters in S3 URIs to specify the AWS endpoint or region for [`IMPORT`](https://www.cockroachlabs.com/docs/v2.0/import). The endpoint can be any S3-compatible service. [#19860](https://github.com/cockroachdb/cockroach/pull/19860)
- For compatibility with PostgreSQL:
  - The `time zone` [session variable](https://www.cockroachlabs.com/docs/v2.0/set-vars) (with a space) has been renamed `timezone` (without a space), and `SET TIMEZONE` and `SHOW TIMEZONE` are now supported alongside the existing `SET TIME ZONE` and `SHOW TIME ZONE` syntax. Also, `SET TIMEZONE =` can now be used as an alternative to `SET TIMEZONE TO`. [#19931](https://github.com/cockroachdb/cockroach/pull/19931)
  - The `transaction_read_only` [session variable](https://www.cockroachlabs.com/docs/v2.0/set-vars) is now supported. It is always set to `off`. [#19971](https://github.com/cockroachdb/cockroach/pull/19971)
  - The `transaction isolation level`, `transaction priority`, and `transaction status` [session variables](https://www.cockroachlabs.com/docs/v2.0/set-vars) have been renamed `transaction_isolation`, `transaction_priority`, and `transaction_status`. [#20264](https://github.com/cockroachdb/cockroach/pull/20264)
- [`SHOW TRACE FOR SELECT`](https://www.cockroachlabs.com/docs/v2.0/show-trace) now supports `AS OF SYSTEM TIME`. [#20162](https://github.com/cockroachdb/cockroach/pull/20162)
- Added the `system.table_statistics` table for maintaining statistics about columns or groups of columns. These statistics will eventually be used by the query optimizer. [#20072](https://github.com/cockroachdb/cockroach/pull/20072)
- The [`UPDATE`](https://www.cockroachlabs.com/docs/v2.0/update) and [`DELETE`](https://www.cockroachlabs.com/docs/v2.0/delete) statements now support `ORDER BY` and `LIMIT` clauses (see the sketch after this list). [#20069](https://github.com/cockroachdb/cockroach/pull/20069)
  - For `UPDATE`, this is a MySQL extension that can help with updating the primary key of a table (`ORDER BY`) and controlling the maximum size of write transactions (`LIMIT`).
  - For `DELETE`, the `ORDER BY` clause constrains the deletion order, the output of its `LIMIT` clause (if any), and the result order of its `RETURNING` clause (if any).
- On table creation, [`DEFAULT`](https://www.cockroachlabs.com/docs/v2.0/default-value) expressions no longer get evaluated. [#20031](https://github.com/cockroachdb/cockroach/pull/20031)
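
A minimal sketch of the new `ORDER BY`/`LIMIT` support; the `events` and `accounts` tables and their columns are hypothetical:

```sql
-- Bound the size of a large delete and control its order:
DELETE FROM events
WHERE ts < '2017-01-01'
ORDER BY ts
LIMIT 1000
RETURNING id;

-- Cap the size of a single write transaction:
UPDATE accounts SET flagged = true WHERE balance < 0 ORDER BY id LIMIT 100;
```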

### Command-Line Interface Changes

- The [`cockroach node status`](https://www.cockroachlabs.com/docs/v2.0/view-node-details) command now indicates if a node is dead. [#20192](https://github.com/cockroachdb/cockroach/pull/20192)
- The new `--external-io-dir` flag in [`cockroach start`](https://www.cockroachlabs.com/docs/v2.0/start-a-node) can be used to configure the location of [`nodelocal://`](https://www.cockroachlabs.com/docs/v2.0/import#import-file-urls) paths in `BACKUP`, `RESTORE`, and `IMPORT`. [#19725](https://github.com/cockroachdb/cockroach/pull/19725)

### Admin UI Changes

- Updated time series axis labels to show the correct byte units. [#19870](https://github.com/cockroachdb/cockroach/pull/19870)
- Added a cluster overview page showing current capacity usage, node liveness, and replication status. [#19657](https://github.com/cockroachdb/cockroach/pull/19657)

### Bug Fixes

- Fixed how column modifiers interact with [`ARRAY`](https://www.cockroachlabs.com/docs/v2.0/array) values. [#19499](https://github.com/cockroachdb/cockroach/pull/19499)
- Enabled an RPC-saving optimization when the `--advertise-host` flag is used. [#20006](https://github.com/cockroachdb/cockroach/pull/20006)
- It is now possible to [drop a column](https://www.cockroachlabs.com/docs/v2.0/drop-column) that is referenced as a [foreign key](https://www.cockroachlabs.com/docs/v2.0/foreign-key) when it is the only column in that reference. [#19772](https://github.com/cockroachdb/cockroach/pull/19772)
- Fixed a panic involving the use of the `IN` operator and improperly typed subqueries. [#19858](https://github.com/cockroachdb/cockroach/pull/19858)
- Fixed a spurious panic about divergence of on-disk and in-memory state. [#19867](https://github.com/cockroachdb/cockroach/pull/19867)
- Fixed a bug allowing duplicate columns in primary indexes. [#20238](https://github.com/cockroachdb/cockroach/pull/20238)
- Fixed a bug with `NaN`s and `Infinity`s in `EXPLAIN` output. [#20233](https://github.com/cockroachdb/cockroach/pull/20233)
- Fixed a possible crash due to statements finishing execution after the client connection has been closed. [#20175](https://github.com/cockroachdb/cockroach/pull/20175)
- Fixed a correctness bug when using distributed SQL engine sorted merge joins. [#20090](https://github.com/cockroachdb/cockroach/pull/20090)
- Fixed a bug excluding some trace data from [`SHOW TRACE FOR <statement>`](https://www.cockroachlabs.com/docs/v2.0/show-trace). [#20081](https://github.com/cockroachdb/cockroach/pull/20081)
- Fixed a case in which ambiguous errors were treated as unambiguous and led to inappropriate retries. [#20073](https://github.com/cockroachdb/cockroach/pull/20073)
- Fixed a bug leading to incorrect results for some queries with `IN` constraints. [#20036](https://github.com/cockroachdb/cockroach/pull/20036)
- Fixed the encoding of indexes that use [`STORING`](https://www.cockroachlabs.com/docs/v2.0/create-index#store-columns) columns. [#20001](https://github.com/cockroachdb/cockroach/pull/20001)
- [`IMPORT`](https://www.cockroachlabs.com/docs/v2.0/import) checkpoints are now correctly cleaned up. [#20211](https://github.com/cockroachdb/cockroach/pull/20211)
- Fixed a bug that could cause system overload during cleanup of large transactions. [#19538](https://github.com/cockroachdb/cockroach/pull/19538)
- On macOS, the built-in SQL shell ([`cockroach sql`](https://www.cockroachlabs.com/docs/v2.0/use-the-built-in-sql-client)) once again properly supports window resizing. [#20148](https://github.com/cockroachdb/cockroach/pull/20148) [#20153](https://github.com/cockroachdb/cockroach/pull/20153)
- `PARTITION BY` multiple columns with window functions now works properly. [#20151](https://github.com/cockroachdb/cockroach/pull/20151)
- Deleting chains of two or more foreign key references is now possible. [#20050](https://github.com/cockroachdb/cockroach/pull/20050)
- Prometheus vars are now written outside the metrics lock. [#20194](https://github.com/cockroachdb/cockroach/pull/20194)

### Enterprise Edition Changes

- Enterprise [`BACKUP`s](https://www.cockroachlabs.com/docs/v2.0/backup) no longer automatically include the `system.users` and `system.descriptor` tables. [#19975](https://github.com/cockroachdb/cockroach/pull/19975)
- Added `AWS_ENDPOINT` and `AWS_REGION` parameters in S3 URIs to specify the AWS endpoint or region for [`BACKUP`](https://www.cockroachlabs.com/docs/v2.0/backup)/[`RESTORE`](https://www.cockroachlabs.com/docs/v2.0/restore); the endpoint can be any S3-compatible service (see the sketch after this list). [#19860](https://github.com/cockroachdb/cockroach/pull/19860)
- `RESTORE DATABASE` is now allowed only when the backup contains a whole database. [#20023](https://github.com/cockroachdb/cockroach/pull/20023)
- Fixed `RESTORE` being resumed with `skip_missing_foreign_keys` specified. [#20092](https://github.com/cockroachdb/cockroach/pull/20092)
- [`BACKUP`](https://www.cockroachlabs.com/docs/v2.0/backup)/[`RESTORE`](https://www.cockroachlabs.com/docs/v2.0/restore) jobs now support `201` and `204` return codes from HTTP storage. [#20027](https://github.com/cockroachdb/cockroach/pull/20027)
- [`BACKUP`](https://www.cockroachlabs.com/docs/v2.0/backup) now checks that all interleaved tables are included (as required by `RESTORE`). [#20206](https://github.com/cockroachdb/cockroach/pull/20206)
- Marked `revision_history` [`BACKUP`](https://www.cockroachlabs.com/docs/v2.0/backup)/[`RESTORE`](https://www.cockroachlabs.com/docs/v2.0/restore) as experimental. [#20164](https://github.com/cockroachdb/cockroach/pull/20164)
- [`nodelocal://`](https://www.cockroachlabs.com/docs/v2.0/import#import-file-urls) paths in `BACKUP`/`RESTORE` now default to relative within the "extern" subdirectory of the first store directory, configurable via the new `--external-io-dir` flag. [#19865](https://github.com/cockroachdb/cockroach/pull/19865)
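
A minimal sketch of the new S3 URI parameters; the database name, bucket, credentials, and endpoint are hypothetical placeholders:

```sql
BACKUP DATABASE bank
TO 's3://backup-bucket/2017-12-04?AWS_ACCESS_KEY_ID=<key>&AWS_SECRET_ACCESS_KEY=<secret>&AWS_ENDPOINT=https://s3-compatible.example.com';
```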

### Doc Updates

- In conjunction with beta-level support for the C# (.NET) Npgsql driver, added a tutorial on [building a C# app with CockroachDB](https://www.cockroachlabs.com/docs/v2.0/build-a-csharp-app-with-cockroachdb). [#2236](https://github.com/cockroachdb/docs/pull/2236)
- Improved Kubernetes guidance:
  - Added a tutorial on [orchestrating a secure CockroachDB cluster with Kubernetes](https://www.cockroachlabs.com/docs/v2.0/orchestrate-cockroachdb-with-kubernetes), improved the tutorial for [insecure orchestrations](https://www.cockroachlabs.com/docs/v2.0/orchestrate-cockroachdb-with-kubernetes-insecure), and added a [local cluster tutorial using `minikube`](https://www.cockroachlabs.com/docs/v2.0/orchestrate-a-local-cluster-with-kubernetes-insecure). [#2147](https://github.com/cockroachdb/docs/pull/2147)
  - Updated the StatefulSet configurations to support rolling upgrades, and added [initial documentation](https://github.com/cockroachdb/cockroach/tree/master/cloud/kubernetes#doing-a-rolling-upgrade-to-a-different-cockroachdb-version). [#19995](https://github.com/cockroachdb/cockroach/pull/19995)
- Added performance best practices for [`INSERT`](https://www.cockroachlabs.com/docs/v2.0/insert#performance-best-practices) and [`UPSERT`](https://www.cockroachlabs.com/docs/v2.0/upsert#considerations) statements. [#2199](https://github.com/cockroachdb/docs/pull/2199)
- Documented how to use the `timeseries.resolution_10s.storage_duration` cluster setting to [truncate timeseries data sooner than the default 30 days](https://www.cockroachlabs.com/docs/v2.0/operational-faqs#why-is-disk-usage-increasing-despite-lack-of-writes). [#2210](https://github.com/cockroachdb/docs/pull/2210)
- Clarified the treatment of `NULL` values in [`SELECT` statements with an `ORDER BY` clause](https://www.cockroachlabs.com/docs/v2.0/select-clause#sorting-and-limiting-query-results). [#2237](https://github.com/cockroachdb/docs/pull/2237)

### New RFCs

- [`SELECT FOR UPDATE`](https://github.com/cockroachdb/cockroach/blob/master/docs/RFCS/20171024_select_for_update.md) [#19577](https://github.com/cockroachdb/cockroach/pull/19577)
- [SQL Optimizer Statistics](https://github.com/cockroachdb/cockroach/blob/master/docs/RFCS/20170908_sql_optimizer_statistics.md) [#18399](https://github.com/cockroachdb/cockroach/pull/18399)
- [SCRUB Index and Physical Check Implementation](https://github.com/cockroachdb/cockroach/blob/master/docs/RFCS/20171120_scrub_index_and_physical_implementation.md) [#19327](https://github.com/cockroachdb/cockroach/pull/19327)

diff --git a/src/current/_includes/releases/v2.0/v1.2-alpha.20171211.md b/src/current/_includes/releases/v2.0/v1.2-alpha.20171211.md
deleted file mode 100644
index d3bef7a3995..00000000000
--- a/src/current/_includes/releases/v2.0/v1.2-alpha.20171211.md
+++ /dev/null
@@ -1,57 +0,0 @@

## {{ include.release }}

Release Date: {{ include.release_date | date: "%B %-d, %Y" }}

### General Changes

- Alpha and beta releases are now published as [Docker images under the name cockroachdb/cockroach-unstable](https://hub.docker.com/r/cockroachdb/cockroach-unstable/). [#20331](https://github.com/cockroachdb/cockroach/pull/20331)

### SQL Language Changes

- The protocol statement tag for [`CREATE TABLE ... AS ...`](https://www.cockroachlabs.com/docs/v2.0/create-table-as) is now [`SELECT`](https://www.cockroachlabs.com/docs/v2.0/select-clause), as in PostgreSQL. [#20268](https://github.com/cockroachdb/cockroach/pull/20268)
- OIDs can now be compared with inequality operators. [#20367](https://github.com/cockroachdb/cockroach/pull/20367)
- The [`CANCEL JOB`](https://www.cockroachlabs.com/docs/v2.0/cancel-job) statement now supports canceling [`IMPORT`](https://www.cockroachlabs.com/docs/v2.0/import) jobs. [#20343](https://github.com/cockroachdb/cockroach/pull/20343)
- [`PAUSE JOB`](https://www.cockroachlabs.com/docs/v2.0/pause-job)/[`RESUME JOB`](https://www.cockroachlabs.com/docs/v2.0/resume-job)/[`CANCEL JOB`](https://www.cockroachlabs.com/docs/v2.0/cancel-job) statements can now be used within SQL [transactions](https://www.cockroachlabs.com/docs/v2.0/transactions). [#20185](https://github.com/cockroachdb/cockroach/pull/20185)
- Added a cache for the internal `system.table_statistics` table. [#20212](https://github.com/cockroachdb/cockroach/pull/20212)
- The `intervalstyle` [session variable](https://www.cockroachlabs.com/docs/v2.0/set-vars) is now supported for PostgreSQL compatibility. [#20274](https://github.com/cockroachdb/cockroach/pull/20274)
- The [`SHOW [KV] TRACE`](https://www.cockroachlabs.com/docs/v2.0/show-trace) statement now properly extracts file/line number information when analyzing traces produced in debug mode. Also, the new `SHOW COMPACT [KV] TRACE` statement provides a more compact view of the same data. [#20093](https://github.com/cockroachdb/cockroach/pull/20093)
- Some queries using `IS NOT NULL` conditions are now better optimized. [#20436](https://github.com/cockroachdb/cockroach/pull/20436)
- [Views](https://www.cockroachlabs.com/docs/v2.0/views) now support `LIMIT` and `ORDER BY` (see the sketch after this list). [#20246](https://github.com/cockroachdb/cockroach/pull/20246)
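
A minimal sketch of a view using the newly supported clauses; the `events` table is hypothetical:

```sql
CREATE VIEW recent_events AS
    SELECT id, ts
    FROM events
    ORDER BY ts DESC
    LIMIT 10;
```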

### Command-Line Interface Changes

- Reduced temporary disk space usage for the `cockroach debug compact` command. [#20460](https://github.com/cockroachdb/cockroach/pull/20460)
- The [`cockroach node status`](https://www.cockroachlabs.com/docs/v2.0/view-node-details) and [`cockroach node ls`](https://www.cockroachlabs.com/docs/v2.0/view-node-details) commands now support a timeout. [#20308](https://github.com/cockroachdb/cockroach/pull/20308)

### Admin UI Changes

- The Admin UI now sets the `Last-Modified` header when serving assets to permit browser caching. This improves page load times, especially on slow connections. [#20429](https://github.com/cockroachdb/cockroach/pull/20429)

### Bug Fixes

- Removed the possibility of OOM errors during distributed [`IMPORT`](https://www.cockroachlabs.com/docs/v2.0/import) from CSV. [#20506](https://github.com/cockroachdb/cockroach/pull/20506)
- Fixed a crash triggered by some corner-case queries containing [`ORDER BY`](https://www.cockroachlabs.com/docs/v2.0/query-order). [#20489](https://github.com/cockroachdb/cockroach/pull/20489)
- Added missing distributed SQL flows to the exported `sql.distsql.flows.active` and `sql.distsql.flows.total` metrics and the "Active Flows for Distributed SQL Queries" Admin UI graph. [#20503](https://github.com/cockroachdb/cockroach/pull/20503)
- Fixed an issue with stale buffer data when using the binary format for [`ARRAY`](https://www.cockroachlabs.com/docs/v2.0/array) values. [#20461](https://github.com/cockroachdb/cockroach/pull/20461)
- The [`cockroach sql`](https://www.cockroachlabs.com/docs/v2.0/use-the-built-in-sql-client) shell now better reports the number of rows inserted by a [`CREATE TABLE ... AS ...`](https://www.cockroachlabs.com/docs/v2.0/create-table-as) statement. Note, however, that the results are still formatted incorrectly if the `CREATE TABLE ... AS ...` statement creates zero rows in the new table. [#20268](https://github.com/cockroachdb/cockroach/pull/20268)
- Self-referencing tables can now reference a non-primary index without manually adding an index on the referencing column. [#20325](https://github.com/cockroachdb/cockroach/pull/20325)
- Fixed an issue where spans for descending indexes were displaying incorrectly and updated `NOT NULL` tokens from `#` to `!NULL`. [#20318](https://github.com/cockroachdb/cockroach/pull/20318)
- Fixed `BACKUP` jobs to correctly resume in all conditions. [#20185](https://github.com/cockroachdb/cockroach/pull/20185)
- Fixed various race conditions with jobs. [#20185](https://github.com/cockroachdb/cockroach/pull/20185)
- It is no longer possible to use conflicting `AS OF SYSTEM TIME` clauses in different parts of a query. [#20267](https://github.com/cockroachdb/cockroach/pull/20267)
- Fixed a panic caused by dependency cycles with `cockroach dump`. [#20255](https://github.com/cockroachdb/cockroach/pull/20255)
- Prevented context cancellation during lease acquisition from leaking to coalesced requests. [#20424](https://github.com/cockroachdb/cockroach/pull/20424)

### Performance Improvements

- Improved handling of `IS NULL` conditions. [#20366](https://github.com/cockroachdb/cockroach/pull/20366)
- Improved p99 latencies for garbage collection of previous versions of a key, when there are many versions. [#20373](https://github.com/cockroachdb/cockroach/pull/20373)
- Smoothed out disk usage under very write-heavy workloads by syncing to disk more frequently. [#20352](https://github.com/cockroachdb/cockroach/pull/20352)
- Improved garbage collection of very large [transactions](https://www.cockroachlabs.com/docs/v2.0/transactions) and large volumes of abandoned write intents. [#20396](https://github.com/cockroachdb/cockroach/pull/20396)
- Improved table scans and seeks on interleaved parent tables by skipping interleaved children rows at the end of a scan. [#20235](https://github.com/cockroachdb/cockroach/pull/20235)
- Replaced the interval tree structure in the `TimestampCache` with an arena-backed concurrent skiplist. This reduces global locking and garbage collection pressure, improving average and tail latencies. [#20300](https://github.com/cockroachdb/cockroach/pull/20300)

### Doc Updates

- Added an [introduction to CockroachDB video](https://www.cockroachlabs.com/docs/v2.0/). [#2234](https://github.com/cockroachdb/docs/pull/2234)
- Clarified that we have tested the PostgreSQL-compatible drivers and ORMs featured in our documentation enough to claim **beta-level** support. This means that applications using advanced or obscure features of a driver or ORM may encounter incompatibilities. If you encounter problems, please [open an issue](https://github.com/cockroachdb/cockroach/issues/new) with details to help us make progress toward full support. [#2235](https://github.com/cockroachdb/docs/pull/2235)

diff --git a/src/current/_includes/releases/v2.0/v2.0-alpha.20171218.md b/src/current/_includes/releases/v2.0/v2.0-alpha.20171218.md
deleted file mode 100644
index 52d01146370..00000000000
--- a/src/current/_includes/releases/v2.0/v2.0-alpha.20171218.md
+++ /dev/null
@@ -1,26 +0,0 @@

## {{ include.release }}

Release Date: {{ include.release_date | date: "%B %-d, %Y" }}

### SQL Language Changes

- Added support for read-only [transactions](https://www.cockroachlabs.com/docs/v2.0/transactions) via PostgreSQL-compatible syntax (see the sketch after this list). [#20547](https://github.com/cockroachdb/cockroach/pull/20547)
  - `SET SESSION CHARACTERISTICS AS TRANSACTION READ ONLY/READ WRITE`
  - `SET TRANSACTION READ ONLY/READ WRITE`
  - `SET default_transaction_read_only`
  - `SET transaction_read_only`
- For compatibility with PostgreSQL, the return type of the `date_trunc(STRING, TIME)` [function](https://www.cockroachlabs.com/docs/v2.0/functions-and-operators) was changed from `TIME` to `INTERVAL`, and the return type of the `date_trunc(STRING, DATE)` function was changed from `DATE` to `TIMESTAMPTZ`. [#20467](https://github.com/cockroachdb/cockroach/pull/20467)
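
A minimal sketch of the read-only transaction syntax; the `accounts` table is hypothetical:

```sql
BEGIN;
SET TRANSACTION READ ONLY;
SELECT count(*) FROM accounts;  -- reads are allowed
-- Any write here would fail until SET TRANSACTION READ WRITE.
COMMIT;
```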

### Bug Fixes

- Fixed a bug preventing CockroachDB from starting when the filesystem generates a `lost+found` directory in the CockroachDB data directory. [#20565](https://github.com/cockroachdb/cockroach/pull/20565)
- Fixed the over-counting of memory usage by aggregations. [#20585](https://github.com/cockroachdb/cockroach/pull/20585)
- Fixed a panic when using the `date_trunc(STRING, TIMESTAMP)` or `date_trunc(STRING, DATE)` function in queries that run with the distributed execution engine. [#20467](https://github.com/cockroachdb/cockroach/pull/20467)
- Fixed a bug where the `date_trunc(STRING, TIMESTAMP)` function would return a `TIMESTAMPTZ` value. [#20467](https://github.com/cockroachdb/cockroach/pull/20467)
- Fixed a race condition that would result in some queries hanging after cancellation. [#20088](https://github.com/cockroachdb/cockroach/pull/20088)
- Fixed a bug allowing [privileges](https://www.cockroachlabs.com/docs/v2.0/privileges) to be granted to non-existent users. [#20438](https://github.com/cockroachdb/cockroach/pull/20438)

### Performance Improvements

- Queries that use inequalities between tuples (e.g., `(a,b,c) < (x,y,z)`) are now slightly better optimized. [#20484](https://github.com/cockroachdb/cockroach/pull/20484)
- `IS DISTINCT FROM` and `IS NOT DISTINCT FROM` clauses are now smarter about using available indexes. [#20346](https://github.com/cockroachdb/cockroach/pull/20346)

diff --git a/src/current/_includes/releases/v2.0/v2.0-alpha.20180116.md b/src/current/_includes/releases/v2.0/v2.0-alpha.20180116.md
deleted file mode 100644
index 6a5ff4ff30b..00000000000
--- a/src/current/_includes/releases/v2.0/v2.0-alpha.20180116.md
+++ /dev/null
@@ -1,124 +0,0 @@

## {{ include.release }}

Release Date: {{ include.release_date | date: "%B %-d, %Y" }}

{{site.data.alerts.callout_danger}}A bug that could trigger an assertion failure was discovered in this release. Bringing up a node too soon after the assertion fired could introduce consistency problems, so this release has been withdrawn.{{site.data.alerts.end}}

{% comment %}

### Backwards-Incompatible Changes

- Removed the obsolete `kv.gc.batch_size` [cluster setting](https://www.cockroachlabs.com/docs/v2.0/cluster-settings). [#21070](https://github.com/cockroachdb/cockroach/pull/21070)
- Removed the `COCKROACH_METRICS_SAMPLE_INTERVAL` environment variable. Users that relied on it should reduce the value of the `timeseries.resolution_10s.storage_duration` [cluster setting](https://www.cockroachlabs.com/docs/v2.0/cluster-settings) instead (see the sketch after this list). [#20810](https://github.com/cockroachdb/cockroach/pull/20810)
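
A sketch of the suggested replacement; `'168h'` (one week) is an example value, not a recommendation:

```sql
SET CLUSTER SETTING timeseries.resolution_10s.storage_duration = '168h';
```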

### General Changes

- CockroachDB now proactively rebalances data when doing so would improve the diversity of the localities in which a given range is located. [#19489](https://github.com/cockroachdb/cockroach/pull/19489)
- Clusters are now initialized with default `.meta` and `.liveness` replication zones with lower GC TTL configurations. [#17628](https://github.com/cockroachdb/cockroach/pull/17628)

### SQL Language Changes

- The new `SHOW CREATE SEQUENCE` statement shows the [`CREATE SEQUENCE`](https://www.cockroachlabs.com/docs/v2.0/create-sequence) statement that would create a carbon copy of the specified sequence. [#21208](https://github.com/cockroachdb/cockroach/pull/21208)
- The [`DROP COLUMN`](https://www.cockroachlabs.com/docs/v2.0/drop-column) statement now drops [`CHECK`](https://www.cockroachlabs.com/docs/v2.0/check) constraints. [#21203](https://github.com/cockroachdb/cockroach/pull/21203)
- The `pg_sequence_parameters()` built-in function is now supported. [#21069](https://github.com/cockroachdb/cockroach/pull/21069)
- `ON DELETE CASCADE` foreign key constraints are now fully supported and memory bounded. [#20064](https://github.com/cockroachdb/cockroach/pull/20064) [#20706](https://github.com/cockroachdb/cockroach/pull/20706)
- For improved troubleshooting, more complete and useful details are now reported to clients when SQL errors are encountered. [#19793](https://github.com/cockroachdb/cockroach/pull/19793)
- The new `SHOW SYNTAX` statement allows clients to analyze arbitrary SQL syntax server-side and retrieve either the (pretty-printed) syntax decomposition of the string or the details of the syntax error, if any. This statement is intended for use in the CockroachDB interactive SQL shell. [#19793](https://github.com/cockroachdb/cockroach/pull/19793)
- Enhanced type checking of subqueries in order to generalize subquery support. As a side effect, fixed a crash with subquery edge cases such as `SELECT (SELECT (1, 2)) IN (SELECT (1, 2))`. [#21076](https://github.com/cockroachdb/cockroach/pull/21076)
- Single-use common table expressions are now supported. [#20359](https://github.com/cockroachdb/cockroach/pull/20359)
- Statement sources with no output columns are now disallowed. [#20998](https://github.com/cockroachdb/cockroach/pull/20998)
- `WHERE` predicates that simplify to NULL no longer perform table scans. [#21067](https://github.com/cockroachdb/cockroach/pull/21067)
- The experimental `CREATE ROLE`, `DROP ROLE`, and `SHOW ROLES` statements are now supported. [#21020](https://github.com/cockroachdb/cockroach/pull/21020) [#20980](https://github.com/cockroachdb/cockroach/pull/20980)
- Improved the output of `EXPLAIN` to show the plan tree structure. [#20697](https://github.com/cockroachdb/cockroach/pull/20697)
- `OUTER` interleaved joins are now supported. [#20963](https://github.com/cockroachdb/cockroach/pull/20963)
- Added the `rolreplication` and `rolbypassrls` columns to the `pg_catalog.pg_roles` table. [#20397](https://github.com/cockroachdb/cockroach/pull/20397)
- [`ARRAY`](https://www.cockroachlabs.com/docs/v2.0/array) values can now be cast to their own type. [#19816](https://github.com/cockroachdb/cockroach/pull/19816)
- The `||` operator is now supported for `JSONB`. [#20689](https://github.com/cockroachdb/cockroach/pull/20689)
- The `CASCADE` option is now required to drop an index that is used by a `UNIQUE` constraint. [#20837](https://github.com/cockroachdb/cockroach/pull/20837)
- The `BOOL` type now matches PostgreSQL's list of accepted formats. [#20833](https://github.com/cockroachdb/cockroach/pull/20833)
- The `sql_safe_updates` session variable now defaults to `false` unless the shell is truly interactive (using `cockroach sql`, `-e` not specified, standard input not redirected) and `--unsafe-updates` is not specified. Previously, `sql_safe_updates` would always default to `true` unless `--unsafe-updates` was specified. [#20805](https://github.com/cockroachdb/cockroach/pull/20805)
- The `errexit` client-side option now defaults to `false` only if the shell is truly interactive, not merely when the input is not redirected as previously. [#20805](https://github.com/cockroachdb/cockroach/pull/20805)
- The `display_format` client-side option now defaults to `pretty` in every case where the output goes to a terminal, not only when the input is not redirected as previously. [#20805](https://github.com/cockroachdb/cockroach/pull/20805)
- The `check_syntax` and `smart_prompt` client-side options, together with the interactive line editor, are only enabled if the session is interactive and output goes to a terminal. [#20805](https://github.com/cockroachdb/cockroach/pull/20805)
- Table aliases are now permitted in `RETURNING` clauses. [#20808](https://github.com/cockroachdb/cockroach/pull/20808)
- Added the `SERIAL2`, `SERIAL4`, and `SERIAL8` aliases for the [`SERIAL`](https://www.cockroachlabs.com/docs/v2.0/serial) type. [#20776](https://github.com/cockroachdb/cockroach/pull/20776)
- NULL values are now supported in `COLLATE` expressions. [#20795](https://github.com/cockroachdb/cockroach/pull/20795)
- The new `crdb_internal.node_executable_version()` built-in function simplifies rolling upgrades. [#20292](https://github.com/cockroachdb/cockroach/pull/20292)
- The `json_pretty()`, `json_extract_path()`, `jsonb_extract_path()`, `json_object()`, and `asJSON()` built-in functions are now supported. [#20702](https://github.com/cockroachdb/cockroach/pull/20702) [#20520](https://github.com/cockroachdb/cockroach/pull/20520) [#21015](https://github.com/cockroachdb/cockroach/pull/21015) [#20234](https://github.com/cockroachdb/cockroach/pull/20234)
- The `DISTINCT ON` clause is now supported for `SELECT` statements (see the sketch after this list). [#20463](https://github.com/cockroachdb/cockroach/pull/20463)
- For compatibility with PostgreSQL and related tools:
  - Parsing of the `COMMENT ON` syntax is now allowed. [#21063](https://github.com/cockroachdb/cockroach/pull/21063)
  - The following built-in functions are now supported: `pg_catalog.pg_trigger()`, `pg_catalog.pg_rewrite()`, `pg_catalog.pg_operator()`, `pg_catalog.pg_user_mapping()`, `pg_catalog.foreign_data_wrapper()`, `pg_get_constraintdef()`, `inet_client_addr()`, `inet_client_port()`, `inet_server_addr()`, `inet_server_port()`. [#21065](https://github.com/cockroachdb/cockroach/pull/21065) [#20788](https://github.com/cockroachdb/cockroach/pull/20788)
  - Missing columns have been added to `information_schema.columns`, and the `pg_catalog.pg_user` virtual table has been added. [#20788](https://github.com/cockroachdb/cockroach/pull/20788)
  - A string cast to `regclass` is interpreted as a possibly qualified name like `db.name`. [#20788](https://github.com/cockroachdb/cockroach/pull/20788)
  - Rendered columns for built-in functions are now titled by the name of the built-in function. [#20820](https://github.com/cockroachdb/cockroach/pull/20820)
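
A minimal `DISTINCT ON` sketch that keeps the latest reading per sensor; the `readings` table and its columns are hypothetical:

```sql
SELECT DISTINCT ON (sensor_id) sensor_id, ts, value
FROM readings
ORDER BY sensor_id, ts DESC;
```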

### Command-Line Changes

- Client `cockroach` commands that use SQL (`cockroach sql`, `cockroach node ls`, etc.) now print a warning if the server is running an older version of CockroachDB than the client. Also, this and other warning messages are now clearly indicated with the "warning:" prefix. [#20935](https://github.com/cockroachdb/cockroach/pull/20935)
- Client-side syntax checking performed by [`cockroach sql`](https://www.cockroachlabs.com/docs/v2.0/use-the-built-in-sql-client) when the `check_syntax` option is enabled has been enhanced for forward compatibility with later CockroachDB versions. [#21119](https://github.com/cockroachdb/cockroach/pull/21119)
- The `?` [client-side command](https://www.cockroachlabs.com/docs/v2.0/use-the-built-in-sql-client#sql-shell-commands) of `cockroach sql` now prints out a description of each option. [#21119](https://github.com/cockroachdb/cockroach/pull/21119)
- The `--unsafe-updates` flag of `cockroach sql` was renamed to `--safe-updates`. The default behavior is unchanged: the previous flag defaulted to `false`; the new flag defaults to `true`. [#20935](https://github.com/cockroachdb/cockroach/pull/20935)
- The `cockroach sql` command no longer fails when the server is running a version of CockroachDB that does not support the `sql_safe_updates` session variable. [#20935](https://github.com/cockroachdb/cockroach/pull/20935)

### Admin UI Changes

- Added graphs of node liveness heartbeat latency, an important internal signal of health, to the **Distributed** dashboard. [#21002](https://github.com/cockroachdb/cockroach/pull/21002)
- **Capacity Used** is now shown as "-" instead of 100% when the UI cannot load the real data from the server. [#20824](https://github.com/cockroachdb/cockroach/pull/20824)
- Removed a redundant rendering of the GC pause time from the **CPU Time** graph. [#20802](https://github.com/cockroachdb/cockroach/pull/20802)
- The **Databases** page now reports table sizes that are better approximations to actual disk space usage. [#20627](https://github.com/cockroachdb/cockroach/pull/20627)
- Added a system table to allow operators to designate geographic coordinates for any locality. This is for use with upcoming cluster visualization functionality in the Admin UI. [#19652](https://github.com/cockroachdb/cockroach/pull/19652)

Bug Fixes

- -- Fixed the `debug compact` command to compact all sstables. [#21293](https://github.com/cockroachdb/cockroach/pull/21293) -- Fixed tuple equality to evaluate correctly in the presence of NULL elements. [#21230](https://github.com/cockroachdb/cockroach/pull/21230) -- Fixed a bug where the temporary directory was being wiped on failed CockroachDB restart, causing importing and DistSQL queries to fail. [#20854](https://github.com/cockroachdb/cockroach/pull/20854) -- The "JSON" column in the output of `EXPLAIN(DISTSQL)` is now properly hidden by default. It can be shown using `SELECT *, JSON FROM [EXPLAIN(DISTSQL) ...]`. [#21154](https://github.com/cockroachdb/cockroach/pull/21154) -- `EXPLAIN` queries with placeholders no longer panic. [#21168](https://github.com/cockroachdb/cockroach/pull/21168) -- The `--safe-updates` flag of `cockroach sql` can now be used effectively in non-interactive sessions. [#20935](https://github.com/cockroachdb/cockroach/pull/20935) -- Fixed a bug where non-matching interleaved rows were being inner-joined with their parent rows. [#20938](https://github.com/cockroachdb/cockroach/pull/20938) -- Fixed an issue where seemingly irrelevant error messages were being returned for certain `INSERT` statements. [#20841](https://github.com/cockroachdb/cockroach/pull/20841) -- Crash details are now properly copied to the log file even when a node was started with `--logtostderr` as well as in other circumstances when crash details could be lost previously. [#20839](https://github.com/cockroachdb/cockroach/pull/20839) -- It is no longer possible to log in as a non-existent user in insecure mode. [#20800](https://github.com/cockroachdb/cockroach/pull/20800) -- The `BIGINT` type alias is now correctly shown when using `SHOW CREATE TABLE`. [#20798](https://github.com/cockroachdb/cockroach/pull/20798) -- Fixed a scenario where a range that is too big to snapshot can lose availability even with a majority of nodes alive. [#20589](https://github.com/cockroachdb/cockroach/pull/20589) -- Fixed `BETWEEN SYMMETRIC`, which was incorrectly considered an alias for `BETWEEN`. Per the SQL99 specification, `BETWEEN SYMMETRIC` is like `BETWEEN`, except that its arguments are automatically swapped if they would specify an empty range. [#20747](https://github.com/cockroachdb/cockroach/pull/20747) -- Fixed a replica corruption that could occur if a process crashed in the middle of a range split. [#20704](https://github.com/cockroachdb/cockroach/pull/20704) -- Fixed an issue with the formatting of unicode values in string arrays. [#20657](https://github.com/cockroachdb/cockroach/pull/20657) -- Fixed detection and proper handling of certain variations of network partitions using server-side RPC keepalive in addition to client-side RPC keepalive. [#20707](https://github.com/cockroachdb/cockroach/pull/20707) -- Prevented RPC connections between nodes with incompatible versions. [#20587](https://github.com/cockroachdb/cockroach/pull/20587) -- Dangling intents are now eagerly cleaned up when `AmbiguousResultErrors` are seen. [#20628](https://github.com/cockroachdb/cockroach/pull/20628) -- Fixed the return type signature of the JSON `#>>` operator and `array_positions()` built-in function. [#20524](https://github.com/cockroachdb/cockroach/pull/20524) -- Fixed an issue where escaped characters like `A` and `\` in `LIKE`/`ILIKE` patterns were not handled properly. 
[#20600](https://github.com/cockroachdb/cockroach/pull/20600) -- Fixed an issue with `(NOT) (I)LIKE` pattern matching on `_...%` and `%..._` returning incorrect results. [#20600](https://github.com/cockroachdb/cockroach/pull/20600) -- Fixed a small spelling bug that caused a type specified as `DOUBLE PRECISION` to be erroneously displayed as a float. [#20727](https://github.com/cockroachdb/cockroach/pull/20727) -- Fixed a crash caused by `NULL` collated strings. [#20637](https://github.com/cockroachdb/cockroach/pull/20637) - -
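A minimal illustration of the corrected `BETWEEN SYMMETRIC` semantics; the values are arbitrary:

```sql
-- BETWEEN requires low <= high; a reversed range matches nothing.
SELECT 5 BETWEEN 10 AND 1;           -- false
-- BETWEEN SYMMETRIC swaps the bounds first, so this matches.
SELECT 5 BETWEEN SYMMETRIC 10 AND 1; -- true
```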

Performance Improvements

- -- Improved the efficiency of scans with joins and certain complex `WHERE` clauses containing tuple equality. [#21288](https://github.com/cockroachdb/cockroach/pull/21288) -- Improved the efficiency of scans for certain `WHERE` clauses. [#21217](https://github.com/cockroachdb/cockroach/pull/21217) -- Reduced per-row overhead in DistSQL query execution. [#21251](https://github.com/cockroachdb/cockroach/pull/21251) -- Added support for distributed execution of [`UNION`](https://www.cockroachlabs.com/docs/v2.0/set-operations#union-combine-two-queries) queries. [#21175](https://github.com/cockroachdb/cockroach/pull/21175) -- Improved performance for aggregation and distinct operations by arena-allocating "bucket" storage. [#21160](https://github.com/cockroachdb/cockroach/pull/21160) -- Distributed execution of `UNION ALL` queries is now supported. [#20742](https://github.com/cockroachdb/cockroach/pull/20742) -- Reduced the fixed overhead of commands sent through Raft by 40% by only sending lease sequence numbers instead of sending the entire lease structure. [#20953](https://github.com/cockroachdb/cockroach/pull/20953) -- When tables are dropped, the space is now reclaimed in a more timely fashion. [#20607](https://github.com/cockroachdb/cockroach/pull/20607) -- Increased the speed of `EXCEPT` and merge joins by avoiding an unnecessary allocation. [#20759](https://github.com/cockroachdb/cockroach/pull/20759) -- Improved rebalancing to make thrashing back and forth between nodes much less likely, including when localities have very different numbers of nodes. [#20709](https://github.com/cockroachdb/cockroach/pull/20709) -- Improved performance of `DISTINCT` queries by avoiding an unnecessary allocation. [#20755](https://github.com/cockroachdb/cockroach/pull/20755) [#20750](https://github.com/cockroachdb/cockroach/pull/20750) -- Significantly improved the efficiency of `DROP TABLE` and `TRUNCATE`. [#20601](https://github.com/cockroachdb/cockroach/pull/20601) -- Improved performance of low-level row manipulation routines. [#20688](https://github.com/cockroachdb/cockroach/pull/20688) -- Raft followers now write to their disks in parallel with the leader. [#19229](https://github.com/cockroachdb/cockroach/pull/19229) -- Significantly reduced the overhead of SQL memory accounting. [#20590](https://github.com/cockroachdb/cockroach/pull/20590) -- Equality joins on the entire interleave prefix between parent and (not necessarily direct) child interleaved tables are now faster. [#19853](https://github.com/cockroachdb/cockroach/pull/19853) - -

Doc Updates

- -- Added a tutorial on using our Kubernetes-orchestrated AWS CloudFormation template for easy deployment and testing of CockroachDB. [#2356](https://github.com/cockroachdb/docs/pull/2356) -- Added docs on the [`TIME`](https://www.cockroachlabs.com/docs/v2.0/time) data type. [#2336](https://github.com/cockroachdb/docs/pull/2336) -- Added guidance on [reducing or disabling the storage of timeseries data](https://www.cockroachlabs.com/docs/v2.0/operational-faqs#can-i-reduce-or-disable-the-storage-of-timeseries-data-new-in-v2-0). [#2361](https://github.com/cockroachdb/docs/pull/2361) -- Added docs on the [`CREATE SEQUENCE`](https://www.cockroachlabs.com/docs/v2.0/create-sequence), [`ALTER SEQUENCE`](https://www.cockroachlabs.com/docs/v2.0/alter-sequence), and [`DROP SEQUENCE`](https://www.cockroachlabs.com/docs/v2.0/drop-sequence) statements. [#2292](https://github.com/cockroachdb/docs/pull/2292) -- Improved the font and coloring of code samples. [#2323](https://github.com/cockroachdb/docs/pull/2323) -{% endcomment %} diff --git a/src/current/_includes/releases/v2.0/v2.0-alpha.20180122.md b/src/current/_includes/releases/v2.0/v2.0-alpha.20180122.md deleted file mode 100644 index c67e61f4c2e..00000000000 --- a/src/current/_includes/releases/v2.0/v2.0-alpha.20180122.md +++ /dev/null @@ -1,290 +0,0 @@ -

{{ include.release }}

- -Release Date: {{ include.release_date | date: "%B %-d, %Y" }} - -{{site.data.alerts.callout_danger}}A bug that could trigger range splits in a tight loop was discovered in this release, so this release has been withdrawn.{{site.data.alerts.end}} - -

Backwards-Incompatible Changes

- -- Removed the obsolete `kv.gc.batch_size` [cluster setting](https://www.cockroachlabs.com/docs/v2.0/cluster-settings). [#21070](https://github.com/cockroachdb/cockroach/pull/21070) -- Removed the `COCKROACH_METRICS_SAMPLE_INTERVAL` environment variable. Users who relied on it should reduce the value of the `timeseries.resolution_10s.storage_duration` [cluster setting](https://www.cockroachlabs.com/docs/v2.0/cluster-settings) instead, as sketched below. [#20810](https://github.com/cockroachdb/cockroach/pull/20810) - -
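A hedged sketch of the replacement workflow; the `30h` retention value is an arbitrary example, not a recommendation:

```sql
-- Replaces the removed COCKROACH_METRICS_SAMPLE_INTERVAL environment variable.
SET CLUSTER SETTING timeseries.resolution_10s.storage_duration = '30h';
```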

General Changes

- -- CockroachDB now proactively rebalances data when doing so improves the diversity of the localities that a given range's replicas are located on. [#19489](https://github.com/cockroachdb/cockroach/pull/19489) -- Clusters are now initialized with default `.meta` and `.liveness` replication zones with lower GC TTL configurations. [#17628](https://github.com/cockroachdb/cockroach/pull/17628) -- CockroachDB now uses gRPC version 1.9.1. [#21398] - -

Build Changes

- -- The build system now enforces a minimum version of Go, rather than enforcing a specific version of Go. Since the Go 1.x series has strict backward-compatibility guarantees, the old rule was unnecessarily restrictive. [#21426] - -

SQL Language Changes

- -- The new `SHOW CREATE SEQUENCE` statement shows the [`CREATE SEQUENCE`](https://www.cockroachlabs.com/docs/v2.0/create-sequence) statement that would create a carbon copy of the specified sequence. [#21208](https://github.com/cockroachdb/cockroach/pull/21208) -- The [`DROP COLUMN`](https://www.cockroachlabs.com/docs/v2.0/drop-column) statement now drops [`CHECK`](https://www.cockroachlabs.com/docs/v2.0/check) constraints. [#21203](https://github.com/cockroachdb/cockroach/pull/21203) -- The `pg_sequence_parameters()` built-in function is now supported. [#21069](https://github.com/cockroachdb/cockroach/pull/21069) -- `ON DELETE CASCADE` foreign key constraints are now fully supported and memory bounded. [#20064](https://github.com/cockroachdb/cockroach/pull/20064) [#20706](https://github.com/cockroachdb/cockroach/pull/20706) -- `ON UPDATE CASCADE` foreign key constraints are now fully supported. [#21329] -- For improved troubleshooting, more complete and useful details are now reported to clients when SQL errors are encountered. [#19793](https://github.com/cockroachdb/cockroach/pull/19793) -- The new `SHOW SYNTAX` statement allows clients to analyze arbitrary SQL syntax server-side and retrieve either the (pretty-printed) syntax decomposition of the string or the details of the syntax error, if any. This statement is intended for use in the CockroachDB interactive SQL shell (see the sketch after this list). [#19793](https://github.com/cockroachdb/cockroach/pull/19793) -- Enhanced type checking of subqueries in order to generalize subquery support. As a side effect, fixed a crash with subquery edge cases such as `SELECT (SELECT (1, 2)) IN (SELECT (1, 2))`. [#21076](https://github.com/cockroachdb/cockroach/pull/21076) -- Single-use common table expressions are now supported. [#20359](https://github.com/cockroachdb/cockroach/pull/20359) -- Statement sources with no output columns are now disallowed. [#20998](https://github.com/cockroachdb/cockroach/pull/20998) -- `WHERE` predicates that simplify to `NULL` no longer perform table scans. [#21067](https://github.com/cockroachdb/cockroach/pull/21067) -- The experimental `CREATE ROLE`, `DROP ROLE`, `SHOW ROLES`, `GRANT <role>`, and `REVOKE <role>` statements are now supported as part of [role-based access control](https://github.com/cockroachdb/cockroach/blob/master/docs/RFCS/20171220_sql_role_based_access_control.md). [#21020](https://github.com/cockroachdb/cockroach/pull/21020) [#20980](https://github.com/cockroachdb/cockroach/pull/20980) [#21341] -- Improved the output of `EXPLAIN` to show the plan tree structure. [#20697](https://github.com/cockroachdb/cockroach/pull/20697) -- `OUTER` interleaved joins are now supported. [#20963](https://github.com/cockroachdb/cockroach/pull/20963) -- Added the `rolreplication` and `rolbypassrls` columns to the `pg_catalog.pg_roles` table. [#20397](https://github.com/cockroachdb/cockroach/pull/20397) -- [`ARRAY`](https://www.cockroachlabs.com/docs/v2.0/array) values can now be cast to their own type. [#19816](https://github.com/cockroachdb/cockroach/pull/19816) -- The `||` operator is now supported for `JSONB`. [#20689](https://github.com/cockroachdb/cockroach/pull/20689) -- The `CASCADE` option is now required to drop an index that is used by a `UNIQUE` constraint. [#20837](https://github.com/cockroachdb/cockroach/pull/20837) -- The `BOOL` type now matches PostgreSQL's list of accepted formats. 
[#20833](https://github.com/cockroachdb/cockroach/pull/20833) -- The `sql_safe_updates` session variable now defaults to `false` unless the shell is truly interactive (using `cockroach sql`, `-e` not specified, standard input not redirected) and `--unsafe-updates` is not specified. Previously, `sql_safe_updates` would always default to `true` unless `--unsafe-updates` was specified. [#20805](https://github.com/cockroachdb/cockroach/pull/20805) -- The `errexit` client-side option now defaults to `false` only if the shell is truly interactive, not only when the input is not redirected as previously. [#20805](https://github.com/cockroachdb/cockroach/pull/20805) -- The `display_format` client-side option now defaults to `pretty` in every case where the output goes to a terminal, not only when the input is not redirected as previously. [#20805](https://github.com/cockroachdb/cockroach/pull/20805) -- The `check_syntax` and `smart_prompt` client-side options, together with the interactive line editor, are only enabled if the session is interactive and output goes to a terminal. [#20805](https://github.com/cockroachdb/cockroach/pull/20805) -- Table aliases are now permitted in `RETURNING` clauses. [#20808](https://github.com/cockroachdb/cockroach/pull/20808) -- Added the `SERIAL2`, `SERIAL4`, and `SERIAL8` aliases for the [`SERIAL`](https://www.cockroachlabs.com/docs/v2.0/serial) type. [#20776](https://github.com/cockroachdb/cockroach/pull/20776) -- `NULL` values are now supported in `COLLATE` expressions. [#20795](https://github.com/cockroachdb/cockroach/pull/20795) -- The new `crdb_internal.node_executable_version()` built-in function simplifies rolling upgrades. [#20292](https://github.com/cockroachdb/cockroach/pull/20292) -- The `json_pretty()`, `json_extract_path()`, `jsonb_extract_path()`, `json_object()`, `asJSON()`, `jsonb_set()`, `json_build_object()`, `json_strip_nulls()`, and `json_each{_text}()` built-in functions are now supported. [#20702](https://github.com/cockroachdb/cockroach/pull/20702) [#20520](https://github.com/cockroachdb/cockroach/pull/20520) [#21015](https://github.com/cockroachdb/cockroach/pull/21015) [#20234](https://github.com/cockroachdb/cockroach/pull/20234) [#21010] [#21019] [#21335] [#21044] -- The `DISTINCT ON` clause is now supported for `SELECT` statements. [#20463](https://github.com/cockroachdb/cockroach/pull/20463) -- The output of [`EXPLAIN`](https://www.cockroachlabs.com/docs/v2.0/explain) is now enhanced when applied to statements containing scalar sub-queries. [#21305] -- The `array_agg()` built-in function is now supported in distributed queries. [#21475] -- Removed the limit on transaction keys. There was formerly a limit of 100,000 keys. [#21078] -- Improved the error message when a column reference is ambiguous. [#21361] -- The new `SHOW EXPERIMENTAL_REPLICA TRACE` statement executes a query and returns which nodes served reads and writes. [#21349] -- Multiplication between [`INTERVAL`](https://www.cockroachlabs.com/docs/v2.0/interval) and [`FLOAT`](https://www.cockroachlabs.com/docs/v2.0/float) values, and between `INTERVAL` and [`DECIMAL`](https://www.cockroachlabs.com/docs/v2.0/decimal) values, is now fully supported. [#21292] -- Reduced the size of entries stored in the `system.rangelog` table by not storing empty JSON fields. [#21318] -- The `ALTER TABLE SCATTER` statement now randomizes the locations of the leases for all ranges in the referenced table or index. 
[#21431] -- When using HTTPS storage for [`IMPORT`](https://www.cockroachlabs.com/docs/v2.0/import), a custom CA for HTTPS storage providers can now be specified via the `cloudstorage.http.custom_ca` cluster setting. This is also used when accessing custom S3 export storage endpoints. [#21358] [#21404] -- Storage of timeseries data within the cluster can be disabled by setting the new `timeseries.storage.enabled` cluster setting to `false`. This is recommended only if you exclusively use a third-party tool such as Prometheus for timeseries monitoring. See this [FAQ](https://www.cockroachlabs.com/docs/v2.0/operational-faqs#can-i-reduce-or-disable-the-storage-of-timeseries-data-new-in-v2-0) for more details. [#21314] -- For compatibility with PostgreSQL and related tools: - - Parsing of the `COMMENT ON` syntax is now allowed. [#21063](https://github.com/cockroachdb/cockroach/pull/21063) - - The following built-in functions are now supported: `pg_catalog.pg_trigger()`, `pg_catalog.pg_rewrite()`, `pg_catalog.pg_operator()`, `pg_catalog.pg_user_mapping()`, `pg_catalog.foreign_data_wrapper()`, `pg_get_constraintdef()`, `inet_client_addr()`, `inet_client_port()`, `inet_server_addr()`, `inet_server_port()`. [#21065](https://github.com/cockroachdb/cockroach/pull/21065) [#20788](https://github.com/cockroachdb/cockroach/pull/20788) - - Missing columns have been added to `information_schema.columns`, and the `pg_catalog.pg_user()` virtual table has been added. [#20788](https://github.com/cockroachdb/cockroach/pull/20788) - - A string cast to `regclass` is interpreted as a possibly qualified name like `db.name`. [#20788](https://github.com/cockroachdb/cockroach/pull/20788) - - Rendered columns for built-in functions are now titled by the name of the built-in function. [#20820](https://github.com/cockroachdb/cockroach/pull/20820) - -
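For instance, the new `SHOW SYNTAX` statement can be exercised as below; the comments describe the expected behavior and are illustrative, not authoritative output:

```sql
SHOW SYNTAX 'SELECT 1';       -- returns the statement's syntax decomposition
SHOW SYNTAX 'SELECT 1 FROM';  -- reports the syntax error and its position
```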

Command-Line Changes

- -- Client `cockroach` commands that use SQL (`cockroach sql`, `cockroach node ls`, etc.) now print a warning if the server is running an older version of CockroachDB than the client. Also, this and other warning messages are now clearly indicated with the "warning:" prefix. [#20935](https://github.com/cockroachdb/cockroach/pull/20935) -- Client-side syntax checking performed by [`cockroach sql`](https://www.cockroachlabs.com/docs/v2.0/use-the-built-in-sql-client) when the `check_syntax` option is enabled has been enhanced for forward-compatibility with later CockroachDB versions. [#21119](https://github.com/cockroachdb/cockroach/pull/21119) -- The `?` [client-side command](https://www.cockroachlabs.com/docs/v2.0/use-the-built-in-sql-client#sql-shell-commands) of `cockroach sql` now prints out a description of each option. [#21119](https://github.com/cockroachdb/cockroach/pull/21119) -- The `--unsafe-updates` flag of `cockroach sql` was renamed to `--safe-updates`. The default behavior is unchanged: the previous flag defaulted to `false`; the new flag defaults to `true`. [#20935](https://github.com/cockroachdb/cockroach/pull/20935) -- The `cockroach sql` command no longer fails when the server is running a version of CockroachDB that does not support the `sql_safe_updates` session variable (see the sketch after this list). [#20935](https://github.com/cockroachdb/cockroach/pull/20935) - -
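The same protection can also be toggled at the SQL level inside a session; a minimal sketch, where the table name `t` is hypothetical:

```sql
SET sql_safe_updates = true;
DELETE FROM t;  -- rejected: DELETE without a WHERE clause is "unsafe"
SET sql_safe_updates = false;
DELETE FROM t;  -- now allowed
```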

Admin UI Changes

- -- Added graphs of node liveness heartbeat latency, an important internal signal of health, to the **Distributed** dashboard. [#21002](https://github.com/cockroachdb/cockroach/pull/21002) -- **Capacity Used** is now shown as "-" instead of 100% when the UI cannot load the real data from the server. [#20824](https://github.com/cockroachdb/cockroach/pull/20824) -- Removed a redundant rendering of the GC pause time from the **CPU Time** graph. [#20802](https://github.com/cockroachdb/cockroach/pull/20802) -- The **Databases** page now reports table sizes that are better approximations to actual disk space usage. [#20627](https://github.com/cockroachdb/cockroach/pull/20627) -- Added a system table to allow operators to designate geographic coordinates for any locality. This is for use with upcoming cluster visualization functionality in the Admin UI. [#19652](https://github.com/cockroachdb/cockroach/pull/19652) -- When a new version of CockroachDB is available, the Admin UI now links you to the documentation explaining how to upgrade your cluster. [#19718] -- Job descriptions are now expandable. [#21333] - -

Bug Fixes

- -- Fixed the `debug compact` command to compact all sstables. [#21293](https://github.com/cockroachdb/cockroach/pull/21293) -- Fixed tuple equality to evaluate correctly in the presence of `NULL` elements. [#21230](https://github.com/cockroachdb/cockroach/pull/21230) -- Fixed a bug where the temporary directory was being wiped on failed CockroachDB restart, causing importing and DistSQL queries to fail. [#20854](https://github.com/cockroachdb/cockroach/pull/20854) -- The "JSON" column in the output of `EXPLAIN(DISTSQL)` is now properly hidden by default. It can be shown using `SELECT *, JSON FROM [EXPLAIN(DISTSQL) ...]`. [#21154](https://github.com/cockroachdb/cockroach/pull/21154) -- `EXPLAIN` queries with placeholders no longer panic. [#21168](https://github.com/cockroachdb/cockroach/pull/21168) -- The `--safe-updates` flag of `cockroach sql` can now be used effectively in non-interactive sessions. [#20935](https://github.com/cockroachdb/cockroach/pull/20935) -- Fixed a bug where non-matching interleaved rows were being inner-joined with their parent rows. [#20938](https://github.com/cockroachdb/cockroach/pull/20938) -- Fixed an issue where seemingly irrelevant error messages were being returned for certain `INSERT` statements. [#20841](https://github.com/cockroachdb/cockroach/pull/20841) -- Crash details are now properly copied to the log file when a node is started with `--logtostderr`, as well as in other circumstances where crash details could previously be lost. [#20839](https://github.com/cockroachdb/cockroach/pull/20839) -- It is no longer possible to log in as a non-existent user in insecure mode. [#20800](https://github.com/cockroachdb/cockroach/pull/20800) -- The `BIGINT` type alias is now correctly shown when using `SHOW CREATE TABLE`. [#20798](https://github.com/cockroachdb/cockroach/pull/20798) -- Fixed a scenario where a range that is too big to snapshot could lose availability even with a majority of nodes alive. [#20589](https://github.com/cockroachdb/cockroach/pull/20589) -- Fixed `BETWEEN SYMMETRIC`, which was incorrectly considered an alias for `BETWEEN`. Per the SQL99 specification, `BETWEEN SYMMETRIC` is like `BETWEEN`, except that its arguments are automatically swapped if they would specify an empty range. [#20747](https://github.com/cockroachdb/cockroach/pull/20747) -- Fixed a replica corruption that could occur if a process crashed in the middle of a range split. [#20704](https://github.com/cockroachdb/cockroach/pull/20704) -- Fixed an issue with the formatting of Unicode values in string arrays. [#20657](https://github.com/cockroachdb/cockroach/pull/20657) -- Fixed detection and proper handling of certain variations of network partitions using server-side RPC keepalive in addition to client-side RPC keepalive. [#20707](https://github.com/cockroachdb/cockroach/pull/20707) -- Prevented RPC connections between nodes with incompatible versions. [#20587](https://github.com/cockroachdb/cockroach/pull/20587) -- Dangling intents are now eagerly cleaned up when `AmbiguousResultErrors` are seen. [#20628](https://github.com/cockroachdb/cockroach/pull/20628) -- Fixed the return type signature of the JSON `#>>` operator and `array_positions()` built-in function. [#20524](https://github.com/cockroachdb/cockroach/pull/20524) -- Fixed an issue where escaped characters like `\A` and `\\` in `LIKE`/`ILIKE` patterns were not handled properly. 
[#20600](https://github.com/cockroachdb/cockroach/pull/20600) -- Fixed an issue with `(NOT) (I)LIKE` pattern matching on `_...%` and `%..._` returning incorrect results. [#20600](https://github.com/cockroachdb/cockroach/pull/20600) -- Fixed a small spelling bug that caused a type specified as `DOUBLE PRECISION` to be erroneously displayed as a float. [#20727](https://github.com/cockroachdb/cockroach/pull/20727) -- Fixed a crash caused by `NULL` collated strings. [#20637](https://github.com/cockroachdb/cockroach/pull/20637) -- Fixed a problem that could cause spurious garbage collection activity, in particular after dropping a table. [#21407] -- Fixed incorrect logic in lease rebalancing that prevented leases from being transferred. [#21430] -- `INSERT`/`UPDATE`/`DELETE ... RETURNING` statements used with the "statement data source" syntax (e.g., `SELECT * FROM [INSERT INTO ... RETURNING ...]`) no longer prematurely commit the transaction, which previously caused errors for the higher-level query. [#20847] -- Fixed a crash caused by a column being backfilled in an index constraint. [#21308] -- Fixed a bug in which ranges could get stuck if the uncommitted Raft log grew too large. [#21356] -- The `setval()` built-in function no longer lets you set a value outside the `MAXVALUE`/`MINVALUE` of a [`SEQUENCE`](https://www.cockroachlabs.com/docs/v2.0/create-sequence) (see the sketch below). [#20973] - -
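A minimal sketch of the corrected `setval()` bounds checking; the sequence name and bounds are hypothetical:

```sql
CREATE SEQUENCE counter MINVALUE 1 MAXVALUE 100;
SELECT setval('counter', 50);    -- OK: within bounds
SELECT setval('counter', 1000);  -- now an error: exceeds MAXVALUE
```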

Performance Improvements

- -- Improved the efficiency of scans with joins and certain complex `WHERE` clauses containing tuple equality. [#21288](https://github.com/cockroachdb/cockroach/pull/21288) -- Improved the efficiency of scans for certain `WHERE` clauses. [#21217](https://github.com/cockroachdb/cockroach/pull/21217) -- Reduced per-row overhead in DistSQL query execution. [#21251](https://github.com/cockroachdb/cockroach/pull/21251) -- Added support for distributed execution of [`UNION`](https://www.cockroachlabs.com/docs/v2.0/selection-queries#union-combine-two-queries) queries. [#21175](https://github.com/cockroachdb/cockroach/pull/21175) -- Improved performance for aggregation and distinct operations by arena-allocating "bucket" storage. [#21160](https://github.com/cockroachdb/cockroach/pull/21160) -- Distributed execution of `UNION ALL` queries is now supported. [#20742](https://github.com/cockroachdb/cockroach/pull/20742) -- Reduced the fixed overhead of commands sent through Raft by 40% by only sending lease sequence numbers instead of sending the entire lease structure. [#20953](https://github.com/cockroachdb/cockroach/pull/20953) -- When tables are dropped, the space is now reclaimed in a more timely fashion. [#20607](https://github.com/cockroachdb/cockroach/pull/20607) -- Increased the speed of `EXCEPT` and merge joins by avoiding an unnecessary allocation. [#20759](https://github.com/cockroachdb/cockroach/pull/20759) -- Improved rebalancing to make thrashing back and forth between nodes much less likely, including when localities have very different numbers of nodes. [#20709](https://github.com/cockroachdb/cockroach/pull/20709) -- Improved performance of `DISTINCT` queries by avoiding an unnecessary allocation. [#20755](https://github.com/cockroachdb/cockroach/pull/20755) [#20750](https://github.com/cockroachdb/cockroach/pull/20750) -- Significantly improved the efficiency of `DROP TABLE` and `TRUNCATE`. [#20601](https://github.com/cockroachdb/cockroach/pull/20601) -- Improved performance of low-level row manipulation routines. [#20688](https://github.com/cockroachdb/cockroach/pull/20688) -- Raft followers now write to their disks in parallel with the leader. [#19229](https://github.com/cockroachdb/cockroach/pull/19229) -- Significantly reduced the overhead of SQL memory accounting. [#20590](https://github.com/cockroachdb/cockroach/pull/20590) -- Equality joins on the entire interleave prefix between parent and (not necessarily direct) child interleaved tables are now faster. [#19853](https://github.com/cockroachdb/cockroach/pull/19853) -- Improved the performance of scans that need to look at non-indexed columns. [#21459] -- Improved the performance of all scans that encounter a large number of versions per key, and improved low-level reverse scan throughput by almost 4x when there are few versions per key. [#21438] -- Queries on tables with many columns are now more efficient. [#21450] -- Queries that only need to read part of a table with many columns are now more efficient. [#21450] -- Improved low-level scan performance by 15-20% by disabling redundant checksums. [#21395] -- Re-implemented low-level scan operations in C++, doubling performance for scans of contiguous keys/rows. [#21395] -- Reduced the occurrence of ambiguous errors when a node is down. [#21376] -- Sped up DistSQL query execution by "fusing" processors executing on the same node together. [#21254] - -

Enterprise Edition Changes

- -- [Incremental backups](https://www.cockroachlabs.com/docs/v2.0/backup#incremental-backups) of a database after a table or index has been added are now supported (see the sketch after this list). [#21170] -- When using HTTPS storage for [`BACKUP`](https://www.cockroachlabs.com/docs/v2.0/backup) or [`RESTORE`](https://www.cockroachlabs.com/docs/v2.0/restore), a custom CA for HTTPS storage providers can now be specified via the `cloudstorage.http.custom_ca` cluster setting. This is also used when accessing custom S3 export storage endpoints. [#21358] [#21404] -- Bulk writes are now synced to disk periodically to ensure more predictable performance. [#20449] - -
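An incremental backup taken after adding a table might look like the following sketch; the database name and bucket URLs are placeholders:

```sql
BACKUP DATABASE bank TO 'gs://acme-backups/full';
CREATE TABLE bank.branches (id INT PRIMARY KEY, city STRING);
-- The new table is picked up by the incremental backup.
BACKUP DATABASE bank TO 'gs://acme-backups/incr'
    INCREMENTAL FROM 'gs://acme-backups/full';
```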

Doc Updates

- -- Added [best practices for optimizing SQL performance](https://www.cockroachlabs.com/docs/v2.0/performance-best-practices-overview) in CockroachDB. [#2243](https://github.com/cockroachdb/docs/pull/2243) -- Added more detailed [clock synchronization guidance per cloud provider](https://www.cockroachlabs.com/docs/v2.0/recommended-production-settings#clock-synchronization). [#2295](https://github.com/cockroachdb/docs/pull/2295) -- Added a tutorial on using our Kubernetes-orchestrated AWS CloudFormation template for easy deployment and testing of CockroachDB. [#2356](https://github.com/cockroachdb/docs/pull/2356) -- Added docs on the [`TIME`](https://www.cockroachlabs.com/docs/v2.0/time) data type. [#2336](https://github.com/cockroachdb/docs/pull/2336) -- Added guidance on [reducing or disabling the storage of timeseries data](https://www.cockroachlabs.com/docs/v2.0/operational-faqs#can-i-reduce-or-disable-the-storage-of-timeseries-data-new-in-v2-0). [#2361](https://github.com/cockroachdb/docs/pull/2361) -- Added docs on the [`CREATE SEQUENCE`](https://www.cockroachlabs.com/docs/v2.0/create-sequence), [`ALTER SEQUENCE`](https://www.cockroachlabs.com/docs/v2.0/alter-sequence), and [`DROP SEQUENCE`](https://www.cockroachlabs.com/docs/v2.0/drop-sequence) statements. [#2292](https://github.com/cockroachdb/docs/pull/2292) -- Various improvements to the docs on the [`IMPORT`](https://www.cockroachlabs.com/docs/v2.0/import), [`BACKUP`](https://www.cockroachlabs.com/docs/v2.0/backup), and [`RESTORE`](https://www.cockroachlabs.com/docs/v2.0/restore) statements. [#2340](https://github.com/cockroachdb/docs/pull/2340) -- Improved the styling of code samples and page tocs. [#2323](https://github.com/cockroachdb/docs/pull/2323) [#2371](https://github.com/cockroachdb/docs/pull/2371) - -
- -

Contributors

- -This release includes 111 merged PRs by 33 authors. We would like to thank the following contributors from the CockroachDB community: - -- Yang Yuting -- Dmitry Saveliev -- Jincheng Li -- Mohamed Elqdusy -- 何羿宏 -- louishust - -
- -[#19718]: https://github.com/cockroachdb/cockroach/pull/19718 -[#20449]: https://github.com/cockroachdb/cockroach/pull/20449 -[#20632]: https://github.com/cockroachdb/cockroach/pull/20632 -[#20645]: https://github.com/cockroachdb/cockroach/pull/20645 -[#20710]: https://github.com/cockroachdb/cockroach/pull/20710 -[#20721]: https://github.com/cockroachdb/cockroach/pull/20721 -[#20735]: https://github.com/cockroachdb/cockroach/pull/20735 -[#20825]: https://github.com/cockroachdb/cockroach/pull/20825 -[#20847]: https://github.com/cockroachdb/cockroach/pull/20847 -[#20950]: https://github.com/cockroachdb/cockroach/pull/20950 -[#20971]: https://github.com/cockroachdb/cockroach/pull/20971 -[#20972]: https://github.com/cockroachdb/cockroach/pull/20972 -[#20973]: https://github.com/cockroachdb/cockroach/pull/20973 -[#20976]: https://github.com/cockroachdb/cockroach/pull/20976 -[#21010]: https://github.com/cockroachdb/cockroach/pull/21010 -[#21019]: https://github.com/cockroachdb/cockroach/pull/21019 -[#21044]: https://github.com/cockroachdb/cockroach/pull/21044 -[#21078]: https://github.com/cockroachdb/cockroach/pull/21078 -[#21118]: https://github.com/cockroachdb/cockroach/pull/21118 -[#21120]: https://github.com/cockroachdb/cockroach/pull/21120 -[#21170]: https://github.com/cockroachdb/cockroach/pull/21170 -[#21177]: https://github.com/cockroachdb/cockroach/pull/21177 -[#21180]: https://github.com/cockroachdb/cockroach/pull/21180 -[#21254]: https://github.com/cockroachdb/cockroach/pull/21254 -[#21263]: https://github.com/cockroachdb/cockroach/pull/21263 -[#21267]: https://github.com/cockroachdb/cockroach/pull/21267 -[#21268]: https://github.com/cockroachdb/cockroach/pull/21268 -[#21272]: https://github.com/cockroachdb/cockroach/pull/21272 -[#21274]: https://github.com/cockroachdb/cockroach/pull/21274 -[#21275]: https://github.com/cockroachdb/cockroach/pull/21275 -[#21279]: https://github.com/cockroachdb/cockroach/pull/21279 -[#21280]: https://github.com/cockroachdb/cockroach/pull/21280 -[#21292]: https://github.com/cockroachdb/cockroach/pull/21292 -[#21297]: https://github.com/cockroachdb/cockroach/pull/21297 -[#21299]: https://github.com/cockroachdb/cockroach/pull/21299 -[#21305]: https://github.com/cockroachdb/cockroach/pull/21305 -[#21307]: https://github.com/cockroachdb/cockroach/pull/21307 -[#21308]: https://github.com/cockroachdb/cockroach/pull/21308 -[#21312]: https://github.com/cockroachdb/cockroach/pull/21312 -[#21313]: https://github.com/cockroachdb/cockroach/pull/21313 -[#21314]: https://github.com/cockroachdb/cockroach/pull/21314 -[#21315]: https://github.com/cockroachdb/cockroach/pull/21315 -[#21317]: https://github.com/cockroachdb/cockroach/pull/21317 -[#21318]: https://github.com/cockroachdb/cockroach/pull/21318 -[#21319]: https://github.com/cockroachdb/cockroach/pull/21319 -[#21322]: https://github.com/cockroachdb/cockroach/pull/21322 -[#21325]: https://github.com/cockroachdb/cockroach/pull/21325 -[#21328]: https://github.com/cockroachdb/cockroach/pull/21328 -[#21329]: https://github.com/cockroachdb/cockroach/pull/21329 -[#21330]: https://github.com/cockroachdb/cockroach/pull/21330 -[#21333]: https://github.com/cockroachdb/cockroach/pull/21333 -[#21335]: https://github.com/cockroachdb/cockroach/pull/21335 -[#21336]: https://github.com/cockroachdb/cockroach/pull/21336 -[#21339]: https://github.com/cockroachdb/cockroach/pull/21339 -[#21341]: https://github.com/cockroachdb/cockroach/pull/21341 -[#21344]: https://github.com/cockroachdb/cockroach/pull/21344 -[#21346]: 
https://github.com/cockroachdb/cockroach/pull/21346 -[#21347]: https://github.com/cockroachdb/cockroach/pull/21347 -[#21349]: https://github.com/cockroachdb/cockroach/pull/21349 -[#21352]: https://github.com/cockroachdb/cockroach/pull/21352 -[#21353]: https://github.com/cockroachdb/cockroach/pull/21353 -[#21356]: https://github.com/cockroachdb/cockroach/pull/21356 -[#21358]: https://github.com/cockroachdb/cockroach/pull/21358 -[#21359]: https://github.com/cockroachdb/cockroach/pull/21359 -[#21361]: https://github.com/cockroachdb/cockroach/pull/21361 -[#21365]: https://github.com/cockroachdb/cockroach/pull/21365 -[#21366]: https://github.com/cockroachdb/cockroach/pull/21366 -[#21368]: https://github.com/cockroachdb/cockroach/pull/21368 -[#21370]: https://github.com/cockroachdb/cockroach/pull/21370 -[#21374]: https://github.com/cockroachdb/cockroach/pull/21374 -[#21375]: https://github.com/cockroachdb/cockroach/pull/21375 -[#21376]: https://github.com/cockroachdb/cockroach/pull/21376 -[#21377]: https://github.com/cockroachdb/cockroach/pull/21377 -[#21379]: https://github.com/cockroachdb/cockroach/pull/21379 -[#21380]: https://github.com/cockroachdb/cockroach/pull/21380 -[#21381]: https://github.com/cockroachdb/cockroach/pull/21381 -[#21382]: https://github.com/cockroachdb/cockroach/pull/21382 -[#21388]: https://github.com/cockroachdb/cockroach/pull/21388 -[#21389]: https://github.com/cockroachdb/cockroach/pull/21389 -[#21391]: https://github.com/cockroachdb/cockroach/pull/21391 -[#21392]: https://github.com/cockroachdb/cockroach/pull/21392 -[#21393]: https://github.com/cockroachdb/cockroach/pull/21393 -[#21395]: https://github.com/cockroachdb/cockroach/pull/21395 -[#21397]: https://github.com/cockroachdb/cockroach/pull/21397 -[#21398]: https://github.com/cockroachdb/cockroach/pull/21398 -[#21401]: https://github.com/cockroachdb/cockroach/pull/21401 -[#21403]: https://github.com/cockroachdb/cockroach/pull/21403 -[#21404]: https://github.com/cockroachdb/cockroach/pull/21404 -[#21405]: https://github.com/cockroachdb/cockroach/pull/21405 -[#21407]: https://github.com/cockroachdb/cockroach/pull/21407 -[#21422]: https://github.com/cockroachdb/cockroach/pull/21422 -[#21426]: https://github.com/cockroachdb/cockroach/pull/21426 -[#21427]: https://github.com/cockroachdb/cockroach/pull/21427 -[#21428]: https://github.com/cockroachdb/cockroach/pull/21428 -[#21430]: https://github.com/cockroachdb/cockroach/pull/21430 -[#21431]: https://github.com/cockroachdb/cockroach/pull/21431 -[#21436]: https://github.com/cockroachdb/cockroach/pull/21436 -[#21438]: https://github.com/cockroachdb/cockroach/pull/21438 -[#21439]: https://github.com/cockroachdb/cockroach/pull/21439 -[#21441]: https://github.com/cockroachdb/cockroach/pull/21441 -[#21448]: https://github.com/cockroachdb/cockroach/pull/21448 -[#21449]: https://github.com/cockroachdb/cockroach/pull/21449 -[#21450]: https://github.com/cockroachdb/cockroach/pull/21450 -[#21457]: https://github.com/cockroachdb/cockroach/pull/21457 -[#21459]: https://github.com/cockroachdb/cockroach/pull/21459 -[#21466]: https://github.com/cockroachdb/cockroach/pull/21466 -[#21469]: https://github.com/cockroachdb/cockroach/pull/21469 -[#21472]: https://github.com/cockroachdb/cockroach/pull/21472 -[#21475]: https://github.com/cockroachdb/cockroach/pull/21475 -[#21484]: https://github.com/cockroachdb/cockroach/pull/21484 -[#21493]: https://github.com/cockroachdb/cockroach/pull/21493 diff --git a/src/current/_includes/releases/v2.0/v2.0-alpha.20180129.md 
b/src/current/_includes/releases/v2.0/v2.0-alpha.20180129.md deleted file mode 100644 index 5431659c313..00000000000 --- a/src/current/_includes/releases/v2.0/v2.0-alpha.20180129.md +++ /dev/null @@ -1,186 +0,0 @@ -

{{ include.release }}

- -Release Date: {{ include.release_date | date: "%B %-d, %Y" }} - -

General Changes

- -- CockroachDB now uses gRPC version 1.9.2. [#21600][#21600] - -

Enterprise Changes

- -- Failed [`RESTORE`](https://www.cockroachlabs.com/docs/v2.0/restore) cleanup no longer causes the [`RESTORE`](https://www.cockroachlabs.com/docs/v2.0/restore) job to perpetually loop if external storage fails or is removed. [#21559][#21559] -- Non-transactional [`BACKUP`](https://www.cockroachlabs.com/docs/v2.0/backup) and [`RESTORE`](https://www.cockroachlabs.com/docs/v2.0/restore) statements are now disallowed inside transactions. [#21488][#21488] - -

SQL Language Changes

- -- Reduced the size of `system.rangelog` entries to save disk space. [#21410][#21410] -- Prevented adding both a cascading referential constraint action and a check constraint to a column. [#21690][#21690] -- Added the `json_array_length` function, which returns the number of elements in the outermost `JSON` or `JSONB` array (see the sketch after this list). [#21611][#21611] -- Added the `referential_constraints` table to the [`information_schema`](https://www.cockroachlabs.com/docs/v2.0/information-schema). The `referential_constraints` table contains all foreign key constraints in the current database. [#21615][#21615] -- Replaced `BOOL` columns in the [`information_schema`](https://www.cockroachlabs.com/docs/v2.0/information-schema) with `STRING` columns to conform to the SQL specification. [#21612][#21612] -- Added support for inverted indexes for `JSON`. [#20941][#20941] -- [`DROP SEQUENCE`](https://www.cockroachlabs.com/docs/v2.0/drop-sequence) can no longer drop a `SEQUENCE` that is currently in use. [#21364][#21364] -- The [`IMPORT`](https://www.cockroachlabs.com/docs/v2.0/import) statement no longer requires the `temp` directory and no longer creates a `RESTORE` job. Additionally, the `TRANSFORM_ONLY` option for [`IMPORT`](https://www.cockroachlabs.com/docs/v2.0/import) has been renamed to `TRANSFORM` and now takes an argument specifying the target directory. [#21490][#21490] - -
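A quick sketch of the two JSON additions above; the table and column names are made up:

```sql
-- json_array_length counts only the outermost array's elements.
SELECT json_array_length('[1, 2, [3, 4]]'::JSONB);  -- 3

-- An inverted index on a JSONB column.
CREATE TABLE events (id INT PRIMARY KEY, payload JSONB);
CREATE INVERTED INDEX idx_payload ON events (payload);
```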

Command-Line Changes

- -- `cockroach start --background` now returns earlier for nodes awaiting the `cockroach init` command, facilitating use of automated scripts. [#21682][#21682] -- Command-line utilities that print results with the `pretty` format now use consistent horizontal alignment for every row of the result. [#18491][#18491] - -

Admin UI Changes

- -- The **Command Queue** debug page now displays errors correctly. [#21529][#21529] -- The **Problem Ranges** debug page now displays all problem ranges for the cluster. [#21522][#21522] - -

Bug Fixes

- -- Fixed a crash caused by a `NULL` placeholder in comparison expressions. [#21705][#21705] -- The [`EXPLAIN (VERBOSE)`](https://www.cockroachlabs.com/docs/v2.0/explain#verbose-option) output now correctly shows the columns and properties of each query node (instead of incorrectly showing the columns and properties of the `root`). [#21527][#21527] -- Added a mechanism to recompute range stats automatically over time to reflect changes in the underlying logic. [#21345][#21345] - -

Performance Improvements

- -- Multiple ranges can now split at the same time, improving our ability to handle hotspot workloads. [#21673][#21673] -- Improved performance for queries that do not read any columns from the key component of the row. [#21571][#21571] -- Improved performance of scans by reducing the work done for columns that are not required. [#21572][#21572] -- Improved efficiency of the key decoding operation. [#21498][#21498] -- Sped up the performance of low-level delete operations. [#21507][#21507] -- Prevented the jobs table from growing excessively large during jobs table updates. [#21575][#21575] - -
- -

Contributors

- -This release includes 110 merged PRs by 31 authors. We would like to thank the following contributors from the CockroachDB community: - -- Constantine Peresypkin -- 何羿宏 - -Special thanks to first-time contributors Andrew Kimball, Nathaniel Stewart, Constantine Peresypkin and Paul Bardea. - -
- -[#18491]: https://github.com/cockroachdb/cockroach/pull/18491 -[#19618]: https://github.com/cockroachdb/cockroach/pull/19618 -[#20215]: https://github.com/cockroachdb/cockroach/pull/20215 -[#20915]: https://github.com/cockroachdb/cockroach/pull/20915 -[#20941]: https://github.com/cockroachdb/cockroach/pull/20941 -[#21132]: https://github.com/cockroachdb/cockroach/pull/21132 -[#21345]: https://github.com/cockroachdb/cockroach/pull/21345 -[#21363]: https://github.com/cockroachdb/cockroach/pull/21363 -[#21364]: https://github.com/cockroachdb/cockroach/pull/21364 -[#21373]: https://github.com/cockroachdb/cockroach/pull/21373 -[#21386]: https://github.com/cockroachdb/cockroach/pull/21386 -[#21387]: https://github.com/cockroachdb/cockroach/pull/21387 -[#21408]: https://github.com/cockroachdb/cockroach/pull/21408 -[#21410]: https://github.com/cockroachdb/cockroach/pull/21410 -[#21411]: https://github.com/cockroachdb/cockroach/pull/21411 -[#21418]: https://github.com/cockroachdb/cockroach/pull/21418 -[#21447]: https://github.com/cockroachdb/cockroach/pull/21447 -[#21473]: https://github.com/cockroachdb/cockroach/pull/21473 -[#21478]: https://github.com/cockroachdb/cockroach/pull/21478 -[#21479]: https://github.com/cockroachdb/cockroach/pull/21479 -[#21481]: https://github.com/cockroachdb/cockroach/pull/21481 -[#21486]: https://github.com/cockroachdb/cockroach/pull/21486 -[#21488]: https://github.com/cockroachdb/cockroach/pull/21488 -[#21490]: https://github.com/cockroachdb/cockroach/pull/21490 -[#21491]: https://github.com/cockroachdb/cockroach/pull/21491 -[#21497]: https://github.com/cockroachdb/cockroach/pull/21497 -[#21498]: https://github.com/cockroachdb/cockroach/pull/21498 -[#21505]: https://github.com/cockroachdb/cockroach/pull/21505 -[#21507]: https://github.com/cockroachdb/cockroach/pull/21507 -[#21508]: https://github.com/cockroachdb/cockroach/pull/21508 -[#21511]: https://github.com/cockroachdb/cockroach/pull/21511 -[#21512]: https://github.com/cockroachdb/cockroach/pull/21512 -[#21513]: https://github.com/cockroachdb/cockroach/pull/21513 -[#21515]: https://github.com/cockroachdb/cockroach/pull/21515 -[#21517]: https://github.com/cockroachdb/cockroach/pull/21517 -[#21519]: https://github.com/cockroachdb/cockroach/pull/21519 -[#21520]: https://github.com/cockroachdb/cockroach/pull/21520 -[#21521]: https://github.com/cockroachdb/cockroach/pull/21521 -[#21522]: https://github.com/cockroachdb/cockroach/pull/21522 -[#21524]: https://github.com/cockroachdb/cockroach/pull/21524 -[#21527]: https://github.com/cockroachdb/cockroach/pull/21527 -[#21529]: https://github.com/cockroachdb/cockroach/pull/21529 -[#21532]: https://github.com/cockroachdb/cockroach/pull/21532 -[#21533]: https://github.com/cockroachdb/cockroach/pull/21533 -[#21534]: https://github.com/cockroachdb/cockroach/pull/21534 -[#21540]: https://github.com/cockroachdb/cockroach/pull/21540 -[#21541]: https://github.com/cockroachdb/cockroach/pull/21541 -[#21542]: https://github.com/cockroachdb/cockroach/pull/21542 -[#21546]: https://github.com/cockroachdb/cockroach/pull/21546 -[#21547]: https://github.com/cockroachdb/cockroach/pull/21547 -[#21549]: https://github.com/cockroachdb/cockroach/pull/21549 -[#21550]: https://github.com/cockroachdb/cockroach/pull/21550 -[#21553]: https://github.com/cockroachdb/cockroach/pull/21553 -[#21554]: https://github.com/cockroachdb/cockroach/pull/21554 -[#21555]: https://github.com/cockroachdb/cockroach/pull/21555 -[#21556]: https://github.com/cockroachdb/cockroach/pull/21556 -[#21557]: 
https://github.com/cockroachdb/cockroach/pull/21557 -[#21559]: https://github.com/cockroachdb/cockroach/pull/21559 -[#21562]: https://github.com/cockroachdb/cockroach/pull/21562 -[#21567]: https://github.com/cockroachdb/cockroach/pull/21567 -[#21568]: https://github.com/cockroachdb/cockroach/pull/21568 -[#21571]: https://github.com/cockroachdb/cockroach/pull/21571 -[#21572]: https://github.com/cockroachdb/cockroach/pull/21572 -[#21575]: https://github.com/cockroachdb/cockroach/pull/21575 -[#21579]: https://github.com/cockroachdb/cockroach/pull/21579 -[#21582]: https://github.com/cockroachdb/cockroach/pull/21582 -[#21585]: https://github.com/cockroachdb/cockroach/pull/21585 -[#21587]: https://github.com/cockroachdb/cockroach/pull/21587 -[#21588]: https://github.com/cockroachdb/cockroach/pull/21588 -[#21591]: https://github.com/cockroachdb/cockroach/pull/21591 -[#21596]: https://github.com/cockroachdb/cockroach/pull/21596 -[#21600]: https://github.com/cockroachdb/cockroach/pull/21600 -[#21603]: https://github.com/cockroachdb/cockroach/pull/21603 -[#21606]: https://github.com/cockroachdb/cockroach/pull/21606 -[#21607]: https://github.com/cockroachdb/cockroach/pull/21607 -[#21610]: https://github.com/cockroachdb/cockroach/pull/21610 -[#21611]: https://github.com/cockroachdb/cockroach/pull/21611 -[#21612]: https://github.com/cockroachdb/cockroach/pull/21612 -[#21614]: https://github.com/cockroachdb/cockroach/pull/21614 -[#21615]: https://github.com/cockroachdb/cockroach/pull/21615 -[#21626]: https://github.com/cockroachdb/cockroach/pull/21626 -[#21631]: https://github.com/cockroachdb/cockroach/pull/21631 -[#21633]: https://github.com/cockroachdb/cockroach/pull/21633 -[#21636]: https://github.com/cockroachdb/cockroach/pull/21636 -[#21644]: https://github.com/cockroachdb/cockroach/pull/21644 -[#21645]: https://github.com/cockroachdb/cockroach/pull/21645 -[#21650]: https://github.com/cockroachdb/cockroach/pull/21650 -[#21652]: https://github.com/cockroachdb/cockroach/pull/21652 -[#21656]: https://github.com/cockroachdb/cockroach/pull/21656 -[#21658]: https://github.com/cockroachdb/cockroach/pull/21658 -[#21662]: https://github.com/cockroachdb/cockroach/pull/21662 -[#21666]: https://github.com/cockroachdb/cockroach/pull/21666 -[#21667]: https://github.com/cockroachdb/cockroach/pull/21667 -[#21668]: https://github.com/cockroachdb/cockroach/pull/21668 -[#21670]: https://github.com/cockroachdb/cockroach/pull/21670 -[#21673]: https://github.com/cockroachdb/cockroach/pull/21673 -[#21676]: https://github.com/cockroachdb/cockroach/pull/21676 -[#21677]: https://github.com/cockroachdb/cockroach/pull/21677 -[#21678]: https://github.com/cockroachdb/cockroach/pull/21678 -[#21679]: https://github.com/cockroachdb/cockroach/pull/21679 -[#21682]: https://github.com/cockroachdb/cockroach/pull/21682 -[#21685]: https://github.com/cockroachdb/cockroach/pull/21685 -[#21690]: https://github.com/cockroachdb/cockroach/pull/21690 -[#21691]: https://github.com/cockroachdb/cockroach/pull/21691 -[#21698]: https://github.com/cockroachdb/cockroach/pull/21698 -[#21705]: https://github.com/cockroachdb/cockroach/pull/21705 -[#21708]: https://github.com/cockroachdb/cockroach/pull/21708 -[#21714]: https://github.com/cockroachdb/cockroach/pull/21714 -[#21720]: https://github.com/cockroachdb/cockroach/pull/21720 -[#21721]: https://github.com/cockroachdb/cockroach/pull/21721 -[2f402e234]: https://github.com/cockroachdb/cockroach/commit/2f402e234 -[d2e5fd351]: https://github.com/cockroachdb/cockroach/commit/d2e5fd351 -[187f8e662]: 
https://github.com/cockroachdb/cockroach/commit/187f8e662 -[559fcffe7]: https://github.com/cockroachdb/cockroach/commit/559fcffe7 -[f8cb074e2]: https://github.com/cockroachdb/cockroach/commit/f8cb074e2 -[0cde4dcdb]: https://github.com/cockroachdb/cockroach/commit/0cde4dcdb -[2d4883c12]: https://github.com/cockroachdb/cockroach/commit/2d4883c12 -[7b6e775b9]: https://github.com/cockroachdb/cockroach/commit/7b6e775b9 -[b74ce84bc]: https://github.com/cockroachdb/cockroach/commit/b74ce84bc -[7ee5c635b]: https://github.com/cockroachdb/cockroach/commit/7ee5c635b -[9e25aeb8a]: https://github.com/cockroachdb/cockroach/commit/9e25aeb8a -[900291e48]: https://github.com/cockroachdb/cockroach/commit/900291e48 -[33d1f2749]: https://github.com/cockroachdb/cockroach/commit/33d1f2749 -[4f33f0239]: https://github.com/cockroachdb/cockroach/commit/4f33f0239 diff --git a/src/current/_includes/releases/v2.0/v2.0-alpha.20180212.md b/src/current/_includes/releases/v2.0/v2.0-alpha.20180212.md deleted file mode 100644 index 3b191feeaee..00000000000 --- a/src/current/_includes/releases/v2.0/v2.0-alpha.20180212.md +++ /dev/null @@ -1,135 +0,0 @@ -

{{ include.release }}

- -Release Date: {{ include.release_date | date: "%B %-d, %Y" }} - -{{site.data.alerts.callout_danger}}A bug that could cause transactional anomalies was introduced in this release, so this release has been withdrawn.{{site.data.alerts.end}} - -

Enterprise Edition Changes

- -- [Sequences](https://www.cockroachlabs.com/docs/v2.0/create-sequence) are now supported in enterprise [`BACKUP`](https://www.cockroachlabs.com/docs/v2.0/backup)/[`RESTORE`](https://www.cockroachlabs.com/docs/v2.0/restore) jobs. - - This changes how sequences are stored in the key-value storage layer, so existing sequences must be dropped and recreated. Since a sequence cannot be dropped while it is being used in a column's `DEFAULT` expression, those expressions must be dropped before the sequence is dropped, and recreated after the sequence is recreated. The `setval()` function can be used to set the value of a sequence to what it was previously (see the sketch after this list). [#21684][#21684] - -
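The drop-and-recreate migration described above might look like this sketch; the table, column, and sequence names are hypothetical:

```sql
-- Detach the sequence from the column's DEFAULT, then recreate it.
ALTER TABLE accounts ALTER COLUMN id DROP DEFAULT;
DROP SEQUENCE accounts_id_seq;
CREATE SEQUENCE accounts_id_seq;
-- Restore the sequence's previous position, then reattach the DEFAULT.
SELECT setval('accounts_id_seq', (SELECT max(id) FROM accounts));
ALTER TABLE accounts ALTER COLUMN id SET DEFAULT nextval('accounts_id_seq');
```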

SQL Language Changes

- -- Casts between [array types](https://www.cockroachlabs.com/docs/v2.0/array) are now allowed when a cast between the parameter types is allowed. [#22338][#22338] -- Scalar functions can now be used in `FROM` clauses (see the sketch after this list). [#22314][#22314] -- Added privilege checks on [sequences](https://www.cockroachlabs.com/docs/v2.0/create-sequence). [#22284][#22284] -- The `ON DELETE SET DEFAULT`, `ON UPDATE SET DEFAULT`, `ON DELETE SET NULL`, and `ON UPDATE SET NULL` foreign key constraint actions are now fully supported. [#22220][#22220] [#21767][#21767] [#21716][#21716] -- JSON inverted indexes can now be specified in a [`CREATE TABLE`](https://www.cockroachlabs.com/docs/v2.0/create-table) statement. [#22217][#22217] -- When a node is gracefully shut down, it avoids planning new queries, and distributed queries are given the amount of time specified by the new `server.drain_max_wait` [cluster setting](https://www.cockroachlabs.com/docs/v2.0/cluster-settings) to complete before the node is drained and stopped. [#20450][#20450] -- [Collated strings](https://www.cockroachlabs.com/docs/v2.0/collate) are now supported in [`IMPORT`](https://www.cockroachlabs.com/docs/v2.0/import) jobs. [#21859][#21859] -- The new `SHOW GRANTS ON ROLE` statement and the `pg_catalog.pg_auth_members` table list role memberships. [#22205][#22205] [#21780][#21780] -- Role memberships are now considered in permission checks. [#21820][#21820] - -
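Two of the changes above in miniature; both statements are illustrative sketches rather than canonical usage:

```sql
-- Array-to-array casts follow the element types' cast rules.
SELECT ARRAY[1, 2, 3]::STRING[];

-- A scalar function used in a FROM clause.
SELECT * FROM abs(-3);
```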

Command-Line Changes

- -- [Debug commands](https://www.cockroachlabs.com/docs/v2.0/debug-zip) now open RocksDB in read-only mode. This makes them faster and able to run in parallel. [#21778][#21778] -- The [`cockroach dump`](https://www.cockroachlabs.com/docs/v2.0/sql-dump) command now outputs `CREATE SEQUENCE` statements before the `CREATE TABLE` statements that use them. [#21774][#21774] -- For better compatibility with `psql`'s extended format, the table formatter `records` now properly indicates line continuations in multi-line rows. [#22325][#22325] -- The [`cockroach sql`](https://www.cockroachlabs.com/docs/v2.0/use-the-built-in-sql-client) client-side option `show_times` is now always enabled when output goes to a terminal, not just when `display_format` is set to `pretty`. [#22326][#22326] -- When formatting `cockroach sql` results with `--format=sql`, the row count is now printed in a SQL comment at the end. [#22327][#22327] -- When formatting `cockroach sql` results with `--format=csv` or `--format=tsv`, result rows that contain special characters are now quoted properly. [#19306][#19306] - -

Admin UI Changes

- -- Added an icon to indicate when descriptions in the **Jobs** table are shortened and expandable. [#22221][#22221] -- Added "compaction queue" graphs to the **Queues** dashboard. [#22218][#22218] -- Added Raft snapshot queue metrics to the **Queues** dashboard. [#22210][#22210] -- When there are dead nodes, they are now shown before live nodes on the **Nodes List** page. [#22222][#22222] -- Links to documentation in the Admin UI now point to the docs for v2.0. [#21894][#21894] - -

Bug Fixes

- -- Errors from DDL statements sent by a client as part of a transaction, but in a different query string than the final commit, are no longer silently swallowed. [#21829][#21829] -- Fixed a bug in cascading foreign key actions. [#21799][#21799] -- Tabular results where the column labels contain newline characters are now rendered properly. [#19306][#19306] -- Fixed a bug that prevented long descriptions in the Admin UI **Jobs** table from being collapsed after being expanded. [#22221][#22221] -- Fixed a bug that prevented using `SHOW GRANTS` with a grantee but no targets. [#21864][#21864] -- Fixed a panic with certain queries involving the `REGCLASS` type. [#22310][#22310] -- Fixed the behavior and types of the `encode()` and `decode()` functions (see the sketch after this list). [#22230][#22230] -- Fixed a bug that prevented passing the same tuple for `FROM` and `TO` in `ALTER TABLE ... SCATTER`. [#21830][#21830] -- Fixed a regression that caused certain queries using `LIKE` or `SIMILAR TO` with an indexed column to be slow. [#21842][#21842] -- Fixed a stack overflow in the code for shutting down a server when out of disk space. [#21768][#21768] -- Fixed Windows release builds. [#21793][#21793] -- Fixed an issue with the wire-formatting of `BYTES` arrays. [#21712][#21712] -- Fixed a bug that could lead to a node crashing and needing to be reinitialized. [#21771][#21771] -- When a database is created, dropped, or renamed, the SQL session is blocked until the effects of the operation are visible to future queries in that session. [#21900][#21900] -- Fixed a bug where healthy nodes could appear as "Suspect" in the Admin UI if the web browser's local clock was skewed. [#22237][#22237] - -
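The corrected `encode()`/`decode()` behavior can be checked with a hex round trip; a sketch, with `hex` being one of the supported formats:

```sql
SELECT encode(b'hello', 'hex');      -- '68656c6c6f' (STRING)
SELECT decode('68656c6c6f', 'hex');  -- b'hello' (BYTES)
```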

Performance Improvements

- -- Significantly reduced the likelihood of serializable restarts seen by clients due to concurrent workloads. [#21140][#21140] -- Reduced disruption from nodes recovering from network partitions. [#22316][#22316] -- Improved the performance of scans by copying less data in memory. [#22309][#22309] -- Slightly improved the performance of low-level scan operations. [#22244][#22244] -- When a range grows too large, writes are now backpressured until the range is successfully able to split. This prevents unbounded range growth and improves a cluster's ability to stay healthy under hotspot workloads. [#21777][#21777] -- The `information_schema` and `pg_catalog` databases are now faster to query. [#21609][#21609] -- Reduced the write amplification of Raft replication. [#20647][#20647] - -

Doc Updates

- -- Added [cloud-specific hardware recommendations](https://www.cockroachlabs.com/docs/v2.0/recommended-production-settings#cloud-specific-recommendations). [#2312](https://github.com/cockroachdb/docs/pull/2312) -- Added a [detailed listing of SQL standard features with CockroachDB's level of support](https://www.cockroachlabs.com/docs/v2.0/sql-feature-support). [#2442](https://github.com/cockroachdb/docs/pull/2442) -- Added docs on the [`INET`](https://www.cockroachlabs.com/docs/v2.0/inet) data type. [#2439](https://github.com/cockroachdb/docs/pull/2439) -- Added docs on the [`SHOW CREATE SEQUENCE`](https://www.cockroachlabs.com/docs/v2.0/show-create-sequence) statement. [#2406](https://github.com/cockroachdb/docs/pull/2406) - -
- -

Contributors

- -This release includes 133 merged PRs by 28 authors. We would like to thank the contributors from the CockroachDB community, especially first-time contributor pocockn. - -
- -[#19306]: https://github.com/cockroachdb/cockroach/pull/19306 -[#20290]: https://github.com/cockroachdb/cockroach/pull/20290 -[#20450]: https://github.com/cockroachdb/cockroach/pull/20450 -[#20647]: https://github.com/cockroachdb/cockroach/pull/20647 -[#21140]: https://github.com/cockroachdb/cockroach/pull/21140 -[#21609]: https://github.com/cockroachdb/cockroach/pull/21609 -[#21684]: https://github.com/cockroachdb/cockroach/pull/21684 -[#21712]: https://github.com/cockroachdb/cockroach/pull/21712 -[#21716]: https://github.com/cockroachdb/cockroach/pull/21716 -[#21754]: https://github.com/cockroachdb/cockroach/pull/21754 -[#21767]: https://github.com/cockroachdb/cockroach/pull/21767 -[#21768]: https://github.com/cockroachdb/cockroach/pull/21768 -[#21771]: https://github.com/cockroachdb/cockroach/pull/21771 -[#21774]: https://github.com/cockroachdb/cockroach/pull/21774 -[#21777]: https://github.com/cockroachdb/cockroach/pull/21777 -[#21778]: https://github.com/cockroachdb/cockroach/pull/21778 -[#21780]: https://github.com/cockroachdb/cockroach/pull/21780 -[#21793]: https://github.com/cockroachdb/cockroach/pull/21793 -[#21799]: https://github.com/cockroachdb/cockroach/pull/21799 -[#21816]: https://github.com/cockroachdb/cockroach/pull/21816 -[#21820]: https://github.com/cockroachdb/cockroach/pull/21820 -[#21829]: https://github.com/cockroachdb/cockroach/pull/21829 -[#21830]: https://github.com/cockroachdb/cockroach/pull/21830 -[#21842]: https://github.com/cockroachdb/cockroach/pull/21842 -[#21847]: https://github.com/cockroachdb/cockroach/pull/21847 -[#21859]: https://github.com/cockroachdb/cockroach/pull/21859 -[#21864]: https://github.com/cockroachdb/cockroach/pull/21864 -[#21894]: https://github.com/cockroachdb/cockroach/pull/21894 -[#21900]: https://github.com/cockroachdb/cockroach/pull/21900 -[#22205]: https://github.com/cockroachdb/cockroach/pull/22205 -[#22210]: https://github.com/cockroachdb/cockroach/pull/22210 -[#22217]: https://github.com/cockroachdb/cockroach/pull/22217 -[#22218]: https://github.com/cockroachdb/cockroach/pull/22218 -[#22220]: https://github.com/cockroachdb/cockroach/pull/22220 -[#22221]: https://github.com/cockroachdb/cockroach/pull/22221 -[#22222]: https://github.com/cockroachdb/cockroach/pull/22222 -[#22230]: https://github.com/cockroachdb/cockroach/pull/22230 -[#22237]: https://github.com/cockroachdb/cockroach/pull/22237 -[#22244]: https://github.com/cockroachdb/cockroach/pull/22244 -[#22245]: https://github.com/cockroachdb/cockroach/pull/22245 -[#22278]: https://github.com/cockroachdb/cockroach/pull/22278 -[#22284]: https://github.com/cockroachdb/cockroach/pull/22284 -[#22309]: https://github.com/cockroachdb/cockroach/pull/22309 -[#22310]: https://github.com/cockroachdb/cockroach/pull/22310 -[#22314]: https://github.com/cockroachdb/cockroach/pull/22314 -[#22316]: https://github.com/cockroachdb/cockroach/pull/22316 -[#22319]: https://github.com/cockroachdb/cockroach/pull/22319 -[#22325]: https://github.com/cockroachdb/cockroach/pull/22325 -[#22326]: https://github.com/cockroachdb/cockroach/pull/22326 -[#22327]: https://github.com/cockroachdb/cockroach/pull/22327 -[#22338]: https://github.com/cockroachdb/cockroach/pull/22338 diff --git a/src/current/_includes/releases/v2.0/v2.0-beta.20180305.md b/src/current/_includes/releases/v2.0/v2.0-beta.20180305.md deleted file mode 100644 index eaa1cff8f63..00000000000 --- a/src/current/_includes/releases/v2.0/v2.0-beta.20180305.md +++ /dev/null @@ -1,375 +0,0 @@ -

{{ include.release }}

- -Release Date: {{ include.release_date | date: "%B %-d, %Y" }} - -This week's release includes: - - - Improved support for large delete statements. - - Reduced disruption during upgrades and restarts. - - Reduced occurrence of serializable transaction restarts. - -{{site.data.alerts.callout_danger}}This release contains a bug that can cause incorrect results for certain queries using JOIN and ORDER BY. The bug will be fixed in next week's beta.{{site.data.alerts.end}} - -

Backwards-Incompatible Changes

- -- [Sequences](https://www.cockroachlabs.com/docs/v2.0/create-sequence) are now supported in enterprise [`BACKUP`](https://www.cockroachlabs.com/docs/v2.0/backup)/[`RESTORE`](https://www.cockroachlabs.com/docs/v2.0/restore) jobs. - - This changes how sequences are stored in the key-value storage layer, so existing sequences must be dropped and recreated. Since a sequence cannot be dropped while it is being used in a column's [`DEFAULT`](https://www.cockroachlabs.com/docs/v2.0/default-value) expression, those expressions must be dropped before the sequence is dropped, and recreated after the sequence is recreated. The `setval()` function can be used to restore a sequence to its previous value (see the sketch after this list). [#21684][#21684] - -- Positive [constraints in replication zone configs](https://www.cockroachlabs.com/docs/v2.0/configure-replication-zones#replication-constraints) no longer work. Any existing positive constraints will be ignored. This change should not impact existing deployments since positive constraints have not been documented or supported for some time. [#22906][#22906] - -
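A sketch of that migration, assuming a hypothetical table `t` whose `id` column defaults to a sequence `seq` (the value `12345` passed to `setval()` is a placeholder for the sequence's previous value):

```sql
-- Detach the DEFAULT expression that references the sequence.
ALTER TABLE t ALTER COLUMN id DROP DEFAULT;

-- Drop and recreate the sequence so it uses the new storage format.
DROP SEQUENCE seq;
CREATE SEQUENCE seq;

-- Restore the sequence to its previous value (placeholder).
SELECT setval('seq', 12345);

-- Reattach the DEFAULT expression.
ALTER TABLE t ALTER COLUMN id SET DEFAULT nextval('seq');
```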

Build Changes

- -- CockroachDB now builds with Go 1.9.4 and higher. [#22608][#22608] - -

General Changes

- -- [Diagnostics reports](https://www.cockroachlabs.com/docs/v2.0/diagnostics-reporting) now include information about changed settings and statistics on types of errors encountered during SQL execution. [#22705][#22705], [#22693][#22693], [#22948][#22948] - -

Enterprise Edition Changes

- -- Revision history [`BACKUP`](https://www.cockroachlabs.com/docs/v2.0/backup)/[`RESTORE`](https://www.cockroachlabs.com/docs/v2.0/restore) is no longer considered experimental (see the sketch after this list). [#22679][#22679] -- Revision history [`BACKUP`](https://www.cockroachlabs.com/docs/v2.0/backup)/[`RESTORE`](https://www.cockroachlabs.com/docs/v2.0/restore) now handles schema changes. [#21717][#21717] -- CockroachDB now checks that a backup actually contains the requested restore time. [#22659][#22659] -- Improved [`BACKUP`](https://www.cockroachlabs.com/docs/v2.0/backup)'s handling of tables after [`TRUNCATE`](https://www.cockroachlabs.com/docs/v2.0/truncate). [#21895][#21895] -- Ensured that only backups created by the same cluster can be used in incremental backups. [#22474][#22474] -- Avoided extra internal copying of files during [`RESTORE`](https://www.cockroachlabs.com/docs/v2.0/restore). [#22281][#22281] -- Added a geographical map to the homepage of the Admin UI enterprise version, showing the location of nodes and localities in the cluster. The map is annotated with several top-level metrics: storage capacity used, queries per second, and current CPU usage, as well as the liveness status of nodes in the cluster. [#22763][#22763] - -
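A hedged sketch of a revision-history backup and a restore to a time it covers (the storage URL, database name, and timestamp are placeholders):

```sql
-- Back up the database, retaining all MVCC revisions.
BACKUP DATABASE bank TO 'gs://backups/bank-weekly' WITH revision_history;

-- Later, restore the database as of a time captured by the backup.
RESTORE DATABASE bank FROM 'gs://backups/bank-weekly'
  AS OF SYSTEM TIME '2018-02-26 10:00:00';
```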

SQL Language Changes

- -- The type determined for constant `NULL` expressions is renamed to `unknown` for better compatibility with PostgreSQL. [#23150][#23150] -- Deleting multiple rows at once now consumes less memory. [#23013][#23013] -- Attempts to modify virtual schemas with DDL statements now fail with a clearer error message. [#23041][#23041] -- The new `SHOW SCHEMAS` statement lists the valid virtual schemas alongside the physical schema `public`. [#23041][#23041] -- CockroachDB now recognizes the special syntax `SET SCHEMA <name>` as an alias for `SET search_path = <name>` for better compatibility with PostgreSQL. [#23041][#23041] -- `current_role()` and `current_catalog()` are supported as aliases for the `current_user()` and `current_database()` [built-in functions](https://www.cockroachlabs.com/docs/v2.0/functions-and-operators) for better compatibility with PostgreSQL. [#23041][#23041] -- CockroachDB now returns the correct error code for division by zero. [#22948][#22948] -- The GC of table data after a [`DROP TABLE`](https://www.cockroachlabs.com/docs/v2.0/drop-table) statement now respects changes to the GC TTL interval specified in the zone config. [#22903][#22903] -- The full names of tables/views/sequences are now properly logged in the system event log. [#22848][#22848] -- CockroachDB now recognizes the syntax `db.public.tbl` in addition to `db.tbl` for better compatibility with PostgreSQL. The handling of the session variable `search_path`, as well as that of the built-in functions `current_schemas()` and `current_schema()`, is now closer to that of PostgreSQL. Thus `SHOW TABLES FROM` can now inspect the tables of a specific schema (for example, `SHOW TABLES FROM db.public` or `SHOW TABLES FROM db.pg_catalog`). `SHOW GRANTS` also shows the schema of the databases and tables. [#22753][#22753] -- Users can now configure auditing per table and per access mode with `ALTER TABLE`. [#22534][#22534] -- SQL execution logs enabled by the [cluster setting](https://www.cockroachlabs.com/docs/v2.0/cluster-settings) `sql.trace.log_statement_execute` now go to a separate log file. This is an experimental feature intended to aid in troubleshooting CockroachDB. [#22534][#22534] -- Added the `string_to_array()` [built-in function](https://www.cockroachlabs.com/docs/v2.0/functions-and-operators). [#22391][#22391] -- Added the `constraint_column_usage` table and roles-related tables to the [`information_schema`](https://www.cockroachlabs.com/docs/v2.0/information-schema) database. [#22323][#22323] [#22242][#22242] -- [`IMPORT`](https://www.cockroachlabs.com/docs/v2.0/import) no longer requires the experimental setting. [#22531][#22531] -- Computed columns and `CHECK` constraints now correctly report column names in the case of a type error. [#22500][#22500] -- The output of [`JSON`](https://www.cockroachlabs.com/docs/v2.0/jsonb) data now matches that of PostgreSQL. [#22393][#22393] -- Allowed [`IMPORT`](https://www.cockroachlabs.com/docs/v2.0/import) jobs to be paused. `IMPORT` jobs now correctly resume instead of being abandoned if the coordinator goes down. [#22291][#22291] -- Removed the `into_db` option in [`IMPORT`](https://www.cockroachlabs.com/docs/v2.0/import). The database is now specified as part of the table name. [#21813][#21813] -- Changed computed column syntax and improved related error messages. [#22429][#22429] -- Implemented additional [`INET`](https://www.cockroachlabs.com/docs/v2.0/inet) column type operators: `<<`, `<<=`, `>>`, `>>=`, `&&`, `+`, `-`, `^`, `|`, and `&`, covering contains/contained-by checks, binary operations, and addition/subtraction (see the sketch after this list). These operators are compatible with PostgreSQL 10 and are described in Table 9.36 of the PostgreSQL documentation. [#21437][#21437]
-- CockroachDB now properly rejects incorrectly-cased SQL function names with an error. [#22365][#22365] -- Allowed [`DEFAULT`](https://www.cockroachlabs.com/docs/v2.0/default-value) expressions in the `CREATE TABLE` of an [`IMPORT`](https://www.cockroachlabs.com/docs/v2.0/import) CSV. The expressions are not evaluated (the data must still be present in the CSV); this change only allows them to be part of the table definition. [#22307][#22307] -- Added the `#-` [operator](https://www.cockroachlabs.com/docs/v2.0/functions-and-operators) for `JSON`. [#22375][#22375] -- The `SET transaction_isolation` statement is now supported for better PostgreSQL compatibility. [#22389][#22389] -- Allowed creation of computed columns. [#21823][#21823] -- Avoided extra internal copying of files during [`IMPORT`](https://www.cockroachlabs.com/docs/v2.0/import). [#22281][#22281] -- Casts between [array types](https://www.cockroachlabs.com/docs/v2.0/array) are now allowed when a cast between the parameter types is allowed. [#22338][#22338] -- Scalar functions can now be used in `FROM` clauses. [#22314][#22314] -- Added privilege checks on [sequences](https://www.cockroachlabs.com/docs/v2.0/create-sequence). [#22284][#22284] -- The `ON DELETE SET DEFAULT`, `ON UPDATE SET DEFAULT`, `ON DELETE SET NULL`, and `ON UPDATE SET NULL` [foreign key constraint actions](https://www.cockroachlabs.com/docs/v2.0/foreign-key#foreign-key-actions-new-in-v2-0) are now fully supported. [#22220][#22220] [#21767][#21767] [#21716][#21716] -- The `ON DELETE CASCADE` and `ON UPDATE CASCADE` [foreign key constraint actions](https://www.cockroachlabs.com/docs/v2.0/foreign-key#foreign-key-actions-new-in-v2-0) can now also contain `CHECK` constraints. [#22535][#22535] -- JSON inverted indexes can now be specified in a [`CREATE TABLE`](https://www.cockroachlabs.com/docs/v2.0/create-table) statement. [#22217][#22217] -- When a node is gracefully shut down, it stops planning new queries, and running distributed queries are given the amount of time specified by the new `server.drain_max_wait` [cluster setting](https://www.cockroachlabs.com/docs/v2.0/cluster-settings) to complete before the node is drained and stopped. [#20450][#20450] -- [Collated strings](https://www.cockroachlabs.com/docs/v2.0/collate) are now supported in [IMPORT](https://www.cockroachlabs.com/docs/v2.0/import) jobs. [#21859][#21859] -- The new `SHOW GRANTS ON ROLE` statement and the `pg_catalog.pg_auth_members` table list role memberships. [#22205][#22205] [#21780][#21780] -- Role memberships are now considered in permission checks. [#21820][#21820] - -
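A quick look at the new `INET` operators (a sketch; the addresses are arbitrary):

```sql
-- Strict subnet containment: is the address contained by the subnet?
SELECT '192.168.1.5'::INET << '192.168.1.0/24'::INET;       -- true

-- Do two subnets overlap?
SELECT '192.168.1.0/24'::INET && '192.168.1.128/25'::INET;  -- true

-- Address arithmetic.
SELECT '192.168.1.5'::INET + 10;                            -- 192.168.1.15
```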

Command-Line Changes

- -- [Replication zone constraints](https://www.cockroachlabs.com/docs/v2.0/configure-replication-zones#replication-constraints) can now be specified on a per-replica basis, meaning you can configure some replicas in a zone's ranges to follow one set of constraints and other replicas to follow other constraints. [#22906][#22906] -- Per-replica constraints no longer have to add up to the total number of replicas in a range. If you do not specify all the replicas, then the remaining replicas will be allowed on any store. [#23081][#23081] -- [`cockroach sql`](https://www.cockroachlabs.com/docs/v2.0/use-the-built-in-sql-client) now reminds the user about `SET database = ...` and `CREATE DATABASE` if started with no current database set. [#23089][#23089] -- Error messages displayed while connecting to a server with an incompatible version have been improved. [#22709][#22709] -- The `--cache` and `--max-sql-memory` flags of [`cockroach start`](https://www.cockroachlabs.com/docs/v2.0/start-a-node) now also accept decimal notation to specify a fraction of total available RAM, e.g., `--cache=.25` is equivalent to `--cache=25%`. This simplifies integration with system management tools. [#22460][#22460] -- When printing tabular results as CSV or TSV, no final row count is emitted. This is intended to increase interoperability with external tools. [#20835][#20835] -- The `pretty` formatter no longer introduces special Unicode characters in multi-line table cells, for better compatibility with certain clients. To disambiguate multi-line cells from multiple single-line cells, a user can use `WITH ORDINALITY` to add a row numbering column. [#22324][#22324] -- Allowed specification of arbitrary RocksDB options. [#22401][#22401] -- [Debug commands](https://www.cockroachlabs.com/docs/v2.0/debug-zip) now open RocksDB in read-only mode. This makes them faster and able to run in parallel. [#21778][#21778] -- The [`cockroach dump`](https://www.cockroachlabs.com/docs/v2.0/sql-dump) command now outputs `CREATE SEQUENCE` statements before the `CREATE TABLE` statements that use them. [#21774][#21774] -- For better compatibility with `psql`'s extended format, the table formatter `records` now properly indicates line continuations in multi-line rows. [#22325][#22325] -- The [`cockroach sql`](https://www.cockroachlabs.com/docs/v2.0/use-the-built-in-sql-client) client-side option `show_times` is now always enabled when output goes to a terminal, not just when `display_format` is set to `pretty`. [#22326][#22326] -- When formatting [`cockroach sql`](https://www.cockroachlabs.com/docs/v2.0/use-the-built-in-sql-client) results with `--format=sql`, the row count is now printed in a SQL comment at the end. [#22327][#22327] -- When formatting [`cockroach sql`](https://www.cockroachlabs.com/docs/v2.0/use-the-built-in-sql-client) results with `--format=csv` or `--format=tsv`, result rows that contain special characters are now quoted properly. [#19306][#19306] - -

Admin UI Changes

- -- [Decommissioned nodes](https://www.cockroachlabs.com/docs/v2.0/remove-nodes) are no longer included in cluster stats aggregates. [#22711][#22711] -- Time series metrics dashboards now show their own title rather than the generic "Cluster Overview". [#22746][#22746] -- The URLs for Admin UI pages have been reorganized to provide more consistent structure to the site. Old links will redirect to the new location of the page. [#22746][#22746] -- Nodes being decommissioned are now included in the total nodes count until they are completely decommissioned. [#22690][#22690] -- Added new graphs for monitoring activity of the time series system. [#22672][#22672] -- Disk usage for time series data is now visible on the **Databases** page. [#22398][#22398] -- Added a `ui-clean` build task. [#22552][#22552] -- Added an icon to indicate when descriptions in the **Jobs** table are shortened and expandable. [#22221][#22221] -- Added "compaction queue" graphs to the **Queues** dashboard. [#22218][#22218] -- Added Raft snapshot queue metrics to the **Queues** dashboard. [#22210][#22210] -- Dead nodes are now displayed before live nodes on the **Nodes List** page. [#22222][#22222] -- Links to documentation in the Admin UI now point to the docs for v2.0. [#21894][#21894] - -

Bug Fixes

- -- Fixed an issue where Admin UI graph tooltips would continue to display zero values for nodes which had long been decommissioned. [#22626][#22626] -- Fixed an issue where Admin UI graphs would occasionally have a persistent "dip" at the leading edge of data. [#22570][#22570] -- Fixed an issue where viewing Admin UI graphs for very long time spans (e.g., 1 month) could cause excessive memory usage. [#22392][#22392] -- Fixed padding on the **Node Diagnostics** page of the Admin UI. [#23019][#23019] -- Corrected the title of the decommissioned node list, which was mistakenly updated to say "Decommissioning". [#22703][#22703] -- Fixed a bug in [`cockroach dump`](https://www.cockroachlabs.com/docs/v2.0/sql-dump) output with `SEQUENCES`. [#22619][#22619] -- Fixed a bug that created uneven distribution of data (or failures in some cases) during [`IMPORT`](https://www.cockroachlabs.com/docs/v2.0/import) of tables without an explicit primary key. [#22542][#22542] -- Fixed a bug that could prevent disk space from being reclaimed. [#23153][#23153] -- [Replication zone configs](https://www.cockroachlabs.com/docs/v2.0/configure-replication-zones) no longer accept negative numbers as input. [#23081][#23081] -- Fixed the occasional selection of sub-optimal rebalance targets. [#23081][#23081] -- [`cockroach dump`](https://www.cockroachlabs.com/docs/v2.0/sql-dump) is now able to dump sequences with non-default parameters. [#23062][#23062] -- [Arrays](https://www.cockroachlabs.com/docs/v2.0/array) now support the `IS [NOT] DISTINCT FROM` operators. [#23090][#23090] -- [`SHOW TABLES`](https://www.cockroachlabs.com/docs/v2.0/show-tables) is now again able to inspect virtual schemas. [#23041][#23041] -- The special form of [`CREATE TABLE .. AS`](https://www.cockroachlabs.com/docs/v2.0/create-table-as) now properly supports placeholders in the subquery. [#23046][#23046] -- Fixed a bug where ranges could get stuck in an infinite "removal pending" state and would refuse to accept new writes. [#23024][#23024] -- Fixed incorrect index constraints on primary key columns on unique indexes. [#23003][#23003] -- Fixed a panic when upgrading quickly from v1.0.x to v2.0.x. [#22971][#22971] -- Fixed a bug that prevented joins on interleaved tables with certain layouts from working. [#22935][#22935] -- The service latency tracked for SQL statements now includes the wait time of the execute message in the input queue. [#22881][#22881] -- The conversion from [`INTERVAL`](https://www.cockroachlabs.com/docs/v2.0/interval) to [`FLOAT`](https://www.cockroachlabs.com/docs/v2.0/float) now properly returns the number of seconds in the interval. [#22894][#22894] -- Fixed incorrect query results when the `WHERE` condition contains `IN` expressions where the right-hand side tuple contains `NULL`s. [#22735][#22735] -- Fixed incorrect handling for `IS (NOT) DISTINCT FROM` when either side is a tuple that contains `NULL` (see the sketch after this list). [#22718][#22718] -- Fixed incorrect evaluation of `IN` expressions where the left-hand side is a tuple, and some of the tuples on either side contain `NULL`. [#22718][#22718] -- Expressions stored in check constraints and computed columns are now stored de-qualified so that they no longer refer to a specific database or table. [#22667][#22667] -- Fixed a bug where reusing addresses of decommissioned nodes could cause issues with Admin UI graphs. [#22614][#22614] -- [`IMPORT`](https://www.cockroachlabs.com/docs/v2.0/import) jobs can no longer be started if the target table already exists. [#22627][#22627]
-- Computed columns can no longer be added to a table after table creation. [#22653][#22653] -- Allowed [`UPSERT`](https://www.cockroachlabs.com/docs/v2.0/upsert)ing into a table with computed columns. [#22517][#22517] -- Computed columns are now correctly disallowed from being foreign key references. [#22511][#22511] -- Various primitives that expect table names as arguments now properly reject invalid table names. [#22577][#22577] -- `AddSSTable` no longer accidentally destroys files in the log on success. [#22551][#22551] -- `IsDistinctFrom` with a `NULL` placeholder no longer returns incorrect results. [#22433][#22433] -- Fixed a bug that caused incorrect results for joins where columns that are constrained to be equal have different types. [#22549][#22549] -- Implemented additional safeguards against RPC connections between nodes that belong to different clusters. [#22518][#22518] -- The [`/health` endpoint](https://www.cockroachlabs.com/docs/v2.0/monitoring-and-alerting) now returns a node as unhealthy when draining or decommissioning. [#22502][#22502] -- Aggregates that take null arguments now return the correct results. [#22507][#22507] -- Fixed empty plan columns of `sequenceSelectNode`. [#22495][#22495] -- Disallowed any inserts into computed columns. [#22470][#22470] -- Tables with computed columns now produce a meaningful dump. [#22402][#22402] -- [`cockroach sql`](https://www.cockroachlabs.com/docs/v2.0/use-the-built-in-sql-client) no longer produces an error when an empty statement is entered at the interactive prompt. [#22449][#22449] -- The `pg_typeof()` function now returns the correct type for the output of `UNION ALL` even when the left sub-select has a `NULL` column. [#22438][#22438] -- ` ` literal casts now work correctly for all fixed-length types. [#22397][#22397] -- Errors from DDL statements sent by a client as part of a transaction, but in a different query string than the final commit, are no longer silently swallowed. [#21829][#21829] -- Fixed a bug in cascading foreign key actions. [#21799][#21799] -- Tabular results where the column labels contain newline characters are now rendered properly. [#19306][#19306] -- Fixed a bug that prevented long descriptions in the Admin UI **Jobs** table from being collapsed after being expanded. [#22221][#22221] -- Fixed a bug that prevented using [`SHOW GRANTS`](https://www.cockroachlabs.com/docs/v2.0/show-grants) with a grantee but no targets. [#21864][#21864] -- Fixed a panic with certain queries involving the `REGCLASS` type. [#22310][#22310] -- Fixed the behavior and types of the `encode()` and `decode()` functions. [#22230][#22230] -- Fixed a bug that prevented passing the same tuple for `FROM` and `TO` in `ALTER TABLE ... SCATTER`. [#21830][#21830] -- Fixed a regression that caused certain queries using `LIKE` or `SIMILAR TO` with an indexed column to be slow. [#21842][#21842] -- Fixed a stack overflow in the code for shutting down a server when out of disk space. [#21768][#21768] -- Fixed Windows release builds. [#21793][#21793] -- Fixed an issue with the wire-formatting of [`BYTES`](https://www.cockroachlabs.com/docs/v2.0/bytes) arrays. [#21712][#21712] -- Fixed a bug that could lead to a node crashing and needing to be reinitialized. [#21771][#21771] -- When a database is created, dropped, or renamed, the SQL session is blocked until the effects of the operation are visible to future queries in that session. 
[#21900][#21900] -- Fixed a bug where healthy nodes could appear as "Suspect" in the Admin UI if the web browser's local clock was skewed. [#22237][#22237] -- Fixed bugs when running DistSQL queries across mixed-version (1.1.x and 2.0-alpha) clusters. [#22897][#22897] - -
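A sketch of the corrected behaviors; the expected results follow standard three-valued logic, and the final cast illustrates the `INTERVAL` fix:

```sql
-- IS NOT DISTINCT FROM treats two NULLs as equal.
SELECT NULL IS NOT DISTINCT FROM NULL;          -- true

-- Tuples containing NULLs now compare correctly.
SELECT (1, NULL) IS DISTINCT FROM (1, NULL);    -- false

-- IN against a right-hand side containing NULL yields NULL, not false.
SELECT 2 IN (1, NULL);                          -- NULL

-- INTERVAL to FLOAT now returns the interval's length in seconds.
SELECT '00:01:30'::INTERVAL::FLOAT;             -- 90
```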

Performance Improvements

- -- Improved a cluster's ability to continue operating when nearly out of disk space on most nodes. [#21866][#21866] -- Disk space is more aggressively freed up when the disk is almost full. [#22235][#22235] -- Experimentally enabled some joins to perform a lookup join, increasing join speed in cases where the right side of the join is much larger than the left. [#22674][#22674] -- Supported distributed execution of `INTERSECT` and `EXCEPT` queries. [#22442][#22442] -- Reduced cancellation time of DistSQL aggregation queries. [#22684][#22684] -- Unnecessary value checksums are no longer computed, speeding up database writes. [#22487][#22487] -- Reduced unnecessary logging in the storage layer. [#22516][#22516] -- Improved the performance of distributed SQL queries. [#22471][#22471] -- Distributed execution of `INTERSECT ALL` and `EXCEPT ALL` queries is now supported. [#21896][#21896] -- Allowed `-` in usernames, but not as the first character. [#22728][#22728] -- A `COMMIT` reporting an error generated by a previous parallel statement (i.e., `RETURNING NOTHING`) no longer leaves the connection in an aborted transaction state. Instead, the transaction is considered completed and a `ROLLBACK` is not necessary (see the sketch after this list). [#22683][#22683] -- Significantly reduced the likelihood of serializable restarts seen by clients due to concurrent workloads. [#21140][#21140] -- Reduced disruption from nodes recovering from network partitions. [#22316][#22316] -- Improved the performance of scans by copying less data in memory. [#22309][#22309] -- Slightly improved the performance of low-level scan operations. [#22244][#22244] -- When a range grows too large, writes are now backpressured until the range is successfully able to split. This prevents unbounded range growth and improves a cluster's ability to stay healthy under hotspot workloads. [#21777][#21777] -- The `information_schema` and `pg_catalog` databases are now faster to query. [#21609][#21609] -- Reduced the write amplification of Raft replication. [#20647][#20647] - -
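For reference, parallel statement execution is opted into with `RETURNING NOTHING`; a sketch with a hypothetical `kv` table:

```sql
BEGIN;
-- These statements may execute in parallel within the transaction.
INSERT INTO kv VALUES (1, 'a') RETURNING NOTHING;
INSERT INTO kv VALUES (2, 'b') RETURNING NOTHING;
-- COMMIT waits for the parallel statements; if one of them failed, the
-- error surfaces here and the transaction counts as completed, so no
-- ROLLBACK is needed.
COMMIT;
```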

Doc Updates

- -- Added [cloud-specific hardware recommendations](https://www.cockroachlabs.com/docs/v2.0/recommended-production-settings#cloud-specific-recommendations). [#2312](https://github.com/cockroachdb/docs/pull/2312) -- Added a [detailed listing of SQL standard features with CockroachDB's level of support](https://www.cockroachlabs.com/docs/v2.0/sql-feature-support). [#2442](https://github.com/cockroachdb/docs/pull/2442) -- Added docs on the [`INET`](https://www.cockroachlabs.com/docs/v2.0/inet) data type. [#2439](https://github.com/cockroachdb/docs/pull/2439) -- Added docs on the [`SHOW CREATE SEQUENCE`](https://www.cockroachlabs.com/docs/v2.0/show-create-sequence) statement. [#2406](https://github.com/cockroachdb/docs/pull/2406) -- Clarified that [password creation](https://www.cockroachlabs.com/docs/v2.0/create-and-manage-users#user-authentication) is only supported in secure clusters. [#2567](https://github.com/cockroachdb/docs/pull/2567) -- Added docs on [production monitoring tools](https://www.cockroachlabs.com/docs/v2.0/monitoring-and-alerting) and the critical events and metrics to alert on. [#2564](https://github.com/cockroachdb/docs/pull/2564) -- Added docs on the [`JSONB`](https://www.cockroachlabs.com/docs/v2.0/jsonb) data type. [#2561](https://github.com/cockroachdb/docs/pull/2561) -- Added docs on the `BETWEEN SYMMETRIC` [operator](https://www.cockroachlabs.com/docs/v2.0/functions-and-operators#operators). [#2551](https://github.com/cockroachdb/docs/pull/2551) -- Updated docs on [supported casts for `ARRAY` values](https://www.cockroachlabs.com/docs/v2.0/array#supported-casting-conversionnew-in-v2-0). [#2549](https://github.com/cockroachdb/docs/pull/2549) -- Various improvements to docs on the [built-in SQL client](https://www.cockroachlabs.com/docs/v2.0/use-the-built-in-sql-client). [#2544](https://github.com/cockroachdb/docs/pull/2544) - -
- -

Contributors

- -This release includes 430 merged PRs by 37 authors. We would like to thank all contributors from the CockroachDB community, with special thanks to first-time contributors noonan, Mark Wistrom, pocockn, and 何羿宏. - -
- -[#19306]: https://github.com/cockroachdb/cockroach/pull/19306 -[#20290]: https://github.com/cockroachdb/cockroach/pull/20290 -[#20450]: https://github.com/cockroachdb/cockroach/pull/20450 -[#20647]: https://github.com/cockroachdb/cockroach/pull/20647 -[#20790]: https://github.com/cockroachdb/cockroach/pull/20790 -[#20835]: https://github.com/cockroachdb/cockroach/pull/20835 -[#21140]: https://github.com/cockroachdb/cockroach/pull/21140 -[#21437]: https://github.com/cockroachdb/cockroach/pull/21437 -[#21477]: https://github.com/cockroachdb/cockroach/pull/21477 -[#21609]: https://github.com/cockroachdb/cockroach/pull/21609 -[#21684]: https://github.com/cockroachdb/cockroach/pull/21684 -[#21712]: https://github.com/cockroachdb/cockroach/pull/21712 -[#21716]: https://github.com/cockroachdb/cockroach/pull/21716 -[#21717]: https://github.com/cockroachdb/cockroach/pull/21717 -[#21754]: https://github.com/cockroachdb/cockroach/pull/21754 -[#21767]: https://github.com/cockroachdb/cockroach/pull/21767 -[#21768]: https://github.com/cockroachdb/cockroach/pull/21768 -[#21771]: https://github.com/cockroachdb/cockroach/pull/21771 -[#21774]: https://github.com/cockroachdb/cockroach/pull/21774 -[#21777]: https://github.com/cockroachdb/cockroach/pull/21777 -[#21778]: https://github.com/cockroachdb/cockroach/pull/21778 -[#21780]: https://github.com/cockroachdb/cockroach/pull/21780 -[#21793]: https://github.com/cockroachdb/cockroach/pull/21793 -[#21799]: https://github.com/cockroachdb/cockroach/pull/21799 -[#21813]: https://github.com/cockroachdb/cockroach/pull/21813 -[#21816]: https://github.com/cockroachdb/cockroach/pull/21816 -[#21820]: https://github.com/cockroachdb/cockroach/pull/21820 -[#21823]: https://github.com/cockroachdb/cockroach/pull/21823 -[#21829]: https://github.com/cockroachdb/cockroach/pull/21829 -[#21830]: https://github.com/cockroachdb/cockroach/pull/21830 -[#21842]: https://github.com/cockroachdb/cockroach/pull/21842 -[#21847]: https://github.com/cockroachdb/cockroach/pull/21847 -[#21859]: https://github.com/cockroachdb/cockroach/pull/21859 -[#21864]: https://github.com/cockroachdb/cockroach/pull/21864 -[#21866]: https://github.com/cockroachdb/cockroach/pull/21866 -[#21894]: https://github.com/cockroachdb/cockroach/pull/21894 -[#21895]: https://github.com/cockroachdb/cockroach/pull/21895 -[#21896]: https://github.com/cockroachdb/cockroach/pull/21896 -[#21900]: https://github.com/cockroachdb/cockroach/pull/21900 -[#22205]: https://github.com/cockroachdb/cockroach/pull/22205 -[#22210]: https://github.com/cockroachdb/cockroach/pull/22210 -[#22217]: https://github.com/cockroachdb/cockroach/pull/22217 -[#22218]: https://github.com/cockroachdb/cockroach/pull/22218 -[#22220]: https://github.com/cockroachdb/cockroach/pull/22220 -[#22221]: https://github.com/cockroachdb/cockroach/pull/22221 -[#22222]: https://github.com/cockroachdb/cockroach/pull/22222 -[#22230]: https://github.com/cockroachdb/cockroach/pull/22230 -[#22235]: https://github.com/cockroachdb/cockroach/pull/22235 -[#22237]: https://github.com/cockroachdb/cockroach/pull/22237 -[#22242]: https://github.com/cockroachdb/cockroach/pull/22242 -[#22244]: https://github.com/cockroachdb/cockroach/pull/22244 -[#22245]: https://github.com/cockroachdb/cockroach/pull/22245 -[#22278]: https://github.com/cockroachdb/cockroach/pull/22278 -[#22281]: https://github.com/cockroachdb/cockroach/pull/22281 -[#22284]: https://github.com/cockroachdb/cockroach/pull/22284 -[#22291]: https://github.com/cockroachdb/cockroach/pull/22291 -[#22307]: 
https://github.com/cockroachdb/cockroach/pull/22307 -[#22309]: https://github.com/cockroachdb/cockroach/pull/22309 -[#22310]: https://github.com/cockroachdb/cockroach/pull/22310 -[#22314]: https://github.com/cockroachdb/cockroach/pull/22314 -[#22316]: https://github.com/cockroachdb/cockroach/pull/22316 -[#22319]: https://github.com/cockroachdb/cockroach/pull/22319 -[#22323]: https://github.com/cockroachdb/cockroach/pull/22323 -[#22324]: https://github.com/cockroachdb/cockroach/pull/22324 -[#22325]: https://github.com/cockroachdb/cockroach/pull/22325 -[#22326]: https://github.com/cockroachdb/cockroach/pull/22326 -[#22327]: https://github.com/cockroachdb/cockroach/pull/22327 -[#22338]: https://github.com/cockroachdb/cockroach/pull/22338 -[#22365]: https://github.com/cockroachdb/cockroach/pull/22365 -[#22375]: https://github.com/cockroachdb/cockroach/pull/22375 -[#22389]: https://github.com/cockroachdb/cockroach/pull/22389 -[#22391]: https://github.com/cockroachdb/cockroach/pull/22391 -[#22392]: https://github.com/cockroachdb/cockroach/pull/22392 -[#22393]: https://github.com/cockroachdb/cockroach/pull/22393 -[#22397]: https://github.com/cockroachdb/cockroach/pull/22397 -[#22398]: https://github.com/cockroachdb/cockroach/pull/22398 -[#22401]: https://github.com/cockroachdb/cockroach/pull/22401 -[#22402]: https://github.com/cockroachdb/cockroach/pull/22402 -[#22404]: https://github.com/cockroachdb/cockroach/pull/22404 -[#22416]: https://github.com/cockroachdb/cockroach/pull/22416 -[#22429]: https://github.com/cockroachdb/cockroach/pull/22429 -[#22433]: https://github.com/cockroachdb/cockroach/pull/22433 -[#22438]: https://github.com/cockroachdb/cockroach/pull/22438 -[#22442]: https://github.com/cockroachdb/cockroach/pull/22442 -[#22449]: https://github.com/cockroachdb/cockroach/pull/22449 -[#22455]: https://github.com/cockroachdb/cockroach/pull/22455 -[#22460]: https://github.com/cockroachdb/cockroach/pull/22460 -[#22470]: https://github.com/cockroachdb/cockroach/pull/22470 -[#22471]: https://github.com/cockroachdb/cockroach/pull/22471 -[#22474]: https://github.com/cockroachdb/cockroach/pull/22474 -[#22487]: https://github.com/cockroachdb/cockroach/pull/22487 -[#22495]: https://github.com/cockroachdb/cockroach/pull/22495 -[#22500]: https://github.com/cockroachdb/cockroach/pull/22500 -[#22502]: https://github.com/cockroachdb/cockroach/pull/22502 -[#22507]: https://github.com/cockroachdb/cockroach/pull/22507 -[#22511]: https://github.com/cockroachdb/cockroach/pull/22511 -[#22514]: https://github.com/cockroachdb/cockroach/pull/22514 -[#22516]: https://github.com/cockroachdb/cockroach/pull/22516 -[#22517]: https://github.com/cockroachdb/cockroach/pull/22517 -[#22518]: https://github.com/cockroachdb/cockroach/pull/22518 -[#22531]: https://github.com/cockroachdb/cockroach/pull/22531 -[#22534]: https://github.com/cockroachdb/cockroach/pull/22534 -[#22535]: https://github.com/cockroachdb/cockroach/pull/22535 -[#22542]: https://github.com/cockroachdb/cockroach/pull/22542 -[#22549]: https://github.com/cockroachdb/cockroach/pull/22549 -[#22551]: https://github.com/cockroachdb/cockroach/pull/22551 -[#22552]: https://github.com/cockroachdb/cockroach/pull/22552 -[#22570]: https://github.com/cockroachdb/cockroach/pull/22570 -[#22577]: https://github.com/cockroachdb/cockroach/pull/22577 -[#22601]: https://github.com/cockroachdb/cockroach/pull/22601 -[#22608]: https://github.com/cockroachdb/cockroach/pull/22608 -[#22614]: https://github.com/cockroachdb/cockroach/pull/22614 -[#22619]: 
https://github.com/cockroachdb/cockroach/pull/22619 -[#22620]: https://github.com/cockroachdb/cockroach/pull/22620 -[#22626]: https://github.com/cockroachdb/cockroach/pull/22626 -[#22627]: https://github.com/cockroachdb/cockroach/pull/22627 -[#22637]: https://github.com/cockroachdb/cockroach/pull/22637 -[#22648]: https://github.com/cockroachdb/cockroach/pull/22648 -[#22653]: https://github.com/cockroachdb/cockroach/pull/22653 -[#22658]: https://github.com/cockroachdb/cockroach/pull/22658 -[#22659]: https://github.com/cockroachdb/cockroach/pull/22659 -[#22667]: https://github.com/cockroachdb/cockroach/pull/22667 -[#22672]: https://github.com/cockroachdb/cockroach/pull/22672 -[#22674]: https://github.com/cockroachdb/cockroach/pull/22674 -[#22679]: https://github.com/cockroachdb/cockroach/pull/22679 -[#22683]: https://github.com/cockroachdb/cockroach/pull/22683 -[#22684]: https://github.com/cockroachdb/cockroach/pull/22684 -[#22690]: https://github.com/cockroachdb/cockroach/pull/22690 -[#22693]: https://github.com/cockroachdb/cockroach/pull/22693 -[#22703]: https://github.com/cockroachdb/cockroach/pull/22703 -[#22705]: https://github.com/cockroachdb/cockroach/pull/22705 -[#22709]: https://github.com/cockroachdb/cockroach/pull/22709 -[#22711]: https://github.com/cockroachdb/cockroach/pull/22711 -[#22718]: https://github.com/cockroachdb/cockroach/pull/22718 -[#22728]: https://github.com/cockroachdb/cockroach/pull/22728 -[#22729]: https://github.com/cockroachdb/cockroach/pull/22729 -[#22735]: https://github.com/cockroachdb/cockroach/pull/22735 -[#22746]: https://github.com/cockroachdb/cockroach/pull/22746 -[#22753]: https://github.com/cockroachdb/cockroach/pull/22753 -[#22763]: https://github.com/cockroachdb/cockroach/pull/22763 -[#22848]: https://github.com/cockroachdb/cockroach/pull/22848 -[#22881]: https://github.com/cockroachdb/cockroach/pull/22881 -[#22894]: https://github.com/cockroachdb/cockroach/pull/22894 -[#22897]: https://github.com/cockroachdb/cockroach/pull/22897 -[#22903]: https://github.com/cockroachdb/cockroach/pull/22903 -[#22906]: https://github.com/cockroachdb/cockroach/pull/22906 -[#22935]: https://github.com/cockroachdb/cockroach/pull/22935 -[#22948]: https://github.com/cockroachdb/cockroach/pull/22948 -[#22971]: https://github.com/cockroachdb/cockroach/pull/22971 -[#22983]: https://github.com/cockroachdb/cockroach/pull/22983 -[#23003]: https://github.com/cockroachdb/cockroach/pull/23003 -[#23013]: https://github.com/cockroachdb/cockroach/pull/23013 -[#23019]: https://github.com/cockroachdb/cockroach/pull/23019 -[#23024]: https://github.com/cockroachdb/cockroach/pull/23024 -[#23041]: https://github.com/cockroachdb/cockroach/pull/23041 -[#23046]: https://github.com/cockroachdb/cockroach/pull/23046 -[#23062]: https://github.com/cockroachdb/cockroach/pull/23062 -[#23081]: https://github.com/cockroachdb/cockroach/pull/23081 -[#23089]: https://github.com/cockroachdb/cockroach/pull/23089 -[#23090]: https://github.com/cockroachdb/cockroach/pull/23090 -[#23150]: https://github.com/cockroachdb/cockroach/pull/23150 -[#23153]: https://github.com/cockroachdb/cockroach/pull/23153 diff --git a/src/current/_includes/releases/v2.0/v2.0-beta.20180312.md b/src/current/_includes/releases/v2.0/v2.0-beta.20180312.md deleted file mode 100644 index 7d1f164843d..00000000000 --- a/src/current/_includes/releases/v2.0/v2.0-beta.20180312.md +++ /dev/null @@ -1,110 +0,0 @@ -

{{ include.release }}

- -Release Date: {{ include.release_date | date: "%B %-d, %Y" }} - -In this release, we’ve enhanced our debug pages to support graphing custom metrics, improved handling of large deletes, and fixed several bugs. - - - Custom graph debug page [#23227](https://github.com/cockroachdb/cockroach/pull/23227) - - Improved handling of large deletes [#23289](https://github.com/cockroachdb/cockroach/pull/23289) - -

Build Changes

- -- Binaries are now built with Go 1.10 by default. [#23311][#23311] - -

General Changes

- -- [Logging data](https://www.cockroachlabs.com/docs/v2.0/debug-and-error-logs) is now flushed to files every second to aid troubleshooting and monitoring. Synchronization to disk is performed separately every 30 seconds. [#23231][#23231] -- Disabling [diagnostics reporting](https://www.cockroachlabs.com/docs/v1.1/diagnostics-reporting) also disables new version notification checks. [#23007][#23007] -- Removed the `diagnostics.reporting.report_metrics` [cluster setting](https://www.cockroachlabs.com/docs/v2.0/cluster-settings), which was redundant with the `COCKROACH_SKIP_ENABLING_DIAGNOSTIC_REPORTING` environment variable. [#23007][#23007] -- All internal error messages are now logged when logging is set to a high enough verbosity. [#23127][#23127] - -

SQL Language Changes

- -- Improved handling of large [`DELETE`](https://www.cockroachlabs.com/docs/v2.0/delete) statements. They now either complete or fail with an error message indicating that the transaction is too large to complete; see the batching sketch after this list. [#23289][#23289] -- The `pg_catalog` virtual tables, as well as the special casts `::regproc` and `::regclass`, can now only be queried by clients that [set a current database](https://www.cockroachlabs.com/docs/v2.0/set-vars). [#23148][#23148] - -
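When a single delete would exceed the transaction size limit, a common workaround is batching by primary key; a sketch assuming a hypothetical `events` table with an integer `id` primary key:

```sql
-- Delete bounded key ranges so each transaction stays small; repeat
-- with advancing bounds (e.g., from application code) until done.
DELETE FROM events WHERE id >= 0 AND id < 10000;
DELETE FROM events WHERE id >= 10000 AND id < 20000;
```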

Command-Line Changes

- -- When a node spends more than 30 seconds waiting for an `init` command or to join a cluster, a help message is now printed to `stdout`. [#23181][#23181] - -

Admin UI Changes

- -- Added a new debug page that allows users to create a "custom" graph displaying any combination of metrics. [#23227][#23227] -- In the geographical map on the homepage of the Admin UI enterprise version, node components now link to the **Node Details** page. [#23283][#23283] -- Removed the **Keys Written per Second per Store** graph. [#23303][#23303] -- Added the **Lead Transferee** field to the **Range Debug** page. [#23241][#23241] - -

Bug Fixes

- -- Fixed a correctness bug where some `ORDER BY` queries would not return the correct results. [#23541][#23541] -- The Admin UI no longer hangs after a node's network configuration has changed. [#23348][#23348] -- The binary format for [`JSONB`](https://www.cockroachlabs.com/docs/v2.0/jsonb) values is now supported. [#23215][#23215] -- A node now waits in an unready state for the length of time specified by the `server.shutdown.drain_wait` cluster setting before draining. This helps ensure that load balancers will not send client traffic to a node about to be drained. [#23319][#23319] -- Fixed a panic when using `UPSERT ... RETURNING` with `UNION`. [#23317][#23317] -- Prevented disruptions in performance when gracefully shutting down a node. [#23300][#23300] -- Hardened the [cluster version upgrade](https://www.cockroachlabs.com/docs/v2.0/upgrade-cockroach-version) mechanism. Rapid upgrades through more than two versions could sometimes fail recoverably. [#23287][#23287] -- Fixed a deadlock when tables are rapidly created or dropped. [#23288][#23288] -- Fixed a small memory leak in [`IMPORT`](https://www.cockroachlabs.com/docs/v2.0/import) CSV. [#23259][#23259] -- Prevented a panic in DistSQL under certain error conditions. [#23201][#23201] -- Added a [readiness endpoint](https://www.cockroachlabs.com/docs/v2.0/monitoring-and-alerting#health-ready-1) (`/health?ready=1`) for better integration with load balancers. [#23247][#23247] -- Fixed a zero QPS scenario when gracefully shutting down a node. [#23246][#23246] -- Secondary log files (e.g., the SQL execution log) are now periodically flushed to disk, in addition to the flush that occurs naturally when single log files are full (`--log-file-max-size`) and when the process terminates gracefully. Log file rotation is also now properly active for these files. [#23231][#23231] -- Previously, the `ranges` column in the `node status` command output only included ranges whose Raft leader was on the node in question. It now includes the count of all ranges on the node, regardless of where the Raft leader is. [#23180][#23180] -- Fixed a panic caused by empty `COCKROACH_UPDATE_CHECK_URL` or `COCKROACH_USAGE_REPORT_URL` environment variables. [#23008][#23008] -- Prevented stale reads caused by the system clock moving backwards while the `cockroach` process is not running. [#23122][#23122] -- Corrected the handling of cases where a replica fails to retrieve the last processed timestamp of a queue. [#23127][#23127] -- Fixed a bug where the liveness status would not always display correctly on the single-node page in the Admin UI. [#23193][#23193] -- Fixed a bug that showed incorrect descriptions on the **Jobs** page in the Admin UI. [#23256][#23256] - -

Doc Updates

- -- Updated the [GCE deployment tutorial](https://www.cockroachlabs.com/docs/v2.0/deploy-cockroachdb-on-google-cloud-platform) with guidance on increasing the backend timeout setting for TCP Proxy load balancers. [#2687](https://github.com/cockroachdb/docs/pull/2687) -- Documented [read refreshing](https://www.cockroachlabs.com/docs/v2.0/architecture/transaction-layer#read-refreshing) in the CockroachDB architecture documentation. [#2684](https://github.com/cockroachdb/docs/pull/2684) -- Updated the explanation of [automatic retries](https://www.cockroachlabs.com/docs/v2.0/transactions#automatic-retries). [#2680](https://github.com/cockroachdb/docs/pull/2680) -- Documented changes to the [built-in replication zone](https://www.cockroachlabs.com/docs/v2.0/configure-replication-zones#list-the-pre-configured-replication-zones). [#2677](https://github.com/cockroachdb/docs/pull/2677) -- Updated the [`information_schema`](https://www.cockroachlabs.com/docs/v2.0/information-schema) documentation to cover new views. [#2675](https://github.com/cockroachdb/docs/pull/2675) [#2673](https://github.com/cockroachdb/docs/pull/2673) [#2672](https://github.com/cockroachdb/docs/pull/2672) [#2668](https://github.com/cockroachdb/docs/pull/2668) [#2662](https://github.com/cockroachdb/docs/pull/2662) [#2654](https://github.com/cockroachdb/docs/pull/2654) [#2637](https://github.com/cockroachdb/docs/pull/2637) -- Clarified the target of the [`cockroach init`](https://www.cockroachlabs.com/docs/v2.0/initialize-a-cluster) command. [#2670](https://github.com/cockroachdb/docs/pull/2670) -- Added details about [how to monitor clock offsets](https://www.cockroachlabs.com/docs/v2.0/operational-faqs#how-can-i-tell-how-well-node-clocks-are-synchronized). [#2663](https://github.com/cockroachdb/docs/pull/2663) -- Documented how to [perform a rolling upgrade on a Kubernetes-orchestrated cluster](https://www.cockroachlabs.com/docs/v2.0/orchestrate-cockroachdb-with-kubernetes). [#2661](https://github.com/cockroachdb/docs/pull/2661) -- Updated the [Azure deployment tutorial](https://www.cockroachlabs.com/docs/v2.0/deploy-cockroachdb-on-microsoft-azure) with correct network security rule guidance. [#2653](https://github.com/cockroachdb/docs/pull/2653) -- Improved the documentation of the [`cockroach node status`](https://www.cockroachlabs.com/docs/v2.0/view-node-details) command. [#2639](https://github.com/cockroachdb/docs/pull/2639) -- Clarified that the [`DROP COLUMN`](https://www.cockroachlabs.com/docs/v2.0/drop-column) statement now drops `CHECK` constraints. [#2638](https://github.com/cockroachdb/docs/pull/2638) -- Added details about [disk space usage after deletes and select performance on deleted rows](https://www.cockroachlabs.com/docs/v2.0/delete#disk-space-usage-after-deletes). [#2635](https://github.com/cockroachdb/docs/pull/2635) -- Clarified that [`DROP INDEX .. CASCADE`](https://www.cockroachlabs.com/docs/v2.0/drop-index) is required to drop a `UNIQUE` index. [#2633](https://github.com/cockroachdb/docs/pull/2633) -- Updated the [`EXPLAIN`](https://www.cockroachlabs.com/docs/v2.0/explain) documentation to identify all explainable statements, cover the new output structure, and better explain the contents of the `Ordering` column. [#2632](https://github.com/cockroachdb/docs/pull/2632) [#2682](https://github.com/cockroachdb/docs/pull/2682) -- Defined "range lease" in the [CockroachDB architecture overview](https://www.cockroachlabs.com/docs/v2.0/architecture/overview#terms). 
[#2625](https://github.com/cockroachdb/docs/pull/2625) -- Added an FAQ on [preparing for planned node maintenance](https://www.cockroachlabs.com/docs/v2.0/operational-faqs#how-do-i-prepare-for-planned-node-maintenance). [#2600](https://github.com/cockroachdb/docs/pull/2600) - -

Contributors

- -This release includes 44 merged PRs by 21 authors. - -[#23007]: https://github.com/cockroachdb/cockroach/pull/23007 -[#23008]: https://github.com/cockroachdb/cockroach/pull/23008 -[#23122]: https://github.com/cockroachdb/cockroach/pull/23122 -[#23127]: https://github.com/cockroachdb/cockroach/pull/23127 -[#23148]: https://github.com/cockroachdb/cockroach/pull/23148 -[#23180]: https://github.com/cockroachdb/cockroach/pull/23180 -[#23181]: https://github.com/cockroachdb/cockroach/pull/23181 -[#23193]: https://github.com/cockroachdb/cockroach/pull/23193 -[#23201]: https://github.com/cockroachdb/cockroach/pull/23201 -[#23215]: https://github.com/cockroachdb/cockroach/pull/23215 -[#23227]: https://github.com/cockroachdb/cockroach/pull/23227 -[#23231]: https://github.com/cockroachdb/cockroach/pull/23231 -[#23241]: https://github.com/cockroachdb/cockroach/pull/23241 -[#23246]: https://github.com/cockroachdb/cockroach/pull/23246 -[#23247]: https://github.com/cockroachdb/cockroach/pull/23247 -[#23256]: https://github.com/cockroachdb/cockroach/pull/23256 -[#23259]: https://github.com/cockroachdb/cockroach/pull/23259 -[#23283]: https://github.com/cockroachdb/cockroach/pull/23283 -[#23287]: https://github.com/cockroachdb/cockroach/pull/23287 -[#23288]: https://github.com/cockroachdb/cockroach/pull/23288 -[#23289]: https://github.com/cockroachdb/cockroach/pull/23289 -[#23300]: https://github.com/cockroachdb/cockroach/pull/23300 -[#23303]: https://github.com/cockroachdb/cockroach/pull/23303 -[#23311]: https://github.com/cockroachdb/cockroach/pull/23311 -[#23317]: https://github.com/cockroachdb/cockroach/pull/23317 -[#23319]: https://github.com/cockroachdb/cockroach/pull/23319 -[#23348]: https://github.com/cockroachdb/cockroach/pull/23348 -[#23372]: https://github.com/cockroachdb/cockroach/pull/23372 -[#23541]: https://github.com/cockroachdb/cockroach/pull/23541 diff --git a/src/current/_includes/releases/v2.0/v2.0-beta.20180319.md b/src/current/_includes/releases/v2.0/v2.0-beta.20180319.md deleted file mode 100644 index 53b66bdb7c2..00000000000 --- a/src/current/_includes/releases/v2.0/v2.0-beta.20180319.md +++ /dev/null @@ -1,89 +0,0 @@ -

{{ include.release }}

- -Release Date: {{ include.release_date | date: "%B %-d, %Y" }} - -In this release, we’ve improved CockroachDB’s ability to run in orchestrated environments and closed several Postgres capability gaps. - -- Better [`/health`](https://www.cockroachlabs.com/docs/v2.0/monitoring-and-alerting) behavior to support orchestrated deployments. [#23551][#23551] - -

SQL Language Changes

- -- `NO CYCLE` and `CACHE 1` are now supported options during [sequence creation](https://www.cockroachlabs.com/docs/v2.0/create-sequence); see the sketch after this list. [#23518][#23518] -- `ISNULL` and `NOTNULL` are now accepted as alternatives to `IS NULL` and `IS NOT NULL`. [#23518][#23518] - -
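A quick sketch of both additions (the table `t` and column `v` are hypothetical):

```sql
-- NO CYCLE and CACHE 1 are now accepted; both match the default behavior.
CREATE SEQUENCE seq NO CYCLE CACHE 1;

-- ISNULL / NOTNULL as shorthand for IS NULL / IS NOT NULL.
SELECT * FROM t WHERE v ISNULL;
SELECT * FROM t WHERE v NOTNULL;
```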

Command-Line Changes

- -- Renamed the `server.drain_max_wait` [cluster setting](https://www.cockroachlabs.com/docs/v2.0/cluster-settings) to `server.shutdown.query_wait`. [#23629][#23629] -- The HAProxy config generated by [`cockroach gen haproxy`](https://www.cockroachlabs.com/docs/v2.0/generate-cockroachdb-resources) has been extended with readiness checks. [#23590][#23590] - -

Admin UI Changes

- -- The **Logs** page now shows a spinner instead of "No Data" while loading. [#23556][#23556] -- The **Logs** page now uses a monospace font, renders new lines, and packs lines together more tightly. [#23556][#23556] -- Moved the **Clock Offset** graph from the **Distributed** dashboard to the [Runtime dashboard](https://www.cockroachlabs.com/docs/v2.0/admin-ui-runtime-dashboard). Each node's clock offset is now displayed independently rather than aggregated. [#23627][#23627] -- When [reporting anonymous usage details](https://www.cockroachlabs.com/docs/v2.0/diagnostics-reporting), locality tier names are now redacted. [#23588][#23588] -- The entire node component in the Node Map is now clickable, not just its visible elements. [#23536][#23536] -- The Node Map now shows how long a node has been dead. [#23404][#23404] -- Correct liveness status text now displays on nodes in the Node Map. [#23404][#23404] - -

Bug Fixes

- -- Fixed a bug in which the usable capacity of nodes was not added up correctly in the Admin UI. [#23695][#23695] -- [`ARRAY`](https://www.cockroachlabs.com/docs/v2.0/array) values can now be used with the PostgreSQL binary format. [#23467][#23467] -- Fixed a panic when a query would incorrectly attempt to use an aggregation or window function in [`ON CONFLICT DO UPDATE`](https://www.cockroachlabs.com/docs/v2.0/insert#update-values-on-conflict). [#23658][#23658] -- [`CREATE TABLE AS`](https://www.cockroachlabs.com/docs/v2.0/create-table-as) can now be used with [scalar subqueries](https://www.cockroachlabs.com/docs/v2.0/scalar-expressions#scalar-subqueries); see the sketch after this list. [#23470][#23470] -- CockroachDB no longer attempts connections to former members of the cluster, preventing the associated spurious log messages. [#23605][#23605] -- Fixed a panic when executing `INSERT INTO ... SELECT` queries where the number of columns targeted for insertion does not match the number of columns returned by the [`SELECT`](https://www.cockroachlabs.com/docs/v2.0/select-clause). [#23642][#23642] -- Reduced the risk that a node in the process of crashing could serve inconsistent data. [#23616][#23616] -- Fixed a correctness bug where some `ORDER BY` queries would not return the correct results under concurrent transactional load. [#23602][#23602] -- `RETURNING NOTHING` now properly detects [parallel statement execution](https://www.cockroachlabs.com/docs/v2.0/parallel-statement-execution) opportunities for statements that contain data-modifying statements in subqueries. [#23524][#23524] -- The [`/health` HTTP endpoint](https://www.cockroachlabs.com/docs/v2.0/monitoring-and-alerting#health-endpoints) is now accessible before a node has successfully become part of an initialized cluster, meaning that it now accurately reflects the health of the process rather than the ability of the process to serve queries. This has been the intention all along, but it didn't work until the node had joined a cluster or had `cockroach init` run on it. [#23551][#23551] -- Fixed a panic that could occur with certain types of casts. [#23535][#23535] -- Prevented a hang while crashing when `stderr` is blocked. [#23484][#23484] -- Fixed panics related to distributed execution of queries with `REGCLASS` casts. [#23482][#23482] -- Fixed a panic with computed columns. [#23435][#23435] -- Prevented potential consistency issues when a node is stopped and restarted in rapid succession. [#23339][#23339] -- [Decommissioning a node](https://www.cockroachlabs.com/docs/v2.0/remove-nodes) that has already been terminated now works in all cases. Success previously depended on whether the gateway node "remembered" the absent decommissioned node. [#23378][#23378] - -
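For instance, a scalar subquery in `CREATE TABLE AS` now works; a sketch with hypothetical `customers` and `orders` tables:

```sql
-- The scalar subquery yields a single value used for every row.
CREATE TABLE summary AS
  SELECT name, (SELECT max(amount) FROM orders) AS max_order
  FROM customers;
```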

Build Changes

- -- [Go 1.10](https://golang.org/dl/) is now the minimum version required to build CockroachDB. [#23494][#23494] - -

Doc Updates

- -- Documented the [`SPLIT AT`](https://www.cockroachlabs.com/docs/v2.0/split-at) statement, which forces a key-value layer range split at a specified row in a table or index. [#2704](https://github.com/cockroachdb/docs/pull/2704) -- Various updates to the [`IMPORT`](https://www.cockroachlabs.com/docs/v2.0/import) documentation. [#2676](https://github.com/cockroachdb/docs/pull/2676) -- Various updates to the [`SHOW TRACE`](https://www.cockroachlabs.com/docs/v2.0/show-trace) documentation. [#2674](https://github.com/cockroachdb/docs/pull/2674) -- Clarified the upgrade path for [rolling upgrades](https://www.cockroachlabs.com/docs/v2.0/upgrade-cockroach-version). [#2627](https://github.com/cockroachdb/docs/pull/2627) -- Added more detailed documentation on [ordering query results](https://www.cockroachlabs.com/docs/v2.0/query-order) with `ORDER BY`. [#2658](https://github.com/cockroachdb/docs/pull/2658) -- Documented [inverted indexes](https://www.cockroachlabs.com/docs/v2.0/inverted-indexes) for [`JSONB`](https://www.cockroachlabs.com/docs/v2.0/jsonb) data. [#2608](https://github.com/cockroachdb/docs/pull/2608) - -

Contributors

- -This release includes 48 merged PRs by 21 authors. - -[#23339]: https://github.com/cockroachdb/cockroach/pull/23339 -[#23378]: https://github.com/cockroachdb/cockroach/pull/23378 -[#23404]: https://github.com/cockroachdb/cockroach/pull/23404 -[#23435]: https://github.com/cockroachdb/cockroach/pull/23435 -[#23467]: https://github.com/cockroachdb/cockroach/pull/23467 -[#23470]: https://github.com/cockroachdb/cockroach/pull/23470 -[#23482]: https://github.com/cockroachdb/cockroach/pull/23482 -[#23484]: https://github.com/cockroachdb/cockroach/pull/23484 -[#23494]: https://github.com/cockroachdb/cockroach/pull/23494 -[#23518]: https://github.com/cockroachdb/cockroach/pull/23518 -[#23524]: https://github.com/cockroachdb/cockroach/pull/23524 -[#23535]: https://github.com/cockroachdb/cockroach/pull/23535 -[#23536]: https://github.com/cockroachdb/cockroach/pull/23536 -[#23551]: https://github.com/cockroachdb/cockroach/pull/23551 -[#23556]: https://github.com/cockroachdb/cockroach/pull/23556 -[#23588]: https://github.com/cockroachdb/cockroach/pull/23588 -[#23590]: https://github.com/cockroachdb/cockroach/pull/23590 -[#23602]: https://github.com/cockroachdb/cockroach/pull/23602 -[#23605]: https://github.com/cockroachdb/cockroach/pull/23605 -[#23616]: https://github.com/cockroachdb/cockroach/pull/23616 -[#23627]: https://github.com/cockroachdb/cockroach/pull/23627 -[#23629]: https://github.com/cockroachdb/cockroach/pull/23629 -[#23642]: https://github.com/cockroachdb/cockroach/pull/23642 -[#23658]: https://github.com/cockroachdb/cockroach/pull/23658 -[#23695]: https://github.com/cockroachdb/cockroach/pull/23695 diff --git a/src/current/_includes/releases/v2.0/v2.0-beta.20180326.md b/src/current/_includes/releases/v2.0/v2.0-beta.20180326.md deleted file mode 100644 index 40c06c8bf49..00000000000 --- a/src/current/_includes/releases/v2.0/v2.0-beta.20180326.md +++ /dev/null @@ -1,114 +0,0 @@ -

{{ include.release }}

- -Release Date: {{ include.release_date | date: "%B %-d, %Y" }} - -

General Changes

- -- A CockroachDB process now flushes its logs upon receiving `SIGHUP` instead of `SIGUSR1` as it did previously. This simplifies automation for process monitoring, testing, and backup tools. [#23783][#23783] -- Information about [zone config](https://www.cockroachlabs.com/docs/v2.0/configure-replication-zones) usage is now included in [diagnostic reports](https://www.cockroachlabs.com/docs/v2.0/diagnostics-reporting). [#23750][#23750] - -

Enterprise Edition Changes

- -- Added the `cloudstorage.timeout` [cluster setting](https://www.cockroachlabs.com/docs/v2.0/cluster-settings) for import/export operations. [#23776][#23776] - -

SQL Language Changes

- -- SQL features introduced in CockroachDB v2.0 cannot be used in clusters that are not [fully upgraded to v2.0](https://www.cockroachlabs.com/docs/v2.0/upgrade-cockroach-version). [#24013][#24013] -- Added an `escape` option to the `encode()` and `decode()` [built-in functions](https://www.cockroachlabs.com/docs/v2.0/functions-and-operators); see the sketch after this list. [#23781][#23781] -- Introduced a series of PostgreSQL-compatible, privilege-related [built-in functions](https://www.cockroachlabs.com/docs/v2.0/functions-and-operators). [#23839][#23839] -- Added the `pg_language` table to the `pg_catalog` virtual schema. [#23839][#23839] -- Added the `anyarray` type to the `pg_type` virtual table. [#23836][#23836] -- [Retryable errors](https://www.cockroachlabs.com/docs/v2.0/transactions#error-handling) on schema change operations are now less likely to be returned to clients; more operations are retried internally. [#24050][#24050] - -
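A sketch of the additions (assuming, by analogy with PostgreSQL, that `has_table_privilege()` is among the new privilege built-ins; the user and table names are placeholders):

```sql
-- 'escape' joins the previously supported encoding formats such as 'hex'.
SELECT encode(b'hello', 'escape');
SELECT decode('hello', 'escape');

-- PostgreSQL-compatible privilege introspection (placeholder names).
SELECT has_table_privilege('root', 'mytable', 'SELECT');
```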

Command-Line Changes

- -- Client commands now report a warning if the connection URL is specified via the `--url` flag as well as via another command-line flag. If you use the `--url` flag, other flags can fill in pieces missing from the URL. -- Added per-node heap profiles to the [`debug zip`](https://www.cockroachlabs.com/docs/v2.0/debug-zip) command output. [#23858][#23858] - -

Admin UI Changes

- -- More debug pages are now locked down by the `server.remote_debugging.mode` [cluster setting](https://www.cockroachlabs.com/docs/v2.0/cluster-settings). [#24022][#24022] -- The **Network Diagnostics** report no longer crashes when latencies are very small or on a single-node cluster. [#23868][#23868] -- Fixed a flicker in the **Node Map** as data is reloaded. [#23757][#23757] -- Fixed text overflowing past table cell boundaries on the **Jobs** page. [#23748][#23748] -- Updated the labels for the **Snapshots** graph on the **Replication** dashboard to be more specific. [#23742][#23742] -- Fixed a bug where graphs would not display on clusters with large numbers of nodes. [#24045][#24045] -- [Decommissioned nodes](https://www.cockroachlabs.com/docs/v2.0/remove-nodes) no longer appear in the node selection dropdown on the **Metrics** page. [#23800][#23800] -- Fixed a condition where a persistent trailing dip could appear in graphs over longer time periods. [#23874][#23874] - -

Bug Fixes

- -- Redacted string values in debug API responses. [#24070][#24070] -- Old replicas are now garbage collected in a more timely fashion after a node has been offline for a long time (this bug only exists in recent v2.0 alpha/beta releases, not in v1.1). [#24066][#24066] -- Fixed a bug where some [inverted index](https://www.cockroachlabs.com/docs/v2.0/inverted-indexes) queries could return incorrect results. [#23968][#23968] -- Fixed the behavior of the `@>` operator with arrays and scalars. [#23969][#23969] -- [Inverted indexes](https://www.cockroachlabs.com/docs/v2.0/inverted-indexes) can no longer be hinted for inappropriate queries. [#23989][#23989] -- Enforced minimum privileges for the `admin` role. [#23935][#23935] -- A panic is now avoided if the SQL audit log directory does not exist when the node is started. [#23928][#23928] -- Added support for the PostgreSQL `USING GIN` syntax. [#23910][#23910] -- Fixed a bug where [`INSERT`](https://www.cockroachlabs.com/docs/v2.0/insert)/[`DELETE`](https://www.cockroachlabs.com/docs/v2.0/delete)/[`UPDATE`](https://www.cockroachlabs.com/docs/v2.0/update)/[`UPSERT`](https://www.cockroachlabs.com/docs/v2.0/upsert) could lose updates if run using `WITH` or the `[ ... ]` syntax. [#23895][#23895] -- Ensured that all built-in functions have a unique PostgreSQL OID for compatibility. [#23880][#23880] -- Fixed an error message generated by the experimental SCRUB feature. [#23845][#23845] -- Fixed a bug where [`CREATE VIEW`](https://www.cockroachlabs.com/docs/v2.0/create-view) after [`ALTER TABLE ADD COLUMN`](https://www.cockroachlabs.com/docs/v2.0/add-column) would fail to register the dependency on the newly added column. [#23845][#23845] -- Fixed crashes or incorrect results when combining an `OUTER JOIN` with a `VALUES` clause that contains only `NULL` values on a column (or other subqueries which result in a `NULL` column). [#23838][#23838] -- Fixed a rare nil pointer exception in rebalance target selection. [#23807][#23807] -- The [`cockroach zone set`](https://www.cockroachlabs.com/docs/v2.0/configure-replication-zones) command now automatically retries if it encounters an error while setting zone configurations. [#23782][#23782] -- Fixed a bug where closing a connection in the middle of executing a query sometimes crashed the server. [#23761][#23761] -- Fixed a bug where expressions could be mistakenly considered equal despite their types being different. [#23722][#23722] -- Fixed a bug where the `RANGE COUNT` metric on the Cluster Overview page of the Admin UI could significantly undercount the number of ranges. [#23746][#23746] -- The client URL reported by [`cockroach start`](https://www.cockroachlabs.com/docs/v2.0/start-a-node) no longer includes the `application_name` option. [#23894][#23894] - -

Performance Improvements

- -- Improved cluster performance during overload scenarios. [#23884][#23884] - -

Doc Updates

- -- Added a local cluster tutorial demonstrating [JSON Support](https://www.cockroachlabs.com/docs/v2.0/demo-json-support). [#2716](https://github.com/cockroachdb/docs/pull/2716) -- Added full documentation for the [`VALIDATE CONSTRAINT`](https://www.cockroachlabs.com/docs/v2.0/validate-constraint) statement. [#2730](https://github.com/cockroachdb/docs/pull/2730) - -
- -

Contributors

- -This release includes 64 merged PRs by 23 authors. We would like to thank contributors from the CockroachDB community, with special thanks to first-time contributor Bob Vawter. - -
- -[#23577]: https://github.com/cockroachdb/cockroach/pull/23577 -[#23722]: https://github.com/cockroachdb/cockroach/pull/23722 -[#23742]: https://github.com/cockroachdb/cockroach/pull/23742 -[#23746]: https://github.com/cockroachdb/cockroach/pull/23746 -[#23748]: https://github.com/cockroachdb/cockroach/pull/23748 -[#23750]: https://github.com/cockroachdb/cockroach/pull/23750 -[#23757]: https://github.com/cockroachdb/cockroach/pull/23757 -[#23761]: https://github.com/cockroachdb/cockroach/pull/23761 -[#23776]: https://github.com/cockroachdb/cockroach/pull/23776 -[#23781]: https://github.com/cockroachdb/cockroach/pull/23781 -[#23782]: https://github.com/cockroachdb/cockroach/pull/23782 -[#23783]: https://github.com/cockroachdb/cockroach/pull/23783 -[#23800]: https://github.com/cockroachdb/cockroach/pull/23800 -[#23807]: https://github.com/cockroachdb/cockroach/pull/23807 -[#23836]: https://github.com/cockroachdb/cockroach/pull/23836 -[#23838]: https://github.com/cockroachdb/cockroach/pull/23838 -[#23839]: https://github.com/cockroachdb/cockroach/pull/23839 -[#23845]: https://github.com/cockroachdb/cockroach/pull/23845 -[#23858]: https://github.com/cockroachdb/cockroach/pull/23858 -[#23868]: https://github.com/cockroachdb/cockroach/pull/23868 -[#23874]: https://github.com/cockroachdb/cockroach/pull/23874 -[#23880]: https://github.com/cockroachdb/cockroach/pull/23880 -[#23884]: https://github.com/cockroachdb/cockroach/pull/23884 -[#23894]: https://github.com/cockroachdb/cockroach/pull/23894 -[#23895]: https://github.com/cockroachdb/cockroach/pull/23895 -[#23910]: https://github.com/cockroachdb/cockroach/pull/23910 -[#23928]: https://github.com/cockroachdb/cockroach/pull/23928 -[#23935]: https://github.com/cockroachdb/cockroach/pull/23935 -[#23968]: https://github.com/cockroachdb/cockroach/pull/23968 -[#23969]: https://github.com/cockroachdb/cockroach/pull/23969 -[#23989]: https://github.com/cockroachdb/cockroach/pull/23989 -[#24013]: https://github.com/cockroachdb/cockroach/pull/24013 -[#24022]: https://github.com/cockroachdb/cockroach/pull/24022 -[#24045]: https://github.com/cockroachdb/cockroach/pull/24045 -[#24050]: https://github.com/cockroachdb/cockroach/pull/24050 -[#24066]: https://github.com/cockroachdb/cockroach/pull/24066 -[#24070]: https://github.com/cockroachdb/cockroach/pull/24070 diff --git a/src/current/_includes/releases/v2.0/v2.0-rc.1.md b/src/current/_includes/releases/v2.0/v2.0-rc.1.md deleted file mode 100644 index 3706211c14d..00000000000 --- a/src/current/_includes/releases/v2.0/v2.0-rc.1.md +++ /dev/null @@ -1,39 +0,0 @@ -

{{ include.release }}

- -Release Date: {{ include.release_date | date: "%B %-d, %Y" }} - -This is the first release candidate for CockroachDB v2.0. All known bugs have either been fixed or pushed to a future release, with large bugs documented as [known limitations](https://www.cockroachlabs.com/docs/v2.0/known-limitations). - -- Bug fixes and stability improvements. - -

Admin UI Changes

- -- Improved the **Node Map** to provide guidance when an enterprise license or additional configuration is required. [#24271][#24271] -- Added the available storage capacity to the **Cluster Overview** metrics. [#24254][#24254] - -

Bug Fixes

- -- Fixed a bug in [`RESTORE`](https://www.cockroachlabs.com/docs/v2.0/restore) that could lead to missing rows if the `RESTORE` was interrupted. [#24089][#24089] -- New nodes running CockroachDB v2.0 can now join clusters that contain nodes running v1.1. [#24257][#24257] -- Fixed a crash in [`cockroach zone ls`](https://www.cockroachlabs.com/docs/v2.0/configure-replication-zones) that would happen if a table with a zone config on it had been deleted but not yet garbage collected. (This was broken in v2.0 alphas, not in v1.1.) [#24180][#24180] -- Fixed a bug where zooming on the **Node Map** could break after zooming out to the maximum extent. [#24183][#24183] -- Fixed a crash while performing rolling restarts. [#24260][#24260] -- Fixed a bug where [privileges](https://www.cockroachlabs.com/docs/v2.0/privileges) were sometimes set incorrectly after upgrading from an older release. [#24393][#24393] - -
- -

Contributors

- -This release includes 11 merged PRs by 10 authors. We would like to thank all contributors from the CockroachDB community, with special thanks to first-time contributor Vijay Karthik. - -
- -[#24089]: https://github.com/cockroachdb/cockroach/pull/24089 -[#24180]: https://github.com/cockroachdb/cockroach/pull/24180 -[#24183]: https://github.com/cockroachdb/cockroach/pull/24183 -[#24254]: https://github.com/cockroachdb/cockroach/pull/24254 -[#24257]: https://github.com/cockroachdb/cockroach/pull/24257 -[#24260]: https://github.com/cockroachdb/cockroach/pull/24260 -[#24271]: https://github.com/cockroachdb/cockroach/pull/24271 -[#24393]: https://github.com/cockroachdb/cockroach/pull/24393 diff --git a/src/current/_includes/releases/v2.0/v2.0.0.md b/src/current/_includes/releases/v2.0/v2.0.0.md deleted file mode 100644 index dc8b33745d5..00000000000 --- a/src/current/_includes/releases/v2.0/v2.0.0.md +++ /dev/null @@ -1,93 +0,0 @@ -

{{ include.release }}

- -Release Date: {{ include.release_date | date: "%B %-d, %Y" }} - -With the release of CockroachDB v2.0, we’ve made significant performance improvements, expanded our PostgreSQL compatibility by adding support for JSON (among other types), and provided functionality for managing multi-regional clusters in production. - -- Read more about these changes in the [v2.0 blog post](https://www.cockroachlabs.com/blog/cockroachdb-2-0-release/). -- Check out a [summary of the most significant user-facing changes](#v2-0-0-summary). -- Then [upgrade to CockroachDB v2.0](https://www.cockroachlabs.com/docs/v2.0/upgrade-cockroach-version). - -

Summary

- -This section summarizes the most significant user-facing changes in v2.0.0. For a complete list of features and changes, including bug fixes and performance improvements, see the [release notes]({% link releases/index.md %}#testing-releases) for previous testing releases. - -- [Enterprise Features](#v2-0-0-enterprise-features) -- [Core Features](#v2-0-0-core-features) -- [Backward-Incompatible Changes](#v2-0-0-backward-incompatible-changes) -- [Known Limitations](#v2-0-0-known-limitations) -- [Documentation Updates](#v2-0-0-documentation-updates) - - - -

Enterprise Features

- -These new features require an [enterprise license](https://www.cockroachlabs.com/docs/v2.0/enterprise-licensing). You can [register for a 30-day trial license](https://www.cockroachlabs.com/get-cockroachdb/enterprise/). - -Feature | Description ---------|------------ -[Table Partitioning](https://www.cockroachlabs.com/docs/v2.0/partitioning) | Table partitioning gives you row-level control of how and where your data is stored. This feature can be used to keep data close to users, thereby reducing latency, or to store infrequently-accessed data on slower and cheaper storage, thereby reducing costs. -[Node Map](https://www.cockroachlabs.com/docs/v2.0/enable-node-map) | The **Node Map** in the Admin UI visualizes the geographical configuration of a multi-region cluster by plotting the node localities on a world map. This feature provides real-time cluster metrics, with the ability to drill down to individual nodes to monitor and troubleshoot cluster health and performance. -[Role-Based Access Control](https://www.cockroachlabs.com/docs/v2.0/roles) | Roles simplify access control by letting you assign SQL privileges to groups of users rather than to individuals. -[Point-in-time Backup/Restore](https://www.cockroachlabs.com/docs/v2.0/restore#point-in-time-restore-new-in-v2-0) (Beta) | Data can now be restored as it existed at a specific point-in-time within the [revision history of a backup](https://www.cockroachlabs.com/docs/v2.0/backup#backups-with-revision-history-new-in-v2-0).<br><br>This is a **beta** feature. It is currently undergoing continued testing. Please [file a GitHub issue](https://www.cockroachlabs.com/docs/v2.0/file-an-issue) with us if you identify a bug. - -

Core Features

- -These new features are freely available in the core version and do not require an enterprise license. - -

SQL

- -Feature | Description ---------|------------ -[JSON Support](https://www.cockroachlabs.com/docs/v2.0/demo-json-support) | The [`JSONB`](https://www.cockroachlabs.com/docs/v2.0/jsonb) data type and [inverted indexes](https://www.cockroachlabs.com/docs/v2.0/inverted-indexes) give you the flexibility to store and efficiently query semi-structured data. -[Sequences](https://www.cockroachlabs.com/docs/v2.0/create-sequence) | Sequences generate sequential integers according to defined rules. They are generally used for creating numeric primary keys. -[SQL Audit Logging](https://www.cockroachlabs.com/docs/v2.0/sql-audit-logging) (Experimental) | SQL audit logging gives you detailed information about queries being executed against your system. This feature is especially useful when you want to log all queries that are run against a table containing personally identifiable information (PII).<br><br>This is an **experimental** feature. Its interface and output are subject to change. -[Common Table Expressions](https://www.cockroachlabs.com/docs/v2.0/common-table-expressions) | Common Table Expressions (CTEs) simplify the definition and use of subqueries. They can be used in combination with [`SELECT` clauses](https://www.cockroachlabs.com/docs/v2.0/select-clause) and [`INSERT`](https://www.cockroachlabs.com/docs/v2.0/insert), [`DELETE`](https://www.cockroachlabs.com/docs/v2.0/delete), [`UPDATE`](https://www.cockroachlabs.com/docs/v2.0/update) and [`UPSERT`](https://www.cockroachlabs.com/docs/v2.0/upsert) statements. -[Computed Columns](https://www.cockroachlabs.com/docs/v2.0/computed-columns) | Computed columns store data generated from other columns by an expression that's included in the column definition. They are especially useful in combination with [table partitioning](https://www.cockroachlabs.com/docs/v2.0/partitioning), [`JSONB`](https://www.cockroachlabs.com/docs/v2.0/jsonb) columns, and [secondary indexes](https://www.cockroachlabs.com/docs/v2.0/indexes). -[Foreign Key Actions](https://www.cockroachlabs.com/docs/v2.0/foreign-key#foreign-key-actions-new-in-v2-0) | The `ON UPDATE` and `ON DELETE` foreign key actions control what happens to a constrained column when the column it's referencing (the foreign key) is deleted or updated. -[Virtual Schemas](https://www.cockroachlabs.com/docs/v2.0/sql-name-resolution#logical-schemas-and-namespaces) | For PostgreSQL compatibility, CockroachDB now supports a three-level structure for names: database name > virtual schema name > object name. The new [`SHOW SCHEMAS`](https://www.cockroachlabs.com/docs/v2.0/show-schemas) statement can be used to list all virtual schemas for a given database. -[`IMPORT`](https://www.cockroachlabs.com/docs/v2.0/import) | The `IMPORT` statement now imports tabular data in a fully distributed fashion, and import jobs can now be [paused](https://www.cockroachlabs.com/docs/v2.0/pause-job), [resumed](https://www.cockroachlabs.com/docs/v2.0/resume-job), and [cancelled](https://www.cockroachlabs.com/docs/v2.0/cancel-job). -[`INET`](https://www.cockroachlabs.com/docs/v2.0/inet) | The `INET` data type stores an IPv4 or IPv6 address. -[`TIME`](https://www.cockroachlabs.com/docs/v2.0/time) | The `TIME` data type stores the time of day without a time zone. - -
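-
-As a minimal sketch of the JSON support listed above (the table, index, and data are hypothetical):
-
-~~~sql
-> CREATE TABLE users (id UUID PRIMARY KEY DEFAULT gen_random_uuid(), profile JSONB);
-> CREATE INVERTED INDEX user_profiles ON users (profile);
-> INSERT INTO users (profile) VALUES ('{"city": "Paris", "age": 30}');
-> SELECT profile FROM users WHERE profile @> '{"city": "Paris"}';
-~~~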

Operations

- -Feature | Description ---------|------------ -[Node Readiness Endpoint](https://www.cockroachlabs.com/docs/v2.0/monitoring-and-alerting#health-ready-1) | The new `/health?ready=1` endpoint returns an `HTTP 503 Service Unavailable` status response code with an error when a node is being decommissioned or is in the process of shutting down and is therefore not able to accept SQL connections and execute queries. This is especially useful for making sure [load balancers](https://www.cockroachlabs.com/docs/v2.0/recommended-production-settings#load-balancing) do not direct traffic to nodes that are live but not "ready", which is a necessary check during [rolling upgrades](https://www.cockroachlabs.com/docs/v2.0/upgrade-cockroach-version). -[Node Decommissioning](https://www.cockroachlabs.com/docs/v2.0/remove-nodes) | Nodes that have been decommissioned and stopped no longer appear in Admin UI and command-line interface metrics. -[Per-Replica Constraints in Replication Zones](https://www.cockroachlabs.com/docs/v2.0/configure-replication-zones#scope-of-constraints) | When defining a replication zone, unique constraints can be defined for each affected replica, meaning you can effectively pick the exact location of each replica. -[Replication Zone for "Liveness" Range](https://www.cockroachlabs.com/docs/v2.0/configure-replication-zones#create-a-replication-zone-for-a-system-range) | Clusters now come with a pre-defined replication zone for the "liveness" range, which contains the authoritative information about which nodes are live at any given time. -[Timeseries Data Controls](https://www.cockroachlabs.com/docs/v2.0/operational-faqs#can-i-reduce-or-disable-the-storage-of-timeseries-data-new-in-v2-0) | It is now possible to reduce the amount of timeseries data stored by a CockroachDB cluster or to disable the storage of timeseries data entirely. The latter is recommended only when using a third-party tool such as Prometheus for timeseries monitoring. - -
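-
-For instance, per the Timeseries Data Controls row above, storage of timeseries data can be disabled entirely; a hedged sketch, recommended only when a third-party monitoring tool is already in place:
-
-~~~sql
-> SET CLUSTER SETTING timeseries.storage.enabled = false;
-~~~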

Backward-Incompatible Changes

- -Change | Description --------|------------ -Replication Zones | [Positive replication zone constraints](https://www.cockroachlabs.com/docs/v2.0/configure-replication-zones#replication-constraints) no longer work. Any existing positive constraints will be ignored. This change should not impact existing deployments since positive constraints have not been documented or supported for some time. -Casts from `BYTES` to `STRING` | Casting between these types now works the same way as in PostgreSQL. New functions `encode()` and `decode()` are available to replace the former functionality. -`NaN` Comparisons | `NaN` comparisons have been redefined to be compatible with PostgreSQL. `NaN` is now equal to itself and sorts before all other non-NULL values. -[`DROP USER`](https://www.cockroachlabs.com/docs/v2.0/drop-user) | It is no longer possible to drop a user with grants; the user's grants must first be [revoked](https://www.cockroachlabs.com/docs/v2.0/revoke). -[Cluster Settings](https://www.cockroachlabs.com/docs/v2.0/cluster-settings) | The obsolete `kv.gc.batch_size` cluster setting has been removed. -Environment Variables | The `COCKROACH_METRICS_SAMPLE_INTERVAL` environment variable has been removed. Users that relied on it should reduce the value for the `timeseries.resolution_10s.storage_duration` [cluster setting](https://www.cockroachlabs.com/docs/v2.0/cluster-settings) instead. -[Sequences](https://www.cockroachlabs.com/docs/v2.0/create-sequence) | As of the [v1.2-alpha.20171113](#v1-2-alpha-20171113) release, how sequences are stored in the key-value layer changed. Sequences created prior to that release must therefore be dropped and recreated. Since a sequence cannot be dropped while it is being used in a column's [`DEFAULT`](https://www.cockroachlabs.com/docs/v2.0/default-value) expression, those expressions must be dropped before the sequence is dropped, and recreated after the sequence is recreated. The `setval()` function can be used to set the value of a sequence to what it was previously. -[Reserved Keywords](https://www.cockroachlabs.com/docs/v2.0/sql-grammar#reserved_keyword) | `ROLE`, `VIRTUAL`, and `WORK` have been added as reserved keywords and are no longer allowed as [identifiers](https://www.cockroachlabs.com/docs/v2.0/keywords-and-identifiers). - -
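-
-For the Sequences change above, a minimal sketch of the drop-and-recreate steps (the table, sequence, and restored value are hypothetical):
-
-~~~sql
-> ALTER TABLE accounts ALTER COLUMN id DROP DEFAULT;
-> DROP SEQUENCE accounts_seq;
-> CREATE SEQUENCE accounts_seq;
-> SELECT setval('accounts_seq', 1000); -- restore the sequence's previous value
-> ALTER TABLE accounts ALTER COLUMN id SET DEFAULT nextval('accounts_seq');
-~~~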

Known Limitations

- -For information about limitations we've identified in CockroachDB v2.0, with suggested workarounds where applicable, see [Known Limitations](https://www.cockroachlabs.com/docs/v2.0/known-limitations). - -

Documentation Updates

- -Topic | Description -------|------------ -[Production Checklist](https://www.cockroachlabs.com/docs/v2.0/recommended-production-settings) | This topic now provides cloud-specific hardware, security, load balancing, monitoring and alerting, and clock synchronization recommendations as well as expanded cluster topology guidance. Related [deployment tutorials](https://www.cockroachlabs.com/docs/v2.0/manual-deployment) have been enhanced with much of this information as well. -[Monitoring and Alerting](https://www.cockroachlabs.com/docs/v2.0/monitoring-and-alerting) | This new topic explains available tools for monitoring the overall health and performance of a cluster and critical events and metrics to alert on. -[Common Errors](https://www.cockroachlabs.com/docs/v2.0/common-errors) | This new topic helps you understand and resolve errors you might encounter, including retryable and ambiguous errors for transactions. -[SQL Performance](https://www.cockroachlabs.com/docs/v2.0/performance-best-practices-overview) | This new topic provides best practices for optimizing SQL performance in CockroachDB. -[SQL Standard Comparison](https://www.cockroachlabs.com/docs/v2.0/sql-feature-support) | This new topic lists which SQL standard features are supported, partially supported, and unsupported by CockroachDB. -[Selection Queries](https://www.cockroachlabs.com/docs/v2.0/selection-queries) | This new topic explains the function and syntax of queries and operations involved in reading and processing data in CockroachDB, alongside more detailed information about [ordering query results](https://www.cockroachlabs.com/docs/v2.0/query-order), [limiting query results](https://www.cockroachlabs.com/docs/v2.0/limit-offset), [subqueries](https://www.cockroachlabs.com/docs/v2.0/subqueries), and [join expressions](https://www.cockroachlabs.com/docs/v2.0/joins). diff --git a/src/current/_includes/releases/v2.0/v2.0.1.md b/src/current/_includes/releases/v2.0/v2.0.1.md deleted file mode 100644 index c74d2d8a79a..00000000000 --- a/src/current/_includes/releases/v2.0/v2.0.1.md +++ /dev/null @@ -1,78 +0,0 @@ -

{{ include.release }}

- -Release Date: {{ include.release_date | date: "%B %-d, %Y" }} - -

General Changes

- -- The new `server.clock.persist_upper_bound_interval` [cluster setting](https://www.cockroachlabs.com/docs/v2.0/cluster-settings) can be used to guarantee monotonic wall time across server restarts (see the example below). [#24624][#24624] -- The new `server.clock.forward_jump_check_enabled` [cluster setting](https://www.cockroachlabs.com/docs/v2.0/cluster-settings) can be used to cause nodes to panic on clock jumps. [#24606][#24606] -- Prevented execution errors reporting a missing `libtinfo.so.5` on Linux systems. [#24531][#24531] - -
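-
-For example, both clock settings can be set cluster-wide; the interval shown is illustrative only:
-
-~~~sql
-> SET CLUSTER SETTING server.clock.persist_upper_bound_interval = '10s';
-> SET CLUSTER SETTING server.clock.forward_jump_check_enabled = true;
-~~~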

Enterprise Edition Changes

- -- It is now possible to [`RESTORE`](https://www.cockroachlabs.com/docs/v2.0/restore) views when using the `into_db` option. [#24590][#24590] {% comment %}doc{% endcomment %} -- The new `jobs.registry.leniency` [cluster setting](https://www.cockroachlabs.com/docs/v2.0/cluster-settings) can be used to allow long-running [`IMPORT`](https://www.cockroachlabs.com/docs/v2.0/import) jobs to survive temporary node saturation. [#24505][#24505] -- Relaxed the limitation on using [`BACKUP`](https://www.cockroachlabs.com/docs/v2.0/backup) in a mixed version cluster. [#24515][#24515] - -

SQL Language Changes

- -- Improved the error message returned on object creation when no current database is set or only invalid schemas are in the `search_path`. [#24812][#24812] -- The `current_schema()` and `current_schemas()` [built-in functions](https://www.cockroachlabs.com/docs/v2.0/functions-and-operators) now only consider valid schemas, like PostgreSQL does. [#24758][#24758] -- The experimental SQL features `SHOW TESTING_RANGES` and `ALTER ... TESTING_RELOCATE` have been renamed [`SHOW EXPERIMENTAL_RANGES`](https://www.cockroachlabs.com/docs/v2.0/show-experimental-ranges) and `ALTER ... EXPERIMENTAL_RELOCATE`. [#24699][#24699] -- `ROLE`, `VIRTUAL`, and `WORK` are no longer reserved keywords and can again be used as unrestricted names. [#24665][#24665] [#24549][#24549] - -
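-
-For example, using the renamed statement (the table name is hypothetical):
-
-~~~sql
-> SHOW EXPERIMENTAL_RANGES FROM TABLE kv;
-~~~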

Command-Line Changes

- -- When [`cockroach gen haproxy`](https://www.cockroachlabs.com/docs/v2.0/generate-cockroachdb-resources) is run, if an `haproxy.cfg` file already exists in the current directory, it now gets fully overwritten instead of potentially resulting in an unusable config. [#24336][#24336] {% comment %}doc{% endcomment %} - -

Bug Fixes

- -- Fixed a bug when using fractional units (e.g., `0.5GiB`) for the `--cache` and `--sql-max-memory` flags of [`cockroach start`](https://www.cockroachlabs.com/docs/v2.0/start-a-node). [#24388][#24388] -- Fixed the handling of role membership lookups within transactions. [#24334][#24334] -- Fixed a bug causing some lookup join queries to report incorrect type errors. [#24825][#24825] -- `ALTER INDEX ... RENAME` can now be used on the primary index. [#24777][#24777] -- Fixed a panic involving [inverted index](https://www.cockroachlabs.com/docs/v2.0/inverted-indexes) queries using the `->` operator. [#24596][#24596] -- Fixed a panic involving [inverted index](https://www.cockroachlabs.com/docs/v2.0/inverted-indexes) queries over `NULL`. [#24566][#24566] -- Fixed a bug preventing [inverted index](https://www.cockroachlabs.com/docs/v2.0/inverted-indexes) queries whose root has a single entry or element but multiple children overall. [#24376][#24376] -- [`JSONB`](https://www.cockroachlabs.com/docs/v2.0/jsonb) values can now be cast to [`STRING`](https://www.cockroachlabs.com/docs/v2.0/string) values. [#24553][#24553] -- Prevented executing distributed SQL operations on draining nodes. [#23916][#23916] -- Fixed a panic caused by a `WHERE` condition that requires a column to equal a specific value and at the same time equal another column. [#24517][#24517] -- Fixed a panic caused by passing a `Name` type to `has_database_privilege()`. [#24270][#24270] -- Fixed a bug causing index backfills to fail in a loop after exceeding the GC TTL of their source table. [#24427][#24427] -- Fixed a panic caused by null config zones in diagnostics reporting. [#24526][#24526] - -

Performance Improvements

- -- Some [`SELECT`s](https://www.cockroachlabs.com/docs/v2.0/select-clause) with limits no longer require a second low-level scan, resulting in much faster execution. [#24796][#24796] - -

Contributors

- -This release includes 39 merged PRs by 16 authors. Special thanks to Vijay Karthik from the CockroachDB community. - -[#23916]: https://github.com/cockroachdb/cockroach/pull/23916 -[#24221]: https://github.com/cockroachdb/cockroach/pull/24221 -[#24270]: https://github.com/cockroachdb/cockroach/pull/24270 -[#24334]: https://github.com/cockroachdb/cockroach/pull/24334 -[#24336]: https://github.com/cockroachdb/cockroach/pull/24336 -[#24376]: https://github.com/cockroachdb/cockroach/pull/24376 -[#24388]: https://github.com/cockroachdb/cockroach/pull/24388 -[#24427]: https://github.com/cockroachdb/cockroach/pull/24427 -[#24505]: https://github.com/cockroachdb/cockroach/pull/24505 -[#24515]: https://github.com/cockroachdb/cockroach/pull/24515 -[#24517]: https://github.com/cockroachdb/cockroach/pull/24517 -[#24526]: https://github.com/cockroachdb/cockroach/pull/24526 -[#24531]: https://github.com/cockroachdb/cockroach/pull/24531 -[#24549]: https://github.com/cockroachdb/cockroach/pull/24549 -[#24553]: https://github.com/cockroachdb/cockroach/pull/24553 -[#24566]: https://github.com/cockroachdb/cockroach/pull/24566 -[#24590]: https://github.com/cockroachdb/cockroach/pull/24590 -[#24596]: https://github.com/cockroachdb/cockroach/pull/24596 -[#24606]: https://github.com/cockroachdb/cockroach/pull/24606 -[#24624]: https://github.com/cockroachdb/cockroach/pull/24624 -[#24665]: https://github.com/cockroachdb/cockroach/pull/24665 -[#24699]: https://github.com/cockroachdb/cockroach/pull/24699 -[#24758]: https://github.com/cockroachdb/cockroach/pull/24758 -[#24777]: https://github.com/cockroachdb/cockroach/pull/24777 -[#24796]: https://github.com/cockroachdb/cockroach/pull/24796 -[#24812]: https://github.com/cockroachdb/cockroach/pull/24812 -[#24825]: https://github.com/cockroachdb/cockroach/pull/24825 diff --git a/src/current/_includes/releases/v2.0/v2.0.2.md b/src/current/_includes/releases/v2.0/v2.0.2.md deleted file mode 100644 index a2e1df058e8..00000000000 --- a/src/current/_includes/releases/v2.0/v2.0.2.md +++ /dev/null @@ -1,89 +0,0 @@ -

{{ include.release }}

- -Release Date: {{ include.release_date | date: "%B %-d, %Y" }} - -

General Changes

- -- The header of new log files generated by [`cockroach start`](https://www.cockroachlabs.com/docs/v2.0/start-a-node) will now include the cluster ID once it has been determined. [#24982][#24982] -- The cluster ID is now reported with tag `[config]` in the first log file, not only when log files are rotated. [#24982][#24982] -- Stopped spamming the server logs with "error closing gzip response writer" messages. [#25108][#25108] - -

SQL Language Changes

- -- Added more ways to specify an index name for statements that require one (e.g., `DROP INDEX`, `ALTER INDEX ... RENAME`, etc.), improving PostgreSQL compatibility. [#24817][#24817] {% comment %}doc{% endcomment %} -- Clarified the error message produced upon accessing a virtual schema with no database prefix (e.g., when `database` is not set). [#24809][#24809] -- `STORED` is no longer a reserved keyword and can again be used as an unrestricted name for databases, tables, and columns. [#24864][#24864] {% comment %}doc{% endcomment %} -- Errors detected by `SHOW SYNTAX` are now tracked internally like other SQL errors. [#24900][#24900] -- [`IMPORT`](https://www.cockroachlabs.com/docs/v2.0/import) now supports hex-encoded byte literals for [`BYTES`](https://www.cockroachlabs.com/docs/v2.0/bytes) columns. [#25063][#25063] {% comment %}doc{% endcomment %} -- [Collated strings](https://www.cockroachlabs.com/docs/v2.0/collate) can now be used in `WHERE` clauses on indexed columns (see the sketch below). [#25175][#25175] {% comment %}doc{% endcomment %} -- The Level and Type columns of [`EXPLAIN (VERBOSE)`](https://www.cockroachlabs.com/docs/v2.0/explain) results are now hidden; if they are needed, they can be `SELECT`ed explicitly. [#25206][#25206] {% comment %}doc{% endcomment %} - -
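-
-A minimal sketch of the collated-strings change above (the schema and collation are hypothetical):
-
-~~~sql
-> CREATE TABLE names (name STRING COLLATE en PRIMARY KEY);
-> SELECT name FROM names WHERE name = ('Anna' COLLATE en);
-~~~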

Admin UI Changes

- -- Added RocksDB compactions/flushes to storage graphs. [#25457][#25457] - -

Bug Fixes

- -- It is once again possible to use a simply qualified table name in qualified stars (e.g., `SELECT mydb.kv.* FROM kv`) for compatibility with CockroachDB v1.x. [#24842][#24842] -- Fixed a scenario in which a node could deadlock while starting up. [#24831][#24831] -- Ranges in partitioned tables now properly split to respect their configured maximum size. [#24912][#24912] -- Some kinds of schema change errors that were stuck in a permanent loop now correctly fail. [#25015][#25015] -- When [adding a column](https://www.cockroachlabs.com/docs/v2.0/add-column), CockroachDB now verifies that the column is referenced by no more than one foreign key. Existing tables with a column that is used by multiple foreign key constraints should be manually changed to have at most one foreign key per column. [#25079][#25079] -- CockroachDB now properly reports an error when using the internal-only functions `final_variance()` and `final_stddev()` instead of causing a crash. [#25218][#25218] -- The `constraint_schema` column in `information_schema.constraint_column_usage` now displays the constraint's schema instead of its catalog. [#25220][#25220] -- Fixed a panic caused by certain queries containing `OFFSET` and `ORDER BY`. [#25238][#25238] -- `BEGIN; RELEASE SAVEPOINT` now returns an error instead of causing a crash. [#25251][#25251] -- Fixed a rare `segfault` that occurred when reading from an invalid memory location returned from C++. [#25361][#25361] -- Fixed a bug where `IS DISTINCT FROM` sometimes did not return `NULL` values that pass the condition. [#25339][#25339] -- Restarting a CockroachDB server on Windows no longer fails due to file system locks in the store directory. [#25439][#25439] -- Prevented the consistency checker from deadlocking. This would previously manifest itself as a steady number of replicas queued for consistency checking on one or more nodes and would resolve by restarting the affected nodes. [#25474][#25474] -- Fixed problems with [`IMPORT`](https://www.cockroachlabs.com/docs/v2.0/import) sometimes failing after node decommissioning. [#25307][#25307] -- Fixed a bug causing `PREPARE` to hang when run in the same transaction as a `CREATE TABLE` statement. [#24874][#24874] - -

Build Changes

- -- Build metadata, like the commit SHA and build time, is properly injected into the binary when using Go 1.10 and building from a symlink. [#25062][#25062] - -

Doc Updates

- -- Improved the documentation of the `now()`, `current_time()`, `current_date()`, `current_timestamp()`, `clock_timestamp()`, `statement_timestamp()`, `cluster_logical_timestamp()`, and `age()` [built-in functions](https://www.cockroachlabs.com/docs/v2.0/functions-and-operators). [#25383][#25383] [#25145][#25145] - -
- -

Contributors

- -This release includes 42 merged PRs by 16 authors. We would like to thank the following contributors from the CockroachDB community: - -- Garvit Juniwal -- Jingguo Yao - -
- -[#24809]: https://github.com/cockroachdb/cockroach/pull/24809 -[#24817]: https://github.com/cockroachdb/cockroach/pull/24817 -[#24831]: https://github.com/cockroachdb/cockroach/pull/24831 -[#24842]: https://github.com/cockroachdb/cockroach/pull/24842 -[#24864]: https://github.com/cockroachdb/cockroach/pull/24864 -[#24874]: https://github.com/cockroachdb/cockroach/pull/24874 -[#24900]: https://github.com/cockroachdb/cockroach/pull/24900 -[#24912]: https://github.com/cockroachdb/cockroach/pull/24912 -[#24982]: https://github.com/cockroachdb/cockroach/pull/24982 -[#25015]: https://github.com/cockroachdb/cockroach/pull/25015 -[#25062]: https://github.com/cockroachdb/cockroach/pull/25062 -[#25063]: https://github.com/cockroachdb/cockroach/pull/25063 -[#25079]: https://github.com/cockroachdb/cockroach/pull/25079 -[#25108]: https://github.com/cockroachdb/cockroach/pull/25108 -[#25145]: https://github.com/cockroachdb/cockroach/pull/25145 -[#25175]: https://github.com/cockroachdb/cockroach/pull/25175 -[#25206]: https://github.com/cockroachdb/cockroach/pull/25206 -[#25218]: https://github.com/cockroachdb/cockroach/pull/25218 -[#25220]: https://github.com/cockroachdb/cockroach/pull/25220 -[#25238]: https://github.com/cockroachdb/cockroach/pull/25238 -[#25251]: https://github.com/cockroachdb/cockroach/pull/25251 -[#25307]: https://github.com/cockroachdb/cockroach/pull/25307 -[#25339]: https://github.com/cockroachdb/cockroach/pull/25339 -[#25361]: https://github.com/cockroachdb/cockroach/pull/25361 -[#25383]: https://github.com/cockroachdb/cockroach/pull/25383 -[#25439]: https://github.com/cockroachdb/cockroach/pull/25439 -[#25457]: https://github.com/cockroachdb/cockroach/pull/25457 -[#25474]: https://github.com/cockroachdb/cockroach/pull/25474 diff --git a/src/current/_includes/releases/v2.0/v2.0.3.md b/src/current/_includes/releases/v2.0/v2.0.3.md deleted file mode 100644 index ca3a5d9af09..00000000000 --- a/src/current/_includes/releases/v2.0/v2.0.3.md +++ /dev/null @@ -1,67 +0,0 @@ -

{{ include.release }}

- -Release Date: {{ include.release_date | date: "%B %-d, %Y" }} - -

General Changes

- -- The new `compactor.threshold_bytes` and `compactor.max_record_age` [cluster settings](https://www.cockroachlabs.com/docs/v2.0/cluster-settings) can be used to configure the compactor. [#25458][#25458] -- The new `cluster.preserve_downgrade_option` [cluster setting](https://www.cockroachlabs.com/docs/v2.0/cluster-settings) makes it possible to preserve the option to downgrade after [performing a rolling upgrade to v2.1](https://www.cockroachlabs.com/docs/stable/upgrade-cockroach-version) (see the example below). [#25811][#25811] - -
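-
-For example, to preserve the option to downgrade before rolling into v2.1:
-
-~~~sql
-> SET CLUSTER SETTING cluster.preserve_downgrade_option = '2.0';
-~~~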

SQL Language Changes

- -- Prevented [`DROP TABLE`](https://www.cockroachlabs.com/docs/v2.0/drop-table) from using too much CPU. [#25852][#25852] - -

Command-Line Changes

- -- The [`cockroach sql`](https://www.cockroachlabs.com/docs/v2.0/use-the-built-in-sql-client) command no longer prompts for a password when a certificate is provided. [#26232][#26232] -- The [`cockroach quit`](https://www.cockroachlabs.com/docs/v2.0/stop-a-node) command now prints warning messages to the standard error stream, not to standard output. [#26163][#26163] - -

Bug Fixes

- -- Prevented the internal gossip network from being partitioned by making it much less likely that nodes in the network could forget about each other. [#25521][#25521] -- Prevented spurious `BudgetExceededErrors` for some queries that read a lot of JSON data from disk. [#25719][#25719] -- Fixed a crash in some cases when using a `GROUP BY` with `HAVING`. [#25654][#25654] -- Fixed a crash caused by inserting data into a table with [computed columns](https://www.cockroachlabs.com/docs/v2.0/computed-columns) that reference other columns that weren't present in the `INSERT` statement. [#25807][#25807] -- [`UPSERT`](https://www.cockroachlabs.com/docs/v2.0/upsert) is now properly able to write `NULL` values to every column in tables containing more than one [column family](https://www.cockroachlabs.com/docs/v2.0/column-families). [#26181][#26181] -- Fixed a bug where a long-running query running from one day to the next would not always produce the right value for `current_date()`. [#26413][#26413] -- Fixed a bug where [`cockroach quit`](https://www.cockroachlabs.com/docs/v2.0/stop-a-node) would erroneously fail even though the node already successfully shut down. [#26163][#26163] -- Rows larger than 8192 bytes are now supported by the "copy from" protocol. [#26641][#26641] -- Trying to "copy from stdin" into a table that doesn't exist no longer drops the connection. [#26641][#26641] -- Previously, expired compactions could stay in the queue forever. Now, they are removed when they expire. [#26659][#26659] - -

Performance Improvements

- -- The performance impact of dropping a large table has been substantially reduced. [#26615][#26615] - -

Doc Updates

- -- Documented [special syntax forms](https://www.cockroachlabs.com/docs/v2.0/functions-and-operators#special-syntax-forms) of built-in SQL functions and [conditional and function-like operators](https://www.cockroachlabs.com/docs/v2.0/functions-and-operators#conditional-and-function-like-operators), and updated the [SQL operator order of precedence](https://www.cockroachlabs.com/docs/v2.0/functions-and-operators#operators). [#3192][#3192] -- Added best practices on [understanding and avoiding transaction contention](https://www.cockroachlabs.com/docs/v2.0/performance-best-practices-overview#understanding-and-avoiding-transaction-contention) and a related [FAQ](https://www.cockroachlabs.com/docs/v2.0/operational-faqs#why-would-increasing-the-number-of-nodes-not-result-in-more-operations-per-second). [#3156][#3156] -- Improved the documentation of [`AS OF SYSTEM TIME`](https://www.cockroachlabs.com/docs/v2.0/as-of-system-time). [#3155][#3155] -- Expanded the [manual deployment](https://www.cockroachlabs.com/docs/v2.0/manual-deployment) guides to cover running a sample workload against a cluster. [#3149][#3149] -- Added FAQs on [generating unique, slowly increasing sequential numbers](https://www.cockroachlabs.com/docs/v2.0/sql-faqs#how-do-i-generate-unique-slowly-increasing-sequential-numbers-in-cockroachdb) and [the differences between `UUID`, sequences, and `unique_rowid()`](https://www.cockroachlabs.com/docs/v2.0/sql-faqs#what-are-the-differences-between-uuid-sequences-and-unique_rowid). [#3104][#3104] - -

Contributors

- -This release includes 19 merged PRs by 14 authors. - -[#25458]: https://github.com/cockroachdb/cockroach/pull/25458 -[#25521]: https://github.com/cockroachdb/cockroach/pull/25521 -[#25654]: https://github.com/cockroachdb/cockroach/pull/25654 -[#25719]: https://github.com/cockroachdb/cockroach/pull/25719 -[#25807]: https://github.com/cockroachdb/cockroach/pull/25807 -[#25811]: https://github.com/cockroachdb/cockroach/pull/25811 -[#25852]: https://github.com/cockroachdb/cockroach/pull/25852 -[#26163]: https://github.com/cockroachdb/cockroach/pull/26163 -[#26181]: https://github.com/cockroachdb/cockroach/pull/26181 -[#26232]: https://github.com/cockroachdb/cockroach/pull/26232 -[#26403]: https://github.com/cockroachdb/cockroach/pull/26403 -[#26413]: https://github.com/cockroachdb/cockroach/pull/26413 -[#26615]: https://github.com/cockroachdb/cockroach/pull/26615 -[#26641]: https://github.com/cockroachdb/cockroach/pull/26641 -[#26659]: https://github.com/cockroachdb/cockroach/pull/26659 -[#3104]: https://github.com/cockroachdb/docs/pull/3104 -[#3149]: https://github.com/cockroachdb/docs/pull/3149 -[#3155]: https://github.com/cockroachdb/docs/pull/3155 -[#3156]: https://github.com/cockroachdb/docs/pull/3156 -[#3192]: https://github.com/cockroachdb/docs/pull/3192 diff --git a/src/current/_includes/releases/v2.0/v2.0.4.md b/src/current/_includes/releases/v2.0/v2.0.4.md deleted file mode 100644 index 59634b0c9e8..00000000000 --- a/src/current/_includes/releases/v2.0/v2.0.4.md +++ /dev/null @@ -1,66 +0,0 @@ -

{{ include.release }}

- -Release Date: {{ include.release_date | date: "%B %-d, %Y" }} - -

SQL Language Changes

- -- [`CHECK`](https://www.cockroachlabs.com/docs/v2.0/check) constraints are now checked when updating a conflicting row in [`INSERT ... ON CONFLICT DO UPDATE`](https://www.cockroachlabs.com/docs/v2.0/insert#update-values-on-conflict) statements (see the sketch below). [#26699][#26699] {% comment %}doc{% endcomment %} -- An error is now returned to the user, instead of panicking, when trying to add a column with a [`UNIQUE`](https://www.cockroachlabs.com/docs/v2.0/unique) constraint whose type is not indexable. [#26728][#26728] {% comment %}doc{% endcomment %} - -
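-
-A minimal sketch of the `CHECK` change (the schema and values are hypothetical):
-
-~~~sql
-> CREATE TABLE accounts (id INT PRIMARY KEY, balance INT CHECK (balance >= 0));
-> INSERT INTO accounts VALUES (1, 50);
-> INSERT INTO accounts VALUES (1, 0)
-  ON CONFLICT (id) DO UPDATE SET balance = accounts.balance - 100;
--- the conflicting update now fails the CHECK constraint instead of storing -50
-~~~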

Command-Line Changes

- -- CockroachDB now computes the correct number of replicas on down nodes. Therefore, when [decommissioning nodes](https://www.cockroachlabs.com/docs/v2.0/remove-nodes) via the [`cockroach node decommission`](https://www.cockroachlabs.com/docs/v2.0/view-node-details) command, the `--wait=all` option no longer hangs indefinitely when there are down nodes. As a result, the `--wait=live` option is no longer necessary and has been deprecated. [#27158][#27158] - -

Bug Fixes

- -- Fixed a typo on the **Node Map** screen of the Admin UI. [#27129][#27129] -- Fixed a rare crash on node [decommissioning](https://www.cockroachlabs.com/docs/v2.0/remove-nodes). [#26717][#26717] -- Joins across two [interleaved tables](https://www.cockroachlabs.com/docs/v2.0/interleave-in-parent) no longer return incorrect results under certain circumstances when the equality columns aren't all part of the interleaved columns. [#26832][#26832] -- Successes of time series maintenance queue operations are no longer counted as errors in the **Metrics** dashboard of the Admin UI. [#26820][#26820] -- Prevented a situation in which ranges repeatedly fail to perform a split. [#26944][#26944] -- Fixed a crash that could occur when distributed `LIMIT` queries were run on a cluster with at least one unhealthy node. [#26953][#26953] -- Failed [`IMPORT`](https://www.cockroachlabs.com/docs/v2.0/import)s now begin to clean up partially imported data immediately and more quickly. [#26986][#26986] -- Alleviated a scenario in which a large number of uncommitted Raft commands could cause memory pressure at startup time. [#27024][#27024] -- The pg-specific syntax `SET transaction_isolation` now supports settings other than `SNAPSHOT`. This bug did not affect the standard SQL `SET TRANSACTION ISOLATION LEVEL`. [#27047][#27047] -- The `DISTINCT ON` clause is now reported properly in statement statistics. [#27222][#27222] -- Fixed a crash when trying to plan certain `UNION ALL` queries. [#27233][#27233] -- Commands are now abandoned earlier once a deadline has been reached. [#27215][#27215] -- Fixed a panic in [`IMPORT`](https://www.cockroachlabs.com/docs/v2.0/import) when creating a table using a sequence operation (e.g., `nextval()`) in a column's [DEFAULT](https://www.cockroachlabs.com/docs/v2.0/default-value) expression. [#27294][#27294] - -

Doc Updates

- -- Added a tutorial on [benchmarking CockroachDB with TPC-C](https://www.cockroachlabs.com/docs/v2.0/performance-benchmarking-with-tpc-c). [#3281][#3281] -- Added `systemd` configs and instructions to [deployment tutorials](https://www.cockroachlabs.com/docs/v2.0/manual-deployment). [#3268][#3268] -- Updated the [Kubernetes tutorials](https://www.cockroachlabs.com/docs/v2.0/orchestrate-cockroachdb-with-kubernetes) to reflect that pods aren't "Ready" before init. [#3291][#3291] - -
- -

Contributors

- -This release includes 22 merged PRs by 17 authors. We would like to thank the following contributors from the CockroachDB community, with special thanks to first-time contributor Emmanuel. - -- Emmanuel -- neeral - -
- -[#26699]: https://github.com/cockroachdb/cockroach/pull/26699 -[#26717]: https://github.com/cockroachdb/cockroach/pull/26717 -[#26728]: https://github.com/cockroachdb/cockroach/pull/26728 -[#26820]: https://github.com/cockroachdb/cockroach/pull/26820 -[#26832]: https://github.com/cockroachdb/cockroach/pull/26832 -[#26944]: https://github.com/cockroachdb/cockroach/pull/26944 -[#26953]: https://github.com/cockroachdb/cockroach/pull/26953 -[#26986]: https://github.com/cockroachdb/cockroach/pull/26986 -[#27024]: https://github.com/cockroachdb/cockroach/pull/27024 -[#27047]: https://github.com/cockroachdb/cockroach/pull/27047 -[#27129]: https://github.com/cockroachdb/cockroach/pull/27129 -[#27158]: https://github.com/cockroachdb/cockroach/pull/27158 -[#27215]: https://github.com/cockroachdb/cockroach/pull/27215 -[#27222]: https://github.com/cockroachdb/cockroach/pull/27222 -[#27233]: https://github.com/cockroachdb/cockroach/pull/27233 -[#27294]: https://github.com/cockroachdb/cockroach/pull/27294 -[#3268]: https://github.com/cockroachdb/docs/pull/3268 -[#3281]: https://github.com/cockroachdb/docs/pull/3281 -[#3291]: https://github.com/cockroachdb/docs/pull/3291 diff --git a/src/current/_includes/releases/v2.0/v2.0.5.md b/src/current/_includes/releases/v2.0/v2.0.5.md deleted file mode 100644 index 05a9eef59e7..00000000000 --- a/src/current/_includes/releases/v2.0/v2.0.5.md +++ /dev/null @@ -1,45 +0,0 @@ -

{{ include.release }}

- -Release Date: {{ include.release_date | date: "%B %-d, %Y" }} - -

SQL language changes

- -- The binary Postgres wire format is now supported for [`INTERVAL`](https://www.cockroachlabs.com/docs/v2.0/interval) values. [#28135][#28135] {% comment %}doc{% endcomment %} - -

Bug fixes

- -- [`IMPORT`](https://www.cockroachlabs.com/docs/v2.0/import) no longer silently converts `\r\n` characters in CSV files into `\n`. [#28225][#28225] -- Fixed a bug that could cause the row following a deleted row to be skipped during [`BACKUP`](https://www.cockroachlabs.com/docs/v2.0/backup). [#28196][#28196] -- Limited the size of "batch groups" when committing a batch to RocksDB to avoid rare scenarios in which multi-gigabyte batch groups are created, which can cause a server to run out of memory when replaying the RocksDB log at startup. [#28009][#28009] -- Prevented unbounded growth of the Raft log caused by a loss of quorum. [#27868][#27868] -- CockroachDB now properly reports an error when a query attempts to use `ORDER BY` within a function argument list, which is an unsupported feature. [#25147][#25147] - -

Doc updates

- -- Added a [Performance Tuning tutorial](https://www.cockroachlabs.com/docs/v2.0/performance-tuning) that demonstrates essential techniques for getting fast reads and writes in CockroachDB, starting with a single-region deployment and expanding into multiple regions. [#3378][#3378] -- Expanded the [Production Checklist](https://www.cockroachlabs.com/docs/v2.0/recommended-production-settings#networking) to cover a detailed explanation of network flags and scenarios and updated [production deployment tutorials](https://www.cockroachlabs.com/docs/v2.0/manual-deployment) to encourage the use of `--advertise-host` on node start. [#3352][#3352] -- Expanded the [Kubernetes tutorials](https://www.cockroachlabs.com/docs/v2.0/orchestrate-cockroachdb-with-kubernetes) to include setting up monitoring and alerting with Prometheus and Alertmanager. [#3370][#3370] -- Updated the [OpenSSL certificate tutorial](https://www.cockroachlabs.com/docs/v2.0/create-security-certificates-openssl) to allow multiple node certificates with the same subject. [#3423][#3423] - -
- -

Contributors

- -This release includes 9 merged PRs by 7 authors. We would like to thank the following contributor from the CockroachDB community: - -- neeral - -
- -[#25147]: https://github.com/cockroachdb/cockroach/pull/25147 -[#27868]: https://github.com/cockroachdb/cockroach/pull/27868 -[#28009]: https://github.com/cockroachdb/cockroach/pull/28009 -[#28135]: https://github.com/cockroachdb/cockroach/pull/28135 -[#28196]: https://github.com/cockroachdb/cockroach/pull/28196 -[#28225]: https://github.com/cockroachdb/cockroach/pull/28225 -[#3378]: https://github.com/cockroachdb/docs/pull/3378 -[#3352]: https://github.com/cockroachdb/docs/pull/3352 -[#3370]: https://github.com/cockroachdb/docs/pull/3370 -[#3385]: https://github.com/cockroachdb/docs/pull/3385 -[#3423]: https://github.com/cockroachdb/docs/pull/3423 diff --git a/src/current/_includes/releases/v2.0/v2.0.6.md b/src/current/_includes/releases/v2.0/v2.0.6.md deleted file mode 100644 index 7ae5cfecdad..00000000000 --- a/src/current/_includes/releases/v2.0/v2.0.6.md +++ /dev/null @@ -1,52 +0,0 @@ -

{{ include.release }}

- -Release Date: {{ include.release_date | date: "%B %-d, %Y" }} - -

Security bug fix

- -- Fixed a vulnerability in which TLS certificates were not validated correctly for internal RPC interfaces. This vulnerability could allow an unauthenticated user with network access to read and write to the cluster. [#30821](https://github.com/cockroachdb/cockroach/issues/30821) - -

Command-line changes

- -- The [`cockroach zone`](https://www.cockroachlabs.com/docs/v2.0/configure-replication-zones) command is now compatible with CockroachDB v2.1. However, note that `cockroach zone` is also *deprecated* in CockroachDB v2.1 in favor of `ALTER ... CONFIGURE ZONE` and `SHOW ZONE CONFIGURATION` statements to update and view replication zones. [#29632][#29632] - -

Bug fixes

- -- The [**Jobs** page](https://www.cockroachlabs.com/docs/v2.0/admin-ui-jobs-page) now sorts by **Creation Time** by default instead of by **User**. [#30429][#30429] -- Fixed out-of-memory errors caused by very large Raft logs. [#28398][#28398] [#28526][#28526] -- Fixed a rare scenario where the value written for one system key was seen when another system key was read, leading to the violation of internal invariants. [#28798][#28798] -- Fixed a memory leak when contended queries time out. [#29100][#29100] -- Fixed a bug causing index creation to fail under rare circumstances. [#29203][#29203] -- Fixed a panic that occurred when not all values were present in a composite foreign key. [#30154][#30154] -- The `ON DELETE CASCADE` and `ON UPDATE CASCADE` [foreign key actions](https://www.cockroachlabs.com/docs/v2.0/foreign-key#foreign-key-actions-new-in-v2-0) no longer cascade through `NULL`s. [#30129][#30129] -- Fixed the occasional improper processing of the `WITH` operand with `IMPORT`/`BACKUP`/`RESTORE` and [common table expressions](https://www.cockroachlabs.com/docs/v2.0/common-table-expressions). [#30199][#30199] -- Transaction size limit errors are no longer returned for transactions that have already committed. [#30309][#30309] -- Fixed a potential infinite loop when the merge joiner encountered an error or cancellation. [#30380][#30380] -- This release includes the following fixes to the [`cockroach sql`](https://www.cockroachlabs.com/docs/v2.0/use-the-built-in-sql-client) command: - - The command now properly prints a warning when a `?` character is mistakenly used to receive contextual help in a non-interactive session, instead of crashing. [#28325][#28325] - - The command now works properly even when the `TERM` environment variable is not set. [#28614][#28614] - - The command can now properly customize the prompt via `~/.editrc` on Linux. [#28614][#28614] - - The command once again supports copy-pasting special unicode characters from other documents. [#28614][#28614] - -

Performance improvements

- -- Greatly improved the performance of catching up followers that are behind when Raft logs are large. [#28526][#28526] - -

Contributors

- -This release includes 26 merged PRs by 12 authors. - -[#28325]: https://github.com/cockroachdb/cockroach/pull/28325 -[#28398]: https://github.com/cockroachdb/cockroach/pull/28398 -[#28526]: https://github.com/cockroachdb/cockroach/pull/28526 -[#28614]: https://github.com/cockroachdb/cockroach/pull/28614 -[#28798]: https://github.com/cockroachdb/cockroach/pull/28798 -[#29100]: https://github.com/cockroachdb/cockroach/pull/29100 -[#29203]: https://github.com/cockroachdb/cockroach/pull/29203 -[#29632]: https://github.com/cockroachdb/cockroach/pull/29632 -[#30129]: https://github.com/cockroachdb/cockroach/pull/30129 -[#30154]: https://github.com/cockroachdb/cockroach/pull/30154 -[#30199]: https://github.com/cockroachdb/cockroach/pull/30199 -[#30309]: https://github.com/cockroachdb/cockroach/pull/30309 -[#30380]: https://github.com/cockroachdb/cockroach/pull/30380 -[#30429]: https://github.com/cockroachdb/cockroach/pull/30429 diff --git a/src/current/_includes/releases/v2.0/v2.0.7.md b/src/current/_includes/releases/v2.0/v2.0.7.md deleted file mode 100644 index cb6b40833bb..00000000000 --- a/src/current/_includes/releases/v2.0/v2.0.7.md +++ /dev/null @@ -1,38 +0,0 @@ -

{{ include.release }}

- -Release Date: {{ include.release_date | date: "%B %-d, %Y" }} - -

Docker image

- -{% include copy-clipboard.html %} -~~~shell -$ docker pull cockroachdb/cockroach:v2.0.7 -~~~ - -

Bug fixes

- -- Fixed a security vulnerability in which data could be leaked from a cluster, or tampered with in a cluster, in secure mode. [#30823][#30823] -- Fixed a bug where queries could get stuck for seconds or minutes, usually following node restarts. [#31350][#31350] -- CockroachDB no longer crashes due to a `SIGTRAP` error soon after startup on macOS Mojave. [#31522][#31522] -- Fixed a bug causing transactions to unnecessarily hit a "too large" error. [#31827][#31827] -- Fixed a bug causing transactions to appear partially committed. Occasionally, CockroachDB claimed to have failed to commit a transaction when some (or all) of its writes were actually persisted. [#32223][#32223] -- Fixed a bug where entry application on Raft followers could fall behind entry application on the leader, causing stalls during splits. [#32601][#32601] -- CockroachDB now properly rejects queries that use an invalid function (e.g., an aggregation) in the `SET` clause of an [`UPDATE`](https://www.cockroachlabs.com/docs/v2.0/update) statement. [#32507][#32507] - -

Build changes

- -- CockroachDB can now be built from source on macOS 10.14 (Mojave). [#31310][#31310] - -

Contributors

- -This release includes 11 merged PRs by 6 authors. - -[#30823]: https://github.com/cockroachdb/cockroach/pull/30823 -[#31310]: https://github.com/cockroachdb/cockroach/pull/31310 -[#31350]: https://github.com/cockroachdb/cockroach/pull/31350 -[#31522]: https://github.com/cockroachdb/cockroach/pull/31522 -[#31827]: https://github.com/cockroachdb/cockroach/pull/31827 -[#32223]: https://github.com/cockroachdb/cockroach/pull/32223 -[#32507]: https://github.com/cockroachdb/cockroach/pull/32507 -[#32601]: https://github.com/cockroachdb/cockroach/pull/32601 -[#32636]: https://github.com/cockroachdb/cockroach/pull/32636 diff --git a/src/current/_includes/sidebar-data-v2.0.json b/src/current/_includes/sidebar-data-v2.0.json deleted file mode 100644 index ef0bd41a4aa..00000000000 --- a/src/current/_includes/sidebar-data-v2.0.json +++ /dev/null @@ -1,1610 +0,0 @@ -[ - { - "title": "Docs Home", - "is_top_level": true, - "urls": [ - "/" - ] - }, - { - "title": "Quickstart", - "is_top_level": true, - "urls": [ - "/cockroachcloud/quickstart.html" - ] - }, - {% include sidebar-data-cockroachcloud.json %}, - { - "title": "CockroachDB", - "is_top_level": true, - "items": [ - { - "title": "Get Started", - "items": [ - { - "title": "Install CockroachDB", - "urls": [ - "/${VERSION}/install-cockroachdb.html" - ] - }, - { - "title": "Start a Local Cluster", - "items": [ - { - "title": "From Binary", - "urls": [ - "/${VERSION}/start-a-local-cluster.html", - "/${VERSION}/secure-a-cluster.html" - ] - }, - { - "title": "In Docker", - "urls": [ - "/${VERSION}/start-a-local-cluster-in-docker.html" - ] - } - ] - }, - { - "title": "Learn CockroachDB SQL", - "items": [ - { - "title": "Essential SQL Statements", - "urls": [ - "/${VERSION}/learn-cockroachdb-sql.html" - ] - }, - { - "title": "Use the Built-in SQL Client", - "urls": [ - "/${VERSION}/use-the-built-in-sql-client.html" - ] - } - ] - }, - { - "title": "Build an App", - "items": [ - { - "title": "Overview", - "urls": [ - "/${VERSION}/build-an-app-with-cockroachdb.html" - ] - }, - { - "title": "Go", - "urls": [ - "/${VERSION}/build-a-go-app-with-cockroachdb.html", - "/${VERSION}/build-a-go-app-with-cockroachdb-gorm.html" - ] - }, - { - "title": "Python", - "urls": [ - "/${VERSION}/build-a-python-app-with-cockroachdb.html", - "/${VERSION}/build-a-python-app-with-cockroachdb-sqlalchemy.html" - ] - }, - { - "title": "Ruby", - "urls": [ - "/${VERSION}/build-a-ruby-app-with-cockroachdb.html", - "/${VERSION}/build-a-ruby-app-with-cockroachdb-activerecord.html" - ] - }, - { - "title": "Java", - "urls": [ - "/${VERSION}/build-a-java-app-with-cockroachdb.html", - "/${VERSION}/build-a-java-app-with-cockroachdb-hibernate.html" - ] - }, - { - "title": "Node.js", - "urls": [ - "/${VERSION}/build-a-nodejs-app-with-cockroachdb.html", - "/${VERSION}/build-a-nodejs-app-with-cockroachdb-sequelize.html" - ] - }, - { - "title": "C++", - "urls": [ - "/${VERSION}/build-a-c++-app-with-cockroachdb.html" - ] - }, - { - "title": "C# (.NET)", - "urls": [ - "/${VERSION}/build-a-csharp-app-with-cockroachdb.html" - ] - }, - { - "title": "Clojure", - "urls": [ - "/${VERSION}/build-a-clojure-app-with-cockroachdb.html" - ] - }, - { - "title": "PHP", - "urls": [ - "/${VERSION}/build-a-php-app-with-cockroachdb.html" - ] - }, - { - "title": "Rust", - "urls": [ - "/${VERSION}/build-a-rust-app-with-cockroachdb.html" - ] - } - ] - }, - { - "title": "Explore Benefits", - "items": [ - { - "title": "Data Replication", - "urls": [ - "/${VERSION}/demo-data-replication.html" - ] - }, - { - "title": "Fault 
Tolerance & Recovery", - "urls": [ - "/${VERSION}/demo-fault-tolerance-and-recovery.html" - ] - }, - { - "title": "Automatic Rebalancing", - "urls": [ - "/${VERSION}/demo-automatic-rebalancing.html" - ] - }, - { - "title": "Cross-Cloud Migration", - "urls": [ - "/${VERSION}/demo-automatic-cloud-migration.html" - ] - }, - { - "title": "Follow-the-Workload", - "urls": [ - "/${VERSION}/demo-follow-the-workload.html" - ] - }, - { - "title": "Orchestration", - "urls": [ - "/${VERSION}/orchestrate-a-local-cluster-with-kubernetes-insecure.html" - ] - }, - { - "title": "JSON Support", - "urls": [ - "/${VERSION}/demo-json-support.html" - ] - } - ] - } - ] - }, - { - "title": "Develop", - "items": [ - { - "title": "Install Client Drivers", - "urls": [ - "/${VERSION}/install-client-drivers.html" - ] - }, - { - "title": "Client Connection Parameters", - "urls": [ - "/${VERSION}/connection-parameters.html" - ] - }, - { - "title": "SQL Feature Support", - "urls": [ - "/${VERSION}/sql-feature-support.html" - ] - }, - { - "title": "SQL Statements", - "items": [ - { - "title": "Overview", - "urls": [ - "/${VERSION}/sql-statements.html" - ] - }, - { - "title": "ADD COLUMN", - "urls": [ - "/${VERSION}/add-column.html" - ] - }, - { - "title": "ADD CONSTRAINT", - "urls": [ - "/${VERSION}/add-constraint.html" - ] - }, - { - "title": "ALTER COLUMN", - "urls": [ - "/${VERSION}/alter-column.html" - ] - }, - { - "title": "ALTER DATABASE", - "urls": [ - "/${VERSION}/alter-database.html" - ] - }, - { - "title": "ALTER INDEX", - "urls": [ - "/${VERSION}/alter-index.html" - ] - }, - { - "title": "ALTER SEQUENCE", - "urls": [ - "/${VERSION}/alter-sequence.html" - ] - }, - { - "title": "ALTER TABLE", - "urls": [ - "/${VERSION}/alter-table.html" - ] - }, - { - "title": "ALTER USER", - "urls": [ - "/${VERSION}/alter-user.html" - ] - }, - { - "title": "EXPERIMENTAL_AUDIT", - "urls": [ - "/${VERSION}/experimental-audit.html" - ] - }, - { - "title": "ALTER VIEW", - "urls": [ - "/${VERSION}/alter-view.html" - ] - }, - { - "title": "BACKUP (Enterprise)", - "urls": [ - "/${VERSION}/backup.html" - ] - }, - { - "title": "BEGIN", - "urls": [ - "/${VERSION}/begin-transaction.html" - ] - }, - { - "title": "CANCEL JOB", - "urls": [ - "/${VERSION}/cancel-job.html" - ] - }, - { - "title": "CANCEL QUERY", - "urls": [ - "/${VERSION}/cancel-query.html" - ] - }, - { - "title": "COMMIT", - "urls": [ - "/${VERSION}/commit-transaction.html" - ] - }, - { - "title": "CREATE DATABASE", - "urls": [ - "/${VERSION}/create-database.html" - ] - }, - { - "title": "CREATE INDEX", - "urls": [ - "/${VERSION}/create-index.html" - ] - }, - { - "title": "CREATE ROLE (Enterprise)", - "urls": [ - "/${VERSION}/create-role.html" - ] - }, - { - "title": "CREATE SEQUENCE", - "urls": [ - "/${VERSION}/create-sequence.html" - ] - }, - { - "title": "CREATE TABLE", - "urls": [ - "/${VERSION}/create-table.html" - ] - }, - { - "title": "CREATE TABLE AS", - "urls": [ - "/${VERSION}/create-table-as.html" - ] - }, - { - "title": "CREATE USER", - "urls": [ - "/${VERSION}/create-user.html" - ] - }, - { - "title": "CREATE VIEW", - "urls": [ - "/${VERSION}/create-view.html" - ] - }, - { - "title": "DELETE", - "urls": [ - "/${VERSION}/delete.html" - ] - }, - { - "title": "DROP COLUMN", - "urls": [ - "/${VERSION}/drop-column.html" - ] - }, - { - "title": "DROP CONSTRAINT", - "urls": [ - "/${VERSION}/drop-constraint.html" - ] - }, - { - "title": "DROP DATABASE", - "urls": [ - "/${VERSION}/drop-database.html" - ] - }, - { - "title": "DROP INDEX", - "urls": [ - 
"/${VERSION}/drop-index.html" - ] - }, - { - "title": "DROP ROLE (Enterprise)", - "urls": [ - "/${VERSION}/drop-role.html" - ] - }, - { - "title": "DROP SEQUENCE", - "urls": [ - "/${VERSION}/drop-sequence.html" - ] - }, - { - "title": "DROP TABLE", - "urls": [ - "/${VERSION}/drop-table.html" - ] - }, - { - "title": "DROP USER", - "urls": [ - "/${VERSION}/drop-user.html" - ] - }, - { - "title": "DROP VIEW", - "urls": [ - "/${VERSION}/drop-view.html" - ] - }, - { - "title": "EXPLAIN", - "urls": [ - "/${VERSION}/explain.html" - ] - }, - { - "title": "GRANT <privileges>", - "urls": [ - "/${VERSION}/grant.html" - ] - }, - { - "title": "GRANT <roles> (Enterprise)", - "urls": [ - "/${VERSION}/grant-roles.html" - ] - }, - { - "title": "IMPORT", - "urls": [ - "/${VERSION}/import.html" - ] - }, - { - "title": "INSERT", - "urls": [ - "/${VERSION}/insert.html" - ] - }, - { - "title": "PARTITION BY (Enterprise)", - "urls": [ - "/${VERSION}/partition-by.html" - ] - }, - { - "title": "PAUSE JOB", - "urls": [ - "/${VERSION}/pause-job.html" - ] - }, - { - "title": "RENAME COLUMN", - "urls": [ - "/${VERSION}/rename-column.html" - ] - }, - { - "title": "RENAME DATABASE", - "urls": [ - "/${VERSION}/rename-database.html" - ] - }, - { - "title": "RENAME INDEX", - "urls": [ - "/${VERSION}/rename-index.html" - ] - }, - { - "title": "RENAME TABLE", - "urls": [ - "/${VERSION}/rename-table.html" - ] - }, - { - "title": "RENAME SEQUENCE", - "urls": [ - "/${VERSION}/rename-sequence.html" - ] - }, - { - "title": "RELEASE SAVEPOINT", - "urls": [ - "/${VERSION}/release-savepoint.html" - ] - }, - { - "title": "RESET <session variable>", - "urls": [ - "/${VERSION}/reset-vars.html" - ] - }, - { - "title": "RESET CLUSTER SETTING", - "urls": [ - "/${VERSION}/reset-cluster-setting.html" - ] - }, - { - "title": "RESTORE (Enterprise)", - "urls": [ - "/${VERSION}/restore.html" - ] - }, - { - "title": "RESUME JOB", - "urls": [ - "/${VERSION}/resume-job.html" - ] - }, - { - "title": "REVOKE <privileges>", - "urls": [ - "/${VERSION}/revoke.html" - ] - }, - { - "title": "REVOKE <roles> (Enterprise)", - "urls": [ - "/${VERSION}/revoke-roles.html" - ] - }, - { - "title": "ROLLBACK", - "urls": [ - "/${VERSION}/rollback-transaction.html" - ] - }, - { - "title": "SAVEPOINT", - "urls": [ - "/${VERSION}/savepoint.html" - ] - }, - { - "title": "SELECT", - "urls": [ - "/${VERSION}/select-clause.html" - ] - }, - { - "title": "SET <session variable>", - "urls": [ - "/${VERSION}/set-vars.html" - ] - }, - { - "title": "SET CLUSTER SETTING", - "urls": [ - "/${VERSION}/set-cluster-setting.html" - ] - }, - { - "title": "SET TRANSACTION", - "urls": [ - "/${VERSION}/set-transaction.html" - ] - }, - { - "title": "SHOW <session variables>", - "urls": [ - "/${VERSION}/show-vars.html" - ] - }, - { - "title": "SHOW BACKUP", - "urls": [ - "/${VERSION}/show-backup.html" - ] - }, - { - "title": "SHOW CLUSTER SETTING", - "urls": [ - "/${VERSION}/show-cluster-setting.html" - ] - }, - { - "title": "SHOW COLUMNS", - "urls": [ - "/${VERSION}/show-columns.html" - ] - }, - { - "title": "SHOW CONSTRAINTS", - "urls": [ - "/${VERSION}/show-constraints.html" - ] - }, - { - "title": "SHOW CREATE SEQUENCE", - "urls": [ - "/${VERSION}/show-create-sequence.html" - ] - }, - { - "title": "SHOW CREATE TABLE", - "urls": [ - "/${VERSION}/show-create-table.html" - ] - }, - { - "title": "SHOW CREATE VIEW", - "urls": [ - "/${VERSION}/show-create-view.html" - ] - }, - { - "title": "SHOW DATABASES", - "urls": [ - "/${VERSION}/show-databases.html" - ] - }, - { - "title": "SHOW 
EXPERIMENTAL_RANGES", - "urls": [ - "/${VERSION}/show-experimental-ranges.html" - ] - }, - { - "title": "SHOW GRANTS", - "urls": [ - "/${VERSION}/show-grants.html" - ] - }, - { - "title": "SHOW INDEX", - "urls": [ - "/${VERSION}/show-index.html" - ] - }, - { - "title": "SHOW JOBS", - "urls": [ - "/${VERSION}/show-jobs.html" - ] - }, - { - "title": "SHOW QUERIES", - "urls": [ - "/${VERSION}/show-queries.html" - ] - }, - { - "title": "SHOW ROLES", - "urls": [ - "/${VERSION}/show-roles.html" - ] - }, - { - "title": "SHOW SESSIONS", - "urls": [ - "/${VERSION}/show-sessions.html" - ] - }, - { - "title": "SHOW SCHEMAS", - "urls": [ - "/${VERSION}/show-schemas.html" - ] - }, - { - "title": "SHOW TABLES", - "urls": [ - "/${VERSION}/show-tables.html" - ] - }, - { - "title": "SHOW TRACE", - "urls": [ - "/${VERSION}/show-trace.html" - ] - }, - { - "title": "SHOW USERS", - "urls": [ - "/${VERSION}/show-users.html" - ] - }, - { - "title": "SPLIT AT", - "urls": [ - "/${VERSION}/split-at.html" - ] - }, - { - "title": "TRUNCATE", - "urls": [ - "/${VERSION}/truncate.html" - ] - }, - { - "title": "UPDATE", - "urls": [ - "/${VERSION}/update.html" - ] - }, - { - "title": "UPSERT", - "urls": [ - "/${VERSION}/upsert.html" - ] - }, - { - "title": "VALIDATE CONSTRAINT", - "urls": [ - "/${VERSION}/validate-constraint.html" - ] - } - ] - }, - { - "title": "Functions and Operators", - "urls": [ - "/${VERSION}/functions-and-operators.html" - ] - }, - { - "title": "SQL Syntax", - "items": [ - { - "title": "Keywords & Identifiers", - "urls": [ - "/${VERSION}/keywords-and-identifiers.html" - ] - }, - { - "title": "Constants", - "urls": [ - "/${VERSION}/sql-constants.html" - ] - }, - { - "title": "Selection Queries", - "urls": [ - "/${VERSION}/selection-queries.html" - ] - }, - { - "title": "Ordering Query Results", - "urls": [ - "/${VERSION}/query-order.html" - ] - }, - { - "title": "Limiting Query Results", - "urls": [ - "/${VERSION}/limit-offset.html" - ] - }, - { - "title": "Table Expressions", - "urls": [ - "/${VERSION}/table-expressions.html" - ] - }, - { - "title": "Common Table Expressions", - "urls": [ - "/${VERSION}/common-table-expressions.html" - ] - }, - { - "title": "Join Expressions", - "urls": [ - "/${VERSION}/joins.html" - ] - }, - { - "title": "Computed Columns", - "urls": [ - "/${VERSION}/computed-columns.html" - ] - }, - { - "title": "Scalar Expressions", - "urls": [ - "/${VERSION}/scalar-expressions.html" - ] - }, - { - "title": "Subqueries", - "urls": [ - "/${VERSION}/subqueries.html" - ] - }, - { - "title": "Name Resolution", - "urls": [ - "/${VERSION}/sql-name-resolution.html" - ] - }, - { - "title": "AS OF SYSTEM TIME", - "urls": [ - "/${VERSION}/as-of-system-time.html" - ] - }, - { - "title": "NULL Handling", - "urls": [ - "/${VERSION}/null-handling.html" - ] - }, - { - "title": "Full SQL Grammar", - "urls": [ - "/${VERSION}/sql-grammar.html" - ] - } - ] - }, - { - "title": "Constraints", - "items": [ - { - "title": "Overview", - "urls": [ - "/${VERSION}/constraints.html" - ] - }, - { - "title": "Check", - "urls": [ - "/${VERSION}/check.html" - ] - }, - { - "title": "Default Value", - "urls": [ - "/${VERSION}/default-value.html" - ] - }, - { - "title": "Foreign Key", - "urls": [ - "/${VERSION}/foreign-key.html" - ] - }, - { - "title": "Not Null", - "urls": [ - "/${VERSION}/not-null.html" - ] - }, - { - "title": "Primary Key", - "urls": [ - "/${VERSION}/primary-key.html" - ] - }, - { - "title": "Unique", - "urls": [ - "/${VERSION}/unique.html" - ] - } - ] - }, - { - "title": "Data Types", - 
"items": [ - { - "title": "Overview", - "urls": [ - "/${VERSION}/data-types.html" - ] - }, - { - "title": "ARRAY", - "urls": [ - "/${VERSION}/array.html" - ] - }, - { - "title": "BOOL", - "urls": [ - "/${VERSION}/bool.html" - ] - }, - { - "title": "BYTES", - "urls": [ - "/${VERSION}/bytes.html" - ] - }, - { - "title": "COLLATE", - "urls": [ - "/${VERSION}/collate.html" - ] - }, - { - "title": "DATE", - "urls": [ - "/${VERSION}/date.html" - ] - }, - { - "title": "DECIMAL", - "urls": [ - "/${VERSION}/decimal.html" - ] - }, - { - "title": "FLOAT", - "urls": [ - "/${VERSION}/float.html" - ] - }, - { - "title": "INET", - "urls": [ - "/${VERSION}/inet.html" - ] - }, - { - "title": "INT", - "urls": [ - "/${VERSION}/int.html" - ] - }, - { - "title": "INTERVAL", - "urls": [ - "/${VERSION}/interval.html" - ] - }, - { - "title": "JSONB", - "urls": [ - "/${VERSION}/jsonb.html" - ] - }, - { - "title": "SERIAL", - "urls": [ - "/${VERSION}/serial.html" - ] - }, - { - "title": "STRING", - "urls": [ - "/${VERSION}/string.html" - ] - }, - { - "title": "TIME", - "urls": [ - "/${VERSION}/time.html" - ] - }, - { - "title": "TIMESTAMP", - "urls": [ - "/${VERSION}/timestamp.html" - ] - }, - { - "title": "UUID", - "urls": [ - "/${VERSION}/uuid.html" - ] - } - ] - }, - { - "title": "Transactions", - "urls": [ - "/${VERSION}/transactions.html" - ] - }, - { - "title": "Views", - "urls": [ - "/${VERSION}/views.html" - ] - }, - { - "title": "Window Functions", - "urls": [ - "/${VERSION}/window-functions.html" - ] - }, - { - "title": "Performance Optimization", - "items": [ - { - "title": "SQL Best Practices", - "urls": [ - "/${VERSION}/performance-best-practices-overview.html" - ] - }, - { - "title": "Indexes", - "urls": [ - "/${VERSION}/indexes.html" - ] - }, - { - "title": "Inverted Indexes", - "urls": [ - "/${VERSION}/inverted-indexes.html" - ] - }, - { - "title": "Column Families", - "urls": [ - "/${VERSION}/column-families.html" - ] - }, - { - "title": "Interleaved Tables", - "urls": [ - "/${VERSION}/interleave-in-parent.html" - ] - }, - { - "title": "Parallel Statement Execution", - "urls": [ - "/${VERSION}/parallel-statement-execution.html" - ] - } - ] - }, - { - "title": "Information Schema", - "urls": [ - "/${VERSION}/information-schema.html" - ] - }, - { - "title": "Porting Applications", - "items": [ - { - "title": "From PostgreSQL", - "urls": [ - "/${VERSION}/porting-postgres.html" - ] - } - ] - } - ] - }, - { - "title": "Deploy", - "items": [ - { - "title": "Production Checklist", - "urls": [ - "/${VERSION}/recommended-production-settings.html" - ] - }, - { - "title": "Manual Deployment", - "items": [ - { - "title": "Overview", - "urls": [ - "/${VERSION}/manual-deployment.html" - ] - }, - { - "title": "On-Premises", - "urls": [ - "/${VERSION}/deploy-cockroachdb-on-premises.html", - "/${VERSION}/deploy-cockroachdb-on-premises-insecure.html" - ] - }, - { - "title": "AWS", - "urls": [ - "/${VERSION}/deploy-cockroachdb-on-aws.html", - "/${VERSION}/deploy-cockroachdb-on-aws-insecure.html" - ] - }, - { - "title": "Azure", - "urls": [ - "/${VERSION}/deploy-cockroachdb-on-microsoft-azure.html", - "/${VERSION}/deploy-cockroachdb-on-microsoft-azure-insecure.html" - ] - }, - { - "title": "Digital Ocean", - "urls": [ - "/${VERSION}/deploy-cockroachdb-on-digital-ocean.html", - "/${VERSION}/deploy-cockroachdb-on-digital-ocean-insecure.html" - ] - }, - { - "title": "Google Cloud Platform GCE", - "urls": [ - "/${VERSION}/deploy-cockroachdb-on-google-cloud-platform.html", - 
"/${VERSION}/deploy-cockroachdb-on-google-cloud-platform-insecure.html" - ] - } - ] - }, - { - "title": "Orchestrated Deployment", - "items": [ - { - "title": "Overview", - "urls": [ - "/${VERSION}/orchestration.html" - ] - }, - { - "title": "Kubernetes Single-Cluster Deployment", - "urls": [ - "/${VERSION}/orchestrate-cockroachdb-with-kubernetes.html", - "/${VERSION}/orchestrate-cockroachdb-with-kubernetes-insecure.html" - ] - }, - { - "title": "Kubernetes Multi-Cluster Deployment", - "urls": [ - "/${VERSION}/orchestrate-cockroachdb-with-kubernetes-multi-cluster.html" - ] - }, - { - "title": "Kubernetes Performance Optimization", - "urls": [ - "/${VERSION}/kubernetes-performance.html" - ] - }, - { - "title": "Docker Swarm Deployment", - "urls": [ - "/${VERSION}/orchestrate-cockroachdb-with-docker-swarm.html", - "/${VERSION}/orchestrate-cockroachdb-with-docker-swarm-insecure.html" - ] - } - ] - }, - { - "title": "Monitoring and Alerting", - "items": [ - { - "title": "Overview", - "urls": [ - "/${VERSION}/monitoring-and-alerting.html" - ] - }, - { - "title": "Use Prometheus and Alertmanager", - "urls": [ - "/${VERSION}/monitor-cockroachdb-with-prometheus.html" - ] - } - ] - }, - { - "title": "Performance Benchmarking", - "urls": [ - "/${VERSION}/performance-benchmarking-with-tpc-c.html" - ] - }, - { - "title": "Performance Tuning", - "urls": [ - "/${VERSION}/performance-tuning.html" - ] - }, - { - "title": "Access Management", - "items": [ - { - "title": "Manage Users", - "urls": [ - "/${VERSION}/create-and-manage-users.html" - ] - }, - { - "title": "Manage Roles (Enterprise)", - "urls": [ - "/${VERSION}/roles.html" - ] - }, - { - "title": "Privileges", - "urls": [ - "/${VERSION}/privileges.html" - ] - } - ] - }, - { - "title": "Use the Admin UI", - "items": [ - { - "title": "Overview", - "urls": [ - "/${VERSION}/admin-ui-overview.html" - ] - }, - { - "title": "Access and Navigate the Admin UI", - "urls": [ - "/${VERSION}/admin-ui-access-and-navigate.html" - ] - }, - { - "title": "Overview Dashboard", - "urls": [ - "/${VERSION}/admin-ui-overview-dashboard.html" - ] - }, - { - "title": "Runtime Dashboard", - "urls": [ - "/${VERSION}/admin-ui-runtime-dashboard.html" - ] - }, - { - "title": "SQL Dashboard", - "urls": [ - "/${VERSION}/admin-ui-sql-dashboard.html" - ] - }, - { - "title": "Storage Dashboard", - "urls": [ - "/${VERSION}/admin-ui-storage-dashboard.html" - ] - }, - { - "title": "Replication Dashboard", - "urls": [ - "/${VERSION}/admin-ui-replication-dashboard.html" - ] - }, - { - "title": "Databases Page", - "urls": [ - "/${VERSION}/admin-ui-databases-page.html" - ] - }, - { - "title": "Jobs Page", - "urls": [ - "/${VERSION}/admin-ui-jobs-page.html" - ] - }, - { - "title": "Custom Chart Debug Page", - "urls": [ - "/${VERSION}/admin-ui-custom-chart-debug-page.html" - ] - } - ] - }, - { - "title": "Enterprise Licensing", - "urls": [ - "/${VERSION}/enterprise-licensing.html" - ] - }, - { - "title": "Start a Node", - "urls": [ - "/${VERSION}/start-a-node.html" - ] - }, - { - "title": "Initialize a Cluster", - "urls": [ - "/${VERSION}/initialize-a-cluster.html" - ] - }, - { - "title": "Create Security Certificates", - "urls": [ - "/${VERSION}/create-security-certificates.html", - "/${VERSION}/create-security-certificates-openssl.html" - ] - }, - { - "title": "Configure Replication Zones", - "urls": [ - "/${VERSION}/configure-replication-zones.html" - ] - }, - { - "title": "Define Table Partitions (Enterprise)", - "urls": [ - "/${VERSION}/partitioning.html" - ] - }, - { - "title": "Enable 
Node Map (Enterprise)", - "urls": [ - "/${VERSION}/enable-node-map.html" - ] - }, - { - "title": "Cluster Settings", - "urls": [ - "/${VERSION}/cluster-settings.html" - ] - }, - { - "title": "Cockroach Commands", - "urls": [ - "/${VERSION}/cockroach-commands.html" - ] - } - ] - }, - { - "title": "Maintain", - "items": [ - { - "title": "Upgrade to CockroachDB v2.0", - "urls": [ - "/${VERSION}/upgrade-cockroach-version.html" - ] - }, - { - "title": "Manage Long-Running Queries", - "urls": [ - "/${VERSION}/manage-long-running-queries.html" - ] - }, - { - "title": "Stop a Node", - "urls": [ - "/${VERSION}/stop-a-node.html" - ] - }, - { - "title": "Decommission Nodes", - "urls": [ - "/${VERSION}/remove-nodes.html" - ] - }, - { - "title": "Import Data", - "urls": [ - "/${VERSION}/import-data.html" - ] - }, - { - "title": "Back up Data", - "urls": [ - "/${VERSION}/back-up-data.html" - ] - }, - { - "title": "Restore Data", - "urls": [ - "/${VERSION}/restore-data.html" - ] - }, - { - "title": "Dump/Export Schema or Data", - "urls": [ - "/${VERSION}/sql-dump.html" - ] - }, - { - "title": "Create a File Server for IMPORT/BACKUP", - "urls": [ - "/${VERSION}/create-a-file-server.html" - ] - }, - { - "title": "Rotate Security Certificates", - "urls": [ - "/${VERSION}/rotate-certificates.html" - ] - }, - { - "title": "Generate CockroachDB Resources", - "urls": [ - "/${VERSION}/generate-cockroachdb-resources.html" - ] - }, - { - "title": "View Node Details", - "urls": [ - "/${VERSION}/view-node-details.html" - ] - }, - { - "title": "View Version Details", - "urls": [ - "/${VERSION}/view-version-details.html" - ] - }, - { - "title": "Diagnostics Reporting", - "urls": [ - "/${VERSION}/diagnostics-reporting.html" - ] - }, - { - "title": "SQL Audit Logging (Experimental)", - "urls": [ - "/${VERSION}/sql-audit-logging.html" - ] - } - ] - }, - { - "title": "Troubleshoot", - "items": [ - { - "title": "Overview", - "urls": [ - "/${VERSION}/troubleshooting-overview.html" - ] - }, - { - "title": "Common Errors", - "urls": [ - "/${VERSION}/common-errors.html" - ] - }, - { - "title": "Troubleshoot Cluster Setup", - "urls": [ - "/${VERSION}/cluster-setup-troubleshooting.html" - ] - }, - { - "title": "Troubleshoot Query Behavior", - "urls": [ - "/${VERSION}/query-behavior-troubleshooting.html" - ] - }, - { - "title": "Understand Debug Logs", - "urls": [ - "/${VERSION}/debug-and-error-logs.html" - ] - }, - { - "title": "Collect Cluster Debug Info", - "urls": [ - "/${VERSION}/debug-zip.html" - ] - }, - { - "title": "Support Resources", - "urls": [ - "/${VERSION}/support-resources.html" - ] - }, - { - "title": "File an Issue", - "urls": [ - "/${VERSION}/file-an-issue.html" - ] - } - ] - }, - { - "title": "Architecture", - "items": [ - { - "title": "Overview", - "urls": [ - "/${VERSION}/architecture/overview.html" - ] - }, - { - "title": "SQL Layer", - "urls": [ - "/${VERSION}/architecture/sql-layer.html" - ] - }, - { - "title": "Transaction Layer", - "urls": [ - "/${VERSION}/architecture/transaction-layer.html" - ] - }, - { - "title": "Distribution Layer", - "urls": [ - "/${VERSION}/architecture/distribution-layer.html" - ] - }, - { - "title": "Replication Layer", - "urls": [ - "/${VERSION}/architecture/replication-layer.html" - ] - }, - { - "title": "Storage Layer", - "urls": [ - "/${VERSION}/architecture/storage-layer.html" - ] - } - ] - }, - { - "title": "Contribute", - "items": [ - { - "title": "Improve the Docs", - "urls": [ - "/${VERSION}/improve-the-docs.html" - ] - } - ] - }, - {% include sidebar-releases.json %}, 
- { - "title": "FAQs", - "items": [ - { - "title": "Product FAQs", - "urls": [ - "/${VERSION}/frequently-asked-questions.html" - ] - }, - { - "title": "SQL FAQs", - "urls": [ - "/${VERSION}/sql-faqs.html" - ] - }, - { - "title": "Operational FAQs", - "urls": [ - "/${VERSION}/operational-faqs.html" - ] - }, - { - "title": "CockroachDB in Comparison", - "urls": [ - "/${VERSION}/cockroachdb-in-comparison.html" - ] - }, - { - "title": "CockroachDB Features", - "items": [ - { - "title": "Multi-Active Availability", - "urls": [ - "/${VERSION}/multi-active-availability.html" - ] - }, - { - "title": "Simplified Deployment", - "urls": [ - "/${VERSION}/simplified-deployment.html" - ] - }, - { - "title": "Strong Consistency", - "urls": [ - "/${VERSION}/strong-consistency.html" - ] - }, - { - "title": "SQL", - "urls": [ - "/${VERSION}/sql.html" - ] - }, - { - "title": "Distributed Transactions", - "urls": [ - "/${VERSION}/distributed-transactions.html" - ] - }, - { - "title": "Automated Scaling & Repair", - "urls": [ - "/${VERSION}/automated-scaling-and-repair.html" - ] - }, - { - "title": "High Availability", - "urls": [ - "/${VERSION}/high-availability.html" - ] - }, - { - "title": "Open Source", - "urls": [ - "/${VERSION}/open-source.html" - ] - }, - { - "title": "Go Implementation", - "urls": [ - "/${VERSION}/go-implementation.html" - ] - } - ] - } - ] - } - ] - } -] diff --git a/src/current/_includes/v2.0/admin-ui-custom-chart-debug-page-00.html b/src/current/_includes/v2.0/admin-ui-custom-chart-debug-page-00.html deleted file mode 100644 index 36e0764df99..00000000000 --- a/src/current/_includes/v2.0/admin-ui-custom-chart-debug-page-00.html +++ /dev/null @@ -1,109 +0,0 @@ - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-<table>
-  <tr>
-    <th>Column</th>
-    <th>Description</th>
-  </tr>
-  <tr>
-    <td>Metric Name</td>
-    <td>How the system refers to this metric, e.g., <code>sql.bytesin</code>.</td>
-  </tr>
-  <tr>
-    <td>Downsampler</td>
-    <td>
-      <p>The "Downsampler" operation is used to combine the individual datapoints over the longer period into a single datapoint. We store one data point every ten seconds, but for queries over long time spans the backend lowers the resolution of the returned data, perhaps only returning one data point for every minute, five minutes, or even an entire hour in the case of the 30 day view.</p>
-      <p>Options:</p>
-      <ul>
-        <li><strong>AVG</strong>: Returns the average value over the time period.</li>
-        <li><strong>MIN</strong>: Returns the lowest value seen.</li>
-        <li><strong>MAX</strong>: Returns the highest value seen.</li>
-        <li><strong>SUM</strong>: Returns the sum of all values seen.</li>
-      </ul>
-    </td>
-  </tr>
-  <tr>
-    <td>Aggregator</td>
-    <td>
-      <p>Used to combine data points from different nodes. It has the same operations available as the Downsampler.</p>
-      <p>Options:</p>
-      <ul>
-        <li><strong>AVG</strong>: Returns the average value over the time period.</li>
-        <li><strong>MIN</strong>: Returns the lowest value seen.</li>
-        <li><strong>MAX</strong>: Returns the highest value seen.</li>
-        <li><strong>SUM</strong>: Returns the sum of all values seen.</li>
-      </ul>
-    </td>
-  </tr>
-  <tr>
-    <td>Rate</td>
-    <td>
-      <p>Determines how to display the rate of change during the selected time period.</p>
-      <p>Options:</p>
-      <ul>
-        <li><strong>Normal</strong>: Returns the actual recorded value.</li>
-        <li><strong>Rate</strong>: Returns the rate of change of the value per second.</li>
-        <li><strong>Non-negative Rate</strong>: Returns the rate of change, but returns 0 instead of negative values. A large number of the stats we track are actually tracked as monotonically increasing counters, so each sample is just the total value of that counter. The rate of change of that counter represents the rate of events being counted, which is usually what you want to graph. "Non-negative Rate" is needed because the counters are stored in memory, and thus if a node resets it goes back to zero (whereas normally they only increase).</li>
-      </ul>
-    </td>
-  </tr>
-  <tr>
-    <td>Source</td>
-    <td>
-      <p>The set of nodes being queried, which is either:</p>
-      <ul>
-        <li>The entire cluster.</li>
-        <li>A single, named node.</li>
-      </ul>
-    </td>
-  </tr>
-  <tr>
-    <td>Per Node</td>
-    <td>If checked, the chart will show a line for each node's value of this metric.</td>
-  </tr>
-</table>
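The Downsampler, Aggregator, and Rate options above are simple reductions over raw timeseries samples. As a rough sketch of those semantics (not CockroachDB's actual timeseries code; the function names and the fixed 10-second sample interval are assumptions taken from the table), in Go:

~~~ go
package main

import "fmt"

// combine reduces a set of values to a single value using a
// Downsampler/Aggregator operation: AVG, MIN, MAX, or SUM.
func combine(op string, values []float64) float64 {
	if len(values) == 0 {
		return 0
	}
	sum, min, max := 0.0, values[0], values[0]
	for _, v := range values {
		sum += v
		if v < min {
			min = v
		}
		if v > max {
			max = v
		}
	}
	switch op {
	case "AVG":
		return sum / float64(len(values))
	case "MIN":
		return min
	case "MAX":
		return max
	default: // SUM
		return sum
	}
}

// nonNegativeRate turns successive samples of a monotonically increasing
// counter into per-second rates, clamping negative deltas (e.g., a counter
// that reset to zero after a node restart) to 0.
func nonNegativeRate(samples []float64, intervalSeconds float64) []float64 {
	rates := make([]float64, 0, len(samples))
	for i := 1; i < len(samples); i++ {
		r := (samples[i] - samples[i-1]) / intervalSeconds
		if r < 0 {
			r = 0 // the counter went backwards: a node restarted
		}
		rates = append(rates, r)
	}
	return rates
}

func main() {
	// Six 10-second counter samples; the drop simulates a node restart.
	counter := []float64{100, 160, 220, 10, 70, 130}
	fmt.Println(nonNegativeRate(counter, 10))       // [6 6 0 6 6]
	fmt.Println(combine("AVG", []float64{2, 4, 9})) // 5
}
~~~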
diff --git a/src/current/_includes/v2.0/app/BasicSample.java b/src/current/_includes/v2.0/app/BasicSample.java deleted file mode 100644 index 5fcac022e4e..00000000000 --- a/src/current/_includes/v2.0/app/BasicSample.java +++ /dev/null @@ -1,54 +0,0 @@ -import java.sql.*; -import java.util.Properties; - -/* - Download the Postgres JDBC driver jar from https://jdbc.postgresql.org. - - Then, compile and run this example like so: - - $ export CLASSPATH=.:/path/to/postgresql.jar - $ javac BasicSample.java && java BasicSample -*/ - -public class BasicSample { - public static void main(String[] args) - throws ClassNotFoundException, SQLException { - - // Load the Postgres JDBC driver. - Class.forName("org.postgresql.Driver"); - - // Connect to the "bank" database. - Properties props = new Properties(); - props.setProperty("user", "maxroach"); - props.setProperty("sslmode", "require"); - props.setProperty("sslrootcert", "certs/ca.crt"); - props.setProperty("sslkey", "certs/client.maxroach.pk8"); - props.setProperty("sslcert", "certs/client.maxroach.crt"); - - Connection db = DriverManager - .getConnection("jdbc:postgresql://127.0.0.1:26257/bank", props); - - try { - // Create the "accounts" table. - db.createStatement() - .execute("CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT)"); - - // Insert two rows into the "accounts" table. - db.createStatement() - .execute("INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250)"); - - // Print out the balances. - System.out.println("Initial balances:"); - ResultSet res = db.createStatement() - .executeQuery("SELECT id, balance FROM accounts"); - while (res.next()) { - System.out.printf("\taccount %s: %s\n", - res.getInt("id"), - res.getInt("balance")); - } - } finally { - // Close the database connection. - db.close(); - } - } -} diff --git a/src/current/_includes/v2.0/app/TxnSample.java b/src/current/_includes/v2.0/app/TxnSample.java deleted file mode 100644 index dd0851a56cf..00000000000 --- a/src/current/_includes/v2.0/app/TxnSample.java +++ /dev/null @@ -1,147 +0,0 @@ -import java.sql.*; -import java.util.Properties; - -/* - Download the Postgres JDBC driver jar from https://jdbc.postgresql.org. - - Then, compile and run this example like so: - - $ export CLASSPATH=.:/path/to/postgresql.jar - $ javac TxnSample.java && java TxnSample -*/ - -// Ambiguous whether the transaction committed or not. -class AmbiguousCommitException extends SQLException{ - public AmbiguousCommitException(Throwable cause) { - super(cause); - } -} - -class InsufficientBalanceException extends Exception {} - -class AccountNotFoundException extends Exception { - public int account; - public AccountNotFoundException(int account) { - this.account = account; - } -} - -// A simple interface that provides a retryable lambda expression. -interface RetryableTransaction { - public void run(Connection conn) - throws SQLException, InsufficientBalanceException, - AccountNotFoundException, AmbiguousCommitException; -} - -public class TxnSample { - public static RetryableTransaction transferFunds(int from, int to, int amount) { - return new RetryableTransaction() { - public void run(Connection conn) - throws SQLException, InsufficientBalanceException, - AccountNotFoundException, AmbiguousCommitException { - - // Check the current balance. 
- ResultSet res = conn.createStatement() - .executeQuery("SELECT balance FROM accounts WHERE id = " - + from); - if(!res.next()) { - throw new AccountNotFoundException(from); - } - - int balance = res.getInt("balance"); - if(balance < amount) { - throw new InsufficientBalanceException(); - } - - // Perform the transfer. - conn.createStatement() - .executeUpdate("UPDATE accounts SET balance = balance - " - + amount + " where id = " + from); - conn.createStatement() - .executeUpdate("UPDATE accounts SET balance = balance + " - + amount + " where id = " + to); - } - }; - } - - public static void retryTransaction(Connection conn, RetryableTransaction tx) - throws SQLException, InsufficientBalanceException, - AccountNotFoundException, AmbiguousCommitException { - - Savepoint sp = conn.setSavepoint("cockroach_restart"); - while(true) { - boolean releaseAttempted = false; - try { - tx.run(conn); - releaseAttempted = true; - conn.releaseSavepoint(sp); - break; - } - catch(SQLException e) { - String sqlState = e.getSQLState(); - - // Check if the error code indicates a SERIALIZATION_FAILURE. - if(sqlState.equals("40001")) { - // Signal the database that we will attempt a retry. - conn.rollback(sp); - } else if(releaseAttempted) { - throw new AmbiguousCommitException(e); - } else { - throw e; - } - } - } - conn.commit(); - } - - public static void main(String[] args) - throws ClassNotFoundException, SQLException { - - // Load the Postgres JDBC driver. - Class.forName("org.postgresql.Driver"); - - // Connect to the 'bank' database. - Properties props = new Properties(); - props.setProperty("user", "maxroach"); - props.setProperty("sslmode", "require"); - props.setProperty("sslrootcert", "certs/ca.crt"); - props.setProperty("sslkey", "certs/client.maxroach.pk8"); - props.setProperty("sslcert", "certs/client.maxroach.crt"); - - Connection db = DriverManager - .getConnection("jdbc:postgresql://127.0.0.1:26257/bank", props); - - - try { - // We need to turn off autocommit mode to allow for - // multi-statement transactions. - db.setAutoCommit(false); - - // Perform the transfer. This assumes the 'accounts' - // table has already been created in the database. - RetryableTransaction transfer = transferFunds(1, 2, 100); - retryTransaction(db, transfer); - - // Check balances after transfer. - db.setAutoCommit(true); - ResultSet res = db.createStatement() - .executeQuery("SELECT id, balance FROM accounts"); - while (res.next()) { - System.out.printf("\taccount %s: %s\n", res.getInt("id"), - res.getInt("balance")); - } - - } catch(InsufficientBalanceException e) { - System.out.println("Insufficient balance"); - } catch(AccountNotFoundException e) { - System.out.println("No users in the table with id " + e.account); - } catch(AmbiguousCommitException e) { - System.out.println("Ambiguous result encountered: " + e); - } catch(SQLException e) { - System.out.println("SQLException encountered: " + e); - } finally { - // Close the database connection. - db.close(); - } - } -} diff --git a/src/current/_includes/v2.0/app/activerecord-basic-sample.rb b/src/current/_includes/v2.0/app/activerecord-basic-sample.rb deleted file mode 100644 index f1d35e1de3a..00000000000 --- a/src/current/_includes/v2.0/app/activerecord-basic-sample.rb +++ /dev/null @@ -1,48 +0,0 @@ -require 'active_record' -require 'activerecord-cockroachdb-adapter' -require 'pg' - -# Connect to CockroachDB through ActiveRecord. -# In Rails, this configuration would go in config/database.yml as usual. 
-ActiveRecord::Base.establish_connection( - adapter: 'cockroachdb', - username: 'maxroach', - database: 'bank', - host: 'localhost', - port: 26257, - sslmode: 'require', - sslrootcert: 'certs/ca.crt', - sslkey: 'certs/client.maxroach.key', - sslcert: 'certs/client.maxroach.crt' -) - - -# Define the Account model. -# In Rails, this would go in app/models/ as usual. -class Account < ActiveRecord::Base - validates :id, presence: true - validates :balance, presence: true -end - -# Define a migration for the accounts table. -# In Rails, this would go in db/migrate/ as usual. -class Schema < ActiveRecord::Migration[5.0] - def change - create_table :accounts, force: true do |t| - t.integer :balance - end - end -end - -# Run the schema migration by hand. -# In Rails, this would be done via rake db:migrate as usual. -Schema.new.change() - -# Create two accounts, inserting two rows into the accounts table. -Account.create(id: 1, balance: 1000) -Account.create(id: 2, balance: 250) - -# Retrieve accounts and print out the balances -Account.all.each do |acct| - puts "#{acct.id} #{acct.balance}" -end diff --git a/src/current/_includes/v2.0/app/basic-sample.c b/src/current/_includes/v2.0/app/basic-sample.c deleted file mode 100644 index e69de29bb2d..00000000000 diff --git a/src/current/_includes/v2.0/app/basic-sample.clj b/src/current/_includes/v2.0/app/basic-sample.clj deleted file mode 100644 index b139d27b8e1..00000000000 --- a/src/current/_includes/v2.0/app/basic-sample.clj +++ /dev/null @@ -1,31 +0,0 @@ -(ns test.test - (:require [clojure.java.jdbc :as j] - [test.util :as util])) - -;; Define the connection parameters to the cluster. -(def db-spec {:subprotocol "postgresql" - :subname "//localhost:26257/bank" - :user "maxroach" - :password ""}) - -(defn test-basic [] - ;; Connect to the cluster and run the code below with - ;; the connection object bound to 'conn'. - (j/with-db-connection [conn db-spec] - - ;; Insert two rows into the "accounts" table. - (j/insert! conn :accounts {:id 1 :balance 1000}) - (j/insert! conn :accounts {:id 2 :balance 250}) - - ;; Print out the balances. - (println "Initial balances:") - (->> (j/query conn ["SELECT id, balance FROM accounts"]) - (map println) - doall) - - ;; The database connection is automatically closed by with-db-connection. - )) - - -(defn -main [& args] - (test-basic)) diff --git a/src/current/_includes/v2.0/app/basic-sample.cpp b/src/current/_includes/v2.0/app/basic-sample.cpp deleted file mode 100644 index 0cdb6f65bfd..00000000000 --- a/src/current/_includes/v2.0/app/basic-sample.cpp +++ /dev/null @@ -1,41 +0,0 @@ -// Build with g++ -std=c++11 basic-sample.cpp -lpq -lpqxx - -#include <cassert> -#include <functional> -#include <iostream> -#include <stdexcept> -#include <string> -#include <pqxx/pqxx> - -using namespace std; - -int main() { - try { - // Connect to the "bank" database. - pqxx::connection c("postgresql://maxroach@localhost:26257/bank"); - - pqxx::nontransaction w(c); - - // Create the "accounts" table. - w.exec("CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT)"); - - // Insert two rows into the "accounts" table. - w.exec("INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250)"); - - // Print out the balances. - cout << "Initial balances:" << endl; - pqxx::result r = w.exec("SELECT id, balance FROM accounts"); - for (auto row : r) { - cout << row[0].as<int>() << ' ' << row[1].as<int>() << endl; - } - - w.commit(); // Note this doesn't actually do anything - // for a nontransaction, but is still required. 
- } - catch (const exception &e) { - cerr << e.what() << endl; - return 1; - } - cout << "Success" << endl; - return 0; -} diff --git a/src/current/_includes/v2.0/app/basic-sample.cs b/src/current/_includes/v2.0/app/basic-sample.cs deleted file mode 100644 index 487ab7ba67c..00000000000 --- a/src/current/_includes/v2.0/app/basic-sample.cs +++ /dev/null @@ -1,49 +0,0 @@ -using System; -using System.Data; -using Npgsql; - -namespace Cockroach -{ - class MainClass - { - static void Main(string[] args) - { - var connStringBuilder = new NpgsqlConnectionStringBuilder(); - connStringBuilder.Host = "localhost"; - connStringBuilder.Port = 26257; - connStringBuilder.Username = "maxroach"; - connStringBuilder.Database = "bank"; - Simple(connStringBuilder.ConnectionString); - } - - static void Simple(string connString) - { - using(var conn = new NpgsqlConnection(connString)) - { - conn.Open(); - - // Create the "accounts" table. - new NpgsqlCommand("CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT)", conn).ExecuteNonQuery(); - - // Insert two rows into the "accounts" table. - using(var cmd = new NpgsqlCommand()) - { - cmd.Connection = conn; - cmd.CommandText = "UPSERT INTO accounts(id, balance) VALUES(@id1, @val1), (@id2, @val2)"; - cmd.Parameters.AddWithValue("id1", 1); - cmd.Parameters.AddWithValue("val1", 1000); - cmd.Parameters.AddWithValue("id2", 2); - cmd.Parameters.AddWithValue("val2", 250); - cmd.ExecuteNonQuery(); - } - - // Print out the balances. - System.Console.WriteLine("Initial balances:"); - using(var cmd = new NpgsqlCommand("SELECT id, balance FROM accounts", conn)) - using(var reader = cmd.ExecuteReader()) - while (reader.Read()) - Console.Write("\taccount {0}: {1}\n", reader.GetValue(0), reader.GetValue(1)); - } - } - } -} diff --git a/src/current/_includes/v2.0/app/basic-sample.go b/src/current/_includes/v2.0/app/basic-sample.go deleted file mode 100644 index 6e22c858dbb..00000000000 --- a/src/current/_includes/v2.0/app/basic-sample.go +++ /dev/null @@ -1,46 +0,0 @@ -package main - -import ( - "database/sql" - "fmt" - "log" - - _ "github.com/lib/pq" -) - -func main() { - // Connect to the "bank" database. - db, err := sql.Open("postgres", - "postgresql://maxroach@localhost:26257/bank?ssl=true&sslmode=require&sslrootcert=certs/ca.crt&sslkey=certs/client.maxroach.key&sslcert=certs/client.maxroach.crt") - if err != nil { - log.Fatal("error connecting to the database: ", err) - } - defer db.Close() - - // Create the "accounts" table. - if _, err := db.Exec( - "CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT)"); err != nil { - log.Fatal(err) - } - - // Insert two rows into the "accounts" table. - if _, err := db.Exec( - "INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250)"); err != nil { - log.Fatal(err) - } - - // Print out the balances. - rows, err := db.Query("SELECT id, balance FROM accounts") - if err != nil { - log.Fatal(err) - } - defer rows.Close() - fmt.Println("Initial balances:") - for rows.Next() { - var id, balance int - if err := rows.Scan(&id, &balance); err != nil { - log.Fatal(err) - } - fmt.Printf("%d %d\n", id, balance) - } -} diff --git a/src/current/_includes/v2.0/app/basic-sample.js b/src/current/_includes/v2.0/app/basic-sample.js deleted file mode 100644 index 4e86cb2cbca..00000000000 --- a/src/current/_includes/v2.0/app/basic-sample.js +++ /dev/null @@ -1,63 +0,0 @@ -var async = require('async'); -var fs = require('fs'); -var pg = require('pg'); - -// Connect to the "bank" database. 
-var config = { - user: 'maxroach', - host: 'localhost', - database: 'bank', - port: 26257, - ssl: { - ca: fs.readFileSync('certs/ca.crt') - .toString(), - key: fs.readFileSync('certs/client.maxroach.key') - .toString(), - cert: fs.readFileSync('certs/client.maxroach.crt') - .toString() - } -}; - -// Create a pool. -var pool = new pg.Pool(config); - -pool.connect(function (err, client, done) { - - // Close communication with the database and exit. - var finish = function () { - done(); - process.exit(); - }; - - if (err) { - console.error('could not connect to cockroachdb', err); - finish(); - } - async.waterfall([ - function (next) { - // Create the 'accounts' table. - client.query('CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT);', next); - }, - function (results, next) { - // Insert two rows into the 'accounts' table. - client.query('INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250);', next); - }, - function (results, next) { - // Print out account balances. - client.query('SELECT id, balance FROM accounts;', next); - }, - ], - function (err, results) { - if (err) { - console.error('Error inserting into and selecting from accounts: ', err); - finish(); - } - - console.log('Initial balances:'); - results.rows.forEach(function (row) { - console.log(row); - }); - - finish(); - }); -}); diff --git a/src/current/_includes/v2.0/app/basic-sample.php b/src/current/_includes/v2.0/app/basic-sample.php deleted file mode 100644 index 4edae09b12a..00000000000 --- a/src/current/_includes/v2.0/app/basic-sample.php +++ /dev/null @@ -1,20 +0,0 @@ -<?php -try { - $dbh = new PDO('pgsql:host=localhost;port=26257;dbname=bank;sslmode=require;sslrootcert=certs/ca.crt;sslkey=certs/client.maxroach.key;sslcert=certs/client.maxroach.crt', - 'maxroach', null, array( - PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION, - PDO::ATTR_EMULATE_PREPARES => true, - PDO::ATTR_PERSISTENT => true - )); - - $dbh->exec('INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250)'); - - print "Account balances:\r\n"; - foreach ($dbh->query('SELECT id, balance FROM accounts') as $row) { - print $row['id'] . ': ' . $row['balance'] . "\r\n"; - } -} catch (Exception $e) { - print $e->getMessage() . "\r\n"; - exit(1); -} -?> diff --git a/src/current/_includes/v2.0/app/basic-sample.py b/src/current/_includes/v2.0/app/basic-sample.py deleted file mode 100644 index edf1b2617d0..00000000000 --- a/src/current/_includes/v2.0/app/basic-sample.py +++ /dev/null @@ -1,37 +0,0 @@ -# Import the driver. -import psycopg2 - -# Connect to the "bank" database. -conn = psycopg2.connect( - database='bank', - user='maxroach', - sslmode='require', - sslrootcert='certs/ca.crt', - sslkey='certs/client.maxroach.key', - sslcert='certs/client.maxroach.crt', - port=26257, - host='localhost' -) - -# Make each statement commit immediately. -conn.set_session(autocommit=True) - -# Open a cursor to perform database operations. -cur = conn.cursor() - -# Create the "accounts" table. -cur.execute("CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT)") - -# Insert two rows into the "accounts" table. -cur.execute("INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250)") - -# Print out the balances. -cur.execute("SELECT id, balance FROM accounts") -rows = cur.fetchall() -print('Initial balances:') -for row in rows: - print([str(cell) for cell in row]) - -# Close the database connection. -cur.close() -conn.close() diff --git a/src/current/_includes/v2.0/app/basic-sample.rb b/src/current/_includes/v2.0/app/basic-sample.rb deleted file mode 100644 index 93f0dc3d20c..00000000000 --- a/src/current/_includes/v2.0/app/basic-sample.rb +++ /dev/null @@ -1,31 +0,0 @@ -# Import the driver. -require 'pg' - -# Connect to the "bank" database. 
-conn = PG.connect( - user: 'maxroach', - dbname: 'bank', - host: 'localhost', - port: 26257, - sslmode: 'require', - sslrootcert: 'certs/ca.crt', - sslkey:'certs/client.maxroach.key', - sslcert:'certs/client.maxroach.crt' -) - -# Create the "accounts" table. -conn.exec('CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT)') - -# Insert two rows into the "accounts" table. -conn.exec('INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250)') - -# Print out the balances. -puts 'Initial balances:' -conn.exec('SELECT id, balance FROM accounts') do |res| - res.each do |row| - puts row - end -end - -# Close communication with the database. -conn.close() diff --git a/src/current/_includes/v2.0/app/basic-sample.rs b/src/current/_includes/v2.0/app/basic-sample.rs deleted file mode 100644 index f381d500028..00000000000 --- a/src/current/_includes/v2.0/app/basic-sample.rs +++ /dev/null @@ -1,22 +0,0 @@ -extern crate postgres; - -use postgres::{Connection, TlsMode}; - -fn main() { - let conn = Connection::connect("postgresql://maxroach@localhost:26257/bank", TlsMode::None) - .unwrap(); - - // Insert two rows into the "accounts" table. - conn.execute( - "INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250)", - &[], - ).unwrap(); - - // Print out the balances. - println!("Initial balances:"); - for row in &conn.query("SELECT id, balance FROM accounts", &[]).unwrap() { - let id: i64 = row.get(0); - let balance: i64 = row.get(1); - println!("{} {}", id, balance); - } -} diff --git a/src/current/_includes/v2.0/app/before-you-begin.md b/src/current/_includes/v2.0/app/before-you-begin.md deleted file mode 100644 index dfb97226414..00000000000 --- a/src/current/_includes/v2.0/app/before-you-begin.md +++ /dev/null @@ -1,8 +0,0 @@ -1. [Install CockroachDB](install-cockroachdb.html). -2. Start up a [secure](secure-a-cluster.html) or [insecure](start-a-local-cluster.html) local cluster. -3. Choose the instructions that correspond to whether your cluster is secure or insecure: - -
- - -
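For reference, the two TxnSample.java files above and below in this diff implement CockroachDB's client-side transaction retry protocol: do the work after a SAVEPOINT named cockroach_restart, roll back to that savepoint and retry whenever the server returns SQLSTATE 40001, and treat a failure after a successful RELEASE as an ambiguous commit. A minimal Go sketch of the same loop (assuming the github.com/lib/pq driver used by the Go samples in this diff; the cockroach-go library's crdb.ExecuteTx wraps this same pattern for production use):

~~~ go
package main

import (
	"database/sql"
	"fmt"
	"log"

	"github.com/lib/pq"
)

// runTransaction runs fn inside a transaction, retrying it on serialization
// failures (SQLSTATE 40001) via the special "cockroach_restart" savepoint.
func runTransaction(db *sql.DB, fn func(*sql.Tx) error) error {
	tx, err := db.Begin()
	if err != nil {
		return err
	}
	defer tx.Rollback() // no-op once the transaction has committed

	if _, err := tx.Exec("SAVEPOINT cockroach_restart"); err != nil {
		return err
	}
	for {
		released := false
		err := fn(tx)
		if err == nil {
			released = true
			if _, err = tx.Exec("RELEASE SAVEPOINT cockroach_restart"); err == nil {
				return tx.Commit()
			}
		}
		if pqErr, ok := err.(*pq.Error); ok && pqErr.Code == "40001" {
			// Retryable: rewind to the savepoint and run fn again.
			if _, rbErr := tx.Exec("ROLLBACK TO SAVEPOINT cockroach_restart"); rbErr != nil {
				return rbErr
			}
			continue
		}
		if released {
			// RELEASE failed with a non-retryable error: the commit may or
			// may not have been applied.
			return fmt.Errorf("ambiguous commit: %v", err)
		}
		return err
	}
}

func main() {
	db, err := sql.Open("postgres", "postgresql://maxroach@localhost:26257/bank?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Transfer 100 from account 1 to account 2, retrying as needed.
	err = runTransaction(db, func(tx *sql.Tx) error {
		if _, err := tx.Exec("UPDATE accounts SET balance = balance - 100 WHERE id = 1"); err != nil {
			return err
		}
		_, err := tx.Exec("UPDATE accounts SET balance = balance + 100 WHERE id = 2")
		return err
	})
	if err != nil {
		log.Fatal(err)
	}
}
~~~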
diff --git a/src/current/_includes/v2.0/app/common-steps.md b/src/current/_includes/v2.0/app/common-steps.md deleted file mode 100644 index 76dfe6a008c..00000000000 --- a/src/current/_includes/v2.0/app/common-steps.md +++ /dev/null @@ -1,36 +0,0 @@ -## Step 2. Start a single-node cluster - -For the purpose of this tutorial, you need only one CockroachDB node running in insecure mode: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach start \ ---insecure \ ---store=hello-1 \ ---host=localhost -~~~ - -## Step 3. Create a user - -In a new terminal, as the `root` user, use the [`cockroach user`](create-and-manage-users.html) command to create a new user, `maxroach`. - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach user set maxroach --insecure -~~~ - -## Step 4. Create a database and grant privileges - -As the `root` user, use the [built-in SQL client](use-the-built-in-sql-client.html) to create a `bank` database. - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure -e 'CREATE DATABASE bank' -~~~ - -Then [grant privileges](grant.html) to the `maxroach` user. - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure -e 'GRANT ALL ON DATABASE bank TO maxroach' -~~~ diff --git a/src/current/_includes/v2.0/app/create-maxroach-user-and-bank-database.md b/src/current/_includes/v2.0/app/create-maxroach-user-and-bank-database.md deleted file mode 100644 index e887162f380..00000000000 --- a/src/current/_includes/v2.0/app/create-maxroach-user-and-bank-database.md +++ /dev/null @@ -1,32 +0,0 @@ -Start the [built-in SQL client](use-the-built-in-sql-client.html): - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --certs-dir=certs -~~~ - -In the SQL shell, issue the following statements to create the `maxroach` user and `bank` database: - -{% include copy-clipboard.html %} -~~~ sql -> CREATE USER IF NOT EXISTS maxroach; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> CREATE DATABASE bank; -~~~ - -Give the `maxroach` user the necessary permissions: - -{% include copy-clipboard.html %} -~~~ sql -> GRANT ALL ON DATABASE bank TO maxroach; -~~~ - -Exit the SQL shell: - -{% include copy-clipboard.html %} -~~~ sql -> \q -~~~ diff --git a/src/current/_includes/v2.0/app/gorm-basic-sample.go b/src/current/_includes/v2.0/app/gorm-basic-sample.go deleted file mode 100644 index d18948b80b2..00000000000 --- a/src/current/_includes/v2.0/app/gorm-basic-sample.go +++ /dev/null @@ -1,41 +0,0 @@ -package main - -import ( - "fmt" - "log" - - // Import GORM-related packages. - "github.com/jinzhu/gorm" - _ "github.com/jinzhu/gorm/dialects/postgres" -) - -// Account is our model, which corresponds to the "accounts" database table. -type Account struct { - ID int `gorm:"primary_key"` - Balance int -} - -func main() { - // Connect to the "bank" database as the "maxroach" user. - const addr = "postgresql://maxroach@localhost:26257/bank?ssl=true&sslmode=require&sslrootcert=certs/ca.crt&sslkey=certs/client.maxroach.key&sslcert=certs/client.maxroach.crt" - db, err := gorm.Open("postgres", addr) - if err != nil { - log.Fatal(err) - } - defer db.Close() - - // Automatically create the "accounts" table based on the Account model. - db.AutoMigrate(&Account{}) - - // Insert two rows into the "accounts" table. - db.Create(&Account{ID: 1, Balance: 1000}) - db.Create(&Account{ID: 2, Balance: 250}) - - // Print out the balances. 
- var accounts []Account - db.Find(&accounts) - fmt.Println("Initial balances:") - for _, account := range accounts { - fmt.Printf("%d %d\n", account.ID, account.Balance) - } -} diff --git a/src/current/_includes/v2.0/app/hibernate-basic-sample/Sample.java b/src/current/_includes/v2.0/app/hibernate-basic-sample/Sample.java deleted file mode 100644 index ed36ae15ad3..00000000000 --- a/src/current/_includes/v2.0/app/hibernate-basic-sample/Sample.java +++ /dev/null @@ -1,64 +0,0 @@ -package com.cockroachlabs; - -import org.hibernate.Session; -import org.hibernate.SessionFactory; -import org.hibernate.cfg.Configuration; - -import javax.persistence.Column; -import javax.persistence.Entity; -import javax.persistence.Id; -import javax.persistence.Table; -import javax.persistence.criteria.CriteriaQuery; - -public class Sample { - // Create a SessionFactory based on our hibernate.cfg.xml configuration - // file, which defines how to connect to the database. - private static final SessionFactory sessionFactory = - new Configuration() - .configure("hibernate.cfg.xml") - .addAnnotatedClass(Account.class) - .buildSessionFactory(); - - // Account is our model, which corresponds to the "accounts" database table. - @Entity - @Table(name="accounts") - public static class Account { - @Id - @Column(name="id") - public long id; - - @Column(name="balance") - public long balance; - - // Convenience constructor. - public Account(int id, int balance) { - this.id = id; - this.balance = balance; - } - - // Hibernate needs a default (no-arg) constructor to create model objects. - public Account() {} - } - - public static void main(String[] args) throws Exception { - Session session = sessionFactory.openSession(); - - try { - // Insert two rows into the "accounts" table. - session.beginTransaction(); - session.save(new Account(1, 1000)); - session.save(new Account(2, 250)); - session.getTransaction().commit(); - - // Print out the balances. 
- CriteriaQuery query = session.getCriteriaBuilder().createQuery(Account.class); - query.select(query.from(Account.class)); - for (Account account : session.createQuery(query).getResultList()) { - System.out.printf("%d %d\n", account.id, account.balance); - } - } finally { - session.close(); - sessionFactory.close(); - } - } -} diff --git a/src/current/_includes/v2.0/app/hibernate-basic-sample/build.gradle b/src/current/_includes/v2.0/app/hibernate-basic-sample/build.gradle deleted file mode 100644 index 36f33d73fe6..00000000000 --- a/src/current/_includes/v2.0/app/hibernate-basic-sample/build.gradle +++ /dev/null @@ -1,16 +0,0 @@ -group 'com.cockroachlabs' -version '1.0' - -apply plugin: 'java' -apply plugin: 'application' - -mainClassName = 'com.cockroachlabs.Sample' - -repositories { - mavenCentral() -} - -dependencies { - compile 'org.hibernate:hibernate-core:5.2.4.Final' - compile 'org.postgresql:postgresql:42.2.2.jre7' -} diff --git a/src/current/_includes/v2.0/app/hibernate-basic-sample/hibernate-basic-sample.tgz b/src/current/_includes/v2.0/app/hibernate-basic-sample/hibernate-basic-sample.tgz deleted file mode 100644 index d97232aa172..00000000000 Binary files a/src/current/_includes/v2.0/app/hibernate-basic-sample/hibernate-basic-sample.tgz and /dev/null differ diff --git a/src/current/_includes/v2.0/app/hibernate-basic-sample/hibernate.cfg.xml b/src/current/_includes/v2.0/app/hibernate-basic-sample/hibernate.cfg.xml deleted file mode 100644 index 2213cc85ea5..00000000000 --- a/src/current/_includes/v2.0/app/hibernate-basic-sample/hibernate.cfg.xml +++ /dev/null @@ -1,21 +0,0 @@ - - - - - - - org.postgresql.Driver - org.hibernate.dialect.PostgreSQLDialect - - maxroach - - - create - - - true - true - - diff --git a/src/current/_includes/v2.0/app/insecure/BasicSample.java b/src/current/_includes/v2.0/app/insecure/BasicSample.java deleted file mode 100644 index 001d38feb48..00000000000 --- a/src/current/_includes/v2.0/app/insecure/BasicSample.java +++ /dev/null @@ -1,51 +0,0 @@ -import java.sql.*; -import java.util.Properties; - -/* - Download the Postgres JDBC driver jar from https://jdbc.postgresql.org. - - Then, compile and run this example like so: - - $ export CLASSPATH=.:/path/to/postgresql.jar - $ javac BasicSample.java && java BasicSample -*/ - -public class BasicSample { - public static void main(String[] args) - throws ClassNotFoundException, SQLException { - - // Load the Postgres JDBC driver. - Class.forName("org.postgresql.Driver"); - - // Connect to the "bank" database. - Properties props = new Properties(); - props.setProperty("user", "maxroach"); - props.setProperty("sslmode", "disable"); - - Connection db = DriverManager - .getConnection("jdbc:postgresql://127.0.0.1:26257/bank", props); - - try { - // Create the "accounts" table. - db.createStatement() - .execute("CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT)"); - - // Insert two rows into the "accounts" table. - db.createStatement() - .execute("INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250)"); - - // Print out the balances. - System.out.println("Initial balances:"); - ResultSet res = db.createStatement() - .executeQuery("SELECT id, balance FROM accounts"); - while (res.next()) { - System.out.printf("\taccount %s: %s\n", - res.getInt("id"), - res.getInt("balance")); - } - } finally { - // Close the database connection. 
- db.close(); - } - } -} diff --git a/src/current/_includes/v2.0/app/insecure/TxnSample.java b/src/current/_includes/v2.0/app/insecure/TxnSample.java deleted file mode 100644 index 11021ec0e71..00000000000 --- a/src/current/_includes/v2.0/app/insecure/TxnSample.java +++ /dev/null @@ -1,145 +0,0 @@ -import java.sql.*; -import java.util.Properties; - -/* - Download the Postgres JDBC driver jar from https://jdbc.postgresql.org. - - Then, compile and run this example like so: - - $ export CLASSPATH=.:/path/to/postgresql.jar - $ javac TxnSample.java && java TxnSample -*/ - -// Ambiguous whether the transaction committed or not. -class AmbiguousCommitException extends SQLException{ - public AmbiguousCommitException(Throwable cause) { - super(cause); - } -} - -class InsufficientBalanceException extends Exception {} - -class AccountNotFoundException extends Exception { - public int account; - public AccountNotFoundException(int account) { - this.account = account; - } -} - -// A simple interface that provides a retryable lambda expression. -interface RetryableTransaction { - public void run(Connection conn) - throws SQLException, InsufficientBalanceException, - AccountNotFoundException, AmbiguousCommitException; -} - -public class TxnSample { - public static RetryableTransaction transferFunds(int from, int to, int amount) { - return new RetryableTransaction() { - public void run(Connection conn) - throws SQLException, InsufficientBalanceException, - AccountNotFoundException, AmbiguousCommitException { - - // Check the current balance. - ResultSet res = conn.createStatement() - .executeQuery("SELECT balance FROM accounts WHERE id = " - + from); - if(!res.next()) { - throw new AccountNotFoundException(from); - } - - int balance = res.getInt("balance"); - if(balance < amount) { - throw new InsufficientBalanceException(); - } - - // Perform the transfer. - conn.createStatement() - .executeUpdate("UPDATE accounts SET balance = balance - " - + amount + " where id = " + from); - conn.createStatement() - .executeUpdate("UPDATE accounts SET balance = balance + " - + amount + " where id = " + to); - } - }; - } - - public static void retryTransaction(Connection conn, RetryableTransaction tx) - throws SQLException, InsufficientBalanceException, - AccountNotFoundException, AmbiguousCommitException { - - Savepoint sp = conn.setSavepoint("cockroach_restart"); - while(true) { - boolean releaseAttempted = false; - try { - tx.run(conn); - releaseAttempted = true; - conn.releaseSavepoint(sp); - } - catch(SQLException e) { - String sqlState = e.getSQLState(); - - // Check if the error code indicates a SERIALIZATION_FAILURE. - if(sqlState.equals("40001")) { - // Signal the database that we will attempt a retry. - conn.rollback(sp); - continue; - } else if(releaseAttempted) { - throw new AmbiguousCommitException(e); - } else { - throw e; - } - } - break; - } - conn.commit(); - } - - public static void main(String[] args) - throws ClassNotFoundException, SQLException { - - // Load the Postgres JDBC driver. - Class.forName("org.postgresql.Driver"); - - // Connect to the 'bank' database. - Properties props = new Properties(); - props.setProperty("user", "maxroach"); - props.setProperty("sslmode", "disable"); - - Connection db = DriverManager - .getConnection("jdbc:postgresql://127.0.0.1:26257/bank", props); - - - try { - // We need to turn off autocommit mode to allow for - // multi-statement transactions. - db.setAutoCommit(false); - - // Perform the transfer. 
This assumes the 'accounts' - // table has already been created in the database. - RetryableTransaction transfer = transferFunds(1, 2, 100); - retryTransaction(db, transfer); - - // Check balances after transfer. - db.setAutoCommit(true); - ResultSet res = db.createStatement() - .executeQuery("SELECT id, balance FROM accounts"); - while (res.next()) { - System.out.printf("\taccount %s: %s\n", res.getInt("id"), - res.getInt("balance")); - } - - } catch(InsufficientBalanceException e) { - System.out.println("Insufficient balance"); - } catch(AccountNotFoundException e) { - System.out.println("No users in the table with id " + e.account); - } catch(AmbiguousCommitException e) { - System.out.println("Ambiguous result encountered: " + e); - } catch(SQLException e) { - System.out.println("SQLException encountered:" + e); - } finally { - // Close the database connection. - db.close(); - } - } -} diff --git a/src/current/_includes/v2.0/app/insecure/activerecord-basic-sample.rb b/src/current/_includes/v2.0/app/insecure/activerecord-basic-sample.rb deleted file mode 100644 index 601838ee789..00000000000 --- a/src/current/_includes/v2.0/app/insecure/activerecord-basic-sample.rb +++ /dev/null @@ -1,44 +0,0 @@ -require 'active_record' -require 'activerecord-cockroachdb-adapter' -require 'pg' - -# Connect to CockroachDB through ActiveRecord. -# In Rails, this configuration would go in config/database.yml as usual. -ActiveRecord::Base.establish_connection( - adapter: 'cockroachdb', - username: 'maxroach', - database: 'bank', - host: 'localhost', - port: 26257, - sslmode: 'disable' -) - -# Define the Account model. -# In Rails, this would go in app/models/ as usual. -class Account < ActiveRecord::Base - validates :id, presence: true - validates :balance, presence: true -end - -# Define a migration for the accounts table. -# In Rails, this would go in db/migrate/ as usual. -class Schema < ActiveRecord::Migration[5.0] - def change - create_table :accounts, force: true do |t| - t.integer :balance - end - end -end - -# Run the schema migration by hand. -# In Rails, this would be done via rake db:migrate as usual. -Schema.new.change() - -# Create two accounts, inserting two rows into the accounts table. -Account.create(id: 1, balance: 1000) -Account.create(id: 2, balance: 250) - -# Retrieve accounts and print out the balances -Account.all.each do |acct| - puts "#{acct.id} #{acct.balance}" -end diff --git a/src/current/_includes/v2.0/app/insecure/basic-sample.go b/src/current/_includes/v2.0/app/insecure/basic-sample.go deleted file mode 100644 index 6a647f51641..00000000000 --- a/src/current/_includes/v2.0/app/insecure/basic-sample.go +++ /dev/null @@ -1,44 +0,0 @@ -package main - -import ( - "database/sql" - "fmt" - "log" - - _ "github.com/lib/pq" -) - -func main() { - // Connect to the "bank" database. - db, err := sql.Open("postgres", "postgresql://maxroach@localhost:26257/bank?sslmode=disable") - if err != nil { - log.Fatal("error connecting to the database: ", err) - } - - // Create the "accounts" table. - if _, err := db.Exec( - "CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT)"); err != nil { - log.Fatal(err) - } - - // Insert two rows into the "accounts" table. - if _, err := db.Exec( - "INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250)"); err != nil { - log.Fatal(err) - } - - // Print out the balances. 
-	rows, err := db.Query("SELECT id, balance FROM accounts")
-	if err != nil {
-		log.Fatal(err)
-	}
-	defer rows.Close()
-	fmt.Println("Initial balances:")
-	for rows.Next() {
-		var id, balance int
-		if err := rows.Scan(&id, &balance); err != nil {
-			log.Fatal(err)
-		}
-		fmt.Printf("%d %d\n", id, balance)
-	}
-}
diff --git a/src/current/_includes/v2.0/app/insecure/basic-sample.js b/src/current/_includes/v2.0/app/insecure/basic-sample.js
deleted file mode 100644
index f89ea020a74..00000000000
--- a/src/current/_includes/v2.0/app/insecure/basic-sample.js
+++ /dev/null
@@ -1,55 +0,0 @@
-var async = require('async');
-var fs = require('fs');
-var pg = require('pg');
-
-// Connect to the "bank" database.
-var config = {
-  user: 'maxroach',
-  host: 'localhost',
-  database: 'bank',
-  port: 26257
-};
-
-// Create a pool.
-var pool = new pg.Pool(config);
-
-pool.connect(function (err, client, done) {
-
-  // Close communication with the database and exit.
-  var finish = function () {
-    done();
-    process.exit();
-  };
-
-  if (err) {
-    console.error('could not connect to cockroachdb', err);
-    finish();
-  }
-  async.waterfall([
-    function (next) {
-      // Create the 'accounts' table.
-      client.query('CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT);', next);
-    },
-    function (results, next) {
-      // Insert two rows into the 'accounts' table.
-      client.query('INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250);', next);
-    },
-    function (results, next) {
-      // Print out account balances.
-      client.query('SELECT id, balance FROM accounts;', next);
-    },
-  ],
-  function (err, results) {
-    if (err) {
-      console.error('Error inserting into and selecting from accounts: ', err);
-      finish();
-    }
-
-    console.log('Initial balances:');
-    results.rows.forEach(function (row) {
-      console.log(row);
-    });
-
-    finish();
-  });
-});
diff --git a/src/current/_includes/v2.0/app/insecure/basic-sample.php b/src/current/_includes/v2.0/app/insecure/basic-sample.php
deleted file mode 100644
index db5a26e3111..00000000000
--- a/src/current/_includes/v2.0/app/insecure/basic-sample.php
+++ /dev/null
@@ -1,20 +0,0 @@
-<?php
-try {
-  $dbh = new PDO('pgsql:host=localhost;port=26257;dbname=bank;sslmode=disable',
-    'maxroach', null, array(
-      PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
-      PDO::ATTR_EMULATE_PREPARES => true,
-      PDO::ATTR_PERSISTENT => true
-  ));
-
-  $dbh->exec('INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250)');
-
-  print "Account balances:\r\n";
-  foreach ($dbh->query('SELECT id, balance FROM accounts') as $row) {
-    print $row['id'] . ': ' . $row['balance'] . "\r\n";
-  }
-} catch (Exception $e) {
-  print $e->getMessage() . "\r\n";
-  exit(1);
-}
-?>
diff --git a/src/current/_includes/v2.0/app/insecure/basic-sample.py b/src/current/_includes/v2.0/app/insecure/basic-sample.py
deleted file mode 100644
index db023a19e33..00000000000
--- a/src/current/_includes/v2.0/app/insecure/basic-sample.py
+++ /dev/null
@@ -1,34 +0,0 @@
-# Import the driver.
-import psycopg2
-
-# Connect to the "bank" database.
-conn = psycopg2.connect(
-    database='bank',
-    user='maxroach',
-    sslmode='disable',
-    port=26257,
-    host='localhost'
-)
-
-# Make each statement commit immediately.
-conn.set_session(autocommit=True)
-
-# Open a cursor to perform database operations.
-cur = conn.cursor()
-
-# Create the "accounts" table.
-cur.execute("CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT)")
-
-# Insert two rows into the "accounts" table.
-cur.execute("INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250)")
-
-# Print out the balances.
-cur.execute("SELECT id, balance FROM accounts") -rows = cur.fetchall() -print('Initial balances:') -for row in rows: - print([str(cell) for cell in row]) - -# Close the database connection. -cur.close() -conn.close() diff --git a/src/current/_includes/v2.0/app/insecure/basic-sample.rb b/src/current/_includes/v2.0/app/insecure/basic-sample.rb deleted file mode 100644 index 904460381f6..00000000000 --- a/src/current/_includes/v2.0/app/insecure/basic-sample.rb +++ /dev/null @@ -1,28 +0,0 @@ -# Import the driver. -require 'pg' - -# Connect to the "bank" database. -conn = PG.connect( - user: 'maxroach', - dbname: 'bank', - host: 'localhost', - port: 26257, - sslmode: 'disable' -) - -# Create the "accounts" table. -conn.exec('CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT)') - -# Insert two rows into the "accounts" table. -conn.exec('INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250)') - -# Print out the balances. -puts 'Initial balances:' -conn.exec('SELECT id, balance FROM accounts') do |res| - res.each do |row| - puts row - end -end - -# Close communication with the database. -conn.close() diff --git a/src/current/_includes/v2.0/app/insecure/create-maxroach-user-and-bank-database.md b/src/current/_includes/v2.0/app/insecure/create-maxroach-user-and-bank-database.md deleted file mode 100644 index 3c7859f0d8d..00000000000 --- a/src/current/_includes/v2.0/app/insecure/create-maxroach-user-and-bank-database.md +++ /dev/null @@ -1,32 +0,0 @@ -Start the [built-in SQL client](use-the-built-in-sql-client.html): - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure -~~~ - -In the SQL shell, issue the following statements to create the `maxroach` user and `bank` database: - -{% include copy-clipboard.html %} -~~~ sql -> CREATE USER IF NOT EXISTS maxroach; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> CREATE DATABASE bank; -~~~ - -Give the `maxroach` user the necessary permissions: - -{% include copy-clipboard.html %} -~~~ sql -> GRANT ALL ON DATABASE bank TO maxroach; -~~~ - -Exit the SQL shell: - -{% include copy-clipboard.html %} -~~~ sql -> \q -~~~ diff --git a/src/current/_includes/v2.0/app/insecure/gorm-basic-sample.go b/src/current/_includes/v2.0/app/insecure/gorm-basic-sample.go deleted file mode 100644 index b8529962c2b..00000000000 --- a/src/current/_includes/v2.0/app/insecure/gorm-basic-sample.go +++ /dev/null @@ -1,41 +0,0 @@ -package main - -import ( - "fmt" - "log" - - // Import GORM-related packages. - "github.com/jinzhu/gorm" - _ "github.com/jinzhu/gorm/dialects/postgres" -) - -// Account is our model, which corresponds to the "accounts" database table. -type Account struct { - ID int `gorm:"primary_key"` - Balance int -} - -func main() { - // Connect to the "bank" database as the "maxroach" user. - const addr = "postgresql://maxroach@localhost:26257/bank?sslmode=disable" - db, err := gorm.Open("postgres", addr) - if err != nil { - log.Fatal(err) - } - defer db.Close() - - // Automatically create the "accounts" table based on the Account model. - db.AutoMigrate(&Account{}) - - // Insert two rows into the "accounts" table. - db.Create(&Account{ID: 1, Balance: 1000}) - db.Create(&Account{ID: 2, Balance: 250}) - - // Print out the balances. 
- var accounts []Account - db.Find(&accounts) - fmt.Println("Initial balances:") - for _, account := range accounts { - fmt.Printf("%d %d\n", account.ID, account.Balance) - } -} diff --git a/src/current/_includes/v2.0/app/insecure/hibernate-basic-sample/Sample.java b/src/current/_includes/v2.0/app/insecure/hibernate-basic-sample/Sample.java deleted file mode 100644 index ed36ae15ad3..00000000000 --- a/src/current/_includes/v2.0/app/insecure/hibernate-basic-sample/Sample.java +++ /dev/null @@ -1,64 +0,0 @@ -package com.cockroachlabs; - -import org.hibernate.Session; -import org.hibernate.SessionFactory; -import org.hibernate.cfg.Configuration; - -import javax.persistence.Column; -import javax.persistence.Entity; -import javax.persistence.Id; -import javax.persistence.Table; -import javax.persistence.criteria.CriteriaQuery; - -public class Sample { - // Create a SessionFactory based on our hibernate.cfg.xml configuration - // file, which defines how to connect to the database. - private static final SessionFactory sessionFactory = - new Configuration() - .configure("hibernate.cfg.xml") - .addAnnotatedClass(Account.class) - .buildSessionFactory(); - - // Account is our model, which corresponds to the "accounts" database table. - @Entity - @Table(name="accounts") - public static class Account { - @Id - @Column(name="id") - public long id; - - @Column(name="balance") - public long balance; - - // Convenience constructor. - public Account(int id, int balance) { - this.id = id; - this.balance = balance; - } - - // Hibernate needs a default (no-arg) constructor to create model objects. - public Account() {} - } - - public static void main(String[] args) throws Exception { - Session session = sessionFactory.openSession(); - - try { - // Insert two rows into the "accounts" table. - session.beginTransaction(); - session.save(new Account(1, 1000)); - session.save(new Account(2, 250)); - session.getTransaction().commit(); - - // Print out the balances. 
-            CriteriaQuery<Account> query = session.getCriteriaBuilder().createQuery(Account.class);
-            query.select(query.from(Account.class));
-            for (Account account : session.createQuery(query).getResultList()) {
-                System.out.printf("%d %d\n", account.id, account.balance);
-            }
-        } finally {
-            session.close();
-            sessionFactory.close();
-        }
-    }
-}
diff --git a/src/current/_includes/v2.0/app/insecure/hibernate-basic-sample/build.gradle b/src/current/_includes/v2.0/app/insecure/hibernate-basic-sample/build.gradle
deleted file mode 100644
index 36f33d73fe6..00000000000
--- a/src/current/_includes/v2.0/app/insecure/hibernate-basic-sample/build.gradle
+++ /dev/null
@@ -1,16 +0,0 @@
-group 'com.cockroachlabs'
-version '1.0'
-
-apply plugin: 'java'
-apply plugin: 'application'
-
-mainClassName = 'com.cockroachlabs.Sample'
-
-repositories {
-    mavenCentral()
-}
-
-dependencies {
-    compile 'org.hibernate:hibernate-core:5.2.4.Final'
-    compile 'org.postgresql:postgresql:42.2.2.jre7'
-}
diff --git a/src/current/_includes/v2.0/app/insecure/hibernate-basic-sample/hibernate-basic-sample.tgz b/src/current/_includes/v2.0/app/insecure/hibernate-basic-sample/hibernate-basic-sample.tgz
deleted file mode 100644
index 6da6fc86925..00000000000
Binary files a/src/current/_includes/v2.0/app/insecure/hibernate-basic-sample/hibernate-basic-sample.tgz and /dev/null differ
diff --git a/src/current/_includes/v2.0/app/insecure/hibernate-basic-sample/hibernate.cfg.xml b/src/current/_includes/v2.0/app/insecure/hibernate-basic-sample/hibernate.cfg.xml
deleted file mode 100644
index 6da90ad06ab..00000000000
--- a/src/current/_includes/v2.0/app/insecure/hibernate-basic-sample/hibernate.cfg.xml
+++ /dev/null
@@ -1,20 +0,0 @@
-<?xml version='1.0' encoding='utf-8'?>
-<!DOCTYPE hibernate-configuration PUBLIC
-        "-//Hibernate/Hibernate Configuration DTD 3.0//EN"
-        "http://www.hibernate.org/dtd/hibernate-configuration-3.0.dtd">
-<hibernate-configuration>
-    <session-factory>
-        <!-- Database connection settings -->
-        <property name="connection.driver_class">org.postgresql.Driver</property>
-        <property name="dialect">org.hibernate.dialect.PostgreSQL94Dialect</property>
-        <property name="connection.url">jdbc:postgresql://127.0.0.1:26257/bank?sslmode=disable</property>
-        <property name="connection.username">maxroach</property>
-
-        <!-- Create the database schema on startup -->
-        <property name="hbm2ddl.auto">create</property>
-
-        <!-- Echo executed SQL to stdout -->
-        <property name="show_sql">true</property>
-        <property name="format_sql">true</property>
-    </session-factory>
-</hibernate-configuration>
diff --git a/src/current/_includes/v2.0/app/insecure/sequelize-basic-sample.js b/src/current/_includes/v2.0/app/insecure/sequelize-basic-sample.js
deleted file mode 100644
index ca92b98e375..00000000000
--- a/src/current/_includes/v2.0/app/insecure/sequelize-basic-sample.js
+++ /dev/null
@@ -1,35 +0,0 @@
-var Sequelize = require('sequelize-cockroachdb');
-
-// Connect to CockroachDB through Sequelize.
-var sequelize = new Sequelize('bank', 'maxroach', '', {
-  dialect: 'postgres',
-  port: 26257,
-  logging: false
-});
-
-// Define the Account model for the "accounts" table.
-var Account = sequelize.define('accounts', {
-  id: { type: Sequelize.INTEGER, primaryKey: true },
-  balance: { type: Sequelize.INTEGER }
-});
-
-// Create the "accounts" table.
-Account.sync({force: true}).then(function() {
-  // Insert two rows into the "accounts" table.
-  return Account.bulkCreate([
-    {id: 1, balance: 1000},
-    {id: 2, balance: 250}
-  ]);
-}).then(function() {
-  // Retrieve accounts.
-  return Account.findAll();
-}).then(function(accounts) {
-  // Print out the balances.
-  accounts.forEach(function(account) {
-    console.log(account.id + ' ' + account.balance);
-  });
-  process.exit(0);
-}).catch(function(err) {
-  console.error('error: ' + err.message);
-  process.exit(1);
-});
diff --git a/src/current/_includes/v2.0/app/insecure/sqlalchemy-basic-sample.py b/src/current/_includes/v2.0/app/insecure/sqlalchemy-basic-sample.py
deleted file mode 100644
index 696350f0915..00000000000
--- a/src/current/_includes/v2.0/app/insecure/sqlalchemy-basic-sample.py
+++ /dev/null
@@ -1,35 +0,0 @@
-from __future__ import print_function
-from sqlalchemy import create_engine, Column, Integer
-from sqlalchemy.ext.declarative import declarative_base
-from sqlalchemy.orm import sessionmaker

-Base = declarative_base()
-
-# The Account class corresponds to the "accounts" database table.
-class Account(Base):
-    __tablename__ = 'accounts'
-    id = Column(Integer, primary_key=True)
-    balance = Column(Integer)
-
-# Create an engine to communicate with the database. The "cockroachdb://" prefix
-# for the engine URL indicates that we are connecting to CockroachDB.
-engine = create_engine('cockroachdb://maxroach@localhost:26257/bank',
-                       connect_args = {
-                           'sslmode' : 'disable'
-                       })
-Session = sessionmaker(bind=engine)
-
-# Automatically create the "accounts" table based on the Account class.
-Base.metadata.create_all(engine)
-
-# Insert two rows into the "accounts" table.
-session = Session()
-session.add_all([
-  Account(id=1, balance=1000),
-  Account(id=2, balance=250),
-])
-session.commit()
-
-# Print out the balances.
-for account in session.query(Account):
-    print(account.id, account.balance)
diff --git a/src/current/_includes/v2.0/app/insecure/txn-sample.go b/src/current/_includes/v2.0/app/insecure/txn-sample.go
deleted file mode 100644
index 2c0cd1b6da6..00000000000
--- a/src/current/_includes/v2.0/app/insecure/txn-sample.go
+++ /dev/null
@@ -1,51 +0,0 @@
-package main
-
-import (
-	"context"
-	"database/sql"
-	"fmt"
-	"log"
-
-	"github.com/cockroachdb/cockroach-go/crdb"
-	// The pq driver must be imported to register the "postgres" driver
-	// used by sql.Open below.
-	_ "github.com/lib/pq"
-)
-
-func transferFunds(tx *sql.Tx, from int, to int, amount int) error {
-	// Read the balance.
-	var fromBalance int
-	if err := tx.QueryRow(
-		"SELECT balance FROM accounts WHERE id = $1", from).Scan(&fromBalance); err != nil {
-		return err
-	}
-
-	if fromBalance < amount {
-		return fmt.Errorf("insufficient funds")
-	}
-
-	// Perform the transfer.
-	if _, err := tx.Exec(
-		"UPDATE accounts SET balance = balance - $1 WHERE id = $2", amount, from); err != nil {
-		return err
-	}
-	if _, err := tx.Exec(
-		"UPDATE accounts SET balance = balance + $1 WHERE id = $2", amount, to); err != nil {
-		return err
-	}
-	return nil
-}
-
-func main() {
-	db, err := sql.Open("postgres", "postgresql://maxroach@localhost:26257/bank?sslmode=disable")
-	if err != nil {
-		log.Fatal("error connecting to the database: ", err)
-	}
-
-	// Run a transfer in a transaction.
-	err = crdb.ExecuteTx(context.Background(), db, nil, func(tx *sql.Tx) error {
-		return transferFunds(tx, 1 /* from acct# */, 2 /* to acct# */, 100 /* amount */)
-	})
-	if err == nil {
-		fmt.Println("Success")
-	} else {
-		log.Fatal("error: ", err)
-	}
-}
diff --git a/src/current/_includes/v2.0/app/insecure/txn-sample.js b/src/current/_includes/v2.0/app/insecure/txn-sample.js
deleted file mode 100644
index c44309b01a2..00000000000
--- a/src/current/_includes/v2.0/app/insecure/txn-sample.js
+++ /dev/null
@@ -1,146 +0,0 @@
-var async = require('async');
-var fs = require('fs');
-var pg = require('pg');
-
-// Connect to the bank database.
- -var config = { - user: 'maxroach', - host: 'localhost', - database: 'bank', - port: 26257 -}; - -// Wrapper for a transaction. This automatically re-calls "op" with -// the client as an argument as long as the database server asks for -// the transaction to be retried. - -function txnWrapper(client, op, next) { - client.query('BEGIN; SAVEPOINT cockroach_restart', function (err) { - if (err) { - return next(err); - } - - var released = false; - async.doWhilst(function (done) { - var handleError = function (err) { - // If we got an error, see if it's a retryable one - // and, if so, restart. - if (err.code === '40001') { - // Signal the database that we'll retry. - return client.query('ROLLBACK TO SAVEPOINT cockroach_restart', done); - } - // A non-retryable error; break out of the - // doWhilst with an error. - return done(err); - }; - - // Attempt the work. - op(client, function (err) { - if (err) { - return handleError(err); - } - var opResults = arguments; - - // If we reach this point, release and commit. - client.query('RELEASE SAVEPOINT cockroach_restart', function (err) { - if (err) { - return handleError(err); - } - released = true; - return done.apply(null, opResults); - }); - }); - }, - function () { - return !released; - }, - function (err) { - if (err) { - client.query('ROLLBACK', function () { - next(err); - }); - } else { - var txnResults = arguments; - client.query('COMMIT', function (err) { - if (err) { - return next(err); - } else { - return next.apply(null, txnResults); - } - }); - } - }); - }); -} - -// The transaction we want to run. - -function transferFunds(client, from, to, amount, next) { - // Check the current balance. - client.query('SELECT balance FROM accounts WHERE id = $1', [from], function (err, results) { - if (err) { - return next(err); - } else if (results.rows.length === 0) { - return next(new Error('account not found in table')); - } - - var acctBal = results.rows[0].balance; - if (acctBal >= amount) { - // Perform the transfer. - async.waterfall([ - function (next) { - // Subtract amount from account 1. - client.query('UPDATE accounts SET balance = balance - $1 WHERE id = $2', [amount, from], next); - }, - function (updateResult, next) { - // Add amount to account 2. - client.query('UPDATE accounts SET balance = balance + $1 WHERE id = $2', [amount, to], next); - }, - function (updateResult, next) { - // Fetch account balances after updates. - client.query('SELECT id, balance FROM accounts', function (err, selectResult) { - next(err, selectResult ? selectResult.rows : null); - }); - } - ], next); - } else { - next(new Error('insufficient funds')); - } - }); -} - -// Create a pool. -var pool = new pg.Pool(config); - -pool.connect(function (err, client, done) { - // Closes communication with the database and exits. - var finish = function () { - done(); - process.exit(); - }; - - if (err) { - console.error('could not connect to cockroachdb', err); - finish(); - } - - // Execute the transaction. 
-    txnWrapper(client,
-        function (client, next) {
-            transferFunds(client, 1, 2, 100, next);
-        },
-        function (err, results) {
-            if (err) {
-                console.error('error performing transaction', err);
-                finish();
-            }
-
-            console.log('Balances after transfer:');
-            results.forEach(function (result) {
-                console.log(result);
-            });
-
-            finish();
-        });
-});
diff --git a/src/current/_includes/v2.0/app/insecure/txn-sample.php b/src/current/_includes/v2.0/app/insecure/txn-sample.php
deleted file mode 100644
index e060d311cc3..00000000000
--- a/src/current/_includes/v2.0/app/insecure/txn-sample.php
+++ /dev/null
@@ -1,71 +0,0 @@
-<?php
-
-function transferMoney($dbh, $from, $to, $amount) {
-  try {
-    $dbh->beginTransaction();
-    // This savepoint allows us to retry our transaction.
-    $dbh->exec("SAVEPOINT cockroach_restart");
-  } catch (Exception $e) {
-    throw $e;
-  }
-
-  while (true) {
-    try {
-      $stmt = $dbh->prepare(
-        'UPDATE accounts SET balance = balance + :deposit ' .
-          'WHERE id = :account AND (:deposit > 0 OR balance + :deposit >= 0)');
-
-      // First, withdraw the money from the old account (if possible).
-      $stmt->bindValue(':account', $from, PDO::PARAM_INT);
-      $stmt->bindValue(':deposit', -$amount, PDO::PARAM_INT);
-      $stmt->execute();
-      if ($stmt->rowCount() == 0) {
-        print "source account does not exist or is underfunded\r\n";
-        return;
-      }
-
-      // Next, deposit into the new account (if it exists).
-      $stmt->bindValue(':account', $to, PDO::PARAM_INT);
-      $stmt->bindValue(':deposit', $amount, PDO::PARAM_INT);
-      $stmt->execute();
-      if ($stmt->rowCount() == 0) {
-        print "destination account does not exist\r\n";
-        return;
-      }
-
-      // Attempt to release the savepoint (which is really the commit).
-      $dbh->exec('RELEASE SAVEPOINT cockroach_restart');
-      $dbh->commit();
-      return;
-    } catch (PDOException $e) {
-      if ($e->getCode() != '40001') {
-        // Non-recoverable error. Rollback and bubble error up the chain.
-        $dbh->rollBack();
-        throw $e;
-      } else {
-        // Cockroach transaction retry code. Rollback to the savepoint and
-        // restart.
-        $dbh->exec('ROLLBACK TO SAVEPOINT cockroach_restart');
-      }
-    }
-  }
-}
-
-try {
-  $dbh = new PDO('pgsql:host=localhost;port=26257;dbname=bank;sslmode=disable',
-    'maxroach', null, array(
-      PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
-      PDO::ATTR_EMULATE_PREPARES => true,
-  ));
-
-  transferMoney($dbh, 1, 2, 10);
-
-  print "Account balances after transfer:\r\n";
-  foreach ($dbh->query('SELECT id, balance FROM accounts') as $row) {
-    print $row['id'] . ': ' . $row['balance'] . "\r\n";
-  }
-} catch (Exception $e) {
-  print $e->getMessage() . "\r\n";
-  exit(1);
-}
-?>
diff --git a/src/current/_includes/v2.0/app/insecure/txn-sample.py b/src/current/_includes/v2.0/app/insecure/txn-sample.py
deleted file mode 100644
index 2ea05a85704..00000000000
--- a/src/current/_includes/v2.0/app/insecure/txn-sample.py
+++ /dev/null
@@ -1,73 +0,0 @@
-# Import the driver.
-import psycopg2
-import psycopg2.errorcodes
-
-# Connect to the cluster.
-conn = psycopg2.connect(
-    database='bank',
-    user='maxroach',
-    sslmode='disable',
-    port=26257,
-    host='localhost'
-)
-
-def onestmt(conn, sql):
-    with conn.cursor() as cur:
-        cur.execute(sql)
-
-
-# Wrapper for a transaction.
-# This automatically re-calls "op" with the open transaction as an argument
-# as long as the database server asks for the transaction to be retried.
-def run_transaction(conn, op):
-    with conn:
-        onestmt(conn, "SAVEPOINT cockroach_restart")
-        while True:
-            try:
-                # Attempt the work.
-                op(conn)
-
-                # If we reach this point, commit.
-                onestmt(conn, "RELEASE SAVEPOINT cockroach_restart")
-                break
-
-            except psycopg2.OperationalError as e:
-                if e.pgcode != psycopg2.errorcodes.SERIALIZATION_FAILURE:
-                    # A non-retryable error; report this up the call stack.
-                    raise e
-                # Signal the database that we'll retry.
-                onestmt(conn, "ROLLBACK TO SAVEPOINT cockroach_restart")
-
-
-# The transaction we want to run.
-def transfer_funds(txn, frm, to, amount):
-    with txn.cursor() as cur:
-
-        # Check the current balance.
-        cur.execute("SELECT balance FROM accounts WHERE id = " + str(frm))
-        from_balance = cur.fetchone()[0]
-        if from_balance < amount:
-            # Raise an exception object; raising a bare string is invalid
-            # in Python.
-            raise Exception("Insufficient funds")
-
-        # Perform the transfer.
-        cur.execute("UPDATE accounts SET balance = balance - %s WHERE id = %s",
-                    (amount, frm))
-        cur.execute("UPDATE accounts SET balance = balance + %s WHERE id = %s",
-                    (amount, to))
-
-
-# Execute the transaction.
-run_transaction(conn, lambda conn: transfer_funds(conn, 1, 2, 100))
-
-
-with conn:
-    with conn.cursor() as cur:
-        # Check account balances.
-        cur.execute("SELECT id, balance FROM accounts")
-        rows = cur.fetchall()
-        print('Balances after transfer:')
-        for row in rows:
-            print([str(cell) for cell in row])
-
-# Close communication with the database.
-conn.close()
diff --git a/src/current/_includes/v2.0/app/insecure/txn-sample.rb b/src/current/_includes/v2.0/app/insecure/txn-sample.rb
deleted file mode 100644
index 416efb9e24d..00000000000
--- a/src/current/_includes/v2.0/app/insecure/txn-sample.rb
+++ /dev/null
@@ -1,49 +0,0 @@
-# Import the driver.
-require 'pg'
-
-# Wrapper for a transaction.
-# This automatically re-calls "op" with the open transaction as an argument
-# as long as the database server asks for the transaction to be retried.
-def run_transaction(conn)
-  conn.transaction do |txn|
-    txn.exec('SAVEPOINT cockroach_restart')
-    while
-      begin
-        # Attempt the work.
-        yield txn
-
-        # If we reach this point, commit.
-        txn.exec('RELEASE SAVEPOINT cockroach_restart')
-        break
-      rescue PG::TRSerializationFailure
-        txn.exec('ROLLBACK TO SAVEPOINT cockroach_restart')
-      end
-    end
-  end
-end
-
-def transfer_funds(txn, from, to, amount)
-  txn.exec_params('SELECT balance FROM accounts WHERE id = $1', [from]) do |res|
-    res.each do |row|
-      raise 'insufficient funds' if Integer(row['balance']) < amount
-    end
-  end
-  txn.exec_params('UPDATE accounts SET balance = balance - $1 WHERE id = $2', [amount, from])
-  txn.exec_params('UPDATE accounts SET balance = balance + $1 WHERE id = $2', [amount, to])
-end
-
-# Connect to the "bank" database.
-conn = PG.connect(
-  user: 'maxroach',
-  dbname: 'bank',
-  host: 'localhost',
-  port: 26257,
-  sslmode: 'disable'
-)
-
-run_transaction(conn) do |txn|
-  transfer_funds(txn, 1, 2, 100)
-end
-
-# Close communication with the database.
-conn.close() diff --git a/src/current/_includes/v2.0/app/project.clj b/src/current/_includes/v2.0/app/project.clj deleted file mode 100644 index 41efc324b59..00000000000 --- a/src/current/_includes/v2.0/app/project.clj +++ /dev/null @@ -1,7 +0,0 @@ -(defproject test "0.1" - :description "CockroachDB test" - :url "http://cockroachlabs.com/" - :dependencies [[org.clojure/clojure "1.8.0"] - [org.clojure/java.jdbc "0.6.1"] - [org.postgresql/postgresql "9.4.1211"]] - :main test.test) diff --git a/src/current/_includes/v2.0/app/see-also-links.md b/src/current/_includes/v2.0/app/see-also-links.md deleted file mode 100644 index 90f06751e13..00000000000 --- a/src/current/_includes/v2.0/app/see-also-links.md +++ /dev/null @@ -1,9 +0,0 @@ -You might also be interested in using a local cluster to explore the following CockroachDB benefits: - -- [Client Connection Parameters](connection-parameters.html) -- [Data Replication](demo-data-replication.html) -- [Fault Tolerance & Recovery](demo-fault-tolerance-and-recovery.html) -- [Automatic Rebalancing](demo-automatic-rebalancing.html) -- [Cross-Cloud Migration](demo-automatic-cloud-migration.html) -- [Follow-the-Workload](demo-follow-the-workload.html) -- [Automated Operations](orchestrate-a-local-cluster-with-kubernetes-insecure.html) diff --git a/src/current/_includes/v2.0/app/sequelize-basic-sample.js b/src/current/_includes/v2.0/app/sequelize-basic-sample.js deleted file mode 100644 index d87ff2ca5a5..00000000000 --- a/src/current/_includes/v2.0/app/sequelize-basic-sample.js +++ /dev/null @@ -1,62 +0,0 @@ -var Sequelize = require('sequelize-cockroachdb'); -var fs = require('fs'); - -// Connect to CockroachDB through Sequelize. -var sequelize = new Sequelize('bank', 'maxroach', '', { - dialect: 'postgres', - port: 26257, - logging: false, - dialectOptions: { - ssl: { - ca: fs.readFileSync('certs/ca.crt') - .toString(), - key: fs.readFileSync('certs/client.maxroach.key') - .toString(), - cert: fs.readFileSync('certs/client.maxroach.crt') - .toString() - } - } -}); - -// Define the Account model for the "accounts" table. -var Account = sequelize.define('accounts', { - id: { - type: Sequelize.INTEGER, - primaryKey: true - }, - balance: { - type: Sequelize.INTEGER - } -}); - -// Create the "accounts" table. -Account.sync({ - force: true - }) - .then(function () { - // Insert two rows into the "accounts" table. - return Account.bulkCreate([{ - id: 1, - balance: 1000 - }, - { - id: 2, - balance: 250 - } - ]); - }) - .then(function () { - // Retrieve accounts. - return Account.findAll(); - }) - .then(function (accounts) { - // Print out the balances. - accounts.forEach(function (account) { - console.log(account.id + ' ' + account.balance); - }); - process.exit(0); - }) - .catch(function (err) { - console.error('error: ' + err.message); - process.exit(1); - }); diff --git a/src/current/_includes/v2.0/app/sqlalchemy-basic-sample.py b/src/current/_includes/v2.0/app/sqlalchemy-basic-sample.py deleted file mode 100644 index 0b32b18bd27..00000000000 --- a/src/current/_includes/v2.0/app/sqlalchemy-basic-sample.py +++ /dev/null @@ -1,38 +0,0 @@ -from __future__ import print_function -from sqlalchemy import create_engine, Column, Integer -from sqlalchemy.ext.declarative import declarative_base -from sqlalchemy.orm import sessionmaker - -Base = declarative_base() - -# The Account class corresponds to the "accounts" database table. 
-class Account(Base):
-    __tablename__ = 'accounts'
-    id = Column(Integer, primary_key=True)
-    balance = Column(Integer)
-
-# Create an engine to communicate with the database. The "cockroachdb://" prefix
-# for the engine URL indicates that we are connecting to CockroachDB.
-engine = create_engine('cockroachdb://maxroach@localhost:26257/bank',
-                       connect_args = {
-                           'sslmode' : 'require',
-                           'sslrootcert': 'certs/ca.crt',
-                           'sslkey':'certs/client.maxroach.key',
-                           'sslcert':'certs/client.maxroach.crt'
-                       })
-Session = sessionmaker(bind=engine)
-
-# Automatically create the "accounts" table based on the Account class.
-Base.metadata.create_all(engine)
-
-# Insert two rows into the "accounts" table.
-session = Session()
-session.add_all([
-  Account(id=1, balance=1000),
-  Account(id=2, balance=250),
-])
-session.commit()
-
-# Print out the balances.
-for account in session.query(Account):
-    print(account.id, account.balance)
diff --git a/src/current/_includes/v2.0/app/txn-sample.clj b/src/current/_includes/v2.0/app/txn-sample.clj
deleted file mode 100644
index 75ee7b4ba62..00000000000
--- a/src/current/_includes/v2.0/app/txn-sample.clj
+++ /dev/null
@@ -1,43 +0,0 @@
-(ns test.test
-  (:require [clojure.java.jdbc :as j]
-            [test.util :as util]))
-
-;; Define the connection parameters to the cluster.
-(def db-spec {:subprotocol "postgresql"
-              :subname "//localhost:26257/bank"
-              :user "maxroach"
-              :password ""})
-
-;; The transaction we want to run.
-(defn transferFunds
-  [txn from to amount]
-
-  ;; Check the current balance.
-  (let [fromBalance (->> (j/query txn ["SELECT balance FROM accounts WHERE id = ?" from])
-                         (mapv :balance)
-                         (first))]
-    (when (< fromBalance amount)
-      (throw (Exception. "Insufficient funds"))))
-
-  ;; Perform the transfer.
-  (j/execute! txn [(str "UPDATE accounts SET balance = balance - " amount " WHERE id = " from)])
-  (j/execute! txn [(str "UPDATE accounts SET balance = balance + " amount " WHERE id = " to)]))
-
-(defn test-txn []
-  ;; Connect to the cluster and run the code below with
-  ;; the connection object bound to 'conn'.
-  (j/with-db-connection [conn db-spec]
-
-    ;; Execute the transaction within an automatic retry block;
-    ;; the transaction object is bound to 'txn'.
-    (util/with-txn-retry [txn conn]
-      (transferFunds txn 1 2 100))
-
-    ;; Execute a query outside of an automatic retry block.
-    (println "Balances after transfer:")
-    (->> (j/query conn ["SELECT id, balance FROM accounts"])
-         (map println)
-         (doall))))
-
-(defn -main [& args]
-  (test-txn))
diff --git a/src/current/_includes/v2.0/app/txn-sample.cpp b/src/current/_includes/v2.0/app/txn-sample.cpp
deleted file mode 100644
index dcdf0ca973d..00000000000
--- a/src/current/_includes/v2.0/app/txn-sample.cpp
+++ /dev/null
@@ -1,76 +0,0 @@
-// Build with g++ -std=c++11 txn-sample.cpp -lpq -lpqxx
-
-#include <cassert>
-#include <functional>
-#include <iostream>
-#include <stdexcept>
-#include <string>
-#include <pqxx/pqxx>
-
-using namespace std;
-
-void transferFunds(
-    pqxx::dbtransaction *tx, int from, int to, int amount) {
-  // Read the balance.
-  pqxx::result r = tx->exec(
-      "SELECT balance FROM accounts WHERE id = " + to_string(from));
-  assert(r.size() == 1);
-  int fromBalance = r[0][0].as<int>();
-
-  if (fromBalance < amount) {
-    throw domain_error("insufficient funds");
-  }
-
-  // Perform the transfer.
-  tx->exec("UPDATE accounts SET balance = balance - "
-           + to_string(amount) + " WHERE id = " + to_string(from));
-  tx->exec("UPDATE accounts SET balance = balance + "
-           + to_string(amount) + " WHERE id = " + to_string(to));
-}
-
-
-// ExecuteTx runs fn inside a transaction and retries it as needed.
-// On non-retryable failures, the transaction is aborted and rolled
-// back; on success, the transaction is committed.
-//
-// For more information about CockroachDB's transaction model see
-// https://cockroachlabs.com/docs/transactions.html.
-//
-// NOTE: the supplied exec closure should not have external side
-// effects beyond changes to the database.
-void executeTx(
-    pqxx::connection *c, function<void (pqxx::dbtransaction *)> fn) {
-  pqxx::work tx(*c);
-  while (true) {
-    try {
-      pqxx::subtransaction s(tx, "cockroach_restart");
-      fn(&s);
-      s.commit();
-      break;
-    } catch (const pqxx::pqxx_exception& e) {
-      // Swallow "transaction restart" errors; the transaction will be retried.
-      // Unfortunately libpqxx doesn't give us access to the error code, so we
-      // do string matching to identify retriable errors.
-      if (string(e.base().what()).find("restart transaction:") == string::npos) {
-        throw;
-      }
-    }
-  }
-  tx.commit();
-}
-
-int main() {
-  try {
-    pqxx::connection c("postgresql://maxroach@localhost:26257/bank");
-
-    executeTx(&c, [](pqxx::dbtransaction *tx) {
-      transferFunds(tx, 1, 2, 100);
-    });
-  }
-  catch (const exception &e) {
-    cerr << e.what() << endl;
-    return 1;
-  }
-  cout << "Success" << endl;
-  return 0;
-}
diff --git a/src/current/_includes/v2.0/app/txn-sample.cs b/src/current/_includes/v2.0/app/txn-sample.cs
deleted file mode 100644
index d0824aaa42c..00000000000
--- a/src/current/_includes/v2.0/app/txn-sample.cs
+++ /dev/null
@@ -1,119 +0,0 @@
-using System;
-using System.Data;
-using Npgsql;
-
-namespace Cockroach
-{
-  class MainClass
-  {
-    static void Main(string[] args)
-    {
-      var connStringBuilder = new NpgsqlConnectionStringBuilder();
-      connStringBuilder.Host = "localhost";
-      connStringBuilder.Port = 26257;
-      connStringBuilder.Username = "maxroach";
-      connStringBuilder.Database = "bank";
-      TxnSample(connStringBuilder.ConnectionString);
-    }
-
-    static void TransferFunds(NpgsqlConnection conn, NpgsqlTransaction tran, int from, int to, int amount)
-    {
-      int balance = 0;
-      using(var cmd = new NpgsqlCommand(String.Format("SELECT balance FROM accounts WHERE id = {0}", from), conn, tran))
-      using(var reader = cmd.ExecuteReader())
-      {
-        if (reader.Read())
-        {
-          balance = reader.GetInt32(0);
-        }
-        else
-        {
-          throw new DataException(String.Format("Account id={0} not found", from));
-        }
-      }
-      if (balance < amount)
-      {
-        throw new DataException(String.Format("Insufficient balance in account id={0}", from));
-      }
-      using(var cmd = new NpgsqlCommand(String.Format("UPDATE accounts SET balance = balance - {0} where id = {1}", amount, from), conn, tran))
-      {
-        cmd.ExecuteNonQuery();
-      }
-      using(var cmd = new NpgsqlCommand(String.Format("UPDATE accounts SET balance = balance + {0} where id = {1}", amount, to), conn, tran))
-      {
-        cmd.ExecuteNonQuery();
-      }
-    }
-
-    static void TxnSample(string connString)
-    {
-      using(var conn = new NpgsqlConnection(connString))
-      {
-        conn.Open();
-
-        // Create the "accounts" table.
-        new NpgsqlCommand("CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT)", conn).ExecuteNonQuery();
-
-        // Insert two rows into the "accounts" table.
-        using(var cmd = new NpgsqlCommand())
-        {
-          cmd.Connection = conn;
-          cmd.CommandText = "UPSERT INTO accounts(id, balance) VALUES(@id1, @val1), (@id2, @val2)";
-          cmd.Parameters.AddWithValue("id1", 1);
-          cmd.Parameters.AddWithValue("val1", 1000);
-          cmd.Parameters.AddWithValue("id2", 2);
-          cmd.Parameters.AddWithValue("val2", 250);
-          cmd.ExecuteNonQuery();
-        }
-
-        // Print out the balances.
-        System.Console.WriteLine("Initial balances:");
-        using(var cmd = new NpgsqlCommand("SELECT id, balance FROM accounts", conn))
-        using(var reader = cmd.ExecuteReader())
-          while (reader.Read())
-            Console.Write("\taccount {0}: {1}\n", reader.GetValue(0), reader.GetValue(1));
-
-        try
-        {
-          using(var tran = conn.BeginTransaction())
-          {
-            tran.Save("cockroach_restart");
-            while (true)
-            {
-              try
-              {
-                TransferFunds(conn, tran, 1, 2, 100);
-                tran.Commit();
-                break;
-              }
-              catch (NpgsqlException e)
-              {
-                // Check if the error code indicates a SERIALIZATION_FAILURE.
-                if (e.ErrorCode == 40001)
-                {
-                  // Signal the database that we will attempt a retry.
-                  tran.Rollback("cockroach_restart");
-                }
-                else
-                {
-                  throw;
-                }
-              }
-            }
-          }
-        }
-        catch (DataException e)
-        {
-          Console.WriteLine(e.Message);
-        }
-
-        // Now print out the results.
-        Console.WriteLine("Final balances:");
-        using(var cmd = new NpgsqlCommand("SELECT id, balance FROM accounts", conn))
-        using(var reader = cmd.ExecuteReader())
-          while (reader.Read())
-            Console.Write("\taccount {0}: {1}\n", reader.GetValue(0), reader.GetValue(1));
-      }
-    }
-  }
-}
diff --git a/src/current/_includes/v2.0/app/txn-sample.go b/src/current/_includes/v2.0/app/txn-sample.go
deleted file mode 100644
index fc15275abca..00000000000
--- a/src/current/_includes/v2.0/app/txn-sample.go
+++ /dev/null
@@ -1,53 +0,0 @@
-package main
-
-import (
-	"context"
-	"database/sql"
-	"fmt"
-	"log"
-
-	"github.com/cockroachdb/cockroach-go/crdb"
-	// The pq driver must be imported to register the "postgres" driver
-	// used by sql.Open below.
-	_ "github.com/lib/pq"
-)
-
-func transferFunds(tx *sql.Tx, from int, to int, amount int) error {
-	// Read the balance.
-	var fromBalance int
-	if err := tx.QueryRow(
-		"SELECT balance FROM accounts WHERE id = $1", from).Scan(&fromBalance); err != nil {
-		return err
-	}
-
-	if fromBalance < amount {
-		return fmt.Errorf("insufficient funds")
-	}
-
-	// Perform the transfer.
-	if _, err := tx.Exec(
-		"UPDATE accounts SET balance = balance - $1 WHERE id = $2", amount, from); err != nil {
-		return err
-	}
-	if _, err := tx.Exec(
-		"UPDATE accounts SET balance = balance + $1 WHERE id = $2", amount, to); err != nil {
-		return err
-	}
-	return nil
-}
-
-func main() {
-	db, err := sql.Open("postgres",
-		"postgresql://maxroach@localhost:26257/bank?ssl=true&sslmode=require&sslrootcert=certs/ca.crt&sslkey=certs/client.maxroach.key&sslcert=certs/client.maxroach.crt")
-	if err != nil {
-		log.Fatal("error connecting to the database: ", err)
-	}
-	defer db.Close()
-
-	// Run a transfer in a transaction.
-	err = crdb.ExecuteTx(context.Background(), db, nil, func(tx *sql.Tx) error {
-		return transferFunds(tx, 1 /* from acct# */, 2 /* to acct# */, 100 /* amount */)
-	})
-	if err == nil {
-		fmt.Println("Success")
-	} else {
-		log.Fatal("error: ", err)
-	}
-}
diff --git a/src/current/_includes/v2.0/app/txn-sample.js b/src/current/_includes/v2.0/app/txn-sample.js
deleted file mode 100644
index 1eebaacad30..00000000000
--- a/src/current/_includes/v2.0/app/txn-sample.js
+++ /dev/null
@@ -1,154 +0,0 @@
-var async = require('async');
-var fs = require('fs');
-var pg = require('pg');
-
-// Connect to the bank database.
- -var config = { - user: 'maxroach', - host: 'localhost', - database: 'bank', - port: 26257, - ssl: { - ca: fs.readFileSync('certs/ca.crt') - .toString(), - key: fs.readFileSync('certs/client.maxroach.key') - .toString(), - cert: fs.readFileSync('certs/client.maxroach.crt') - .toString() - } -}; - -// Wrapper for a transaction. This automatically re-calls "op" with -// the client as an argument as long as the database server asks for -// the transaction to be retried. - -function txnWrapper(client, op, next) { - client.query('BEGIN; SAVEPOINT cockroach_restart', function (err) { - if (err) { - return next(err); - } - - var released = false; - async.doWhilst(function (done) { - var handleError = function (err) { - // If we got an error, see if it's a retryable one - // and, if so, restart. - if (err.code === '40001') { - // Signal the database that we'll retry. - return client.query('ROLLBACK TO SAVEPOINT cockroach_restart', done); - } - // A non-retryable error; break out of the - // doWhilst with an error. - return done(err); - }; - - // Attempt the work. - op(client, function (err) { - if (err) { - return handleError(err); - } - var opResults = arguments; - - // If we reach this point, release and commit. - client.query('RELEASE SAVEPOINT cockroach_restart', function (err) { - if (err) { - return handleError(err); - } - released = true; - return done.apply(null, opResults); - }); - }); - }, - function () { - return !released; - }, - function (err) { - if (err) { - client.query('ROLLBACK', function () { - next(err); - }); - } else { - var txnResults = arguments; - client.query('COMMIT', function (err) { - if (err) { - return next(err); - } else { - return next.apply(null, txnResults); - } - }); - } - }); - }); -} - -// The transaction we want to run. - -function transferFunds(client, from, to, amount, next) { - // Check the current balance. - client.query('SELECT balance FROM accounts WHERE id = $1', [from], function (err, results) { - if (err) { - return next(err); - } else if (results.rows.length === 0) { - return next(new Error('account not found in table')); - } - - var acctBal = results.rows[0].balance; - if (acctBal >= amount) { - // Perform the transfer. - async.waterfall([ - function (next) { - // Subtract amount from account 1. - client.query('UPDATE accounts SET balance = balance - $1 WHERE id = $2', [amount, from], next); - }, - function (updateResult, next) { - // Add amount to account 2. - client.query('UPDATE accounts SET balance = balance + $1 WHERE id = $2', [amount, to], next); - }, - function (updateResult, next) { - // Fetch account balances after updates. - client.query('SELECT id, balance FROM accounts', function (err, selectResult) { - next(err, selectResult ? selectResult.rows : null); - }); - } - ], next); - } else { - next(new Error('insufficient funds')); - } - }); -} - -// Create a pool. -var pool = new pg.Pool(config); - -pool.connect(function (err, client, done) { - // Closes communication with the database and exits. - var finish = function () { - done(); - process.exit(); - }; - - if (err) { - console.error('could not connect to cockroachdb', err); - finish(); - } - - // Execute the transaction. 
-    txnWrapper(client,
-        function (client, next) {
-            transferFunds(client, 1, 2, 100, next);
-        },
-        function (err, results) {
-            if (err) {
-                console.error('error performing transaction', err);
-                finish();
-            }
-
-            console.log('Balances after transfer:');
-            results.forEach(function (result) {
-                console.log(result);
-            });
-
-            finish();
-        });
-});
diff --git a/src/current/_includes/v2.0/app/txn-sample.php b/src/current/_includes/v2.0/app/txn-sample.php
deleted file mode 100644
index 363dbcd73cd..00000000000
--- a/src/current/_includes/v2.0/app/txn-sample.php
+++ /dev/null
@@ -1,71 +0,0 @@
-<?php
-
-function transferMoney($dbh, $from, $to, $amount) {
-  try {
-    $dbh->beginTransaction();
-    // This savepoint allows us to retry our transaction.
-    $dbh->exec("SAVEPOINT cockroach_restart");
-  } catch (Exception $e) {
-    throw $e;
-  }
-
-  while (true) {
-    try {
-      $stmt = $dbh->prepare(
-        'UPDATE accounts SET balance = balance + :deposit ' .
-          'WHERE id = :account AND (:deposit > 0 OR balance + :deposit >= 0)');
-
-      // First, withdraw the money from the old account (if possible).
-      $stmt->bindValue(':account', $from, PDO::PARAM_INT);
-      $stmt->bindValue(':deposit', -$amount, PDO::PARAM_INT);
-      $stmt->execute();
-      if ($stmt->rowCount() == 0) {
-        print "source account does not exist or is underfunded\r\n";
-        return;
-      }
-
-      // Next, deposit into the new account (if it exists).
-      $stmt->bindValue(':account', $to, PDO::PARAM_INT);
-      $stmt->bindValue(':deposit', $amount, PDO::PARAM_INT);
-      $stmt->execute();
-      if ($stmt->rowCount() == 0) {
-        print "destination account does not exist\r\n";
-        return;
-      }
-
-      // Attempt to release the savepoint (which is really the commit).
-      $dbh->exec('RELEASE SAVEPOINT cockroach_restart');
-      $dbh->commit();
-      return;
-    } catch (PDOException $e) {
-      if ($e->getCode() != '40001') {
-        // Non-recoverable error. Rollback and bubble error up the chain.
-        $dbh->rollBack();
-        throw $e;
-      } else {
-        // Cockroach transaction retry code. Rollback to the savepoint and
-        // restart.
-        $dbh->exec('ROLLBACK TO SAVEPOINT cockroach_restart');
-      }
-    }
-  }
-}
-
-try {
-  $dbh = new PDO('pgsql:host=localhost;port=26257;dbname=bank;sslmode=require;sslrootcert=certs/ca.crt;sslkey=certs/client.maxroach.key;sslcert=certs/client.maxroach.crt',
-    'maxroach', null, array(
-      PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
-      PDO::ATTR_EMULATE_PREPARES => true,
-  ));
-
-  transferMoney($dbh, 1, 2, 10);
-
-  print "Account balances after transfer:\r\n";
-  foreach ($dbh->query('SELECT id, balance FROM accounts') as $row) {
-    print $row['id'] . ': ' . $row['balance'] . "\r\n";
-  }
-} catch (Exception $e) {
-  print $e->getMessage() . "\r\n";
-  exit(1);
-}
-?>
diff --git a/src/current/_includes/v2.0/app/txn-sample.py b/src/current/_includes/v2.0/app/txn-sample.py
deleted file mode 100644
index d4c86a36cc8..00000000000
--- a/src/current/_includes/v2.0/app/txn-sample.py
+++ /dev/null
@@ -1,76 +0,0 @@
-# Import the driver.
-import psycopg2
-import psycopg2.errorcodes
-
-# Connect to the cluster.
-conn = psycopg2.connect(
-    database='bank',
-    user='maxroach',
-    sslmode='require',
-    sslrootcert='certs/ca.crt',
-    sslkey='certs/client.maxroach.key',
-    sslcert='certs/client.maxroach.crt',
-    port=26257,
-    host='localhost'
-)
-
-def onestmt(conn, sql):
-    with conn.cursor() as cur:
-        cur.execute(sql)
-
-
-# Wrapper for a transaction.
-# This automatically re-calls "op" with the open transaction as an argument
-# as long as the database server asks for the transaction to be retried.
-def run_transaction(conn, op):
-    with conn:
-        onestmt(conn, "SAVEPOINT cockroach_restart")
-        while True:
-            try:
-                # Attempt the work.
-                op(conn)
-
-                # If we reach this point, commit.
-                onestmt(conn, "RELEASE SAVEPOINT cockroach_restart")
-                break
-
-            except psycopg2.OperationalError as e:
-                if e.pgcode != psycopg2.errorcodes.SERIALIZATION_FAILURE:
-                    # A non-retryable error; report this up the call stack.
-                    raise e
-                # Signal the database that we'll retry.
-                onestmt(conn, "ROLLBACK TO SAVEPOINT cockroach_restart")
-
-
-# The transaction we want to run.
-def transfer_funds(txn, frm, to, amount):
-    with txn.cursor() as cur:
-
-        # Check the current balance.
-        cur.execute("SELECT balance FROM accounts WHERE id = " + str(frm))
-        from_balance = cur.fetchone()[0]
-        if from_balance < amount:
-            # Raise an exception object; raising a bare string is invalid
-            # in Python.
-            raise Exception("Insufficient funds")
-
-        # Perform the transfer.
-        cur.execute("UPDATE accounts SET balance = balance - %s WHERE id = %s",
-                    (amount, frm))
-        cur.execute("UPDATE accounts SET balance = balance + %s WHERE id = %s",
-                    (amount, to))
-
-
-# Execute the transaction.
-run_transaction(conn, lambda conn: transfer_funds(conn, 1, 2, 100))
-
-
-with conn:
-    with conn.cursor() as cur:
-        # Check account balances.
-        cur.execute("SELECT id, balance FROM accounts")
-        rows = cur.fetchall()
-        print('Balances after transfer:')
-        for row in rows:
-            print([str(cell) for cell in row])
-
-# Close communication with the database.
-conn.close()
diff --git a/src/current/_includes/v2.0/app/txn-sample.rb b/src/current/_includes/v2.0/app/txn-sample.rb
deleted file mode 100644
index 1c3e028fdf7..00000000000
--- a/src/current/_includes/v2.0/app/txn-sample.rb
+++ /dev/null
@@ -1,52 +0,0 @@
-# Import the driver.
-require 'pg'
-
-# Wrapper for a transaction.
-# This automatically re-calls "op" with the open transaction as an argument
-# as long as the database server asks for the transaction to be retried.
-def run_transaction(conn)
-  conn.transaction do |txn|
-    txn.exec('SAVEPOINT cockroach_restart')
-    while
-      begin
-        # Attempt the work.
-        yield txn
-
-        # If we reach this point, commit.
-        txn.exec('RELEASE SAVEPOINT cockroach_restart')
-        break
-      rescue PG::TRSerializationFailure
-        txn.exec('ROLLBACK TO SAVEPOINT cockroach_restart')
-      end
-    end
-  end
-end
-
-def transfer_funds(txn, from, to, amount)
-  txn.exec_params('SELECT balance FROM accounts WHERE id = $1', [from]) do |res|
-    res.each do |row|
-      raise 'insufficient funds' if Integer(row['balance']) < amount
-    end
-  end
-  txn.exec_params('UPDATE accounts SET balance = balance - $1 WHERE id = $2', [amount, from])
-  txn.exec_params('UPDATE accounts SET balance = balance + $1 WHERE id = $2', [amount, to])
-end
-
-# Connect to the "bank" database.
-conn = PG.connect(
-  user: 'maxroach',
-  dbname: 'bank',
-  host: 'localhost',
-  port: 26257,
-  sslmode: 'require',
-  sslrootcert: 'certs/ca.crt',
-  sslkey:'certs/client.maxroach.key',
-  sslcert:'certs/client.maxroach.crt'
-)
-
-run_transaction(conn) do |txn|
-  transfer_funds(txn, 1, 2, 100)
-end
-
-# Close communication with the database.
-conn.close()
diff --git a/src/current/_includes/v2.0/app/txn-sample.rs b/src/current/_includes/v2.0/app/txn-sample.rs
deleted file mode 100644
index e2282c56ea1..00000000000
--- a/src/current/_includes/v2.0/app/txn-sample.rs
+++ /dev/null
@@ -1,59 +0,0 @@
-extern crate postgres;
-
-use postgres::{Connection, TlsMode, Result};
-use postgres::transaction::Transaction;
-use self::postgres::error::T_R_SERIALIZATION_FAILURE;
-
-/// Runs op inside a transaction and retries it as needed.
-/// On non-retryable failures, the transaction is aborted and
-/// rolled back; on success, the transaction is committed.
-fn execute_txn<T, F>(conn: &Connection, mut op: F) -> Result<T>
-where
-    F: FnMut(&Transaction) -> Result<T>,
-{
-    let txn = conn.transaction()?;
-    loop {
-        let sp = txn.savepoint("cockroach_restart")?;
-        match op(&sp).and_then(|t| sp.commit().map(|_| t)) {
-            Err(ref err) if err.as_db()
-                .map(|e| e.code == T_R_SERIALIZATION_FAILURE)
-                .unwrap_or(false) => {},
-            r => break r,
-        }
-    }.and_then(|t| txn.commit().map(|_| t))
-}
-
-fn transfer_funds(txn: &Transaction, from: i64, to: i64, amount: i64) -> Result<()> {
-    // Read the balance.
-    let from_balance: i64 = txn.query("SELECT balance FROM accounts WHERE id = $1", &[&from])?
-        .get(0)
-        .get(0);
-
-    assert!(from_balance >= amount);
-
-    // Perform the transfer.
-    txn.execute(
-        "UPDATE accounts SET balance = balance - $1 WHERE id = $2",
-        &[&amount, &from],
-    )?;
-    txn.execute(
-        "UPDATE accounts SET balance = balance + $1 WHERE id = $2",
-        &[&amount, &to],
-    )?;
-    Ok(())
-}
-
-fn main() {
-    let conn = Connection::connect("postgresql://maxroach@localhost:26257/bank", TlsMode::None)
-        .unwrap();
-
-    // Run a transfer in a transaction.
-    execute_txn(&conn, |txn| transfer_funds(txn, 1, 2, 100)).unwrap();
-
-    // Check account balances after the transaction.
-    for row in &conn.query("SELECT id, balance FROM accounts", &[]).unwrap() {
-        let id: i64 = row.get(0);
-        let balance: i64 = row.get(1);
-        println!("{} {}", id, balance);
-    }
-}
diff --git a/src/current/_includes/v2.0/app/util.clj b/src/current/_includes/v2.0/app/util.clj
deleted file mode 100644
index d040affe794..00000000000
--- a/src/current/_includes/v2.0/app/util.clj
+++ /dev/null
@@ -1,38 +0,0 @@
-(ns test.util
-  (:require [clojure.java.jdbc :as j]
-            [clojure.walk :as walk]))
-
-(defn txn-restart-err?
-  "Takes an exception and returns true if it is a CockroachDB retry error."
-  [e]
-  (when-let [m (.getMessage e)]
-    (condp instance? e
-      java.sql.BatchUpdateException
-      (and (re-find #"getNextExc" m)
-           (txn-restart-err? (.getNextException e)))
-
-      org.postgresql.util.PSQLException
-      (= (.getSQLState e) "40001") ; 40001 is the code returned by CockroachDB retry errors.
-
-      false)))
-
-;; Wrapper for a transaction.
-;; This automatically invokes the body again as long as the database server
-;; asks the transaction to be retried.
-
-(defmacro with-txn-retry
-  "Wrap an evaluation within a CockroachDB retry block."
-  [[txn c] & body]
-  `(j/with-db-transaction [~txn ~c]
-     (loop []
-       (j/execute! ~txn ["savepoint cockroach_restart"])
-       (let [res# (try (let [r# (do ~@body)]
-                         {:ok r#})
-                       (catch java.sql.SQLException e#
-                         (if (txn-restart-err? e#)
-                           {:retry true}
-                           (throw e#))))]
-         (if (:retry res#)
-           (do (j/execute! ~txn ["rollback to savepoint cockroach_restart"])
-               (recur))
-           (:ok res#))))))
diff --git a/src/current/_includes/v2.0/computed-columns/jsonb.md b/src/current/_includes/v2.0/computed-columns/jsonb.md
deleted file mode 100644
index bd37ecdaad7..00000000000
--- a/src/current/_includes/v2.0/computed-columns/jsonb.md
+++ /dev/null
@@ -1,35 +0,0 @@
-In this example, let's create a table with a `JSONB` column and a computed column:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE student_profiles (
-    id STRING PRIMARY KEY AS (profile->>'id') STORED,
-    profile JSONB
-);
-~~~
-
-Then, insert a few rows of data:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> INSERT INTO student_profiles (profile) VALUES
-    ('{"id": "d78236", "name": "Arthur Read", "age": "16", "school": "PVPHS", "credits": 120, "sports": "none"}'),
-    ('{"name": "Buster Bunny", "age": "15", "id": "f98112", "school": "THS", "credits": 67, "clubs": "MUN"}'),
-    ('{"name": "Ernie Narayan", "school" : "Brooklyn Tech", "id": "t63512", "sports": "Track and Field", "clubs": "Chess"}');
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM student_profiles;
-~~~
-~~~
-+--------+---------------------------------------------------------------------------------------------------------------------+
-|   id   |                                                       profile                                                       |
-+--------+---------------------------------------------------------------------------------------------------------------------+
-| d78236 | {"age": "16", "credits": 120, "id": "d78236", "name": "Arthur Read", "school": "PVPHS", "sports": "none"}           |
-| f98112 | {"age": "15", "clubs": "MUN", "credits": 67, "id": "f98112", "name": "Buster Bunny", "school": "THS"}               |
-| t63512 | {"clubs": "Chess", "id": "t63512", "name": "Ernie Narayan", "school": "Brooklyn Tech", "sports": "Track and Field"} |
-+--------+---------------------------------------------------------------------------------------------------------------------+
-~~~
-
-The primary key `id` is computed as a field from the `profile` column.
diff --git a/src/current/_includes/v2.0/computed-columns/partitioning.md b/src/current/_includes/v2.0/computed-columns/partitioning.md
deleted file mode 100644
index 3785cbe9f8c..00000000000
--- a/src/current/_includes/v2.0/computed-columns/partitioning.md
+++ /dev/null
@@ -1,53 +0,0 @@
-{{site.data.alerts.callout_info}}Partitioning is an enterprise feature. To request and enable a trial or full enterprise license, see Enterprise Licensing.{{site.data.alerts.end}}
-
-In this example, let's create a table with geo-partitioning and a computed column:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE user_locations (
-    locality STRING AS (CASE
-        WHEN country IN ('ca', 'mx', 'us') THEN 'north_america'
-        WHEN country IN ('au', 'nz') THEN 'australia'
-    END) STORED,
-    id SERIAL,
-    name STRING,
-    country STRING,
-    PRIMARY KEY (locality, id))
-    PARTITION BY LIST (locality)
-      (PARTITION north_america VALUES IN ('north_america'),
-      PARTITION australia VALUES IN ('australia'));
-~~~
-
-Then, insert a few rows of data:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> INSERT INTO user_locations (name, country) VALUES
-    ('Leonard McCoy', 'us'),
-    ('Uhura', 'nz'),
-    ('Spock', 'ca'),
-    ('James Kirk', 'us'),
-    ('Scotty', 'mx'),
-    ('Hikaru Sulu', 'us'),
-    ('Pavel Chekov', 'au');
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM user_locations;
-~~~
-~~~
-+---------------+--------------------+---------------+---------+
-|   locality    |         id         |     name      | country |
-+---------------+--------------------+---------------+---------+
-| australia     | 333153890100609025 | Uhura         | nz      |
-| australia     | 333153890100772865 | Pavel Chekov  | au      |
-| north_america | 333153890100576257 | Leonard McCoy | us      |
-| north_america | 333153890100641793 | Spock         | ca      |
-| north_america | 333153890100674561 | James Kirk    | us      |
-| north_america | 333153890100707329 | Scotty        | mx      |
-| north_america | 333153890100740097 | Hikaru Sulu   | us      |
-+---------------+--------------------+---------------+---------+
-~~~
-
-The `locality` column is computed from the `country` column.
diff --git a/src/current/_includes/v2.0/computed-columns/secondary-index.md b/src/current/_includes/v2.0/computed-columns/secondary-index.md
deleted file mode 100644
index 242b5d6c7f2..00000000000
--- a/src/current/_includes/v2.0/computed-columns/secondary-index.md
+++ /dev/null
@@ -1,63 +0,0 @@
-In this example, let's create a table with a computed column and an index on that column:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE gymnastics (
-    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
-    athlete STRING,
-    vault DECIMAL,
-    bars DECIMAL,
-    beam DECIMAL,
-    floor DECIMAL,
-    combined_score DECIMAL AS (vault + bars + beam + floor) STORED,
-    INDEX total (combined_score DESC)
-  );
-~~~
-
-Then, insert a few rows of data:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> INSERT INTO gymnastics (athlete, vault, bars, beam, floor) VALUES
-    ('Simone Biles', 15.933, 14.800, 15.300, 15.800),
-    ('Gabby Douglas', 0, 15.766, 0, 0),
-    ('Laurie Hernandez', 15.100, 0, 15.233, 14.833),
-    ('Madison Kocian', 0, 15.933, 0, 0),
-    ('Aly Raisman', 15.833, 0, 15.000, 15.366);
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM gymnastics;
-~~~
-~~~
-+--------------------------------------+------------------+--------+--------+--------+--------+----------------+
-|                  id                  |     athlete      | vault  |  bars  |  beam  | floor  | combined_score |
-+--------------------------------------+------------------+--------+--------+--------+--------+----------------+
-| 3fe11371-6a6a-49de-bbef-a8dd16560fac | Aly Raisman      | 15.833 |      0 | 15.000 | 15.366 |         46.199 |
-| 56055a70-b4c7-4522-909b-8f3674b705e5 | Madison Kocian   |      0 | 15.933 |      0 |      0 |         15.933 |
-| 69f73fd1-da34-48bf-aff8-71296ce4c2c7 | Gabby Douglas    |      0 | 15.766 |      0 |      0 |         15.766 |
-| 8a7b730b-668d-4845-8d25-48bda25114d6 | Laurie Hernandez | 15.100 |      0 | 15.233 | 14.833 |         45.166 |
-| b2c5ca80-21c2-4853-9178-b96ce220ea4d | Simone Biles     | 15.933 | 14.800 | 15.300 | 15.800 |         61.833 |
-+--------------------------------------+------------------+--------+--------+--------+--------+----------------+
-~~~
-
-Now, let's run a query using the secondary index:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT athlete, combined_score FROM gymnastics ORDER BY combined_score DESC;
-~~~
-~~~
-+------------------+----------------+
-|     athlete      | combined_score |
-+------------------+----------------+
-| Simone Biles     |         61.833 |
-| Aly Raisman      |         46.199 |
-| Laurie Hernandez |         45.166 |
-| Madison Kocian   |         15.933 |
-| Gabby Douglas    |         15.766 |
-+------------------+----------------+
-~~~
-
-The athlete with the highest combined score of 61.833 is Simone Biles.
diff --git a/src/current/_includes/v2.0/computed-columns/simple.md b/src/current/_includes/v2.0/computed-columns/simple.md
deleted file mode 100644
index 056ad70ecc7..00000000000
--- a/src/current/_includes/v2.0/computed-columns/simple.md
+++ /dev/null
@@ -1,37 +0,0 @@
-In this example, let's create a simple table with a computed column:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE names (
-    id INT PRIMARY KEY,
-    first_name STRING,
-    last_name STRING,
-    full_name STRING AS (CONCAT(first_name, ' ', last_name)) STORED
-  );
-~~~
-
-Then, insert a few rows of data:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> INSERT INTO names (id, first_name, last_name) VALUES
-    (1, 'Lola', 'McDog'),
-    (2, 'Carl', 'Kimball'),
-    (3, 'Ernie', 'Narayan');
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM names;
-~~~
-~~~
-+----+------------+-----------+---------------+
-| id | first_name | last_name |   full_name   |
-+----+------------+-----------+---------------+
-|  1 | Lola       | McDog     | Lola McDog    |
-|  2 | Carl       | Kimball   | Carl Kimball  |
-|  3 | Ernie      | Narayan   | Ernie Narayan |
-+----+------------+-----------+---------------+
-~~~
-
-The `full_name` column is computed from the `first_name` and `last_name` columns without the need to define a [view](views.html).
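A detail worth a quick example alongside these computed-column samples: a `STORED` computed column is read-only, so supplying a value for it in an `INSERT` fails. A minimal sketch against the `names` table from the sample above (the row values are illustrative, and the exact error text may vary by version):

~~~ sql
> INSERT INTO names (id, first_name, last_name, full_name)
    VALUES (4, 'Lucy', 'Ball', 'Lucy Ball');
~~~

~~~
ERROR: cannot write directly to computed column "full_name"
~~~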
diff --git a/src/current/_includes/v2.0/faq/auto-generate-unique-ids.html b/src/current/_includes/v2.0/faq/auto-generate-unique-ids.html deleted file mode 100644 index 0ffcbb03c3e..00000000000 --- a/src/current/_includes/v2.0/faq/auto-generate-unique-ids.html +++ /dev/null @@ -1,87 +0,0 @@ -To auto-generate unique row IDs, use the [`UUID`](uuid.html) column with the `gen_random_uuid()` [function](functions-and-operators.html#id-generation-functions) as the [default value](default-value.html): - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE t1 (id UUID PRIMARY KEY DEFAULT gen_random_uuid(), name STRING); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO t1 (name) VALUES ('a'), ('b'), ('c'); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM t1; -~~~ - -~~~ -+--------------------------------------+------+ -| id | name | -+--------------------------------------+------+ -| 60853a85-681d-4620-9677-946bbfdc8fbc | c | -| 77c9bc2e-76a5-4ebc-80c3-7ad3159466a1 | b | -| bd3a56e1-c75e-476c-b221-0da9d74d66eb | a | -+--------------------------------------+------+ -(3 rows) -~~~ - -Alternatively, you can use the [`BYTES`](bytes.html) column with the `uuid_v4()` function as the default value instead: - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE t2 (id BYTES PRIMARY KEY DEFAULT uuid_v4(), name STRING); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO t2 (name) VALUES ('a'), ('b'), ('c'); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM t2; -~~~ - -~~~ -+---------------------------------------------------+------+ -| id | name | -+---------------------------------------------------+------+ -| "\x9b\x10\xdc\x11\x9a\x9cGB\xbd\x8d\t\x8c\xf6@vP" | a | -| "\xd9s\xd7\x13\n_L*\xb0\x87c\xb6d\xe1\xd8@" | c | -| "\uac74\x1dd@B\x97\xac\x04N&\x9eBg\x86" | b | -+---------------------------------------------------+------+ -(3 rows) -~~~ - -In either case, generated IDs will be 128-bit, large enough for there to be virtually no chance of generating non-unique values. Also, once the table grows beyond a single key-value range (more than 64MB by default), new IDs will be scattered across all of the table's ranges and, therefore, likely across different nodes. This means that multiple nodes will share in the load. - -If it is important for generated IDs to be stored in the same key-value range, you can use an [integer type](int.html) with the `unique_rowid()` [function](functions-and-operators.html#id-generation-functions) as the default value, either explicitly or via the [`SERIAL` pseudo-type](serial.html): - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE t3 (id INT PRIMARY KEY DEFAULT unique_rowid(), name STRING); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO t3 (name) VALUES ('a'), ('b'), ('c'); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM t3; -~~~ - -~~~ -+--------------------+------+ -| id | name | -+--------------------+------+ -| 293807573840855041 | a | -| 293807573840887809 | b | -| 293807573840920577 | c | -+--------------------+------+ -(3 rows) -~~~ - -Upon insert, the `unique_rowid()` function generates a default value from the timestamp and ID of the node executing the insert. Such time-ordered values are likely to be globally unique except in cases where a very large number of IDs (100,000+) are generated per node per second. 
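The [`SERIAL` pseudo-type](serial.html) mentioned above is effectively shorthand for an integer column with `DEFAULT unique_rowid()`. For example (a sketch; `t4` is a hypothetical table):

{% include copy-clipboard.html %}
~~~ sql
> CREATE TABLE t4 (id SERIAL PRIMARY KEY, name STRING);
~~~

Running `SHOW CREATE TABLE t4` should show `id` defined as an integer column with `DEFAULT unique_rowid()`, so `t4` behaves the same as `t3` above.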
diff --git a/src/current/_includes/v2.0/faq/clock-synchronization-effects.md b/src/current/_includes/v2.0/faq/clock-synchronization-effects.md deleted file mode 100644 index d86fb8dc238..00000000000 --- a/src/current/_includes/v2.0/faq/clock-synchronization-effects.md +++ /dev/null @@ -1,15 +0,0 @@ -CockroachDB requires moderate levels of clock synchronization to preserve data consistency. For this reason, when a node detects that its clock is out of sync with at least half of the other nodes in the cluster by 80% of the maximum offset allowed (500ms by default), it spontaneously shuts down. While [serializable consistency](https://en.wikipedia.org/wiki/Serializability) is maintained regardless of clock skew, skew outside the configured clock offset bounds can result in violations of single-key linearizability between causally dependent transactions. It's therefore important to prevent clocks from drifting too far by running [NTP](http://www.ntp.org/) or other clock synchronization software on each node. - -The one rare case to note is when a node's clock suddenly jumps beyond the maximum offset before the node detects it. Although extremely unlikely, this could occur, for example, when running CockroachDB inside a VM and the VM hypervisor decides to migrate the VM to different hardware with a different time. In this case, there can be a small window of time between when the node's clock becomes unsynchronized and when the node spontaneously shuts down. During this window, it would be possible for a client to read stale data and write data derived from stale reads. - -For guidance on synchronizing clocks, see the tutorial for your deployment environment: - -Environment | Featured Approach ------------|--------------------- -[On-Premises](deploy-cockroachdb-on-premises.html#step-1-synchronize-clocks) | Use NTP with Google's external NTP service. -[AWS](deploy-cockroachdb-on-aws.html#step-3-synchronize-clocks) | Use the Amazon Time Sync Service. -[Azure](deploy-cockroachdb-on-microsoft-azure.html#step-3-synchronize-clocks) | Disable Hyper-V time synchronization and use NTP with Google's external NTP service. -[Digital Ocean](deploy-cockroachdb-on-digital-ocean.html#step-2-synchronize-clocks) | Use NTP with Google's external NTP service. -[GCE](deploy-cockroachdb-on-google-cloud-platform.html#step-3-synchronize-clocks) | Use NTP with Google's internal NTP service. - -{{site.data.alerts.callout_info}}In most cases, we recommend Google's external NTP service because they handle "smearing" the leap second. If you use a different NTP service that doesn't smear the leap second, you must configure client-side smearing manually and do so in the same way on each machine.{{site.data.alerts.end}} diff --git a/src/current/_includes/v2.0/faq/clock-synchronization-monitoring.html b/src/current/_includes/v2.0/faq/clock-synchronization-monitoring.html deleted file mode 100644 index 7fb82e4d188..00000000000 --- a/src/current/_includes/v2.0/faq/clock-synchronization-monitoring.html +++ /dev/null @@ -1,8 +0,0 @@ -As explained in more detail [in our monitoring documentation](monitoring-and-alerting.html#prometheus-endpoint), each CockroachDB node exports a wide variety of metrics at `http://<host>:<http-port>/_status/vars` in the format used by the popular Prometheus timeseries database.
Two of these metrics export how close each node's clock is to the clock of all other nodes: - -Metric | Definition --------|----------- -`clock_offset_meannanos` | The mean difference between the node's clock and other nodes' clocks in nanoseconds -`clock_offset_stddevnanos` | The standard deviation of the difference between the node's clock and other nodes' clocks in nanoseconds - -As described in [the above answer](#what-happens-when-node-clocks-are-not-properly-synchronized), a node will shut down if the mean offset of its clock from the other nodes' clocks exceeds 80% of the maximum offset allowed. It's recommended to monitor the `clock_offset_meannanos` metric and alert if it's approaching the 80% threshold of your cluster's configured max offset. diff --git a/src/current/_includes/v2.0/faq/differences-between-numberings.md b/src/current/_includes/v2.0/faq/differences-between-numberings.md deleted file mode 100644 index 741ec4f8066..00000000000 --- a/src/current/_includes/v2.0/faq/differences-between-numberings.md +++ /dev/null @@ -1,11 +0,0 @@ - -| Property | UUID generated with `uuid_v4()` | INT generated with `unique_rowid()` | Sequences | -|--------------------------------------|-----------------------------------------|-----------------------------------------------|--------------------------------| -| Size | 16 bytes | 8 bytes | 1 to 8 bytes | -| Ordering properties | Unordered | Highly time-ordered | Highly time-ordered | -| Performance cost at generation | Small, scalable | Small, scalable | Variable, can cause contention | -| Value distribution | Uniformly distributed (128 bits) | Contains time and space (node ID) components | Dense, small values | -| Data locality | Maximally distributed | Values generated close in time are co-located | Highly local | -| `INSERT` latency when used as key | Small, insensitive to concurrency | Small, but increases with concurrent INSERTs | Higher | -| `INSERT` throughput when used as key | Highest | Limited by max throughput on 1 node | Limited by max throughput on 1 node | -| Read throughput when used as key | Highest (maximal parallelism) | Limited | Limited | diff --git a/src/current/_includes/v2.0/faq/planned-maintenance.md b/src/current/_includes/v2.0/faq/planned-maintenance.md deleted file mode 100644 index c9fbb49266a..00000000000 --- a/src/current/_includes/v2.0/faq/planned-maintenance.md +++ /dev/null @@ -1,22 +0,0 @@ -By default, if a node stays offline for more than 5 minutes, the cluster will consider it dead and will rebalance its data to other nodes. Before temporarily stopping nodes for planned maintenance (e.g., upgrading system software), if you expect any nodes to be offline for longer than 5 minutes, you can prevent the cluster from unnecessarily rebalancing data off the nodes by increasing the `server.time_until_store_dead` [cluster setting](cluster-settings.html) to match the estimated maintenance window. - -For example, let's say you want to maintain a group of servers, and the nodes running on the servers may be offline for up to 15 minutes as a result. 
Before shutting down the nodes, you would change the `server.time_until_store_dead` cluster setting as follows: - -{% include copy-clipboard.html %} -~~~ sql -> SET CLUSTER SETTING server.time_until_store_dead = '15m0s'; -~~~ - -After completing the maintenance work and [restarting the nodes](start-a-node.html), you would then change the setting back to its default: - -{% include copy-clipboard.html %} -~~~ sql -> SET CLUSTER SETTING server.time_until_store_dead = '5m0s'; -~~~ - -It's also important to ensure that load balancers do not send client traffic to a node about to be shut down, even if it will only be down for a few seconds. If you find that your load balancer's health check is not always recognizing a node as unready before the node shuts down, you can increase the `server.shutdown.drain_wait` setting, which tells the node to wait in an unready state for the specified duration. For example: - -{% include copy-clipboard.html %} -~~~ sql -> SET CLUSTER SETTING server.shutdown.drain_wait = '10s'; -~~~ diff --git a/src/current/_includes/v2.0/faq/sequential-numbers.md b/src/current/_includes/v2.0/faq/sequential-numbers.md deleted file mode 100644 index ee5bd96d9c4..00000000000 --- a/src/current/_includes/v2.0/faq/sequential-numbers.md +++ /dev/null @@ -1,7 +0,0 @@ -Sequential numbers can be generated in CockroachDB using the `unique_rowid()` built-in function or using [SQL sequences](create-sequence.html). However, note the following considerations: - -- Unless you need roughly-ordered numbers, we recommend using [`UUID`](uuid.html) values instead. See the [previous FAQ](#how-do-i-auto-generate-unique-row-ids-in-cockroachdb) for details. -- [Sequences](create-sequence.html) produce **unique** values. However, not all values are guaranteed to be produced (e.g., when a transaction is canceled after it consumes a value) and the values may be slightly reordered (e.g., when a transaction that consumes a lower sequence number commits after a transaction that consumes a higher number). -- For maximum performance, avoid using sequences or `unique_rowid()` to generate row IDs or indexed columns. Values generated in these ways are logically close to each other and can cause contention on a few data ranges during inserts. Instead, prefer [`UUID`](uuid.html) identifiers. diff --git a/src/current/_includes/v2.0/faq/sequential-transactions.md b/src/current/_includes/v2.0/faq/sequential-transactions.md deleted file mode 100644 index 684f2ce5d2a..00000000000 --- a/src/current/_includes/v2.0/faq/sequential-transactions.md +++ /dev/null @@ -1,19 +0,0 @@ -Most use cases that ask for a strong time-based write ordering can be solved with other, more distribution-friendly solutions instead. For example, CockroachDB's [time travel queries (`AS OF SYSTEM TIME`)](https://www.cockroachlabs.com/blog/time-travel-queries-select-witty_subtitle-the_future/) support the following: - -- Paginating through all the changes to a table or dataset -- Determining the order of changes to data over time -- Determining the state of data at some point in the past -- Determining the changes to data between two points of time - -Note that the values generated by `unique_rowid()`, described in the previous FAQ entries, also provide an approximate time ordering.
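For example (reusing the `t3` table from the earlier FAQ on auto-generating unique row IDs), sorting on a `unique_rowid()`-generated key returns rows in roughly their insertion order:

{% include copy-clipboard.html %}
~~~ sql
> SELECT * FROM t3 ORDER BY id;
~~~

Keep in mind that this ordering is only approximate: IDs generated on different nodes at nearly the same moment are not guaranteed to be strictly ordered by wall-clock time.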
- -However, if your application absolutely requires strong time-based write ordering, it is possible to create a strictly monotonic counter in CockroachDB that increases over time as follows: - -- Initially: `CREATE TABLE cnt(val INT PRIMARY KEY); INSERT INTO cnt(val) VALUES(1);` -- In each transaction: `INSERT INTO cnt(val) SELECT max(val)+1 FROM cnt RETURNING val;` - -This will cause [`INSERT`](insert.html) transactions to conflict with each other and effectively force the transactions to commit one at a time throughout the cluster, which in turn guarantees the values generated in this way are strictly increasing over time without gaps. The caveat is that performance is severely limited as a result. - -If you find yourself interested in this problem, please [contact us](support-resources.html) and describe your situation. We would be glad to help you find alternative solutions and possibly extend CockroachDB to better match your needs. diff --git a/src/current/_includes/v2.0/faq/simulate-key-value-store.html b/src/current/_includes/v2.0/faq/simulate-key-value-store.html deleted file mode 100644 index 4772fa5358c..00000000000 --- a/src/current/_includes/v2.0/faq/simulate-key-value-store.html +++ /dev/null @@ -1,13 +0,0 @@ -CockroachDB is a distributed SQL database built on a transactional and strongly-consistent key-value store. Although it is not possible to access the key-value store directly, you can mirror direct access using a "simple" table of two columns, with one set as the primary key: - -~~~ sql -> CREATE TABLE kv (k INT PRIMARY KEY, v BYTES); -~~~ - -When such a "simple" table has no indexes or foreign keys, [`INSERT`](insert.html)/[`UPSERT`](upsert.html)/[`UPDATE`](update.html)/[`DELETE`](delete.html) statements translate to key-value operations with minimal overhead (single digit percent slowdowns). For example, the following `UPSERT` to add or replace a row in the table would translate into a single key-value Put operation: - -~~~ sql -> UPSERT INTO kv VALUES (1, b'hello') -~~~ - -This SQL table approach also offers you a well-defined query language, a known transaction model, and the flexibility to add more columns to the table if the need arises. diff --git a/src/current/_includes/v2.0/faq/sql-query-logging.md b/src/current/_includes/v2.0/faq/sql-query-logging.md deleted file mode 100644 index b6bb14a900b..00000000000 --- a/src/current/_includes/v2.0/faq/sql-query-logging.md +++ /dev/null @@ -1,63 +0,0 @@ -There are several ways to log SQL queries. The type of logging you use will depend on your requirements. - -- For per-table audit logs, turn on [SQL audit logs](#sql-audit-logs). -- For system troubleshooting and performance optimization, turn on [cluster-wide execution logs](#cluster-wide-execution-logs). -- For local testing, turn on [per-node execution logs](#per-node-execution-logs). - -### SQL audit logs - -{% include {{ page.version.version }}/misc/experimental-warning.md %} - -SQL audit logging is useful if you want to log all queries that are run against specific tables. - -- For a tutorial, see [SQL Audit Logging](sql-audit-logging.html). - -- For SQL reference documentation, see [`ALTER TABLE ... EXPERIMENTAL_AUDIT`](experimental-audit.html). 
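For example, to audit all reads and writes against a hypothetical `customers` table (a sketch based on the `EXPERIMENTAL_AUDIT` syntax linked above):

{% include copy-clipboard.html %}
~~~ sql
> ALTER TABLE customers EXPERIMENTAL_AUDIT SET READ WRITE;
~~~

Each node then records queries touching `customers` in its audit log.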
- -### Cluster-wide execution logs - -For production clusters, the best way to log all queries is to turn on the [cluster-wide setting](cluster-settings.html) `sql.trace.log_statement_execute`: - -~~~ sql -> SET CLUSTER SETTING sql.trace.log_statement_execute = true; -~~~ - -With this setting on, each node of the cluster writes all SQL queries it executes to its log file. When you no longer need to log queries, you can turn the setting back off: - -~~~ sql -> SET CLUSTER SETTING sql.trace.log_statement_execute = false; -~~~ - -### Per-node execution logs - -Alternatively, if you are testing CockroachDB locally and want to log queries executed just by a specific node, you can either pass a CLI flag at node startup, or execute a SQL function on a running node. - -Using the CLI to start a new node, pass the `--vmodule` flag to the [`cockroach start`](start-a-node.html) command. For example, to start a single node locally and log all SQL queries it executes, you'd run: - -~~~ shell -$ cockroach start --insecure --host=localhost --vmodule=exec_log=2 -~~~ - -From the SQL prompt on a running node, execute the `crdb_internal.set_vmodule()` [function](functions-and-operators.html): - -{% include copy-clipboard.html %} -~~~ sql -> SELECT crdb_internal.set_vmodule('exec_log=2'); -~~~ - -This will result in the following output: - -~~~ -+---------------------------+ -| crdb_internal.set_vmodule | -+---------------------------+ -| 0 | -+---------------------------+ -(1 row) -~~~ - -Once the logging is enabled, all of the node's queries will be written to the [CockroachDB log file](debug-and-error-logs.html) as follows: - -~~~ -I180402 19:12:28.112957 394661 sql/exec_log.go:173 [n1,client=127.0.0.1:50155,user=root] exec "psql" {} "SELECT version()" {} 0.795 1 "" -~~~ diff --git a/src/current/_includes/v2.0/faq/when-to-interleave-tables.html b/src/current/_includes/v2.0/faq/when-to-interleave-tables.html deleted file mode 100644 index a65196ad693..00000000000 --- a/src/current/_includes/v2.0/faq/when-to-interleave-tables.html +++ /dev/null @@ -1,5 +0,0 @@ -You're most likely to benefit from interleaved tables when: - - - Your tables form a [hierarchy](interleave-in-parent.html#interleaved-hierarchy) - - Queries maximize the [benefits of interleaving](interleave-in-parent.html#benefits) - - Queries do not suffer too greatly from interleaving's [tradeoffs](interleave-in-parent.html#tradeoffs) diff --git a/src/current/_includes/v2.0/json/json-sample.go b/src/current/_includes/v2.0/json/json-sample.go deleted file mode 100644 index ecba73acc55..00000000000 --- a/src/current/_includes/v2.0/json/json-sample.go +++ /dev/null @@ -1,79 +0,0 @@ -package main - -import ( - "database/sql" - "fmt" - "io/ioutil" - "net/http" - "time" - - _ "github.com/lib/pq" -) - -func main() { - db, err := sql.Open("postgres", "user=maxroach dbname=jsonb_test sslmode=disable port=26257") - if err != nil { - panic(err) - } - - // The Reddit API wants us to tell it where to start from. The first request - // we just say "null" to say "from the start", subsequent requests will use - // the value received from the last call. - after := "null" - - for i := 0; i < 300; i++ { - after, err = makeReq(db, after) - if err != nil { - panic(err) - } - // Reddit limits to 30 requests per minute, so do not do any more than that. - time.Sleep(2 * time.Second) - } -} - -func makeReq(db *sql.DB, after string) (string, error) { - // First, make a request to reddit using the appropriate "after" string. 
- client := &http.Client{} - req, err := http.NewRequest("GET", fmt.Sprintf("https://www.reddit.com/r/programming.json?after=%s", after), nil) - if err != nil { - return "", err - } - - req.Header.Add("User-Agent", `Go`) - - resp, err := client.Do(req) - if err != nil { - return "", err - } - // Close the response body once we're done reading it so connections can be reused. - defer resp.Body.Close() - - res, err := ioutil.ReadAll(resp.Body) - if err != nil { - return "", err - } - - // We've gotten back our JSON from reddit, we can use a couple SQL tricks to - // accomplish multiple things at once. - // The JSON reddit returns looks like this: - // { - // "data": { - // "children": [ ... ] - // }, - // "after": ... - // } - // We structure our query so that we extract the `children` field, and then - // expand that and insert each individual element into the database as a - // separate row. We then return the "after" field so we know how to make the - // next request. - r, err := db.Query(` - INSERT INTO jsonb_test.programming (posts) - SELECT json_array_elements($1->'data'->'children') - RETURNING $1->'data'->'after'`, - string(res)) - if err != nil { - return "", err - } - defer r.Close() - - // Since we did a RETURNING, we need to grab the result of our query. - if !r.Next() { - return "", fmt.Errorf("no rows returned: %v", r.Err()) - } - var newAfter string - if err := r.Scan(&newAfter); err != nil { - return "", err - } - - return newAfter, nil -} diff --git a/src/current/_includes/v2.0/json/json-sample.py b/src/current/_includes/v2.0/json/json-sample.py deleted file mode 100644 index 68b7fd1ef37..00000000000 --- a/src/current/_includes/v2.0/json/json-sample.py +++ /dev/null @@ -1,44 +0,0 @@ -import json -import psycopg2 -import requests -import time - -conn = psycopg2.connect(database="jsonb_test", user="maxroach", host="localhost", port=26257) -conn.set_session(autocommit=True) -cur = conn.cursor() - -# The Reddit API wants us to tell it where to start from. The first request -# we just say "null" to say "from the start"; subsequent requests will use -# the value received from the last call. -url = "https://www.reddit.com/r/programming.json" -after = {"after": "null"} - -for n in range(300): - # First, make a request to reddit using the appropriate "after" string. - req = requests.get(url, params=after, headers={"User-Agent": "Python"}) - - # Decode the JSON and set "after" for the next request. - resp = req.json() - after = {"after": str(resp['data']['after'])} - - # Convert the JSON to a string to send to the database. - data = json.dumps(resp) - - # The JSON reddit returns looks like this: - # { - # "data": { - # "children": [ ... ] - # }, - # "after": ... - # } - # We structure our query so that we extract the `children` field, and then - # expand that and insert each individual element into the database as a - # separate row. - cur.execute("""INSERT INTO jsonb_test.programming (posts) - SELECT json_array_elements(%s->'data'->'children')""", (data,)) - - # Reddit limits to 30 requests per minute, so do not do any more than that. - time.sleep(2) - -cur.close() -conn.close() diff --git a/src/current/_includes/v2.0/known-limitations/cte-by-name.md b/src/current/_includes/v2.0/known-limitations/cte-by-name.md deleted file mode 100644 index d33a6f8c7e8..00000000000 --- a/src/current/_includes/v2.0/known-limitations/cte-by-name.md +++ /dev/null @@ -1,10 +0,0 @@ -It is currently not possible to refer to a [common table expression](common-table-expressions.html) by name more than once.
- -For example, the following query is invalid because the CTE `a` is referred to twice: - -{% include copy-clipboard.html %} -~~~ sql -> WITH a AS (VALUES (1), (2), (3)) - SELECT * FROM a, a; -~~~ diff --git a/src/current/_includes/v2.0/known-limitations/cte-in-set-expression.md b/src/current/_includes/v2.0/known-limitations/cte-in-set-expression.md deleted file mode 100644 index 6c5e1dbbd56..00000000000 --- a/src/current/_includes/v2.0/known-limitations/cte-in-set-expression.md +++ /dev/null @@ -1,12 +0,0 @@ -{{site.data.alerts.callout_info}} -Resolved as of v2.1. -{{site.data.alerts.end}} - -It is not yet possible to use a [common table expression](common-table-expressions.html) defined outside of a [set expression](selection-queries.html#set-operations) in the right operand of a set operator, for example: - -~~~ sql -> WITH a AS (SELECT 1) - SELECT * FROM users UNION SELECT * FROM a; -- "a" used on the right, not yet supported. -~~~ - -For `UNION`, you can work around this limitation by swapping the operands. For the other set operators, you can inline the definition of the CTE inside the right operand. diff --git a/src/current/_includes/v2.0/known-limitations/cte-in-values-clause.md b/src/current/_includes/v2.0/known-limitations/cte-in-values-clause.md deleted file mode 100644 index 3e20d578673..00000000000 --- a/src/current/_includes/v2.0/known-limitations/cte-in-values-clause.md +++ /dev/null @@ -1,9 +0,0 @@ -{{site.data.alerts.callout_info}} -Resolved as of v2.1. -{{site.data.alerts.end}} - -It is not yet possible to use a [common table expression](common-table-expressions.html) defined outside of a `VALUES` clause in a [subquery](subqueries.html) inside the [`VALUES`](selection-queries.html#values-clause) clause, for example: - -~~~ sql -> WITH a AS (...) VALUES ((SELECT * FROM a)); -~~~ diff --git a/src/current/_includes/v2.0/known-limitations/cte-with-dml.md b/src/current/_includes/v2.0/known-limitations/cte-with-dml.md deleted file mode 100644 index 0cbd18ea484..00000000000 --- a/src/current/_includes/v2.0/known-limitations/cte-with-dml.md +++ /dev/null @@ -1,29 +0,0 @@ -{{site.data.alerts.callout_info}} -Resolved as of v2.1. -{{site.data.alerts.end}} - -If a [common table expression](common-table-expressions.html) containing a data-modifying statement is not referred to by the top-level query, either directly or indirectly, the data-modifying statement will not be executed at all. - -For example, the following query does not insert any rows, because the CTE `a` is not used: - -{% include copy-clipboard.html %} -~~~ sql -> WITH a AS (INSERT INTO t(x) VALUES (1), (2), (3)) - SELECT * FROM b; -~~~ - -Also, the following query does not insert any rows, even though the CTE `a` is used, because the other CTE that uses `a` is itself not used: - -{% include copy-clipboard.html %} -~~~ sql -> WITH a AS (INSERT INTO t(x) VALUES (1), (2), (3)), - b AS (SELECT * FROM a) - SELECT * FROM c; -~~~ - -To determine whether a modification will actually take place, use [`EXPLAIN`](explain.html) and check whether the desired data modification is part of the final plan for the overall query. diff --git a/src/current/_includes/v2.0/known-limitations/cte-with-view.md b/src/current/_includes/v2.0/known-limitations/cte-with-view.md deleted file mode 100644 index 1e82ab9ee2a..00000000000 --- a/src/current/_includes/v2.0/known-limitations/cte-with-view.md +++ /dev/null @@ -1,5 +0,0 @@ -{{site.data.alerts.callout_info}} -Resolved as of v2.1.
-{{site.data.alerts.end}} - -It is not yet possible to use a [common table expression](common-table-expressions.html) inside the selection query used to [define a view](create-view.html). diff --git a/src/current/_includes/v2.0/known-limitations/dump-cyclic-foreign-keys.md b/src/current/_includes/v2.0/known-limitations/dump-cyclic-foreign-keys.md deleted file mode 100644 index 4e3c43644ea..00000000000 --- a/src/current/_includes/v2.0/known-limitations/dump-cyclic-foreign-keys.md +++ /dev/null @@ -1 +0,0 @@ -The [`cockroach dump`](sql-dump.html) command will successfully create a dump file for a table with a [foreign key](foreign-key.html) reference to itself, or a set of tables with a cyclic foreign key dependency (e.g., a depends on b depends on a). That dump file, however, can only be executed after manually editing the output to remove the foreign key definitions from the `CREATE TABLE` statements and adding them as `ALTER TABLE ... ADD CONSTRAINT` statements after the `INSERT` statements. diff --git a/src/current/_includes/v2.0/known-limitations/node-map.md b/src/current/_includes/v2.0/known-limitations/node-map.md deleted file mode 100644 index 863f09c3ac2..00000000000 --- a/src/current/_includes/v2.0/known-limitations/node-map.md +++ /dev/null @@ -1,8 +0,0 @@ -You cannot assign latitude/longitude coordinates to localities if the components of your localities have the same name. For example, consider the following partial configuration: - -| Node | Region | Datacenter | -| ------ | ------ | ------ | -| Node1 | us-east | datacenter-1 | -| Node2 | us-west | datacenter-1 | - -In this case, if you try to set the latitude/longitude coordinates to the datacenter level of the localities, you will get the "primary key exists" error and the **Node Map** will not be displayed. You can, however, set the latitude/longitude coordinates to the region components of the localities, and the **Node Map** will be displayed. diff --git a/src/current/_includes/v2.0/known-limitations/partitioning-with-placeholders.md b/src/current/_includes/v2.0/known-limitations/partitioning-with-placeholders.md deleted file mode 100644 index b3c3345200d..00000000000 --- a/src/current/_includes/v2.0/known-limitations/partitioning-with-placeholders.md +++ /dev/null @@ -1 +0,0 @@ -When defining a [table partition](partitioning.html), either during table creation or table alteration, it is not possible to use placeholders in the `PARTITION BY` clause. diff --git a/src/current/_includes/v2.0/known-limitations/system-range-replication.md b/src/current/_includes/v2.0/known-limitations/system-range-replication.md deleted file mode 100644 index 4bfe40ba1a2..00000000000 --- a/src/current/_includes/v2.0/known-limitations/system-range-replication.md +++ /dev/null @@ -1 +0,0 @@ -Changes to the [`.default` cluster-wide replication zone](configure-replication-zones.html#edit-the-default-replication-zone) are not automatically applied to existing replication zones, including those for important internal data. For the cluster as a whole to remain available, the "system ranges" for this internal data must always retain a majority of their replicas. Therefore, if you increase the default replication factor, be sure to also [increase the replication factor for important internal data](configure-replication-zones.html#create-a-replication-zone-for-a-system-range).
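For example (a sketch using the v2.0 `cockroach zone` CLI; the 5-way replication factor and the `.meta` system range shown here are illustrative), you might mirror an increased default replication factor like this:

{% include copy-clipboard.html %}
~~~ shell
$ echo 'num_replicas: 5' | cockroach zone set .meta --insecure -f -
~~~

Repeat for the other system ranges described in the replication zones documentation.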
diff --git a/src/current/_includes/v2.0/metric-names.md b/src/current/_includes/v2.0/metric-names.md deleted file mode 100644 index 7eebed323d8..00000000000 --- a/src/current/_includes/v2.0/metric-names.md +++ /dev/null @@ -1,246 +0,0 @@ -Name | Help ------|----- -`addsstable.applications` | Number of SSTable ingestions applied (i.e., applied by Replicas) -`addsstable.copies` | Number of SSTable ingestions that required copying files during application -`addsstable.proposals` | Number of SSTable ingestions proposed (i.e., sent to Raft by lease holders) -`build.timestamp` | Build information -`capacity.available` | Available storage capacity -`capacity.reserved` | Capacity reserved for snapshots -`capacity.used` | Used storage capacity -`capacity` | Total storage capacity -`clock-offset.meannanos` | Mean clock offset with other nodes in nanoseconds -`clock-offset.stddevnanos` | Std dev clock offset with other nodes in nanoseconds -`compactor.compactingnanos` | Number of nanoseconds spent compacting ranges -`compactor.compactions.failure` | Number of failed compaction requests sent to the storage engine -`compactor.compactions.success` | Number of successful compaction requests sent to the storage engine -`compactor.suggestionbytes.compacted` | Number of logical bytes compacted from suggested compactions -`compactor.suggestionbytes.queued` | Number of logical bytes in suggested compactions in the queue -`compactor.suggestionbytes.skipped` | Number of logical bytes in suggested compactions which were not compacted -`distsender.batches.partial` | Number of partial batches processed -`distsender.batches` | Number of batches processed -`distsender.errors.notleaseholder` | Number of NotLeaseHolderErrors encountered -`distsender.rpc.sent.local` | Number of local RPCs sent -`distsender.rpc.sent.nextreplicaerror` | Number of RPCs sent due to per-replica errors -`distsender.rpc.sent` | Number of RPCs sent -`exec.error` | Number of batch KV requests that failed to execute on this node -`exec.latency` | Latency in nanoseconds of batch KV requests executed on this node -`exec.success` | Number of batch KV requests executed successfully on this node -`gcbytesage` | Cumulative age of non-live data in seconds -`gossip.bytes.received` | Number of received gossip bytes -`gossip.bytes.sent` | Number of sent gossip bytes -`gossip.connections.incoming` | Number of active incoming gossip connections -`gossip.connections.outgoing` | Number of active outgoing gossip connections -`gossip.connections.refused` | Number of refused incoming gossip connections -`gossip.infos.received` | Number of received gossip Info objects -`gossip.infos.sent` | Number of sent gossip Info objects -`intentage` | Cumulative age of intents in seconds -`intentbytes` | Number of bytes in intent KV pairs -`intentcount` | Count of intent keys -`keybytes` | Number of bytes taken up by keys -`keycount` | Count of all keys -`lastupdatenanos` | Time in nanoseconds since Unix epoch at which bytes/keys/intents metrics were last updated -`leases.epoch` | Number of replica leaseholders using epoch-based leases -`leases.error` | Number of failed lease requests -`leases.expiration` | Number of replica leaseholders using expiration-based leases -`leases.success` | Number of successful lease requests -`leases.transfers.error` | Number of failed lease transfers -`leases.transfers.success` | Number of successful lease transfers -`livebytes` | Number of bytes of live data (keys plus values) -`livecount` | Count of live keys -`liveness.epochincrements` | 
Number of times this node has incremented its liveness epoch -`liveness.heartbeatfailures` | Number of failed node liveness heartbeats from this node -`liveness.heartbeatlatency` | Node liveness heartbeat latency in nanoseconds -`liveness.heartbeatsuccesses` | Number of successful node liveness heartbeats from this node -`liveness.livenodes` | Number of live nodes in the cluster (will be 0 if this node is not itself live) -`node-id` | node ID with labels for advertised RPC and HTTP addresses -`queue.consistency.pending` | Number of pending replicas in the consistency checker queue -`queue.consistency.process.failure` | Number of replicas which failed processing in the consistency checker queue -`queue.consistency.process.success` | Number of replicas successfully processed by the consistency checker queue -`queue.consistency.processingnanos` | Nanoseconds spent processing replicas in the consistency checker queue -`queue.gc.info.abortspanconsidered` | Number of AbortSpan entries old enough to be considered for removal -`queue.gc.info.abortspangcnum` | Number of AbortSpan entries fit for removal -`queue.gc.info.abortspanscanned` | Number of transactions present in the AbortSpan scanned from the engine -`queue.gc.info.intentsconsidered` | Number of 'old' intents -`queue.gc.info.intenttxns` | Number of associated distinct transactions -`queue.gc.info.numkeysaffected` | Number of keys with GC'able data -`queue.gc.info.pushtxn` | Number of attempted pushes -`queue.gc.info.resolvesuccess` | Number of successful intent resolutions -`queue.gc.info.resolvetotal` | Number of attempted intent resolutions -`queue.gc.info.transactionspangcaborted` | Number of GC'able entries corresponding to aborted txns -`queue.gc.info.transactionspangccommitted` | Number of GC'able entries corresponding to committed txns -`queue.gc.info.transactionspangcpending` | Number of GC'able entries corresponding to pending txns -`queue.gc.info.transactionspanscanned` | Number of entries in transaction spans scanned from the engine -`queue.gc.pending` | Number of pending replicas in the GC queue -`queue.gc.process.failure` | Number of replicas which failed processing in the GC queue -`queue.gc.process.success` | Number of replicas successfully processed by the GC queue -`queue.gc.processingnanos` | Nanoseconds spent processing replicas in the GC queue -`queue.raftlog.pending` | Number of pending replicas in the Raft log queue -`queue.raftlog.process.failure` | Number of replicas which failed processing in the Raft log queue -`queue.raftlog.process.success` | Number of replicas successfully processed by the Raft log queue -`queue.raftlog.processingnanos` | Nanoseconds spent processing replicas in the Raft log queue -`queue.raftsnapshot.pending` | Number of pending replicas in the Raft repair queue -`queue.raftsnapshot.process.failure` | Number of replicas which failed processing in the Raft repair queue -`queue.raftsnapshot.process.success` | Number of replicas successfully processed by the Raft repair queue -`queue.raftsnapshot.processingnanos` | Nanoseconds spent processing replicas in the Raft repair queue -`queue.replicagc.pending` | Number of pending replicas in the replica GC queue -`queue.replicagc.process.failure` | Number of replicas which failed processing in the replica GC queue -`queue.replicagc.process.success` | Number of replicas successfully processed by the replica GC queue -`queue.replicagc.processingnanos` | Nanoseconds spent processing replicas in the replica GC queue -`queue.replicagc.removereplica` | 
Number of replica removals attempted by the replica gc queue -`queue.replicate.addreplica` | Number of replica additions attempted by the replicate queue -`queue.replicate.pending` | Number of pending replicas in the replicate queue -`queue.replicate.process.failure` | Number of replicas which failed processing in the replicate queue -`queue.replicate.process.success` | Number of replicas successfully processed by the replicate queue -`queue.replicate.processingnanos` | Nanoseconds spent processing replicas in the replicate queue -`queue.replicate.purgatory` | Number of replicas in the replicate queue's purgatory, awaiting allocation options -`queue.replicate.rebalancereplica` | Number of replica rebalancer-initiated additions attempted by the replicate queue -`queue.replicate.removedeadreplica` | Number of dead replica removals attempted by the replicate queue (typically in response to a node outage) -`queue.replicate.removereplica` | Number of replica removals attempted by the replicate queue (typically in response to a rebalancer-initiated addition) -`queue.replicate.transferlease` | Number of range lease transfers attempted by the replicate queue -`queue.split.pending` | Number of pending replicas in the split queue -`queue.split.process.failure` | Number of replicas which failed processing in the split queue -`queue.split.process.success` | Number of replicas successfully processed by the split queue -`queue.split.processingnanos` | Nanoseconds spent processing replicas in the split queue -`queue.tsmaintenance.pending` | Number of pending replicas in the time series maintenance queue -`queue.tsmaintenance.process.failure` | Number of replicas which failed processing in the time series maintenance queue -`queue.tsmaintenance.process.success` | Number of replicas successfully processed by the time series maintenance queue -`queue.tsmaintenance.processingnanos` | Nanoseconds spent processing replicas in the time series maintenance queue -`raft.commandsapplied` | Count of Raft commands applied -`raft.enqueued.pending` | Number of pending outgoing messages in the Raft Transport queue -`raft.heartbeats.pending` | Number of pending heartbeats and responses waiting to be coalesced -`raft.process.commandcommit.latency` | Latency histogram in nanoseconds for committing Raft commands -`raft.process.logcommit.latency` | Latency histogram in nanoseconds for committing Raft log entries -`raft.process.tickingnanos` | Nanoseconds spent in store.processRaft() processing replica.Tick() -`raft.process.workingnanos` | Nanoseconds spent in store.processRaft() working -`raft.rcvd.app` | Number of MsgApp messages received by this store -`raft.rcvd.appresp` | Number of MsgAppResp messages received by this store -`raft.rcvd.dropped` | Number of dropped incoming Raft messages -`raft.rcvd.heartbeat` | Number of (coalesced, if enabled) MsgHeartbeat messages received by this store -`raft.rcvd.heartbeatresp` | Number of (coalesced, if enabled) MsgHeartbeatResp messages received by this store -`raft.rcvd.prevote` | Number of MsgPreVote messages received by this store -`raft.rcvd.prevoteresp` | Number of MsgPreVoteResp messages received by this store -`raft.rcvd.prop` | Number of MsgProp messages received by this store -`raft.rcvd.snap` | Number of MsgSnap messages received by this store -`raft.rcvd.timeoutnow` | Number of MsgTimeoutNow messages received by this store -`raft.rcvd.transferleader` | Number of MsgTransferLeader messages received by this store -`raft.rcvd.vote` | Number of MsgVote messages received by 
this store -`raft.rcvd.voteresp` | Number of MsgVoteResp messages received by this store -`raft.ticks` | Number of Raft ticks queued -`raftlog.behind` | Number of Raft log entries followers on other stores are behind -`raftlog.truncated` | Number of Raft log entries truncated -`range.adds` | Number of range additions -`range.raftleadertransfers` | Number of raft leader transfers -`range.removes` | Number of range removals -`range.snapshots.generated` | Number of generated snapshots -`range.snapshots.normal-applied` | Number of applied snapshots -`range.snapshots.preemptive-applied` | Number of applied preemptive snapshots -`range.splits` | Number of range splits -`ranges.unavailable` | Number of ranges with fewer live replicas than needed for quorum -`ranges.underreplicated` | Number of ranges with fewer live replicas than the replication target -`ranges` | Number of ranges -`rebalancing.writespersecond` | Number of keys written (i.e., applied by raft) per second to the store, averaged over a large time period as used in rebalancing decisions -`replicas.commandqueue.combinedqueuesize` | Number of commands in all CommandQueues combined -`replicas.commandqueue.combinedreadcount` | Number of read-only commands in all CommandQueues combined -`replicas.commandqueue.combinedwritecount` | Number of read-write commands in all CommandQueues combined -`replicas.commandqueue.maxoverlaps` | Largest number of overlapping commands seen when adding to any CommandQueue -`replicas.commandqueue.maxreadcount` | Largest number of read-only commands in any CommandQueue -`replicas.commandqueue.maxsize` | Largest number of commands in any CommandQueue -`replicas.commandqueue.maxtreesize` | Largest number of intervals in any CommandQueue's interval tree -`replicas.commandqueue.maxwritecount` | Largest number of read-write commands in any CommandQueue -`replicas.leaders_not_leaseholders` | Number of replicas that are Raft leaders whose range lease is held by another store -`replicas.leaders` | Number of raft leaders -`replicas.leaseholders` | Number of lease holders -`replicas.quiescent` | Number of quiesced replicas -`replicas.reserved` | Number of replicas reserved for snapshots -`replicas` | Number of replicas -`requests.backpressure.split` | Number of backpressured writes waiting on a Range split -`requests.slow.commandqueue` | Number of requests that have been stuck for a long time in the command queue -`requests.slow.distsender` | Number of requests that have been stuck for a long time in the dist sender -`requests.slow.lease` | Number of requests that have been stuck for a long time acquiring a lease -`requests.slow.raft` | Number of requests that have been stuck for a long time in raft -`rocksdb.block.cache.hits` | Count of block cache hits -`rocksdb.block.cache.misses` | Count of block cache misses -`rocksdb.block.cache.pinned-usage` | Bytes pinned by the block cache -`rocksdb.block.cache.usage` | Bytes used by the block cache -`rocksdb.bloom.filter.prefix.checked` | Number of times the bloom filter was checked -`rocksdb.bloom.filter.prefix.useful` | Number of times the bloom filter helped avoid iterator creation -`rocksdb.compactions` | Number of table compactions -`rocksdb.flushes` | Number of table flushes -`rocksdb.memtable.total-size` | Current size of memtable in bytes -`rocksdb.num-sstables` | Number of rocksdb SSTables -`rocksdb.read-amplification` | Number of disk reads per query -`rocksdb.table-readers-mem-estimate` | Memory used by index and filter blocks -`round-trip-latency` | Distribution of 
round-trip latencies with other nodes in nanoseconds -`security.certificate.expiration.ca` | Expiration timestamp in seconds since Unix epoch for the CA certificate. 0 means no certificate or error. -`security.certificate.expiration.node` | Expiration timestamp in seconds since Unix epoch for the node certificate. 0 means no certificate or error. -`sql.bytesin` | Number of sql bytes received -`sql.bytesout` | Number of sql bytes sent -`sql.conns` | Number of active sql connections -`sql.ddl.count` | Number of SQL DDL statements -`sql.delete.count` | Number of SQL DELETE statements -`sql.distsql.exec.latency` | Latency in nanoseconds of DistSQL statement execution -`sql.distsql.flows.active` | Number of distributed SQL flows currently active -`sql.distsql.flows.total` | Number of distributed SQL flows executed -`sql.distsql.queries.active` | Number of distributed SQL queries currently active -`sql.distsql.queries.total` | Number of distributed SQL queries executed -`sql.distsql.select.count` | Number of DistSQL SELECT statements -`sql.distsql.service.latency` | Latency in nanoseconds of DistSQL request execution -`sql.exec.latency` | Latency in nanoseconds of SQL statement execution -`sql.insert.count` | Number of SQL INSERT statements -`sql.mem.current` | Current sql statement memory usage -`sql.mem.distsql.current` | Current sql statement memory usage for distsql -`sql.mem.distsql.max` | Memory usage per sql statement for distsql -`sql.mem.max` | Memory usage per sql statement -`sql.mem.session.current` | Current sql session memory usage -`sql.mem.session.max` | Memory usage per sql session -`sql.mem.txn.current` | Current sql transaction memory usage -`sql.mem.txn.max` | Memory usage per sql transaction -`sql.misc.count` | Number of other SQL statements -`sql.query.count` | Number of SQL queries -`sql.select.count` | Number of SQL SELECT statements -`sql.service.latency` | Latency in nanoseconds of SQL request execution -`sql.txn.abort.count` | Number of SQL transaction ABORT statements -`sql.txn.begin.count` | Number of SQL transaction BEGIN statements -`sql.txn.commit.count` | Number of SQL transaction COMMIT statements -`sql.txn.rollback.count` | Number of SQL transaction ROLLBACK statements -`sql.update.count` | Number of SQL UPDATE statements -`sys.cgo.allocbytes` | Current bytes of memory allocated by cgo -`sys.cgo.totalbytes` | Total bytes of memory allocated by cgo, but not released -`sys.cgocalls` | Total number of cgo calls -`sys.cpu.sys.ns` | Total system cpu time in nanoseconds -`sys.cpu.sys.percent` | Current system cpu percentage -`sys.cpu.user.ns` | Total user cpu time in nanoseconds -`sys.cpu.user.percent` | Current user cpu percentage -`sys.fd.open` | Process open file descriptors -`sys.fd.softlimit` | Process open FD soft limit -`sys.gc.count` | Total number of GC runs -`sys.gc.pause.ns` | Total GC pause in nanoseconds -`sys.gc.pause.percent` | Current GC pause percentage -`sys.go.allocbytes` | Current bytes of memory allocated by go -`sys.go.totalbytes` | Total bytes of memory allocated by go, but not released -`sys.goroutines` | Current number of goroutines -`sys.rss` | Current process RSS -`sys.uptime` | Process uptime in seconds -`sysbytes` | Number of bytes in system KV pairs -`syscount` | Count of system KV pairs -`timeseries.write.bytes` | Total size in bytes of metric samples written to disk -`timeseries.write.errors` | Total errors encountered while attempting to write metrics to disk -`timeseries.write.samples` | Total number of metric samples written to disk
-`totalbytes` | Total number of bytes taken up by keys and values including non-live data -`tscache.skl.read.pages` | Number of pages in the read timestamp cache -`tscache.skl.read.rotations` | Number of page rotations in the read timestamp cache -`tscache.skl.write.pages` | Number of pages in the write timestamp cache -`tscache.skl.write.rotations` | Number of page rotations in the write timestamp cache -`txn.abandons` | Number of abandoned KV transactions -`txn.aborts` | Number of aborted KV transactions -`txn.autoretries` | Number of automatic retries to avoid serializable restarts -`txn.commits1PC` | Number of committed one-phase KV transactions -`txn.commits` | Number of committed KV transactions (including 1PC) -`txn.durations` | KV transaction durations in nanoseconds -`txn.restarts.deleterange` | Number of restarts due to a forwarded commit timestamp and a DeleteRange command -`txn.restarts.possiblereplay` | Number of restarts due to possible replays of command batches at the storage layer -`txn.restarts.serializable` | Number of restarts due to a forwarded commit timestamp and isolation=SERIALIZABLE -`txn.restarts.writetooold` | Number of restarts due to a concurrent writer committing first -`txn.restarts` | Number of restarted KV transactions -`valbytes` | Number of bytes taken up by values -`valcount` | Count of all values diff --git a/src/current/_includes/v2.0/misc/available-capacity-metric.md b/src/current/_includes/v2.0/misc/available-capacity-metric.md deleted file mode 100644 index 11511de2d37..00000000000 --- a/src/current/_includes/v2.0/misc/available-capacity-metric.md +++ /dev/null @@ -1 +0,0 @@ -If you are running multiple nodes on a single machine (not recommended in production) and didn't specify the maximum allocated storage capacity for each node using the [`--store`](start-a-node.html#store) flag, the capacity metrics in the Admin UI are incorrect. This is because when multiple nodes are running on a single machine, the machine's hard disk is treated as an available store for each node, while in reality, only one hard disk is available for all nodes. The total available capacity is then calculated as the hard disk size multiplied by the number of nodes on the machine. diff --git a/src/current/_includes/v2.0/misc/aws-locations.md b/src/current/_includes/v2.0/misc/aws-locations.md deleted file mode 100644 index 8b073c1f230..00000000000 --- a/src/current/_includes/v2.0/misc/aws-locations.md +++ /dev/null @@ -1,18 +0,0 @@ -| Location | SQL Statement | -| ------ | ------ | -| US East (N. Virginia) | `INSERT into system.locations VALUES ('region', 'us-east-1', 37.478397, -76.453077)`| -| US East (Ohio) | `INSERT into system.locations VALUES ('region', 'us-east-2', 40.417287, -76.453077)` | -| US West (N. 
California) | `INSERT into system.locations VALUES ('region', 'us-west-1', 38.837522, -120.895824)` | -| US West (Oregon) | `INSERT into system.locations VALUES ('region', 'us-west-2', 43.804133, -120.554201)` | -| Canada (Central) | `INSERT into system.locations VALUES ('region', 'ca-central-1', 56.130366, -106.346771)` | -| EU (Frankfurt) | `INSERT into system.locations VALUES ('region', 'eu-central-1', 50.110922, 8.682127)` | -| EU (Ireland) | `INSERT into system.locations VALUES ('region', 'eu-west-1', 53.142367, -7.692054)` | -| EU (London) | `INSERT into system.locations VALUES ('region', 'eu-west-2', 51.507351, -0.127758)` | -| EU (Paris) | `INSERT into system.locations VALUES ('region', 'eu-west-3', 48.856614, 2.352222)` | -| Asia Pacific (Tokyo) | `INSERT into system.locations VALUES ('region', 'ap-northeast-1', 35.689487, 139.691706)` | -| Asia Pacific (Seoul) | `INSERT into system.locations VALUES ('region', 'ap-northeast-2', 37.566535, 126.977969)` | -| Asia Pacific (Osaka-Local) | `INSERT into system.locations VALUES ('region', 'ap-northeast-3', 34.693738, 135.502165)` | -| Asia Pacific (Singapore) | `INSERT into system.locations VALUES ('region', 'ap-southeast-1', 1.352083, 103.819836)` | -| Asia Pacific (Sydney) | `INSERT into system.locations VALUES ('region', 'ap-southeast-2', -33.86882, 151.209296)` | -| Asia Pacific (Mumbai) | `INSERT into system.locations VALUES ('region', 'ap-south-1', 19.075984, 72.877656)` | -| South America (São Paulo) | `INSERT into system.locations VALUES ('region', 'sa-east-1', -23.55052, -46.633309)` | diff --git a/src/current/_includes/v2.0/misc/azure-locations.md b/src/current/_includes/v2.0/misc/azure-locations.md deleted file mode 100644 index 7119ff8b7cb..00000000000 --- a/src/current/_includes/v2.0/misc/azure-locations.md +++ /dev/null @@ -1,30 +0,0 @@ -| Location | SQL Statement | -| -------- | ------------- | -| eastasia (East Asia) | `INSERT into system.locations VALUES ('region', 'eastasia', 22.267, 114.188)` | -| southeastasia (Southeast Asia) | `INSERT into system.locations VALUES ('region', 'southeastasia', 1.283, 103.833)` | -| centralus (Central US) | `INSERT into system.locations VALUES ('region', 'centralus', 41.5908, -93.6208)` | -| eastus (East US) | `INSERT into system.locations VALUES ('region', 'eastus', 37.3719, -79.8164)` | -| eastus2 (East US 2) | `INSERT into system.locations VALUES ('region', 'eastus2', 36.6681, -78.3889)` | -| westus (West US) | `INSERT into system.locations VALUES ('region', 'westus', 37.783, -122.417)` | -| northcentralus (North Central US) | `INSERT into system.locations VALUES ('region', 'northcentralus', 41.8819, -87.6278)` | -| southcentralus (South Central US) | `INSERT into system.locations VALUES ('region', 'southcentralus', 29.4167, -98.5)` | -| northeurope (North Europe) | `INSERT into system.locations VALUES ('region', 'northeurope', 53.3478, -6.2597)` | -| westeurope (West Europe) | `INSERT into system.locations VALUES ('region', 'westeurope', 52.3667, 4.9)` | -| japanwest (Japan West) | `INSERT into system.locations VALUES ('region', 'japanwest', 34.6939, 135.5022)` | -| japaneast (Japan East) | `INSERT into system.locations VALUES ('region', 'japaneast', 35.68, 139.77)` | -| brazilsouth (Brazil South) | `INSERT into system.locations VALUES ('region', 'brazilsouth', -23.55, -46.633)` | -| australiaeast (Australia East) | `INSERT into system.locations VALUES ('region', 'australiaeast', -33.86, 151.2094)` | -| australiasoutheast (Australia Southeast) | `INSERT into system.locations VALUES 
('region', 'australiasoutheast', -37.8136, 144.9631)` | -| southindia (South India) | `INSERT into system.locations VALUES ('region', 'southindia', 12.9822, 80.1636)` | -| centralindia (Central India) | `INSERT into system.locations VALUES ('region', 'centralindia', 18.5822, 73.9197)` | -| westindia (West India) | `INSERT into system.locations VALUES ('region', 'westindia', 19.088, 72.868)` | -| canadacentral (Canada Central) | `INSERT into system.locations VALUES ('region', 'canadacentral', 43.653, -79.383)` | -| canadaeast (Canada East) | `INSERT into system.locations VALUES ('region', 'canadaeast', 46.817, -71.217)` | -| uksouth (UK South) | `INSERT into system.locations VALUES ('region', 'uksouth', 50.941, -0.799)` | -| ukwest (UK West) | `INSERT into system.locations VALUES ('region', 'ukwest', 53.427, -3.084)` | -| westcentralus (West Central US) | `INSERT into system.locations VALUES ('region', 'westcentralus', 40.890, -110.234)` | -| westus2 (West US 2) | `INSERT into system.locations VALUES ('region', 'westus2', 47.233, -119.852)` | -| koreacentral (Korea Central) | `INSERT into system.locations VALUES ('region', 'koreacentral', 37.5665, 126.9780)` | -| koreasouth (Korea South) | `INSERT into system.locations VALUES ('region', 'koreasouth', 35.1796, 129.0756)` | -| francecentral (France Central) | `INSERT into system.locations VALUES ('region', 'francecentral', 46.3772, 2.3730)` | -| francesouth (France South) | `INSERT into system.locations VALUES ('region', 'francesouth', 43.8345, 2.1972)` | diff --git a/src/current/_includes/v2.0/misc/basic-terms.md b/src/current/_includes/v2.0/misc/basic-terms.md deleted file mode 100644 index cd067ce00f0..00000000000 --- a/src/current/_includes/v2.0/misc/basic-terms.md +++ /dev/null @@ -1,9 +0,0 @@ -Term | Definition ------|------------ -**Cluster** | Your CockroachDB deployment, which acts as a single logical application. -**Node** | An individual machine running CockroachDB. Many nodes join together to create your cluster. -**Range** | CockroachDB stores all user data (tables, indexes, etc.) and almost all system data in a giant sorted map of key-value pairs. This keyspace is divided into "ranges", contiguous chunks of the keyspace, so that every key can always be found in a single range.

From a SQL perspective, a table and its secondary indexes initially map to a single range, where each key-value pair in the range represents a single row in the table (also called the primary index because the table is sorted by the primary key) or a single row in a secondary index. As soon as a range reaches 64 MiB in size, it splits into two ranges. This process continues as the table and its indexes continue growing. -**Replica** | CockroachDB replicates each range (3 times by default) and stores each replica on a different node. -**Leaseholder** | For each range, one of the replicas holds the "range lease". This replica, referred to as the "leaseholder", is the one that receives and coordinates all read and write requests for the range.

Unlike writes, read requests access the leaseholder and send the results to the client without needing to coordinate with any of the other range replicas. This reduces the network round trips involved and is possible because the leaseholder is guaranteed to be up-to-date due to the fact that all write requests also go to the leaseholder. -**Raft Leader** | For each range, one of the replicas is the "leader" for write requests. Via the [Raft consensus protocol](/docs/v2.0/architecture/replication-layer.html#raft), this replica ensures that a majority of replicas (the leader and enough followers) agree, based on their Raft logs, before committing the write. The Raft leader is almost always the same replica as the leaseholder. -**Raft Log** | For each range, a time-ordered log of writes to the range that its replicas have agreed on. This log exists on-disk with each replica and is the range's source of truth for consistent replication. diff --git a/src/current/_includes/v2.0/misc/beta-warning.md b/src/current/_includes/v2.0/misc/beta-warning.md deleted file mode 100644 index 505ce8b03dd..00000000000 --- a/src/current/_includes/v2.0/misc/beta-warning.md +++ /dev/null @@ -1 +0,0 @@ -{{site.data.alerts.callout_danger}} This is a beta feature. It is currently undergoing continued testing. Please file a Github issue with us if you identify a bug. {{site.data.alerts.end}} diff --git a/src/current/_includes/v2.0/misc/diagnostics-callout.html b/src/current/_includes/v2.0/misc/diagnostics-callout.html deleted file mode 100644 index a969a8cf152..00000000000 --- a/src/current/_includes/v2.0/misc/diagnostics-callout.html +++ /dev/null @@ -1 +0,0 @@ -{{site.data.alerts.callout_info}}By default, each node of a CockroachDB cluster periodically shares anonymous usage details with Cockroach Labs. For an explanation of the details that get shared and how to opt-out of reporting, see Diagnostics Reporting.{{site.data.alerts.end}} diff --git a/src/current/_includes/v2.0/misc/experimental-warning.md b/src/current/_includes/v2.0/misc/experimental-warning.md deleted file mode 100644 index c6f3283bc8a..00000000000 --- a/src/current/_includes/v2.0/misc/experimental-warning.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_danger}} -This is an experimental feature. The interface and output are subject to change. -{{site.data.alerts.end}} diff --git a/src/current/_includes/v2.0/misc/external-urls.md b/src/current/_includes/v2.0/misc/external-urls.md deleted file mode 100644 index ae91b9b76ef..00000000000 --- a/src/current/_includes/v2.0/misc/external-urls.md +++ /dev/null @@ -1,24 +0,0 @@ -~~~ -[scheme]://[host]/[path]?[parameters] -~~~ - -| Location | scheme | host | parameters | -|----------|--------|------|------------| -| Amazon S3 | `s3` | Bucket name | `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY` | -| Azure | `azure` | Container name | `AZURE_ACCOUNT_KEY`, `AZURE_ACCOUNT_NAME` | -| Google Cloud [1](#considerations) | `gs` | Bucket name | `AUTH` (optional): can be `default` or `implicit` | -| HTTP [2](#considerations) | `http` | Remote host | N/A | -| NFS/Local [3](#considerations) | `nodelocal` | File system location | N/A | -| S3-compatible services [4](#considerations) | `s3` | Bucket name | `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_REGION`, `AWS_ENDPOINT` | - -#### Considerations - -- 1 If the `AUTH` parameter is `implicit`, all GCS connections use Google's [default authentication strategy](https://cloud.google.com/docs/authentication/production#providing_credentials_to_your_application). 
If the `AUTH` parameter is `default`, the `cloudstorage.gs.default.key` [cluster setting](cluster-settings.html) must be set to the contents of a [service account file](https://cloud.google.com/docs/authentication/production#obtaining_and_providing_service_account_credentials_manually) which will be used during authentication. If the `AUTH` parameter is not specified, the `cloudstorage.gs.default.key` setting will be used if it is non-empty, otherwise the `implicit` behavior is used. - -- 2 You can easily create your own HTTP server with [Caddy or nginx](create-a-file-server.html). A custom root CA can be appended to the system's default CAs by setting the `cloudstorage.http.custom_ca` [cluster setting](cluster-settings.html), which will be used when verifying certificates from HTTPS URLs. - -- 3 The file system backup location on the NFS drive is relative to the path specified by the `--external-io-dir` flag set while [starting the node](start-a-node.html). If the flag is set to `disabled`, then imports from local directories and NFS drives are disabled. - -- 4 A custom root CA can be appended to the system's default CAs by setting the `cloudstorage.http.custom_ca` [cluster setting](cluster-settings.html), which will be used when verifying certificates from an S3-compatible service. - -- The location parameters often contain special characters that need to be URI-encoded. Use Javascript's [encodeURIComponent](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/encodeURIComponent) function or Go language's [url.QueryEscape](https://golang.org/pkg/net/url/#QueryEscape) function to URI-encode the parameters. Other languages provide similar functions to URI-encode special characters. diff --git a/src/current/_includes/v2.0/misc/gce-locations.md b/src/current/_includes/v2.0/misc/gce-locations.md deleted file mode 100644 index 22ba06c81e2..00000000000 --- a/src/current/_includes/v2.0/misc/gce-locations.md +++ /dev/null @@ -1,17 +0,0 @@ -| Location | SQL Statement | -| ------ | ------ | -| us-east1 (South Carolina) | `INSERT into system.locations VALUES ('region', 'us-east1', 33.836082, -81.163727)` | -| us-east4 (N. 
Virginia) | `INSERT into system.locations VALUES ('region', 'us-east4', 37.478397, -76.453077)` | -| us-central1 (Iowa) | `INSERT into system.locations VALUES ('region', 'us-central1', 42.032974, -93.581543)` | -| us-west1 (Oregon) | `INSERT into system.locations VALUES ('region', 'us-west1', 43.804133, -120.554201)` | -| northamerica-northeast1 (Montreal) | `INSERT into system.locations VALUES ('region', 'northamerica-northeast1', 56.130366, -106.346771)` | -| europe-west1 (Belgium) | `INSERT into system.locations VALUES ('region', 'europe-west1', 50.44816, 3.81886)` | -| europe-west3 (Frankfurt) | `INSERT into system.locations VALUES ('region', 'europe-west3', 50.110922, 8.682127)` | -| europe-west4 (Netherlands) | `INSERT into system.locations VALUES ('region', 'europe-west4', 53.4386, 6.8355)` | -| europe-west2 (London) | `INSERT into system.locations VALUES ('region', 'europe-west2', 51.507351, -0.127758)` | -| asia-east1 (Taiwan) | `INSERT into system.locations VALUES ('region', 'asia-east1', 24.0717, 120.5624)` | -| asia-northeast1 (Tokyo) | `INSERT into system.locations VALUES ('region', 'asia-northeast1', 35.689487, 139.691706)` | -| asia-southeast1 (Singapore) | `INSERT into system.locations VALUES ('region', 'asia-southeast1', 1.352083, 103.819836)` | -| australia-southeast1 (Sydney) | `INSERT into system.locations VALUES ('region', 'australia-southeast1', -33.86882, 151.209296)` | -| asia-south1 (Mumbai) | `INSERT into system.locations VALUES ('region', 'asia-south1', 19.075984, 72.877656)` | -| southamerica-east1 (São Paulo) | `INSERT into system.locations VALUES ('region', 'southamerica-east1', -23.55052, -46.633309)` | diff --git a/src/current/_includes/v2.0/misc/linux-binary-prereqs.md b/src/current/_includes/v2.0/misc/linux-binary-prereqs.md deleted file mode 100644 index b470603bb66..00000000000 --- a/src/current/_includes/v2.0/misc/linux-binary-prereqs.md +++ /dev/null @@ -1 +0,0 @@ -

The CockroachDB binary for Linux requires glibc and libncurses, which are found by default on nearly all Linux distributions, with Alpine as the notable exception.
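A quick way to confirm that a host has these libraries is to inspect the binary's dynamic dependencies with the standard `ldd` tool (a minimal sketch; the `./cockroach` path is an assumption about where you extracted the binary):

~~~ shell
# List the binary's shared-library dependencies and filter for the C
# library and the terminal-info libraries it loads at runtime.
$ ldd ./cockroach | grep -E 'libc|tinfo|ncurses'
~~~

If any of these resolve to "not found", install the distribution's glibc/ncurses packages before starting the node.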

diff --git a/src/current/_includes/v2.0/misc/logging-flags.md b/src/current/_includes/v2.0/misc/logging-flags.md deleted file mode 100644 index 06af86228ee..00000000000 --- a/src/current/_includes/v2.0/misc/logging-flags.md +++ /dev/null @@ -1,9 +0,0 @@ -Flag | Description ------|------------ -`--log-dir` | Enable logging to files and write logs to the specified directory.

Setting `--log-dir` to a blank directory (`--log-dir=`) disables logging to files. Do not use `--log-dir=""`; this creates a new directory named `""` and stores log files in that directory. -`--log-dir-max-size` | After the log directory reaches the specified size, delete the oldest log file. The flag's argument takes standard file sizes, such as `--log-dir-max-size=1GiB`.

**Default**: 100MiB -`--log-file-max-size` | After logs reach the specified size, begin writing logs to a new file. The flag's argument takes standard file sizes, such as `--log-file-max-size=2MiB`.

**Default**: 10MiB -`--log-file-verbosity` | Write messages to log files only if they are at or above the specified [severity level](debug-and-error-logs.html#severity-levels), such as `--log-file-verbosity=WARNING`. **Requires** logging to files.

**Default**: `INFO` -`--logtostderr` | Enable logging to `stderr` for messages at or above the specified [severity level](debug-and-error-logs.html#severity-levels), such as `--logtostderr=ERROR`.

If you use this flag without specifying the severity level (e.g., `cockroach start --logtostderr`), it prints messages of *all* severities to `stderr`.

Setting `--logtostderr=NONE` disables logging to `stderr`. -`--no-color` | Do not colorize `stderr`. Possible values: `true` or `false`.

When set to `false`, messages logged to `stderr` are colorized based on [severity level](debug-and-error-logs.html#severity-levels).

**Default:** `false` -`--sql-audit-dir` | New in v2.0: If non-empty, create a SQL audit log in this directory. By default, SQL audit logs are written in the same directory as the other logs generated by CockroachDB. For more information, see [SQL Audit Logging](sql-audit-logging.html). diff --git a/src/current/_includes/v2.0/misc/remove-user-callout.html b/src/current/_includes/v2.0/misc/remove-user-callout.html deleted file mode 100644 index 086d27509fc..00000000000 --- a/src/current/_includes/v2.0/misc/remove-user-callout.html +++ /dev/null @@ -1 +0,0 @@ -Removing a user does not remove that user's privileges. Therefore, to prevent a future user with an identical username from inheriting an old user's privileges, it's important to revoke a user's privileges before or after removing the user. diff --git a/src/current/_includes/v2.0/misc/schema-change-view-job.md b/src/current/_includes/v2.0/misc/schema-change-view-job.md deleted file mode 100644 index 1e9b4a7444e..00000000000 --- a/src/current/_includes/v2.0/misc/schema-change-view-job.md +++ /dev/null @@ -1 +0,0 @@ -Whenever you initiate a schema change, CockroachDB registers it as a job, which you can view with [`SHOW JOBS`](show-jobs.html). diff --git a/src/current/_includes/v2.0/orchestration/initialize-cluster-insecure.md b/src/current/_includes/v2.0/orchestration/initialize-cluster-insecure.md deleted file mode 100644 index 1b374e6dbf9..00000000000 --- a/src/current/_includes/v2.0/orchestration/initialize-cluster-insecure.md +++ /dev/null @@ -1,40 +0,0 @@ -1. Use our [`cluster-init.yaml`](https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cluster-init.yaml) file to perform a one-time initialization that joins the nodes into a single cluster: - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl create -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cluster-init.yaml - ~~~ - - ~~~ - job "cluster-init" created - ~~~ - -2. Confirm that cluster initialization has completed successfully. The job - should be considered successful and the CockroachDB pods should soon be - considered `Ready`: - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl get job cluster-init - ~~~ - - ~~~ - NAME DESIRED SUCCESSFUL AGE - cluster-init 1 1 2m - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl get pods - ~~~ - - ~~~ - NAME READY STATUS RESTARTS AGE - cockroachdb-0 1/1 Running 0 3m - cockroachdb-1 1/1 Running 0 3m - cockroachdb-2 1/1 Running 0 3m - ~~~ - -{{site.data.alerts.callout_success}} -The StatefulSet configuration sets all CockroachDB nodes to log to `stderr`, so if you ever need access to a pod/node's logs to troubleshoot, use `kubectl logs ` rather than checking the log on the persistent volume. -{{site.data.alerts.end}} diff --git a/src/current/_includes/v2.0/orchestration/kubernetes-limitations.md b/src/current/_includes/v2.0/orchestration/kubernetes-limitations.md deleted file mode 100644 index 00c6c0fdd21..00000000000 --- a/src/current/_includes/v2.0/orchestration/kubernetes-limitations.md +++ /dev/null @@ -1,7 +0,0 @@ -#### Kubernetes version - -Kubernetes 1.18 or higher is required in order to use our most up-to-date configuration files. Earlier Kubernetes releases do not support some of the options used in our configuration files. 
If you need to run on an older version of Kubernetes, we have kept around configuration files that work on older Kubernetes releases in the versioned subdirectories of [https://github.com/cockroachdb/cockroach/tree/master/cloud/kubernetes](https://github.com/cockroachdb/cockroach/tree/master/cloud/kubernetes) (e.g., [v1.7](https://github.com/cockroachdb/cockroach/tree/master/cloud/kubernetes/v1.7)). - -#### Storage - -At this time, orchestrations of CockroachDB with Kubernetes use external persistent volumes that are often replicated by the provider. Because CockroachDB already replicates data automatically, this additional layer of replication is unnecessary and can negatively impact performance. High-performance use cases on a private Kubernetes cluster may want to consider using [local volumes](https://kubernetes.io/docs/concepts/storage/volumes/#local). diff --git a/src/current/_includes/v2.0/orchestration/kubernetes-prometheus-alertmanager.md b/src/current/_includes/v2.0/orchestration/kubernetes-prometheus-alertmanager.md deleted file mode 100644 index bcea917fd34..00000000000 --- a/src/current/_includes/v2.0/orchestration/kubernetes-prometheus-alertmanager.md +++ /dev/null @@ -1,189 +0,0 @@ -Despite CockroachDB's various [built-in safeguards against failure](high-availability.html), it is critical to actively monitor the overall health and performance of a cluster running in production and to create alerting rules that promptly send notifications when there are events that require investigation or intervention. - -### Configure Prometheus - -Every node of a CockroachDB cluster exports granular timeseries metrics formatted for easy integration with [Prometheus](https://prometheus.io/), an open source tool for storing, aggregating, and querying timeseries data. This section shows you how to orchestrate Prometheus as part of your Kubernetes cluster and pull these metrics into Prometheus for external monitoring. - -This guidance is based on [CoreOS's Prometheus Operator](https://github.com/coreos/prometheus-operator/blob/master/Documentation/user-guides/getting-started.md), which allows a Prometheus instance to be managed using built-in Kubernetes concepts. - -
-{{site.data.alerts.callout_info}} -Before starting, make sure the email address associated with your Google Cloud account is part of the `cluster-admin` RBAC group, as shown in [Step 1. Start Kubernetes](#step-1-start-kubernetes). -{{site.data.alerts.end}} -
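As a quick sanity check before wiring up the Operator, you can confirm that a node is exporting Prometheus-format metrics by fetching its `_status/vars` endpoint directly (a sketch, assuming the default pod name `cockroachdb-0` and the default HTTP port 8080):

~~~ shell
# Port-forward to one CockroachDB pod, then print the first raw
# Prometheus-format metrics from the node's status endpoint.
$ kubectl port-forward cockroachdb-0 8080 &
$ curl -s http://localhost:8080/_status/vars | head -n 10
~~~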
- -1. From your local workstation, edit the `cockroachdb` service to add the `prometheus: cockroachdb` label: - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl label svc cockroachdb prometheus=cockroachdb - ~~~ - - ~~~ - service "cockroachdb" labeled - ~~~ - - This ensures that there is a Prometheus job and monitoring data only for the `cockroachdb` service, not for the `cockroachdb-public` service. - -2. Install [CoreOS's Prometheus Operator](https://raw.githubusercontent.com/coreos/prometheus-operator/release-0.20/bundle.yaml): - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl apply -f https://raw.githubusercontent.com/coreos/prometheus-operator/release-0.20/bundle.yaml - ~~~ - - ~~~ - clusterrolebinding "prometheus-operator" created - clusterrole "prometheus-operator" created - serviceaccount "prometheus-operator" created - deployment "prometheus-operator" created - ~~~ - -3. Confirm that the `prometheus-operator` has started: - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl get deploy prometheus-operator - ~~~ - - ~~~ - NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE - prometheus-operator 1 1 1 1 1m - ~~~ - -4. Use our [`prometheus.yaml`](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/prometheus/prometheus.yaml) file to create the various objects necessary to run a Prometheus instance: - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl apply -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/prometheus/prometheus.yaml - ~~~ - - ~~~ - clusterrole "prometheus" created - clusterrolebinding "prometheus" created - servicemonitor "cockroachdb" created - prometheus "cockroachdb" created - ~~~ - -5. Access the Prometheus UI locally and verify that CockroachDB is feeding data into Prometheus: - - 1. Port-forward from your local machine to the pod running Prometheus: - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl port-forward prometheus-cockroachdb-0 9090 - ~~~ - - 2. Go to http://localhost:9090 in your browser. - - 3. To verify that each CockroachDB node is connected to Prometheus, go to **Status > Targets**. The screen should look like this: - - Prometheus targets - - 4. To verify that data is being collected, go to **Graph**, enter the `sys_uptime` variable in the field, click **Execute**, and then click the **Graph** tab. The screen should look like this: - - Prometheus graph - - {{site.data.alerts.callout_success}} - Prometheus auto-completes CockroachDB time series metrics for you, but if you want to see a full listing, with descriptions, port-forward as described in {% if page.secure == true %}[Access the Admin UI](#step-6-access-the-admin-ui){% else %}[Access the Admin UI](#step-5-access-the-admin-ui){% endif %} and then point your browser to http://localhost:8080/_status/vars. - - For more details on using the Prometheus UI, see their [official documentation](https://prometheus.io/docs/introduction/getting_started/). - {{site.data.alerts.end}} - -### Configure Alertmanager - -Active monitoring helps you spot problems early, but it is also essential to send notifications when there are events that require investigation or intervention. This section shows you how to use [Alertmanager](https://prometheus.io/docs/alerting/alertmanager/) and CockroachDB's starter [alerting rules](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/prometheus/alert-rules.yaml) to do this. - -1. Download our alertmanager-config.yaml configuration file. - -2.
Edit the `alertmanager-config.yaml` file to [specify the desired receivers for notifications](https://prometheus.io/docs/alerting/configuration/). Initially, the file contains a placeholder webhook. - -3. Add this configuration to the Kubernetes cluster as a secret, renaming it to `alertmanager.yaml` and labelling it to make it easier to find: - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl create secret generic alertmanager-cockroachdb --from-file=alertmanager.yaml=alertmanager-config.yaml - ~~~ - - ~~~ - secret "alertmanager-cockroachdb" created - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl label secret alertmanager-cockroachdb app=cockroachdb - ~~~ - - ~~~ - secret "alertmanager-cockroachdb" labeled - ~~~ - - {{site.data.alerts.callout_danger}} - The name of the secret, `alertmanager-cockroachdb`, must match the name used in the `alertmanager.yaml` file. If they differ, the Alertmanager instance will start without configuration, and nothing will happen. - {{site.data.alerts.end}} - -4. Use our [`alertmanager.yaml`](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/prometheus/alertmanager.yaml) file to create the various objects necessary to run an Alertmanager instance, including a ClusterIP service so that Prometheus can forward alerts: - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl apply -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/prometheus/alertmanager.yaml - ~~~ - - ~~~ - alertmanager "cockroachdb" created - service "alertmanager-cockroachdb" created - ~~~ - -5. Verify that Alertmanager is running: - - 1. Port-forward from your local machine to the pod running Alertmanager: - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl port-forward alertmanager-cockroachdb-0 9093 - ~~~ - - 2. Go to http://localhost:9093 in your browser. The screen should look like this: - - Alertmanager - -6. Ensure that the Alertmanagers are visible to Prometheus by opening http://localhost:9090/status. The screen should look like this: - - Alertmanager - -7. Add CockroachDB's starter [alerting rules](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/prometheus/alert-rules.yaml): - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl apply -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/prometheus/alert-rules.yaml - ~~~ - - ~~~ - prometheusrule "prometheus-cockroachdb-rules" created - ~~~ - -8. Ensure that the rules are visible to Prometheus by opening http://localhost:9090/rules. The screen should look like this: - - Alertmanager - -9. Verify that the example alert is firing by opening http://localhost:9090/alerts. The screen should look like this: - - Alertmanager - -10. To remove the example alert: - - 1. Use the `kubectl edit` command to open the rules for editing: - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl edit prometheusrules prometheus-cockroachdb-rules - ~~~ - - 2.
Remove the `dummy.rules` block and save the file: - - ~~~ - - name: rules/dummy.rules - rules: - - alert: TestAlertManager - expr: vector(1) - ~~~ diff --git a/src/current/_includes/v2.0/orchestration/kubernetes-scale-cluster.md b/src/current/_includes/v2.0/orchestration/kubernetes-scale-cluster.md deleted file mode 100644 index 75c6b278ac2..00000000000 --- a/src/current/_includes/v2.0/orchestration/kubernetes-scale-cluster.md +++ /dev/null @@ -1,17 +0,0 @@ -The Kubernetes cluster we created contains 3 nodes that pods can be run on. To ensure that you do not have two pods on the same node (as recommended in our [production best practices](recommended-production-settings.html)), you need to add a new node and then edit your StatefulSet configuration to add another pod. - -1. Add a worker node: - - On GKE, [resize your cluster](https://cloud.google.com/kubernetes-engine/docs/how-to/resizing-a-cluster). - - On GCE, resize your [Managed Instance Group](https://cloud.google.com/compute/docs/instance-groups/). - - On AWS, resize your [Auto Scaling Group](https://docs.aws.amazon.com/autoscaling/latest/userguide/as-manual-scaling.html). - -2. Use the `kubectl scale` command to add a pod to your StatefulSet: - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl scale statefulset cockroachdb --replicas=4 - ~~~ - - ~~~ - statefulset "cockroachdb" scaled - ~~~ diff --git a/src/current/_includes/v2.0/orchestration/kubernetes-simulate-failure.md b/src/current/_includes/v2.0/orchestration/kubernetes-simulate-failure.md deleted file mode 100644 index e3b2fd5c080..00000000000 --- a/src/current/_includes/v2.0/orchestration/kubernetes-simulate-failure.md +++ /dev/null @@ -1,28 +0,0 @@ -Based on the `replicas: 3` line in the StatefulSet configuration, Kubernetes ensures that three pods/nodes are running at all times. When a pod/node fails, Kubernetes automatically creates another pod/node with the same network identity and persistent storage. - -To see this in action: - -1. Terminate one of the CockroachDB nodes: - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl delete pod cockroachdb-2 - ~~~ - - ~~~ - pod "cockroachdb-2" deleted - ~~~ - -2. In the Admin UI, the **Summary** panel will soon show one node as **Suspect**. As Kubernetes auto-restarts the node, watch how the node once again becomes healthy. - -3. Back in the terminal, verify that the pod was automatically restarted: - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl get pod cockroachdb-2 - ~~~ - - ~~~ - NAME READY STATUS RESTARTS AGE - cockroachdb-2 1/1 Running 0 12s - ~~~ diff --git a/src/current/_includes/v2.0/orchestration/kubernetes-upgrade-cluster.md b/src/current/_includes/v2.0/orchestration/kubernetes-upgrade-cluster.md deleted file mode 100644 index 25fd2eb716a..00000000000 --- a/src/current/_includes/v2.0/orchestration/kubernetes-upgrade-cluster.md +++ /dev/null @@ -1,43 +0,0 @@ -As new versions of CockroachDB are released, it's strongly recommended to upgrade to newer versions in order to pick up bug fixes, performance improvements, and new features. The [general CockroachDB upgrade documentation](upgrade-cockroach-version.html) provides best practices for how to prepare for and execute upgrades of CockroachDB clusters, but the mechanism of actually stopping and restarting processes in Kubernetes is somewhat special. - -Kubernetes knows how to carry out a safe rolling upgrade process of the CockroachDB nodes. 
When you tell it to change the Docker image used in the CockroachDB StatefulSet, Kubernetes will go one-by-one, stopping a node, restarting it with the new image, and waiting for it to be ready to receive client requests before moving on to the next one. For more information, see [the Kubernetes documentation](https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#updating-statefulsets). - -1. All that it takes to kick off this process is changing the desired Docker image. To do so, pick the version that you want to upgrade to, then run the following command, replacing "VERSION" with your desired new version: - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl patch statefulset cockroachdb --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/image", "value":"cockroachdb/cockroach:VERSION"}]' - ~~~ - - ~~~ - statefulset "cockroachdb" patched - ~~~ - -2. If you then check the status of your cluster's pods, you should see one of them being restarted: - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl get pods - ~~~ - - ~~~ - NAME READY STATUS RESTARTS AGE - cockroachdb-0 1/1 Running 0 2m - cockroachdb-1 1/1 Running 0 2m - cockroachdb-2 1/1 Running 0 2m - cockroachdb-3 0/1 Terminating 0 1m - ~~~ - -3. This will continue until all of the pods have restarted and are running the new image. To check the image of each pod to determine whether they've all been upgraded, run: - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}{end}' - ~~~ - - ~~~ - cockroachdb-0 cockroachdb/cockroach:{{page.release_info.version}} - cockroachdb-1 cockroachdb/cockroach:{{page.release_info.version}} - cockroachdb-2 cockroachdb/cockroach:{{page.release_info.version}} - cockroachdb-3 cockroachdb/cockroach:{{page.release_info.version}} - ~~~ diff --git a/src/current/_includes/v2.0/orchestration/monitor-cluster.md b/src/current/_includes/v2.0/orchestration/monitor-cluster.md deleted file mode 100644 index 4db8e9058e0..00000000000 --- a/src/current/_includes/v2.0/orchestration/monitor-cluster.md +++ /dev/null @@ -1,28 +0,0 @@ -To access the cluster's [Admin UI](admin-ui-overview.html): - -1. Port-forward from your local machine to one of the pods: - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl port-forward cockroachdb-0 8080 - ~~~ - - ~~~ - Forwarding from 127.0.0.1:8080 -> 8080 - ~~~ - - {{site.data.alerts.callout_info}}The port-forward command must be run on the same machine as the web browser in which you want to view the Admin UI. If you have been running these commands from a cloud instance or other non-local shell, you will not be able to view the UI without configuring kubectl locally and running the above port-forward command on your local machine.{{site.data.alerts.end}} - -{% if page.secure == true %} - -2. Go to https://localhost:8080. - -{% else %} - -2. Go to http://localhost:8080. - -{% endif %} - -3. In the UI, verify that the cluster is running as expected: - - Click **View nodes list** on the right to ensure that all nodes successfully joined the cluster. - - Click the **Databases** tab on the left to verify that `bank` is listed.
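If you prefer a command-line check over the Admin UI, you can run a one-off query through a pod's built-in SQL client (a sketch, assuming an insecure cluster, the default pod name `cockroachdb-0`, and the binary at `/cockroach/cockroach` as in the official image):

~~~ shell
# Ask the cluster itself whether the bank database exists.
$ kubectl exec -it cockroachdb-0 -- /cockroach/cockroach sql --insecure -e "SHOW DATABASES;"
~~~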
diff --git a/src/current/_includes/v2.0/orchestration/start-cluster.md b/src/current/_includes/v2.0/orchestration/start-cluster.md deleted file mode 100644 index 90d820c0c6c..00000000000 --- a/src/current/_includes/v2.0/orchestration/start-cluster.md +++ /dev/null @@ -1,103 +0,0 @@ -{% if page.secure == true %} - -From your local workstation, use our [`cockroachdb-statefulset-secure.yaml`](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/cockroachdb-statefulset-secure.yaml) file to create the StatefulSet that automatically creates 3 pods, each with a CockroachDB node running inside it: - -{% include copy-clipboard.html %} -~~~ shell -$ kubectl create -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cockroachdb-statefulset-secure.yaml -~~~ - -~~~ -serviceaccount "cockroachdb" created -role "cockroachdb" created -clusterrole "cockroachdb" created -rolebinding "cockroachdb" created -clusterrolebinding "cockroachdb" created -service "cockroachdb-public" created -service "cockroachdb" created -poddisruptionbudget "cockroachdb-budget" created -statefulset "cockroachdb" created -~~~ - -Alternatively, if you'd rather start with a configuration file that has been customized for performance: - -1. Download our [performance version of `cockroachdb-statefulset-secure.yaml`](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/performance/cockroachdb-statefulset-secure.yaml): - - {% include copy-clipboard.html %} - ~~~ shell - $ curl -O https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/performance/cockroachdb-statefulset-secure.yaml - ~~~ - -2. Modify the file wherever there is a `TODO` comment. - -3. Use the file to create the StatefulSet and start the cluster: - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl create -f cockroachdb-statefulset-secure.yaml - ~~~ - -{% else %} - -1. From your local workstation, use our [`cockroachdb-statefulset.yaml`](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/cockroachdb-statefulset.yaml) file to create the StatefulSet that automatically creates 3 pods, each with a CockroachDB node running inside it: - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl create -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cockroachdb-statefulset.yaml - ~~~ - - ~~~ - service "cockroachdb-public" created - service "cockroachdb" created - poddisruptionbudget "cockroachdb-budget" created - statefulset "cockroachdb" created - ~~~ - - Alternatively, if you'd rather start with a configuration file that has been customized for performance: - - 1. Download our [performance version of `cockroachdb-statefulset-insecure.yaml`](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/performance/cockroachdb-statefulset-insecure.yaml): - - {% include copy-clipboard.html %} - ~~~ shell - $ curl -O https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/performance/cockroachdb-statefulset-insecure.yaml - ~~~ - - 2. Modify the file wherever there is a `TODO` comment. - - 3. Use the file to create the StatefulSet and start the cluster: - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl create -f cockroachdb-statefulset-insecure.yaml - ~~~ - -2. Confirm that three pods are `Running` successfully. 
Note that they will not - be considered `Ready` until after the cluster has been initialized: - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl get pods - ~~~ - - ~~~ - NAME READY STATUS RESTARTS AGE - cockroachdb-0 0/1 Running 0 2m - cockroachdb-1 0/1 Running 0 2m - cockroachdb-2 0/1 Running 0 2m - ~~~ - -3. Confirm that the persistent volumes and corresponding claims were created successfully for all three pods: - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl get persistentvolumes - ~~~ - - ~~~ - NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM REASON AGE - pvc-52f51ecf-8bd5-11e6-a4f4-42010a800002 1Gi RWO Delete Bound default/datadir-cockroachdb-0 26s - pvc-52fd3a39-8bd5-11e6-a4f4-42010a800002 1Gi RWO Delete Bound default/datadir-cockroachdb-1 27s - pvc-5315efda-8bd5-11e6-a4f4-42010a800002 1Gi RWO Delete Bound default/datadir-cockroachdb-2 27s - ~~~ - -{% endif %} diff --git a/src/current/_includes/v2.0/orchestration/start-kubernetes.md b/src/current/_includes/v2.0/orchestration/start-kubernetes.md deleted file mode 100644 index 131890e81ba..00000000000 --- a/src/current/_includes/v2.0/orchestration/start-kubernetes.md +++ /dev/null @@ -1,79 +0,0 @@ -Choose whether you want to orchestrate CockroachDB with Kubernetes using the hosted Google Kubernetes Engine (GKE) service or manually on Google Compute Engine (GCE) or AWS. The instructions below will change slightly depending on your choice. - -
- - - -
- -
- -1. Complete the **Before You Begin** steps described in the [Google Kubernetes Engine Quickstart](https://cloud.google.com/kubernetes-engine/docs/quickstart) documentation. - - This includes installing `gcloud`, which is used to create and delete Kubernetes Engine clusters, and `kubectl`, which is the command-line tool used to manage Kubernetes from your workstation. - - {{site.data.alerts.callout_success}} - The documentation offers the choice of using Google's Cloud Shell product or using a local shell on your machine. Choose to use a local shell if you want to be able to view the CockroachDB Admin UI using the steps in this guide. - {{site.data.alerts.end}} - -2. From your local workstation, start the Kubernetes cluster: - - {% include copy-clipboard.html %} - ~~~ shell - $ gcloud container clusters create cockroachdb - ~~~ - - ~~~ - Creating cluster cockroachdb...done. - ~~~ - - This creates GKE instances and joins them into a single Kubernetes cluster named `cockroachdb`. - - The process can take a few minutes, so do not move on to the next step until you see a `Creating cluster cockroachdb...done` message and details about your cluster. - -3. Get the email address associated with your Google Cloud account: - - {% include copy-clipboard.html %} - ~~~ shell - $ gcloud info | grep Account - ~~~ - - ~~~ - Account: [your.google.cloud.email@example.org] - ~~~ - - {{site.data.alerts.callout_danger}} - This command returns your email address in all lowercase. However, in the next step, you must enter the address using the accurate capitalization. For example, if your address is YourName@example.com, you must use YourName@example.com and not yourname@example.com. - {{site.data.alerts.end}} - -4. [Create the RBAC roles](https://cloud.google.com/kubernetes-engine/docs/how-to/role-based-access-control#prerequisites_for_using_role-based_access_control) CockroachDB needs for running on GKE, using the address from the previous step: - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl create clusterrolebinding $USER-cluster-admin-binding --clusterrole=cluster-admin --user=<your.google.cloud.email@example.org> - ~~~ - - ~~~ - clusterrolebinding "cluster-admin-binding" created - ~~~ -
- -
- -From your local workstation, install prerequisites and start a Kubernetes cluster as described in the [Running Kubernetes on Google Compute Engine](https://v1-18.docs.kubernetes.io/docs/setup/production-environment/turnkey/gce/) documentation. - -The process includes: - -- Creating a Google Cloud Platform account, installing `gcloud`, and other prerequisites. -- Downloading and installing the latest Kubernetes release. -- Creating GCE instances and joining them into a single Kubernetes cluster. -- Installing `kubectl`, the command-line tool used to manage Kubernetes from your workstation. - -
- -
- -From your local workstation, install prerequisites and start a Kubernetes cluster as described in the [Running Kubernetes on AWS EC2](https://v1-18.docs.kubernetes.io/docs/setup/production-environment/turnkey/aws/) documentation. - -
diff --git a/src/current/_includes/v2.0/orchestration/stop-kubernetes.md b/src/current/_includes/v2.0/orchestration/stop-kubernetes.md deleted file mode 100644 index 264eba07fa8..00000000000 --- a/src/current/_includes/v2.0/orchestration/stop-kubernetes.md +++ /dev/null @@ -1,28 +0,0 @@ -
- - {% include copy-clipboard.html %} - ~~~ shell - $ gcloud container clusters delete cockroachdb - ~~~ - -
- -
- - {% include copy-clipboard.html %} - ~~~ shell - $ cluster/kube-down.sh - ~~~ - -
- -
- - {% include copy-clipboard.html %} - ~~~ shell - $ cluster/kube-down.sh - ~~~ - -
- - {{site.data.alerts.callout_danger}}If you stop Kubernetes without first deleting the persistent volumes, they will still exist in your cloud project.{{site.data.alerts.end}} diff --git a/src/current/_includes/v2.0/orchestration/test-cluster-insecure.md b/src/current/_includes/v2.0/orchestration/test-cluster-insecure.md deleted file mode 100644 index 52396b848ad..00000000000 --- a/src/current/_includes/v2.0/orchestration/test-cluster-insecure.md +++ /dev/null @@ -1,45 +0,0 @@ -1. Launch a temporary interactive pod and start the [built-in SQL client](use-the-built-in-sql-client.html) inside it: - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl run cockroachdb -it --image=cockroachdb/cockroach --rm --restart=Never \ - -- sql --insecure --host=cockroachdb-public - ~~~ - -2. Run some basic [CockroachDB SQL statements](learn-cockroachdb-sql.html): - - {% include copy-clipboard.html %} - ~~~ sql - > CREATE DATABASE bank; - ~~~ - - {% include copy-clipboard.html %} - ~~~ sql - > CREATE TABLE bank.accounts (id INT PRIMARY KEY, balance DECIMAL); - ~~~ - - {% include copy-clipboard.html %} - ~~~ sql - > INSERT INTO bank.accounts VALUES (1, 1000.50); - ~~~ - - {% include copy-clipboard.html %} - ~~~ sql - > SELECT * FROM bank.accounts; - ~~~ - - ~~~ - +----+---------+ - | id | balance | - +----+---------+ - | 1 | 1000.5 | - +----+---------+ - (1 row) - ~~~ - -3. Exit the SQL shell and delete the temporary pod: - - {% include copy-clipboard.html %} - ~~~ sql - > \q - ~~~ diff --git a/src/current/_includes/v2.0/performance/tuning.py b/src/current/_includes/v2.0/performance/tuning.py deleted file mode 100644 index 248daec2488..00000000000 --- a/src/current/_includes/v2.0/performance/tuning.py +++ /dev/null @@ -1,56 +0,0 @@ -#!/usr/bin/env python - -import argparse -import psycopg2 -import time - -parser = argparse.ArgumentParser( - description="test performance of statements against movr database") -parser.add_argument("--host", required=True, - help="ip address of one of the CockroachDB nodes") -parser.add_argument("--statement", required=True, - help="statement to execute") -parser.add_argument("--repeat", type=int, - help="number of times to repeat the statement", default = 20) -parser.add_argument("--times", - help="print time for each repetition of the statement", action="store_true") -parser.add_argument("--cumulative", - help="print cumulative time for all repetitions of the statement", action="store_true") -args = parser.parse_args() - -conn = psycopg2.connect(database='movr', user='root', host=args.host, port=26257) -conn.set_session(autocommit=True) -cur = conn.cursor() - -times = list() -for n in range(args.repeat): - start = time.time() - statement = args.statement - cur.execute(statement) - if n < 1: - if cur.description is not None: - colnames = [desc[0] for desc in cur.description] - print("") - print("Result:") - print(colnames) - rows = cur.fetchall() - for row in rows: - print([str(cell) for cell in row]) - end = time.time() - times.append((end - start)* 1000) - -cur.close() -conn.close() - -print("") -if args.times: - print("Times (milliseconds):") - print(times) - print("") -print("Average time (milliseconds):") -print(float(sum(times))/len(times)) -print("") -if args.cumulative: - print("Cumulative time (milliseconds):") - print(sum(times)) - print("") diff --git a/src/current/_includes/v2.0/prod-deployment/insecure-initialize-cluster.md b/src/current/_includes/v2.0/prod-deployment/insecure-initialize-cluster.md deleted file mode 100644 index 5d1384c8467..00000000000 
--- a/src/current/_includes/v2.0/prod-deployment/insecure-initialize-cluster.md +++ /dev/null @@ -1,12 +0,0 @@ -On your local machine, complete the node startup process and have them join together as a cluster: - -1. [Install CockroachDB](install-cockroachdb.html) on your local machine, if you haven't already. - -2. Run the [`cockroach init`](initialize-a-cluster.html) command, with the `--host` flag set to the address of any node: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach init --insecure --host=<address of any node>
- ~~~ - - Each node then prints helpful details to the [standard output](start-a-node.html#standard-output), such as the CockroachDB version, the URL for the admin UI, and the SQL URL for clients. diff --git a/src/current/_includes/v2.0/prod-deployment/insecure-recommendations.md b/src/current/_includes/v2.0/prod-deployment/insecure-recommendations.md deleted file mode 100644 index e6f7fc0b9fe..00000000000 --- a/src/current/_includes/v2.0/prod-deployment/insecure-recommendations.md +++ /dev/null @@ -1,15 +0,0 @@ -- If you plan to use CockroachDB in production, carefully review the [Production Checklist](recommended-production-settings.html). - -- Consider using a [secure cluster](manual-deployment.html) instead. Using an insecure cluster comes with risks: - - Your cluster is open to any client that can access any node's IP addresses. - - Any user, even `root`, can log in without providing a password. - - Any user, connecting as `root`, can read or write any data in your cluster. - - There is no network encryption or authentication, and thus no confidentiality. - -- Decide how you want to access your Admin UI: - - Access Level | Description - -------------|------------ - Partially open | Set a firewall rule to allow only specific IP addresses to communicate on port `8080`. - Completely open | Set a firewall rule to allow all IP addresses to communicate on port `8080`. - Completely closed | Set a firewall rule to disallow all communication on port `8080`. In this case, a machine with SSH access to a node could use an SSH tunnel to access the Admin UI. diff --git a/src/current/_includes/v2.0/prod-deployment/insecure-requirements.md b/src/current/_includes/v2.0/prod-deployment/insecure-requirements.md deleted file mode 100644 index 52640254763..00000000000 --- a/src/current/_includes/v2.0/prod-deployment/insecure-requirements.md +++ /dev/null @@ -1,5 +0,0 @@ -- You must have [SSH access]({{page.ssh-link}}) to each machine. This is necessary for distributing and starting CockroachDB binaries. - -- Your network configuration must allow TCP communication on the following ports: - - `26257` for intra-cluster and client-cluster communication - - `8080` to expose your Admin UI diff --git a/src/current/_includes/v2.0/prod-deployment/insecure-scale-cluster.md b/src/current/_includes/v2.0/prod-deployment/insecure-scale-cluster.md deleted file mode 100644 index 66df603eb03..00000000000 --- a/src/current/_includes/v2.0/prod-deployment/insecure-scale-cluster.md +++ /dev/null @@ -1,120 +0,0 @@ -You can start the nodes manually or automate the process using [systemd](https://www.freedesktop.org/wiki/Software/systemd/). - -
- - -
-

- -
- -For each additional node you want to add to the cluster, complete the following steps: - -1. SSH to the machine where you want the node to run. - -2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, and extract the binary: - - {% include copy-clipboard.html %} - ~~~ shell - $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \ - | tar -xz - ~~~ - -3. Copy the binary into the `PATH`: - - {% include copy-clipboard.html %} - ~~~ shell - $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/ - ~~~ - - If you get a permissions error, prefix the command with `sudo`. - -4. Run the [`cockroach start`](start-a-node.html) command just like you did for the initial nodes: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach start --insecure \ - --host=<node4 address> \ - --locality=<key-value pairs> \ - --cache=.25 \ - --max-sql-memory=.25 \ - --join=<node1 address>:26257,<node2 address>:26257,<node3 address>:26257 \ - --background - ~~~ - -5. Update your load balancer to recognize the new node. -
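To confirm that the new node joined successfully, one option is to ask any existing node for the cluster's membership (a sketch; `<address of any node>` is a placeholder for a real address):

~~~ shell
# List all nodes the cluster knows about; the new node should appear
# with a recent updated_at timestamp.
$ cockroach node status --insecure --host=<address of any node>
~~~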
- -
- -For each additional node you want to add to the cluster, complete the following steps: - -1. SSH to the machine where you want the node to run. Ensure you are logged in as the `root` user. - -2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, and extract the binary: - - {% include copy-clipboard.html %} - ~~~ shell - $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \ - | tar -xz - ~~~ - -3. Copy the binary into the `PATH`: - - {% include copy-clipboard.html %} - ~~~ shell - $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/ - ~~~ - - If you get a permissions error, prefix the command with `sudo`. - -4. Create the Cockroach directory: - - {% include copy-clipboard.html %} - ~~~ shell - $ mkdir /var/lib/cockroach - ~~~ - -5. Create a Unix user named `cockroach`: - - {% include copy-clipboard.html %} - ~~~ shell - $ useradd cockroach - ~~~ - -6. Change the ownership of `Cockroach` directory to the user `cockroach`: - - {% include copy-clipboard.html %} - ~~~ shell - $ chown cockroach /var/lib/cockroach - ~~~ - -7. Download the [sample configuration template](https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/{{ page.version.version }}/prod-deployment/insecurecockroachdb.service): - - {% include copy-clipboard.html %} - ~~~ shell - $ wget -qO- https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/{{ page.version.version }}/prod-deployment/insecurecockroachdb.service - ~~~ - - Alternatively, you can create the file yourself and copy the script into it: - - {% include copy-clipboard.html %} - ~~~ shell - {% include {{ page.version.version }}/prod-deployment/insecurecockroachdb.service %} - ~~~ - - Save the file in the `/etc/systemd/system/` directory - -8. Customize the sample configuration template for your deployment: - - Specify values for the following flags in the sample configuration template: - - Flag | Description - -----|------------ - `--host` | Specifies the hostname or IP address to listen on for intra-cluster and client communication, as well as to identify the node in the Admin UI. If it is a hostname, it must be resolvable from all nodes, and if it is an IP address, it must be routable from all nodes.

If you want the node to listen on multiple interfaces, leave `--host` empty.

If you want the node to communicate with other nodes on an internal address (e.g., within a private network) while listening on all interfaces, leave `--host` empty and set the `--advertise-host` flag to the internal address. - `--join` | Identifies the address and port of 3-5 of the initial nodes of the cluster. - -9. Repeat these steps for each additional node that you want in your cluster. - -
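These steps customize the unit file but stop short of launching it; with systemd, the usual follow-up after saving the template into `/etc/systemd/system/` is (standard systemd commands, using the unit name from the template above):

~~~ shell
# Pick up the new unit file, start the node now, and start it on boot.
$ systemctl daemon-reload
$ systemctl start insecurecockroachdb
$ systemctl enable insecurecockroachdb
~~~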
diff --git a/src/current/_includes/v2.0/prod-deployment/insecure-start-nodes.md b/src/current/_includes/v2.0/prod-deployment/insecure-start-nodes.md deleted file mode 100644 index 046a80d37c0..00000000000 --- a/src/current/_includes/v2.0/prod-deployment/insecure-start-nodes.md +++ /dev/null @@ -1,149 +0,0 @@ -You can start the nodes manually or automate the process using [systemd](https://www.freedesktop.org/wiki/Software/systemd/). - -
- - -
-

- -
- -For each initial node of your cluster, complete the following steps: - -{{site.data.alerts.callout_info}}After completing these steps, nodes will not yet be live. They will complete the startup process and join together to form a cluster as soon as the cluster is initialized in the next step.{{site.data.alerts.end}} - -1. SSH to the machine where you want the node to run. - -2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, and extract the binary: - - {% include copy-clipboard.html %} - ~~~ shell - $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \ - | tar -xz - ~~~ - -3. Copy the binary into the `PATH`: - - {% include copy-clipboard.html %} - ~~~ shell - $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/ - ~~~ - - If you get a permissions error, prefix the command with `sudo`. - -4. Run the [`cockroach start`](start-a-node.html) command: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach start \ - --insecure \ - --advertise-host=<node1 address> \ - --join=<node1 address>:26257,<node2 address>:26257,<node3 address>:26257 \ - --cache=.25 \ - --max-sql-memory=.25 \ - --background - ~~~ - - This command primes the node to start, using the following flags: - - Flag | Description - -----|------------ - `--insecure` | Indicates that the cluster is insecure, with no network encryption or authentication. - `--advertise-host` | Specifies the IP address or hostname to tell other nodes to use. This value must route to an IP address the node is listening on (with `--host` unspecified, the node listens on all IP addresses).

In some networking scenarios, you may need to use `--advertise-host` and/or `--host` differently. For more details, see [Networking](recommended-production-settings.html#networking). - `--join` | Identifies the address and port of 3-5 of the initial nodes of the cluster. These addresses should match the addresses that the target nodes are advertising. - `--cache`
`--max-sql-memory` | Increases the node's cache and temporary SQL memory size to 25% of available system memory to improve read performance and increase capacity for in-memory SQL processing. For more details, see [Cache and SQL Memory Size](recommended-production-settings.html#cache-and-sql-memory-size). - `--background` | Starts the node in the background so you gain control of the terminal to issue more commands. - - When deploying across multiple datacenters, or when there is otherwise high latency between nodes, it is recommended to set `--locality` as well. It is also required to use certain enterprise features. For more details, see [Locality](start-a-node.html#locality). - - For other flags not explicitly set, the command uses default values. For example, the node stores data in `--store=cockroach-data`, binds internal and client communication to `--port=26257`, and binds Admin UI HTTP requests to `--http-port=8080`. To set these options manually, see [Start a Node](start-a-node.html). - -5. Repeat these steps for each additional node that you want in your cluster. - -
- -
- -For each initial node of your cluster, complete the following steps: - -{{site.data.alerts.callout_info}}After completing these steps, nodes will not yet be live. They will complete the startup process and join together to form a cluster as soon as the cluster is initialized in the next step.{{site.data.alerts.end}} - -1. SSH to the machine where you want the node to run. Ensure you are logged in as the `root` user. - -2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, and extract the binary: - - {% include copy-clipboard.html %} - ~~~ shell - $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \ - | tar -xz - ~~~ - -3. Copy the binary into the `PATH`: - - {% include copy-clipboard.html %} - ~~~ shell - $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/ - ~~~ - - If you get a permissions error, prefix the command with `sudo`. - -4. Create the Cockroach directory: - - {% include copy-clipboard.html %} - ~~~ shell - $ mkdir /var/lib/cockroach - ~~~ - -5. Create a Unix user named `cockroach`: - - {% include copy-clipboard.html %} - ~~~ shell - $ useradd cockroach - ~~~ - -6. Change the ownership of `Cockroach` directory to the user `cockroach`: - - {% include copy-clipboard.html %} - ~~~ shell - $ chown cockroach /var/lib/cockroach - ~~~ - -7. Download the [sample configuration template](https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/{{ page.version.version }}/prod-deployment/insecurecockroachdb.service) and save the file in the `/etc/systemd/system/` directory: - - {% include copy-clipboard.html %} - ~~~ shell - $ wget -qO- https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/{{ page.version.version }}/prod-deployment/insecurecockroachdb.service - ~~~ - - Alternatively, you can create the file yourself and copy the script into it: - - {% include copy-clipboard.html %} - ~~~ shell - {% include {{ page.version.version }}/prod-deployment/insecurecockroachdb.service %} - ~~~ - -8. In the sample configuration template, specify values for the following flags: - - Flag | Description - -----|------------ - `--advertise-host` | Specifies the IP address or hostname to tell other nodes to use. This value must route to an IP address the node is listening on (with `--host` unspecified, the node listens on all IP addresses).

In some networking scenarios, you may need to use `--advertise-host` and/or `--host` differently. For more details, see [Networking](recommended-production-settings.html#networking). - `--join` | Identifies the address and port of 3-5 of the initial nodes of the cluster. These addresses should match the addresses that the target nodes are advertising. - - When deploying across multiple datacenters, or when there is otherwise high latency between nodes, it is recommended to set `--locality` as well. It is also required to use certain enterprise features. For more details, see [Locality](start-a-node.html#locality). - - For other flags not explicitly set, the command uses default values. For example, the node stores data in `--store=cockroach-data`, binds internal and client communication to `--port=26257`, and binds Admin UI HTTP requests to `--http-port=8080`. To set these options manually, see [Start a Node](start-a-node.html). - -9. Start the CockroachDB cluster: - - {% include copy-clipboard.html %} - ~~~ shell - $ systemctl start insecurecockroachdb - ~~~ - -10. Repeat these steps for each additional node that you want in your cluster. - -{{site.data.alerts.callout_info}} -`systemd` handles node restarts in case of node failure. To stop a node without `systemd` restarting it, run `systemctl stop insecurecockroachdb` -{{site.data.alerts.end}} - -
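Because the sample unit file routes output to syslog under the `cockroach` identifier, you can follow a node's logs with standard `journalctl` invocations (assuming `systemd-journald` is collecting syslog output on the host):

~~~ shell
# Follow the node's output via its unit, or filter by syslog identifier.
$ journalctl -u insecurecockroachdb -f
$ journalctl -t cockroach --since "10 minutes ago"
~~~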
diff --git a/src/current/_includes/v2.0/prod-deployment/insecure-test-cluster.md b/src/current/_includes/v2.0/prod-deployment/insecure-test-cluster.md deleted file mode 100644 index 1c926379fda..00000000000 --- a/src/current/_includes/v2.0/prod-deployment/insecure-test-cluster.md +++ /dev/null @@ -1,48 +0,0 @@ -CockroachDB replicates and distributes data for you behind the scenes and uses a [Gossip protocol](https://en.wikipedia.org/wiki/Gossip_protocol) to enable each node to locate data across the cluster. - -To test this, use the [built-in SQL client](use-the-built-in-sql-client.html) locally as follows: - -1. On your local machine, launch the built-in SQL client, with the `--host` flag set to the address of any node: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach sql --insecure --host=<address of any node>
- ~~~ - -2. Create an `insecurenodetest` database: - - {% include copy-clipboard.html %} - ~~~ sql - > CREATE DATABASE insecurenodetest; - ~~~ - -3. Use `\q` or `ctrl-d` to exit the SQL shell. - -4. Launch the built-in SQL client, with the `--host` flag set to the address of a different node: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach sql --insecure --host=<address of different node>
- ~~~ - -5. View the cluster's databases, which will include `insecurenodetest`: - - {% include copy-clipboard.html %} - ~~~ sql - > SHOW DATABASES; - ~~~ - - ~~~ - +--------------------+ - | Database | - +--------------------+ - | crdb_internal | - | information_schema | - | insecurenodetest | - | pg_catalog | - | system | - +--------------------+ - (5 rows) - ~~~ - -6. Use `\q` or `ctrl-d` to exit the SQL shell. diff --git a/src/current/_includes/v2.0/prod-deployment/insecure-test-load-balancing.md b/src/current/_includes/v2.0/prod-deployment/insecure-test-load-balancing.md deleted file mode 100644 index e4369b54410..00000000000 --- a/src/current/_includes/v2.0/prod-deployment/insecure-test-load-balancing.md +++ /dev/null @@ -1,41 +0,0 @@ -CockroachDB offers a pre-built `workload` binary for Linux that includes several load generators for simulating client traffic against your cluster. This step features CockroachDB's version of the [TPC-C](http://www.tpc.org/tpcc/) workload. - -{{site.data.alerts.callout_success}}For comprehensive guidance on benchmarking CockroachDB with TPC-C, see our Performance Benchmarking white paper.{{site.data.alerts.end}} - -1. SSH to the machine where you want to run the sample TPC-C workload. - - This should be a machine that is not running a CockroachDB node. - -2. Download `workload` and make it executable: - - {% include copy-clipboard.html %} - ~~~ shell - $ wget https://edge-binaries.cockroachdb.com/cockroach/workload.LATEST ; chmod 755 workload.LATEST - ~~~ - -3. Rename and copy `workload` into the `PATH`: - - {% include copy-clipboard.html %} - ~~~ shell - $ cp -i workload.LATEST /usr/local/bin/workload - ~~~ - -4. Start the TPC-C workload, pointing it at the IP address of the load balancer: - - {% include copy-clipboard.html %} - ~~~ shell - $ workload run tpcc \ - --drop \ - --init \ - --duration=20m \ - --tolerate-errors \ - "postgresql://root@<load balancer address>:26257?sslmode=disable" - ~~~ - - For details about `tpcc` options, use `workload run tpcc --help`. For details about other load generators included in `workload`, use `workload run --help`. - -5. To monitor the load generator's progress, open the [Admin UI](admin-ui-access-and-navigate.html) by pointing a browser to the address in the `admin` field in the standard output of any node on startup. - - Since the load generator is pointed at the load balancer, the connections will be evenly distributed across nodes. To verify this, click **Metrics** on the left, select the **SQL** dashboard, and then check the **SQL Connections** graph. You can use the **Graph** menu to filter the graph for specific nodes.
diff --git a/src/current/_includes/v2.0/prod-deployment/insecurecockroachdb.service b/src/current/_includes/v2.0/prod-deployment/insecurecockroachdb.service deleted file mode 100644 index 1bab823eea7..00000000000 --- a/src/current/_includes/v2.0/prod-deployment/insecurecockroachdb.service +++ /dev/null @@ -1,16 +0,0 @@ -[Unit] -Description=Cockroach Database cluster node -Requires=network.target -[Service] -Type=notify -WorkingDirectory=/var/lib/cockroach -ExecStart=/usr/local/bin/cockroach start --insecure --advertise-host= --join=:26257,:26257,:26257 --cache=.25 --max-sql-memory=.25 -TimeoutStopSec=60 -Restart=always -RestartSec=10 -StandardOutput=syslog -StandardError=syslog -SyslogIdentifier=cockroach -User=cockroach -[Install] -WantedBy=default.target diff --git a/src/current/_includes/v2.0/prod-deployment/monitor-cluster.md b/src/current/_includes/v2.0/prod-deployment/monitor-cluster.md deleted file mode 100644 index cb8185eac19..00000000000 --- a/src/current/_includes/v2.0/prod-deployment/monitor-cluster.md +++ /dev/null @@ -1,3 +0,0 @@ -Despite CockroachDB's various [built-in safeguards against failure](high-availability.html), it is critical to actively monitor the overall health and performance of a cluster running in production and to create alerting rules that promptly send notifications when there are events that require investigation or intervention. - -For details about available monitoring options and the most important events and metrics to alert on, see [Monitoring and Alerting](monitoring-and-alerting.html). diff --git a/src/current/_includes/v2.0/prod-deployment/prod-see-also.md b/src/current/_includes/v2.0/prod-deployment/prod-see-also.md deleted file mode 100644 index 9dc661f6dfc..00000000000 --- a/src/current/_includes/v2.0/prod-deployment/prod-see-also.md +++ /dev/null @@ -1,7 +0,0 @@ -- [Production Checklist](recommended-production-settings.html) -- [Manual Deployment](manual-deployment.html) -- [Orchestrated Deployment](orchestration.html) -- [Monitoring and Alerting](monitoring-and-alerting.html) -- [Performance Benchmarking](performance-benchmarking-with-tpc-c.html) -- [Performance Tuning](performance-tuning.html) -- [Local Deployment](start-a-local-cluster.html) diff --git a/src/current/_includes/v2.0/prod-deployment/secure-generate-certificates.md b/src/current/_includes/v2.0/prod-deployment/secure-generate-certificates.md deleted file mode 100644 index 4d821e21063..00000000000 --- a/src/current/_includes/v2.0/prod-deployment/secure-generate-certificates.md +++ /dev/null @@ -1,144 +0,0 @@ -You can use either `cockroach cert` commands or [`openssl` commands](create-security-certificates-openssl.html) to generate security certificates. This section features the `cockroach cert` commands. - -Locally, you'll need to [create the following certificates and keys](create-security-certificates.html): - -- A certificate authority (CA) key pair (`ca.crt` and `ca.key`). -- A node key pair for each node, issued to its IP addresses and any common names the machine uses, as well as to the IP addresses and common names for machines running load balancers. -- A client key pair for the `root` user. You'll use this to run a sample workload against the cluster as well as some `cockroach` client commands from your local machine. - -{{site.data.alerts.callout_success}}Before beginning, it's useful to collect each of your machine's internal and external IP addresses, as well as any server names you want to issue certificates for.{{site.data.alerts.end}} - -1. 
[Install CockroachDB](install-cockroachdb.html) on your local machine, if you haven't already. - -2. Create two directories: - - {% include copy-clipboard.html %} - ~~~ shell - $ mkdir certs - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ mkdir my-safe-directory - ~~~ - - `certs`: You'll generate your CA certificate and all node and client certificates and keys in this directory and then upload some of the files to your nodes. - - `my-safe-directory`: You'll generate your CA key in this directory and then reference the key when generating node and client certificates. After that, you'll keep the key safe and secret; you will not upload it to your nodes. - -3. Create the CA certificate and key: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach cert create-ca \ - --certs-dir=certs \ - --ca-key=my-safe-directory/ca.key - ~~~ - -4. Create the certificate and key for the first node, issued to all common names you might use to refer to the node as well as to the load balancer instances: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach cert create-node \ - \ - \ - \ - \ - localhost \ - 127.0.0.1 \ - \ - \ - \ - --certs-dir=certs \ - --ca-key=my-safe-directory/ca.key - ~~~ - -5. Upload certificates to the first node: - - {% include copy-clipboard.html %} - ~~~ shell - # Create the certs directory: - $ ssh @ "mkdir certs" - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - # Upload the CA certificate and node certificate and key: - $ scp certs/ca.crt \ - certs/node.crt \ - certs/node.key \ - @:~/certs - ~~~ - -6. Delete the local copy of the node certificate and key: - - {% include copy-clipboard.html %} - ~~~ shell - $ rm certs/node.crt certs/node.key - ~~~ - - {{site.data.alerts.callout_info}}This is necessary because the certificates and keys for additional nodes will also be named `node.crt` and `node.key`. As an alternative to deleting these files, you can run the next `cockroach cert create-node` commands with the `--overwrite` flag.{{site.data.alerts.end}} - -7. Create the certificate and key for the second node, issued to all common names you might use to refer to the node as well as to the load balancer instances: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach cert create-node \ - \ - \ - \ - \ - localhost \ - 127.0.0.1 \ - \ - \ - \ - --certs-dir=certs \ - --ca-key=my-safe-directory/ca.key - ~~~ - -8. Upload certificates to the second node: - - {% include copy-clipboard.html %} - ~~~ shell - # Create the certs directory: - $ ssh @ "mkdir certs" - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - # Upload the CA certificate and node certificate and key: - $ scp certs/ca.crt \ - certs/node.crt \ - certs/node.key \ - @:~/certs - ~~~ - -9. Repeat steps 6 - 8 for each additional node. - -10. Create a client certificate and key for the `root` user: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach cert create-client \ - root \ - --certs-dir=certs \ - --ca-key=my-safe-directory/ca.key - ~~~ - -11. 
Upload certificates to the machine where you will run a sample workload: - - {% include copy-clipboard.html %} - ~~~ shell - # Create the certs directory: - $ ssh @ "mkdir certs" - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - # Upload the CA certificate and client certificate and key: - $ scp certs/ca.crt \ - certs/client.root.crt \ - certs/client.root.key \ - @:~/certs - ~~~ - - In later steps, you'll also use the `root` user's certificate to run [`cockroach`](cockroach-commands.html) client commands from your local machine. If you also want to run `cockroach` client commands directly on a node (e.g., for local debugging), you'll need to copy the `root` user's certificate and key to that node as well. diff --git a/src/current/_includes/v2.0/prod-deployment/secure-initialize-cluster.md b/src/current/_includes/v2.0/prod-deployment/secure-initialize-cluster.md deleted file mode 100644 index 9ae863063bf..00000000000 --- a/src/current/_includes/v2.0/prod-deployment/secure-initialize-cluster.md +++ /dev/null @@ -1,15 +0,0 @@ -On your local machine, run the [`cockroach init`](initialize-a-cluster.html) command to complete the node startup process and have the nodes join together as a cluster: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach init --certs-dir=certs --host=
-~~~ - -This command requires the following flags: - -Flag | Description ------|------------ -`--certs-dir` | Specifies the directory where you placed the `ca.crt` file and the `client.root.crt` and `client.root.key` files for the `root` user. -`--host` | Specifies the address of any node in the cluster. - -After running this command, each node prints helpful details to the [standard output](start-a-node.html#standard-output), such as the CockroachDB version, the URL for the admin UI, and the SQL URL for clients. diff --git a/src/current/_includes/v2.0/prod-deployment/secure-recommendations.md b/src/current/_includes/v2.0/prod-deployment/secure-recommendations.md deleted file mode 100644 index 79d077ee84d..00000000000 --- a/src/current/_includes/v2.0/prod-deployment/secure-recommendations.md +++ /dev/null @@ -1,9 +0,0 @@ -- If you plan to use CockroachDB in production, carefully review the [Production Checklist](recommended-production-settings.html). - -- Decide how you want to access your Admin UI: - - Access Level | Description - -------------|------------ - Partially open | Set a firewall rule to allow only specific IP addresses to communicate on port `8080`. - Completely open | Set a firewall rule to allow all IP addresses to communicate on port `8080`. - Completely closed | Set a firewall rule to disallow all communication on port `8080`. In this case, a machine with SSH access to a node could use an SSH tunnel to access the Admin UI. diff --git a/src/current/_includes/v2.0/prod-deployment/secure-requirements.md b/src/current/_includes/v2.0/prod-deployment/secure-requirements.md deleted file mode 100644 index f4a9beb1209..00000000000 --- a/src/current/_includes/v2.0/prod-deployment/secure-requirements.md +++ /dev/null @@ -1,7 +0,0 @@ -- You must have [CockroachDB installed](install-cockroachdb.html) locally. This is necessary for generating and managing your deployment's certificates. - -- You must have [SSH access]({{page.ssh-link}}) to each machine. This is necessary for distributing and starting CockroachDB binaries. - -- Your network configuration must allow TCP communication on the following ports: - - `26257` for intra-cluster and client-cluster communication - - `8080` to expose your Admin UI diff --git a/src/current/_includes/v2.0/prod-deployment/secure-scale-cluster.md b/src/current/_includes/v2.0/prod-deployment/secure-scale-cluster.md deleted file mode 100644 index 4638d6b7500..00000000000 --- a/src/current/_includes/v2.0/prod-deployment/secure-scale-cluster.md +++ /dev/null @@ -1,128 +0,0 @@ -You can start the nodes manually or automate the process using [systemd](https://www.freedesktop.org/wiki/Software/systemd/). - -
- - -
-

- -
- -For each additional node you want to add to the cluster, complete the following steps: - -1. SSH to the machine where you want the node to run. - -2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, and extract the binary: - - {% include copy-clipboard.html %} - ~~~ shell - $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \ - | tar -xz - ~~~ - -3. Copy the binary into the `PATH`: - - {% include copy-clipboard.html %} - ~~~ shell - $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/ - ~~~ - - If you get a permissions error, prefix the command with `sudo`. - -4. Run the [`cockroach start`](start-a-node.html) command just like you did for the initial nodes: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach start \ - --certs-dir=certs \ - --host= \ - --locality= \ - --cache=.25 \ - --max-sql-memory=.25 \ - --join=:26257,:26257,:26257 \ - --background - ~~~ - -5. Update your load balancer to recognize the new node. - -
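After updating the load balancer, one way to confirm the new node joined (a sketch, not part of the original steps; `<address of any node>` is a placeholder) is the `cockroach node status` command:

{% include copy-clipboard.html %}
~~~ shell
# Sketch: list the nodes the cluster knows about; the newly added node
# should appear with a fresh node ID.
$ cockroach node status --certs-dir=certs --host=<address of any node>
~~~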
- -
- -For each additional node you want to add to the cluster, complete the following steps: - -1. SSH to the machine where you want the node to run. Ensure you are logged in as the `root` user. - -2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, and extract the binary: - - {% include copy-clipboard.html %} - ~~~ shell - $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \ - | tar -xz - ~~~ - -3. Copy the binary into the `PATH`: - - {% include copy-clipboard.html %} - ~~~ shell - $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/ - ~~~ - - If you get a permissions error, prefix the command with `sudo`. - -4. Create the Cockroach directory: - - {% include copy-clipboard.html %} - ~~~ shell - $ mkdir /var/lib/cockroach - ~~~ - -5. Create a Unix user named `cockroach`: - - {% include copy-clipboard.html %} - ~~~ shell - $ useradd cockroach - ~~~ - -6. Move the `certs` directory to the `cockroach` directory. - - {% include copy-clipboard.html %} - ~~~ shell - $ mv certs /var/lib/cockroach/ - ~~~ - -7. Change the ownership of the `/var/lib/cockroach` directory to the `cockroach` user: - - {% include copy-clipboard.html %} - ~~~ shell - $ chown -R cockroach:cockroach /var/lib/cockroach - ~~~ - -8. Download the [sample configuration template](https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/{{ page.version.version }}/prod-deployment/securecockroachdb.service): - - {% include copy-clipboard.html %} - ~~~ shell - $ wget -q https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/{{ page.version.version }}/prod-deployment/securecockroachdb.service - ~~~ - - Alternatively, you can create the file yourself and copy the script into it: - - {% include copy-clipboard.html %} - ~~~ shell - {% include {{ page.version.version }}/prod-deployment/securecockroachdb.service %} - ~~~ - - Save the file in the `/etc/systemd/system/` directory. - -9. Customize the sample configuration template for your deployment: - - Specify values for the following flags in the sample configuration template: - - Flag | Description - -----|------------ - `--host` | Specifies the hostname or IP address to listen on for intra-cluster and client communication, as well as to identify the node in the Admin UI. If it is a hostname, it must be resolvable from all nodes, and if it is an IP address, it must be routable from all nodes.

If you want the node to listen on multiple interfaces, leave `--host` empty.

If you want the node to communicate with other nodes on an internal address (e.g., within a private network) while listening on all interfaces, leave `--host` empty and set the `--advertise-host` flag to the internal address. - `--join` | Identifies the address and port of 3-5 of the initial nodes of the cluster. - -10. Repeat these steps for each additional node that you want in your cluster. - -
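The include stops after the template is customized; as a hedged sketch of the remaining systemd steps (assuming the unit file was saved as `/etc/systemd/system/securecockroachdb.service`), you would reload systemd and start the service:

{% include copy-clipboard.html %}
~~~ shell
# Sketch: pick up the new unit file, start the node, and confirm the
# service is active.
$ systemctl daemon-reload
$ systemctl start securecockroachdb
$ systemctl status securecockroachdb
~~~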
diff --git a/src/current/_includes/v2.0/prod-deployment/secure-start-nodes.md b/src/current/_includes/v2.0/prod-deployment/secure-start-nodes.md deleted file mode 100644 index e843e84d951..00000000000 --- a/src/current/_includes/v2.0/prod-deployment/secure-start-nodes.md +++ /dev/null @@ -1,156 +0,0 @@ -You can start the nodes manually or automate the process using [systemd](https://www.freedesktop.org/wiki/Software/systemd/). - -
- - -
-

- -
- -For each initial node of your cluster, complete the following steps: - -{{site.data.alerts.callout_info}}After completing these steps, nodes will not yet be live. They will complete the startup process and join together to form a cluster as soon as the cluster is initialized in the next step.{{site.data.alerts.end}} - -1. SSH to the machine where you want the node to run. - -2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, and extract the binary: - - {% include copy-clipboard.html %} - ~~~ shell - $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \ - | tar -xz - ~~~ - -3. Copy the binary into the `PATH`: - - {% include copy-clipboard.html %} - ~~~ shell - $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/ - ~~~ - - If you get a permissions error, prefix the command with `sudo`. - -4. Run the [`cockroach start`](start-a-node.html) command: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach start \ - --certs-dir=certs \ - --advertise-host= \ - --join=:26257,:26257,:26257 \ - --cache=.25 \ - --max-sql-memory=.25 \ - --background - ~~~ - - This command primes the node to start, using the following flags: - - Flag | Description - -----|------------ - `--certs-dir` | Specifies the directory where you placed the `ca.crt` file and the `node.crt` and `node.key` files for the node. - `--advertise-host` | Specifies the IP address or hostname to tell other nodes to use. This value must route to an IP address the node is listening on (with `--host` unspecified, the node listens on all IP addresses).

In some networking scenarios, you may need to use `--advertise-host` and/or `--host` differently. For more details, see [Networking](recommended-production-settings.html#networking). - `--join` | Identifies the address and port of 3-5 of the initial nodes of the cluster. These addresses should match the addresses that the target nodes are advertising. - `--cache`
`--max-sql-memory` | Increases the node's cache and temporary SQL memory size to 25% of available system memory to improve read performance and increase capacity for in-memory SQL processing. For more details, see [Cache and SQL Memory Size](recommended-production-settings.html#cache-and-sql-memory-size). - `--background` | Starts the node in the background so you gain control of the terminal to issue more commands. - - When deploying across multiple datacenters, or when there is otherwise high latency between nodes, it is recommended to set `--locality` as well. It is also required by certain enterprise features. For more details, see [Locality](start-a-node.html#locality). - - For other flags not explicitly set, the command uses default values. For example, the node stores data in `--store=cockroach-data`, binds internal and client communication to `--port=26257`, and binds Admin UI HTTP requests to `--http-port=8080`. To set these options manually, see [Start a Node](start-a-node.html). - -5. Repeat these steps for each additional node that you want in your cluster. - -
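To spot-check a node you just started (a sketch, not part of the original include: it assumes the Admin UI's `/health` endpoint and the `<node address>` placeholder), you can query the HTTP port directly:

{% include copy-clipboard.html %}
~~~ shell
# Sketch: a successful response indicates the node's HTTP server is up;
# --cacert lets curl verify the node certificate against your CA.
$ curl --cacert certs/ca.crt https://<node address>:8080/health
~~~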
- -
- -For each initial node of your cluster, complete the following steps: - -{{site.data.alerts.callout_info}}After completing these steps, nodes will not yet be live. They will complete the startup process and join together to form a cluster as soon as the cluster is initialized in the next step.{{site.data.alerts.end}} - -1. SSH to the machine where you want the node to run. Ensure you are logged in as the `root` user. - -2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, and extract the binary: - - {% include copy-clipboard.html %} - ~~~ shell - $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \ - | tar -xz - ~~~ - -3. Copy the binary into the `PATH`: - - {% include copy-clipboard.html %} - ~~~ shell - $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/ - ~~~ - - If you get a permissions error, prefix the command with `sudo`. - -4. Create the Cockroach directory: - - {% include copy-clipboard.html %} - ~~~ shell - $ mkdir /var/lib/cockroach - ~~~ - -5. Create a Unix user named `cockroach`: - - {% include copy-clipboard.html %} - ~~~ shell - $ useradd cockroach - ~~~ - -6. Move the `certs` directory to the `cockroach` directory. - - {% include copy-clipboard.html %} - ~~~ shell - $ mv certs /var/lib/cockroach/ - ~~~ - -7. Change the ownership of `Cockroach` directory to the user `cockroach`: - - {% include copy-clipboard.html %} - ~~~ shell - $ chown -R cockroach.cockroach /var/lib/cockroach - ~~~ - -8. Download the [sample configuration template](https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/{{ page.version.version }}/prod-deployment/securecockroachdb.service) and save the file in the `/etc/systemd/system/` directory: - - {% include copy-clipboard.html %} - ~~~ shell - $ wget -qO- https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/{{ page.version.version }}/prod-deployment/securecockroachdb.service - ~~~ - - Alternatively, you can create the file yourself and copy the script into it: - - {% include copy-clipboard.html %} - ~~~ shell - {% include {{ page.version.version }}/prod-deployment/securecockroachdb.service %} - ~~~ - -9. In the sample configuration template, specify values for the following flags: - - Flag | Description - -----|------------ - `--advertise-host` | Specifies the IP address or hostname to tell other nodes to use. This value must route to an IP address the node is listening on (with `--host` unspecified, the node listens on all IP addresses).

In some networking scenarios, you may need to use `--advertise-host` and/or `--host` differently. For more details, see [Networking](recommended-production-settings.html#networking). - `--join` | Identifies the address and port of 3-5 of the initial nodes of the cluster. These addresses should match the addresses that the target nodes are advertising. - - When deploying across multiple datacenters, or when there is otherwise high latency between nodes, it is recommended to set `--locality` as well. It is also required by certain enterprise features. For more details, see [Locality](start-a-node.html#locality). - - For other flags not explicitly set, the command uses default values. For example, the node stores data in `--store=cockroach-data`, binds internal and client communication to `--port=26257`, and binds Admin UI HTTP requests to `--http-port=8080`. To set these options manually, see [Start a Node](start-a-node.html). - -10. Start the CockroachDB cluster: - - {% include copy-clipboard.html %} - ~~~ shell - $ systemctl start securecockroachdb - ~~~ - -11. Repeat these steps for each additional node that you want in your cluster. - -{{site.data.alerts.callout_info}} -`systemd` handles node restarts in case of node failure. To stop a node without `systemd` restarting it, run `systemctl stop securecockroachdb`. -{{site.data.alerts.end}} - -
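When a node runs under systemd, a hedged way to check on it (assuming your distribution routes unit output to the journal; with `StandardOutput=syslog` it may also appear in your syslog files) is:

{% include copy-clipboard.html %}
~~~ shell
# Sketch: confirm the unit is active, then tail its recent output.
$ systemctl status securecockroachdb
$ journalctl -u securecockroachdb --since "10 minutes ago"
~~~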
diff --git a/src/current/_includes/v2.0/prod-deployment/secure-test-cluster.md b/src/current/_includes/v2.0/prod-deployment/secure-test-cluster.md deleted file mode 100644 index 7b897f362a5..00000000000 --- a/src/current/_includes/v2.0/prod-deployment/secure-test-cluster.md +++ /dev/null @@ -1,55 +0,0 @@ -CockroachDB replicates and distributes data for you behind the scenes and uses a [Gossip protocol](https://en.wikipedia.org/wiki/Gossip_protocol) to enable each node to locate data across the cluster. - -To test this, use the [built-in SQL client](use-the-built-in-sql-client.html) locally as follows: - -1. On your local machine, launch the built-in SQL client: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach sql --certs-dir=certs --host=
- ~~~ - - This command requires the following flags: - - Flag | Description - -----|------------ - `--certs-dir` | Specifies the directory where you placed the `ca.crt` file and the `client.root.crt` and `client.root.key` files for the `root` user. - `--host` | Specifies the address of any node in the cluster. - -2. Create a `securenodetest` database: - - {% include copy-clipboard.html %} - ~~~ sql - > CREATE DATABASE securenodetest; - ~~~ - -3. Use `\q` or **CTRL-C** to exit the SQL shell. - -4. Launch the built-in SQL client against a different node: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach sql --certs-dir=certs --host=
- ~~~ - -5. View the cluster's databases, which will include `securenodetest`: - - {% include copy-clipboard.html %} - ~~~ sql - > SHOW DATABASES; - ~~~ - - ~~~ - +--------------------+ - | Database | - +--------------------+ - | crdb_internal | - | information_schema | - | securenodetest | - | pg_catalog | - | system | - +--------------------+ - (5 rows) - ~~~ - -6. Use `\q` or **CTRL-C** to exit the SQL shell. diff --git a/src/current/_includes/v2.0/prod-deployment/secure-test-load-balancing.md b/src/current/_includes/v2.0/prod-deployment/secure-test-load-balancing.md deleted file mode 100644 index dad1a55e835..00000000000 --- a/src/current/_includes/v2.0/prod-deployment/secure-test-load-balancing.md +++ /dev/null @@ -1,41 +0,0 @@ -CockroachDB offers a pre-built `workload` binary for Linux that includes several load generators for simulating client traffic against your cluster. This step features CockroachDB's version of the [TPC-C](http://www.tpc.org/tpcc/) workload. - -{{site.data.alerts.callout_success}}For comprehensive guidance on benchmarking CockroachDB with TPC-C, see our Performance Benchmarking white paper.{{site.data.alerts.end}} - -1. SSH to the machine where you want to run the sample TPC-C workload. - - This should be a machine that is not running a CockroachDB node, and it should already have a `certs` directory containing `ca.crt`, `client.root.crt`, and `client.root.key` files. - -2. Download `workload` and make it executable: - - {% include copy-clipboard.html %} - ~~~ shell - $ wget https://edge-binaries.cockroachdb.com/cockroach/workload.LATEST ; chmod 755 workload.LATEST - ~~~ - -3. Rename and copy `workload` into the `PATH`: - - {% include copy-clipboard.html %} - ~~~ shell - $ cp -i workload.LATEST /usr/local/bin/workload - ~~~ - -4. Start the TPC-C workload, pointing it at the IP address of the load balancer and the location of the `ca.crt`, `client.root.crt`, and `client.root.key` files: - - {% include copy-clipboard.html %} - ~~~ shell - $ workload run tpcc \ - --drop \ - --init \ - --duration=20m \ - --tolerate-errors \ - "postgresql://root@<load balancer address>:26257?sslmode=verify-full&sslrootcert=certs/ca.crt&sslcert=certs/client.root.crt&sslkey=certs/client.root.key" - ~~~ - - For more `tpcc` options, use `workload run tpcc --help`. For details about other load generators included in `workload`, use `workload run --help`. - -5. To monitor the load generator's progress, open the [Admin UI](admin-ui-access-and-navigate.html) by pointing a browser to the address in the `admin` field in the standard output of any node on startup. - - Since the load generator is pointed at the load balancer, the connections will be evenly distributed across nodes. To verify this, click **Metrics** on the left, select the **SQL** dashboard, and then check the **SQL Connections** graph. You can use the **Graph** menu to filter the graph for specific nodes. 
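As a quick sanity check that the workload actually wrote data (a sketch, not part of the original include: `<address of any node>` is a placeholder, and `tpcc` is the database the generator creates by default):

{% include copy-clipboard.html %}
~~~ shell
# Sketch: list the tables the TPC-C generator created.
$ cockroach sql --certs-dir=certs --host=<address of any node> \
  -e "SHOW TABLES FROM tpcc;"
~~~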
diff --git a/src/current/_includes/v2.0/prod-deployment/securecockroachdb.service b/src/current/_includes/v2.0/prod-deployment/securecockroachdb.service deleted file mode 100644 index 7ab88217783..00000000000 --- a/src/current/_includes/v2.0/prod-deployment/securecockroachdb.service +++ /dev/null @@ -1,16 +0,0 @@ -[Unit] -Description=Cockroach Database cluster node -Requires=network.target -[Service] -Type=notify -WorkingDirectory=/var/lib/cockroach -ExecStart=/usr/local/bin/cockroach start --certs-dir=certs --advertise-host= --join=:26257,:26257,:26257 --cache=.25 --max-sql-memory=.25 -TimeoutStopSec=60 -Restart=always -RestartSec=10 -StandardOutput=syslog -StandardError=syslog -SyslogIdentifier=cockroach -User=cockroach -[Install] -WantedBy=default.target diff --git a/src/current/_includes/v2.0/prod-deployment/synchronize-clocks.md b/src/current/_includes/v2.0/prod-deployment/synchronize-clocks.md deleted file mode 100644 index 5257e7a9640..00000000000 --- a/src/current/_includes/v2.0/prod-deployment/synchronize-clocks.md +++ /dev/null @@ -1,173 +0,0 @@ -CockroachDB requires moderate levels of [clock synchronization](recommended-production-settings.html#clock-synchronization) to preserve data consistency. For this reason, when a node detects that its clock is out of sync with at least half of the other nodes in the cluster by 80% of the maximum offset allowed (500ms by default), it spontaneously shuts down. This avoids the risk of consistency anomalies, but it's best to prevent clocks from drifting too far in the first place by running clock synchronization software on each node. - -{% if page.title contains "Digital Ocean" or page.title contains "On-Premises" %} - -[`ntpd`](http://doc.ntp.org/) should keep offsets in the single-digit milliseconds, so that software is featured here, but other methods of clock synchronization are suitable as well. - -1. SSH to the first machine. - -2. Disable `timesyncd`, which tends to be active by default on some Linux distributions: - - {% include copy-clipboard.html %} - ~~~ shell - $ sudo timedatectl set-ntp no - ~~~ - - Verify that `timesyncd` is off: - - {% include copy-clipboard.html %} - ~~~ shell - $ timedatectl - ~~~ - - Look for `Network time on: no` or `NTP enabled: no` in the output. - -3. Install the `ntp` package: - - {% include copy-clipboard.html %} - ~~~ shell - $ sudo apt-get install ntp - ~~~ - -4. Stop the NTP daemon: - - {% include copy-clipboard.html %} - ~~~ shell - $ sudo service ntp stop - ~~~ - -5. Sync the machine's clock with Google's NTP service: - - {% include copy-clipboard.html %} - ~~~ shell - $ sudo ntpd -b time.google.com - ~~~ - - To make this change permanent, in the `/etc/ntp.conf` file, remove or comment out any lines starting with `server` or `pool` and add the following lines: - - {% include copy-clipboard.html %} - ~~~ - server time1.google.com iburst - server time2.google.com iburst - server time3.google.com iburst - server time4.google.com iburst - ~~~ - - Restart the NTP daemon: - - {% include copy-clipboard.html %} - ~~~ shell - $ sudo service ntp start - ~~~ - - {{site.data.alerts.callout_info}}We recommend Google's external NTP service because they handle "smearing" the leap second. If you use a different NTP service that doesn't smear the leap second, you must configure client-side smearing manually and do so in the same way on each machine.{{site.data.alerts.end}} - -6. 
Verify that the machine is using a Google NTP server: - - {% include copy-clipboard.html %} - ~~~ shell - $ sudo ntpq -p - ~~~ - - The active NTP server will be marked with an asterisk. - -7. Repeat these steps for each machine where a CockroachDB node will run. - -{% elsif page.title contains "Google" %} - -Compute Engine instances are preconfigured to use [NTP](http://www.ntp.org/), which should keep offsets in the single-digit milliseconds. However, Google can’t predict how external NTP services, such as `pool.ntp.org`, will handle the leap second. Therefore, you should: - -- [Configure each GCE instance to use Google's internal NTP service](https://cloud.google.com/compute/docs/instances/configure-ntp#configure_ntp_for_your_instances). -- If you plan to run a hybrid cluster across GCE and other cloud providers or environments, [configure the non-GCE machines to use Google's external NTP service](deploy-cockroachdb-on-digital-ocean.html#step-2-synchronize-clocks). - -{% elsif page.title contains "AWS" %} - -Amazon provides the [Amazon Time Sync Service](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html), which uses a fleet of satellite-connected and atomic reference clocks in each AWS Region to deliver accurate current time readings. The service also smears the leap second. - -- If you plan to run your entire cluster on AWS, [configure each AWS instance to use the internal Amazon Time Sync Service](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html#configure-amazon-time-service). - - However, if you plan to run a hybrid cluster across AWS and other cloud providers or environments, [configure all machines to use Google's external NTP service](deploy-cockroachdb-on-digital-ocean.html#step-2-synchronize-clocks), which is comparably accurate and also handles "smearing" the leap second. - -{% elsif page.title contains "Azure" %} - -[`ntpd`](http://doc.ntp.org/) should keep offsets in the single-digit milliseconds, so that software is featured here. However, to run `ntpd` properly on Azure VMs, it's necessary to first unbind the Time Synchronization device used by the Hyper-V technology running Azure VMs; this device aims to synchronize time between the VM and its host operating system but has been known to cause problems. - -1. SSH to the first machine. - -2. Find the ID of the Hyper-V Time Synchronization device: - - {% include copy-clipboard.html %} - ~~~ shell - $ curl -O https://raw.githubusercontent.com/torvalds/linux/master/tools/hv/lsvmbus - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ python lsvmbus -vv | grep -w "Time Synchronization" -A 3 - ~~~ - - ~~~ - VMBUS ID 12: Class_ID = {9527e630-d0ae-497b-adce-e80ab0175caf} - [Time Synchronization] - Device_ID = {2dd1ce17-079e-403c-b352-a1921ee207ee} - Sysfs path: /sys/bus/vmbus/devices/2dd1ce17-079e-403c-b352-a1921ee207ee - Rel_ID=12, target_cpu=0 - ~~~ - -3. Unbind the device, using the `Device_ID` from the previous command's output: - - {% include copy-clipboard.html %} - ~~~ shell - $ echo | sudo tee /sys/bus/vmbus/drivers/hv_util/unbind - ~~~ - -4. Install the `ntp` package: - - {% include copy-clipboard.html %} - ~~~ shell - $ sudo apt-get install ntp - ~~~ - -5. Stop the NTP daemon: - - {% include copy-clipboard.html %} - ~~~ shell - $ sudo service ntp stop - ~~~ - -6. 
Sync the machine's clock with Google's NTP service: - - {% include copy-clipboard.html %} - ~~~ shell - $ sudo ntpd -b time.google.com - ~~~ - - To make this change permanent, in the `/etc/ntp.conf` file, remove or comment out any lines starting with `server` or `pool` and add the following lines: - - {% include copy-clipboard.html %} - ~~~ - server time1.google.com iburst - server time2.google.com iburst - server time3.google.com iburst - server time4.google.com iburst - ~~~ - - Restart the NTP daemon: - - {% include copy-clipboard.html %} - ~~~ shell - $ sudo service ntp start - ~~~ - - {{site.data.alerts.callout_info}}We recommend Google's NTP service because they handle "smearing" the leap second. If you use a different NTP service that doesn't smear the leap second, be sure to configure client-side smearing in the same way on each machine.{{site.data.alerts.end}} - -7. Verify that the machine is using a Google NTP server: - - {% include copy-clipboard.html %} - ~~~ shell - $ sudo ntpq -p - ~~~ - - The active NTP server will be marked with an asterisk. - -8. Repeat these steps for each machine where a CockroachDB node will run. - -{% endif %} diff --git a/src/current/_includes/v2.0/prod-deployment/use-cluster.md b/src/current/_includes/v2.0/prod-deployment/use-cluster.md deleted file mode 100644 index fc2224fd384..00000000000 --- a/src/current/_includes/v2.0/prod-deployment/use-cluster.md +++ /dev/null @@ -1,11 +0,0 @@ -Now that your deployment is working, you can: - -1. [Implement your data model](sql-statements.html). -2. [Create users](create-and-manage-users.html) and [grant them privileges](grant.html). -3. [Connect your application](install-client-drivers.html). Be sure to connect your application to the load balancer, not to a CockroachDB node. - -You may also want to adjust the way the cluster replicates data. For example, by default, a multi-node cluster replicates all data 3 times; you can change this replication factor or create additional rules for replicating individual databases, tables, and rows differently. For more information, see [Configure Replication Zones](configure-replication-zones.html). - -{{site.data.alerts.callout_danger}} -When running a cluster of 5 nodes or more, it's safest to [increase the replication factor for important internal data](configure-replication-zones.html#create-a-replication-zone-for-a-system-range) to 5, even if you do not do so for user data. For the cluster as a whole to remain available, the ranges for this internal data must always retain a majority of their replicas. -{{site.data.alerts.end}} diff --git a/src/current/_includes/v2.0/sql/connection-parameters-with-url.md b/src/current/_includes/v2.0/sql/connection-parameters-with-url.md deleted file mode 100644 index 59c24c6450d..00000000000 --- a/src/current/_includes/v2.0/sql/connection-parameters-with-url.md +++ /dev/null @@ -1,2 +0,0 @@ -{% include {{ page.version.version }}/sql/connection-parameters.md %} - `--url` | A [connection URL](connection-parameters.html#connect-using-a-url) to use instead of the other arguments.

**Env Variable:** `COCKROACH_URL`
**Default:** no URL diff --git a/src/current/_includes/v2.0/sql/connection-parameters.md b/src/current/_includes/v2.0/sql/connection-parameters.md deleted file mode 100644 index 2e74255dcc4..00000000000 --- a/src/current/_includes/v2.0/sql/connection-parameters.md +++ /dev/null @@ -1,7 +0,0 @@ -Flag | Description ------|------------ -`--host` | The server host to connect to. This can be the address of any node in the cluster.

**Env Variable:** `COCKROACH_HOST`
**Default:** `localhost` -`--port`
`-p` | The server port to connect to.

**Env Variable:** `COCKROACH_PORT`
**Default:** `26257` -`--user`
`-u` | The [SQL user](create-and-manage-users.html) that will own the client session.

**Env Variable:** `COCKROACH_USER`
**Default:** `root` -`--insecure` | Use an insecure connection.

**Env Variable:** `COCKROACH_INSECURE`
**Default:** `false` -`--certs-dir` | The path to the [certificate directory](create-security-certificates.html) containing the CA and client certificates and client key.

**Env Variable:** `COCKROACH_CERTS_DIR`
**Default:** `${HOME}/.cockroach-certs/` \ No newline at end of file diff --git a/src/current/_includes/v2.0/sql/diagrams/add_column.html b/src/current/_includes/v2.0/sql/diagrams/add_column.html deleted file mode 100644 index f59fd135d0e..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/add_column.html +++ /dev/null @@ -1,52 +0,0 @@ -
- [railroad diagram: ALTER TABLE [IF EXISTS] table_name ADD [COLUMN] [IF NOT EXISTS] column_name typename [col_qualification]]
diff --git a/src/current/_includes/v2.0/sql/diagrams/add_constraint.html b/src/current/_includes/v2.0/sql/diagrams/add_constraint.html deleted file mode 100644 index a8f3b1c9c61..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/add_constraint.html +++ /dev/null @@ -1,38 +0,0 @@ -
- [railroad diagram: ALTER TABLE [IF EXISTS] table_name ADD CONSTRAINT constraint_name constraint_elem]
diff --git a/src/current/_includes/v2.0/sql/diagrams/alter_column.html b/src/current/_includes/v2.0/sql/diagrams/alter_column.html deleted file mode 100644 index 1c77dc193ef..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/alter_column.html +++ /dev/null @@ -1,56 +0,0 @@ -
- [railroad diagram: ALTER TABLE [IF EXISTS] table_name ALTER [COLUMN] column_name {SET DEFAULT a_expr | DROP DEFAULT | DROP NOT NULL}]
diff --git a/src/current/_includes/v2.0/sql/diagrams/alter_sequence_options.html b/src/current/_includes/v2.0/sql/diagrams/alter_sequence_options.html deleted file mode 100644 index ee56ccdaee6..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/alter_sequence_options.html +++ /dev/null @@ -1,63 +0,0 @@ -
- [railroad diagram: ALTER SEQUENCE [IF EXISTS] sequence_name {INCREMENT [BY] integer | MINVALUE integer | MAXVALUE integer | START [WITH] integer | NO {MINVALUE | MAXVALUE | CYCLE} | CYCLE}]
\ No newline at end of file diff --git a/src/current/_includes/v2.0/sql/diagrams/alter_table_partition_by.html b/src/current/_includes/v2.0/sql/diagrams/alter_table_partition_by.html deleted file mode 100644 index 073c8794394..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/alter_table_partition_by.html +++ /dev/null @@ -1,81 +0,0 @@ -
- [railroad diagram: ALTER TABLE [IF EXISTS] table_name PARTITION BY {LIST (name_list) (list_partitions) | RANGE (name_list) (range_partitions) | NOTHING}]
\ No newline at end of file diff --git a/src/current/_includes/v2.0/sql/diagrams/alter_user_password.html b/src/current/_includes/v2.0/sql/diagrams/alter_user_password.html deleted file mode 100644 index 0e014933d1b..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/alter_user_password.html +++ /dev/null @@ -1,31 +0,0 @@ -
- [railroad diagram: ALTER USER [IF EXISTS] name WITH PASSWORD password]
diff --git a/src/current/_includes/v2.0/sql/diagrams/alter_view.html b/src/current/_includes/v2.0/sql/diagrams/alter_view.html deleted file mode 100644 index 2e481fa60aa..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/alter_view.html +++ /dev/null @@ -1,36 +0,0 @@ -
- [railroad diagram: ALTER VIEW [IF EXISTS] view_name RENAME TO name]
\ No newline at end of file diff --git a/src/current/_includes/v2.0/sql/diagrams/backup.html b/src/current/_includes/v2.0/sql/diagrams/backup.html deleted file mode 100644 index 1974cb5bcb0..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/backup.html +++ /dev/null @@ -1,73 +0,0 @@ -
- [railroad diagram: BACKUP {TABLE table_pattern [, ...] | DATABASE name [, ...]} TO string_or_placeholder [AS OF SYSTEM TIME timestamp] [INCREMENTAL FROM full_backup_location [, incremental_backup_location ...]] [WITH kv_option_list]]
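A hedged usage sketch of the grammar above (the database name and storage URL are hypothetical, and `BACKUP` is an enterprise statement):

{% include copy-clipboard.html %}
~~~ shell
# Sketch: full backup of one database to a hypothetical cloud bucket.
$ cockroach sql --certs-dir=certs --host=<address of any node> \
  -e "BACKUP DATABASE bank TO 'gs://acme-backups/bank';"
~~~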
\ No newline at end of file diff --git a/src/current/_includes/v2.0/sql/diagrams/begin_transaction.html b/src/current/_includes/v2.0/sql/diagrams/begin_transaction.html deleted file mode 100644 index ee2372d9861..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/begin_transaction.html +++ /dev/null @@ -1,47 +0,0 @@ -
- [railroad diagram: BEGIN [TRANSACTION] [{ISOLATION LEVEL {SNAPSHOT | SERIALIZABLE} | PRIORITY {LOW | NORMAL | HIGH} | READ {ONLY | WRITE}} [, ...]]]
\ No newline at end of file diff --git a/src/current/_includes/v2.0/sql/diagrams/cancel_job.html b/src/current/_includes/v2.0/sql/diagrams/cancel_job.html deleted file mode 100644 index f61ff0cfa79..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/cancel_job.html +++ /dev/null @@ -1,19 +0,0 @@ -
- [railroad diagram: CANCEL JOB job_id]
\ No newline at end of file diff --git a/src/current/_includes/v2.0/sql/diagrams/cancel_query.html b/src/current/_includes/v2.0/sql/diagrams/cancel_query.html deleted file mode 100644 index 6cc33a38466..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/cancel_query.html +++ /dev/null @@ -1,19 +0,0 @@ -
- [railroad diagram: CANCEL QUERY query_id]
\ No newline at end of file diff --git a/src/current/_includes/v2.0/sql/diagrams/check_column_level.html b/src/current/_includes/v2.0/sql/diagrams/check_column_level.html deleted file mode 100644 index 59eec3e3c15..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/check_column_level.html +++ /dev/null @@ -1,70 +0,0 @@ -
- [railroad diagram: CREATE TABLE table_name (column_name column_type CHECK (check_expr) [column_constraints] [, column_def ...] [table_constraints])]
\ No newline at end of file diff --git a/src/current/_includes/v2.0/sql/diagrams/check_table_level.html b/src/current/_includes/v2.0/sql/diagrams/check_table_level.html deleted file mode 100644 index 6066d637220..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/check_table_level.html +++ /dev/null @@ -1,60 +0,0 @@ -
- [railroad diagram: CREATE TABLE table_name (column_def [, ...], CONSTRAINT name CHECK (check_expr) [table_constraints])]
\ No newline at end of file diff --git a/src/current/_includes/v2.0/sql/diagrams/col_qualification.html b/src/current/_includes/v2.0/sql/diagrams/col_qualification.html deleted file mode 100644 index 8b9b2d4fa1d..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/col_qualification.html +++ /dev/null @@ -1,132 +0,0 @@ -
- [railroad diagram: col_qualification: CONSTRAINT constraint_name | NOT NULL | UNIQUE | PRIMARY KEY | CHECK (a_expr) | DEFAULT b_expr | REFERENCES table_name [opt_name_parens] [reference_actions] | AS (a_expr) STORED | COLLATE collation_name | FAMILY family_name | CREATE FAMILY [family_name] | CREATE IF NOT EXISTS FAMILY family_name]
\ No newline at end of file diff --git a/src/current/_includes/v2.0/sql/diagrams/column_def.html b/src/current/_includes/v2.0/sql/diagrams/column_def.html deleted file mode 100644 index 284e8dc5838..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/column_def.html +++ /dev/null @@ -1,23 +0,0 @@ - \ No newline at end of file diff --git a/src/current/_includes/v2.0/sql/diagrams/commit_transaction.html b/src/current/_includes/v2.0/sql/diagrams/commit_transaction.html deleted file mode 100644 index 12914f3e1cb..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/commit_transaction.html +++ /dev/null @@ -1,17 +0,0 @@ -
- [railroad diagram: {COMMIT | END} [TRANSACTION]]
\ No newline at end of file diff --git a/src/current/_includes/v2.0/sql/diagrams/create_database.html b/src/current/_includes/v2.0/sql/diagrams/create_database.html deleted file mode 100644 index c621b08e138..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/create_database.html +++ /dev/null @@ -1,42 +0,0 @@ -
- [railroad diagram: CREATE DATABASE [IF NOT EXISTS] name [WITH ENCODING [=] encoding]]
\ No newline at end of file diff --git a/src/current/_includes/v2.0/sql/diagrams/create_index.html b/src/current/_includes/v2.0/sql/diagrams/create_index.html deleted file mode 100644 index dc0479dab14..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/create_index.html +++ /dev/null @@ -1,91 +0,0 @@ -
- [railroad diagram: CREATE [UNIQUE] INDEX [opt_index_name | IF NOT EXISTS index_name] ON table_name (column_name [ASC | DESC] [, ...]) [{COVERING | STORING} (name_list)] [opt_interleave] [opt_partition_by]]
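A hedged usage sketch of the grammar above (table and column names are hypothetical):

{% include copy-clipboard.html %}
~~~ shell
# Sketch: a two-column secondary index that also stores email, so
# queries reading it can avoid a lookup in the primary index.
$ cockroach sql --certs-dir=certs --host=<address of any node> \
  -e "CREATE INDEX ON customers (last_name, first_name) STORING (email);"
~~~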
\ No newline at end of file diff --git a/src/current/_includes/v2.0/sql/diagrams/create_inverted_index.html b/src/current/_includes/v2.0/sql/diagrams/create_inverted_index.html deleted file mode 100644 index 266281c12c1..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/create_inverted_index.html +++ /dev/null @@ -1,64 +0,0 @@ -
- [railroad diagram: CREATE INVERTED INDEX [opt_index_name | IF NOT EXISTS index_name] ON table_name (column_name [ASC | DESC] [, ...])]
\ No newline at end of file diff --git a/src/current/_includes/v2.0/sql/diagrams/create_role.html b/src/current/_includes/v2.0/sql/diagrams/create_role.html deleted file mode 100644 index 3c9c43dedf3..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/create_role.html +++ /dev/null @@ -1,28 +0,0 @@ -
- [railroad diagram: CREATE ROLE [IF NOT EXISTS] name]
\ No newline at end of file diff --git a/src/current/_includes/v2.0/sql/diagrams/create_sequence.html b/src/current/_includes/v2.0/sql/diagrams/create_sequence.html deleted file mode 100644 index 4363cc0b087..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/create_sequence.html +++ /dev/null @@ -1,58 +0,0 @@ -
- [railroad diagram: CREATE SEQUENCE [IF NOT EXISTS] sequence_name [NO {CYCLE | MINVALUE | MAXVALUE} | INCREMENT [BY] integer | MINVALUE integer | MAXVALUE integer | START [WITH] integer]]
diff --git a/src/current/_includes/v2.0/sql/diagrams/create_table.html b/src/current/_includes/v2.0/sql/diagrams/create_table.html deleted file mode 100644 index 456c9f64ab7..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/create_table.html +++ /dev/null @@ -1,67 +0,0 @@ - \ No newline at end of file diff --git a/src/current/_includes/v2.0/sql/diagrams/create_table_as.html b/src/current/_includes/v2.0/sql/diagrams/create_table_as.html deleted file mode 100644 index dbf1028099a..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/create_table_as.html +++ /dev/null @@ -1,50 +0,0 @@ -
- [railroad diagram: CREATE TABLE [IF NOT EXISTS] table_name [(name [, ...])] AS select_stmt]
\ No newline at end of file diff --git a/src/current/_includes/v2.0/sql/diagrams/create_user.html b/src/current/_includes/v2.0/sql/diagrams/create_user.html deleted file mode 100644 index 1dc78bb289a..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/create_user.html +++ /dev/null @@ -1,39 +0,0 @@ -
- [railroad diagram: CREATE USER [IF NOT EXISTS] name [WITH PASSWORD password]]
\ No newline at end of file diff --git a/src/current/_includes/v2.0/sql/diagrams/create_view.html b/src/current/_includes/v2.0/sql/diagrams/create_view.html deleted file mode 100644 index 044db4c888c..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/create_view.html +++ /dev/null @@ -1,38 +0,0 @@ -
- [railroad diagram: CREATE VIEW view_name [(name_list)] AS select_stmt]
\ No newline at end of file diff --git a/src/current/_includes/v2.0/sql/diagrams/default_value_column_level.html b/src/current/_includes/v2.0/sql/diagrams/default_value_column_level.html deleted file mode 100644 index 0ba9afca9c4..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/default_value_column_level.html +++ /dev/null @@ -1,64 +0,0 @@ -
- [railroad diagram: CREATE TABLE table_name (column_name column_type DEFAULT default_value [column_constraints] [, column_def ...] [table_constraints])]
\ No newline at end of file diff --git a/src/current/_includes/v2.0/sql/diagrams/delete.html b/src/current/_includes/v2.0/sql/diagrams/delete.html deleted file mode 100644 index d79cbd6e082..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/delete.html +++ /dev/null @@ -1,66 +0,0 @@ -
- [railroad diagram: [WITH common_table_expr [, ...]] DELETE FROM table_name [AS table_alias_name] [WHERE a_expr] [sort_clause] [limit_clause] [RETURNING {target_list | NOTHING}]]
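A hedged usage sketch of the grammar above (table and column names are hypothetical):

{% include copy-clipboard.html %}
~~~ shell
# Sketch: delete expired rows and return their IDs in one statement.
$ cockroach sql --certs-dir=certs --host=<address of any node> \
  -e "DELETE FROM sessions WHERE expires_at < now() RETURNING id;"
~~~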
diff --git a/src/current/_includes/v2.0/sql/diagrams/drop_column.html b/src/current/_includes/v2.0/sql/diagrams/drop_column.html deleted file mode 100644 index 384f5219d9d..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/drop_column.html +++ /dev/null @@ -1,43 +0,0 @@ -
- [railroad diagram: ALTER TABLE [IF EXISTS] table_name DROP [COLUMN] [IF EXISTS] name [CASCADE | RESTRICT]]
diff --git a/src/current/_includes/v2.0/sql/diagrams/drop_constraint.html b/src/current/_includes/v2.0/sql/diagrams/drop_constraint.html deleted file mode 100644 index 77cea230ccd..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/drop_constraint.html +++ /dev/null @@ -1,45 +0,0 @@ -
- [railroad diagram: ALTER TABLE [IF EXISTS] table_name DROP CONSTRAINT [IF EXISTS] name [CASCADE | RESTRICT]]
diff --git a/src/current/_includes/v2.0/sql/diagrams/drop_database.html b/src/current/_includes/v2.0/sql/diagrams/drop_database.html deleted file mode 100644 index 038eb0befc1..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/drop_database.html +++ /dev/null @@ -1,31 +0,0 @@ -
- [railroad diagram: DROP DATABASE [IF EXISTS] name [CASCADE | RESTRICT]]
\ No newline at end of file diff --git a/src/current/_includes/v2.0/sql/diagrams/drop_index.html b/src/current/_includes/v2.0/sql/diagrams/drop_index.html deleted file mode 100644 index 2dd8b3636ee..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/drop_index.html +++ /dev/null @@ -1,31 +0,0 @@ -
- [railroad diagram: DROP INDEX [IF EXISTS] table_name@index_name [CASCADE | RESTRICT]]
diff --git a/src/current/_includes/v2.0/sql/diagrams/drop_role.html b/src/current/_includes/v2.0/sql/diagrams/drop_role.html deleted file mode 100644 index 0037ebf56ce..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/drop_role.html +++ /dev/null @@ -1,25 +0,0 @@ -
- [railroad diagram: DROP ROLE [IF EXISTS] name]
\ No newline at end of file diff --git a/src/current/_includes/v2.0/sql/diagrams/drop_sequence.html b/src/current/_includes/v2.0/sql/diagrams/drop_sequence.html deleted file mode 100644 index 6507f7dec30..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/drop_sequence.html +++ /dev/null @@ -1,34 +0,0 @@ -
- [railroad diagram: DROP SEQUENCE [IF EXISTS] sequence_name [, ...] [CASCADE | RESTRICT]]
\ No newline at end of file diff --git a/src/current/_includes/v2.0/sql/diagrams/drop_table.html b/src/current/_includes/v2.0/sql/diagrams/drop_table.html deleted file mode 100644 index 18ad4fdd502..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/drop_table.html +++ /dev/null @@ -1,34 +0,0 @@ -
- [railroad diagram: DROP TABLE [IF EXISTS] table_name [, ...] [CASCADE | RESTRICT]]
\ No newline at end of file diff --git a/src/current/_includes/v2.0/sql/diagrams/drop_user.html b/src/current/_includes/v2.0/sql/diagrams/drop_user.html deleted file mode 100644 index 57c3db991b9..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/drop_user.html +++ /dev/null @@ -1,28 +0,0 @@ -
- [railroad diagram: DROP USER [IF EXISTS] user_name [, ...]]
\ No newline at end of file diff --git a/src/current/_includes/v2.0/sql/diagrams/drop_view.html b/src/current/_includes/v2.0/sql/diagrams/drop_view.html deleted file mode 100644 index d95db116000..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/drop_view.html +++ /dev/null @@ -1,34 +0,0 @@ -
- [railroad diagram: DROP VIEW [IF EXISTS] table_name [, ...] [CASCADE | RESTRICT]]
\ No newline at end of file diff --git a/src/current/_includes/v2.0/sql/diagrams/experimental_audit.html b/src/current/_includes/v2.0/sql/diagrams/experimental_audit.html deleted file mode 100644 index 46cc527074a..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/experimental_audit.html +++ /dev/null @@ -1,39 +0,0 @@ -
- [railroad diagram: ALTER TABLE [IF EXISTS] table_name EXPERIMENTAL_AUDIT SET {READ WRITE | OFF}]
diff --git a/src/current/_includes/v2.0/sql/diagrams/explain.html b/src/current/_includes/v2.0/sql/diagrams/explain.html deleted file mode 100644 index 89ca35dd0fa..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/explain.html +++ /dev/null @@ -1,40 +0,0 @@ -
- [railroad diagram: EXPLAIN [({EXPRS | METADATA | QUALIFY | VERBOSE | TYPES} [, ...])] explainable_stmt]
\ No newline at end of file diff --git a/src/current/_includes/v2.0/sql/diagrams/family_def.html b/src/current/_includes/v2.0/sql/diagrams/family_def.html deleted file mode 100644 index 1dda01d9e79..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/family_def.html +++ /dev/null @@ -1,30 +0,0 @@ -
- [railroad diagram: FAMILY [opt_family_name] (name [, ...])]
\ No newline at end of file diff --git a/src/current/_includes/v2.0/sql/diagrams/foreign_key_column_level.html b/src/current/_includes/v2.0/sql/diagrams/foreign_key_column_level.html deleted file mode 100644 index a963e586425..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/foreign_key_column_level.html +++ /dev/null @@ -1,75 +0,0 @@ -
- [railroad diagram: CREATE TABLE table_name (column_name column_type REFERENCES parent_table (ref_column_name) [column_constraints] [, column_def ...] [table_constraints])]
\ No newline at end of file diff --git a/src/current/_includes/v2.0/sql/diagrams/foreign_key_table_level.html b/src/current/_includes/v2.0/sql/diagrams/foreign_key_table_level.html deleted file mode 100644 index 2eb3498af46..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/foreign_key_table_level.html +++ /dev/null @@ -1,85 +0,0 @@ -
- [railroad diagram: CREATE TABLE table_name (column_def [, ...], CONSTRAINT name FOREIGN KEY (fk_column_name [, ...]) REFERENCES parent_table (ref_column_name [, ...]) [table_constraints])]
\ No newline at end of file diff --git a/src/current/_includes/v2.0/sql/diagrams/grant_privileges.html b/src/current/_includes/v2.0/sql/diagrams/grant_privileges.html deleted file mode 100644 index da7f44e5160..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/grant_privileges.html +++ /dev/null @@ -1,74 +0,0 @@ -
[railroad diagram] GRANT ( ALL | ( CREATE | GRANT | SELECT | DROP | INSERT | DELETE | UPDATE ) [ , ... ] ) ON ( TABLE table_name [ , ... ] | DATABASE database_name [ , ... ] ) TO user_name [ , ... ]
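A usage sketch with hypothetical table and user names:

```sql
-- Grant read and write privileges on one table to a user.
GRANT SELECT, INSERT, UPDATE ON TABLE customers TO maxroach;
```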
diff --git a/src/current/_includes/v2.0/sql/diagrams/grant_roles.html b/src/current/_includes/v2.0/sql/diagrams/grant_roles.html deleted file mode 100644 index f8eee0dc766..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/grant_roles.html +++ /dev/null @@ -1,34 +0,0 @@ -
[railroad diagram] GRANT role_name [ , ... ] TO user_name [ , ... ] [ WITH ADMIN OPTION ]
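A usage sketch with hypothetical role and user names:

```sql
-- Add a user to a role and allow them to administer its membership.
GRANT dev_ops TO maxroach WITH ADMIN OPTION;
```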
diff --git a/src/current/_includes/v2.0/sql/diagrams/import.html b/src/current/_includes/v2.0/sql/diagrams/import.html deleted file mode 100644 index 4528fe2a3e2..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/import.html +++ /dev/null @@ -1,72 +0,0 @@ -
[railroad diagram] IMPORT TABLE table_name ( CREATE USING create_table_file | '(' table_elem_list ')' ) CSV DATA '(' file_to_import [ , ... ] ')' [ WITH kv_option [ , ... ] ]
\ No newline at end of file diff --git a/src/current/_includes/v2.0/sql/diagrams/index_def.html b/src/current/_includes/v2.0/sql/diagrams/index_def.html deleted file mode 100644 index 7808b2e4800..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/index_def.html +++ /dev/null @@ -1,85 +0,0 @@ -
[railroad diagram] [ UNIQUE ] INDEX [ opt_index_name ] '(' index_elem [ , ... ] ')' [ ( COVERING | STORING ) '(' name_list ')' ] [ opt_interleave ] [ opt_partition_by ] | INVERTED INDEX [ name ] '(' index_elem [ , ... ] ')'
\ No newline at end of file diff --git a/src/current/_includes/v2.0/sql/diagrams/insert.html b/src/current/_includes/v2.0/sql/diagrams/insert.html deleted file mode 100644 index 81576677379..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/insert.html +++ /dev/null @@ -1,81 +0,0 @@ -
[railroad diagram] [ WITH common_table_expr [ , ... ] ] INSERT INTO table_name [ AS table_alias_name ] [ '(' column_name [ , ... ] ')' ] ( select_stmt | DEFAULT VALUES ) [ on_conflict ] [ RETURNING ( target_elem [ , ... ] | NOTHING ) ]
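A usage sketch with a hypothetical table:

```sql
-- Insert a row and return the values actually written.
INSERT INTO accounts (id, balance) VALUES (1, 250.00) RETURNING id, balance;
```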
diff --git a/src/current/_includes/v2.0/sql/diagrams/interleave.html b/src/current/_includes/v2.0/sql/diagrams/interleave.html deleted file mode 100644 index 09bb9c35b5b..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/interleave.html +++ /dev/null @@ -1,69 +0,0 @@ -
[railroad diagram] CREATE TABLE [ IF NOT EXISTS ] table_name '(' table_definition ')' INTERLEAVE IN PARENT table_name '(' name_list ')' [ opt_partition_by ]
\ No newline at end of file diff --git a/src/current/_includes/v2.0/sql/diagrams/joined_table.html b/src/current/_includes/v2.0/sql/diagrams/joined_table.html deleted file mode 100644 index 68b66314702..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/joined_table.html +++ /dev/null @@ -1,100 +0,0 @@ -
[railroad diagram] '(' joined_table ')' | table_ref CROSS JOIN table_ref | table_ref NATURAL [ FULL | LEFT | RIGHT ] [ OUTER | INNER ] JOIN table_ref | table_ref [ FULL | LEFT | RIGHT ] [ OUTER | INNER ] JOIN table_ref ( USING '(' name [ , ... ] ')' | ON a_expr )
diff --git a/src/current/_includes/v2.0/sql/diagrams/limit_clause.html b/src/current/_includes/v2.0/sql/diagrams/limit_clause.html deleted file mode 100644 index 98d5114a88e..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/limit_clause.html +++ /dev/null @@ -1,38 +0,0 @@ -
[railroad diagram] LIMIT count | FETCH ( FIRST | NEXT ) [ count ] ( ROW | ROWS ) ONLY
diff --git a/src/current/_includes/v2.0/sql/diagrams/not_null_column_level.html b/src/current/_includes/v2.0/sql/diagrams/not_null_column_level.html deleted file mode 100644 index 52e17e9d57d..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/not_null_column_level.html +++ /dev/null @@ -1,59 +0,0 @@ -
[railroad diagram] CREATE TABLE table_name '(' column_name column_type NOT NULL [ column_constraints ] [ , column_def ... ] [ , table_constraints ] ')'
\ No newline at end of file diff --git a/src/current/_includes/v2.0/sql/diagrams/offset_clause.html b/src/current/_includes/v2.0/sql/diagrams/offset_clause.html deleted file mode 100644 index d6dc4873ee5..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/offset_clause.html +++ /dev/null @@ -1,26 +0,0 @@ -
[railroad diagram] OFFSET a_expr | OFFSET c_expr ( ROW | ROWS )
diff --git a/src/current/_includes/v2.0/sql/diagrams/on_conflict.html b/src/current/_includes/v2.0/sql/diagrams/on_conflict.html deleted file mode 100644 index 7a64a45547b..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/on_conflict.html +++ /dev/null @@ -1,107 +0,0 @@ -
[railroad diagram] ON CONFLICT [ '(' name [ , ... ] ')' [ WHERE a_expr ] ] DO ( UPDATE SET ( column_name '=' a_expr | '(' column_name [ , ... ] ')' '=' '(' ( select_stmt | a_expr [ , ... ] ) ')' ) [ , ... ] [ WHERE a_expr ] | NOTHING )
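A usage sketch of the upsert path (hypothetical table); `excluded` refers to the row proposed for insertion:

```sql
INSERT INTO accounts (id, balance)
  VALUES (1, 500.00)
  ON CONFLICT (id) DO UPDATE SET balance = excluded.balance;
```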
diff --git a/src/current/_includes/v2.0/sql/diagrams/opt_interleave.html b/src/current/_includes/v2.0/sql/diagrams/opt_interleave.html deleted file mode 100644 index 5825c01b310..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/opt_interleave.html +++ /dev/null @@ -1,33 +0,0 @@ -
[railroad diagram] INTERLEAVE IN PARENT table_name '(' name_list ')'
\ No newline at end of file diff --git a/src/current/_includes/v2.0/sql/diagrams/pause_job.html b/src/current/_includes/v2.0/sql/diagrams/pause_job.html deleted file mode 100644 index 2726666933a..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/pause_job.html +++ /dev/null @@ -1,19 +0,0 @@ -
[railroad diagram] PAUSE JOB job_id
\ No newline at end of file diff --git a/src/current/_includes/v2.0/sql/diagrams/primary_key_column_level.html b/src/current/_includes/v2.0/sql/diagrams/primary_key_column_level.html deleted file mode 100644 index f938b641654..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/primary_key_column_level.html +++ /dev/null @@ -1,59 +0,0 @@ -
[railroad diagram] CREATE TABLE table_name '(' column_name column_type PRIMARY KEY [ column_constraints ] [ , column_def ... ] [ , table_constraints ] ')'
\ No newline at end of file diff --git a/src/current/_includes/v2.0/sql/diagrams/primary_key_table_level.html b/src/current/_includes/v2.0/sql/diagrams/primary_key_table_level.html deleted file mode 100644 index db8ece49c39..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/primary_key_table_level.html +++ /dev/null @@ -1,63 +0,0 @@ -
[railroad diagram] CREATE TABLE table_name '(' column_def [ , ... ] , [ CONSTRAINT name ] PRIMARY KEY '(' column_name [ , ... ] ')' [ , table_constraints ] ')'
\ No newline at end of file diff --git a/src/current/_includes/v2.0/sql/diagrams/release_savepoint.html b/src/current/_includes/v2.0/sql/diagrams/release_savepoint.html deleted file mode 100644 index 194ce6573ca..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/release_savepoint.html +++ /dev/null @@ -1,19 +0,0 @@ -
[railroad diagram] RELEASE SAVEPOINT name
\ No newline at end of file diff --git a/src/current/_includes/v2.0/sql/diagrams/rename_column.html b/src/current/_includes/v2.0/sql/diagrams/rename_column.html deleted file mode 100644 index 2d275bc9de7..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/rename_column.html +++ /dev/null @@ -1,44 +0,0 @@ -
[railroad diagram] ALTER TABLE [ IF EXISTS ] table_name RENAME COLUMN current_name TO name
\ No newline at end of file diff --git a/src/current/_includes/v2.0/sql/diagrams/rename_database.html b/src/current/_includes/v2.0/sql/diagrams/rename_database.html deleted file mode 100644 index ce9ddd3ddba..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/rename_database.html +++ /dev/null @@ -1,30 +0,0 @@ -
[railroad diagram] ALTER DATABASE name RENAME TO name
\ No newline at end of file diff --git a/src/current/_includes/v2.0/sql/diagrams/rename_index.html b/src/current/_includes/v2.0/sql/diagrams/rename_index.html deleted file mode 100644 index 82ed2e90255..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/rename_index.html +++ /dev/null @@ -1,33 +0,0 @@ -
[railroad diagram] ALTER INDEX [ IF EXISTS ] table_name '@' index_name RENAME TO index_name
diff --git a/src/current/_includes/v2.0/sql/diagrams/rename_sequence.html b/src/current/_includes/v2.0/sql/diagrams/rename_sequence.html deleted file mode 100644 index a564d9db425..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/rename_sequence.html +++ /dev/null @@ -1,36 +0,0 @@ -
[railroad diagram] ALTER SEQUENCE [ IF EXISTS ] current_name RENAME TO new_name
\ No newline at end of file diff --git a/src/current/_includes/v2.0/sql/diagrams/rename_table.html b/src/current/_includes/v2.0/sql/diagrams/rename_table.html deleted file mode 100644 index 316c56482eb..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/rename_table.html +++ /dev/null @@ -1,36 +0,0 @@ -
[railroad diagram] ALTER TABLE [ IF EXISTS ] current_name RENAME TO new_name
\ No newline at end of file diff --git a/src/current/_includes/v2.0/sql/diagrams/reset_csetting.html b/src/current/_includes/v2.0/sql/diagrams/reset_csetting.html deleted file mode 100644 index 49e120ffc69..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/reset_csetting.html +++ /dev/null @@ -1,22 +0,0 @@ -
[railroad diagram] RESET CLUSTER SETTING var_name
\ No newline at end of file diff --git a/src/current/_includes/v2.0/sql/diagrams/reset_session.html b/src/current/_includes/v2.0/sql/diagrams/reset_session.html deleted file mode 100644 index 0a47ec52d49..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/reset_session.html +++ /dev/null @@ -1,19 +0,0 @@ -
[railroad diagram] RESET [ SESSION ] session_var
\ No newline at end of file diff --git a/src/current/_includes/v2.0/sql/diagrams/restore.html b/src/current/_includes/v2.0/sql/diagrams/restore.html deleted file mode 100644 index 4aec1b4819f..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/restore.html +++ /dev/null @@ -1,67 +0,0 @@ -
[railroad diagram] RESTORE ( TABLE table_pattern [ , ... ] | DATABASE database_name [ , ... ] ) FROM full_backup_location [ , incremental_backup_location [ , ... ] ] [ AS OF SYSTEM TIME timestamp ] [ WITH kv_option_list ]
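A usage sketch; the backup location is hypothetical, and RESTORE requires a backup previously taken with BACKUP:

```sql
RESTORE DATABASE bank FROM 'gs://acme-backups/2017-10-26';
```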
diff --git a/src/current/_includes/v2.0/sql/diagrams/resume_job.html b/src/current/_includes/v2.0/sql/diagrams/resume_job.html deleted file mode 100644 index 2aa93c46cb5..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/resume_job.html +++ /dev/null @@ -1,19 +0,0 @@ -
[railroad diagram] RESUME JOB job_id
\ No newline at end of file diff --git a/src/current/_includes/v2.0/sql/diagrams/revoke_privileges.html b/src/current/_includes/v2.0/sql/diagrams/revoke_privileges.html deleted file mode 100644 index a6f9a1dee8e..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/revoke_privileges.html +++ /dev/null @@ -1,74 +0,0 @@ -
[railroad diagram] REVOKE ( ALL | ( CREATE | GRANT | SELECT | DROP | INSERT | DELETE | UPDATE ) [ , ... ] ) ON ( TABLE table_name [ , ... ] | DATABASE database_name [ , ... ] ) FROM user_name [ , ... ]
diff --git a/src/current/_includes/v2.0/sql/diagrams/revoke_roles.html b/src/current/_includes/v2.0/sql/diagrams/revoke_roles.html deleted file mode 100644 index a30aee75474..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/revoke_roles.html +++ /dev/null @@ -1,34 +0,0 @@ -
[railroad diagram] REVOKE [ ADMIN OPTION FOR ] role_name [ , ... ] FROM user_name [ , ... ]
diff --git a/src/current/_includes/v2.0/sql/diagrams/rollback_transaction.html b/src/current/_includes/v2.0/sql/diagrams/rollback_transaction.html deleted file mode 100644 index c34d5d12047..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/rollback_transaction.html +++ /dev/null @@ -1,22 +0,0 @@ -
[railroad diagram] ROLLBACK [ TO SAVEPOINT cockroach_restart ]
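A usage sketch: inside a transaction that declared the special `cockroach_restart` savepoint, a client retries after a retryable error by rolling back to it:

```sql
ROLLBACK TO SAVEPOINT cockroach_restart;
```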
\ No newline at end of file diff --git a/src/current/_includes/v2.0/sql/diagrams/savepoint.html b/src/current/_includes/v2.0/sql/diagrams/savepoint.html deleted file mode 100644 index 9b7dc70608b..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/savepoint.html +++ /dev/null @@ -1,16 +0,0 @@ -
[railroad diagram] SAVEPOINT name
\ No newline at end of file diff --git a/src/current/_includes/v2.0/sql/diagrams/select.html b/src/current/_includes/v2.0/sql/diagrams/select.html deleted file mode 100644 index 9f743234e06..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/select.html +++ /dev/null @@ -1,38 +0,0 @@ - diff --git a/src/current/_includes/v2.0/sql/diagrams/select_clause.html b/src/current/_includes/v2.0/sql/diagrams/select_clause.html deleted file mode 100644 index 88dc35507df..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/select_clause.html +++ /dev/null @@ -1,53 +0,0 @@ - diff --git a/src/current/_includes/v2.0/sql/diagrams/set_cluster_setting.html b/src/current/_includes/v2.0/sql/diagrams/set_cluster_setting.html deleted file mode 100644 index b6554c7be52..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/set_cluster_setting.html +++ /dev/null @@ -1,36 +0,0 @@ -
[railroad diagram] SET CLUSTER SETTING var_name ( '=' | TO ) ( var_value | DEFAULT )
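A usage sketch with a real setting name; assigning DEFAULT restores the built-in value:

```sql
SET CLUSTER SETTING diagnostics.reporting.enabled = false;
SET CLUSTER SETTING diagnostics.reporting.enabled = DEFAULT;
```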
\ No newline at end of file diff --git a/src/current/_includes/v2.0/sql/diagrams/set_operation.html b/src/current/_includes/v2.0/sql/diagrams/set_operation.html deleted file mode 100644 index aa0e63023dc..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/set_operation.html +++ /dev/null @@ -1,32 +0,0 @@ -
[railroad diagram] select_clause ( UNION | INTERSECT | EXCEPT ) [ ALL | DISTINCT ] select_clause
diff --git a/src/current/_includes/v2.0/sql/diagrams/set_transaction.html b/src/current/_includes/v2.0/sql/diagrams/set_transaction.html deleted file mode 100644 index 14d8b19a019..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/set_transaction.html +++ /dev/null @@ -1,68 +0,0 @@ -
[railroad diagram] SET [ SESSION ] TRANSACTION ( ISOLATION LEVEL ( READ UNCOMMITTED | READ COMMITTED | SNAPSHOT | REPEATABLE READ | SERIALIZABLE ) | PRIORITY ( LOW | NORMAL | HIGH ) | READ ( ONLY | WRITE ) ) [ , ... ]
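A usage sketch combining two comma-separated transaction modes from the diagram:

```sql
SET TRANSACTION ISOLATION LEVEL SNAPSHOT, PRIORITY HIGH;
```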
\ No newline at end of file diff --git a/src/current/_includes/v2.0/sql/diagrams/set_var.html b/src/current/_includes/v2.0/sql/diagrams/set_var.html deleted file mode 100644 index 96bb04e7cf6..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/set_var.html +++ /dev/null @@ -1,33 +0,0 @@ -
[railroad diagram] SET [ SESSION ] var_name ( TO | '=' ) var_value [ , ... ]
\ No newline at end of file diff --git a/src/current/_includes/v2.0/sql/diagrams/show_backup.html b/src/current/_includes/v2.0/sql/diagrams/show_backup.html deleted file mode 100644 index 0f4f4e2c379..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/show_backup.html +++ /dev/null @@ -1,19 +0,0 @@ -
[railroad diagram] SHOW BACKUP location
\ No newline at end of file diff --git a/src/current/_includes/v2.0/sql/diagrams/show_cluster_setting.html b/src/current/_includes/v2.0/sql/diagrams/show_cluster_setting.html deleted file mode 100644 index d575106689f..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/show_cluster_setting.html +++ /dev/null @@ -1,34 +0,0 @@ -
[railroad diagram] SHOW CLUSTER SETTING ( var_name | ALL ) | SHOW ALL CLUSTER SETTINGS
\ No newline at end of file diff --git a/src/current/_includes/v2.0/sql/diagrams/show_columns.html b/src/current/_includes/v2.0/sql/diagrams/show_columns.html deleted file mode 100644 index 7b47a3b3123..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/show_columns.html +++ /dev/null @@ -1,22 +0,0 @@ -
[railroad diagram] SHOW COLUMNS FROM table_name
\ No newline at end of file diff --git a/src/current/_includes/v2.0/sql/diagrams/show_constraints.html b/src/current/_includes/v2.0/sql/diagrams/show_constraints.html deleted file mode 100644 index 9c520ae9bc6..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/show_constraints.html +++ /dev/null @@ -1,25 +0,0 @@ -
[railroad diagram] SHOW ( CONSTRAINT | CONSTRAINTS ) FROM table_name
\ No newline at end of file diff --git a/src/current/_includes/v2.0/sql/diagrams/show_create_sequence.html b/src/current/_includes/v2.0/sql/diagrams/show_create_sequence.html deleted file mode 100644 index 6a2437fb077..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/show_create_sequence.html +++ /dev/null @@ -1,19 +0,0 @@ -
[railroad diagram] SHOW CREATE SEQUENCE sequence_name
diff --git a/src/current/_includes/v2.0/sql/diagrams/show_create_table.html b/src/current/_includes/v2.0/sql/diagrams/show_create_table.html deleted file mode 100644 index ee1abe260ad..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/show_create_table.html +++ /dev/null @@ -1,22 +0,0 @@ -
[railroad diagram] SHOW CREATE TABLE table_name
\ No newline at end of file diff --git a/src/current/_includes/v2.0/sql/diagrams/show_create_view.html b/src/current/_includes/v2.0/sql/diagrams/show_create_view.html deleted file mode 100644 index 6d730290564..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/show_create_view.html +++ /dev/null @@ -1,22 +0,0 @@ -
[railroad diagram] SHOW CREATE VIEW view_name
\ No newline at end of file diff --git a/src/current/_includes/v2.0/sql/diagrams/show_databases.html b/src/current/_includes/v2.0/sql/diagrams/show_databases.html deleted file mode 100644 index 487bfc4e629..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/show_databases.html +++ /dev/null @@ -1,14 +0,0 @@ -
[railroad diagram] SHOW DATABASES
\ No newline at end of file diff --git a/src/current/_includes/v2.0/sql/diagrams/show_grants.html b/src/current/_includes/v2.0/sql/diagrams/show_grants.html deleted file mode 100644 index 92a7932dc22..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/show_grants.html +++ /dev/null @@ -1,61 +0,0 @@ -
[railroad diagram] SHOW GRANTS [ ON ( ROLE role_name [ , ... ] | TABLE table_name [ , ... ] | DATABASE database_name [ , ... ] ) ] [ FOR user_name [ , ... ] ]
\ No newline at end of file diff --git a/src/current/_includes/v2.0/sql/diagrams/show_index.html b/src/current/_includes/v2.0/sql/diagrams/show_index.html deleted file mode 100644 index 3014183c521..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/show_index.html +++ /dev/null @@ -1,28 +0,0 @@ -
[railroad diagram] SHOW ( INDEX | INDEXES | KEYS ) FROM table_name
\ No newline at end of file diff --git a/src/current/_includes/v2.0/sql/diagrams/show_jobs.html b/src/current/_includes/v2.0/sql/diagrams/show_jobs.html deleted file mode 100644 index b59d4d176d0..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/show_jobs.html +++ /dev/null @@ -1,14 +0,0 @@ -
[railroad diagram] SHOW JOBS
\ No newline at end of file diff --git a/src/current/_includes/v2.0/sql/diagrams/show_queries.html b/src/current/_includes/v2.0/sql/diagrams/show_queries.html deleted file mode 100644 index 26376243dac..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/show_queries.html +++ /dev/null @@ -1,20 +0,0 @@ -
[railroad diagram] SHOW [ CLUSTER | LOCAL ] QUERIES
\ No newline at end of file diff --git a/src/current/_includes/v2.0/sql/diagrams/show_ranges.html b/src/current/_includes/v2.0/sql/diagrams/show_ranges.html deleted file mode 100644 index 268530ff8f4..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/show_ranges.html +++ /dev/null @@ -1,32 +0,0 @@ -
[railroad diagram] SHOW EXPERIMENTAL_RANGES FROM ( TABLE table_name | INDEX table_name_with_index )
diff --git a/src/current/_includes/v2.0/sql/diagrams/show_roles.html b/src/current/_includes/v2.0/sql/diagrams/show_roles.html deleted file mode 100644 index fd508395e0b..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/show_roles.html +++ /dev/null @@ -1,14 +0,0 @@ -
[railroad diagram] SHOW ROLES
\ No newline at end of file diff --git a/src/current/_includes/v2.0/sql/diagrams/show_schemas.html b/src/current/_includes/v2.0/sql/diagrams/show_schemas.html deleted file mode 100644 index efa07764533..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/show_schemas.html +++ /dev/null @@ -1,22 +0,0 @@ -
[railroad diagram] SHOW SCHEMAS [ FROM name ]
\ No newline at end of file diff --git a/src/current/_includes/v2.0/sql/diagrams/show_sessions.html b/src/current/_includes/v2.0/sql/diagrams/show_sessions.html deleted file mode 100644 index 3b2aa5b16ee..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/show_sessions.html +++ /dev/null @@ -1,20 +0,0 @@ -
[railroad diagram] SHOW [ CLUSTER | LOCAL ] SESSIONS
\ No newline at end of file diff --git a/src/current/_includes/v2.0/sql/diagrams/show_tables.html b/src/current/_includes/v2.0/sql/diagrams/show_tables.html deleted file mode 100644 index 570e6222172..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/show_tables.html +++ /dev/null @@ -1,22 +0,0 @@ -
[railroad diagram] SHOW TABLES [ FROM name ]
\ No newline at end of file diff --git a/src/current/_includes/v2.0/sql/diagrams/show_trace.html b/src/current/_includes/v2.0/sql/diagrams/show_trace.html deleted file mode 100644 index ffa4c89a33e..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/show_trace.html +++ /dev/null @@ -1,31 +0,0 @@ -
[railroad diagram] SHOW [ COMPACT ] [ KV ] TRACE FOR ( SESSION | explainable_stmt )
\ No newline at end of file diff --git a/src/current/_includes/v2.0/sql/diagrams/show_users.html b/src/current/_includes/v2.0/sql/diagrams/show_users.html deleted file mode 100644 index 7c33b7f00b4..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/show_users.html +++ /dev/null @@ -1,14 +0,0 @@ -
[railroad diagram] SHOW USERS
\ No newline at end of file diff --git a/src/current/_includes/v2.0/sql/diagrams/show_var.html b/src/current/_includes/v2.0/sql/diagrams/show_var.html deleted file mode 100644 index fb7ec6f4ce8..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/show_var.html +++ /dev/null @@ -1,20 +0,0 @@ -
[railroad diagram] SHOW [ SESSION ] ( var_name | ALL )
\ No newline at end of file diff --git a/src/current/_includes/v2.0/sql/diagrams/simple_select_clause.html b/src/current/_includes/v2.0/sql/diagrams/simple_select_clause.html deleted file mode 100644 index 4eeeeae5b59..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/simple_select_clause.html +++ /dev/null @@ -1,107 +0,0 @@ -
[railroad diagram] SELECT [ ALL | DISTINCT [ ON '(' a_expr [ , ... ] ')' ] ] target_elem [ , ... ] [ FROM table_ref [ , ... ] [ AS OF SYSTEM TIME a_expr_const ] ] [ WHERE a_expr ] [ GROUP BY a_expr [ , ... ] ] [ HAVING a_expr ] [ WINDOW window_definition_list ]
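A usage sketch exercising the grouping path (hypothetical table and columns):

```sql
SELECT city, count(*) AS user_count
  FROM users
 GROUP BY city
HAVING count(*) > 10;
```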
diff --git a/src/current/_includes/v2.0/sql/diagrams/sort_clause.html b/src/current/_includes/v2.0/sql/diagrams/sort_clause.html deleted file mode 100644 index dbac057629e..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/sort_clause.html +++ /dev/null @@ -1,55 +0,0 @@ -
[railroad diagram] ORDER BY ( a_expr | PRIMARY KEY table_name | INDEX table_name '@' index_name ) [ ASC | DESC ] [ , ... ]
\ No newline at end of file diff --git a/src/current/_includes/v2.0/sql/diagrams/split_index_at.html b/src/current/_includes/v2.0/sql/diagrams/split_index_at.html deleted file mode 100644 index 51daee7e3c7..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/split_index_at.html +++ /dev/null @@ -1,35 +0,0 @@ -
[railroad diagram] ALTER INDEX table_name '@' index_name SPLIT AT select_stmt [ WITH EXPIRATION a_expr ]
diff --git a/src/current/_includes/v2.0/sql/diagrams/split_table_at.html b/src/current/_includes/v2.0/sql/diagrams/split_table_at.html deleted file mode 100644 index a694595b9b5..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/split_table_at.html +++ /dev/null @@ -1,30 +0,0 @@ -
[railroad diagram] ALTER TABLE table_name SPLIT AT select_stmt
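A usage sketch; the select_stmt slot commonly takes a VALUES clause naming the split key:

```sql
ALTER TABLE accounts SPLIT AT VALUES (1000);
```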
\ No newline at end of file diff --git a/src/current/_includes/v2.0/sql/diagrams/stmt_block.html b/src/current/_includes/v2.0/sql/diagrams/stmt_block.html deleted file mode 100644 index a1b877445f7..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/stmt_block.html +++ /dev/null @@ -1,13814 +0,0 @@ -
[stmt_block.html rendered the full SQL grammar as railroad diagrams; the recoverable productions are reconstructed below in EBNF-style notation, with the stripped "referenced by" cross-link lists omitted.]

stmt_block:
  stmt_list

stmt_list:
  stmt [ ';' stmt ]*

stmt:
  HELPTOKEN | alter_stmt | backup_stmt | cancel_stmt | copy_from_stmt | create_stmt
  | deallocate_stmt | delete_stmt | discard_stmt | drop_stmt | execute_stmt | explain_stmt
  | export_stmt | grant_stmt | insert_stmt | import_stmt | pause_stmt | prepare_stmt
  | restore_stmt | resume_stmt | revoke_stmt | savepoint_stmt | scrub_stmt | select_stmt
  | release_stmt | reset_stmt | set_stmt | show_stmt | transaction_stmt | truncate_stmt
  | update_stmt | upsert_stmt
alter_stmt:
  alter_ddl_stmt | alter_user_stmt

backup_stmt:
  BACKUP targets TO string_or_placeholder opt_as_of_clause opt_incremental opt_with_options

cancel_stmt:
  cancel_job_stmt | cancel_query_stmt | cancel_session_stmt

copy_from_stmt:
  COPY table_name opt_column_list FROM STDIN

create_stmt:
  create_user_stmt | create_role_stmt | create_ddl_stmt | create_stats_stmt

deallocate_stmt:
  DEALLOCATE [ PREPARE ] ( name | ALL )

delete_stmt:
  opt_with_clause DELETE FROM relation_expr_opt_alias where_clause opt_sort_clause opt_limit_clause returning_clause

discard_stmt:
  DISCARD ALL

drop_stmt:
  drop_ddl_stmt | drop_role_stmt | drop_user_stmt

execute_stmt:
  EXECUTE table_alias_name execute_param_clause
explain_stmt:
  EXPLAIN [ '(' explain_option_list ')' ] explainable_stmt

export_stmt:
  EXPORT INTO CSV string_or_placeholder opt_with_options FROM select_stmt

grant_stmt:
  GRANT privileges ON targets TO name_list
  | GRANT privilege_list TO name_list [ WITH ADMIN OPTION ]

insert_stmt:
  opt_with_clause INSERT INTO insert_target insert_rest on_conflict returning_clause

import_stmt:
  IMPORT TABLE table_name ( CREATE USING string_or_placeholder | '(' table_elem_list ')' ) CSV DATA '(' string_or_placeholder_list ')' opt_with_options

pause_stmt:
  PAUSE JOB a_expr

prepare_stmt:
  PREPARE table_alias_name prep_type_clause AS preparable_stmt

restore_stmt:
  RESTORE targets FROM string_or_placeholder_list as_of_clause opt_with_options

resume_stmt:
  RESUME JOB a_expr

revoke_stmt:
  REVOKE privileges ON targets FROM name_list
  | REVOKE [ ADMIN OPTION FOR ] privilege_list FROM name_list
savepoint_stmt:
  SAVEPOINT name

scrub_stmt:
  scrub_table_stmt | scrub_database_stmt

select_stmt:
  select_no_parens | select_with_parens

release_stmt:
  RELEASE savepoint_name

reset_stmt:
  reset_session_stmt | reset_csetting_stmt

set_stmt:
  set_session_stmt | set_csetting_stmt | set_transaction_stmt | use_stmt

show_stmt:
  show_backup_stmt | show_columns_stmt | show_constraints_stmt | show_create_table_stmt
  | show_create_view_stmt | show_create_sequence_stmt | show_csettings_stmt | show_databases_stmt
  | show_grants_stmt | show_histogram_stmt | show_indexes_stmt | show_jobs_stmt
  | show_queries_stmt | show_ranges_stmt | show_roles_stmt | show_schemas_stmt
  | show_session_stmt | show_sessions_stmt | show_stats_stmt | show_tables_stmt
  | show_trace_stmt | show_users_stmt

transaction_stmt:
  begin_stmt | commit_stmt | rollback_stmt | abort_stmt

truncate_stmt:
  TRUNCATE opt_table relation_expr_list opt_drop_behavior

update_stmt:
  opt_with_clause UPDATE relation_expr_opt_alias SET set_clause_list where_clause opt_sort_clause opt_limit_clause returning_clause

upsert_stmt:
  opt_with_clause UPSERT INTO insert_target insert_rest returning_clause
alter_ddl_stmt:
  alter_table_stmt | alter_index_stmt | alter_view_stmt | alter_sequence_stmt | alter_database_stmt

alter_user_stmt:
  alter_user_password_stmt

targets:
  identifier | col_name_keyword | unreserved_keyword | complex_table_pattern
  | table_pattern [ ',' table_pattern ]* | TABLE table_pattern_list | DATABASE name_list

string_or_placeholder:
  non_reserved_word_or_sconst | PLACEHOLDER

opt_as_of_clause:
  [ as_of_clause ]

opt_incremental:
  [ INCREMENTAL FROM string_or_placeholder_list ]

opt_with_options:
  [ WITH kv_option_list | WITH OPTIONS '(' kv_option_list ')' ]

cancel_job_stmt:
  CANCEL JOB a_expr

cancel_query_stmt:
  CANCEL QUERY [ IF EXISTS ] a_expr

cancel_session_stmt:
  CANCEL SESSION [ IF EXISTS ] a_expr
table_name:
  db_object_name

opt_column_list:
  [ '(' name_list ')' ]

create_user_stmt:
  CREATE USER [ IF NOT EXISTS ] string_or_placeholder opt_password

create_role_stmt:
  CREATE ROLE [ IF NOT EXISTS ] string_or_placeholder

create_ddl_stmt:
  create_database_stmt | create_index_stmt | create_table_stmt | create_table_as_stmt
  | create_view_stmt | create_sequence_stmt

create_stats_stmt:
  CREATE STATISTICS statistics_name ON name_list FROM table_name

name:
  identifier | unreserved_keyword | col_name_keyword

opt_with_clause:
  [ with_clause ]

relation_expr_opt_alias:
  relation_expr [ [ AS ] table_alias_name ]

where_clause:
  WHERE a_expr
opt_sort_clause:
  [ sort_clause ]

opt_limit_clause:
  [ limit_clause ]

returning_clause:
  [ RETURNING ( target_list | NOTHING ) ]

drop_ddl_stmt:
  drop_database_stmt | drop_index_stmt | drop_table_stmt | drop_view_stmt | drop_sequence_stmt

drop_role_stmt:
  DROP ROLE [ IF EXISTS ] string_or_placeholder_list

drop_user_stmt:
  DROP USER [ IF EXISTS ] string_or_placeholder_list

table_alias_name:
  name

execute_param_clause:
  [ '(' expr_list ')' ]

explainable_stmt:
  preparable_stmt | alter_ddl_stmt | create_ddl_stmt | create_stats_stmt | drop_ddl_stmt | execute_stmt

explain_option_list:
  explain_option_name [ ',' explain_option_name ]*
privileges:
  ALL | privilege_list

name_list:
  name [ ',' name ]*

privilege_list:
  privilege [ ',' privilege ]*

insert_target:
  table_name [ AS table_alias_name ]

insert_rest:
  [ '(' insert_column_list ')' ] select_stmt | DEFAULT VALUES

on_conflict:
  ON CONFLICT opt_conf_expr DO ( UPDATE SET set_clause_list where_clause | NOTHING )

string_or_placeholder_list:
  string_or_placeholder [ ',' string_or_placeholder ]*

table_elem_list:
  table_elem [ ',' table_elem ]*

a_expr:
  c_expr
  | ( '+' | '-' | '~' | NOT ) a_expr
  | DEFAULT | MAXVALUE | MINVALUE
  | a_expr TYPECAST cast_target
  | a_expr TYPEANNOTATE typename
  | a_expr COLLATE collation_name
  | a_expr ( '+' | '-' | '*' | '/' | FLOORDIV | '%' | '^' | '#' | '&' | '|' | '<' | '>' | '?'
      | JSON_SOME_EXISTS | JSON_ALL_EXISTS | CONTAINS | CONTAINED_BY | '=' | CONCAT | LSHIFT | RSHIFT
      | FETCHVAL | FETCHTEXT | FETCHVAL_PATH | FETCHTEXT_PATH | REMOVE_PATH
      | INET_CONTAINED_BY_OR_EQUALS | INET_CONTAINS_OR_CONTAINED_BY | INET_CONTAINS_OR_EQUALS
      | LESS_EQUALS | GREATER_EQUALS | NOT_EQUALS | AND | OR ) a_expr
  | a_expr [ NOT ] ( LIKE | ILIKE | SIMILAR TO ) a_expr
  | a_expr ( '~' | NOT_REGMATCH | REGIMATCH | NOT_REGIMATCH ) a_expr
  | a_expr IS [ NOT ] ( NAN | NULL | TRUE | FALSE | UNKNOWN )
  | a_expr ( ISNULL | NOTNULL )
  | a_expr IS [ NOT ] DISTINCT FROM a_expr
  | a_expr IS [ NOT ] OF '(' type_list ')'
  | a_expr [ NOT ] BETWEEN ( opt_asymmetric | SYMMETRIC ) b_expr AND a_expr
  | a_expr [ NOT ] IN in_expr
  | a_expr subquery_op sub_type a_expr
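The FETCHVAL (`->`) and FETCHTEXT (`->>`) tokens in a_expr are the JSONB access operators; a quick sketch:

```sql
-- Fetch a nested JSONB value, then extract it as a string ('c').
SELECT '{"a": {"b": "c"}}'::JSONB -> 'a' ->> 'b';
```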


prep_type_clause:
  [ '(' type_list ')' ]

preparable_stmt:
  alter_user_stmt | backup_stmt | cancel_stmt | create_user_stmt | create_role_stmt
  | delete_stmt | drop_role_stmt | drop_user_stmt | import_stmt | insert_stmt
  | pause_stmt | reset_stmt | restore_stmt | resume_stmt | select_stmt
  | set_session_stmt | set_csetting_stmt | show_stmt | update_stmt | upsert_stmt

as_of_clause:
  AS OF SYSTEM TIME a_expr_const

scrub_table_stmt:
  EXPERIMENTAL SCRUB TABLE table_name opt_as_of_clause opt_scrub_options_clause

scrub_database_stmt:
  EXPERIMENTAL SCRUB DATABASE database_name opt_as_of_clause

select_no_parens:
  simple_select
  | select_clause sort_clause
  | select_clause opt_sort_clause select_limit
  | with_clause select_clause [ sort_clause | opt_sort_clause select_limit ]

select_with_parens:
  '(' ( select_no_parens | select_with_parens ) ')'

savepoint_name:
  [ SAVEPOINT ] name

reset_session_stmt:
  RESET [ SESSION ] session_var

reset_csetting_stmt:
  RESET CLUSTER SETTING var_name

set_session_stmt:
  SET [ SESSION ] set_rest_more
  | SET [ SESSION ] CHARACTERISTICS AS TRANSACTION transaction_mode_list
set_csetting_stmt:
  SET CLUSTER SETTING var_name ( '=' | TO ) var_value

set_transaction_stmt:
  SET [ SESSION ] TRANSACTION transaction_mode_list

use_stmt:
  USE var_value

show_backup_stmt:
  SHOW BACKUP string_or_placeholder

show_columns_stmt:
  SHOW COLUMNS FROM table_name

show_constraints_stmt:
  SHOW ( CONSTRAINT | CONSTRAINTS ) FROM table_name

show_create_table_stmt:
  SHOW CREATE TABLE table_name

show_create_view_stmt:
  SHOW CREATE VIEW view_name

show_create_sequence_stmt:
  SHOW CREATE SEQUENCE sequence_name

show_csettings_stmt:
  SHOW CLUSTER SETTING ( var_name | ALL ) | SHOW ALL CLUSTER SETTINGS

show_databases_stmt:
  SHOW DATABASES

show_grants_stmt:
  SHOW GRANTS opt_on_targets_roles for_grantee_clause

show_histogram_stmt:
  SHOW HISTOGRAM ICONST
show_indexes_stmt:
  SHOW ( INDEX | INDEXES | KEYS ) FROM table_name

show_jobs_stmt:
  SHOW JOBS

show_queries_stmt:
  SHOW [ CLUSTER | LOCAL ] QUERIES

show_ranges_stmt:
  SHOW ranges_kw FROM ( TABLE table_name | INDEX table_name_with_index )

show_roles_stmt:
  SHOW ROLES

show_schemas_stmt:
  SHOW SCHEMAS [ FROM name ]

show_session_stmt:
  SHOW [ SESSION ] session_var

show_sessions_stmt:
  SHOW [ CLUSTER | LOCAL ] SESSIONS

show_stats_stmt:
  SHOW STATISTICS [ USING JSON ] FOR TABLE table_name

show_tables_stmt:
  SHOW TABLES [ FROM name [ '.' name ] ]

show_trace_stmt:
  SHOW opt_compact [ KV ] TRACE FOR ( SESSION | explainable_stmt )

show_users_stmt:
  SHOW USERS
begin_stmt:
  BEGIN opt_transaction begin_transaction | START TRANSACTION begin_transaction

commit_stmt:
  ( COMMIT | END ) opt_transaction

rollback_stmt:
  ROLLBACK opt_to_savepoint

abort_stmt:
  ABORT opt_abort_mod

opt_table:
  [ TABLE ]

relation_expr_list:
  relation_expr [ ',' relation_expr ]*

opt_drop_behavior:
  [ CASCADE | RESTRICT ]

set_clause_list:
  set_clause [ ',' set_clause ]*
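The transaction productions above combine into the client-side retry pattern; a sketch with a hypothetical table:

```sql
BEGIN;
SAVEPOINT cockroach_restart;
UPDATE accounts SET balance = balance - 10 WHERE id = 1;
RELEASE SAVEPOINT cockroach_restart;
COMMIT;
```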


alter_table_stmt:
  alter_onetable_stmt | alter_split_stmt | alter_scatter_stmt | alter_rename_table_stmt

alter_index_stmt:
  alter_oneindex_stmt | alter_split_index_stmt | alter_scatter_index_stmt | alter_rename_index_stmt

alter_view_stmt:
  alter_rename_view_stmt

alter_sequence_stmt:
  alter_rename_sequence_stmt | alter_sequence_options_stmt

alter_database_stmt:
  alter_rename_database_stmt

alter_user_password_stmt:
  ALTER USER [ IF EXISTS ] string_or_placeholder WITH PASSWORD string_or_placeholder

col_name_keyword:
  ANNOTATE_TYPE | BETWEEN | BIGINT | BIT | BOOLEAN | CHAR | CHARACTER | CHARACTERISTICS
  | COALESCE | DEC | DECIMAL | EXISTS | EXTRACT | EXTRACT_DURATION | FLOAT | GREATEST
  | GROUPING | IF | IFNULL | INT | INTEGER | INTERVAL | LEAST | NULLIF | NUMERIC | OUT
  | OVERLAY | POSITION | PRECISION | REAL | ROW | SMALLINT | SUBSTRING | TIME | TIMESTAMP
  | TREAT | TRIM | VALUES | VARCHAR | VIRTUAL | WORK
unreserved_keyword:
  ABORT | ACTION | ADD | ADMIN | ALTER | AT | BACKUP | BEGIN | BIGSERIAL | BLOB | BOOL | BY
  | BYTEA | BYTES | CACHE | CANCEL | CASCADE | CLUSTER | COLUMNS | COMMENT | COMMIT | COMMITTED
  | COMPACT | CONFLICT | CONFIGURATION | CONFIGURATIONS | CONFIGURE | CONSTRAINTS | COPY
  | COVERING | CSV | CUBE | CURRENT | CYCLE | DATA | DATABASE | DATABASES | DATE | DAY
  | DEALLOCATE | DELETE | DISCARD | DOUBLE | DROP | EMIT | ENCODING | EXECUTE | EXPERIMENTAL
  | EXPERIMENTAL_AUDIT | EXPERIMENTAL_CHANGEFEED | EXPERIMENTAL_FINGERPRINTS | EXPERIMENTAL_RANGES
  | EXPERIMENTAL_RELOCATE | EXPERIMENTAL_REPLICA | EXPLAIN | EXPORT | FILTER | FIRST | FLOAT4
  | FLOAT8 | FOLLOWING | FORCE_INDEX | GIN | GRANTS | HIGH | HISTOGRAM | HOUR | IMPORT
  | INCREMENT | INCREMENTAL | INDEXES | INET | INJECT | INSERT | INT2 | INT2VECTOR | INT4
  | INT8 | INT64 | INTERLEAVE | INVERTED | ISOLATION | JOB | JOBS | JSON | JSONB | KEY | KEYS
  | KV | LC_COLLATE | LC_CTYPE | LESS | LEVEL | LIST | LOCAL | LOW | MATCH | MINUTE | MONTH
  | NAMES | NAN | NAME | NEXT | NO | NORMAL | NO_INDEX_JOIN | NULLS | OF | OFF | OID
  | OIDVECTOR | OPTION | OPTIONS | ORDINALITY | OVER | OWNED | PARENT | PARTIAL | PARTITION
  | PASSWORD | PAUSE | PHYSICAL | PLANS | PRECEDING | PREPARE | PRIORITY | QUERIES | QUERY
  | RANGE | READ | RECURSIVE | REF | REGCLASS | REGPROC | REGPROCEDURE | REGNAMESPACE | REGTYPE
  | RELEASE | RENAME | REPEATABLE | RESET | RESTORE | RESTRICT | RESUME | REVOKE | ROLE | ROLES
  | ROLLBACK | ROLLUP | ROWS | SETTING | SETTINGS | STATUS | SAVEPOINT | SCATTER | SCHEMA
  | SCHEMAS | SCRUB | SEARCH | SECOND | SERIAL | SERIALIZABLE | SERIAL2 | SERIAL4 | SERIAL8
  | SEQUENCE | SEQUENCES | SESSION | SESSIONS | SET | SHOW | SIMPLE | SMALLSERIAL | SNAPSHOT
  | SQL | START | STATISTICS | STDIN | STORE | STORING | STRICT | STRING | SPLIT | SYNTAX
  | SYSTEM | TABLES | TEMP | TEMPLATE | TEMPORARY | TESTING_RANGES | TESTING_RELOCATE | TEXT
  | THAN | TIMESTAMPTZ | TRACE | TRANSACTION | TRUNCATE | TYPE | UNBOUNDED | UNCOMMITTED
  | UNKNOWN | UPDATE | UPSERT | UUID | USE | USERS | VALID | VALIDATE | VALUE | VARYING
  | WITHIN | WITHOUT | WRITE | YEAR | ZONE

complex_table_pattern:
  complex_db_object_name | name '.' unrestricted_name '.' '*'

table_pattern:
  simple_db_object_name | complex_table_pattern

table_pattern_list:
  table_pattern [ ',' table_pattern ]*

non_reserved_word_or_sconst:
  non_reserved_word | SCONST

kv_option_list:
  kv_option [ ',' kv_option ]*

db_object_name:
  simple_db_object_name | complex_db_object_name

opt_password:
  [ opt_with PASSWORD string_or_placeholder ]
create_database_stmt:
  CREATE DATABASE [ IF NOT EXISTS ] database_name opt_with opt_template_clause opt_encoding_clause opt_lc_collate_clause opt_lc_ctype_clause

create_index_stmt:
  CREATE opt_unique INDEX ( opt_index_name | IF NOT EXISTS index_name ) ON table_name opt_using_gin '(' index_params ')' opt_storing opt_interleave opt_partition_by
  | CREATE INVERTED INDEX ( opt_index_name | IF NOT EXISTS index_name ) ON table_name '(' index_params ')'

create_table_stmt:
  CREATE TABLE [ IF NOT EXISTS ] table_name '(' opt_table_elem_list ')' opt_interleave opt_partition_by

create_table_as_stmt:
  CREATE TABLE [ IF NOT EXISTS ] table_name opt_column_list AS select_stmt

create_view_stmt:
  CREATE VIEW view_name opt_column_list AS select_stmt

create_sequence_stmt:
  CREATE SEQUENCE [ IF NOT EXISTS ] sequence_name opt_sequence_option_list

statistics_name:
  name

with_clause:
  WITH cte_list

relation_expr:
  table_name [ '*' ] | ONLY table_name | ONLY '(' table_name ')'
sort_clause:
  ORDER BY sortby_list

limit_clause:
  LIMIT select_limit_value
  | FETCH first_or_next opt_select_fetch_first_value row_or_rows ONLY

target_list:
  target_elem [ ',' target_elem ]*

drop_database_stmt:
  DROP DATABASE [ IF EXISTS ] database_name opt_drop_behavior

drop_index_stmt:
  DROP INDEX [ IF EXISTS ] table_name_with_index_list opt_drop_behavior

drop_table_stmt:
  DROP TABLE [ IF EXISTS ] table_name_list opt_drop_behavior

drop_view_stmt:
  DROP VIEW [ IF EXISTS ] table_name_list opt_drop_behavior

drop_sequence_stmt:
  DROP SEQUENCE [ IF EXISTS ] table_name_list opt_drop_behavior

expr_list:
  a_expr [ ',' a_expr ]*

explain_option_name:
  non_reserved_word
privilege:
  name | CREATE | GRANT | SELECT

insert_column_list:
  insert_column_item [ ',' insert_column_item ]*

opt_conf_expr:
  [ '(' name_list ')' where_clause ]

table_elem:
  column_def | index_def | family_def | table_constraint

c_expr:
  d_expr [ array_subscripts ] | case_expr | EXISTS select_with_parens

cast_target:
  typename | postgres_oid

typename:
  simple_typename [ opt_array_bounds | ARRAY ]

collation_name:
  unrestricted_name

type_list:
  typename [ ',' typename ]*

opt_asymmetric:
  [ ASYMMETRIC ]
b_expr:
  c_expr
  | ( '+' | '-' | '~' ) b_expr
  | b_expr TYPECAST cast_target
  | b_expr TYPEANNOTATE typename
  | b_expr ( '+' | '-' | '*' | '/' | FLOORDIV | '%' | '^' | '#' | '&' | '|' | '<' | '>' | '='
      | CONCAT | LSHIFT | RSHIFT | LESS_EQUALS | GREATER_EQUALS | NOT_EQUALS ) b_expr
  | b_expr IS [ NOT ] DISTINCT FROM b_expr
  | b_expr IS [ NOT ] OF '(' type_list ')'

in_expr:
  select_with_parens | '(' expr_list ')'

subquery_op:
  math_op | [ NOT ] LIKE | [ NOT ] ILIKE

sub_type:
  ANY | SOME | ALL

a_expr_const:
  ICONST | FCONST | [ const_typename ] SCONST | BCONST | interval | TRUE | FALSE | NULL

opt_scrub_options_clause:
  [ WITH OPTIONS scrub_option_list ]

database_name:
  name

simple_select:
  simple_select_clause | values_clause | table_clause | set_operation

select_clause:
  simple_select | select_with_parens

select_limit:
  limit_clause [ offset_clause ] | offset_clause [ limit_clause ]
session_var:

- - - - - - - identifier - - - ALL - - - DATABASE - - - NAMES - - - SESSION_USER - - - TIME - - - ZONE - - - - -

referenced by: -

-


var_name:

- - - - - - - - name - - - - - attrs - - - - - -

referenced by: -

-


set_rest_more:

- - - - - - - - generic_set - - - - - -

referenced by: -

-


transaction_mode_list:

- - - - - - - - transaction_mode - - - - - opt_comma - - - - - -

referenced by: -

-


var_value:

- - - - - - - - a_expr - - - - ON - - - - -

referenced by: -

-


view_name:

- - - - - - - - table_name - - - - - -

referenced by: -

-


sequence_name:

- - - - - - - - db_object_name - - - - - -

referenced by: -

-


opt_on_targets_roles:

- - - - - - - ON - - - - targets_roles - - - - - -

referenced by: -

-


for_grantee_clause:

- - - - - - - FOR - - - - name_list - - - - - -

referenced by: -

-


ranges_kw:

- - - - - - - TESTING_RANGES - - - EXPERIMENTAL_RANGES - - - - -

referenced by: -

-


table_name_with_index:

- - - - - - - - table_name - - - - @ - - - - index_name - - - - - -

referenced by: -

-


opt_compact:

- - - - - - - COMPACT - - - - -

referenced by: -

-


opt_transaction:

- - - - - - - TRANSACTION - - - - -

referenced by: -

-


begin_transaction:

- - - - - - - - transaction_mode_list - - - - - -

referenced by: -

-


opt_to_savepoint:

- - - - - - - TRANSACTION - - - TO - - - - savepoint_name - - - - - -

referenced by: -

-


opt_abort_mod:

- - - - - - - TRANSACTION - - - WORK - - - - -

referenced by: -

-


set_clause:

- - - - - - - - single_set_clause - - - - - multiple_set_clause - - - - - -

referenced by: -

-


alter_onetable_stmt:

- - - - - - - ALTER - - - TABLE - - - IF - - - EXISTS - - - - relation_expr - - - - - alter_table_cmds - - - - - -

referenced by: -

-


alter_split_stmt:

- - - - - - - ALTER - - - TABLE - - - - table_name - - - - SPLIT - - - AT - - - - select_stmt - - - - - -

referenced by: -

-


alter_scatter_stmt:

- - - - - - - ALTER - - - TABLE - - - - table_name - - - - SCATTER - - - FROM - - - ( - - - - expr_list - - - - ) - - - TO - - - ( - - - - expr_list - - - - ) - - - - -

referenced by: -

-


alter_rename_table_stmt:

- - - - - - - ALTER - - - TABLE - - - IF - - - EXISTS - - - - relation_expr - - - - RENAME - - - TO - - - - table_name - - - - - opt_column - - - - - column_name - - - - TO - - - - column_name - - - - - -

referenced by: -

-


alter_oneindex_stmt:

- - - - - - - ALTER - - - INDEX - - - IF - - - EXISTS - - - - table_name_with_index - - - - - alter_index_cmds - - - - - -

referenced by: -

-


alter_split_index_stmt:

- - - - - - - ALTER - - - INDEX - - - - table_name_with_index - - - - SPLIT - - - AT - - - - select_stmt - - - - - -

referenced by: -

-


alter_scatter_index_stmt:

- - - - - - - ALTER - - - INDEX - - - - table_name_with_index - - - - SCATTER - - - FROM - - - ( - - - - expr_list - - - - ) - - - TO - - - ( - - - - expr_list - - - - ) - - - - -

referenced by: -

-


alter_rename_index_stmt:

- - - - - - - ALTER - - - INDEX - - - IF - - - EXISTS - - - - table_name_with_index - - - - RENAME - - - TO - - - - index_name - - - - - -

referenced by: -

-


alter_rename_view_stmt:

- - - - - - - ALTER - - - VIEW - - - IF - - - EXISTS - - - - relation_expr - - - - RENAME - - - TO - - - - view_name - - - - - -

referenced by: -

-


alter_rename_sequence_stmt:

- - - - - - - ALTER - - - SEQUENCE - - - IF - - - EXISTS - - - - relation_expr - - - - RENAME - - - TO - - - - sequence_name - - - - - -

referenced by: -

-


alter_sequence_options_stmt:

- - - - - - - ALTER - - - SEQUENCE - - - IF - - - EXISTS - - - - sequence_name - - - - - sequence_option_list - - - - - -

referenced by: -

-


alter_rename_database_stmt:

- - - - - - - ALTER - - - DATABASE - - - - database_name - - - - RENAME - - - TO - - - - database_name - - - - - -

referenced by: -

-


complex_db_object_name:

- - - - - - - - name - - - - . - - - - unrestricted_name - - - - . - - - - unrestricted_name - - - - - -

referenced by: -

-


unrestricted_name:

- - - - - - - identifier - - - - unreserved_keyword - - - - - col_name_keyword - - - - - type_func_name_keyword - - - - - reserved_keyword - - - - - -

referenced by: -

-


simple_db_object_name:

- - - - - - - - name - - - - - -

referenced by: -

-


non_reserved_word:

- - - - - - - identifier - - - - unreserved_keyword - - - - - col_name_keyword - - - - - type_func_name_keyword - - - - - -

referenced by: -

-


kv_option:

- - - - - - - - name - - - - SCONST - - - = - - - - string_or_placeholder - - - - - -

referenced by: -

-


opt_with:

- - - - - - - WITH - - - - -

referenced by: -

-


opt_template_clause:

- - - - - - - TEMPLATE - - - - opt_equal - - - - - non_reserved_word_or_sconst - - - - - -

referenced by: -

-


opt_encoding_clause:

- - - - - - - ENCODING - - - - opt_equal - - - - - non_reserved_word_or_sconst - - - - - -

referenced by: -

-


opt_lc_collate_clause:

- - - - - - - LC_COLLATE - - - - opt_equal - - - - - non_reserved_word_or_sconst - - - - - -

referenced by: -

-


opt_lc_ctype_clause:

- - - - - - - LC_CTYPE - - - - opt_equal - - - - - non_reserved_word_or_sconst - - - - - -

referenced by: -

-


opt_unique:

- - - - - - - UNIQUE - - - - -

referenced by: -

-


opt_index_name:

- - - - - - - - opt_name - - - - - -

referenced by: -

-


opt_using_gin:

- - - - - - - USING - - - GIN - - - - -

referenced by: -

-


index_params:

- - - - - - - - index_elem - - - - , - - - - -

referenced by: -

-


opt_storing:

- - - - - - - - storing - - - - ( - - - - name_list - - - - ) - - - - -

referenced by: -

-


opt_interleave:

- - - - - - - INTERLEAVE - - - IN - - - PARENT - - - - table_name - - - - ( - - - - name_list - - - - ) - - - - -

referenced by: -

-


opt_partition_by:

- - - - - - - - partition_by - - - - - -

referenced by: -

-


index_name:

- - - - - - - - unrestricted_name - - - - - -

referenced by: -

-


opt_table_elem_list:

- - - - - - - - table_elem_list - - - - - -

referenced by: -

-


opt_sequence_option_list:

- - - - - - - - sequence_option_list - - - - - -

referenced by: -

-


cte_list:

- - - - - - - - common_table_expr - - - - , - - - - -

referenced by: -

-


sortby_list:

- - - - - - - - sortby - - - - , - - - - -

referenced by: -

-


select_limit_value:

- - - - - - - - a_expr - - - - ALL - - - - -

referenced by: -

-


first_or_next:

- - - - - - - FIRST - - - NEXT - - - - -

referenced by: -

-


opt_select_fetch_first_value:

- - - - - - - - signed_iconst - - - - ( - - - - a_expr - - - - ) - - - - -

referenced by: -

-


row_or_rows:

- - - - - - - ROW - - - ROWS - - - - -

referenced by: -

-


target_elem:

- - - - - - - - a_expr - - - - AS - - - - target_name - - - - identifier - - - * - - - - -

referenced by: -

-


table_name_with_index_list:

- - - - - - - - table_name_with_index - - - - , - - - - -

referenced by: -

-


table_name_list:

- - - - - - - - table_name - - - - , - - - - -

referenced by: -

-


insert_column_item:

- - - - - - - - column_name - - - - - -

referenced by: -

-


column_def:

- - - - - - - - column_name - - - - - typename - - - - - col_qual_list - - - - - -

referenced by: -

-


index_def:

- - - - - - - UNIQUE - - - INDEX - - - - opt_index_name - - - - ( - - - - index_params - - - - ) - - - - opt_storing - - - - - opt_interleave - - - - - opt_partition_by - - - - INVERTED - - - INDEX - - - - opt_name - - - - ( - - - - index_params - - - - ) - - - - -

referenced by: -

-


family_def:

- - - - - - - FAMILY - - - - opt_family_name - - - - ( - - - - name_list - - - - ) - - - - -

referenced by: -

-


table_constraint:

- - - - - - - CONSTRAINT - - - - constraint_name - - - - - constraint_elem - - - - - -

referenced by: -

-


d_expr:

- - - - - - - - column_path_with_star - - - - - a_expr_const - - - - @ - - - ICONST - - - PLACEHOLDER - - - ( - - - - a_expr - - - - ) - - - - func_expr - - - - - select_with_parens - - - - ARRAY - - - - select_with_parens - - - - - array_expr - - - - - explicit_row - - - - - implicit_row - - - - - -

referenced by: -

-


array_subscripts:

- - - - - - - - array_subscript - - - - - -

referenced by: -

-


case_expr:

- - - - - - - CASE - - - - case_arg - - - - - when_clause_list - - - - - case_default - - - - END - - - - -

referenced by: -

-


postgres_oid:

- - - - - - - REGPROC - - - REGPROCEDURE - - - REGCLASS - - - REGTYPE - - - REGNAMESPACE - - - - -

referenced by: -

-


simple_typename:

- - - - - - - - const_typename - - - - - bit_with_length - - - - - character_with_length - - - - INTERVAL - - - - opt_interval - - - - - -

referenced by: -

-


opt_array_bounds:

- - - - - - - [ - - - ] - - - - -

referenced by: -

-


math_op:

- - - - - - - + - - - - - - - * - - - / - - - FLOORDIV - - - % - - - & - - - | - - - ^ - - - # - - - < - - - > - - - = - - - LESS_EQUALS - - - GREATER_EQUALS - - - NOT_EQUALS - - - - -

referenced by: -

-


const_typename:

- - - - - - - - numeric - - - - - bit_without_length - - - - - character_without_length - - - - - const_datetime - - - - - const_json - - - - BLOB - - - BYTES - - - BYTEA - - - TEXT - - - NAME - - - SERIAL - - - SERIAL2 - - - SERIAL4 - - - SERIAL8 - - - SMALLSERIAL - - - UUID - - - INET - - - BIGSERIAL - - - OID - - - OIDVECTOR - - - INT2VECTOR - - - identifier - - - - -

referenced by: -

-


interval:

- - - - - - - INTERVAL - - - SCONST - - - - opt_interval - - - - - -

referenced by: -

-


scrub_option_list:

- - - - - - - - scrub_option - - - - , - - - - -

referenced by: -

-


simple_select_clause:

- - - - - - - SELECT - - - - opt_all_clause - - - - DISTINCT - - - - distinct_on_clause - - - - - target_list - - - - - from_clause - - - - - where_clause - - - - - group_clause - - - - - having_clause - - - - - window_clause - - - - - -

referenced by: -

-


values_clause:

- - - - - - - VALUES - - - ( - - - - expr_list - - - - ) - - - , - - - - -

referenced by: -

-


table_clause:

- - - - - - - TABLE - - - - table_ref - - - - - -

referenced by: -

-


set_operation:

- - - - - - - - select_clause - - - - UNION - - - INTERSECT - - - EXCEPT - - - - all_or_distinct - - - - - select_clause - - - - - -

referenced by: -

-


offset_clause:

- - - - - - - OFFSET - - - - a_expr - - - - - c_expr - - - - - row_or_rows - - - - - -

referenced by: -

-


attrs:

- - - - - - - . - - - - unrestricted_name - - - - - -

referenced by: -

-


generic_set:

- - - - - - - - var_name - - - - TO - - - = - - - - var_list - - - - - -

referenced by: -

-


transaction_mode:

- - - - - - - - transaction_iso_level - - - - - transaction_user_priority - - - - - transaction_read_mode - - - - - -

referenced by: -

-


opt_comma:

- - - - - - - , - - - - -

referenced by: -

-


targets_roles:

- - - - - - - ROLE - - - - name_list - - - - - targets - - - - - -

referenced by: -

-


single_set_clause:

- - - - - - - - column_name - - - - = - - - - a_expr - - - - - -

referenced by: -

-


multiple_set_clause:

- - - - - - - ( - - - - insert_column_list - - - - ) - - - = - - - - in_expr - - - - - -

referenced by: -

-


alter_table_cmds:

- - - - - - - - alter_table_cmd - - - - , - - - - -

referenced by: -

-


opt_column:

- - - - - - - COLUMN - - - - -

referenced by: -

-


column_name:

- - - - - - - - name - - - - - -

referenced by: -

-


alter_index_cmds:

- - - - - - - - alter_index_cmd - - - - , - - - - -

referenced by: -

-


sequence_option_list:

- - - - - - - - sequence_option_elem - - - - - -

referenced by: -

-


type_func_name_keyword:

- - - - - - - COLLATION - - - CROSS - - - FAMILY - - - FULL - - - INNER - - - ILIKE - - - IS - - - ISNULL - - - JOIN - - - LEFT - - - LIKE - - - MAXVALUE - - - MINVALUE - - - NATURAL - - - NOTNULL - - - OUTER - - - OVERLAPS - - - RIGHT - - - SIMILAR - - - - -

referenced by: -

-


reserved_keyword:

- - - - - - - ALL - - - ANALYSE - - - ANALYZE - - - AND - - - ANY - - - ARRAY - - - AS - - - ASC - - - ASYMMETRIC - - - BOTH - - - CASE - - - CAST - - - CHECK - - - COLLATE - - - COLUMN - - - CONSTRAINT - - - CREATE - - - CURRENT_CATALOG - - - CURRENT_DATE - - - CURRENT_ROLE - - - CURRENT_SCHEMA - - - CURRENT_TIME - - - CURRENT_TIMESTAMP - - - CURRENT_USER - - - DEFAULT - - - DEFERRABLE - - - DESC - - - DISTINCT - - - DO - - - ELSE - - - END - - - EXCEPT - - - FALSE - - - FETCH - - - FOR - - - FOREIGN - - - FROM - - - GRANT - - - GROUP - - - HAVING - - - IN - - - INDEX - - - INITIALLY - - - INTERSECT - - - INTO - - - LATERAL - - - LEADING - - - LIMIT - - - LOCALTIME - - - LOCALTIMESTAMP - - - NOT - - - NOTHING - - - NULL - - - OFFSET - - - ON - - - ONLY - - - OR - - - ORDER - - - PLACING - - - PRIMARY - - - REFERENCES - - - RETURNING - - - SELECT - - - SESSION_USER - - - SOME - - - STORED - - - SYMMETRIC - - - TABLE - - - THEN - - - TO - - - TRAILING - - - TRUE - - - UNION - - - UNIQUE - - - USER - - - USING - - - VARIADIC - - - VIEW - - - WHEN - - - WHERE - - - WINDOW - - - WITH - - - - -

referenced by: -

-


opt_equal:

- - - - - - - = - - - - -

referenced by: -

-


opt_name:

- - - - - - - - name - - - - - -

referenced by: -

-


index_elem:

- - - - - - - - column_name - - - - - opt_asc_desc - - - - - -

referenced by: -

-


storing:

- - - - - - - COVERING - - - STORING - - - - -

referenced by: -

-


partition_by:

- - - - - - - PARTITION - - - BY - - - LIST - - - ( - - - - name_list - - - - ) - - - ( - - - - list_partitions - - - - RANGE - - - ( - - - - name_list - - - - ) - - - ( - - - - range_partitions - - - - ) - - - NOTHING - - - - -

referenced by: -

-


common_table_expr:

- - - - - - - - table_alias_name - - - - - opt_column_list - - - - AS - - - ( - - - - preparable_stmt - - - - ) - - - - -

referenced by: -

-


sortby:

- - - - - - - - a_expr - - - - PRIMARY - - - KEY - - - - table_name - - - - INDEX - - - - table_name - - - - @ - - - - index_name - - - - - opt_asc_desc - - - - - -

referenced by: -

-


signed_iconst:

- - - - - - - + - - - - - - - ICONST - - - - -

referenced by: -

-


target_name:

- - - - - - - - unrestricted_name - - - - - -

referenced by: -

-


col_qual_list:

- - - - - - - - col_qualification - - - - - -

referenced by: -

-


opt_family_name:

- - - - - - - - opt_name - - - - - -

referenced by: -

-


constraint_name:

- - - - - - - - name - - - - - -

referenced by: -

-


constraint_elem:

- - - - - - - CHECK - - - ( - - - - a_expr - - - - PRIMARY - - - KEY - - - ( - - - - index_params - - - - ) - - - UNIQUE - - - ( - - - - index_params - - - - ) - - - - opt_storing - - - - - opt_interleave - - - - - opt_partition_by - - - - FOREIGN - - - KEY - - - ( - - - - name_list - - - - ) - - - REFERENCES - - - - table_name - - - - - opt_column_list - - - - - reference_actions - - - - - -

referenced by: -

-


column_path_with_star:

- - - - - - - - column_path - - - - - name - - - - . - - - - unrestricted_name - - - - . - - - - unrestricted_name - - - - . - - - * - - - - -

referenced by: -

-


func_expr:

- - - - - - - - func_application - - - - - filter_clause - - - - - over_clause - - - - - func_expr_common_subexpr - - - - - -

referenced by: -

-


array_expr:

- - - - - - - [ - - - - opt_expr_list - - - - - array_expr_list - - - - ] - - - - -

referenced by: -

-


explicit_row:

- - - - - - - ROW - - - ( - - - - opt_expr_list - - - - ) - - - - -

referenced by: -

-


implicit_row:

- - - - - - - ( - - - - expr_list - - - - , - - - - a_expr - - - - ) - - - - -

referenced by: -

-


array_subscript:

- - - - - - - [ - - - - a_expr - - - - - opt_slice_bound - - - - : - - - - opt_slice_bound - - - - ] - - - - -

referenced by: -

-


case_arg:

- - - - - - - - a_expr - - - - - -

referenced by: -

-


when_clause_list:

- - - - - - - - when_clause - - - - - -

referenced by: -

-


case_default:

- - - - - - - ELSE - - - - a_expr - - - - - -

referenced by: -

-


bit_with_length:

- - - - - - - BIT - - - - opt_varying - - - - ( - - - ICONST - - - ) - - - - -

referenced by: -

-


character_with_length:

- - - - - - - - character_base - - - - ( - - - ICONST - - - ) - - - - -

referenced by: -

-


opt_interval:

- - - - - - - YEAR - - - TO - - - MONTH - - - MONTH - - - DAY - - - TO - - - HOUR - - - MINUTE - - - SECOND - - - HOUR - - - TO - - - MINUTE - - - SECOND - - - MINUTE - - - TO - - - SECOND - - - SECOND - - - - -

referenced by: -

-


numeric:

- - - - - - - INT - - - INT2 - - - INT4 - - - INT8 - - - INT64 - - - INTEGER - - - SMALLINT - - - BIGINT - - - REAL - - - FLOAT4 - - - FLOAT8 - - - FLOAT - - - - opt_float - - - - DOUBLE - - - PRECISION - - - DECIMAL - - - DEC - - - NUMERIC - - - - opt_numeric_modifiers - - - - BOOLEAN - - - BOOL - - - - -

referenced by: -

-


bit_without_length:

- - - - - - - BIT - - - - opt_varying - - - - - -

referenced by: -

-


character_without_length:

- - - - - - - - character_base - - - - - -

referenced by: -

-


const_datetime:

- - - - - - - DATE - - - TIME - - - WITHOUT - - - TIME - - - ZONE - - - TIMESTAMP - - - WITHOUT - - - WITH - - - TIME - - - ZONE - - - TIMESTAMPTZ - - - - -

referenced by: -

-


const_json:

- - - - - - - JSON - - - JSONB - - - - -

referenced by: -

-


scrub_option:

- - - - - - - INDEX - - - CONSTRAINT - - - ALL - - - ( - - - - name_list - - - - ) - - - PHYSICAL - - - - -

referenced by: -

-


opt_all_clause:

- - - - - - - ALL - - - - -

referenced by: -

-


from_clause:

- - - - - - - FROM - - - - from_list - - - - - opt_as_of_clause - - - - - -

referenced by: -

-


group_clause:

- - - - - - - GROUP - - - BY - - - - expr_list - - - - - -

referenced by: -

-


having_clause:

- - - - - - - HAVING - - - - a_expr - - - - - -

referenced by: -

-


window_clause:

- - - - - - - WINDOW - - - - window_definition_list - - - - - -

referenced by: -

-


distinct_on_clause:

- - - - - - - DISTINCT - - - ON - - - ( - - - - expr_list - - - - ) - - - - -

referenced by: -

-


table_ref:

- - - - - - - - relation_expr - - - - - opt_index_hints - - - - - func_name - - - - ( - - - - opt_expr_list - - - - ) - - - - special_function - - - - - select_with_parens - - - - [ - - - - explainable_stmt - - - - ] - - - - opt_ordinality - - - - - opt_alias_clause - - - - - joined_table - - - - ( - - - - joined_table - - - - ) - - - - opt_ordinality - - - - - alias_clause - - - - - -

referenced by: -

-


all_or_distinct:

- - - - - - - ALL - - - DISTINCT - - - - -

referenced by: -

-


var_list:

- - - - - - - - var_value - - - - , - - - - -

referenced by: -

-


transaction_iso_level:

- - - - - - - ISOLATION - - - LEVEL - - - - iso_level - - - - - -

referenced by: -

-


transaction_user_priority:

- - - - - - - PRIORITY - - - - user_priority - - - - - -

referenced by: -

-


transaction_read_mode:

- - - - - - - READ - - - ONLY - - - WRITE - - - - -

referenced by: -

-


alter_table_cmd:

- - - - - - - ADD - - - COLUMN - - - IF - - - NOT - - - EXISTS - - - - column_def - - - - - table_constraint - - - - - opt_validate_behavior - - - - ALTER - - - - opt_column - - - - - column_name - - - - - alter_column_default - - - - DROP - - - NOT - - - NULL - - - DROP - - - - opt_column - - - - IF - - - EXISTS - - - - column_name - - - - CONSTRAINT - - - IF - - - EXISTS - - - - constraint_name - - - - - opt_drop_behavior - - - - VALIDATE - - - CONSTRAINT - - - - constraint_name - - - - EXPERIMENTAL_AUDIT - - - SET - - - - audit_mode - - - - - partition_by - - - - INJECT - - - STATISTICS - - - - a_expr - - - - - -

referenced by: -

-


alter_index_cmd:

- - - - - - - - partition_by - - - - - -

referenced by: -

-


sequence_option_elem:

- - - - - - - NO - - - CYCLE - - - MINVALUE - - - MAXVALUE - - - INCREMENT - - - BY - - - MINVALUE - - - MAXVALUE - - - START - - - WITH - - - - signed_iconst64 - - - - - -

referenced by: -

-


opt_asc_desc:

- - - - - - - ASC - - - DESC - - - - -

referenced by: -

-


list_partitions:

- - - - - - - - list_partition - - - - , - - - - -

referenced by: -

-


range_partitions:

- - - - - - - - range_partition - - - - , - - - - -

referenced by: -

-


col_qualification:

- - - - - - - CONSTRAINT - - - - constraint_name - - - - - col_qualification_elem - - - - COLLATE - - - - collation_name - - - - FAMILY - - - - family_name - - - - CREATE - - - FAMILY - - - - family_name - - - - IF - - - NOT - - - EXISTS - - - FAMILY - - - - family_name - - - - - -

referenced by: -

-


reference_actions:

- - - - - - - - reference_on_update - - - - - reference_on_delete - - - - - reference_on_delete - - - - - reference_on_update - - - - - -

referenced by: -

-


column_path:

- - - - - - - - name - - - - - prefixed_column_path - - - - - -

referenced by: -

-


func_application:

- - - - - - - - func_name - - - - ( - - - ALL - - - DISTINCT - - - - expr_list - - - - - opt_sort_clause - - - - * - - - ) - - - - -

referenced by: -

-


filter_clause:

- - - - - - - FILTER - - - ( - - - WHERE - - - - a_expr - - - - ) - - - - -

referenced by: -

-


over_clause:

- - - - - - - OVER - - - - window_specification - - - - - window_name - - - - - -

referenced by: -

-


func_expr_common_subexpr:

- - - - - - - CURRENT_DATE - - - CURRENT_SCHEMA - - - CURRENT_CATALOG - - - CURRENT_TIMESTAMP - - - CURRENT_USER - - - CURRENT_ROLE - - - SESSION_USER - - - USER - - - CAST - - - ( - - - - a_expr - - - - AS - - - - cast_target - - - - ANNOTATE_TYPE - - - ( - - - - a_expr - - - - , - - - - typename - - - - IF - - - ( - - - - a_expr - - - - , - - - NULLIF - - - IFNULL - - - ( - - - - a_expr - - - - , - - - - a_expr - - - - COALESCE - - - ( - - - - expr_list - - - - ) - - - - special_function - - - - - -

referenced by: -

-


opt_expr_list:

- - - - - - - - expr_list - - - - - -

referenced by: -

-


array_expr_list:

- - - - - - - - array_expr - - - - , - - - - -

referenced by: -

-


opt_slice_bound:

- - - - - - - - a_expr - - - - - -

referenced by: -

-


when_clause:

- - - - - - - WHEN - - - - a_expr - - - - THEN - - - - a_expr - - - - - -

referenced by: -

-


opt_varying:

- - - - - - - VARYING - - - - -

referenced by: -

-


character_base:

- - - - - - - CHARACTER - - - CHAR - - - - opt_varying - - - - VARCHAR - - - STRING - - - - -

referenced by: -

-


opt_float:

- - - - - - - ( - - - ICONST - - - ) - - - - -

referenced by: -

-


opt_numeric_modifiers:

- - - - - - - ( - - - ICONST - - - , - - - ICONST - - - ) - - - - -

referenced by: -

-


from_list:

- - - - - - - - table_ref - - - - , - - - - -

referenced by: -

-


window_definition_list:

- - - - - - - - window_definition - - - - , - - - - -

referenced by: -

-


opt_index_hints:

- - - - - - - @ - - - - index_name - - - - [ - - - ICONST - - - ] - - - { - - - - index_hints_param_list - - - - } - - - - -

referenced by: -

-


opt_ordinality:

- - - - - - - WITH - - - ORDINALITY - - - - -

referenced by: -

-


opt_alias_clause:

- - - - - - - - alias_clause - - - - - -

referenced by: -

-


func_name:

- - - - - - - - type_function_name - - - - - prefixed_column_path - - - - - -

referenced by: -

-


special_function:

- - - - - - - CURRENT_DATE - - - CURRENT_SCHEMA - - - CURRENT_TIMESTAMP - - - CURRENT_USER - - - ( - - - EXTRACT - - - EXTRACT_DURATION - - - ( - - - - extract_list - - - - OVERLAY - - - ( - - - - overlay_list - - - - POSITION - - - ( - - - - position_list - - - - SUBSTRING - - - ( - - - - substr_list - - - - GREATEST - - - LEAST - - - ( - - - - expr_list - - - - TRIM - - - ( - - - BOTH - - - LEADING - - - TRAILING - - - - trim_list - - - - ) - - - - -

referenced by: -

-


joined_table:

- - - - - - - ( - - - - joined_table - - - - ) - - - - table_ref - - - - CROSS - - - NATURAL - - - - join_type - - - - JOIN - - - - table_ref - - - - - join_type - - - - JOIN - - - - table_ref - - - - - join_qual - - - - - -

referenced by: -

-


alias_clause:

- - - - - - - AS - - - - table_alias_name - - - - - opt_column_list - - - - - -

referenced by: -

-


iso_level:

- - - - - - - READ - - - UNCOMMITTED - - - COMMITTED - - - SNAPSHOT - - - REPEATABLE - - - READ - - - SERIALIZABLE - - - - -

referenced by: -

-


user_priority:

- - - - - - - LOW - - - NORMAL - - - HIGH - - - - -

referenced by: -

-


alter_column_default:

- - - - - - - SET - - - DEFAULT - - - - a_expr - - - - DROP - - - DEFAULT - - - - -

referenced by: -

-


opt_validate_behavior:

- - - - - - - NOT - - - VALID - - - - -

referenced by: -

-


audit_mode:

- - - - - - - READ - - - WRITE - - - OFF - - - - -

referenced by: -

-


signed_iconst64:

- - - - - - - - signed_iconst - - - - - -

referenced by: -

-


list_partition:

- - - - - - - - partition - - - - VALUES - - - IN - - - ( - - - - expr_list - - - - ) - - - - opt_partition_by - - - - - -

referenced by: -

-


range_partition:

- - - - - - - - partition - - - - VALUES - - - FROM - - - ( - - - - expr_list - - - - ) - - - TO - - - ( - - - - expr_list - - - - ) - - - - opt_partition_by - - - - - -

referenced by: -

-


col_qualification_elem:

- - - - - - - NOT - - - NULL - - - UNIQUE - - - PRIMARY - - - KEY - - - CHECK - - - ( - - - - a_expr - - - - ) - - - DEFAULT - - - - b_expr - - - - REFERENCES - - - - table_name - - - - - opt_name_parens - - - - - reference_actions - - - - AS - - - ( - - - - a_expr - - - - ) - - - STORED - - - - -

referenced by: -

-


family_name:

- - - - - - - - name - - - - - -

referenced by: -

-


reference_on_update:

- - - - - - - ON - - - UPDATE - - - - reference_action - - - - - -

referenced by: -

-


reference_on_delete:

- - - - - - - ON - - - DELETE - - - - reference_action - - - - - -

referenced by: -

-


prefixed_column_path:

- - - - - - - - name - - - - . - - - - unrestricted_name - - - - . - - - - unrestricted_name - - - - . - - - - unrestricted_name - - - - - -

referenced by: -

-


window_specification:

- - - - - - - ( - - - - opt_existing_window_name - - - - - opt_partition_clause - - - - - opt_sort_clause - - - - ) - - - - -

referenced by: -

-


window_name:

- - - - - - - - name - - - - - -

referenced by: -

-


window_definition:

- - - - - - - - window_name - - - - AS - - - - window_specification - - - - - -

referenced by: -

-


index_hints_param_list:

- - - - - - - - index_hints_param - - - - , - - - - -

referenced by: -

-


type_function_name:

- - - - - - - identifier - - - - unreserved_keyword - - - - - type_func_name_keyword - - - - - -

referenced by: -

-


extract_list:

- - - - - - - - extract_arg - - - - FROM - - - - a_expr - - - - - expr_list - - - - - -

referenced by: -

-


overlay_list:

- - - - - - - - a_expr - - - - - overlay_placing - - - - - substr_from - - - - - substr_for - - - - - expr_list - - - - - -

referenced by: -

-


position_list:

- - - - - - - - b_expr - - - - IN - - - - b_expr - - - - - -

referenced by: -

-


substr_list:

- - - - - - - - a_expr - - - - - substr_from - - - - - substr_for - - - - - substr_for - - - - - substr_from - - - - - opt_expr_list - - - - - -

referenced by: -

-


trim_list:

- - - - - - - - a_expr - - - - FROM - - - - expr_list - - - - - -

referenced by: -

-


join_type:

- - - - - - - FULL - - - LEFT - - - RIGHT - - - - join_outer - - - - INNER - - - - -

referenced by: -

-


join_qual:

- - - - - - - USING - - - ( - - - - name_list - - - - ) - - - ON - - - - a_expr - - - - - -

referenced by: -

-


partition:

- - - - - - - PARTITION - - - - partition_name - - - - - -

referenced by: -

-


opt_name_parens:

- - - - - - - ( - - - - name - - - - ) - - - - -

referenced by: -

-


reference_action:

- - - - - - - NO - - - ACTION - - - RESTRICT - - - CASCADE - - - SET - - - NULL - - - DEFAULT - - - - -

referenced by: -

-


opt_existing_window_name:

- - - - - - - - name - - - - - -

referenced by: -

-


opt_partition_clause:

- - - - - - - PARTITION - - - BY - - - - expr_list - - - - - -

referenced by: -

-


index_hints_param:

- - - - - - - FORCE_INDEX - - - = - - - - index_name - - - - NO_INDEX_JOIN - - - - -

referenced by: -

-


extract_arg:

- - - - - - - identifier - - - YEAR - - - MONTH - - - DAY - - - HOUR - - - MINUTE - - - SECOND - - - - -

referenced by: -

-


overlay_placing:

- - - - - - - PLACING - - - - a_expr - - - - - -

referenced by: -

-


substr_from:

- - - - - - - FROM - - - - a_expr - - - - - -

referenced by: -

-


substr_for:

- - - - - - - FOR - - - - a_expr - - - - - -

referenced by: -

-


join_outer:

- - - - - - - OUTER - - - - -

referenced by: -

-


partition_name:

- - - - - - - - unrestricted_name - - - - - -

referenced by: -

-


generated by Railroad Diagram Generator

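For illustration, statements that match two of the productions above, `drop_database_stmt` and `alter_split_stmt`, might look like this (the database and table names are hypothetical):

~~~ sql
> DROP DATABASE IF EXISTS bank CASCADE;

> ALTER TABLE accounts SPLIT AT VALUES (100);
~~~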
diff --git a/src/current/_includes/v2.0/sql/diagrams/table.html b/src/current/_includes/v2.0/sql/diagrams/table.html deleted file mode 100644 index e69de29bb2d..00000000000 diff --git a/src/current/_includes/v2.0/sql/diagrams/table_clause.html b/src/current/_includes/v2.0/sql/diagrams/table_clause.html deleted file mode 100644 index 97691481d76..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/table_clause.html +++ /dev/null @@ -1,15 +0,0 @@ -
- table_clause: TABLE table_ref
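As a sketch of how this clause reads (the table name is hypothetical), `TABLE` is shorthand for selecting all rows of a table:

~~~ sql
> TABLE accounts;
~~~

This is equivalent to `SELECT * FROM accounts`.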
diff --git a/src/current/_includes/v2.0/sql/diagrams/table_constraint.html b/src/current/_includes/v2.0/sql/diagrams/table_constraint.html deleted file mode 100644 index ac37f0f1eac..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/table_constraint.html +++ /dev/null @@ -1,120 +0,0 @@ -
- table_constraint: CONSTRAINT constraint_name CHECK ( a_expr PRIMARY KEY ( index_params ) UNIQUE ( index_params ) COVERING STORING ( name_list ) opt_interleave opt_partition_by FOREIGN KEY ( name_list ) REFERENCES table_name opt_column_list reference_actions
\ No newline at end of file diff --git a/src/current/_includes/v2.0/sql/diagrams/table_ref.html b/src/current/_includes/v2.0/sql/diagrams/table_ref.html deleted file mode 100644 index e3164a6e2a9..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/table_ref.html +++ /dev/null @@ -1,85 +0,0 @@ - diff --git a/src/current/_includes/v2.0/sql/diagrams/truncate.html b/src/current/_includes/v2.0/sql/diagrams/truncate.html deleted file mode 100644 index 06cb91a310c..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/truncate.html +++ /dev/null @@ -1,28 +0,0 @@ -
- truncate: TRUNCATE TABLE table_name , CASCADE RESTRICT
\ No newline at end of file diff --git a/src/current/_includes/v2.0/sql/diagrams/unique_column_level.html b/src/current/_includes/v2.0/sql/diagrams/unique_column_level.html deleted file mode 100644 index c7c178e9351..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/unique_column_level.html +++ /dev/null @@ -1,59 +0,0 @@ -
- unique_column_level: CREATE TABLE table_name ( column_name column_type UNIQUE column_constraints , column_def table_constraints ) )
\ No newline at end of file diff --git a/src/current/_includes/v2.0/sql/diagrams/unique_table_level.html b/src/current/_includes/v2.0/sql/diagrams/unique_table_level.html deleted file mode 100644 index e77a972161a..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/unique_table_level.html +++ /dev/null @@ -1,63 +0,0 @@ -
- unique_table_level: CREATE TABLE table_name ( column_def , CONSTRAINT name UNIQUE ( column_name , ) table_constraints )
\ No newline at end of file diff --git a/src/current/_includes/v2.0/sql/diagrams/update.html b/src/current/_includes/v2.0/sql/diagrams/update.html deleted file mode 100644 index 7ead70594b4..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/update.html +++ /dev/null @@ -1,118 +0,0 @@ -
- update: WITH common_table_expr , UPDATE table_name AS table_alias_name SET column_name = a_expr ( column_name , ) = ( select_stmt a_expr , ) , WHERE a_expr sort_clause limit_clause RETURNING target_list NOTHING
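A statement matching this `UPDATE` diagram, with hypothetical table and column names, might read:

~~~ sql
> UPDATE accounts SET balance = balance - 100 WHERE id = 1 RETURNING id, balance;
~~~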
diff --git a/src/current/_includes/v2.0/sql/diagrams/upsert.html b/src/current/_includes/v2.0/sql/diagrams/upsert.html deleted file mode 100644 index b4d7987ddfe..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/upsert.html +++ /dev/null @@ -1,71 +0,0 @@ -
- upsert: WITH common_table_expr , UPSERT INTO table_name AS table_alias_name ( column_name , ) select_stmt DEFAULT VALUES RETURNING target_list NOTHING
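A statement matching this `UPSERT` diagram, again with hypothetical names, might read:

~~~ sql
> UPSERT INTO accounts (id, balance) VALUES (2, 500.00);
~~~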
diff --git a/src/current/_includes/v2.0/sql/diagrams/validate_constraint.html b/src/current/_includes/v2.0/sql/diagrams/validate_constraint.html deleted file mode 100644 index d470d8dd98f..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/validate_constraint.html +++ /dev/null @@ -1,36 +0,0 @@ -
- validate_constraint: ALTER TABLE IF EXISTS table_name VALIDATE CONSTRAINT constraint_name
\ No newline at end of file diff --git a/src/current/_includes/v2.0/sql/diagrams/values_clause.html b/src/current/_includes/v2.0/sql/diagrams/values_clause.html deleted file mode 100644 index 34f78e982b4..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/values_clause.html +++ /dev/null @@ -1,27 +0,0 @@ -
- values_clause: VALUES ( a_expr , ) ,
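A standalone `VALUES` clause acts as a one-off relation; for example:

~~~ sql
> VALUES (1, 'checking'), (2, 'savings');
~~~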
diff --git a/src/current/_includes/v2.0/sql/diagrams/with_clause.html b/src/current/_includes/v2.0/sql/diagrams/with_clause.html deleted file mode 100644 index 0f746306ae3..00000000000 --- a/src/current/_includes/v2.0/sql/diagrams/with_clause.html +++ /dev/null @@ -1,71 +0,0 @@ - diff --git a/src/current/_includes/v2.0/sql/function-special-forms.md b/src/current/_includes/v2.0/sql/function-special-forms.md deleted file mode 100644 index bb4b06bbe39..00000000000 --- a/src/current/_includes/v2.0/sql/function-special-forms.md +++ /dev/null @@ -1,27 +0,0 @@ -| Special form | Equivalent to | -|-----------------------------------------------------------|---------------------------------------------| -| `CURRENT_CATALOG` | `current_catalog()` | -| `CURRENT_DATE` | `current_date()` | -| `CURRENT_ROLE` | `current_user()` | -| `CURRENT_SCHEMA` | `current_schema()` | -| `CURRENT_TIMESTAMP` | `current_timestamp()` | -| `CURRENT_TIME` | `current_time()` | -| `CURRENT_USER` | `current_user()` | -| `EXTRACT( FROM )` | `extract("", )` | -| `EXTRACT_DURATION( FROM )` | `extract_duration("", )` | -| `OVERLAY( PLACING FROM FOR )` | `overlay(, , , )` | -| `OVERLAY( PLACING FROM )` | `overlay(, , )` | -| `POSITION( IN )` | `strpos(, )` | -| `SESSION_USER` | `current_user()` | -| `SUBSTRING( FOR FROM )` | `substring(, , )` | -| `SUBSTRING( FOR )` | `substring(, 1, )` | -| `SUBSTRING( FROM FOR )` | `substring(, , )` | -| `SUBSTRING( FROM )` | `substring(, )` | -| `TRIM( FROM )` | `btrim(, )` | -| `TRIM(, )` | `btrim(, )` | -| `TRIM(FROM )` | `btrim()` | -| `TRIM(LEADING FROM )` | `ltrim(, )` | -| `TRIM(LEADING FROM )` | `ltrim()` | -| `TRIM(TRAILING FROM )` | `rtrim(, )` | -| `TRIM(TRAILING FROM )` | `rtrim()` | -| `USER` | `current_user()` | diff --git a/src/current/_includes/v2.0/sql/settings/settings.md b/src/current/_includes/v2.0/sql/settings/settings.md deleted file mode 100644 index 67a6dab2e4a..00000000000 --- a/src/current/_includes/v2.0/sql/settings/settings.md +++ /dev/null @@ -1,52 +0,0 @@ -| SETTING | TYPE | DEFAULT | DESCRIPTION | -|-----------------------------------------------------|-------------------|------------|-------------------------------------------------------------------------------------------------------------------------------------------------| -| `cloudstorage.gs.default.key` | string | `` | if set, JSON key to use during Google Cloud Storage operations | -| `cloudstorage.http.custom_ca` | string | `` | custom root CA (appended to system's default CAs) for verifying certificates when interacting with HTTPS storage | -| `cluster.organization` | string | `` | organization name | -| `debug.panic_on_failed_assertions` | boolean | `false` | panic when an assertion fails rather than reporting | -| `diagnostics.reporting.enabled` | boolean | `true` | enable reporting diagnostic metrics to cockroach labs | -| `diagnostics.reporting.interval` | duration | `1h0m0s` | interval at which diagnostics data should be reported | -| `diagnostics.reporting.send_crash_reports` | boolean | `true` | send crash and panic reports | -| `kv.allocator.lease_rebalancing_aggressiveness` | float | `1E+00` | set greater than 1.0 to rebalance leases toward load more aggressively, or between 0 and 1.0 to be more conservative about rebalancing leases | -| `kv.allocator.load_based_lease_rebalancing.enabled` | boolean | `true` | set to enable rebalancing of range leases based on load and latency | -| `kv.allocator.range_rebalance_threshold` | float | `5E-02` | minimum fraction away from the mean a store's range 
count can be before it is considered overfull or underfull | -| `kv.allocator.stat_based_rebalancing.enabled` | boolean | `false` | set to enable rebalancing of range replicas based on write load and disk usage | -| `kv.allocator.stat_rebalance_threshold` | float | `2E-01` | minimum fraction away from the mean a store's stats (like disk usage or writes per second) can be before it is considered overfull or underfull | -| `kv.bulk_io_write.max_rate` | byte size | `8.0 EiB` | the rate limit (bytes/sec) to use for writes to disk on behalf of bulk io ops | -| `kv.bulk_sst.sync_size` | byte size | `2.0 MiB` | threshold after which non-Rocks SST writes must fsync (0 disables) | -| `kv.raft.command.max_size` | byte size | `64 MiB` | maximum size of a raft command | -| `kv.raft_log.synchronize` | boolean | `true` | set to true to synchronize on Raft log writes to persistent storage | -| `kv.range.backpressure_range_size_multiplier` | float | `2E+00` | multiple of range_max_bytes that a range is allowed to grow to without splitting before writes to that range are blocked, or 0 to disable | -| `kv.range_descriptor_cache.size` | integer | `1000000` | maximum number of entries in the range descriptor and leaseholder caches | -| `kv.snapshot_rebalance.max_rate` | byte size | `2.0 MiB` | the rate limit (bytes/sec) to use for rebalance snapshots | -| `kv.snapshot_recovery.max_rate` | byte size | `8.0 MiB` | the rate limit (bytes/sec) to use for recovery snapshots | -| `kv.transaction.max_intents_bytes` | integer | `256000` | maximum number of bytes used to track write intents in transactions | -| `kv.transaction.max_refresh_spans_bytes` | integer | `256000` | maximum number of bytes used to track refresh spans in serializable transactions | -| `rocksdb.min_wal_sync_interval` | duration | `0s` | minimum duration between syncs of the RocksDB WAL | -| `server.consistency_check.interval` | duration | `24h0m0s` | the time between range consistency checks; set to 0 to disable consistency checking | -| `server.declined_reservation_timeout` | duration | `1s` | the amount of time to consider the store throttled for up-replication after a reservation was declined | -| `server.failed_reservation_timeout` | duration | `5s` | the amount of time to consider the store throttled for up-replication after a failed reservation call | -| `server.remote_debugging.mode` | string | `local` | set to enable remote debugging, localhost-only or disable (any, local, off) | -| `server.shutdown.drain_wait` | duration | `0s` | the amount of time a server waits in an unready state before proceeding with the rest of the shutdown process | -| `server.shutdown.query_wait` | duration | `10s` | the server will wait for at least this amount of time for active queries to finish | -| `server.time_until_store_dead` | duration | `5m0s` | the time after which if there is no new gossiped information about a store, it is considered dead | -| `server.web_session_timeout` | duration | `168h0m0s` | the duration that a newly created web session will be valid | -| `sql.defaults.distsql` | enumeration | `1` | Default distributed SQL execution mode [off = 0, auto = 1, on = 2] | -| `sql.distsql.distribute_index_joins` | boolean | `true` | if set, for index joins we instantiate a join reader on every node that has a stream; if not set, we use a single join reader | -| `sql.distsql.interleaved_joins.enabled` | boolean | `true` | if set we plan interleaved table joins instead of merge joins when possible | -| `sql.distsql.merge_joins.enabled` | boolean | 
`true` | if set, we plan merge joins when possible | -| `sql.distsql.temp_storage.joins` | boolean | `true` | set to true to enable use of disk for distributed sql joins | -| `sql.distsql.temp_storage.sorts` | boolean | `true` | set to true to enable use of disk for distributed sql sorts | -| `sql.distsql.temp_storage.workmem` | byte size | `64 MiB` | maximum amount of memory in bytes a processor can use before falling back to temp storage | -| `sql.metrics.statement_details.dump_to_logs` | boolean | `false` | dump collected statement statistics to node logs when periodically cleared | -| `sql.metrics.statement_details.enabled` | boolean | `true` | collect per-statement query statistics | -| `sql.metrics.statement_details.threshold` | duration | `0s` | minimum execution time to cause statistics to be collected | -| `sql.trace.log_statement_execute` | boolean | `false` | set to true to enable logging of executed statements | -| `sql.trace.session_eventlog.enabled` | boolean | `false` | set to true to enable session tracing | -| `sql.trace.txn.enable_threshold` | duration | `0s` | duration beyond which all transactions are traced (set to 0 to disable) | -| `timeseries.resolution_10s.storage_duration` | duration | `720h0m0s` | the amount of time to store timeseries data | -| `timeseries.storage.enabled` | boolean | `true` | if set, periodic timeseries data is stored within the cluster; disabling is not recommended unless you are storing the data elsewhere | -| `trace.debug.enable` | boolean | `false` | if set, traces for recent requests can be seen in the /debug page | -| `trace.lightstep.token` | string | `` | if set, traces go to Lightstep using this token | -| `trace.zipkin.collector` | string | `` | if set, traces go to the given Zipkin instance (example: '127.0.0.1:9411'); ignored if trace.lightstep.token is set. | -| `version` | custom validation | `2.0` | set the active cluster version in the format '.'. | diff --git a/src/current/_includes/v2.0/start-in-docker/mac-linux-steps.md b/src/current/_includes/v2.0/start-in-docker/mac-linux-steps.md deleted file mode 100644 index e8715c0dd48..00000000000 --- a/src/current/_includes/v2.0/start-in-docker/mac-linux-steps.md +++ /dev/null @@ -1,160 +0,0 @@ -## Before you begin - -If you have not already installed the official CockroachDB Docker image, go to [Install CockroachDB](install-cockroachdb.html) and follow the instructions under **Use Docker**. - -## Step 1. Create a bridge network - -Since you'll be running multiple Docker containers on a single host, with one CockroachDB node per container, you need to create what Docker refers to as a [bridge network](https://docs.docker.com/engine/userguide/networking/#/a-bridge-network). The bridge network will enable the containers to communicate as a single cluster while keeping them isolated from external networks. - -{% include copy-clipboard.html %} -~~~ shell -$ docker network create -d bridge roachnet -~~~ - -We've used `roachnet` as the network name here and in subsequent steps, but feel free to give your network any name you like. - -## Step 2. Start the first node - -{% include copy-clipboard.html %} -~~~ shell -$ docker run -d \ ---name=roach1 \ ---hostname=roach1 \ ---net=roachnet \ --p 26257:26257 -p 8080:8080 \ --v "${PWD}/cockroach-data/roach1:/cockroach/cockroach-data" \ -{{page.release_info.docker_image}}:{{page.release_info.version}} start --insecure -~~~ - -This command creates a container and starts the first CockroachDB node inside it. 
Let's look at each part: - -- `docker run`: The Docker command to start a new container. -- `-d`: This flag runs the container in the background so you can continue the next steps in the same shell. -- `--name`: The name for the container. This is optional, but a custom name makes it significantly easier to reference the container in other commands, for example, when opening a Bash session in the container or stopping the container. -- `--hostname`: The hostname for the container. You will use this to join other containers/nodes to the cluster. -- `--net`: The bridge network for the container to join. See step 1 for more details. -- `-p 26257:26257 -p 8080:8080`: These flags map the default port for inter-node and client-node communication (`26257`) and the default port for HTTP requests to the Admin UI (`8080`) from the container to the host. This enables inter-container communication and makes it possible to call up the Admin UI from a browser. -- `-v "${PWD}/cockroach-data/roach1:/cockroach/cockroach-data"`: This flag mounts a host directory as a data volume. This means that data and logs for this node will be stored in `${PWD}/cockroach-data/roach1` on the host and will persist after the container is stopped or deleted. For more details, see Docker's Bind Mounts topic. -- `{{page.release_info.docker_image}}:{{page.release_info.version}} start --insecure`: The CockroachDB command to [start a node](start-a-node.html) in the container in insecure mode. - -## Step 3. Add nodes to the cluster - -At this point, your cluster is live and operational. With just one node, you can already connect a SQL client and start building out your database. In real deployments, however, you'll always want 3 or more nodes to take advantage of CockroachDB's [automatic replication](demo-data-replication.html), [rebalancing](demo-automatic-rebalancing.html), and [fault tolerance](demo-fault-tolerance-and-recovery.html) capabilities. - -To simulate a real deployment, scale your cluster by adding two more nodes: - -{% include copy-clipboard.html %} -~~~ shell -$ docker run -d \ ---name=roach2 \ ---hostname=roach2 \ ---net=roachnet \ --v "${PWD}/cockroach-data/roach2:/cockroach/cockroach-data" \ -{{page.release_info.docker_image}}:{{page.release_info.version}} start --insecure --join=roach1 -~~~ - -{% include copy-clipboard.html %} -~~~ shell -$ docker run -d \ ---name=roach3 \ ---hostname=roach3 \ ---net=roachnet \ --v "${PWD}/cockroach-data/roach3:/cockroach/cockroach-data" \ -{{page.release_info.docker_image}}:{{page.release_info.version}} start --insecure --join=roach1 -~~~ - -These commands add two more containers and start CockroachDB nodes inside them, joining them to the first node. There are only a few differences to note from step 2: - -- `-v`: This flag mounts a host directory as a data volume. Data and logs for these nodes will be stored in `${PWD}/cockroach-data/roach2` and `${PWD}/cockroach-data/roach3` on the host and will persist after the containers are stopped or deleted. -- `--join`: This flag joins the new nodes to the cluster, using the first container's `hostname`. Otherwise, all [`cockroach start`](start-a-node.html) defaults are accepted. Note that since each node is in a unique container, using identical default ports won’t cause conflicts. - -## Step 4. Test the cluster - -Now that you've scaled to 3 nodes, you can use any node as a SQL gateway to the cluster. 
To demonstrate this, use the `docker exec` command to start the [built-in SQL shell](use-the-built-in-sql-client.html) in the first container: - -{% include copy-clipboard.html %} -~~~ shell -$ docker exec -it roach1 ./cockroach sql --insecure -~~~ - -~~~ -# Welcome to the cockroach SQL interface. -# All statements must be terminated by a semicolon. -# To exit: CTRL + D. -~~~ - -Run some basic [CockroachDB SQL statements](learn-cockroachdb-sql.html): - -{% include copy-clipboard.html %} -~~~ sql -> CREATE DATABASE bank; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE bank.accounts (id INT PRIMARY KEY, balance DECIMAL); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO bank.accounts VALUES (1, 1000.50); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM bank.accounts; -~~~ - -~~~ -+----+---------+ -| id | balance | -+----+---------+ -| 1 | 1000.5 | -+----+---------+ -(1 row) -~~~ - -Exit the SQL shell on node 1: - -{% include copy-clipboard.html %} -~~~ sql -> \q -~~~ - -Then start the SQL shell in the second container: - -{% include copy-clipboard.html %} -~~~ shell -$ docker exec -it roach2 ./cockroach sql --insecure -~~~ - -~~~ -# Welcome to the cockroach SQL interface. -# All statements must be terminated by a semicolon. -# To exit: CTRL + D. -~~~ - -Now run the same `SELECT` query: - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM bank.accounts; -~~~ - -~~~ -+----+---------+ -| id | balance | -+----+---------+ -| 1 | 1000.5 | -+----+---------+ -(1 row) -~~~ - -As you can see, node 1 and node 2 behaved identically as SQL gateways. - -When you're done, exit the SQL shell on node 2: - -{% include copy-clipboard.html %} -~~~ sql -> \q -~~~ diff --git a/src/current/images/v2.0/2automated-scaling-repair.png b/src/current/images/v2.0/2automated-scaling-repair.png deleted file mode 100644 index 2402db24d75..00000000000 Binary files a/src/current/images/v2.0/2automated-scaling-repair.png and /dev/null differ diff --git a/src/current/images/v2.0/2distributed-transactions.png b/src/current/images/v2.0/2distributed-transactions.png deleted file mode 100644 index 52fc2d11943..00000000000 Binary files a/src/current/images/v2.0/2distributed-transactions.png and /dev/null differ diff --git a/src/current/images/v2.0/2go-implementation.png b/src/current/images/v2.0/2go-implementation.png deleted file mode 100644 index e5729f51cfb..00000000000 Binary files a/src/current/images/v2.0/2go-implementation.png and /dev/null differ diff --git a/src/current/images/v2.0/2open-source.png b/src/current/images/v2.0/2open-source.png deleted file mode 100644 index b2a936d8d29..00000000000 Binary files a/src/current/images/v2.0/2open-source.png and /dev/null differ diff --git a/src/current/images/v2.0/2simplified-deployments.png b/src/current/images/v2.0/2simplified-deployments.png deleted file mode 100644 index 15576d1ae5d..00000000000 Binary files a/src/current/images/v2.0/2simplified-deployments.png and /dev/null differ diff --git a/src/current/images/v2.0/2strong-consistency.png b/src/current/images/v2.0/2strong-consistency.png deleted file mode 100644 index 571dc01761d..00000000000 Binary files a/src/current/images/v2.0/2strong-consistency.png and /dev/null differ diff --git a/src/current/images/v2.0/CockroachDB_Training_Wide.png b/src/current/images/v2.0/CockroachDB_Training_Wide.png deleted file mode 100644 index 0844c2b50e0..00000000000 Binary files a/src/current/images/v2.0/CockroachDB_Training_Wide.png and /dev/null differ diff --git 
a/src/current/images/v2.0/Parallel_Statement_Execution_Error_Mismatch.png b/src/current/images/v2.0/Parallel_Statement_Execution_Error_Mismatch.png deleted file mode 100644 index f60360c9598..00000000000 Binary files a/src/current/images/v2.0/Parallel_Statement_Execution_Error_Mismatch.png and /dev/null differ diff --git a/src/current/images/v2.0/Parallel_Statement_Hybrid_Execution.png b/src/current/images/v2.0/Parallel_Statement_Hybrid_Execution.png deleted file mode 100644 index a4edf85dc02..00000000000 Binary files a/src/current/images/v2.0/Parallel_Statement_Hybrid_Execution.png and /dev/null differ diff --git a/src/current/images/v2.0/Parallel_Statement_Normal_Execution.png b/src/current/images/v2.0/Parallel_Statement_Normal_Execution.png deleted file mode 100644 index df63ab1da01..00000000000 Binary files a/src/current/images/v2.0/Parallel_Statement_Normal_Execution.png and /dev/null differ diff --git a/src/current/images/v2.0/Sequential_Statement_Execution.png b/src/current/images/v2.0/Sequential_Statement_Execution.png deleted file mode 100644 index 99c47c51664..00000000000 Binary files a/src/current/images/v2.0/Sequential_Statement_Execution.png and /dev/null differ diff --git a/src/current/images/v2.0/admin-ui-cluster-overview-panel.png b/src/current/images/v2.0/admin-ui-cluster-overview-panel.png deleted file mode 100644 index ee906077ee8..00000000000 Binary files a/src/current/images/v2.0/admin-ui-cluster-overview-panel.png and /dev/null differ diff --git a/src/current/images/v2.0/admin-ui-custom-chart-debug-00.png b/src/current/images/v2.0/admin-ui-custom-chart-debug-00.png deleted file mode 100644 index 7e94d83da20..00000000000 Binary files a/src/current/images/v2.0/admin-ui-custom-chart-debug-00.png and /dev/null differ diff --git a/src/current/images/v2.0/admin-ui-node-components.png b/src/current/images/v2.0/admin-ui-node-components.png deleted file mode 100644 index 2ed730ff80c..00000000000 Binary files a/src/current/images/v2.0/admin-ui-node-components.png and /dev/null differ diff --git a/src/current/images/v2.0/admin-ui-node-list.png b/src/current/images/v2.0/admin-ui-node-list.png deleted file mode 100644 index 9820b63c12a..00000000000 Binary files a/src/current/images/v2.0/admin-ui-node-list.png and /dev/null differ diff --git a/src/current/images/v2.0/admin-ui-node-map-after-license.png b/src/current/images/v2.0/admin-ui-node-map-after-license.png deleted file mode 100644 index fa47a7b579f..00000000000 Binary files a/src/current/images/v2.0/admin-ui-node-map-after-license.png and /dev/null differ diff --git a/src/current/images/v2.0/admin-ui-node-map-before-license.png b/src/current/images/v2.0/admin-ui-node-map-before-license.png deleted file mode 100644 index f352e214868..00000000000 Binary files a/src/current/images/v2.0/admin-ui-node-map-before-license.png and /dev/null differ diff --git a/src/current/images/v2.0/admin-ui-node-map-complete.png b/src/current/images/v2.0/admin-ui-node-map-complete.png deleted file mode 100644 index 46b1c38d4bf..00000000000 Binary files a/src/current/images/v2.0/admin-ui-node-map-complete.png and /dev/null differ diff --git a/src/current/images/v2.0/admin-ui-node-map-navigation.gif b/src/current/images/v2.0/admin-ui-node-map-navigation.gif deleted file mode 100644 index 67ce2dc009c..00000000000 Binary files a/src/current/images/v2.0/admin-ui-node-map-navigation.gif and /dev/null differ diff --git a/src/current/images/v2.0/admin-ui-node-map.png b/src/current/images/v2.0/admin-ui-node-map.png deleted file mode 100644 index 
c1e0b83a3dc..00000000000 Binary files a/src/current/images/v2.0/admin-ui-node-map.png and /dev/null differ diff --git a/src/current/images/v2.0/admin-ui-region-component.png b/src/current/images/v2.0/admin-ui-region-component.png deleted file mode 100644 index c36a362d107..00000000000 Binary files a/src/current/images/v2.0/admin-ui-region-component.png and /dev/null differ diff --git a/src/current/images/v2.0/admin-ui-single-node.gif b/src/current/images/v2.0/admin-ui-single-node.gif deleted file mode 100644 index f60d25b0e2a..00000000000 Binary files a/src/current/images/v2.0/admin-ui-single-node.gif and /dev/null differ diff --git a/src/current/images/v2.0/admin-ui-time-range.gif b/src/current/images/v2.0/admin-ui-time-range.gif deleted file mode 100644 index c28807b9a1b..00000000000 Binary files a/src/current/images/v2.0/admin-ui-time-range.gif and /dev/null differ diff --git a/src/current/images/v2.0/admin_ui.png b/src/current/images/v2.0/admin_ui.png deleted file mode 100644 index 33bce5efcdd..00000000000 Binary files a/src/current/images/v2.0/admin_ui.png and /dev/null differ diff --git a/src/current/images/v2.0/admin_ui_capacity.png b/src/current/images/v2.0/admin_ui_capacity.png deleted file mode 100644 index 1e9085851af..00000000000 Binary files a/src/current/images/v2.0/admin_ui_capacity.png and /dev/null differ diff --git a/src/current/images/v2.0/admin_ui_clock_offset.png b/src/current/images/v2.0/admin_ui_clock_offset.png deleted file mode 100644 index 2f4b3051282..00000000000 Binary files a/src/current/images/v2.0/admin_ui_clock_offset.png and /dev/null differ diff --git a/src/current/images/v2.0/admin_ui_cpu_time.png b/src/current/images/v2.0/admin_ui_cpu_time.png deleted file mode 100644 index 3e81817ca38..00000000000 Binary files a/src/current/images/v2.0/admin_ui_cpu_time.png and /dev/null differ diff --git a/src/current/images/v2.0/admin_ui_database_grants_view.png b/src/current/images/v2.0/admin_ui_database_grants_view.png deleted file mode 100644 index ad18cc34ce6..00000000000 Binary files a/src/current/images/v2.0/admin_ui_database_grants_view.png and /dev/null differ diff --git a/src/current/images/v2.0/admin_ui_database_tables_view.png b/src/current/images/v2.0/admin_ui_database_tables_view.png deleted file mode 100644 index 27acc8b8efb..00000000000 Binary files a/src/current/images/v2.0/admin_ui_database_tables_view.png and /dev/null differ diff --git a/src/current/images/v2.0/admin_ui_events.png b/src/current/images/v2.0/admin_ui_events.png deleted file mode 100644 index 3d3a4738c78..00000000000 Binary files a/src/current/images/v2.0/admin_ui_events.png and /dev/null differ diff --git a/src/current/images/v2.0/admin_ui_file_descriptors.png b/src/current/images/v2.0/admin_ui_file_descriptors.png deleted file mode 100644 index 42187c9878d..00000000000 Binary files a/src/current/images/v2.0/admin_ui_file_descriptors.png and /dev/null differ diff --git a/src/current/images/v2.0/admin_ui_hovering.gif b/src/current/images/v2.0/admin_ui_hovering.gif deleted file mode 100644 index 1795471051f..00000000000 Binary files a/src/current/images/v2.0/admin_ui_hovering.gif and /dev/null differ diff --git a/src/current/images/v2.0/admin_ui_jobs_page.png b/src/current/images/v2.0/admin_ui_jobs_page.png deleted file mode 100644 index a9f07a785a3..00000000000 Binary files a/src/current/images/v2.0/admin_ui_jobs_page.png and /dev/null differ diff --git a/src/current/images/v2.0/admin_ui_memory_usage.png b/src/current/images/v2.0/admin_ui_memory_usage.png deleted file mode 100644 index 
ffc2c515616..00000000000 Binary files a/src/current/images/v2.0/admin_ui_memory_usage.png and /dev/null differ diff --git a/src/current/images/v2.0/admin_ui_node_count.png b/src/current/images/v2.0/admin_ui_node_count.png deleted file mode 100644 index d5c103fc868..00000000000 Binary files a/src/current/images/v2.0/admin_ui_node_count.png and /dev/null differ diff --git a/src/current/images/v2.0/admin_ui_nodes_page.png b/src/current/images/v2.0/admin_ui_nodes_page.png deleted file mode 100644 index 495ff14eea0..00000000000 Binary files a/src/current/images/v2.0/admin_ui_nodes_page.png and /dev/null differ diff --git a/src/current/images/v2.0/admin_ui_overview_dashboard.png b/src/current/images/v2.0/admin_ui_overview_dashboard.png deleted file mode 100644 index f1ef539a293..00000000000 Binary files a/src/current/images/v2.0/admin_ui_overview_dashboard.png and /dev/null differ diff --git a/src/current/images/v2.0/admin_ui_ranges.png b/src/current/images/v2.0/admin_ui_ranges.png deleted file mode 100644 index 316186bb4a3..00000000000 Binary files a/src/current/images/v2.0/admin_ui_ranges.png and /dev/null differ diff --git a/src/current/images/v2.0/admin_ui_replica_quiescence.png b/src/current/images/v2.0/admin_ui_replica_quiescence.png deleted file mode 100644 index 663dbfb097e..00000000000 Binary files a/src/current/images/v2.0/admin_ui_replica_quiescence.png and /dev/null differ diff --git a/src/current/images/v2.0/admin_ui_replica_snapshots.png b/src/current/images/v2.0/admin_ui_replica_snapshots.png deleted file mode 100644 index 56146c7f775..00000000000 Binary files a/src/current/images/v2.0/admin_ui_replica_snapshots.png and /dev/null differ diff --git a/src/current/images/v2.0/admin_ui_replicas.png b/src/current/images/v2.0/admin_ui_replicas.png deleted file mode 100644 index 8ee31eed675..00000000000 Binary files a/src/current/images/v2.0/admin_ui_replicas.png and /dev/null differ diff --git a/src/current/images/v2.0/admin_ui_replicas_migration.png b/src/current/images/v2.0/admin_ui_replicas_migration.png deleted file mode 100644 index 6e08c5a3a5b..00000000000 Binary files a/src/current/images/v2.0/admin_ui_replicas_migration.png and /dev/null differ diff --git a/src/current/images/v2.0/admin_ui_replicas_migration2.png b/src/current/images/v2.0/admin_ui_replicas_migration2.png deleted file mode 100644 index f7183689f20..00000000000 Binary files a/src/current/images/v2.0/admin_ui_replicas_migration2.png and /dev/null differ diff --git a/src/current/images/v2.0/admin_ui_replicas_migration3.png b/src/current/images/v2.0/admin_ui_replicas_migration3.png deleted file mode 100644 index b7d9fd39760..00000000000 Binary files a/src/current/images/v2.0/admin_ui_replicas_migration3.png and /dev/null differ diff --git a/src/current/images/v2.0/admin_ui_replicas_per_node.png b/src/current/images/v2.0/admin_ui_replicas_per_node.png deleted file mode 100644 index a6a662c6f32..00000000000 Binary files a/src/current/images/v2.0/admin_ui_replicas_per_node.png and /dev/null differ diff --git a/src/current/images/v2.0/admin_ui_replicas_per_store.png b/src/current/images/v2.0/admin_ui_replicas_per_store.png deleted file mode 100644 index 2036c392fc8..00000000000 Binary files a/src/current/images/v2.0/admin_ui_replicas_per_store.png and /dev/null differ diff --git a/src/current/images/v2.0/admin_ui_service_latency_99_percentile.png b/src/current/images/v2.0/admin_ui_service_latency_99_percentile.png deleted file mode 100644 index 7e14805d21d..00000000000 Binary files 
a/src/current/images/v2.0/admin_ui_service_latency_99_percentile.png and /dev/null differ diff --git a/src/current/images/v2.0/admin_ui_sql_byte_traffic.png b/src/current/images/v2.0/admin_ui_sql_byte_traffic.png deleted file mode 100644 index 9f077b25259..00000000000 Binary files a/src/current/images/v2.0/admin_ui_sql_byte_traffic.png and /dev/null differ diff --git a/src/current/images/v2.0/admin_ui_sql_connections.png b/src/current/images/v2.0/admin_ui_sql_connections.png deleted file mode 100644 index 7cda5614e49..00000000000 Binary files a/src/current/images/v2.0/admin_ui_sql_connections.png and /dev/null differ diff --git a/src/current/images/v2.0/admin_ui_sql_queries.png b/src/current/images/v2.0/admin_ui_sql_queries.png deleted file mode 100644 index 94ed02d88ae..00000000000 Binary files a/src/current/images/v2.0/admin_ui_sql_queries.png and /dev/null differ diff --git a/src/current/images/v2.0/admin_ui_summary_panel.png b/src/current/images/v2.0/admin_ui_summary_panel.png deleted file mode 100644 index 5eaa9b18439..00000000000 Binary files a/src/current/images/v2.0/admin_ui_summary_panel.png and /dev/null differ diff --git a/src/current/images/v2.0/admin_ui_transactions.png b/src/current/images/v2.0/admin_ui_transactions.png deleted file mode 100644 index 5131ecc6b2d..00000000000 Binary files a/src/current/images/v2.0/admin_ui_transactions.png and /dev/null differ diff --git a/src/current/images/v2.0/after-decommission1.png b/src/current/images/v2.0/after-decommission1.png deleted file mode 100644 index 945ec05f974..00000000000 Binary files a/src/current/images/v2.0/after-decommission1.png and /dev/null differ diff --git a/src/current/images/v2.0/after-decommission2.png b/src/current/images/v2.0/after-decommission2.png deleted file mode 100644 index fbb041d2c14..00000000000 Binary files a/src/current/images/v2.0/after-decommission2.png and /dev/null differ diff --git a/src/current/images/v2.0/automated-operations1.png b/src/current/images/v2.0/automated-operations1.png deleted file mode 100644 index 64c6e51616c..00000000000 Binary files a/src/current/images/v2.0/automated-operations1.png and /dev/null differ diff --git a/src/current/images/v2.0/before-decommission1.png b/src/current/images/v2.0/before-decommission1.png deleted file mode 100644 index 91627545b22..00000000000 Binary files a/src/current/images/v2.0/before-decommission1.png and /dev/null differ diff --git a/src/current/images/v2.0/before-decommission2.png b/src/current/images/v2.0/before-decommission2.png deleted file mode 100644 index 063efeb6326..00000000000 Binary files a/src/current/images/v2.0/before-decommission2.png and /dev/null differ diff --git a/src/current/images/v2.0/cloudformation_admin_ui_live_node_count.png b/src/current/images/v2.0/cloudformation_admin_ui_live_node_count.png deleted file mode 100644 index fce52a39034..00000000000 Binary files a/src/current/images/v2.0/cloudformation_admin_ui_live_node_count.png and /dev/null differ diff --git a/src/current/images/v2.0/cloudformation_admin_ui_replicas.png b/src/current/images/v2.0/cloudformation_admin_ui_replicas.png deleted file mode 100644 index 9327b1004e4..00000000000 Binary files a/src/current/images/v2.0/cloudformation_admin_ui_replicas.png and /dev/null differ diff --git a/src/current/images/v2.0/cloudformation_admin_ui_sql_queries.png b/src/current/images/v2.0/cloudformation_admin_ui_sql_queries.png deleted file mode 100644 index 843d94b30f0..00000000000 Binary files a/src/current/images/v2.0/cloudformation_admin_ui_sql_queries.png and 
/dev/null differ diff --git a/src/current/images/v2.0/cluster-status-after-decommission1.png b/src/current/images/v2.0/cluster-status-after-decommission1.png deleted file mode 100644 index 35d96fef0d5..00000000000 Binary files a/src/current/images/v2.0/cluster-status-after-decommission1.png and /dev/null differ diff --git a/src/current/images/v2.0/cluster-status-after-decommission2.png b/src/current/images/v2.0/cluster-status-after-decommission2.png deleted file mode 100644 index e420e202aa6..00000000000 Binary files a/src/current/images/v2.0/cluster-status-after-decommission2.png and /dev/null differ diff --git a/src/current/images/v2.0/decommission-multiple1.png b/src/current/images/v2.0/decommission-multiple1.png deleted file mode 100644 index 30c90280f7c..00000000000 Binary files a/src/current/images/v2.0/decommission-multiple1.png and /dev/null differ diff --git a/src/current/images/v2.0/decommission-multiple2.png b/src/current/images/v2.0/decommission-multiple2.png deleted file mode 100644 index d93abcd4acb..00000000000 Binary files a/src/current/images/v2.0/decommission-multiple2.png and /dev/null differ diff --git a/src/current/images/v2.0/decommission-multiple3.png b/src/current/images/v2.0/decommission-multiple3.png deleted file mode 100644 index 3a1d17176de..00000000000 Binary files a/src/current/images/v2.0/decommission-multiple3.png and /dev/null differ diff --git a/src/current/images/v2.0/decommission-multiple4.png b/src/current/images/v2.0/decommission-multiple4.png deleted file mode 100644 index 854c4ba50c9..00000000000 Binary files a/src/current/images/v2.0/decommission-multiple4.png and /dev/null differ diff --git a/src/current/images/v2.0/decommission-multiple5.png b/src/current/images/v2.0/decommission-multiple5.png deleted file mode 100644 index 3a8621e956b..00000000000 Binary files a/src/current/images/v2.0/decommission-multiple5.png and /dev/null differ diff --git a/src/current/images/v2.0/decommission-multiple6.png b/src/current/images/v2.0/decommission-multiple6.png deleted file mode 100644 index 168ba907be1..00000000000 Binary files a/src/current/images/v2.0/decommission-multiple6.png and /dev/null differ diff --git a/src/current/images/v2.0/decommission-multiple7.png b/src/current/images/v2.0/decommission-multiple7.png deleted file mode 100644 index a52d034cf9a..00000000000 Binary files a/src/current/images/v2.0/decommission-multiple7.png and /dev/null differ diff --git a/src/current/images/v2.0/decommission-scenario1.1.png b/src/current/images/v2.0/decommission-scenario1.1.png deleted file mode 100644 index a66389270de..00000000000 Binary files a/src/current/images/v2.0/decommission-scenario1.1.png and /dev/null differ diff --git a/src/current/images/v2.0/decommission-scenario1.2.png b/src/current/images/v2.0/decommission-scenario1.2.png deleted file mode 100644 index 9b33855e101..00000000000 Binary files a/src/current/images/v2.0/decommission-scenario1.2.png and /dev/null differ diff --git a/src/current/images/v2.0/decommission-scenario1.3.png b/src/current/images/v2.0/decommission-scenario1.3.png deleted file mode 100644 index 4c1175d956b..00000000000 Binary files a/src/current/images/v2.0/decommission-scenario1.3.png and /dev/null differ diff --git a/src/current/images/v2.0/decommission-scenario2.1.png b/src/current/images/v2.0/decommission-scenario2.1.png deleted file mode 100644 index 2fa8790c556..00000000000 Binary files a/src/current/images/v2.0/decommission-scenario2.1.png and /dev/null differ diff --git 
a/src/current/images/v2.0/decommission-scenario2.2.png b/src/current/images/v2.0/decommission-scenario2.2.png deleted file mode 100644 index 391b8e24c0f..00000000000 Binary files a/src/current/images/v2.0/decommission-scenario2.2.png and /dev/null differ diff --git a/src/current/images/v2.0/decommission-scenario3.1.png b/src/current/images/v2.0/decommission-scenario3.1.png deleted file mode 100644 index db682df3d78..00000000000 Binary files a/src/current/images/v2.0/decommission-scenario3.1.png and /dev/null differ diff --git a/src/current/images/v2.0/decommission-scenario3.2.png b/src/current/images/v2.0/decommission-scenario3.2.png deleted file mode 100644 index 3571bd0b83e..00000000000 Binary files a/src/current/images/v2.0/decommission-scenario3.2.png and /dev/null differ diff --git a/src/current/images/v2.0/decommission-scenario3.3.png b/src/current/images/v2.0/decommission-scenario3.3.png deleted file mode 100644 index 45f61d9bd18..00000000000 Binary files a/src/current/images/v2.0/decommission-scenario3.3.png and /dev/null differ diff --git a/src/current/images/v2.0/follow-workload-1.png b/src/current/images/v2.0/follow-workload-1.png deleted file mode 100644 index a58fcb2e5ed..00000000000 Binary files a/src/current/images/v2.0/follow-workload-1.png and /dev/null differ diff --git a/src/current/images/v2.0/follow-workload-2.png b/src/current/images/v2.0/follow-workload-2.png deleted file mode 100644 index 47d83c5d4d6..00000000000 Binary files a/src/current/images/v2.0/follow-workload-2.png and /dev/null differ diff --git a/src/current/images/v2.0/icon_info.svg b/src/current/images/v2.0/icon_info.svg deleted file mode 100644 index 57aac994733..00000000000 --- a/src/current/images/v2.0/icon_info.svg +++ /dev/null @@ -1,4 +0,0 @@ - - - - \ No newline at end of file diff --git a/src/current/images/v2.0/perf_tuning_concepts1.png b/src/current/images/v2.0/perf_tuning_concepts1.png deleted file mode 100644 index 3a086a41c26..00000000000 Binary files a/src/current/images/v2.0/perf_tuning_concepts1.png and /dev/null differ diff --git a/src/current/images/v2.0/perf_tuning_concepts2.png b/src/current/images/v2.0/perf_tuning_concepts2.png deleted file mode 100644 index d67b8f253f8..00000000000 Binary files a/src/current/images/v2.0/perf_tuning_concepts2.png and /dev/null differ diff --git a/src/current/images/v2.0/perf_tuning_concepts3.png b/src/current/images/v2.0/perf_tuning_concepts3.png deleted file mode 100644 index 46d666be55d..00000000000 Binary files a/src/current/images/v2.0/perf_tuning_concepts3.png and /dev/null differ diff --git a/src/current/images/v2.0/perf_tuning_concepts4.png b/src/current/images/v2.0/perf_tuning_concepts4.png deleted file mode 100644 index b60b19e01bf..00000000000 Binary files a/src/current/images/v2.0/perf_tuning_concepts4.png and /dev/null differ diff --git a/src/current/images/v2.0/perf_tuning_movr_schema.png b/src/current/images/v2.0/perf_tuning_movr_schema.png deleted file mode 100644 index 262adc18b75..00000000000 Binary files a/src/current/images/v2.0/perf_tuning_movr_schema.png and /dev/null differ diff --git a/src/current/images/v2.0/perf_tuning_multi_region_rebalancing.png b/src/current/images/v2.0/perf_tuning_multi_region_rebalancing.png deleted file mode 100644 index e5ef7d970cc..00000000000 Binary files a/src/current/images/v2.0/perf_tuning_multi_region_rebalancing.png and /dev/null differ diff --git a/src/current/images/v2.0/perf_tuning_multi_region_rebalancing_after_partitioning.png 
b/src/current/images/v2.0/perf_tuning_multi_region_rebalancing_after_partitioning.png deleted file mode 100644 index 4f358ac05af..00000000000 Binary files a/src/current/images/v2.0/perf_tuning_multi_region_rebalancing_after_partitioning.png and /dev/null differ diff --git a/src/current/images/v2.0/perf_tuning_multi_region_topology.png b/src/current/images/v2.0/perf_tuning_multi_region_topology.png deleted file mode 100644 index fe64c322ca0..00000000000 Binary files a/src/current/images/v2.0/perf_tuning_multi_region_topology.png and /dev/null differ diff --git a/src/current/images/v2.0/perf_tuning_single_region_topology.png b/src/current/images/v2.0/perf_tuning_single_region_topology.png deleted file mode 100644 index 4dfca364929..00000000000 Binary files a/src/current/images/v2.0/perf_tuning_single_region_topology.png and /dev/null differ diff --git a/src/current/images/v2.0/raw-status-endpoints.png b/src/current/images/v2.0/raw-status-endpoints.png deleted file mode 100644 index a893911fa87..00000000000 Binary files a/src/current/images/v2.0/raw-status-endpoints.png and /dev/null differ diff --git a/src/current/images/v2.0/recovery1.png b/src/current/images/v2.0/recovery1.png deleted file mode 100644 index 31b74749434..00000000000 Binary files a/src/current/images/v2.0/recovery1.png and /dev/null differ diff --git a/src/current/images/v2.0/recovery2.png b/src/current/images/v2.0/recovery2.png deleted file mode 100644 index 83bd7dd66b0..00000000000 Binary files a/src/current/images/v2.0/recovery2.png and /dev/null differ diff --git a/src/current/images/v2.0/recovery3.png b/src/current/images/v2.0/recovery3.png deleted file mode 100644 index 44ecc0fec4c..00000000000 Binary files a/src/current/images/v2.0/recovery3.png and /dev/null differ diff --git a/src/current/images/v2.0/remove-dead-node1.png b/src/current/images/v2.0/remove-dead-node1.png deleted file mode 100644 index 26569078efd..00000000000 Binary files a/src/current/images/v2.0/remove-dead-node1.png and /dev/null differ diff --git a/src/current/images/v2.0/replication1.png b/src/current/images/v2.0/replication1.png deleted file mode 100644 index 1ac6c708f00..00000000000 Binary files a/src/current/images/v2.0/replication1.png and /dev/null differ diff --git a/src/current/images/v2.0/replication2.png b/src/current/images/v2.0/replication2.png deleted file mode 100644 index 5db0abed2bd..00000000000 Binary files a/src/current/images/v2.0/replication2.png and /dev/null differ diff --git a/src/current/images/v2.0/scalability1.png b/src/current/images/v2.0/scalability1.png deleted file mode 100644 index 9bebd74f1a8..00000000000 Binary files a/src/current/images/v2.0/scalability1.png and /dev/null differ diff --git a/src/current/images/v2.0/scalability2.png b/src/current/images/v2.0/scalability2.png deleted file mode 100644 index c15995d4c6f..00000000000 Binary files a/src/current/images/v2.0/scalability2.png and /dev/null differ diff --git a/src/current/images/v2.0/trace.png b/src/current/images/v2.0/trace.png deleted file mode 100644 index 4f0fb98a753..00000000000 Binary files a/src/current/images/v2.0/trace.png and /dev/null differ diff --git a/src/current/releases/v2.0.md b/src/current/releases/v2.0.md index 4a396107099..45f8b532ca5 100644 --- a/src/current/releases/v2.0.md +++ b/src/current/releases/v2.0.md @@ -1,5 +1,5 @@ --- -title: What's New in v2.0 +title: What's New in v2.0 toc: true toc_not_nested: true summary: Additions and changes in CockroachDB version v2.0 since version v1.1 @@ -8,16 +8,32 @@ docs_area: releases keywords: 
gin, gin index, gin indexes, inverted index, inverted indexes, accelerated index, accelerated indexes --- -{% assign rel = site.data.releases | where_exp: "rel", "rel.major_version == page.major_version" | sort: "release_date" | reverse %} + + + + + + + + + + + + + + + + + + + + + + + + + -{% assign vers = site.data.versions | where_exp: "vers", "vers.major_version == page.major_version" | first %} +This release is no longer supported. For more information, see our [Release support policy]({% link releases/release-support-policy.md %}). -{% assign today = "today" | date: "%Y-%m-%d" %} - -{% include releases/testing-release-notice.md major_version=vers %} - -{% include releases/whats-new-intro.md major_version=vers %} - -{% for r in rel %} -{% include releases/{{ page.major_version }}/{{ r.release_name }}.md release=r.release_name release_date=r.release_date %} -{% endfor %} +To download the archived documentation for this release, see [Archived Documentation]({% link releases/archived-documentation.md %}). \ No newline at end of file diff --git a/src/current/v2.0/404.md b/src/current/v2.0/404.md deleted file mode 100755 index 13a69ddde5c..00000000000 --- a/src/current/v2.0/404.md +++ /dev/null @@ -1,19 +0,0 @@ ---- -title: Page Not Found -description: "Page not found." -sitemap: false -search: exclude -related_pages: none -toc: false ---- - - -{%comment%} - - -{%endcomment%} \ No newline at end of file diff --git a/src/current/v2.0/add-column.md b/src/current/v2.0/add-column.md deleted file mode 100644 index f1125bf9a7f..00000000000 --- a/src/current/v2.0/add-column.md +++ /dev/null @@ -1,148 +0,0 @@ ---- -title: ADD COLUMN -summary: Use the ADD COLUMN statement to add columns to tables. -toc: true ---- - -The `ADD COLUMN` [statement](sql-statements.html) is part of `ALTER TABLE` and adds columns to tables. - - -## Synopsis - -
- {% include {{ page.version.version }}/sql/diagrams/add_column.html %} -
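For a concrete sense of the grammar, a minimal `ADD COLUMN` statement with one optional qualification might look like the following sketch (the `users` table and `middle_name` column are hypothetical, not part of the examples below):

{% include copy-clipboard.html %}
~~~ sql
-- Hypothetical example: add a nullable STRING column with a default value.
> ALTER TABLE users ADD COLUMN middle_name STRING DEFAULT 'unknown';
~~~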
- -## Required Privileges - -The user must have the `CREATE` [privilege](privileges.html) on the table. - -## Parameters - -| Parameter | Description | -|-----------|-------------| -| `table_name` | The name of the table to which you want to add the column. | -| `column_name` | The name of the column you want to add. The column name must follow these [identifier rules](keywords-and-identifiers.html#identifiers) and must be unique within the table but can have the same name as indexes or constraints. | -| `typename` | The [data type](data-types.html) of the new column. | -| `col_qualification` | An optional list of column definitions, which may include [column-level constraints](constraints.html), [collation](collate.html), or [column family assignments](column-families.html).

Note that it is not possible to add a column with the [Foreign Key](foreign-key.html) constraint. As a workaround, you can add the column without the constraint, then use [`CREATE INDEX`](create-index.html) to index the column, and then use [`ADD CONSTRAINT`](add-constraint.html) to add the Foreign Key constraint to the column. | - -## Viewing Schema Changes - -{% include {{ page.version.version }}/misc/schema-change-view-job.md %} - -## Examples - -### Add a Single Column - -{% include copy-clipboard.html %} -~~~ sql -> ALTER TABLE accounts ADD COLUMN names STRING; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SHOW COLUMNS FROM accounts; -~~~ - -~~~ -+-----------+-------------------+-------+---------+-----------+ -| Field | Type | Null | Default | Indices | -+-----------+-------------------+-------+---------+-----------+ -| id | INT | false | NULL | {primary} | -| balance | DECIMAL | true | NULL | {} | -| names | STRING | true | NULL | {} | -+-----------+-------------------+-------+---------+-----------+ -~~~ - -### Add Multiple Columns - -{% include copy-clipboard.html %} -~~~ sql -> ALTER TABLE accounts ADD COLUMN location STRING, ADD COLUMN amount DECIMAL; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SHOW COLUMNS FROM accounts; -~~~ - -~~~ -+-----------+-------------------+-------+---------+-----------+ -| Field | Type | Null | Default | Indices | -+-----------+-------------------+-------+---------+-----------+ -| id | INT | false | NULL | {primary} | -| balance | DECIMAL | true | NULL | {} | -| names | STRING | true | NULL | {} | -| location | STRING | true | NULL | {} | -| amount | DECIMAL | true | NULL | {} | -+-----------+-------------------+-------+---------+-----------+ - -~~~ - -### Add a Non-Null Column with a Default Value - -{% include copy-clipboard.html %} -~~~ sql -> ALTER TABLE accounts ADD COLUMN interest DECIMAL NOT NULL DEFAULT (DECIMAL '1.3'); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SHOW COLUMNS FROM accounts; -~~~ -~~~ -+-----------+-------------------+-------+---------------------------+-----------+ -| Field | Type | Null | Default | Indices | -+-----------+-------------------+-------+---------------------------+-----------+ -| id | INT | false | NULL | {primary} | -| balance | DECIMAL | true | NULL | {} | -| names | STRING | true | NULL | {} | -| location | STRING | true | NULL | {} | -| amount | DECIMAL | true | NULL | {} | -| interest | DECIMAL | false | ('1.3':::STRING::DECIMAL) | {} | -+-----------+-------------------+-------+---------------------------+-----------+ -~~~ - -### Add a Non-Null Column with Unique Values - -{% include copy-clipboard.html %} -~~~ sql -> ALTER TABLE accounts ADD COLUMN cust_number DECIMAL UNIQUE NOT NULL; -~~~ - -### Add a Column with Collation - -{% include copy-clipboard.html %} -~~~ sql -> ALTER TABLE accounts ADD COLUMN more_names STRING COLLATE en; -~~~ - -### Add a Column and Assign it to a Column Family - -#### Add a Column and Assign it to a New Column Family - -{% include copy-clipboard.html %} -~~~ sql -> ALTER TABLE accounts ADD COLUMN location1 STRING CREATE FAMILY new_family; -~~~ - -#### Add a Column and Assign it to an Existing Column Family - -{% include copy-clipboard.html %} -~~~ sql -> ALTER TABLE accounts ADD COLUMN location2 STRING FAMILY existing_family; -~~~ - -#### Add a Column and Create a New Column Family if Column Family Does Not Exist - -{% include copy-clipboard.html %} -~~~ sql -> ALTER TABLE accounts ADD COLUMN new_name STRING CREATE IF NOT EXISTS FAMILY f1; -~~~ - - -## 
See Also -- [`ALTER TABLE`](alter-table.html) -- [Column-level Constraints](constraints.html) -- [Collation](collate.html) -- [Column Families](column-families.html) diff --git a/src/current/v2.0/add-constraint.md b/src/current/v2.0/add-constraint.md deleted file mode 100644 index f12a6e59a47..00000000000 --- a/src/current/v2.0/add-constraint.md +++ /dev/null @@ -1,140 +0,0 @@ ---- -title: ADD CONSTRAINT -summary: Use the ADD CONSTRAINT statement to add constraints to columns. -toc: true ---- - -The `ADD CONSTRAINT` [statement](sql-statements.html) is part of `ALTER TABLE` and can add the following [constraints](constraints.html) to columns: - -- [Check](check.html) -- [Foreign Keys](foreign-key.html) -- [Unique](unique.html) - -{{site.data.alerts.callout_info}} -The Primary Key and Not Null constraints can only be applied through CREATE TABLE. The Default constraint is managed through ALTER COLUMN.{{site.data.alerts.end}} - - -## Synopsis - -
- {% include {{ page.version.version }}/sql/diagrams/add_constraint.html %} -
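As a rough plain-text sketch of the same grammar, the general shape is `ALTER TABLE ... ADD CONSTRAINT <name> <constraint>`. For instance (the `accounts` table and constraint name are hypothetical; the Examples section below shows the supported variants in detail):

{% include copy-clipboard.html %}
~~~ sql
-- Hypothetical example: require that balances are never negative.
> ALTER TABLE accounts ADD CONSTRAINT balance_non_negative CHECK (balance >= 0);
~~~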
- -## Required Privileges - -The user must have the `CREATE` [privilege](privileges.html) on the table. - -## Parameters - -| Parameter | Description | -|-----------|-------------| -| `table_name` | The name of the table containing the column you want to constrain. | -| `constraint_name` | The name of the constraint, which must be unique to its table and follow these [identifier rules](keywords-and-identifiers.html#identifiers). | -| `constraint_elem` | The [Check](check.html), [Foreign Keys](foreign-key.html), [Unique](unique.html) constraint you want to add.

Adding/changing a Default constraint is done through [`ALTER COLUMN`](alter-column.html).

Adding/changing the table's Primary Key is not supported through `ALTER TABLE`; it can only be specified during [table creation](create-table.html#create-a-table-primary-key-defined). | - -## Viewing Schema Changes - -{% include {{ page.version.version }}/misc/schema-change-view-job.md %} - -## Examples - -### Add the Unique Constraint - -Adding the [Unique constraint](unique.html) requires that all of a column's values be distinct from one another (except for *NULL* values). - -{% include copy-clipboard.html %} -~~~ sql -> ALTER TABLE orders ADD CONSTRAINT id_customer_unique UNIQUE (id, customer); -~~~ - -### Add the Check Constraint - -Adding the [Check constraint](check.html) requires that all of a column's values evaluate to `TRUE` for a Boolean expression. - -{% include copy-clipboard.html %} -~~~ sql -> ALTER TABLE orders ADD CONSTRAINT total_0_check CHECK (total > 0); -~~~ - -### Add the Foreign Key Constraint with `CASCADE` - -Before you can add the [Foreign Key](foreign-key.html) constraint to columns, the columns must already be indexed. If they are not already indexed, use [`CREATE INDEX`](create-index.html) to index them and only then use the `ADD CONSTRAINT` statement to add the Foreign Key constraint to the columns. - -For example, let's say you have two simple tables, `orders` and `customers`: - -{% include copy-clipboard.html %} -~~~ sql -> SHOW CREATE TABLE customers; -~~~ - -~~~ -+-----------+-------------------------------------------------+ -| Table | CreateTable | -+-----------+-------------------------------------------------+ -| customers | CREATE TABLE customers ( | -| | id INT NOT NULL, | -| | "name" STRING NOT NULL, | -| | address STRING NULL, | -| | CONSTRAINT "primary" PRIMARY KEY (id ASC), | -| | FAMILY "primary" (id, "name", address) | -| | ) | -+-----------+-------------------------------------------------+ -(1 row) -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SHOW CREATE TABLE orders; -~~~ - -~~~ -+--------+-------------------------------------------------------------------------------------------------------------+ -| Table | CreateTable | -+--------+-------------------------------------------------------------------------------------------------------------+ -| orders | CREATE TABLE orders ( | -| | id INT NOT NULL, | -| | customer_id INT NULL, | -| | status STRING NOT NULL, | -| | CONSTRAINT "primary" PRIMARY KEY (id ASC), | -| | FAMILY "primary" (id, customer_id, status), | -| | CONSTRAINT check_status CHECK (status IN ('open':::STRING, 'complete':::STRING, 'cancelled':::STRING)) | -| | ) | -+--------+-------------------------------------------------------------------------------------------------------------+ -(1 row) -~~~ - -To ensure that each value in the `orders.customer_id` column matches a unique value in the `customers.id` column, you want to add the Foreign Key constraint to `orders.customer_id`. So you first create an index on `orders.customer_id`: - -{% include copy-clipboard.html %} -~~~ sql -> CREATE INDEX ON orders (customer_id); -~~~ - -Then you add the Foreign Key constraint. - -New in v2.0: You can include a [foreign key action](foreign-key.html#foreign-key-actions-new-in-v2-0) to specify what happens when a foreign key is updated or deleted. - -In this example, let's use `ON DELETE CASCADE` (i.e., when referenced row is deleted, all dependent objects are also deleted). - -{{site.data.alerts.callout_danger}} -`CASCADE` does not list objects it drops or updates, so it should be used cautiously. 
-{{site.data.alerts.end}} - -{% include copy-clipboard.html %} -~~~ sql -> ALTER TABLE orders ADD CONSTRAINT customer_fk FOREIGN KEY (customer_id) REFERENCES customers (id) ON DELETE CASCADE; -~~~ - -If you had tried to add the constraint before indexing the column, you would have received an error: - -~~~ -pq: foreign key requires an existing index on columns ("customer_id") -~~~ - -## See Also - -- [Constraints](constraints.html) -- [Foreign Key Constraint](foreign-key.html) -- [`ALTER COLUMN`](alter-column.html) -- [`CREATE TABLE`](create-table.html) -- [`ALTER TABLE`](alter-table.html) diff --git a/src/current/v2.0/admin-ui-access-and-navigate.md b/src/current/v2.0/admin-ui-access-and-navigate.md deleted file mode 100644 index ad210ca29d2..00000000000 --- a/src/current/v2.0/admin-ui-access-and-navigate.md +++ /dev/null @@ -1,167 +0,0 @@ ---- -title: Access and Navigate the CockroachDB Admin UI -summary: Learn how to access and navigate the Admin UI. -toc: true ---- - - -## Access the Admin UI - -You can access the Admin UI from any node in the cluster. - -By default, you can access it via HTTP on port `8080` of the hostname or IP address you configured using the `--host` flag while [starting the node](start-a-node.html#general). For example, `http://<node address>:8080`. If you are running a secure cluster, use `https://<node address>:8080`. - -You can also set the CockroachDB Admin UI to a custom port using `--http-port` or a custom hostname using `--http-host` when [starting each node](start-a-node.html). For example, if you set both a custom port and hostname, `http://<http-host value>:<http-port value>`. For a secure cluster, `https://<http-host value>:<http-port value>`. - -For additional guidance on accessing the Admin UI in the context of cluster deployment, see [Start a Local Cluster](start-a-local-cluster.html) and [Manual Deployment](manual-deployment.html). - -## Navigate the Admin UI - -The left-hand navigation bar allows you to navigate to the [Cluster Overview page](admin-ui-access-and-navigate.html), [Cluster metrics dashboards](admin-ui-overview.html), [Databases page](admin-ui-databases-page.html), and [Jobs page](admin-ui-jobs-page.html). - -The main panel changes depending on the page: - -Page | Main Panel Component -----------|------------ -Cluster Overview |
  • [Cluster Overview panel](admin-ui-access-and-navigate.html#cluster-overview-panel)
  • [Node List](admin-ui-access-and-navigate.html#node-list). [Enterprise users](enterprise-licensing.html) can enable and switch to the [Node Map](admin-ui-access-and-navigate.html#node-map-enterprise) view.
-Cluster Metrics |
  • [Time Series graphs](admin-ui-access-and-navigate.html#time-series-graphs)
  • [Summary Panel](admin-ui-access-and-navigate.html#summary-panel)
  • [Events List](admin-ui-access-and-navigate.html#events-panel)
-Databases | Information about the tables and grants in your [databases](admin-ui-databases-page.html). -Jobs | Information about all currently active schema changes and backup/restore [jobs](admin-ui-jobs-page.html). - -### Cluster Overview Panel - -CockroachDB Admin UI - -The **Cluster Overview** panel provides the following metrics: - -Metric | Description ---------|---- -Capacity Usage |
  • The storage capacity used as a percentage of total storage capacity allocated across all nodes.
  • The current capacity usage.
-Node Status |
  • The number of [live nodes](admin-ui-access-and-navigate.html#live-nodes) in the cluster.
  • The number of suspect nodes in the cluster. A node is considered a suspect node if its liveness status is unavailable or the node is in the process of decommissioning.
  • The number of [dead nodes](admin-ui-access-and-navigate.html#dead-nodes) in the cluster.
-Replication Status |
    • The total number of ranges in the cluster.
    • The number of [under-replicated ranges](admin-ui-replication-dashboard.html#review-of-cockroachdb-terminology) in the cluster. A non-zero number indicates an unstable cluster.
    • The number of [unavailable ranges](admin-ui-replication-dashboard.html#review-of-cockroachdb-terminology) in the cluster. A non-zero number indicates an unstable cluster.
- - ### Node List - -The **Node List** is the default view on the **Overview** page. -CockroachDB Admin UI - -#### Live Nodes -Live nodes are nodes that are online and responding. They are marked with a green dot. If a node is removed or dies, the dot turns yellow to indicate that it is not responding. If the node remains unresponsive for a certain amount of time (5 minutes by default), the node turns red and is moved to the [**Dead Nodes**](#dead-nodes) section, indicating that it is no longer expected to come back. - -The following details are shown for each live node: - -Column | Description --------|------------ -ID | The ID of the node. -Address | The address of the node. You can click on the address to view further details about the node. -Uptime | How long the node has been running. -Bytes | The used capacity for the node. -Replicas | The number of replicas on the node. -Mem Usage | The memory usage for the node. -Version | The build tag of the CockroachDB version installed on the node. -Logs | Click **Logs** to see the logs for the node. - -#### Dead Nodes - -Nodes are considered dead once they have not responded for a certain amount of time (5 minutes by default). At this point, the automated repair process starts, wherein CockroachDB automatically rebalances replicas from the dead node, using the unaffected replicas as sources. See [Stop a Node](stop-a-node.html#how-it-works) for more information. - -The following details are shown for each dead node: - -Column | Description --------|------------ -ID | The ID of the node. -Address | The address of the node. You can click on the address to view further details about the node. -Down Since | How long the node has been down. - -#### Decommissioned Nodes - -New in v1.1: Nodes that have been decommissioned for permanent removal from the cluster are listed in the **Decommissioned Nodes** table. - -When you decommission a node, CockroachDB lets the node finish in-flight requests, rejects any new requests, and transfers all range replicas and range leases off the node so that it can be safely shut down. See [Remove Nodes](remove-nodes.html) for more information. - -### Node Map (Enterprise) - -New in v2.0: The **Node Map** is an [enterprise-only](enterprise-licensing.html) feature that gives you a visual representation of the geographical configuration of your cluster. - -CockroachDB Admin UI Summary Panel - -The Node Map consists of the following components: - -**Region component** - -CockroachDB Admin UI Summary Panel - -**Node component** - -CockroachDB Admin UI Summary Panel - -For guidance on enabling and using the node map, see [Enable Node Map](enable-node-map.html). - -### Time Series Graphs - -The **Cluster Metrics** dashboards display the time series graphs that are useful for visualizing and monitoring data trends. To access the time series graphs, click **Metrics** on the left-hand navigation bar. - -You can hover over each graph to see actual point-in-time values. - -CockroachDB Admin UI - -{{site.data.alerts.callout_info}}By default, CockroachDB stores timeseries metrics for the last 30 days, but you can reduce the interval for timeseries storage. Alternatively, if you are exclusively using a third-party tool such as Prometheus for timeseries monitoring, you can disable timeseries storage entirely (see the SQL sketch below). For more details, see this FAQ. -{{site.data.alerts.end}} - -#### Change time range - -You can change the time range by clicking on the time window. 
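Picking up the callout above: reducing or disabling timeseries storage is done through cluster settings. A minimal sketch, assuming the `timeseries.storage.enabled` cluster setting available in this version family (check the linked FAQ for the exact settings and values):

{% include copy-clipboard.html %}
~~~ sql
-- Assumption: timeseries.storage.enabled controls internal timeseries collection.
-- Only disable it if an external tool such as Prometheus is monitoring the cluster.
> SET CLUSTER SETTING timeseries.storage.enabled = false;
~~~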
-CockroachDB Admin UI - -{{site.data.alerts.callout_info}}The Admin UI shows time in UTC, even if you set a different time zone for your cluster. {{site.data.alerts.end}} - -#### View metrics for a single node - -By default, the time series panel displays the metrics for the entire cluster. To view the metrics for an individual node, select the node from the **Graph** drop-down list. -CockroachDB Admin UI - -### Summary Panel - -The **Cluster Metrics** dashboards display the **Summary** panel of key metrics. To view the **Summary** panel, click **Metrics** on the left-hand navigation bar. - -CockroachDB Admin UI Summary Panel - -The **Summary** panel provides the following metrics: - -Metric | Description ---------|---- -Total Nodes | The total number of nodes in the cluster. Decommissioned nodes are not included in the Total Nodes count.

      You can further drill down into the node details by clicking on [**View nodes list**](#node-list). -Dead Nodes | The number of [dead nodes](admin-ui-access-and-navigate.html#dead-nodes) in the cluster. -Capacity Used | The storage capacity used as a percentage of total storage capacity allocated across all nodes. -Unavailable Ranges | The number of unavailable ranges in the cluster. A non-zero number indicates an unstable cluster. -Queries per second | The number of SQL queries executed per second. -P50 Latency | The 50th percentile of service latency. Service latency is calculated as the time between when the cluster receives a query and finishes executing the query. This time does not include returning results to the client. -P99 Latency | The 99th percentile of service latency. - -{{site.data.alerts.callout_info}} -{% include v2.0/misc/available-capacity-metric.md %} -{{site.data.alerts.end}} - -### Events Panel - -The **Cluster Metrics** dashboards display the **Events** panel that lists the 10 most recent events logged for all nodes across the cluster. To view the **Events** panel, click **Metrics** on the left-hand navigation bar. To see the list of all events, click **View all events** in the **Events** panel. - -CockroachDB Admin UI Events - -The following types of events are listed: - -- Database created -- Database dropped -- Table created -- Table dropped -- Table altered -- Index created -- Index dropped -- View created -- View dropped -- Schema change reversed -- Schema change finished -- Node joined -- Node decommissioned -- Node restarted -- Cluster setting changed diff --git a/src/current/v2.0/admin-ui-custom-chart-debug-page.md b/src/current/v2.0/admin-ui-custom-chart-debug-page.md deleted file mode 100644 index a318e466b0c..00000000000 --- a/src/current/v2.0/admin-ui-custom-chart-debug-page.md +++ /dev/null @@ -1,61 +0,0 @@ ---- -title: Custom Chart Debug Page -toc: true ---- - -New in v2.0: The **Custom Chart** debug page in the Admin UI can be used to create a custom chart showing any combination of over [200 available metrics](#available-metrics). - -The definition of the customized dashboard is encoded in the URL. To share the dashboard with someone, send them the URL. Just like any other URL, it can be bookmarked, sit in a pinned tab in your browser, etc. - - -## Getting There - -To get to the **Custom Chart** debug page, [open the Admin UI](admin-ui-access-and-navigate.html), and either: - -- Open http://localhost:8080/#/debug/chart in your browser (replacing `localhost` and `8080` with your node's host and port). - -- Open any node's Admin UI debug page at http://localhost:8080/#/debug in your browser (replacing `localhost` and `8080` with your node's host and port), scroll down to the **UI Debugging** section, and click **Custom Time-Series Chart**. - -## Query Options - -The dropdown menus above the chart are used to set: - -- The time span to chart -- The units to display - -CockroachDB Admin UI - -The table below the chart shows which metrics are being queried, and how they'll be combined and displayed. - -Options include: - -{% include {{page.version.version}}/admin-ui-custom-chart-debug-page-00.html %} - -## Examples - -### Query User and System CPU Usage - -CockroachDB Admin UI - -To compare system vs. userspace CPU usage, select the following values under **Metric Name**: - -+ `sys.cpu.sys.percent` -+ `sys.cpu.user.percent` - -The Y-axis label is the **Count**. A count of 1 represents 100% utilization. 
The **Aggregator** of **Sum** can show the count to be above 1, which would mean CPU utilization is greater than 100%. - -Checking **Per Node** displays statistics for each node, which could show whether an individual node's CPU usage was higher or lower than the average. - -## Available Metrics - -{{site.data.alerts.callout_info}} -This list is taken directly from the source code and is subject to change. Some of the metrics listed below are already visible in other areas of the [Admin UI](admin-ui-overview.html). -{{site.data.alerts.end}} - -{% include {{page.version.version}}/metric-names.md %} - -## See Also - -+ [Troubleshooting Overview](troubleshooting-overview.html) -+ [Support Resources](support-resources.html) -+ [Raw Status Endpoints](monitoring-and-alerting.html#raw-status-endpoints) diff --git a/src/current/v2.0/admin-ui-databases-page.md b/src/current/v2.0/admin-ui-databases-page.md deleted file mode 100644 index b8a5453f835..00000000000 --- a/src/current/v2.0/admin-ui-databases-page.md +++ /dev/null @@ -1,31 +0,0 @@ ---- -title: Database Page -toc: true ---- - -The **Databases** page of the Admin UI provides details of the databases configured, the tables in each database, and the grants assigned to each user. To view these details, [access the Admin UI](admin-ui-access-and-navigate.html#access-the-admin-ui) and then click **Databases** on the left-hand navigation bar. - - -## Tables View - -The **Tables** view shows details of the system table as well as the tables in your databases. To view these details, [access the Admin UI](admin-ui-access-and-navigate.html#access-the-admin-ui) and then select **Databases** from the left-hand navigation bar. - -CockroachDB Admin UI Database Tables View - -The following details are displayed for each table: - -Metric | Description ---------|---- -Table Name | The name of the table. -Size | Approximate total disk size of the table across all replicas. -Ranges | The number of ranges in the table. -\# of Columns | The number of columns in the table. -\# of Indices | The number of indices for the table. - -## Grants View - -The **Grants** view shows the [privileges](privileges.html) granted to users for each database. To view these details, [access the Admin UI](admin-ui-access-and-navigate.html#access-the-admin-ui), select **Databases** from the left-hand navigation bar, and then select **Grants** from the **View** menu. - -For more details about grants and privileges, see [Grants](grant.html). - -CockroachDB Admin UI Database Grants View diff --git a/src/current/v2.0/admin-ui-jobs-page.md b/src/current/v2.0/admin-ui-jobs-page.md deleted file mode 100644 index 5d4bc43bd5a..00000000000 --- a/src/current/v2.0/admin-ui-jobs-page.md +++ /dev/null @@ -1,25 +0,0 @@ ---- -title: Jobs Page -toc: true ---- - -New in v1.1: The **Jobs** page of the Admin UI provides details about the backup/restore jobs as well as schema changes performed across all nodes in the cluster. To view these details, [access the Admin UI](admin-ui-access-and-navigate.html#access-the-admin-ui) and then click **Jobs** on the left-hand navigation bar. - - -## Job Details - -The **Jobs** table displays the user, description, creation time, and status of each backup and restore job, as well as schema changes performed across all nodes in the cluster. - -CockroachDB Admin UI Jobs Page - -If a description is truncated, click the ellipsis to view the job's full description. 
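The same job details are also available from the SQL shell; a quick sketch using the `SHOW JOBS` statement available in this version family:

{% include copy-clipboard.html %}
~~~ sql
-- Lists backup/restore jobs and schema changes, with user, description,
-- creation time, and status, mirroring the Jobs table above.
> SHOW JOBS;
~~~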
- -## Filter Results - -You can filter the results based on the status of the jobs or the type of jobs (backups, restores, imports, or schema changes). You can also choose to view either the latest 50 jobs or all the jobs across all nodes. - -Filter By | Description -----------|------------ -Job Status | From the **Status** menu, select the required status filter. -Job Type | From the **Type** menu, select **Backups**, **Restores**, **Imports**, or **Schema Changes**. -Jobs Shown | From the **Show** menu, select **First 50** or **All**. diff --git a/src/current/v2.0/admin-ui-overview-dashboard.md b/src/current/v2.0/admin-ui-overview-dashboard.md deleted file mode 100644 index 02262d1683a..00000000000 --- a/src/current/v2.0/admin-ui-overview-dashboard.md +++ /dev/null @@ -1,64 +0,0 @@ ---- -title: Overview Dashboard -summary: The Overview dashboard lets you monitor important SQL performance, replication, and storage metrics. -toc: true ---- - -The **Overview** dashboard lets you monitor important SQL performance, replication, and storage metrics. To view this dashboard, [access the Admin UI](admin-ui-access-and-navigate.html#access-the-admin-ui) and click **Metrics** on the left-hand navigation bar. The **Overview** dashboard is displayed by default. - - -The **Overview** dashboard displays the following time series graphs: - -## SQL Queries - -CockroachDB Admin UI SQL Queries graph - -- In the node view, the SQL Queries graph shows the current moving average, over the last 10 seconds, of the number of `SELECT`/`INSERT`/`UPDATE`/`DELETE` queries per second issued by SQL clients on the node. - -- In the cluster view, the graph shows the sum of the per-node averages, that is, an aggregate estimation of the current query load over the cluster, assuming the last 10 seconds of activity per node are representative of this load. - -## Service Latency: SQL, 99th percentile - -CockroachDB Admin UI Service Latency graph - -Service latency is calculated as the time between when the cluster receives a query and finishes executing the query. This time does not include returning results to the client. - -- In the node view, the graph shows the 99th [percentile](https://en.wikipedia.org/wiki/Percentile#The_normal_distribution_and_percentiles) of service latency for the node. - -- In the cluster view, the graph shows the 99th [percentile](https://en.wikipedia.org/wiki/Percentile#The_normal_distribution_and_percentiles) of service latency across all nodes in the cluster. - -## Replicas per Node - -CockroachDB Admin UI Replicas per node graph - -Ranges are subsets of your data, which are replicated to ensure survivability. Ranges are replicated to a configurable number of CockroachDB nodes. - -- In the node view, the graph shows the number of range replicas on the selected node. - -- In the cluster view, the graph shows the number of range replicas on each node in the cluster. - -For details about how to control the number and location of replicas, see [Configure Replication Zones](configure-replication-zones.html). - -{{site.data.alerts.callout_info}}The timeseries data used to power the graphs in the admin UI is stored within the cluster and accumulates for 30 days before it starts getting truncated. As a result, for the first 30 days or so of a cluster's life, you will see a steady increase in disk usage and the number of ranges even if you aren't writing data to the cluster yourself. 
For more details, see this FAQ.{{site.data.alerts.end}} - -## Capacity - -CockroachDB Admin UI Capacity graph - -You can monitor the **Capacity** graph to determine when additional storage is needed. - -- In the node view, the graph shows the maximum allocated capacity, available storage capacity, and capacity used by CockroachDB for the selected node. - -- In the cluster view, the graph shows the maximum allocated capacity, available storage capacity, and capacity used by CockroachDB across all nodes in the cluster. - -On hovering over the graph, the values for the following metrics are displayed: - -Metric | Description ---------|---- -Capacity | The maximum storage capacity allocated to CockroachDB. You can configure the maximum allocated storage capacity for CockroachDB using the `--store` flag. For more information, see [Start a Node](start-a-node.html#store). -Available | The free storage capacity available to CockroachDB. -Used | Disk space used by the data in the CockroachDB store. Note that this value is less than (Capacity - Available) because the Capacity and Available metrics consider the entire disk and all applications on the disk including CockroachDB, whereas the Used metric tracks only the store's disk usage. - -{{site.data.alerts.callout_info}} -{% include v2.0/misc/available-capacity-metric.md %} -{{site.data.alerts.end}} diff --git a/src/current/v2.0/admin-ui-overview.md b/src/current/v2.0/admin-ui-overview.md deleted file mode 100644 index 00779fbbc6d..00000000000 --- a/src/current/v2.0/admin-ui-overview.md +++ /dev/null @@ -1,27 +0,0 @@ ---- -title: Admin UI Overview -summary: Use the Admin UI to monitor and optimize cluster performance. -toc: false -key: explore-the-admin-ui.html ---- - -The CockroachDB Admin UI provides details about your cluster and database configuration, and helps you optimize cluster performance by monitoring the following areas: - -Area | Description ---------|---- -[Node Map](enable-node-map.html) | View and monitor the metrics and geographical configuration of your cluster. -[Cluster Health](admin-ui-access-and-navigate.html#summary-panel) | View essential metrics about the cluster's health, such as the number of live, dead, and suspect nodes, the number of unavailable ranges, and the queries per second and service latency across the cluster. -[Overview Metrics](admin-ui-overview-dashboard.html) | View important SQL performance, replication, and storage metrics. -[Runtime Metrics](admin-ui-runtime-dashboard.html) | View metrics about node count, CPU time, and memory usage. -[SQL Performance](admin-ui-sql-dashboard.html) | View metrics about SQL connections, byte traffic, queries, transactions, and service latency. -[Storage Utilization](admin-ui-storage-dashboard.html) | View metrics about storage capacity and file descriptors. -[Replication Details](admin-ui-replication-dashboard.html) | View metrics about how data is replicated across the cluster, such as range status, replicas per store, and replica quiescence. -[Nodes Details](admin-ui-access-and-navigate.html#summary-panel) | View details of live, dead, and decommissioned nodes. -[Events](admin-ui-access-and-navigate.html#events-panel) | View a list of recent cluster events. -[Database Details](admin-ui-databases-page.html) | View details about the system and user databases in the cluster. -[Jobs Details](admin-ui-jobs-page.html) | View details of the jobs running in the cluster. 
-[Custom Chart Debug Page](admin-ui-custom-chart-debug-page.html) | Create a custom dashboard choosing from over 200 available metrics. - -The Admin UI also provides details about the way data is **Distributed**, the state of specific **Queues**, and metrics for **Slow Queries**, but these details are largely internal and intended for use by CockroachDB developers. - -{{site.data.alerts.callout_info}}By default, the Admin UI shares anonymous usage details with Cockroach Labs. For information about the details shared and how to opt-out of reporting, see Diagnostics Reporting.{{site.data.alerts.end}} diff --git a/src/current/v2.0/admin-ui-replication-dashboard.md b/src/current/v2.0/admin-ui-replication-dashboard.md deleted file mode 100644 index 191a031fe8b..00000000000 --- a/src/current/v2.0/admin-ui-replication-dashboard.md +++ /dev/null @@ -1,91 +0,0 @@ ---- -title: Replication Dashboard -summary: The Replication dashboard lets you monitor the replication metrics for your cluster. -toc: true ---- - -The **Replication** dashboard in the CockroachDB Admin UI enables you to monitor the replication metrics for your cluster. To view this dashboard, [access the Admin UI](admin-ui-access-and-navigate.html#access-the-admin-ui), click **Metrics** on the left-hand navigation bar, and then select **Dashboard** > **Replication**. - - -## Review of CockroachDB terminology - -- **Range**: CockroachDB stores all user data and almost all system data in a giant sorted map of key-value pairs. This keyspace is divided into "ranges", contiguous chunks of the keyspace, so that every key can always be found in a single range. -- **Range Replica:** CockroachDB replicates each range (3 times by default) and stores each replica on a different node. -- **Range Lease:** For each range, one of the replicas holds the "range lease". This replica, referred to as the "leaseholder", is the one that receives and coordinates all read and write requests for the range. -- **Under-replicated Ranges:** When a cluster is first initialized, the few default starting ranges will only have a single replica, but as soon as other nodes are available, they will replicate to them until they've reached their desired replication factor, the default being 3. If a range does not have enough replicas, the range is said to be "under-replicated". -- **Unavailable Ranges:** If a majority of a range's replicas are on nodes that are unavailable, then the entire range is unavailable and will be unable to process queries. - -For more details, see [Scalable SQL Made Easy: How CockroachDB Automates Operations](https://www.cockroachlabs.com/blog/automated-rebalance-and-repair/) - -## Replication Dashboard - -The **Replication** dashboard displays the following time series graphs: - -### Ranges - -CockroachDB Admin UI Replicas per Store - -The **Ranges** graph shows you various details about the status of ranges. - -- In the node view, the graph shows details about ranges on the node. - -- In the cluster view, the graph shows details about ranges across all nodes in the cluster. - -On hovering over the graph, the values for the following metrics are displayed: - -Metric | Description ---------|---- -Ranges | The number of ranges. -Leaders | The number of ranges with leaders. If the number does not match the number of ranges for a long time, troubleshoot your cluster. -Lease Holders | The number of ranges that have leases. -Leaders w/o Leases | The number of Raft leaders without leases. 
If the number is non-zero for a long time, troubleshoot your cluster. -Unavailable | The number of unavailable ranges. If the number is non-zero for a long time, troubleshoot your cluster. -Under-replicated | The number of under-replicated ranges. - -### Replicas Per Store - -CockroachDB Admin UI Replicas per Store - -- In the node view, the graph shows the number of range replicas on the store. - -- In the cluster view, the graph shows the number of range replicas on each store. - -You can [Configure replication zones](configure-replication-zones.html) to set the number and location of replicas. You can monitor the configuration changes using the Admin UI, as described in [Fault tolerance and recovery](demo-fault-tolerance-and-recovery.html). - -### Replica Quiescence - -CockroachDB Admin UI Replica Quiescence - -- In the node view, the graph shows the number of replicas on the node. - -- In the cluster view, the graph shows the number of replicas across all nodes. - -On hovering over the graph, the values for the following metrics are displayed: - -Metric | Description ---------|---- -Replicas | The number of replicas. -Quiescent | The number of replicas that haven't been accessed for a while. - -### Snapshots - -CockroachDB Admin UI Replica Snapshots - -Usually the nodes in a [Raft group](architecture/replication-layer.html#raft) stay synchronized by following along the log message by message. However, if a node is far enough behind the log (e.g., if it was offline or is a new node getting up to speed), rather than send all the individual messages that changed the range, the cluster can send it a snapshot of the range and it can start following along from there. Commonly this is done preemptively, when the cluster can predict that a node will need to catch up, but occasionally the Raft protocol itself will request the snapshot. - -Metric | Description --------|------------ -Generated | The number of snapshots created per second. -Applied (Raft-initiated) | The number of snapshots applied to nodes per second that were initiated within Raft. -Applied (Preemptive) | The number of snapshots applied to nodes per second that were anticipated ahead of time (e.g., because a node was about to be added to a Raft group). -Reserved | The number of slots reserved per second for incoming snapshots that will be sent to a node. - -### Other Graphs - -The **Replication** dashboard shows other time series graphs that are important for CockroachDB developers: - -- Leaseholders per Store -- Logical Bytes per Store -- Range Operations - -For monitoring CockroachDB, it is sufficient to use the [**Ranges**](#ranges), [**Replicas per Store**](#replicas-per-store), and [**Replica Quiescence**](#replica-quiescence) graphs. diff --git a/src/current/v2.0/admin-ui-runtime-dashboard.md b/src/current/v2.0/admin-ui-runtime-dashboard.md deleted file mode 100644 index d93e418894f..00000000000 --- a/src/current/v2.0/admin-ui-runtime-dashboard.md +++ /dev/null @@ -1,69 +0,0 @@ ---- -title: Runtime Dashboard -toc: true ---- - -The **Runtime** dashboard in the CockroachDB Admin UI lets you monitor runtime metrics for your cluster, such as node count, memory usage, and CPU time. To view this dashboard, [access the Admin UI](admin-ui-access-and-navigate.html#access-the-admin-ui), click **Metrics** on the left-hand navigation bar, and then select **Dashboard** > **Runtime**. 
- - -The **Runtime** dashboard displays the following time series graphs: - -## Live Node Count - -CockroachDB Admin UI Node Count - -In the node view as well as the cluster view, the graph shows the number of live nodes in the cluster. - -A dip in the graph indicates decommissioned nodes, dead nodes, or nodes that are not responding. To troubleshoot the dip in the graph, refer to the [Summary panel](admin-ui-access-and-navigate.html#summary-panel). - -## Memory Usage - -CockroachDB Admin UI Memory Usage - -- In the node view, the graph shows the memory in use for the selected node. - -- In the cluster view, the graph shows the memory in use across all nodes in the cluster. - -On hovering over the graph, the values for the following metrics are displayed: - -Metric | Description ---------|---- -RSS | Total memory in use by CockroachDB. -Go Allocated | Memory allocated by the Go layer. -Go Total | Total memory managed by the Go layer. -CGo Allocated | Memory allocated by the C layer. -CGo Total | Total memory managed by the C layer. - -{{site.data.alerts.callout_info}}If Go Total or CGO Total fluctuates or grows steadily over time, contact us.{{site.data.alerts.end}} - -## CPU Time - -CockroachDB Admin UI CPU Time - - -- In the node view, the graph shows the [CPU time](https://en.wikipedia.org/wiki/CPU_time) used by CockroachDB user and system-level operations for the selected node. -- In the cluster view, the graph shows the [CPU time](https://en.wikipedia.org/wiki/CPU_time) used by CockroachDB user and system-level operations across all nodes in the cluster. - -On hovering over the CPU Time graph, the values for the following metrics are displayed: - -Metric | Description ---------|---- -User CPU Time | Total CPU seconds per second used by the CockroachDB process across all nodes. -Sys CPU Time | Total CPU seconds per second used by the system calls made by CockroachDB across all nodes. - -## Clock Offset - -CockroachDB Admin UI Clock Offset - -- In the node view, the graph shows the mean clock offset of the node against the rest of the cluster. -- In the cluster view, the graph shows the mean clock offset of each node against the rest of the cluster. - -## Other Graphs - -The **Runtime** dashboard shows other time series graphs that are important for CockroachDB developers: - -- Goroutine Count -- GC Runs -- GC Pause Time - -For monitoring CockroachDB, it is sufficient to use the [**Live Node Count**](#live-node-count), [**Memory Usage**](#memory-usage), [**CPU Time**](#cpu-time), and [**Clock Offset**](#clock-offset) graphs. diff --git a/src/current/v2.0/admin-ui-sql-dashboard.md b/src/current/v2.0/admin-ui-sql-dashboard.md deleted file mode 100644 index 860b6efde12..00000000000 --- a/src/current/v2.0/admin-ui-sql-dashboard.md +++ /dev/null @@ -1,68 +0,0 @@ ---- -title: SQL Dashboard -summary: The SQL dashboard lets you monitor the performance of your SQL queries. -toc: true ---- - -The **SQL** dashboard in the CockroachDB Admin UI lets you monitor the performance of your SQL queries. To view this dashboard, [access the Admin UI](admin-ui-access-and-navigate.html#access-the-admin-ui), click **Metrics** on the left-hand navigation bar, and then select **Dashboard** > **SQL**. - - -The **SQL** dashboard displays the following time series graphs: - -## SQL Connections - -CockroachDB Admin UI SQL Connections - -- In the node view, the graph shows the number of connections currently open between the client and the selected node. 
- -- In the cluster view, the graph shows the total number of SQL client connections to all nodes combined. - -## SQL Byte Traffic - -CockroachDB Admin UI SQL Byte Traffic - -The **SQL Byte Traffic** graph helps you correlate SQL query count to byte traffic, especially in bulk data inserts or analytic queries that return data in bulk. - -- In the node view, the graph shows the current byte throughput (bytes/second) between all the currently connected SQL clients and the node. - -- In the cluster view, the graph shows the aggregate client throughput across all nodes. - -## SQL Queries - -CockroachDB Admin UI SQL Queries - -- In the node view, the graph shows the current moving average, over the last 10 seconds, of the number of `SELECT`/`INSERT`/`UPDATE`/`DELETE` queries per second issued by SQL clients on the node. - -- In the cluster view, the graph shows the sum of the per-node averages, that is, an aggregate estimate of the current query load over the cluster, assuming the last 10 seconds of activity per node are representative of this load. - -## Transactions - -CockroachDB Admin UI Transactions - -- In the node view, the graph shows the current moving averages, over the last 10 seconds, of the number of opened, committed, aborted, and rolled back transactions per second issued by SQL clients on the node. - -- In the cluster view, the graph shows the sum of the per-node averages, that is, an aggregate estimate of the current transaction load over the cluster, assuming the last 10 seconds of activity per node are representative of this load. - -If the graph shows excessive aborts or rollbacks, it might indicate issues with the SQL queries. In that case, re-examine your queries to lower contention. - -## Service Latency - -CockroachDB Admin UI Service Latency - -Service latency is calculated as the time between when the cluster receives a query and when it finishes executing the query. This time does not include returning results to the client. - -- In the node view, the graph displays the 99th [percentile](https://en.wikipedia.org/wiki/Percentile#The_normal_distribution_and_percentiles) of service latency for the selected node. - -- In the cluster view, the graph displays the 99th [percentile](https://en.wikipedia.org/wiki/Percentile#The_normal_distribution_and_percentiles) of service latency for each node in the cluster. - -## Other Graphs - -The **SQL** dashboard shows other time series graphs that are important for CockroachDB developers: - -- Execution Latency -- Active Distributed SQL Queries -- Active Flows for Distributed SQL Queries -- Service Latency: DistSQL -- Schema Changes - -For monitoring CockroachDB, it is sufficient to use the [**SQL Connections**](#sql-connections), [**SQL Byte Traffic**](#sql-byte-traffic), [**SQL Queries**](#sql-queries), [**Service Latency**](#service-latency), and [**Transactions**](#transactions) graphs. diff --git a/src/current/v2.0/admin-ui-storage-dashboard.md b/src/current/v2.0/admin-ui-storage-dashboard.md deleted file mode 100644 index 0d8e2bbd282..00000000000 --- a/src/current/v2.0/admin-ui-storage-dashboard.md +++ /dev/null @@ -1,60 +0,0 @@ ---- -title: Storage Dashboard -summary: The Storage dashboard lets you monitor the storage utilization for your cluster. -toc: true ---- - -The **Storage** dashboard in the CockroachDB Admin UI lets you monitor the storage utilization for your cluster. 
To view this dashboard, [access the Admin UI](admin-ui-access-and-navigate.html#access-the-admin-ui), click **Metrics** on the left-hand navigation bar, and then select **Dashboard** > **Storage**. - - -The **Storage** dashboard displays the following time series graphs: - -## Capacity - -CockroachDB Admin UI Capacity graph - -You can monitor the **Capacity** graph to determine when additional storage is needed. - -- In the node view, the graph shows the maximum allocated capacity, available storage capacity, and capacity used by CockroachDB for the selected node. - -- In the cluster view, the graph shows the maximum allocated capacity, available storage capacity, and capacity used by CockroachDB across all nodes in the cluster. - -On hovering over the graph, the values for the following metrics are displayed: - -Metric | Description ---------|---- -Capacity | The maximum storage capacity allocated to CockroachDB. You can configure the maximum allocated storage capacity for CockroachDB using the `--store` flag. For more information, see [Start a Node](start-a-node.html#store). -Available | The free storage capacity available to CockroachDB. -Used | Disk space used by the data in the CockroachDB store. Note that this value is less than (Capacity - Available) because the Capacity and Available metrics consider the entire disk and all applications on the disk, including CockroachDB, whereas the Used metric tracks only the store's disk usage. - -{{site.data.alerts.callout_info}} -{% include v2.0/misc/available-capacity-metric.md %} -{{site.data.alerts.end}} - -## File Descriptors - -CockroachDB Admin UI File Descriptors - -- In the node view, the graph shows the number of open file descriptors for that node, compared with the file descriptor limit. - -- In the cluster view, the graph shows the number of open file descriptors across all nodes, compared with the file descriptor limit. - -If the Open count is almost equal to the Limit count, increase the [file descriptors limit](recommended-production-settings.html#file-descriptors-limit). - -{{site.data.alerts.callout_info}}If you are running multiple nodes on a single machine (not recommended), every file descriptor open on the machine is counted as open on each node. Thus, the Open value displayed in the Admin UI is the machine's actual number of open file descriptors multiplied by the number of nodes, compared with the file descriptor limit.{{site.data.alerts.end}} - -For Windows systems, you can ignore the File Descriptors graph because the concept of file descriptors is not applicable to Windows. - -## Other Graphs - -The **Storage** dashboard shows other time series graphs that are important for CockroachDB developers: - -- Live Bytes -- Log Commit Latency -- Command Commit Latency -- RocksDB Read Amplification -- RocksDB SSTables -- Time Series Writes -- Time Series Bytes Written - -For monitoring CockroachDB, it is sufficient to use the [**Capacity**](#capacity) and [**File Descriptors**](#file-descriptors) graphs. diff --git a/src/current/v2.0/alter-column.md b/src/current/v2.0/alter-column.md deleted file mode 100644 index 1f3d5053f83..00000000000 --- a/src/current/v2.0/alter-column.md +++ /dev/null @@ -1,67 +0,0 @@ ---- -title: ALTER COLUMN -summary: Use the ALTER COLUMN statement to set, change, or drop a column's Default constraint or to drop the Not Null constraint. 
-toc: true ---- - -The `ALTER COLUMN` [statement](sql-statements.html) is part of `ALTER TABLE` and sets, changes, or drops a column's [Default constraint](default-value.html) or drops the [Not Null constraint](not-null.html). - -{{site.data.alerts.callout_info}}To manage other constraints, see ADD CONSTRAINT and DROP CONSTRAINT.{{site.data.alerts.end}} - - -## Synopsis - -
      -{% include {{ page.version.version }}/sql/diagrams/alter_column.html %} -
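A quick taste of the statement before the reference details below: the new default may be any expression of the appropriate type, not just a constant. A minimal sketch, assuming the `subscriptions` table from the examples below also had a hypothetical `created_at` `TIMESTAMP` column:

~~~ sql
> ALTER TABLE subscriptions ALTER COLUMN created_at SET DEFAULT now();
~~~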
- -## Required Privileges - -The user must have the `CREATE` [privilege](privileges.html) on the table. - -## Parameters - -| Parameter | Description | -|-----------|-------------| -| `table_name` | The name of the table with the column you want to modify. | -| `column_name` | The name of the column you want to modify. | -| `a_expr` | The new [Default Value](default-value.html) you want to use. | - -## Viewing Schema Changes - -{% include {{ page.version.version }}/misc/schema-change-view-job.md %} - -## Examples - -### Set or Change a Default Value - -Setting the [Default Value constraint](default-value.html) inserts the specified value into the column when data is written to the table without explicitly defining a value for the column. If the column already has a Default Value set, you can use this statement to change it. - -The example below inserts the Boolean value `true` whenever you insert data into the `subscriptions` table without defining a value for the `newsletter` column. - -~~~ sql -> ALTER TABLE subscriptions ALTER COLUMN newsletter SET DEFAULT true; -~~~ - -### Remove Default Constraint - -If the column has a defined [Default Value](default-value.html), you can remove the constraint, which means the column will no longer insert a value by default if one is not explicitly defined for the column. - -~~~ sql -> ALTER TABLE subscriptions ALTER COLUMN newsletter DROP DEFAULT; -~~~ - -### Remove Not Null Constraint - -If the column has the [Not Null constraint](not-null.html) applied to it, you can remove the constraint, which means the column becomes optional and can have *NULL* values written into it. - -~~~ sql -> ALTER TABLE subscriptions ALTER COLUMN newsletter DROP NOT NULL; -~~~ - -## See Also - -- [Constraints](constraints.html) -- [`ADD CONSTRAINT`](add-constraint.html) -- [`DROP CONSTRAINT`](drop-constraint.html) -- [`ALTER TABLE`](alter-table.html) diff --git a/src/current/v2.0/alter-database.md b/src/current/v2.0/alter-database.md deleted file mode 100644 index 31972f31829..00000000000 --- a/src/current/v2.0/alter-database.md +++ /dev/null @@ -1,15 +0,0 @@ ---- -title: ALTER DATABASE -summary: Use the ALTER DATABASE statement to change an existing database. -toc: false ---- - -The `ALTER DATABASE` [statement](sql-statements.html) applies a schema change to a database. - -{{site.data.alerts.callout_info}}To understand how CockroachDB changes schema elements without requiring table locking or other user-visible downtime, see Online Schema Changes in CockroachDB.{{site.data.alerts.end}} - -For information on using `ALTER DATABASE`, see the documents for its relevant subcommands. - -Subcommand | Description ------------|------------ -[`RENAME`](rename-database.html) | Change the name of a database. diff --git a/src/current/v2.0/alter-index.md b/src/current/v2.0/alter-index.md deleted file mode 100644 index 46080a55a17..00000000000 --- a/src/current/v2.0/alter-index.md +++ /dev/null @@ -1,16 +0,0 @@ ---- -title: ALTER INDEX -summary: Use the ALTER INDEX statement to change an existing index. -toc: false ---- - -The `ALTER INDEX` [statement](sql-statements.html) applies a schema change to an index. - -{{site.data.alerts.callout_info}}To understand how CockroachDB changes schema elements without requiring table locking or other user-visible downtime, see Online Schema Changes in CockroachDB.{{site.data.alerts.end}} - -For information on using `ALTER INDEX`, see the documents for its relevant subcommands. 
- -Subcommand | Description ------------|------------ -[`RENAME`](rename-index.html) | Change the name of an index. -[`SPLIT AT`](split-at.html) | Force a key-value layer range split at the specified row in the index. diff --git a/src/current/v2.0/alter-sequence.md b/src/current/v2.0/alter-sequence.md deleted file mode 100644 index e2da8e5ce44..00000000000 --- a/src/current/v2.0/alter-sequence.md +++ /dev/null @@ -1,115 +0,0 @@ ---- -title: ALTER SEQUENCE -summary: Use the ALTER SEQUENCE statement to change the name, increment values, and other settings of a sequence. -toc: true ---- - -New in v2.0: The `ALTER SEQUENCE` [statement](sql-statements.html) [changes the name](rename-sequence.html), increment values, and other settings of a sequence. - -{{site.data.alerts.callout_info}}To understand how CockroachDB changes schema elements without requiring table locking or other user-visible downtime, see Online Schema Changes in CockroachDB.{{site.data.alerts.end}} - - -## Required Privileges - -The user must have the `CREATE` [privilege](privileges.html) on the parent database. - -## Synopsis - -
      {% include {{ page.version.version }}/sql/diagrams/alter_sequence_options.html %}
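Multiple options can be combined in a single `ALTER SEQUENCE` statement. A minimal sketch, reusing the `customer_seq` sequence from the examples below (exact option support may vary slightly by version):

~~~ sql
> ALTER SEQUENCE IF EXISTS customer_seq INCREMENT 2 MAXVALUE 10000;
~~~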
- -## Parameters - -Parameter | Description -----------|------------ -`IF EXISTS` | Modify the sequence only if it exists; if it does not exist, do not return an error. -`sequence_name` | The name of the sequence you want to modify. -`INCREMENT` | The new value by which the sequence is incremented. A negative number creates a descending sequence. A positive number creates an ascending sequence. -`MINVALUE` | The new minimum value of the sequence.<br><br>Default: `1` -`MAXVALUE` | The new maximum value of the sequence.<br><br>Default: `9223372036854775807` -`START` | The value the sequence starts at if you `RESTART` or if the sequence hits the `MAXVALUE` and `CYCLE` is set.<br><br>
      `RESTART` and `CYCLE` are not implemented yet. -`CYCLE` | The sequence will wrap around when the sequence value hits the maximum or minimum value. If `NO CYCLE` is set, the sequence will not wrap. - -## Examples - -### Change the Increment Value of a Sequence - -In this example, we're going to change the increment value of a sequence from its current state (i.e., `1`) to `2`. - -{% include copy-clipboard.html %} -~~~ sql -> ALTER SEQUENCE customer_seq INCREMENT 2; -~~~ - -Next, we'll add another record to the table and check that the new record adheres to the new sequence. - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO customer_list (customer, address) VALUES ('Marie', '333 Ocean Ave'); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM customer_list; -~~~ -~~~ -+----+----------+--------------------+ -| id | customer | address | -+----+----------+--------------------+ -| 1 | Lauren | 123 Main Street | -| 2 | Jesse | 456 Broad Ave | -| 3 | Amruta | 9876 Green Parkway | -| 5 | Marie | 333 Ocean Ave | -+----+----------+--------------------+ -~~~ - -### Set the Next Value of a Sequence - -In this example, we're going to change the next value of the example sequence (`customer_seq`). Currently, the next value will be `7` (i.e., `5` + `INCREMENT 2`). We will change the next value to `20`. - -{{site.data.alerts.callout_info}}You cannot set a value outside the MAXVALUE or MINVALUE of the sequence. {{site.data.alerts.end}} - -{% include copy-clipboard.html %} -~~~ sql -> SELECT setval('customer_seq', 20, false); -~~~ -~~~ -+--------+ -| setval | -+--------+ -| 20 | -+--------+ -~~~ - -{{site.data.alerts.callout_info}}The setval('seq_name', value, is_called) function in CockroachDB SQL mimics the setval() function in PostgreSQL, but it does not store the is_called flag. Instead, it sets the value to val - increment for false or val for true. {{site.data.alerts.end}} - -Let's add another record to the table to check that the new record adheres to the new next value. - -~~~ sql -> INSERT INTO customer_list (customer, address) VALUES ('Lola', '333 Schermerhorn'); -~~~ -~~~ -+----+----------+--------------------+ -| id | customer | address | -+----+----------+--------------------+ -| 1 | Lauren | 123 Main Street | -| 2 | Jesse | 456 Broad Ave | -| 3 | Amruta | 9876 Green Parkway | -| 5 | Marie | 333 Ocean Ave | -| 20 | Lola | 333 Schermerhorn | -+----+----------+--------------------+ -~~~ - - -## See Also - -- [`RENAME SEQUENCE`](rename-sequence.html) -- [`CREATE SEQUENCE`](create-sequence.html) -- [`DROP SEQUENCE`](drop-sequence.html) -- [Functions and Operators](functions-and-operators.html) -- [Other SQL Statements](sql-statements.html) diff --git a/src/current/v2.0/alter-table.md b/src/current/v2.0/alter-table.md deleted file mode 100644 index 9fd5ca94786..00000000000 --- a/src/current/v2.0/alter-table.md +++ /dev/null @@ -1,32 +0,0 @@ ---- -title: ALTER TABLE -summary: Use the ALTER TABLE statement to change the schema of a table. -toc: true ---- - -The `ALTER TABLE` [statement](sql-statements.html) applies a schema change to a table. - -{{site.data.alerts.callout_info}}To understand how CockroachDB changes schema elements without requiring table locking or other user-visible downtime, see Online Schema Changes in CockroachDB.{{site.data.alerts.end}} - - -## Subcommands - -For information on using `ALTER TABLE`, see the documents for its relevant subcommands. 
- -Subcommand | Description ------------|------------ -[`ADD COLUMN`](add-column.html) | Add columns to tables. -[`ADD CONSTRAINT`](add-constraint.html) | Add constraints to columns. -[`ALTER COLUMN`](alter-column.html) | Change or drop a column's [Default constraint](default-value.html) or drop the [Not Null constraint](not-null.html). -[`DROP COLUMN`](drop-column.html) | Remove columns from tables. -[`DROP CONSTRAINT`](drop-constraint.html) | Remove constraints from columns. -[`EXPERIMENTAL_AUDIT`](experimental-audit.html) | Enable per-table audit logs. -[`PARTITION BY`](partition-by.html) | New in v2.0: Repartition or unpartition a table with partitions ([Enterprise-only](enterprise-licensing.html)). -[`RENAME COLUMN`](rename-column.html) | Change the names of columns. -[`RENAME TABLE`](rename-table.html) | Change the names of tables. -[`SPLIT AT`](split-at.html) | Force a key-value layer range split at the specified row in the table. -[`VALIDATE CONSTRAINT`](validate-constraint.html) | Check whether values in a column match a [constraint](constraints.html) on the column. - -## Viewing Schema Changes - -{% include {{ page.version.version }}/misc/schema-change-view-job.md %} diff --git a/src/current/v2.0/alter-user.md b/src/current/v2.0/alter-user.md deleted file mode 100644 index bb7dcda941f..00000000000 --- a/src/current/v2.0/alter-user.md +++ /dev/null @@ -1,82 +0,0 @@ ---- -title: ALTER USER -summary: The ALTER USER statement can be used to add or change a user's password. -toc: true ---- - -New in v2.0: The `ALTER USER` [statement](sql-statements.html) can be used to add or change a [user's](create-and-manage-users.html) password. - -{{site.data.alerts.callout_success}}You can also use the cockroach user command to add or change a user's password.{{site.data.alerts.end}} - - -## Considerations - -- Password creation and alteration is supported only in secure clusters for non-`root` users. - -## Required Privileges - -The user must have the `INSERT` and `UPDATE` [privileges](privileges.html) on the `system.users` table. - -## Synopsis - -
      {% include {{ page.version.version }}/sql/diagrams/alter_user_password.html %}
      - -## Parameters - - - -Parameter | Description -----------|------------- -`name` | The name of the user whose password you want to create or add. -`password` | Let the user [authenticate their access to a secure cluster](create-user.html#user-authentication) using this new password. Passwords should be entered as [string literal](sql-constants.html#string-literals). For compatibility with PostgreSQL, a password can also be entered as an [identifier](#change-password-using-an-identifier), although this is discouraged. - -## Examples - -### Change Password Using a String Literal - -{% include copy-clipboard.html %} -~~~ sql -> ALTER USER carl WITH PASSWORD 'ilov3beefjerky'; -~~~ -~~~ -ALTER USER 1 -~~~ - -### Change Password Using an Identifier - -The following statement changes the password to `ilov3beefjerky`, as above: - -{% include copy-clipboard.html %} -~~~ sql -> ALTER USER carl WITH PASSWORD ilov3beefjerky; -~~~ - -This is equivalent to the example in the previous section because the password contains only lowercase characters. - -In contrast, the following statement changes the password to `thereisnotomorrow`, even though the password in the syntax contains capitals, because identifiers are normalized automatically: - -{% include copy-clipboard.html %} -~~~ sql -> ALTER USER carl WITH PASSWORD ThereIsNoTomorrow; -~~~ - -To preserve case in a password specified using identifier syntax, use double quotes: - -{% include copy-clipboard.html %} -~~~ sql -> ALTER USER carl WITH PASSWORD "ThereIsNoTomorrow"; -~~~ - -## See Also - -- [`cockroach user` command](create-and-manage-users.html) -- [`DROP USER`](drop-user.html) -- [`SHOW USERS`](show-users.html) -- [`GRANT `](grant.html) -- [`SHOW GRANTS`](show-grants.html) -- [Create Security Certificates](create-security-certificates.html) -- [Other SQL Statements](sql-statements.html) diff --git a/src/current/v2.0/alter-view.md b/src/current/v2.0/alter-view.md deleted file mode 100644 index e2594d0e8d7..00000000000 --- a/src/current/v2.0/alter-view.md +++ /dev/null @@ -1,76 +0,0 @@ ---- -title: ALTER VIEW -summary: The ALTER VIEW statement changes the name of a view. -toc: true ---- - -The `ALTER VIEW` [statement](sql-statements.html) changes the name of a [view](views.html). - -{{site.data.alerts.callout_info}}It is not currently possible to change the SELECT statement executed by a view. Instead, you must drop the existing view and create a new view. Also, it is not currently possible to rename a view that other views depend on, but this ability may be added in the future (see this issue).{{site.data.alerts.end}} - - -## Required Privileges - -The user must have the `DROP` [privilege](privileges.html) on the view and the `CREATE` privilege on the parent database. - -## Synopsis - -
      -{% include {{ page.version.version }}/sql/diagrams/alter_view.html %} -
- -## Parameters - -Parameter | Description -----------|------------ -`IF EXISTS` | Rename the view only if a view of `view_name` exists; if one does not exist, do not return an error. -`view_name` | The name of the view to rename. To find view names, use:<br><br>
      `SELECT * FROM information_schema.tables WHERE table_type = 'VIEW';` -`name` | The new [`name`](sql-grammar.html#name) for the view, which must be unique to its database and follow these [identifier rules](keywords-and-identifiers.html#identifiers). - -## Example - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM information_schema.tables WHERE table_type = 'VIEW'; -~~~ - -~~~ -+---------------+-------------------+--------------------+------------+---------+ -| TABLE_CATALOG | TABLE_SCHEMA | TABLE_NAME | TABLE_TYPE | VERSION | -+---------------+-------------------+--------------------+------------+---------+ -| def | bank | user_accounts | VIEW | 2 | -| def | bank | user_emails | VIEW | 1 | -+---------------+-------------------+--------------------+------------+---------+ -(2 rows) -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> ALTER VIEW bank.user_emails RENAME TO bank.user_email_addresses; -~~~ - -~~~ -RENAME VIEW -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM information_schema.tables WHERE table_type = 'VIEW'; -~~~ - -~~~ -+---------------+-------------------+----------------------+------------+---------+ -| TABLE_CATALOG | TABLE_SCHEMA | TABLE_NAME | TABLE_TYPE | VERSION | -+---------------+-------------------+----------------------+------------+---------+ -| def | bank | user_accounts | VIEW | 2 | -| def | bank | user_email_addresses | VIEW | 3 | -+---------------+-------------------+----------------------+------------+---------+ -(2 rows) -~~~ - -## See Also - -- [Views](views.html) -- [`CREATE VIEW`](create-view.html) -- [`SHOW CREATE VIEW`](show-create-view.html) -- [`DROP VIEW`](drop-view.html) diff --git a/src/current/v2.0/architecture/distribution-layer.md b/src/current/v2.0/architecture/distribution-layer.md deleted file mode 100644 index 91f4a4ca766..00000000000 --- a/src/current/v2.0/architecture/distribution-layer.md +++ /dev/null @@ -1,185 +0,0 @@ ---- -title: Distribution Layer -summary: The distribution layer of CockroachDB's architecture provides a unified view of your cluster's data. -toc: true ---- - -The Distribution Layer of CockroachDB's architecture provides a unified view of your cluster's data. - -{{site.data.alerts.callout_info}}If you haven't already, we recommend reading the Architecture Overview.{{site.data.alerts.end}} - - -## Overview - -To make all data in your cluster accessible from any node, CockroachDB stores data in a monolithic sorted map of key-value pairs. This keyspace describes all of the data in your cluster, as well as its location, and is divided into what we call "ranges", contiguous chunks of the keyspace, so that every key can always be found in a single range. - -CockroachDB implements a sorted map to enable: - - - **Simple lookups**: Because we identify which nodes are responsible for certain portions of the data, queries are able to quickly locate where to find the data they want. - - **Efficient scans**: By defining the order of data, it's easy to find data within a particular range during a scan. 
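You can observe this keyspace division directly from SQL: each table's ranges, with their boundary keys and replica placement, can be listed with an experimental statement. A hedged example (the `customer_list` table is borrowed from the SQL reference pages above, and the statement's name and output shape may differ across versions):

~~~ sql
> SHOW EXPERIMENTAL_RANGES FROM TABLE customer_list;
~~~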
- -### Monolithic Sorted Map Structure - -The monolithic sorted map is comprised of two fundamental elements: - -- System data, which include **meta ranges** that describe the locations of data in your cluster (among many other cluster-wide and local data elements) -- User data, which store your cluster's **table data** - -#### Meta Ranges - -The locations of all ranges in your cluster are stored in a two-level index at the beginning of your key-space, known as meta ranges, where the first level (`meta1`) addresses the second, and the second (`meta2`) addresses data in the cluster. Importantly, every node has information on where to locate the `meta1` range (known as its Range Descriptor, detailed below), and the range is never split. - -This meta range structure lets us address up to 4EiB of user data by default: we can address 2^(18 + 18) = 2^36 ranges; each range addresses 2^26 B, and altogether we address 2^(36+26) B = 2^62 B = 4EiB. However, with larger range sizes, it's possible to expand this capacity even further. - -Meta ranges are treated mostly like normal ranges and are accessed and replicated just like other elements of your cluster's KV data. - -Each node caches values of the `meta2` range it has accessed before, which optimizes access of that data in the future. Whenever a node discovers that its `meta2` cache is invalid for a specific key, the cache is updated by performing a regular read on the `meta2` range. - -#### Table Data - -After the node's meta ranges is the KV data your cluster stores. - -Each table and its secondary indexes initially map to a single range, where each key-value pair in the range represents a single row in the table (also called the primary index because the table is sorted by the primary key) or a single row in a secondary index. As soon as a range reaches 64 MiB in size, it splits into two ranges. This process continues as a table and its indexes continue growing. Once a table is split across multiple ranges, it's likely that the table and secondary indexes will be stored in separate ranges. However, a range can still contain data for both the table and a secondary index. - -The default 64MiB range size represents a sweet spot for us between a size that's small enough to move quickly between nodes, but large enough to store a meaningfully contiguous set of data whose keys are more likely to be accessed together. These ranges are then shuffled around your cluster to ensure survivability. - -These ranges are replicated (in the aptly named Replication Layer), and have the addresses of each replica stored in the `meta2` range. - -### Using the Monolithic Sorted Map - -When a node receives a request, it looks at the Meta Ranges to find out which node it needs to route the request to by comparing the keys in the request to the keys in its `meta2` range. - -These meta ranges are heavily cached, so this is normally handled without having to send an RPC to the node actually containing the `meta2` ranges. - -The node then sends those KV operations to the Leaseholder identified in the `meta2` range. However, it's possible that the data moved, in which case the node that no longer has the information replies to the requesting node where it's now located. In this case we go back to the `meta2` range to get more up-to-date information and try again. - -### Interactions with Other Layers - -In relationship to other layers in CockroachDB, the Distribution Layer: - -- Receives requests from the Transaction Layer on the same node. 
-- Identifies which nodes should receive the request, and then sends the request to the proper node's Replication Layer. - -## Technical Details & Components - -### gRPC - -gRPC is the software nodes use to communicate with one another. Because the Distribution Layer is the first layer to communicate with other nodes, CockroachDB implements gRPC here. - -gRPC requires inputs and outputs to be formatted as protocol buffers (protobufs). To leverage gRPC, CockroachDB implements a protocol-buffer-based API defined in `api.proto`. - -For more information about gRPC, see the [official gRPC documentation](http://www.grpc.io/docs/guides/). - -### BatchRequest - -All KV operation requests are bundled into a [protobuf](https://en.wikipedia.org/wiki/Protocol_Buffers), known as a `BatchRequest`. The destination of this batch is identified in the `BatchRequest` header, as well as a pointer to the request's transaction record. (On the other side, when a node is replying to a `BatchRequest`, it uses a protobuf––`BatchResponse`.) - -This `BatchRequest` is also what's used to send requests between nodes using gRPC, which accepts and sends protocol buffers. - -### DistSender - -The gateway/coordinating node's `DistSender` receives `BatchRequest`s from its own `TxnCoordSender`. `DistSender` is then responsible for breaking up `BatchRequests` and routing a new set of `BatchRequests` to the nodes it identifies contain the data using its `meta2` ranges. It will use the cache to send the request to the Leaseholder, but it's also prepared to try the other replicas, in order of "proximity". The replica that the cache says is the Leaseholder is simply moved to the front of the list of replicas to be tried and then an RPC is sent to all of them, in order. - -Requests received by a non-Leaseholder fail with an error pointing at the replica's last known Leaseholder. These requests are retried transparently with the updated lease by the gateway node and never reach the client. - -As nodes begin replying to these commands, `DistSender` also aggregates the results in preparation for returning them to the client. - -### Meta Range KV Structure - -Like all other data in your cluster, meta ranges are structured as KV pairs. Both meta ranges have a similar structure: - -~~~ -metaX/successorKey -> LeaseholderAddress, [list of other nodes containing data] -~~~ - -Element | Description ---------|------------------------ -`metaX` | The level of meta range. Here we use a simplified `meta1` or `meta2`, but these are actually represented in `cockroach` as `\x02` and `\x03` respectively. -`successorKey` | The first key *greater* than the key you're scanning for. This makes CockroachDB's scans efficient; it simply scans the keys until it finds a value greater than the key it's looking for, and that is where it finds the relevant data.

The `successorKey` for the end of a keyspace is identified as `maxKey`. -`LeaseholderAddress` | The replica primarily responsible for reads and writes, known as the Leaseholder. The Replication Layer contains more information about [Leases](replication-layer.html#leases). - -Here's an example: - -~~~ -meta2/M -> node1:26257, node2:26257, node3:26257 -~~~ - -In this case, the replica on `node1` is the Leaseholder, and nodes 2 and 3 also contain replicas. - -#### Example - -Let's imagine we have an alphabetically sorted column, which we use for lookups. Here are what the meta ranges would approximately look like: - -1. `meta1` contains the address for the nodes containing the `meta2` replicas. - - ~~~ - # Points to meta2 range for keys [A-M) - meta1/M -> node1:26257, node2:26257, node3:26257 - - # Points to meta2 range for keys [M-Z] - meta1/maxKey -> node4:26257, node5:26257, node6:26257 - ~~~ - -2. `meta2` contains addresses for the nodes containing the replicas of each range in the cluster, the first of which is the [Leaseholder](replication-layer.html#leases). - - ~~~ - # Contains [A-G) - meta2/G -> node1:26257, node2:26257, node3:26257 - - # Contains [G-M) - meta2/M -> node1:26257, node2:26257, node3:26257 - - # Contains [M-Z) - meta2/Z -> node4:26257, node5:26257, node6:26257 - - # Contains [Z-maxKey) - meta2/maxKey -> node4:26257, node5:26257, node6:26257 - ~~~ - -### Table Data KV Structure - -Key-value data represents the data in your tables using the following structure: - -~~~ -/<table id>/<index id>/<indexed column values> -> <non-indexed/stored column values> -~~~ - -The table itself is stored with an `index_id` of 1 for its `PRIMARY KEY` columns, with the rest of the columns in the table considered as stored/covered columns. - -### Range Descriptors - -Each range in CockroachDB contains metadata, known as a Range Descriptor. A Range Descriptor contains the following: - -- A sequential RangeID -- The keyspace (i.e., the set of keys) the range contains; for example, the first and last `<indexed column values>` in the Table Data KV Structure above. This determines the `meta2` range's keys. -- The addresses of nodes containing replicas of the range, with its Leaseholder (which is responsible for its reads and writes) in the first position. This determines the `meta2` range's key's values. - -Because Range Descriptors comprise the key-value data of the `meta2` range, each node's `meta2` cache also stores Range Descriptors. - -Range Descriptors are updated whenever there are: - -- Membership changes to a range's Raft group (discussed in more detail in the [Replication Layer](replication-layer.html#membership-changes-rebalance-repair)) -- Leaseholder changes -- Range splits - -All of these updates to the Range Descriptor occur locally on the range, and then propagate to the `meta2` range. - -### Range Splits - -By default, CockroachDB attempts to keep ranges/replicas at 64MiB. Once a range reaches that limit, we split it into two 32 MiB ranges (composed of contiguous key spaces). - -During this range split, the node creates a new Raft group containing all of the same members as the range that was split. The fact that there are now two ranges also means that there is a transaction that updates `meta2` with the new keyspace boundaries, as well as the addresses of the nodes using the Range Descriptor. - -## Technical Interactions with Other Layers - -### Distribution & Transaction Layer - -The Distribution Layer's `DistSender` receives `BatchRequests` from its own node's `TxnCoordSender`, housed in the Transaction Layer. 
- -### Distribution & Replication Layer - -The Distribution Layer routes `BatchRequests` to nodes containing ranges of data; each request is ultimately routed to the range's Raft group leader or Leaseholder, both of which are handled in the Replication Layer. - -## What's Next? - -Learn how CockroachDB copies data and ensures consistency in the [Replication Layer](replication-layer.html). diff --git a/src/current/v2.0/architecture/overview.md b/src/current/v2.0/architecture/overview.md deleted file mode 100644 index 8f26d270298..00000000000 --- a/src/current/v2.0/architecture/overview.md +++ /dev/null @@ -1,87 +0,0 @@ ---- -title: Architecture Overview -summary: Learn about the inner workings of the CockroachDB architecture. -toc: true -key: cockroachdb-architecture.html ---- - -CockroachDB was designed to create the open-source database our developers would want to use: one that is both scalable and consistent. Developers often have questions about how we've achieved this, and this guide sets out to detail the inner workings of the `cockroach` process as a means of explanation. - -However, you definitely do not need to understand the underlying architecture to use CockroachDB. These pages give serious users and database enthusiasts a high-level framework to explain what's happening under the hood. - -## Using this Guide - -This guide is broken out into pages detailing each layer of CockroachDB. It's recommended to read through the layers sequentially, starting with this overview and then proceeding to the SQL Layer. - -If you're looking for a high-level understanding of CockroachDB, you can simply read the **Overview** section of each layer. For more technical detail––for example, if you're interested in [contributing to the project](https://wiki.crdb.io/wiki/spaces/CRDB/pages/73204033/Contributing+to+CockroachDB)––you should read the **Components** sections as well. - -{{site.data.alerts.callout_info}}This guide details how CockroachDB is built, but does not explain how you should architect an application using CockroachDB. For help with your own application's architecture using CockroachDB, check out our user documentation.{{site.data.alerts.end}} - -## Goals of CockroachDB - -CockroachDB was designed in service of the following goals: - -- Make life easier for humans. This means being low-touch and highly automated for operators and simple to reason about for developers. -- Offer industry-leading consistency, even on massively scaled deployments. This means enabling distributed transactions, as well as removing the pain of eventual consistency issues and stale reads. -- Create an always-on database that accepts reads and writes on all nodes without generating conflicts. -- Allow flexible deployment in any environment, without tying you to any platform or vendor. -- Support familiar tools for working with relational data (i.e., SQL). - -With the confluence of these features, we hope that CockroachDB lets teams easily build global, scalable, resilient cloud services. - -## Glossary - -### Terms - -It's helpful to understand a few terms before reading our architecture documentation. - -{% include {{ page.version.version }}/misc/basic-terms.md %} - -### Concepts - -CockroachDB heavily relies on the following concepts, so being familiar with them will help you understand what our architecture achieves. 
- -Term | Definition -----|----------- -**Consistency** | CockroachDB uses "consistency" in both the sense of [ACID semantics](https://en.wikipedia.org/wiki/Consistency_(database_systems)) and the [CAP theorem](https://en.wikipedia.org/wiki/CAP_theorem), albeit less formally than either definition. What we try to express with this term is that your data should be anomaly-free. -**Consensus** | When a range receives a write, a quorum of nodes containing replicas of the range acknowledge the write. This means your data is safely stored and a majority of nodes agree on the database's current state, even if some of the nodes are offline.<br><br>When a write *doesn't* achieve consensus, forward progress halts to maintain consistency within the cluster. -**Replication** | Replication involves creating and distributing copies of data, as well as ensuring copies remain consistent. However, there are multiple types of replication: namely, synchronous and asynchronous.<br><br>Synchronous replication requires all writes to propagate to a quorum of copies of the data before being considered committed. To ensure consistency with your data, this is the kind of replication CockroachDB uses.<br><br>
      Asynchronous replication only requires a single node to receive the write to be considered committed; it's propagated to each copy of the data after the fact. This is more or less equivalent to "eventual consistency", which was popularized by NoSQL databases. This method of replication is likely to cause anomalies and loss of data. -**Transactions** | A set of operations performed on your database that satisfy the requirements of [ACID semantics](https://en.wikipedia.org/wiki/Database_transaction). This is a crucial component for a consistent system to ensure developers can trust the data in their database. -**Multi-Active Availability** | Our consensus-based notion of high availability that lets each node in the cluster handle reads and writes for a subset of the stored data (on a per-range basis). This is in contrast to active-passive replication, in which the active node receives 100% of request traffic, as well as active-active replication, in which all nodes accept requests but typically cannot guarantee that reads are both up-to-date and fast. - -## Overview - -CockroachDB starts running on machines with two commands: - -- `cockroach start` with a `--join` flag for all of the initial nodes in the cluster, so the process knows all of the other machines it can communicate with -- `cockroach init` to perform a one-time initialization of the cluster - -Once the `cockroach` process is running, developers interact with CockroachDB through a SQL API, which we've modeled after PostgreSQL. Thanks to the symmetrical behavior of all nodes, you can send SQL requests to any of them; this makes CockroachDB really easy to integrate with load balancers. - -After receiving SQL RPCs, nodes convert them into operations that work with our distributed key-value store. As these RPCs start filling your cluster with data, CockroachDB algorithmically starts distributing your data among your nodes, breaking the data up into 64MiB chunks that we call ranges. Each range is replicated to at least 3 nodes to ensure survivability. This way, if nodes go down, you still have copies of the data which can be used for reads and writes, as well as replicating the data to other nodes. - -If a node receives a read or write request it cannot directly serve, it simply finds the node that can handle the request, and communicates with it. This way you do not need to know where your data lives, CockroachDB tracks it for you, and enables symmetric behavior for each node. - -Any changes made to the data in a range rely on a consensus algorithm to ensure a majority of its replicas agree to commit the change, ensuring industry-leading isolation guarantees and providing your application consistent reads, regardless of which node you communicate with. - -Ultimately, data is written to and read from disk using an efficient storage engine, which is able to keep track of the data's timestamp. This has the benefit of letting us support the SQL standard `AS OF SYSTEM TIME` clause, letting you find historical data for a period of time. - -However, while that high-level overview gives you a notion of what CockroachDB does, looking at how the `cockroach` process operates on each of these nodes will give you much greater understanding of our architecture. - -### Layers - -At the highest level, CockroachDB converts clients' SQL statements into key-value (KV) data, which is distributed among nodes and written to disk. 
Our architecture is the process by which we accomplish that; it is manifested as a number of layers, each of which interacts with the layers directly above and below it as a relatively opaque service. - -The following pages describe the function each layer performs, but mostly ignore the details of other layers. This description is true to the experience of the layers themselves, which generally treat the other layers as black-box APIs. There are interactions that occur between layers which *are not* clearly articulated and require an understanding of each layer's function to understand the entire process. - -Layer | Order | Purpose ------|------------|-------- -[SQL](sql-layer.html) | 1 | Translate client SQL queries to KV operations. -[Transactional](transaction-layer.html) | 2 | Allow atomic changes to multiple KV entries. -[Distribution](distribution-layer.html) | 3 | Present replicated KV ranges as a single entity. -[Replication](replication-layer.html) | 4 | Consistently and synchronously replicate KV ranges across many nodes. This layer also enables consistent reads via leases. -[Storage](storage-layer.html) | 5 | Write and read KV data on disk. - -## What's Next? - -Begin understanding our architecture by learning how CockroachDB works with applications in the [SQL Layer](sql-layer.html). diff --git a/src/current/v2.0/architecture/replication-layer.md b/src/current/v2.0/architecture/replication-layer.md deleted file mode 100644 index 25e2b588a8f..00000000000 --- a/src/current/v2.0/architecture/replication-layer.md +++ /dev/null @@ -1,108 +0,0 @@ ---- -title: Replication Layer -summary: The replication layer of CockroachDB's architecture copies data between nodes and ensures consistency between copies. -toc: true ---- - -The Replication Layer of CockroachDB's architecture copies data between nodes and ensures consistency between these copies by implementing our consensus algorithm. - -{{site.data.alerts.callout_info}}If you haven't already, we recommend reading the Architecture Overview.{{site.data.alerts.end}} - -## Overview - -High availability requires that your database can tolerate nodes going offline without interrupting service to your application. This means replicating data between nodes to ensure the data remains accessible. - -Ensuring consistency with nodes offline, though, is a challenge many databases fail to meet. To solve this problem, CockroachDB uses a consensus algorithm to require that a quorum of replicas agrees on any changes to a range before those changes are committed. Because 3 is the smallest number that can achieve quorum (i.e., 2 out of 3), CockroachDB's high availability (known as Multi-Active Availability) requires 3 nodes. - -The number of failures that can be tolerated is equal to *(Replication factor - 1)/2*. For example, with 3x replication, one failure can be tolerated; with 5x replication, two failures, and so on. You can control the replication factor at the cluster, database, and table level using [Replication Zones](../configure-replication-zones.html). - -When failures happen, though, CockroachDB automatically realizes nodes have stopped responding and works to redistribute your data to continue maximizing survivability. This process also works the other way around: when new nodes join your cluster, data automatically rebalances onto them, ensuring your load is evenly distributed. - -### Interactions with Other Layers - -In relationship to other layers in CockroachDB, the Replication Layer: - -- Receives requests from and sends responses to the Distribution Layer. 
- Writes accepted requests to the Storage Layer. - -## Components - -### Raft - -Raft is a consensus protocol––an algorithm which makes sure that your data is safely stored on multiple machines, and that those machines agree on the current state even if some of them are temporarily disconnected. - -Raft organizes all nodes that contain a replica of a range into a group--unsurprisingly called a Raft Group. Each replica in a Raft Group is either a "leader" or a "follower". The leader, which is elected by Raft and long-lived, coordinates all writes to the Raft Group. It heartbeats followers periodically and keeps their logs replicated. In the absence of heartbeats, followers become candidates after randomized election timeouts and proceed to hold new leader elections. - -Once a node receives a `BatchRequest` for a range it contains, it converts those KV operations into Raft commands. Those commands are proposed to the Raft group leader––which is what makes it ideal for the [Leaseholder](#leases) and the Raft leader to be one and the same––and written to the Raft log. - -For a great overview of Raft, we recommend [The Secret Lives of Data](http://thesecretlivesofdata.com/raft/). - -#### Raft Logs - -When writes receive a quorum, and are committed by the Raft group leader, they're appended to the Raft log. This provides an ordered set of commands that the replicas agreed on and is essentially the source of truth for consistent replication. - -Because this log is treated as serializable, it can be replayed to bring a node from a past state to its current state. This log also lets nodes that temporarily went offline be "caught up" to the current state without needing to receive a copy of the existing data in the form of a snapshot. - -### Snapshots - -Each replica can be "snapshotted", which copies all of its data as of a specific timestamp (available because of [MVCC](storage-layer.html#mvcc)). This snapshot can be sent to other nodes during a rebalance event to expedite replication. - -After loading the snapshot, the node gets up to date by replaying all actions from the Raft group's log that have occurred since the snapshot was taken. - -### Leases - -A single node in the Raft group acts as the Leaseholder, which is the only node that can serve reads or propose writes to the Raft group leader (both actions are received as `BatchRequests` from [`DistSender`](distribution-layer.html#distsender)). - -When serving reads, Leaseholders bypass Raft; for the Leaseholder's writes to have been committed in the first place, they must have already achieved consensus, so a second consensus on the same data is unnecessary. This has the benefit of not incurring networking round trips required by Raft and greatly increases the speed of reads (without sacrificing consistency). - -CockroachDB attempts to elect a Leaseholder who is also the Raft group leader, which can also optimize the speed of writes. - -If there is no Leaseholder, any node receiving a request will attempt to become the Leaseholder for the range. To prevent two nodes from acquiring the lease, the requester includes a copy of the last valid lease it had; if another node became the Leaseholder, its request is ignored. - -#### Co-location with Raft Leadership - -The range lease is completely separate from Raft leadership, and so without further efforts, Raft leadership and the Range lease might not be held by the same Replica. 
However, we can optimize query performance by making the same node both Raft leader and the Leaseholder; it reduces network round trips if the Leaseholder receiving the requests can simply propose the Raft commands to itself, rather than communicating them to another node. - -To achieve this, each lease renewal or transfer also attempts to collocate them. In practice, that means that the mismatch is rare and self-corrects quickly. - -#### Epoch-Based Leases (Table Data) - -To manage leases for table data, CockroachDB implements a notion of "epochs," which are defined as the period between a node joining a cluster and a node disconnecting from a cluster. When the node disconnects, the epoch is considered changed, and the node immediately loses all of its leases. - -This mechanism lets us avoid tracking leases for every range, which eliminates a substantial amount of traffic we would otherwise incur. Instead, we assume leases do not expire until a node loses connection. - -#### Expiration-Based Leases (Meta & System Ranges) - -Your table's meta and system ranges (detailed in the Distribution Layer) are treated as normal key-value data, and therefore have Leases, as well. However, instead of using epochs, they have an expiration-based lease. These leases simply expire at a particular timestamp (typically a few seconds)––however, as long as the node continues proposing Raft commands, it continues to extend the expiration of the lease. If it doesn't, the next node containing a replica of the range that tries to read from or write to the range will become the Leaseholder. - -### Membership Changes: Rebalance/Repair - -Whenever there are changes to a cluster's number of nodes, the members of Raft groups change and, to ensure optimal survivability and performance, replicas need to be rebalanced. What that looks like varies depending on whether the membership change is nodes being added or going offline. - -**Nodes added**: The new node communicates information about itself to other nodes, indicating that it has space available. The cluster then rebalances some replicas onto the new node. - -**Nodes going offline**: If a member of a Raft group ceases to respond, after 5 minutes, the cluster begins to rebalance by replicating the data the downed node held onto other nodes. - -#### Rebalancing Replicas - -When CockroachDB detects a membership change, ultimately, replicas are moved between nodes. - -This is achieved by using a snapshot of a replica from the Leaseholder, and then sending the data to another node over [gRPC](distribution-layer.html#grpc). After the transfer has been completed, the node with the new replica joins that range's Raft group; it then detects that its latest timestamp is behind the most recent entries in the Raft log and it replays all of the actions in the Raft log on itself. - -## Interactions with Other Layers - -### Replication & Distribution Layers - -The Replication Layer receives requests from its and other nodes' `DistSender`. If this node is the Leaseholder for the range, it accepts the requests; if it isn't, it returns an error with a pointer to which node it believes *is* the Leaseholder. These KV requests are then turned into Raft commands. - -The Replication layer sends `BatchResponses` back to the Distribution Layer's `DistSender`. - -### Replication & Storage Layers - -Committed Raft commands are written to the Raft log and ultimately stored on disk through the Storage Layer. 
- -The Leaseholder serves reads from its RocksDB instance, which is in the Storage Layer. - -## What's Next? - -Learn how CockroachDB reads and writes data from disk in the [Storage Layer](storage-layer.html). diff --git a/src/current/v2.0/architecture/sql-layer.md b/src/current/v2.0/architecture/sql-layer.md deleted file mode 100644 index b6d31b6f456..00000000000 --- a/src/current/v2.0/architecture/sql-layer.md +++ /dev/null @@ -1,100 +0,0 @@ ---- -title: SQL Layer -summary: The SQL layer of CockroachDB's architecture exposes its SQL API to developers and converts SQL statements into key-value operations. -toc: true ---- - -The SQL Layer of CockroachDB's architecture exposes its SQL API to developers and converts SQL statements into key-value operations used by the rest of the database. - -{{site.data.alerts.callout_info}}If you haven't already, we recommend reading the Architecture Overview.{{site.data.alerts.end}} - -## Overview - -Once CockroachDB has been deployed, developers need nothing more than a connection string to the cluster and SQL statements to start working. - -Because CockroachDB's nodes all behave symmetrically, developers can send requests to any node (which means CockroachDB works well with load balancers). Whichever node receives the request acts as the "gateway node," as other layers process the request. - -When developers send requests to the cluster, they arrive as SQL statements, but data is ultimately written to and read from the storage layer as key-value (KV) pairs. To handle this, the SQL layer converts SQL statements into a plan of KV operations, which it passes along to the Transaction Layer. - -### Interactions with Other Layers - -In relationship to other layers in CockroachDB, the SQL Layer: - -- Sends requests to the Transaction Layer. - -## Components - -### Relational Structure - -Developers experience data stored in CockroachDB in a relational structure, i.e., rows and columns. Sets of rows and columns are organized into tables. Collections of tables are organized into databases. Your cluster can contain many databases. - -Because of this structure, CockroachDB provides typical relational features like constraints (e.g., foreign keys). This lets application developers trust that the database will ensure consistent structuring of the application's data; data validation doesn't need to be built into the application logic separately. - -### SQL API - -CockroachDB implements a large portion of the ANSI SQL standard to manifest its relational structure. You can view [all of the SQL features CockroachDB supports here](../sql-feature-support.html). - -Importantly, through the SQL API, we also let developers use ACID-semantic transactions just like they would through any SQL database (`BEGIN`, `END`, `ISOLATION LEVELS`, etc.). - -### PostgreSQL Wire Protocol - -SQL queries reach your cluster through the PostgreSQL wire protocol. This makes connecting your application to the cluster simple by supporting most PostgreSQL-compatible drivers, as well as many PostgreSQL ORMs, such as GORM (Go) and Hibernate (Java). - -### SQL Parser, Planner, Executor - -After your node ultimately receives a SQL request from a client, CockroachDB parses the statement, creates a query plan, and then executes the plan. - -#### Parsing - -Received queries are parsed against our `yacc` file (which describes our supported syntax), and the string version of each query is converted into an [Abstract Syntax Tree](https://en.wikipedia.org/wiki/Abstract_syntax_tree) (AST). 
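The AST is the input to the planning stage described next; you can inspect the plan a statement ultimately produces with [`EXPLAIN`](../explain.html). For example (the `customer_list` table is assumed from earlier pages, and output varies by statement and version):

~~~ sql
> EXPLAIN SELECT * FROM customer_list WHERE id > 3;
~~~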
- -#### Planning - -With the AST, CockroachDB begins [semantic analysis](https://en.wikipedia.org/wiki/Semantic_analysis_(compilers)), which includes checking whether the query is valid, resolving names, eliminating unneeded intermediate computations, and finalizing which data types to use for intermediate results. - -At the same time, CockroachDB starts planning the query's execution by generating a tree of `planNodes`. Each of the `planNodes` contains a set of code that uses KV operations; this is ultimately how SQL statements are converted into KV operations. - -You can see the `planNodes` a query generates using [`EXPLAIN`](../explain.html). - -#### Executing - -`planNodes` are then executed, which begins by communicating with the Transaction Layer. - -This step also includes encoding values from your statements, as well as decoding values returned from lower layers. - -### Encoding - -Though SQL queries are written in parsable strings, lower layers of CockroachDB deal primarily in bytes. This means at the SQL layer, in query execution, CockroachDB must convert row data from its SQL representation as strings into bytes, and convert bytes returned from lower layers into SQL data that can be passed back to the client. - -It's also important––for indexed columns––that this byte encoding preserve the same sort order as the data type it represents. This is because of the way CockroachDB ultimately stores data in a sorted key-value map; storing bytes in the same order as the data it represents lets us efficiently scan KV data. - -However, for non-indexed columns (e.g., non-`PRIMARY KEY` columns), CockroachDB instead uses an encoding (known as "value encoding") which consumes less space but does not preserve ordering. - -You can find more exhaustive detail in the [Encoding Tech Note](https://github.com/cockroachdb/cockroach/blob/master/docs/tech-notes/encoding.md). - -### DistSQL - -Because CockroachDB is a distributed database, we've developed a Distributed SQL (DistSQL) optimization tool for some queries, which can dramatically speed up queries that involve many ranges. Though DistSQL's architecture is worthy of its own documentation, this cursory explanation can provide some insight into how it works. - -In non-distributed queries, the coordinating node receives all of the rows that match its query, and then performs any computations on the entire data set. - -However, for DistSQL-compatible queries, each node does computations on the rows it contains, and then sends the results (instead of the entire rows) to the coordinating node. The coordinating node then aggregates the results from each node, and finally returns a single response to the client. - -This dramatically reduces the amount of data brought to the coordinating node, and leverages the well-proven concept of parallel computing, ultimately reducing the time it takes for complex queries to complete. In addition, this processes data on the node that already stores it, which lets CockroachDB handle row-sets that are larger than an individual node's storage. - -To run SQL statements in a distributed fashion, we introduce a couple of concepts: - -- **Logical plan**: Similar to the AST/`planNode` tree described above, it represents the abstract (non-distributed) data flow through computation stages. -- **Physical plan**: A physical plan is conceptually a mapping of the logical plan nodes to physical machines running `cockroach`. Logical plan nodes are replicated and specialized depending on the cluster topology.
Like `planNodes` above, these components of the physical plan are scheduled and run on the cluster. - -You can find much greater detail in the [DistSQL RFC](https://github.com/cockroachdb/cockroach/blob/master/docs/RFCS/20160421_distributed_sql.md). - -## Technical Interactions with Other Layers - -### SQL & Transaction Layer - -KV operations from executed `planNodes` are sent to the Transaction Layer. - -## What's Next? - -Learn how CockroachDB handles concurrent requests in the [Transaction Layer](transaction-layer.html). diff --git a/src/current/v2.0/architecture/storage-layer.md b/src/current/v2.0/architecture/storage-layer.md deleted file mode 100644 index 6e482a94b79..00000000000 --- a/src/current/v2.0/architecture/storage-layer.md +++ /dev/null @@ -1,68 +0,0 @@ ---- -title: Storage Layer -summary: The storage layer of CockroachDB's architecture reads and writes data to disk. -toc: true ---- - -The Storage Layer of CockroachDB's architecture reads and writes data to disk. - -{{site.data.alerts.callout_info}}If you haven't already, we recommend reading the Architecture Overview.{{site.data.alerts.end}} - -## Overview - -Each CockroachDB node contains at least one `store`, specified when the node starts, which is where the `cockroach` process reads and writes its data on disk. - -This data is stored as key-value pairs on disk using RocksDB, which is treated primarily as a black-box API. Internally, each store contains three instances of RocksDB: - -- One for the Raft log -- One for storing temporary Distributed SQL data -- One for all other data on the node - -In addition, there is a block cache shared amongst all of the stores in a node. These stores in turn have a collection of range replicas. More than one replica for a range will never be placed on the same store or even the same node. - -### Interactions with Other Layers - -In relationship to other layers in CockroachDB, the Storage Layer: - -- Serves successful reads and writes from the Replication Layer. - -## Components - -### RocksDB - -CockroachDB uses RocksDB––an embedded key-value store––to read and write data to disk. You can find more information about it on the [RocksDB Basics GitHub page](https://github.com/facebook/rocksdb/wiki/RocksDB-Basics). - -RocksDB integrates really well with CockroachDB for a number of reasons: - -- Key-value store, which makes mapping to our key-value layer very simple -- Atomic write batches and snapshots, which give us a subset of transactions - -Efficient storage for the keys is guaranteed by the underlying RocksDB engine by means of prefix compression. - -### MVCC - -CockroachDB relies heavily on [multi-version concurrency control (MVCC)](https://en.wikipedia.org/wiki/Multiversion_concurrency_control) to process concurrent requests and guarantee consistency. Much of this work is done by using [hybrid logical clock (HLC) timestamps](transaction-layer.html#time-hybrid-logical-clocks) to differentiate between versions of data, track commit timestamps, and identify a value's garbage collection expiration. All of this MVCC data is then stored in RocksDB. - -Despite being implemented in the Storage Layer, MVCC values are widely used to enforce consistency in the [Transaction Layer](transaction-layer.html). For example, CockroachDB maintains a [Timestamp Cache](transaction-layer.html#timestamp-cache), which stores the timestamp of the last time that the key was read.
If a write operation occurs at a lower timestamp than the largest value in the Timestamp Cache, it signifies there's a potential anomaly and the transaction must be restarted at a later timestamp. - -#### Time-Travel - -As described in the [SQL:2011 standard](https://en.wikipedia.org/wiki/SQL:2011#Temporal_support), CockroachDB supports time travel queries (enabled by MVCC). - -To do this, all of the schema information also has an MVCC-like model behind it. This lets you perform `SELECT...AS OF SYSTEM TIME`, and CockroachDB actually uses the schema information as of that time to formulate the queries. - -Using these tools, you can get consistent data from your database as far back as your garbage collection period. - -### Garbage Collection - -CockroachDB regularly garbage collects MVCC values to reduce the size of data stored on disk. To do this, we compact old MVCC values when there is a newer MVCC value with a timestamp that's older than the garbage collection period. By default, the garbage collection period is 24 hours, but it can be set at the cluster, database, or table level through [Replication Zones](../configure-replication-zones.html). - -## Interactions with Other Layers - -### Storage & Replication Layers - -The Storage Layer commits writes from the Raft log to disk, and returns requested data (i.e., reads) to the Replication Layer. - -## What's Next? - -Now that you've learned about our architecture, [start a local cluster](../install-cockroachdb.html) and start [building an app with CockroachDB](../build-an-app-with-cockroachdb.html). diff --git a/src/current/v2.0/architecture/transaction-layer.md b/src/current/v2.0/architecture/transaction-layer.md deleted file mode 100644 index 1e3d7d4e9eb..00000000000 --- a/src/current/v2.0/architecture/transaction-layer.md +++ /dev/null @@ -1,197 +0,0 @@ ---- -title: Transaction Layer -summary: The transaction layer of CockroachDB's architecture implements support for ACID transactions by coordinating concurrent operations. -toc: true ---- - -The Transaction Layer of CockroachDB's architecture implements support for ACID transactions by coordinating concurrent operations. - -{{site.data.alerts.callout_info}}If you haven't already, we recommend reading the Architecture Overview.{{site.data.alerts.end}} - -## Overview - -Above all else, CockroachDB believes consistency is the most important feature of a database––without it, developers cannot build reliable tools, and businesses suffer from potentially subtle and hard-to-detect anomalies. - -To provide consistency, CockroachDB implements full support for ACID transaction semantics in the Transaction Layer. However, it's important to realize that *all* statements are handled as transactions, including single statements––this is sometimes referred to as "autocommit mode" because it behaves as if every statement is followed by a `COMMIT`. - -For code samples of using transactions in CockroachDB, see our documentation on [transactions](../transactions.html#sql-statements). - -Because CockroachDB enables transactions that can span your entire cluster (including cross-range and cross-table transactions), it optimizes correctness through a two-phase transaction protocol with asynchronous cleanup. - -### Writes & Reads (Phase 1) - -#### Writing - -When the Transaction Layer executes write operations, it doesn't directly write values to disk.
Instead, it creates two things that help it mediate a distributed transaction: - -- A **Transaction Record** stored in the range where the first write occurs, which includes the transaction's current state (which starts as `PENDING`, and ends as either `COMMITTED` or `ABORTED`). - -- **Write Intents** for all of a transaction's writes, which represent a provisional, uncommitted state. These are essentially the same as standard [multi-version concurrency control (MVCC)](storage-layer.html#mvcc) values but also contain a pointer to the Transaction Record stored on the cluster. - -As write intents are created, CockroachDB checks for newer committed values. If newer committed values exist, the transaction may be restarted. If existing write intents for the same keys exist, it is resolved as a [transaction conflict](#transaction-conflicts). - -If transactions fail for other reasons, such as failing to pass a SQL constraint, the transaction is aborted. - -#### Reading - -If the transaction has not been aborted, the Transaction Layer begins executing read operations. If a read only encounters standard MVCC values, everything is fine. However, if it encounters any Write Intents, the operation must be resolved as a [transaction conflict](#transaction-conflicts). - -### Commits (Phase 2) - -CockroachDB checks the running transaction's record to see if it's been `ABORTED`; if it has, it restarts the transaction. - -If the transaction passes these checks, its record is moved to `COMMITTED` and CockroachDB responds with the transaction's success to the client. At this point, the client is free to begin sending more requests to the cluster. - -### Cleanup (Asynchronous Phase 3) - -After the transaction has been resolved, all of the Write Intents should be resolved. To do this, the coordinating node––which kept track of all of the keys it wrote––reaches out to the values and either: - -- Resolves their Write Intents to MVCC values by removing the element that points to the Transaction Record. -- Deletes the Write Intents. - -This is simply an optimization, though. If operations in the future encounter Write Intents, they always check their Transaction Records––any operation can resolve or remove Write Intents by checking the Transaction Record's status. - -### Interactions with Other Layers - -In relationship to other layers in CockroachDB, the Transaction Layer: - -- Receives KV operations from the SQL Layer. -- Controls the flow of KV operations sent to the Distribution Layer. - -## Technical Details & Components - -### Time & Hybrid Logical Clocks - -In distributed systems, ordering and causality are difficult problems to solve. While it's possible to rely entirely on Raft consensus to maintain serializability, it would be inefficient for reading data. To optimize performance of reads, CockroachDB implements hybrid-logical clocks (HLC), which are composed of a physical component (always close to local wall time) and a logical component (used to distinguish between events with the same physical component). This means that HLC time is always greater than or equal to the wall time. You can find more detail in the [HLC paper](http://www.cse.buffalo.edu/tech-reports/2014-04.pdf). - -In terms of transactions, the gateway node picks a timestamp for the transaction using HLC time. Whenever a transaction's timestamp is mentioned, it's an HLC value.
This timestamp is used both to track versions of values (through [multiversion concurrency control](storage-layer.html#mvcc)) and to provide our transactional isolation guarantees. - -When nodes send requests to other nodes, they include the timestamp generated by their local HLCs (which includes both physical and logical components). When nodes receive requests, they inform their local HLC of the timestamp supplied with the event by the sender. This is useful in guaranteeing that all data read/written on a node is at a timestamp less than the next HLC time. - -This then lets the node primarily responsible for the range (i.e., the Leaseholder) serve reads for data it stores by ensuring the transaction reading the data is at an HLC time greater than the MVCC value it's reading (i.e., the read always happens "after" the write). - -#### Max Clock Offset Enforcement - -CockroachDB requires moderate levels of clock synchronization to preserve data consistency. For this reason, when a node detects that its clock is out of sync with at least half of the other nodes in the cluster by 80% of the maximum offset allowed (500ms by default), **it crashes immediately**. - -While [serializable consistency](https://en.wikipedia.org/wiki/Serializability) is maintained regardless of clock skew, skew outside the configured clock offset bounds can result in violations of single-key linearizability between causally dependent transactions. It's therefore important to prevent clocks from drifting too far by running [NTP](http://www.ntp.org/) or other clock synchronization software on each node. - -For more detail about the risks that large clock offsets can cause, see [What happens when node clocks are not properly synchronized?](../operational-faqs.html#what-happens-when-node-clocks-are-not-properly-synchronized) - -### Timestamp Cache - -To provide serializability, whenever an operation reads a value, we store the operation's timestamp in a timestamp cache, which shows the high-water mark for values being read. - -Whenever a write occurs, its timestamp is checked against the timestamp cache. If the timestamp is less than the timestamp cache's latest value, we attempt to push the timestamp for its transaction forward to a later time. In the case of serializable transactions, this might cause them to restart in the second phase of the transaction (see [read refreshing](#read-refreshing)). - -### client.Txn and TxnCoordSender - -As we mentioned in the SQL layer's architectural overview, CockroachDB converts all SQL statements into key-value (KV) operations, which is how data is ultimately stored and accessed. - -All of the KV operations generated from the SQL layer use `client.Txn`, which is the transactional interface for the CockroachDB KV layer––but, as we discussed above, all statements are treated as transactions, so all statements use this interface. - -However, `client.Txn` is actually just a wrapper around `TxnCoordSender`, which plays a crucial role in our code base by: - -- Dealing with transactions' state. After a transaction is started, `TxnCoordSender` starts asynchronously sending heartbeat messages to that transaction's Transaction Record, which signals that it should be kept alive. If the `TxnCoordSender`'s heartbeating stops, the Transaction Record is moved to the `ABORTED` status. -- Tracking each written key or key range over the course of the transaction. -- Clearing the accumulated Write Intents for the transaction when it's committed or aborted.
All requests being performed as part of a transaction have to go through the same `TxnCoordSender` to account for all of its Write Intents, which optimizes the cleanup process. - -After setting up this bookkeeping, the request is passed to the `DistSender` in the Distribution Layer. - -### Transaction Records - -When a transaction starts, `TxnCoordSender` writes a Transaction Record to the range containing the first key modified in the transaction. As mentioned above, the Transaction Record provides the system with a source of truth about the status of a transaction. - -The Transaction Record expresses one of the following dispositions of a transaction: - -- `PENDING`: The initial status of all values, indicating that the Write Intent's transaction is still in progress. -- `COMMITTED`: Once a transaction has completed, this status indicates that the value can be read. -- `ABORTED`: If a transaction fails or is aborted by the client, it's moved into this state. - -The Transaction Record for a committed transaction remains until all its Write Intents are converted to MVCC values. For an aborted transaction, the Transaction Record can be deleted at any time, which also means that CockroachDB treats missing Transaction Records as if they belong to aborted transactions. - -### Write Intents - -Values in CockroachDB are not directly written to the storage layer; instead, everything is written in a provisional state known as a "Write Intent." These are essentially multi-version concurrency control values (also known as MVCC, which is explained in greater depth in the Storage Layer) with an additional value added to them which identifies the Transaction Record to which the value belongs. - -Whenever an operation encounters a Write Intent (instead of an MVCC value), it looks up the status of the Transaction Record to understand how it should treat the Write Intent value. - -#### Resolving Write Intent - -Whenever an operation encounters a Write Intent for a key, it attempts to "resolve" it, the result of which depends on the Write Intent's Transaction Record: - -- `COMMITTED`: The operation reads the Write Intent and converts it to an MVCC value by removing the Write Intent's pointer to the Transaction Record. -- `ABORTED`: The Write Intent is ignored and deleted. -- `PENDING`: This signals there is a [transaction conflict](#transaction-conflicts), which must be resolved. - -### Isolation Levels - -Isolation is an element of [ACID transactions](https://en.wikipedia.org/wiki/ACID), which determines how concurrency is controlled, and ultimately guarantees consistency. - -CockroachDB efficiently supports the strongest ANSI transaction isolation level: `SERIALIZABLE`. All other ANSI transaction isolation levels (e.g., `READ UNCOMMITTED`, `READ COMMITTED`, and `REPEATABLE READ`) are automatically upgraded to `SERIALIZABLE`. Weaker isolation levels have historically been used to maximize transaction throughput. However, [recent research](http://www.bailis.org/papers/acidrain-sigmod2017.pdf) has demonstrated that the use of weak isolation levels results in substantial vulnerability to concurrency-based attacks. CockroachDB continues to support an additional non-ANSI isolation level, `SNAPSHOT`, although it is deprecated. Clients can explicitly set a transaction's isolation when starting the transaction: - -- **Serializable Snapshot Isolation** _(Serializable)_ transactions are CockroachDB's default (equivalent to ANSI SQL's `SERIALIZABLE` isolation level, which is the highest of the four standard levels).
This isolation level does not allow any anomalies in your data, and is enforced by requiring the client to retry transactions if serializability violations are possible. - -- **Snapshot Isolation** _(Snapshot)_ transactions trade correctness in order to avoid retries when serializability violations are possible. This is achieved by always reading at an initial transaction timestamp, but allowing the transaction's commit timestamp to be pushed forward in the event of [transaction conflicts](#transaction-conflicts). Snapshot isolation cannot prevent an anomaly known as [write skew](https://en.wikipedia.org/wiki/Snapshot_isolation). - -### Transaction Conflicts - -CockroachDB's transactions allow the following types of conflicts that involve running into an intent: - -- **Write/Write**, where two `PENDING` transactions create Write Intents for the same key. -- **Write/Read**, when a read encounters an existing Write Intent with a timestamp less than its own. - -To make this simpler to understand, we'll call the first transaction `TxnA` and the transaction that encounters its Write Intents `TxnB`. - -CockroachDB proceeds through the following steps until one of the transactions is aborted, has its timestamp pushed, or enters the `TxnWaitQueue`. - -1. If the transaction has an explicit priority set (i.e., `HIGH` or `LOW`), the transaction with the lower priority is aborted (in the write/write case) or has its timestamp pushed (in the write/read case). - -2. `TxnB` tries to push `TxnA`'s timestamp forward. - - This succeeds only in the case that `TxnA` has snapshot isolation and `TxnB`'s operation is a read. In this case, the [write skew](https://en.wikipedia.org/wiki/Snapshot_isolation) anomaly occurs. - -3. `TxnB` enters the `TxnWaitQueue` to wait for `TxnA` to complete. - -Additionally, the following types of conflicts that do not involve running into intents can arise: - -- **Write after read**, when a write with a lower timestamp encounters a later read. This is handled through the [Timestamp Cache](#timestamp-cache). - -- **Read within uncertainty window**, when a read encounters a value with a higher timestamp but it's ambiguous whether the value should be considered to be in the future or in the past of the transaction because of possible *clock skew*. This is handled by attempting to push the transaction's timestamp beyond the uncertain value (see [read refreshing](#read-refreshing)). Note that, if the transaction has to be retried, reads will never encounter uncertainty issues on any node which was previously visited, and that there's never any uncertainty on values read from the transaction's gateway node. - -### TxnWaitQueue - -The `TxnWaitQueue` tracks all transactions that could not push a transaction whose writes they encountered, and must wait for the blocking transaction to complete before they can proceed. - -The `TxnWaitQueue`'s structure is a map of blocking transaction IDs to those they're blocking. For example: - -~~~ -txnA -> txn1, txn2 -txnB -> txn3, txn4, txn5 -~~~ - -Importantly, all of this activity happens on a single node, which is the leader of the range's Raft group that contains the Transaction Record. - -Once the transaction does resolve––by committing or aborting––a signal is sent to the `TxnWaitQueue`, which lets all transactions that were blocked by the resolved transaction begin executing. - -Blocked transactions also check the status of their own transaction to ensure they're still active. If the blocked transaction was aborted, it's simply removed.
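- -As a concrete sketch of this blocking behavior (assuming a hypothetical `accounts` table and two concurrent SQL sessions), suppose the first session leaves a `PENDING` Write Intent on a row: - -~~~ sql -> BEGIN; -> UPDATE accounts SET balance = balance - 10 WHERE id = 1; -~~~ - -A second session that writes to the same key encounters that Write Intent, cannot push the first transaction, and waits in the `TxnWaitQueue`: - -~~~ sql -> BEGIN; -> UPDATE accounts SET balance = balance + 10 WHERE id = 1; -~~~ - -As soon as the first session runs `COMMIT` (or aborts), the `TxnWaitQueue` is signaled and the second session's statement unblocks.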
- -If there is a deadlock between transactions (i.e., they're each blocked by each other's Write Intents), one of the transactions is randomly aborted. In the above example, this would happen if `TxnA` blocked `TxnB` on `key1` and `TxnB` blocked `TxnA` on `key2`. - -### Read refreshing - -Whenever a transaction's timestamp has been pushed, additional checks are required before allowing serializable transactions to commit at the pushed timestamp: any values which the transaction previously read must be checked to verify that no writes have subsequently occurred between the original transaction timestamp and the pushed transaction timestamp. This check prevents serializability violations. The check is done by keeping track of all the reads using a dedicated `RefreshRequest`. If this succeeds, the transaction is allowed to commit (transactions perform this check at commit time if they've been pushed by a different transaction or by the timestamp cache, or immediately, before continuing, whenever they encounter a `ReadWithinUncertaintyIntervalError`). -If the refresh is unsuccessful, then the transaction must be retried at the pushed timestamp. - -## Technical Interactions with Other Layers - -### Transaction & SQL Layer - -The Transaction Layer receives KV operations from `planNodes` executed in the SQL Layer. - -### Transaction & Distribution Layer - -The `TxnCoordSender` sends its KV requests to `DistSender` in the Distribution Layer. - -## What's Next? - -Learn how CockroachDB presents a unified view of your cluster's data in the [Distribution Layer](distribution-layer.html). diff --git a/src/current/v2.0/array.md b/src/current/v2.0/array.md deleted file mode 100644 index 59929d54b19..00000000000 --- a/src/current/v2.0/array.md +++ /dev/null @@ -1,217 +0,0 @@ ---- -title: ARRAY -summary: The ARRAY data type stores one-dimensional, 1-indexed, homogeneous arrays of any non-array data types. -toc: true ---- - -New in v1.1: The `ARRAY` data type stores one-dimensional, 1-indexed, homogeneous arrays of any non-array [data type](data-types.html). - -The `ARRAY` data type is useful for ensuring compatibility with ORMs and other tools. However, if such compatibility is not a concern, it's more flexible to design your schema with normalized tables. - - -{{site.data.alerts.callout_info}} CockroachDB does not support nested arrays, creating database indexes on arrays, or ordering by arrays.{{site.data.alerts.end}} - -## Syntax - -A value of data type `ARRAY` can be expressed in the following ways: - - -- Appending square brackets (`[]`) to any non-array [data type](data-types.html). -- Adding the term `ARRAY` to any non-array [data type](data-types.html). - -## Size - -The size of an `ARRAY` value is variable, but it's recommended to keep values under 1 MB to ensure performance. Above that threshold, [write amplification](https://en.wikipedia.org/wiki/Write_amplification) and other considerations may cause significant performance degradation. - -## Examples - -{{site.data.alerts.callout_success}} -For a complete list of array functions built into CockroachDB, see the [documentation on array functions](functions-and-operators.html#array-functions).
-{{site.data.alerts.end}} - -### Creating an array column by appending square brackets - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE a (b STRING[]); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO a VALUES (ARRAY['sky', 'road', 'car']); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM a; -~~~ - -~~~ -+----------------------+ -| b | -+----------------------+ -| {"sky","road","car"} | -+----------------------+ -(1 row) -~~~ - -### Creating an array column by adding the term `ARRAY` - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE c (d INT ARRAY); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO c VALUES (ARRAY[10,20,30]); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM c; -~~~ - -~~~ -+------------+ -| d | -+------------+ -| {10,20,30} | -+------------+ -(1 row) -~~~ - -### Accessing an array element using array index -{{site.data.alerts.callout_info}} Arrays in CockroachDB are 1-indexed. {{site.data.alerts.end}} - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM c; -~~~ - -~~~ -+------------+ -| d | -+------------+ -| {10,20,30} | -+------------+ -(1 row) -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT d[2] FROM c; -~~~ - -~~~ -+------+ -| d[2] | -+------+ -| 20 | -+------+ -(1 row) -~~~ - -### Appending an element to an array - -#### Using the `array_append` function - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM c; -~~~ - -~~~ -+------------+ -| d | -+------------+ -| {10,20,30} | -+------------+ -(1 row) -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> UPDATE c SET d = array_append(d, 40) WHERE d[3] = 30; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM c; -~~~ - -~~~ -+---------------+ -| d | -+---------------+ -| {10,20,30,40} | -+---------------+ -(1 row) -~~~ - -#### Using the append (`||`) operator - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM c; -~~~ - -~~~ -+---------------+ -| d | -+---------------+ -| {10,20,30,40} | -+---------------+ -(1 row) -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> UPDATE c SET d = d || 50 WHERE d[4] = 40; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM c; -~~~ - -~~~ -+------------------+ -| d | -+------------------+ -| {10,20,30,40,50} | -+------------------+ -(1 row) -~~~ - -## Supported Casting & ConversionNew in v2.0 - -[Casting](data-types.html#data-type-conversions-casts) between `ARRAY` values is supported when the data types of the arrays support casting. For example, it is possible to cast from a `BOOL` array to an `INT` array but not from a `BOOL` array to a `TIMESTAMP` array: - -{% include copy-clipboard.html %} -~~~ sql -> SELECT ARRAY[true,false,true]::INT[]; -~~~ - -~~~ -+--------------------------------+ -| ARRAY[true, false, | -| true]::INT[] | -+--------------------------------+ -| {1,0,1} | -+--------------------------------+ -(1 row) -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT ARRAY[true,false,true]::TIMESTAMP[]; -~~~ - -~~~ -pq: invalid cast: bool[] -> TIMESTAMP[] -~~~ - -## See Also - -[Data Types](data-types.html) diff --git a/src/current/v2.0/as-of-system-time.md b/src/current/v2.0/as-of-system-time.md deleted file mode 100644 index 3f710bc2411..00000000000 --- a/src/current/v2.0/as-of-system-time.md +++ /dev/null @@ -1,166 +0,0 @@ ---- -title: AS OF SYSTEM TIME -summary: The AS OF SYSTEM TIME clause executes a statement as of a specified time. 
-toc: true ---- - -The `AS OF SYSTEM TIME timestamp` clause causes statements to execute -using the database contents "as of" a specified time in the past. - -This clause can be used to read historical data (also known as "[time -travel -queries](https://www.cockroachlabs.com/blog/time-travel-queries-select-witty_subtitle-the_future/)") -and can also be advantageous for performance as it decreases -transaction conflicts. For more details, see [SQL Performance Best -Practices](performance-best-practices-overview.html#use-as-of-system-time-to-decrease-conflicts-with-long-running-queries). - -{{site.data.alerts.callout_info}}Historical data is available only within the garbage collection window, which is determined by the ttlseconds field in the replication zone configuration.{{site.data.alerts.end}} - -## Synopsis - -The `AS OF SYSTEM TIME` clause is supported in multiple SQL contexts, -including but not limited to: - -- In [`SELECT` clauses](select-clause.html), at the very end of the `FROM` sub-clause. -- In [`BACKUP`](backup.html), after the parameters of the `TO` sub-clause. -- In [`RESTORE`](restore.html), after the parameters of the `FROM` sub-clause. - -Currently, CockroachDB does not support `AS OF SYSTEM TIME` in -[explicit transactions](transactions.html). This limitation may be -lifted in the future. - -## Parameters - -The `timestamp` argument supports the following formats: - -Format | Notes ----|--- -[`INT`](int.html) | Nanoseconds since the Unix epoch. -[`STRING`](string.html) | A [`TIMESTAMP`](timestamp.html) or [`INT`](int.html) number of nanoseconds. - -## Examples - -### Select Historical Data (Time-Travel) - -Imagine this example represents the database's current data: - -{% include copy-clipboard.html %} -~~~ sql -> SELECT name, balance - FROM accounts - WHERE name = 'Edna Barath'; -~~~ -~~~ -+-------------+---------+ -| name | balance | -+-------------+---------+ -| Edna Barath | 750 | -| Edna Barath | 2200 | -+-------------+---------+ -~~~ - -We could instead retrieve the values as they were on October 3, 2016 at 12:45 UTC: - -{% include copy-clipboard.html %} -~~~ sql -> SELECT name, balance - FROM accounts - AS OF SYSTEM TIME '2016-10-03 12:45:00' - WHERE name = 'Edna Barath'; -~~~ -~~~ -+-------------+---------+ -| name | balance | -+-------------+---------+ -| Edna Barath | 450 | -| Edna Barath | 2000 | -+-------------+---------+ -~~~ - - -### Using Different Timestamp Formats - -Assuming the following statements are run at `2016-01-01 12:00:00`, they would execute as of `2016-01-01 08:00:00`: - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM t AS OF SYSTEM TIME '2016-01-01 08:00:00' -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM t AS OF SYSTEM TIME 1451635200000000000 -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM t AS OF SYSTEM TIME '1451635200000000000' -~~~ - -### Selecting from Multiple Tables - -{{site.data.alerts.callout_info}}It is not yet possible to select from multiple tables at different timestamps. The entire query runs at the specified time in the past.{{site.data.alerts.end}} - -When selecting over multiple tables in a single `FROM` clause, the `AS -OF SYSTEM TIME` clause must appear at the very end and applies to the -entire `SELECT` clause. 
- -For example: - -{% include copy-clipboard.html %} -~~~sql -> SELECT * FROM t, u, v AS OF SYSTEM TIME '2016-01-01 08:00:00'; -~~~ - -{% include copy-clipboard.html %} -~~~sql -> SELECT * FROM t JOIN u ON t.x = u.y AS OF SYSTEM TIME '2016-01-01 08:00:00'; -~~~ - -{% include copy-clipboard.html %} -~~~sql -> SELECT * FROM (SELECT * FROM t), (SELECT * FROM u) AS OF SYSTEM TIME '2016-01-01 08:00:00'; -~~~ - -### Using `AS OF SYSTEM TIME` in Subqueries - -To enable time travel, the `AS OF SYSTEM TIME` clause must appear in -at least the top-level statement. It is not valid to use it only in a -[subquery](subqueries.html). - -For example, the following is invalid: - -~~~ -SELECT * FROM (SELECT * FROM t AS OF SYSTEM TIME '2016-01-01 08:00:00'), u -~~~ - -To facilitate the composition of larger queries from simpler queries, -CockroachDB allows `AS OF SYSTEM TIME` in sub-queries under the -following conditions: - -- The top level query also specifies `AS OF SYSTEM TIME`. -- All the `AS OF SYSTEM TIME` clauses specify the same timestamp. - -For example: - -{% include copy-clipboard.html %} -~~~sql -> SELECT * FROM (SELECT * FROM t AS OF SYSTEM TIME '2016-01-01 08:00:00') tp - JOIN u ON tp.x = u.y - AS OF SYSTEM TIME '2016-01-01 08:00:00' -- same timestamp as above - OK. - WHERE x < 123; -~~~ - -## See Also - -- [Select Historical Data](select-clause.html#select-historical-data-time-travel) -- [Time-Travel Queries](https://www.cockroachlabs.com/blog/time-travel-queries-select-witty_subtitle-the_future/) - -## Tech Note - -{{site.data.alerts.callout_info}}Although the following format is supported, it is not intended to be used by most users.{{site.data.alerts.end}} - -HLC timestamps can be specified using a [`DECIMAL`](decimal.html). The -integer part is the wall time in nanoseconds. The fractional part is -the logical counter, a 10-digit integer. This is the same format as -produced by the `cluster_logical_timestamp()` function. diff --git a/src/current/v2.0/automated-scaling-and-repair.md b/src/current/v2.0/automated-scaling-and-repair.md deleted file mode 100644 index 4d708823f30..00000000000 --- a/src/current/v2.0/automated-scaling-and-repair.md +++ /dev/null @@ -1,17 +0,0 @@ ---- -title: Automated Scaling & Repair -summary: CockroachDB transparently manages scale with an upgrade path from a single node to hundreds. -toc: false ---- - -CockroachDB scales horizontally with minimal operator overhead. You can run it on your local computer, a single server, a corporate development cluster, or a private or public cloud. [Adding capacity](start-a-node.html) is as easy as pointing a new node at the running cluster. - -At the key-value level, CockroachDB starts off with a single, empty range. As you put data in, this single range eventually reaches a threshold size (64MB by default). When that happens, the data splits into two ranges, each covering a contiguous segment of the entire key-value space. This process continues indefinitely; as new data flows in, existing ranges continue to split into new ranges, aiming to keep a relatively small and consistent range size. - -When your cluster spans multiple nodes (physical machines, virtual machines, or containers), newly split ranges are automatically rebalanced to nodes with more capacity. CockroachDB communicates opportunities for rebalancing using a peer-to-peer [gossip protocol](https://en.wikipedia.org/wiki/Gossip_protocol) by which nodes exchange network addresses, store capacity, and other information. 
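- -To observe this process on a running cluster, you can list a table's ranges along with the nodes holding their replicas and leases. A minimal sketch, assuming a hypothetical table named `kv` and the experimental statement available in v2.0: - -~~~ sql -> SHOW EXPERIMENTAL_RANGES FROM TABLE kv; -~~~ - -As data flows in and ranges split and rebalance, the set of replicas and the leaseholder reported for each range changes over time.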
- -- Add resources to scale horizontally, with zero hassle and no downtime -- Self-organizes, self-heals, and automatically rebalances -- Migrate data seamlessly between clouds - -Automated scaling and repair in CockroachDB diff --git a/src/current/v2.0/back-up-data.md b/src/current/v2.0/back-up-data.md deleted file mode 100644 index e9a02f47c9f..00000000000 --- a/src/current/v2.0/back-up-data.md +++ /dev/null @@ -1,26 +0,0 @@ ---- -title: Back Up Data -summary: Learn how to back up and restore a CockroachDB cluster. -toc: false ---- - -CockroachDB offers the following methods to back up your cluster's data: - -- [`cockroach dump`](sql-dump.html), which is a CLI command to dump/export your database's schema and table data. -- [`BACKUP`](backup.html) (*[enterprise license](https://www.cockroachlabs.com/pricing/) only*), which is a SQL statement that backs up your cluster to cloud or network file storage. - -### Details - -We recommend creating daily backups of your data as an operational best practice. - -However, because CockroachDB is designed with high fault tolerance, backups are primarily needed for disaster recovery (i.e., if your cluster loses a majority of its nodes). Isolated issues (such as small-scale node outages) do not require any intervention. - -## Restore - -For information about restoring your backed up data, see [Restoring Data](restore-data.html). - -## See Also - -- [Restore Data](restore-data.html) -- [Use the Built-in SQL Client](use-the-built-in-sql-client.html) -- [Other Cockroach Commands](cockroach-commands.html) diff --git a/src/current/v2.0/backup.md b/src/current/v2.0/backup.md deleted file mode 100644 index 94db064ac17..00000000000 --- a/src/current/v2.0/backup.md +++ /dev/null @@ -1,194 +0,0 @@ ---- -title: BACKUP -summary: Back up your CockroachDB cluster to cloud storage services such as AWS S3, Google Cloud Storage, or NFS. -toc: true ---- - -{{site.data.alerts.callout_danger}}The BACKUP feature is only available to enterprise users. For non-enterprise backups, see cockroach dump.{{site.data.alerts.end}} - -CockroachDB's `BACKUP` [statement](sql-statements.html) allows you to create full or incremental backups of your cluster's schema and data that are consistent as of a given timestamp. Backups can be with or without [revision history](backup.html#backups-with-revision-history-new-in-v2-0). - -Because CockroachDB is designed with high fault tolerance, these backups are designed primarily for disaster recovery (i.e., if your cluster loses a majority of its nodes) through [`RESTORE`](restore.html). Isolated issues (such as small-scale node outages) do not require any intervention. - - -## Functional Details - -### Backup Targets - -You can back up entire tables (which automatically includes their indexes) or [views](views.html). Backing up a database simply backs up all of its tables and views. - -{{site.data.alerts.callout_info}}BACKUP only offers table-level granularity; it does not support backing up subsets of a table.{{site.data.alerts.end}} - -### Object Dependencies - -Dependent objects must be backed up at the same time as the objects they depend on. - -Object | Depends On -------|----------- -Table with [foreign key](foreign-key.html) constraints | The table it `REFERENCES`; however, this dependency can be [removed during the restore](restore.html#skip_missing_foreign_keys).
-Table with a [sequence](create-sequence.html) | New in v2.0: The sequence it uses; however, this dependency can be [removed during the restore](restore.html#skip_missing_sequences). -[Views](views.html) | The tables used in the view's `SELECT` statement. -[Interleaved tables](interleave-in-parent.html) | The parent table in the [interleaved hierarchy](interleave-in-parent.html#interleaved-hierarchy). - -### Users and Privileges - -Every backup you create includes `system.users`, which stores your users and their passwords. To restore your users, you must use [this procedure](restore.html#restoring-users-from-system-users-backup). - -Restored tables inherit privilege grants from the target database; they do not preserve privilege grants from the backed up table because the restoring cluster may have different users. - -Table-level privileges must be [granted to users](grant.html) after the restore is complete. - -### Backup Types - -CockroachDB offers two types of backups: full and incremental. - -#### Full Backups - -Full backups contain an unreplicated copy of your data and can always be used to restore your cluster. These files are roughly the size of your data and require greater resources to produce than incremental backups. You can take full backups as of a given timestamp and (optionally) include the available [revision history](backup.html#backups-with-revision-history-new-in-v2-0). - -#### Incremental Backups - -Incremental backups are smaller and faster to produce than full backups because they contain only the data that has changed since a base set of backups you specify (which must include one full backup, and can include many incremental backups). You can take incremental backups either as of a given timestamp or with full [revision history](backup.html#backups-with-revision-history-new-in-v2-0). - -Note the following restrictions: - -- Incremental backups can only be created within the garbage collection period of the base backup's most recent timestamp. This is because incremental backups are created by finding which data has been created or modified since the most recent timestamp in the base backup––that timestamp data, though, is deleted by the garbage collection process. - - You can configure garbage collection periods using the `ttlseconds` [replication zone setting](configure-replication-zones.html). - -- It is not possible to create an incremental backup if one or more tables were [created](create-table.html), [dropped](drop-table.html), or [truncated](truncate.html) after the full backup. In this case, you must create a new [full backup](#full-backups). - -### Backups with Revision History New in v2.0 - -{% include {{ page.version.version }}/misc/beta-warning.md %} - -You can create full or incremental backups with revision history: - -- Taking full backups with revision history allows you to back up every change made within the garbage collection period leading up to and including the given timestamp. -- Taking incremental backups with revision history allows you to back up every change made since the last backup and within the garbage collection period leading up to and including the given timestamp. You can take incremental backups with revision history even when your previous full or incremental backups were taken without revision history. - -You can configure garbage collection periods using the `ttlseconds` [replication zone setting](configure-replication-zones.html). 
Taking backups with revision history allows for point-in-time restores within the revision history. - -## Performance - -The `BACKUP` process minimizes its impact on the cluster's performance by distributing work to all nodes. Each node backs up only a specific subset of the data it stores (the ranges for which it serves writes; more details about this architectural concept forthcoming), with no two nodes backing up the same data. - -For best performance, we also recommend always starting backups with a specific [timestamp](timestamp.html) at least 10 seconds in the past. For example: - -{% include copy-clipboard.html %} -~~~ sql -> BACKUP...AS OF SYSTEM TIME '2017-06-09 16:13:55.571516+00:00'; -~~~ - -This improves performance by decreasing the likelihood that the `BACKUP` will be [retried because it contends with other statements/transactions](transactions.html#transaction-retries). However, because `AS OF SYSTEM TIME` returns historical data, your reads might be stale. - -## Automating Backups - -We recommend automating daily backups of your cluster. - -To automate backups, you must have a client send the `BACKUP` statement to the cluster. - -Once the backup is complete, your client will receive a `BACKUP` response. - -## Viewing and Controlling Backup Jobs - -After CockroachDB successfully initiates a backup, it registers the backup as a job, which you can view with [`SHOW JOBS`](show-jobs.html). - -After the backup has been initiated, you can control it with [`PAUSE JOB`](pause-job.html), [`RESUME JOB`](resume-job.html), and [`CANCEL JOB`](cancel-job.html). - -## Synopsis - -
      - {% include {{ page.version.version }}/sql/diagrams/backup.html %} -
      - -{{site.data.alerts.callout_info}}The BACKUP statement cannot be used within a transaction.{{site.data.alerts.end}} - -## Required Privileges - -Only the `root` user can run `BACKUP`. - -## Parameters - -| Parameter | Description | -|-----------|-------------| -| `table_pattern` | The table or [view](views.html) you want to back up. | -| `name` | The name of the database you want to back up (i.e., create backups of all tables and views in the database).| -| `destination` | The URL where you want to store the backup.

      For information about this URL structure, see [Backup File URLs](#backup-file-urls). | -| `AS OF SYSTEM TIME timestamp` | Back up data as it existed as of [`timestamp`](as-of-system-time.html). The `timestamp` must be more recent than your cluster's last garbage collection (which defaults to occur every 25 hours, but is [configurable per table](configure-replication-zones.html#replication-zone-format)). | -| `WITH revision_history` | New in v2.0: Create a backup with full [revision history](backup.html#backups-with-revision-history-new-in-v2-0) that records every change made to the cluster within the garbage collection period leading up to and including the given timestamp. | -| `INCREMENTAL FROM full_backup_location` | Create an incremental backup using the full backup stored at the URL `full_backup_location` as its base. For information about this URL structure, see [Backup File URLs](#backup-file-urls).

      **Note:** It is not possible to create an incremental backup if one or more tables were [created](create-table.html), [dropped](drop-table.html), or [truncated](truncate.html) after the full backup. In this case, you must create a new [full backup](#full-backups). | -| `incremental_backup_location` | Create an incremental backup that includes all backups listed at the provided URLs.

      Lists of incremental backups must be sorted from oldest to newest. The newest incremental backup's timestamp must be within the table's garbage collection period.

      For information about this URL structure, see [Backup File URLs](#backup-file-urls).

      For more information about garbage collection, see [Configure Replication Zones](configure-replication-zones.html#replication-zone-format). | - -### Backup File URLs - -We will use the URL provided to construct a secure API call to the service you specify. The path to each backup must be unique, and the URL for your backup's destination/locations must use the following format: - -{% include {{ page.version.version }}/misc/external-urls.md %} - -## Examples - -Per our guidance in the [Performance](#performance) section, we recommend starting backups from a time at least 10 seconds in the past using [`AS OF SYSTEM TIME`](as-of-system-time.html). - -### Backup a Single Table or View - -{% include copy-clipboard.html %} -~~~ sql -> BACKUP bank.customers \ -TO 'gs://acme-co-backup/database-bank-2017-03-27-weekly' \ -AS OF SYSTEM TIME '2017-03-26 23:59:00'; -~~~ - -### Backup Multiple Tables - -{% include copy-clipboard.html %} -~~~ sql -> BACKUP bank.customers, bank.accounts \ -TO 'gs://acme-co-backup/database-bank-2017-03-27-weekly' \ -AS OF SYSTEM TIME '2017-03-26 23:59:00'; -~~~ - -### Backup an Entire Database - -{% include copy-clipboard.html %} -~~~ sql -> BACKUP DATABASE bank \ -TO 'gs://acme-co-backup/database-bank-2017-03-27-weekly' \ -AS OF SYSTEM TIME '2017-03-26 23:59:00'; -~~~ - -### Backup with Revision HistoryNew in v2.0 - -{% include copy-clipboard.html %} -~~~ sql -> BACKUP DATABASE bank \ -TO 'gs://acme-co-backup/database-bank-2017-03-27-weekly' \ -AS OF SYSTEM TIME '2017-03-26 23:59:00' WITH revision_history; -~~~ - -### Create Incremental Backups - -Incremental backups must be based off of full backups you've already created. - -{% include copy-clipboard.html %} -~~~ sql -> BACKUP DATABASE bank \ -TO 'gs://acme-co-backup/db/bank/2017-03-29-nightly' \ -AS OF SYSTEM TIME '2017-03-28 23:59:00' \ -INCREMENTAL FROM 'gs://acme-co-backup/database-bank-2017-03-27-weekly', 'gs://acme-co-backup/database-bank-2017-03-28-nightly'; -~~~ - -### Create Incremental Backups with Revision HistoryNew in v2.0 - -{% include copy-clipboard.html %} -~~~ sql -> BACKUP DATABASE bank \ -TO 'gs://acme-co-backup/database-bank-2017-03-29-nightly' \ -AS OF SYSTEM TIME '2017-03-28 23:59:00' \ -INCREMENTAL FROM 'gs://acme-co-backup/database-bank-2017-03-27-weekly', 'gs://acme-co-backup/database-bank-2017-03-28-nightly' WITH revision_history; -~~~ - -## See Also - -- [`RESTORE`](restore.html) -- [Configure Replication Zones](configure-replication-zones.html) diff --git a/src/current/v2.0/begin-transaction.md b/src/current/v2.0/begin-transaction.md deleted file mode 100644 index e8f707211e6..00000000000 --- a/src/current/v2.0/begin-transaction.md +++ /dev/null @@ -1,120 +0,0 @@ ---- -title: BEGIN -summary: Initiate a SQL transaction with the BEGIN statement in CockroachDB. -toc: true ---- - -The `BEGIN` [statement](sql-statements.html) initiates a [transaction](transactions.html), which either successfully executes all of the statements it contains or none at all. - -{{site.data.alerts.callout_danger}}When using transactions, your application should include logic to retry transactions that are aborted to break a dependency cycle between concurrent transactions.{{site.data.alerts.end}} - - -## Synopsis - -
      -{% include {{ page.version.version }}/sql/diagrams/begin_transaction.html %} -
      - -## Required Privileges - -No [privileges](privileges.html) are required to initiate a transaction. However, privileges are required for each statement within a transaction. - -## Aliases - -In CockroachDB, the following are aliases for the `BEGIN` statement: - -- `BEGIN TRANSACTION` -- `START TRANSACTION` - -The following aliases also exist for [isolation levels](transactions.html#isolation-levels): - -- `READ UNCOMMITTED`, `READ COMMITTED`, and `REPEATABLE READ` are aliases for `SERIALIZABLE` - -For more information on isolation level aliases, see [Comparison to ANSI SQL Isolation Levels](transactions.html#comparison-to-ansi-sql-isolation-levels). - -## Parameters - -| Parameter | Description | -|-----------|-------------| -| `ISOLATION LEVEL` | By default, transactions in CockroachDB implement the strongest ANSI isolation level: `SERIALIZABLE`. At this isolation level, transactions will never result in anomalies. The `SNAPSHOT` isolation level is still supported as well for backwards compatibility, but you should avoid using it. It provides little benefit in terms of performance and can result in inconsistent state under certain complex workloads. For more information, see [Transactions: Isolation Levels](transactions.html#isolation-levels).

      **Default**: `SERIALIZABLE` | -| `PRIORITY` | If you do not want the transaction to run with `NORMAL` priority, you can set it to `LOW` or `HIGH`.

      Transactions with higher priority are less likely to need to be retried.

      For more information, see [Transactions: Priorities](transactions.html#transaction-priorities).

**Default**: `NORMAL` | - -## Examples - -### Begin a Transaction - -#### Use Default Settings - -Without modifying the `BEGIN` statement, the transaction uses `SERIALIZABLE` isolation and `NORMAL` priority. - -~~~ sql -> BEGIN; - -> SAVEPOINT cockroach_restart; - -> UPDATE products SET inventory = 0 WHERE sku = '8675309'; - -> INSERT INTO orders (customer, sku, status) VALUES (1001, '8675309', 'new'); - -> RELEASE SAVEPOINT cockroach_restart; - -> COMMIT; -~~~ - -{{site.data.alerts.callout_danger}}This example assumes you're using client-side intervention to handle transaction retries.{{site.data.alerts.end}} - -#### Change Isolation Level & Priority - -You can set a transaction's isolation level to `SNAPSHOT`, as well as its priority to `LOW` or `HIGH`. - -~~~ sql -> BEGIN ISOLATION LEVEL SNAPSHOT, PRIORITY HIGH; - -> SAVEPOINT cockroach_restart; - -> UPDATE products SET inventory = 0 WHERE sku = '8675309'; - -> INSERT INTO orders (customer, sku, status) VALUES (1001, '8675309', 'new'); - -> RELEASE SAVEPOINT cockroach_restart; - -> COMMIT; -~~~ - -You can also set a transaction's isolation level and priority with [`SET TRANSACTION`](set-transaction.html). - -{{site.data.alerts.callout_danger}}This example assumes you're using client-side intervention to handle transaction retries.{{site.data.alerts.end}} - -### Begin a Transaction with Automatic Retries - -CockroachDB will [automatically retry](transactions.html#transaction-retries) all transactions that contain both `BEGIN` and `COMMIT` in the same batch. Batching is controlled by your driver or client's behavior, but means that CockroachDB receives all of the statements as a single unit, instead of a number of requests. - -From the perspective of CockroachDB, a transaction sent as a batch looks like this: - -~~~ sql -> BEGIN; DELETE FROM customers WHERE id = 1; DELETE FROM orders WHERE customer = 1; COMMIT; -~~~ - -However, in your application's code, batched transactions are often just multiple statements sent at once. For example, in Go, this transaction would be sent as a single batch (and automatically retried): - -~~~ go -db.Exec( - `BEGIN; - - DELETE FROM customers WHERE id = 1; - - DELETE FROM orders WHERE customer = 1; - - COMMIT;`, -) -~~~ - -Issuing statements this way signals to CockroachDB that you do not need to change any of the statements' values if the transaction doesn't immediately succeed, so it can continually retry the transaction until it's accepted. - -## See Also - -- [Transactions](transactions.html) -- [`COMMIT`](commit-transaction.html) -- [`SAVEPOINT`](savepoint.html) -- [`RELEASE SAVEPOINT`](release-savepoint.html) -- [`ROLLBACK`](rollback-transaction.html) diff --git a/src/current/v2.0/bool.md b/src/current/v2.0/bool.md deleted file mode 100644 index b86a8243b49..00000000000 --- a/src/current/v2.0/bool.md +++ /dev/null @@ -1,72 +0,0 @@ ---- -title: BOOL -summary: The BOOL data type stores Boolean values of false or true. -toc: true ---- - -The `BOOL` [data type](data-types.html) stores a Boolean value of `false` or `true`. - - -## Aliases - -In CockroachDB, `BOOLEAN` is an alias for `BOOL`. - -## Syntax - -There are two predefined -[named constants](sql-constants.html#named-constants) for `BOOL`: -`TRUE` and `FALSE` (the names are case-insensitive). - -Alternatively, a boolean value can be obtained by coercing a numeric -value: zero is coerced to `FALSE`, and any non-zero value to `TRUE`.
- -- `CAST(0 AS BOOL)` (false) -- `CAST(123 AS BOOL)` (true) - -## Size - -A `BOOL` value is 1 byte in width, but the total storage size is likely to be larger due to CockroachDB metadata. - -## Examples - -~~~ sql -> CREATE TABLE bool (a INT PRIMARY KEY, b BOOL, c BOOLEAN); - -> SHOW COLUMNS FROM bool; -~~~ -~~~ -+-------+------+-------+---------+ -| Field | Type | Null | Default | -+-------+------+-------+---------+ -| a | INT | false | NULL | -| b | BOOL | true | NULL | -| c | BOOL | true | NULL | -+-------+------+-------+---------+ -~~~ -~~~ sql -> INSERT INTO bool VALUES (12345, true, CAST(0 AS BOOL)); - -> SELECT * FROM bool; -~~~ -~~~ -+-------+------+-------+ -| a | b | c | -+-------+------+-------+ -| 12345 | true | false | -+-------+------+-------+ -~~~ - -## Supported Casting & Conversion - -`BOOL` values can be [cast](data-types.html#data-type-conversions-casts) to any of the following data types: - -Type | Details ------|-------- -`INT` | Converts `true` to `1`, `false` to `0` -`DECIMAL` | Converts `true` to `1`, `false` to `0` -`FLOAT` | Converts `true` to `1`, `false` to `0` -`STRING` | –– - -## See Also - -[Data Types](data-types.html) diff --git a/src/current/v2.0/build-a-c++-app-with-cockroachdb.md b/src/current/v2.0/build-a-c++-app-with-cockroachdb.md deleted file mode 100644 index 9bcc68f09a8..00000000000 --- a/src/current/v2.0/build-a-c++-app-with-cockroachdb.md +++ /dev/null @@ -1,72 +0,0 @@ ---- -title: Build a C++ App with CockroachDB -summary: Learn how to use CockroachDB from a simple C++ application with a low-level client driver. -toc: true -twitter: false ---- - -This tutorial shows you how to build a simple C++ application with CockroachDB using a PostgreSQL-compatible driver. - -We have tested the [C++ libpqxx driver](https://github.com/jtv/libpqxx) enough to claim **beta-level** support, so that driver is featured here. If you encounter problems, please [open an issue](https://github.com/cockroachdb/cockroach/issues/new) with details to help us make progress toward full support. - - -## Before You Begin - -Make sure you have already [installed CockroachDB](install-cockroachdb.html). - -## Step 1. Install the libpqxx driver - -Install the C++ libpqxx driver as described in the [official documentation](https://github.com/jtv/libpqxx). - -{% include {{ page.version.version }}/app/common-steps.md %} - -## Step 5. Run the C++ code - -Now that you have a database and a user, you'll run code to create a table and insert some rows, and then you'll run code to read and update values as an atomic [transaction](transactions.html). - -### Basic Statements - -First, use the following code to connect as the `maxroach` user and execute some basic SQL statements, creating a table, inserting rows, and reading and printing the rows. - -Download the basic-sample.cpp file, or create the file yourself and copy the code into it. - -{% include copy-clipboard.html %} -~~~ cpp -{% include {{ page.version.version }}/app/basic-sample.cpp %} -~~~ - -### Transaction (with retry logic) - -Next, use the following code to again connect as the `maxroach` user but this time execute a batch of statements as an atomic transaction to transfer funds from one account to another, where all included statements are either committed or aborted. - -{{site.data.alerts.callout_info}}With the default `SERIALIZABLE` isolation level, CockroachDB may require the client to retry a transaction in case of read/write contention.
CockroachDB provides a generic retry function that runs inside a transaction and retries it as needed. You can copy and paste the retry function from here into your code.{{site.data.alerts.end}} - -Download the txn-sample.cpp file, or create the file yourself and copy the code into it. - -{% include copy-clipboard.html %} -~~~ cpp -{% include {{ page.version.version }}/app/txn-sample.cpp %} -~~~ - -After running the code, use the [built-in SQL client](use-the-built-in-sql-client.html) to verify that funds were transferred from one account to another: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure -e 'SELECT id, balance FROM accounts' --database=bank -~~~ - -~~~ -+----+---------+ -| id | balance | -+----+---------+ -| 1 | 900 | -| 2 | 350 | -+----+---------+ -(2 rows) -~~~ - -## What's Next? - -Read more about using the [C++ libpqxx driver](https://github.com/jtv/libpqxx). - -{% include {{ page.version.version }}/app/see-also-links.md %} diff --git a/src/current/v2.0/build-a-clojure-app-with-cockroachdb.md b/src/current/v2.0/build-a-clojure-app-with-cockroachdb.md deleted file mode 100644 index c0a4406988e..00000000000 --- a/src/current/v2.0/build-a-clojure-app-with-cockroachdb.md +++ /dev/null @@ -1,111 +0,0 @@ ---- -title: Build a Clojure App with CockroachDB -summary: Learn how to use CockroachDB from a simple Clojure application with a low-level client driver. -toc: true -twitter: false ---- - -This tutorial shows you how to build a simple Clojure application with CockroachDB using [leiningen](https://leiningen.org/) and a PostgreSQL-compatible driver. - -We have tested the [Clojure java.jdbc driver](https://clojure-doc.org/articles/ecosystem/java_jdbc/home/) in conjunction with the [PostgreSQL JDBC driver](https://jdbc.postgresql.org/) enough to claim **beta-level** support, so that combination is featured here. If you encounter problems, please [open an issue](https://github.com/cockroachdb/cockroach/issues/new) with details to help us make progress toward full support. - - -## Before You Begin - -Make sure you have already [installed CockroachDB](install-cockroachdb.html). - -## Step 1. Install `leiningen` - -Install the Clojure `lein` utility as described in its [official documentation](https://leiningen.org/). - -{% include {{ page.version.version }}/app/common-steps.md %} - -## Step 5. Create a table in the new database - -As the `maxroach` user, use the [built-in SQL client](use-the-built-in-sql-client.html) to create an `accounts` table in the new database. - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure \ ---database=bank \ ---user=maxroach \ --e 'CREATE TABLE accounts (id INT PRIMARY KEY, balance INT)' -~~~ - -## Step 6. Run the Clojure code - -Now that you have a database, a user, and a table, you'll run code to insert some rows, and then you'll run code to read and update values as an atomic [transaction](transactions.html). - -### Create a basic Clojure/JDBC project - -1. Create a new directory `myapp`. -2. Create a file `myapp/project.clj` and populate it with the following code, or download it directly. Be sure to place this file at the root of your `myapp` project directory. - - {% include copy-clipboard.html %} - ~~~ clojure - {% include {{ page.version.version }}/app/project.clj %} - ~~~ - -3. Create a file `myapp/src/test/util.clj` and populate it with the code from this file. Be sure to place the file in the subdirectory `src/test` in your project.
- -### Basic Statements - -First, use the following code to connect as the `maxroach` user and execute some basic SQL statements, inserting rows and reading and printing the rows. - -Create a file `myapp/src/test/test.clj` and copy the code below into it, or download it directly. If you download the file, be sure to save it as `test.clj` in the subdirectory `src/test` in your project. - -{% include copy-clipboard.html %} -~~~ clojure -{% include {{ page.version.version }}/app/basic-sample.clj %} -~~~ - -Run with: - -{% include copy-clipboard.html %} -~~~ shell -lein run -~~~ - -### Transaction (with retry logic) - -Next, use the following code to again connect as the `maxroach` user but this time execute a batch of statements as an atomic transaction to transfer funds from one account to another, where all included statements are either committed or aborted. - -Copy the code below to `myapp/src/test/test.clj` or -download it directly. Again, preserve the file name `test.clj`. - -{{site.data.alerts.callout_info}}With the default `SERIALIZABLE` isolation level, CockroachDB may require the client to retry a transaction in case of read/write contention. CockroachDB provides a generic retry function that runs inside a transaction and retries it as needed. You can copy and paste the retry function from here into your code.{{site.data.alerts.end}} - -{% include copy-clipboard.html %} -~~~ clojure -{% include {{ page.version.version }}/app/txn-sample.clj %} -~~~ - -Run with: - -{% include copy-clipboard.html %} -~~~ shell -lein run -~~~ - -After running the code, use the [built-in SQL client](use-the-built-in-sql-client.html) to verify that funds were transferred from one account to another: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure -e 'SELECT id, balance FROM accounts' --database=bank -~~~ - -~~~ -+----+---------+ -| id | balance | -+----+---------+ -| 1 | 900 | -| 2 | 350 | -+----+---------+ -(2 rows) -~~~ - -## What's Next? - -Read more about using the [Clojure java.jdbc driver](https://clojure-doc.org/articles/ecosystem/java_jdbc/home/). - -{% include {{ page.version.version }}/app/see-also-links.md %} diff --git a/src/current/v2.0/build-a-csharp-app-with-cockroachdb.md b/src/current/v2.0/build-a-csharp-app-with-cockroachdb.md deleted file mode 100644 index 48f4b825ceb..00000000000 --- a/src/current/v2.0/build-a-csharp-app-with-cockroachdb.md +++ /dev/null @@ -1,155 +0,0 @@ ---- -title: Build a C# (.NET) App with CockroachDB -summary: Learn how to use CockroachDB from a simple C# (.NET) application with a low-level client driver. -toc: true -twitter: true ---- - -This tutorial shows you how to build a simple C# (.NET) application with CockroachDB using a PostgreSQL-compatible driver. - -We have tested the [.NET Npgsql driver](http://www.npgsql.org/) enough to claim **beta-level** support, so that driver is featured here. If you encounter problems, please [open an issue](https://github.com/cockroachdb/cockroach/issues/new) with details to help us make progress toward full support. - - -## Before You Begin - -Make sure you have already [installed CockroachDB](install-cockroachdb.html) and the .NET SDK for your OS. - -## Step 1. Create a .NET project - -{% include copy-clipboard.html %} -~~~ shell -$ dotnet new console -o cockroachdb-test-app -~~~ - -{% include copy-clipboard.html %} -~~~ shell -$ cd cockroachdb-test-app -~~~ - -The `dotnet` command creates a new app of type `console`.
The `-o` parameter creates a directory named `cockroachdb-test-app` where your app will be stored and populates it with the required files. The `cd cockroachdb-test-app` command puts you into the newly created app directory. - -## Step 2. Install the Npgsql driver - -Install the latest version of the [Npgsql driver](https://www.nuget.org/packages/Npgsql/) into the .NET project using the built-in NuGet package manager: - -{% include copy-clipboard.html %} -~~~ shell -$ dotnet add package Npgsql -~~~ - -## Step 3. Start a single-node cluster - -For the purpose of this tutorial, you need only one CockroachDB node running in insecure mode: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach start \ ---insecure \ ---store=hello-1 \ ---host=localhost -~~~ - -## Step 4. Create a user - -In a new terminal, as the `root` user, use the [`cockroach user`](create-and-manage-users.html) command to create a new user, `maxroach`. - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach user set maxroach --insecure -~~~ - -## Step 5. Create a database and grant privileges - -As the `root` user, use the [built-in SQL client](use-the-built-in-sql-client.html) to create a `bank` database. - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure -e 'CREATE DATABASE bank' -~~~ - -Then [grant privileges](grant.html) to the `maxroach` user. - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure -e 'GRANT ALL ON DATABASE bank TO maxroach' -~~~ - -## Step 6. Run the C# code - -Now that you have a database and a user, you'll run code to create a table and insert some rows, and then you'll run code to read and update values as an atomic [transaction](transactions.html). - -### Basic Statements - -Replace the contents of `cockroachdb-test-app/Program.cs` with the following code: - -{% include copy-clipboard.html %} -~~~ csharp -{% include {{ page.version.version }}/app/basic-sample.cs %} -~~~ - -Then run the code to connect as the `maxroach` user and execute some basic SQL statements, creating a table, inserting rows, and reading and printing the rows: - -{% include copy-clipboard.html %} -~~~ shell -$ dotnet run -~~~ - -The output should be: - -~~~ -Initial balances: - account 1: 1000 - account 2: 250 -~~~ - -### Transaction (with retry logic) - -Open `cockroachdb-test-app/Program.cs` again and replace the contents with the following code: - -{% include copy-clipboard.html %} -~~~ csharp -{% include {{ page.version.version }}/app/txn-sample.cs %} -~~~ - -Then run the code to again connect as the `maxroach` user but this time execute a batch of statements as an atomic transaction to transfer funds from one account to another, where all included statements are either committed or aborted: - -{% include copy-clipboard.html %} -~~~ shell -$ dotnet run -~~~ - -{{site.data.alerts.callout_info}}With the default `SERIALIZABLE` isolation level, CockroachDB may require the client to retry a transaction in case of read/write contention. CockroachDB provides a generic retry function that runs inside a transaction and retries it as needed.
You can copy and paste the retry function from here into your code.{{site.data.alerts.end}} - -The output should be: - -~~~ -Initial balances: - account 1: 1000 - account 2: 250 -Final balances: - account 1: 900 - account 2: 350 -~~~ - -To verify that funds were transferred from one account to another, use the [built-in SQL client](use-the-built-in-sql-client.html): - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure -e 'SELECT id, balance FROM accounts' --database=bank -~~~ - -~~~ -+----+---------+ -| id | balance | -+----+---------+ -| 1 | 900 | -| 2 | 350 | -+----+---------+ -(2 rows) -~~~ - -## What's Next? - -Read more about using the [.NET Npgsql driver](http://www.npgsql.org/). - -{% include {{ page.version.version }}/app/see-also-links.md %} diff --git a/src/current/v2.0/build-a-go-app-with-cockroachdb-gorm.md b/src/current/v2.0/build-a-go-app-with-cockroachdb-gorm.md deleted file mode 100644 index f6452a3b2d3..00000000000 --- a/src/current/v2.0/build-a-go-app-with-cockroachdb-gorm.md +++ /dev/null @@ -1,164 +0,0 @@ ---- -title: Build a Go App with CockroachDB -summary: Learn how to use CockroachDB from a simple Go application with the GORM ORM. -toc: true -twitter: false ---- - - - -This tutorial shows you how to build a simple Go application with CockroachDB using a PostgreSQL-compatible driver or ORM. - -We have tested the [Go pq driver](https://godoc.org/github.com/lib/pq) and the [GORM ORM](http://gorm.io) enough to claim **beta-level** support, so those are featured here. If you encounter problems, please [open an issue](https://github.com/cockroachdb/cockroach/issues/new) with details to help us make progress toward full support. - -{{site.data.alerts.callout_success}} -For a more realistic use of GORM with CockroachDB, see our [`examples-orms`](https://github.com/cockroachdb/examples-orms) repository. -{{site.data.alerts.end}} - -## Before you begin - -{% include {{page.version.version}}/app/before-you-begin.md %} - -## Step 1. Install the GORM ORM - -To install [GORM](http://gorm.io), run the following commands: - -{% include copy-clipboard.html %} -~~~ shell -$ go get -u github.com/lib/pq # dependency -~~~ - -{% include copy-clipboard.html %} -~~~ shell -$ go get -u github.com/jinzhu/gorm -~~~ - -
      - -## Step 2. Create the `maxroach` user and `bank` database - -{% include {{page.version.version}}/app/create-maxroach-user-and-bank-database.md %} - -## Step 3. Generate a certificate for the `maxroach` user - -Create a certificate and key for the `maxroach` user by running the following command. The code samples will run as this user. - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach cert create-client maxroach --certs-dir=certs --ca-key=my-safe-directory/ca.key -~~~ - -## Step 4. Run the Go code - -The following code uses the [GORM](http://gorm.io) ORM to map Go-specific objects to SQL operations. Specifically, `db.AutoMigrate(&Account{})` creates an `accounts` table based on the Account model, `db.Create(&Account{})` inserts rows into the table, and `db.Find(&accounts)` selects from the table so that balances can be printed. - -Copy the code or -download it directly. - -{% include copy-clipboard.html %} -~~~ go -{% include {{ page.version.version }}/app/gorm-basic-sample.go %} -~~~ - -Then run the code: - -{% include copy-clipboard.html %} -~~~ shell -$ go run gorm-basic-sample.go -~~~ - -The output should be: - -~~~ -Initial balances: -1 1000 -2 250 -~~~ - -To verify that the table and rows were created successfully, start the [built-in SQL client](use-the-built-in-sql-client.html): - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --certs-dir=certs -e 'SELECT id, balance FROM accounts' --database=bank -~~~ - -~~~ -+----+---------+ -| id | balance | -+----+---------+ -| 1 | 1000 | -| 2 | 250 | -+----+---------+ -(2 rows) -~~~ - -
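The `gorm-basic-sample.go` file above is pulled in through an include, so its contents are not visible on this page. As a rough sketch only, and not the exact sample file, the pattern described in Step 4 looks like the following in code. The connection string is an assumption; match it to your cluster, and for a secure cluster add the certificate parameters from Step 3:

~~~ go
package main

import (
	"fmt"
	"log"

	"github.com/jinzhu/gorm"
	_ "github.com/jinzhu/gorm/dialects/postgres"
)

// Account is the model GORM maps to the "accounts" table.
type Account struct {
	ID      int `gorm:"primary_key"`
	Balance int
}

func main() {
	// Assumed connection string for an insecure local node.
	db, err := gorm.Open("postgres",
		"postgresql://maxroach@localhost:26257/bank?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Create the "accounts" table from the Account model.
	db.AutoMigrate(&Account{})

	// Insert two rows.
	db.Create(&Account{ID: 1, Balance: 1000})
	db.Create(&Account{ID: 2, Balance: 250})

	// Read the rows back and print the balances.
	var accounts []Account
	db.Find(&accounts)
	fmt.Println("Initial balances:")
	for _, account := range accounts {
		fmt.Println(account.ID, account.Balance)
	}
}
~~~

The division of labor is the point of the sketch: GORM derives the schema from the `Account` struct, so the application never hand-writes `CREATE TABLE`.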
      - -
      - -## Step 2. Create the `maxroach` user and `bank` database - -{% include {{page.version.version}}/app/insecure/create-maxroach-user-and-bank-database.md %} - -## Step 3. Run the Go code - -The following code uses the [GORM](http://gorm.io) ORM to map Go-specific objects to SQL operations. Specifically, `db.AutoMigrate(&Account{})` creates an `accounts` table based on the Account model, `db.Create(&Account{})` inserts rows into the table, and `db.Find(&accounts)` selects from the table so that balances can be printed. - -Copy the code or -download it directly. - -{% include copy-clipboard.html %} -~~~ go -{% include {{ page.version.version }}/app/insecure/gorm-basic-sample.go %} -~~~ - -Then run the code: - -{% include copy-clipboard.html %} -~~~ shell -$ go run gorm-basic-sample.go -~~~ - -The output should be: - -~~~ -Initial balances: -1 1000 -2 250 -~~~ - -To verify that the table and rows were created successfully, start the [built-in SQL client](use-the-built-in-sql-client.html): - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure -e 'SELECT id, balance FROM accounts' --database=bank -~~~ - -~~~ -+----+---------+ -| id | balance | -+----+---------+ -| 1 | 1000 | -| 2 | 250 | -+----+---------+ -(2 rows) -~~~ - -
      - -## What's next? - -Read more about using the [GORM ORM](http://gorm.io), or check out a more realistic implementation of GORM with CockroachDB in our [`examples-orms`](https://github.com/cockroachdb/examples-orms) repository. - -{% include {{ page.version.version }}/app/see-also-links.md %} diff --git a/src/current/v2.0/build-a-go-app-with-cockroachdb.md b/src/current/v2.0/build-a-go-app-with-cockroachdb.md deleted file mode 100644 index 47faf4089a1..00000000000 --- a/src/current/v2.0/build-a-go-app-with-cockroachdb.md +++ /dev/null @@ -1,227 +0,0 @@ ---- -title: Build a Go App with CockroachDB -summary: Learn how to use CockroachDB from a simple Go application with the Go pq driver. -toc: true -twitter: false ---- - - - -This tutorial shows you how to build a simple Go application with CockroachDB using a PostgreSQL-compatible driver or ORM. - -We have tested the [Go pq driver](https://godoc.org/github.com/lib/pq) and the [GORM ORM](http://gorm.io) enough to claim **beta-level** support, so those are featured here. If you encounter problems, please [open an issue](https://github.com/cockroachdb/cockroach/issues/new) with details to help us make progress toward full support. - -## Before you begin - -{% include {{page.version.version}}/app/before-you-begin.md %} - -## Step 1. Install the Go pq driver - -To install the [Go pq driver](https://godoc.org/github.com/lib/pq), run the following command: - -{% include copy-clipboard.html %} -~~~ shell -$ go get -u github.com/lib/pq -~~~ - -
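Before continuing, you can optionally confirm that the driver installed correctly with a minimal connectivity check. This sketch is not part of the tutorial's sample files; its connection string assumes an insecure local node on the default port (26257), so a secure cluster needs the certificate parameters set up in the following steps:

~~~ go
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/lib/pq" // registers the "postgres" driver with database/sql
)

func main() {
	// Assumed connection string: insecure local node on the default port.
	db, err := sql.Open("postgres", "postgresql://root@localhost:26257?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// sql.Open does not actually connect; Ping forces a round trip to the node.
	if err := db.Ping(); err != nil {
		log.Fatal(err)
	}
	fmt.Println("driver installed and node reachable")
}
~~~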
      - -## Step 2. Create the `maxroach` user and `bank` database - -{% include {{page.version.version}}/app/create-maxroach-user-and-bank-database.md %} - -## Step 3. Generate a certificate for the `maxroach` user - -Create a certificate and key for the `maxroach` user by running the following command. The code samples will run as this user. - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach cert create-client maxroach --certs-dir=certs --ca-key=my-safe-directory/ca.key -~~~ - -## Step 4. Run the Go code - -Now that you have a database and a user, you'll run code to create a table and insert some rows, and then you'll run code to read and update values as an atomic [transaction](transactions.html). - -### Basic statements - -First, use the following code to connect as the `maxroach` user and execute some basic SQL statements, creating a table, inserting rows, and reading and printing the rows. - -Download the basic-sample.go file, or create the file yourself and copy the code into it. - -{% include copy-clipboard.html %} -~~~ go -{% include {{ page.version.version }}/app/basic-sample.go %} -~~~ - -Then run the code: - -{% include copy-clipboard.html %} -~~~ shell -$ go run basic-sample.go -~~~ - -The output should be: - -~~~ -Initial balances: -1 1000 -2 250 -~~~ - -### Transaction (with retry logic) - -Next, use the following code to again connect as the `maxroach` user but this time execute a batch of statements as an atomic transaction to transfer funds from one account to another, where all included statements are either committed or aborted. - -Download the txn-sample.go file, or create the file yourself and copy the code into it. - -{% include copy-clipboard.html %} -~~~ go -{% include {{ page.version.version }}/app/txn-sample.go %} -~~~ - -With the default `SERIALIZABLE` isolation level, CockroachDB may require the [client to retry a transaction](transactions.html#transaction-retries) in case of read/write contention. CockroachDB provides a generic retry function that runs inside a transaction and retries it as needed. For Go, the CockroachDB retry function is in the `crdb` package of the CockroachDB Go client. - -To install the [CockroachDB Go client](https://github.com/cockroachdb/cockroach-go), run the following command: - -{% include copy-clipboard.html %} -~~~ shell -$ go get -d github.com/cockroachdb/cockroach-go -~~~ - -Then run the code: - -{% include copy-clipboard.html %} -~~~ shell -$ go run txn-sample.go -~~~ - -The output should be: - -~~~ shell -Success -~~~ - -To verify that funds were transferred from one account to another, use the [built-in SQL client](use-the-built-in-sql-client.html): - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --certs-dir=certs -e 'SELECT id, balance FROM accounts' --database=bank -~~~ - -~~~ -+----+---------+ -| id | balance | -+----+---------+ -| 1 | 900 | -| 2 | 350 | -+----+---------+ -(2 rows) -~~~ - -
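The `txn-sample.go` file is pulled in through an include above, so it is not shown inline. As a rough sketch of its shape, and not the exact file, the `crdb` retry wrapper is used like this. The connection string is an assumption, and note that the `ExecuteTx` signature has varied across `cockroach-go` versions (older releases omit the `context.Context` and `*sql.TxOptions` arguments):

~~~ go
package main

import (
	"context"
	"database/sql"
	"fmt"
	"log"

	"github.com/cockroachdb/cockroach-go/crdb"
	_ "github.com/lib/pq"
)

func transferFunds(tx *sql.Tx, from int, to int, amount int) error {
	// Both updates run inside the transaction passed in by ExecuteTx.
	if _, err := tx.Exec(
		"UPDATE accounts SET balance = balance - $1 WHERE id = $2", amount, from); err != nil {
		return err
	}
	_, err := tx.Exec(
		"UPDATE accounts SET balance = balance + $1 WHERE id = $2", amount, to)
	return err
}

func main() {
	// Assumed connection string; adjust the certificate paths to your cluster.
	db, err := sql.Open("postgres",
		"postgresql://maxroach@localhost:26257/bank?sslmode=verify-full&sslrootcert=certs/ca.crt&sslcert=certs/client.maxroach.crt&sslkey=certs/client.maxroach.key")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// crdb.ExecuteTx opens a transaction, runs the closure, and transparently
	// retries it when CockroachDB reports a retryable error (SQLSTATE 40001).
	err = crdb.ExecuteTx(context.Background(), db, nil, func(tx *sql.Tx) error {
		return transferFunds(tx, 1 /* from */, 2 /* to */, 100 /* amount */)
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("Success")
}
~~~

Everything inside the closure may run more than once, so it should touch the database only through `tx` and avoid side effects that cannot safely be repeated.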
      - -
      - -## Step 2. Create the `maxroach` user and `bank` database - -{% include {{page.version.version}}/app/insecure/create-maxroach-user-and-bank-database.md %} - -## Step 3. Run the Go code - -Now that you have a database and a user, you'll run code to create a table and insert some rows, and then you'll run code to read and update values as an atomic [transaction](transactions.html). - -### Basic statements - -First, use the following code to connect as the `maxroach` user and execute some basic SQL statements, creating a table, inserting rows, and reading and printing the rows. - -Download the basic-sample.go file, or create the file yourself and copy the code into it. - -{% include copy-clipboard.html %} -~~~ go -{% include {{ page.version.version }}/app/insecure/basic-sample.go %} -~~~ - -Then run the code: - -{% include copy-clipboard.html %} -~~~ shell -$ go run basic-sample.go -~~~ - -The output should be: - -~~~ -Initial balances: -1 1000 -2 250 -~~~ - -### Transaction (with retry logic) - -Next, use the following code to again connect as the `maxroach` user but this time execute a batch of statements as an atomic transaction to transfer funds from one account to another, where all included statements are either committed or aborted. - -Download the txn-sample.go file, or create the file yourself and copy the code into it. - -{% include copy-clipboard.html %} -~~~ go -{% include {{ page.version.version }}/app/insecure/txn-sample.go %} -~~~ - -With the default `SERIALIZABLE` isolation level, CockroachDB may require the [client to retry a transaction](transactions.html#transaction-retries) in case of read/write contention. CockroachDB provides a generic retry function that runs inside a transaction and retries it as needed. For Go, the CockroachDB retry function is in the `crdb` package of the CockroachDB Go client. Clone the library into your `$GOPATH` as follows: - -{% include copy-clipboard.html %} -~~~ shell -$ mkdir -p $GOPATH/src/github.com/cockroachdb -~~~ - -{% include copy-clipboard.html %} -~~~ shell -$ cd $GOPATH/src/github.com/cockroachdb -~~~ - -{% include copy-clipboard.html %} -~~~ shell -$ git clone git@github.com:cockroachdb/cockroach-go.git -~~~ - -Then run the code: - -{% include copy-clipboard.html %} -~~~ shell -$ go run txn-sample.go -~~~ - -The output should be: - -~~~ shell -Success -~~~ - -To verify that funds were transferred from one account to another, use the [built-in SQL client](use-the-built-in-sql-client.html): - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure -e 'SELECT id, balance FROM accounts' --database=bank -~~~ - -~~~ -+----+---------+ -| id | balance | -+----+---------+ -| 1 | 900 | -| 2 | 350 | -+----+---------+ -(2 rows) -~~~ - -
      - -## What's next? - -Read more about using the [Go pq driver](https://godoc.org/github.com/lib/pq). - -{% include {{ page.version.version }}/app/see-also-links.md %} diff --git a/src/current/v2.0/build-a-java-app-with-cockroachdb-hibernate.md b/src/current/v2.0/build-a-java-app-with-cockroachdb-hibernate.md deleted file mode 100644 index 1d209806429..00000000000 --- a/src/current/v2.0/build-a-java-app-with-cockroachdb-hibernate.md +++ /dev/null @@ -1,258 +0,0 @@ ---- -title: Build a Java App with CockroachDB -summary: Learn how to use CockroachDB from a simple Java application with the Hibernate ORM. -toc: true -twitter: false ---- - - - -This tutorial shows you how to build a simple Java application with CockroachDB using a PostgreSQL-compatible driver or ORM. - -We have tested the [Java JDBC driver](https://jdbc.postgresql.org/) and the [Hibernate ORM](http://hibernate.org/) enough to claim **beta-level** support, so those are featured here. If you encounter problems, please [open an issue](https://github.com/cockroachdb/cockroach/issues/new) with details to help us make progress toward full support. - -{{site.data.alerts.callout_success}} -For a more realistic use of Hibernate with CockroachDB, see our [`examples-orms`](https://github.com/cockroachdb/examples-orms) repository. -{{site.data.alerts.end}} - -## Before you begin - -{% include {{page.version.version}}/app/before-you-begin.md %} - -{{site.data.alerts.callout_danger}} -The examples on this page assume you are using a Java version <= 9. They do not work with Java 10. -{{site.data.alerts.end}} - -## Step 1. Install the Gradle build tool - -This tutorial uses the [Gradle build tool](https://gradle.org/) to get all dependencies for your application, including Hibernate. - -To install Gradle on Mac, run the following command: - -{% include copy-clipboard.html %} -~~~ shell -$ brew install gradle -~~~ - -To install Gradle on a Debian-based Linux distribution like Ubuntu: - -{% include copy-clipboard.html %} -~~~ shell -$ apt-get install gradle -~~~ - -To install Gradle on a Red Hat-based Linux distribution like Fedora: - -{% include copy-clipboard.html %} -~~~ shell -$ dnf install gradle -~~~ - -For other ways to install Gradle, see [its official documentation](https://gradle.org/install). - -
      - -## Step 2. Create the `maxroach` user and `bank` database - -{% include {{page.version.version}}/app/create-maxroach-user-and-bank-database.md %} - -## Step 3. Generate a certificate for the `maxroach` user - -Create a certificate and key for the `maxroach` user by running the following command. The code samples will run as this user. - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach cert create-client maxroach --certs-dir=certs --ca-key=my-safe-directory/ca.key -~~~ - -## Step 4. Convert the key file for use with Java - -The private key generated for user `maxroach` by CockroachDB is [PEM encoded](https://tools.ietf.org/html/rfc1421). To read the key in a Java application, you will need to convert it into [PKCS#8 format](https://tools.ietf.org/html/rfc5208), which is the standard key encoding format in Java. - -To convert the key to PKCS#8 format, run the following OpenSSL command on the `maxroach` user's key file in the directory where you stored your certificates (`certs` in this example): - -{% include copy-clipboard.html %} -~~~ shell -$ openssl pkcs8 -topk8 -inform PEM -outform DER -in client.maxroach.key -out client.maxroach.pk8 -nocrypt -~~~ - -## Step 5. Run the Java code - -Download and extract [hibernate-basic-sample.tgz](https://github.com/cockroachdb/docs/raw/master/_includes/v2.0/app/hibernate-basic-sample/hibernate-basic-sample.tgz), which contains a Java project that includes the following files: - -File | Description ------|------------ -[`Sample.java`](https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/v2.0/app/hibernate-basic-sample/Sample.java) | Uses [Hibernate](http://hibernate.org/orm/) to map Java object state to SQL operations. For more information, see [Sample.java](#sample-java). -[`hibernate.cfg.xml`](https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/v2.0/app/hibernate-basic-sample/hibernate.cfg.xml) | Specifies how to connect to the database and that the database schema will be deleted and recreated each time the app is run. For more information, see [hibernate.cfg.xml](#hibernate-cfg-xml). -[`build.gradle`](https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/v2.0/app/hibernate-basic-sample/build.gradle) | Used to build and run your app. For more information, see [build.gradle](#build-gradle). - -In the `hibernate-basic-sample` directory, build and run the application: - -{% include copy-clipboard.html %} -~~~ shell -$ gradle run -~~~ - -Toward the end of the output, you should see: - -~~~ -1 1000 -2 250 -~~~ - -To verify that the table and rows were created successfully, start the [built-in SQL client](use-the-built-in-sql-client.html): - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --certs-dir=certs --database=bank -~~~ - -To check the account balances, issue the following statement: - -{% include copy-clipboard.html %} -~~~ sql -> SELECT id, balance FROM accounts; -~~~ - -~~~ -+----+---------+ -| id | balance | -+----+---------+ -| 1 | 1000 | -| 2 | 250 | -+----+---------+ -(2 rows) -~~~ - -### Sample.java - -The Java code shown below uses the [Hibernate ORM](http://hibernate.org/orm/) to map Java object state to SQL operations. Specifically, this code: - -- Creates an `accounts` table in the database based on the `Account` class. - -- Inserts rows into the table using `session.save(new Account())`. - -- Defines the SQL query for selecting from the table so that balances can be printed using the `CriteriaQuery query` object. 
- -{% include copy-clipboard.html %} -~~~ java -{% include {{page.version.version}}/app/hibernate-basic-sample/Sample.java %} -~~~ - -### hibernate.cfg.xml - -The Hibernate config (in `hibernate.cfg.xml`, shown below) specifies how to connect to the database. Note the [connection URL](connection-parameters.html#connect-using-a-url) that turns on SSL and specifies the location of the security certificates. - -{% include copy-clipboard.html %} -~~~ xml -{% include {{page.version.version}}/app/hibernate-basic-sample/hibernate.cfg.xml %} -~~~ - -### build.gradle - -The Gradle build file specifies the dependencies (in this case the Postgres JDBC driver and Hibernate): - -{% include copy-clipboard.html %} -~~~ groovy -{% include {{page.version.version}}/app/hibernate-basic-sample/build.gradle %} -~~~ - -
      - -
      - -## Step 2. Create the `maxroach` user and `bank` database - -{% include {{page.version.version}}/app/insecure/create-maxroach-user-and-bank-database.md %} - -## Step 3. Run the Java code - -Download and extract [hibernate-basic-sample.tgz](https://github.com/cockroachdb/docs/raw/master/_includes/v2.0/app/insecure/hibernate-basic-sample/hibernate-basic-sample.tgz), which contains a Java project that includes the following files: - -File | Description ------|------------ -[`Sample.java`](https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/v2.0/app/insecure/hibernate-basic-sample/Sample.java) | Uses [Hibernate](http://hibernate.org/orm/) to map Java object state to SQL operations. For more information, see [Sample.java](#sample-java). -[`hibernate.cfg.xml`](https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/v2.0/app/insecure/hibernate-basic-sample/hibernate.cfg.xml) | Specifies how to connect to the database and that the database schema will be deleted and recreated each time the app is run. For more information, see [hibernate.cfg.xml](#hibernate-cfg-xml). -[`build.gradle`](https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/v2.0/app/insecure/hibernate-basic-sample/build.gradle) | Used to build and run your app. For more information, see [build.gradle](#build-gradle). - -In the `hibernate-basic-sample` directory, build and run the application: - -{% include copy-clipboard.html %} -~~~ shell -$ gradle run -~~~ - -Toward the end of the output, you should see: - -~~~ -1 1000 -2 250 -~~~ - -To verify that the table and rows were created successfully, start the [built-in SQL client](use-the-built-in-sql-client.html): - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure --database=bank -~~~ - -To check the account balances, issue the following statement: - -{% include copy-clipboard.html %} -~~~ sql -> SELECT id, balance FROM accounts; -~~~ - -~~~ -+----+---------+ -| id | balance | -+----+---------+ -| 1 | 1000 | -| 2 | 250 | -+----+---------+ -(2 rows) -~~~ - -### Sample.java - -The Java code shown below uses the [Hibernate ORM](http://hibernate.org/orm/) to map Java object state to SQL operations. Specifically, this code: - -- Creates an `accounts` table in the database based on the `Account` class. - -- Inserts rows into the table using `session.save(new Account())`. - -- Defines the SQL query for selecting from the table so that balances can be printed using the `CriteriaQuery query` object. - -{% include copy-clipboard.html %} -~~~ java -{% include {{page.version.version}}/app/insecure/hibernate-basic-sample/Sample.java %} -~~~ - -### hibernate.cfg.xml - -The Hibernate config (in `hibernate.cfg.xml`, shown below) specifies how to connect to the database. Note the [connection URL](connection-parameters.html#connect-using-a-url), which connects to the insecure cluster without SSL. - -{% include copy-clipboard.html %} -~~~ xml -{% include {{page.version.version}}/app/insecure/hibernate-basic-sample/hibernate.cfg.xml %} -~~~ - -### build.gradle - -The Gradle build file specifies the dependencies (in this case the Postgres JDBC driver and Hibernate): - -{% include copy-clipboard.html %} -~~~ groovy -{% include {{page.version.version}}/app/insecure/hibernate-basic-sample/build.gradle %} -~~~ - -
      - -## What's next? - -Read more about using the [Hibernate ORM](http://hibernate.org/orm/), or check out a more realistic implementation of Hibernate with CockroachDB in our [`examples-orms`](https://github.com/cockroachdb/examples-orms) repository. - -{% include {{page.version.version}}/app/see-also-links.md %} diff --git a/src/current/v2.0/build-a-java-app-with-cockroachdb.md b/src/current/v2.0/build-a-java-app-with-cockroachdb.md deleted file mode 100644 index cdf240d8942..00000000000 --- a/src/current/v2.0/build-a-java-app-with-cockroachdb.md +++ /dev/null @@ -1,264 +0,0 @@ ---- -title: Build a Java App with CockroachDB -summary: Learn how to use CockroachDB from a simple Java application with the JDBC driver. -toc: true -twitter: false ---- - - - -This tutorial shows you how to build a simple Java application with CockroachDB using a PostgreSQL-compatible driver or ORM. - -We have tested the [Java JDBC driver](https://jdbc.postgresql.org/) and the [Hibernate ORM](http://hibernate.org/) enough to claim **beta-level** support, so those are featured here. If you encounter problems, please [open an issue](https://github.com/cockroachdb/cockroach/issues/new) with details to help us make progress toward full support. - -## Before you begin - -{% include {{page.version.version}}/app/before-you-begin.md %} - -{{site.data.alerts.callout_danger}} -The examples on this page assume you are using a Java version <= 9. They do not work with Java 10. -{{site.data.alerts.end}} - -## Step 1. Install the Java JDBC driver - -Download and set up the Java JDBC driver as described in the [official documentation](https://jdbc.postgresql.org/documentation/setup/). - -
      - -## Step 2. Create the `maxroach` user and `bank` database - -{% include {{page.version.version}}/app/create-maxroach-user-and-bank-database.md %} - -## Step 3. Generate a certificate for the `maxroach` user - -Create a certificate and key for the `maxroach` user by running the following command. The code samples will run as this user. - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach cert create-client maxroach --certs-dir=certs --ca-key=my-safe-directory/ca.key -~~~ - -## Step 4. Convert the key file for use with Java - -The private key generated for user `maxroach` by CockroachDB is [PEM encoded](https://tools.ietf.org/html/rfc1421). To read the key in a Java application, you will need to convert it into [PKCS#8 format](https://tools.ietf.org/html/rfc5208), which is the standard key encoding format in Java. - -To convert the key to PKCS#8 format, run the following OpenSSL command on the `maxroach` user's key file in the directory where you stored your certificates: - -{% include copy-clipboard.html %} -~~~ shell -$ openssl pkcs8 -topk8 -inform PEM -outform DER -in client.maxroach.key -out client.maxroach.pk8 -nocrypt -~~~ - -## Step 5. Run the Java code - -Now that you have created a database and set up encryption keys, in this section you will: - -- [Create a table and insert some rows](#basic1) -- [Execute a batch of statements as a transaction](#txn1) - - - -### Basic example - -First, use the following code to connect as the `maxroach` user and execute some basic SQL statements: create a table, insert rows, and read and print the rows. - -To run it: - -1. Download [`BasicSample.java`](https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/v2.0/app/BasicSample.java), or create the file yourself and copy the code below. -2. Download [the PostgreSQL JDBC driver](https://jdbc.postgresql.org/download/). -3. Compile and run the code (adding the PostgreSQL JDBC driver to your classpath): - - {% include copy-clipboard.html %} - ~~~ shell - $ javac -classpath .:/path/to/postgresql.jar BasicSample.java - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ java -classpath .:/path/to/postgresql.jar BasicSample - ~~~ - - The output should be: - - ~~~ - Initial balances: - account 1: 1000 - account 2: 250 - ~~~ - -The contents of [`BasicSample.java`](https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/v2.0/app/BasicSample.java): - -{% include copy-clipboard.html %} -~~~ java -{% include {{page.version.version}}/app/BasicSample.java %} -~~~ - - - -### Transaction example (with retry logic) - -Next, use the following code to execute a batch of statements as a [transaction](transactions.html) to transfer funds from one account to another. - -To run it: - -1. Download TxnSample.java, or create the file yourself and copy the code below. Note the use of [`SQLException.getSQLState()`](https://docs.oracle.com/javase/tutorial/jdbc/basics/sqlexception.html) instead of `getErrorCode()`. -2. Compile and run the code (again adding the PostgreSQL JDBC driver to your classpath): - - {% include copy-clipboard.html %} - ~~~ shell - $ javac -classpath .:/path/to/postgresql.jar TxnSample.java - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ java -classpath .:/path/to/postgresql.jar TxnSample - ~~~ - - The output should be: - - ~~~ - account 1: 900 - account 2: 350 - ~~~ - -{{site.data.alerts.callout_info}} -With the default `SERIALIZABLE` isolation level, CockroachDB may require the client to retry a transaction in case of read/write contention. 
The code sample below shows how to implement retry logic. -{{site.data.alerts.end}} - -{% include copy-clipboard.html %} -~~~ java -{% include {{page.version.version}}/app/TxnSample.java %} -~~~ - -To verify that funds were transferred from one account to another, start the [built-in SQL client](use-the-built-in-sql-client.html): - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --certs-dir=certs --database=bank -~~~ - -To check the account balances, issue the following statement: - -{% include copy-clipboard.html %} -~~~ sql -> SELECT id, balance FROM accounts; -~~~ - -~~~ -+----+---------+ -| id | balance | -+----+---------+ -| 1 | 900 | -| 2 | 350 | -+----+---------+ -(2 rows) -~~~ - -
      - -
      - -## Step 2. Create the `maxroach` user and `bank` database - -{% include {{page.version.version}}/app/insecure/create-maxroach-user-and-bank-database.md %} - -## Step 3. Run the Java code - -Now that you have created a database, in this section you will: - -- [Create a table and insert some rows](#basic2) -- [Execute a batch of statements as a transaction](#txn2) - - - -### Basic example - -First, use the following code to connect as the `maxroach` user and execute some basic SQL statements, creating a table, inserting rows, and reading and printing the rows. - -To run it: - -1. Download [`BasicSample.java`](https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/v2.0/app/insecure/BasicSample.java), or create the file yourself and copy the code below. -2. Download [the PostgreSQL JDBC driver](https://jdbc.postgresql.org/download/). -3. Compile and run the code (adding the PostgreSQL JDBC driver to your classpath): - - {% include copy-clipboard.html %} - ~~~ shell - $ javac -classpath .:/path/to/postgresql.jar BasicSample.java - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ java -classpath .:/path/to/postgresql.jar BasicSample - ~~~ - -The contents of [`BasicSample.java`](https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/v2.0/app/insecure/BasicSample.java): - -{% include copy-clipboard.html %} -~~~ java -{% include {{page.version.version}}/app/insecure/BasicSample.java %} -~~~ - - - -### Transaction example (with retry logic) - -Next, use the following code to execute a batch of statements as a [transaction](transactions.html) to transfer funds from one account to another. - -To run it: - -1. Download TxnSample.java, or create the file yourself and copy the code below. Note the use of [`SQLException.getSQLState()`](https://docs.oracle.com/javase/tutorial/jdbc/basics/sqlexception.html) instead of `getErrorCode()`. -2. Compile and run the code (again adding the PostgreSQL JDBC driver to your classpath): - - {% include copy-clipboard.html %} - ~~~ shell - $ javac -classpath .:/path/to/postgresql.jar TxnSample.java - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ java -classpath .:/path/to/postgresql.jar TxnSample - ~~~ - -{{site.data.alerts.callout_info}} -With the default `SERIALIZABLE` isolation level, CockroachDB may require the client to retry a transaction in case of read/write contention. The code sample below shows how to implement retry logic. -{{site.data.alerts.end}} - -{% include copy-clipboard.html %} -~~~ java -{% include {{page.version.version}}/app/insecure/TxnSample.java %} -~~~ - -To verify that funds were transferred from one account to another, start the [built-in SQL client](use-the-built-in-sql-client.html): - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure --database=bank -~~~ - -To check the account balances, issue the following statement: - -{% include copy-clipboard.html %} -~~~ sql -> SELECT id, balance FROM accounts; -~~~ - -~~~ -+----+---------+ -| id | balance | -+----+---------+ -| 1 | 900 | -| 2 | 350 | -+----+---------+ -(2 rows) -~~~ - -
      - -## What's next? - -Read more about using the [Java JDBC driver](https://jdbc.postgresql.org/). - -{% include {{page.version.version}}/app/see-also-links.md %} diff --git a/src/current/v2.0/build-a-nodejs-app-with-cockroachdb-sequelize.md b/src/current/v2.0/build-a-nodejs-app-with-cockroachdb-sequelize.md deleted file mode 100644 index 17191e9abb3..00000000000 --- a/src/current/v2.0/build-a-nodejs-app-with-cockroachdb-sequelize.md +++ /dev/null @@ -1,166 +0,0 @@ ---- -title: Build a Node.js App with CockroachDB -summary: Learn how to use CockroachDB from a simple Node.js application with the Sequelize ORM. -toc: true -twitter: false ---- - - - -This tutorial shows you how to build a simple Node.js application with CockroachDB using a PostgreSQL-compatible driver or ORM. - -We have tested the [Node.js pg driver](https://www.npmjs.com/package/pg) and the [Sequelize ORM](https://sequelize.org/) enough to claim **beta-level** support, so those are featured here. If you encounter problems, please [open an issue](https://github.com/cockroachdb/cockroach/issues/new) with details to help us make progress toward full support. - -{{site.data.alerts.callout_success}} -For a more realistic use of Sequelize with CockroachDB, see our [`examples-orms`](https://github.com/cockroachdb/examples-orms) repository. -{{site.data.alerts.end}} - - -## Before you begin - -{% include {{page.version.version}}/app/before-you-begin.md %} - -## Step 1. Install the Sequelize ORM - -To install Sequelize, as well as a [CockroachDB Node.js package](https://github.com/cockroachdb/sequelize-cockroachdb) that accounts for some minor differences between CockroachDB and PostgreSQL, run the following command: - -{% include copy-clipboard.html %} -~~~ shell -$ npm install sequelize sequelize-cockroachdb -~~~ - -
      - -## Step 2. Create the `maxroach` user and `bank` database - -{% include {{page.version.version}}/app/create-maxroach-user-and-bank-database.md %} - -## Step 3. Generate a certificate for the `maxroach` user - -Create a certificate and key for the `maxroach` user by running the following command. The code samples will run as this user. - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach cert create-client maxroach --certs-dir=certs --ca-key=my-safe-directory/ca.key -~~~ - -## Step 4. Run the Node.js code - -The following code uses the [Sequelize](https://sequelize.org/) ORM to map Node.js-specific objects to SQL operations. Specifically, `Account.sync({force: true})` creates an `accounts` table based on the Account model (or drops and recreates the table if it already exists), `Account.bulkCreate([...])` inserts rows into the table, and `Account.findAll()` selects from the table so that balances can be printed. - -Copy the code or -download it directly. - -{% include copy-clipboard.html %} -~~~ js -{% include {{ page.version.version }}/app/sequelize-basic-sample.js %} -~~~ - -Then run the code: - -{% include copy-clipboard.html %} -~~~ shell -$ node sequelize-basic-sample.js -~~~ - -The output should be: - -~~~ shell -1 1000 -2 250 -~~~ - -To verify that the table and rows were created successfully, start the [built-in SQL client](use-the-built-in-sql-client.html): - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --certs-dir=certs -e 'SELECT id, balance FROM accounts' --database=bank -~~~ - -~~~ -+----+---------+ -| id | balance | -+----+---------+ -| 1 | 1000 | -| 2 | 250 | -+----+---------+ -(2 rows) -~~~ - -
      - - - -
      - -## Step 2. Create the `maxroach` user and `bank` database - -{% include {{page.version.version}}/app/insecure/create-maxroach-user-and-bank-database.md %} - -## Step 3. Run the Node.js code - -The following code uses the [Sequelize](https://sequelize.org/) ORM to map Node.js-specific objects to SQL operations. Specifically, `Account.sync({force: true})` creates an `accounts` table based on the Account model (or drops and recreates the table if it already exists), `Account.bulkCreate([...])` inserts rows into the table, and `Account.findAll()` selects from the table so that balances can be printed. - -Copy the code or -download it directly. - -{% include copy-clipboard.html %} -~~~ js -{% include {{ page.version.version }}/app/insecure/sequelize-basic-sample.js %} -~~~ - -Then run the code: - -{% include copy-clipboard.html %} -~~~ shell -$ node sequelize-basic-sample.js -~~~ - -The output should be: - -~~~ shell -1 1000 -2 250 -~~~ - -To verify that the table and rows were created successfully, you can again use the [built-in SQL client](use-the-built-in-sql-client.html): - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure -e 'SHOW TABLES' --database=bank -~~~ - -~~~ -+------------+ -| table_name | -+------------+ -| accounts | -+------------+ -(1 row) -~~~ - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure -e 'SELECT id, balance FROM accounts' --database=bank -~~~ - -~~~ -+----+---------+ -| id | balance | -+----+---------+ -| 1 | 1000 | -| 2 | 250 | -+----+---------+ -(2 rows) -~~~ - -
      - -## What's next? - -Read more about using the [Sequelize ORM](https://sequelize.org/), or check out a more realistic implementation of Sequelize with CockroachDB in our [`examples-orms`](https://github.com/cockroachdb/examples-orms) repository. - -{% include {{ page.version.version }}/app/see-also-links.md %} diff --git a/src/current/v2.0/build-a-nodejs-app-with-cockroachdb.md b/src/current/v2.0/build-a-nodejs-app-with-cockroachdb.md deleted file mode 100644 index 25815cdb64f..00000000000 --- a/src/current/v2.0/build-a-nodejs-app-with-cockroachdb.md +++ /dev/null @@ -1,235 +0,0 @@ ---- -title: Build a Node.js App with CockroachDB -summary: Learn how to use CockroachDB from a simple Node.js application with the Node.js pg driver. -toc: true -twitter: false ---- - - - -This tutorial shows you how to build a simple Node.js application with CockroachDB using a PostgreSQL-compatible driver or ORM. - -We have tested the [Node.js pg driver](https://www.npmjs.com/package/pg) and the [Sequelize ORM](https://sequelize.org/) enough to claim **beta-level** support, so those are featured here. If you encounter problems, please [open an issue](https://github.com/cockroachdb/cockroach/issues/new) with details to help us make progress toward full support. - - -## Before you begin - -{% include {{page.version.version}}/app/before-you-begin.md %} - -## Step 1. Install Node.js packages - -To let your application communicate with CockroachDB, install the [Node.js pg driver](https://www.npmjs.com/package/pg): - -{% include copy-clipboard.html %} -~~~ shell -$ npm install pg -~~~ - -The example app on this page also requires [`async`](https://www.npmjs.com/package/async): - -{% include copy-clipboard.html %} -~~~ shell -$ npm install async -~~~ - -
      - -## Step 2. Create the `maxroach` user and `bank` database - -{% include {{page.version.version}}/app/create-maxroach-user-and-bank-database.md %} - -## Step 3. Generate a certificate for the `maxroach` user - -Create a certificate and key for the `maxroach` user by running the following command. The code samples will run as this user. - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach cert create-client maxroach --certs-dir=certs --ca-key=my-safe-directory/ca.key -~~~ - -## Step 4. Run the Node.js code - -Now that you have a database and a user, you'll run code to create a table and insert some rows, and then you'll run code to read and update values as an atomic [transaction](transactions.html). - -### Basic statements - -First, use the following code to connect as the `maxroach` user and execute some basic SQL statements, creating a table, inserting rows, and reading and printing the rows. - -Download the [`basic-sample.js`](https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/v2.0/app/basic-sample.js) file, or create the file yourself and copy the code into it. - -{% include copy-clipboard.html %} -~~~ js -{% include {{page.version.version}}/app/basic-sample.js %} -~~~ - -Then run the code: - -{% include copy-clipboard.html %} -~~~ shell -$ node basic-sample.js -~~~ - -The output should be: - -~~~ -Initial balances: -{ id: '1', balance: '1000' } -{ id: '2', balance: '250' } -~~~ - -### Transaction (with retry logic) - -Next, use the following code to again connect as the `maxroach` user but this time execute a batch of statements as an atomic transaction to transfer funds from one account to another and then read the updated values, where all included statements are either committed or aborted. - -Download the [`txn-sample.js`](https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/v2.0/app/txn-sample.js) file, or create the file yourself and copy the code into it. - -{{site.data.alerts.callout_info}} -With the default `SERIALIZABLE` isolation level, CockroachDB may require the client to retry a transaction in case of read/write contention. The code sample below shows how to implement retry logic. -{{site.data.alerts.end}} - -{% include copy-clipboard.html %} -~~~ js -{% include {{page.version.version}}/app/txn-sample.js %} -~~~ - -Then run the code: - -{% include copy-clipboard.html %} -~~~ shell -$ node txn-sample.js -~~~ - -The output should be: - -~~~ -Balances after transfer: -{ id: '1', balance: '900' } -{ id: '2', balance: '350' } -~~~ - -To verify that funds were transferred from one account to another, start the [built-in SQL client](use-the-built-in-sql-client.html): - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --certs-dir=certs --database=bank -~~~ - -To check the account balances, issue the following statement: - -{% include copy-clipboard.html %} -~~~ sql -> SELECT id, balance FROM accounts; -~~~ - -~~~ -+----+---------+ -| id | balance | -+----+---------+ -| 1 | 900 | -| 2 | 350 | -+----+---------+ -(2 rows) -~~~ - -
      - -
      - -## Step 2. Create the `maxroach` user and `bank` database - -{% include {{page.version.version}}/app/insecure/create-maxroach-user-and-bank-database.md %} - -## Step 3. Run the Node.js code - -Now that you have a database and a user, you'll run code to create a table and insert some rows, and then you'll run code to read and update values as an atomic [transaction](transactions.html). - -### Basic statements - -First, use the following code to connect as the `maxroach` user and execute some basic SQL statements, creating a table, inserting rows, and reading and printing the rows. - -Download the [`basic-sample.js`](https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/v2.0/app/insecure/basic-sample.js) file, or create the file yourself and copy the code into it. - -{% include copy-clipboard.html %} -~~~ js -{% include {{page.version.version}}/app/insecure/basic-sample.js %} -~~~ - -Then run the code: - -{% include copy-clipboard.html %} -~~~ shell -$ node basic-sample.js -~~~ - -The output should be: - -~~~ -Initial balances: -{ id: '1', balance: '1000' } -{ id: '2', balance: '250' } -~~~ - -### Transaction (with retry logic) - -Next, use the following code to again connect as the `maxroach` user but this time execute a batch of statements as an atomic transaction to transfer funds from one account to another and then read the updated values, where all included statements are either committed or aborted. - -Download the [`txn-sample.js`](https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/v2.0/app/insecure/txn-sample.js) file, or create the file yourself and copy the code into it. - -{{site.data.alerts.callout_info}} -With the default `SERIALIZABLE` isolation level, CockroachDB may require the client to retry a transaction in case of read/write contention. The code sample below shows how to implement retry logic. -{{site.data.alerts.end}} - -{% include copy-clipboard.html %} -~~~ js -{% include {{page.version.version}}/app/insecure/txn-sample.js %} -~~~ - -Then run the code: - -{% include copy-clipboard.html %} -~~~ shell -$ node txn-sample.js -~~~ - -The output should be: - -~~~ -Balances after transfer: -{ id: '1', balance: '900' } -{ id: '2', balance: '350' } -~~~ - -To verify that funds were transferred from one account to another, start the [built-in SQL client](use-the-built-in-sql-client.html): - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure --database=bank -~~~ - -To check the account balances, issue the following statement: - -{% include copy-clipboard.html %} -~~~ sql -> SELECT id, balance FROM accounts; -~~~ - -~~~ -+----+---------+ -| id | balance | -+----+---------+ -| 1 | 900 | -| 2 | 350 | -+----+---------+ -(2 rows) -~~~ - -
-
-## What's next?
-
-Read more about using the [Node.js pg driver](https://www.npmjs.com/package/pg).
-
-{% include {{page.version.version}}/app/see-also-links.md %}
diff --git a/src/current/v2.0/build-a-php-app-with-cockroachdb.md b/src/current/v2.0/build-a-php-app-with-cockroachdb.md
deleted file mode 100644
index fe0bcacc31c..00000000000
--- a/src/current/v2.0/build-a-php-app-with-cockroachdb.md
+++ /dev/null
@@ -1,175 +0,0 @@
----
-title: Build a PHP App with CockroachDB
-summary: Learn how to use CockroachDB from a simple PHP application with a low-level client driver.
-toc: true
-twitter: false
----
-
-This tutorial shows you how to build a simple PHP application with CockroachDB using a PostgreSQL-compatible driver.
-
-We have tested the [php-pgsql driver](https://www.php.net/manual/en/book.pgsql.php) enough to claim **beta-level** support, so that driver is featured here. If you encounter problems, please [open an issue](https://github.com/cockroachdb/cockroach/issues/new) with details to help us make progress toward full support.
-
-## Before you begin
-
-{% include {{page.version.version}}/app/before-you-begin.md %}
-
-## Step 1. Install the php-pgsql driver
-
-Install the php-pgsql driver as described in the [official documentation](https://www.php.net/manual/en/book.pgsql.php).
-
      - -## Step 2. Create the `maxroach` user and `bank` database - -{% include {{page.version.version}}/app/create-maxroach-user-and-bank-database.md %} - -## Step 3. Generate a certificate for the `maxroach` user - -Create a certificate and key for the `maxroach` user by running the following command. The code samples will run as this user. - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach cert create-client maxroach --certs-dir=certs --ca-key=my-safe-directory/ca.key -~~~ - -## Step 4. Run the PHP code - -Now that you have a database and a user, you'll run code to create a table and insert some rows, and then you'll run code to read and update values as an atomic [transaction](transactions.html). - -### Basic statements - -First, use the following code to connect as the `maxroach` user and execute some basic SQL statements, inserting rows and reading and printing the rows. - -Download the basic-sample.php file, or create the file yourself and copy the code into it. - -{% include copy-clipboard.html %} -~~~ php -{% include {{ page.version.version }}/app/basic-sample.php %} -~~~ - -The output should be: - -~~~ shell -Account balances: -1: 1000 -2: 250 -~~~ - -### Transaction (with retry logic) - -Next, use the following code to again connect as the `maxroach` user but this time execute a batch of statements as an atomic transaction to transfer funds from one account to another, where all included statements are either committed or aborted. - -Download the txn-sample.php file, or create the file yourself and copy the code into it. - -{{site.data.alerts.callout_info}} -With the default `SERIALIZABLE` isolation level, CockroachDB may require the [client to retry a transaction](transactions.html#transaction-retries) in case of read/write contention. CockroachDB provides a generic **retry function** that runs inside a transaction and retries it as needed. You can copy and paste the retry function from here into your code. -{{site.data.alerts.end}} - -{% include copy-clipboard.html %} -~~~ php -{% include {{ page.version.version }}/app/txn-sample.php %} -~~~ - -The output should be: - -~~~ shell -Account balances after transfer: -1: 900 -2: 350 -~~~ - -To verify that funds were transferred from one account to another, use the [built-in SQL client](use-the-built-in-sql-client.html): - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --certs-dir=certs -e 'SELECT id, balance FROM accounts' --database=bank -~~~ - -~~~ -+----+---------+ -| id | balance | -+----+---------+ -| 1 | 900 | -| 2 | 350 | -+----+---------+ -(2 rows) -~~~ - -
      - -
      - -## Step 2. Create the `maxroach` user and `bank` database - -{% include {{page.version.version}}/app/insecure/create-maxroach-user-and-bank-database.md %} - -## Step 3. Run the PHP code - -Now that you have a database and a user, you'll run code to create a table and insert some rows, and then you'll run code to read and update values as an atomic [transaction](transactions.html). - -### Basic statements - -First, use the following code to connect as the `maxroach` user and execute some basic SQL statements, inserting rows and reading and printing the rows. - -Download the basic-sample.php file, or create the file yourself and copy the code into it. - -{% include copy-clipboard.html %} -~~~ php -{% include {{ page.version.version }}/app/insecure/basic-sample.php %} -~~~ - -The output should be: - -~~~ shell -Account balances: -1: 1000 -2: 250 -~~~ - -### Transaction (with retry logic) - -Next, use the following code to again connect as the `maxroach` user but this time execute a batch of statements as an atomic transaction to transfer funds from one account to another, where all included statements are either committed or aborted. - -Download the txn-sample.php file, or create the file yourself and copy the code into it. - -{{site.data.alerts.callout_info}} -With the default `SERIALIZABLE` isolation level, CockroachDB may require the [client to retry a transaction](transactions.html#transaction-retries) in case of read/write contention. CockroachDB provides a generic **retry function** that runs inside a transaction and retries it as needed. You can copy and paste the retry function from here into your code. -{{site.data.alerts.end}} - -{% include copy-clipboard.html %} -~~~ php -{% include {{ page.version.version }}/app/insecure/txn-sample.php %} -~~~ - -The output should be: - -~~~ shell -Account balances after transfer: -1: 900 -2: 350 -~~~ - -To verify that funds were transferred from one account to another, use the [built-in SQL client](use-the-built-in-sql-client.html): - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure -e 'SELECT id, balance FROM accounts' --database=bank -~~~ - -~~~ -+----+---------+ -| id | balance | -+----+---------+ -| 1 | 900 | -| 2 | 350 | -+----+---------+ -(2 rows) -~~~ - -
-
-## What's next?
-
-Read more about using the [php-pgsql driver](https://www.php.net/manual/en/book.pgsql.php).
-
-{% include {{ page.version.version }}/app/see-also-links.md %}
diff --git a/src/current/v2.0/build-a-python-app-with-cockroachdb-sqlalchemy.md b/src/current/v2.0/build-a-python-app-with-cockroachdb-sqlalchemy.md
deleted file mode 100644
index 21b0a20c09b..00000000000
--- a/src/current/v2.0/build-a-python-app-with-cockroachdb-sqlalchemy.md
+++ /dev/null
@@ -1,179 +0,0 @@
----
-title: Build a Python App with CockroachDB
-summary: Learn how to use CockroachDB from a simple Python application with the SQLAlchemy ORM.
-toc: true
-twitter: false
----
-
-
-
-This tutorial shows you how to build a simple Python application with CockroachDB using a PostgreSQL-compatible driver or ORM.
-
-We have tested the [Python psycopg2 driver](http://initd.org/psycopg/docs/) and the [SQLAlchemy ORM](https://docs.sqlalchemy.org/en/latest/) enough to claim **beta-level** support, so those are featured here. If you encounter problems, please [open an issue](https://github.com/cockroachdb/cockroach/issues/new) with details to help us make progress toward full support.
-
-{{site.data.alerts.callout_success}}
-For a more realistic use of SQLAlchemy with CockroachDB, see our [`examples-orms`](https://github.com/cockroachdb/examples-orms) repository.
-{{site.data.alerts.end}}
-
-## Before you begin
-
-{% include {{page.version.version}}/app/before-you-begin.md %}
-
-## Step 1. Install the SQLAlchemy ORM
-
-To install SQLAlchemy, as well as a [CockroachDB Python package](https://github.com/cockroachdb/sqlalchemy-cockroachdb) that accounts for some minor differences between CockroachDB and PostgreSQL, run the following command:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ pip install sqlalchemy sqlalchemy-cockroachdb psycopg2
-~~~
-
-{{site.data.alerts.callout_success}}
-You can substitute psycopg2 for other alternatives that include the psycopg Python package.
-{{site.data.alerts.end}}
-
-For other ways to install SQLAlchemy, see the [official documentation](http://docs.sqlalchemy.org/en/latest/intro.html#installation-guide).
-
-
-## Step 2. Create the `maxroach` user and `bank` database
-
-{% include {{page.version.version}}/app/create-maxroach-user-and-bank-database.md %}
-
-## Step 3. Generate a certificate for the `maxroach` user
-
-Create a certificate and key for the `maxroach` user by running the following command. The code samples will run as this user.
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach cert create-client maxroach --certs-dir=certs --ca-key=my-safe-directory/ca.key
-~~~
-
-## Step 4. Run the Python code
-
-The following code uses the [SQLAlchemy ORM](https://docs.sqlalchemy.org/en/latest/) to map Python-specific objects to SQL operations. Specifically, `Base.metadata.create_all(engine)` creates an `accounts` table based on the Account class, `session.add_all([Account(),...
-])` inserts rows into the table, and `session.query(Account)` selects from the table so that balances can be printed.
-
-{{site.data.alerts.callout_info}}
-The sqlalchemy-cockroachdb python package installed earlier is triggered by the cockroachdb:// prefix in the engine URL. Using postgres:// to connect to your cluster will not work.
-{{site.data.alerts.end}}
-
-Copy the code or
-download it directly.
-
-{% include copy-clipboard.html %}
-~~~ python
-{% include {{page.version.version}}/app/sqlalchemy-basic-sample.py %}
-~~~
-
-Then run the code:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ python sqlalchemy-basic-sample.py
-~~~
-
-The output should be:
-
-~~~ shell
-1 1000
-2 250
-~~~
-
-To verify that the table and rows were created successfully, start the [built-in SQL client](use-the-built-in-sql-client.html):
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql --certs-dir=certs --database=bank
-~~~
-
-Then, issue the following statement:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT id, balance FROM accounts;
-~~~
-
-~~~
-+----+---------+
-| id | balance |
-+----+---------+
-|  1 |    1000 |
-|  2 |     250 |
-+----+---------+
-(2 rows)
-~~~
-
      - -
      - -## Step 2. Create the `maxroach` user and `bank` database - -{% include {{page.version.version}}/app/insecure/create-maxroach-user-and-bank-database.md %} - -## Step 3. Run the Python code - -The following code uses the [SQLAlchemy ORM](https://docs.sqlalchemy.org/en/latest/) to map Python-specific objects to SQL operations. Specifically, `Base.metadata.create_all(engine)` creates an `accounts` table based on the Account class, `session.add_all([Account(),... -])` inserts rows into the table, and `session.query(Account)` selects from the table so that balances can be printed. - -{{site.data.alerts.callout_info}} -The sqlalchemy-cockroachdb python package installed earlier is triggered by the cockroachdb:// prefix in the engine URL. Using postgres:// to connect to your cluster will not work. -{{site.data.alerts.end}} - -Copy the code or -download it directly. - -{% include copy-clipboard.html %} -~~~ python -{% include {{page.version.version}}/app/insecure/sqlalchemy-basic-sample.py %} -~~~ - -Then run the code: - -{% include copy-clipboard.html %} -~~~ shell -$ python sqlalchemy-basic-sample.py -~~~ - -The output should be: - -~~~ shell -1 1000 -2 250 -~~~ - -To verify that the table and rows were created successfully, start the [built-in SQL client](use-the-built-in-sql-client.html): - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure --database=bank -~~~ - -Then, issue the following statement: - -{% include copy-clipboard.html %} -~~~ sql -> SELECT id, balance FROM accounts; -~~~ - -~~~ -+----+---------+ -| id | balance | -+----+---------+ -| 1 | 1000 | -| 2 | 250 | -+----+---------+ -(2 rows) -~~~ - -
-
-## What's next?
-
-Read more about using the [SQLAlchemy ORM](https://docs.sqlalchemy.org/en/latest/), or check out a more realistic implementation of SQLAlchemy with CockroachDB in our [`examples-orms`](https://github.com/cockroachdb/examples-orms) repository.
-
-{% include {{page.version.version}}/app/see-also-links.md %}
diff --git a/src/current/v2.0/build-a-python-app-with-cockroachdb.md b/src/current/v2.0/build-a-python-app-with-cockroachdb.md
deleted file mode 100644
index 493683440fa..00000000000
--- a/src/current/v2.0/build-a-python-app-with-cockroachdb.md
+++ /dev/null
@@ -1,231 +0,0 @@
----
-title: Build a Python App with CockroachDB
-summary: Learn how to use CockroachDB from a simple Python application with the psycopg2 driver.
-toc: true
-twitter: false
----
-
-
-
-This tutorial shows you how to build a simple Python application with CockroachDB using a PostgreSQL-compatible driver or ORM.
-
-We have tested the [Python psycopg2 driver](http://initd.org/psycopg/docs/) and the [SQLAlchemy ORM](https://docs.sqlalchemy.org/en/latest/) enough to claim **beta-level** support, so those are featured here. If you encounter problems, please [open an issue](https://github.com/cockroachdb/cockroach/issues/new) with details to help us make progress toward full support.
-
-## Before you begin
-
-{% include {{page.version.version}}/app/before-you-begin.md %}
-
-## Step 1. Install the psycopg2 driver
-
-To install the Python psycopg2 driver, run the following command:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ pip install psycopg2
-~~~
-
-For other ways to install psycopg2, see the [official documentation](http://initd.org/psycopg/docs/install.html).
-
      - -## Step 2. Create the `maxroach` user and `bank` database - -{% include {{page.version.version}}/app/create-maxroach-user-and-bank-database.md %} - -## Step 3. Generate a certificate for the `maxroach` user - -Create a certificate and key for the `maxroach` user by running the following command. The code samples will run as this user. - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach cert create-client maxroach --certs-dir=certs --ca-key=my-safe-directory/ca.key -~~~ - -## Step 4. Run the Python code - -Now that you have a database and a user, you'll run the code shown below to: - -- Create a table and insert some rows -- Read and update values as an atomic [transaction](transactions.html) - -### Basic statements - -First, use the following code to connect as the `maxroach` user and execute some basic SQL statements, creating a table, inserting rows, and reading and printing the rows. - -Download the basic-sample.py file, or create the file yourself and copy the code into it. - -{% include copy-clipboard.html %} -~~~ python -{% include {{page.version.version}}/app/basic-sample.py %} -~~~ - -Then run the code: - -{% include copy-clipboard.html %} -~~~ shell -$ python basic-sample.py -~~~ - -The output should be: - -~~~ -Initial balances: -['1', '1000'] -['2', '250'] -~~~ - -### Transaction (with retry logic) - -Next, use the following code to again connect as the `maxroach` user but this time execute a batch of statements as an atomic transaction to transfer funds from one account to another, where all included statements are either committed or aborted. - -Download the txn-sample.py file, or create the file yourself and copy the code into it. - -{{site.data.alerts.callout_info}}With the default SERIALIZABLE isolation level, CockroachDB may require the client to retry a transaction in case of read/write contention. CockroachDB provides a generic retry function that runs inside a transaction and retries it as needed. You can copy and paste the retry function from here into your code.{{site.data.alerts.end}} - -{% include copy-clipboard.html %} -~~~ python -{% include {{page.version.version}}/app/txn-sample.py %} -~~~ - -Then run the code: - -{% include copy-clipboard.html %} -~~~ shell -$ python txn-sample.py -~~~ - -The output should be: - -~~~ -Balances after transfer: -['1', '900'] -['2', '350'] -~~~ - -To verify that funds were transferred from one account to another, start the [built-in SQL client](use-the-built-in-sql-client.html): - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --certs-dir=certs --database=bank -~~~ - -To check the account balances, issue the following statement: - -{% include copy-clipboard.html %} -~~~ sql -> SELECT id, balance FROM accounts; -~~~ - -~~~ -+----+---------+ -| id | balance | -+----+---------+ -| 1 | 900 | -| 2 | 350 | -+----+---------+ -(2 rows) -~~~ - -
      - -
      - -## Step 2. Create the `maxroach` user and `bank` database - -{% include {{page.version.version}}/app/insecure/create-maxroach-user-and-bank-database.md %} - -## Step 3. Run the Python code - -Now that you have a database and a user, you'll run the code shown below to: - -- Create a table and insert some rows -- Read and update values as an atomic [transaction](transactions.html) - -### Basic statements - -First, use the following code to connect as the `maxroach` user and execute some basic SQL statements, creating a table, inserting rows, and reading and printing the rows. - -Download the basic-sample.py file, or create the file yourself and copy the code into it. - -{% include copy-clipboard.html %} -~~~ python -{% include {{page.version.version}}/app/insecure/basic-sample.py %} -~~~ - -Then run the code: - -{% include copy-clipboard.html %} -~~~ shell -$ python basic-sample.py -~~~ - -The output should be: - -~~~ -Initial balances: -['1', '1000'] -['2', '250'] -~~~ - -### Transaction (with retry logic) - -Next, use the following code to again connect as the `maxroach` user but this time execute a batch of statements as an atomic transaction to transfer funds from one account to another, where all included statements are either committed or aborted. - -Download the txn-sample.py file, or create the file yourself and copy the code into it. - -{{site.data.alerts.callout_info}}With the default SERIALIZABLE isolation level, CockroachDB may require the client to retry a transaction in case of read/write contention. CockroachDB provides a generic retry function that runs inside a transaction and retries it as needed. You can copy and paste the retry function from here into your code.{{site.data.alerts.end}} - -{% include copy-clipboard.html %} -~~~ python -{% include {{page.version.version}}/app/insecure/txn-sample.py %} -~~~ - -Then run the code: - -{% include copy-clipboard.html %} -~~~ shell -$ python txn-sample.py -~~~ - -The output should be: - -~~~ -Balances after transfer: -['1', '900'] -['2', '350'] -~~~ - -To verify that funds were transferred from one account to another, start the [built-in SQL client](use-the-built-in-sql-client.html): - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure --database=bank -~~~ - -To check the account balances, issue the following statement: - -{% include copy-clipboard.html %} -~~~ sql -> SELECT id, balance FROM accounts; -~~~ - -~~~ -+----+---------+ -| id | balance | -+----+---------+ -| 1 | 900 | -| 2 | 350 | -+----+---------+ -(2 rows) -~~~ - -
-
-## What's next?
-
-Read more about using the [Python psycopg2 driver](http://initd.org/psycopg/docs/).
-
-{% include {{page.version.version}}/app/see-also-links.md %}
diff --git a/src/current/v2.0/build-a-ruby-app-with-cockroachdb-activerecord.md b/src/current/v2.0/build-a-ruby-app-with-cockroachdb-activerecord.md
deleted file mode 100644
index 22144ad57f6..00000000000
--- a/src/current/v2.0/build-a-ruby-app-with-cockroachdb-activerecord.md
+++ /dev/null
@@ -1,171 +0,0 @@
----
-title: Build a Ruby App with CockroachDB
-summary: Learn how to use CockroachDB from a simple Ruby application with the ActiveRecord ORM.
-toc: true
-twitter: false
----
-
-
-
-This tutorial shows you how to build a simple Ruby application with CockroachDB using a PostgreSQL-compatible driver or ORM.
-
-We have tested the [Ruby pg driver](https://rubygems.org/gems/pg) and the [ActiveRecord ORM](http://guides.rubyonrails.org/active_record_basics.html) enough to claim **beta-level** support, so those are featured here. If you encounter problems, please [open an issue](https://github.com/cockroachdb/cockroach/issues/new) with details to help us make progress toward full support.
-
-{{site.data.alerts.callout_success}}
-For a more realistic use of ActiveRecord with CockroachDB, see our [`examples-orms`](https://github.com/cockroachdb/examples-orms) repository.
-{{site.data.alerts.end}}
-
-## Before you begin
-
-{% include {{page.version.version}}/app/before-you-begin.md %}
-
-## Step 1. Install the ActiveRecord ORM
-
-To install ActiveRecord as well as the [pg driver](https://rubygems.org/gems/pg) and a [CockroachDB Ruby package](https://github.com/cockroachdb/activerecord-cockroachdb-adapter) that accounts for some minor differences between CockroachDB and PostgreSQL, run the following command:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ gem install activerecord pg activerecord-cockroachdb-adapter
-~~~
-
-{{site.data.alerts.callout_info}}
-The exact command above will vary depending on the desired version of ActiveRecord. Specifically, version 4.2.x of ActiveRecord requires version 0.1.x of the adapter; version 5.1.x of ActiveRecord requires version 0.2.x of the adapter.
-{{site.data.alerts.end}}
-
      - -## Step 2. Create the `maxroach` user and `bank` database - -{% include {{page.version.version}}/app/create-maxroach-user-and-bank-database.md %} - -## Step 3. Generate a certificate for the `maxroach` user - -Create a certificate and key for the `maxroach` user by running the following command. The code samples will run as this user. - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach cert create-client maxroach --certs-dir=certs --ca-key=my-safe-directory/ca.key -~~~ - -## Step 4. Run the Ruby code - -The following code uses the [ActiveRecord](http://guides.rubyonrails.org/active_record_basics.html) ORM to map Ruby-specific objects to SQL operations. Specifically, `Schema.new.change()` creates an `accounts` table based on the Account model (or drops and recreates the table if it already exists), `Account.create()` inserts rows into the table, and `Account.all` selects from the table so that balances can be printed. - -Copy the code or -download it directly. - -{% include copy-clipboard.html %} -~~~ ruby -{% include {{page.version.version}}/app/activerecord-basic-sample.rb %} -~~~ - -Then run the code: - -{% include copy-clipboard.html %} -~~~ shell -$ ruby activerecord-basic-sample.rb -~~~ - -The output should be: - -~~~ shell --- create_table(:accounts, {:force=>true}) - -> 0.0361s -1 1000 -2 250 -~~~ - -To verify that the table and rows were created successfully, start the [built-in SQL client](use-the-built-in-sql-client.html): - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --certs-dir=certs --database=bank -~~~ - -Then, issue the following statement: - -{% include copy-clipboard.html %} -~~~ sql -> SELECT id, balance FROM accounts; -~~~ - -~~~ -+----+---------+ -| id | balance | -+----+---------+ -| 1 | 1000 | -| 2 | 250 | -+----+---------+ -(2 rows) -~~~ - -
      - -
      - -## Step 2. Create the `maxroach` user and `bank` database - -{% include {{page.version.version}}/app/insecure/create-maxroach-user-and-bank-database.md %} - -## Step 3. Run the Ruby code - -The following code uses the [ActiveRecord](http://guides.rubyonrails.org/active_record_basics.html) ORM to map Ruby-specific objects to SQL operations. Specifically, `Schema.new.change()` creates an `accounts` table based on the Account model (or drops and recreates the table if it already exists), `Account.create()` inserts rows into the table, and `Account.all` selects from the table so that balances can be printed. - -Copy the code or -download it directly. - -{% include copy-clipboard.html %} -~~~ ruby -{% include {{page.version.version}}/app/insecure/activerecord-basic-sample.rb %} -~~~ - -Then run the code: - -{% include copy-clipboard.html %} -~~~ shell -$ ruby activerecord-basic-sample.rb -~~~ - -The output should be: - -~~~ shell --- create_table(:accounts, {:force=>true}) - -> 0.0361s -1 1000 -2 250 -~~~ - -To verify that the table and rows were created successfully, start the [built-in SQL client](use-the-built-in-sql-client.html): - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure --database=bank -~~~ - -Then, issue the following statement: - -{% include copy-clipboard.html %} -~~~ sql -> SELECT id, balance FROM accounts; -~~~ - -~~~ -+----+---------+ -| id | balance | -+----+---------+ -| 1 | 1000 | -| 2 | 250 | -+----+---------+ -(2 rows) -~~~ - -
-
-## What's next?
-
-Read more about using the [ActiveRecord ORM](http://guides.rubyonrails.org/active_record_basics.html), or check out a more realistic implementation of ActiveRecord with CockroachDB in our [`examples-orms`](https://github.com/cockroachdb/examples-orms) repository.
-
-{% include {{page.version.version}}/app/see-also-links.md %}
diff --git a/src/current/v2.0/build-a-ruby-app-with-cockroachdb.md b/src/current/v2.0/build-a-ruby-app-with-cockroachdb.md
deleted file mode 100644
index f461cee9b94..00000000000
--- a/src/current/v2.0/build-a-ruby-app-with-cockroachdb.md
+++ /dev/null
@@ -1,211 +0,0 @@
----
-title: Build a Ruby App with CockroachDB
-summary: Learn how to use CockroachDB from a simple Ruby application with the pg client driver.
-toc: true
-twitter: false
----
-
-
-
-This tutorial shows you how to build a simple Ruby application with CockroachDB using a PostgreSQL-compatible driver or ORM.
-
-We have tested the [Ruby pg driver](https://rubygems.org/gems/pg) and the [ActiveRecord ORM](http://guides.rubyonrails.org/active_record_basics.html) enough to claim **beta-level** support, so those are featured here. If you encounter problems, please [open an issue](https://github.com/cockroachdb/cockroach/issues/new) with details to help us make progress toward full support.
-
-## Before you begin
-
-{% include {{page.version.version}}/app/before-you-begin.md %}
-
-## Step 1. Install the Ruby pg driver
-
-To install the [Ruby pg driver](https://rubygems.org/gems/pg), run the following command:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ gem install pg
-~~~
-
      - -## Step 2. Create the `maxroach` user and `bank` database - -{% include {{page.version.version}}/app/create-maxroach-user-and-bank-database.md %} - -## Step 3. Generate a certificate for the `maxroach` user - -Create a certificate and key for the `maxroach` user by running the following command. The code samples will run as this user. - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach cert create-client maxroach --certs-dir=certs --ca-key=my-safe-directory/ca.key -~~~ - -## Step 4. Run the Ruby code - -Now that you have a database and a user, you'll run code to create a table and insert some rows, and then you'll run code to read and update values as an atomic [transaction](transactions.html). - -### Basic statements - -The following code connects as the `maxroach` user and executes some basic SQL statements: creating a table, inserting rows, and reading and printing the rows. - -Download the basic-sample.rb file, or create the file yourself and copy the code into it. - -{% include copy-clipboard.html %} -~~~ ruby -{% include {{page.version.version}}/app/basic-sample.rb %} -~~~ - -Then run the code: - -{% include copy-clipboard.html %} -~~~ shell -$ ruby basic-sample.rb -~~~ - -The output should be: - -~~~ -Initial balances: -{"id"=>"1", "balance"=>"1000"} -{"id"=>"2", "balance"=>"250"} -~~~ - -### Transaction (with retry logic) - -Next, use the following code to again connect as the `maxroach` user but this time execute a batch of statements as an atomic transaction to transfer funds from one account to another, where all included statements are either committed or aborted. - -Download the txn-sample.rb file, or create the file yourself and copy the code into it. - -{{site.data.alerts.callout_info}} -With the default `SERIALIZABLE` isolation level, CockroachDB may require the client to retry a transaction in case of read/write contention. The code sample below shows how to implement retry logic. -{{site.data.alerts.end}} - -{% include copy-clipboard.html %} -~~~ ruby -{% include {{page.version.version}}/app/txn-sample.rb %} -~~~ - -Then run the code: - -{% include copy-clipboard.html %} -~~~ shell -$ ruby txn-sample.rb -~~~ - -To verify that funds were transferred from one account to another, start the [built-in SQL client](use-the-built-in-sql-client.html): - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --certs-dir=certs --database=bank -~~~ - -To check the account balances, issue the following statement: - -{% include copy-clipboard.html %} -~~~ sql -> SELECT id, balance FROM accounts; -~~~ - -~~~ -+----+---------+ -| id | balance | -+----+---------+ -| 1 | 900 | -| 2 | 350 | -+----+---------+ -(2 rows) -~~~ - -
      - -
      - -## Step 2. Create the `maxroach` user and `bank` database - -{% include {{page.version.version}}/app/insecure/create-maxroach-user-and-bank-database.md %} - -## Step 3. Run the Ruby code - -Now that you have a database and a user, you'll run code to create a table and insert some rows, and then you'll run code to read and update values as an atomic [transaction](transactions.html). - -### Basic statements - -The following code connects as the `maxroach` user and executes some basic SQL statements: creating a table, inserting rows, and reading and printing the rows. - -Download the basic-sample.rb file, or create the file yourself and copy the code into it. - -{% include copy-clipboard.html %} -~~~ ruby -{% include {{page.version.version}}/app/insecure/basic-sample.rb %} -~~~ - -Then run the code: - -{% include copy-clipboard.html %} -~~~ shell -$ ruby basic-sample.rb -~~~ - -The output should be: - -~~~ -Initial balances: -{"id"=>"1", "balance"=>"1000"} -{"id"=>"2", "balance"=>"250"} -~~~ - -### Transaction (with retry logic) - -Next, use the following code to again connect as the `maxroach` user but this time execute a batch of statements as an atomic transaction to transfer funds from one account to another, where all included statements are either committed or aborted. - -Download the txn-sample.rb file, or create the file yourself and copy the code into it. - -{{site.data.alerts.callout_info}} -With the default `SERIALIZABLE` isolation level, CockroachDB may require the client to retry a transaction in case of read/write contention. The code sample below shows how to implement retry logic. -{{site.data.alerts.end}} - -{% include copy-clipboard.html %} -~~~ ruby -{% include {{page.version.version}}/app/insecure/txn-sample.rb %} -~~~ - -Then run the code: - -{% include copy-clipboard.html %} -~~~ shell -$ ruby txn-sample.rb -~~~ - -To verify that funds were transferred from one account to another, start the [built-in SQL client](use-the-built-in-sql-client.html): - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure --database=bank -~~~ - -To check the account balances, issue the following statement: - -{% include copy-clipboard.html %} -~~~ sql -> SELECT id, balance FROM accounts; -~~~ - -~~~ -+----+---------+ -| id | balance | -+----+---------+ -| 1 | 900 | -| 2 | 350 | -+----+---------+ -(2 rows) -~~~ - -
-
-## What's next?
-
-Read more about using the [Ruby pg driver](https://rubygems.org/gems/pg).
-
-{% include {{page.version.version}}/app/see-also-links.md %}
diff --git a/src/current/v2.0/build-a-rust-app-with-cockroachdb.md b/src/current/v2.0/build-a-rust-app-with-cockroachdb.md
deleted file mode 100644
index 7cab3fb80ce..00000000000
--- a/src/current/v2.0/build-a-rust-app-with-cockroachdb.md
+++ /dev/null
@@ -1,84 +0,0 @@
----
-title: Build a Rust App with CockroachDB
-summary: Learn how to use CockroachDB from a simple Rust application with a low-level client driver.
-toc: true
-twitter: false
----
-
-This tutorial shows you how to build a simple Rust application with CockroachDB using a PostgreSQL-compatible driver.
-
-We have tested the Rust Postgres driver enough to claim **beta-level** support, so that driver is featured here. If you encounter problems, please [open an issue](https://github.com/cockroachdb/cockroach/issues/new) with details to help us make progress toward full support.
-
-
-## Before You Begin
-
-Make sure you have already [installed CockroachDB](install-cockroachdb.html).
-
-## Step 1. Install the Rust Postgres driver
-
-Install the Rust Postgres driver as described in the official documentation.
-
-{% include {{ page.version.version }}/app/common-steps.md %}
-
-## Step 5. Create a table in the new database
-
-As the `maxroach` user, use the [built-in SQL client](use-the-built-in-sql-client.html) to create an `accounts` table in the new database.
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql --insecure \
---database=bank \
---user=maxroach \
--e 'CREATE TABLE accounts (id INT PRIMARY KEY, balance INT)'
-~~~
-
-## Step 6. Run the Rust code
-
-Now that you have a database and a user, you'll run code to create a table and insert some rows, and then you'll run code to read and update values as an atomic [transaction](transactions.html).
-
-### Basic Statements
-
-First, use the following code to connect as the `maxroach` user and execute some basic SQL statements, inserting rows and reading and printing the rows.
-
-Download the basic-sample.rs file, or create the file yourself and copy the code into it.
-
-{% include copy-clipboard.html %}
-~~~ rust
-{% include {{ page.version.version }}/app/basic-sample.rs %}
-~~~
-
-### Transaction (with retry logic)
-
-Next, use the following code to again connect as the `maxroach` user but this time execute a batch of statements as an atomic transaction to transfer funds from one account to another, where all included statements are either committed or aborted.
-
-Download the txn-sample.rs file, or create the file yourself and copy the code into it.
-
-{{site.data.alerts.callout_info}}With the default SERIALIZABLE isolation level, CockroachDB may require the client to retry a transaction in case of read/write contention. CockroachDB provides a generic retry function that runs inside a transaction and retries it as needed.
You can copy and paste the retry function from here into your code.{{site.data.alerts.end}} - -{% include copy-clipboard.html %} -~~~ rust -{% include {{ page.version.version }}/app/txn-sample.rs %} -~~~ - -After running the code, use the [built-in SQL client](use-the-built-in-sql-client.html) to verify that funds were transferred from one account to another: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure -e 'SELECT id, balance FROM accounts' --database=bank -~~~ - -~~~ -+----+---------+ -| id | balance | -+----+---------+ -| 1 | 900 | -| 2 | 350 | -+----+---------+ -(2 rows) -~~~ - -## What's Next? - -Read more about using the Rust Postgres driver. - -{% include {{ page.version.version }}/app/see-also-links.md %} diff --git a/src/current/v2.0/build-an-app-with-cockroachdb.md b/src/current/v2.0/build-an-app-with-cockroachdb.md deleted file mode 100644 index 9e66d914d34..00000000000 --- a/src/current/v2.0/build-an-app-with-cockroachdb.md +++ /dev/null @@ -1,24 +0,0 @@ ---- -title: Build an App with CockroachDB -summary: The tutorials in this section show you how to build a simple application with CockroachDB, using PostgreSQL-compatible client drivers and ORMs. -tags: golang, python, java -toc: false -twitter: false ---- - -The tutorials in this section show you how to build a simple application with CockroachDB using PostgreSQL-compatible client drivers and ORMs. - -{{site.data.alerts.callout_info}}We have tested the drivers and ORMs featured here enough to claim beta-level support. This means that applications using advanced or obscure features of a driver or ORM may encounter incompatibilities. If you encounter problems, please open an issue with details to help us make progress toward full support.{{site.data.alerts.end}} - -App Language | Featured Driver | Featured ORM --------------|-----------------|------------- -Go | [pq](build-a-go-app-with-cockroachdb.html) | [GORM](build-a-go-app-with-cockroachdb-gorm.html) -Python | [psycopg2](build-a-python-app-with-cockroachdb.html) | [SQLAlchemy](build-a-python-app-with-cockroachdb-sqlalchemy.html) -Ruby | [pg](build-a-ruby-app-with-cockroachdb.html) | [ActiveRecord](build-a-ruby-app-with-cockroachdb-activerecord.html) -Java | [jdbc](build-a-java-app-with-cockroachdb.html) | [Hibernate](build-a-java-app-with-cockroachdb-hibernate.html) -Node.js | [pg](build-a-nodejs-app-with-cockroachdb.html) | [Sequelize](build-a-nodejs-app-with-cockroachdb-sequelize.html) -C++ | [libpqxx](build-a-c++-app-with-cockroachdb.html) | No ORMs tested -C# (.NET) | [Npgsql](build-a-csharp-app-with-cockroachdb.html) | No ORMs tested -Clojure | [java.jdbc](build-a-clojure-app-with-cockroachdb.html) | No ORMs tested -PHP | [php-pgsql](build-a-php-app-with-cockroachdb.html) | No ORMs tested -Rust | [postgres](build-a-rust-app-with-cockroachdb.html) | No ORMs tested diff --git a/src/current/v2.0/bytes.md b/src/current/v2.0/bytes.md deleted file mode 100644 index b8ac1026b6f..00000000000 --- a/src/current/v2.0/bytes.md +++ /dev/null @@ -1,82 +0,0 @@ ---- -title: BYTES -summary: The BYTES data type stores binary strings of variable length. -toc: true ---- - -The `BYTES` [data type](data-types.html) stores binary strings of variable length. - - -## Aliases - -In CockroachDB, the following are aliases for `BYTES`: - -- `BYTEA` -- `BLOB` - -## Syntax - -To express a byte array constant, see the section on -[byte array literals](sql-constants.html#byte-array-literals) for more -details. 
For example, the following three are equivalent literals for the same
-byte array: `b'abc'`, `b'\141\142\143'`, `b'\x61\x62\x63'`.
-
-In addition to this syntax, CockroachDB also supports using
-[string literals](sql-constants.html#string-literals), including the
-syntax `'...'`, `e'...'` and `x'....'` in contexts where a byte array
-is otherwise expected.
-
-## Size
-
-The size of a `BYTES` value is variable, but it's recommended to keep values under 1 MB to ensure performance. Above that threshold, [write amplification](https://en.wikipedia.org/wiki/Write_amplification) and other considerations may cause significant performance degradation.
-
-## Example
-
-~~~ sql
-> CREATE TABLE bytes (a INT PRIMARY KEY, b BYTES);
-
-> -- explicitly typed BYTES literals
-> INSERT INTO bytes VALUES (1, b'\141\142\143'), (2, b'\x61\x62\x63'), (3, b'\141\x62\c');
-
-> -- string literal implicitly typed as BYTES
-> INSERT INTO bytes VALUES (4, 'abc');
-
-
-> SELECT * FROM bytes;
-~~~
-~~~
-+---+-----+
-| a | b   |
-+---+-----+
-| 1 | abc |
-| 2 | abc |
-| 3 | abc |
-| 4 | abc |
-+---+-----+
-(4 rows)
-~~~
-
-## Supported Conversions
-
-`BYTES` values can be
-[cast](data-types.html#data-type-conversions-casts) explicitly to
-[`STRING`](string.html). The output of the conversion starts with the
-two characters `\`, `x`, and the rest of the string is composed of the
-hexadecimal encoding of each byte in the input. For example,
-`x'48AA'::STRING` produces `'\x48AA'`.
-
-`STRING` values can be cast explicitly to `BYTES`. Two conversion
-modes are supported:
-
-- If the string starts with the two special characters `\` and `x`
-  (e.g., `\xAABB`), the rest of the string is interpreted as a sequence
-  of hexadecimal digits. The string is then converted to a byte array
-  where each pair of hexadecimal digits is converted to one byte. This
-  conversion fails if the hexadecimal digits are not valid or if there
-  is an odd number of them.
-
-- Otherwise, the string is converted to a byte array that contains
-  its UTF-8 encoding.
-
-## See Also
-
-[Data Types](data-types.html)
diff --git a/src/current/v2.0/cancel-job.md b/src/current/v2.0/cancel-job.md
deleted file mode 100644
index edd171c3229..00000000000
--- a/src/current/v2.0/cancel-job.md
+++ /dev/null
@@ -1,53 +0,0 @@
----
-title: CANCEL JOB
-summary: The CANCEL JOB statement stops long-running jobs such as imports, backups, and schema changes.
-toc: true
----
-
-New in v1.1: The `CANCEL JOB` [statement](sql-statements.html) lets you stop long-running jobs, which include [`IMPORT`](import.html) jobs and enterprise [`BACKUP`](backup.html) and [`RESTORE`](restore.html) tasks.
-
-
-## Limitations
-
-When an enterprise [`RESTORE`](restore.html) is canceled, partially restored data is properly cleaned up. This can have a minor, temporary impact on cluster performance.
-
-## Required Privileges
-
-By default, only the `root` user can cancel a job.
-
-## Synopsis
-
      -{% include {{ page.version.version }}/sql/diagrams/cancel_job.html %} -
      - -## Parameters - -Parameter | Description -----------|------------ -`job_id` | The ID of the job you want to cancel, which can be found with [`SHOW JOBS`](show-jobs.html). - -## Examples - -### Cancel a Restore - -~~~ sql -> SHOW JOBS; -~~~ -~~~ -+----------------+---------+-------------------------------------------+... -| id | type | description |... -+----------------+---------+-------------------------------------------+... -| 27536791415282 | RESTORE | RESTORE db.* FROM 'azure://backup/db/tbl' |... -+----------------+---------+-------------------------------------------+... -~~~ -~~~ sql -> CANCEL JOB 27536791415282; -~~~ - -## See Also - -- [`SHOW JOBS`](show-jobs.html) -- [`BACKUP`](backup.html) -- [`RESTORE`](restore.html) -- [`IMPORT`](import.html) \ No newline at end of file diff --git a/src/current/v2.0/cancel-query.md b/src/current/v2.0/cancel-query.md deleted file mode 100644 index c0d06b34df2..00000000000 --- a/src/current/v2.0/cancel-query.md +++ /dev/null @@ -1,79 +0,0 @@ ---- -title: CANCEL QUERY -summary: The CANCEL QUERY statement cancels a running SQL query. -toc: true ---- - -New in v1.1: The `CANCEL QUERY` [statement](sql-statements.html) cancels a running SQL query. - - -## Considerations - -- Schema changes (statements beginning with ALTER) cannot currently be cancelled. However, to monitor the progress of schema changes, you can use SHOW JOBS. -- In rare cases where a query is close to completion when a cancellation request is issued, the query may run to completion. - -## Required Privileges - -The `root` user can cancel any currently active queries, whereas non-`root` users cancel only their own currently active queries. - -## Synopsis - -
      -{% include {{ page.version.version }}/sql/diagrams/cancel_query.html %} -
-
-## Parameters
-
-Parameter | Description
-----------|------------
-`query_id` | A [scalar expression](scalar-expressions.html) that produces the ID of the query to cancel.<br><br>`CANCEL QUERY` accepts a single query ID. If a subquery is used and returns multiple IDs, the `CANCEL QUERY` statement will therefore fail.
-
-## Response
-
-When a query is successfully canceled, CockroachDB sends a `query execution canceled` error to the client that issued the query.
-
-- If the canceled query was a single, stand-alone statement, no further action is required by the client.
-- If the canceled query was part of a larger, multi-statement [transaction](transactions.html), the client should then issue a [`ROLLBACK`](rollback-transaction.html) statement.
-
-## Examples
-
-### Cancel a Query via the Query ID
-
-In this example, we use the [`SHOW QUERIES`](show-queries.html) statement to get the ID of a query and then pass the ID into the `CANCEL QUERY` statement:
-
-~~~ sql
-> SHOW QUERIES;
-~~~
-
-~~~
-+----------------------------------+---------+----------+----------------------------------+----------------------------------+--------------------+------------------+-------------+-----------+
-| query_id                         | node_id | username | start                            | query                            | client_address     | application_name | distributed | phase     |
-+----------------------------------+---------+----------+----------------------------------+----------------------------------+--------------------+------------------+-------------+-----------+
-| 14dacc1f9a781e3d0000000000000001 |       2 | mroach   | 2017-08-10 14:08:22.878113+00:00 | SELECT * FROM test.kv ORDER BY k | 192.168.0.72:56194 | test_app         | false       | executing |
-+----------------------------------+---------+----------+----------------------------------+----------------------------------+--------------------+------------------+-------------+-----------+
-| 14dacc206c47a9690000000000000002 |       2 | root     | 2017-08-14 19:11:05.309119+00:00 | SHOW CLUSTER QUERIES             | 127.0.0.1:50921    |                  | NULL        | preparing |
-+----------------------------------+---------+----------+----------------------------------+----------------------------------+--------------------+------------------+-------------+-----------+
-~~~
-
-~~~ sql
-> CANCEL QUERY '14dacc1f9a781e3d0000000000000001';
-~~~
-
-### Cancel a Query via a Subquery
-
-In this example, we nest a [`SELECT` clause](select-clause.html) that retrieves the ID of a query inside the `CANCEL QUERY` statement:
-
-~~~ sql
-> CANCEL QUERY (SELECT query_id FROM [SHOW CLUSTER QUERIES]
-      WHERE client_address = '192.168.0.72:56194'
-          AND username = 'mroach'
-          AND query = 'SELECT * FROM test.kv ORDER BY k');
-~~~
-
-{{site.data.alerts.callout_info}}CANCEL QUERY accepts a single query ID. If a subquery is used and returns multiple IDs, the CANCEL QUERY statement will therefore fail.{{site.data.alerts.end}}
-
-## See Also
-
-- [Manage Long-Running Queries](manage-long-running-queries.html)
-- [`SHOW QUERIES`](show-queries.html)
-- [SQL Statements](sql-statements.html)
diff --git a/src/current/v2.0/check.md b/src/current/v2.0/check.md
deleted file mode 100644
index 68e137c75c6..00000000000
--- a/src/current/v2.0/check.md
+++ /dev/null
@@ -1,113 +0,0 @@
----
-title: CHECK Constraint
-summary: The CHECK constraint specifies that values for the column in INSERT or UPDATE statements must satisfy a Boolean expression.
-toc: true
----
-
-The `CHECK` [constraint](constraints.html) specifies that values for the column in [`INSERT`](insert.html) or [`UPDATE`](update.html) statements must make a Boolean expression return `TRUE` or `NULL`. If any values cause the expression to return `FALSE`, the entire statement is rejected.
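-
-For example, with a hypothetical `products` table, inserting a `NULL` into the checked column is accepted, because the expression returns `NULL` rather than `FALSE`:
-
-~~~ sql
-> CREATE TABLE products (id INT PRIMARY KEY, price INT CHECK (price > 0));
-
-> INSERT INTO products VALUES (1, NULL); -- accepted: the CHECK expression returns NULL
-
-> INSERT INTO products VALUES (2, -10); -- rejected: the CHECK expression returns FALSE
-~~~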
- - -## Details - -- If you add a `CHECK` constraint to an existing table, added values, along with any updates to current values, are checked. To check the existing rows, use [`VALIDATE CONSTRAINT`](validate-constraint.html). -- `CHECK` constraints may be specified at the column or table level and can reference other columns within the table. Internally, all column-level `CHECK` constraints are converted to table-level constraints so they can be handled consistently. -- You can have multiple `CHECK` constraints on a single column but ideally, for performance optimization, these should be combined using the logical operators. For example: - - ~~~ sql - warranty_period INT CHECK (warranty_period >= 0) CHECK (warranty_period <= 24) - ~~~ - - should be specified as: - - ~~~ sql - warranty_period INT CHECK (warranty_period BETWEEN 0 AND 24) - ~~~ -- When a column with a `CHECK` constraint is dropped, the `CHECK` constraint is also dropped. - -## Syntax - -`CHECK` constraints can be defined at the [table level](#table-level). However, if you only want the constraint to apply to a single column, it can be applied at the [column level](#column-level). - -{{site.data.alerts.callout_info}}You can also add the CHECK constraint to existing tables through ADD CONSTRAINT.{{site.data.alerts.end}} - -### Column Level - -
      -{% include {{ page.version.version }}/sql/diagrams/check_column_level.html %} -
      - -| Parameter | Description | -|-----------|-------------| -| `table_name` | The name of the table you're creating. | -| `column_name` | The name of the constrained column. | -| `column_type` | The constrained column's [data type](data-types.html). | -| `check_expr` | An expression that returns a Boolean value; if the expression evaluates to `FALSE`, the value cannot be inserted.| -| `column_constraints` | Any other column-level [constraints](constraints.html) you want to apply to this column. | -| `column_def` | Definitions for any other columns in the table. | -| `table_constraints` | Any table-level [constraints](constraints.html) you want to apply. | - -**Example** - -~~~ sql -> CREATE TABLE inventories ( - product_id INT NOT NULL, - warehouse_id INT NOT NULL, - quantity_on_hand INT NOT NULL CHECK (quantity_on_hand > 0), - PRIMARY KEY (product_id, warehouse_id) - ); -~~~ - -### Table Level - -
      -{% include {{ page.version.version }}/sql/diagrams/check_table_level.html %} -
      - -| Parameter | Description | -|-----------|-------------| -| `table_name` | The name of the table you're creating. | -| `column_def` | Definitions for any other columns in the table. | -| `name` | The name you want to use for the constraint, which must be unique to its table and follow these [identifier rules](keywords-and-identifiers.html#identifiers). | -| `check_expr` | An expression that returns a Boolean value; if the expression evaluates to `FALSE`, the value cannot be inserted.| -| `table_constraints` | Any other table-level [constraints](constraints.html) you want to apply. | - -**Example** - -~~~ sql -> CREATE TABLE inventories ( - product_id INT NOT NULL, - warehouse_id INT NOT NULL, - quantity_on_hand INT NOT NULL, - PRIMARY KEY (product_id, warehouse_id), - CONSTRAINT ok_to_supply CHECK (quantity_on_hand > 0 AND warehouse_id BETWEEN 100 AND 200) - ); -~~~ - -## Usage Example - -`CHECK` constraints may be specified at the column or table level and can reference other columns within the table. Internally, all column-level `CHECK` constraints are converted to table-level constraints so they can be handled in a consistent fashion. - -~~~ sql -> CREATE TABLE inventories ( - product_id INT NOT NULL, - warehouse_id INT NOT NULL, - quantity_on_hand INT NOT NULL CHECK (quantity_on_hand > 0), - PRIMARY KEY (product_id, warehouse_id) - ); - -> INSERT INTO inventories (product_id, warehouse_id, quantity_on_hand) VALUES (1, 2, 0); -~~~ -~~~ -pq: failed to satisfy CHECK constraint (quantity_on_hand > 0) -~~~ - -## See Also - -- [Constraints](constraints.html) -- [`DROP CONSTRAINT`](drop-constraint.html) -- [Default Value constraint](default-value.html) -- [Foreign Key constraint](foreign-key.html) -- [Not Null constraint](not-null.html) -- [Primary Key constraint](primary-key.html) -- [Unique constraint](unique.html) -- [`SHOW CONSTRAINTS`](show-constraints.html) diff --git a/src/current/v2.0/cluster-settings.md b/src/current/v2.0/cluster-settings.md deleted file mode 100644 index 89521a4b8e4..00000000000 --- a/src/current/v2.0/cluster-settings.md +++ /dev/null @@ -1,44 +0,0 @@ ---- -title: Cluster Settings -summary: Learn about cluster settings that apply to all nodes of a CockroachDB cluster. -toc: true ---- - -This page shows you how to view and change CockroachDB's **cluster-wide settings**. - -{{site.data.alerts.callout_info}}In contrast to cluster-wide settings, node-level settings apply to a single node. They are defined by flags passed to the cockroach start command when starting a node and cannot be changed without stopping and restarting the node. For more details, see Start a Node.{{site.data.alerts.end}} - - -## Overview - -Cluster settings apply to all nodes of a CockroachDB cluster and control, for example, whether or not to share diagnostic details with Cockroach Labs as well as advanced options for debugging and cluster tuning. - -They can be updated anytime after a cluster has been started, but only by the `root` user. - -## Settings - -{{site.data.alerts.callout_danger}}Many cluster settings are intended for tuning CockroachDB internals. Before changing these settings, we strongly encourage you to discuss your goals with Cockroach Labs; otherwise, you use them at your own risk.{{site.data.alerts.end}} - -{% include {{ page.version.version }}/sql/settings/settings.md %} - -## View Current Cluster Settings - -Use the [`SHOW CLUSTER SETTING`](show-cluster-setting.html) statement. 
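-
-For example, to inspect one setting or list them all (`diagnostics.reporting.enabled` is just one illustrative setting):
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SHOW CLUSTER SETTING diagnostics.reporting.enabled;
-
-> SHOW ALL CLUSTER SETTINGS;
-~~~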
- -## Change a Cluster Setting - -Use the [`SET CLUSTER SETTING`](set-cluster-setting.html) statement. - -Before changing a cluster setting, please note the following: - -- Changing a cluster setting is not instantaneous, as the change must be propagated to other nodes in the cluster. - -- It's not recommended to change cluster settings [upgrading to a new version of CockroachDB](upgrade-cockroach-version.html); wait until all nodes have been upgraded and then make the change. - -## See Also - -- [`SET CLUSTER SETTING`](set-cluster-setting.html) -- [`SHOW CLUSTER SETTING`](show-cluster-setting.html) -- [Diagnostics Reporting](diagnostics-reporting.html) -- [Start a Node](start-a-node.html) -- [Use the Built-in SQL Client](use-the-built-in-sql-client.html) diff --git a/src/current/v2.0/cluster-setup-troubleshooting.md b/src/current/v2.0/cluster-setup-troubleshooting.md deleted file mode 100644 index e20be0b5d5d..00000000000 --- a/src/current/v2.0/cluster-setup-troubleshooting.md +++ /dev/null @@ -1,191 +0,0 @@ ---- -title: Troubleshoot Cluster Setup -summary: Learn how to troubleshoot issues with starting CockroachDB clusters -toc: true ---- - -If you're having trouble starting or scaling your cluster, this page will help you troubleshoot the issue. - - -## Before You Begin - -### Terminology - -To use this guide, it's important to understand some of CockroachDB's terminology: - - - A **Cluster** acts as a single logical database, but is actually made up of many cooperating nodes. - - **Nodes** are single instances of the `cockroach` binary running on a machine. It's possible (though atypical) to have multiple nodes running on a single machine. - -### Using This Guide - -To diagnose issues, we recommend beginning with the simplest scenario and then increasing its complexity until you discover the problem. With that strategy in mind, you should proceed through these troubleshooting steps sequentially. - -We also recommend executing these steps in the environment where you want to deploy your CockroachDB cluster. However, if you run into issues you cannot solve, try the same steps in a simpler environment. For example, if you cannot successfully start a cluster using Docker, try deploying CockroachDB in the same environment without using containers. - -## Locate Your Issue - -Proceed through the following steps until you locate the source of the issue with starting or scaling your CockroachDB cluster. - -### 1. Start a Single-Node Cluster - -1. Terminate any running `cockroach` processes and remove any old data: - - ~~~ shell - $ pkill -9 cockroach - $ rm -r testStore - ~~~ - -2. Start a single insecure node and log all activity to your terminal: - - ~~~ shell - $ cockroach start --insecure --logtostderr --store=testStore - ~~~ - - Errors at this stage potentially include: - - CPU incompatibility - - Other services running on port `26257` or `8080` (CockroachDB's default `port` and `http-port` respectively). You can either stop those services or start your node with different ports, specified with the [`--port` and `--http-port`](start-a-node.html#flags-changed-in-v2-0). - - If you change the port, you will need to include the `--port=[specified port]` flag in each subsequent `cockroach` command or change the `COCKROACH_PORT` environment variable. - - Networking issues that prevent the node from communicating with itself on its hostname. You can control the hostname CockroachDB uses with the [`--host` flag](start-a-node.html#flags-changed-in-v2-0). 
- - If you change the host, you will need to include `--host=[specified host]` in each subsequent `cockroach` command. - -3. If the node appears to have started successfully, open a new terminal window, and attempt to execute the following SQL statement: - - ~~~ shell - $ cockroach sql --insecure -e "SHOW DATABASES" - ~~~ - - You should receive a response that looks similar to this: - - ~~~ - +--------------------+ - | Database | - +--------------------+ - | system | - +--------------------+ - ~~~ - - Errors at this stage potentially include: - - `connection refused`, which indicates you have not included some flag that you used to start the node (e.g., `--port` or `--host`). We have additional troubleshooting steps for this error [here](common-errors.html#connection-refused). - - The node crashed. You can identify if this is the case by looking for the `cockroach` process through `ps`. If you cannot locate the `cockroach` process (i.e., it crashed), [file an issue](file-an-issue.html). - -**Next step**: If you successfully completed these steps, try starting a multi-node cluster. - -### 2. Start a Multi-Node Cluster - -1. Terminate any running `cockroach` processes and remove any old data on the additional machines:: - - ~~~ shell - $ pkill -9 cockroach - $ rm -r testStore - ~~~ - - {{site.data.alerts.callout_info}}If you're running all nodes on the same machine, skip this step. Running this command will kill your first node making it impossible to proceed.{{site.data.alerts.end}} - -2. On each machine, start the CockroachDB node, joining it to the first node: - - ~~~ shell - $ cockroach start --insecure --logtostderr --store=testStore \ - --join=[first node's host] - ~~~ - - {{site.data.alerts.callout_info}}If you're running all nodes on the same machine, you will need to change the --port, --http-port, and --store flags. For an example of this, see Start a Local Cluster.{{site.data.alerts.end}} - - Errors at this stage potentially include: - - The same port and host issues from [running a single node](#1-start-a-single-node-cluster). - - [Networking issues](#networking-troubleshooting) - - [Nodes not joining the cluster](#node-will-not-join-cluster) - -3. Visit the Admin UI on any node at `http://[node host]:8080` and click **Metrics** on the left-hand navigation bar. All nodes in the cluster should be listed and have data replicated onto them. - - Errors at this stage potentially include: - - [Networking issues](#networking-troubleshooting) - - [Nodes not receiving data](#replication-error-in-a-multi-node-cluster) - -**Next step**: If you successfully completed these steps, try [securing your deployment](manual-deployment.html) (*troubleshooting docs for this coming soon*) or reviewing our other [support resources](support-resources.html). - -## Troubleshooting Information - -Use the information below to resolve issues you encounter when trying to start or scale your cluster. - -### Networking Troubleshooting - -Most networking-related issues are caused by one of two issues: - -- Firewall rules, which require your network administrator to investigate - -- Inaccessible hostnames on your nodes, which can be controlled with the `--host` and `--advertise-host` flags on [`cockroach start`](start-a-node.html#flags-changed-in-v2-0) - -However, to efficiently troubleshoot the issue, it's important to understand where and why it's occurring. We recommend checking the following network-related issues: - -- By default, CockroachDB advertises itself to other nodes using its hostname. 
If your environment doesn't support DNS or the hostname is not resolvable, your nodes cannot connect to one another. In these cases, you can: - - Change the hostname each node uses to advertise itself with `--advertise-host` - - Set `--host=[node's IP address]` if the IP is a valid interface on the machine - -- Every node in the cluster should be able to `ping` each other node on the hostnames or IP addresses you use in the `--join`, `--host`, or `--advertise-host` flags. - -- Every node should be able to connect to other nodes on the port you're using for CockroachDB (**26257** by default) through `telnet` or `nc`: - - `telnet [other node host] 26257` - - `nc [other node host] 26257` - -Again, firewalls or hostname issues can cause any of these steps to fail. - -### Node Will Not Join Cluster - -When joining a node to a cluster, you might receive one of the following errors: - -~~~ -no resolvers found; use --join to specify a connected node -~~~ - -~~~ -node belongs to cluster {"cluster hash"} but is attempting to connect to a gossip network for cluster {"another cluster hash"} -~~~ - -**Solution**: Disassociate the node from the existing directory where you've stored CockroachDB data. For example, you can do either of the following: - -- Choose a different directory to store the CockroachDB data: - - ~~~ shell - # Store this node's data in [new directory] - $ cockroach start [flags] --store=[new directory] --join=[cluster host]:26257 - ~~~ - -- Remove the existing directory and start a node joining the cluster again: - - ~~~ shell - # Remove the directory - $ rm -r cockroach-data/ - - # Start a node joining the cluster - $ cockroach start [flags] --join=[cluster host]:26257 - ~~~ - -**Explanation**: When starting a node, the directory you choose to store the data in also contains metadata identifying the cluster the data came from. This causes conflicts when you've already started a node on the server, have quit `cockroach`, and then tried to join another cluster. Because the existing directory's cluster ID doesn't match the new cluster ID, the node cannot join it. - -### Replication Error in a Multi-Node Cluster - -If data is not being replicated to some nodes in the cluster, we recommend checking out the following: - -- Ensure every node but the first was started with the `--join` flag set to the hostname and port of the first node (or any other node that's successfully joined the cluster). - - If the flag was not set correctly for a node, shut down the node and restart it with the `--join` flag set correctly. See [Stop a Node](stop-a-node.html) and [Start a Node](start-a-node.html) for more details. - -- Nodes might not be able to communicate on their advertised hostnames, even though they're able to connect. - - You can try to resolve this by [stopping the nodes](stop-a-node.html), and then [restarting them](start-a-node.html) with the `--advertise-host` flag set to an interface all nodes can access. - -- Check the [logs](debug-and-error-logs.html) for each node for further detail, as well as these common errors: - - - `connection refused`: [Troubleshoot your network](#networking-troubleshooting). - - `not connected to cluster` or `node [id] belongs to cluster...`: See [Node Will Not Join Cluster](#node-will-not-join-cluster) on this page. - -## Something Else?
- -Try searching the rest of our docs for answers or using our other [support resources](support-resources.html), including: - -- [CockroachDB Community Forum](https://forum.cockroachlabs.com) -- [CockroachDB Community Slack](https://cockroachdb.slack.com) -- [StackOverflow](http://stackoverflow.com/questions/tagged/cockroachdb) -- [CockroachDB Support Portal](https://support.cockroachlabs.com) diff --git a/src/current/v2.0/cockroach-commands.md b/src/current/v2.0/cockroach-commands.md deleted file mode 100644 index 66129d05d7f..00000000000 --- a/src/current/v2.0/cockroach-commands.md +++ /dev/null @@ -1,44 +0,0 @@ ---- -title: Cockroach Commands -summary: Learn the commands for configuring, starting, and managing a CockroachDB cluster. -toc: true ---- - -This page introduces the `cockroach` commands for configuring, starting, and managing a CockroachDB cluster, as well as logging flags that can be set on any command and environment variables that can be used in place of certain flags. - -You can run `cockroach help` in your shell to get similar guidance. - - -## Commands - -Command | Usage ---------|---- -[`start`](start-a-node.html) | Start a node. -[`init`](initialize-a-cluster.html) | Initialize a cluster. -[`cert`](create-security-certificates.html) | Create CA, node, and client certificates. -[`quit`](stop-a-node.html) | Temporarily stop a node or permanently remove a node. -[`sql`](use-the-built-in-sql-client.html) | Use the built-in SQL client. -[`user`](create-and-manage-users.html) | Get, set, list, and remove users. -[`zone`](configure-replication-zones.html) | Configure the number and location of replicas for specific sets of data. -[`node`](view-node-details.html) | List node IDs, show their status, decommission nodes for removal, or recommission nodes. -[`dump`](sql-dump.html) | Back up a table by outputting the SQL statements required to recreate the table and all its rows. -[`debug zip`](debug-zip.html) | Generate a `.zip` file that can help Cockroach Labs troubleshoot issues with your cluster. -[`gen`](generate-cockroachdb-resources.html) | Generate manpages, a bash completion file, example SQL data, or an HAProxy configuration file for a running cluster. -[`version`](view-version-details.html) | Output CockroachDB version details. - -## Environment Variables - -For many common `cockroach` flags, such as `--port` and `--user`, you can set environment variables once instead of manually passing the flags each time you execute commands. - -- To find out which flags support environment variables, see the documentation for each [command](#commands). -- To output the current configuration of CockroachDB and other environment variables, run `env`. -- When a node uses environment variables on [startup](start-a-node.html), the variable names are printed to the node's logs; however, the variable values are not. - -CockroachDB prioritizes command flags, environment variables, and defaults as follows: - -1. If a flag is set for a command, CockroachDB uses it. -2. If a flag is not set for a command, CockroachDB uses the corresponding environment variable. -3. If neither the flag nor environment variable is set, CockroachDB uses the default for the flag. -4. If there's no flag default, CockroachDB gives an error. - -For more details, see [Client Connection Parameters](connection-parameters.html). 
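For example, here is a minimal sketch of that precedence, assuming an insecure node running locally; the port value and statement are illustrative only:

~~~ shell
# Set the port once via an environment variable:
$ export COCKROACH_PORT=26258

# No --port flag is given, so the environment variable is used:
$ cockroach sql --insecure -e "SHOW DATABASES"

# An explicit flag takes priority over the environment variable:
$ cockroach sql --insecure --port=26257 -e "SHOW DATABASES"
~~~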
diff --git a/src/current/v2.0/cockroachdb-in-comparison.md b/src/current/v2.0/cockroachdb-in-comparison.md deleted file mode 100644 index 73afbbb5f8f..00000000000 --- a/src/current/v2.0/cockroachdb-in-comparison.md +++ /dev/null @@ -1,260 +0,0 @@ ---- -title: CockroachDB in Comparison -summary: Learn how CockroachDB compares to other popular databases like PostgreSQL, Cassandra, MongoDB, Google Cloud Spanner, and more. -tags: mongodb, mysql, dynamodb -toc: false -comparison: true ---- - -This page shows you how key features of CockroachDB stack up against other databases. Hover over features for their intended meanings, and click CockroachDB answers to view related documentation. - -
[The original page rendered an interactive HTML comparison table here; only the feature names and the CockroachDB column survive extraction.]

Feature | CockroachDB
--------|------------
Automated Scaling | Yes
Automated Failover | Yes
Automated Repair | Yes
Strongly Consistent Replication | Yes
Consensus-Based Replication | Yes
Distributed Transactions | Yes
ACID Semantics | Yes
Eventually Consistent Reads | No
SQL | Yes
Open Source | Yes
Commercial Version | Optional
Support | Full
- - diff --git a/src/current/v2.0/collate.md b/src/current/v2.0/collate.md deleted file mode 100644 index c2aade174eb..00000000000 --- a/src/current/v2.0/collate.md +++ /dev/null @@ -1,124 +0,0 @@ ---- -title: COLLATE -summary: The COLLATE feature lets you sort strings according to language- and country-specific rules. -toc: true ---- - -The `COLLATE` feature lets you sort [`STRING`](string.html) values according to language- and country-specific rules, known as collations. - -Collated strings are important because different languages have [different rules for alphabetic order](https://en.wikipedia.org/wiki/Alphabetical_order#Language-specific_conventions), especially with respect to accented letters. For example, in German accented letters are sorted with their unaccented counterparts, while in Swedish they are placed at the end of the alphabet. A collation is a set of rules used for ordering and usually corresponds to a language, though some languages have multiple collations with different rules for sorting; for example, Portuguese has separate collations for Brazilian and European dialects (`pt-BR` and `pt-PT`, respectively). - - -## Details - -- Operations on collated strings cannot involve strings with a different collation or strings with no collation. However, it is possible to add or overwrite a collation on the fly. - -- Only use the collation feature when you need to sort strings by a specific collation. We recommend this because every time a collated string is constructed or loaded into memory, CockroachDB computes its collation key, whose size is linear in the length of the collated string, which requires additional resources. - -- Collated strings can be considerably larger than the corresponding uncollated strings, depending on the language and the string content. For example, strings containing the character `é` produce larger collation keys in the French locale than in Chinese. - -- Collated strings that are indexed require additional disk space as compared to uncollated strings. In the case of indexed collated strings, collation keys must be stored in addition to the strings from which they are derived, creating a constant factor overhead. - -## Supported Collations - -CockroachDB supports the collations provided by Go's [language package](https://godoc.org/golang.org/x/text/language#Tag). The `<collation>` argument is the BCP 47 language tag at the end of each line, immediately preceded by `//`. For example, Afrikaans is supported as the `af` collation. - -## SQL Syntax - -Collated strings are used as normal strings in SQL, but have a `COLLATE` clause appended to them. - -- **Column syntax**: `STRING COLLATE <collation>`. For example: - - ~~~ sql - > CREATE TABLE foo (a STRING COLLATE en PRIMARY KEY); - ~~~ - - {{site.data.alerts.callout_info}}You can also use any of the aliases for STRING.{{site.data.alerts.end}} - -- **Value syntax**: `<string> COLLATE <collation>`. For example: - - ~~~ sql - > INSERT INTO foo VALUES ('dog' COLLATE en); - ~~~ - -## Examples - -### Specify Collation for a Column - -You can set a default collation for all values in a `STRING` column.
- -For example, you can set a column's default collation to German (`de`): - -~~~ sql -> CREATE TABLE de_names (name STRING COLLATE de PRIMARY KEY); -~~~ - -When inserting values into this column, you must specify the collation for every value: - -~~~ sql -> INSERT INTO de_names VALUES ('Backhaus' COLLATE de), ('Bär' COLLATE de), ('Baz' COLLATE de); -~~~ - -The sort will now honor the `de` collation that treats *ä* as *a* in alphabetic sorting: - -~~~ sql -> SELECT * FROM de_names ORDER BY name; -~~~ -~~~ -+----------+ -| name | -+----------+ -| Backhaus | -| Bär | -| Baz | -+----------+ -~~~ - -### Order by Non-Default Collation - -You can sort a column using a specific collation instead of its default. - -For example, you receive different results if you order results by German (`de`) and Swedish (`sv`) collations: - -~~~ sql -> SELECT * FROM de_names ORDER BY name COLLATE sv; -~~~ -~~~ -+----------+ -| name | -+----------+ -| Backhaus | -| Baz | -| Bär | -+----------+ -~~~ - -### Ad-Hoc Collation Casting - -You can cast any string into a collation on the fly. - -~~~ sql -> SELECT 'A' COLLATE de < 'Ä' COLLATE de; -~~~ -~~~ -true -~~~ - -However, you cannot compare values with different collations: - -~~~ sql -> SELECT 'Ä' COLLATE sv < 'Ä' COLLATE de; -~~~ -~~~ -pq: unsupported comparison operator: < -~~~ - -You can also use casting to remove collations from values. - -~~~ sql -> SELECT CAST(name AS STRING) FROM de_names ORDER BY name; -~~~ - -## See Also - -[Data Types](data-types.html) diff --git a/src/current/v2.0/column-families.md b/src/current/v2.0/column-families.md deleted file mode 100644 index e212236a458..00000000000 --- a/src/current/v2.0/column-families.md +++ /dev/null @@ -1,89 +0,0 @@ ---- -title: Column Families -summary: A column family is a group of columns in a table that are stored as a single key-value pair in the underlying key-value store. -toc: true ---- - -A column family is a group of columns in a table that are stored as a single key-value pair in the [underlying key-value store](architecture/storage-layer.html). Column families reduce the number of keys stored in the key-value store, resulting in improved performance during [`INSERT`](insert.html), [`UPDATE`](update.html), and [`DELETE`](delete.html) operations. - -This page explains how CockroachDB organizes columns into families as well as cases in which you might want to manually override the default behavior. - -{{site.data.alerts.callout_info}} -[Secondary indexes](indexes.html) do not respect column families. All secondary indexes store values in a single column family. -{{site.data.alerts.end}} - -## Default Behavior - -When a table is created, all columns are stored as a single column family. - -This default approach ensures efficient key-value storage and performance in most cases. However, when frequently updated columns are grouped with seldom updated columns, the seldom updated columns are nonetheless rewritten on every update. Especially when the seldom updated columns are large, it's more performant to split them into a distinct family. - -## Manual Override - -### Assign Column Families on Table Creation - -To manually assign a column family on [table creation](create-table.html), use the `FAMILY` keyword. - -For example, let's say we want to create a table to store an immutable blob of data (`data BYTES`) with a last accessed timestamp (`last_accessed TIMESTAMP`).
Because we know that the blob of data will never get updated, we use the `FAMILY` keyword to break it into a separate column family: - -~~~ sql -> CREATE TABLE test ( - id INT PRIMARY KEY, - last_accessed TIMESTAMP, - data BYTES, - FAMILY f1 (id, last_accessed), - FAMILY f2 (data) -); - -> SHOW CREATE TABLE test; -~~~ - -~~~ -+-------+---------------------------------------------+ -| Table | CreateTable | -+-------+---------------------------------------------+ -| test | CREATE TABLE test ( | -| | id INT NOT NULL, | -| | last_accessed TIMESTAMP NULL, | -| | data BYTES NULL, | -| | CONSTRAINT "primary" PRIMARY KEY (id), | -| | FAMILY f1 (id, last_accessed), | -| | FAMILY f2 (data) | -| | ) | -+-------+---------------------------------------------+ -(1 row) -~~~ - -{{site.data.alerts.callout_info}}Columns that are part of the primary index are always assigned to the first column family. If you manually assign primary index columns to a family, it must therefore be the first family listed in the CREATE TABLE statement.{{site.data.alerts.end}} - -### Assign Column Families When Adding Columns - -When using the [`ALTER TABLE .. ADD COLUMN`](add-column.html) statement to add a column to a table, you can assign the column to a new or existing column family. - -- Use the `CREATE FAMILY` keyword to assign a new column to a **new family**. For example, the following would add a `data2 BYTES` column to the `test` table above and assign it to a new column family: - - ~~~ sql - > ALTER TABLE test ADD COLUMN data2 BYTES CREATE FAMILY f3; - ~~~ - -- Use the `FAMILY` keyword to assign a new column to an **existing family**. For example, the following would add a `name STRING` column to the `test` table above and assign it to family `f1`: - - ~~~ sql - > ALTER TABLE test ADD COLUMN name STRING FAMILY f1; - ~~~ - -- Use the `CREATE IF NOT EXISTS FAMILY` keyword to assign a new column to an **existing family or, if the family doesn't exist, to a new family**. For example, the following would assign the new column to the existing `f1` family; if that family didn't exist, it would create a new family and assign the column to it: - - ~~~ sql - > ALTER TABLE test ADD COLUMN name STRING CREATE IF NOT EXISTS FAMILY f1; - ~~~ - -## Compatibility with Past Releases - -Using the [`beta-20160714`](../releases/v1.0.html#beta-20160714) release makes your data incompatible with versions earlier than the [`beta-20160629`](../releases/v1.0.html#beta-20160629) release. - -## See Also - -- [`CREATE TABLE`](create-table.html) -- [`ADD COLUMN`](add-column.html) -- [Other SQL Statements](sql-statements.html) diff --git a/src/current/v2.0/commit-transaction.md b/src/current/v2.0/commit-transaction.md deleted file mode 100644 index 32d23df6786..00000000000 --- a/src/current/v2.0/commit-transaction.md +++ /dev/null @@ -1,68 +0,0 @@ ---- -title: COMMIT -summary: Commit a transaction with the COMMIT statement in CockroachDB. -toc: true ---- - -The `COMMIT` [statement](sql-statements.html) commits the current [transaction](transactions.html) or, when using [client-side transaction retries](transactions.html#client-side-transaction-retries), clears the connection to allow new transactions to begin. - -When using [client-side transaction retries](transactions.html#client-side-transaction-retries), statements issued after [`SAVEPOINT cockroach_restart`](savepoint.html) are committed when [`RELEASE SAVEPOINT cockroach_restart`](release-savepoint.html) is issued instead of `COMMIT`.
However, you must still issue a `COMMIT` statement to clear the connection for the next transaction. - -For non-retryable transactions, if statements in the transaction [generated any errors](transactions.html#error-handling), `COMMIT` is equivalent to `ROLLBACK`, which aborts the transaction and discards *all* updates made by its statements. - - -## Synopsis - -
      -{% include {{ page.version.version }}/sql/diagrams/commit_transaction.html %} -
- -## Required Privileges - -No [privileges](privileges.html) are required to commit a transaction. However, privileges are required for each statement within a transaction. - -## Aliases - -In CockroachDB, `END` is an alias for the `COMMIT` statement. - -## Example - -### Commit a Transaction - -How you commit transactions depends on how your application handles [transaction retries](transactions.html#transaction-retries). - -#### Client-Side Retryable Transactions - -When using [client-side transaction retries](transactions.html#client-side-transaction-retries), statements are committed by [`RELEASE SAVEPOINT cockroach_restart`](release-savepoint.html). `COMMIT` itself only clears the connection for the next transaction. - -~~~ sql -> BEGIN; - -> SAVEPOINT cockroach_restart; - -> UPDATE products SET inventory = 0 WHERE sku = '8675309'; - -> INSERT INTO orders (customer, sku, status) VALUES (1001, '8675309', 'new'); - -> RELEASE SAVEPOINT cockroach_restart; - -> COMMIT; -~~~ - -{{site.data.alerts.callout_danger}}This example assumes you're using client-side intervention to handle transaction retries.{{site.data.alerts.end}} - -#### Automatically Retried Transactions - -If you are using transactions that CockroachDB will [automatically retry](transactions.html#automatic-retries) (i.e., all statements sent in a single batch), commit the transaction with `COMMIT`. - -~~~ sql -> BEGIN; UPDATE products SET inventory = 100 WHERE sku = '8675309'; UPDATE products SET inventory = 100 WHERE sku = '8675310'; COMMIT; -~~~ - -## See Also - -- [Transactions](transactions.html) -- [`BEGIN`](begin-transaction.html) -- [`RELEASE SAVEPOINT`](release-savepoint.html) -- [`ROLLBACK`](rollback-transaction.html) -- [`SAVEPOINT`](savepoint.html) diff --git a/src/current/v2.0/common-errors.md b/src/current/v2.0/common-errors.md deleted file mode 100644 index 59a137aafa4..00000000000 --- a/src/current/v2.0/common-errors.md +++ /dev/null @@ -1,194 +0,0 @@ ---- -title: Common Errors -summary: Understand and resolve common error messages written to stderr or logs. -toc: false ---- - -This page helps you understand and resolve error messages written to `stderr` or your [logs](debug-and-error-logs.html).
- -Topic | Message -------|-------- -Client connection | [`connection refused`](#connection-refused) -Client connection | [`node is running secure mode, SSL connection required`](#node-is-running-secure-mode-ssl-connection-required) -Transactions | [`retry transaction`](#retry-transaction) -Node startup | [`node belongs to cluster <cluster ID> but is attempting to connect to a gossip network for cluster <another cluster ID>`](#node-belongs-to-cluster-cluster-id-but-is-attempting-to-connect-to-a-gossip-network-for-cluster-another-cluster-id) -Node configuration | [`clock synchronization error: this node is more than 500ms away from at least half of the known nodes`](#clock-synchronization-error-this-node-is-more-than-500ms-away-from-at-least-half-of-the-known-nodes) -Node configuration | [`open file descriptor limit of <number> is under the minimum required <number>`](#open-file-descriptor-limit-of-number-is-under-the-minimum-required-number) -Replication | [`replicas failing with "0 of 1 store with an attribute matching []; likely not enough nodes in cluster"`](#replicas-failing-with-0-of-1-store-with-an-attribute-matching-likely-not-enough-nodes-in-cluster) -Deadline exceeded | [`context deadline exceeded`](#context-deadline-exceeded) -Ambiguous results | [`result is ambiguous`](#result-is-ambiguous) - -## connection refused - -This message indicates a client is trying to connect to a node that is either not running or is not listening on the specified interfaces (i.e., hostname or port). - -To resolve this issue, do one of the following: - -- If the node hasn't yet been started, [start the node](start-a-node.html). -- If you specified a `--host` flag when starting the node, you must include it with all other [`cockroach` commands](cockroach-commands.html) or change the `COCKROACH_HOST` environment variable. -- If you specified a `--port` flag when starting the node, you must include it with all other [`cockroach` commands](cockroach-commands.html) or change the `COCKROACH_PORT` environment variable. - -If you're not sure what the `--host` and `--port` values might have been, you can look in the node's [logs](debug-and-error-logs.html). If necessary, you can also terminate the `cockroach` process, and then restart the node: - -{% include copy-clipboard.html %} -~~~ shell -$ pkill cockroach -~~~ - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach start [flags] -~~~ - -## node is running secure mode, SSL connection required - -This message indicates that the cluster is using TLS encryption to protect network communication, and the client is trying to open a connection without using the required TLS certificates. - -To resolve this issue, use the [`cockroach cert client-create`](create-security-certificates.html) command to generate a client certificate and key for the user trying to connect. For a secure deployment walkthrough, including generating security certificates and connecting clients, see [Manual Deployment](manual-deployment.html). - -## retry transaction - -Messages with the error code `40001` and the string `retry transaction` indicate that a transaction failed because it conflicted with another concurrent or recent transaction accessing the same data. The transaction needs to be retried by the client. See [client-side transaction retries](transactions.html#client-side-transaction-retries) for more details.
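As a sketch of the client-side retry protocol, reusing the `products` example from the `COMMIT` page above (the table and statement are illustrative only):

~~~ sql
> BEGIN;

> SAVEPOINT cockroach_restart;

> UPDATE products SET inventory = 0 WHERE sku = '8675309';

-- If any statement fails with error code 40001, roll back to the
-- savepoint and reissue the statements:
> ROLLBACK TO SAVEPOINT cockroach_restart;

-- Once all statements succeed, commit and clear the connection:
> RELEASE SAVEPOINT cockroach_restart;

> COMMIT;
~~~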
- -## node belongs to cluster \<cluster ID\> but is attempting to connect to a gossip network for cluster \<another cluster ID\> - -This message usually indicates that a node tried to connect to a cluster, but the node is already a member of a different cluster. This is determined by metadata in the node's data directory. To resolve this issue, do one of the following: - -- Choose a different directory to store the CockroachDB data: - - ~~~ shell - $ cockroach start [flags] --store=[new directory] --join=[cluster host]:26257 - ~~~ - -- Remove the existing directory and start a node joining the cluster again: - - ~~~ shell - $ rm -r cockroach-data/ - ~~~ - - ~~~ shell - $ cockroach start [flags] --join=[cluster host]:26257 - ~~~ - -This message can also occur in the following scenario: - -1. The first node of a cluster is started without the `--join` flag. -2. Subsequent nodes are started with the `--join` flag pointing to the first node. -3. The first node is stopped and restarted after the node's data directory is deleted or replaced with a new directory. This causes the first node to initialize a new cluster. -4. The other nodes, still communicating with the first node, notice that their cluster ID and the first node's cluster ID do not match. - -To avoid this scenario, update your scripts to use the new, recommended approach to initializing a cluster: - -1. Start each initial node of the cluster with the `--join` flag set to addresses of 3 to 5 of the initial nodes. -2. Run the `cockroach init` command against any node to perform a one-time cluster initialization. -3. When adding more nodes, start them with the same `--join` flag as used for the initial nodes. - -For more guidance, see this [example](start-a-node.html#start-a-multi-node-cluster). - -## open file descriptor limit of \<number\> is under the minimum required \<number\> - -CockroachDB can use a large number of open file descriptors, often more than is available by default. This message indicates that the machine on which a CockroachDB node is running is under CockroachDB's recommended limits. - -For more details on CockroachDB's file descriptor limits and instructions on increasing the limit on various platforms, see [File Descriptors Limit](recommended-production-settings.html#file-descriptors-limit). - -## replicas failing with "0 of 1 store with an attribute matching []; likely not enough nodes in cluster" - -### When running a single-node cluster - -When running a single-node CockroachDB cluster, an error about replicas failing will eventually show up in the node's log files, for example: - -~~~ shell -E160407 09:53:50.337328 storage/queue.go:511 [replicate] 7 replicas failing with "0 of 1 store with an attribute matching []; likely not enough nodes in cluster" -~~~ - -This happens because CockroachDB expects three nodes by default.
If you do not intend to add additional nodes, you can stop this error by updating your default zone configuration to expect only one node: - -{% include copy-clipboard.html %} -~~~ shell -# Insecure cluster: -$ cockroach zone set .default --insecure --disable-replication -~~~ - -{% include copy-clipboard.html %} -~~~ shell -# Secure cluster: -$ cockroach zone set .default --certs-dir=[path to certs directory] --disable-replication -~~~ - -The `--disable-replication` flag automatically reduces the zone's replica count to 1, but you can do this manually as well: - -{% include copy-clipboard.html %} -~~~ shell -# Insecure cluster: -$ echo 'num_replicas: 1' | cockroach zone set .default --insecure -f - -~~~ - -{% include copy-clipboard.html %} -~~~ shell -# Secure cluster: -$ echo 'num_replicas: 1' | cockroach zone set .default --certs-dir=[path to certs directory] -f - -~~~ - -See [Configure Replication Zones](configure-replication-zones.html) for more details. - -### When running a multi-node cluster - -When running a multi-node CockroachDB cluster, if you see an error like the one above about replicas failing, some nodes might not be able to talk to each other. For recommended actions, see [Cluster Setup Troubleshooting](cluster-setup-troubleshooting.html#replication-error-in-a-multi-node-cluster). - -## clock synchronization error: this node is more than 500ms away from at least half of the known nodes - -This error indicates that a node has spontaneously shut down because it detected that its clock is out of sync with at least half of the other nodes in the cluster by 80% of the maximum offset allowed (500ms by default). CockroachDB requires moderate levels of [clock synchronization](recommended-production-settings.html#clock-synchronization) to preserve data consistency, so the node shutting down in this way avoids the risk of consistency anomalies. - -To prevent this from happening, you should run clock synchronization software on each node. For guidance on synchronizing clocks, see the tutorial for your deployment environment: - -Environment | Recommended Approach -------------|--------------------- -[Manual](deploy-cockroachdb-on-premises.html#step-1-synchronize-clocks) | Use NTP with Google's external NTP service. -[AWS](deploy-cockroachdb-on-aws.html#step-3-synchronize-clocks) | Use the Amazon Time Sync Service. -[Azure](deploy-cockroachdb-on-microsoft-azure.html#step-3-synchronize-clocks) | Disable Hyper-V time synchronization and use NTP with Google's external NTP service. -[Digital Ocean](deploy-cockroachdb-on-digital-ocean.html#step-2-synchronize-clocks) | Use NTP with Google's external NTP service. -[GCE](deploy-cockroachdb-on-google-cloud-platform.html#step-3-synchronize-clocks) | Use NTP with Google's internal NTP service. - -## context deadline exceeded - -This message occurs when a component of CockroachDB gives up because it was relying on another component that has not behaved as expected, for example, another node dropped a network connection. To investigate further, look in the node's logs for the primary failure that is the root cause. - -## result is ambiguous - -In a distributed system, some errors can have ambiguous results. For -example, if you receive a `connection closed` error while processing a -`COMMIT` statement, you cannot tell whether the transaction -successfully committed or not. 
These errors are possible in any -database, but CockroachDB is somewhat more likely to produce them than -other databases because ambiguous results can be caused by failures -between the nodes of a cluster. These errors are reported with the -PostgreSQL error code `40003` (`statement_completion_unknown`) and the -message `result is ambiguous`. - -Ambiguous errors can be caused by nodes crashing, network failures, or -timeouts. If you experience a lot of these errors when things are -otherwise stable, look for performance issues. Note that ambiguity is -only possible for the last statement of a transaction (`COMMIT` or -`RELEASE SAVEPOINT`) or for statements outside a transaction. If a connection drops during a transaction that has not yet tried to commit, the transaction will definitely be aborted. - -In general, you should handle ambiguous errors the same way as -`connection closed` errors. If your transaction is -[idempotent](https://en.wikipedia.org/wiki/Idempotence#Computer_science_meaning), -it is safe to retry it on ambiguous errors. `UPSERT` operations are -typically idempotent, and other transactions can be written to be -idempotent by verifying the expected state before performing any -writes. Increment operations such as `UPDATE my_table SET x=x+1 WHERE -id=$1` are typical examples of operations that cannot easily be made -idempotent. If your transaction is not idempotent, then you should -decide whether to retry or not based on whether it would be better for -your application to apply the transaction twice or return an error to -the user. - -## Something Else? - -Try searching the rest of our docs for answers or using our other [support resources](support-resources.html), including: - -- [CockroachDB Community Forum](https://forum.cockroachlabs.com) -- [CockroachDB Community Slack](https://cockroachdb.slack.com) -- [StackOverflow](http://stackoverflow.com/questions/tagged/cockroachdb) -- [CockroachDB Support Portal](https://support.cockroachlabs.com) diff --git a/src/current/v2.0/common-table-expressions.md b/src/current/v2.0/common-table-expressions.md deleted file mode 100644 index 489240b7d65..00000000000 --- a/src/current/v2.0/common-table-expressions.md +++ /dev/null @@ -1,178 +0,0 @@ ---- -title: Common Table Expressions -summary: Common Table Expressions (CTEs) simplify the definition and use of subqueries -toc: true -toc_not_nested: true ---- - - -New in v2.0: -Common Table Expressions, or CTEs, provide a shorthand name to a -possibly complex [subquery](subqueries.html) before it is used in a -larger query context. This improves readability of the SQL code. - -CTEs can be used in combination with [`SELECT` -clauses](select-clause.html) and [`INSERT`](insert.html), -[`DELETE`](delete.html), [`UPDATE`](update.html) and -[`UPSERT`](upsert.html) statements. - - -## Synopsis - -
      {% include {{ page.version.version }}/sql/diagrams/with_clause.html %}
      - -
- -## Parameters - -Parameter | Description -----------|------------ -`table_alias_name` | The name to use to refer to the common table expression from the accompanying query or statement. -`name` | A name for one of the columns in the newly defined common table expression. -`preparable_stmt` | The statement or subquery to use as a common table expression. - -## Overview - -A query or statement of the form `WITH x AS y IN z` creates the temporary table name `x` for the results of the subquery `y`, to be reused in the context of the query `z`. - -For example: - -{% include copy-clipboard.html %} -~~~ sql -> WITH o AS (SELECT * FROM orders WHERE id IN (33, 542, 112)) - SELECT * - FROM customers AS c, o - WHERE o.customer_id = c.id; -~~~ - -In this example, the `WITH` clause defines the temporary name `o` for the subquery over `orders`, and that name becomes a valid table name for use in any [table expression](table-expressions.html) of the subsequent `SELECT` clause. - -This query is equivalent to, but arguably simpler to read than: - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * - FROM customers AS c, (SELECT * FROM orders WHERE id IN (33, 542, 112)) AS o - WHERE o.customer_id = c.id; -~~~ - -It is also possible to define multiple common table expressions simultaneously with a single `WITH` clause, separated by commas. Later subqueries can refer to earlier subqueries by name. For example, the following query is equivalent to the two examples above: - -{% include copy-clipboard.html %} -~~~ sql -> WITH o AS (SELECT * FROM orders WHERE id IN (33, 542, 112)), - results AS (SELECT * FROM customers AS c, o WHERE o.customer_id = c.id) - SELECT * FROM results; -~~~ - -In this example, the second CTE `results` refers to the first CTE `o` by name. The final query refers to the CTE `results`. - -## Nested `WITH` Clauses - -It is possible to use a `WITH` clause in a subquery, or even a `WITH` clause within another `WITH` clause. For example: - -{% include copy-clipboard.html %} -~~~ sql -> WITH a AS (SELECT * FROM (WITH b AS (SELECT * FROM c) - SELECT * FROM b)) - SELECT * FROM a; -~~~ - -When analyzing [table expressions](table-expressions.html) that mention a CTE name, CockroachDB will choose the CTE definition that is closest to the table expression. For example: - -{% include copy-clipboard.html %} -~~~ sql -> WITH a AS (TABLE x), - b AS (WITH a AS (TABLE y) - SELECT * FROM a) - SELECT * FROM b; -~~~ - -In this example, the inner subquery `SELECT * FROM a` will select from table `y` (closest `WITH` clause), not from table `x`. - -## Data Modifying Statements - -It is possible to use a data-modifying statement (`INSERT`, `DELETE`, etc.) as a common table expression. - -For example: - -{% include copy-clipboard.html %} -~~~ sql -> WITH v AS (INSERT INTO t(x) VALUES (1), (2), (3) RETURNING x) - SELECT x+1 FROM v; -~~~ - -However, the following restriction applies: only `WITH` sub-clauses at the top level of a SQL statement can contain data-modifying statements. The example above is valid, but the following is not: - -{% include copy-clipboard.html %} -~~~ sql -> SELECT x+1 FROM - (WITH v AS (INSERT INTO t(x) VALUES (1), (2), (3) RETURNING x) - SELECT * FROM v); -~~~ - -This is not valid because the `WITH` clause that defines an `INSERT` common table expression is not at the top level of the query.
- -{{site.data.alerts.callout_info}} -If a common table expression contains a data-modifying statement (`INSERT`, `DELETE`, etc.), the modifications are performed fully even if only part of the results are used, e.g., with `LIMIT`. See [Data Writes in Subqueries](subqueries.html#data-writes-in-subqueries) for details. -{{site.data.alerts.end}} -
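For example, here is a minimal sketch of this behavior, reusing the table `t` from the example above: even though the outer query reads back only one row, all three rows are inserted.

~~~ sql
> WITH v AS (INSERT INTO t(x) VALUES (1), (2), (3) RETURNING x)
  SELECT * FROM v LIMIT 1;
~~~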
      - -## Known Limitations - -{{site.data.alerts.callout_info}} -The following limitations may be lifted -in a future version of CockroachDB. -{{site.data.alerts.end}} - -
      - -### Referring to a CTE by name more than once - -{% include {{ page.version.version }}/known-limitations/cte-by-name.md %} - -### Using CTEs with data-modifying statements - -{% include {{ page.version.version }}/known-limitations/cte-with-dml.md %} - -### Using CTEs with views - -{% include {{ page.version.version }}/known-limitations/cte-with-view.md %} - -### Using CTEs with `VALUES` clauses - -{% include {{ page.version.version }}/known-limitations/cte-in-values-clause.md %} - -### Using CTEs with Set Operations - -{% include {{ page.version.version }}/known-limitations/cte-in-set-expression.md %} - -## See also - -- [Subqueries](subqueries.html) -- [Selection Queries](selection-queries.html) -- [Table Expressions](table-expressions.html) -- [`EXPLAIN`](explain.html) diff --git a/src/current/v2.0/computed-columns.md b/src/current/v2.0/computed-columns.md deleted file mode 100644 index da6ae2ae50e..00000000000 --- a/src/current/v2.0/computed-columns.md +++ /dev/null @@ -1,69 +0,0 @@ ---- -title: Computed Columns -summary: A computed column stores data generated by an expression included in the column definition. -toc: true ---- - -New in v2.0: A computed column stores data generated from other columns by a [scalar expression](scalar-expressions.html) included in the column definition. - - -## Why use computed columns? - -Computed columns are especially useful when used with [partitioning](partitioning.html), [`JSONB`](jsonb.html) columns, or [secondary indexes](indexes.html). - -- **Partitioning** requires that partitions are defined using columns that are a prefix of the [primary key](primary-key.html). In the case of geo-partitioning, some applications will want to collapse the number of possible values in this column, to make certain classes of queries more performant. For example, if a users table has a country and state column, then you can make a stored computed column locality with a reduced domain for use in partitioning. For more information, see the [partitioning example](#create-a-table-with-geo-partitions-and-a-computed-column) below. - -- **JSONB** columns are used for storing semi-structured `JSONB` data. When the table's primary information is stored in `JSONB`, it's useful to index a particular field of the `JSONB` document. In particular, computed columns allow for the following use case: a two-column table with a `PRIMARY KEY` column and a `payload` column, whose primary key is computed as some field from the `payload` column. This alleviates the need to manually separate your primary keys from your JSON blobs. For more information, see the [`JSONB` example](#create-a-table-with-a-jsonb-column-and-a-computed-column) below. - -- **Secondary indexes** can be created on computed columns, which is especially useful when a table is frequently sorted. See the [secondary indexes example](#create-a-table-with-a-secondary-index-on-a-computed-column) below. - -## Considerations - -Computed columns: - -- Cannot be added after a table is created. Follow the [GitHub issue](https://github.com/cockroachdb/cockroach/issues/22652) for updates on this limitation. -- Cannot be used to generate other computed columns. -- Cannot be a [foreign key](foreign-key.html) reference. -- Behave like any other column, with the exception that they cannot be written to directly. -- Are mutually exclusive with [`DEFAULT`](default-value.html). - -## Creation - -Computed columns can only be added at the time of [table creation](create-table.html). 
Use the following syntax: - -~~~ -column_name <type> AS (<expr>) STORED -~~~ - -Parameter | Description -----------|------------ -`column_name` | The [name/identifier](keywords-and-identifiers.html#identifiers) of the computed column. -`<type>` | The [data type](data-types.html) of the computed column. -`<expr>` | The pure [scalar expression](scalar-expressions.html) used to compute column values. Any functions marked as `impure`, such as `now()` or `nextval()`, cannot be used. -`STORED` | _(Required)_ The computed column is stored alongside other columns. - -## Examples - -### Create a Table with a Computed Column - -{% include {{ page.version.version }}/computed-columns/simple.md %} - -### Create a Table with Geo-partitions and a Computed Column - -{% include {{ page.version.version }}/computed-columns/partitioning.md %} The `locality` values can then be used for geo-partitioning. - -### Create a Table with a `JSONB` Column and a Computed Column - -{% include {{ page.version.version }}/computed-columns/jsonb.md %} - -### Create a Table with a Secondary Index on a Computed Column - -{% include {{ page.version.version }}/computed-columns/secondary-index.md %} - -## See Also - -- [Scalar Expressions](scalar-expressions.html) -- [Information Schema](information-schema.html) -- [`CREATE TABLE`](create-table.html) -- [`JSONB`](jsonb.html) -- [Define Table Partitions (Enterprise)](partitioning.html) diff --git a/src/current/v2.0/configure-replication-zones.md b/src/current/v2.0/configure-replication-zones.md deleted file mode 100644 index 0b03ce55bed..00000000000 --- a/src/current/v2.0/configure-replication-zones.md +++ /dev/null @@ -1,823 +0,0 @@ ---- -title: Configure Replication Zones -summary: In CockroachDB, you use replication zones to control the number and location of replicas for specific sets of data. -keywords: ttl, time to live, availability zone -toc: true ---- - -In CockroachDB, you use **replication zones** to control the number and location of replicas for specific sets of data, both when replicas are first added and when they are rebalanced to maintain cluster equilibrium. Initially, there are some special pre-configured replication zones for internal system data along with a default replication zone that applies to the rest of the cluster. You can adjust these pre-configured zones as well as add zones for individual databases, tables and secondary indexes, and rows ([enterprise-only](enterprise-licensing.html)) as needed. For example, you might use the default zone to replicate most data in a cluster normally within a single datacenter, while creating a specific zone to more highly replicate a certain database or table across multiple datacenters and geographies. - -This page explains how replication zones work and how to use the `cockroach zone` [command](cockroach-commands.html) to configure them. - -{{site.data.alerts.callout_info}} -Currently, only the `root` user can configure replication zones. -{{site.data.alerts.end}} - -## Replication Zone Levels - -### For Table Data - -There are five replication zone levels for [**table data**](architecture/distribution-layer.html#table-data) in a cluster, listed from least to most granular: - -Level | Description -------|------------ -Cluster | CockroachDB comes with a pre-configured `.default` replication zone that applies to all table data in the cluster not constrained by a database, table, or row-specific replication zone. This zone can be adjusted but not removed.
See [View the Default Replication Zone](#view-the-default-replication-zone) and [Edit the Default Replication Zone](#edit-the-default-replication-zone) for more details. -Database | You can add replication zones for specific databases. See [Create a Replication Zone for a Database](#create-a-replication-zone-for-a-database) for more details. -Table | You can add replication zones for specific tables. See [Create a Replication Zone for a Table](#create-a-replication-zone-for-a-table). -Index ([Enterprise-only](enterprise-licensing.html)) | The [secondary indexes](indexes.html) on a table will automatically use the replication zone for the table. However, with an enterprise license, you can add distinct replication zones for secondary indexes. See [Create a Replication Zone for a Secondary Index](#create-a-replication-zone-for-a-secondary-index) for more details. -Row ([Enterprise-only](enterprise-licensing.html)) | You can add replication zones for specific rows in a table or secondary index by [defining table partitions](partitioning.html). See [Create a Replication Zone for a Table Partition](#create-a-replication-zone-for-a-table-or-secondary-index-partition-new-in-v2-0) for more details. - -### For System Data - -In addition, CockroachDB stores internal [**system data**](architecture/distribution-layer.html#monolithic-sorted-map-structure) in what are called system ranges. There are two replication zone levels for this internal system data, listed from least to most granular: - -Level | Description -------|------------ -Cluster | The `.default` replication zone mentioned above also applies to all system ranges not constrained by a more specific replication zone. -System Range | CockroachDB comes with pre-configured replication zones for the "meta" and "liveness" system ranges. If necessary, you can add replication zones for the "timeseries" range and other "system" ranges as well. See [Create a Replication Zone for a System Range](#create-a-replication-zone-for-a-system-range) for more details.

      CockroachDB also comes with a pre-configured replication zone for one internal table, `system.jobs`, which stores metadata about long-running jobs such as schema changes and backups. Historical queries are never run against this table and the rows in it are updated frequently, so the pre-configured zone gives this table a lower-than-default `ttlseconds`. - -### Level Priorities - -When replicating data, whether table or system, CockroachDB always uses the most granular replication zone available. For example, for a piece of user data: - -1. If there's a replication zone for the row, CockroachDB uses it. -2. If there's no applicable row replication zone and the row is from a secondary index, CockroachDB uses the secondary index replication zone. -3. If the row isn't from a secondary index or there is no applicable secondary index replication zone, CockroachDB uses the table replication zone. -4. If there's no applicable table replication zone, CockroachDB uses the database replication zone. -5. If there's no applicable database replication zone, CockroachDB uses the `.default` cluster-wide replication zone. - -{{site.data.alerts.callout_danger}} -{% include {{page.version.version}}/known-limitations/system-range-replication.md %} -{{site.data.alerts.end}} - -## Replication Zone Format - -A replication zone is specified in [YAML](https://en.wikipedia.org/wiki/YAML) format and looks like this: - -~~~ yaml -range_min_bytes: -range_max_bytes: -gc: - ttlseconds: -num_replicas: -constraints: -~~~ - -Field | Description -------|------------ -`range_min_bytes` | Not yet implemented. -`range_max_bytes` | The maximum size, in bytes, for a range of data in the zone. When a range reaches this size, CockroachDB will split it into two ranges.

**Default:** `67108864` (64MiB) -`ttlseconds` | The number of seconds overwritten values will be retained before garbage collection. Smaller values can save disk space if values are frequently overwritten; larger values increase the range allowed for `AS OF SYSTEM TIME` queries, also known as [Time Travel Queries](select-clause.html#select-historical-data-time-travel).

      It is not recommended to set this below `600` (10 minutes); doing so will cause problems for long-running queries. Also, since all versions of a row are stored in a single range that never splits, it is not recommended to set this so high that all the changes to a row in that time period could add up to more than 64MiB; such oversized ranges could contribute to the server running out of memory or other problems.

      **Default:** `90000` (25 hours) -`num_replicas` | The number of replicas in the zone.

      **Default:** `3` -`constraints` | A JSON object or array of required and/or prohibited constraints influencing the location of replicas. See [Types of Constraints](#types-of-constraints) and [Scope of Constraints](#scope-of-constraints) for more details.

      **Default:** No constraints, with CockroachDB locating each replica on a unique node and attempting to spread replicas evenly across localities. - -## Replication Constraints - -The location of replicas, both when they are first added and when they are rebalanced to maintain cluster equilibrium, is based on the interplay between descriptive attributes assigned to nodes and constraints set in zone configurations. - -{{site.data.alerts.callout_success}}For demonstrations of how to set node attributes and replication constraints in different scenarios, see Scenario-based Examples below.{{site.data.alerts.end}} - -### Descriptive Attributes Assigned to Nodes - -When starting a node with the [`cockroach start`](start-a-node.html) command, you can assign the following types of descriptive attributes: - -Attribute Type | Description ----------------|------------ -**Node Locality** | Using the `--locality` flag, you can assign arbitrary key-value pairs that describe the locality of the node. Locality might include country, region, datacenter, rack, etc. The key-value pairs should be ordered from most inclusive to least inclusive (e.g., country before datacenter before rack), and the keys and the order of key-value pairs must be the same on all nodes. It's typically better to include more pairs than fewer. For example:

      `--locality=region=east,datacenter=us-east-1`
      `--locality=region=east,datacenter=us-east-2`
      `--locality=region=west,datacenter=us-west-1`

      CockroachDB attempts to spread replicas evenly across the cluster based on locality, with the order determining the priority. However, locality can be used to influence the location of data replicas in various ways using replication zones.

      When there is high latency between nodes, CockroachDB also uses locality to move range leases closer to the current workload, reducing network round trips and improving read performance. See [Follow-the-workload](demo-follow-the-workload.html) for more details. -**Node Capability** | Using the `--attrs` flag, you can specify node capability, which might include specialized hardware or number of cores, for example:

      `--attrs=ram:64gb` -**Store Type/Capability** | Using the `attrs` field of the `--store` flag, you can specify disk type or capability, for example:

      `--store=path=/mnt/ssd01,attrs=ssd`
      `--store=path=/mnt/hda1,attrs=hdd:7200rpm` - -### Types of Constraints - -The node-level and store-level descriptive attributes mentioned above can be used as the following types of constraints in replication zones to influence the location of replicas. However, note the following general guidance: - -- When locality is the only consideration for replication, it's recommended to set locality on nodes without specifying any constraints in zone configurations. In the absence of constraints, CockroachDB attempts to spread replicas evenly across the cluster based on locality. -- Required and prohibited constraints are useful in special situations where, for example, data must or must not be stored in a specific country or on a specific type of machine. - -Constraint Type | Description | Syntax -----------------|-------------|------- -**Required** | When placing replicas, the cluster will consider only nodes/stores with matching attributes or localities. When there are no matching nodes/stores, new replicas will not be added. | `+ssd` -**Prohibited** | When placing replicas, the cluster will ignore nodes/stores with matching attributes or localities. When there are no alternate nodes/stores, new replicas will not be added. | `-ssd` - -### Scope of Constraints - -Constraints can be specified such that they apply to all replicas in a zone or such that different constraints apply to different replicas, meaning you can effectively pick the exact location of each replica. - -Constraint Scope | Description | Syntax ------------------|-------------|------- -**All Replicas** | Constraints specified using JSON array syntax apply to all replicas in every range that's part of the replication zone. | `constraints: [+ssd, -region=west]` -**Per-Replica** | Multiple lists of constraints can be provided in a JSON object, mapping each list of constraints to an integer number of replicas in each range that the constraints should apply to.

The total number of replicas constrained cannot be greater than the total number of replicas for the zone (`num_replicas`). However, if the total number of replicas constrained is less than the total number of replicas for the zone, the non-constrained replicas will be allowed on any nodes/stores. | `constraints: {"+ssd,-region=west": 2, "+region=east": 1}` - -## Node/Replica Recommendations - -See [Cluster Topology](recommended-production-settings.html#cluster-topology) recommendations for production deployments. - -## Subcommands - -Subcommand | Usage -----------|------ -`ls` | List all replication zones. -`get` | View the YAML contents of a replication zone. -`set` | Create or edit a replication zone. -`rm` | Remove a replication zone. - -## Synopsis - -~~~ shell -# List all replication zones: -$ cockroach zone ls - -# View the default replication zone for the cluster: -$ cockroach zone get .default - -# View the replication zone for a database: -$ cockroach zone get <database> - -# View the replication zone for a table: -$ cockroach zone get <database.table> - -# View the replication zone for an index: -$ cockroach zone get <database.table@index> - -# View the replication zone for a table or index partition: -$ cockroach zone get <database.table.partition> - -# Edit the default replication zone for the cluster: -$ cockroach zone set .default --file=<file.yaml> - -# Create/edit the replication zone for a database: -$ cockroach zone set <database> --file=<file.yaml> - -# Create/edit the replication zone for a table: -$ cockroach zone set <database.table> --file=<file.yaml> - -# Create/edit the replication zone for an index: -$ cockroach zone set <database.table@index> --file=<file.yaml> - -# Create/edit the replication zone for a table or index partition: -$ cockroach zone set <database.table.partition> --file=<file.yaml> - -# Remove the replication zone for a database: -$ cockroach zone rm <database> - -# Remove the replication zone for a table: -$ cockroach zone rm <database.table> - -# Remove the replication zone for an index: -$ cockroach zone rm <database.table@index> - -# Remove the replication zone for a table or index partition: -$ cockroach zone rm <database.table.partition> - -# View help: -$ cockroach zone --help -$ cockroach zone ls --help -$ cockroach zone get --help -$ cockroach zone set --help -$ cockroach zone rm --help -~~~ - -## Flags - -The `zone` command and subcommands support the following [general-use](#general) and [logging](#logging) flags. - -### General - -Flag | Description -----|------------ -`--disable-replication` | Disable replication in the zone by setting the zone's replica count to 1. This is equivalent to setting `num_replicas: 1`. -`--echo-sql` | New in v1.1: Reveal the SQL statements sent implicitly by the command-line utility. For a demonstration, see the [example](#reveal-the-sql-statements-sent-implicitly-by-the-command-line-utility) below. -`--file`
      `-f` | The path to the [YAML file](#replication-zone-format) defining the zone configuration. To pass the zone configuration via the standard input, set this flag to `-`.

This flag is relevant only for the `set` subcommand. - -### Client Connection - -{% include {{ page.version.version }}/sql/connection-parameters-with-url.md %} - -See [Client Connection Parameters](connection-parameters.html) for more details. - -Currently, only the `root` user can configure replication zones and the `--database` flag is not effective. - -### Logging - -By default, the `zone` command logs errors to `stderr`. - -If you need to troubleshoot this command's behavior, you can change its [logging behavior](debug-and-error-logs.html). - -## Basic Examples - -These examples focus on the basic approach and syntax for working with zone configuration. For examples demonstrating how to use constraints, see [Scenario-based Examples](#scenario-based-examples). - -### List the Pre-Configured Replication Zones - -New in v2.0: Newly created CockroachDB clusters start with some special pre-configured replication zones: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach zone ls --insecure -~~~ - -~~~ -.default -.liveness -.meta -system.jobs -~~~ - -### View the Default Replication Zone - -The cluster-wide replication zone (`.default`) is initially set to replicate data to any three nodes in your cluster, with ranges in each replica splitting once they get larger than 67108864 bytes. - -To view the default replication zone, use the `cockroach zone get .default` command with appropriate flags: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach zone get .default --insecure -~~~ - -~~~ -.default -range_min_bytes: 1048576 -range_max_bytes: 67108864 -gc: - ttlseconds: 86400 -num_replicas: 3 -constraints: [] -~~~ - -### Edit the Default Replication Zone - -{{site.data.alerts.callout_danger}} -{% include {{page.version.version}}/known-limitations/system-range-replication.md %} -{{site.data.alerts.end}} - -To edit the default replication zone, create a YAML file defining only the values you want to change (other values will be copied from the `.default` zone), and use the `cockroach zone set .default -f <file>` command with appropriate flags: - -{% include copy-clipboard.html %} -~~~ shell -$ cat default_update.yaml -~~~ - -~~~ -num_replicas: 5 -~~~ - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach zone set .default --insecure -f default_update.yaml -~~~ - -~~~ -range_min_bytes: 1048576 -range_max_bytes: 67108864 -gc: - ttlseconds: 86400 -num_replicas: 5 -constraints: [] -~~~ - -Alternately, you can pass the YAML content via the standard input: - -{% include copy-clipboard.html %} -~~~ shell -$ echo 'num_replicas: 5' | cockroach zone set .default --insecure -f - -~~~ - -### Create a Replication Zone for a Database - -To control replication for a specific database, create a YAML file defining only the values you want to change (other values will not be affected), and use the `cockroach zone set <database> -f <file>` command with appropriate flags: - -{% include copy-clipboard.html %} -~~~ shell -$ cat database_zone.yaml -~~~ - -~~~ -num_replicas: 7 -~~~ - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach zone set db1 --insecure -f database_zone.yaml -~~~ - -~~~ -range_min_bytes: 1048576 -range_max_bytes: 67108864 -gc: - ttlseconds: 86400 -num_replicas: 7 -constraints: [] -~~~ - -Alternately, you can pass the YAML content via the standard input: - -{% include copy-clipboard.html %} -~~~ shell -$ echo 'num_replicas: 7' | cockroach zone set db1 --insecure -f - -~~~ - -### Create a Replication Zone for a Table - -To control replication for a specific table, create a YAML file
defining only the values you want to change (other values will not be affected), and use the `cockroach zone set -f ` command with appropriate flags: - -{% include copy-clipboard.html %} -~~~ shell -$ cat table_zone.yaml -~~~ - -~~~ -num_replicas: 7 -~~~ - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach zone set db1.t1 --insecure -f table_zone.yaml -~~~ - -~~~ -range_min_bytes: 1048576 -range_max_bytes: 67108864 -gc: - ttlseconds: 86400 -num_replicas: 7 -constraints: [] -~~~ - -Alternately, you can pass the YAML content via the standard input: - -{% include copy-clipboard.html %} -~~~ shell -$ echo 'num_replicas: 7' | cockroach zone set db1.t1 --insecure -f - -~~~ - -### Create a Replication Zone for a Secondary Index - -{{site.data.alerts.callout_info}} -This is an [enterprise-only](enterprise-licensing.html) feature. -{{site.data.alerts.end}} - -The [secondary indexes](indexes.html) on a table will automatically use the replication zone for the table. However, with an enterprise license, you can add distinct replication zones for secondary indexes. - -To control replication for a specific secondary index, create a YAML file defining only the values you want to change (other values will not be affected), and use the `cockroach zone set -f ` command with appropriate flags: - -{{site.data.alerts.callout_success}} -To get the name of a secondary index, which you need for the `cockroach zone set` command, use the [`SHOW INDEX`](show-index.html) or [`SHOW CREATE TABLE`](show-create-table.html) statements. -{{site.data.alerts.end}} - -{% include copy-clipboard.html %} -~~~ shell -$ cat index_zone.yaml -~~~ - -~~~ -num_replicas: 7 -~~~ - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach zone set db1.table@idx1 \ ---insecure \ ---host= \ --f index_zone.yaml -~~~ - -~~~ -range_min_bytes: 1048576 -range_max_bytes: 67108864 -gc: - ttlseconds: 86400 -num_replicas: 7 -constraints: [] -~~~ - -Alternately, you can pass the YAML content via the standard input: - -{% include copy-clipboard.html %} -~~~ shell -$ echo 'num_replicas: 7' | cockroach zone set db1.table@idx1 \ ---insecure \ ---host= \ --f - -~~~ - -### Create a Replication Zone for a Table or Secondary Index Partition New in v2.0 - -{{site.data.alerts.callout_info}} -This is an [enterprise-only](enterprise-licensing.html) feature. -{{site.data.alerts.end}} - -To [control replication for table partitions](partitioning.html#replication-zones), create a YAML file defining only the values you want to change (other values will not be affected), and use the `cockroach zone set -f ` command with appropriate flags: - -{% include copy-clipboard.html %} -~~~ shell -$ cat > australia_zone.yml -~~~ - -~~~ shell -constraints: [+datacenter=au1] -~~~ - -Apply zone configurations to corresponding partitions: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach zone set roachlearn.students_by_list.australia \ ---insecure \ ---host= \ --f australia_zone.yml -~~~ - -{{site.data.alerts.callout_success}} -Since the syntax is the same for defining a replication zone for a table or index partition (`database.table.partition`), give partitions names that communicate what they are partitioning, e.g., `australia_table` vs `australia_idx1`. -{{site.data.alerts.end}} - -### Create a Replication Zone for a System Range - -In addition to the databases and tables that are visible via the SQL interface, CockroachDB stores internal data in what are called system ranges. 
CockroachDB comes with pre-configured replication zones for some of these ranges:
-
-Zone Name | Description
-----------|------------
-`.meta` | The "meta" ranges contain the authoritative information about the location of all data in the cluster.

      Because historical queries are never run on meta ranges and it is advantageous to keep these ranges smaller for reliable performance, CockroachDB comes with a **pre-configured** `.meta` replication zone giving these ranges a lower-than-default `ttlseconds`.

      If your cluster is running in multiple datacenters, it's a best practice to configure the meta ranges to have a copy in each datacenter. -`.liveness` | New in v2.0: The "liveness" range contains the authoritative information about which nodes are live at any given time.

      Just as for "meta" ranges, historical queries are never run on the liveness range, so CockroachDB comes with a **pre-configured** `.liveness` replication zone giving this range a lower-than-default `ttlseconds`.

      If this range is unavailable, the entire cluster will be unavailable, so giving it a high replication factor is strongly recommended. -`.timeseries` | The "timeseries" ranges contain monitoring data about the cluster that powers the graphs in CockroachDB's admin UI. If necessary, you can add a `.timeseries` replication zone to control the replication of this data. -`.system` | There are system ranges for a variety of other important internal data, including information needed to allocate new table IDs and track the status of a cluster's nodes. If necessary, you can add a `.system` replication zone to control the replication of this data. - -To control replication for one of the above sets of system ranges, create a YAML file defining only the values you want to change (other values will not be affected), and use the `cockroach zone set -f ` command with appropriate flags: - -{% include copy-clipboard.html %} -~~~ shell -$ cat meta_zone.yaml -~~~ - -~~~ -num_replicas: 7 -~~~ - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach zone set .meta --insecure -f meta_zone.yaml -~~~ - -~~~ -range_min_bytes: 1048576 -range_max_bytes: 67108864 -gc: - ttlseconds: 86400 -num_replicas: 7 -constraints: [] -~~~ - -Alternately, you can pass the YAML content via the standard input: - -{% include copy-clipboard.html %} -~~~ shell -$ echo 'num_replicas: 7' | cockroach zone set .meta --insecure -f - -~~~ - -### Reveal the SQL statements sent implicitly by the command-line utility - -In this example, we use the `--echo-sql` flag to reveal the SQL statement sent implicitly by the command-line utility: - -{% include copy-clipboard.html %} -~~~ shell -$ echo 'num_replicas: 5' | cockroach zone set .default --insecure --echo-sql -f - -~~~ - -~~~ -> BEGIN -> SAVEPOINT cockroach_restart -> SELECT config FROM system.zones WHERE id = $1 -> UPSERT INTO system.zones (id, config) VALUES ($1, $2) -range_min_bytes: 1048576 -range_max_bytes: 67108864 -gc: - ttlseconds: 90000 -num_replicas: 5 -constraints: [] -> RELEASE SAVEPOINT cockroach_restart -> COMMIT -~~~ - -## Scenario-based Examples - -### Even Replication Across Datacenters - -**Scenario:** - -- You have 6 nodes across 3 datacenters, 2 nodes in each datacenter. -- You want data replicated 3 times, with replicas balanced evenly across all three datacenters. - -**Approach:** - -Start each node with its datacenter location specified in the `--locality` flag: - -~~~ shell -# Start the two nodes in datacenter 1: -$ cockroach start --insecure --host= --locality=datacenter=us-1 -$ cockroach start --insecure --host= --locality=datacenter=us-1 \ ---join=:26257 - -# Start the two nodes in datacenter 2: -$ cockroach start --insecure --host= --locality=datacenter=us-2 \ ---join=:26257 -$ cockroach start --insecure --host= --locality=datacenter=us-2 \ ---join=:26257 - -# Start the two nodes in datacenter 3: -$ cockroach start --insecure --host= --locality=datacenter=us-3 \ ---join=:26257 -$ cockroach start --insecure --host= --locality=datacenter=us-3 \ ---join=:26257 -~~~ - -There's no need to make zone configuration changes; by default, the cluster is configured to replicate data three times, and even without explicit constraints, the cluster will aim to diversify replicas across node localities. - -### Per-Replica Constraints to Specific Datacenters New in v2.0 - -**Scenario:** - -- You have 5 nodes across 5 datacenters in 3 regions, 1 node in each datacenter. 
-
-- You want data replicated 3 times, with a quorum of replicas for a database holding West Coast data centered on the West Coast and a database for nation-wide data replicated across the entire country.
-
-**Approach:**
-
-1. Start each node with its region and datacenter location specified in the `--locality` flag:
-
-    ~~~ shell
-    # Start the five nodes:
-    $ cockroach start --insecure --host= --locality=region=us-west1,datacenter=us-west1-a
-    $ cockroach start --insecure --host= --locality=region=us-west1,datacenter=us-west1-b \
-    --join=:26257
-    $ cockroach start --insecure --host= --locality=region=us-central1,datacenter=us-central1-a \
-    --join=:26257
-    $ cockroach start --insecure --host= --locality=region=us-east1,datacenter=us-east1-a \
-    --join=:26257
-    $ cockroach start --insecure --host= --locality=region=us-east1,datacenter=us-east1-b \
-    --join=:26257
-    ~~~
-
-2. On any node, configure a replication zone for the database used by the West Coast application:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    # Create a YAML file with per-replica constraints:
-    $ cat west_app_zone.yaml
-    ~~~
-
-    ~~~
-    constraints: {"+region=us-west1": 2, "+region=us-central1": 1}
-    ~~~
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    # Apply the replication zone to the database used by the West Coast application:
-    $ cockroach zone set west_app_db --insecure -f west_app_zone.yaml
-    ~~~
-
-    ~~~
-    range_min_bytes: 1048576
-    range_max_bytes: 67108864
-    gc:
-     ttlseconds: 86400
-    num_replicas: 3
-    constraints: {+region=us-central1: 1, +region=us-west1: 2}
-    ~~~
-
-    Two of the database's three replicas will be put in `region=us-west1` and its remaining replica will be put in `region=us-central1`. This gives the application the resilience to survive the total failure of any one datacenter while providing low-latency reads and writes on the West Coast because a quorum of replicas are located there.
-
-3. No configuration is needed for the nation-wide database. The cluster is configured to replicate data 3 times and spread the replicas as widely as possible by default. Because the first key-value pair specified in each node's locality is considered the most significant part of each node's locality, spreading data as widely as possible means putting one replica in each of the three different regions.
-
-### Multiple Applications Writing to Different Databases
-
-**Scenario:**
-
-- You have 2 independent applications connected to the same CockroachDB cluster, each application using a distinct database.
-- You have 6 nodes across 2 datacenters, 3 nodes in each datacenter.
-- You want the data for application 1 to be replicated 5 times, with replicas evenly balanced across both datacenters.
-- You want the data for application 2 to be replicated 3 times, with all replicas in a single datacenter.
-
-**Approach:**
-
-1. Start each node with its datacenter location specified in the `--locality` flag:
-
-    ~~~ shell
-    # Start the three nodes in datacenter 1:
-    $ cockroach start --insecure --host= --locality=datacenter=us-1
-    $ cockroach start --insecure --host= --locality=datacenter=us-1 \
-    --join=:26257
-    $ cockroach start --insecure --host= --locality=datacenter=us-1 \
-    --join=:26257
-
-    # Start the three nodes in datacenter 2:
-    $ cockroach start --insecure --host= --locality=datacenter=us-2 \
-    --join=:26257
-    $ cockroach start --insecure --host= --locality=datacenter=us-2 \
-    --join=:26257
-    $ cockroach start --insecure --host= --locality=datacenter=us-2 \
-    --join=:26257
-    ~~~
-
-2. 
On any node, configure a replication zone for the database used by application 1: - - {% include copy-clipboard.html %} - ~~~ shell - # Create a YAML file with the replica count set to 5: - $ cat app1_zone.yaml - ~~~ - - ~~~ - num_replicas: 5 - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - # Apply the replication zone to the database used by application 1: - $ cockroach zone set app1_db --insecure -f app1_zone.yaml - ~~~ - - ~~~ - range_min_bytes: 1048576 - range_max_bytes: 67108864 - gc: - ttlseconds: 86400 - num_replicas: 5 - constraints: [] - ~~~ - Nothing else is necessary for application 1's data. Since all nodes specify their datacenter locality, the cluster will aim to balance the data in the database used by application 1 between datacenters 1 and 2. - -3. On any node, configure a replication zone for the database used by application 2: - - {% include copy-clipboard.html %} - ~~~ shell - # Create a YAML file with 1 datacenter as a required constraint: - $ cat app2_zone.yaml - ~~~ - - ~~~ - constraints: [+datacenter=us-2] - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - # Apply the replication zone to the database used by application 2: - $ cockroach zone set app2_db --insecure -f app2_zone.yaml - ~~~ - - ~~~ - range_min_bytes: 1048576 - range_max_bytes: 67108864 - gc: - ttlseconds: 86400 - num_replicas: 3 - constraints: [+datacenter=us-2] - ~~~ - The required constraint will force application 2's data to be replicated only within the `us-2` datacenter. - -### Stricter Replication for a Specific Table - -**Scenario:** - -- You have 7 nodes, 5 with SSD drives and 2 with HDD drives. -- You want data replicated 3 times by default. -- Speed and availability are important for a specific table that is queried very frequently, however, so you want the data in that table to be replicated 5 times, preferably on nodes with SSD drives. - -**Approach:** - -1. Start each node with `ssd` or `hdd` specified as store attributes: - - ~~~ shell - # Start the 5 nodes with SSD storage: - $ cockroach start --insecure --host= --store=path=node1,attrs=ssd - $ cockroach start --insecure --host= --store=path=node2,attrs=ssd \ - --join=:26257 - $ cockroach start --insecure --host= --store=path=node3,attrs=ssd \ - --join=:26257 - $ cockroach start --insecure --host= --store=path=node4,attrs=ssd \ - --join=:26257 - $ cockroach start --insecure --host= --store=path=node5,attrs=ssd \ - --join=:26257 - - # Start the 2 nodes with HDD storage: - $ cockroach start --insecure --host= --store=path=node6,attrs=hdd \ - --join=:26257 - $ cockroach start --insecure --host= --store=path=node7,attrs=hdd \ - --join=:26257 - ~~~ - -2. On any node, configure a replication zone for the table that must be replicated more strictly: - - {% include copy-clipboard.html %} - ~~~ shell - # Create a YAML file with the replica count set to 5 - # and the ssd attribute as a required constraint: - $ cat table_zone.yaml - ~~~ - - ~~~ - num_replicas: 5 - constraints: [+ssd] - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - # Apply the replication zone to the table: - $ cockroach zone set db.important_table --insecure -f table_zone.yaml - ~~~ - - ~~~ - range_min_bytes: 1048576 - range_max_bytes: 67108864 - gc: - ttlseconds: 86400 - num_replicas: 5 - constraints: [+ssd] - ~~~ - Data in the table will be replicated 5 times, and the required constraint will place data in the table on nodes with `ssd` drives. 
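-
-To confirm that the change took effect, you can read the zone configuration back with `cockroach zone get`; a minimal sketch, reusing the table from the example above:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach zone get db.important_table --insecure
-~~~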
- -### Tweaking the Replication of System Ranges - -**Scenario:** - -- You have nodes spread across 7 datacenters. -- You want data replicated 5 times by default. -- For better performance, you want a copy of the meta ranges in all of the datacenters. -- To save disk space, you only want the internal timeseries data replicated 3 times by default. - -**Approach:** - -1. Start each node with a different locality attribute: - - ~~~ shell - $ cockroach start --insecure --host= --locality=datacenter=us-1 - $ cockroach start --insecure --host= --locality=datacenter=us-2 \ - --join=:26257 - $ cockroach start --insecure --host= --locality=datacenter=us-3 \ - --join=:26257 - $ cockroach start --insecure --host= --locality=datacenter=us-4 \ - --join=:26257 - $ cockroach start --insecure --host= --locality=datacenter=us-5 \ - --join=:26257 - $ cockroach start --insecure --host= --locality=datacenter=us-6 \ - --join=:26257 - $ cockroach start --insecure --host= --locality=datacenter=us-7 \ - --join=:26257 - ~~~ - -2. On any node, configure the default replication zone: - - {% include copy-clipboard.html %} - ~~~ shell - echo 'num_replicas: 5' | cockroach zone set .default --insecure -f - - ~~~ - - ~~~ - range_min_bytes: 1048576 - range_max_bytes: 67108864 - gc: - ttlseconds: 86400 - num_replicas: 5 - constraints: [] - ~~~ - - All data in the cluster will be replicated 5 times, including both SQL data and the internal system data. - -3. On any node, configure the `.meta` replication zone: - - {% include copy-clipboard.html %} - ~~~ shell - echo 'num_replicas: 7' | cockroach zone set .meta --insecure -f - - ~~~ - - ~~~ - range_min_bytes: 1048576 - range_max_bytes: 67108864 - gc: - ttlseconds: 86400 - num_replicas: 7 - constraints: [] - ~~~ - - The `.meta` addressing ranges will be replicated such that one copy is in all 7 datacenters, while all other data will be replicated 5 times. - -4. On any node, configure the `.timeseries` replication zone: - - {% include copy-clipboard.html %} - ~~~ shell - echo 'num_replicas: 3' | cockroach zone set .timeseries --insecure -f - - ~~~ - - ~~~ - range_min_bytes: 1048576 - range_max_bytes: 67108864 - gc: - ttlseconds: 86400 - num_replicas: 3 - constraints: [] - ~~~ - - The timeseries data will only be replicated 3 times without affecting the configuration of all other data. - -## See Also - -- [Other Cockroach Commands](cockroach-commands.html) -- [Table Partitioning](partitioning.html) diff --git a/src/current/v2.0/connection-parameters.md b/src/current/v2.0/connection-parameters.md deleted file mode 100644 index 76d810c3be2..00000000000 --- a/src/current/v2.0/connection-parameters.md +++ /dev/null @@ -1,265 +0,0 @@ ---- -title: Client Connection Parameters -summary: This page describes the parameters used to establish a client connection. -toc: true ---- - -Client applications, including client [`cockroach` -commands](cockroach-commands.html), work by establishing a network -connection to a CockroachDB cluster. The client connection parameters -determine which CockroachDB cluster they connect to, and how to -establish this network connection. - - - -## Supported Connection Parameters - -There are two principal ways a client can connect to CockroachDB: - -- Most client apps, including most `cockroach` commands, use a SQL connection - established via a [PostgreSQL connection URL](#connect-using-a-url). When using a URL, - a client can also specify SSL/TLS settings and additional SQL-level parameters. This mode provides the most configuration flexibility. 
-
-- Most `cockroach` commands also provide [discrete connection parameters](#connect-using-discrete-parameters) that can specify the connection parameters separately from a URL. This mode is somewhat less flexible than using a URL.
-- Some `cockroach` commands support connections using either a URL connection string or discrete parameters, whereas some only support discrete connection parameters.
-
-The following table summarizes which client supports which connection parameters:
-
-Client | Supports [connection by URL](#connect-using-a-url) | Supports [discrete connection parameters](#connect-using-discrete-parameters)
--------|----------------------------|-----------------------------------
-Client apps using a PostgreSQL driver | ✓ | Application-dependent
-[`cockroach init`](initialize-a-cluster.html) | ✗ | ✓
-[`cockroach quit`](stop-a-node.html) | ✗ | ✓
-[`cockroach sql`](use-the-built-in-sql-client.html) | ✓ | ✓
-[`cockroach user`](create-and-manage-users.html) | ✓ | ✓
-[`cockroach zone`](configure-replication-zones.html) | ✓ | ✓
-[`cockroach node`](view-node-details.html) | ✓ | ✓
-[`cockroach dump`](sql-dump.html) | ✓ | ✓
-[`cockroach debug zip`](debug-zip.html) | ✗ | ✓
-
-## Connect Using a URL
-
-SQL clients, including some [`cockroach` commands](cockroach-commands.html), can connect using a URL.
-
-A connection URL has the following format:
-
-{% include_cached copy-clipboard.html %}
-~~~
-postgres://<username>:<password>@<host>:<port>/<database>?<parameters>
-~~~
-
-Component | Description | Required
-----------|-------------|----------
-`<username>` | The [SQL user](create-and-manage-users.html) that will own the client session. | ✗
-`<password>` | The user's password. It is not recommended to pass the password in the URL directly.<br><br>[Find more detail about how CockroachDB handles passwords](create-and-manage-users.html#user-authentication). | ✗
-`<host>` | The host name or address of a CockroachDB node or load balancer. | Required by most client drivers.
-`<port>` | The port number of the SQL interface of the CockroachDB node or load balancer. | Required by most client drivers.
-`<database>` | A database name to use as [current database](sql-name-resolution.html#current-database). | ✗
-`<parameters>` | [Additional connection parameters](#additional-connection-parameters), including SSL/TLS certificate settings. | ✗
-
-{{site.data.alerts.callout_info}}You can specify the URL for
-cockroach commands that accept a URL with the
-command-line flag --url. If --url is not
-specified but the environment variable COCKROACH_URL is
-defined, the environment variable is used. Otherwise, the
-cockroach command will use discrete connection parameters
-as described below.{{site.data.alerts.end}}
-
-{{site.data.alerts.callout_info}}The <database>
-part should not be specified for any cockroach command
-other than cockroach
-sql.{{site.data.alerts.end}}
-
-### Additional Connection Parameters
-
-The following additional parameters can be passed after the `?` character in the URL:
-
-Parameter | Description | Default value
-----------|-------------|---------------
-`application_name` | An initial value for the [`application_name` session variable](set-vars.html). | Empty string.
-`sslmode` | Which type of secure connection to use: `disable`, `allow`, `prefer`, `require`, `verify-ca` or `verify-full`. See [Secure Connections With URLs](#secure-connections-with-urls) for details. | `disable`
-`sslrootcert` | Path to the [CA certificate](create-security-certificates.html), when `sslmode` is not `disable`. | Empty string.
-`sslcert` | Path to the [client certificate](create-security-certificates.html), when `sslmode` is not `disable`. | Empty string.
-`sslkey` | Path to the [client private key](create-security-certificates.html), when `sslmode` is not `disable`. | Empty string.
-
-### Secure Connections With URLs
-
-The following values are supported for `sslmode`, although only the first and the last are recommended for use.
-
-Parameter | Description | Recommended for use
-----------|-------------|--------------------
-`sslmode=disable` | Do not use an encrypted, secure connection at all. | Use during development.
-`sslmode=allow` | Enable a secure connection only if the server requires it.

      **Not supported in all clients.** | -`sslmode=prefer` | Try to establish a secure connection, but accept an insecure connection if the server does not support secure connections.

      **Not supported in all clients.** | -`sslmode=require` | Force a secure connection. An error occurs if the secure connection cannot be established. | -`sslmode=verify-ca` | Force a secure connection and verify that the server certificate is signed by a known CA. | -`sslmode=verify-full` | Force a secure connection, verify that the server certificate is signed by a known CA, and verify that the server address matches that specified in the certificate. | Use for [secure deployments](secure-a-cluster.html). - -{{site.data.alerts.callout_danger}}Some client drivers and the -cockroach commands do not support -sslmode=allow and sslmode=prefer. Check the -documentation of your SQL driver to determine whether these options -are supported.{{site.data.alerts.end}} - -### Example URL for an Insecure Connection - -The following URL is suitable to connect to a CockroachDB node using an insecure connection: - -{% include_cached copy-clipboard.html %} -~~~ -postgres://root@servername:26257/mydb?sslmode=disable -~~~ - -This specifies a connection for the `root` user to server `servername` -on port 26257 (the default CockroachDB SQL port), with `mydb` set as -current database. `sslmode=disable` makes the connection insecure. - -### Example URL for a Secure Connection - -The following URL is suitable to connect to a CockroachDB node using a secure connection: - -{% include_cached copy-clipboard.html %} -~~~ -postgres://root@servername:26257/mydb?sslmode=verify-full&sslrootcert=path/to/ca.crt&sslcert=path/to/client.crt&sslkey=path/to/client.key -~~~ - -This uses the following components: - -- User `root` -- Host name `servername`, port number 26257 (the default CockroachDB SQL port) -- Current database `mydb` -- SSL/TLS mode `verify-full`: - - Root CA certificate `path/to/ca.crt` - - Client certificate `path/to/client.crt` - - Client key `path/to/client.key` - -For details about how to create and manage SSL/TLS certificates, see -[Create Security Certificates](create-security-certificates.html) and -[Rotate Certificates](rotate-certificates.html). - -## Connect Using Discrete Parameters - -Most [`cockroach` commands](cockroach-commands.html) accept connection -parameters as separate, discrete command-line flags, in addition (or -in replacement) to `--url` which [specifies all parameters as a -URL](#connect-using-a-url). - -For each command-line flag that directs a connection parameter, -CockroachDB also recognizes an environment variable. The environment -variable is used when the command-line flag is not specified. - -{% include {{ page.version.version }}/sql/connection-parameters-with-url.md %} - -{{site.data.alerts.callout_info}}The command-line flag ---url is only supported for cockroach -commands that use a SQL connection. See Supported Connection -Parameters for details.{{site.data.alerts.end}} - -### Example Command-Line Flags for an Insecure Connection - -The following command-line flags establish an insecure connection: - -{% include_cached copy-clipboard.html %} -~~~ ---user root \ - --host servername \ - --port 26257 \ - --insecure -~~~ - -This specifies a connection for the `root` user to server `servername` -on port 26257 (the default CockroachDB SQL port). `--insecure` makes -the connection insecure. 
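-
-Combined into a single invocation of the built-in SQL client, these flags might look as follows (a sketch; `servername` is the placeholder used throughout these examples):
-
-{% include_cached copy-clipboard.html %}
-~~~
-cockroach sql --user root --host servername --port 26257 --insecure
-~~~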
-
-### Example Command-Line Flags for a Secure Connection
-
-The following command-line flags establish a secure connection:
-
-{% include_cached copy-clipboard.html %}
-~~~
---user root \
- --host servername \
- --port 26257 \
- --certs-dir path/to/certs
-~~~
-
-This uses the following components:
-
-- User `root`
-- Host name `servername`, port number 26257 (the default CockroachDB SQL port)
-- SSL/TLS enabled, with settings:
-  - Root CA certificate `path/to/certs/ca.crt`
-  - Client certificate `path/to/certs/client.<user>.crt` (`path/to/certs/client.root.crt` with `--user root`)
-  - Client key `path/to/certs/client.<user>.key` (`path/to/certs/client.root.key` with `--user root`)
-
-{{site.data.alerts.callout_info}}When using discrete connection
-parameters, the file names of the CA and client certificates and
-client key are derived automatically from the value of --certs-dir,
-and cannot be customized. To use customized file names, use a connection URL
-instead.{{site.data.alerts.end}}
-
-## Using Both URL and Client Parameters
-
-Changed in v2.0
-
-Several [`cockroach` commands](cockroach-commands.html) support both a [connection URL](#connect-using-a-url) with `--url` (or `COCKROACH_URL`) and [discrete connection parameters](#connect-using-discrete-parameters).
-
-They can be combined as follows: the URL has the highest priority, followed by the discrete parameters.
-
-This combination is useful so that discrete command-line flags can override settings not otherwise set in the URL.
-
-In other words:
-
-- If a URL is specified:
-  - For any URL component that is specified, that information is used and the corresponding discrete parameter is ignored.
-  - For any URL component that is missing, if a corresponding discrete parameter is specified (either via command-line flag or as environment variable), the discrete parameter is used.
-  - If a component is missing in the URL and no corresponding discrete parameter is specified, the default value is used.
-- If no URL is specified, the discrete parameters are used. For every component not specified, the default value is used.
-
-### Example Override of the Current Database
-
-For example, the `cockroach start` command prints out the following connection URL:
-
-{% include_cached copy-clipboard.html %}
-~~~
-postgres://root@servername:26257/?sslmode=disable
-~~~
-
-It is possible to connect `cockroach sql` to this server and also specify `mydb` as the current database, using the following command:
-
-{% include_cached copy-clipboard.html %}
-~~~
-cockroach sql \
-  --url "postgres://root@servername:26257/?sslmode=disable" \
-  --database mydb
-~~~
-
-This is equivalent to:
-
-{% include_cached copy-clipboard.html %}
-~~~
-cockroach sql --url "postgres://root@servername:26257/mydb?sslmode=disable"
-~~~
-
-## See Also
-
-- [`cockroach` commands](cockroach-commands.html)
-- [Create Security Certificates](create-security-certificates.html)
-- [Secure a Cluster](secure-a-cluster.html)
-- [Create and Manage Users](create-and-manage-users.html)
diff --git a/src/current/v2.0/constraints.md b/src/current/v2.0/constraints.md
deleted file mode 100644
index 9a6392d832d..00000000000
--- a/src/current/v2.0/constraints.md
+++ /dev/null
@@ -1,115 +0,0 @@
----
-title: Constraints
-summary: Constraints offer additional data integrity by enforcing conditions on the data within a column.
-toc: true
----
-
-Constraints offer additional data integrity by enforcing conditions on the data within a column. 
Whenever values are manipulated (inserted, deleted, or updated), constraints are checked and modifications that violate constraints are rejected.
-
-For example, the Unique constraint requires that all values in a column be unique from one another (except *NULL* values). If you attempt to write a duplicate value, the constraint rejects the entire statement.
-
-
-## Supported Constraints
-
-| Constraint | Description |
-|------------|-------------|
-| [Check](check.html) | Values must return `TRUE` or `NULL` for a Boolean expression. |
-| [Default Value](default-value.html) | If a value is not defined for the constrained column in an `INSERT` statement, the Default Value is written to the column. |
-| [Foreign Keys](foreign-key.html) | Values must exactly match existing values in the column they reference. |
-| [Not Null](not-null.html) | Values may not be *NULL*. |
-| [Primary Key](primary-key.html) | Values must uniquely identify each row *(one per table)*. This behaves as if the Not Null and Unique constraints are applied, and it automatically creates an [index](indexes.html) for the table using the constrained columns. |
-| [Unique](unique.html) | Each non-*NULL* value must be unique. This also automatically creates an [index](indexes.html) for the table using the constrained columns. |
-
-## Using Constraints
-
-### Add Constraints
-
-How you add constraints depends on the number of columns you want to constrain, as well as whether or not the table is new.
-
-- **One column of a new table** has its constraints defined after the column's data type. For example, this statement applies the Primary Key constraint to `foo.a`:
-
-  ``` sql
-  > CREATE TABLE foo (a INT PRIMARY KEY);
-  ```
-- **Multiple columns of a new table** have their constraints defined after the table's columns. For example, this statement applies the Primary Key constraint to `bar`'s columns `a` and `b`:
-
-  ``` sql
-  > CREATE TABLE bar (a INT, b INT, PRIMARY KEY (a,b));
-  ```
-
-  {{site.data.alerts.callout_info}}The Default Value and Not Null constraints cannot be applied to multiple columns.{{site.data.alerts.end}}
-
-- **Existing tables** can have the following constraints added:
-  - **Check**, **Foreign Key**, and **Unique** constraints can be added through [`ALTER TABLE...ADD CONSTRAINT`](add-constraint.html). For example, this statement adds the Unique constraint to `baz.id`:
-
-    ~~~ sql
-    > ALTER TABLE baz ADD CONSTRAINT id_unique UNIQUE (id);
-    ~~~
-
-  - **Default Values** can be added through [`ALTER TABLE...ALTER COLUMN`](alter-column.html#set-or-change-a-default-value). For example, this statement adds the Default Value constraint to `baz.bool`:
-
-    ~~~ sql
-    > ALTER TABLE baz ALTER COLUMN bool SET DEFAULT true;
-    ~~~
-
-  - **Primary Key** and **Not Null** constraints cannot be added or changed. However, you can go through [this process](#table-migrations-to-add-or-change-immutable-constraints) to migrate data from your current table to a new table with the constraints you want to apply.
-
-#### Order of Constraints
-
-The order in which you list constraints is not important because constraints are applied to every modification of their respective tables or columns.
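-
-For example, the following two definitions enforce exactly the same rules; a minimal sketch using the built-in SQL client from the shell (the table names are hypothetical):
-
-~~~ shell
-$ cockroach sql --insecure -e "CREATE TABLE t1 (a INT NOT NULL UNIQUE)"
-$ cockroach sql --insecure -e "CREATE TABLE t2 (a INT UNIQUE NOT NULL)"
-~~~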
-
-#### Name Constraints on New Tables
-
-You can name constraints applied to new tables using the `CONSTRAINT` clause before defining the constraint:
-
-``` sql
-> CREATE TABLE foo (a INT CONSTRAINT another_name PRIMARY KEY);
-
-> CREATE TABLE bar (a INT, b INT, CONSTRAINT yet_another_name PRIMARY KEY (a,b));
-```
-
-### View Constraints
-
-To view a table's constraints, use [`SHOW CONSTRAINTS`](show-constraints.html) or [`SHOW CREATE TABLE`](show-create-table.html).
-
-### Remove Constraints
-
-The procedure for removing a constraint depends on its type:
-
-| Constraint Type | Procedure |
-|-----------------|-----------|
-| [Check](check.html) | Use [`DROP CONSTRAINT`](drop-constraint.html) |
-| [Default Value](default-value.html) | Use [`ALTER COLUMN`](alter-column.html#remove-default-constraint) |
-| [Foreign Keys](foreign-key.html) | Use [`DROP CONSTRAINT`](drop-constraint.html) |
-| [Not Null](not-null.html) | Use [`ALTER COLUMN`](alter-column.html#remove-not-null-constraint) |
-| [Primary Key](primary-key.html) | Primary Keys cannot be removed. However, you can move the table's data to a new table with [this process](#table-migrations-to-add-or-change-immutable-constraints). |
-| [Unique](unique.html) | The Unique constraint cannot be dropped directly. However, you can use [`DROP INDEX`](drop-index.html) to drop the index automatically created by the Unique constraint (whose name ends in `_key`), which removes the constraint as well. |
-
-### Change Constraints
-
-The procedure for changing a constraint depends on its type:
-
-| Constraint Type | Procedure |
-|-----------------|-----------|
-| [Check](check.html) | [Issue a transaction](transactions.html#syntax) that adds a new Check constraint ([`ADD CONSTRAINT`](add-constraint.html)), and then remove the existing one ([`DROP CONSTRAINT`](drop-constraint.html)). |
-| [Default Value](default-value.html) | The Default Value can be changed through [`ALTER COLUMN`](alter-column.html). |
-| [Foreign Keys](foreign-key.html) | [Issue a transaction](transactions.html#syntax) that adds a new Foreign Key constraint ([`ADD CONSTRAINT`](add-constraint.html)), and then remove the existing one ([`DROP CONSTRAINT`](drop-constraint.html)). |
-| [Not Null](not-null.html) | The Not Null constraint cannot be changed, only removed. However, you can move the table's data to a new table with [this process](#table-migrations-to-add-or-change-immutable-constraints). |
-| [Primary Key](primary-key.html) | Primary Keys cannot be modified. However, you can move the table's data to a new table with [this process](#table-migrations-to-add-or-change-immutable-constraints). |
-| [Unique](unique.html) | [Issue a transaction](transactions.html#syntax) that adds a new Unique constraint ([`ADD CONSTRAINT`](add-constraint.html)), and then remove the existing one ([`DROP CONSTRAINT`](drop-constraint.html)). |
-
-#### Table Migrations to Add or Change Immutable Constraints
-
-If you want to make a change to an immutable constraint, you can use the following process (sketched below):
-
-1. [Create a new table](create-table.html) with the constraints you want to apply.
-2. Move the data from the old table to the new one using [`INSERT` from a `SELECT` statement](insert.html#insert-from-a-select-statement).
-3. [Drop the old table](drop-table.html), and then [rename the new table to the old name](rename-table.html). This cannot be done transactionally. 
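-
-A minimal sketch of this migration using the built-in SQL client from the shell (the table and column names are hypothetical):
-
-~~~ shell
-# 1. Create the new table with the desired immutable constraints:
-$ cockroach sql --insecure -e "CREATE TABLE users_new (id UUID PRIMARY KEY, name STRING)"
-
-# 2. Copy the data from the old table:
-$ cockroach sql --insecure -e "INSERT INTO users_new (id, name) SELECT id, name FROM users"
-
-# 3. Drop the old table, then rename the new one; these are two separate, non-transactional steps:
-$ cockroach sql --insecure -e "DROP TABLE users"
-$ cockroach sql --insecure -e "ALTER TABLE users_new RENAME TO users"
-~~~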
- -## See Also - -- [`CREATE TABLE`](create-table.html) -- [`ADD CONSTRAINT`](add-constraint.html) -- [`DROP CONSTRAINT`](drop-constraint.html) -- [`SHOW CONSTRAINTS`](show-constraints.html) -- [`SHOW CREATE TABLE`](show-create-table.html) diff --git a/src/current/v2.0/create-a-file-server.md b/src/current/v2.0/create-a-file-server.md deleted file mode 100644 index ee1f0486b3b..00000000000 --- a/src/current/v2.0/create-a-file-server.md +++ /dev/null @@ -1,77 +0,0 @@ ---- -title: Create a File Server for Imports and Backups -summary: Learn how to create a simple file server for use with CockroachDB IMPORT and BACKUP -toc: true ---- - -If you need a location to store files for the [`IMPORT`](import.html) process or [CockroachDB enterprise backups](backup.html), but do not have access to (or simply cannot use) cloud storage providers, you can easily create your own file server. You can then use this file server by leveraging support for our HTTP Export Storage API. - -This is especially useful for: - -- Implementing a compatibility layer in front of custom or proprietary storage providers for which CockroachDB does not yet have built-in support -- Using on-premises storage - - -## HTTP Export Storage API - -CockroachDB tasks that require reading or writing external files (such as [`IMPORT`](import.html) and [`BACKUP`](backup.html)) can use the HTTP Export Storage API by prefacing the address with `http`, e.g., `http://fileserver/mnt/cockroach-exports`. - -This API uses the `GET`, `PUT` and `DELETE` methods. This behaves like you would expect typical HTTP requests to work. After a `PUT` request to some path, a subsequent `GET` request should return the content sent in the `PUT` request body, at least until a `DELETE` request is received for that path. - -## Examples - -You can use any file server software that supports `GET`, `PUT` and `DELETE` methods, but we've included code samples for common ones: - -- [Caddy](#using-caddy-as-a-file-server) -- [nginx](#using-nginx-as-a-file-server) - -{{site.data.alerts.callout_info}}We do not recommend using any machines running cockroach as file servers. Using machines that are running cockroach as file servers could negatively impact performance if I/O operations exceed capacity.{{site.data.alerts.end}} - -### Using Caddy as a File Server - -1. [Download a `caddy` binary](https://caddyserver.com/download) that includes the `http.upload` plugin. - -2. Run `caddy` with an [`upload` directive](https://caddyserver.com/docs/http.upload), either in the command line or via [`Caddyfile`](https://caddyserver.com/docs/caddyfile). - - Command line example (with no TLS): - - ~~~ shell - caddy -root /mnt/cockroach-exports "upload / {" 'to "/mnt/cockroach-exports"' 'yes_without_tls' "}" - ~~~ - - `Caddyfile` example (using a key and cert): - - ~~~ shell - tls key cert - root "/mnt/cockroach-exports" - upload / { - to "/mnt/cockroach-exports" - } - ~~~ - -### Using nginx as a File Server - -1. Install `nginx` with the `webdav` module (often included in `-full` or similarly named packages in various distributions). - -2. In the `nginx.conf` file, add a `dav_methods PUT DELETE` directive. 
For example: - - ~~~ nginx - events { - worker_connections 1024; - } - http { - server { - listen 20150; - location / { - dav_methods PUT DELETE; - root /mnt/cockroach-exports; - sendfile on; - sendfile_max_chunk 1m; - } - } - } - ~~~ - -## See Also - -- [`IMPORT`](import.html) -- [`BACKUP`](backup.html) (*Enterprise only*) -- [`RESTORE`](restore.html) (*Enterprise only*) diff --git a/src/current/v2.0/create-and-manage-users.md b/src/current/v2.0/create-and-manage-users.md deleted file mode 100644 index cfb69ac8314..00000000000 --- a/src/current/v2.0/create-and-manage-users.md +++ /dev/null @@ -1,245 +0,0 @@ ---- -title: Manage Users -summary: To create and manage your cluster's users (which lets you control SQL-level privileges), use the cockroach user command with appropriate flags. -toc: true ---- - -To create, manage, and remove your cluster's users (which lets you control SQL-level [privileges](privileges.html)), use the `cockroach user` [command](cockroach-commands.html) with appropriate flags. - -{{site.data.alerts.callout_success}}You can also use the CREATE USER and DROP USER statements to create and remove users.{{site.data.alerts.end}} - - -## Considerations - -- Usernames are case-insensitive; must start with either a letter or underscore; must contain only letters, numbers, or underscores; and must be between 1 and 63 characters. -- After creating users, you must [grant them privileges to databases and tables](grant.html). -- On secure clusters, you must [create client certificates for users](create-security-certificates.html#create-the-certificate-and-key-pair-for-a-client) and users must [authenticate their access to the cluster](#user-authentication). -- {% include {{ page.version.version }}/misc/remove-user-callout.html %} - -## Subcommands - -Subcommand | Usage ------------|------ -`get` | Retrieve a table containing a user and their hashed password. -`ls` | List all users. -`rm` | Remove a user. -`set` | Create or update a user. - -## Synopsis - -~~~ shell -# Create a user: -$ cockroach user set - -# List all users: -$ cockroach user ls - -# Display a specific user: -$ cockroach user get - -# View help: -$ cockroach user --help -$ cockroach user get --help -$ cockroach user ls --help -$ cockroach user rm --help -$ cockroach user set --help -~~~ - -## Flags - -The `user` command and subcommands support the following [general-use](#general) and [logging](#logging) flags. - -### General - -Flag | Description ------|------------ -`--password` | Enable password authentication for the user; you will be prompted to enter the password on the command line.

      Changed in v2.0: Password creation is supported only in secure clusters for non-`root` users. The `root` user must authenticate with a client certificate and key. -`--echo-sql` | New in v1.1: Reveal the SQL statements sent implicitly by the command-line utility. For a demonstration, see the [example](#reveal-the-sql-statements-sent-implicitly-by-the-command-line-utility) below. -`--pretty` | Format table rows printed to the standard output using ASCII art and disable escaping of special characters.

      When disabled with `--pretty=false`, or when the standard output is not a terminal, table rows are printed as tab-separated values, and special characters are escaped. This makes the output easy to parse by other programs.

      **Default:** `true` when output is a terminal, `false` otherwise - -### Client Connection - -{% include {{ page.version.version }}/sql/connection-parameters-with-url.md %} - -See [Client Connection Parameters](connection-parameters.html) for more details. - -Currently, only the `root` user can create users. - -{{site.data.alerts.callout_info}} -Changed in v2.0: Password creation is supported only in secure clusters for non-root users. The root user must authenticate with a client certificate and key. -{{site.data.alerts.end}} - -### Logging - -By default, the `user` command logs errors to `stderr`. - -If you need to troubleshoot this command's behavior, you can change its [logging behavior](debug-and-error-logs.html). - -## User Authentication - -Secure clusters require users to authenticate their access to databases and tables. CockroachDB offers two methods for this: - -- [Client certificate and key authentication](#secure-clusters-with-client-certificates), which is available to all users. To ensure the highest level of security, we recommend only using client certificate and key authentication. - -- [Password authentication](#secure-clusters-with-passwords), which is available to non-`root` users who you've created passwords for. To set a password for a non-`root` user, include the `--password` flag in the `cockroach user set` command. - - Users can use passwords to authenticate without supplying client certificates and keys; however, we recommend using certificate-based authentication whenever possible. - - Changed in v2.0: Password creation is supported only in secure clusters. - -## Examples - -### Create a User - -
      - -Usernames are case-insensitive; must start with either a letter or underscore; must contain only letters, numbers, or underscores; and must be between 1 and 63 characters. - -
      - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach user set jpointsman --certs-dir=certs -~~~ - -{{site.data.alerts.callout_success}}If you want to allow password authentication for the user, include the --password flag and then enter and confirm the password at the command prompt.{{site.data.alerts.end}} - -After creating users, you must: - -- [Create their client certificates](create-security-certificates.html#create-the-certificate-and-key-pair-for-a-client). -- [Grant them privileges to databases](grant.html). - -
      - -
      - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach user set jpointsman --insecure -~~~ - -After creating users, you must [grant them privileges to databases](grant.html). - -
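-For example, granting the new user all privileges on a database (the database name `db1` is hypothetical):
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql --insecure -e "GRANT ALL ON DATABASE db1 TO jpointsman"
-~~~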
      - -### Authenticate as a Specific User - -
      - -#### Secure Clusters with Client Certificates - -All users can authenticate their access to a secure cluster using [a client certificate](create-security-certificates.html#create-the-certificate-and-key-pair-for-a-client) issued to their username. - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --certs-dir=certs --user=jpointsman -~~~ - -#### Secure Clusters with Passwords - -Users with passwords can authenticate their access by entering their password at the command prompt instead of using their client certificate and key. - -If we cannot find client certificate and key files matching the user, we fall back on password authentication. - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --certs-dir=certs --user=jpointsman -~~~ - -
      - -
      - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure --user=jpointsman -~~~ - -
      - -### Update a User's Password - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach user set jpointsman --certs-dir=certs --password -~~~ - -After issuing this command, enter and confirm the user's new password at the command prompt. - -Changed in v2.0: Password creation is supported only in secure clusters for non-`root` users. The `root` user must authenticate with a client certificate and key. - -### List All Users - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach user ls --insecure -~~~ - -~~~ -+------------+ -| username | -+------------+ -| jpointsman | -+------------+ -~~~ - -### Find a Specific User - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach user get jpointsman --insecure -~~~ - -~~~ -+------------+--------------------------------------------------------------+ -| username | hashedPassword | -+------------+--------------------------------------------------------------+ -| jpointsman | $2a$108tm5lYjES9RSXSKtQFLhNO.e/ysTXCBIRe7XeTgBrR6ubXfp6dDczS | -+------------+--------------------------------------------------------------+ -~~~ - -### Remove a User - -{{site.data.alerts.callout_danger}}{% include {{ page.version.version }}/misc/remove-user-callout.html %}{{site.data.alerts.end}} - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach user rm jpointsman --insecure -~~~ - -{{site.data.alerts.callout_success}}You can also use the DROP USER SQL statement to remove users.{{site.data.alerts.end}} - -### Reveal the SQL Statements Sent Implicitly by the Command-line Utility - -In this example, we use the `--echo-sql` flag to reveal the SQL statement sent implicitly by the command-line utility: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach user rm jpointsman --insecure --echo-sql -~~~ - -~~~ -> DELETE FROM system.users WHERE username=$1 -DELETE 1 -~~~ - -## See Also - -- [`CREATE USER`](create-user.html) -- [`DROP USER`](drop-user.html) -- [`SHOW USERS`](show-users.html) -- [`GRANT`](grant.html) -- [`SHOW GRANTS`](show-grants.html) -- [Create Security Certificates](create-security-certificates.html) -- [Manage Roles](roles.html) -- [Other Cockroach Commands](cockroach-commands.html) diff --git a/src/current/v2.0/create-database.md b/src/current/v2.0/create-database.md deleted file mode 100644 index aae9f6d8855..00000000000 --- a/src/current/v2.0/create-database.md +++ /dev/null @@ -1,117 +0,0 @@ ---- -title: CREATE DATABASE -summary: The CREATE DATABASE statement creates a new CockroachDB database. -toc: true ---- - -The `CREATE DATABASE` [statement](sql-statements.html) creates a new CockroachDB database. - - -## Required Privileges - -Only the `root` user can create databases. - -## Synopsis - -
      -{% include {{ page.version.version }}/sql/diagrams/create_database.html %} -
      - -## Parameters - -Parameter | Description -----------|------------ -`IF NOT EXISTS` | Create a new database only if a database of the same name does not already exist; if one does exist, do not return an error. -`name` | The name of the database to create, which [must be unique](#create-fails-name-already-in-use) and follow these [identifier rules](keywords-and-identifiers.html#identifiers). -`encoding` | The `CREATE DATABASE` statement accepts an optional `ENCODING` clause for compatibility with PostgreSQL, but `UTF-8` is the only supported encoding. The aliases `UTF8` and `UNICODE` are also accepted. Values should be enclosed in single quotes and are case-insensitive.

      Example: `CREATE DATABASE bank ENCODING = 'UTF-8'`. - -## Example - -### Create a Database - -{% include copy-clipboard.html %} -~~~ sql -> CREATE DATABASE bank; -~~~ - -{% include copy-clipboard.html %} -~~~ -> SHOW DATABASES; -~~~ - -~~~ -+----------+ -| Database | -+----------+ -| bank | -| system | -+----------+ -~~~ - -### Create Fails (Name Already In Use) - -{% include copy-clipboard.html %} -~~~ sql -> SHOW DATABASES; -~~~ - -~~~ -+----------+ -| Database | -+----------+ -| bank | -| system | -+----------+ -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> CREATE DATABASE bank; -~~~ - -~~~ -pq: database "bank" already exists -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SHOW DATABASES; -~~~ - -~~~ -+----------+ -| Database | -+----------+ -| bank | -| system | -+----------+ -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> CREATE DATABASE IF NOT EXISTS bank; -~~~ - -SQL does not generate an error, but instead responds `CREATE DATABASE` even though a new database wasn't created. - -{% include copy-clipboard.html %} -~~~ sql -> SHOW DATABASES; -~~~ - -~~~ -+----------+ -| Database | -+----------+ -| bank | -| system | -+----------+ -~~~ - -## See Also - -- [`SHOW DATABASES`](show-databases.html) -- [`RENAME DATABASE`](rename-database.html) -- [`SET DATABASE`](set-vars.html) -- [`DROP DATABASE`](drop-database.html) -- [Other SQL Statements](sql-statements.html) diff --git a/src/current/v2.0/create-index.md b/src/current/v2.0/create-index.md deleted file mode 100644 index d0a94e5503f..00000000000 --- a/src/current/v2.0/create-index.md +++ /dev/null @@ -1,147 +0,0 @@ ---- -title: CREATE INDEX -summary: The CREATE INDEX statement creates an index for a table. Indexes improve your database's performance by helping SQL quickly locate data. -toc: true ---- - -The `CREATE INDEX` [statement](sql-statements.html) creates an index for a table. [Indexes](indexes.html) improve your database's performance by helping SQL locate data without having to look through every row of a table. - -New in v2.0: To create an index on the schemaless data in a [`JSONB`](jsonb.html) column, use an [inverted index](inverted-indexes.html). - -{{site.data.alerts.callout_info}}Indexes are automatically created for a table's PRIMARY KEY and UNIQUE columns.

      When querying a table, CockroachDB uses the fastest index. For more information about that process, see Index Selection in CockroachDB.{{site.data.alerts.end}} - - -## Required Privileges - -The user must have the `CREATE` [privilege](privileges.html) on the table. - -## Synopsis - -**Standard index:** - -
      {% include {{ page.version.version }}/sql/diagrams/create_index.html %}
      - -**Inverted index:** - -
      {% include {{ page.version.version }}/sql/diagrams/create_inverted_index.html %}
      - -## Parameters - -| Parameter | Description | -|-----------|-------------| -|`UNIQUE` | Apply the [Unique constraint](unique.html) to the indexed columns.

      This causes the system to check for existing duplicate values on index creation. It also applies the Unique constraint at the table level, so the system checks for duplicate values when inserting or updating data.| -| `INVERTED` | New in v2.0: Create an [inverted index](inverted-indexes.html) on the schemaless data in the specified [`JSONB`](jsonb.html) column.

      You can also use the PostgreSQL-compatible syntax `USING GIN`. For more details, see [Inverted Indexes](inverted-indexes.html#creation).| -|`IF NOT EXISTS` | Create a new index only if an index of the same name does not already exist; if one does exist, do not return an error.| -|`opt_index_name`
      `index_name` | The name of the index to create, which must be unique to its table and follow these [identifier rules](keywords-and-identifiers.html#identifiers).

      If you do not specify a name, CockroachDB uses the format `__key/idx`. `key` indicates the index applies the Unique constraint; `idx` indicates it does not. Example: `accounts_balance_idx`| -|`table_name` | The name of the table you want to create the index on. | -|`column_name` | The name of the column you want to index.| -|`ASC` or `DESC`| Sort the column in ascending (`ASC`) or descending (`DESC`) order in the index. How columns are sorted affects query results, particularly when using `LIMIT`.

      __Default:__ `ASC`| -|`STORING ...`| Store (but do not sort) each column whose name you include.

      For information on when to use `STORING`, see [Store Columns](#store-columns).

      `COVERING` aliases `STORING` and works identically. -`opt_interleave` | You can potentially optimize query performance by [interleaving indexes](interleave-in-parent.html), which changes how CockroachDB stores your data. -`opt_partition_by` | Docs coming soon. - -## Examples - -### Create Standard Indexes - -To create the most efficient indexes, we recommend reviewing: - -- [Indexes: Best Practices](indexes.html#best-practices) -- [Index Selection in CockroachDB](https://www.cockroachlabs.com/blog/index-selection-cockroachdb-2/) - -#### Single-Column Indexes - -Single-column indexes sort the values of a single column. - -~~~ sql -> CREATE INDEX ON products (price); -~~~ - -Because each query can only use one index, single-column indexes are not typically as useful as multiple-column indexes. - -#### Multiple-Column Indexes - -Multiple-column indexes sort columns in the order you list them. - -~~~ sql -> CREATE INDEX ON products (price, stock); -~~~ - -To create the most useful multiple-column indexes, we recommend reviewing our [best practices](indexes.html#indexing-columns). - -#### Unique Indexes - -Unique indexes do not allow duplicate values among their columns. - -~~~ sql -> CREATE UNIQUE INDEX ON products (name, manufacturer_id); -~~~ - -This also applies the [Unique constraint](unique.html) at the table level, similarly to [`ALTER TABLE`](alter-table.html). The above example is equivalent to: - -~~~ sql -> ALTER TABLE products ADD CONSTRAINT products_name_manufacturer_id_key UNIQUE (name, manufacturer_id); -~~~ - -### Create Inverted Indexes New in v2.0 - -[Inverted indexes](inverted-indexes.html) can be created on schemaless data in a [`JSONB`](jsonb.html) column. - -~~~ sql -> CREATE INVERTED INDEX ON users (profile); -~~~ - -The above example is equivalent to the following PostgreSQL-compatible syntax: - -~~~ sql -> CREATE INDEX ON users USING GIN (profile); -~~~ - -### Store Columns - -Storing a column improves the performance of queries that retrieve (but don’t filter) its values. - -~~~ sql -> CREATE INDEX ON products (price) STORING (name); -~~~ - -However, to use stored columns, queries must filter another column in the same index. For example, SQL can retrieve `name` values from the above index only when a query's `WHERE` clause filters `price`. - -### Change Column Sort Order - -To sort columns in descending order, you must explicitly set the option when creating the index. (Ascending order is the default.) - -~~~ sql -> CREATE INDEX ON products (price DESC, stock); -~~~ - -How columns are sorted impacts the order of rows returned by queries using the index, which particularly affects queries using `LIMIT`. - -### Query Specific Indexes - -Normally, CockroachDB selects the index that it calculates will scan the fewest rows. However, you can override that selection and specify the name of the index you want to use. To find the name, use [`SHOW INDEX`](show-index.html). 
- -~~~ sql -> SHOW INDEX FROM products; -~~~ -~~~ -+----------+--------------------+--------+-----+--------+-----------+---------+----------+ -| Table | Name | Unique | Seq | Column | Direction | Storing | Implicit | -+----------+--------------------+--------+-----+--------+-----------+---------+----------+ -| products | primary | true | 1 | id | ASC | false | false | -| products | products_price_idx | false | 1 | price | ASC | false | false | -| products | products_price_idx | false | 2 | id | ASC | false | true | -+----------+--------------------+--------+-----+--------+-----------+---------+----------+ -(3 rows) -~~~ -~~~ sql -> SELECT name FROM products@products_price_idx WHERE price > 10; -~~~ - -## See Also - -- [Indexes](indexes.html) -- [`SHOW INDEX`](show-index.html) -- [`DROP INDEX`](drop-index.html) -- [`RENAME INDEX`](rename-index.html) -- [Other SQL Statements](sql-statements.html) diff --git a/src/current/v2.0/create-role.md b/src/current/v2.0/create-role.md deleted file mode 100644 index 8c09a0ba148..00000000000 --- a/src/current/v2.0/create-role.md +++ /dev/null @@ -1,64 +0,0 @@ ---- -title: CREATE ROLE (Enterprise) -summary: The CREATE ROLE statement creates SQL roles, which are groups containing any number of roles and users as members. -toc: true ---- - -New in v2.0: The `CREATE ROLE` [statement](sql-statements.html) creates SQL [roles](roles.html), which are groups containing any number of roles and users as members. You can assign privileges to roles, and all members of the role (regardless of whether if they are direct or indirect members) will inherit the role's privileges. - -{{site.data.alerts.callout_info}}CREATE ROLE is an enterprise-only feature.{{site.data.alerts.end}} - - -## Considerations - -- Role names: - - Are case-insensitive - - Must start with either a letter or underscore - - Must contain only letters, numbers, or underscores - - Must be between 1 and 63 characters. -- After creating roles, you must [grant them privileges to databases and tables](grant.html). -- Roles and users can be members of roles. -- Roles and users share the same namespace and must be unique. -- All privileges of a role are inherited by all of its members. -- There is no limit to the number of members in a role. -- Roles cannot log in. They do not have a password and cannot use certificates. -- Membership loops are not allowed (direct: `A is a member of B is a member of A` or indirect: `A is a member of B is a member of C ... is a member of A`). - -## Required Privileges - -Roles can only be created by superusers, i.e., members of the `admin` role. The `admin` role exists by default with `root` as the member. - -## Synopsis - -
      {% include {{ page.version.version }}/sql/diagrams/create_role.html %}
      - -## Parameters - -| Parameter | Description | -------------|-------------- -`name` | The name of the role you want to create. Role names are case-insensitive; must start with either a letter or underscore; must contain only letters, numbers, or underscores; and must be between 1 and 63 characters.

      Note that roles and [users](create-user.html) share the same namespace and must be unique. - -## Examples - -{% include copy-clipboard.html %} -~~~ sql -> CREATE ROLE dev_ops; -~~~ -~~~ -CREATE ROLE 1 -~~~ - -After creating roles, you can [add users to the role](grant-roles.html) and [grant the role privileges](grant.html). - -## See Also - -- [Manage Roles](roles.html) -- [`DROP ROLE` (Enterprise)](drop-user.html) -- [`GRANT `](grant.html) -- [`REVOKE `](revoke.html) -- [`GRANT ` (Enterprise)](grant-roles.html) -- [`REVOKE ` (Enterprise)](revoke-roles.html) -- [`SHOW ROLES`](show-roles.html) -- [`SHOW USERS`](show-users.html) -- [`SHOW GRANTS`](show-grants.html) -- [Other SQL Statements](sql-statements.html) diff --git a/src/current/v2.0/create-security-certificates-openssl.md b/src/current/v2.0/create-security-certificates-openssl.md deleted file mode 100644 index 5bdfcb1b623..00000000000 --- a/src/current/v2.0/create-security-certificates-openssl.md +++ /dev/null @@ -1,331 +0,0 @@ ---- -title: Create Security Certificates -summary: A secure CockroachDB cluster uses TLS for encrypted inter-node and client-node communication. -toc: true ---- - - - -A secure CockroachDB cluster uses [TLS](https://en.wikipedia.org/wiki/Transport_Layer_Security) for encrypted inter-node and client-node communication, which requires CA, node, and client certificates and keys. To create these certificates and keys, use the `cockroach cert` [commands](cockroach-commands.html) with the appropriate subcommands and flags, or use [`openssl` commands](https://wiki.openssl.org/index.php/). - - -## Subcommands - -Subcommand | Usage ------------|------ -[`openssl genrsa`](https://www.openssl.org/docs/manmaster/man1/genrsa.html) | Create an RSA private key. -[`openssl req`](https://www.openssl.org/docs/manmaster/man1/req.html) | Create CA certificate and CSRs (certificate signing requests). -[`openssl ca`](https://www.openssl.org/docs/manmaster/man1/ca.html) | Create node and client certificates using the CSRs. - -## Configuration Files - -To use [`openssl req`](https://www.openssl.org/docs/manmaster/man1/req.html) and [`openssl ca`](https://www.openssl.org/docs/manmaster/man1/ca.html) subcommands, you need the following configuration files: - -File name pattern | File usage --------------|------------ -`ca.cnf` | CA configuration file -`node.cnf` | Server configuration file -`client.cnf` | Client configuration file - -## Certificate Directory - -To create node and client certificates using the OpenSSL commands, you need access to a local copy of the CA certificate and key. We recommend creating all certificates (node, client, and CA certificates), and node and client keys in one place and then distributing them appropriately. Store the CA key somewhere safe and keep a backup; if you lose it, you will not be able to add new nodes or clients to your cluster. 
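-
-Before distributing certificate files, you can confirm what a certificate actually contains (issuer, validity dates, key usage, and so on) using the standard `openssl x509` command. This is an optional sanity check, sketched here with the `certs` directory layout used in the examples on this page:
-
-{% include copy-clipboard.html %}
-~~~ shell
-# Print the fields of a certificate in human-readable form:
-$ openssl x509 -in certs/ca.crt -text -noout
-~~~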
- -Use the [`openssl genrsa`](https://www.openssl.org/docs/manmaster/man1/genrsa.html) and [`openssl req`](https://www.openssl.org/docs/manmaster/man1/req.html) subcommands to create all certificates, and node and client keys in a single directory, with the files named as follows: - -File name pattern | File usage --------------|------------ -`ca.crt` | CA certificate -`node.crt` | Server certificate -`node.key` | Key for server certificate -`client..crt` | Client certificate for `` (for example: `client.root.crt` for user `root`) -`client..key` | Key for the client certificate - -Note the following: - -- The CA key should not be uploaded to the nodes and clients, so it should be created in a separate directory. - -- Keys (files ending in `.key`) must not have group or world permissions (maximum permissions are 0700, or `rwx------`). This check can be disabled by setting the environment variable `COCKROACH_SKIP_KEY_PERMISSION_CHECK=true`. - -## Examples - -### Create the CA key and certificate pair - -1. Create two directories: - - {% include copy-clipboard.html %} - ~~~ shell - $ mkdir certs - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ mkdir my-safe-directory - ~~~ - - `certs`: Create your CA certificate and all node and client certificates and keys in this directory and then upload the relevant files to the nodes and clients. - - `my-safe-directory`: Create your CA key in this directory and then reference the key when generating node and client certificates. After that, keep the key safe and secret; do not upload it to your nodes or clients. - -2. Create the `ca.cnf` file and copy the following configuration into it. - - You can set the CA certificate expiration period using the `default_days` parameter. We recommend using the CockroachDB default value of the CA certificate expiration period, which is 3660 days. - - {% include copy-clipboard.html %} - ~~~ shell - # OpenSSL CA configuration file - [ ca ] - default_ca = CA_default - - [ CA_default ] - default_days = 3660 - database = index.txt - serial = serial.txt - default_md = sha256 - copy_extensions = copy - unique_subject = no - - # Used to create the CA certificate. - [ req ] - prompt=no - distinguished_name = distinguished_name - x509_extensions = extensions - - [ distinguished_name ] - organizationName = Cockroach - commonName = Cockroach CA - - [ extensions ] - keyUsage = critical,digitalSignature,nonRepudiation,keyEncipherment,keyCertSign - basicConstraints = critical,CA:true,pathlen:1 - - # Common policy for nodes and users. - [ signing_policy ] - organizationName = supplied - commonName = supplied - - # Used to sign node certificates. - [ signing_node_req ] - keyUsage = critical,digitalSignature,keyEncipherment - extendedKeyUsage = serverAuth,clientAuth - - # Used to sign client certificates. - [ signing_client_req ] - keyUsage = critical,digitalSignature,keyEncipherment - extendedKeyUsage = clientAuth - ~~~ - - {{site.data.alerts.callout_info}}The keyUsage and extendedkeyUsage parameters are vital for CockroachDB functions. You can modify or omit other parameters as per your preferred OpenSSL configuration, but do not omit the keyUsage and extendedkeyUsage parameters. {{site.data.alerts.end}} - -3. 
Create the CA key using the [`openssl genrsa`](https://www.openssl.org/docs/manmaster/man1/genrsa.html) command: - - {% include copy-clipboard.html %} - ~~~ shell - $ openssl genrsa -out my-safe-directory/ca.key 2048 - ~~~ - {% include copy-clipboard.html %} - ~~~ shell - $ chmod 400 my-safe-directory/ca.key - ~~~ - -4. Create the CA certificate using the [`openssl req`](https://www.openssl.org/docs/manmaster/man1/req.html) command: - - {% include copy-clipboard.html %} - ~~~ shell - $ openssl req \ - -new \ - -x509 \ - -config ca.cnf \ - -key my-safe-directory/ca.key \ - -out certs/ca.crt \ - -days 3660 \ - -batch - ~~~ - -5. Reset database and index files. - - {% include copy-clipboard.html %} - ~~~ shell - $ rm -f index.txt serial.txt - ~~~ - {% include copy-clipboard.html %} - ~~~ shell - $ touch index.txt - ~~~ - {% include copy-clipboard.html %} - ~~~ shell - $ echo '01' > serial.txt - ~~~ - -### Create the certificate and key pairs for nodes - -In the following steps, replace the placeholder text in the code with the actual username and node address. - -1. Create the `node.cnf` file for the first node and copy the following configuration into it: - - {% include copy-clipboard.html %} - ~~~ shell - # OpenSSL node configuration file - [ req ] - prompt=no - distinguished_name = distinguished_name - req_extensions = extensions - - [ distinguished_name ] - organizationName = Cockroach - # Required value for commonName, do not change. - commonName = node - - [ extensions ] - subjectAltName = DNS:,DNS:,IP: - ~~~ - - {{site.data.alerts.callout_danger}}The commonName and subjectAltName parameters are vital for CockroachDB functions. It is also required that commonName be set to node. You can modify or omit other parameters as per your preferred OpenSSL configuration, but do not omit the commonName and subjectAltName parameters. {{site.data.alerts.end}} - -2. Create the key for the first node using the [`openssl genrsa`](https://www.openssl.org/docs/manmaster/man1/genrsa.html) command: - - {% include copy-clipboard.html %} - ~~~ shell - $ openssl genrsa -out certs/node.key 2048 - ~~~ - {% include copy-clipboard.html %} - ~~~ shell - $ chmod 400 certs/node.key - ~~~ - -3. Create the CSR for the first node using the [`openssl req`](https://www.openssl.org/docs/manmaster/man1/req.html) command: - - {% include copy-clipboard.html %} - ~~~ shell - # Create Node certificate signing request. - $ openssl req \ - -new \ - -config node.cnf \ - -key certs/node.key \ - -out node.csr \ - -batch - ~~~ - -4. Sign the node CSR to create the node certificate for the first node using the [`openssl ca`](https://www.openssl.org/docs/manmaster/man1/ca.html) command. - - You can set the node certificate expiration period using the `days` flag. We recommend using the CockroachDB default value of the node certificate expiration period, which is 1830 days. - - {% include copy-clipboard.html %} - ~~~ shell - # Sign the CSR using the CA key. - $ openssl ca \ - -config ca.cnf \ - -keyfile my-safe-directory/ca.key \ - -cert certs/ca.crt \ - -policy signing_policy \ - -extensions signing_node_req \ - -out certs/node.crt \ - -outdir certs/ \ - -in node.csr \ - -days 1830 \ - -batch - ~~~ - -5. 
Upload certificates to the first node: - - {% include copy-clipboard.html %} - ~~~ shell - # Create the certs directory: - $ ssh @ "mkdir certs" - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - # Upload the CA certificate and node certificate and key: - $ scp certs/ca.crt \ - certs/node.crt \ - certs/node.key \ - @:~/certs - ~~~ - -6. Delete the local copy of the first node's certificate and key: - - {% include copy-clipboard.html %} - ~~~ shell - $ rm certs/node.crt certs/node.key - ~~~ - - {{site.data.alerts.callout_info}}This is necessary because the certificates and keys for additional nodes will also be named node.crt and node.key.{{site.data.alerts.end}} - -7. Repeat steps 1 - 6 for each additional node. - -8. Remove the `.pem` files in the `certs` directory. These files are unnecessary duplicates of the `.crt` files that CockroachDB requires. - -### Create the certificate and key pair for a client - -In the following steps, replace the placeholder text in the code with the actual username. - -1. Create the `client.cnf` file for the first client and copy the following configuration into it: - - {% include copy-clipboard.html %} - ~~~ shell - # OpenSSL client configuration file - [ req ] - prompt=no - distinguished_name = distinguished_name - - [ distinguished_name ] - organizationName = Cockroach - commonName = - ~~~ - - {{site.data.alerts.callout_info}}The commonName parameter is vital for CockroachDB functions. You can modify or omit other parameters as per your preferred OpenSSL configuration, but do not omit the commonName parameter. {{site.data.alerts.end}} - -2. Create the key for the first client using the [`openssl genrsa`](https://www.openssl.org/docs/manmaster/man1/genrsa.html) command: - - {% include copy-clipboard.html %} - ~~~ shell - $ openssl genrsa -out certs/client..key 2048 - ~~~ - {% include copy-clipboard.html %} - ~~~ shell - $ chmod 400 certs/client..key - ~~~ - -3. Create the CSR for the first client using the [`openssl req`](https://www.openssl.org/docs/manmaster/man1/req.html) command: - - {% include copy-clipboard.html %} - ~~~ shell - # Create client certificate signing request - $ openssl req \ - -new \ - -config client.cnf \ - -key certs/client..key \ - -out client..csr \ - -batch - ~~~ - -4. Sign the client CSR to create the client certificate for the first client using the [`openssl ca`](https://www.openssl.org/docs/manmaster/man1/ca.html) command. You can set the client certificate expiration period using the `days` flag. We recommend using the CockroachDB default value of the client certificate expiration period, which is 1830 days. - - {% include copy-clipboard.html %} - ~~~ shell - $ openssl ca \ - -config ca.cnf \ - -keyfile my-safe-directory/ca.key \ - -cert certs/ca.crt \ - -policy signing_policy \ - -extensions signing_client_req \ - -out certs/client..crt \ - -outdir certs/ \ - -in client..csr \ - -days 1830 \ - -batch - ~~~ - -5. Upload certificates to the first client using your preferred method. - -6. Repeat steps 1 - 5 for each additional client. - -7. Remove the `.pem` files in the `certs` directory. These files are unnecessary duplicates of the `.crt` files that CockroachDB requires. - -## See Also - -- [Manual Deployment](manual-deployment.html): Learn about starting a multi-node secure cluster and accessing it from a client. 
-- [Start a Node](start-a-node.html): Learn more about the flags you pass when adding a node to a secure cluster -- [Client Connection Parameters](connection-parameters.html) diff --git a/src/current/v2.0/create-security-certificates.md b/src/current/v2.0/create-security-certificates.md deleted file mode 100644 index 5126af243ed..00000000000 --- a/src/current/v2.0/create-security-certificates.md +++ /dev/null @@ -1,290 +0,0 @@ ---- -title: Create Security Certificates -summary: A secure CockroachDB cluster uses TLS for encrypted inter-node and client-node communication. -toc: true ---- - -
      - - -
      - -A secure CockroachDB cluster uses [TLS](https://en.wikipedia.org/wiki/Transport_Layer_Security) for encrypted inter-node and client-node communication, which requires CA, node, and client certificates and keys. To create these certificates and keys, use the `cockroach cert` [commands](cockroach-commands.html) with the appropriate subcommands and flags, or use [`openssl` commands](https://wiki.openssl.org/index.php/). - -{{site.data.alerts.callout_success}}For details about when and how to change security certificates without restarting nodes, see Rotate Security Certificates.{{site.data.alerts.end}} - - -## How Security Certificates Work - -1. Using the `cockroach cert` command, you create a CA certificate and key and then node and client certificates that are signed by the CA certificate. Since you need access to a copy of the CA certificate and key to create node and client certs, it's best to create everything in one place. - -2. You then upload the appropriate node certificate and key and the CA certificate to each node, and you upload the appropriate client certificate and key and the CA certificate to each client. - -3. When nodes establish contact to each other, and when clients establish contact to nodes, they use the CA certificate to verify each other's identity. - -## Subcommands - -Subcommand | Usage ------------|------ -`create-ca` | Create the self-signed certificate authority (CA), which you'll use to create and authenticate certificates for your entire cluster. -`create-node` | Create a certificate and key for a specific node in the cluster. You specify all addresses at which the node can be reached and pass appropriate flags. -`create-client` | Create a certificate and key for a [specific user](create-and-manage-users.html) accessing the cluster from a client. You specify the username of the user who will use the certificate and pass appropriate flags. -`list` | List certificates and keys found in the certificate directory. - -## Certificate Directory - -When using `cockroach cert` to create node and client certificates, you will need access to a local copy of the CA certificate and key. It is therefore recommended to create all certificates and keys in one place and then distribute node and client certificates and keys appropriately. For the CA key, be sure to store it somewhere safe and keep a backup; if you lose it, you will not be able to add new nodes or clients to your cluster. For a walkthrough of this process, see [Manual Deployment](manual-deployment.html). - -The `create-*` subcommands generate the CA certificate and all node and client certificates and keys in a single directory specified by the `--certs-dir` flag, with the files named as follows: - -File name pattern | File usage --------------|------------ -`ca.crt` | CA certificate -`node.crt` | Server certificate -`node.key` | Key for server certificate -`client..crt` | Client certificate for `` (eg: `client.root.crt` for user `root`) -`client..key` | Key for the client certificate - -Note the following: - -- The CA key is never loaded automatically by `cockroach` commands, so it should be created in a separate directory, identified by the `--ca-key` flag. - -- Keys (files ending in `.key`) must not have group or world permissions (maximum permissions are 0700, or `rwx------`). This check can be disabled by setting the environment variable `COCKROACH_SKIP_KEY_PERMISSION_CHECK=true`. 
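-
-For example, if a key file fails this permission check after being copied between machines, you can tighten its permissions instead of disabling the check. This is a minimal sketch; `certs/node.key` stands in for whichever key is affected:
-
-{% include copy-clipboard.html %}
-~~~ shell
-# Restrict the key to its owner (0600 is within the allowed maximum of 0700):
-$ chmod 600 certs/node.key
-~~~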
- -## Synopsis - -~~~ shell -# Create the CA certificate and key: -$ cockroach cert create-ca \ - --certs-dir=[path-to-certs-directory] \ - --ca-key=[path-to-ca-key] - -# Create a node certificate and key: -$ cockroach cert create-node \ - [node-hostname] \ - [node-other-hostname] \ - [node-yet-another-hostname] \ - --certs-dir=[path-to-certs-directory] \ - --ca-key=[path-to-ca-key] - -# Create a client certificate and key: -$ cockroach cert create-client \ - [username] \ - --certs-dir=[path-to-certs-directory] \ - --ca-key=[path-to-ca-key] - -# List certificates and keys: -$ cockroach cert list \ - --certs-dir=[path-to-certs-directory] - -# View help: -$ cockroach cert --help -$ cockroach cert create-ca --help -$ cockroach cert create-node --help -$ cockroach cert create-client --help -$ cockroach cert list --help -~~~ - -## Flags - -The `cert` command and subcommands support the following [general-use](#general) and [logging](#logging) flags. - -### General - -Flag | Description ------|----------- -`--certs-dir` | The path to the [certificate directory](#certificate-directory) containing all certificates and keys needed by `cockroach` commands.

      This flag is used by all subcommands.

      **Default:** `${HOME}/.cockroach-certs/` -`--ca-key` | The path to the private key protecting the CA certificate.

      This flag is required for all `create-*` subcommands. When used with `create-ca` in particular, it defines where to create the CA key; the specified directory must exist.

      **Env Variable:** `COCKROACH_CA_KEY` -`--allow-ca-key-reuse` | When running the `create-ca` subcommand, pass this flag to re-use an existing CA key identified by `--ca-key`. Otherwise, a new CA key will be generated.

      This flag is used only by the `create-ca` subcommand. It helps avoid accidentally re-using an existing CA key. -`--overwrite` | When running `create-*` subcommands, pass this flag to allow existing files in the certificate directory (`--certs-dir`) to be overwritten.

      This flag helps avoid accidentally overwriting sensitive certificates and keys. -`--lifetime` | The lifetime of the certificate, in hours, minutes, and seconds.

      Certificates are valid from the time they are created through the duration specified in `--lifetime`.

      **Default:** `87840h0m0s` (10 years) -`--key-size` | The size of the CA, node, or client key, in bits.

      **Default:** `2048` - -### Logging - -By default, the `cert` command logs errors to `stderr`. - -If you need to troubleshoot this command's behavior, you can change its [logging behavior](debug-and-error-logs.html). - -## Examples - -### Create the CA certificate and key pair - -1. Create two directories: - - {% include copy-clipboard.html %} - ~~~ shell - $ mkdir certs - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ mkdir my-safe-directory - ~~~ - - `certs`: You'll generate your CA certificate and all node and client certificates and keys in this directory and then upload some of the files to your nodes. - - `my-safe-directory`: You'll generate your CA key in this directory and then reference the key when generating node and client certificates. After that, you'll keep the key safe and secret; you will not upload it to your nodes. - -2. Generate the CA certificate and key: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach cert create-ca \ - --certs-dir=certs \ - --ca-key=my-safe-directory/ca.key - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ ls -l certs - ~~~ - - ~~~ - total 8 - -rw-r--r-- 1 maxroach maxroach 1.1K Jul 10 14:12 ca.crt - ~~~ - -### Create the certificate and key pairs for nodes - -1. Generate the certificate and key for the first node: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach cert create-node \ - node1.example.com \ - node1.another-example.com \ - --certs-dir=certs \ - --ca-key=my-safe-directory/ca.key - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ ls -l certs - ~~~ - - ~~~ - total 24 - -rw-r--r-- 1 maxroach maxroach 1.1K Jul 10 14:12 ca.crt - -rw-r--r-- 1 maxroach maxroach 1.2K Jul 10 14:16 node.crt - -rw------- 1 maxroach maxroach 1.6K Jul 10 14:16 node.key - ~~~ - -2. Upload certificates to the first node: - - {% include copy-clipboard.html %} - ~~~ shell - # Create the certs directory: - $ ssh @ "mkdir certs" - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - # Upload the CA certificate and node certificate and key: - $ scp certs/ca.crt \ - certs/node.crt \ - certs/node.key \ - @:~/certs - ~~~ - -3. Delete the local copy of the first node's certificate and key: - - {% include copy-clipboard.html %} - ~~~ shell - $ rm certs/node.crt certs/node.key - ~~~ - - {{site.data.alerts.callout_info}}This is necessary because the certificates and keys for additional nodes will also be named node.crt and node.key As an alternative to deleting these files, you can run the next cockroach cert create-node commands with the --overwrite flag.{{site.data.alerts.end}} - -4. Create the certificate and key for the second node: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach cert create-node \ - node2.example.com \ - node2.another-example.com \ - --certs-dir=certs \ - --ca-key=my-safe-directory/ca.key - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ ls -l certs - ~~~ - - ~~~ - total 24 - -rw-r--r-- 1 maxroach maxroach 1.1K Jul 10 14:12 ca.crt - -rw-r--r-- 1 maxroach maxroach 1.2K Jul 10 14:17 node.crt - -rw------- 1 maxroach maxroach 1.6K Jul 10 14:17 node.key - ~~~ - -5. Upload certificates to the second node: - - {% include copy-clipboard.html %} - ~~~ shell - # Create the certs directory: - $ ssh @ "mkdir certs" - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - # Upload the CA certificate and node certificate and key: - $ scp certs/ca.crt \ - certs/node.crt \ - certs/node.key \ - @:~/certs - ~~~ - -6. Repeat steps 3 - 5 for each additional node. 
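-
-To double-check that a newly generated node certificate was signed by your CA before you upload it, you can use standard OpenSSL tooling. This step is optional, and OpenSSL is not otherwise required for the `cockroach cert` workflow; the paths below assume the `certs` directory used in these examples:
-
-{% include copy-clipboard.html %}
-~~~ shell
-# Verify the node certificate against the CA certificate:
-$ openssl verify -CAfile certs/ca.crt certs/node.crt
-~~~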
- -### Create the certificate and key pair for a client - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach cert create-client \ -maxroach \ ---certs-dir=certs \ ---ca-key=my-safe-directory/ca.key -~~~ - -{% include copy-clipboard.html %} -~~~ shell -$ ls -l certs -~~~ - -~~~ -total 40 --rw-r--r-- 1 maxroach maxroach 1.1K Jul 10 14:12 ca.crt --rw-r--r-- 1 maxroach maxroach 1.1K Jul 10 14:13 client.maxroach.crt --rw------- 1 maxroach maxroach 1.6K Jul 10 14:13 client.maxroach.key --rw-r--r-- 1 maxroach maxroach 1.2K Jul 10 14:17 node.crt --rw------- 1 maxroach maxroach 1.6K Jul 10 14:17 node.key -~~~ - -### List certificates and keys - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach cert list \ ---certs-dir=certs -~~~ - -~~~ -Certificate directory: certs -+-----------------------+---------------------+---------------------+------------+--------------------------------------------------------+-------+ -| Usage | Certificate File | Key File | Expires | Notes | Error | -+-----------------------+---------------------+---------------------+------------+--------------------------------------------------------+-------+ -| Certificate Authority | ca.crt | | 2027/07/18 | num certs: 1 | | -| Node | node.crt | node.key | 2022/07/14 | addresses: node2.example.com,node2.another-example.com | | -| Client | client.maxroach.crt | client.maxroach.key | 2022/07/14 | user: maxroach | | -+-----------------------+---------------------+---------------------+------------+--------------------------------------------------------+-------+ -(3 rows) -~~~ - -## See Also - -- [Client Connection Parameters](connection-parameters.html) -- [Rotate Security Certificates](rotate-certificates.html) -- [Manual Deployment](manual-deployment.html) -- [Orchestrated Deployment](orchestration.html) -- [Local Deployment](secure-a-cluster.html) -- [Other Cockroach Commands](cockroach-commands.html) diff --git a/src/current/v2.0/create-sequence.md b/src/current/v2.0/create-sequence.md deleted file mode 100644 index 51f49fe6319..00000000000 --- a/src/current/v2.0/create-sequence.md +++ /dev/null @@ -1,190 +0,0 @@ ---- -title: CREATE SEQUENCE -summary: -toc: true ---- - -New in v2.0: The `CREATE SEQUENCE` [statement](sql-statements.html) creates a new sequence in a database. Use a sequence to auto-increment integers in a table. - - -## Considerations - -- Using a sequence is slower than [auto-generating unique IDs with the `gen_random_uuid()`, `uuid_v4()` or `unique_rowid()` built-in functions](sql-faqs.html#how-do-i-auto-generate-unique-row-ids-in-cockroachdb). Incrementing a sequence requires a write to persistent storage, whereas auto-generating a unique ID does not. Therefore, use auto-generated unique IDs unless an incremental sequence is preferred or required. -- A column that uses a sequence can have a gap in the sequence values if a transaction advances the sequence and is then rolled back. Sequence updates are committed immediately and aren't rolled back along with their containing transaction. This is done to avoid blocking concurrent transactions that use the same sequence. - -## Required Privileges - -The user must have the `CREATE` [privilege](privileges.html) on the parent database. - -## Synopsis - -
      {% include {{ page.version.version }}/sql/diagrams/create_sequence.html %}
      - -## Parameters - - - - Parameter | Description ------------|------------ -`seq_name` | The name of the sequence to be created, which must be unique within its database and follow the [identifier rules](keywords-and-identifiers.html#identifiers). When the parent database is not set as the default, the name must be formatted as `database.seq_name`. -`INCREMENT` | The value by which the sequence is incremented. A negative number creates a descending sequence. A positive number creates an ascending sequence.

      **Default:** `1` -`MINVALUE` | The minimum value of the sequence. Default values apply if not specified or if you enter `NO MINVALUE`.

      **Default for ascending:** `1`

      **Default for descending:** `MININT` -`MAXVALUE` | The maximum value of the sequence. Default values apply if not specified or if you enter `NO MAXVALUE`.

      **Default for ascending:** `MAXINT`

      **Default for descending:** `-1` -`START` | The first value of the sequence.

      **Default for ascending:** `1`

      **Default for descending:** `-1` -`NO CYCLE` | Currently, all sequences are set to `NO CYCLE` and the sequence will not wrap. - - - -## Sequence Functions - -We support the following [SQL sequence functions](functions-and-operators.html): - -- `nextval('seq_name')` - {{site.data.alerts.callout_info}}If nextval() is used in conjunction with RETURNING NOTHING statements, the sequence increments can be reordered. For more information, see Parallel Statement Execution.{{site.data.alerts.end}} -- `currval('seq_name')` -- `lastval()` -- `setval('seq_name', value, is_called)` - -## Examples - -### List All Sequences - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM information_schema.sequences; -~~~ -~~~ -+------------------+-----------------+--------------------+-----------+-------------------+-------------------------+---------------+-------------+----------------------+---------------------+-----------+--------------+ -| sequence_catalog | sequence_schema | sequence_name | data_type | numeric_precision | numeric_precision_radix | numeric_scale | start_value | minimum_value | maximum_value | increment | cycle_option | -+------------------+-----------------+--------------------+-----------+-------------------+-------------------------+---------------+-------------+----------------------+---------------------+-----------+--------------+ -| def | db_2 | test_4 | INT | 64 | 2 | 0 | 1 | 1 | 9223372036854775807 | 1 | NO | -| def | test_db | customer_seq | INT | 64 | 2 | 0 | 101 | 1 | 9223372036854775807 | 2 | NO | -| def | test_db | desc_customer_list | INT | 64 | 2 | 0 | 1000 | -9223372036854775808 | -1 | -2 | NO | -| def | test_db | test_sequence3 | INT | 64 | 2 | 0 | 1 | 1 | 9223372036854775807 | 1 | NO | -+------------------+-----------------+--------------------+-----------+-------------------+-------------------------+---------------+-------------+----------------------+---------------------+-----------+--------------+ -(4 rows) -~~~ - -### Create a Sequence with Default Settings - -In this example, we create a sequence with default settings. - -{% include copy-clipboard.html %} -~~~ sql -> CREATE SEQUENCE customer_seq; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SHOW CREATE SEQUENCE customer_seq; -~~~ -~~~ -+--------------+------------------------------------------------------------------------------------------+ -| Sequence | CreateSequence | -+--------------+------------------------------------------------------------------------------------------+ -| customer_seq | CREATE SEQUENCE customer_seq MINVALUE 1 MAXVALUE 9223372036854775807 INCREMENT 1 START 1 | -+--------------+------------------------------------------------------------------------------------------+ -~~~ - -### Create a Sequence with User-Defined Settings - -In this example, we create a sequence that starts at -1 and descends in increments of 2. 
- -{% include copy-clipboard.html %} -~~~ sql -> CREATE SEQUENCE desc_customer_list START -1 INCREMENT -2; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SHOW CREATE SEQUENCE desc_customer_list; -~~~ - -~~~ -+--------------------+----------------------------------------------------------------------------------------------------+ -| Sequence | CreateSequence | -+--------------------+----------------------------------------------------------------------------------------------------+ -| desc_customer_list | CREATE SEQUENCE desc_customer_list MINVALUE -9223372036854775808 MAXVALUE -1 INCREMENT -2 START -1 | -+--------------------+----------------------------------------------------------------------------------------------------+ -~~~ - -### Create a Table with a Sequence - -In this example, we create a table using the sequence we created in the first example as the table's primary key. - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE customer_list ( - id INT PRIMARY KEY DEFAULT nextval('customer_seq'), - customer string, - address string - ); -~~~ - -Insert a few records to see the sequence. - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO customer_list (customer, address) - VALUES - ('Lauren', '123 Main Street'), - ('Jesse', '456 Broad Ave'), - ('Amruta', '9876 Green Parkway'); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM customer_list; -~~~ -~~~ -+----+----------+--------------------+ -| id | customer | address | -+----+----------+--------------------+ -| 1 | Lauren | 123 Main Street | -| 2 | Jesse | 456 Broad Ave | -| 3 | Amruta | 9876 Green Parkway | -+----+----------+--------------------+ -~~~ - -### View the Current Value of a Sequence - -To view the current value without incrementing the sequence, use: - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM customer_seq; -~~~ -~~~ -+------------+---------+-----------+ -| last_value | log_cnt | is_called | -+------------+---------+-----------+ -| 3 | 0 | true | -+------------+---------+-----------+ -~~~ - -{{site.data.alerts.callout_info}}The log_cnt and is_called columns are returned only for PostgreSQL compatibility; they are not stored in the database.{{site.data.alerts.end}} - -If a value has been obtained from the sequence in the current session, you can also use the `currval('seq_name')` function to get that most recently obtained value: - -~~~ sql -> SELECT currval('customer_seq'); -~~~ -~~~ -+---------+ -| currval | -+---------+ -| 3 | -+---------+ -~~~ - -## See Also -- [`ALTER SEQUENCE`](alter-sequence.html) -- [`RENAME SEQUENCE`](rename-sequence.html) -- [`DROP SEQUENCE`](drop-sequence.html) -- [Functions and Operators](functions-and-operators.html) -- [Other SQL Statements](sql-statements.html) diff --git a/src/current/v2.0/create-table-as.md b/src/current/v2.0/create-table-as.md deleted file mode 100644 index a3c772dccde..00000000000 --- a/src/current/v2.0/create-table-as.md +++ /dev/null @@ -1,219 +0,0 @@ ---- -title: CREATE TABLE AS -summary: The CREATE TABLE AS statement persists the result of a query into the database for later reuse. -toc: true ---- - -The `CREATE TABLE ... AS` statement creates a new table from a [selection query](selection-queries.html). - - -## Intended Use - -Tables created with `CREATE TABLE ... AS` are intended to persist the -result of a query for later reuse. - -This can be more efficient than a [view](create-view.html) when the -following two conditions are met: - -- The result of the query is used as-is multiple times. 
-- The copy needs not be kept up-to-date with the original table over time. - -When the results of a query are reused multiple times within a larger -query, a view is advisable instead. The query optimizer can "peek" -into the view and optimize the surrounding query using the primary key -and indices of the tables mentioned in the view query. - -A view is also advisable when the results must be up-to-date; a view -always retrieves the current data from the tables that the view query -mentions. - -## Required Privileges - -The user must have the `CREATE` [privilege](privileges.html) on the parent database. - -## Synopsis - -
      -{% include {{ page.version.version }}/sql/diagrams/create_table_as.html %} -
      - -## Parameters - - - -| Parameter | Description | -|-----------|-------------| -| `IF NOT EXISTS` | Create a new table only if a table of the same name does not already exist in the database; if one does exist, do not return an error.

Note that `IF NOT EXISTS` checks the table name only; it does not check if an existing table has the same columns, indexes, constraints, etc., as the new table. | -| `table_name` | The name of the table to create, which must be unique within its database and follow these [identifier rules](keywords-and-identifiers.html#identifiers). When the parent database is not set as the default, the name must be formatted as `database.name`.

      The [`UPSERT`](upsert.html) and [`INSERT ON CONFLICT`](insert.html) statements use a temporary table called `excluded` to handle uniqueness conflicts during execution. It's therefore not recommended to use the name `excluded` for any of your tables. | -| `name` | The name of the column you want to use instead of the name of the column from `select_stmt`. | -| `select_stmt` | A [selection query](selection-queries.html) to provide the data. | - -## Limitations - -The [primary key](primary-key.html) of tables created with `CREATE -TABLE ... AS` is not derived from the query results. Like for other -tables, it is not possible to add or change the primary key after -creation. Moreover, these tables are not -[interleaved](interleave-in-parent.html) with other tables. The -default rules for [column families](column-families.html) apply. - -For example: - -~~~ sql -> CREATE TABLE logoff ( - user_id INT PRIMARY KEY, - user_email STRING UNIQUE, - logoff_date DATE NOT NULL, -); -> CREATE TABLE logoff_copy AS TABLE logoff; -> SHOW CREATE TABLE logoff_copy; -~~~ -~~~ -+-------------+-----------------------------------------------------------------+ -| Table | CreateTable | -+-------------+-----------------------------------------------------------------+ -| logoff_copy | CREATE TABLE logoff_copy ( | -| | user_id INT NULL, | -| | user_email STRING NULL, | -| | logoff_date DATE NULL, | -| | FAMILY "primary" (user_id, user_email, logoff_date, rowid) | -| | ) | -+-------------+-----------------------------------------------------------------+ -(1 row) -~~~ - -The example illustrates that the primary key, unique and "not null" -constraints are not propagated to the copy. - -It is however possible to -[create a secondary index](create-index.html) after `CREATE TABLE -... AS`. - -For example: - -~~~ sql -> CREATE INDEX logoff_copy_id_idx ON logoff_copy(user_id); -> SHOW CREATE TABLE logoff_copy; -~~~ -~~~ -+-------------+-----------------------------------------------------------------+ -| Table | CreateTable | -+-------------+-----------------------------------------------------------------+ -| logoff_copy | CREATE TABLE logoff_copy ( | -| | user_id INT NULL, | -| | user_email STRING NULL, | -| | logoff_date DATE NULL, | -| | INDEX logoff_copy_id_idx (user_id ASC), | -| | FAMILY "primary" (user_id, user_email, logoff_date, rowid) | -| | ) | -+-------------+-----------------------------------------------------------------+ -(1 row) -~~~ - -For maximum data storage optimization, consider using separately -[`CREATE`](create-table.html) followed by -[`INSERT INTO ...`](insert.html) to populate the table using the query -results. - -## Examples - -### Create a Table from a `SELECT` Query - -~~~ sql -> SELECT * FROM customers WHERE state = 'NY'; -~~~ -~~~ -+----+---------+-------+ -| id | name | state | -+----+---------+-------+ -| 6 | Dorotea | NY | -| 15 | Thales | NY | -+----+---------+-------+ -~~~ -~~~ sql -> CREATE TABLE customers_ny AS SELECT * FROM customers WHERE state = 'NY'; - -> SELECT * FROM customers_ny; -~~~ -~~~ -+----+---------+-------+ -| id | name | state | -+----+---------+-------+ -| 6 | Dorotea | NY | -| 15 | Thales | NY | -+----+---------+-------+ -~~~ - -### Change Column Names - - - -This statement creates a copy of an existing table but with changed column names. 
- - -~~~ sql -> CREATE TABLE customers_ny (id, first_name) AS SELECT id, name FROM customers WHERE state = 'NY'; - -> SELECT * FROM customers_ny; -~~~ -~~~ -+----+------------+ -| id | first_name | -+----+------------+ -| 6 | Dorotea | -| 15 | Thales | -+----+------------+ -~~~ - -### Create a Table from a `VALUES` Clause - -~~~ sql -> CREATE TABLE tech_states AS VALUES ('CA'), ('NY'), ('WA'); - -> SELECT * FROM tech_states; -~~~ -~~~ -+---------+ -| column1 | -+---------+ -| CA | -| NY | -| WA | -+---------+ -(3 rows) -~~~ - - -### Create a Copy of an Existing Table - -~~~ sql -> CREATE TABLE customers_ny_copy AS TABLE customers_ny; - -> SELECT * FROM customers_ny_copy; -~~~ -~~~ -+----+------------+ -| id | first_name | -+----+------------+ -| 6 | Dorotea | -| 15 | Thales | -+----+------------+ -~~~ - -When a table copy is created this way, the copy is not associated to -any primary key, secondary index or constraint that was present on the -original table. - -## See Also - -- [Selection Queries](selection-queries.html) -- [Simple `SELECT` Clause](select-clause.html) -- [`CREATE TABLE`](create-table.html) -- [`CREATE VIEW`](create-view.html) -- [`INSERT`](insert.html) -- [`DROP TABLE`](drop-table.html) -- [Other SQL Statements](sql-statements.html) diff --git a/src/current/v2.0/create-table.md b/src/current/v2.0/create-table.md deleted file mode 100644 index 11b1dfafc58..00000000000 --- a/src/current/v2.0/create-table.md +++ /dev/null @@ -1,429 +0,0 @@ ---- -title: CREATE TABLE -summary: The CREATE TABLE statement creates a new table in a database. -toc: true ---- - -The `CREATE TABLE` [statement](sql-statements.html) creates a new table in a database. - - -## Required Privileges - -The user must have the `CREATE` [privilege](privileges.html) on the parent database. - -## Synopsis - -
      - - -

      - -
      -{% include {{ page.version.version }}/sql/diagrams/create_table.html %} -
      - -
      - -
      -{% include {{ page.version.version }}/sql/diagrams/create_table.html %} -
      - -**column_def ::=** - -
      -{% include {{ page.version.version }}/sql/diagrams/column_def.html %} -
      - -**col_qualification ::=** - -
      -{% include {{ page.version.version }}/sql/diagrams/col_qualification.html %} -
      - -**index_def ::=** - -
      -{% include {{ page.version.version }}/sql/diagrams/index_def.html %} -
      - -**family_def ::=** - -
      -{% include {{ page.version.version }}/sql/diagrams/family_def.html %} -
      - -**table_constraint ::=** - -
      -{% include {{ page.version.version }}/sql/diagrams/table_constraint.html %} -
      - -**opt_interleave ::=** - -
      -{% include {{ page.version.version }}/sql/diagrams/opt_interleave.html %} -
      - -
      - -{{site.data.alerts.callout_success}}To create a table from the results of a SELECT statement, use CREATE TABLE AS. -{{site.data.alerts.end}} - -## Parameters - -Parameter | Description -----------|------------ -`IF NOT EXISTS` | Create a new table only if a table of the same name does not already exist in the database; if one does exist, do not return an error.

Note that `IF NOT EXISTS` checks the table name only; it does not check if an existing table has the same columns, indexes, constraints, etc., as the new table. -`table_name` | The name of the table to create, which must be unique within its database and follow these [identifier rules](keywords-and-identifiers.html#identifiers). When the parent database is not set as the default, the name must be formatted as `database.name`.

      The [`UPSERT`](upsert.html) and [`INSERT ON CONFLICT`](insert.html) statements use a temporary table called `excluded` to handle uniqueness conflicts during execution. It's therefore not recommended to use the name `excluded` for any of your tables. -`column_def` | A comma-separated list of column definitions. Each column requires a [name/identifier](keywords-and-identifiers.html#identifiers) and [data type](data-types.html); optionally, a [column-level constraint](constraints.html) or other column qualification (e.g., [computed columns](computed-columns.html)) can be specified. Column names must be unique within the table but can have the same name as indexes or constraints.

Any Primary Key, Unique, and Check [constraints](constraints.html) defined at the column level are moved to the table level as part of the table's creation. Use the [`SHOW CREATE TABLE`](show-create-table.html) statement to view them at the table level. -`index_def` | An optional, comma-separated list of [index definitions](indexes.html). For each index, the column(s) to index must be specified; optionally, a name can be specified. Index names must be unique within the table and follow these [identifier rules](keywords-and-identifiers.html#identifiers). See the [Create a Table with Secondary Indexes and Inverted Indexes](#create-a-table-with-secondary-and-inverted-indexes-new-in-v2-0) example below.

The [`CREATE INDEX`](create-index.html) statement can be used to create an index separately from table creation. -`family_def` | An optional, comma-separated list of [column family definitions](column-families.html). Column family names must be unique within the table but can have the same name as columns, constraints, or indexes.

      A column family is a group of columns that are stored as a single key-value pair in the underlying key-value store. CockroachDB automatically groups columns into families to ensure efficient storage and performance. However, there are cases when you may want to manually assign columns to families. For more details, see [Column Families](column-families.html). -`table_constraint` | An optional, comma-separated list of [table-level constraints](constraints.html). Constraint names must be unique within the table but can have the same name as columns, column families, or indexes. -`opt_interleave` | You can potentially optimize query performance by [interleaving tables](interleave-in-parent.html), which changes how CockroachDB stores your data. -`opt_partition_by` | New in v2.0: An [enterprise-only](enterprise-licensing.html) option that lets you define table partitions at the row level. You can define table partitions by list or by range. See [Define Table Partitions](partitioning.html) for more information. - -## Table-Level Replication - -By default, tables are created in the default replication zone but can be placed into a specific replication zone. See [Create a Replication Zone for a Table](configure-replication-zones.html#create-a-replication-zone-for-a-table) for more information. - -## Row-Level Replication New in v2.0 - -CockroachDB allows [enterprise users](enterprise-licensing.html) to [define table partitions](partitioning.html), thus providing row-level control of how and where the data is stored. See [Create a Replication Zone for a Table Partition](configure-replication-zones.html#create-a-replication-zone-for-a-table-or-secondary-index-partition-new-in-v2-0) for more information. - -{{site.data.alerts.callout_info}}The primary key required for partitioning is different from the conventional primary key. To define the primary key for partitioning, prefix the unique identifier(s) in the primary key with all columns you want to partition and subpartition the table on, in the order in which you want to nest your subpartitions. See Partition using Primary Key for more details.{{site.data.alerts.end}} - -## Examples - -### Create a Table (No Primary Key Defined) - -In CockroachDB, every table requires a [primary key](primary-key.html). If one is not explicitly defined, a column called `rowid` of the type `INT` is added automatically as the primary key, with the `unique_rowid()` function used to ensure that new rows always default to unique `rowid` values. The primary key is automatically indexed. - -{{site.data.alerts.callout_info}}Strictly speaking, a primary key's unique index is not created; it is derived from the key(s) under which the data is stored, so it takes no additional space. 
However, it appears as a normal unique index when using commands like SHOW INDEX.{{site.data.alerts.end}} - -~~~ sql -> CREATE TABLE logon ( - user_id INT, - logon_date DATE -); - -> SHOW COLUMNS FROM logon; -~~~ - -~~~ -+------------+------+------+---------+---------+ -| Field | Type | Null | Default | Indices | -+------------+------+------+---------+---------+ -| user_id | INT | true | NULL | {} | -| logon_date | DATE | true | NULL | {} | -+------------+------+------+---------+---------+ -(2 rows) -~~~ - -~~~ sql -> SHOW INDEX FROM logon; -~~~ - -~~~ -+-------+---------+--------+-----+--------+-----------+---------+----------+ -| Table | Name | Unique | Seq | Column | Direction | Storing | Implicit | -+-------+---------+--------+-----+--------+-----------+---------+----------+ -| logon | primary | true | 1 | rowid | ASC | false | false | -+-------+---------+--------+-----+--------+-----------+---------+----------+ -(1 row) -~~~ - -### Create a Table (Primary Key Defined) - -In this example, we create a table with three columns. One column is the [primary key](primary-key.html), another is given the [Unique constraint](unique.html), and the third has no constraints. The primary key and column with the Unique constraint are automatically indexed. - -~~~ sql -> CREATE TABLE logoff ( - user_id INT PRIMARY KEY, - user_email STRING UNIQUE, - logoff_date DATE -); - -> SHOW COLUMNS FROM logoff; -~~~ - -~~~ -+-------------+--------+-------+---------+---------------------------------+ -| Field | Type | Null | Default | Indices | -+-------------+--------+-------+---------+---------------------------------+ -| user_id | INT | false | NULL | {primary,logoff_user_email_key} | -| user_email | STRING | true | NULL | {logoff_user_email_key} | -| logoff_date | DATE | true | NULL | {} | -+-------------+--------+-------+---------+---------------------------------+ -(3 rows) -~~~ - -~~~ sql -> SHOW INDEX FROM logoff; -~~~ - -~~~ -+--------+-----------------------+--------+-----+------------+-----------+---------+----------+ -| Table | Name | Unique | Seq | Column | Direction | Storing | Implicit | -+--------+-----------------------+--------+-----+------------+-----------+---------+----------+ -| logoff | primary | true | 1 | user_id | ASC | false | false | -| logoff | logoff_user_email_key | true | 1 | user_email | ASC | false | false | -| logoff | logoff_user_email_key | true | 2 | user_id | ASC | false | true | -+--------+-----------------------+--------+-----+------------+-----------+---------+----------+ -(3 rows) -~~~ - -### Create a Table with Secondary and Inverted Indexes New in v2.0 - -In this example, we create two secondary indexes during table creation. Secondary indexes allow efficient access to data with keys other than the primary key. This example also demonstrates a number of column-level and table-level [constraints](constraints.html). - -[Inverted indexes](inverted-indexes.html), which are new in v2.0, allow efficient access to the schemaless data in a [`JSONB`](jsonb.html) column. - -This example also demonstrates a number of column-level and table-level [constraints](constraints.html). 
- -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE product_information ( - product_id INT PRIMARY KEY NOT NULL, - product_name STRING(50) UNIQUE NOT NULL, - product_description STRING(2000), - category_id STRING(1) NOT NULL CHECK (category_id IN ('A','B','C')), - weight_class INT, - warranty_period INT CONSTRAINT valid_warranty CHECK (warranty_period BETWEEN 0 AND 24), - supplier_id INT, - product_status STRING(20), - list_price DECIMAL(8,2), - min_price DECIMAL(8,2), - catalog_url STRING(50) UNIQUE, - date_added DATE DEFAULT CURRENT_DATE(), - misc JSONB, - CONSTRAINT price_check CHECK (list_price >= min_price), - INDEX date_added_idx (date_added), - INDEX supp_id_prod_status_idx (supplier_id, product_status), - INVERTED INDEX details (misc) -); - -> SHOW INDEX FROM product_information; -~~~ - -~~~ -+---------------------+--------------------------------------+--------+-----+----------------+-----------+---------+----------+ -| Table | Name | Unique | Seq | Column | Direction | Storing | Implicit | -+---------------------+--------------------------------------+--------+-----+----------------+-----------+---------+----------+ -| product_information | primary | true | 1 | product_id | ASC | false | false | -| product_information | product_information_product_name_key | true | 1 | product_name | ASC | false | false | -| product_information | product_information_product_name_key | true | 2 | product_id | ASC | false | true | -| product_information | product_information_catalog_url_key | true | 1 | catalog_url | ASC | false | false | -| product_information | product_information_catalog_url_key | true | 2 | product_id | ASC | false | true | -| product_information | date_added_idx | false | 1 | date_added | ASC | false | false | -| product_information | date_added_idx | false | 2 | product_id | ASC | false | true | -| product_information | supp_id_prod_status_idx | false | 1 | supplier_id | ASC | false | false | -| product_information | supp_id_prod_status_idx | false | 2 | product_status | ASC | false | false | -| product_information | supp_id_prod_status_idx | false | 3 | product_id | ASC | false | true | -| product_information | details | false | 1 | misc | ASC | false | false | -| product_information | details | false | 2 | product_id | ASC | false | true | -+---------------------+--------------------------------------+--------+-----+----------------+-----------+---------+----------+ -(12 rows) -~~~ - -We also have other resources on indexes: - -- Create indexes for existing tables using [`CREATE INDEX`](create-index.html). -- [Learn more about indexes](indexes.html). - -### Create a Table with Auto-Generated Unique Row IDs - -{% include {{ page.version.version }}/faq/auto-generate-unique-ids.html %} - -### Create a Table with a Foreign Key Constraint - -[Foreign key constraints](foreign-key.html) guarantee a column uses only values that already exist in the column it references, which must be from another table. This constraint enforces referential integrity between the two tables. - -There are a [number of rules](foreign-key.html#rules-for-creating-foreign-keys) that govern foreign keys, but the two most important are: - -- Foreign key columns must be [indexed](indexes.html) when creating the table using `INDEX`, `PRIMARY KEY`, or `UNIQUE`. - -- Referenced columns must contain only unique values. This means the `REFERENCES` clause must use exactly the same columns as a [Primary Key](primary-key.html) or [Unique](unique.html) constraint. 
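-
-For example, the following sketch (using hypothetical `parts` and `shipments` tables) satisfies both rules: the `REFERENCES` clause matches a Unique constraint column-for-column, and the referencing columns are explicitly indexed:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE parts (
-    supplier_id INT NOT NULL,
-    part_no INT NOT NULL,
-    description STRING,
-    UNIQUE (supplier_id, part_no)
-  );
-
-> CREATE TABLE shipments (
-    id INT PRIMARY KEY,
-    supplier_id INT,
-    part_no INT,
-    quantity INT,
-    INDEX (supplier_id, part_no),
-    FOREIGN KEY (supplier_id, part_no) REFERENCES parts (supplier_id, part_no)
-  );
-~~~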
- -New in v2.0: You can include a [foreign key action](foreign-key.html#foreign-key-actions-new-in-v2-0) to specify what happens when a column referenced by a foreign key constraint is updated or deleted. The default actions are `ON UPDATE NO ACTION` and `ON DELETE NO ACTION`. - -In this example, we use `ON DELETE CASCADE` (i.e., when row referenced by a foreign key constraint is deleted, all dependent rows are also deleted). - -{% include copy-clipboard.html %} -``` sql -> CREATE TABLE customers ( - id INT PRIMARY KEY, - name STRING - ); -``` - -{% include copy-clipboard.html %} -``` sql -> CREATE TABLE orders ( - id INT PRIMARY KEY, - customer_id INT REFERENCES customers(id) ON DELETE CASCADE - ); -``` - -{% include copy-clipboard.html %} -``` sql -> SHOW CREATE TABLE orders; -``` -``` -+--------+---------------------------------------------------------------------------------------------------------------------+ -| Table | CreateTable | -+--------+---------------------------------------------------------------------------------------------------------------------+ -| orders | CREATE TABLE orders ( | -| | id INT NOT NULL, | -| | customer_id INT NULL, | -| | CONSTRAINT "primary" PRIMARY KEY (id ASC), | -| | CONSTRAINT fk_customer_id_ref_customers FOREIGN KEY (customer_id) REFERENCES customers (id) ON DELETE CASCADE, | -| | INDEX orders_auto_index_fk_customer_id_ref_customers (customer_id ASC), | -| | FAMILY "primary" (id, customer_id) | -| | ) | -+--------+---------------------------------------------------------------------------------------------------------------------+ -``` - -{% include copy-clipboard.html %} -``` sql -> INSERT INTO customers VALUES (1, 'Lauren'); -``` - -{% include copy-clipboard.html %} -``` sql -> INSERT INTO orders VALUES (1,1); -``` - -{% include copy-clipboard.html %} -``` sql -> DELETE FROM customers WHERE id = 1; -``` - -{% include copy-clipboard.html %} -``` sql -> SELECT * FROM orders; -``` -``` -+----+-------------+ -| id | customer_id | -+----+-------------+ -+----+-------------+ -``` - -### Create a Table that Mirrors Key-Value Storage - -{% include {{ page.version.version }}/faq/simulate-key-value-store.html %} - -### Create a Table from a `SELECT` Statement - -You can use the [`CREATE TABLE AS`](create-table-as.html) statement to create a new table from the results of a `SELECT` statement, for example: - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM customers WHERE state = 'NY'; -~~~ -~~~ -+----+---------+-------+ -| id | name | state | -+----+---------+-------+ -| 6 | Dorotea | NY | -| 15 | Thales | NY | -+----+---------+-------+ -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE customers_ny AS SELECT * FROM customers WHERE state = 'NY'; - -> SELECT * FROM customers_ny; -~~~ -~~~ -+----+---------+-------+ -| id | name | state | -+----+---------+-------+ -| 6 | Dorotea | NY | -| 15 | Thales | NY | -+----+---------+-------+ -~~~ - -### Create a Table with a Computed Column New in v2.0 - -{% include {{ page.version.version }}/computed-columns/simple.md %} - -### Create a Table with Partitions New in v2.0 - -{{site.data.alerts.callout_info}}The primary key required for partitioning is different from the conventional primary key. To define the primary key for partitioning, prefix the unique identifier(s) in the primary key with all columns you want to partition and subpartition the table on, in the order in which you want to nest your subpartitions. 
See Partition using Primary Key for more details.{{site.data.alerts.end}}
-
-#### Create a Table with Partitions by List
-
-In this example, we create a table and [define partitions by list](partitioning.html#partition-by-list).
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE students_by_list (
-    id INT DEFAULT unique_rowid(),
-    name STRING,
-    email STRING,
-    country STRING,
-    expected_graduation_date DATE,
-    PRIMARY KEY (country, id))
-    PARTITION BY LIST (country)
-      (PARTITION north_america VALUES IN ('CA','US'),
-      PARTITION australia VALUES IN ('AU','NZ'),
-      PARTITION DEFAULT VALUES IN (default));
-~~~
-
-#### Create a Table with Partitions by Range
-
-In this example, we create a table and [define partitions by range](partitioning.html#partition-by-range).
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE students_by_range (
-    id INT DEFAULT unique_rowid(),
-    name STRING,
-    email STRING,
-    country STRING,
-    expected_graduation_date DATE,
-    PRIMARY KEY (expected_graduation_date, id))
-    PARTITION BY RANGE (expected_graduation_date)
-      (PARTITION graduated VALUES FROM (MINVALUE) TO ('2017-08-15'),
-      PARTITION current VALUES FROM ('2017-08-15') TO (MAXVALUE));
-~~~
-
-### Show the Definition of a Table
-
-To show the definition of a table, use the [`SHOW CREATE TABLE`](show-create-table.html) statement. The `CreateTable` column in the response contains a string with embedded line breaks that, when echoed, produces formatted output.
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SHOW CREATE TABLE logoff;
-~~~
-
-~~~
-+--------+----------------------------------------------------------+
-| Table  |                       CreateTable                        |
-+--------+----------------------------------------------------------+
-| logoff | CREATE TABLE logoff (                                    |
-|        |     user_id INT NOT NULL,                                |
-|        |     user_email STRING(50) NULL,                          |
-|        |     logoff_date DATE NULL,                               |
-|        |     CONSTRAINT "primary" PRIMARY KEY (user_id),          |
-|        |     UNIQUE INDEX logoff_user_email_key (user_email),     |
-|        |     FAMILY "primary" (user_id, user_email, logoff_date)  |
-|        | )                                                        |
-+--------+----------------------------------------------------------+
-(1 row)
-~~~
-
-## See Also
-
-- [`INSERT`](insert.html)
-- [`ALTER TABLE`](alter-table.html)
-- [`DELETE`](delete.html)
-- [`DROP TABLE`](drop-table.html)
-- [`RENAME TABLE`](rename-table.html)
-- [`SHOW TABLES`](show-tables.html)
-- [`SHOW COLUMNS`](show-columns.html)
-- [Column Families](column-families.html)
-- [Table-Level Replication Zones](configure-replication-zones.html#create-a-replication-zone-for-a-table)
-- [Define Table Partitions](partitioning.html)
diff --git a/src/current/v2.0/create-user.md b/src/current/v2.0/create-user.md
deleted file mode 100644
index 4d5c5055f2c..00000000000
--- a/src/current/v2.0/create-user.md
+++ /dev/null
@@ -1,129 +0,0 @@
----
-title: CREATE USER
-summary: The CREATE USER statement creates SQL users, which let you control privileges on your databases and tables.
-toc: true
----
-
-The `CREATE USER` [statement](sql-statements.html) creates SQL users, which let you control [privileges](privileges.html) on your databases and tables.
-
-{{site.data.alerts.callout_success}}You can also use the cockroach user set command to create and manage users.{{site.data.alerts.end}}
-
-
-## Considerations
-
-- Usernames:
-    - Are case-insensitive
-    - Must start with either a letter or underscore
    - Must contain only letters, numbers, or underscores
    - Must be between 1 and 63 characters.
-- After creating users, you must [grant them privileges to databases and tables](grant.html).
-- On secure clusters, you must [create client certificates for users](create-security-certificates.html#create-the-certificate-and-key-pair-for-a-client) and users must [authenticate their access to the cluster](#user-authentication). - -## Required Privileges - -The user must have the `INSERT` and `UPDATE` [privileges](privileges.html) on the `system.users` table. - -## Synopsis - -
      {% include {{ page.version.version }}/sql/diagrams/create_user.html %}
      - -## Parameters - - - -| Parameter | Description | -|-----------|-------------| -|`user_name` | The name of the user you want to create.

      Usernames are case-insensitive; must start with either a letter or underscore; must contain only letters, numbers, or underscores; and must be between 1 and 63 characters.| -|`password` | Let the user [authenticate their access to a secure cluster](#user-authentication) using this password. Passwords must be entered as [string](string.html) values surrounded by single quotes (`'`).

      Changed in v2.0: Password creation is supported only in secure clusters for non-`root` users. The `root` user must authenticate with a client certificate and key.| - -## User Authentication - -Secure clusters require users to authenticate their access to databases and tables. CockroachDB offers two methods for this: - -- [Client certificate and key authentication](#secure-clusters-with-client-certificates), which is available to all users. To ensure the highest level of security, we recommend only using client certificate and key authentication. - -- [Password authentication](#secure-clusters-with-passwords), which is available to non-`root` users who you've created passwords for. To create a user with a password, use the `WITH PASSWORD` clause of `CREATE USER`. To add a password to an existing user, use the [`cockroach user`](create-and-manage-users.html#update-a-users-password) command. - - Users can use passwords to authenticate without supplying client certificates and keys; however, we recommend using certificate-based authentication whenever possible. - - Changed in v2.0: Password creation is supported only in secure clusters. - -## Examples - -### Create a User - -Usernames are case-insensitive; must start with either a letter or underscore; must contain only letters, numbers, or underscores; and must be between 1 and 63 characters. - -~~~ sql -> CREATE USER jpointsman; -~~~ - -After creating users, you must: - -- [Grant them privileges to databases](grant.html). -- For secure clusters, you must also [create their client certificates](create-security-certificates.html#create-the-certificate-and-key-pair-for-a-client). - -### Create a User With a Password - -~~~ sql -> CREATE USER jpointsman WITH PASSWORD 'Q7gc8rEdS'; -~~~ - -Changed in v2.0: Password creation is supported only in secure clusters for non-`root` users. The `root` user must authenticate with a client certificate and key. - -### Manage Users - -After creating users, you can manage them using the [`cockroach user`](create-and-manage-users.html) command. - -### Authenticate as a Specific User - -
      - - -
      -

      - -
-
-#### Secure Clusters with Client Certificates
-
-All users can authenticate their access to a secure cluster using [a client certificate](create-security-certificates.html#create-the-certificate-and-key-pair-for-a-client) issued to their username.
-
-~~~ shell
-$ cockroach sql --user=jpointsman
-~~~
-
-#### Secure Clusters with Passwords
-
-[Users with passwords](#create-a-user) can authenticate their access by entering their password at the command prompt instead of using their client certificate and key.
-
-If client certificate and key files matching the user cannot be found, CockroachDB falls back on password authentication.
-
-~~~ shell
-$ cockroach sql --user=jpointsman
-~~~
-
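-To add or change a password for an existing user from the command line, you can use the [`cockroach user`](create-and-manage-users.html#update-a-users-password) command mentioned above. A minimal sketch, assuming a secure cluster, valid certificates in the default directory, and the same `jpointsman` user as in the examples on this page (the `--password` flag prompts for the new value):
-
-~~~ shell
-# Prompts for a new password, then updates the existing user:
-$ cockroach user set jpointsman --password
-~~~
-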
      - -
      - -~~~ shell -$ cockroach sql --insecure --user=jpointsman -~~~ - -
-
-## See Also
-
-- [`cockroach user` command](create-and-manage-users.html)
-- [`DROP USER`](drop-user.html)
-- [`SHOW USERS`](show-users.html)
-- [`GRANT`](grant.html)
-- [`SHOW GRANTS`](show-grants.html)
-- [Create Security Certificates](create-security-certificates.html)
-- [Manage Roles](roles.html)
-- [Other SQL Statements](sql-statements.html)
diff --git a/src/current/v2.0/create-view.md b/src/current/v2.0/create-view.md
deleted file mode 100644
index dcaf796fdf8..00000000000
--- a/src/current/v2.0/create-view.md
+++ /dev/null
@@ -1,115 +0,0 @@
----
-title: CREATE VIEW
-summary: The CREATE VIEW statement creates a new view, which is a stored query represented as a virtual table.
-toc: true
----
-
-The `CREATE VIEW` statement creates a new [view](views.html), which is a stored query represented as a virtual table.
-
-
-## Required Privileges
-
-The user must have the `CREATE` [privilege](privileges.html) on the parent database and the `SELECT` privilege on any table(s) referenced by the view.
-
-## Synopsis
-
      -{% include {{ page.version.version }}/sql/diagrams/create_view.html %} -
      - -## Parameters - -Parameter | Description -----------|------------ -`view_name` | The name of the view to create, which must be unique within its database and follow these [identifier rules](keywords-and-identifiers.html#identifiers). When the parent database is not set as the default, the name must be formatted as `database.name`. -`name_list` | An optional, comma-separated list of column names for the view. If specified, these names will be used in the response instead of the columns specified in `AS select_stmt`. -`AS select_stmt` | The [selection query](selection-queries.html) to execute when the view is requested.

Note that it is not currently possible to use `*` to select all columns from a referenced table or view; instead, you must specify the columns explicitly.
-
-## Example
-
-{{site.data.alerts.callout_success}}This example highlights one key benefit of using views: simplifying complex queries. For additional benefits and examples, see Views.{{site.data.alerts.end}}
-
-Let's say you're using our [sample `startrek` database](generate-cockroachdb-resources.html#generate-example-data), which contains two tables, `episodes` and `quotes`. There's a foreign key constraint between the `episodes.id` column and the `quotes.episode` column. To count the number of famous quotes per season, you could run the following join:
-
-~~~ sql
-> SELECT startrek.episodes.season, count(*)
-  FROM startrek.quotes
-  JOIN startrek.episodes
-  ON startrek.quotes.episode = startrek.episodes.id
-  GROUP BY startrek.episodes.season;
-~~~
-
-~~~
-+--------+----------+
-| season | count(*) |
-+--------+----------+
-|      2 |       76 |
-|      3 |       46 |
-|      1 |       78 |
-+--------+----------+
-(3 rows)
-~~~
-
-Alternatively, to make it much easier to run this complex query, you could create a view:
-
-~~~ sql
-> CREATE VIEW startrek.quotes_per_season (season, quotes)
-  AS SELECT startrek.episodes.season, count(*)
-  FROM startrek.quotes
-  JOIN startrek.episodes
-  ON startrek.quotes.episode = startrek.episodes.id
-  GROUP BY startrek.episodes.season;
-~~~
-
-~~~
-CREATE VIEW
-~~~
-
-The view is then represented as a virtual table alongside other tables in the database:
-
-~~~ sql
-> SHOW TABLES FROM startrek;
-~~~
-
-~~~
-+-------------------+
-|       Table       |
-+-------------------+
-| episodes          |
-| quotes            |
-| quotes_per_season |
-+-------------------+
-(3 rows)
-~~~
-
-Executing the query is as easy as `SELECT`ing from the view, as you would from a standard table:
-
-~~~ sql
-> SELECT * FROM startrek.quotes_per_season;
-~~~
-
-~~~
-+--------+--------+
-| season | quotes |
-+--------+--------+
-|      2 |     76 |
-|      3 |     46 |
-|      1 |     78 |
-+--------+--------+
-(3 rows)
-~~~
-
-## Known Limitations
-
-{{site.data.alerts.callout_info}} The following limitations may be lifted
-in a future version of CockroachDB.{{site.data.alerts.end}}
-
-{% include {{ page.version.version }}/known-limitations/cte-with-view.md %}
-
-## See Also
-
-- [Selection Queries](selection-queries.html)
-- [Views](views.html)
-- [`SHOW CREATE VIEW`](show-create-view.html)
-- [`ALTER VIEW`](alter-view.html)
-- [`DROP VIEW`](drop-view.html)
diff --git a/src/current/v2.0/data-types.md b/src/current/v2.0/data-types.md
deleted file mode 100644
index ba12f51d8c0..00000000000
--- a/src/current/v2.0/data-types.md
+++ /dev/null
@@ -1,50 +0,0 @@
----
-title: Data Types
-summary: Learn about the data types supported by CockroachDB.
-toc: true
----
-
-## Supported Types
-
-CockroachDB supports the following data types. Click a type for more details.
-
-Type | Description | Example
------|-------------|--------
-[`ARRAY`](array.html) | A 1-dimensional, 1-indexed, homogeneous array of any non-array data type. | `{"sky","road","car"}`
-[`BOOL`](bool.html) | A Boolean value. | `true`
-[`BYTES`](bytes.html) | A string of binary characters. | `b'\141\061\142\062\143\063'`
-[`COLLATE`](collate.html) | The `COLLATE` feature lets you sort [`STRING`](string.html) values according to language- and country-specific rules, known as collations. | `'a1b2c3' COLLATE en`
-[`DATE`](date.html) | A date. | `DATE '2016-01-25'`
-[`DECIMAL`](decimal.html) | An exact, fixed-point number.
| `1.2345` -[`FLOAT`](float.html) | A 64-bit, inexact, floating-point number. | `1.2345` -[`INET`](inet.html) | New in v2.0: An IPv4 or IPv6 address. | `192.168.0.1` -[`INT`](int.html) | A signed integer, up to 64 bits. | `12345` -[`INTERVAL`](interval.html) | A span of time. | `INTERVAL '2h30m30s'` -[`JSONB`](jsonb.html) | New in v2.0: JSON (JavaScript Object Notation) data. | `'{"first_name": "Lola", "last_name": "Dog", "location": "NYC", "online" : true, "friends" : 547}'` -[`SERIAL`](serial.html) | A pseudo-type that combines an [integer type](int.html) with a [`DEFAULT` expression](default-value.html). | `148591304110702593 ` -[`STRING`](string.html) | A string of Unicode characters. | `'a1b2c3'` -[`TIME`](time.html) | New in v2.0: A time of day with no time zone. | `TIME '01:23:45.123456'` -[`TIMESTAMP`
      `TIMESTAMPTZ`](timestamp.html) | A date and time pairing in UTC. | `TIMESTAMP '2016-01-25 10:10:10'`
`TIMESTAMPTZ '2016-01-25 10:10:10-05:00'`
-[`UUID`](uuid.html) | A 128-bit hexadecimal value. | `7f9c24e8-3b12-4fef-91e0-56a2d5a246ec`
-
-## Data Type Conversions & Casts
-
-CockroachDB supports explicit type conversions using the following methods:
-
-- `<type> 'string literal'`, to convert from the literal representation of a value to a value of that type. For example:
-  `DATE '2008-12-21'`, `INT '123'`, or `BOOL 'true'`.
-
-- `<value>::<data type>`, or its equivalent longer form `CAST(<value> AS <data type>)`, which converts an arbitrary expression of one built-in type to another (this is also known as type coercion or "casting"). For example:
-  `NOW()::DECIMAL`, `VARIANCE(a+2)::INT`.
-
-  {{site.data.alerts.callout_success}}
-  To create constant values, consider using a
-  type annotation
-  instead of a cast, as it provides more predictable results.
-  {{site.data.alerts.end}}
-
-- Other [built-in conversion functions](functions-and-operators.html) when the type is not a SQL type, for example `from_ip()` and `to_ip()`, which convert IP addresses between `STRING` and `BYTES` values.
-
-
-You can find the conversions and casts supported by each data type on its respective page, under **Supported Casting & Conversion**.
diff --git a/src/current/v2.0/date.md b/src/current/v2.0/date.md
deleted file mode 100644
index ac58cb0442e..00000000000
--- a/src/current/v2.0/date.md
+++ /dev/null
@@ -1,77 +0,0 @@
----
-title: DATE
-summary: CockroachDB's DATE data type stores a year, month, and day.
-toc: true
----
-
-The `DATE` [data type](data-types.html) stores a year, month, and day.
-
-
-## Syntax
-
-A constant value of type `DATE` can be expressed using an
-[interpreted literal](sql-constants.html#interpreted-literals), or a
-string literal
-[annotated with](scalar-expressions.html#explicitly-typed-expressions)
-type `DATE` or
-[coerced to](scalar-expressions.html#explicit-type-coercions) type
-`DATE`.
-
-The string format for dates is `YYYY-MM-DD`. For example: `DATE '2016-12-23'`.
-
-CockroachDB also supports using uninterpreted
-[string literals](sql-constants.html#string-literals) in contexts
-where a `DATE` value is otherwise expected.
-
-## Size
-
-A `DATE` column supports values up to 8 bytes in width, but the total storage size is likely to be larger due to CockroachDB metadata.
-
-## Examples
-
-~~~ sql
-> CREATE TABLE dates (a DATE PRIMARY KEY, b INT);
-
-> SHOW COLUMNS FROM dates;
-~~~
-~~~
-+-------+------+-------+---------+
-| Field | Type | Null  | Default |
-+-------+------+-------+---------+
-| a     | DATE | false | NULL    |
-| b     | INT  | true  | NULL    |
-+-------+------+-------+---------+
-~~~
-~~~ sql
-> -- explicitly typed DATE literal
-> INSERT INTO dates VALUES (DATE '2016-03-26', 12345);
-
-> -- string literal implicitly typed as DATE
-> INSERT INTO dates VALUES ('2016-03-27', 12345);
-
-> SELECT * FROM dates;
-~~~
-~~~
-+---------------------------+-------+
-|             a             |   b   |
-+---------------------------+-------+
-| 2016-03-26 00:00:00+00:00 | 12345 |
-| 2016-03-27 00:00:00+00:00 | 12345 |
-+---------------------------+-------+
-~~~
-
-## Supported Casting & Conversion
-
-`DATE` values can be [cast](data-types.html#data-type-conversions-casts) to any of the following data types:
-
-Type | Details
------|--------
-`INT` | Converts to the number of days since the Unix epoch (Jan. 1, 1970). This is a CockroachDB experimental feature which may be changed without notice.
-`DECIMAL` | Converts to the number of days since the Unix epoch (Jan. 1, 1970). This is a CockroachDB experimental feature which may be changed without notice.
-`FLOAT` | Converts to the number of days since the Unix epoch (Jan. 1, 1970). This is a CockroachDB experimental feature which may be changed without notice.
-`TIMESTAMP` | Sets the time to 00:00 (midnight) in the resulting timestamp.
-`STRING` | ––
-
-## See Also
-
-[Data Types](data-types.html)
diff --git a/src/current/v2.0/debug-and-error-logs.md b/src/current/v2.0/debug-and-error-logs.md
deleted file mode 100644
index 83c5c259b16..00000000000
--- a/src/current/v2.0/debug-and-error-logs.md
+++ /dev/null
@@ -1,108 +0,0 @@
----
-title: Understand Debug & Error Logs
-summary: CockroachDB logs include details about certain node-level and range-level events, such as errors.
-toc: true
----
-
-If you need to [troubleshoot](troubleshooting-overview.html) issues with your cluster, you can check a node's logs, which include details about certain node-level and range-level events, such as errors. For example, if CockroachDB crashes, it normally logs a stack trace indicating what caused the problem.
-
-{{site.data.alerts.callout_success}}
-For detailed information about queries being executed against your system, see [SQL Audit Logging](sql-audit-logging.html).
-{{site.data.alerts.end}}
-
-
-## Details
-
-When a node processes a [`cockroach` command](cockroach-commands.html), it produces a stream of messages about the command's activities. Each message's body describes the activity, and its envelope contains metadata such as the message's severity level.
-
-As a command generates messages, CockroachDB uses the [command](#commands)'s [logging flags](#flags) and the message's [severity level](#severity-levels) to determine the appropriate [location](#output-locations) for it.
-
-Each node's logs detail only the internal activity of that node without visibility into the behavior of other nodes in the cluster. When troubleshooting, this means that you must identify the node where the problem occurred or [collect the logs from all active nodes in your cluster](debug-zip.html).
-
-### Commands
-
-All [`cockroach` commands](cockroach-commands.html) support logging. However, it's important to note:
-
-- `cockroach start` generates most messages related to the operation of your cluster.
-- Other commands do generate messages, but they're typically only interesting in troubleshooting scenarios.
-
-### Severity Levels
-
-CockroachDB identifies each message with a severity level, letting operators know if they need to intercede:
-
-1. `INFO` *(lowest severity; no action necessary)*
-2. `WARNING`
-3. `ERROR`
-4. `FATAL` *(highest severity; requires operator attention)*
-
-**Default Behavior by Severity Level**
-
-Command | `INFO` messages | `WARNING` and above messages
---------|--------|--------------------
-[`cockroach start`](start-a-node.html) | Write to file | Write to file
-[All other commands](cockroach-commands.html) | Discard | Print to `stderr`
-
-### Output Locations
-
-Based on the command's flags and the message's [severity level](#severity-levels), CockroachDB does one of the following:
-
-- [Writes the message to a file](#write-to-file)
-- [Prints it to `stderr`](#print-to-stderr)
-- [Discards the message entirely](#discard-message)
-
-#### Write to File
-
-CockroachDB can write messages to log files.
The files are named using the following format: - -~~~ -cockroach.[host].[user].[start timestamp in UTC].[process ID].log -~~~ - -For example: - -~~~ -cockroach.richards-mbp.rloveland.2018-03-15T15_24_10Z.024338.log -~~~ - -{{site.data.alerts.callout_info}}All log file timestamps are in UTC because CockroachDB is designed to be deployed in a distributed cluster. Nodes may be located in different time zones, and using UTC makes it easy to correlate log messages from those nodes no matter where they are located.{{site.data.alerts.end}} - -Property | `cockroach start` | All other commands ----------|-------------------|------------------- -Enabled by | Default1 | Explicit `--log-dir` flag -Default File Destination | `[first `[`store`](start-a-node.html#store)` dir]/logs` | *N/A* -Change File Destination | `--log-dir=[destination]` | `--log-dir=[destination]` -Default Severity Level Threshold | `INFO` | *N/A* -Change Severity Threshold | `--log-file-verbosity=[severity level]` | `--log-file-verbosity=[severity level]` -Disabled by | `--log-dir=`1 | Default - -{{site.data.alerts.callout_info}}1 If the cockroach process does not have access to on-disk storage, cockroach start does not write messages to log files; instead it prints all messages to stderr.{{site.data.alerts.end}} - -#### Print to `stderr` - -CockroachDB can print messages to `stderr`, which normally prints them to the machine's terminal but does not store them. - -Property | `cockroach start` | All other commands ----------|-------------------|------------------- -Enabled by | Explicit `--logtostderr` flag2 | Default -Default Severity Level Threshold | *N/A* | `WARNING` -Change Severity Threshold | `--logtostderr=[severity level]` | `--logtostderr=[severity level]` -Disabled by | Default2 | `--logtostderr=NONE` - -{{site.data.alerts.callout_info}}2 cockroach start does not print any messages to stderr unless the cockroach process does not have access to on-disk storage, in which case it defaults to --logtostderr=INFO and prints all messages to stderr.{{site.data.alerts.end}} - -#### Discard Message - -Messages with severity levels below the `--logtostderr` and `--log-file-verbosity` flag's values are neither written to files nor printed to `stderr`, so they are discarded. - -By default, commands besides `cockroach start` discard messages with the `INFO` [severity level](#severity-levels). - -## Flags - -{% include {{ page.version.version }}/misc/logging-flags.md %} - -The `--log-backtrace-at`, `--verbosity`, and `--v` flags are intended for internal debugging by CockroachDB contributors. - -## See Also - -- [Troubleshooting Overview](troubleshooting-overview.html) -- [Support Resources](support-resources.html) diff --git a/src/current/v2.0/debug-zip.md b/src/current/v2.0/debug-zip.md deleted file mode 100644 index 6ce7457493a..00000000000 --- a/src/current/v2.0/debug-zip.md +++ /dev/null @@ -1,98 +0,0 @@ ---- -title: Collect Debug Information from Your Cluster -summary: Learn the commands for collecting debug information from all nodes in your cluster. 
-toc: true ---- - -The `debug zip` [command](cockroach-commands.html) connects to your cluster and gathers the following information from each active node into a single file (inactive nodes are not included): - -- [Log files](debug-and-error-logs.html) -- Schema change events -- Node liveness -- Gossip data -- Stack traces -- Range lists -- A list of databases and tables -- Heap profiles (**new in v2.0**) - -{{site.data.alerts.callout_danger}}The file produced by cockroach debug zip can contain highly sensitive, unanonymized information, such as usernames, passwords, and possibly your table's data. You should share this data only with Cockroach Labs developers and only after determining the most secure method of delivery.{{site.data.alerts.end}} - - -## Details - -### Use Cases - -There are two scenarios in which `debug zip` is useful: - -- To collect all of your nodes' logs, which you can then parse to locate issues. It's important to note, though, that `debug zip` can only access logs from active nodes. See more information [on this page](#collecting-log-files). - -- If you experience severe or difficult-to-reproduce issues with your cluster, Cockroach Labs might ask you to send us your cluster's debugging information using `cockroach debug zip`. - -{{site.data.alerts.callout_danger}}The file produced by cockroach debug zip can contain highly sensitive, unanonymized information, such as usernames, passwords, and your table's data. You should share this data only with Cockroach Labs developers and only after determining the most secure method of delivery.{{site.data.alerts.end}} - -### Collecting Log Files - -When you issue the `debug zip` command, the node that receives the request connects to each other node in the cluster. Once it's connected, the node requests the content of all log files stored on the node, the location of which is determined by the `--log-dir` value when you [started the node](start-a-node.html). - -Because `debug zip` relies on CockroachDB's distributed architecture, this means that nodes not currently connected to the cluster cannot respond to the request, so their log files *are not* included. - -After receiving the log files from all of the active nodes, the requesting node aggregates the files and writes them to an archive file you specify. - -You can locate logs in the unarchived file's `debug/nodes/[node dir]/logs` directories. - -## Subcommands - -While the `cockroach debug` command has a few subcommands, the only subcommand users are expected to use is `zip` which collects all of your cluster's debug information in a single file. - -`debug`'s other subcommands are useful only to CockroachDB's developers and contributors. - -## Synopsis - -~~~ shell -# Generate a debug zip: -$ cockroach debug zip [ZIP file destination] [flags] -~~~ - -It's important to understand that the `[flags]` here are used to connect to CockroachDB nodes. This means the values you use in those flags must connect to an active node. If no nodes are live, you must [start at least one node](start-a-node.html). - -## Flags - -The `debug zip` subcommand supports the following [general-use](#general) and [logging](#logging) flags. - -### General - -Flag | Description ------|----------- -`--certs-dir` | The path to the [certificate directory](create-security-certificates.html). The directory must contain valid certificates if running in secure mode.

      **Env Variable:** `COCKROACH_CERTS_DIR`
      **Default:** `${HOME}/.cockroach-certs/` -`--host` | The server host to connect to. This can be the address of any node in the cluster.

      **Env Variable:** `COCKROACH_HOST`
      **Default:** `localhost` -`--insecure` | Run in insecure mode. If this flag is not set, the `--certs-dir` flag must point to valid certificates.

      **Env Variable:** `COCKROACH_INSECURE`
      **Default:** `false` -`--port`
      `-p` | The server port to connect to.

      **Env Variable:** `COCKROACH_PORT`
      **Default:** `26257` - -### Logging - -By default, the `debug zip` command logs errors it experiences to `stderr`. Note that these are errors executing `debug zip`; these are not errors that the logs collected by `debug zip` contain. - -If you need to troubleshoot this command's behavior, you can also change its [logging behavior](debug-and-error-logs.html). - -## Examples - -### Generate a debug zip file - -~~~ shell -# Generate the debug zip file for an insecure cluster: -$ cockroach debug zip ./cockroach-data/logs/debug.zip --insecure - -# Generate the debug zip file for a secure cluster: -$ cockroach debug zip ./cockroach-data/logs/debug.zip - -# Generate the debug zip file from a remote machine: -$ cockroach debug zip ./crdb-debug.zip --host=200.100.50.25 -~~~ - -{{site.data.alerts.callout_info}}Secure examples assume you have the appropriate certificates in the default certificate directory, ${HOME}/.cockroach-certs/.{{site.data.alerts.end}} - -## See Also - -- [File an Issue](file-an-issue.html) -- [Other Cockroach Commands](cockroach-commands.html) -- [Troubleshooting Overview](troubleshooting-overview.html) diff --git a/src/current/v2.0/decimal.md b/src/current/v2.0/decimal.md deleted file mode 100644 index ec6a5d0eb45..00000000000 --- a/src/current/v2.0/decimal.md +++ /dev/null @@ -1,103 +0,0 @@ ---- -title: DECIMAL -summary: The DECIMAL data type stores exact, fixed-point numbers. -toc: true ---- - -The `DECIMAL` [data type](data-types.html) stores exact, fixed-point numbers. This type is used when it is important to preserve exact precision, for example, with monetary data. - - -## Aliases - -In CockroachDB, the following are aliases for `DECIMAL`: - -- `DEC` -- `NUMERIC` - -## Precision and Scale - -To limit a decimal column, use `DECIMAL(precision, scale)`, where `precision` is the **maximum** count of digits both to the left and right of the decimal point and `scale` is the **exact** count of digits to the right of the decimal point. The `precision` must not be smaller than the `scale`. Also note that using `DECIMAL(precision)` is equivalent to `DECIMAL(precision, 0)`. - -When inserting a decimal value: - -- If digits to the right of the decimal point exceed the column's `scale`, CockroachDB rounds to the scale. -- If digits to the right of the decimal point are fewer than the column's `scale`, CockroachDB pads to the scale with `0`s. -- If digits to the left and right of the decimal point exceed the column's `precision`, CockroachDB gives an error. -- If the column's `precision` and `scale` are identical, the inserted value must round to less than 1. - -## Syntax - -A constant value of type `DECIMAL` can be entered as a [numeric literal](sql-constants.html#numeric-literals). -For example: `1.414` or `-1234`. - -The special IEEE754 values for positive infinity, negative infinity -and [NaN (Not-a-Number)](https://en.wikipedia.org/wiki/NaN) cannot be -entered using numeric literals directly and must be converted using an -[interpreted literal](sql-constants.html#interpreted-literals) or an -[explicit conversion](scalar-expressions.html#explicit-type-coercions) -from a string literal instead. 
-
-The following values are recognized:
-
-| Syntax                                  | Value                                                    |
-|-----------------------------------------|----------------------------------------------------------|
-| `inf`, `infinity`, `+inf`, `+infinity`  | +∞                                                       |
-| `-inf`, `-infinity`                     | -∞                                                       |
-| `nan`                                   | [NaN (Not-a-Number)](https://en.wikipedia.org/wiki/NaN)  |
-
-For example:
-
-- `DECIMAL '+Inf'`
-- `'-Inf'::DECIMAL`
-- `CAST('NaN' AS DECIMAL)`
-
-## Size
-
-The size of a `DECIMAL` value is variable, starting at 9 bytes. It's recommended to keep values under 64 kilobytes to ensure performance. Above that threshold, [write amplification](https://en.wikipedia.org/wiki/Write_amplification) and other considerations may cause significant performance degradation.
-
-## Examples
-
-~~~ sql
-> CREATE TABLE decimals (a DECIMAL PRIMARY KEY, b DECIMAL(10,5), c NUMERIC);
-
-> SHOW COLUMNS FROM decimals;
-~~~
-~~~
-+-------+---------------+-------+---------+
-| Field |     Type      | Null  | Default |
-+-------+---------------+-------+---------+
-| a     | DECIMAL       | false | NULL    |
-| b     | DECIMAL(10,5) | true  | NULL    |
-| c     | DECIMAL       | true  | NULL    |
-+-------+---------------+-------+---------+
-~~~
-~~~ sql
-> INSERT INTO decimals VALUES (1.01234567890123456789, 1.01234567890123456789, 1.01234567890123456789);
-
-> SELECT * FROM decimals;
-~~~
-~~~
-+------------------------+---------+------------------------+
-|           a            |    b    |           c            |
-+------------------------+---------+------------------------+
-| 1.01234567890123456789 | 1.01235 | 1.01234567890123456789 |
-+------------------------+---------+------------------------+
-# The value in "a" matches what was inserted exactly.
-# The value in "b" has been rounded to the column's scale.
-# The value in "c" is handled like "a" because NUMERIC is an alias.
-~~~
-
-## Supported Casting & Conversion
-
-`DECIMAL` values can be [cast](data-types.html#data-type-conversions-casts) to any of the following data types:
-
-Type | Details
------|--------
-`INT` | Truncates decimal precision
-`FLOAT` | Loses precision and may round up to +/- infinity if the value is too large in magnitude, or to +/-0 if the value is too small in magnitude
-`BOOL` | **0** converts to `false`; all other values convert to `true`
-`STRING` | ––
-
-## See Also
-
-[Data Types](data-types.html)
diff --git a/src/current/v2.0/default-value.md b/src/current/v2.0/default-value.md
deleted file mode 100644
index c37dd778797..00000000000
--- a/src/current/v2.0/default-value.md
+++ /dev/null
@@ -1,71 +0,0 @@
----
-title: Default Value Constraint
-summary: The Default Value constraint specifies a value to populate a column with if none is provided.
-toc: true
----
-
-The Default Value [constraint](constraints.html) specifies a value to write into the constrained column if one is not defined in an `INSERT` statement. The value may be either a hard-coded literal or an expression that is evaluated at the time the row is created.
-
-
-## Details
-
-- The [data type](data-types.html) of the Default Value must be the same as the data type of the column.
-- The Default Value constraint only applies if the column does not have a value specified in the [`INSERT`](insert.html) statement. You can still insert a *NULL* into an optional (nullable) column by explicitly inserting *NULL*. For example, `INSERT INTO foo VALUES (1, NULL);`.
-
-## Syntax
-
-You can only apply the Default Value constraint to individual columns.
-
-{{site.data.alerts.callout_info}}You can also add the Default Value constraint to an existing table through ALTER COLUMN. {{site.data.alerts.end}}
-
      -{% include {{ page.version.version }}/sql/diagrams/default_value_column_level.html %} -
      - -| Parameter | Description | -|-----------|-------------| -| `table_name` | The name of the table you're creating. | -| `column_name` | The name of the constrained column. | -| `column_type` | The constrained column's [data type](data-types.html). | -| `default_value` | The value you want to insert by default, which must evaluate to the same [data type](data-types.html) as the `column_type`.| -| `column_constraints` | Any other column-level [constraints](constraints.html) you want to apply to this column. | -| `column_def` | Definitions for any other columns in the table. | -| `table_constraints` | Any table-level [constraints](constraints.html) you want to apply. | - -## Usage Example - -~~~ sql -> CREATE TABLE inventories ( - product_id INT, - warehouse_id INT, - quantity_on_hand INT DEFAULT 100, - PRIMARY KEY (product_id, warehouse_id) - ); - -> INSERT INTO inventories (product_id, warehouse_id) VALUES (1,20); - -> INSERT INTO inventories (product_id, warehouse_id, quantity_on_hand) VALUES (2,30, NULL); - -> SELECT * FROM inventories; -~~~ -~~~ -+------------+--------------+------------------+ -| product_id | warehouse_id | quantity_on_hand | -+------------+--------------+------------------+ -| 1 | 20 | 100 | -| 2 | 30 | NULL | -+------------+--------------+------------------+ -~~~ - -If the Default Value constraint is not specified and an explicit value is not given, a value of *NULL* is assigned to the column. - -## See Also - -- [Constraints](constraints.html) -- [`ALTER COLUMN`](alter-column.html) -- [Check constraint](check.html) -- [Foreign Key constraint](foreign-key.html) -- [Not Null constraint](not-null.html) -- [Primary Key constraint](primary-key.html) -- [Unique constraint](unique.html) -- [`SHOW CONSTRAINTS`](show-constraints.html) diff --git a/src/current/v2.0/delete.md b/src/current/v2.0/delete.md deleted file mode 100644 index cc9c4346535..00000000000 --- a/src/current/v2.0/delete.md +++ /dev/null @@ -1,198 +0,0 @@ ---- -title: DELETE -summary: The DELETE statement deletes one or more rows from a table. -toc: true ---- - -The `DELETE` [statement](sql-statements.html) deletes rows from a table. - -{{site.data.alerts.callout_danger}}If you delete a row that is referenced by a foreign key constraint and has an ON DELETE action, all of the dependent rows will also be deleted or updated.{{site.data.alerts.end}} - -{{site.data.alerts.callout_info}}To delete columns, see DROP COLUMN.{{site.data.alerts.end}} - - -## Required Privileges - -The user must have the `DELETE` and `SELECT` [privileges](privileges.html) on the table. - -## Synopsis - -
      -{% include {{ page.version.version }}/sql/diagrams/delete.html %} -
      - -
-
-## Parameters
-
-
-
-| Parameter | Description |
-|-----------|-------------|
-| `common_table_expr` | See [Common Table Expressions](common-table-expressions.html).
-| `table_name` | The name of the table that contains the rows you want to delete.
-| `AS table_alias_name` | An alias for the table name. When an alias is provided, it completely hides the actual table name.
-|`WHERE a_expr`| `a_expr` must be an expression that returns Boolean values using columns (e.g., `<column> = <value>`). Delete rows that return `TRUE`.

      __Without a `WHERE` clause in your statement, `DELETE` removes all rows from the table.__| -| `sort_clause` | An `ORDER BY` clause. See [Ordering Query Results](query-order.html) for more details. -| `limit_clause` | A `LIMIT` clause. See [Limiting Query Results](limit-offset.html) for more details. -| `RETURNING target_list` | Return values based on rows deleted, where `target_list` can be specific column names from the table, `*` for all columns, or computations using [scalar expressions](scalar-expressions.html).

To return nothing in the response, not even the number of rows deleted, use `RETURNING NOTHING`. |
-
-## Success Responses
-
-Successful `DELETE` statements return one of the following:
-
-| Response | Description |
-|-----------|-------------|
-|`DELETE` _`int`_ | _int_ rows were deleted.

      `DELETE` statements that do not delete any rows respond with `DELETE 0`. When `RETURNING NOTHING` is used, this information is not included in the response. | -|Retrieved table | Including the `RETURNING` clause retrieves the deleted rows, using the columns identified by the clause's parameters.

      [See an example.](#return-deleted-rows)| - -## Disk Space Usage After Deletes - -Deleting a row does not immediately free up the disk space. This is -due to the fact that CockroachDB retains [the ability to query tables -historically](https://www.cockroachlabs.com/blog/time-travel-queries-select-witty_subtitle-the_future/). - -If disk usage is a concern, there are two potential solutions. The -first is to [reduce the time-to-live](configure-replication-zones.html) -(TTL) for the zone, which will cause garbage collection to clean up -deleted rows more frequently. Second, unlike `DELETE`, -[truncate](truncate.html) immediately deletes the entire table, so -consider if you can use `TRUNCATE` instead. - -## Select Performance on Deleted Rows - -Queries that scan across tables that have lots of deleted rows will -have to scan over deletions that have not yet been garbage -collected. Certain database usage patterns that frequently scan over -and delete lots of rows will want to reduce the -[time-to-live](configure-replication-zones.html) values to clean up -deleted rows more frequently. - -## Examples - -### Delete All Rows - -You can delete all rows from a table by not including a `WHERE` clause in your `DELETE` statement. - -~~~ sql -> DELETE FROM account_details; -~~~ -~~~ -DELETE 7 -~~~ - -This is roughly equivalent to [`TRUNCATE`](truncate.html). - -~~~ -> TRUNCATE account_details; -~~~ -~~~ -TRUNCATE -~~~ - -As you can see, one difference is that `TRUNCATE` does not return the number of rows it deleted. - -{{site.data.alerts.callout_info}}The TRUNCATE statement removes all rows from a table by dropping the table and recreating a new table with the same name. This is much more performant than deleting each of the rows. {{site.data.alerts.end}} - -### Delete Specific Rows - -When deleting specific rows from a table, the most important decision you make is which columns to use in your `WHERE` clause. When making that choice, consider the potential impact of using columns with the [Primary Key](primary-key.html)/[Unique](unique.html) constraints (both of which enforce uniqueness) versus those that are not unique. - -#### Delete Rows Using Primary Key/Unique Columns - -Using columns with the [Primary Key](primary-key.html) or [Unique](unique.html) constraints to delete rows ensures your statement is unambiguous—no two rows contain the same column value, so it's less likely to delete data unintentionally. - -In this example, `account_id` is our primary key and we want to delete the row where it equals 1. Because we're positive no other rows have that value in the `account_id` column, there's no risk of accidentally removing another row. - -~~~ sql -> DELETE FROM account_details WHERE account_id = 1 RETURNING *; -~~~ -~~~ -+------------+---------+--------------+ -| account_id | balance | account_type | -+------------+---------+--------------+ -| 1 | 32000 | Savings | -+------------+---------+--------------+ -~~~ - -#### Delete Rows Using Non-Unique Columns - -Deleting rows using non-unique columns removes _every_ row that returns `TRUE` for the `WHERE` clause's `a_expr`. This can easily result in deleting data you didn't intend to. 
-
-~~~ sql
-> DELETE FROM account_details WHERE balance = 30000 RETURNING *;
-~~~
-~~~
-+------------+---------+--------------+
-| account_id | balance | account_type |
-+------------+---------+--------------+
-|          2 |   30000 | Checking     |
-|          3 |   30000 | Savings      |
-+------------+---------+--------------+
-~~~
-
-The example statement deleted two rows, which might be unexpected.
-
-### Return Deleted Rows
-
-To see which rows your statement deleted, include the `RETURNING` clause to retrieve them using the columns you specify.
-
-#### Use All Columns
-
-By specifying `*`, you retrieve all columns of the deleted rows.
-
-~~~ sql
-> DELETE FROM account_details WHERE balance < 23000 RETURNING *;
-~~~
-~~~
-+------------+---------+--------------+
-| account_id | balance | account_type |
-+------------+---------+--------------+
-|          4 |   22000 | Savings      |
-+------------+---------+--------------+
-~~~
-
-#### Use Specific Columns
-
-To retrieve specific columns, name them in the `RETURNING` clause.
-
-~~~ sql
-> DELETE FROM account_details WHERE account_id = 5 RETURNING account_id, account_type;
-~~~
-~~~
-+------------+--------------+
-| account_id | account_type |
-+------------+--------------+
-|          5 | Checking     |
-+------------+--------------+
-~~~
-
-#### Change Column Labels
-
-When `RETURNING` specific columns, you can change their labels using `AS`.
-
-~~~ sql
-> DELETE FROM account_details WHERE balance < 22500 RETURNING account_id, balance AS final_balance;
-~~~
-~~~
-+------------+---------------+
-| account_id | final_balance |
-+------------+---------------+
-|          6 |         23500 |
-+------------+---------------+
-~~~
-
-## See Also
-
-- [`INSERT`](insert.html)
-- [`UPDATE`](update.html)
-- [`UPSERT`](upsert.html)
-- [`TRUNCATE`](truncate.html)
-- [`ALTER TABLE`](alter-table.html)
-- [`DROP TABLE`](drop-table.html)
-- [`DROP DATABASE`](drop-database.html)
-- [Other SQL Statements](sql-statements.html)
-- [Limiting Query Results](limit-offset.html)
diff --git a/src/current/v2.0/demo-automatic-cloud-migration.md b/src/current/v2.0/demo-automatic-cloud-migration.md
deleted file mode 100644
index c3050064311..00000000000
--- a/src/current/v2.0/demo-automatic-cloud-migration.md
+++ /dev/null
@@ -1,253 +0,0 @@
----
-title: Cross-Cloud Migration
-summary: Use a local cluster to simulate migrating from one cloud platform to another.
-toc: true
----
-
-CockroachDB's flexible [replication controls](configure-replication-zones.html) make it trivially easy to run a single CockroachDB cluster across cloud platforms and to migrate data from one cloud to another without any service interruption. This page walks you through a local simulation of the process.
-
-## Watch a Live Demo
-
-{% include_cached youtube.html video_id="cCJkgZy6s2Q" %}
-
-## Step 1. Install prerequisites
-
-In this tutorial, you'll use CockroachDB, the HAProxy load balancer, and CockroachDB's version of the YCSB load generator, which requires Go. Before you begin, make sure these applications are installed:
-
-- Install the latest version of [CockroachDB](install-cockroachdb.html).
-- Install [HAProxy](http://www.haproxy.org/). If you're on a Mac and using Homebrew, use `brew install haproxy`.
-- Install [Go](https://golang.org/doc/install) version 1.9 or higher. If you're on a Mac and using Homebrew, use `brew install go`. You can check your local version by running `go version`.
-- Install the [CockroachDB version of YCSB](https://github.com/cockroachdb/loadgen/tree/master/ycsb): `go get github.com/cockroachdb/loadgen/ycsb`
-
-Also, to keep track of the data files and logs for your cluster, you may want to create a new directory (e.g., `mkdir cloud-migration`) and start all your nodes in that directory.
-
-## Step 2. Start a 3-node cluster on "cloud 1"
-
-If you've already [started a local cluster](start-a-local-cluster.html), the commands for starting nodes should be familiar to you. The new flag to note is [`--locality`](configure-replication-zones.html#descriptive-attributes-assigned-to-nodes), which accepts key-value pairs that describe the topography of a node. In this case, you're using the flag to specify that the first 3 nodes are running on cloud 1.
-
-In a new terminal, start node 1 on cloud 1:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach start \
---insecure \
---locality=cloud=1 \
---store=cloud1node1 \
---host=localhost \
---cache=100MB \
---join=localhost:26257,localhost:26258,localhost:26259
-~~~
-
-In a new terminal, start node 2 on cloud 1:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach start \
---insecure \
---locality=cloud=1 \
---store=cloud1node2 \
---host=localhost \
---port=26258 \
---http-port=8081 \
---cache=100MB \
---join=localhost:26257,localhost:26258,localhost:26259
-~~~
-
-In a new terminal, start node 3 on cloud 1:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach start \
---insecure \
---locality=cloud=1 \
---store=cloud1node3 \
---host=localhost \
---port=26259 \
---http-port=8082 \
---cache=100MB \
---join=localhost:26257,localhost:26258,localhost:26259
-~~~
-
-## Step 3. Initialize the cluster
-
-In a new terminal, use the [`cockroach init`](initialize-a-cluster.html) command to perform a one-time initialization of the cluster:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach init \
---insecure \
---host=localhost \
---port=26257
-~~~
-
-## Step 4. Set up HAProxy load balancing
-
-You're now running 3 nodes in a simulated cloud. Each of these nodes is an equally suitable SQL gateway to your cluster, but to ensure an even balancing of client requests across these nodes, you can use a TCP load balancer. Let's use the open-source [HAProxy](http://www.haproxy.org/) load balancer that you installed earlier.
-
-In a new terminal, run the [`cockroach gen haproxy`](generate-cockroachdb-resources.html) command, specifying the port of any node:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach gen haproxy \
---insecure \
---host=localhost \
---port=26257
-~~~
-
-This command generates an `haproxy.cfg` file automatically configured to work with the 3 nodes of your running cluster. In the file, change `bind :26257` to `bind :26000`. This changes the port on which HAProxy accepts requests to a port that is not already in use by a node and that will not be used by the nodes you'll add later.
-
-~~~
-global
-    maxconn 4096
-
-defaults
-    mode                tcp
-    timeout connect     10s
-    timeout client      1m
-    timeout server      1m
-
-listen psql
-    bind :26000
-    mode tcp
-    balance roundrobin
-    server cockroach1 localhost:26257
-    server cockroach2 localhost:26258
-    server cockroach3 localhost:26259
-~~~
-
-Start HAProxy, with the `-f` flag pointing to the `haproxy.cfg` file:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ haproxy -f haproxy.cfg
-~~~
-
-## Step 5. Start a load generator
-
-Now that you have a load balancer running in front of your cluster, let's use the YCSB load generator that you installed earlier to simulate multiple client connections, each performing mixed read/write workloads.
-
-In a new terminal, start `ycsb`, pointing it at HAProxy's port:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ $HOME/go/bin/ycsb -duration 20m -tolerate-errors -concurrency 10 -max-rate 1000 'postgresql://root@localhost:26000?sslmode=disable'
-~~~
-
-This command initiates 10 concurrent client workloads for 20 minutes, but limits the total load to 1000 operations per second (since you're running everything on a single machine).
-
-## Step 6. Watch data balance across all 3 nodes
-
-Now open the Admin UI at `http://localhost:8080` and click **Metrics** in the left-hand navigation bar. The **Overview** dashboard is displayed. Hover over the **SQL Queries** graph at the top. After a minute or so, you'll see that the load generator is executing approximately 95% reads and 5% writes across all nodes:
-
-CockroachDB Admin UI
-
-Scroll down a bit and hover over the **Replicas per Node** graph. Because CockroachDB replicates each piece of data 3 times by default, the replica count on each of your 3 nodes should be identical:
-
-CockroachDB Admin UI
-
-## Step 7. Add 3 nodes on "cloud 2"
-
-At this point, you're running three nodes on cloud 1. But what if you'd like to start experimenting with resources provided by another cloud vendor? Let's try that by adding three more nodes to a new cloud platform. Again, the flag to note is [`--locality`](configure-replication-zones.html#descriptive-attributes-assigned-to-nodes), which you're using to specify that these next 3 nodes are running on cloud 2.
-
-In a new terminal, start node 4 on cloud 2:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach start \
---insecure \
---locality=cloud=2 \
---store=cloud2node4 \
---host=localhost \
---port=26260 \
---http-port=8083 \
---cache=100MB \
---join=localhost:26257,localhost:26258,localhost:26259
-~~~
-
-In a new terminal, start node 5 on cloud 2:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach start \
---insecure \
---locality=cloud=2 \
---store=cloud2node5 \
---host=localhost \
---port=26261 \
---http-port=8084 \
---cache=100MB \
---join=localhost:26257,localhost:26258,localhost:26259
-~~~
-
-In a new terminal, start node 6 on cloud 2:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach start \
---insecure \
---locality=cloud=2 \
---store=cloud2node6 \
---host=localhost \
---port=26262 \
---http-port=8085 \
---cache=100MB \
---join=localhost:26257,localhost:26258,localhost:26259
-~~~
-
-## Step 8. Watch data balance across all 6 nodes
-
-Back on the **Overview** dashboard in Admin UI, hover over the **Replicas per Node** graph again. Because you used [`--locality`](configure-replication-zones.html#descriptive-attributes-assigned-to-nodes) to specify that nodes are running on 2 clouds, you'll see an approximately even number of replicas on each node, indicating that CockroachDB has automatically rebalanced replicas across both simulated clouds:
-
-CockroachDB Admin UI
-
-Note that it takes a few minutes for the Admin UI to show accurate per-node replica counts on hover. This is why the new nodes in the screenshot above show 0 replicas. However, the graph lines are accurate, and you can click **View node list** in the **Summary** area for accurate per-node replica counts as well.
-
-## Step 9. 
Migrate all data to "cloud 2" - -So your cluster is replicating across two simulated clouds. But let's say that after experimentation, you're happy with cloud vendor 2, and you decide that you'd like to move everything there. Can you do that without interruption to your live client traffic? Yes, and it's as simple as running a single command to add a [hard constraint](configure-replication-zones.html#replication-constraints) that all replicas must be on nodes with `--locality=cloud=2`. - -In a new terminal, edit the default replication zone: - -{% include copy-clipboard.html %} -~~~ shell -$ echo 'constraints: [+cloud=2]' | cockroach zone set .default --insecure --host=localhost -f - -~~~ - -## Step 10. Verify the data migration - -Back on the **Overview** dashboard in the Admin UI, hover over the **Replicas per Node** graph again. Very soon, you'll see the replica count double on nodes 4, 5, and 6 and drop to 0 on nodes 1, 2, and 3: - -CockroachDB Admin UI - -This indicates that all data has been migrated from cloud 1 to cloud 2. In a real cloud migration scenario, at this point you would update the load balancer to point to the nodes on cloud 2 and then stop the nodes on cloud 1. But for the purpose of this local simulation, there's no need to do that. - -## Step 11. Stop the cluster - -Once you're done with your cluster, stop YCSB by switching into its terminal and pressing **CTRL-C**. Then do the same for HAProxy and each CockroachDB node. - -{{site.data.alerts.callout_success}}For the last node, the shutdown process will take longer (about a minute) and will eventually force stop the node. This is because, with only 1 node still online, a majority of replicas are no longer available (2 of 3), and so the cluster is not operational. To speed up the process, press CTRL-C a second time.{{site.data.alerts.end}} - -If you do not plan to restart the cluster, you may want to remove the nodes' data stores and the HAProxy config file: - -{% include copy-clipboard.html %} -~~~ shell -$ rm -rf cloud1node1 cloud1node2 cloud1node3 cloud2node4 cloud2node5 cloud2node6 haproxy.cfg -~~~ - -## What's Next? - -Use a local cluster to explore these other core CockroachDB features: - -- [Data Replication](demo-data-replication.html) -- [Fault Tolerance & Recovery](demo-fault-tolerance-and-recovery.html) -- [Automatic Rebalancing](demo-automatic-rebalancing.html) -- [Follow-the-Workload](demo-follow-the-workload.html) -- [Orchestration](orchestrate-a-local-cluster-with-kubernetes-insecure.html) -- [JSON Support](demo-json-support.html) - -You may also want to learn other ways to control the location and number of replicas in a cluster: - -- [Even Replication Across Datacenters](configure-replication-zones.html#even-replication-across-datacenters) -- [Multiple Applications Writing to Different Databases](configure-replication-zones.html#multiple-applications-writing-to-different-databases) -- [Stricter Replication for a Specific Table](configure-replication-zones.html#stricter-replication-for-a-specific-table) -- [Tweaking the Replication of System Ranges](configure-replication-zones.html#tweaking-the-replication-of-system-ranges) diff --git a/src/current/v2.0/demo-automatic-rebalancing.md b/src/current/v2.0/demo-automatic-rebalancing.md deleted file mode 100644 index 254f2709b0c..00000000000 --- a/src/current/v2.0/demo-automatic-rebalancing.md +++ /dev/null @@ -1,209 +0,0 @@ ---- -title: Automatic Rebalancing -summary: Use a local cluster to explore how CockroachDB automatically rebalances data as you scale. 
-toc: true
----
-
-This page walks you through a simple demonstration of how CockroachDB automatically rebalances data as you scale. Starting with a 3-node local cluster, you'll lower the maximum size for a single range, the unit of data that is replicated in CockroachDB. You'll then download and run the `block_writer` example program, which continuously inserts data into your cluster, and watch the replica count quickly increase as ranges split. You'll then add 2 more nodes and watch how CockroachDB automatically rebalances replicas to efficiently use all available capacity.
-
-
-## Before You Begin
-
-In this tutorial, you'll use an example Go program to quickly insert data into a CockroachDB cluster. To run the example program, you must have a [Go environment](http://golang.org/doc/code.html) with a 64-bit version of Go 1.7.1.
-
-- You can download the [Go binary](http://golang.org/doc/code.html) directly from the official site.
-- Be sure to set the `$GOPATH` and `$PATH` environment variables as described [here](https://golang.org/doc/code.html#GOPATH).
-
-## Step 1. Start a 3-node cluster
-
-Use the [`cockroach start`](start-a-node.html) command to start 3 nodes:
-
-{% include copy-clipboard.html %}
-~~~ shell
-# In a new terminal, start node 1:
-$ cockroach start --insecure \
---store=scale-node1 \
---host=localhost \
---port=26257 \
---http-port=8080 \
---join=localhost:26257,localhost:26258,localhost:26259
-~~~
-
-{% include copy-clipboard.html %}
-~~~ shell
-# In a new terminal, start node 2:
-$ cockroach start --insecure \
---store=scale-node2 \
---host=localhost \
---port=26258 \
---http-port=8081 \
---join=localhost:26257,localhost:26258,localhost:26259
-~~~
-
-{% include copy-clipboard.html %}
-~~~ shell
-# In a new terminal, start node 3:
-$ cockroach start --insecure \
---store=scale-node3 \
---host=localhost \
---port=26259 \
---http-port=8082 \
---join=localhost:26257,localhost:26258,localhost:26259
-~~~
-
-## Step 2. Initialize the cluster
-
-In a new terminal, use the [`cockroach init`](initialize-a-cluster.html) command to perform a one-time initialization of the cluster:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach init \
---insecure \
---host=localhost \
---port=26257
-~~~
-
-## Step 3. Verify that the cluster is live
-
-In a new terminal, connect the [built-in SQL shell](use-the-built-in-sql-client.html) to any node to verify that the cluster is live:

-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql --insecure --port=26257
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SHOW DATABASES;
-~~~
-
-~~~
-+--------------------+
-|      Database      |
-+--------------------+
-| system             |
-+--------------------+
-(1 row)
-~~~
-
-Exit the SQL shell:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> \q
-~~~
-
-## Step 4. Lower the max range size
-
-In CockroachDB, you use [replication zones](configure-replication-zones.html) to control the number and location of replicas. Initially, there is a single default replication zone for the entire cluster that is set to copy each range of data 3 times. This default replication factor is fine for this demo.
-
-However, the default replication zone also defines the size at which a single range of data splits into two ranges.
Since you want to create many ranges quickly and then see how CockroachDB automatically rebalances them, reduce the max range size from the default 67108864 bytes (64MB) to 262144 bytes (256KB) so that ranges split more quickly:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ echo -e "range_min_bytes: 1\nrange_max_bytes: 262144" | cockroach zone set .default --insecure -f -
-~~~
-
-~~~
-range_min_bytes: 1
-range_max_bytes: 262144
-gc:
-  ttlseconds: 86400
-num_replicas: 3
-constraints: []
-~~~
-
-## Step 5. Download and run the `block_writer` program
-
-CockroachDB provides a number of [example programs in Go](https://github.com/cockroachdb/examples-go) for simulating client workloads. The program you'll use for this demonstration is called [`block_writer`](https://github.com/cockroachdb/examples-go/tree/master/block_writer). It will simulate multiple clients inserting data into the cluster.
-
-Download and install the program:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ go get github.com/cockroachdb/examples-go/block_writer
-~~~
-
-Then run the program for 1 minute, long enough to generate plenty of ranges:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ block_writer -duration 1m
-~~~
-
-Once it's running, `block_writer` will output the number of rows written per second:
-
-~~~
- 1s: 776.7/sec 776.7/sec
- 2s: 696.3/sec 736.7/sec
- 3s: 659.9/sec 711.1/sec
- 4s: 557.4/sec 672.6/sec
- 5s: 485.0/sec 635.1/sec
- 6s: 563.5/sec 623.2/sec
- 7s: 725.2/sec 637.7/sec
- 8s: 779.2/sec 655.4/sec
- 9s: 859.0/sec 678.0/sec
-10s: 960.4/sec 706.1/sec
-~~~
-
-## Step 6. Watch the replica count increase
-
-Open the Admin UI at `http://localhost:8080` and you'll see the bytes, replica count, and other metrics increase as the `block_writer` program inserts data.
-
-CockroachDB Admin UI
-
-## Step 7. Add 2 more nodes
-
-Adding capacity is as simple as starting more nodes and joining them to the running cluster:
-
-{% include copy-clipboard.html %}
-~~~ shell
-# In a new terminal, start node 4:
-$ cockroach start --insecure \
---store=scale-node4 \
---host=localhost \
---port=26260 \
---http-port=8083 \
---join=localhost:26257,localhost:26258,localhost:26259
-~~~
-
-{% include copy-clipboard.html %}
-~~~ shell
-# In a new terminal, start node 5:
-$ cockroach start --insecure \
---store=scale-node5 \
---host=localhost \
---port=26261 \
---http-port=8084 \
---join=localhost:26257,localhost:26258,localhost:26259
-~~~
-
-## Step 8. Watch data rebalance across all 5 nodes
-
-Back in the Admin UI, you'll now see 5 nodes listed. At first, the bytes and replica count will be lower for nodes 4 and 5. Very soon, however, you'll see those metrics even out across all nodes, indicating that data has been automatically rebalanced to utilize the additional capacity of the new nodes.
-
-CockroachDB Admin UI
-
-## Step 9. Stop the cluster
-
-Once you're done with your test cluster, stop each node by switching to its terminal and pressing **CTRL-C**.
-
-{{site.data.alerts.callout_success}}For the last node, the shutdown process will take longer (about a minute) and will eventually force stop the node. This is because, with only 1 node still online, a majority of replicas are no longer available (2 of 3), and so the cluster is not operational. 
To speed up the process, press CTRL-C a second time.{{site.data.alerts.end}} - -If you do not plan to restart the cluster, you may want to remove the nodes' data stores: - -{% include copy-clipboard.html %} -~~~ shell -$ rm -rf scale-node1 scale-node2 scale-node3 scale-node4 scale-node5 -~~~ - -## What's Next? - -Use a local cluster to explore these other core CockroachDB features: - -- [Data Replication](demo-data-replication.html) -- [Fault Tolerance & Recovery](demo-fault-tolerance-and-recovery.html) -- [Cross-Cloud Migration](demo-automatic-cloud-migration.html) -- [Follow-the-Workload](demo-follow-the-workload.html) -- [Orchestration](orchestrate-a-local-cluster-with-kubernetes-insecure.html) -- [JSON Support](demo-json-support.html) diff --git a/src/current/v2.0/demo-data-replication.md b/src/current/v2.0/demo-data-replication.md deleted file mode 100644 index 2917eaa48fc..00000000000 --- a/src/current/v2.0/demo-data-replication.md +++ /dev/null @@ -1,276 +0,0 @@ ---- -title: Data Replication -summary: Use a local cluster to explore how CockroachDB replicates and distributes data. -toc: true ---- - -This page walks you through a simple demonstration of how CockroachDB replicates and distributes data. Starting with a 1-node local cluster, you'll write some data, add 2 nodes, and watch how the data is replicated automatically. You'll then update the cluster to replicate 5 ways, add 2 more nodes, and again watch how all existing replicas are re-replicated to the new nodes. - -## Before You Begin - -Make sure you have already [installed CockroachDB](install-cockroachdb.html). - -## Step 1. Start a 1-node cluster - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach start \ ---insecure \ ---store=repdemo-node1 \ ---host=localhost -~~~ - -## Step 2. Write data - -In a new terminal, use the [`cockroach gen`](generate-cockroachdb-resources.html) command to generate an example `intro` database: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach gen example-data intro | cockroach sql --insecure -~~~ - -In the same terminal, open the [built-in SQL shell](use-the-built-in-sql-client.html) and verify that the new `intro` database was added with one table, `mytable`: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SHOW DATABASES; -~~~ - -~~~ -+--------------------+ -| Database | -+--------------------+ -| intro | -| system | -+--------------------+ -(2 rows) -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SHOW TABLES FROM intro; -~~~ - -~~~ -+---------+ -| Table | -+---------+ -| mytable | -+---------+ -(1 row) -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM intro.mytable WHERE (l % 2) = 0; -~~~ - -~~~ -+----+-----------------------------------------------------+ -| l | v | -+----+-----------------------------------------------------+ -| 0 | !__aaawwmqmqmwwwaas,,_ .__aaawwwmqmqmwwaaa,, | -| 2 | !"VT?!"""^~~^"""??T$Wmqaa,_auqmWBT?!"""^~~^^""??YV^ | -| 4 | ! "?##mW##?"- | -| 6 | ! C O N G R A T S _am#Z??A#ma, Y | -| 8 | ! _ummY" "9#ma, A | -| 10 | ! vm#Z( )Xmms Y | -| 12 | ! .j####mmm#####mm#m##6. | -| 14 | ! W O W ! jmm###mm######m#mmm##6 | -| 16 | ! ]#me*Xm#m#mm##m#m##SX##c | -| 18 | ! dm#||+*$##m#mm#m#Svvn##m | -| 20 | ! :mmE=|+||S##m##m#1nvnnX##; A | -| 22 | ! :m#h+|+++=Xmm#m#1nvnnvdmm; M | -| 24 | ! Y $#m>+|+|||##m#1nvnnnnmm# A | -| 26 | ! O ]##z+|+|+|3#mEnnnnvnd##f Z | -| 28 | ! U D 4##c|+|+|]m#kvnvnno##P E | -| 30 | ! I 4#ma+|++]mmhvnnvq##P` ! | -| 32 | ! 
D I ?$#q%+|dmmmvnnm##! |
-| 34 | ! T -4##wu#mm#pw##7' |
-| 36 | ! -?$##m####Y' |
-| 38 | ! !! "Y##Y"- |
-| 40 | ! |
-+----+-----------------------------------------------------+
-(21 rows)
-~~~
-
-Exit the SQL shell:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> \q
-~~~
-
-## Step 3. Add two nodes
-
-In a new terminal, add node 2:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach start \
---insecure \
---store=repdemo-node2 \
---host=localhost \
---port=26258 \
---http-port=8081 \
---join=localhost:26257
-~~~
-
-In a new terminal, add node 3:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach start \
---insecure \
---store=repdemo-node3 \
---host=localhost \
---port=26259 \
---http-port=8082 \
---join=localhost:26257
-~~~
-
-## Step 4. Watch data replicate to the new nodes
-
-Open the Admin UI at http://localhost:8080 to see that all three nodes are listed. At first, the replica count will be lower for nodes 2 and 3. Very soon, the replica count will be identical across all three nodes, indicating that all data in the cluster has been replicated 3 times; there's a copy of every piece of data on each node.
-
-CockroachDB Admin UI
-
-## Step 5. Increase the replication factor
-
-As you just saw, CockroachDB replicates data 3 times by default. Now, in the terminal you used for the built-in SQL shell or in a new terminal, use the [`cockroach zone`](configure-replication-zones.html) command to change the cluster's `.default` replication factor to 5:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ echo 'num_replicas: 5' | cockroach zone set .default --insecure -f -
-~~~
-
-~~~
-range_min_bytes: 1048576
-range_max_bytes: 67108864
-gc:
-  ttlseconds: 90000
-num_replicas: 5
-constraints: []
-~~~
-
-In addition to the `.default` replication zone for database and table data, CockroachDB comes with pre-configured replication zones for [important internal data](configure-replication-zones.html#create-a-replication-zone-for-a-system-range). To list these pre-configured zones, use the `cockroach zone ls` subcommand:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach zone ls --insecure
-~~~
-
-~~~
-.default
-.liveness
-.meta
-system.jobs
-~~~
-
-For the cluster as a whole to remain available, the "system ranges" for this internal data must always retain a majority of their replicas. Therefore, if you increase the default replication factor, be sure to also increase the replication factor for these replication zones:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ echo 'num_replicas: 5' | cockroach zone set .liveness --insecure -f -
-~~~
-
-~~~
-range_min_bytes: 1048576
-range_max_bytes: 67108864
-gc:
-  ttlseconds: 600
-num_replicas: 5
-constraints: []
-~~~
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ echo 'num_replicas: 5' | cockroach zone set .meta --insecure -f -
-~~~
-
-~~~
-range_min_bytes: 1048576
-range_max_bytes: 67108864
-gc:
-  ttlseconds: 3600
-num_replicas: 5
-constraints: []
-~~~
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ echo 'num_replicas: 5' | cockroach zone set system.jobs --insecure -f -
-~~~
-
-~~~
-range_min_bytes: 1048576
-range_max_bytes: 67108864
-gc:
-  ttlseconds: 600
-num_replicas: 5
-constraints: []
-~~~
-
-## Step 6. 
Add two more nodes - -In a new terminal, add node 4: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach start --insecure \ ---host=localhost \ ---store=repdemo-node4 \ ---port=26260 \ ---http-port=8083 \ ---join=localhost:26257 -~~~ - -In a new terminal, add node 5: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach start \ ---insecure \ ---host=localhost \ ---store=repdemo-node5 \ ---port=26261 \ ---http-port=8084 \ ---join=localhost:26257 -~~~ - -## Step 7. Watch data replicate to the new nodes - -Back in the Admin UI, you'll see that there are now 5 nodes listed. Again, at first, the replica count will be lower for nodes 4 and 5. But because you changed the default replication factor to 5, very soon, the replica count will be identical across all 5 nodes, indicating that all data in the cluster has been replicated 5 times. - -CockroachDB Admin UI - -## Step 8. Stop the cluster - -Once you're done with your test cluster, stop each node by switching to its terminal and pressing **CTRL-C**. - -{{site.data.alerts.callout_success}} -For the last 2 nodes, the shutdown process will take longer (about a minute) and will eventually force stop the nodes. This is because, with only 2 nodes still online, a majority of replicas are no longer available (3 of 5), and so the cluster is not operational. To speed up the process, press **CTRL-C** a second time in the nodes' terminals. -{{site.data.alerts.end}} - -If you do not plan to restart the cluster, you may want to remove the nodes' data stores: - -{% include copy-clipboard.html %} -~~~ shell -$ rm -rf repdemo-node1 repdemo-node2 repdemo-node3 repdemo-node4 repdemo-node5 -~~~ - -## What's Next? - -Use a local cluster to explore these other core CockroachDB features: - -- [Fault Tolerance & Recovery](demo-fault-tolerance-and-recovery.html) -- [Automatic Rebalancing](demo-automatic-rebalancing.html) -- [Cross-Cloud Migration](demo-automatic-cloud-migration.html) -- [Follow-the-Workload](demo-follow-the-workload.html) -- [Orchestration](orchestrate-a-local-cluster-with-kubernetes-insecure.html) -- [JSON Support](demo-json-support.html) diff --git a/src/current/v2.0/demo-fault-tolerance-and-recovery.md b/src/current/v2.0/demo-fault-tolerance-and-recovery.md deleted file mode 100644 index ca8adc08d8b..00000000000 --- a/src/current/v2.0/demo-fault-tolerance-and-recovery.md +++ /dev/null @@ -1,374 +0,0 @@ ---- -title: Fault Tolerance & Recovery -summary: Use a local cluster to explore how CockroachDB remains available during, and recovers after, failure. -toc: true ---- - -This page walks you through a simple demonstration of how CockroachDB remains available during, and recovers after, failure. Starting with a 3-node local cluster, you'll remove a node and see how the cluster continues uninterrupted. You'll then write some data while the node is offline, rejoin the node, and see how it catches up with the rest of the cluster. Finally, you'll add a fourth node, remove a node again, and see how missing replicas eventually re-replicate to the new node. - - -## Before You Begin - -Make sure you have already [installed CockroachDB](install-cockroachdb.html). - -## Step 1. 
Start a 3-node cluster
-
-Use the [`cockroach start`](start-a-node.html) command to start 3 nodes:
-
-{% include copy-clipboard.html %}
-~~~ shell
-# In a new terminal, start node 1:
-$ cockroach start \
---insecure \
---store=fault-node1 \
---host=localhost \
---port=26257 \
---http-port=8080 \
---join=localhost:26257,localhost:26258,localhost:26259
-~~~
-
-{% include copy-clipboard.html %}
-~~~ shell
-# In a new terminal, start node 2:
-$ cockroach start \
---insecure \
---store=fault-node2 \
---host=localhost \
---port=26258 \
---http-port=8081 \
---join=localhost:26257,localhost:26258,localhost:26259
-~~~
-
-{% include copy-clipboard.html %}
-~~~ shell
-# In a new terminal, start node 3:
-$ cockroach start \
---insecure \
---store=fault-node3 \
---host=localhost \
---port=26259 \
---http-port=8082 \
---join=localhost:26257,localhost:26258,localhost:26259
-~~~
-
-## Step 2. Initialize the cluster
-
-In a new terminal, use the [`cockroach init`](initialize-a-cluster.html) command to perform a one-time initialization of the cluster:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach init \
---insecure \
---host=localhost \
---port=26257
-~~~
-
-## Step 3. Verify that the cluster is live
-
-In a new terminal, use the [`cockroach sql`](use-the-built-in-sql-client.html) command to connect the built-in SQL shell to any node:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql --insecure --port=26257
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SHOW DATABASES;
-~~~
-
-~~~
-+--------------------+
-| Database |
-+--------------------+
-| system |
-+--------------------+
-(1 row)
-~~~
-
-Exit the SQL shell:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> \q
-~~~
-
-## Step 4. Remove a node temporarily
-
-In the terminal running node 2, press **CTRL-C** to stop the node.
-
-Alternatively, you can open a new terminal and run the [`cockroach quit`](stop-a-node.html) command against port `26258`:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach quit --insecure --port=26258
-~~~
-
-~~~
-initiating graceful shutdown of server
-ok
-~~~
-
-## Step 5. Verify that the cluster remains available
-
-Switch to the terminal for the built-in SQL shell and reconnect the shell to node 1 (port `26257`) or node 3 (port `26259`):
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql --insecure --port=26259
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SHOW DATABASES;
-~~~
-
-~~~
-+--------------------+
-| Database |
-+--------------------+
-| system |
-+--------------------+
-(1 row)
-~~~
-
-As you see, despite one node being offline, the cluster continues uninterrupted because a majority of replicas (2/3) remains available. If you were to remove another node, however, leaving only one node live, the cluster would be unresponsive until another node was brought back online.
-
-Exit the SQL shell:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> \q
-~~~
-
-## Step 6. 
Write data while the node is offline
-
-In the same terminal, use the [`cockroach gen`](generate-cockroachdb-resources.html) command to generate an example `startrek` database:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach gen example-data startrek | cockroach sql --insecure
-~~~
-
-~~~
-CREATE DATABASE
-SET
-DROP TABLE
-DROP TABLE
-CREATE TABLE
-INSERT 79
-CREATE TABLE
-INSERT 200
-~~~
-
-Then reconnect the SQL shell to node 1 (port `26257`) or node 3 (port `26259`) and verify that the new `startrek` database was added with two tables, `episodes` and `quotes`:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql --insecure --port=26259
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SHOW DATABASES;
-~~~
-
-~~~
-+--------------------+
-| Database |
-+--------------------+
-| startrek |
-| system |
-+--------------------+
-(2 rows)
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SHOW TABLES FROM startrek;
-~~~
-
-~~~
-+----------+
-| Table |
-+----------+
-| episodes |
-| quotes |
-+----------+
-(2 rows)
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM startrek.episodes LIMIT 10;
-~~~
-
-~~~
-+----+--------+-----+--------------------------------+----------+
-| id | season | num | title | stardate |
-+----+--------+-----+--------------------------------+----------+
-| 1 | 1 | 1 | The Man Trap | 1531.1 |
-| 2 | 1 | 2 | Charlie X | 1533.6 |
-| 3 | 1 | 3 | Where No Man Has Gone Before | 1312.4 |
-| 4 | 1 | 4 | The Naked Time | 1704.2 |
-| 5 | 1 | 5 | The Enemy Within | 1672.1 |
-| 6 | 1 | 6 | Mudd's Women | 1329.8 |
-| 7 | 1 | 7 | What Are Little Girls Made Of? | 2712.4 |
-| 8 | 1 | 8 | Miri | 2713.5 |
-| 9 | 1 | 9 | Dagger of the Mind | 2715.1 |
-| 10 | 1 | 10 | The Corbomite Maneuver | 1512.2 |
-+----+--------+-----+--------------------------------+----------+
-(10 rows)
-~~~
-
-Exit the SQL shell:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> \q
-~~~
-
-## Step 7. Rejoin the node to the cluster
-
-Switch to the terminal for node 2, and rejoin the node to the cluster, using the same command that you used in step 1:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach start --insecure \
---store=fault-node2 \
---host=localhost \
---port=26258 \
---http-port=8081 \
---join=localhost:26257,localhost:26258,localhost:26259
-~~~
-
-~~~
-CockroachDB node starting at {{ now | date: "%Y-%m-%d %H:%M:%S.%6 +0000 UTC" }}
-build: CCL {{page.release_info.version}} @ {{page.release_info.build_time}}
-admin: http://localhost:8081
-sql: postgresql://root@localhost:26258?sslmode=disable
-logs: node2/logs
-store[0]: path=fault-node2
-status: restarted pre-existing node
-clusterID: {5638ba53-fb77-4424-ada9-8a23fbce0ae9}
-nodeID: 2
-~~~
-
-## Step 8. 
Verify that the rejoined node has caught up
-
-Switch to the terminal for the built-in SQL shell, connect the shell to the rejoined node 2 (port `26258`), and check for the `startrek` data that was added while the node was offline:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql --insecure --port=26258
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM startrek.episodes LIMIT 10;
-~~~
-
-~~~
-+----+--------+-----+--------------------------------+----------+
-| id | season | num | title | stardate |
-+----+--------+-----+--------------------------------+----------+
-| 1 | 1 | 1 | The Man Trap | 1531.1 |
-| 2 | 1 | 2 | Charlie X | 1533.6 |
-| 3 | 1 | 3 | Where No Man Has Gone Before | 1312.4 |
-| 4 | 1 | 4 | The Naked Time | 1704.2 |
-| 5 | 1 | 5 | The Enemy Within | 1672.1 |
-| 6 | 1 | 6 | Mudd's Women | 1329.8 |
-| 7 | 1 | 7 | What Are Little Girls Made Of? | 2712.4 |
-| 8 | 1 | 8 | Miri | 2713.5 |
-| 9 | 1 | 9 | Dagger of the Mind | 2715.1 |
-| 10 | 1 | 10 | The Corbomite Maneuver | 1512.2 |
-+----+--------+-----+--------------------------------+----------+
-(10 rows)
-~~~
-
-At first, while node 2 is catching up, it acts as a proxy to one of the other nodes that has the data. This shows that even when a copy of the data is not stored locally, the node has seamless access to it.
-
-Soon enough, node 2 catches up entirely. To verify, open the Admin UI at `http://localhost:8080` to see that all three nodes are listed, and the replica count is identical for each. This means that all data in the cluster has been replicated 3 times; there's a copy of every piece of data on each node.
-
-{{site.data.alerts.callout_success}}CockroachDB replicates data 3 times by default. You can customize the number and location of replicas for the entire cluster or for specific sets of data using replication zones.{{site.data.alerts.end}}
-
-CockroachDB Admin UI
-
-## Step 9. Add another node
-
-Now, to prepare the cluster for a permanent node failure, open a new terminal and add a fourth node:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach start \
---insecure \
---store=fault-node4 \
---host=localhost \
---port=26260 \
---http-port=8083 \
---join=localhost:26257,localhost:26258,localhost:26259
-~~~
-
-~~~
-CockroachDB node starting at {{ now | date: "%Y-%m-%d %H:%M:%S.%6 +0000 UTC" }}
-build: CCL {{page.release_info.version}} @ {{page.release_info.build_time}}
-admin: http://localhost:8083
-sql: postgresql://root@localhost:26260?sslmode=disable
-logs: node4/logs
-store[0]: path=fault-node4
-status: initialized new node, joined pre-existing cluster
-clusterID: {5638ba53-fb77-4424-ada9-8a23fbce0ae9}
-nodeID: 4
-~~~
-
-## Step 10. Remove a node permanently
-
-Again, switch to the terminal running node 2 and press **CTRL-C** to stop it.
-
-Alternatively, you can open a new terminal and run the [`cockroach quit`](stop-a-node.html) command against port `26258`:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach quit --insecure --port=26258
-~~~
-
-~~~
-initiating graceful shutdown of server
-ok
-server drained and shutdown completed
-~~~
-
-## Step 11. Verify that the cluster re-replicates missing replicas
-
-Back in the Admin UI, you'll see 4 nodes listed. After about 1 minute, the dot next to node 2 will turn yellow, indicating that the node is not responding.
-
-CockroachDB Admin UI
-
-After about 10 minutes, node 2 will move into a **Dead Nodes** section, indicating that the node is not expected to come back. 
At this point, in the **Live Nodes** section, you should also see that the **Replicas** count for node 4 matches the count for nodes 1 and 3, the other live nodes. This indicates that all missing replicas (those that were on node 2) have been re-replicated to node 4.
-
-CockroachDB Admin UI
-
-## Step 12. Stop the cluster
-
-Once you're done with your test cluster, stop each node by switching to its terminal and pressing **CTRL-C**.
-
-{{site.data.alerts.callout_success}}For the last node, the shutdown process will take longer (about a minute) and will eventually force stop the node. This is because, with only 1 node still online, a majority of replicas are no longer available (2 of 3), and so the cluster is not operational. To speed up the process, press CTRL-C a second time.{{site.data.alerts.end}}
-
-If you do not plan to restart the cluster, you may want to remove the nodes' data stores:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ rm -rf fault-node1 fault-node2 fault-node3 fault-node4
-~~~
-
-## What's Next?
-
-Use a local cluster to explore these other core CockroachDB features:
-
-- [Data Replication](demo-data-replication.html)
-- [Automatic Rebalancing](demo-automatic-rebalancing.html)
-- [Cross-Cloud Migration](demo-automatic-cloud-migration.html)
-- [Follow-the-Workload](demo-follow-the-workload.html)
-- [Orchestration](orchestrate-a-local-cluster-with-kubernetes-insecure.html)
-- [JSON Support](demo-json-support.html)
diff --git a/src/current/v2.0/demo-follow-the-workload.md b/src/current/v2.0/demo-follow-the-workload.md
deleted file mode 100644
index b3718cc443a..00000000000
--- a/src/current/v2.0/demo-follow-the-workload.md
+++ /dev/null
@@ -1,297 +0,0 @@
----
-title: Follow-the-Workload
-summary: CockroachDB can dynamically optimize read latency for the location from which most of the workload is originating.
-toc: true
----
-
-"Follow-the-workload" refers to CockroachDB's ability to dynamically optimize read latency for the location from which most of the workload is originating. This page explains how "follow-the-workload" works and walks you through a simple demonstration using a local cluster.
-
-
-## Overview
-
-### Basic Terms
-
-To understand how "follow-the-workload" works, it's important to start with some basic terms:
-
-Term | Description
------|------------
-**Range** | CockroachDB stores all user data and almost all system data in a giant sorted map of key-value pairs. This keyspace is divided into "ranges", contiguous chunks of the keyspace, so that every key can always be found in a single range.
-**Range Replica** | CockroachDB replicates each range (3 times by default) and stores each replica on a different node.
-**Range Lease** | For each range, one of the replicas holds the "range lease". This replica, referred to as the "leaseholder", is the one that receives and coordinates all read and write requests for the range.
-
-### How It Works
-
-"Follow-the-workload" is based on the way **range leases** handle read requests. Read requests bypass the Raft consensus protocol, accessing the range replica that holds the range lease (the leaseholder) and sending the results to the client without needing to coordinate with any of the other range replicas. Bypassing Raft, and the network round trips involved, is possible because the leaseholder is guaranteed to be up-to-date, since all write requests also go to the leaseholder. 
- -This increases the speed of reads, but it doesn't guarantee that the range lease will be anywhere close to the origin of requests. If requests are coming from the US West, for example, and the relevant range lease is on a node in the US East, the requests would likely enter a gateway node in the US West and then get routed to the node with the range lease in the US East. - -However, you can cause the cluster to actively move range leases for even better read performance by starting each node with the [`--locality`](start-a-node.html#locality) flag. With this flag specified, the cluster knows about the location of each node, so when there's high latency between nodes, the cluster will move active range leases to a node closer to the origin of the majority of the workload. This is especially helpful for applications with workloads that move around throughout the day (e.g., most of the traffic is in the US East in the morning and in the US West in the evening). - -{{site.data.alerts.callout_success}}To enable "follow-the-workload", you just need to start each node of the cluster with the --locality flag, as shown in the tutorial below. No additional user action is required.{{site.data.alerts.end}} - -### Example - -In this example, let's imagine that lots of read requests are going to node 1, and that the requests are for data in range 3. Because range 3's lease is on node 3, the requests are routed to node 3, which returns the results to node 1. Node 1 then responds to the clients. - -Follow-the-workload example - -However, if the nodes were started with the [`--locality`](start-a-node.html#locality) flag, after a short while, the cluster would move range 3's lease to node 1, which is closer to the origin of the workload, thus reducing the network round trips and increasing the speed of reads. - -Follow-the-workload example - -## Tutorial - -### Step 1. Install prerequisites - -In this tutorial, you'll use CockroachDB, the `comcast` network tool to simulate network latency on your local workstation, and the `kv` load generator to simulate client workloads. Before you begin, make sure these applications are installed: - -- Install the latest version of [CockroachDB](install-cockroachdb.html). -- Install [Go](https://golang.org/doc/install) version 1.9 or higher. If you're on a Mac and using Homebrew, use `brew install go`. You can check your local version by running `go version`. -- Install the [`comcast`](https://github.com/tylertreat/comcast) network simulation tool: `go get github.com/tylertreat/comcast` -- Install the [`kv`](https://github.com/cockroachdb/loadgen/tree/master/kv) load generator: `go get github.com/cockroachdb/loadgen/kv` - -Also, to keep track of the data files and logs for your cluster, you may want to create a new directory (e.g., `mkdir follow-workload`) and start all your nodes in that directory. - -### Step 2. Start simulating network latency - -"Follow-the-workload" only kicks in when there's high latency between the nodes of the CockroachDB cluster. In this tutorial, you'll run 3 nodes on your local workstation, with each node pretending to be in a different region of the US. To simulate latency between the nodes, use the `comcast` tool that you installed earlier. - -In a new terminal, start `comcast` as follows: - -{% include copy-clipboard.html %} -~~~ shell -$ comcast --device lo0 --latency 100 -~~~ - -For the `--device` flag, use `lo0` if you're on Mac or `lo` if you're on Linux. 
If neither works, run the `ifconfig` command and find the interface responsible for `127.0.0.1` in the output. - -This command causes a 100 millisecond delay for all requests on the loopback interface of your local workstation. It will only affect connections from the machine to itself, not to/from the Internet. - -### Step 3. Start the cluster - -Use the [`cockroach start`](start-a-node.html) command to start 3 nodes on your local workstation, using the [`--locality`](start-a-node.html#locality) flag to pretend that each node is in a different region of the US. - -1. In a new terminal, start a node in the "US West": - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach start \ - --insecure \ - --locality=region=us-west \ - --host=localhost \ - --store=follow1 \ - --port=26257 \ - --http-port=8080 \ - --join=localhost:26257,localhost:26258,localhost:26259 - ~~~ - -2. In a new terminal, start a node in the "US Midwest": - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach start \ - --insecure \ - --locality=region=us-midwest \ - --host=localhost \ - --store=follow2 \ - --port=26258 \ - --http-port=8081 \ - --join=localhost:26257,localhost:26258,localhost:26259 - ~~~ - -3. In a new terminal, start a node in the "US East": - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach start \ - --insecure \ - --locality=region=us-east \ - --host=localhost \ - --store=follow3 \ - --port=26259 \ - --http-port=8082 \ - --join=localhost:26257,localhost:26258,localhost:26259 - ~~~ - -### Step 4. Initialize the cluster - -In a new terminal, use the [`cockroach init`](initialize-a-cluster.html) command to perform a one-time initialization of the cluster: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach init \ ---insecure \ ---host=localhost \ ---port=26257 -~~~ - -### Step 5. Simulate traffic in the US East - -Now that the cluster is live, use the `kv` load generator that you installed earlier to simulate multiple client connections to the node in the "US East". - -1. In a new terminal, start `kv`, pointing it at port `26259`, which is the port of the node with the `us-east` locality: - - {% include copy-clipboard.html %} - ~~~ shell - $ kv -duration 1m -concurrency 32 -read-percent 100 -max-rate 100 'postgresql://root@localhost:26259?sslmode=disable' - ~~~ - - This command initiates 32 concurrent read-only workloads for 1 minute but limits the entire `kv` process to 100 operations per second (since you're running everything on a single machine). While `kv` is running, it will print some stats to the terminal: - - ~~~ - _elapsed___errors__ops/sec(inst)___ops/sec(cum)__p95(ms)__p99(ms)_pMax(ms) - 1s 0 23.0 23.0 838.9 838.9 838.9 - 2s 0 111.0 66.9 805.3 838.9 838.9 - 3s 0 100.0 78.0 209.7 209.7 209.7 - 4s 0 99.9 83.5 209.7 209.7 209.7 - 5s 0 100.0 86.8 209.7 209.7 209.7 - ... - ~~~ - - {{site.data.alerts.callout_info}}The latency numbers printed are over 200 milliseconds because the 100 millisecond delay in each direction (200ms round-trip) caused by the comcast tool also applies to the traffic going from the kv process to the cockroach process. If you were to set up more advanced rules that excluded the kv process's traffic or to run this on a real network with real network delay, these numbers would be down in the single-digit milliseconds.{{site.data.alerts.end}} - -2. Let the load generator run to completion. - -### Step 6. Check the location of the range lease - -The load generator created a `kv` table that maps to an underlying key-value range. 
Verify that the range's lease moved to the node in the "US East" as follows.
-
-1. In a new terminal, run the [`cockroach node status`](view-node-details.html) command against any node:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ cockroach node status --insecure --port=26257
-    ~~~
-
-    ~~~
-    +----+-----------------+--------+---------------------+---------------------+---------+
-    | id | address | build | updated_at | started_at | is_live |
-    +----+-----------------+--------+---------------------+---------------------+---------+
-    | 1 | localhost:26257 | v2.0.3 | 2018-07-05 07:58:17 | 2018-07-05 07:53:57 | true |
-    | 2 | localhost:26258 | v2.0.3 | 2018-07-05 07:58:19 | 2018-07-05 07:53:58 | true |
-    | 3 | localhost:26259 | v2.0.3 | 2018-07-05 07:58:19 | 2018-07-05 07:53:59 | true |
-    +----+-----------------+--------+---------------------+---------------------+---------+
-    (3 rows)
-    ~~~
-
-2. In the response, note the ID of the node running on port `26259`.
-
-3. In the same terminal, connect the [built-in SQL shell](use-the-built-in-sql-client.html) to any node:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ cockroach sql --insecure --port=26257
-    ~~~
-
-4. Check where the range lease is for the `kv` table:
-
-    {% include copy-clipboard.html %}
-    ~~~ sql
-    > SHOW EXPERIMENTAL_RANGES FROM TABLE test.kv;
-    ~~~
-
-    ~~~
-    +-----------+---------+----------+----------+--------------+
-    | Start Key | End Key | Range ID | Replicas | Lease Holder |
-    +-----------+---------+----------+----------+--------------+
-    | NULL | NULL | 29 | {1,2,3} | 3 |
-    +-----------+---------+----------+----------+--------------+
-    (1 row)
-    ~~~
-
-    `Replicas` and `Lease Holder` indicate the node IDs. As you can see, the lease for the range holding the `kv` table's data is on node 3, which is the same ID as the node on port `26259`.
-
-### Step 7. Simulate traffic in the US West
-
-1. In the same terminal, start `kv`, pointing it at port `26257`, which is the port of the node with the `us-west` locality:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ kv -duration 7m -concurrency 32 -read-percent 100 -max-rate 100 'postgresql://root@localhost:26257?sslmode=disable'
-    ~~~
-
-    This time, the command runs for a little longer, 7 minutes instead of 1 minute. This is necessary since the system will still "remember" the earlier requests to the other locality.
-
-2. Let the load generator run to completion.
-
-### Step 8. Check the location of the range lease
-
-Verify that the range's lease moved to the node in the "US West" as follows.
-
-1. In the same terminal, run the [`cockroach node status`](view-node-details.html) command against any node:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ cockroach node status --insecure --port=26257
-    ~~~
-
-    ~~~
-    +----+-----------------+--------+---------------------+---------------------+---------+
-    | id | address | build | updated_at | started_at | is_live |
-    +----+-----------------+--------+---------------------+---------------------+---------+
-    | 1 | localhost:26257 | v2.0.3 | 2018-07-05 08:11:17 | 2018-07-05 07:53:57 | true |
-    | 2 | localhost:26258 | v2.0.3 | 2018-07-05 08:11:19 | 2018-07-05 07:53:58 | true |
-    | 3 | localhost:26259 | v2.0.3 | 2018-07-05 08:11:19 | 2018-07-05 07:53:59 | true |
-    +----+-----------------+--------+---------------------+---------------------+---------+
-    (3 rows)
-    ~~~
-
-2. In the response, note the ID of the node running on port `26257`.
-
-3. 
Connect the [built-in SQL shell](use-the-built-in-sql-client.html) to any node:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ cockroach sql --insecure --port=26257
-    ~~~
-
-4. Check where the range lease is for the `kv` table:
-
-    {% include copy-clipboard.html %}
-    ~~~ sql
-    > SHOW EXPERIMENTAL_RANGES FROM TABLE test.kv;
-    ~~~
-
-    ~~~
-    +-----------+---------+----------+----------+--------------+
-    | Start Key | End Key | Range ID | Replicas | Lease Holder |
-    +-----------+---------+----------+----------+--------------+
-    | NULL | NULL | 29 | {1,2,3} | 1 |
-    +-----------+---------+----------+----------+--------------+
-    (1 row)
-    ~~~
-
-    As you can see, the lease for the range holding the `kv` table's data is now on node 1, which is the same ID as the node on port `26257`.
-
-### Step 9. Stop the cluster
-
-Once you're done with your cluster, press **CTRL-C** in each node's terminal.
-
-{{site.data.alerts.callout_success}}For the last node, the shutdown process will take longer (about a minute) and will eventually force stop the node. This is because, with only 1 node still online, a majority of replicas are no longer available (2 of 3), and so the cluster is not operational. To speed up the process, press CTRL-C a second time.{{site.data.alerts.end}}
-
-If you do not plan to restart the cluster, you may want to remove the nodes' data stores:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ rm -rf follow1 follow2 follow3
-~~~
-
-### Step 10. Stop simulating network latency
-
-Once you're done with this tutorial, you will not want a 100 millisecond delay for all requests on your local workstation, so stop the `comcast` tool:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ comcast --device lo0 --stop
-~~~
-
-## What's Next?
-
-Use a local cluster to explore these other core CockroachDB benefits:
-
-- [Data Replication](demo-data-replication.html)
-- [Fault Tolerance & Recovery](demo-fault-tolerance-and-recovery.html)
-- [Automatic Rebalancing](demo-automatic-rebalancing.html)
-- [Cross-Cloud Migration](demo-automatic-cloud-migration.html)
-- [Orchestration](orchestrate-a-local-cluster-with-kubernetes-insecure.html)
-- [JSON Support](demo-json-support.html)
diff --git a/src/current/v2.0/demo-json-support.md b/src/current/v2.0/demo-json-support.md
deleted file mode 100644
index 36317e0e28c..00000000000
--- a/src/current/v2.0/demo-json-support.md
+++ /dev/null
@@ -1,264 +0,0 @@
----
-title: JSON Support
-summary: Use a local cluster to explore how CockroachDB can store and query unstructured JSONB data.
-toc: true
----
-
-New in v2.0: This page walks you through a simple demonstration of how CockroachDB can store and query unstructured [`JSONB`](jsonb.html) data from a third-party API, as well as how an [inverted index](inverted-indexes.html) can optimize your queries.
-
-
-## Step 1. Install prerequisites
-
-For the Go version of this tutorial:
      -- Install the latest version of [CockroachDB](install-cockroachdb.html). -- Install the latest version of [Go](https://golang.org/dl/): `brew install go` -- Install the [PostgreSQL driver](https://github.com/lib/pq): `go get github.com/lib/pq` -
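-Optionally, once the single node from Step 2 below is running, you can confirm that the Go driver is installed correctly with a quick connectivity check. This snippet is a sketch for verification only, not part of the tutorial's sample code; it assumes the default insecure connection string used throughout this page:
-
-{% include copy-clipboard.html %}
-~~~ go
-package main
-
-import (
-  "database/sql"
-  "fmt"
-
-  // The PostgreSQL driver installed above, registered for database/sql.
-  _ "github.com/lib/pq"
-)
-
-func main() {
-  // Connect to the insecure local node started in Step 2.
-  db, err := sql.Open("postgres", "postgresql://root@localhost:26257?sslmode=disable")
-  if err != nil {
-    panic(err)
-  }
-  defer db.Close()
-
-  // A trivial round trip proves the driver and the node are talking.
-  var version string
-  if err := db.QueryRow("SELECT version()").Scan(&version); err != nil {
-    panic(err)
-  }
-  fmt.Println("connected to:", version)
-}
-~~~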
-For the Python version of this tutorial:
      -- Install the latest version of [CockroachDB](install-cockroachdb.html). -- Install the [Python psycopg2 driver](http://initd.org/psycopg/docs/install.html): `pip install psycopg2` -- Install the [Python Requests library](https://requests.readthedocs.io/en/latest/): `pip install requests` -
      - -## Step 2. Start a single-node cluster - -For the purpose of this tutorial, you need only one CockroachDB node running in insecure mode: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach start \ ---insecure \ ---store=json-test \ ---host=localhost -~~~ - -## Step 3. Create a user - -In a new terminal, as the `root` user, use the [`cockroach user`](create-and-manage-users.html) command to create a new user, `maxroach`. - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach user set maxroach --insecure -~~~ - -## Step 4. Create a database and grant privileges - -As the `root` user, open the [built-in SQL client](use-the-built-in-sql-client.html): - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure -~~~ - -Next, create a database called `jsonb_test`: - -{% include copy-clipboard.html %} -~~~ sql -> CREATE DATABASE jsonb_test; -~~~ - -Set the database as the default: - -{% include copy-clipboard.html %} -~~~ sql -> SET DATABASE = jsonb_test; -~~~ - -Then [grant privileges](grant.html) to the `maxroach` user: - -{% include copy-clipboard.html %} -~~~ sql -> GRANT ALL ON DATABASE jsonb_test TO maxroach; -~~~ - -## Step 5. Create a table - -Still in the SQL shell, create a table called `programming`: - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE programming ( - id UUID DEFAULT uuid_v4()::UUID PRIMARY KEY, - posts JSONB - ); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SHOW CREATE TABLE programming; -~~~ -~~~ -+--------------+-------------------------------------------------+ -| Table | CreateTable | -+--------------+-------------------------------------------------+ -| programming | CREATE TABLE programming ( | -| | id UUID NOT NULL DEFAULT uuid_v4()::UUID, | -| | posts JSON NULL, | -| | CONSTRAINT "primary" PRIMARY KEY (id ASC), | -| | FAMILY "primary" (id, posts) | -| | ) | -+--------------+-------------------------------------------------+ -~~~ - -## Step 6. Run the code - -Now that you have a database, user, and a table, let's run code to insert rows into the table. - -
-For the Go version:
      -The code queries the [Reddit API](https://www.reddit.com/dev/api/) for posts in [/r/programming](https://www.reddit.com/r/programming/). The Reddit API only returns 25 results per page; however, each page returns an `"after"` string that tells you how to get the next page. Therefore, the program does the following in a loop: - -1. Makes a request to the API. -2. Inserts the results into the table and grabs the `"after"` string. -3. Uses the new `"after"` string as the basis for the next request. - -Download the json-sample.go file, or create the file yourself and copy the code into it: - -{% include copy-clipboard.html %} -~~~ go -{% include {{ page.version.version }}/json/json-sample.go %} -~~~ - -In a new terminal window, navigate to your sample code file and run it: - -{% include copy-clipboard.html %} -~~~ shell -$ go run json-sample.go -~~~ -
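-For reference, here is a minimal sketch of the kind of loop described above, in case you want to study the moving parts before running the full program. It is an illustration only and may differ from the included json-sample.go file; it assumes the `maxroach` user and `jsonb_test` database created in the previous steps:
-
-{% include copy-clipboard.html %}
-~~~ go
-package main
-
-import (
-  "database/sql"
-  "encoding/json"
-  "fmt"
-  "io/ioutil"
-  "net/http"
-
-  _ "github.com/lib/pq"
-)
-
-func main() {
-  // Connect as the maxroach user to the jsonb_test database from the steps above.
-  db, err := sql.Open("postgres", "postgresql://maxroach@localhost:26257/jsonb_test?sslmode=disable")
-  if err != nil {
-    panic(err)
-  }
-  defer db.Close()
-
-  after := ""
-  for page := 0; page < 40; page++ {
-    // 1. Make a request to the API, passing the "after" token from the previous page.
-    req, err := http.NewRequest("GET", "https://www.reddit.com/r/programming.json?after="+after, nil)
-    if err != nil {
-      panic(err)
-    }
-    // Reddit asks API clients to identify themselves with a custom User-Agent.
-    req.Header.Set("User-Agent", "cockroachdb-jsonb-demo")
-    resp, err := http.DefaultClient.Do(req)
-    if err != nil {
-      panic(err)
-    }
-    body, err := ioutil.ReadAll(resp.Body)
-    resp.Body.Close()
-    if err != nil {
-      panic(err)
-    }
-
-    // 2. Pull out the posts and the next "after" token.
-    var listing struct {
-      Data struct {
-        After    string            `json:"after"`
-        Children []json.RawMessage `json:"children"`
-      } `json:"data"`
-    }
-    if err := json.Unmarshal(body, &listing); err != nil {
-      panic(err)
-    }
-
-    // 3. Insert each post into the table as one JSONB row.
-    for _, post := range listing.Data.Children {
-      if _, err := db.Exec("INSERT INTO programming (posts) VALUES ($1)", string(post)); err != nil {
-        panic(err)
-      }
-    }
-
-    fmt.Printf("inserted page %d (%d posts)\n", page+1, len(listing.Data.Children))
-    if listing.Data.After == "" {
-      break // No more pages.
-    }
-    after = listing.Data.After
-  }
-}
-~~~
-
-Because each post is stored as raw JSON, the `@>` containment queries later in this tutorial can filter on any field inside `posts` without a fixed schema.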
-For the Python version:
      -The code queries the [Reddit API](https://www.reddit.com/dev/api/) for posts in [/r/programming](https://www.reddit.com/r/programming/). The Reddit API only returns 25 results per page; however, each page returns an `"after"` string that tells you how to get the next page. Therefore, the program does the following in a loop: - -1. Makes a request to the API. -2. Grabs the `"after"` string. -3. Inserts the results into the table. -4. Uses the new `"after"` string as the basis for the next request. - -Download the json-sample.py file, or create the file yourself and copy the code into it: - -{% include copy-clipboard.html %} -~~~ python -{% include {{ page.version.version }}/json/json-sample.py %} -~~~ - -In a new terminal window, navigate to your sample code file and run it: - -{% include copy-clipboard.html %} -~~~ shell -$ python json-sample.py -~~~ -
-The program will take a while to finish, but you can start querying the data right away.
-
-## Step 7. Query the data
-
-Back in the terminal where the SQL shell is running, verify that rows of data are being inserted into your table:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT count(*) FROM programming;
-~~~
-~~~
-+-------+
-| count |
-+-------+
-| 1120 |
-+-------+
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT count(*) FROM programming;
-~~~
-~~~
-+-------+
-| count |
-+-------+
-| 2400 |
-+-------+
-~~~
-
-Now, retrieve all the current entries where the link is pointing to somewhere on GitHub:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT id FROM programming \
-WHERE posts @> '{"data": {"domain": "github.com"}}';
-~~~
-~~~
-+--------------------------------------+
-| id |
-+--------------------------------------+
-| 0036d489-3fe3-46ec-8219-2eaee151af4b |
-| 00538c2f-592f-436a-866f-d69b58e842b6 |
-| 00aff68c-3867-4dfe-82b3-2a27262d5059 |
-| 00cc3d4d-a8dd-4c9a-a732-00ed40e542b0 |
-| 00ecd1dd-4d22-4af6-ac1c-1f07f3eba42b |
-| 012de443-c7bf-461a-b563-925d34d1f996 |
-| 014c0ac8-4b4e-4283-9722-1dd6c780f7a6 |
-| 017bfb8b-008e-4df2-90e4-61573e3a3f62 |
-| 0271741e-3f2a-4311-b57f-a75e5cc49b61 |
-| 02f31c61-66a7-41ba-854e-1ece0736f06b |
-| 035f31a1-b695-46be-8b22-469e8e755a50 |
-| 03bd9793-7b1b-4f55-8cdd-99d18d6cb3ea |
-| 03e0b1b4-42c3-4121-bda9-65bcb22dcf72 |
-| 0453bc77-4349-4136-9b02-3a6353ea155e |
-...
-+--------------------------------------+
-(334 rows)
-
-Time: 105.877736ms
-~~~
-
-{{site.data.alerts.callout_info}}Since you are querying live data, your results for this and the following steps may vary from the results documented in this tutorial.{{site.data.alerts.end}}
-
-## Step 8. Create an inverted index to optimize performance
-
-The query in the previous step took 105.877736ms. To optimize the performance of queries that filter on the `JSONB` column, let's create an [inverted index](inverted-indexes.html) on the column:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE INVERTED INDEX ON programming(posts);
-~~~
-
-## Step 9. Run the query again
-
-Now that there is an inverted index, the same query will run much faster:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT id FROM programming \
-WHERE posts @> '{"data": {"domain": "github.com"}}';
-~~~
-~~~
-(334 rows)
-
-Time: 28.646769ms
-~~~
-
-Instead of 105.877736ms, the query now takes 28.646769ms.
-
-## What's Next?
-
-Use a local cluster to explore these other core CockroachDB features:
-
-- [Data Replication](demo-data-replication.html)
-- [Fault Tolerance & Recovery](demo-fault-tolerance-and-recovery.html)
-- [Automatic Rebalancing](demo-automatic-rebalancing.html)
-- [Cross-Cloud Migration](demo-automatic-cloud-migration.html)
-- [Follow-the-Workload](demo-follow-the-workload.html)
-- [Orchestration](orchestrate-a-local-cluster-with-kubernetes-insecure.html)
-
-You may also want to learn more about the [`JSONB`](jsonb.html) data type and [inverted indexes](inverted-indexes.html).
diff --git a/src/current/v2.0/deploy-cockroachdb-on-aws-insecure.md b/src/current/v2.0/deploy-cockroachdb-on-aws-insecure.md
deleted file mode 100644
index 01f51ba281e..00000000000
--- a/src/current/v2.0/deploy-cockroachdb-on-aws-insecure.md
+++ /dev/null
@@ -1,132 +0,0 @@
----
-title: Deploy CockroachDB on AWS EC2 (Insecure)
-summary: Learn how to deploy CockroachDB on Amazon's AWS EC2 platform. 
-toc: true -toc_not_nested: true -ssh-link: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html ---- - - - -This page shows you how to manually deploy an insecure multi-node CockroachDB cluster on Amazon's AWS EC2 platform, using AWS's managed load balancing service to distribute client traffic. - -{{site.data.alerts.callout_danger}}If you plan to use CockroachDB in production, we strongly recommend using a secure cluster instead. Select Secure above for instructions.{{site.data.alerts.end}} - - -## Requirements - -{% include {{ page.version.version }}/prod-deployment/insecure-requirements.md %} - -## Recommendations - -{% include {{ page.version.version }}/prod-deployment/insecure-recommendations.md %} - -- All instances running CockroachDB should be members of the same Security Group. - -## Step 1. Configure your network - -CockroachDB requires TCP communication on two ports: - -- `26257` for inter-node communication (i.e., working as a cluster), for applications to connect to the load balancer, and for routing from the load balancer to nodes -- `8080` for exposing your Admin UI - -You can create these rules using [Security Groups' Inbound Rules](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html#adding-security-group-rule). - -#### Inter-node and load balancer-node communication - -| Field | Recommended Value | -|-------|-------------------| -| Type | Custom TCP Rule | -| Protocol | TCP | -| Port Range | **26257** | -| Source | The name of your security group (e.g., *sg-07ab277a*) | - -#### Admin UI - -| Field | Recommended Value | -|-------|-------------------| -| Type | Custom TCP Rule | -| Protocol | TCP | -| Port Range | **8080** | -| Source | Your network's IP ranges | - -#### Application data - -| Field | Recommended Value | -|-------|-------------------| -| Type | Custom TCP Rules | -| Protocol | TCP | -| Port Range | **26257** | -| Source | Your application's IP ranges | - -## Step 2. Create instances - -[Create an instance](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/launching-instance.html) for each node you plan to have in your cluster. If you plan to run a sample workload against the cluster, create a separate instance for that workload. - -- Run at least 3 nodes to [ensure survivability](recommended-production-settings.html#cluster-topology). - -- Use `m` (general purpose), `c` (compute-optimized), or `i` (storage-optimized) [instances](https://aws.amazon.com/ec2/instance-types/), with SSD-backed [EBS volumes](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html) or [Instance Store volumes](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ssd-instance-store.html). For example, Cockroach Labs has used `m3.large` instances (2 vCPUs and 7.5 GiB of RAM per instance) for internal testing. - -- **Do not** use ["burstable" `t2` instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/t2-instances.html), which limit the load on a single core. - -For more details, see [Hardware Recommendations](recommended-production-settings.html#hardware) and [Cluster Topology](recommended-production-settings.html#cluster-topology). - -## Step 3. Synchronize clocks - -{% include {{ page.version.version }}/prod-deployment/synchronize-clocks.md %} - -## Step 4. Set up load balancing - -Each CockroachDB node is an equally suitable SQL gateway to your cluster, but to ensure client performance and reliability, it's important to use load balancing: - -- **Performance:** Load balancers spread client traffic across nodes. 
This prevents any one node from being overwhelmed by requests and improves overall cluster performance (queries per second). - -- **Reliability:** Load balancers decouple client health from the health of a single CockroachDB node. In cases where a node fails, the load balancer redirects client traffic to available nodes. - -AWS offers fully-managed load balancing to distribute traffic between instances. - -1. [Add AWS load balancing](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-increase-availability.html). Be sure to: - - Set forwarding rules to route TCP traffic from the load balancer's port **26257** to port **26257** on the nodes. - - Configure health checks to use HTTP port **8080** and path `/health?ready=1`. This [health endpoint](monitoring-and-alerting.html#health-ready-1) ensures that load balancers do not direct traffic to nodes that are live but not ready to receive requests. -2. Note the provisioned **IP Address** for the load balancer. You'll use this later to test load balancing and to connect your application to the cluster. - -{{site.data.alerts.callout_info}}If you would prefer to use HAProxy instead of AWS's managed load balancing, see the On-Premises tutorial for guidance.{{site.data.alerts.end}} - -## Step 5. Start nodes - -{% include {{ page.version.version }}/prod-deployment/insecure-start-nodes.md %} - -## Step 6. Initialize the cluster - -{% include {{ page.version.version }}/prod-deployment/insecure-initialize-cluster.md %} - -## Step 7. Test the cluster - -{% include {{ page.version.version }}/prod-deployment/insecure-test-cluster.md %} - -## Step 8. Run a sample workload - -{% include {{ page.version.version }}/prod-deployment/insecure-test-load-balancing.md %} - -## Step 9. Set up monitoring and alerting - -{% include {{ page.version.version }}/prod-deployment/monitor-cluster.md %} - -## Step 10. Scale the cluster - -{% include {{ page.version.version }}/prod-deployment/insecure-scale-cluster.md %} - -## Step 11. Use the cluster - -Now that your deployment is working, you can: - -1. [Implement your data model](sql-statements.html). -2. [Create users](create-and-manage-users.html) and [grant them privileges](grant.html). -3. [Connect your application](install-client-drivers.html). Be sure to connect your application to the AWS load balancer, not to a CockroachDB node. - -## See Also - -{% include {{ page.version.version }}/prod-deployment/prod-see-also.md %} diff --git a/src/current/v2.0/deploy-cockroachdb-on-aws.md b/src/current/v2.0/deploy-cockroachdb-on-aws.md deleted file mode 100644 index 46fb0610d17..00000000000 --- a/src/current/v2.0/deploy-cockroachdb-on-aws.md +++ /dev/null @@ -1,133 +0,0 @@ ---- -title: Deploy CockroachDB on AWS EC2 -summary: Learn how to deploy CockroachDB on Amazon's AWS EC2 platform. -toc: true -toc_not_nested: true -ssh-link: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html - ---- - -
      - -This page shows you how to manually deploy a secure multi-node CockroachDB cluster on Amazon's AWS EC2 platform, using AWS's managed load balancing service to distribute client traffic. - -If you are only testing CockroachDB, or you are not concerned with protecting network communication with TLS encryption, you can use an insecure cluster instead. Select **Insecure** above for instructions. - - -## Requirements - -{% include {{ page.version.version }}/prod-deployment/secure-requirements.md %} - -## Recommendations - -{% include {{ page.version.version }}/prod-deployment/secure-recommendations.md %} - -- All instances running CockroachDB should be members of the same Security Group. - -## Step 1. Configure your network - -CockroachDB requires TCP communication on two ports: - -- `26257` for inter-node communication (i.e., working as a cluster), for applications to connect to the load balancer, and for routing from the load balancer to nodes -- `8080` for exposing your Admin UI - -You can create these rules using [Security Groups' Inbound Rules](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html#adding-security-group-rule). - -#### Inter-node and load balancer-node communication - -| Field | Recommended Value | -|-------|-------------------| -| Type | Custom TCP Rule | -| Protocol | TCP | -| Port Range | **26257** | -| Source | The name of your security group (e.g., *sg-07ab277a*) | - -#### Admin UI - -| Field | Recommended Value | -|-------|-------------------| -| Type | Custom TCP Rule | -| Protocol | TCP | -| Port Range | **8080** | -| Source | Your network's IP ranges | - -#### Application data - -| Field | Recommended Value | -|-------|-------------------| -| Type | Custom TCP Rules | -| Protocol | TCP | -| Port Range | **26257** | -| Source | Your application's IP ranges | - -## Step 2. Create instances - -[Create an instance](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/launching-instance.html) for each node you plan to have in your cluster. If you plan to run a sample workload against the cluster, create a separate instance for that workload. - -- Run at least 3 nodes to ensure survivability. - -- Use `m` (general purpose), `c` (compute-optimized), or `i` (storage-optimized) [instances](https://aws.amazon.com/ec2/instance-types/), with SSD-backed [EBS volumes](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html) or [Instance Store volumes](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ssd-instance-store.html). For example, Cockroach Labs has used `m3.large` instances (2 vCPUs and 7.5 GiB of RAM per instance) for internal testing. - -- **Do not** use ["burstable" `t2` instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/t2-instances.html), which limit the load on a single core. - -For more details, see [Hardware Recommendations](recommended-production-settings.html#hardware) and [Cluster Topology](recommended-production-settings.html#cluster-topology). - -## Step 3. Synchronize clocks - -{% include {{ page.version.version }}/prod-deployment/synchronize-clocks.md %} - -## Step 4. Set up load balancing - -Each CockroachDB node is an equally suitable SQL gateway to your cluster, but to ensure client performance and reliability, it's important to use load balancing: - -- **Performance:** Load balancers spread client traffic across nodes. This prevents any one node from being overwhelmed by requests and improves overall cluster performance (queries per second). 
- -- **Reliability:** Load balancers decouple client health from the health of a single CockroachDB node. In cases where a node fails, the load balancer redirects client traffic to available nodes. - -AWS offers fully-managed load balancing to distribute traffic between instances. - -1. [Add AWS load balancing](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-increase-availability.html). Be sure to: - - Set forwarding rules to route TCP traffic from the load balancer's port **26257** to port **26257** on the nodes. - - Configure health checks to use HTTP port **8080** and path `/health?ready=1`. This [health endpoint](monitoring-and-alerting.html#health-ready-1) ensures that load balancers do not direct traffic to nodes that are live but not ready to receive requests. -2. Note the provisioned **IP Address** for the load balancer. You'll use this later to test load balancing and to connect your application to the cluster. - -{{site.data.alerts.callout_info}}If you would prefer to use HAProxy instead of AWS's managed load balancing, see the On-Premises tutorial for guidance.{{site.data.alerts.end}} - -## Step 5. Generate certificates - -{% include {{ page.version.version }}/prod-deployment/secure-generate-certificates.md %} - -## Step 6. Start nodes - -{% include {{ page.version.version }}/prod-deployment/secure-start-nodes.md %} - -## Step 7. Initialize the cluster - -{% include {{ page.version.version }}/prod-deployment/secure-initialize-cluster.md %} - -## Step 8. Test your cluster - -{% include {{ page.version.version }}/prod-deployment/secure-test-cluster.md %} - -## Step 9. Run a sample workload - -{% include {{ page.version.version }}/prod-deployment/secure-test-load-balancing.md %} - -## Step 10. Set up monitoring and alerting - -{% include {{ page.version.version }}/prod-deployment/monitor-cluster.md %} - -## Step 11. Scale the cluster - -{% include {{ page.version.version }}/prod-deployment/secure-scale-cluster.md %} - -## Step 12. Use the database - -{% include {{ page.version.version }}/prod-deployment/use-cluster.md %} - -## See Also - -{% include {{ page.version.version }}/prod-deployment/prod-see-also.md %} diff --git a/src/current/v2.0/deploy-cockroachdb-on-digital-ocean-insecure.md b/src/current/v2.0/deploy-cockroachdb-on-digital-ocean-insecure.md deleted file mode 100644 index 32cab2f192c..00000000000 --- a/src/current/v2.0/deploy-cockroachdb-on-digital-ocean-insecure.md +++ /dev/null @@ -1,110 +0,0 @@ ---- -title: Deploy CockroachDB on Digital Ocean (Insecure) -summary: Learn how to deploy a CockroachDB cluster on Digital Ocean. -toc: true -toc_not_nested: true -ssh-link: https://www.digitalocean.com/community/tutorials/how-to-connect-to-your-droplet-with-ssh - ---- - - - -This page shows you how to manually deploy an insecure multi-node CockroachDB cluster on Digital Ocean, using Digital Ocean's managed load balancing service to distribute client traffic. - -{{site.data.alerts.callout_danger}}If you plan to use CockroachDB in production, we strongly recommend using a secure cluster instead. Select Secure above for instructions.{{site.data.alerts.end}} - - -## Requirements - -{% include {{ page.version.version }}/prod-deployment/insecure-requirements.md %} - -## Recommendations - -{% include {{ page.version.version }}/prod-deployment/insecure-recommendations.md %} - -- If all of your CockroachDB nodes and clients will run on Droplets in a single region, consider using [private networking](https://docs.digitalocean.com/products/networking/vpc/how-to/create/). 
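If you use private networking, it can be worth confirming that your Droplets can reach each other over their private IP addresses before you start the cluster. A minimal sketch, assuming a hypothetical peer private IP of `10.132.0.2` (substitute the private addresses assigned to your own Droplets):

{% include copy-clipboard.html %}
~~~ shell
# Confirm that a peer Droplet answers on its private IP:
$ ping -c 3 10.132.0.2

# Once the cluster is running (Steps 5 and 6), also confirm that the
# CockroachDB ports answer over the private network:
$ nc -zv 10.132.0.2 26257
$ nc -zv 10.132.0.2 8080
~~~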
- -## Step 1. Create Droplets - -[Create Droplets](https://www.digitalocean.com/community/tutorials/how-to-create-your-first-digitalocean-droplet) for each node you plan to have in your cluster. If you plan to run a sample workload against the cluster, create a separate droplet for that workload. - -- Run at least 3 nodes to [ensure survivability](recommended-production-settings.html#cluster-topology). - -- Use any [droplets](https://www.digitalocean.com/pricing/) except standard droplets with only 1 GB of RAM, which is below our minimum requirement. All Digital Ocean droplets use SSD storage. - -For more details, see [Hardware Recommendations](recommended-production-settings.html#hardware) and [Cluster Topology](recommended-production-settings.html#cluster-topology). - -## Step 2. Synchronize clocks - -{% include {{ page.version.version }}/prod-deployment/synchronize-clocks.md %} - -## Step 3. Set up load balancing - -Each CockroachDB node is an equally suitable SQL gateway to your cluster, but to ensure client performance and reliability, it's important to use load balancing: - -- **Performance:** Load balancers spread client traffic across nodes. This prevents any one node from being overwhelmed by requests and improves overall cluster performance (queries per second). - -- **Reliability:** Load balancers decouple client health from the health of a single CockroachDB node. In cases where a node fails, the load balancer redirects client traffic to available nodes. - -Digital Ocean offers fully-managed load balancers to distribute traffic between Droplets. - -1. [Create a Digital Ocean Load Balancer](https://www.digitalocean.com/community/tutorials/an-introduction-to-digitalocean-load-balancers). Be sure to: - - Set forwarding rules to route TCP traffic from the load balancer's port **26257** to port **26257** on the node Droplets. - - Configure health checks to use HTTP port **8080** and path `/health?ready=1`. This [health endpoint](monitoring-and-alerting.html#health-ready-1) ensures that load balancers do not direct traffic to nodes that are live but not ready to receive requests. -2. Note the provisioned **IP Address** for the load balancer. You'll use this later to test load balancing and to connect your application to the cluster. - -{{site.data.alerts.callout_info}}If you would prefer to use HAProxy instead of Digital Ocean's managed load balancing, see the On-Premises tutorial for guidance.{{site.data.alerts.end}} - -## Step 4. Configure your network - -Set up a firewall for each of your Droplets, allowing TCP communication on the following two ports: - -- **26257** (`tcp:26257`) for inter-node communication (i.e., working as a cluster), for applications to connect to the load balancer, and for routing from the load balancer to nodes -- **8080** (`tcp:8080`) for exposing your Admin UI - -For guidance, you can use Digital Ocean's guide to configuring firewalls based on the Droplet's OS: - -- Ubuntu and Debian can use [`ufw`](https://www.digitalocean.com/community/tutorials/how-to-setup-a-firewall-with-ufw-on-an-ubuntu-and-debian-cloud-server). -- FreeBSD can use [`ipfw`](https://www.digitalocean.com/community/tutorials/recommended-steps-for-new-freebsd-10-1-servers). -- Fedora can use [`iptables`](https://www.digitalocean.com/community/tutorials/initial-setup-of-a-fedora-22-server). -- CoreOS can use [`iptables`](https://www.digitalocean.com/community/tutorials/how-to-secure-your-coreos-cluster-with-tls-ssl-and-firewall-rules). 
-- CentOS can use [`firewalld`](https://www.digitalocean.com/community/tutorials/how-to-set-up-a-firewall-using-firewalld-on-centos-7). - -## Step 5. Start nodes - -{% include {{ page.version.version }}/prod-deployment/insecure-start-nodes.md %} - -## Step 6. Initialize the cluster - -{% include {{ page.version.version }}/prod-deployment/insecure-initialize-cluster.md %} - -## Step 7. Test the cluster - -{% include {{ page.version.version }}/prod-deployment/insecure-test-cluster.md %} - -## Step 8. Run a sample workload - -{% include {{ page.version.version }}/prod-deployment/insecure-test-load-balancing.md %} - -## Step 9. Set up monitoring and alerting - -{% include {{ page.version.version }}/prod-deployment/monitor-cluster.md %} - -## Step 10. Scale the cluster - -{% include {{ page.version.version }}/prod-deployment/insecure-scale-cluster.md %} - -## Step 11. Use the cluster - -Now that your deployment is working, you can: - -1. [Implement your data model](sql-statements.html). -2. [Create users](create-and-manage-users.html) and [grant them privileges](grant.html). -3. [Connect your application](install-client-drivers.html). Be sure to connect your application to the Digital Ocean Load Balancer, not to a CockroachDB node. - -## See Also - -{% include {{ page.version.version }}/prod-deployment/prod-see-also.md %} diff --git a/src/current/v2.0/deploy-cockroachdb-on-digital-ocean.md b/src/current/v2.0/deploy-cockroachdb-on-digital-ocean.md deleted file mode 100644 index 609bbb850f3..00000000000 --- a/src/current/v2.0/deploy-cockroachdb-on-digital-ocean.md +++ /dev/null @@ -1,110 +0,0 @@ ---- -title: Deploy CockroachDB on Digital Ocean -summary: Learn how to deploy a CockroachDB cluster on Digital Ocean. -toc: true -toc_not_nested: true -ssh-link: https://www.digitalocean.com/community/tutorials/how-to-connect-to-your-droplet-with-ssh - ---- - -
      - - -
      - -This page shows you how to manually deploy a secure multi-node CockroachDB cluster on Digital Ocean, using Digital Ocean's managed load balancing service to distribute client traffic. - -If you are only testing CockroachDB, or you are not concerned with protecting network communication with TLS encryption, you can use an insecure cluster instead. Select **Insecure** above for instructions. - - -## Requirements - -{% include {{ page.version.version }}/prod-deployment/secure-requirements.md %} - -## Recommendations - -{% include {{ page.version.version }}/prod-deployment/secure-recommendations.md %} - -- If all of your CockroachDB nodes and clients will run on Droplets in a single region, consider using [private networking](https://docs.digitalocean.com/products/networking/vpc/how-to/create/). - -## Step 1. Create Droplets - -[Create Droplets](https://www.digitalocean.com/community/tutorials/how-to-create-your-first-digitalocean-droplet) for each node you plan to have in your cluster. If you plan to run a sample workload against the cluster, create a separate droplet for that workload. - -- Run at least 3 nodes to [ensure survivability](recommended-production-settings.html#cluster-topology). - -- Use any [droplets](https://www.digitalocean.com/pricing/) except standard droplets with only 1 GB of RAM, which is below our minimum requirement. All Digital Ocean droplets use SSD storage. - -For more details, see [Hardware Recommendations](recommended-production-settings.html#hardware) and [Cluster Topology](recommended-production-settings.html#cluster-topology). - -## Step 2. Synchronize clocks - -{% include {{ page.version.version }}/prod-deployment/synchronize-clocks.md %} - -## Step 3. Set up load balancing - -Each CockroachDB node is an equally suitable SQL gateway to your cluster, but to ensure client performance and reliability, it's important to use load balancing: - -- **Performance:** Load balancers spread client traffic across nodes. This prevents any one node from being overwhelmed by requests and improves overall cluster performance (queries per second). - -- **Reliability:** Load balancers decouple client health from the health of a single CockroachDB node. In cases where a node fails, the load balancer redirects client traffic to available nodes. - -Digital Ocean offers fully-managed load balancers to distribute traffic between Droplets. - -1. [Create a Digital Ocean Load Balancer](https://www.digitalocean.com/community/tutorials/an-introduction-to-digitalocean-load-balancers). Be sure to: - - Set forwarding rules to route TCP traffic from the load balancer's port **26257** to port **26257** on the node Droplets. - - Configure health checks to use HTTP port **8080** and path `/health?ready=1`. This [health endpoint](monitoring-and-alerting.html#health-ready-1) ensures that load balancers do not direct traffic to nodes that are live but not ready to receive requests. -2. Note the provisioned **IP Address** for the load balancer. You'll use this later to test load balancing and to connect your application to the cluster. - -{{site.data.alerts.callout_info}}If you would prefer to use HAProxy instead of Digital Ocean's managed load balancing, see the On-Premises tutorial for guidance.{{site.data.alerts.end}} - -## Step 4. 
Configure your network - -Set up a firewall for each of your Droplets, allowing TCP communication on the following two ports: - -- **26257** (`tcp:26257`) for inter-node communication (i.e., working as a cluster), for applications to connect to the load balancer, and for routing from the load balancer to nodes -- **8080** (`tcp:8080`) for exposing your Admin UI - -For guidance, you can use Digital Ocean's guide to configuring firewalls based on the Droplet's OS: - -- Ubuntu and Debian can use [`ufw`](https://www.digitalocean.com/community/tutorials/how-to-setup-a-firewall-with-ufw-on-an-ubuntu-and-debian-cloud-server). -- FreeBSD can use [`ipfw`](https://www.digitalocean.com/community/tutorials/recommended-steps-for-new-freebsd-10-1-servers). -- Fedora can use [`iptables`](https://www.digitalocean.com/community/tutorials/initial-setup-of-a-fedora-22-server). -- CoreOS can use [`iptables`](https://www.digitalocean.com/community/tutorials/how-to-secure-your-coreos-cluster-with-tls-ssl-and-firewall-rules). -- CentOS can use [`firewalld`](https://www.digitalocean.com/community/tutorials/how-to-set-up-a-firewall-using-firewalld-on-centos-7). - -## Step 5. Generate certificates - -{% include {{ page.version.version }}/prod-deployment/secure-generate-certificates.md %} - -## Step 6. Start nodes - -{% include {{ page.version.version }}/prod-deployment/secure-start-nodes.md %} - -## Step 7. Initialize the cluster - -{% include {{ page.version.version }}/prod-deployment/secure-initialize-cluster.md %} - -## Step 8. Test the cluster - -{% include {{ page.version.version }}/prod-deployment/secure-test-cluster.md %} - -## Step 9. Run a sample workload - -{% include {{ page.version.version }}/prod-deployment/secure-test-load-balancing.md %} - -## Step 10. Set up monitoring and alerting - -{% include {{ page.version.version }}/prod-deployment/monitor-cluster.md %} - -## Step 11. Scale the cluster - -{% include {{ page.version.version }}/prod-deployment/secure-scale-cluster.md %} - -## Step 12. Use the database - -{% include {{ page.version.version }}/prod-deployment/use-cluster.md %} - -## See Also - -{% include {{ page.version.version }}/prod-deployment/prod-see-also.md %} diff --git a/src/current/v2.0/deploy-cockroachdb-on-google-cloud-platform-insecure.md b/src/current/v2.0/deploy-cockroachdb-on-google-cloud-platform-insecure.md deleted file mode 100644 index cc990ceee42..00000000000 --- a/src/current/v2.0/deploy-cockroachdb-on-google-cloud-platform-insecure.md +++ /dev/null @@ -1,133 +0,0 @@ ---- -title: Deploy CockroachDB on Google Cloud Platform GCE (Insecure) -summary: Learn how to deploy CockroachDB on Google Cloud Platform's Compute Engine. -toc: true -toc_not_nested: true -ssh-link: https://cloud.google.com/compute/docs/instances/connecting-to-instance - ---- - - - -This page shows you how to manually deploy an insecure multi-node CockroachDB cluster on Google Cloud Platform's Compute Engine (GCE), using Google's TCP Proxy Load Balancing service to distribute client traffic. - -{{site.data.alerts.callout_danger}}If you plan to use CockroachDB in production, we strongly recommend using a secure cluster instead. Select Secure above for instructions.{{site.data.alerts.end}} - - -## Requirements - -{% include {{ page.version.version }}/prod-deployment/insecure-requirements.md %} - -## Recommendations - -{% include {{ page.version.version }}/prod-deployment/insecure-recommendations.md %} - -## Step 1. 
Configure your network - -CockroachDB requires TCP communication on two ports: - -- **26257** (`tcp:26257`) for inter-node communication (i.e., working as a cluster) -- **8080** (`tcp:8080`) for exposing your Admin UI - -Inter-node communication works by default using your GCE instances' internal IP addresses, which allow communication with other instances on CockroachDB's default port `26257`. However, to expose your Admin UI and allow traffic from the TCP proxy load balancer and health checker to your instances, you need to [create firewall rules for your project](https://cloud.google.com/compute/docs/vpc/firewalls). - -### Creating Firewall Rules - -When creating firewall rules, we recommend using Google Cloud Platform's **tag** feature, which lets you specify that you want to apply the rule only to instances that include the same tag. - -#### Admin UI - -| Field | Recommended Value | -|-------|-------------------| -| Name | **cockroachadmin** | -| Source filter | IP ranges | -| Source IP ranges | Your local network's IP ranges | -| Allowed protocols... | **tcp:8080** | -| Target tags | **cockroachdb** | - -#### Application Data - -Applications will not connect directly to your CockroachDB nodes. Instead, they'll connect to GCE's TCP Proxy Load Balancing service, which automatically routes traffic to the instances that are closest to the user. Because this service is implemented at the edge of the Google Cloud, you'll need to create a firewall rule to allow traffic from the load balancer and health checker to your instances. This is covered in [Step 4](#step-4-set-up-tcp-proxy-load-balancing). - -{{site.data.alerts.callout_danger}}When using TCP Proxy Load Balancing, you cannot use firewall rules to control access to the load balancer. If you need such control, consider using Network TCP Load Balancing instead, but note that it cannot be used across regions. You might also consider using the HAProxy load balancer (see the On-Premises tutorial for guidance).{{site.data.alerts.end}} - -## Step 2. Create instances - -[Create an instance](https://cloud.google.com/compute/docs/instances/create-start-instance) for each node you plan to have in your cluster. If you plan to run a sample workload against the cluster, create a separate instance for that workload. - -- Run at least 3 nodes to [ensure survivability](recommended-production-settings.html#cluster-topology). - -- Use `n1-standard` or `n1-highcpu` [predefined VMs](https://cloud.google.com/compute/pricing#predefined_machine_types), or [custom VMs](https://cloud.google.com/compute/pricing#custommachinetypepricing), with [Local SSDs](https://cloud.google.com/compute/docs/disks/#localssds) or [SSD persistent disks](https://cloud.google.com/compute/docs/disks/#pdspecs). For example, Cockroach Labs has used custom VMs (8 vCPUs and 16 GiB of RAM per VM) for internal testing. - -- **Do not** use `f1` or `g1` [shared-core machines](https://cloud.google.com/compute/docs/machine-types#sharedcore), which limit the load on a single core. - -- If you used a tag for your firewall rules, when you create the instance, select **Management, disk, networking, SSH keys**. Then on the **Networking** tab, in the **Network tags** field, enter **cockroachdb**. - -For more details, see [Hardware Recommendations](recommended-production-settings.html#hardware) and [Cluster Topology](recommended-production-settings.html#cluster-topology). - -## Step 3. Synchronize clocks - -{% include {{ page.version.version }}/prod-deployment/synchronize-clocks.md %} - -## Step 4. 
Set up TCP Proxy Load Balancing - -Each CockroachDB node is an equally suitable SQL gateway to your cluster, but to ensure client performance and reliability, it's important to use load balancing: - -- **Performance:** Load balancers spread client traffic across nodes. This prevents any one node from being overwhelmed by requests and improves overall cluster performance (queries per second). - -- **Reliability:** Load balancers decouple client health from the health of a single CockroachDB node. In cases where a node fails, the load balancer redirects client traffic to available nodes. - -GCE offers fully-managed [TCP Proxy Load Balancing](https://cloud.google.com/load-balancing/docs/tcp/). This service lets you use a single IP address for all users around the world, automatically routing traffic to the instances that are closest to the user. - -{{site.data.alerts.callout_danger}}When using TCP Proxy Load Balancing, you cannot use firewall rules to control access to the load balancer. If you need such control, consider using Network TCP Load Balancing instead, but note that it cannot be used across regions. You might also consider using the HAProxy load balancer (see the On-Premises tutorial for guidance).{{site.data.alerts.end}} - -To use GCE's TCP Proxy Load Balancing service: - -1. For each zone in which you're running an instance, [create a distinct instance group](https://cloud.google.com/compute/docs/instance-groups/creating-groups-of-unmanaged-instances). - - To ensure that the load balancer knows where to direct traffic, specify a port name mapping, with `tcp26257` as the **Port name** and `26257` as the **Port number**. -2. [Add the relevant instances to each instance group](https://cloud.google.com/compute/docs/instance-groups/creating-groups-of-unmanaged-instances#addinstances). -3. [Configure TCP Proxy Load Balancing](https://cloud.google.com/load-balancing/docs/tcp/setting-up-tcp#configure_load_balancer). - - During backend configuration, create a health check, setting the **Protocol** to `HTTP`, the **Port** to `8080`, and the **Request path** to `/health?ready=1`. This [health endpoint](monitoring-and-alerting.html#health-ready-1) ensures that load balancers do not direct traffic to nodes that are live but not ready to receive requests. - - If you want to maintain long-lived SQL connections that may be idle for more than tens of seconds, increase the backend timeout setting accordingly. - - During frontend configuration, reserve a static IP address and choose a port. Note this address/port combination, as you'll use it for all of your client connections. -4. [Create a firewall rule](https://cloud.google.com/load-balancing/docs/tcp/setting-up-tcp#config-hc-firewall) to allow traffic from the load balancer and health checker to your instances. This is necessary because TCP Proxy Load Balancing is implemented at the edge of the Google Cloud. - - Be sure to set **Source IP ranges** to `130.211.0.0/22` and `35.191.0.0/16` and set **Target tags** to `cockroachdb` (not to the value specified in the linked instructions). - -## Step 5. Start nodes - -{% include {{ page.version.version }}/prod-deployment/insecure-start-nodes.md %} - -## Step 6. Initialize the cluster - -{% include {{ page.version.version }}/prod-deployment/insecure-initialize-cluster.md %} - -## Step 7. Test the cluster - -{% include {{ page.version.version }}/prod-deployment/insecure-test-cluster.md %} - -## Step 8. 
Run a sample workload - -{% include {{ page.version.version }}/prod-deployment/insecure-test-load-balancing.md %} - -## Step 9. Set up monitoring and alerting - -{% include {{ page.version.version }}/prod-deployment/monitor-cluster.md %} - -## Step 10. Scale the cluster - -{% include {{ page.version.version }}/prod-deployment/insecure-scale-cluster.md %} - -## Step 11. Use the cluster - -Now that your deployment is working, you can: - -1. [Implement your data model](sql-statements.html). -2. [Create users](create-and-manage-users.html) and [grant them privileges](grant.html). -3. [Connect your application](install-client-drivers.html). Be sure to connect your application to the GCE load balancer, not to a CockroachDB node. - -## See Also - -{% include {{ page.version.version }}/prod-deployment/prod-see-also.md %} diff --git a/src/current/v2.0/deploy-cockroachdb-on-google-cloud-platform.md b/src/current/v2.0/deploy-cockroachdb-on-google-cloud-platform.md deleted file mode 100644 index 620e03d5d94..00000000000 --- a/src/current/v2.0/deploy-cockroachdb-on-google-cloud-platform.md +++ /dev/null @@ -1,133 +0,0 @@ ---- -title: Deploy CockroachDB on Google Cloud Platform GCE -summary: Learn how to deploy CockroachDB on Google Cloud Platform's Compute Engine. -toc: true -toc_not_nested: true -ssh-link: https://cloud.google.com/compute/docs/instances/connecting-to-instance - ---- - -
      - - -
      - -This page shows you how to manually deploy a secure multi-node CockroachDB cluster on Google Cloud Platform's Compute Engine (GCE), using Google's TCP Proxy Load Balancing service to distribute client traffic. - -If you are only testing CockroachDB, or you are not concerned with protecting network communication with TLS encryption, you can use an insecure cluster instead. Select **Insecure** above for instructions. - - -## Requirements - -{% include {{ page.version.version }}/prod-deployment/secure-requirements.md %} - -## Recommendations - -{% include {{ page.version.version }}/prod-deployment/secure-recommendations.md %} - -## Step 1. Configure your network - -CockroachDB requires TCP communication on two ports: - -- **26257** (`tcp:26257`) for inter-node communication (i.e., working as a cluster) -- **8080** (`tcp:8080`) for exposing your Admin UI - -Inter-node communication works by default using your GCE instances' internal IP addresses, which allow communication with other instances on CockroachDB's default port `26257`. However, to expose your Admin UI and allow traffic from the TCP proxy load balancer and health checker to your instances, you need to [create firewall rules for your project](https://cloud.google.com/compute/docs/vpc/firewalls). - -### Creating Firewall Rules - -When creating firewall rules, we recommend using Google Cloud Platform's **tag** feature, which lets you specify that you want to apply the rule only to instances that include the same tag. - -#### Admin UI - -| Field | Recommended Value | -|-------|-------------------| -| Name | **cockroachadmin** | -| Source filter | IP ranges | -| Source IP ranges | Your local network's IP ranges | -| Allowed protocols... | **tcp:8080** | -| Target tags | **cockroachdb** | - -#### Application Data - -Applications will not connect directly to your CockroachDB nodes. Instead, they'll connect to GCE's TCP Proxy Load Balancing service, which automatically routes traffic to the instances that are closest to the user. Because this service is implemented at the edge of the Google Cloud, you'll need to create a firewall rule to allow traffic from the load balancer and health checker to your instances. This is covered in [Step 4](#step-4-set-up-tcp-proxy-load-balancing). - -{{site.data.alerts.callout_danger}}When using TCP Proxy Load Balancing, you cannot use firewall rules to control access to the load balancer. If you need such control, consider using Network TCP Load Balancing instead, but note that it cannot be used across regions. You might also consider using the HAProxy load balancer (see the On-Premises tutorial for guidance).{{site.data.alerts.end}} - -## Step 2. Create instances - -[Create an instance](https://cloud.google.com/compute/docs/instances/create-start-instance) for each node you plan to have in your cluster. If you plan to run a sample workload against the cluster, create a separate instance for that workload. - -- Run at least 3 nodes to [ensure survivability](recommended-production-settings.html#cluster-topology). - -- Use `n1-standard` or `n1-highcpu` [predefined VMs](https://cloud.google.com/compute/pricing#predefined_machine_types), or [custom VMs](https://cloud.google.com/compute/pricing#custommachinetypepricing), with [Local SSDs](https://cloud.google.com/compute/docs/disks/#localssds) or [SSD persistent disks](https://cloud.google.com/compute/docs/disks/#pdspecs). For example, Cockroach Labs has used custom VMs (8 vCPUs and 16 GiB of RAM per VM) for internal testing. 
- -- **Do not** use `f1` or `g1` [shared-core machines](https://cloud.google.com/compute/docs/machine-types#sharedcore), which limit the load on a single core. - -- If you used a tag for your firewall rules, when you create the instance, select **Management, disk, networking, SSH keys**. Then on the **Networking** tab, in the **Network tags** field, enter **cockroachdb**. - -For more details, see [Hardware Recommendations](recommended-production-settings.html#hardware) and [Cluster Topology](recommended-production-settings.html#cluster-topology). - -## Step 3. Synchronize clocks - -{% include {{ page.version.version }}/prod-deployment/synchronize-clocks.md %} - -## Step 4. Set up TCP Proxy Load Balancing - -Each CockroachDB node is an equally suitable SQL gateway to your cluster, but to ensure client performance and reliability, it's important to use load balancing: - -- **Performance:** Load balancers spread client traffic across nodes. This prevents any one node from being overwhelmed by requests and improves overall cluster performance (queries per second). - -- **Reliability:** Load balancers decouple client health from the health of a single CockroachDB node. In cases where a node fails, the load balancer redirects client traffic to available nodes. - -GCE offers fully-managed [TCP Proxy Load Balancing](https://cloud.google.com/load-balancing/docs/tcp/). This service lets you use a single IP address for all users around the world, automatically routing traffic to the instances that are closest to the user. - -{{site.data.alerts.callout_danger}}When using TCP Proxy Load Balancing, you cannot use firewall rules to control access to the load balancer. If you need such control, consider using Network TCP Load Balancing instead, but note that it cannot be used across regions. You might also consider using the HAProxy load balancer (see the On-Premises tutorial for guidance).{{site.data.alerts.end}} - -To use GCE's TCP Proxy Load Balancing service: - -1. For each zone in which you're running an instance, [create a distinct instance group](https://cloud.google.com/compute/docs/instance-groups/creating-groups-of-unmanaged-instances). - - To ensure that the load balancer knows where to direct traffic, specify a port name mapping, with `tcp26257` as the **Port name** and `26257` as the **Port number**. -2. [Add the relevant instances to each instance group](https://cloud.google.com/compute/docs/instance-groups/creating-groups-of-unmanaged-instances#addinstances). -3. [Configure TCP Proxy Load Balancing](https://cloud.google.com/load-balancing/docs/tcp/setting-up-tcp#configure_load_balancer). - - During backend configuration, create a health check, setting the **Protocol** to `HTTPS`, the **Port** to `8080`, and the **Request path** to `/health?ready=1`. This [health endpoint](monitoring-and-alerting.html#health-ready-1) ensures that load balancers do not direct traffic to nodes that are live but not ready to receive requests. - - If you want to maintain long-lived SQL connections that may be idle for more than tens of seconds, increase the backend timeout setting accordingly. - - During frontend configuration, reserve a static IP address and note the IP address and the port you select. You'll use this address and port for all client connections. -4. [Create a firewall rule](https://cloud.google.com/load-balancing/docs/tcp/setting-up-tcp#config-hc-firewall) to allow traffic from the load balancer and health checker to your instances. 
This is necessary because TCP Proxy Load Balancing is implemented at the edge of the Google Cloud. - - Be sure to set **Source IP ranges** to `130.211.0.0/22` and `35.191.0.0/16` and set **Target tags** to `cockroachdb` (not to the value specified in the linked instructions). - -## Step 5. Generate certificates - -{% include {{ page.version.version }}/prod-deployment/secure-generate-certificates.md %} - -## Step 6. Start nodes - -{% include {{ page.version.version }}/prod-deployment/secure-start-nodes.md %} - -## Step 7. Initialize the cluster - -{% include {{ page.version.version }}/prod-deployment/secure-initialize-cluster.md %} - -## Step 8. Test the cluster - -{% include {{ page.version.version }}/prod-deployment/secure-test-cluster.md %} - -## Step 9. Run a sample workload - -{% include {{ page.version.version }}/prod-deployment/secure-test-load-balancing.md %} - -## Step 10. Set up monitoring and alerting - -{% include {{ page.version.version }}/prod-deployment/monitor-cluster.md %} - -## Step 11. Scale the cluster - -{% include {{ page.version.version }}/prod-deployment/secure-scale-cluster.md %} - -## Step 12. Use the database - -{% include {{ page.version.version }}/prod-deployment/use-cluster.md %} - -## See Also - -{% include {{ page.version.version }}/prod-deployment/prod-see-also.md %} diff --git a/src/current/v2.0/deploy-cockroachdb-on-microsoft-azure-insecure.md b/src/current/v2.0/deploy-cockroachdb-on-microsoft-azure-insecure.md deleted file mode 100644 index 1c10ca35740..00000000000 --- a/src/current/v2.0/deploy-cockroachdb-on-microsoft-azure-insecure.md +++ /dev/null @@ -1,143 +0,0 @@ ---- -title: Deploy CockroachDB on Microsoft Azure (Insecure) -summary: Learn how to deploy CockroachDB on Microsoft Azure. -toc: true -toc_not_nested: true -ssh-link: https://docs.microsoft.com/en-us/azure/virtual-machines/linux/mac-create-ssh-keys - ---- - - - -This page shows you how to manually deploy an insecure multi-node CockroachDB cluster on Microsoft Azure, using Azure's managed load balancing service to distribute client traffic. - -{{site.data.alerts.callout_danger}}If you plan to use CockroachDB in production, we strongly recommend using a secure cluster instead. Select Secure above for instructions.{{site.data.alerts.end}} - - -## Requirements - -{% include {{ page.version.version }}/prod-deployment/insecure-requirements.md %} - -## Recommendations - -{% include {{ page.version.version }}/prod-deployment/insecure-recommendations.md %} - -## Step 1. Configure your network - -CockroachDB requires TCP communication on two ports: - -- **26257** (`tcp:26257`) for inter-node communication (i.e., working as a cluster), for applications to connect to the load balancer, and for routing from the load balancer to nodes -- **8080** (`tcp:8080`) for exposing your Admin UI - -To enable this in Azure, you must create a Resource Group, Virtual Network, and Network Security Group. - -1. [Create a Resource Group](https://azure.microsoft.com/en-us/updates/create-empty-resource-groups/). - -2. [Create a Virtual Network](https://docs.microsoft.com/en-us/azure/virtual-network/virtual-networks-create-vnet-arm-pportal) that uses your **Resource Group**. - -3. 
[Create a Network Security Group](https://docs.microsoft.com/en-us/azure/virtual-network/virtual-networks-create-nsg-arm-pportal) that uses your **Resource Group**, and then add the following **inbound** rules to it: - - **Admin UI support**: - - | Field | Recommended Value | - |-------|-------------------| - | Name | **cockroachadmin** | - | Source | **IP Addresses** | - | Source IP addresses/CIDR ranges | Your local network’s IP ranges | - | Source port ranges | * | - | Destination | **Any** | - | Destination port range | **8080** | - | Protocol | **TCP** | - | Action | **Allow** | - | Priority | Any value > 1000 | - - **Application support**: - - {{site.data.alerts.callout_success}}If your application is also hosted on the same Azure Virtual Network, you will not need to create a firewall rule for your application to communicate with your load balancer.{{site.data.alerts.end}} - - | Field | Recommended Value | - |-------|-------------------| - | Name | **cockroachapp** | - | Source | **IP Addresses** | - | Source IP addresses/CIDR ranges | Your local network’s IP ranges | - | Source port ranges | * | - | Destination | **Any** | - | Destination port range | **26257** | - | Protocol | **TCP** | - | Action | **Allow** | - | Priority | Any value > 1000 | - -## Step 2. Create VMs - -[Create Linux VMs](https://docs.microsoft.com/en-us/azure/virtual-machine-scale-sets/quick-create-portal) for each node you plan to have in your cluster. If you plan to run a sample workload against the cluster, create a separate VM for that workload. - -- Run at least 3 nodes to [ensure survivability](recommended-production-settings.html#cluster-topology). - -- Use storage-optimized [Ls-series](https://docs.microsoft.com/en-us/azure/virtual-machines/linux/sizes-storage) VMs with [Premium Storage](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/premium-storage) or local SSD storage with a Linux filesystem such as `ext4` (not the Windows `ntfs` filesystem). For example, Cockroach Labs has used `Standard_L4s` VMs (4 vCPUs and 32 GiB of RAM per VM) for internal testing. - - - If you choose local SSD storage, on reboot, the VM can come back with the `ntfs` filesystem. Be sure your automation monitors for this and reformats the disk to the Linux filesystem you chose initially. - -- **Do not** use ["burstable" B-series](https://docs.microsoft.com/en-us/azure/virtual-machines/linux/b-series-burstable) VMs, which limit the load on a single core. Also, Cockroach Labs has experienced data corruption issues on A-series VMs and irregular disk performance on D-series VMs, so we recommend avoiding those as well. - -- When creating the VMs, make sure to select the **Resource Group**, **Virtual Network**, and **Network Security Group** you created. - -For more details, see [Hardware Recommendations](recommended-production-settings.html#hardware) and [Cluster Topology](recommended-production-settings.html#cluster-topology). - -## Step 3. Synchronize clocks - -{% include {{ page.version.version }}/prod-deployment/synchronize-clocks.md %} - -## Step 4. Set up load balancing - -Each CockroachDB node is an equally suitable SQL gateway to your cluster, but to ensure client performance and reliability, it's important to use load balancing: - -- **Performance:** Load balancers spread client traffic across nodes. This prevents any one node from being overwhelmed by requests and improves overall cluster performance (queries per second). 
- -- **Reliability:** Load balancers decouple client health from the health of a single CockroachDB node. In cases where a node fails, the load balancer redirects client traffic to available nodes. - -Microsoft Azure offers fully-managed load balancing to distribute traffic between instances. - -1. [Add Azure load balancing](https://docs.microsoft.com/en-us/azure/load-balancer/load-balancer-overview). Be sure to: - - Set forwarding rules to route TCP traffic from the load balancer's port **26257** to port **26257** on the nodes. - - Configure health checks to use HTTP port **8080** and path `/health?ready=1`. This [health endpoint](monitoring-and-alerting.html#health-ready-1) ensures that load balancers do not direct traffic to nodes that are live but not ready to receive requests. - -2. Note the provisioned **IP Address** for the load balancer. You'll use this later to test load balancing and to connect your application to the cluster. - -{{site.data.alerts.callout_info}}If you would prefer to use HAProxy instead of Azure's managed load balancing, see the On-Premises tutorial for guidance.{{site.data.alerts.end}} - -## Step 5. Start nodes - -{% include {{ page.version.version }}/prod-deployment/insecure-start-nodes.md %} - -## Step 6. Initialize the cluster - -{% include {{ page.version.version }}/prod-deployment/insecure-initialize-cluster.md %} - -## Step 7. Test the cluster - -{% include {{ page.version.version }}/prod-deployment/insecure-test-cluster.md %} - -## Step 8. Run a sample workload - -{% include {{ page.version.version }}/prod-deployment/insecure-test-load-balancing.md %} - -## Step 9. Set up monitoring and alerting - -{% include {{ page.version.version }}/prod-deployment/monitor-cluster.md %} - -## Step 10. Scale the cluster - -{% include {{ page.version.version }}/prod-deployment/insecure-scale-cluster.md %} - -## Step 11. Use the cluster - -Now that your deployment is working, you can: - -1. [Implement your data model](sql-statements.html). -2. [Create users](create-and-manage-users.html) and [grant them privileges](grant.html). -3. [Connect your application](install-client-drivers.html). Be sure to connect your application to the Azure load balancer, not to a CockroachDB node. - -## See Also - -{% include {{ page.version.version }}/prod-deployment/prod-see-also.md %} diff --git a/src/current/v2.0/deploy-cockroachdb-on-microsoft-azure.md b/src/current/v2.0/deploy-cockroachdb-on-microsoft-azure.md deleted file mode 100644 index 284c52aebaa..00000000000 --- a/src/current/v2.0/deploy-cockroachdb-on-microsoft-azure.md +++ /dev/null @@ -1,141 +0,0 @@ ---- -title: Deploy CockroachDB on Microsoft Azure -summary: Learn how to deploy CockroachDB on Microsoft Azure. -toc: true -toc_not_nested: true -ssh-link: https://docs.microsoft.com/en-us/azure/virtual-machines/linux/mac-create-ssh-keys - ---- - -
      - - -
      - -This page shows you how to manually deploy a secure multi-node CockroachDB cluster on Microsoft Azure, using Azure's managed load balancing service to distribute client traffic. - -If you are only testing CockroachDB, or you are not concerned with protecting network communication with TLS encryption, you can use an insecure cluster instead. Select **Insecure** above for instructions. - - -## Requirements - -{% include {{ page.version.version }}/prod-deployment/secure-requirements.md %} - -## Recommendations - -{% include {{ page.version.version }}/prod-deployment/secure-recommendations.md %} - -## Step 1. Configure your network - -CockroachDB requires TCP communication on two ports: - -- **26257** (`tcp:26257`) for inter-node communication (i.e., working as a cluster), for applications to connect to the load balancer, and for routing from the load balancer to nodes -- **8080** (`tcp:8080`) for exposing your Admin UI - -To enable this in Azure, you must create a Resource Group, Virtual Network, and Network Security Group. - -1. [Create a Resource Group](https://azure.microsoft.com/en-us/updates/create-empty-resource-groups/). -2. [Create a Virtual Network](https://docs.microsoft.com/en-us/azure/virtual-network/virtual-networks-create-vnet-arm-pportal) that uses your **Resource Group**. -3. [Create a Network Security Group](https://docs.microsoft.com/en-us/azure/virtual-network/virtual-networks-create-nsg-arm-pportal) that uses your **Resource Group**, and then add the following **inbound** rules to it: - - **Admin UI support**: - - | Field | Recommended Value | - |-------|-------------------| - | Name | **cockroachadmin** | - | Source | **IP Addresses** | - | Source IP addresses/CIDR ranges | Your local network’s IP ranges | - | Source port ranges | * | - | Destination | **Any** | - | Destination port range | **8080** | - | Protocol | **TCP** | - | Action | **Allow** | - | Priority | Any value > 1000 | - - **Application support**: - - {{site.data.alerts.callout_success}}If your application is also hosted on the same Azure Virtual Network, you will not need to create a firewall rule for your application to communicate with your load balancer.{{site.data.alerts.end}} - - | Field | Recommended Value | - |-------|-------------------| - | Name | **cockroachapp** | - | Source | **IP Addresses** | - | Source IP addresses/CIDR ranges | Your local network’s IP ranges | - | Source port ranges | * | - | Destination | **Any** | - | Destination port range | **26257** | - | Protocol | **TCP** | - | Action | **Allow** | - | Priority | Any value > 1000 | - -## Step 2. Create VMs - -[Create Linux VMs](https://docs.microsoft.com/en-us/azure/virtual-machine-scale-sets/quick-create-portal) for each node you plan to have in your cluster. If you plan to run a sample workload against the cluster, create a separate VM for that workload. - -- Run at least 3 nodes to [ensure survivability](recommended-production-settings.html#cluster-topology). - -- Use storage-optimized [Ls-series](https://docs.microsoft.com/en-us/azure/virtual-machines/linux/sizes-storage) VMs with [Premium Storage](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/premium-storage) or local SSD storage with a Linux filesystem such as `ext4` (not the Windows `ntfs` filesystem). For example, Cockroach Labs has used `Standard_L4s` VMs (4 vCPUs and 32 GiB of RAM per VM) for internal testing. - - - If you choose local SSD storage, on reboot, the VM can come back with the `ntfs` filesystem. 
Be sure your automation monitors for this and reformats the disk to the Linux filesystem you chose initially. - -- **Do not** use ["burstable" B-series](https://docs.microsoft.com/en-us/azure/virtual-machines/linux/b-series-burstable) VMs, which limit the load on a single core. Also, Cockroach Labs has experienced data corruption issues on A-series VMs and irregular disk performance on D-series VMs, so we recommend avoiding those as well. - -- When creating the VMs, make sure to select the **Resource Group**, **Virtual Network**, and **Network Security Group** you created. - -For more details, see [Hardware Recommendations](recommended-production-settings.html#hardware) and [Cluster Topology](recommended-production-settings.html#cluster-topology). - -## Step 3. Synchronize clocks - -{% include {{ page.version.version }}/prod-deployment/synchronize-clocks.md %} - -## Step 4. Set up load balancing - -Each CockroachDB node is an equally suitable SQL gateway to your cluster, but to ensure client performance and reliability, it's important to use load balancing: - -- **Performance:** Load balancers spread client traffic across nodes. This prevents any one node from being overwhelmed by requests and improves overall cluster performance (queries per second). - -- **Reliability:** Load balancers decouple client health from the health of a single CockroachDB node. In cases where a node fails, the load balancer redirects client traffic to available nodes. - -Microsoft Azure offers fully-managed load balancing to distribute traffic between instances. - -1. [Add Azure load balancing](https://docs.microsoft.com/en-us/azure/load-balancer/load-balancer-overview). Be sure to: - - Set forwarding rules to route TCP traffic from the load balancer's port **26257** to port **26257** on the nodes. - - Configure health checks to use HTTP port **8080** and path `/health?ready=1`. This [health endpoint](monitoring-and-alerting.html#health-ready-1) ensures that load balancers do not direct traffic to nodes that are live but not ready to receive requests. - -2. Note the provisioned **IP Address** for the load balancer. You'll use this later to test load balancing and to connect your application to the cluster. - -{{site.data.alerts.callout_info}}If you would prefer to use HAProxy instead of Azure's managed load balancing, see the On-Premises tutorial for guidance.{{site.data.alerts.end}} - -## Step 5. Generate certificates - -{% include {{ page.version.version }}/prod-deployment/secure-generate-certificates.md %} - -## Step 6. Start nodes - -{% include {{ page.version.version }}/prod-deployment/secure-start-nodes.md %} - -## Step 7. Initialize the cluster - -{% include {{ page.version.version }}/prod-deployment/secure-initialize-cluster.md %} - -## Step 8. Test the cluster - -{% include {{ page.version.version }}/prod-deployment/secure-test-cluster.md %} - -## Step 9. Run a sample workload - -{% include {{ page.version.version }}/prod-deployment/secure-test-load-balancing.md %} - -## Step 10. Set up monitoring and alerting - -{% include {{ page.version.version }}/prod-deployment/monitor-cluster.md %} - -## Step 11. Scale the cluster - -{% include {{ page.version.version }}/prod-deployment/secure-scale-cluster.md %} - -## Step 12. 
Use the database - -{% include {{ page.version.version }}/prod-deployment/use-cluster.md %} - -## See Also - -{% include {{ page.version.version }}/prod-deployment/prod-see-also.md %} diff --git a/src/current/v2.0/deploy-cockroachdb-on-premises-insecure.md b/src/current/v2.0/deploy-cockroachdb-on-premises-insecure.md deleted file mode 100644 index 26184e6d022..00000000000 --- a/src/current/v2.0/deploy-cockroachdb-on-premises-insecure.md +++ /dev/null @@ -1,153 +0,0 @@ ---- -title: Deploy CockroachDB On-Premises (Insecure) -summary: Learn how to manually deploy an insecure, multi-node CockroachDB cluster on multiple machines. -toc: true -ssh-link: https://www.digitalocean.com/community/tutorials/how-to-set-up-ssh-keys--2 ---- - - - -This tutorial shows you how to manually deploy an insecure multi-node CockroachDB cluster on multiple machines, using [HAProxy](http://www.haproxy.org/) load balancers to distribute client traffic. - -{{site.data.alerts.callout_danger}}If you plan to use CockroachDB in production, we strongly recommend using a secure cluster instead. Select Secure above for instructions.{{site.data.alerts.end}} - - -## Requirements - -{% include {{ page.version.version }}/prod-deployment/insecure-requirements.md %} - -## Recommendations - -{% include {{ page.version.version }}/prod-deployment/insecure-recommendations.md %} - -## Step 1. Synchronize clocks - -{% include {{ page.version.version }}/prod-deployment/synchronize-clocks.md %} - -## Step 2. Start nodes - -{% include {{ page.version.version }}/prod-deployment/insecure-start-nodes.md %} - -## Step 3. Initialize the cluster - -{% include {{ page.version.version }}/prod-deployment/insecure-initialize-cluster.md %} - -## Step 4. Test the cluster - -{% include {{ page.version.version }}/prod-deployment/insecure-test-cluster.md %} - -## Step 5. Set up HAProxy load balancers - -Each CockroachDB node is an equally suitable SQL gateway to your cluster, but to ensure client performance and reliability, it's important to use load balancing: - -- **Performance:** Load balancers spread client traffic across nodes. This prevents any one node from being overwhelmed by requests and improves overall cluster performance (queries per second). - -- **Reliability:** Load balancers decouple client health from the health of a single CockroachDB node. In cases where a node fails, the load balancer redirects client traffic to available nodes. - {{site.data.alerts.callout_success}}With a single load balancer, client connections are resilient to node failure, but the load balancer itself is a point of failure. It's therefore best to make load balancing resilient as well by using multiple load balancing instances, with a mechanism like floating IPs or DNS to select load balancers for clients.{{site.data.alerts.end}} - -[HAProxy](http://www.haproxy.org/) is one of the most popular open-source TCP load balancers, and CockroachDB includes a built-in command for generating a configuration file that is preset to work with your running cluster, so we feature that tool here. - -1. SSH to the machine where you want to run HAProxy. - -2. Install HAProxy: - - {% include copy-clipboard.html %} - ~~~ shell - $ apt-get install haproxy - ~~~ - -3. 
Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, and extract the binary: - - {% include copy-clipboard.html %} - ~~~ shell - $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \ - | tar -xz - ~~~ - -4. Copy the binary into the `PATH`: - - {% include copy-clipboard.html %} - ~~~ shell - $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/ - ~~~ - - If you get a permissions error, prefix the command with `sudo`. - -5. Run the [`cockroach gen haproxy`](generate-cockroachdb-resources.html) command, specifying the address of any CockroachDB node: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach gen haproxy --insecure \ - --host=
      \ - --port=26257 - ~~~ - - By default, the generated configuration file is called `haproxy.cfg` and looks as follows, with the `server` addresses pre-populated correctly: - - ~~~ - global - maxconn 4096 - - defaults - mode tcp - # Timeout values should be configured for your specific use. - # See: https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#4-timeout%20connect - timeout connect 10s - timeout client 1m - timeout server 1m - # TCP keep-alive on client side. Server already enables them. - option clitcpka - - listen psql - bind :26257 - mode tcp - balance roundrobin - option httpchk GET /health?ready=1 - server cockroach1 :26257 check port 8080 - server cockroach2 :26257 check port 8080 - server cockroach3 :26257 check port 8080 - ~~~ - - The file is preset with the minimal [configurations](http://cbonte.github.io/haproxy-dconv/1.7/configuration.html) needed to work with your running cluster: - - Field | Description - ------|------------ - `timeout connect`
      `timeout client`
      `timeout server` | Timeout values that should be suitable for most deployments. - `bind` | The port that HAProxy listens on. This is the port clients will connect to and thus needs to be allowed by your network configuration.

      This tutorial assumes HAProxy is running on a separate machine from CockroachDB nodes. If you run HAProxy on the same machine as a node (not recommended), you'll need to change this port, as `26257` is likely already being used by the CockroachDB node. - `balance` | The balancing algorithm. This is set to `roundrobin` to ensure that connections get rotated amongst nodes (connection 1 on node 1, connection 2 on node 2, etc.). Check the [HAProxy Configuration Manual](http://cbonte.github.io/haproxy-dconv/1.7/configuration.html#4-balance) for details about this and other balancing algorithms. - `option httpchk` | The HTTP endpoint that HAProxy uses to check node health. [`/health?ready=1`](monitoring-and-alerting.html#health-ready-1) ensures that HAProxy doesn't direct traffic to nodes that are live but not ready to receive requests. - `server` | For each node in the cluster, this field specifies the interface that the node listens on (i.e., the address passed in the `--host` flag on node startup) as well as the port to use for HTTP health checks. - - {{site.data.alerts.callout_info}}For full details on these and other configuration settings, see the HAProxy Configuration Manual.{{site.data.alerts.end}} - -6. Start HAProxy, with the `-f` flag pointing to the `haproxy.cfg` file: - - {% include copy-clipboard.html %} - ~~~ shell - $ haproxy -f haproxy.cfg - ~~~ - -7. Repeat these steps for each additional instance of HAProxy you want to run. - -## Step 6. Run a sample workload - -{% include {{ page.version.version }}/prod-deployment/insecure-test-load-balancing.md %} - -## Step 7. Set up monitoring and alerting - -{% include {{ page.version.version }}/prod-deployment/monitor-cluster.md %} - -## Step 8. Scale the cluster - -{% include {{ page.version.version }}/prod-deployment/insecure-scale-cluster.md %} - -## Step 9. Use the cluster - -{% include {{ page.version.version }}/prod-deployment/use-cluster.md %} - -## See Also - -{% include {{ page.version.version }}/prod-deployment/prod-see-also.md %} diff --git a/src/current/v2.0/deploy-cockroachdb-on-premises.md b/src/current/v2.0/deploy-cockroachdb-on-premises.md deleted file mode 100644 index 65e6166831e..00000000000 --- a/src/current/v2.0/deploy-cockroachdb-on-premises.md +++ /dev/null @@ -1,149 +0,0 @@ ---- -title: Deploy CockroachDB On-Premises -summary: Learn how to manually deploy a secure, multi-node CockroachDB cluster on multiple machines. -toc: true -ssh-link: https://www.digitalocean.com/community/tutorials/how-to-set-up-ssh-keys--2 - ---- - - - -This tutorial shows you how to manually deploy a secure multi-node CockroachDB cluster on multiple machines, using [HAProxy](http://www.haproxy.org/) load balancers to distribute client traffic. - -If you are only testing CockroachDB, or you are not concerned with protecting network communication with TLS encryption, you can use an insecure cluster instead. Select **Insecure** above for instructions. - - -## Requirements - -{% include {{ page.version.version }}/prod-deployment/secure-requirements.md %} - -## Recommendations - -{% include {{ page.version.version }}/prod-deployment/secure-recommendations.md %} - -## Step 1. Synchronize clocks - -{% include {{ page.version.version }}/prod-deployment/synchronize-clocks.md %} - -## Step 2. Generate certificates - -{% include {{ page.version.version }}/prod-deployment/secure-generate-certificates.md %} - -## Step 3. Start nodes - -{% include {{ page.version.version }}/prod-deployment/secure-start-nodes.md %} - -## Step 4. 
Initialize the cluster - -{% include {{ page.version.version }}/prod-deployment/secure-initialize-cluster.md %} - -## Step 5. Test the cluster - -{% include {{ page.version.version }}/prod-deployment/secure-test-cluster.md %} - -## Step 6. Set up HAProxy load balancers - -Each CockroachDB node is an equally suitable SQL gateway to your cluster, but to ensure client performance and reliability, it's important to use load balancing: - -- **Performance:** Load balancers spread client traffic across nodes. This prevents any one node from being overwhelmed by requests and improves overall cluster performance (queries per second). - -- **Reliability:** Load balancers decouple client health from the health of a single CockroachDB node. In cases where a node fails, the load balancer redirects client traffic to available nodes. - {{site.data.alerts.callout_success}}With a single load balancer, client connections are resilient to node failure, but the load balancer itself is a point of failure. It's therefore best to make load balancing resilient as well by using multiple load balancing instances, with a mechanism like floating IPs or DNS to select load balancers for clients.{{site.data.alerts.end}} - -[HAProxy](http://www.haproxy.org/) is one of the most popular open-source TCP load balancers, and CockroachDB includes a built-in command for generating a configuration file that is preset to work with your running cluster, so we feature that tool here. - -1. On your local machine, run the [`cockroach gen haproxy`](generate-cockroachdb-resources.html) command with the `--host` flag set to the address of any node and security flags pointing to the CA cert and the client cert and key: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach gen haproxy \ - --certs-dir=certs \ - --host=
      \ - --port=26257 - ~~~ - - By default, the generated configuration file is called `haproxy.cfg` and looks as follows, with the `server` addresses pre-populated correctly: - - ~~~ - global - maxconn 4096 - - defaults - mode tcp - # Timeout values should be configured for your specific use. - # See: https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#4-timeout%20connect - timeout connect 10s - timeout client 1m - timeout server 1m - # TCP keep-alive on client side. Server already enables them. - option clitcpka - - listen psql - bind :26257 - mode tcp - balance roundrobin - option httpchk GET /health?ready=1 - server cockroach1 :26257 check port 8080 - server cockroach2 :26257 check port 8080 - server cockroach3 :26257 check port 8080 - ~~~ - - The file is preset with the minimal [configurations](http://cbonte.github.io/haproxy-dconv/1.7/configuration.html) needed to work with your running cluster: - - Field | Description - ------|------------ - `timeout connect`
`timeout client`<br>`timeout server` | Timeout values that should be suitable for most deployments. - `bind` | The port that HAProxy listens on. This is the port clients will connect to and thus needs to be allowed by your network configuration.<br><br>
      This tutorial assumes HAProxy is running on a separate machine from CockroachDB nodes. If you run HAProxy on the same machine as a node (not recommended), you'll need to change this port, as `26257` is likely already being used by the CockroachDB node. - `balance` | The balancing algorithm. This is set to `roundrobin` to ensure that connections get rotated amongst nodes (connection 1 on node 1, connection 2 on node 2, etc.). Check the [HAProxy Configuration Manual](http://cbonte.github.io/haproxy-dconv/1.7/configuration.html#4-balance) for details about this and other balancing algorithms. - `option httpchk` | The HTTP endpoint that HAProxy uses to check node health. [`/health?ready=1`](monitoring-and-alerting.html#health-ready-1) ensures that HAProxy doesn't direct traffic to nodes that are live but not ready to receive requests. - `server` | For each node in the cluster, this field specifies the interface that the node listens on (i.e., the address passed in the `--host` flag on node startup) as well as the port to use for HTTP health checks. - - {{site.data.alerts.callout_info}}For full details on these and other configuration settings, see the HAProxy Configuration Manual.{{site.data.alerts.end}} - -2. Upload the `haproxy.cfg` file to the machine where you want to run HAProxy: - - {% include copy-clipboard.html %} - ~~~ shell - $ scp haproxy.cfg @:~/ - ~~~ - -3. SSH to the machine where you want to run HAProxy. - -4. Install HAProxy: - - {% include copy-clipboard.html %} - ~~~ shell - $ apt-get install haproxy - ~~~ - -5. Start HAProxy, with the `-f` flag pointing to the `haproxy.cfg` file: - - {% include copy-clipboard.html %} - ~~~ shell - $ haproxy -f haproxy.cfg - ~~~ - -6. Repeat these steps for each additional instance of HAProxy you want to run. - -## Step 7. Run a sample workload - -{% include {{ page.version.version }}/prod-deployment/secure-test-load-balancing.md %} - -## Step 8. Set up monitoring and alerting - -{% include {{ page.version.version }}/prod-deployment/monitor-cluster.md %} - -## Step 9. Scale the cluster - -{% include {{ page.version.version }}/prod-deployment/secure-scale-cluster.md %} - -## Step 10. Use the cluster - -{% include {{ page.version.version }}/prod-deployment/use-cluster.md %} - -## See Also - -{% include {{ page.version.version }}/prod-deployment/prod-see-also.md %} diff --git a/src/current/v2.0/diagnostics-reporting.md b/src/current/v2.0/diagnostics-reporting.md deleted file mode 100644 index 031f1ffd3f0..00000000000 --- a/src/current/v2.0/diagnostics-reporting.md +++ /dev/null @@ -1,322 +0,0 @@ ---- -title: Diagnostics Reporting -summary: Learn about the diagnostic details that get shared with CockroachDB and how to opt out of sharing. -toc: true ---- - -By default, the Admin UI and each node of a CockroachDB cluster share anonymous usage details with Cockroach Labs. These details, which are completely scrubbed of identifiable information, greatly help us understand and improve how the system behaves in real-world scenarios. - -This page explains the details that get shared and how to opt out of sharing. 
- -{{site.data.alerts.callout_success}}For insights into your cluster's performance and health, use the built-in Admin UI or a third-party monitoring tool like Prometheus.{{site.data.alerts.end}} - - -## What Gets Shared - -When diagnostics reporting is on, each node of a CockroachDB cluster shares anonymized storage details, SQL table structure details, and SQL query statistics with Cockroach Labs on an hourly basis, as well as crash reports as they occur. If the Admin UI is accessed, the anonymized user information and page views are shared. Please note that the details that get shared may change over time, but as that happens, we will update this page and announce the changes in release notes. - -### Storage Details - -Each node of a CockroachDB cluster shares the following storage details on an hourly basis: - -Detail | Description --------|------------ -Node ID | The internal ID of the node. -Store ID | The internal ID of each store on the node. -Bytes | The amount of live data used by applications and the CockroachDB system on the node and per store. This excludes historical and deleted data. -Range Count | The number of ranges on the node and per store. -Key Count | The number of keys stored on the node and per store. - -#### Example - -This JSON example shows what storage details look like when sent to Cockroach Labs, in this case for a node with two stores. - -~~~ json -{ - "node":{ - "node_id":1, - "bytes":64828, - "key_count":138, - "range_count":12 - }, - "stores":[ - { - "node_id":1, - "store_id":1, - "bytes":64828, - "key_count":138, - "range_count":12 - }, - { - "node_id":1, - "store_id":2, - "bytes":0, - "key_count":0, - "range_count":0 - } - ] -} -~~~ - -### SQL Table Structure Details - -Each node of a CockroachDB cluster shares the following details about the structure of each table stored on the node on an hourly basis: - -{{site.data.alerts.callout_info}}No actual table data or table/column names are shared, just metadata about the structure of tables. All names and other string values are scrubbed and replaced with underscores.{{site.data.alerts.end}} - -Detail | Description --------|------------ -Table | Metadata about each table, such as its internal ID, when it was last modified, and how many times it has been renamed. Table names are replaced with underscores. -Column | Metadata about each column in a table, such as its internal ID and type. Column names are replaced with underscores. -Column Families | Metadata about [column families](column-families.html) in a table, such as its internal ID and the columns included in the family. Family and column names are replaced with underscores. -Indexes | Metadata about the primary index and any secondary indexes on the table, such as the internal ID of an index and the columns covered by an index. All index, column, and other strings are replaced with underscores. -Privileges | Metadata about user [privileges](privileges.html) on the table, such as the number of privileges granted to each user. Usernames are replaced with underscores. -Checks | Metadata about any [check constraints](check.html) on the table. Check constraint names and expressions are replaced with underscores. - -#### Example - -This JSON example shows an excerpt of what table structure details look like when sent to Cockroach Labs, in this case for a node with just one table. Note that all names and other strings have been scrubbed and replaced with underscores. 
- -~~~ json -{ - "schema":[ - { - "name":"_", - "id":51, - "parent_id":50, - "version":1, - "up_version":false, - "modification_time":{ - "wall_time":0, - "logical":0 - }, - "columns":[ - { - "name":"_", - "id":1, - "type":{ - "kind":1, - "width":0, - "precision":0 - }, - "nullable":true, - "default_expr":"_", - "hidden":false - }, - ... - ], - ... - } - ] -} -~~~ - -### SQL Query Statistics - -Each node of a CockroachDB cluster shares the following statistics about the SQL queries it has executed on an hourly basis: - -{{site.data.alerts.callout_info}}No query results are shared, just the queries themselves, with all names and other strings scrubbed and replaced with underscores, and statistics about the queries.{{site.data.alerts.end}} - -Detail | Description --------|------------ -Query | The query executed. Names and other strings are replaced with underscores. -Counts | The number of times the query was executed, the number of times the query was committed on the first attempt (without retries), and the maximum observed number of times the query was retried automatically. -Last Error | The last error the query encountered. -Rows | The number of rows returned or observed. -Latencies | The amount of time involved in various aspects of the query, for example, the time to parse the query, the time to plan the query, and the time to run the query and fetch/compute results. - -#### Example - -This JSON example shows an excerpt of what query statistics look like when sent to Cockroach Labs. Note that all names and other strings have been scrubbed from the queries and replaced with underscores. - -~~~ json -{ - "sqlstats": { - "-3750763034362895579": { - "CREATE DATABASE _": { - "count": 1, - "first_attempt_count": 1, - "max_retries": 0, - "last_err": "", - "num_rows": { - "mean": 0, - "squared_diffs": 0 - }, - "parse_lat": { - "mean": 0.00010897, - "squared_diffs": 0 - }, - "plan_lat": { - "mean": 0.000011004, - "squared_diffs": 0 - }, - "run_lat": { - "mean": 0.002049073, - "squared_diffs": 0 - }, - "service_lat": { - "mean": 0.00220478, - "squared_diffs": 0 - }, - "overhead_lat": { - "mean": 0.0000357329999999996, - "squared_diffs": 0 - } - }, - "INSERT INTO _ VALUES (_)": { - "count": 10, - "first_attempt_count": 10, - "max_retries": 0, - "last_err": "", - "num_rows": { - "mean": 2, - "squared_diffs": 0 - }, - "parse_lat": { - "mean": 0.000021831200000000002, - "squared_diffs": 5.024879776000002e-10 - }, - "plan_lat": { - "mean": 0.00007221249999999999, - "squared_diffs": 7.744142312499998e-9 - }, - "run_lat": { - "mean": 0.0003641647, - "squared_diffs": 1.0141981141410002e-7 - }, - "service_lat": { - "mean": 0.00048527110000000004, - "squared_diffs": 2.195025173849e-7 - }, - "overhead_lat": { - "mean": 0.00002706270000000002, - "squared_diffs": 2.347266118100001e-9 - } - }, - ... - } - } -} -~~~ - -### Admin UI Details - -CockroachDB uses the Identity and Page methods of [Segment](https://segment.com/)'s analytics.js library to collect anonymized data about Admin UI usage. - -#### Identity event - -The Admin UI shares the following anonymized information once per Admin UI session: - -Detail | Description --------|------------ -User ID | The GUID of the cluster. -Enterprise | Whether or not the user is an enterprise license user. -User Agent | The browser used to access the Admin UI. -Version | The CockroachDB cluster version. 
- -#### Page events - -The Admin UI shares the following anonymized information about page views in batches of 20 page views: - -Detail | Description --------|------------ -User ID | The GUID of the cluster. -Name | The anonymized name of the Admin UI page or dashboard you navigate to. -Path | The anonymized path of the Admin UI page or dashboard you navigate to. - -#### Example - -This JSON example shows what anonymized Admin UI identity information looks like when sent to Segment: - -~~~ json -{ - "_metadata": {}, - "context": { - "library": { - "name": "analytics-node", - "version": "3.0.0" - } - }, - "messageId": "node-jFua5Hxj5peINPk0rAOGkCKgls60CiHF", - "timestamp": "2017-09-19T15:21:16.972Z", - "traits": { - "enterprise": true, - "userAgent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.101 Safari/537.36", - "version": "v1.1-alpha.20170817-980-g3b098cd" - }, - "type": "identify", - "userId": "55bcbd902-f912-4a3e-91a0-56ca9de17ab7", - "writeKey": "5Vbp8WMYDmZTfCwE0uiUqEdAcTiZWFDb", - "sentAt": "2017-09-19T15:21:27.095Z", - "integrations": {}, - "receivedAt": "2017-09-19T15:21:27.169Z", - "originalTimestamp": "2017-09-19T15:21:16.898Z" -}, -~~~ - -This JSON example shows what anonymized Admin UI page views information looks like when sent to Segment: - -~~~ json -{ - "_metadata": {}, - "context": { - "library": { - "name": "analytics-node", - "version": "3.0.0" - } - }, - "messageId": "node-xuStnk7A2i30FDPdC51rpqxEU9gmym84", - "name": "/cluster", - "properties": { - "path": "/cluster" - }, - "timestamp": "2017-09-19T11:23:16.391Z", - "type": "page", - "userId": "c98564c4-5b95-40d3-82cc-bb18937930e1", - "writeKey": "5Vbp8WMYDmZTfCwE0uiUqEdAcTiZWFDb", - "sentAt": "2017-09-16T11:23:17.390Z", - "integrations": {}, - "receivedAt": "2017-09-16T11:23:26.412Z", - "originalTimestamp": "2017-09-16T11:23:07.369Z" -} -~~~ - -## Opt Out of Diagnostics Reporting - -### At Cluster Initialization - -To make sure that absolutely no diagnostic details are shared, you can set the environment variable `COCKROACH_SKIP_ENABLING_DIAGNOSTIC_REPORTING=true` before starting the first node of the cluster. Note that this works only when set before starting the first node of the cluster. Once the cluster is running, you need to use the `SET CLUSTER SETTING` method described below. - -### After Cluster Initialization - -To stop sending diagnostic details to Cockroach Labs once a cluster is running, [use the built-in SQL client](use-the-built-in-sql-client.html) to execute the following [`SET CLUSTER SETTING`](set-cluster-setting.html) statement, which switches the `diagnostics.reporting.enabled` [cluster setting](cluster-settings.html) to `false`: - -~~~ sql -> SET CLUSTER SETTING diagnostics.reporting.enabled = false; -~~~ - -This change will not be instantaneous, as it must be propagated to other nodes in the cluster. - -## Check the State of Diagnostics Reporting - -To check the state of diagnostics reporting, [use the built-in SQL client](use-the-built-in-sql-client.html) to execute the following [`SHOW CLUSTER SETTING`](show-cluster-setting.html) statement: - -~~~ sql -> SHOW CLUSTER SETTING diagnostics.reporting.enabled; -~~~ - -~~~ -+-------------------------------+ -| diagnostics.reporting.enabled | -+-------------------------------+ -| false | -+-------------------------------+ -(1 row) -~~~ - -If the setting is `false`, diagnostics reporting is off; if the setting is `true`, diagnostics reporting is on. 
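
To check the setting without opening an interactive shell, you can also pass the statement to [the built-in SQL client](use-the-built-in-sql-client.html) with the `--execute` flag. This is a minimal sketch assuming an insecure cluster reachable from the local machine; adjust the flags for your deployment:

~~~ shell
$ cockroach sql --insecure --execute="SHOW CLUSTER SETTING diagnostics.reporting.enabled;"
~~~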
- -## See Also - -- [Cluster Settings](cluster-settings.html) -- [Start a Node](start-a-node.html) diff --git a/src/current/v2.0/distributed-transactions.md b/src/current/v2.0/distributed-transactions.md deleted file mode 100644 index 50d1bba5886..00000000000 --- a/src/current/v2.0/distributed-transactions.md +++ /dev/null @@ -1,18 +0,0 @@ ---- -title: Distributed Transactions -summary: CockroachDB implements efficient, fully-serializable distributed transactions. -toc: false ---- - -CockroachDB distributes [transactions](transactions.html) across your cluster, whether it’s a few servers in a single location or many servers across multiple datacenters. Unlike with sharded setups, you don’t need to know the precise location of data; you just talk to any node in your cluster and CockroachDB gets your transaction to the right place seamlessly. Distributed transactions proceed without downtime or additional latency while rebalancing is underway. You can even move tables – or entire databases – between data centers or cloud infrastructure providers while the cluster is under load. - -- Easily build consistent applications -- Optimistic concurrency with distributed deadlock detection -- Serializable default isolation level - -Distributed transactions in CockroachDB - -## See Also - -- [How CockroachDB Does Distributed, Atomic Transactions](https://www.cockroachlabs.com/blog/how-cockroachdb-distributes-atomic-transactions/) -- [Serializable, Lockless, Distributed: Isolation in CockroachDB](https://www.cockroachlabs.com/blog/serializable-lockless-distributed-isolation-cockroachdb/) diff --git a/src/current/v2.0/drop-column.md b/src/current/v2.0/drop-column.md deleted file mode 100644 index 886dada09cc..00000000000 --- a/src/current/v2.0/drop-column.md +++ /dev/null @@ -1,82 +0,0 @@ ---- -title: DROP COLUMN -summary: Use the ALTER COLUMN statement to remove columns from tables. -toc: true ---- - -The `DROP COLUMN` [statement](sql-statements.html) is part of `ALTER TABLE` and removes columns from a table. - - -## Synopsis - -
      -{% include {{ page.version.version }}/sql/diagrams/drop_column.html %} -
      - -## Required Privileges - -The user must have the `CREATE` [privilege](privileges.html) on the table. - -## Parameters - -| Parameter | Description | -|-----------|-------------| -| `table_name` | The name of the table with the column you want to drop. | -| `name` | The name of the column you want to drop.
When a column with a `CHECK` constraint is dropped, the `CHECK` constraint is also dropped. | - | `CASCADE` | Drop the column even if objects (such as [views](views.html)) depend on it; drop the dependent objects, as well.<br><br>`CASCADE` does not list objects it drops, so should be used cautiously. However, `CASCADE` will not drop dependent indexes; you must use [`DROP INDEX`](drop-index.html).<br><br>
      New in v2.0: `CASCADE` will drop a column with a foreign key constraint if it is the only column in the reference. | -| `RESTRICT` | *(Default)* Do not drop the column if any objects (such as [views](views.html)) depend on it. | - -## Viewing Schema Changes - -{% include {{ page.version.version }}/misc/schema-change-view-job.md %} - -## Examples - -### Drop Columns - -If you no longer want a column in a table, you can drop it. - -``` sql -> ALTER TABLE orders DROP COLUMN billing_zip; -``` - -### Prevent Dropping Columns with Dependent Objects (`RESTRICT`) - -If the column has dependent objects, such as [views](views.html), CockroachDB will not drop the column by default; however, if you want to be sure of the behavior you can include the `RESTRICT` clause. - -``` sql -> ALTER TABLE orders DROP COLUMN customer RESTRICT; -``` -``` -pq: cannot drop column "customer" because view "customer_view" depends on it -``` - -### Drop Column & Dependent Objects (`CASCADE`) - -If you want to drop the column and all of its dependent options, include the `CASCADE` clause. - -{{site.data.alerts.callout_danger}}CASCADE does not list objects it drops, so should be used cautiously.{{site.data.alerts.end}} - -``` sql -> SHOW CREATE VIEW customer_view; -``` -``` -+---------------+----------------------------------------------------------------+ -| View | CreateView | -+---------------+----------------------------------------------------------------+ -| customer_view | CREATE VIEW customer_view AS SELECT customer FROM store.orders | -+---------------+----------------------------------------------------------------+ -``` -``` sql -> ALTER TABLE orders DROP COLUMN customer CASCADE; -> SHOW CREATE VIEW customer_view; -``` -``` -pq: view "customer_view" does not exist -``` - -## See Also - -- [`DROP CONSTRAINT`](drop-constraint.html) -- [`DROP INDEX`](drop-index.html) -- [`ALTER TABLE`](alter-table.html) diff --git a/src/current/v2.0/drop-constraint.md b/src/current/v2.0/drop-constraint.md deleted file mode 100644 index a31e329fc71..00000000000 --- a/src/current/v2.0/drop-constraint.md +++ /dev/null @@ -1,69 +0,0 @@ ---- -title: DROP CONSTRAINT -summary: Use the ALTER CONSTRAINT statement to remove constraints from columns. -toc: true ---- - -The `DROP CONSTRAINT` [statement](sql-statements.html) is part of `ALTER TABLE` and removes Check and Foreign Key constraints from columns. - -{{site.data.alerts.callout_info}}For information about removing other constraints, see Constraints: Remove Constraints.{{site.data.alerts.end}} - - -## Synopsis - -
      -{% include {{ page.version.version }}/sql/diagrams/drop_constraint.html %} -
      - -## Required Privileges - -The user must have the `CREATE` [privilege](privileges.html) on the table. - -## Parameters - -| Parameter | Description | -|-----------|-------------| -| `table_name` | The name of the table with the constraint you want to drop. | -| `name` | The name of the constraint you want to drop. | - -## Viewing Schema Changes - -{% include {{ page.version.version }}/misc/schema-change-view-job.md %} - -## Example - -~~~ sql -> SHOW CONSTRAINTS FROM orders; -~~~ -~~~ -+--------+---------------------------+-------------+-----------+----------------+ -| Table | Name | Type | Column(s) | Details | -+--------+---------------------------+-------------+-----------+----------------+ -| orders | fk_customer_ref_customers | FOREIGN KEY | customer | customers.[id] | -| orders | primary | PRIMARY KEY | id | NULL | -+--------+---------------------------+-------------+-----------+----------------+ -~~~ -~~~ sql -> ALTER TABLE orders DROP CONSTRAINT fk_customer_ref_customers; -~~~ -~~~ -ALTER TABLE -~~~ -~~~ sql -> SHOW CONSTRAINTS FROM orders; -~~~ -~~~ -+--------+---------+-------------+-----------+---------+ -| Table | Name | Type | Column(s) | Details | -+--------+---------+-------------+-----------+---------+ -| orders | primary | PRIMARY KEY | id | NULL | -+--------+---------+-------------+-----------+---------+ -~~~ - -{{site.data.alerts.callout_info}}You cannot drop the primary constraint, which indicates your table's Primary Key.{{site.data.alerts.end}} - -## See Also - -- [`DROP COLUMN`](drop-column.html) -- [`DROP INDEX`](drop-index.html) -- [`ALTER TABLE`](alter-table.html) diff --git a/src/current/v2.0/drop-database.md b/src/current/v2.0/drop-database.md deleted file mode 100644 index 2fb57e28ae8..00000000000 --- a/src/current/v2.0/drop-database.md +++ /dev/null @@ -1,95 +0,0 @@ ---- -title: DROP DATABASE -summary: The DROP DATABASE statement removes a database and all its objects from a CockroachDB cluster. -toc: true ---- - -The `DROP DATABASE` [statement](sql-statements.html) removes a database and all its objects from a CockroachDB cluster. - - -## Required Privileges - -The user must have the `DROP` [privilege](privileges.html) on the database and on all tables in the database. - -## Synopsis - -
      -{% include {{ page.version.version }}/sql/diagrams/drop_database.html %} -
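
For example, a complete form of the statement with both optional clauses spelled out might look as follows (a sketch using the `db2` database from the examples below):

~~~ sql
> DROP DATABASE IF EXISTS db2 CASCADE;
~~~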
      - -## Parameters - -Parameter | Description -----------|------------ -`IF EXISTS` | Drop the database if it exists; if it does not exist, do not return an error. -`name` | The name of the database you want to drop. -`CASCADE` | _(Default)_ Drop all tables and views in the database as well as all objects (such as [constraints](constraints.html) and [views](views.html)) that depend on those tables.
      `CASCADE` does not list objects it drops, so should be used cautiously. -`RESTRICT` | Do not drop the database if it contains any [tables](create-table.html) or [views](create-view.html). - -## Examples - -### Drop a database and its objects (`CASCADE`) - -For non-interactive sessions (e.g., client applications), `DROP DATABASE` applies the `CASCADE` option by default, which drops all tables and views in the database as well as all objects (such as [constraints](constraints.html) and [views](views.html)) that depend on those tables. - -~~~ sql -> SHOW TABLES FROM db2; -~~~ - -~~~ -+-------+ -| Table | -+-------+ -| t1 | -| v1 | -+-------+ -(2 rows) -~~~ - -~~~ sql -> DROP DATABASE db2; -~~~ - -~~~ sql -> SHOW TABLES FROM db2; -~~~ - -~~~ -pq: database "db2" does not exist -~~~ - -For interactive sessions from the [built-in SQL client](use-the-built-in-sql-client.html), either the `CASCADE` option must be set explicitly or the `--unsafe-updates` flag must be set when starting the shell. - -### Prevent dropping a non-empty database (`RESTRICT`) - -When a database is not empty, the `RESTRICT` option prevents the database from being dropped: - -~~~ sql -> SHOW TABLES FROM db2; -~~~ - -~~~ -+-------+ -| Table | -+-------+ -| t1 | -| v1 | -+-------+ -(2 rows) -~~~ - -~~~ sql -> DROP DATABASE db2 RESTRICT; -~~~ - -~~~ -pq: database "db2" is not empty and CASCADE was not specified -~~~ - -## See Also - -- [`CREATE DATABASE`](create-database.html) -- [`SHOW DATABASES`](show-databases.html) -- [`RENAME DATABASE`](rename-database.html) -- [`SET DATABASE`](set-vars.html) -- [Other SQL Statements](sql-statements.html) diff --git a/src/current/v2.0/drop-index.md b/src/current/v2.0/drop-index.md deleted file mode 100644 index 6e447129cea..00000000000 --- a/src/current/v2.0/drop-index.md +++ /dev/null @@ -1,105 +0,0 @@ ---- -title: DROP INDEX -summary: The DROP INDEX statement removes indexes from tables. -toc: true ---- - -The `DROP INDEX` [statement](sql-statements.html) removes indexes from tables. - - -## Synopsis - -
      -{% include {{ page.version.version }}/sql/diagrams/drop_index.html %} -
      - -## Required Privileges - -The user must have the `CREATE` [privilege](privileges.html) on each specified table. - -## Parameters - -| Parameter | Description | -|-----------|-------------| -| `IF EXISTS` | Drop the named indexes if they exist; if they do not exist, do not return an error.| -| `table_name` | The name of the table with the index you want to drop. Find table names with [`SHOW TABLES`](show-tables.html).| -| `index_name` | The name of the index you want to drop. Find index names with [`SHOW INDEX`](show-index.html).
You cannot drop a table's `primary` index.| - | `CASCADE` | Drop all objects (such as [constraints](constraints.html)) that depend on the indexes. To drop a `UNIQUE INDEX`, you must use `CASCADE`.<br><br>
      `CASCADE` does not list objects it drops, so should be used cautiously.| -| `RESTRICT` | _(Default)_ Do not drop the indexes if any objects (such as [constraints](constraints.html)) depend on them.| - -## Examples - -### Remove an Index (No Dependencies) -~~~ sql -> SHOW INDEX FROM tbl; -~~~ -~~~ -+-------+--------------+--------+-----+--------+-----------+---------+----------+ -| Table | Name | Unique | Seq | Column | Direction | Storing | Implicit | -+-------+--------------+--------+-----+--------+-----------+---------+----------+ -| tbl | primary | true | 1 | id | ASC | false | false | -| tbl | tbl_name_idx | false | 1 | name | ASC | false | false | -| tbl | tbl_name_idx | false | 2 | id | ASC | false | true | -+-------+--------------+--------+-----+--------+-----------+---------+----------+ -(3 rows) -~~~ -~~~ sql -> DROP INDEX tbl@tbl_name_idx; - -> SHOW INDEX FROM tbl; -~~~ -~~~ -+-------+---------+--------+-----+--------+-----------+---------+----------+ -| Table | Name | Unique | Seq | Column | Direction | Storing | Implicit | -+-------+---------+--------+-----+--------+-----------+---------+----------+ -| tbl | primary | true | 1 | id | ASC | false | false | -+-------+---------+--------+-----+--------+-----------+---------+----------+ -(1 row) -~~~ - -### Remove an Index and Dependent Objects with `CASCADE` - -{{site.data.alerts.callout_danger}}CASCADE drops all dependent objects without listing them, which can lead to inadvertent and difficult-to-recover losses. To avoid potential harm, we recommend dropping objects individually in most cases.{{site.data.alerts.end}} - -~~~ sql -> SHOW INDEX FROM orders; -~~~ -~~~ -+--------+---------------------+--------+-----+----------+-----------+---------+----------+ -| Table | Name | Unique | Seq | Column | Direction | Storing | Implicit | -+--------+---------------------+--------+-----+----------+-----------+---------+----------+ -| orders | primary | true | 1 | id | ASC | false | false | -| orders | orders_customer_idx | false | 1 | customer | ASC | false | false | -| orders | orders_customer_idx | false | 2 | id | ASC | false | true | -+--------+---------------------+--------+-----+----------+-----------+---------+----------+ -(3 rows) -~~~ -~~~ sql -> DROP INDEX orders@orders_customer_idx; -~~~ -~~~ -pq: index "orders_customer_idx" is in use as a foreign key constraint -~~~ -~~~ sql -> SHOW CONSTRAINTS FROM orders; -~~~ -~~~ -+--------+---------------------------+-------------+------------+----------------+ -| Table | Name | Type | Column(s) | Details | -+--------+---------------------------+-------------+------------+----------------+ -| orders | fk_customer_ref_customers | FOREIGN KEY | [customer] | customers.[id] | -| orders | primary | PRIMARY KEY | [id] | NULL | -+--------+---------------------------+-------------+------------+----------------+ -~~~ -~~~ sql -> DROP INDEX orders@orders_customer_idx CASCADE; - -> SHOW CONSTRAINTS FROM orders; -~~~ -~~~ -+--------+---------+-------------+-----------+---------+ -| Table | Name | Type | Column(s) | Details | -+--------+---------+-------------+-----------+---------+ -| orders | primary | PRIMARY KEY | [id] | NULL | -+--------+---------+-------------+-----------+---------+ -~~~ diff --git a/src/current/v2.0/drop-role.md b/src/current/v2.0/drop-role.md deleted file mode 100644 index 8f94dd4b84f..00000000000 --- a/src/current/v2.0/drop-role.md +++ /dev/null @@ -1,70 +0,0 @@ ---- -title: DROP ROLE (Enterprise) -summary: The DROP ROLE statement removes one or more SQL roles. 
-toc: true ---- - -New in v2.0: The `DROP ROLE` [statement](sql-statements.html) removes one or more SQL roles. - -{{site.data.alerts.callout_info}}DROP ROLE is an enterprise-only feature.{{site.data.alerts.end}} - - -## Considerations - -- The `admin` role cannot be dropped, and `root` must always be a member of `admin`. -- A role cannot be dropped if it has privileges. Use [`REVOKE`](revoke.html) to remove privileges. - -## Required Privileges - -Roles can only be dropped by super users, i.e., members of the `admin` role. - -## Synopsis - -
      {% include {{ page.version.version }}/sql/diagrams/drop_role.html %}
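
For example, because `name` accepts a comma-separated list, several roles can be removed in a single statement (a sketch assuming a second, hypothetical role `db_admins` exists alongside `dev_ops`):

{% include copy-clipboard.html %}
~~~ sql
> DROP ROLE dev_ops, db_admins;
~~~

As with a single role, every listed role must have had all of its privileges revoked before the statement succeeds.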
- - -## Parameters - -| Parameter | Description | ------------|-------------- -`name` | The name of the role to remove. To remove multiple roles, use a comma-separated list of roles.<br><br>
      You can use [`SHOW ROLES`](show-roles.html) to find the names of roles. - -## Example - -In this example, first check a role's privileges. Then, revoke the role's privileges and remove the role. - -{% include copy-clipboard.html %} -~~~ sql -> SHOW GRANTS ON documents FOR dev_ops; -~~~ -~~~ -+------------+--------+-----------+---------+------------+ -| Database | Schema | Table | User | Privileges | -+------------+--------+-----------+---------+------------+ -| jsonb_test | public | documents | dev_ops | INSERT | -+------------+--------+-----------+---------+------------+ -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> REVOKE INSERT ON documents FROM dev_ops; -~~~ - -{{site.data.alerts.callout_info}}All of a role's privileges must be revoked before the role can be dropped.{{site.data.alerts.end}} - -{% include copy-clipboard.html %} -~~~ sql -> DROP ROLE dev_ops; -~~~ -~~~ -DROP ROLE 1 -~~~ - -## See Also - -- [Manage Roles](roles.html) -- [`CREATE ROLE` (Enterprise)](create-role.html) -- [`SHOW ROLES`](show-roles.html) -- [`GRANT`](grant.html) -- [`SHOW GRANTS`](show-grants.html) -- [Other SQL Statements](sql-statements.html) diff --git a/src/current/v2.0/drop-sequence.md b/src/current/v2.0/drop-sequence.md deleted file mode 100644 index 39a16f8d1da..00000000000 --- a/src/current/v2.0/drop-sequence.md +++ /dev/null @@ -1,99 +0,0 @@ ---- -title: DROP SEQUENCE -summary: -toc: true ---- - -New in v2.0: The `DROP SEQUENCE` [statement](sql-statements.html) removes a sequence from a database. - - -## Required Privileges - -The user must have the `DROP` [privilege](privileges.html) on the specified sequence(s). - -## Synopsis - -
      {% include {{ page.version.version }}/sql/diagrams/drop_sequence.html %}
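
For example, to avoid an error when the sequence may already have been removed, include `IF EXISTS` (a sketch using the `customer_seq` sequence from the examples below):

{% include copy-clipboard.html %}
~~~ sql
> DROP SEQUENCE IF EXISTS customer_seq;
~~~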
      - -## Parameters - - - - Parameter | Description ------------|------------ -`IF EXISTS` | Drop the sequence only if it exists; if it does not exist, do not return an error. -`sequence_name` | The name of the sequence you want to drop. Find the sequence name with `SHOW CREATE TABLE` on the table that uses the sequence. -`RESTRICT` | _(Default)_ Do not drop the sequence if any objects (such as [constraints](constraints.html) and tables) use it. -`CASCADE` | Not yet implemented. Currently, you can only drop a sequence if nothing depends on it. - - - -## Examples - -### Remove a Sequence (No Dependents) - -In this example, other objects do not depend on the sequence being dropped. - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM information_schema.sequences; -~~~ -~~~ -+------------------+-----------------+--------------------+-----------+-------------------+-------------------------+---------------+-------------+----------------------+---------------------+-----------+--------------+ -| sequence_catalog | sequence_schema | sequence_name | data_type | numeric_precision | numeric_precision_radix | numeric_scale | start_value | minimum_value | maximum_value | increment | cycle_option | -+------------------+-----------------+--------------------+-----------+-------------------+-------------------------+---------------+-------------+----------------------+---------------------+-----------+--------------+ -| def | db_2 | test_4 | INT | 64 | 2 | 0 | 1 | 1 | 9223372036854775807 | 1 | NO | -| def | test_db | customer_seq | INT | 64 | 2 | 0 | 101 | 1 | 9223372036854775807 | 2 | NO | -| def | test_db | desc_customer_list | INT | 64 | 2 | 0 | 1000 | -9223372036854775808 | -1 | -2 | NO | -| def | test_db | test_sequence3 | INT | 64 | 2 | 0 | 1 | 1 | 9223372036854775807 | 1 | NO | -+------------------+-----------------+--------------------+-----------+-------------------+-------------------------+---------------+-------------+----------------------+---------------------+-----------+--------------+ -(4 rows) -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> DROP SEQUENCE customer_seq; -~~~ -~~~ -DROP SEQUENCE -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM information_schema.sequences -~~~ -~~~ -+------------------+-----------------+--------------------+-----------+-------------------+-------------------------+---------------+-------------+----------------------+---------------------+-----------+--------------+ -| sequence_catalog | sequence_schema | sequence_name | data_type | numeric_precision | numeric_precision_radix | numeric_scale | start_value | minimum_value | maximum_value | increment | cycle_option | -+------------------+-----------------+--------------------+-----------+-------------------+-------------------------+---------------+-------------+----------------------+---------------------+-----------+--------------+ -| def | db_2 | test_4 | INT | 64 | 2 | 0 | 1 | 1 | 9223372036854775807 | 1 | NO | -| def | test_db | desc_customer_list | INT | 64 | 2 | 0 | 1000 | -9223372036854775808 | -1 | -2 | NO | -| def | test_db | test_sequence3 | INT | 64 | 2 | 0 | 1 | 1 | 9223372036854775807 | 1 | NO | -+------------------+-----------------+--------------------+-----------+-------------------+-------------------------+---------------+-------------+----------------------+---------------------+-----------+--------------+ -(4 rows) -~~~ - - - - -## See Also -- [`CREATE SEQUENCE`](create-sequence.html) -- [`ALTER SEQUENCE`](alter-sequence.html) -- [`RENAME 
SEQUENCE`](rename-sequence.html) -- [Functions and Operators](functions-and-operators.html) -- [Other SQL Statements](sql-statements.html) diff --git a/src/current/v2.0/drop-table.md b/src/current/v2.0/drop-table.md deleted file mode 100644 index 7f4105f8048..00000000000 --- a/src/current/v2.0/drop-table.md +++ /dev/null @@ -1,131 +0,0 @@ ---- -title: DROP TABLE -summary: The DROP TABLE statement removes a table and all its indexes from a database. -toc: true ---- - -The `DROP TABLE` [statement](sql-statements.html) removes a table and all its indexes from a database. - - -## Required Privileges - -The user must have the `DROP` [privilege](privileges.html) on the specified table(s). If `CASCADE` is used, the user must have the privileges required to drop each dependent object as well. - -## Synopsis - -
      -{% include {{ page.version.version }}/sql/diagrams/drop_table.html %} -
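
For example, because `table_name` accepts a comma-separated list, several tables can be dropped in one statement (a sketch assuming a second, hypothetical table `bank.tellers` exists alongside `bank.branches`):

~~~ sql
> DROP TABLE IF EXISTS bank.branches, bank.tellers;
~~~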
      - -## Parameters - -Parameter | Description -----------|------------ -`IF EXISTS` | Drop the table if it exists; if it does not exist, do not return an error. -`table_name` | A comma-separated list of table names. To find table names, use [`SHOW TABLES`](show-tables.html). -`CASCADE` | Drop all objects (such as [constraints](constraints.html) and [views](views.html)) that depend on the table.
      `CASCADE` does not list objects it drops, so should be used cautiously. -`RESTRICT` | _(Default)_ Do not drop the table if any objects (such as [constraints](constraints.html) and [views](views.html)) depend on it. - -## Examples - -### Remove a Table (No Dependencies) - -In this example, other objects do not depend on the table being dropped. - -~~~ sql -> SHOW TABLES FROM bank; -~~~ - -~~~ -+--------------------+ -| Table | -+--------------------+ -| accounts | -| branches | -| user_accounts_view | -+--------------------+ -(3 rows) -~~~ - -~~~ sql -> DROP TABLE bank.branches; -~~~ - -~~~ -DROP TABLE -~~~ - -~~~ sql -> SHOW TABLES FROM bank; -~~~ - -~~~ -+--------------------+ -| Table | -+--------------------+ -| accounts | -| user_accounts_view | -+--------------------+ -(2 rows) -~~~ - -### Remove a Table and Dependent Objects with `CASCADE` - -In this example, a view depends on the table being dropped. Therefore, it's only possible to drop the table while simultaneously dropping the dependent view using `CASCADE`. - -{{site.data.alerts.callout_danger}}CASCADE drops all dependent objects without listing them, which can lead to inadvertent and difficult-to-recover losses. To avoid potential harm, we recommend dropping objects individually in most cases.{{site.data.alerts.end}} - -~~~ sql -> SHOW TABLES FROM bank; -~~~ - -~~~ -+--------------------+ -| Table | -+--------------------+ -| accounts | -| user_accounts_view | -+--------------------+ -(2 rows) -~~~ - -~~~ sql -> DROP TABLE bank.accounts; -~~~ - -~~~ -pq: cannot drop table "accounts" because view "user_accounts_view" depends on it -~~~ - -~~~sql -> DROP TABLE bank.accounts CASCADE; -~~~ - -~~~ -DROP TABLE -~~~ - -~~~ sql -> SHOW TABLES FROM bank; -~~~ - -~~~ -+-------+ -| Table | -+-------+ -+-------+ -(0 rows) -~~~ - -## See Also - -- [`ALTER TABLE`](alter-table.html) -- [`CREATE TABLE`](create-table.html) -- [`INSERT`](insert.html) -- [`RENAME TABLE`](rename-table.html) -- [`SHOW COLUMNS`](show-columns.html) -- [`SHOW TABLES`](show-tables.html) -- [`UPDATE`](update.html) -- [`DELETE`](delete.html) -- [`DROP INDEX`](drop-index.html) -- [`DROP VIEW`](drop-view.html) diff --git a/src/current/v2.0/drop-user.md b/src/current/v2.0/drop-user.md deleted file mode 100644 index fb5a771fd99..00000000000 --- a/src/current/v2.0/drop-user.md +++ /dev/null @@ -1,66 +0,0 @@ ---- -title: DROP USER -summary: The DROP USER statement removes one or more SQL users. -toc: true ---- - -New in v1.1: The `DROP USER` [statement](sql-statements.html) removes one or more SQL users. - -{{site.data.alerts.callout_success}}You can also use the cockroach user rm command to remove users.{{site.data.alerts.end}} - - -## Required Privileges - -The user must have the `DELETE` [privilege](privileges.html) on the `system.users` table. - -## Synopsis - -
      {% include {{ page.version.version }}/sql/diagrams/drop_user.html %}
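
For example, because `user_name` accepts a comma-separated list, several users can be removed in a single statement (a sketch assuming a second, hypothetical user `wroach` exists alongside `mroach`; as in the example below, each user's privileges must be revoked first):

{% include copy-clipboard.html %}
~~~ sql
> DROP USER mroach, wroach;
~~~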
- -## Parameters - -| Parameter | Description | -|-----------|-------------| -|`user_name` | The username of the user to remove. To remove multiple users, use a comma-separated list of usernames.<br><br>
      You can use [`SHOW USERS`](show-users.html) to find usernames.| - -## Example - -New in v2.0: All of a user's privileges must be revoked before the user can be dropped. - -In this example, first check a user's privileges. Then, revoke the user's privileges before removing the user. - -{% include copy-clipboard.html %} -~~~ sql -> SHOW GRANTS ON test.customers FOR mroach; -~~~ - -~~~ -+-----------+--------+------------+ -| Table | User | Privileges | -+-----------+--------+------------+ -| customers | mroach | CREATE | -| customers | mroach | INSERT | -| customers | mroach | UPDATE | -+-----------+--------+------------+ -(3 rows) -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> REVOKE CREATE,INSERT,UPDATE ON test.customers FROM mroach; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> DROP USER mroach; -~~~ - -## See Also - -- [`cockroach user` command](create-and-manage-users.html) -- [`CREATE USER`](create-user.html) -- [`SHOW USERS`](show-users.html) -- [`GRANT`](grant.html) -- [`SHOW GRANTS`](show-grants.html) -- [Create Security Certificates](create-security-certificates.html) -- [Other SQL Statements](sql-statements.html) diff --git a/src/current/v2.0/drop-view.md b/src/current/v2.0/drop-view.md deleted file mode 100644 index dc209cfc1a9..00000000000 --- a/src/current/v2.0/drop-view.md +++ /dev/null @@ -1,122 +0,0 @@ ---- -title: DROP VIEW -summary: The DROP VIEW statement removes a view from a database. -toc: true ---- - -The `DROP VIEW` [statement](sql-statements.html) removes a [view](views.html) from a database. - - -## Required Privileges - -The user must have the `DROP` [privilege](privileges.html) on the specified view(s). If `CASCADE` is used to drop dependent views, the user must have the `DROP` privilege on each dependent view as well. - -## Synopsis - -
      {% include {{ page.version.version }}/sql/diagrams/drop_view.html %}
      - -## Parameters - -| Parameter | Description | -|-----------|-------------| -| `IF EXISTS` | Drop the view if it exists; if it does not exist, do not return an error.| -| `table_name` | A comma-separated list of view names. To find view names, use:
`SELECT * FROM information_schema.tables WHERE table_type = 'VIEW';`| -| `CASCADE` | Drop other views that depend on the view being dropped.<br><br>
      `CASCADE` does not list views it drops, so should be used cautiously.| -| `RESTRICT` | _(Default)_ Do not drop the view if other views depend on it.| - -## Examples - -### Remove a View (No Dependencies) - -In this example, other views do not depend on the view being dropped. - -~~~ sql -> SELECT * FROM information_schema.tables WHERE table_type = 'VIEW'; -~~~ - -~~~ -+---------------+-------------------+--------------------+------------+---------+ -| TABLE_CATALOG | TABLE_SCHEMA | TABLE_NAME | TABLE_TYPE | VERSION | -+---------------+-------------------+--------------------+------------+---------+ -| def | bank | user_accounts | VIEW | 1 | -| def | bank | user_emails | VIEW | 1 | -+---------------+-------------------+--------------------+------------+---------+ -(2 rows) -~~~ - -~~~ sql -> DROP VIEW bank.user_emails; -~~~ - -~~~ -DROP VIEW -~~~ - -~~~ sql -> SELECT * FROM information_schema.tables WHERE table_type = 'VIEW'; -~~~ - -~~~ -+---------------+-------------------+--------------------+------------+---------+ -| TABLE_CATALOG | TABLE_SCHEMA | TABLE_NAME | TABLE_TYPE | VERSION | -+---------------+-------------------+--------------------+------------+---------+ -| def | bank | user_accounts | VIEW | 1 | -+---------------+-------------------+--------------------+------------+---------+ -(1 row) -~~~ - -### Remove a View (With Dependencies) - -In this example, another view depends on the view being dropped. Therefore, it's only possible to drop the view while simultaneously dropping the dependent view using `CASCADE`. - -{{site.data.alerts.callout_danger}}CASCADE drops all dependent views without listing them, which can lead to inadvertent and difficult-to-recover losses. To avoid potential harm, we recommend dropping objects individually in most cases.{{site.data.alerts.end}} - -~~~ sql -> SELECT * FROM information_schema.tables WHERE table_type = 'VIEW'; -~~~ - -~~~ -+---------------+-------------------+--------------------+------------+---------+ -| TABLE_CATALOG | TABLE_SCHEMA | TABLE_NAME | TABLE_TYPE | VERSION | -+---------------+-------------------+--------------------+------------+---------+ -| def | bank | user_accounts | VIEW | 1 | -| def | bank | user_emails | VIEW | 1 | -+---------------+-------------------+--------------------+------------+---------+ -(2 rows) -~~~ - -~~~ sql -> DROP VIEW bank.user_accounts; -~~~ - -~~~ -pq: cannot drop view "user_accounts" because view "user_emails" depends on it -~~~ - -~~~sql -> DROP VIEW bank.user_accounts CASCADE; -~~~ - -~~~ -DROP VIEW -~~~ - -~~~ sql -> SELECT * FROM information_schema.tables WHERE table_type = 'VIEW'; -~~~ - -~~~ -+---------------+-------------------+--------------------+------------+---------+ -| TABLE_CATALOG | TABLE_SCHEMA | TABLE_NAME | TABLE_TYPE | VERSION | -+---------------+-------------------+--------------------+------------+---------+ -| def | bank | create_test | VIEW | 1 | -+---------------+-------------------+--------------------+------------+---------+ -(1 row) -~~~ - -## See Also - -- [Views](views.html) -- [`CREATE VIEW`](create-view.html) -- [`SHOW CREATE VIEW`](show-create-view.html) -- [`ALTER VIEW`](alter-view.html) diff --git a/src/current/v2.0/enable-node-map.md b/src/current/v2.0/enable-node-map.md deleted file mode 100644 index 4470bbb8bdc..00000000000 --- a/src/current/v2.0/enable-node-map.md +++ /dev/null @@ -1,207 +0,0 @@ ---- -title: Enable the Node Map -summary: Learn how to enable the node map in the Admin UI. 
-toc: true ---- - -New in v2.0 The **Node Map** visualizes the geographical configuration of a multi-regional cluster by plotting the node localities on a world map. The **Node Map** also provides real-time cluster metrics, with the ability to drill down to individual nodes to monitor and troubleshoot the cluster health and performance. - -This page walks you through the process of setting up and enabling the **Node Map**. - -{{site.data.alerts.callout_info}}The Node Map is an enterprise-only feature. However, you can request a trial license to try it out. {{site.data.alerts.end}} - -CockroachDB Admin UI - - -## Set Up and Enable the Node Map - -To enable the **Node Map**, you need to start the cluster with the correct `--locality` flags and assign the latitudes and longitudes for each locality. - -{{site.data.alerts.callout_info}}The Node Map will not be displayed until all nodes are started with the correct --locality flags and all localities are assigned the corresponding latitudes and longitudes. {{site.data.alerts.end}} - -Consider a scenario of a four-node geo-distributed cluster with the following configuration: - -| Node | Region | Datacenter | -| ------ | ------ | ------ | -| Node1 | us-east-1 | us-east-1a | -| Node2 | us-east-1 | us-east-1b | -| Node3 | us-west-1 | us-west-1a | -| Node4 | eu-west-1 | eu-west-1a | - -### Step 1. Ensure the CockroachDB Version is 2.0 or Higher - -~~~ shell -$ cockroach version -~~~ - -~~~ -Build Tag: {{page.release_info.version}} -Build Time: {{page.release_info.build_time}} -Distribution: CCL -Platform: darwin amd64 (x86_64-apple-darwin13) -Go Version: go1.10 -C Compiler: 4.2.1 Compatible Clang 3.8.0 (tags/RELEASE_380/final) -Build SHA-1: 367ad4f673b33694df06caaa2d7fc63afaaf3053 -Build Type: release -~~~ - -If any node is running an earlier version, [upgrade it to CockroachDB v2.0](upgrade-cockroach-version.html). - -### Step 2. Start the Nodes with the Correct `--locality` Flags - -To start a new cluster with the correct `--locality` flags: - -Start Node 1: - -{% include copy-clipboard.html %} -~~~ -$ cockroach start \ ---insecure \ ---locality=region=us-east-1,datacenter=us-east-1a \ ---host= \ ---cache=.25 \ ---max-sql-memory=.25 \ ---join=:26257,:26257,:26257,:26257 -~~~ - -Start Node 2: - -{% include copy-clipboard.html %} -~~~ -$ cockroach start \ ---insecure \ ---locality=region=us-east-1,datacenter=us-east-1b \ ---host= \ ---cache=.25 \ ---max-sql-memory=.25 \ ---join=:26257,:26257,:26257,:26257 -~~~ - -Start Node 3: - -{% include copy-clipboard.html %} -~~~ -$ cockroach start \ ---insecure \ ---locality=region=us-west-1,datacenter=us-west-1a \ ---host= \ ---cache=.25 \ ---max-sql-memory=.25 \ ---join=:26257,:26257,:26257,:26257 -~~~ - -Start Node 4: - -{% include copy-clipboard.html %} -~~~ -$ cockroach start \ ---insecure \ ---locality=region=eu-west-1,datacenter=eu-west-1a \ ---host= \ ---cache=.25 \ ---max-sql-memory=.25 \ ---join=:26257,:26257,:26257,:26257 -~~~ - -Use the [`cockroach init`](initialize-a-cluster.html) command to perform a one-time initialization of the cluster: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach init --insecure -~~~ - -[Access the Admin UI](admin-ui-access-and-navigate.html#access-the-admin-ui). The following page is displayed: - -CockroachDB Admin UI - -### Step 3. [Set the Enterprise License](enterprise-licensing.html) and refresh the Admin UI - -The following page should be displayed: - -CockroachDB Admin UI - -### Step 4. 
Set the Latitudes and Longitudes for the Localities - -Launch the built-in SQL client: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure --host=
      -~~~ - -Insert the approximate latitudes and longitudes of each region into the `system.locations` table: - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO system.locations VALUES - ('region', 'us-east-1', 37.478397, -76.453077), - ('region', 'us-west-1', 38.837522, -120.895824), - ('region', 'eu-west-1', 53.142367, -7.692054); -~~~ - -{{site.data.alerts.callout_info}}The Node Map will not be displayed until all regions are assigned the corresponding latitudes and longitudes. {{site.data.alerts.end}} - -For the latitudes and longitudes of AWS, Azure, and Google Cloud regions, see [Location Coordinates for Reference](#location-coordinates-for-reference). - -### Step 5. View the Node Map - -[Open the **Overview page**](admin-ui-access-and-navigate.html) and select **Node Map** from the **View** drop-down menu. The **Node Map** will be displayed: - -CockroachDB Admin UI - -### Step 6. Navigate the Node Map - -Let's say you want to navigate to Node 2, which is in datacenter `us-east-1a` in the `us-east-1` region: - -1. Click on the map component marked as **region=us-east-1** on the **Node Map**. The datacenter view is displayed. -2. Click on the datacenter component marked as **datacenter=us-east-1a**. The individual node components are displayed. -3. To navigate back to the cluster view, either click on **Cluster** in the bread-crumb trail at the top of the **Node Map**, or click **Up to region=us-east-1** and then click **Up to Cluster** in the lower left-hand side of the **Node Map**. - -CockroachDB Admin UI - -## Troubleshoot the Node Map - -### Node Map Not Displayed - -The **Node Map** will not be displayed until all nodes have localities and are assigned the corresponding latitudes and longitudes. To verify if you have assigned localities as well as latitude and longitudes assigned to all nodes, navigate to the Localities debug page (`https://
      :8080/#/reports/localities`) in the Admin UI. - -The Localities debug page displays the following: - -- Localities configuration that you set up while starting the nodes with the `--locality` flags. -- Nodes corresponding to each locality. -- Latitude and longitude coordinates for each locality/node. - -On the page, ensure that every node has a locality as well as latitude/longitude coordinates assigned to them. - -### Node Map Not Displayed for All Locality Levels - -The **Node Map** is displayed only for the locality levels that have latitude/longitude coordinates assigned to them: - -- If you assign the latitude/longitude coordinates at the region level, the **Node Map** shows the regions on the world map. However, when you drill down to the datacenter and further to the individual nodes, the world map disappears and the datacenters/nodes are plotted in a circular layout. -- If you assign the latitude/longitude coordinates at the datacenter level, the **Node Map** shows the regions with single datacenters at the same location assigned to the datacenter, while regions with multiple datacenters are shown at the center of the datacenter coordinates in the region. When you drill down to the datacenter levels, the **Node Map** shows the datacenter at their assigned coordinates. Further drilling down to individual nodes shows the nodes in a circular layout. - -[Assign latitude/longitude coordinates](#step-4-set-the-latitudes-and-longitudes-for-the-localities) at the locality level that you want to view on the **Node Map**. - -## Known Limitations - -### Unable to Assign Latitude/Longitude Coordinates to Localities - -{% include {{ page.version.version }}/known-limitations/node-map.md %} - -### **Capacity Used** Value Displayed is More Than Configured Capacity - -{% include v2.0/misc/available-capacity-metric.md %} - -## Location Coordinates for Reference - -### AWS locations - -{% include {{ page.version.version }}/misc/aws-locations.md %} - -### Azure locations - -{% include {{ page.version.version }}/misc/azure-locations.md %} - -### Google Cloud locations - -{% include {{ page.version.version }}/misc/gce-locations.md %} diff --git a/src/current/v2.0/enterprise-licensing.md b/src/current/v2.0/enterprise-licensing.md deleted file mode 100644 index 891162a4a91..00000000000 --- a/src/current/v2.0/enterprise-licensing.md +++ /dev/null @@ -1,94 +0,0 @@ ---- -title: Enterprise Licensing -summary: Request and set trial and enterprise license keys for CockroachDB -toc: true ---- - -CockroachDB distributes a single binary that contains both core and [enterprise features](https://www.cockroachlabs.com/pricing/). You can use core features without any license key. However, to use the enterprise features, you need either a trial or an enterprise license key. - -This page shows you how to obtain and set trial and enterprise license keys for CockroachDB. - - -## Types of Licenses - -Type | Description --------------|------------ -**Trial License** | A trial license enables you to try out CockroachDB enterprise features for 30 days for free. -**Enterprise License** | A paid enterprise license enables you to use CockroachDB enterprise features for longer periods (one year or more). - -## Obtain a Trial or Enterprise License Key - -To obtain a trial license key, fill out [the registration form](https://www.cockroachlabs.com/get-cockroachdb/enterprise/) and receive your trial license key via email within a few minutes. 
- -To upgrade to an enterprise license, [contact Sales](mailto:sales@cockroachlabs.com). - -## Set the Trial or Enterprise License Key - -As the CockroachDB `root` user, open the [built-in SQL shell](use-the-built-in-sql-client.html) in insecure or secure mode, as per your CockroachDB setup. In the following example, we assume that CockroachDB is running in insecure mode. Then use the `SET CLUSTER SETTING` command to set the name of your organization and the license key: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SET CLUSTER SETTING cluster.organization = 'Acme Company'; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SET CLUSTER SETTING enterprise.license = 'xxxxxxxxxxxx'; -~~~ - -## Verify the License Key - -To verify the license key, open the [built-in SQL shell](use-the-built-in-sql-client.html) and use the `SHOW CLUSTER SETTING` command to check the organization name and license key: - -{% include copy-clipboard.html %} -~~~ sql -> SHOW CLUSTER SETTING cluster.organization; -~~~ -~~~ -+----------------------+ -| cluster.organization | -+----------------------+ -| Acme Company | -+----------------------+ -(1 row) -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SHOW CLUSTER SETTING enterprise.license; -~~~ -~~~ -+--------------------------------------------------------------------+ -| enterprise.license | -+--------------------------------------------------------------------+ -| xxxxxxxxxxxx | -+--------------------------------------------------------------------+ -(1 row) -~~~ - -The license setting is also logged in the cockroach.log on the node where the command is run: - -{% include copy-clipboard.html %} -~~~ sql -$ cat cockroach.log | grep license -~~~ -~~~ -I171116 18:11:48.279604 1514 sql/event_log.go:102 [client=[::1]:56357,user=root,n1] Event: "set_cluster_setting", target: 0, info: {SettingName:enterprise.license Value:xxxxxxxxxxxx User:root} -~~~ - -## Renew an Expired License - -After your license expires, the enterprise features stop working, but your production setup is unaffected. For example, the backup and restore features would not work until the license is renewed, but you would be able to continue using all other features of CockroachDB without interruption. - -To renew an expired license, contact Sales and then [set](enterprise-licensing.html#set-the-trial-or-enterprise-license-key) the new license. - -## See Also - -- [`SET CLUSTER SETTING`](set-cluster-setting.html) -- [`SHOW CLUSTER SETTING`](show-cluster-setting.html) -- [Enterprise Trial –– Get Started](get-started-with-enterprise-trial.html) diff --git a/src/current/v2.0/experimental-audit.md b/src/current/v2.0/experimental-audit.md deleted file mode 100644 index 3e2b67aa3fe..00000000000 --- a/src/current/v2.0/experimental-audit.md +++ /dev/null @@ -1,124 +0,0 @@ ---- -title: EXPERIMENTAL_AUDIT -summary: Use the EXPERIMENTAL_AUDIT subcommand to turn SQL audit logging on or off for a table. -toc: true ---- - -`EXPERIMENTAL_AUDIT` is a subcommand of [`ALTER TABLE`](alter-table.html) that is used to turn [SQL audit logging](sql-audit-logging.html) on or off for a table. 
- -The audit logs contain detailed information about queries being executed against your system, including: - -- Full text of the query (which may include personally identifiable information (PII)) -- Date/Time -- Client address -- Application name - -For a detailed description of exactly what is logged, see the [Audit Log File Format](#audit-log-file-format) section below. - -{% include {{ page.version.version }}/misc/experimental-warning.md %} - - -## Synopsis - -
      - {% include {{ page.version.version }}/sql/diagrams/experimental_audit.html %} -
      - -## Required Privileges - -Only the `root` user can enable audit logs on a table. - -## Parameters - -| Parameter | Description | -|--------------+----------------------------------------------------------| -| `table_name` | The name of the table you want to create audit logs for. | -| `READ` | Log all table reads to the audit log file. | -| `WRITE` | Log all table writes to the audit log file. | -| `OFF` | Turn off audit logging. | - -{{site.data.alerts.callout_info}} -As of version 2.0, this command logs all reads and writes, and both the READ and WRITE parameters are required (as shown in the examples below). In a future release, this should change to allow logging only reads, only writes, or both. -{{site.data.alerts.end}} - -## Audit Log File Format - -The audit log file format is as shown below. The numbers above each column are not part of the format; they correspond to the descriptions that follow. - -~~~ -[1] [2] [3] [4] [5a] [5b] [5c] [6] [7a] [7b] [7c] [7d] [7e] [7f] [7g] [7h] -I180211 07:30:48.832004 317 sql/exec_log.go:90 [client=127.0.0.1:62503, user=root, n1] 13 exec "cockroach" {"ab"[53]:READ} "SELECT nonexistent FROM ab" {} 0.123 12 ERROR -~~~ - -1. Date -2. Time (in UTC) -3. Goroutine ID - this column is used for troubleshooting CockroachDB and may change its meaning at any time -4. Where the log line was generated -5. Logging tags - - a. Client address - - b. Username - - c. Node ID -6. Log entry counter -7. Log message: - - a. Label indicating where the data was generated (useful for troubleshooting) - - b. Current value of the [`application_name`](set-vars.html) session setting - - c. Logging trigger: - - The list of triggering tables and access modes for audit logs, since only certain (read/write) activities are added to the audit log - - d. Full text of the query (Note: May contain PII) - - e. Placeholder values, if any - - f. Query execution time (in milliseconds) - - g. Number of rows produced (e.g., for `SELECT`) or processed (e.g., for `INSERT` or `UPDATE`). - - h. Status of the query - - `OK` for success - - `ERROR` otherwise - -## Audit Log File Storage Location - -By default, audit logs are stored in the same directory as the other logs generated by CockroachDB. - -To store the audit log files in a specific directory, pass the `--sql-audit-dir` flag to [`cockroach start`](start-a-node.html). - -{{site.data.alerts.callout_success}} -If your deployment requires particular lifecycle and access policies for audit log files, point `--sql-audit-dir` at a directory that has permissions set so that only CockroachDB can create/delete files. -{{site.data.alerts.end}} - -## Examples - -### Turn on audit logging - -Let's say you have a `customers` table that contains personally identifiable information (PII). To turn on audit logs for that table, run the following command: - -{% include copy-clipboard.html %} -~~~ sql -ALTER TABLE customers EXPERIMENTAL_AUDIT SET READ WRITE; -~~~ - -Now, every access of customer data is added to the audit log with a line that looks like the following: - -~~~ -I180211 07:30:48.832004 317 sql/exec_log.go:90 [client=127.0.0.1:62503,user=root,n1] 13 exec "cockroach" {"customers"[53]:READ} "SELECT * FROM customers" {} 123.45 12 OK -I180211 07:30:48.832004 317 sql/exec_log.go:90 [client=127.0.0.1:62503,user=root,n1] 13 exec "cockroach" {"customers"[53]:READ} "SELECT nonexistent FROM customers" {} 0.123 12 ERROR -~~~ - -To turn on auditing for more than one table, issue a separate `ALTER` statement for each table. 
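For instance, assuming a second table `orders` alongside `customers`, you would issue one statement per table:

{% include copy-clipboard.html %}
~~~ sql
ALTER TABLE customers EXPERIMENTAL_AUDIT SET READ WRITE;
~~~

{% include copy-clipboard.html %}
~~~ sql
ALTER TABLE orders EXPERIMENTAL_AUDIT SET READ WRITE;
~~~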
- -For a description of the log file format, see the [Audit Log File Format](#audit-log-file-format) section. - -{{site.data.alerts.callout_success}} -For a more detailed example, see [SQL Audit Logging](sql-audit-logging.html). -{{site.data.alerts.end}} - -### Turn off audit logging - -To turn off logging, issue the following command: - -{% include copy-clipboard.html %} -~~~ sql -ALTER TABLE customers EXPERIMENTAL_AUDIT SET OFF; -~~~ - -## See Also - -- [SQL Audit Logging](sql-audit-logging.html) -- [`ALTER TABLE`](alter-table.html) -- [`cockroach start` logging flags](start-a-node.html) diff --git a/src/current/v2.0/explain.md b/src/current/v2.0/explain.md deleted file mode 100644 index 1a22a0cce68..00000000000 --- a/src/current/v2.0/explain.md +++ /dev/null @@ -1,393 +0,0 @@ ---- -title: EXPLAIN -summary: The EXPLAIN statement provides information you can use to optimize SQL queries. -toc: true ---- - -The `EXPLAIN` [statement](sql-statements.html) returns CockroachDB's query plan for an [explainable statement](#explainable-statements). You can then use this information to optimize the query. - - -## Explainable Statements - -You can `EXPLAIN` on the following statements: - -- [`ALTER USER`](sql-grammar.html#alter_user_stmt), [`ALTER TABLE`](alter-table.html), [`ALTER INDEX`](alter-index.html), [`ALTER VIEW`](alter-view.html), [`ALTER DATABASE`](alter-database.html), [`ALTER SEQUENCE`](alter-sequence.html) -- [`BACKUP`](backup.html) -- [`CANCEL JOB`](cancel-job.html), [`CANCEL QUERY`](cancel-query.html) -- [`CREATE DATABASE`](create-database.html), [`CREATE INDEX`](create-index.html), [`CREATE TABLE`](create-table.html), [`CREATE TABLE AS`](create-table-as.html), [`CREATE USER`](create-user.html), [`CREATE VIEW`](create-view.html), [`CREATE SEQUENCE`](create-sequence.html) -- [`DELETE`](delete.html) -- [`DROP DATABASE`](drop-database.html), [`DROP INDEX`](drop-index.html), [`DROP SEQUENCE`](drop-sequence.html), [`DROP TABLE`](drop-table.html), [`DROP USER`](drop-user.html), [`DROP VIEW`](drop-view.html) -- [`EXECUTE`](sql-grammar.html#execute_stmt) -- `EXPLAIN` -- [`IMPORT`](import.html) -- [`INSERT`](insert.html) -- [`PAUSE JOB`](pause-job.html) -- [`RESET`](reset-vars.html) -- [`RESTORE`](restore.html) -- [`RESUME JOB`](resume-job.html) -- [`SELECT`](select-clause.html) and any [selection query](selection-queries.html) -- [`SET`](set-vars.html) -- [`SET CLUSTER SETTING`](set-cluster-setting.html) -- [`SHOW BACKUP`](show-backup.html), [`SHOW COLUMNS`](show-columns.html), [`SHOW CONSTRAINTS`](show-constraints.html), [`SHOW CREATE TABLE`](show-create-table.html), [`SHOW CREATE VIEW`](show-create-view.html), [`SHOW CREATE SEQUENCE`](show-create-sequence.html), [`SHOW CLUSTER SETTING`](show-cluster-setting.html), [`SHOW DATABASES`](show-databases.html), [`SHOW GRANTS`](show-grants.html), [`SHOW INDEX`](show-index.html), [`SHOW JOBS`](show-jobs.html), [`SHOW QUERIES`](show-queries.html), [`SHOW SESSIONS`](show-sessions.html), [`SHOW TABLES`](show-tables.html), [`SHOW TRACE`](show-trace.html), [`SHOW USERS`](show-users.html), [`SHOW HISTOGRAM`](sql-grammar.html#show_histogram_stmt) -- [`UPDATE`](update.html) -- [`UPSERT`](upsert.html) - -## Query Optimization - -Using `EXPLAIN`'s output, you can optimize your queries by taking the following points into consideration: - -- Queries with fewer levels execute more quickly. Restructuring queries to require fewer levels of processing will generally improve performance. - -- Avoid scanning an entire table, which is the slowest way to access data. 
You can avoid this by [creating indexes](indexes.html) that contain at least one of the columns that the query is filtering in its `WHERE` clause. - -You can find out if your queries are performing entire table scans by using `EXPLAIN` to see which: - -- Indexes the query uses; shown as the **Description** value of rows with the **Field** value of `table` - -- Key values in the index are being scanned; shown as the **Description** value of rows with the **Field** value of `spans` - -For more information, see [Find the Indexes and Key Ranges a Query Uses](#find-the-indexes-and-key-ranges-a-query-uses). - -## Synopsis - -
      {% include {{ page.version.version }}/sql/diagrams/explain.html %}
      - -## Required Privileges - -The user requires the appropriate [privileges](privileges.html) for the statement being explained. - -## Parameters - -| Parameter | Description | -|-----------|-------------| -| `EXPRS` | Include the SQL expressions that are involved in each processing stage. | -| `QUALIFY` | Include table names when referencing columns, which might be important to verify the behavior of joins across tables with the same column names.

To list qualified names, `QUALIFY` requires you to include the `EXPRS` option. | -| `METADATA` | Include the columns each level uses in the **Columns** column, as well as **Ordering** detail. | -| `VERBOSE` | Imply the `EXPRS`, `METADATA`, and `QUALIFY` options. | -| `TYPES` | Include the intermediate [data types](data-types.html) CockroachDB uses when evaluating intermediate SQL expressions.

`TYPES` also implies the `METADATA` and `EXPRS` options. | -| `explainable_stmt` | The [statement](#explainable-statements) you want details about. | - -{{site.data.alerts.callout_danger}}EXPLAIN also supports other modes besides query plans; these are useful only to CockroachDB developers and are not documented here.{{site.data.alerts.end}} - -## Success Responses - -Successful `EXPLAIN` statements return tables with the following columns: - -| Column | Description | -|-----------|-------------| -| **Tree** | A tree representation showing the hierarchy of the query plan. | -| **Field** | The name of a parameter relevant to the query plan node immediately above. | -| **Description** | Additional information for the parameter in **Field**. | -| **Columns** | The columns provided to the processes at lower levels of the hierarchy.

      This column displays only if the `METADATA` option is specified or implied. | -| **Ordering** | The order in which results are presented to the processes at each level of the hierarchy, as well as other properties of the result set at each level.

      This column displays only if the `METADATA` option is specified or implied. | - -## Examples - -### Default Query Plans - -By default, `EXPLAIN` includes the least detail about the query plan but can be -useful to find out which indexes and index key ranges are used by a query: - -{% include copy-clipboard.html %} -~~~ sql -> EXPLAIN SELECT * FROM kv WHERE v > 3 ORDER BY v; -~~~ - -~~~ -+-----------+-------+-------------+ -| Tree | Field | Description | -+-----------+-------+-------------+ -| sort | | | -| │ | order | +v | -| └── scan | | | -| | table | kv@primary | -| | spans | ALL | -+-----------+-------+-------------+ -~~~ - -The first column shows the tree structure of the query plan; a set of properties -is displayed for each node in the tree. Most importantly, for scans, you can see -the index that is scanned (`primary` in this case) and what key ranges of the -index you are scanning (in this case, a full table scan). For more -information on indexes and key ranges, see the -[example](#find-the-indexes-and-key-ranges-a-query-uses) below. - -### `EXPRS` Option - -The `EXPRS` option includes SQL expressions that are involved in each processing stage, providing more granular detail about which portion of your query is represented at each level: - -{% include copy-clipboard.html %} -~~~ sql -> EXPLAIN (EXPRS) SELECT * FROM kv WHERE v > 3 ORDER BY v; -~~~ - -~~~ -+-----------+--------+-------------+ -| Tree | Field | Description | -+-----------+--------+-------------+ -| sort | | | -| │ | order | +v | -| └── scan | | | -| | table | kv@primary | -| | spans | ALL | -| | filter | v > 3 | -+-----------+--------+-------------+ -~~~ - -### `METADATA` Option - -The `METADATA` option includes detail about which columns are being used by each -level, as well as properties of the result set on that level: - -{% include copy-clipboard.html %} -~~~ sql -> EXPLAIN (METADATA) SELECT * FROM kv WHERE v > 3 ORDER BY v; -~~~ - -~~~ -+-----------+-------+------+-------+-------------+---------+------------------------------+ -| Tree | Level | Type | Field | Description | Columns | Ordering | -+-----------+-------+------+-------+-------------+---------+------------------------------+ -| sort | 0 | sort | | | (k, v) | k!=NULL; v!=NULL; key(k); +v | -| │ | 0 | | order | +v | | | -| └── scan | 1 | scan | | | (k, v) | k!=NULL; v!=NULL; key(k) | -| | 1 | | table | kv@primary | | | -| | 1 | | spans | ALL | | | -+-----------+-------+------+-------+-------------+---------+------------------------------+ -~~~ - -The **Ordering** column most importantly includes the ordering of the rows at -that level (`+v` in this case), but it also includes other information about the -result set at that level. In this case, CockroachDB was able to deduce that `k` -and `v` cannot be `NULL`, and `k` is a "key", meaning that you cannot have more -than one row with any given value of `k`. 
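The `k!=NULL` and `key(k)` deductions follow from `k` being the primary key of the table behind these examples; a minimal sketch of that table (the same `CREATE TABLE` statement used in [Find the Indexes and Key Ranges a Query Uses](#find-the-indexes-and-key-ranges-a-query-uses)) is:

{% include copy-clipboard.html %}
~~~ sql
> CREATE TABLE kv (k INT PRIMARY KEY, v INT);
~~~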
- -Note that descending (`DESC`) orderings are indicated by the `-` sign: - -{% include copy-clipboard.html %} -~~~ sql -> EXPLAIN (METADATA) SELECT * FROM kv WHERE v > 3 ORDER BY v DESC; -~~~ - -~~~ -+-----------+-------+------+-------+-------------+---------+------------------------------+ -| Tree | Level | Type | Field | Description | Columns | Ordering | -+-----------+-------+------+-------+-------------+---------+------------------------------+ -| sort | 0 | sort | | | (k, v) | k!=NULL; v!=NULL; key(k); -v | -| │ | 0 | | order | -v | | | -| └── scan | 1 | scan | | | (k, v) | k!=NULL; v!=NULL; key(k) | -| | 1 | | table | kv@primary | | | -| | 1 | | spans | ALL | | | -+-----------+-------+------+-------+-------------+---------+------------------------------+ -~~~ - -Another property that is reported in the **Ordering** column is information -about columns that are known to be equal on any row, and "constant" columns -that are known to have the same value on all rows. For example: - -{% include copy-clipboard.html %} -~~~ sql -> EXPLAIN (METADATA) SELECT * FROM abcd JOIN efg ON a=e AND c=1; -~~~ - -~~~ -+-----------+-------+------+----------------+--------------+-----------------------+-------------------------------+ -| Tree | Level | Type | Field | Description | Columns | Ordering | -+-----------+-------+------+----------------+--------------+-----------------------+-------------------------------+ -| join | 0 | join | | | (a, b, c, d, e, f, g) | a=e; c=CONST; a!=NULL; key(a) | -| │ | 0 | | type | inner | | | -| │ | 0 | | equality | (a) = (e) | | | -| │ | 0 | | mergeJoinOrder | +"(a=e)" | | | -| ├── scan | 1 | scan | | | (a, b, c, d) | c=CONST; a!=NULL; key(a); +a | -| │ | 1 | | table | abcd@primary | | | -| │ | 1 | | spans | ALL | | | -| └── scan | 1 | scan | | | (e, f, g) | e!=NULL; key(e); +e | -| | 1 | | table | efg@primary | | | -| | 1 | | spans | ALL | | | -+-----------+-------+------+----------------+--------------+-----------------------+-------------------------------+ -~~~ - -This indicates that on any row, column `a` has the same value with column `e`, -and that all rows have the same value on column `c`. - -### `QUALIFY` Option - -`QUALIFY` uses `
      .` notation for columns in the query plan. However, `QUALIFY` must be used with `EXPRS` to show the SQL values used: - -{% include copy-clipboard.html %} -~~~ sql -> EXPLAIN (EXPRS, QUALIFY) SELECT a.v, b.v FROM t.kv AS a, t.kv AS b; -~~~ - -~~~ -+----------------+----------+-------------+ -| Tree | Field | Description | -+----------------+----------+-------------+ -| render | | | -| │ | render 0 | a.v | -| │ | render 1 | b.v | -| └── join | | | -| │ | type | cross | -| ├── scan | | | -| │ | table | kv@primary | -| │ | spans | ALL | -| └── scan | | | -| | table | kv@primary | -| | spans | ALL | -+----------------+----------+-------------+ -~~~ - -You can contrast this with the same statement not including the `QUALIFY` option to see that the column references are not qualified, which can lead to ambiguity if multiple tables have columns with the same names: - -{% include copy-clipboard.html %} -~~~ sql -> EXPLAIN (EXPRS) SELECT a.v, b.v FROM kv AS a, kv AS b; -~~~ - -~~~ -+-------+--------+----------+-------------+ -| Level | Type | Field | Description | -+-------+--------+----------+-------------+ -| 0 | render | | | -| 0 | | render 0 | v | -| 0 | | render 1 | v | -| 1 | join | | | -| 1 | | type | cross | -| 2 | scan | | | -| 2 | | table | kv@primary | -| 2 | scan | | | -| 2 | | table | kv@primary | -+-------+--------+----------+-------------+ -~~~ - -### `VERBOSE` Option - -The `VERBOSE` option is an alias for the combination of `EXPRS`, `METADATA`, and `QUALIFY` options: - -{% include copy-clipboard.html %} -~~~ sql -> EXPLAIN (VERBOSE) SELECT * FROM kv AS a JOIN kv USING (k) WHERE a.v > 3 ORDER BY a.v DESC; -~~~ - -~~~ -+---------------------+-------+--------+----------------+------------------+-----------------------+------------------------------+ -| Tree | Level | Type | Field | Description | Columns | Ordering | -+---------------------+-------+--------+----------------+------------------+-----------------------+------------------------------+ -| sort | 0 | sort | | | (k, v, v) | k!=NULL; key(k); -v | -| │ | 0 | | order | -v | | | -| └── render | 1 | render | | | (k, v, v) | k!=NULL; key(k) | -| │ | 1 | | render 0 | a.k | | | -| │ | 1 | | render 1 | a.v | | | -| │ | 1 | | render 2 | radu.public.kv.v | | | -| └── join | 2 | join | | | (k, v, k[omitted], v) | k=k; k!=NULL; key(k) | -| │ | 2 | | type | inner | | | -| │ | 2 | | equality | (k) = (k) | | | -| │ | 2 | | mergeJoinOrder | +"(k=k)" | | | -| ├── scan | 3 | scan | | | (k, v) | k!=NULL; v!=NULL; key(k); +k | -| │ | 3 | | table | kv@primary | | | -| │ | 3 | | spans | ALL | | | -| │ | 3 | | filter | v > 3 | | | -| └── scan | 3 | scan | | | (k, v) | k!=NULL; key(k); +k | -| | 3 | | table | kv@primary | | | -| | 3 | | spans | ALL | | | -+---------------------+-------+--------+----------------+------------------+-----------------------+------------------------------+ -~~~ - -### `TYPES` Option - -The `TYPES` mode includes the types of the values used in the query plan, and implies the `METADATA` and `EXPRS` options as well: - -{% include copy-clipboard.html %} -~~~ sql -> EXPLAIN (TYPES) SELECT * FROM kv WHERE v > 3 ORDER BY v; -~~~ - -~~~ -+-----------+-------+------+--------+-----------------------------+----------------+------------------------------+ -| Tree | Level | Type | Field | Description | Columns | Ordering | -+-----------+-------+------+--------+-----------------------------+----------------+------------------------------+ -| sort | 0 | sort | | | (k int, v int) | k!=NULL; v!=NULL; key(k); +v | -| │ | 0 | | order | 
+v | | | -| └── scan | 1 | scan | | | (k int, v int) | k!=NULL; v!=NULL; key(k) | -| | 1 | | table | kv@primary | | | -| | 1 | | spans | ALL | | | -| | 1 | | filter | ((v)[int] > (3)[int])[bool] | | | -+-----------+-------+------+--------+-----------------------------+----------------+------------------------------+ -~~~ - -### Find the Indexes and Key Ranges a Query Uses - -You can use `EXPLAIN` to understand which indexes and key ranges queries use, -which can help you ensure a query isn't performing a full table scan. - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE kv (k INT PRIMARY KEY, v INT); -~~~ - -Because column `v` is not indexed, queries filtering on it alone scan the entire table: - -{% include copy-clipboard.html %} -~~~ sql -> EXPLAIN SELECT * FROM kv WHERE v BETWEEN 4 AND 5; -~~~ - -~~~ -+-------+------+-------+-------------+ -| Level | Type | Field | Description | -+-------+------+-------+-------------+ -| 0 | scan | | | -| 0 | | table | kv@primary | -| 0 | | spans | ALL | -+-------+------+-------+-------------+ -~~~ - -If there were an index on `v`, CockroachDB would be able to avoid scanning the -entire table: - -{% include copy-clipboard.html %} -~~~ sql -> CREATE INDEX v ON kv (v); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> EXPLAIN SELECT * FROM kv WHERE v BETWEEN 4 AND 5; -~~~ - -~~~ -+------+-------+-------------+ -| Tree | Field | Description | -+------+-------+-------------+ -| scan | | | -| | table | kv@v | -| | spans | /4-/6 | -+------+-------+-------------+ -~~~ - -Now, only part of the index `v` is getting scanned, specifically the key range starting -at (and including) 4 and stopping before 6. - -## See Also - -- [`ALTER TABLE`](alter-table.html) -- [`ALTER SEQUENCE`](alter-sequence.html) -- [`BACKUP`](backup.html) -- [`CANCEL JOB`](cancel-job.html) -- [`CREATE DATABASE`](create-database.html) -- [`DROP DATABASE`](drop-database.html) -- [`EXECUTE`](sql-grammar.html#execute_stmt) -- [`IMPORT`](import.html) -- [Indexes](indexes.html) -- [`INSERT`](insert.html) -- [`PAUSE JOB`](pause-job.html) -- [`RESET`](reset-vars.html) -- [`RESTORE`](restore.html) -- [`RESUME JOB`](resume-job.html) -- [`SELECT`](select-clause.html) -- [Selection Queries](selection-queries.html) -- [`SET`](set-vars.html) -- [`SET CLUSTER SETTING`](set-cluster-setting.html) -- [`SHOW COLUMNS`](show-columns.html) -- [`UPDATE`](update.html) -- [`UPSERT`](upsert.html) diff --git a/src/current/v2.0/file-an-issue.md b/src/current/v2.0/file-an-issue.md deleted file mode 100644 index 6151e1d21ae..00000000000 --- a/src/current/v2.0/file-an-issue.md +++ /dev/null @@ -1,65 +0,0 @@ ---- -title: File an Issue -summary: Learn how to file a GitHub issue with CockroachDB. -toc: false ---- - -If you've tried to [troubleshoot](troubleshooting-overview.html) an issue yourself, have [reached out for help](support-resources.html), and are still stumped, you can file an issue in GitHub. - -To file an issue in GitHub, we need the following information: - -1. A summary of the issue. - -2. The steps to reproduce the issue. - -3. The result you expected. - -4. The result that actually occurred. - -5. The first few lines of the log file from each node in the cluster in a timeframe as close as possible to reproducing the issue. 
On most Unix-based systems running with defaults, you can get this information using the following command: - - ~~~ shell - $ grep -F '[config]' cockroach-data/logs/cockroach.log - ~~~~ - {{site.data.alerts.callout_info}}You might need to replace cockroach-data/logs with the location of your logs.{{site.data.alerts.end}} - If the logs are not available, please include the output of `cockroach version` for each node in the cluster. - -### Template - -You can use this as a template for [filing an issue in GitHub](https://github.com/cockroachdb/cockroach/issues/new): - -~~~ - -## Summary - - - -## Steps to reproduce - -1. -2. -3. - -## Expected Result - - - -## Actual Result - - - -## Log files/version - -### Node 1 - - - -### Node 2 - - - -### Node 3 - - - -~~~ diff --git a/src/current/v2.0/float.md b/src/current/v2.0/float.md deleted file mode 100644 index 1734d9a2328..00000000000 --- a/src/current/v2.0/float.md +++ /dev/null @@ -1,106 +0,0 @@ ---- -title: FLOAT -summary: The FLOAT data type stores inexact, floating-point numbers with up to 17 digits in total and at least one digit to the right of the decimal point. -toc: true ---- - -CockroachDB supports various inexact, floating-point number [data types](data-types.html) with up to 17 digits of decimal precision. - -They are handled internally using the [standard double-precision (64-bit binary-encoded) IEEE754 format](https://en.wikipedia.org/wiki/IEEE_floating_point). - - -## Names and Aliases - -Name | Aliases ------|-------- -`FLOAT` | None -`REAL` | `FLOAT4` -`DOUBLE PRECISION` | `FLOAT8` - -## Syntax - -A constant value of type `FLOAT` can be entered as a [numeric literal](sql-constants.html#numeric-literals). -For example: `1.414` or `-1234`. - -The special IEEE754 values for positive infinity, negative infinity -and [NaN (Not-a-Number)](https://en.wikipedia.org/wiki/NaN) cannot be -entered using numeric literals directly and must be converted using an -[interpreted literal](sql-constants.html#interpreted-literals) or an -[explicit conversion](scalar-expressions.html#explicit-type-coercions) -from a string literal instead. - -The following values are recognized: - -| Syntax | Value | -|----------------------------------------|---------------------------------------------------------| -| `inf`, `infinity`, `+inf`, `+infinity` | +∞ | -| `-inf`, `-infinity` | -∞ | -| `nan` | [NaN (Not-a-Number)](https://en.wikipedia.org/wiki/NaN) | - -For example: - -- `FLOAT '+Inf'` -- `'-Inf'::FLOAT` -- `CAST('NaN' AS FLOAT)` - -## Size - -A `FLOAT` column supports values up to 8 bytes in width, but the total storage size is likely to be larger due to CockroachDB metadata. 
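As a quick sketch of the special-value conversions described in [Syntax](#syntax) above (assuming any SQL shell connected to a v2.0 cluster):

{% include copy-clipboard.html %}
~~~ sql
> SELECT FLOAT '+Inf', '-Inf'::FLOAT, CAST('NaN' AS FLOAT);
~~~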
- -## Examples - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE floats (a FLOAT PRIMARY KEY, b REAL, c DOUBLE PRECISION); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SHOW COLUMNS FROM floats; -~~~ - -~~~ -+-------+------------------+---------+---------+-------------+ -| Field | Type | Null | Default | Indices | -+-------+------------------+---------+---------+-------------+ -| a | FLOAT | false | NULL | {"primary"} | -| b | REAL | true | NULL | {} | -| c | DOUBLE PRECISION | true | NULL | {} | -+-------+------------------+---------+---------+-------------+ -(3 rows) -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO floats VALUES (1.012345678901, 2.01234567890123456789, CAST('+Inf' AS FLOAT)); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM floats; -~~~ - -~~~ -+----------------+--------------------+------+ -| a | b | c | -+----------------+--------------------+------+ -| 1.012345678901 | 2.0123456789012346 | +Inf | -+----------------+--------------------+------+ -(1 row) -# Note that the value in "b" has been limited to 17 digits. -~~~ - -## Supported Casting & Conversion - -`FLOAT` values can be [cast](data-types.html#data-type-conversions-casts) to any of the following data types: - -Type | Details ------|-------- -`INT` | Truncates decimal precision and requires values to be between -2^63 and 2^63-1 -`DECIMAL` | Causes an error to be reported if the value is NaN or +/- Inf. -`BOOL` | **0** converts to `false`; all other values convert to `true` -`STRING` | -- - -## See Also - -[Data Types](data-types.html) diff --git a/src/current/v2.0/foreign-key.md b/src/current/v2.0/foreign-key.md deleted file mode 100644 index 0fc35e5f22e..00000000000 --- a/src/current/v2.0/foreign-key.md +++ /dev/null @@ -1,613 +0,0 @@ ---- -title: Foreign Key Constraint -summary: The Foreign Key constraint specifies a column can contain only values exactly matching existing values from the column it references. -toc: true ---- - -The Foreign Key [constraint](constraints.html) specifies that all of a column's values must exactly match existing values from the column it references, enforcing referential integrity. - -For example, if you create a foreign key on `orders.customer` that references `customers.id`: - -- Each value inserted or updated in `orders.customer` must exactly match a value in `customers.id`. -- Values in `customers.id` that are referenced by `orders.customer` cannot be deleted or updated. However, `customers.id` values that _aren't_ present in `orders.customer` can be. - -{{site.data.alerts.callout_success}}If you plan to use Foreign Keys in your schema, consider using interleaved tables, which can dramatically improve query performance.{{site.data.alerts.end}} - - -## Details - -### Rules for Creating Foreign Keys - -**Foreign Key Columns** - -- Foreign key columns must use their referenced column's [type](data-types.html). -- Each column cannot belong to more than 1 Foreign Key constraint. -- Cannot be a [computed column](computed-columns.html). -- Foreign key columns must be [indexed](indexes.html). This is required because updates and deletes on the referenced table will need to search the referencing table for any matching records to ensure those operations would not violate existing references. In practice, such indexes are likely also needed by applications using these tables, since finding all records which belong to some entity, for example all orders for a given customer, is very common. 
- - To meet this requirement when creating a new table, there are a few options: - - Create indexes explicitly using the [`INDEX`](create-table.html#create-a-table-with-secondary-and-inverted-indexes-new-in-v2-0) clause of `CREATE TABLE`. - - Rely on indexes created by the [Primary Key](primary-key.html) or [Unique](unique.html) constraints. - - Have CockroachDB automatically create an index of the foreign key columns for you. However, it's important to note that if you later remove the Foreign Key constraint, this automatically created index _is not_ removed. - - Using the foreign key columns as the prefix of an index's columns also satisfies the requirement for an index. For example, if you create foreign key columns `(A, B)`, an index of columns `(A, B, C)` satisfies the requirement for an index. - - To meet this requirement when adding the Foreign Key constraint to an existing table, if the columns you want to constrain are not already indexed, use [`CREATE INDEX`](create-index.html) to index them and only then use the [`ADD CONSTRAINT`](add-constraint.html) statement to add the Foreign Key constraint to the columns. - -**Referenced Columns** - -- Referenced columns must contain only unique sets of values. This means the `REFERENCES` clause must use exactly the same columns as a [Unique](unique.html) or [Primary Key](primary-key.html) constraint on the referenced table. For example, the clause `REFERENCES tbl (C, D)` requires `tbl` to have either the constraint `UNIQUE (C, D)` or `PRIMARY KEY (C, D)`. -- In the `REFERENCES` clause, if you specify a table but no columns, CockroachDB references the table's primary key. In these cases, the Foreign Key constraint and the referenced table's primary key must contain the same number of columns. - -### _NULL_ Values - -Single-column foreign keys accept _NULL_ values. - -Multiple-column foreign keys only accept _NULL_ values in these scenarios: - -- The row you're ultimately referencing—determined by the statement's other values—contains _NULL_ as the value of the referenced column (i.e., _NULL_ is valid from the perspective of referential integrity) -- The write contains _NULL_ values for all foreign key columns - -For example, if you have a Foreign Key constraint on columns `(A, B)` and try to insert `(1, NULL)`, the write would fail unless the row with the value `1` for `(A)` contained a _NULL_ value for `(B)`. However, inserting `(NULL, NULL)` would succeed. - -However, allowing _NULL_ values in either your foreign key or referenced columns can degrade their referential integrity. To avoid this, you can use the [Not Null constraint](not-null.html) on both sets of columns when [creating your tables](create-table.html). (The Not Null constraint cannot be added to existing tables.) - -### Foreign Key Actions New in v2.0 - -When you set a foreign key constraint, you can control what happens to the constrained column when the column it's referencing (the foreign key) is deleted or updated. - -Parameter | Description -----------|------------ -`ON DELETE NO ACTION` | _Default action._ If there are any existing references to the key being deleted, the transaction will fail at the end of the statement. The key can be updated, depending on the `ON UPDATE` action.

      Alias: `ON DELETE RESTRICT` -`ON UPDATE NO ACTION` | _Default action._ If there are any existing references to the key being updated, the transaction will fail at the end of the statement. The key can be deleted, depending on the `ON DELETE` action.

      Alias: `ON UPDATE RESTRICT` -`ON DELETE RESTRICT` / `ON UPDATE RESTRICT` | `RESTRICT` and `NO ACTION` are currently equivalent until options for deferring constraint checking are added. To set an existing foreign key action to `RESTRICT`, the foreign key constraint must be dropped and recreated. -`ON DELETE CASCADE` / `ON UPDATE CASCADE` | When a referenced foreign key is deleted or updated, all rows referencing that key are deleted or updated, respectively. If there are other alterations to the row, such as a `SET NULL` or `SET DEFAULT`, the delete will take precedence.

      Note that `CASCADE` does not list objects it drops or updates, so it should be used cautiously. -`ON DELETE SET NULL` / `ON UPDATE SET NULL` | When a referenced foreign key is deleted or updated, respectively, the columns of all rows referencing that key will be set to `NULL`. The column must allow `NULL` or this update will fail. -`ON DELETE SET DEFAULT` / `ON UPDATE SET DEFAULT` | When a referenced foreign key is deleted or updated, respectively, the columns of all rows referencing that key are set to the default value for that column. If the default value for the column is null, this will have the same effect as `ON DELETE SET NULL` or `ON UPDATE SET NULL`. The default value must still conform with all other constraints, such as `UNIQUE`. - -### Performance - -Because the Foreign Key constraint requires per-row checks on two tables, statements involving foreign key or referenced columns can take longer to execute. You're most likely to notice this with operations like bulk inserts into the table with the foreign keys. - -We're currently working to improve the performance of these statements, though. - -{{site.data.alerts.callout_success}}You can improve the performance of some statements that use Foreign Keys by also using INTERLEAVE IN PARENT.{{site.data.alerts.end}} - -## Syntax - -Foreign Key constraints can be defined at the [table level](#table-level). However, if you only want the constraint to apply to a single column, it can be applied at the [column level](#column-level). - -{{site.data.alerts.callout_info}}You can also add the Foreign Key constraint to existing tables through ADD CONSTRAINT.{{site.data.alerts.end}} - -### Column Level - -
      {% include {{ page.version.version }}/sql/diagrams/foreign_key_column_level.html %}
      - -| Parameter | Description | -|-----------|-------------| -| `table_name` | The name of the table you're creating. | -| `column_name` | The name of the foreign key column. | -| `column_type` | The foreign key column's [data type](data-types.html). | -| `parent_table` | The name of the table the foreign key references. | -| `ref_column_name` | The name of the column the foreign key references.

      If you do not include the `ref_column_name` you want to reference from the `parent_table`, CockroachDB uses the first column of `parent_table`'s primary key. -| `column_constraints` | Any other column-level [constraints](constraints.html) you want to apply to this column. | -| `column_def` | Definitions for any other columns in the table. | -| `table_constraints` | Any table-level [constraints](constraints.html) you want to apply. | - -**Example** - -~~~ sql -> CREATE TABLE IF NOT EXISTS orders ( - id INT PRIMARY KEY, - customer INT NOT NULL REFERENCES customers (id) ON DELETE CASCADE, - orderTotal DECIMAL(9,2), - INDEX (customer) - ); -~~~ -{{site.data.alerts.callout_danger}}CASCADE does not list objects it drops or updates, so it should be used cautiously.{{site.data.alerts.end}} - -### Table Level - -
      {% include {{ page.version.version }}/sql/diagrams/foreign_key_table_level.html %}
      - -| Parameter | Description | -|-----------|-------------| -| `table_name` | The name of the table you're creating. | -| `column_def` | Definitions for the table's columns. | -| `name` | The name of the constraint. | -| `fk_column_name` | The name of the foreign key column. | -| `parent_table` | The name of the table the foreign key references. | -| `ref_column_name` | The name of the column the foreign key references.

      If you do not include the `column_name` you want to reference from the `parent_table`, CockroachDB uses the first column of `parent_table`'s primary key. -| `table_constraints` | Any other table-level [constraints](constraints.html) you want to apply. | - -**Example** - -~~~ sql -CREATE TABLE packages ( - customer INT, - "order" INT, - id INT, - address STRING(50), - delivered BOOL, - delivery_date DATE, - PRIMARY KEY (customer, "order", id), - CONSTRAINT fk_order FOREIGN KEY (customer, "order") REFERENCES orders - ) INTERLEAVE IN PARENT orders (customer, "order") - ; -~~~ - -## Usage Examples - -### Use a Foreign Key Constraint with Default Actions - -In this example, we'll create a table with a foreign key constraint with the default [actions](#foreign-key-actions-new-in-v2-0) (`ON UPDATE NO ACTION ON DELETE NO ACTION`). - -First, create the referenced table: - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE customers (id INT PRIMARY KEY, email STRING UNIQUE); -~~~ - -Next, create the referencing table: - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE IF NOT EXISTS orders ( - id INT PRIMARY KEY, - customer INT NOT NULL REFERENCES customers (id), - orderTotal DECIMAL(9,2), - INDEX (customer) - ); -~~~ - -Let's insert a record into each table: - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO customers VALUES (1001, 'a@co.tld'), (1234, 'info@cockroachlabs.com'); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO orders VALUES (1, 1002, 29.99); -~~~ -~~~ -pq: foreign key violation: value [1002] not found in customers@primary [id] -~~~ - -The second record insertion returns an error because the customer `1002` doesn't exist in the referenced table. - -Let's insert a record into the referencing table and try to update the referenced table: - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO orders VALUES (1, 1001, 29.99); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> UPDATE customers SET id = 1002 WHERE id = 1001; -~~~ -~~~ -pq: foreign key violation: value(s) [1001] in columns [id] referenced in table "orders" -~~~ - -The update to the referenced table returns an error because `id = 1001` is referenced and the default [foreign key action](#foreign-key-actions-new-in-v2-0) is enabled (`ON UPDATE NO ACTION`). However, `id = 1234` is not referenced and can be updated: - -{% include copy-clipboard.html %} -~~~ sql -> UPDATE customers SET id = 1111 WHERE id = 1234; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM customers; -~~~ -~~~ -+------+------------------------+ -| id | email | -+------+------------------------+ -| 1001 | a@co.tld | -| 1111 | info@cockroachlabs.com | -+------+------------------------+ -~~~ - -Now let's try to delete a referenced row: - -{% include copy-clipboard.html %} -~~~ sql -> DELETE FROM customers WHERE id = 1001; -~~~ -~~~ -pq: foreign key violation: value(s) [1001] in columns [id] referenced in table "orders" -~~~ - -Similarly, the deletion returns an error because `id = 1001` is referenced and the default [foreign key action](#foreign-key-actions-new-in-v2-0) is enabled (`ON DELETE NO ACTION`). 
However, `id = 1111` is not referenced and can be deleted: - -{% include copy-clipboard.html %} -~~~ sql -> DELETE FROM customers WHERE id = 1111; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM customers; -~~~ -~~~ -+------+----------+ -| id | email | -+------+----------+ -| 1001 | a@co.tld | -+------+----------+ -~~~ - -### Use a Foreign Key Constraint with `CASCADE` New in v2.0 - -In this example, we'll create a table with a foreign key constraint with the [foreign key actions](#foreign-key-actions-new-in-v2-0) `ON UPDATE CASCADE` and `ON DELETE CASCADE`. - -First, create the referenced table: - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE customers_2 ( - id INT PRIMARY KEY - ); -~~~ - -Then, create the referencing table: - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE orders_2 ( - id INT PRIMARY KEY, - customer_id INT REFERENCES customers_2(id) ON UPDATE CASCADE ON DELETE CASCADE - ); -~~~ - -Insert a few records into the referenced table: - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO customers_2 VALUES (1), (2), (3); -~~~ - -Insert some records into the referencing table: - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO orders_2 VALUES (100,1), (101,2), (102,3), (103,1); -~~~ - -Now, let's update an `id` in the referenced table: - -{% include copy-clipboard.html %} -~~~ sql -> UPDATE customers_2 SET id = 23 WHERE id = 1; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM customers_2; -~~~ -~~~ -+----+ -| id | -+----+ -| 2 | -| 3 | -| 23 | -+----+ -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM orders_2; -~~~ -~~~ -+-----+--------------+ -| id | customers_id | -+-----+--------------+ -| 100 | 23 | -| 101 | 2 | -| 102 | 3 | -| 103 | 23 | -+-----+--------------+ -~~~ - -When `id = 1` was updated to `id = 23` in `customers_2`, the update propagated to the referencing table `orders_2`. - -Similarly, a deletion will cascade. Let's delete `id = 23` from `customers_2`: - -{% include copy-clipboard.html %} -~~~ sql -> DELETE FROM customers_2 WHERE id = 23; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM customers_2; -~~~ -~~~ -+----+ -| id | -+----+ -| 2 | -| 3 | -+----+ -~~~ - -Let's check to make sure the rows in `orders_2` where `customers_id = 23` were also deleted: - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM orders_2; -~~~ -~~~ -+-----+--------------+ -| id | customers_id | -+-----+--------------+ -| 101 | 2 | -| 102 | 3 | -+-----+--------------+ -~~~ - -### Use a Foreign Key Constraint with `SET NULL` New in v2.0 - -In this example, we'll create a table with a foreign key constraint with the [foreign key actions](#foreign-key-actions-new-in-v2-0) `ON UPDATE SET NULL` and `ON DELETE SET NULL`. 
- -First, create the referenced table: - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE customers_3 ( - id INT PRIMARY KEY - ); -~~~ - -Then, create the referencing table: - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE orders_3 ( - id INT PRIMARY KEY, - customer_id INT REFERENCES customers_3(id) ON UPDATE SET NULL ON DELETE SET NULL - ); -~~~ - -Insert a few records into the referenced table: - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO customers_3 VALUES (1), (2), (3); -~~~ - -Insert some records into the referencing table: - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO orders_3 VALUES (100,1), (101,2), (102,3), (103,1); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM customers_3; -~~~ -~~~ -+-----+-------------+ -| id | customer_id | -+-----+-------------+ -| 100 | 1 | -| 101 | 2 | -| 102 | 3 | -| 103 | 1 | -+-----+-------------+ -~~~ - -Now, let's update an `id` in the referenced table: - -{% include copy-clipboard.html %} -~~~ sql -> UPDATE customers_3 SET id = 23 WHERE id = 1; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM customers_3; -~~~ -~~~ -+----+ -| id | -+----+ -| 2 | -| 3 | -| 23 | -+----+ -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM orders_3; -~~~ -~~~ -+-----+-------------+ -| id | customer_id | -+-----+-------------+ -| 100 | NULL | -| 101 | 2 | -| 102 | 3 | -| 103 | NULL | -+-----+-------------+ -~~~ - -When `id = 1` was updated to `id = 23` in `customers_3`, the referencing `customer_id` was set to `NULL`. - -Similarly, a deletion will set the referencing `customer_id` to `NULL`. Let's delete `id = 2` from `customers_3`: - -{% include copy-clipboard.html %} -~~~ sql -> DELETE FROM customers_3 WHERE id = 2; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM customers_3; -~~~ -~~~ -+----+ -| id | -+----+ -| 3 | -| 23 | -+----+ -~~~ - -Let's check to make sure the row in `orders_3` where `customers_id = 2` was updated to `NULL`: - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM orders_3; -~~~ -~~~ -+-----+-------------+ -| id | customer_id | -+-----+-------------+ -| 100 | NULL | -| 101 | NULL | -| 102 | 3 | -| 103 | NULL | -+-----+-------------+ -~~~ - -### Use a Foreign Key Constraint with `SET DEFAULT` New in v2.0 - -In this example, we'll create a table with a foreign key constraint with the [foreign key actions](#foreign-key-actions-new-in-v2-0) `ON UPDATE SET DEFAULT` and `ON DELETE SET DEFAULT`. 
- -First, create the referenced table: - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE customers_4 ( - id INT PRIMARY KEY - ); -~~~ - -Then, create the referencing table with the `DEFAULT` value for `customer_id` set to `9999`: - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE orders_4 ( - id INT PRIMARY KEY, - customer_id INT DEFAULT 9999 REFERENCES customers_4(id) ON UPDATE SET DEFAULT ON DELETE SET DEFAULT - ); -~~~ - -Insert a few records into the referenced table: - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO customers_4 VALUES (1), (2), (3), (9999); -~~~ - -Insert some records into the referencing table: - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO orders_4 VALUES (100,1), (101,2), (102,3), (103,1); -~~~ -~~~ -+-----+-------------+ -| id | customer_id | -+-----+-------------+ -| 100 | 1 | -| 101 | 2 | -| 102 | 3 | -| 103 | 1 | -+-----+-------------+ -~~~ - -Now, let's update an `id` in the referenced table: - -{% include copy-clipboard.html %} -~~~ sql -> UPDATE customers_4 SET id = 23 WHERE id = 1; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM customers_4; -~~~ -~~~ -+------+ -| id | -+------+ -| 2 | -| 3 | -| 23 | -| 9999 | -+------+ -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM orders_4; -~~~ -~~~ -+-----+-------------+ -| id | customer_id | -+-----+-------------+ -| 100 | 9999 | -| 101 | 2 | -| 102 | 3 | -| 103 | 9999 | -+-----+-------------+ -~~~ - -When `id = 1` was updated to `id = 23` in `customers_4`, the referencing `customer_id` was set to `DEFAULT` (i.e., `9999`). You can see this in the first and last rows of `orders_4`, where `id = 100` and the `customer_id` is now `9999` - -Similarly, a deletion will set the referencing `customer_id` to the `DEFAULT` value. Let's delete `id = 2` from `customers_4`: - -{% include copy-clipboard.html %} -~~~ sql -> DELETE FROM customers_4 WHERE id = 2; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM customers_4; -~~~ -~~~ -+------+ -| id | -+------+ -| 3 | -| 23 | -| 9999 | -+------+ -~~~ - -Let's check to make sure the corresponding `customer_id` value to `id = 101`, was updated to the `DEFAULT` value (i.e., `9999`) in `orders_4`: - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM orders_4; -~~~ -~~~ -+-----+-------------+ -| id | customer_id | -+-----+-------------+ -| 100 | 9999 | -| 101 | 9999 | -| 102 | 3 | -| 103 | 9999 | -+-----+-------------+ -~~~ - -## See Also - -- [Constraints](constraints.html) -- [`DROP CONSTRAINT`](drop-constraint.html) -- [`ADD CONSTRAINT`](add-constraint.html) -- [Check constraint](check.html) -- [Default Value constraint](default-value.html) -- [Not Null constraint](not-null.html) -- [Primary Key constraint](primary-key.html) -- [Unique constraint](unique.html) -- [`SHOW CONSTRAINTS`](show-constraints.html) diff --git a/src/current/v2.0/frequently-asked-questions.md b/src/current/v2.0/frequently-asked-questions.md deleted file mode 100644 index 2b54d9424ec..00000000000 --- a/src/current/v2.0/frequently-asked-questions.md +++ /dev/null @@ -1,169 +0,0 @@ ---- -title: Frequently Asked Questions -summary: CockroachDB FAQ - What is CockroachDB? How does it work? What makes it different from other databases? -tags: postgres, cassandra, google cloud spanner -toc: true ---- - -## What is CockroachDB? - -CockroachDB is a [distributed SQL](https://www.cockroachlabs.com/blog/what-is-distributed-sql/) database built on a transactional and strongly-consistent key-value store. 
It **scales** horizontally; **survives** disk, machine, rack, and even datacenter failures with minimal latency disruption and no manual intervention; supports **strongly-consistent** ACID transactions; and provides a familiar **SQL** API for structuring, manipulating, and querying data. - -CockroachDB is inspired by Google's [Spanner](http://research.google.com/archive/spanner.html) and [F1](http://research.google.com/pubs/pub38125.html) technologies, and it's completely [open source](https://github.com/cockroachdb/cockroach). - -## When is CockroachDB a good choice? - -CockroachDB is well suited for applications that require reliable, available, and correct data, and millisecond response times, regardless of scale. It is built to automatically replicate, rebalance, and recover with minimal configuration and operational overhead. Specific use cases include: - -- Distributed or replicated OLTP -- Multi-datacenter deployments -- Multi-region deployments -- Cloud migrations -- Infrastructure initiatives built for the cloud - -CockroachDB returns single-row reads in 2ms or less and single-row writes in 4ms or less, and supports a variety of [SQL and operational tuning practices](performance-tuning.html) for optimizing query performance. However, CockroachDB is not yet suitable for heavy analytics / OLAP. - -## How easy is it to install CockroachDB? - -It's as easy as downloading a binary on OS X and Linux or running our official Docker image on Windows. There are other simple install methods as well, such as running our Homebrew recipe on OS X or building from source files on both OS X and Linux. - -For more details, see [Install CockroachDB](install-cockroachdb.html). - -## How does CockroachDB scale? - -CockroachDB scales horizontally with minimal operator overhead. You can run it on your local computer, a single server, a corporate development cluster, or a private or public cloud. [Adding capacity](start-a-node.html) is as easy as pointing a new node at the running cluster. - -At the key-value level, CockroachDB starts off with a single, empty range. As you put data in, this single range eventually reaches a threshold size (64MB by default). When that happens, the data splits into two ranges, each covering a contiguous segment of the entire key-value space. This process continues indefinitely; as new data flows in, existing ranges continue to split into new ranges, aiming to keep a relatively small and consistent range size. - -When your cluster spans multiple nodes (physical machines, virtual machines, or containers), newly split ranges are automatically rebalanced to nodes with more capacity. CockroachDB communicates opportunities for rebalancing using a peer-to-peer [gossip protocol](https://en.wikipedia.org/wiki/Gossip_protocol) by which nodes exchange network addresses, store capacity, and other information. - -## How does CockroachDB survive failures? - -CockroachDB is designed to survive software and hardware failures, from server restarts to datacenter outages. This is accomplished without confusing artifacts typical of other distributed systems (e.g., stale reads) using strongly-consistent replication as well as automated repair after failures. - -**Replication** - -CockroachDB replicates your data for availability and guarantees consistency between replicas using the [Raft consensus algorithm](https://raft.github.io/), a popular alternative to Paxos. 
You can [define the location of replicas](configure-replication-zones.html) in various ways, depending on the types of failures you want to secure against and your network topology. You can locate replicas on: - -- Different servers within a rack to tolerate server failures -- Different servers on different racks within a datacenter to tolerate rack power/network failures -- Different servers in different datacenters to tolerate large scale network or power outages - -When replicating across datacenters, be aware that the round-trip latency between datacenters will have a direct effect on your database's performance. Latency in cross-continent clusters will be noticeably worse than in clusters where all nodes are geographically close together. - -**Automated Repair** - -For short-term failures, such as a server restart, CockroachDB uses Raft to continue seamlessly as long as a majority of replicas remain available. Raft makes sure that a new “leader” for each group of replicas is elected if the former leader fails, so that transactions can continue and affected replicas can rejoin their group once they’re back online. For longer-term failures, such as a server/rack going down for an extended period of time or a datacenter outage, CockroachDB automatically rebalances replicas from the missing nodes, using the unaffected replicas as sources. Using capacity information from the gossip network, new locations in the cluster are identified and the missing replicas are re-replicated in a distributed fashion using all available nodes and the aggregate disk and network bandwidth of the cluster. - -## How is CockroachDB strongly-consistent? - -CockroachDB guarantees the SQL isolation level "serializable", the highest defined by the SQL standard. -It does so by combining the Raft consensus algorithm for writes and a custom time-based synchronization algorithms for reads. -See our description of [strong consistency](strong-consistency.html) for more details. - -## How is CockroachDB both highly available and strongly consistent? - -The [CAP theorem](https://en.wikipedia.org/wiki/CAP_theorem) states that it is impossible for a distributed system to simultaneously provide more than two out of the following three guarantees: - -- Consistency -- Availability -- Partition Tolerance - -CockroachDB is a CP (consistent and partition tolerant) system. This means -that, in the presence of partitions, the system will become unavailable rather than do anything which might cause inconsistent results. For example, writes require acknowledgements from a majority of replicas, and reads require a lease, which can only be transferred to a different node when writes are possible. - -Separately, CockroachDB is also Highly Available, although "available" here means something different than the way it is used in the CAP theorem. In the CAP theorem, availability is a binary property, but for High Availability, we talk about availability as a spectrum (using terms like "five nines" for a system that is available 99.999% of the time). - -Being both CP and HA means that whenever a majority of replicas can talk to each other, they should be able to make progress. For example, if you deploy CockroachDB to three datacenters and the network link to one of them fails, the other two datacenters should be able to operate normally with only a few seconds' disruption. 
We do this by attempting to detect partitions and failures quickly and efficiently, transferring leadership to nodes that are able to communicate with the majority, and routing internal traffic away from nodes that are partitioned away. - -## Why is CockroachDB SQL? - -At the lowest level, CockroachDB is a distributed, strongly-consistent, transactional key-value store, but the external API is Standard SQL with extensions. This provides developers familiar relational concepts such as schemas, tables, columns, and indexes and the ability to structure, manipulate, and query data using well-established and time-proven tools and processes. Also, since CockroachDB supports the PostgreSQL wire protocol, it’s simple to get your application talking to Cockroach; just find your [PostgreSQL language-specific driver](install-client-drivers.html) and start building. - -For more details, learn our [basic CockroachDB SQL statements](learn-cockroachdb-sql.html), explore the [full SQL grammar](sql-grammar.html), and try it out via our [built-in SQL client](use-the-built-in-sql-client.html). Also, to understand how CockroachDB maps SQL table data to key-value storage and how CockroachDB chooses the best index for running a query, see [SQL in CockroachDB](https://www.cockroachlabs.com/blog/sql-in-cockroachdb-mapping-table-data-to-key-value-storage/) and [Index Selection in CockroachDB](https://www.cockroachlabs.com/blog/index-selection-cockroachdb-2/). - -## Does CockroachDB support distributed transactions? - -Yes. CockroachDB distributes transactions across your cluster, whether it’s a few servers in a single location or many servers across multiple datacenters. Unlike with sharded setups, you don’t need to know the precise location of data; you just talk to any node in your cluster and CockroachDB gets your transaction to the right place seamlessly. Distributed transactions proceed without downtime or additional latency while rebalancing is underway. You can even move tables – or entire databases – between data centers or cloud infrastructure providers while the cluster is under load. - -## Do transactions in CockroachDB guarantee ACID semantics? - -Yes. Every [transaction](transactions.html) in CockroachDB guarantees [ACID semantics](https://en.wikipedia.org/wiki/ACID) spanning arbitrary tables and rows, even when data is distributed. - -- **Atomicity:** Transactions in CockroachDB are “all or nothing.” If any part of a transaction fails, the entire transaction is aborted, and the database is left unchanged. If a transaction succeeds, all mutations are applied together with virtual simultaneity. For a detailed discussion of atomicity in CockroachDB transactions, see [How CockroachDB Distributes Atomic Transactions](https://www.cockroachlabs.com/blog/how-cockroachdb-distributes-atomic-transactions/). -- **Consistency:** SQL operations never see any intermediate states and move the database from one valid state to another, keeping indexes up to date. Operations always see the results of previously completed statements on overlapping data and maintain specified constraints such as unique columns. For a detailed look at how we've tested CockroachDB for correctness and consistency, see [CockroachDB Beta Passes Jepsen Testing](https://www.cockroachlabs.com/blog/cockroachdb-beta-passes-jepsen-testing/). -- **Isolation:** Transactions in CockroachDB implement the strongest ANSI isolation level: serializable (`SERIALIZABLE`). This means that transactions will never result in anomalies. 
For more information about transaction isolation in CockroachDB, see [Transactions: Isolation Levels](transactions.html#isolation-levels). -- **Durability:** In CockroachDB, every acknowledged write has been persisted consistently on a majority of replicas (by default, at least 2) via the [Raft consensus algorithm](https://raft.github.io/). Power or disk failures that affect only a minority of replicas (typically 1) do not prevent the cluster from operating and do not lose any data. - -## Since CockroachDB is inspired by Spanner, does it require atomic clocks to synchronize time? - -No. CockroachDB was designed to work without atomic clocks or GPS clocks. It’s an open source database intended to be run on arbitrary collections of nodes, from physical servers in a corp development cluster to public cloud infrastructure using the flavor-of-the-month virtualization layer. It’d be a showstopper to require an external dependency on specialized hardware for clock synchronization. However, CockroachDB does require moderate levels of clock synchronization for correctness. If clocks drift past a maximum threshold, nodes will be taken offline. It's therefore highly recommended to run [NTP](http://www.ntp.org/) or other clock synchronization software on each node. - -For more details on how CockroachDB handles unsynchronized clocks, see [Clock Synchronization](recommended-production-settings.html#clock-synchronization). And for a broader discussion of clocks, and the differences between clocks in Spanner and CockroachDB, see [Living Without Atomic Clocks](https://www.cockroachlabs.com/blog/living-without-atomic-clocks/). - -## What languages can I use to work with CockroachDB? - -CockroachDB supports the PostgreSQL wire protocol, so you can use any available PostgreSQL client drivers. We've tested it from the following languages: - -- Go -- Python -- Ruby -- Java -- JavaScript (node.js) -- C++/C -- Clojure -- PHP -- Rust - -See [Install Client Drivers](install-client-drivers.html) for more details. - -## Why does CockroachDB use the PostgreSQL wire protocol instead of the MySQL protocol? - -CockroachDB uses the PostgreSQL wire protocol because it is better documented than the MySQL protocol, and because PostgreSQL has a liberal Open Source license, similar to BSD or MIT licenses, whereas MySQL has the more restrictive GNU General Public License. - -Note, however, that the protocol used doesn't significantly impact how easy it is to port applications. Swapping out SQL network drivers is rather straightforward in nearly every language. What makes it hard to move from one database to another is the dialect of SQL in use. CockroachDB's dialect is based on PostgreSQL as well. - -## What is CockroachDB’s security model? - -You can run a secure or insecure CockroachDB cluster. When secure, client/node and inter-node communication is encrypted, and SSL certificates authenticate the identity of both clients and nodes. When insecure, there's no encryption or authentication. - -Also, CockroachDB supports common SQL privileges on databases and tables. The `root` user has privileges for all databases, while unique users can be granted privileges for specific statements at the database and table-levels. - -For more details, see our documentation on [privileges](privileges.html) and the [`GRANT`](grant.html) statement. - -## How does CockroachDB compare to MySQL or PostgreSQL? 
-
-While all of these databases support SQL syntax, CockroachDB is the only one that scales easily (without the manual complexity of sharding), rebalances and repairs itself automatically, and distributes transactions seamlessly across your cluster.
-
-For more insight, see [CockroachDB in Comparison](cockroachdb-in-comparison.html).
-
-## How does CockroachDB compare to Cassandra, HBase, MongoDB, or Riak?
-
-While all of these are distributed databases, only CockroachDB supports distributed transactions and provides strong consistency. Also, these other databases provide custom APIs, whereas CockroachDB offers standard SQL with extensions.
-
-For more insight, see [CockroachDB in Comparison](cockroachdb-in-comparison.html).
-
-## Can a PostgreSQL or MySQL application be migrated to CockroachDB?
-
-Yes, although CockroachDB is unlikely to be a drop-in replacement at this time. Due to differences in available features and syntax, migrating data from these databases to CockroachDB involves some manual effort.
-
-As a first step, check our [SQL Feature Support](sql-feature-support.html) page against your application's high-level SQL requirements. If essential SQL features are missing, consider workarounds and/or reach out to us via [our forum](https://forum.cockroachlabs.com/).
-
-Once you're ready to migrate, we recommend [importing your data via CSV](import.html). The process may expose places where you need to make changes for compatibility. When migrating from PostgreSQL, for example, be sure to check this list of [known differences for identical input](porting-postgres.html).
-
-## Does Cockroach Labs offer a cloud database as a service?
-
-Not yet, but this is on our long-term roadmap.
-
-## Can I use CockroachDB as a key-value store?
-
-{% include {{ page.version.version }}/faq/simulate-key-value-store.html %}
-
-## Have questions that weren’t answered?
-
-Try searching the rest of our docs for answers or using our other [support resources](support-resources.html), including:
-
-- [CockroachDB Community Forum](https://forum.cockroachlabs.com)
-- [CockroachDB Community Slack](https://cockroachdb.slack.com)
-- [StackOverflow](http://stackoverflow.com/questions/tagged/cockroachdb)
-- [CockroachDB Support Portal](https://support.cockroachlabs.com)
diff --git a/src/current/v2.0/functions-and-operators.md b/src/current/v2.0/functions-and-operators.md
deleted file mode 100644
index 7017b4210f4..00000000000
--- a/src/current/v2.0/functions-and-operators.md
+++ /dev/null
@@ -1,114 +0,0 @@
----
-title: Functions and Operators
-summary: CockroachDB supports many built-in functions, aggregate functions, and operators.
-toc: true
----
-
-CockroachDB supports the following SQL functions and operators for use in [scalar expressions](scalar-expressions.html).
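-
-For example, the following scalar expression (a minimal sketch; the values are arbitrary) combines a built-in function with the concatenation and cast operators:
-
-~~~ sql
--- Returns 'hello-5': lower() is a built-in function, || concatenates,
--- and ::STRING casts the INT result of length() to a string.
-> SELECT lower('HELLO') || '-' || length('HELLO')::STRING;
-~~~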
- -{{site.data.alerts.callout_success}}In the built-in SQL shell, use \hf [function] to get inline help about a specific function.{{site.data.alerts.end}} - - -## Special Syntax Forms - -The following syntax forms are recognized for compatibility with the -SQL standard and PostgreSQL, but are equivalent to regular built-in -functions: - -{% include {{ page.version.version }}/sql/function-special-forms.md %} - -## Conditional and Function-Like Operators - -The following table lists the operators that look like built-in -functions but have special evaluation rules: - -| Operator | Description | -|----------|-------------| -| `ANNOTATE_TYPE(...)` | [Explicitly Typed Expression](scalar-expressions.html#explicitly-typed-expressions) | -| `ARRAY(...)` | [Conversion of Subquery Results to An Array](scalar-expressions.html#conversion-of-subquery-results-to-an-array) | -| `ARRAY[...]` | [Conversion of Scalar Expressions to An Array](scalar-expressions.html#array-constructors) | -| `CAST(...)` | [Type Cast](scalar-expressions.html#explicit-type-coercions) | -| `COALESCE(...)` | [First non-NULL expression with Short Circuit](scalar-expressions.html#coalesce-and-ifnull-expressions) | -| `EXISTS(...)` | [Existence Test on the Result of Subqueries](scalar-expressions.html#existence-test-on-the-result-of-subqueries) | -| `IF(...)` | [Conditional Evaluation](scalar-expressions.html#if-expressions) | -| `IFNULL(...)` | Alias for `COALESCE` restricted to two operands | -| `NULLIF(...)` | [Return `NULL` conditionally](scalar-expressions.html#nullif-expressions) | -| `ROW(...)` | [Tuple Constructor](scalar-expressions.html#tuple-constructor) | - -## Built-in Functions - -{% remote_include https://raw.githubusercontent.com/cockroachdb/cockroach/release-2.0/docs/generated/sql/functions.md %} - -## Aggregate Functions - -{% remote_include https://raw.githubusercontent.com/cockroachdb/cockroach/release-2.0/docs/generated/sql/aggregates.md %} - -## Operators - -The following table lists all CockroachDB operators from highest to lowest precedence, i.e., the order in which they will be evaluated within a statement. Operators with the same precedence are left associative. This means that those operators are grouped together starting from the left and moving right. 
-
-| Order of Precedence | Operator | Name | Operator Arity |
-| ------------------- | -------- | ---- | -------------- |
-| 1 | `.` | Member field access operator | binary |
-| 2 | `::` | [Type cast](scalar-expressions.html#explicit-type-coercions) | binary |
-| 3 | `-` | Unary minus | unary (prefix) |
-| | `~` | Bitwise not | unary (prefix) |
-| 4 | `^` | Exponentiation | binary |
-| 5 | `*` | Multiplication | binary |
-| | `/` | Division | binary |
-| | `//` | Floor division | binary |
-| | `%` | Modulo | binary |
-| 6 | `+` | Addition | binary |
-| | `-` | Subtraction | binary |
-| 7 | `<<` | Bitwise left-shift | binary |
-| | `>>` | Bitwise right-shift | binary |
-| 8 | `&` | Bitwise AND | binary |
-| 9 | `#` | Bitwise XOR | binary |
-| 10 | `&#124;` | Bitwise OR | binary |
-| 11 | `&#124;&#124;` | Concatenation | binary |
-| | `< ANY`, `< SOME`, `< ALL` | [Multi-valued] "less than" comparison | binary |
-| | `> ANY`, `> SOME`, `> ALL` | [Multi-valued] "greater than" comparison | binary |
-| | `= ANY`, `= SOME`, `= ALL` | [Multi-valued] "equal" comparison | binary |
-| | `<= ANY`, `<= SOME`, `<= ALL` | [Multi-valued] "less than or equal" comparison | binary |
-| | `>= ANY`, `>= SOME`, `>= ALL` | [Multi-valued] "greater than or equal" comparison | binary |
-| | `<> ANY` / `!= ANY`, `<> SOME` / `!= SOME`, `<> ALL` / `!= ALL` | [Multi-valued] "not equal" comparison | binary |
-| | `[NOT] LIKE ANY`, `[NOT] LIKE SOME`, `[NOT] LIKE ALL` | [Multi-valued] `LIKE` comparison | binary |
-| | `[NOT] ILIKE ANY`, `[NOT] ILIKE SOME`, `[NOT] ILIKE ALL` | [Multi-valued] `ILIKE` comparison | binary |
-| 12 | `[NOT] BETWEEN` | Value is [not] within the range specified | binary |
-| | `[NOT] BETWEEN SYMMETRIC` | Like `[NOT] BETWEEN`, but in non-sorted order. For example, whereas `a BETWEEN b AND c` means `b <= a <= c`, `a BETWEEN SYMMETRIC b AND c` means `(b <= a <= c) OR (c <= a <= b)`.
| binary |
-| | `[NOT] IN` | Value is [not] in the set of values specified | binary |
-| | `[NOT] LIKE` | Matches [or not] LIKE expression, case sensitive | binary |
-| | `[NOT] ILIKE` | Matches [or not] LIKE expression, case insensitive | binary |
-| | `[NOT] SIMILAR` | Matches [or not] SIMILAR TO regular expression | binary |
-| | `~` | Matches regular expression, case sensitive | binary |
-| | `!~` | Does not match regular expression, case sensitive | binary |
-| | `~*` | Matches regular expression, case insensitive | binary |
-| | `!~*` | Does not match regular expression, case insensitive | binary |
-| 13 | `=` | Equal | binary |
-| | `<` | Less than | binary |
-| | `>` | Greater than | binary |
-| | `<=` | Less than or equal to | binary |
-| | `>=` | Greater than or equal to | binary |
-| | `!=`, `<>` | Not equal | binary |
-| 14 | `IS [DISTINCT FROM]` | Equal, considering `NULL` as value | binary |
-| | `IS NOT [DISTINCT FROM]` | `a IS NOT b` equivalent to `NOT (a IS b)` | binary |
-| | `ISNULL`, `IS UNKNOWN`, `NOTNULL`, `IS NOT UNKNOWN` | Equivalent to `IS NULL` / `IS NOT NULL` | unary (postfix) |
-| | `IS NAN`, `IS NOT NAN` | [Comparison with the floating-point NaN value](scalar-expressions.html#comparison-with-nan) | unary (postfix) |
-| | `IS OF(...)` | Type predicate | unary (postfix) |
-| 15 | `NOT` | [Logical NOT](scalar-expressions.html#logical-operators) | unary |
-| 16 | `AND` | [Logical AND](scalar-expressions.html#logical-operators) | binary |
-| 17 | `OR` | [Logical OR](scalar-expressions.html#logical-operators) | binary |
-
-[Multi-valued]: scalar-expressions.html#multi-valued-comparisons
-
-### Supported Operations
-
-{% remote_include https://raw.githubusercontent.com/cockroachdb/cockroach/release-2.0/docs/generated/sql/operators.md %}
-
-
diff --git a/src/current/v2.0/generate-cockroachdb-resources.md b/src/current/v2.0/generate-cockroachdb-resources.md
deleted file mode 100644
index ddf31b0d92b..00000000000
--- a/src/current/v2.0/generate-cockroachdb-resources.md
+++ /dev/null
@@ -1,288 +0,0 @@
----
-title: Generate CockroachDB Resources
-summary: Use cockroach gen to generate command-line interface utilities, such as man pages, and example data.
-toc: true
----
-
-The `cockroach gen` command can generate command-line interface (CLI) utilities ([`man` pages](https://en.wikipedia.org/wiki/Man_page) and a `bash` autocompletion script), example SQL data suitable to populate test databases, and an HAProxy configuration file for load balancing a running cluster.
-
-
-## Subcommands
-
-| Subcommand | Usage |
-| -----------|------ |
-| `man` | Generate man pages for CockroachDB. |
-| `autocomplete` | Generate bash autocompletion script for CockroachDB. |
-| `example-data` | Generate example SQL data. |
-| `haproxy` | Generate an HAProxy config file for a running CockroachDB cluster. |
-
-## Synopsis
-
-~~~ shell
-# Generate man pages:
-$ cockroach gen man
-
-# Generate bash autocompletion script:
-$ cockroach gen autocomplete
-
-# Generate example SQL data:
-$ cockroach gen example-data intro | cockroach sql
-$ cockroach gen example-data startrek | cockroach sql
-
-# Generate an HAProxy config file for a running cluster:
-$ cockroach gen haproxy
-
-# View help:
-$ cockroach gen --help
-$ cockroach gen man --help
-$ cockroach gen autocomplete --help
-$ cockroach gen example-data --help
-$ cockroach gen haproxy --help
-~~~
-
-## Flags
-
-The `gen` subcommands support the following [general-use](#general) and [logging](#logging) flags.
-
-### General
-
-#### `man`
-
-Flag | Description
------|-----------
-`--path` | The path where man pages will be generated.<br><br>**Default:** `man/man1` under the current directory
-
-#### `autocomplete`
-
-Flag | Description
------|-----------
-`--out` | The path where the autocomplete file will be generated.<br><br>**Default:** `cockroach.bash` in the current directory
-
-#### `example-data`
-
-No flags are supported. See the [Generate Example Data](#generate-example-data) example for guidance.
-
-#### `haproxy`
-
-Flag | Description
------|-----------
-`--certs-dir` | The path to the [certificate directory](create-security-certificates.html). The directory must contain valid certificates if running in secure mode.<br><br>**Env Variable:** `COCKROACH_CERTS_DIR`<br>**Default:** `${HOME}/.cockroach-certs/`
-`--host` | The server host to connect to. This can be the address of any node in the cluster.<br><br>**Env Variable:** `COCKROACH_HOST`<br>**Default:** `localhost`
-`--insecure` | Run in insecure mode. If this flag is not set, the `--certs-dir` flag must point to valid certificates.<br><br>**Env Variable:** `COCKROACH_INSECURE`<br>**Default:** `false`
-`--out` | The path where the `haproxy.cfg` file will be generated. If an `haproxy.cfg` file already exists in the directory, it will be overwritten.<br><br>**Default:** `haproxy.cfg` in the current directory
-`--port`<br>`-p` | The server port to connect to.<br><br>**Env Variable:** `COCKROACH_PORT`<br>
      **Default:** `26257` - -### Logging - -By default, the `gen` command logs errors to `stderr`. - -If you need to troubleshoot this command's behavior, you can change its [logging behavior](debug-and-error-logs.html). - -## Examples - -### Generate `man` Pages - -~~~ shell -# Generate man pages: -$ cockroach gen man - -# Move the man pages to the man directory: -$ sudo mv man/man1/* /usr/share/man/man1 - -# Access man pages: -$ man cockroach -~~~ - -### Generate a `bash` Autocompletion Script - -~~~ shell -# Generate bash autocompletion script: -$ cockroach gen autocomplete - -# Add the script to your .bashrc and .bash_profle: -$ printf "\n\n#cockroach bash autocomplete\nsource 'cockroach.bash'" >> ~/.bashrc -$ printf "\n\n#cockroach bash autocomplete\nsource 'cockroach.bash'" >> ~/.bash_profile -~~~ - -You can now use `tab` to autocomplete `cockroach` commands. - -### Generate Example Data - -To test out CockroachDB, you can generate an example `startrek` database, which contains 2 tables, `episodes` and `quotes`. - -~~~ shell -# Generate example `startrek` database: -$ cockroach gen example-data startrek | cockroach sql --insecure -~~~ - -~~~ -CREATE DATABASE -SET -DROP TABLE -DROP TABLE -CREATE TABLE -INSERT 79 -CREATE TABLE -INSERT 200 -~~~ - -~~~ shell -# Launch the built-in SQL client to view it: -$ cockroach sql --insecure -~~~ - -~~~ sql -> SHOW TABLES FROM startrek; -~~~ -~~~ -+----------+ -| Table | -+----------+ -| episodes | -| quotes | -+----------+ -(2 rows) -~~~ - -You can also generate an example `intro` database, which contains 1 table, `mytable`, with a hidden message: - -~~~ shell -# Generate example `intro` database: -$ cockroach gen example-data intro | cockroach sql --insecure -~~~ - -~~~ -CREATE DATABASE -SET -DROP TABLE -CREATE TABLE -INSERT 1 -INSERT 1 -INSERT 1 -INSERT 1 -... -~~~ - -~~~ shell -# Launch the built-in SQL client to view it: -$ cockroach sql --insecure -~~~ - -~~~ sql -> SHOW TABLES FROM intro; -~~~ - -~~~ -+---------+ -| Table | -+---------+ -| mytable | -+---------+ -(1 row) -~~~ - -~~~ sql -> SELECT * FROM intro.mytable WHERE (l % 2) = 0; -~~~ - -~~~ -+----+------------------------------------------------------+ -| l | v | -+----+------------------------------------------------------+ -| 0 | !__aaawwmqmqmwwwaas,,_ .__aaawwwmqmqmwwaaa,, | -| 2 | !"VT?!"""^~~^"""??T$Wmqaa,_auqmWBT?!"""^~~^^""??YV^ | -| 4 | ! "?##mW##?"- | -| 6 | ! C O N G R A T S _am#Z??A#ma, Y | -| 8 | ! _ummY" "9#ma, A | -| 10 | ! vm#Z( )Xmms Y | -| 12 | ! .j####mmm#####mm#m##6. | -| 14 | ! W O W ! jmm###mm######m#mmm##6 | -| 16 | ! ]#me*Xm#m#mm##m#m##SX##c | -| 18 | ! dm#||+*$##m#mm#m#Svvn##m | -| 20 | ! :mmE=|+||S##m##m#1nvnnX##; A | -| 22 | ! :m#h+|+++=Xmm#m#1nvnnvdmm; M | -| 24 | ! Y $#m>+|+|||##m#1nvnnnnmm# A | -| 26 | ! O ]##z+|+|+|3#mEnnnnvnd##f Z | -| 28 | ! U D 4##c|+|+|]m#kvnvnno##P E | -| 30 | ! I 4#ma+|++]mmhvnnvq##P` ! | -| 32 | ! D I ?$#q%+|dmmmvnnm##! | -| 34 | ! T -4##wu#mm#pw##7' | -| 36 | ! -?$##m####Y' | -| 38 | ! !! "Y##Y"- | -| 40 | ! | -+----+------------------------------------------------------+ -(21 rows) -~~~ - -### Generate an HAProxy Configuration File - -[HAProxy](http://www.haproxy.org/) is one of the most popular open-source TCP load balancers, and CockroachDB includes a built-in command for generating a configuration file that is preset to work with your running cluster. - -
-
-To generate an HAProxy config file for a secure cluster, run the `cockroach gen haproxy` command, specifying the location of the [certificate directory](create-security-certificates.html) and the address of any instance running a CockroachDB node:
-
-~~~ shell
-$ cockroach gen haproxy \
---certs-dir=<path to certs directory> \
---host=<address of any node> \
---port=26257
-~~~
-
-To generate an HAProxy config file for an insecure cluster, run the `cockroach gen haproxy` command, specifying the address of any instance running a CockroachDB node:
-
-~~~ shell
-$ cockroach gen haproxy --insecure \
---host=<address of any node> \
---port=26257
-~~~
-
-By default, the generated configuration file is called `haproxy.cfg` and looks as follows, with the `server` addresses pre-populated correctly:
-
-~~~
-global
-  maxconn 4096
-
-defaults
-    mode                tcp
-    # Timeout values should be configured for your specific use.
-    # See: https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#4-timeout%20connect
-    timeout connect     10s
-    timeout client      1m
-    timeout server      1m
-    # TCP keep-alive on client side. Server already enables them.
-    option              clitcpka
-
-listen psql
-    bind :26257
-    mode tcp
-    balance roundrobin
-    option httpchk GET /health?ready=1
-    server cockroach1 <node1 address>:26257
-    server cockroach2 <node2 address>:26257
-    server cockroach3 <node3 address>:26257
-~~~
-
-The file is preset with the minimal [configurations](http://cbonte.github.io/haproxy-dconv/1.7/configuration.html) needed to work with your running cluster:
-
-Field | Description
-------|------------
-`timeout connect`<br>`timeout client`<br>`timeout server` | Timeout values that should be suitable for most deployments.
-`bind` | The port that HAProxy listens on. This is the port clients will connect to and thus needs to be allowed by your network configuration.<br><br>This tutorial assumes HAProxy is running on a separate machine from CockroachDB nodes. If you run HAProxy on the same machine as a node (not recommended), you'll need to change this port, as `26257` is likely already being used by the CockroachDB node.
-`balance` | The balancing algorithm. This is set to `roundrobin` to ensure that connections get rotated amongst nodes (connection 1 on node 1, connection 2 on node 2, etc.). Check the [HAProxy Configuration Manual](http://cbonte.github.io/haproxy-dconv/1.7/configuration.html#4-balance) for details about this and other balancing algorithms.
-`option httpchk` | The HTTP endpoint that HAProxy uses to check node health. [`/health?ready=1`](monitoring-and-alerting.html#health-ready-1) ensures that HAProxy doesn't direct traffic to nodes that are live but not ready to receive requests.
-`server` | For each node in the cluster, this field specifies the interface that the node listens on, i.e., the address passed in the `--host` flag on node startup.
-
-{{site.data.alerts.callout_info}}For full details on these and other configuration settings, see the HAProxy Configuration Manual.{{site.data.alerts.end}}
-
-## See Also
-
-- [Other Cockroach Commands](cockroach-commands.html)
-- [Deploy CockroachDB On-Premises](deploy-cockroachdb-on-premises.html) (using HAProxy for load balancing)
diff --git a/src/current/v2.0/get-started-with-enterprise-trial.md b/src/current/v2.0/get-started-with-enterprise-trial.md
deleted file mode 100644
index 4c8b2aadd6c..00000000000
--- a/src/current/v2.0/get-started-with-enterprise-trial.md
+++ /dev/null
@@ -1,58 +0,0 @@
----
-title: Enterprise Trial –– Get Started
-summary: Check out this page to get started with your CockroachDB Enterprise Trial
-toc: true
-license: true
----
-
-Congratulations on starting your CockroachDB Enterprise Trial! With it, you'll not only get access to CockroachDB's core capabilities like [high availability](high-availability.html) and [`SERIALIZABLE` isolation](strong-consistency.html), but also our Enterprise-only features like distributed [`BACKUP`](backup.html) & [`RESTORE`](restore.html), [geo-partitioning](partitioning.html), and [cluster visualization](enable-node-map.html).
-
-## Install CockroachDB
-
-If you haven't already, you'll need to [locally install](install-cockroachdb.html), [remotely deploy](manual-deployment.html), or [orchestrate](orchestration.html) CockroachDB.
-
-## Enable Enterprise features
-
-As the CockroachDB `root` user, open the [built-in SQL shell](use-the-built-in-sql-client.html) in insecure or secure mode, as per your CockroachDB setup. In the following example, we assume that CockroachDB is running in insecure mode.
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql --insecure
-~~~
-
-{{site.data.alerts.callout_info}}
-If you've secured your deployment, you'll need to [include the flags for your certificates](create-security-certificates.html) instead of the `--insecure` flag.
-{{site.data.alerts.end}}
-
-Now, use the `SET CLUSTER SETTING` command to set the name of your organization and the license key:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SET CLUSTER SETTING cluster.organization = 'Acme Company'; SET CLUSTER SETTING enterprise.license = 'xxxxxxxxxxxx';
-~~~
-
-Then verify your organization name with the following query:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SHOW CLUSTER SETTING cluster.organization;
-~~~
-
-## Use Enterprise features
-
-Your cluster now has access to all of CockroachDB's enterprise features for the length of the trial:
-
-- [`BACKUP`](backup.html) & [`RESTORE`](restore.html), which leverage your entire cluster to create and consume consistent backups.
-- [Geo-partitioning](partitioning.html) to control the physical location of your data with row-level granularity.
-- [Node Maps](enable-node-map.html), which provide enhanced visuals of your cluster's nodes.
-- [Role-based access management (RBAC)](create-role.html) for simplified user management.
-
-## Getting help
-
-If you or your team need any help during your trial, our engineers are available on [CockroachDB Community Slack](https://cockroachdb.slack.com), [our forum](https://forum.cockroachlabs.com/), or [GitHub](https://github.com/cockroachdb/cockroach).

-
-## See also
-
-- [Enterprise Licensing](enterprise-licensing.html)
-- [`SET CLUSTER SETTING`](set-cluster-setting.html)
-- [`SHOW CLUSTER SETTING`](show-cluster-setting.html)
diff --git a/src/current/v2.0/go-implementation.md b/src/current/v2.0/go-implementation.md
deleted file mode 100644
index 1ea33a3e318..00000000000
--- a/src/current/v2.0/go-implementation.md
+++ /dev/null
@@ -1,21 +0,0 @@
----
-title: Go Implementation
-summary: CockroachDB is built in Go.
-toc: false
----
-
-The choice of language matters. Speed, stability, maintainability: each of these attributes of the underlying language can impact how quickly CockroachDB evolves and how well it works. Not all languages were created equal. Go is an open source programming language developed primarily at Google as a viable alternative to C++ and Java.
-
-- Excellent environment for building distributed systems
-- Faster compile times
-- Garbage collection and type safety provide stability
-- Readable, well-documented code encourages open source contributions
-
-CockroachDB is built in Go.
-
-## See Also
-
-- [Why Go Was the Right Choice for CockroachDB](https://www.cockroachlabs.com/blog/why-go-was-the-right-choice-for-cockroachdb/)
-- [How to Optimize Garbage Collection in Go](https://www.cockroachlabs.com/blog/how-to-optimize-garbage-collection-in-go/)
-- [The Cost and Complexity of Cgo](https://www.cockroachlabs.com/blog/the-cost-and-complexity-of-cgo/)
-- [Outsmarting Go Dependencies in Testing Code](https://www.cockroachlabs.com/blog/outsmarting-go-dependencies-testing-code/)
diff --git a/src/current/v2.0/grant-roles.md b/src/current/v2.0/grant-roles.md
deleted file mode 100644
index d6851224bcd..00000000000
--- a/src/current/v2.0/grant-roles.md
+++ /dev/null
@@ -1,89 +0,0 @@
----
-title: GRANT <roles>
-summary: The GRANT statement grants user privileges for interacting with specific databases and tables.
-toc: true
----
-
-New in v2.0: The `GRANT <roles>` [statement](sql-statements.html) lets you add a [role](roles.html) or [user](create-and-manage-users.html) as a member of a role.
-
-{{site.data.alerts.callout_info}}GRANT <roles> is an enterprise-only feature.{{site.data.alerts.end}}
-
-
-## Synopsis
-
      {% include {{ page.version.version }}/sql/diagrams/grant_roles.html %}
-
-## Required Privileges
-
-The user granting role membership must be a role admin (i.e., a member with the `ADMIN OPTION`) or a superuser (i.e., a member of the `admin` role).
-
-## Considerations
-
-- Users and roles can be members of roles.
-- The `root` user is automatically created as an `admin` role and assigned the `ALL` privilege for new databases.
-- All privileges of a role are inherited by all its members.
-- Membership loops are not allowed (direct: `A is a member of B is a member of A` or indirect: `A is a member of B is a member of C ... is a member of A`).
-
-## Parameters
-
-Parameter | Description
-----------|------------
-`role_name` | The name of the role to which you want to add members. To add members to multiple roles, use a comma-separated list of role names.
-`user_name` | The name of the [user](create-and-manage-users.html) or [role](roles.html) to whom you want to grant membership. To add multiple members, use a comma-separated list of user and/or role names.
-`WITH ADMIN OPTION` | Designate the user as a role admin. Role admins can grant or revoke membership for the specified role.
-
-## Examples
-
-### Grant Role Membership
-
-{% include copy-clipboard.html %}
-~~~ sql
-> GRANT design TO ernie;
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SHOW GRANTS ON ROLE design;
-~~~
-~~~
-+--------+---------+---------+
-| role | member | isAdmin |
-+--------+---------+---------+
-| design | barkley | false |
-| design | ernie | false |
-| design | lola | false |
-| design | lucky | false |
-+--------+---------+---------+
-~~~
-
-### Grant the Admin Option
-
-{% include copy-clipboard.html %}
-~~~ sql
-> GRANT design TO ERNIE WITH ADMIN OPTION;
-~~~
-{% include copy-clipboard.html %}
-~~~ sql
-> SHOW GRANTS ON ROLE design;
-~~~
-~~~
-+--------+---------+---------+
-| role | member | isAdmin |
-+--------+---------+---------+
-| design | barkley | false |
-| design | ernie | true |
-| design | lola | false |
-| design | lucky | false |
-+--------+---------+---------+
-~~~
-
-## See Also
-
-- [Privileges](privileges.html)
-- [`REVOKE <roles>` (Enterprise)](revoke-roles.html)
-- [`GRANT <privileges>`](grant.html)
-- [`REVOKE <privileges>`](revoke.html)
-- [`SHOW GRANTS`](show-grants.html)
-- [`SHOW ROLES`](show-roles.html)
-- [Manage Users](create-and-manage-users.html)
-- [Manage Roles](roles.html)
diff --git a/src/current/v2.0/grant.md b/src/current/v2.0/grant.md
deleted file mode 100644
index 7de891c4752..00000000000
--- a/src/current/v2.0/grant.md
+++ /dev/null
@@ -1,133 +0,0 @@
----
-title: GRANT <privileges>
-summary: The GRANT statement grants user privileges for interacting with specific databases and tables.
-toc: true
----
-
-The `GRANT <privileges>` [statement](sql-statements.html) lets you control each [role](roles.html) or [user's](create-and-manage-users.html) SQL [privileges](privileges.html) for interacting with specific databases and tables.
-
-For privileges required by specific statements, see the documentation for the respective [SQL statement](sql-statements.html).
-
-
-## Synopsis
-
      {% include {{ page.version.version }}/sql/diagrams/grant_privileges.html %}
-
-## Required Privileges
-
-The user granting privileges must have the `GRANT` privilege on the target databases or tables.
-
-## Supported Privileges
-
-Roles and users can be granted the following privileges. Some privileges are applicable both for databases and tables, while others are applicable only for tables (see **Levels** in the table below).
-
-- When a role or user is granted privileges for a database, new tables created in the database will inherit the privileges, but the privileges can then be changed.
-- When a role or user is granted privileges for a table, the privileges are limited to the table.
-- The `root` user automatically belongs to the `admin` role and has the `ALL` privilege for new databases.
-- For privileges required by specific statements, see the documentation for the respective [SQL statement](sql-statements.html).
-
-Privilege | Levels
-----------|------------
-`ALL` | Database, Table
-`CREATE` | Database, Table
-`DROP` | Database, Table
-`GRANT` | Database, Table
-`SELECT` | Table
-`INSERT` | Table
-`DELETE` | Table
-`UPDATE` | Table
-
-## Parameters
-
-Parameter | Description
-----------|------------
-`table_name` | A comma-separated list of table names. Alternately, to grant privileges to all tables, use `*`. `ON TABLE table.*` grants apply to all existing tables in a database but will not affect tables created after the grant.
-`database_name` | A comma-separated list of database names.<br><br>
      Privileges granted on databases will be inherited by any new tables created in the databases, but do not affect existing tables in the database. -`user_name` | A comma-separated list of [users](create-and-manage-users.html) and/or [roles](roles.html) to whom you want to grant privileges. - -## Examples - -### Grant Privileges on Databases - -{% include copy-clipboard.html %} -~~~ sql -> GRANT CREATE ON DATABASE db1, db2 TO maxroach, betsyroach; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SHOW GRANTS ON DATABASE db1, db2; -~~~ - -~~~ shell -+----------+------------+------------+ -| Database | User | Privileges | -+----------+------------+------------+ -| db1 | betsyroach | CREATE | -| db1 | maxroach | CREATE | -| db1 | root | ALL | -| db2 | betsyroach | CREATE | -| db2 | maxroach | CREATE | -| db2 | root | ALL | -+----------+------------+------------+ -(6 rows) -~~~ - -### Grant Privileges on Specific Tables in a Database - -{% include copy-clipboard.html %} -~~~ sql -> GRANT DELETE ON TABLE db1.t1, db1.t2 TO betsyroach; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SHOW GRANTS ON TABLE db1.t1, db1.t2; -~~~ - -~~~ shell -+-------+------------+------------+ -| Table | User | Privileges | -+-------+------------+------------+ -| t1 | betsyroach | DELETE | -| t1 | root | ALL | -| t2 | betsyroach | DELETE | -| t2 | root | ALL | -+-------+------------+------------+ -(4 rows) -~~~ - -### Grant Privileges on All Tables in a Database - -{% include copy-clipboard.html %} -~~~ sql -> GRANT SELECT ON TABLE db2.* TO henryroach; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SHOW GRANTS ON TABLE db2.*; -~~~ - -~~~ shell -+-------+------------+------------+ -| Table | User | Privileges | -+-------+------------+------------+ -| t1 | henryroach | SELECT | -| t1 | root | ALL | -| t2 | henryroach | SELECT | -| t2 | root | ALL | -+-------+------------+------------+ -(4 rows) -~~~ - -## See Also - -- [Privileges](privileges.html) -- [`REVOKE ` (Enterprise)](revoke-roles.html) -- [`GRANT ` (Enterprise)](grant-roles.html) -- [`REVOKE `](revoke.html) -- [`SHOW GRANTS`](show-grants.html) -- [`SHOW ROLES`](show-roles.html) -- [Manage Users](create-and-manage-users.html) -- [Manage Roles](roles.html) diff --git a/src/current/v2.0/high-availability.md b/src/current/v2.0/high-availability.md deleted file mode 100644 index 6332cfd2ae5..00000000000 --- a/src/current/v2.0/high-availability.md +++ /dev/null @@ -1,21 +0,0 @@ ---- -title: High Availability -summary: CockroachDB is designed to survive software and hardware failures, from server restarts to datacenter outages. -toc: false ---- - -CockroachDB is designed to survive software and hardware failures, from server restarts to datacenter outages. This is accomplished without confusing artifacts typical of other distributed systems (e.g., stale reads) using strongly-consistent replication as well as automated repair after failures. - -## Replication - -CockroachDB replicates your data for availability and guarantees consistency between replicas using the [Raft consensus algorithm](https://raft.github.io/), a popular alternative to Paxos. You can [define the location of replicas](configure-replication-zones.html) in various ways, depending on the types of failures you want to secure against and your network topology. 
You can locate replicas on: - -- Different servers within a rack to tolerate server failures -- Different servers on different racks within a datacenter to tolerate rack power/network failures -- Different servers in different datacenters to tolerate large scale network or power outages - -When replicating across datacenters, be aware that the round-trip latency between datacenters will have a direct effect on your database's performance. Latency in cross-continent clusters will be noticeably worse than in clusters where all nodes are geographically close together. - -## Automated Repair - -For short-term failures, such as a server restart, CockroachDB uses Raft to continue seamlessly as long as a majority of replicas remain available. Raft makes sure that a new “leader” for each group of replicas is elected if the former leader fails, so that transactions can continue and affected replicas can rejoin their group once they’re back online. For longer-term failures, such as a server/rack going down for an extended period of time or a datacenter outage, CockroachDB automatically rebalances replicas from the missing nodes, using the unaffected replicas as sources. Using capacity information from the gossip network, new locations in the cluster are identified and the missing replicas are re-replicated in a distributed fashion using all available nodes and the aggregate disk and network bandwidth of the cluster. diff --git a/src/current/v2.0/import-data.md b/src/current/v2.0/import-data.md deleted file mode 100644 index 518c272fa18..00000000000 --- a/src/current/v2.0/import-data.md +++ /dev/null @@ -1,81 +0,0 @@ ---- -title: Import Data -summary: Learn how to import data into a CockroachDB cluster. -toc: true ---- - -CockroachDB supports importing data from CSV/TSV or SQL dump files. - -{{site.data.alerts.callout_info}}To import/restore data from CockroachDB-generated enterprise license backups, see RESTORE.{{site.data.alerts.end}} - - -## Import from Tabular Data (CSV) - -If you have data exported in a tabular format (e.g., CSV or TSV), you can use the [`IMPORT`](import.html) statement. - -To use this statement, though, you must also have some kind of remote file server (such as Amazon S3 or a custom file server) that all your nodes can access. - -## Import from Generic SQL Dump - -You can execute batches of `INSERT` statements stored in `.sql` files (including those generated by [`cockroach dump`](sql-dump.html)) from the command line, importing data into your cluster. - -~~~ shell -$ cockroach sql --database=[database name] < statements.sql -~~~ - -{{site.data.alerts.callout_success}}Grouping each INSERT statement to include approximately 500-10,000 rows will provide the best performance. The number of rows depends on row size, column families, number of indexes; smaller rows and less complex schemas can benefit from larger groups of INSERTS, while larger rows and more complex schemas benefit from smaller groups.{{site.data.alerts.end}} - -## Import from PostgreSQL Dump - -If you're importing data from a PostgreSQL deployment, you can import the `.sql` file generated by the `pg_dump` command to more quickly import data. 
- -{{site.data.alerts.callout_success}}The .sql files generated by pg_dump provide better performance because they use the COPY statement instead of bulk INSERT statements.{{site.data.alerts.end}} - -### Create PostgreSQL SQL File - -Which `pg_dump` command you want to use depends on whether you want to import your entire database or only specific tables: - -- Entire database: - - ~~~ shell - $ pg_dump [database] > [filename].sql - ~~~ - -- Specific tables: - - ~~~ shell - $ pg_dump -t [table] [table's schema] > [filename].sql - ~~~ - -For more details, see PostgreSQL's documentation on [`pg_dump`](https://www.postgresql.org/docs/9.1/static/app-pgdump.html). - -### Reformat SQL File - -After generating the `.sql` file, you need to perform a few editing steps before importing it: - -1. Remove all statements from the file besides the `CREATE TABLE` and `COPY` statements. -2. Manually add the table's [`PRIMARY KEY`](primary-key.html#syntax) constraint to the `CREATE TABLE` statement. - This has to be done manually because PostgreSQL attempts to add the primary key after creating the table, but CockroachDB requires the primary key be defined upon table creation. -3. Review any other [constraints](constraints.html) to ensure they're properly listed on the table. -4. Remove any [unsupported elements](sql-feature-support.html). - -### Import Data - -After reformatting the file, you can import it through `psql`: - -~~~ shell -$ psql -p [port] -h [node host] -d [database] -U [user] < [file name].sql -~~~ - -For reference, CockroachDB uses these defaults: - -- `[port]`: **26257** -- `[user]`: **root** - -## See Also - -- [SQL Dump (Export)](sql-dump.html) -- [Back up Data](back-up-data.html) -- [Restore Data](restore-data.html) -- [Use the Built-in SQL Client](use-the-built-in-sql-client.html) -- [Other Cockroach Commands](cockroach-commands.html) diff --git a/src/current/v2.0/import.md b/src/current/v2.0/import.md deleted file mode 100644 index be4711b0fcb..00000000000 --- a/src/current/v2.0/import.md +++ /dev/null @@ -1,271 +0,0 @@ ---- -title: IMPORT -summary: Import CSV data into your CockroachDB cluster. -toc: true ---- - -The `IMPORT` [statement](sql-statements.html) imports tabular data (e.g., CSVs) into a single table. - -{{site.data.alerts.callout_info}}For details about importing SQL dumps, see Import Data.{{site.data.alerts.end}} - - -## Requirements - -Before using [`IMPORT`](import.html), you should have: - -- The schema of the table you want to import. -- The tabular data you want to import (e.g., CSV), preferably hosted on cloud storage. This location *must* be accessible to all nodes using the same address. This means that you cannot use a node's local file storage. - - For ease of use, we recommend using cloud storage. However, if that isn't readily available to you, we also have a [guide on easily creating your own file server](create-a-file-server.html). - -## Details - -### Import Targets - -Imported tables must not exist and must be created in the [`IMPORT`](import.html) statement. If the table you want to import already exists, you must drop it with [`DROP TABLE`](drop-table.html). - -You can only import a single table at a time. - -You can specify the target database in the table name in the [`IMPORT`](import.html) statement. If it's not specified there, the active database in the SQL session is used. 
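-
-For example, assuming a database named `test` already exists (the table definition and storage URL below are placeholders for illustration), the following statement imports into `test.customers` regardless of the session's active database:
-
-~~~ sql
--- The database-qualified name `test.customers` sets the import target.
-> IMPORT TABLE test.customers (
-	id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
-	name TEXT
-)
-CSV DATA ('azure://acme-co/customer-import-data.csv?AZURE_ACCOUNT_KEY=hash&AZURE_ACCOUNT_NAME=acme-co')
-;
-~~~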
- -### Create Table - -Your [`IMPORT`](import.html) statement must include a `CREATE TABLE` statement (representing the schema of the data you want to import) using one of the following methods: - -- A reference to a file that contains a `CREATE TABLE` statement -- An inline `CREATE TABLE` statement - -We also recommend [all secondary indexes you want to use in the `CREATE TABLE` statement](create-table.html#create-a-table-with-secondary-and-inverted-indexes-new-in-v2-0). It is possible to add secondary indexes later, but it is significantly faster to specify them during import. - -### CSV Data - -The tabular data to import must be valid [CSV files](https://tools.ietf.org/html/rfc4180), with the caveat that the comma [delimiter](#delimiter) can be set to another single character. In particular: - -- Files must be UTF-8 encoded. -- If the delimiter (`,` by default), a double quote (`"`), newline (`\n`), or carriage return (`\r`) appears in a field, the field must be enclosed by double quotes. -- If double quotes are used to enclose fields, then a double quote appearing inside a field must be escaped by preceding it with another double quote. For example: - `"aaa","b""bb","ccc"` - -CockroachDB-specific requirements: - -- If a column is of type [`BYTES`](bytes.html), it can either be a valid UTF-8 string or a [string literal](sql-constants.html#string-literals) beginning with the two characters `\`, `x`. For example, a field whose value should be the bytes `1`, `2` would be written as `\x0102`. - -### Object Dependencies - -When importing tables, you must be mindful of the following rules because [`IMPORT`](import.html) only creates single tables which must not already exist: - -- Objects that the imported table depends on must already exist -- Objects that depend on the imported table can only be created after the import completes - -### Available Storage Requirements - -Each node in the cluster is assigned an equal part of the converted CSV data, and so must have enough temp space to store it. In addition, data is persisted as a normal table, and so there must also be enough space to hold the final, replicated data. The node's first-listed/default [`store`](start-a-node.html#store) directory must have enough available storage to hold its portion of the data. - -On [`cockroach start`](start-a-node.html), if you set `--max-disk-temp-storage`, it must also be greater than the portion of the data a node will store in temp space. - -### Import File Location - -You can store the tabular data you want to import using remote cloud storage (Amazon S3, Google Cloud Platform, etc.). Alternatively, you can use an [HTTP server](create-a-file-server.html) accessible from all nodes. - -For simplicity's sake, it's **strongly recommended** to use cloud/remote storage for the data you want to import. Local files are supported; however, they must be accessible identically from all nodes in the cluster. - -### Table Users and Privileges - -Imported tables are treated as new tables, so you must [`GRANT`](grant.html) privileges to them. - -## Performance - -All nodes are used during tabular data conversion into key-value data, which means all nodes' CPU and RAM will be partially consumed by the [`IMPORT`](import.html) task in addition to serving normal traffic. - -## Viewing and Controlling Import Jobs - -After CockroachDB successfully initiates an import, it registers the import as a job, which you can view with [`SHOW JOBS`](show-jobs.html). 
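-
-For example, to find the ID and status of a running import (a minimal sketch):
-
-~~~ sql
-> SHOW JOBS;
-~~~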
- -After the import has been initiated, you can control it with [`PAUSE JOB`](pause-job.html), [`RESUME JOB`](resume-job.html), and [`CANCEL JOB`](cancel-job.html). - -{{site.data.alerts.callout_danger}}Pausing and then resuming an `IMPORT` job will cause it to restart from the beginning.{{site.data.alerts.end}} - -## Synopsis - -
      -{% include {{ page.version.version }}/sql/diagrams/import.html %} -
      - -{{site.data.alerts.callout_info}}The IMPORT statement cannot be used within a transaction.{{site.data.alerts.end}} - -## Required Privileges - -Only the `root` user can run [`IMPORT`](import.html). - -## Parameters - -| Parameter | Description | -|-----------|-------------| -| `table_name` | The name of the table you want to import/create. | -| `create_table_file` | The URL of a plain text file containing the [`CREATE TABLE`](create-table.html) statement you want to use (see [this example for syntax](#use-create-table-statement-from-a-file)). | -| `table_elem_list` | The table definition you want to use (see [this example for syntax](#use-create-table-statement-from-a-statement)). | -| `file_to_import` | The URL of the file you want to import.| -| `WITH kv_option` | Control your import's behavior with [these options](#import-options). | - -### Import File URLs - -URLs for the files you want to import must use the following format: - -{% include {{ page.version.version }}/misc/external-urls.md %} - -### Import Options - -You can control the [`IMPORT`](import.html) process's behavior using any of the following key-value pairs as a `kv_option`. - -#### `delimiter` - -If not using comma as your column delimiter, you can specify another Unicode character as the delimiter. - -
-
-- **Required?** No
-- **Key:** `delimiter`
-- **Value:** The Unicode character that delimits columns in your rows
-- **Example:** To use tab-delimited values: `WITH delimiter = e'\t'`
-
-#### `comment`
-
-Do not import rows that begin with this character.
-
-- **Required?** No
-- **Key:** `comment`
-- **Value:** The Unicode character that identifies rows to skip
-- **Example:** `WITH comment = '#'`
-
-#### `nullif`
-
-Convert values to SQL *NULL* if they match the specified string.
-
-- **Required?** No
-- **Key:** `nullif`
-- **Value:** The string that should be converted to *NULL*
-- **Example:** To use empty columns as *NULL*: `WITH nullif = ''`
      - -## Examples - -### Use Create Table Statement from a File - -~~~ sql -> IMPORT TABLE customers -CREATE USING 'azure://acme-co/customer-create-table.sql?AZURE_ACCOUNT_KEY=hash&AZURE_ACCOUNT_NAME=acme-co' -CSV DATA ('azure://acme-co/customer-import-data.csv?AZURE_ACCOUNT_KEY=hash&AZURE_ACCOUNT_NAME=acme-co') -; -~~~ - -### Use Create Table Statement from a Statement - -~~~ sql -> IMPORT TABLE customers ( - id UUID PRIMARY KEY DEFAULT gen_random_uuid(), - name TEXT, - INDEX name_idx (name) -) -CSV DATA ('azure://acme-co/customer-import-data.csv?AZURE_ACCOUNT_KEY=hash&AZURE_ACCOUNT_NAME=acme-co') -; -~~~ - -### Import a Tab-Separated File - -~~~ sql -> IMPORT TABLE customers ( - id UUID PRIMARY KEY DEFAULT gen_random_uuid(), - name TEXT, - INDEX name_idx (name) -) -CSV DATA ('azure://acme-co/customer-import-data.tsc?AZURE_ACCOUNT_KEY=hash&AZURE_ACCOUNT_NAME=acme-co') -WITH - delimiter = e'\t' -; -~~~ - -### Skip Commented Lines - -~~~ sql -> IMPORT TABLE customers ( - id UUID PRIMARY KEY DEFAULT gen_random_uuid(), - name TEXT, - INDEX name_idx (name) -) -CSV DATA ('azure://acme-co/customer-import-data.tsc?AZURE_ACCOUNT_KEY=hash&AZURE_ACCOUNT_NAME=acme-co') -WITH - comment = '#' -; -~~~ - -### Use Blank Characters as *NULL* - -~~~ sql -> IMPORT TABLE customers ( - id UUID PRIMARY KEY DEFAULT gen_random_uuid(), - name TEXT, - INDEX name_idx (name) -) -CSV DATA ('azure://acme-co/customer-import-data.tsc?AZURE_ACCOUNT_KEY=hash&AZURE_ACCOUNT_NAME=acme-co') -WITH - nullif = '' -; -~~~ - -## Known Limitation - -`IMPORT` can sometimes fail with a "context canceled" error, or can restart itself many times without ever finishing. If this is happening, it is likely due to a high amount of disk contention. This can be mitigated by setting the `kv.bulk_io_write.max_rate` [cluster setting](cluster-settings.html) to a value below your max disk write speed. For example, to set it to 10MB/s, execute: - -~~~ sql -> SET CLUSTER SETTING kv.bulk_io_write.max_rate = '10MB'; -~~~ - -## See Also - -- [Create a File Server](create-a-file-server.html) -- [Importing Data](import-data.html) diff --git a/src/current/v2.0/improve-the-docs.md b/src/current/v2.0/improve-the-docs.md deleted file mode 100644 index 327ab52a9ec..00000000000 --- a/src/current/v2.0/improve-the-docs.md +++ /dev/null @@ -1,20 +0,0 @@ ---- -title: Improve the Docs -summary: Contribute to the improvement and expansion of CockroachDB documentation. -toc: false ---- - -The CockroachDB docs are open source just like the database itself. We welcome your contributions! - -## Write Docs - -Want to contribute to the docs? - -Find an issue with the [help-wanted](https://github.com/cockroachdb/docs/issues?q=is%3Aopen+is%3Aissue+label%3Ahelp-wanted) label and then review [CONTRIBUTING.md](https://github.com/cockroachdb/docs/blob/master/CONTRIBUTING.md) to set yourself up and get started. You can also select **Contribute > Edit This Page** directly on a page. - -## Suggest Improvements - -See an error? Need additional details or clarification? Want a topic added to the docs? - -Select **Contribute > Report Doc Issue** or **Contribute > Suggest New Content** toward the top of the page, or [open an issue](https://github.com/cockroachdb/docs/issues/new?labels=community) directly. 
- diff --git a/src/current/v2.0/index.md b/src/current/v2.0/index.md deleted file mode 100755 index 99f1b245e0d..00000000000 --- a/src/current/v2.0/index.md +++ /dev/null @@ -1,32 +0,0 @@ ---- -title: CockroachDB Docs -summary: CockroachDB documentation with details on installation, getting started, building an app, deployment, orchestration, and more. -tags: install, build an app, deploy -type: first_page -homepage: true -toc: false -no_toc: true -twitter: false -contribute: false ---- - -CockroachDB is the SQL database for building global, scalable cloud services that survive disasters. -
      diff --git a/src/current/v2.0/indexes.md b/src/current/v2.0/indexes.md deleted file mode 100644 index 0c7154d07cd..00000000000 --- a/src/current/v2.0/indexes.md +++ /dev/null @@ -1,127 +0,0 @@ ---- -title: Indexes -summary: Indexes improve your database's performance by helping SQL locate data without having to look through every row of a table. -toc: true -toc_not_nested: true ---- - -Indexes improve your database's performance by helping SQL locate data without having to look through every row of a table. - - -## How Do Indexes Work? - -When you create an index, CockroachDB "indexes" the columns you specify, which creates a copy of the columns and then sorts their values (without sorting the values in the table itself). - -After a column is indexed, SQL can easily filter its values using the index instead of scanning each row one-by-one. On large tables, this greatly reduces the number of rows SQL has to use, executing queries exponentially faster. - -For example, if you index an `INT` column and then filter it WHERE <indexed column> = 10, SQL can use the index to find values starting at 10 but less than 11. In contrast, without an index, SQL would have to evaluate _every_ row in the column for values equaling 10. - -### Creation - -Each table automatically has an index created called `primary`, which indexes either its [primary key](primary-key.html) or—if there is no primary key—a unique value for each row known as `rowid`. We recommend always defining a primary key because the index it creates provides much better performance than letting CockroachDB use `rowid`. - -The `primary` index helps filter a table's primary key but doesn't help SQL find values in any other columns. However, you can use secondary indexes to improve the performance of queries using columns not in a table's primary key. You can create them: - -- At the same time as the table with the `INDEX` clause of [`CREATE TABLE`](create-table.html#create-a-table-with-secondary-and-inverted-indexes-new-in-v2-0). In addition to explicitly defined indexes, CockroachDB automatically creates secondary indexes for columns with the [Unique constraint](unique.html). -- For existing tables with [`CREATE INDEX`](create-index.html). -- By applying the Unique constraint to columns with [`ALTER TABLE`](alter-table.html), which automatically creates an index of the constrained columns. - -To create the most useful secondary indexes, you should also check out our [best practices](#best-practices). - -### Selection - -Because each query can use only a single index, CockroachDB selects the index it calculates will scan the fewest rows (i.e., the fastest). For more detail, check out our blog post [Index Selection in CockroachDB](https://www.cockroachlabs.com/blog/index-selection-cockroachdb-2/). - -To override CockroachDB's index selection, you can also force [queries to use a specific index](table-expressions.html#force-index-selection) (also known as "index hinting"). - -### Storage - -CockroachDB stores indexes directly in your key-value store. You can find more information in our blog post [Mapping Table Data to Key-Value Storage](https://www.cockroachlabs.com/blog/sql-in-cockroachdb-mapping-table-data-to-key-value-storage/). - -### Locking - -Tables are not locked during index creation thanks to CockroachDB's [schema change procedure](https://www.cockroachlabs.com/blog/how-online-schema-changes-are-possible-in-cockroachdb/). 
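-
-For example, you can add a secondary index to a table that is actively serving traffic (a minimal sketch, assuming a hypothetical `users` table with an `email` column):
-
-~~~ sql
--- Adds a secondary index without blocking reads or writes on the table.
-> CREATE INDEX ON users (email);
-~~~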
- -### Performance - -Indexes create a trade-off: they greatly improve the speed of queries, but slightly slow down writes (because new values have to be copied and sorted). The first index you create has the largest impact, but additional indexes only introduce marginal overhead. - -To maximize your indexes' performance, we recommend following a few [best practices](#best-practices). - -## Best Practices - -We recommend creating indexes for all of your common queries. To design the most useful indexes, look at each query's `WHERE` and `SELECT` clauses, and create indexes that: - -- [Index all columns](#indexing-columns) in the `WHERE` clause. -- [Store columns](#storing-columns) that are _only_ in the `SELECT` clause. - -### Indexing Columns - -When designing indexes, it's important to consider which columns you index and the order you list them. Here are a few guidelines to help you make the best choices: - -- Each table's [primary key](primary-key.html) (which we recommend always [defining](create-table.html#create-a-table-primary-key-defined)) is automatically indexed. The index it creates (called `primary`) cannot be changed, nor can you change the primary key of a table after it's been created, so this is a critical decision for every table. -- Queries can benefit from an index even if they only filter a prefix of its columns. For example, if you create an index of columns `(A, B, C)`, queries filtering `(A)` or `(A, B)` can still use the index. However, queries that do not filter `(A)` will not benefit from the index.

      This feature also lets you avoid using single-column indexes. Instead, use the column as the first column in a multiple-column index, which is useful to more queries. -- Columns filtered in the `WHERE` clause with the equality operators (`=` or `IN`) should come first in the index, before those referenced with inequality operators (`<`, `>`). -- Indexes of the same columns in different orders can produce different results for each query. For more information, see [our blog post on index selection](https://www.cockroachlabs.com/blog/index-selection-cockroachdb-2/)—specifically the section "Restricting the search space." - -### Storing Columns - -The `STORING` clause specifies columns which are not part of the index key but should be stored in the index. This optimizes queries which retrieve those columns without filtering on them, because it prevents the need to read the primary index. - -### Example - -Say we have a table with three columns, two of which are indexed: - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE tbl (col1 INT, col2 INT, col3 INT, INDEX (col1, col2)); -~~~ - -If we filter on the indexed columns but retrieve the unindexed column, this requires reading `col3` from the primary index via an "index join." - -{% include copy-clipboard.html %} -~~~ sql -> EXPLAIN SELECT col3 FROM tbl WHERE col1 = 10 AND col2 > 1; -~~~ - -~~~ - tree | field | description -+-----------------+-------------+-----------------------+ - render | | - └── index-join | | - │ | table | tbl@primary - │ | key columns | rowid - └── scan | | - | table | tbl@tbl_col1_col2_idx - | spans | /10/2-/11 -~~~ - -However, if we store `col3` in the index, the index join is no longer necessary. This means our query only needs to read from the secondary index, so it will be more efficient. - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE tbl (col1 INT, col2 INT, col3 INT, INDEX (col1, col2) STORING (col3)); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> EXPLAIN SELECT col3 FROM tbl WHERE col1 = 10 AND col2 > 1; -~~~ - -~~~ - tree | field | description -+-----------+-------------+-------------------+ - render | | - └── scan | | - | table | tbl@tbl_col1_col2_idx - | spans | /10/2-/11 -~~~ - -## See Also - -- [Inverted Indexes](inverted-indexes.html) -- [`CREATE INDEX`](create-index.html) -- [`DROP INDEX`](drop-index.html) -- [`RENAME INDEX`](rename-index.html) -- [`SHOW INDEX`](show-index.html) -- [SQL Statements](sql-statements.html) diff --git a/src/current/v2.0/inet.md b/src/current/v2.0/inet.md deleted file mode 100644 index bdf367f76b0..00000000000 --- a/src/current/v2.0/inet.md +++ /dev/null @@ -1,89 +0,0 @@ ---- -title: INET -summary: The INET data type stores an IPv4 or IPv6 address. -toc: true ---- -New in v2.0: The `INET` [data type](data-types.html) stores an IPv4 or IPv6 address. - - -## Syntax - -A constant value of type `INET` can be expressed using an -[interpreted literal](sql-constants.html#interpreted-literals), or a -string literal -[annotated with](scalar-expressions.html#explicitly-typed-expressions) -type `INET` or -[coerced to](scalar-expressions.html#explicit-type-coercions) type -`INET`. - -`INET` constants can be expressed using the following formats: - -Format | Description --------|------------- -IPv4 | Standard [RFC791](https://tools.ietf.org/html/rfc791)-specified format of 4 octets expressed individually in decimal numbers and separated by periods. Optionally, the address can be followed by a subnet mask.

      Examples: `'190.0.0.0'`, `'190.0.0.0/24'` -IPv6 | Standard [RFC8200](https://tools.ietf.org/html/rfc8200)-specified format of 8 colon-separated groups of 4 hexadecimal digits. An IPv6 address can be mapped to an IPv4 address. Optionally, the address can be followed by a subnet mask.

      Examples: `'2001:4f8:3:ba:2e0:81ff:fe22:d1f1'`, `'2001:4f8:3:ba:2e0:81ff:fe22:d1f1/120'`, `'::ffff:192.168.0.1/24'` - -{{site.data.alerts.callout_info}}IPv4 addresses will sort before IPv6 addresses, including IPv4-mapped IPv6 addresses.{{site.data.alerts.end}} - -## Size - -An `INET` value is 32 bits for IPv4 or 128 bits for IPv6. - -## Example - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE computers ( - ip INET PRIMARY KEY, - user_email STRING, - registration_date DATE - ); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SHOW COLUMNS FROM computers; -~~~ -~~~ -+-------------------+--------+-------+---------+-------------+ -| Field | Type | Null | Default | Indices | -+-------------------+--------+-------+---------+-------------+ -| ip | INET | false | NULL | {"primary"} | -| user_email | STRING | true | NULL | {} | -| registration_date | DATE | true | NULL | {} | -+-------------------+--------+-------+---------+-------------+ -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO computers - VALUES - ('192.168.0.1', 'info@cockroachlabs.com', '2018-01-31'), - ('192.168.0.2/10', 'lauren@cockroachlabs.com', '2018-01-31'), - ('2001:4f8:3:ba:2e0:81ff:fe22:d1f1/120', 'test@cockroachlabs.com', '2018-01-31'); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM computers; -~~~ -~~~ -+--------------------------------------+--------------------------+---------------------------+ -| ip | user_email | registration_date | -+--------------------------------------+--------------------------+---------------------------+ -| 192.168.0.1 | info@cockroachlabs.com | 2018-01-31 00:00:00+00:00 | -| 192.168.0.2/10 | lauren@cockroachlabs.com | 2018-01-31 00:00:00+00:00 | -| 2001:4f8:3:ba:2e0:81ff:fe22:d1f1/120 | test@cockroachlabs.com | 2018-01-31 00:00:00+00:00 | -+--------------------------------------+--------------------------+---------------------------+ -~~~ - -## Supported Casting & Conversion - -`INET` values can be [cast](data-types.html#data-type-conversions-casts) to the following data type: - -- `STRING` - Converts to format `'Address/subnet'`. - -## See Also - -- [Data Types](data-types.html) -- [Functions and Operators](functions-and-operators.html) diff --git a/src/current/v2.0/information-schema.md b/src/current/v2.0/information-schema.md deleted file mode 100644 index bf28d7db845..00000000000 --- a/src/current/v2.0/information-schema.md +++ /dev/null @@ -1,332 +0,0 @@ ---- -title: Information Schema -summary: The information_schema database contains read-only views that you can use for introspection into your database's tables, columns, indexes, and views. -toc: true ---- - -CockroachDB provides a virtual schema called `information_schema` that contains information about your database's tables, columns, indexes, and views. This information can be used for introspection and reflection. - -The definition of `information_schema` is part of the SQL standard and can therefore be relied on to remain stable over time. This contrasts with CockroachDB's `SHOW` statements, which provide similar data and are meant to be stable in CockroachDB but not standardized. It also contrasts with the virtual schema `crdb_internal`, which reflects the internals of CockroachDB and may thus change across CockroachDB versions. - -{{site.data.alerts.callout_info}}The information_schema views typically represent objects that the current user has privilege to access. 
To ensure you can view all the objects in a database, access it as the root user.{{site.data.alerts.end}} - - -## Data Exposed by information_schema - -To perform introspection on objects, you can either read from the related `information_schema` table or use one of CockroachDB's `SHOW` statements. - -Object | Information Schema Table | Corresponding `SHOW` Statement --------|--------------|-------- -Columns | [`columns`](#columns)| [`SHOW COLUMNS`](show-columns.html) -Constraints | [`key_column_usage`](#key_column_usage), [`referential_constraints`](#referential_constraints), [`table_constraints`](#table_constraints)| [`SHOW CONSTRAINTS`](show-constraints.html) -Databases | [`schemata`](#schemata)| [`SHOW DATABASE`](show-vars.html) -Indexes | [`statistics`](#statistics)| [`SHOW INDEX`](show-index.html) -Privileges | [`schema_privileges`](#schema_privileges), [`table_privileges`](#table_privileges)| [`SHOW GRANTS`](show-grants.html) -Sequences | [`sequences`](#sequences) | [`SHOW CREATE SEQUENCE`](show-create-sequence.html) -Tables | [`tables`](#tables)| [`SHOW TABLES`](show-tables.html) -Views | [`tables`](#tables), [`views`](#views)| [`SHOW CREATE VIEW`](show-create-view.html) - -## Tables in information_schema - -The virtual schema `information_schema` contains virtual tables, also called "system views," representing the database's objects, each of which is detailed below. - -These differ from regular [SQL views](views.html) in that they are -not showing data created from the content of other tables. Instead, -CockroachDB generates the data for virtual tables when they are accessed. - -{{site.data.alerts.callout_info}} -A query can specify a table name without a database name (e.g., `SELECT * FROM information_schema.sequences`). See [Name Resolution](sql-name-resolution.html) for more information. -{{site.data.alerts.end}} - -### administrable_role_authorizations - -`administrable_role_authorizations` identifies all roles that the current user has the admin option for. - -Column | Description --------|----------- -`grantee` | The name of the user to which this role membership was granted (always the current user). - -### applicable_roles - -`applicable_roles` identifies all roles whose privileges the current user can use. This implies there is a chain of role grants from the current user to the role in question. The current user itself is also an applicable role, but is not listed. - -Column | Description --------|----------- -`grantee` | Name of the user to which this role membership was granted (always the current user). -`role_name` | Name of a role. -`is_grantable` | `YES` if the grantee has the admin option on the role; `NO` if not. - -### columns - -`columns` contains information about the columns in each table. - -Column | Description --------|----------- -`table_catalog` | Name of the database containing the table. -`table_schema` | Name of the schema containing the table. -`table_name` | Name of the table. -`column_name` | Name of the column. -`ordinal_position` | Ordinal position of the column in the table (begins at 1). -`column_default` | Default value for the column. -`is_nullable` | `YES` if the column accepts `NULL` values; `NO` if it doesn't (e.g., it has the [`NOT NULL` constraint](not-null.html)). -`data_type` | [Data type](data-types.html) of the column. -`character_maximum_length` | If `data_type` is `STRING`, the maximum length in characters of a value; otherwise `NULL`. 
-`character_octet_length` | If `data_type` is `STRING`, the maximum length in octets (bytes) of a value; otherwise `NULL`. -`numeric_precision` | If `data_type` is numeric, the declared or implicit precision (i.e., number of significant digits); otherwise `NULL`. -`numeric_scale` | If `data_type` is an exact numeric type, the scale (i.e., number of digits to the right of the decimal point); otherwise `NULL`. -`datetime_precision` | Always `NULL` (unsupported by CockroachDB). -`character_set_catalog` | Always `NULL` (unsupported by CockroachDB). -`character_set_schema` | Always `NULL` (unsupported by CockroachDB). -`character_set_name` | Always `NULL` (unsupported by CockroachDB). -`generation_expression` | The expression used for computing the column value in a computed column. - -### column_privileges - -`column_privileges` identifies all privileges granted on columns to or by a currently enabled role. There is one row for each combination of `grantor`, `grantee`, and column (defined by `table_catalog`, `table_schema`, `table_name`, and `column_name`). - -Column | Description --------|----------- -`grantor` | Name of the role that granted the privilege. -`grantee` | Name of the role that was granted the privilege. -`table_catalog` | Name of the database containing the table that contains the column (always the current database). -`table_schema` | Name of the schema containing the table that contains the column. -`table_name` | Name of the table. -`column_name` | Name of the column. -`privilege_type` | Name of the [privilege](privileges.html). -`is_grantable` | Always `NULL` (unsupported by CockroachDB). - -### enabled_roles - -The `enabled_roles` view identifies enabled roles for the current user. This includes both direct and indirect roles. - -Column | Description --------|----------- -`role_name` | Name of a role. - -### key_column_usage - -`key_column_usage` identifies columns with [`PRIMARY KEY`](primary-key.html), [`UNIQUE`](unique.html), or [`FOREIGN KEY` / `REFERENCES`](foreign-key.html) constraints. - -Column | Description --------|----------- -`constraint_catalog` | Name of the database containing the constraint. -`constraint_schema` | Name of the schema containing the constraint. -`constraint_name` | Name of the constraint. -`table_catalog` | Name of the database containing the constrained table. -`table_schema` | Name of the schema containing the constrained table. -`table_name` | Name of the constrained table. -`column_name` | Name of the constrained column. -`ordinal_position` | Ordinal position of the column within the constraint (begins at 1). -`position_in_unique_constraint` | For foreign key constraints, ordinal position of the referenced column within its uniqueness constraint (begins at 1). - -### referential_constraints - -`referential_constraints` identifies all referential ([Foreign Key](foreign-key.html)) constraints. - -Column | Description --------|----------- -`constraint_catalog` | Name of the database containing the constraint. -`constraint_schema` | Name of the schema containing the constraint. -`constraint_name` | Name of the constraint. -`unique_constraint_catalog` | Name of the database containing the unique or primary key constraint that the foreign key constraint references (always the current database). -`unique_constraint_schema` | Name of the schema containing the unique or primary key constraint that the foreign key constraint references. -`unique_constraint_name` | Name of the unique or primary key constraint. 
-`match_option` | Match option of the foreign key constraint: `FULL`, `PARTIAL`, or `NONE`. -`update_rule` | Update rule of the foreign key constraint: `CASCADE`, `SET NULL`, `SET DEFAULT`, `RESTRICT`, or `NO ACTION`. -`delete_rule` | Delete rule of the foreign key constraint: `CASCADE`, `SET NULL`, `SET DEFAULT`, `RESTRICT`, or `NO ACTION`. -`table_name` | Name of the table containing the constraint. -`referenced_table_name` | Name of the table containing the unique or primary key constraint that the foreign key constraint references. - -### role_table_grants - -`role_table_grants` identifies which [privileges](privileges.html) have been granted on tables or views where the grantor -or grantee is a currently enabled role. This table is identical to [`table_privileges`](#table_privileges). - -Column | Description --------|----------- -`grantor` | Name of the role that granted the privilege. -`grantee` | Name of the role that was granted the privilege. -`table_catalog` | Name of the database containing the table. -`table_schema` | Name of the schema containing the table. -`table_name` | Name of the table. -`privilege_type` | Name of the [privilege](privileges.html). -`is_grantable` | Always `NULL` (unsupported by CockroachDB). -`with_hierarchy` | Always `NULL` (unsupported by CockroachDB). - -### schema_privileges - -`schema_privileges` identifies which [privileges](privileges.html) have been granted to each user at the database level. - -Column | Description --------|----------- -`grantee` | Username of user with grant. -`table_catalog` | Name of the database containing the constrained table. -`table_schema` | Name of the schema containing the constrained table. -`privilege_type` | Name of the [privilege](privileges.html). -`is_grantable` | Always `NULL` (unsupported by CockroachDB). - -### schemata - -`schemata` identifies the database's schemas. - -Column | Description --------|----------- -`table_catalog` | Name of the database. -`table_schema` | Name of the schema. -`default_character_set_name` | Always `NULL` (unsupported by CockroachDB). -`sql_path` | Always `NULL` (unsupported by CockroachDB). - -### sequences - -`sequences` identifies [sequences](create-sequence.html) defined in a database. - -Column | Description --------|----------- -`sequence_catalog` | Name of the database that contains the sequence. -`sequence_schema` | Name of the schema that contains the sequence. -`sequence_name` | Name of the sequence. -`data_type` | The data type of the sequence. -`numeric_precision` | The (declared or implicit) precision of the sequence `data_type`. -`numeric_precision_radix` | The base of the values in which the columns `numeric_precision` and `numeric_scale` are expressed. The value is either `2` or `10`. -`numeric_scale` | The (declared or implicit) scale of the sequence `data_type`. The scale indicates the number of significant digits to the right of the decimal point. It can be expressed in decimal (base 10) or binary (base 2) terms, as specified in the column `numeric_precision_radix`. -`start_value` | The first value of the sequence. -`minimum_value` | The minimum value of the sequence. -`maximum_value` | The maximum value of the sequence. -`increment` | The value by which the sequence is incremented. A negative number creates a descending sequence. A positive number creates an ascending sequence. -`cycle_option` | Currently, all sequences are set to `NO CYCLE` and the sequence will not wrap. - -### statistics - -`statistics` identifies table [indexes](indexes.html). 
- -Column | Description --------|----------- -`table_catalog` | Name of the database that contains the constrained table. -`table_schema` | Name of the schema that contains the constrained table. -`table_name` | Name of the table. -`non_unique` | `NO` if the index was created with the `UNIQUE` constraint; `YES` if the index was not created with `UNIQUE`. -`index_schema` | Name of the database that contains the index. -`index_name` | Name of the index. -`seq_in_index` | Ordinal position of the column within the index (begins at 1). -`column_name` | Name of the column being indexed. -`collation` | Always `NULL` (unsupported by CockroachDB). -`cardinality` | Always `NULL` (unsupported by CockroachDB). -`direction` | `ASC` (ascending) or `DESC` (descending) order. -`storing` | `YES` if column is [stored](create-index.html#store-columns); `NO` if it's indexed or implicit. -`implicit` | `YES` if column is implicit (i.e., it is not specified in the index and not stored); `NO` if it's indexed or stored. - -### table_constraints - -`table_constraints` identifies [constraints](constraints.html) applied to tables. - -Column | Description --------|----------- -`constraint_catalog` | Name of the database containing the constraint. -`constraint_schema` | Name of the schema containing the constraint. -`constraint_name` | Name of the constraint. -`table_catalog` | Name of the database containing the constrained table. -`table_schema` | Name of the schema containing the constrained table. -`table_name` | Name of the constrained table. -`constraint_type` | Type of [constraint](constraints.html): `CHECK`, `FOREIGN KEY`, `PRIMARY KEY`, or `UNIQUE`. -`is_deferrable` | `YES` if the constraint can be deferred; `NO` if not. -`initially_deferred` | `YES` if the constraint is deferrable and initially deferred; `NO` if not. - -### table_privileges - -`table_privileges` identifies which [privileges](privileges.html) have been granted to each user at the table level. - -Column | Description --------|----------- -`grantor` | Always `NULL` (unsupported by CockroachDB). -`grantee` | Username of user with grant. -`table_catalog` | Name of the database that the grant applies to. -`table_schema` | Name of the schema that the grant applies to. -`table_name` | Name of the table that the grant applies to. -`privilege_type` | Type of [privilege](privileges.html): `SELECT`, `INSERT`, `UPDATE`, `DELETE`, `TRUNCATE`, `REFERENCES`, or `TRIGGER`. -`is_grantable` | Always `NULL` (unsupported by CockroachDB). -`with_hierarchy` | Always `NULL` (unsupported by CockroachDB). - -### tables - -`tables` identifies tables and views in the database. - -Column | Description --------|----------- -`table_catalog` | Name of the database that contains the table. -`table_schema` | Name of the schema that contains the table. -`table_name` | Name of the table. -`table_type` | Type of the table: `BASE TABLE` for a normal table, `VIEW` for a view, or `SYSTEM VIEW` for a view created by CockroachDB. -`version` | Version number of the table; versions begin at 1 and are incremented each time an `ALTER TABLE` statement is issued on the table. - -### user_privileges - -`user_privileges` identifies global [privileges](privileges.html). - -{{site.data.alerts.callout_info}}Currently, CockroachDB does not support global privileges for non-root users. Therefore, this view contains global privileges only for root. -{{site.data.alerts.end}} - -Column | Description --------|----------- -`grantee` | Username of user with grant. 
-`table_catalog` | Name of the database that the privilege applies to. -`privilege_type` | Type of [privilege](privileges.html). -`is_grantable` | Always `NULL` (unsupported by CockroachDB). - -### views - -`views` identifies [views](views.html) in the database. - -Column | Description --------|----------- -`table_catalog` | Name of the database that contains the view. -`table_schema` | Name of the schema that contains the view. -`table_name` | Name of the view. -`view_definition` | `AS` clause used to [create the view](views.html#creating-views). -`check_option` | Always `NULL` (unsupported by CockroachDB). -`is_updatable` | Always `NULL` (unsupported by CockroachDB). -`is_insertable_into` | Always `NULL` (unsupported by CockroachDB). -`is_trigger_updatable` | Always `NULL` (unsupported by CockroachDB). -`is_trigger_deletable` | Always `NULL` (unsupported by CockroachDB). -`is_trigger_insertable_into` | Always `NULL` (unsupported by CockroachDB). - -## Examples - -### Retrieve All Columns from an Information Schema Table - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM db_name.information_schema.table_constraints; -~~~ -~~~ -+--------------------+-------------------+-----------------+---------------+--------------+-------------+-----------------+---------------+--------------------+ -| constraint_catalog | constraint_schema | constraint_name | table_catalog | table_schema | table_name | constraint_type | is_deferrable | initially_deferred | -+--------------------+-------------------+-----------------+---------------+--------------+-------------+-----------------+---------------+--------------------+ -| jsonb_test | public | primary | jsonb_test | public | programming | PRIMARY KEY | NO | NO | -+--------------------+-------------------+-----------------+---------------+--------------+-------------+-----------------+---------------+--------------------+ -~~~ - -### Retrieve Specific Columns from an Information Schema Table - -{% include copy-clipboard.html %} -~~~ sql -> SELECT table_name, constraint_name FROM db_name.information_schema.table_constraints; -~~~ -~~~ -+-------------+-----------------+ -| table_name | constraint_name | -+-------------+-----------------+ -| programming | primary | -+-------------+-----------------+ -~~~ - -## See Also - -- [`SHOW`](show-vars.html) -- [`SHOW COLUMNS`](show-columns.html) -- [`SHOW CONSTRAINTS`](show-constraints.html) -- [`SHOW CREATE TABLE`](show-create-table.html) -- [`SHOW CREATE VIEW`](show-create-view.html) -- [`SHOW DATABASES`](show-databases.html) -- [`SHOW GRANTS`](show-grants.html) -- [`SHOW INDEX`](show-index.html) -- [`SHOW TABLES`](show-tables.html) diff --git a/src/current/v2.0/initialize-a-cluster.md b/src/current/v2.0/initialize-a-cluster.md deleted file mode 100644 index 784ea676e76..00000000000 --- a/src/current/v2.0/initialize-a-cluster.md +++ /dev/null @@ -1,114 +0,0 @@ ---- -title: Initialize a Cluster -summary: Perform a one-time-only initialization of a CockroachDB cluster. -toc: true ---- - -New in v1.1: This page explains the `cockroach init` [command](cockroach-commands.html), which you use to perform a one-time initialization of a new multi-node cluster. For a full walk-through of the cluster startup and initialization process, see one of the [Manual Deployment](manual-deployment.html) tutorials. - -{{site.data.alerts.callout_info}}When starting a single-node cluster, you do not need to use the cockroach init command. 
You can simply run the cockroach start command without the --join flag to start and initialize the single-node cluster.{{site.data.alerts.end}} - - -## Synopsis - -~~~ shell -# Perform a one-time initialization of a cluster: -$ cockroach init - -# View help: -$ cockroach init --help -~~~ - -## Flags - -The `cockroach init` command supports the following [client connection](#client-connection) and [logging](#logging) flags. - -### Client Connection - -{% include {{ page.version.version }}/sql/connection-parameters.md %} - -See [Client Connection Parameters](connection-parameters.html) for details. - -### Logging - -By default, the `init` command logs errors to `stderr`. - -If you need to troubleshoot this command's behavior, you can change its [logging behavior](debug-and-error-logs.html). - -## Examples - -These examples assume that nodes have already been started with [`cockroach start`](start-a-node.html) but are waiting to be initialized as a new cluster. For a more detailed walk-through, see one of the [Manual Deployment](manual-deployment.html) tutorials. - -### Initialize a Cluster on a Node's Machine - -
      - - -
      - -
      -1. SSH to the machine where the node has been started. - -2. Make sure the `client.root.crt` and `client.root.key` files for the `root` user are on the machine. - -3. Run the `cockroach init` command with the `--certs-dir` flag set to the directory containing the `ca.crt` file and the files for the `root` user, and with the `--host` flag set to the address of the current node: - - ~~~ shell - $ cockroach init --certs-dir=certs --host=
      - ~~~ - - At this point, all the nodes complete startup and print helpful details to the [standard output](start-a-node.html#standard-output), such as the CockroachDB version, the URL for the admin UI, and the SQL URL for clients. -
      - -
      -1. SSH to the machine where the node has been started. - -2. Run the `cockroach init` command with the `--host` flag set to the address of the current node: - - ~~~ shell - $ cockroach init --insecure --host=
      - ~~~ - - At this point, all the nodes complete startup and print helpful details to the [standard output](start-a-node.html#standard-output), such as the CockroachDB version, the URL for the admin UI, and the SQL URL for clients. -
      - -### Initialize a Cluster from Another Machine - -
      - - -
      - -
      -1. [Install the `cockroach` binary](install-cockroachdb.html) on a machine separate from the node. - -2. Create a `certs` directory and copy the CA certificate and the client certificate and key for the `root` user into the directory. - -3. Run the `cockroach init` command with the `--certs-dir` flag set to the directory containing the `ca.crt` file and the files for the `root` user, and with the `--host` flag set to the address of any node: - - ~~~ shell - $ cockroach init --certs-dir=certs --host=
      - ~~~ - - At this point, all the nodes complete startup and print helpful details to the [standard output](start-a-node.html#standard-output), such as the CockroachDB version, the URL for the admin UI, and the SQL URL for clients. -
      - -
      -1. [Install the `cockroach` binary](install-cockroachdb.html) on a machine separate from the node. - -2. Run the `cockroach init` command with the `--host` flag set to the address of any node: - - ~~~ shell - $ cockroach init --insecure --host=
      - ~~~ - - At this point, all the nodes complete startup and print helpful details to the [standard output](start-a-node.html#standard-output), such as the CockroachDB version, the URL for the admin UI, and the SQL URL for clients. -
- -## See Also - -- [Manual Deployment](manual-deployment.html) -- [Orchestrated Deployment](orchestration.html) -- [Local Deployment](start-a-local-cluster.html) -- [`cockroach start`](start-a-node.html) -- [Other Cockroach Commands](cockroach-commands.html) diff --git a/src/current/v2.0/insert.md b/src/current/v2.0/insert.md deleted file mode 100644 index ebf5848b377..00000000000 --- a/src/current/v2.0/insert.md +++ /dev/null @@ -1,662 +0,0 @@ ---- -title: INSERT -summary: The INSERT statement inserts one or more rows into a table. -toc: true ---- - -The `INSERT` [statement](sql-statements.html) inserts one or more rows into a table. In cases where inserted values conflict with uniqueness constraints, the `ON CONFLICT` clause can be used to update rather than insert rows. - - -## Performance Best Practices - -- A single [multi-row `INSERT`](#insert-multiple-rows-into-an-existing-table) statement is faster than multiple single-row `INSERT` statements. To bulk-insert data into an existing table, use a multi-row `INSERT` instead of multiple single-row `INSERT` statements. -- The [`IMPORT`](import.html) statement performs better than `INSERT` when inserting rows into a new table. -- In traditional SQL databases, generating and retrieving unique IDs involves using `INSERT` with `SELECT`. In CockroachDB, use the `RETURNING` clause with `INSERT` instead. See [Insert and Return Values](#insert-and-return-values) for more details. - -## Required Privileges - -The user must have the `INSERT` [privilege](privileges.html) on the table. To use `ON CONFLICT DO UPDATE`, the user must also have the `UPDATE` privilege on the table. - -## Synopsis - -
      {% include {{ page.version.version }}/sql/diagrams/insert.html %}
- -## Parameters - -Parameter | Description -----------|------------ -`common_table_expr` | See [Common Table Expressions](common-table-expressions.html). -`table_name` | The table you want to write data to. -`AS table_alias_name` | An alias for the table name. When an alias is provided, it completely hides the actual table name. -`column_name` | The name of a column to populate during the insert. -`select_stmt` | A [selection query](selection-queries.html). Each value must match the [data type](data-types.html) of its column. Also, if column names are listed after `INTO`, values must be in corresponding order; otherwise, they must follow the declared order of the columns in the table. -`DEFAULT VALUES` | To fill all columns with their [default values](default-value.html), use `DEFAULT VALUES` in place of `select_stmt`. To fill a specific column with its default value, leave the value out of the `select_stmt` or use `DEFAULT` at the appropriate position. See the [Insert Default Values](#insert-default-values) examples below. -`RETURNING target_list` | Return values based on rows inserted, where `target_list` can be specific column names from the table, `*` for all columns, or computations using [scalar expressions](scalar-expressions.html). See the [Insert and Return Values](#insert-and-return-values) example below.

      Within a [transaction](transactions.html), use `RETURNING NOTHING` to return nothing in the response, not even the number of rows affected. - -### `ON CONFLICT` clause - -
      {% include {{ page.version.version }}/sql/diagrams/on_conflict.html %}
      - -Normally, when inserted values -conflict with a `UNIQUE` constraint on one or more columns, CockroachDB -returns an error. To update the affected rows instead, use an `ON -CONFLICT` clause containing the column(s) with the unique constraint -and the `DO UPDATE SET` expression set to the column(s) to be updated -(any `SET` expression supported by the [`UPDATE`](update.html) -statement is also supported here, including those with `WHERE` -clauses). To prevent the affected rows from updating while allowing -new rows to be inserted, set `ON CONFLICT` to `DO NOTHING`. See the -[Update Values `ON CONFLICT`](#update-values-on-conflict) and [Do Not -Update Values `ON CONFLICT`](#do-not-update-values-on-conflict) -examples below. - -If the values in the `SET` expression cause uniqueness conflicts, -CockroachDB will return an error. - -As a short-hand alternative to the `ON -CONFLICT` clause, you can use the [`UPSERT`](upsert.html) -statement. However, `UPSERT` does not let you specify the column with -the unique constraint; it assumes that the column is the primary -key. Using `ON CONFLICT` is therefore more flexible. - -## Examples - -All of the examples below assume you've already created a table `accounts`: - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE accounts( - id INT DEFAULT unique_rowid(), - balance DECIMAL -); -~~~ - -### Insert a Single Row - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO accounts (balance, id) VALUES (10000.50, 1); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM accounts; -~~~ - -~~~ -+----+---------+ -| id | balance | -+----+---------+ -| 1 | 10000.5 | -+----+---------+ -~~~ - -If you do not list column names, the statement will use the columns of the table in their declared order: - -{% include copy-clipboard.html %} -~~~ sql -> SHOW COLUMNS FROM accounts; -~~~ - -~~~ -+---------+---------+-------+----------------+ -| Field | Type | Null | Default | -+---------+---------+-------+----------------+ -| id | INT | false | unique_rowid() | -| balance | DECIMAL | true | NULL | -+---------+---------+-------+----------------+ -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO accounts VALUES (2, 20000.75); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM accounts; -~~~ - -~~~ -+----+----------+ -| id | balance | -+----+----------+ -| 1 | 10000.50 | -| 2 | 20000.75 | -+----+----------+ -~~~ - -### Insert Multiple Rows into an Existing Table - -{{site.data.alerts.callout_success}} Multi-row inserts are faster than multiple single-row INSERT statements. As a performance best practice, we recommend batching multiple rows in one multi-row INSERT statement instead of using multiple single-row INSERT statements. Experimentally determine the optimal batch size for your application by monitoring the performance for different batch sizes (10 rows, 100 rows, 1000 rows). {{site.data.alerts.end}} - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO accounts (id, balance) VALUES (3, 8100.73), (4, 9400.10); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM accounts; -~~~ - -~~~ -+----+----------+ -| id | balance | -+----+----------+ -| 1 | 10000.50 | -| 2 | 20000.75 | -| 3 | 8100.73 | -| 4 | 9400.10 | -+----+----------+ -~~~ - -### Insert Multiple Rows into a New Table - -The [`IMPORT`](import.html) statement performs better than `INSERT` when inserting rows into a new table. 
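-
-For reference, a minimal sketch of that approach follows. The table definition, bucket URLs, and staging location are hypothetical placeholders, and the exact options `IMPORT` requires (such as `WITH temp`) are covered in [`IMPORT`](import.html):
-
-{% include copy-clipboard.html %}
-~~~ sql
-> IMPORT TABLE accounts_bulk (id INT PRIMARY KEY, balance DECIMAL) -- hypothetical schema
-    CSV DATA ('gs://acme-bucket/accounts.csv')                     -- placeholder source file
-    WITH temp = 'gs://acme-bucket/temp';                           -- placeholder staging location
-~~~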
- -### Insert from a `SELECT` Statement - -{% include copy-clipboard.html %} -~~~ sql -> SHOW COLUMNS FROM other_accounts; -~~~ - -~~~ -+--------+---------+-------+---------+ -| Field | Type | Null | Default | -+--------+---------+-------+---------+ -| number | INT | false | NULL | -| amount | DECIMAL | true | NULL | -+--------+---------+-------+---------+ -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO accounts (id, balance) SELECT number, amount FROM other_accounts WHERE id > 4; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM accounts; -~~~ - -~~~ -+----+----------+ -| id | balance | -+----+----------+ -| 1 | 10000.5 | -| 2 | 20000.75 | -| 3 | 8100.73 | -| 4 | 9400.1 | -| 5 | 350.1 | -| 6 | 150 | -| 7 | 200.1 | -+----+----------+ -~~~ - -### Insert Default Values - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO accounts (id) VALUES (8); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO accounts (id, balance) VALUES (9, DEFAULT); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM accounts WHERE id in (8, 9); -~~~ - -~~~ -+----+---------+ -| id | balance | -+----+---------+ -| 8 | NULL | -| 9 | NULL | -+----+---------+ -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO accounts DEFAULT VALUES; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM accounts; -~~~ - -~~~ -+--------------------+----------+ -| id | balance | -+--------------------+----------+ -| 1 | 10000.5 | -| 2 | 20000.75 | -| 3 | 8100.73 | -| 4 | 9400.1 | -| 5 | 350.1 | -| 6 | 150 | -| 7 | 200.1 | -| 8 | NULL | -| 9 | NULL | -| 142933248649822209 | NULL | -+--------------------+----------+ -~~~ - -### Insert and Return Values - -In this example, the `RETURNING` clause returns the `id` values of the rows inserted, which are generated server-side by the `unique_rowid()` function. The language-specific versions assume that you have installed the relevant [client drivers](install-client-drivers.html). - -{{site.data.alerts.callout_success}}This use of RETURNING mirrors the behavior of MySQL's last_insert_id() function.{{site.data.alerts.end}} - -{{site.data.alerts.callout_info}}When a driver provides a query() method for statements that return results and an exec() method for statements that do not (e.g., Go), it's likely necessary to use the query() method for INSERT statements with RETURNING.{{site.data.alerts.end}} - -
      - - - - - -
      - -
      - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO accounts (id, balance) - VALUES (DEFAULT, 1000), (DEFAULT, 250) - RETURNING id; -~~~ - -~~~ -+--------------------+ -| id | -+--------------------+ -| 190018410823680001 | -| 190018410823712769 | -+--------------------+ -(2 rows) -~~~ - -
      - -
      - -{% include copy-clipboard.html %} -~~~ python -# Import the driver. -import psycopg2 - -# Connect to the "bank" database. -conn = psycopg2.connect( - database='bank', - user='root', - host='localhost', - port=26257 -) - -# Make each statement commit immediately. -conn.set_session(autocommit=True) - -# Open a cursor to perform database operations. -cur = conn.cursor() - -# Insert two rows into the "accounts" table -# and return the "id" values generated server-side. -cur.execute( - 'INSERT INTO accounts (id, balance) ' - 'VALUES (DEFAULT, 1000), (DEFAULT, 250) ' - 'RETURNING id' -) - -# Print out the returned values. -rows = cur.fetchall() -print('IDs:') -for row in rows: - print([str(cell) for cell in row]) - -# Close the database connection. -cur.close() -conn.close() -~~~ - -The printed values would look like: - -~~~ -IDs: -['190019066706952193'] -['190019066706984961'] -~~~ - -
      - -
      - -{% include copy-clipboard.html %} -~~~ ruby -# Import the driver. -require 'pg' - -# Connect to the "bank" database. -conn = PG.connect( - user: 'root', - dbname: 'bank', - host: 'localhost', - port: 26257 -) - -# Insert two rows into the "accounts" table -# and return the "id" values generated server-side. -conn.exec( - 'INSERT INTO accounts (id, balance) '\ - 'VALUES (DEFAULT, 1000), (DEFAULT, 250) '\ - 'RETURNING id' -) do |res| - -# Print out the returned values. -puts "IDs:" - res.each do |row| - puts row - end -end - -# Close communication with the database. -conn.close() -~~~ - -The printed values would look like: - -~~~ -IDs: -{"id"=>"190019066706952193"} -{"id"=>"190019066706984961"} -~~~ - -
      - -
- -{% include copy-clipboard.html %} -~~~ go -package main - -import ( - "database/sql" - "fmt" - "log" - - _ "github.com/lib/pq" -) - -func main() { - // Connect to the "bank" database. - db, err := sql.Open( - "postgres", - "postgresql://root@localhost:26257/bank?sslmode=disable", - ) - if err != nil { - log.Fatal("error connecting to the database: ", err) - } - - // Insert two rows into the "accounts" table - // and return the "id" values generated server-side. - rows, err := db.Query( - "INSERT INTO accounts (id, balance) " + - "VALUES (DEFAULT, 1000), (DEFAULT, 250) " + - "RETURNING id", - ) - if err != nil { - log.Fatal(err) - } - - // Print out the returned values. - defer rows.Close() - fmt.Println("IDs:") - for rows.Next() { - var id int - if err := rows.Scan(&id); err != nil { - log.Fatal(err) - } - fmt.Printf("%d\n", id) - } -} -~~~ - -The printed values would look like: - -~~~ -IDs: -190019066706952193 -190019066706984961 -~~~ - -
      - -
      - -{% include copy-clipboard.html %} -~~~ js -var async = require('async'); - -// Require the driver. -var pg = require('pg'); - -// Connect to the "bank" database. -var config = { - user: 'root', - host: 'localhost', - database: 'bank', - port: 26257 -}; - -pg.connect(config, function (err, client, done) { - // Closes communication with the database and exits. - var finish = function () { - done(); - process.exit(); - }; - - if (err) { - console.error('could not connect to cockroachdb', err); - finish(); - } - async.waterfall([ - function (next) { - // Insert two rows into the "accounts" table - // and return the "id" values generated server-side. - client.query( - `INSERT INTO accounts (id, balance) - VALUES (DEFAULT, 1000), (DEFAULT, 250) - RETURNING id;`, - next - ); - } - ], - function (err, results) { - if (err) { - console.error('error inserting into and selecting from accounts', err); - finish(); - } - // Print out the returned values. - console.log('IDs:'); - results.rows.forEach(function (row) { - console.log(row); - }); - - finish(); - }); -}); -~~~ - -The printed values would look like: - -~~~ -IDs: -{ id: '190019066706952193' } -{ id: '190019066706984961' } -~~~ - -
      - -### Update Values `ON CONFLICT` - -When a uniqueness conflict is detected, CockroachDB stores the row in a temporary table called `excluded`. This example demonstrates how you use the columns in the temporary `excluded` table to apply updates on conflict: - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO accounts (id, balance) - VALUES (8, 500.50) - ON CONFLICT (id) - DO UPDATE SET balance = excluded.balance; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM accounts WHERE id = 8; -~~~ - -~~~ -+----+---------+ -| id | balance | -+----+---------+ -| 8 | 500.50 | -+----+---------+ -~~~ - - -You can also update the row using an existing value: - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO accounts (id, balance) - VALUES (8, 500.50) - ON CONFLICT (id) - DO UPDATE SET balance = accounts.balance + excluded.balance; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM accounts WHERE id = 8; -~~~ - -~~~ -+----+---------+ -| id | balance | -+----+---------+ -| 8 | 1001.00 | -+----+---------+ -~~~ - -You can also use a `WHERE` clause to apply the `DO UPDATE SET` expression conditionally: - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO accounts (id, balance) - VALUES (8, 700) - ON CONFLICT (id) - DO UPDATE SET balance = excluded.balance - WHERE excluded.balance > accounts.balance; -~~~ - -~~~ sql -> SELECT * FROM accounts WHERE id = 8; -~~~ - -~~~ -+----+---------+ -| id | balance | -+----+---------+ -| 8 | 800 | -+----+---------+ -(1 row) -~~~ - -### Do Not Update Values `ON CONFLICT` - -In this example, we get an error from a uniqueness conflict: - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM accounts WHERE id = 8; -~~~ - -~~~ -+----+---------+ -| id | balance | -+----+---------+ -| 8 | 500.5 | -+----+---------+ -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO accounts (id, balance) VALUES (8, 125.50); -~~~ - -~~~ -pq: duplicate key value (id)=(8) violates unique constraint "primary" -~~~ - -In this example, we use `ON CONFLICT DO NOTHING` to ignore the uniqueness error and prevent the affected row from being updated: - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO accounts (id, balance) - VALUES (8, 125.50) - ON CONFLICT (id) - DO NOTHING; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM accounts WHERE id = 8; -~~~ - -~~~ -+----+---------+ -| id | balance | -+----+---------+ -| 8 | 500.5 | -+----+---------+ -~~~ - -In this example, `ON CONFLICT DO NOTHING` prevents the first row from updating while allowing the second row to be inserted: - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO accounts (id, balance) - VALUES (8, 125.50), (10, 450) - ON CONFLICT (id) - DO NOTHING; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM accounts WHERE id in (8, 10); -~~~ - -~~~ -+----+---------+ -| id | balance | -+----+---------+ -| 8 | 500.5 | -| 10 | 450 | -+----+---------+ -~~~ - -## See Also - -- [Selection Queries](selection-queries.html) -- [`DELETE`](delete.html) -- [`UPDATE`](update.html) -- [`UPSERT`](upsert.html) -- [`TRUNCATE`](truncate.html) -- [`ALTER TABLE`](alter-table.html) -- [`DROP TABLE`](drop-table.html) -- [`DROP DATABASE`](drop-database.html) -- [Other SQL Statements](sql-statements.html) diff --git a/src/current/v2.0/install-client-drivers.md b/src/current/v2.0/install-client-drivers.md deleted file mode 100644 index b201c1567c9..00000000000 --- a/src/current/v2.0/install-client-drivers.md +++ /dev/null @@ -1,25 +0,0 
@@ ---- -title: Install Client Drivers -summary: CockroachDB supports the PostgreSQL wire protocol, so you can use any available PostgreSQL client drivers. -toc: false ---- - -CockroachDB supports the PostgreSQL wire protocol, so most available PostgreSQL client drivers should work with CockroachDB. - -{{site.data.alerts.callout_info}}This page features drivers that we have tested enough to claim beta-level support. This means that applications using advanced or obscure features of a driver may encounter incompatibilities. If you encounter problems, please open an issue with details to help us make progress toward full support.{{site.data.alerts.end}} - -{{site.data.alerts.callout_success}}For code samples using these drivers, see the Build an App with CockroachDB tutorials.{{site.data.alerts.end}} - -App Language | Recommended Driver --------------|------------------- -Go | [pq](https://godoc.org/github.com/lib/pq) -Python | [psycopg2](http://initd.org/psycopg/) -Ruby | [pg](https://rubygems.org/gems/pg) -Java | [jdbc](https://jdbc.postgresql.org) -Node.js | [pg](https://www.npmjs.com/package/pg) -C | [libpq](http://www.postgresql.org/docs/9.5/static/libpq.html) -C++ | [libpqxx](https://github.com/jtv/libpqxx) -C# (.NET) | [Npgsql](http://www.npgsql.org/) -Clojure | [java.jdbc](https://clojure-doc.org/articles/ecosystem/java_jdbc/home/) -PHP | [php-pgsql](https://www.php.net/manual/en/book.pgsql.php) -Rust | postgres {% comment %} This link is in HTML instead of Markdown because HTML proofer dies bc of https://github.com/rust-lang/crates.io/issues/163 {% endcomment %} diff --git a/src/current/v2.0/install-cockroachdb.html b/src/current/v2.0/install-cockroachdb.html deleted file mode 100644 index 025e3127d86..00000000000 --- a/src/current/v2.0/install-cockroachdb.html +++ /dev/null @@ -1,463 +0,0 @@ ---- -title: Install CockroachDB -summary: Install CockroachDB on Mac, Linux, or Windows. Sign up for product release notes. -tags: download, binary, homebrew -toc: false -allowed_hashes: [os-mac, os-linux, os-windows] ---- - - - -
      - - - -
      - -
      -

      See Release Notes for what's new in the latest release, {{ page.release_info.version }}.

      - -
      -{% if page.version.stable %} -Use
      Homebrew
      -{% endif %} -Download the
      Binary
      -Build from
      Source
      -Use
      Docker
      -
      - -{% if page.version.stable %} -
      -
        -
      1. -

        Install Homebrew.

        -
      2. -
      3. -

        Instruct Homebrew to install CockroachDB:

        - -
        - icon/buttons/copy - -
        -
        $ brew install cockroach
        -
      4. -
      5. -

        Keep up-to-date with CockroachDB releases and best practices:

        -{% include marketo-install.html uid="1" %} -
      6. -
      -

      What's Next?

      -

Quickly start a single- or multi-node cluster locally and talk to it via the built-in SQL client.

      - -{% include {{ page.version.version }}/misc/diagnostics-callout.html %} - -
      -{% endif %} - -
      -
        -
      1. -

        Download the CockroachDB archive for OS X, and extract the binary:

        - -
        - icon/buttons/copy - -
        -
        $ curl https://binaries.cockroachdb.com/cockroach-{{page.release_info.version}}.darwin-10.9-amd64.tgz | tar -xz
        -
      2. -
      3. -

        Copy the binary into your PATH so it's easy to execute cockroach commands from any shell:

        - - {% include copy-clipboard.html %}
        cp -i cockroach-{{ page.release_info.version }}.darwin-10.9-amd64/cockroach /usr/local/bin/
        -

        If you get a permissions error, prefix the command with sudo.

        -
      4. -
      5. -

        Keep up-to-date with CockroachDB releases and best practices:

        -{% include marketo-install.html uid="2" %} -
      6. -
      -

      What's Next?

      -

Quickly start a single- or multi-node cluster locally and talk to it via the built-in SQL client.

      - -{% include {{ page.version.version }}/misc/diagnostics-callout.html %} - -
      - - - - -
      - - - - diff --git a/src/current/v2.0/int.md b/src/current/v2.0/int.md deleted file mode 100644 index c289464d310..00000000000 --- a/src/current/v2.0/int.md +++ /dev/null @@ -1,108 +0,0 @@ ---- -title: INT -summary: CockroachDB supports various signed integer data types. -toc: true ---- - -CockroachDB supports various signed integer [data types](data-types.html). - -{{site.data.alerts.callout_info}} -For instructions showing how to auto-generate integer values (e.g., to auto-number rows in a table), see [this FAQ entry](sql-faqs.html#how-do-i-auto-generate-unique-row-ids-in-cockroachdb). -{{site.data.alerts.end}} - - -## Names and Aliases - -Name | Allowed Width | Aliases ------|-------|-------- -`INT` | 64-bit | `INTEGER`
      `INT8`
      `INT64`
`BIGINT` -`INT4` | 32-bit | None -`INT2` | 16-bit | `SMALLINT` -`BIT` | 1-bit | None -`BIT(n)` | n-bit | None - -## Syntax - -A constant value of type `INT` can be entered as a [numeric literal](sql-constants.html#numeric-literals). -For example: `42`, `-1234`, or `0xCAFE`. - -## Size - -The different integer types place different constraints on the range of allowable values, but all integers are stored in the same way regardless of type. Smaller values take up less space than larger ones (based on the numeric value, not the data type). - -You can use the `BIT(n)` type, with `n` from 1 to 64, to constrain integers based on their corresponding binary values. For example, `BIT(5)` would allow `31` because it corresponds to the five-digit binary integer `11111`, but would not allow `32` because it corresponds to the six-digit binary integer `100000`, which is 1 bit too long. See the [example](#examples) below for a demonstration. - -{{site.data.alerts.callout_info}}BIT values are input and displayed in decimal format by default like all other integers, not in binary format. Also note that BIT is equivalent to BIT(1).{{site.data.alerts.end}} - -## Examples - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE ints (a INT PRIMARY KEY, b SMALLINT, c BIT(5)); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SHOW COLUMNS FROM ints; -~~~ - -~~~ -+-------+----------+-------+---------+-------------+ -| Field | Type | Null | Default | Indices | -+-------+----------+-------+---------+-------------+ -| a | INT | false | NULL | {"primary"} | -| b | SMALLINT | true | NULL | {} | -| c | BIT(5) | true | NULL | {} | -+-------+----------+-------+---------+-------------+ -(3 rows) -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO ints VALUES (1, 32, 32); -~~~ - -~~~ -pq: bit string too long for type BIT(5) (column "c") -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO ints VALUES (1, 32, 31); -~~~ - -~~~ -INSERT 1 -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM ints; -~~~ - -~~~ -+---+----+----+ -| a | b | c | -+---+----+----+ -| 1 | 32 | 31 | -+---+----+----+ -(1 row) -~~~ - -## Supported Casting & Conversion - -`INT` values can be [cast](data-types.html#data-type-conversions-casts) to any of the following data types: - -Type | Details ------|-------- -`DECIMAL` | –– -`FLOAT` | Loses precision if the `INT` value is larger than 2^53 in magnitude -`BOOL` | **0** converts to `false`; all other values convert to `true` -`DATE` | Converts to days since the Unix epoch (Jan. 1, 1970). This is a CockroachDB experimental feature which may be changed without notice. -`TIMESTAMP` | Converts to seconds since the Unix epoch (Jan. 1, 1970). This is a CockroachDB experimental feature which may be changed without notice. -`INTERVAL` | Converts to microseconds -`STRING` | –– - -## See Also - -[Data Types](data-types.html) diff --git a/src/current/v2.0/interleave-in-parent.md b/src/current/v2.0/interleave-in-parent.md deleted file mode 100644 index 0c46b1a60c3..00000000000 --- a/src/current/v2.0/interleave-in-parent.md +++ /dev/null @@ -1,184 +0,0 @@ ---- -title: INTERLEAVE IN PARENT -summary: Interleaving tables improves query performance by optimizing the key-value structure of closely related tables' data. 
-toc: true -toc_not_nested: true ---- - -Interleaving tables improves query performance by optimizing the key-value structure of closely related tables, attempting to keep data on the same [key-value range](frequently-asked-questions.html#how-does-cockroachdb-scale) if it's likely to be read and written together. - -{{site.data.alerts.callout_info}}Interleaving tables does not affect their behavior within SQL.{{site.data.alerts.end}} - - -## How Interleaved Tables Work - -When tables are interleaved, data written to one table (known as the **child**) is inserted directly into another (known as the **parent**) in the key-value store. This is accomplished by matching the child table's Primary Key to the parent's. - -### Interleave Prefix - -For interleaved tables to have Primary Keys that can be matched, the child table must use the parent table's entire Primary Key as a prefix of its own Primary Key––these matching columns are referred to as the **interleave prefix**. It's easiest to think of these columns as representing the same data, which is usually implemented with Foreign Keys. - -{{site.data.alerts.callout_success}}To formally enforce the relationship between each table's interleave prefix columns, we recommend using Foreign Key constraints.{{site.data.alerts.end}} - -For example, if you want to interleave `orders` into `customers` and the Primary Key of customers is `id`, you need to create a column representing `customers.id` as the first column in the Primary Key of `orders`—e.g., with a column called `customer`. So the data representing `customers.id` is the interleave prefix, which exists in the `orders` table as the `customer` column. - -### Key-Value Structure - -When you write data into the child table, it is inserted into the key-value store immediately after the parent table's key matching the interleave prefix. - -For example, if you interleave `orders` into `customers`, the `orders` data is written directly within the `customers` table in the key-value store. The following is a crude, illustrative example of what the keys would look like in this structure: - -~~~ -/customers/1 -/customers/1/orders/1000 -/customers/1/orders/1002 -/customers/2 -/customers/2/orders/1001 -/customers/2/orders/1003 -... -/customers/n/ -/customers/n/orders/ -~~~ - -By writing data in this way, related data is more likely to remain on the same key-value range, which can make it much faster to read from and write to. Using the above example, all of customer 1's data is going to be written to the same range, including its representation in both the `customers` and `orders` tables. - -## When to Interleave Tables - -{% include {{ page.version.version }}/faq/when-to-interleave-tables.html %} - -### Interleaved Hierarchy - -Interleaved tables typically work best when the tables form a hierarchy. For example, you could interleave the table `orders` (as the child) into the table `customers` (as the parent, which represents the people who placed the orders). You can extend this example by also interleaving the tables `invoices` (as a child) and `packages` (as a child) into `orders` (as the parent). - -The entire set of these relationships is referred to as the **interleaved hierarchy**, which contains all of the tables related through [interleave prefixes](#interleave-prefix). - -### Benefits - -In general, reads, writes, and joins of values related through the interleave prefix are *much* faster. 
However, you can also improve performance with any of the following: - -- Filtering more columns in the interleave prefix (from left to right). - - For example, if the interleave prefix of `packages` is `(customer, order)`, filtering on `customer` would be fast, but filtering on `customer` *and* `order` would be faster. - -- Using only tables in the interleaved hierarchy. - -### Tradeoffs - -- In general, reads and deletes over ranges of table values (e.g., `WHERE column > value`) in interleaved tables are slower. - - However, an exception to this is performing operations on ranges of table values in the greatest descendant in the interleaved hierarchy that filters on all columns of the interleave prefix with constant values. - - For example, if the interleave prefix of `packages` is `(customer, order)`, filtering on the entire interleave prefix with constant values while calculating a range of table values on another column, like `WHERE customer = 1 AND order = 1001 AND delivery_date > DATE '2016-01-25'`, would still be fast. - -- If the amount of interleaved data stored for any Primary Key value of the root table is larger than [a key-value range's maximum size](configure-replication-zones.html#replication-zone-format) (64MB by default), the interleaved optimizations will be diminished. - - For example, if one customer has 200MB of order data, their data is likely to be spread across multiple key-value ranges and CockroachDB will not be able to access it as quickly, despite it being interleaved. - -## Syntax - -
      -{% include {{ page.version.version }}/sql/diagrams/interleave.html %} -
      - -## Parameters - -| Parameter | Description | -|-----------|-------------| -| `CREATE TABLE ...` | For help with this section of the syntax, [`CREATE TABLE`](create-table.html). -| `INTERLEAVE IN PARENT table_name` | The name of the parent table you want to interleave the new child table into. | -| `name_list` | A comma-separated list of columns from the child table's Primary Key that represent the parent table's Primary Key (i.e., the interleave prefix). | - -## Requirements - -- You can only interleave tables when creating the child table. - -- Each child table's Primary Key must contain its parent table's Primary Key as a prefix (known as the **interleave prefix**). - - For example, if the parent table's primary key is `(a INT, b STRING)`, the child table's primary key could be `(a INT, b STRING, c DECIMAL)`. - - {{site.data.alerts.callout_info}}This requirement is enforced only by ensuring that the columns use the same data types. However, we recommend ensuring the columns refer to the same values by using the Foreign Key constraint.{{site.data.alerts.end}} - -- Interleaved tables cannot be the child of more than 1 parent table. However, each parent table can have many children tables. Children tables can also be parents of interleaved tables. - -## Recommendations - -- Use interleaved tables when your schema forms a hierarchy, and the Primary Key of the root table (for example, a "user ID" or "account ID") is a parameter to most of your queries. - -- To enforce the relationship between the parent and children table's Primary Keys, use [Foreign Key constraints](foreign-key.html) on the child table. - -- In cases where you're uncertain if interleaving tables will improve your queries' performance, test how tables perform under load when they're interleaved and when they aren't. - -## Examples - -### Interleaving Tables - -This example creates an interleaved hierarchy between `customers`, `orders`, and `packages`, as well as the appropriate Foreign Key constraints. You can see that each child table uses its parent table's Primary Key as a prefix of its own Primary Key (the **interleave prefix**). - -~~~ sql -> CREATE TABLE customers ( - id INT PRIMARY KEY, - name STRING(50) - ); - -> CREATE TABLE orders ( - customer INT, - id INT, - total DECIMAL(20, 5), - PRIMARY KEY (customer, id), - CONSTRAINT fk_customer FOREIGN KEY (customer) REFERENCES customers - ) INTERLEAVE IN PARENT customers (customer) - ; - -> CREATE TABLE packages ( - customer INT, - "order" INT, - id INT, - address STRING(50), - delivered BOOL, - delivery_date DATE, - PRIMARY KEY (customer, "order", id), - CONSTRAINT fk_order FOREIGN KEY (customer, "order") REFERENCES orders - ) INTERLEAVE IN PARENT orders (customer, "order") - ; -~~~ - -### Key-Value Storage Example - -It can be easier to understand what interleaving tables does by seeing what it looks like in the key-value store. 
-For example, using the above example of interleaving `orders` in `customers`, we could insert the following values:
-
-~~~ sql
-> INSERT INTO customers
-    (id, name) VALUES
-    (1, 'Ha-Yun'),
-    (2, 'Emanuela');
-
-> INSERT INTO orders
-    (customer, id, total) VALUES
-    (1, 1000, 100.00),
-    (2, 1001, 90.00),
-    (1, 1002, 80.00),
-    (2, 1003, 70.00);
-~~~
-
-Using an illustrative format of the key-value store (keys are on the left; values are represented by `-> value`), the data would be written like this:
-
-~~~
-/customers/1              -> 'Ha-Yun'
-/customers/1/orders/1000  -> 100.00
-/customers/1/orders/1002  -> 80.00
-/customers/2              -> 'Emanuela'
-/customers/2/orders/1001  -> 90.00
-/customers/2/orders/1003  -> 70.00
-~~~
-
-You'll notice that `customers.id` and `orders.customer` are written into the same position in the key-value store. This is how CockroachDB relates the two tables' data for the interleaved structure. By storing data this way, accessing any of the `orders` data alongside the `customers` data is much faster.
-
-{{site.data.alerts.callout_info}}If we didn't set Foreign Key constraints between customers.id and orders.customer and inserted orders.customer = 3, the data would still get written into the key-value store in the expected location next to the customers table identifier, but SELECT * FROM customers WHERE id = 3 would not return any values.{{site.data.alerts.end}}
-
-To better understand how CockroachDB writes key-value data, see our blog post [Mapping Table Data to Key-Value Storage](https://www.cockroachlabs.com/blog/sql-in-cockroachdb-mapping-table-data-to-key-value-storage/).
-
-## See Also
-
-- [`CREATE TABLE`](create-table.html)
-- [Foreign Keys](foreign-key.html)
-- [Column Families](column-families.html)
diff --git a/src/current/v2.0/internal/version-switcher-page-data.json b/src/current/v2.0/internal/version-switcher-page-data.json
deleted file mode 100644
index 5ec30bf893f..00000000000
--- a/src/current/v2.0/internal/version-switcher-page-data.json
+++ /dev/null
@@ -1,17 +0,0 @@
----
-layout: none
----
-
-{%- capture page_folder -%}/{{ page.version.version }}/{%- endcapture -%}
-{%- assign pages = site.pages | where_exp: "pages", "pages.url contains page_folder" | where_exp: "pages", "pages.name != '404.md'" -%}
-{
-{%- for x in pages -%}
-{%- assign key = x.url | replace: page_folder, "" -%}
-{%- if x.key -%}
-    {%- assign key = x.key -%}
-{%- endif %}
-    {{ key | jsonify }}: {
-        "url": {{ x.url | jsonify }}
-    }{% unless forloop.last %},{% endunless -%}
-{% endfor %}
-}
\ No newline at end of file
diff --git a/src/current/v2.0/interval.md b/src/current/v2.0/interval.md
deleted file mode 100644
index 7f57dd36317..00000000000
--- a/src/current/v2.0/interval.md
+++ /dev/null
@@ -1,102 +0,0 @@
----
-title: INTERVAL
-summary: The INTERVAL data type stores a value that represents a span of time.
-toc: true
----
-
-The `INTERVAL` [data type](data-types.html) stores a value that represents a span of time.
-
-
-## Syntax
-
-A constant value of type `INTERVAL` can be expressed using an
-[interpreted literal](sql-constants.html#interpreted-literals), or a
-string literal
-[annotated with](scalar-expressions.html#explicitly-typed-expressions)
-type `INTERVAL` or
-[coerced to](scalar-expressions.html#explicit-type-coercions) type
-`INTERVAL`.
-
-`INTERVAL` constants can be expressed using the following formats:
-
-Format | Description
--------|--------
-SQL Standard | `INTERVAL 'Y-M D H:M:S'`<br><br>`Y-M D`: Using a single value defines days only; using two values defines years and months. Values must be integers.<br><br>`H:M:S`: Using a single value defines seconds only; using two values defines hours and minutes. Values can be integers or floats.<br><br>Note that each side is optional.
-ISO 8601 | `INTERVAL 'P1Y2M3DT4H5M6S'`
-Traditional PostgreSQL | `INTERVAL '1 year 2 months 3 days 4 hours 5 minutes 6 seconds'`
-Golang | `INTERVAL '1h2m3s4ms5us6ns'`<br><br>Note that `ms` is milliseconds, `us` is microseconds, and `ns` is nanoseconds. Also, all fields support both integers and floats.
-
-CockroachDB also supports using uninterpreted
-[string literals](sql-constants.html#string-literals) in contexts
-where an `INTERVAL` value is otherwise expected.
-
-Intervals are stored internally as months, days, and nanoseconds.
-
-## Size
-
-An `INTERVAL` column supports values up to 24 bytes in width, but the total storage size is likely to be larger due to CockroachDB metadata.
-
-## Example
-
-~~~ sql
-> CREATE TABLE intervals (a INT PRIMARY KEY, b INTERVAL);
-~~~
-
-~~~
-CREATE TABLE
-~~~
-
-~~~ sql
-> SHOW COLUMNS FROM intervals;
-~~~
-
-~~~
-+-------+----------+-------+---------+
-| Field |   Type   | Null  | Default |
-+-------+----------+-------+---------+
-| a     | INT      | false | NULL    |
-| b     | INTERVAL | true  | NULL    |
-+-------+----------+-------+---------+
-~~~
-
-~~~ sql
-> INSERT INTO intervals VALUES
-    (1, INTERVAL '1h2m3s4ms5us6ns'),
-    (2, INTERVAL '1 year 2 months 3 days 4 hours 5 minutes 6 seconds'),
-    (3, INTERVAL '1-2 3 4:5:6');
-~~~
-
-~~~
-INSERT 3
-~~~
-
-~~~ sql
-> SELECT * FROM intervals;
-~~~
-
-~~~
-+---+------------------+
-| a |        b         |
-+---+------------------+
-| 1 | 1h2m3.004005006s |
-| 2 | 14m3d4h5m6s      |
-| 3 | 14m3d4h5m6s      |
-+---+------------------+
-(3 rows)
-~~~
-
-## Supported Casting & Conversion
-
-`INTERVAL` values can be [cast](data-types.html#data-type-conversions-casts) to any of the following data types:
-
-Type | Details
------|--------
-`INT` | Converts to number of seconds (second precision)
-`DECIMAL` | Converts to number of seconds (nanosecond precision)
-`FLOAT` | Converts to number of picoseconds
-`STRING` | Converts to `h-m-s` format (nanosecond precision)
-`TIME` | New in v2.0: Converts to `HH:MM:SS.SSSSSS`, the time equivalent to the interval after midnight (microsecond precision)
-
-## See Also
-
-[Data Types](data-types.html)
diff --git a/src/current/v2.0/inverted-indexes.md b/src/current/v2.0/inverted-indexes.md
deleted file mode 100644
index a5b8de44ff6..00000000000
--- a/src/current/v2.0/inverted-indexes.md
+++ /dev/null
@@ -1,198 +0,0 @@
----
-title: Inverted Indexes
-summary: Inverted indexes improve your database's performance and usefulness by helping SQL locate schemaless data in a JSONB column.
-toc: true
----
-
-New in v2.0: Inverted indexes improve your database's performance by helping SQL locate the schemaless data in a [`JSONB`](jsonb.html) column.
-
-{{site.data.alerts.callout_success}}For a hands-on demonstration of using an inverted index to improve query performance on a JSONB column, see the JSON tutorial.{{site.data.alerts.end}}
-
-
-## How Do Inverted Indexes Work?
-
-Standard [indexes](indexes.html) work well for searches based on prefixes of sorted data. However, schemaless data like [`JSONB`](jsonb.html) cannot be queried without a full table scan, since it does not adhere to ordinary value prefix comparison operators. `JSONB` needs to be indexed in a more detailed way than what a standard index provides. This is where inverted indexes prove useful.
-
-Inverted indexes filter on components of tokenizable data. The `JSONB` data type is built on two structures that can be tokenized:
-
-- **Objects** - Collections of key-value pairs where each key-value pair is a token.
-- **Arrays** - Ordered lists of values where every value in the array is a token.
-
-For example, take the following `JSONB` value in column `person`:
-
-~~~ json
-{
-    "firstName": "John",
-    "lastName": "Smith",
-    "age": 25,
-    "address": {
-        "state": "NY",
-        "postalCode": "10021"
-    },
-    "cars": [
-        "Subaru",
-        "Honda"
-    ]
-}
-~~~
-
-An inverted index for this object would have an entry per component, mapping it back to the original object:
-
-~~~
-"firstName": "John"
-"lastName": "Smith"
-"age": 25
-"address": "state": "NY"
-"address": "postalCode": "10021"
-"cars" : "Subaru"
-"cars" : "Honda"
-~~~
-
-This lets you search based on subcomponents.
-
-### Creation
-
-You can use inverted indexes to improve the performance of queries using `JSONB` columns. You can create them:
-
-- At the same time as the table with the `INVERTED INDEX` clause of [`CREATE TABLE`](create-table.html#create-a-table-with-secondary-and-inverted-indexes-new-in-v2-0).
-- For existing tables with [`CREATE INVERTED INDEX`](create-index.html).
-- Using the following PostgreSQL-compatible syntax:
-
-    ~~~ sql
-    > CREATE INDEX <optional name> ON <table> USING GIN (<column>);
-    ~~~
-
-### Selection
-
-If a query contains a filter against an indexed `JSONB` column that uses any of the supported operators, the inverted index is added to the set of index candidates.
-
-Because each query can use only a single index, CockroachDB selects the index it calculates will scan the fewest rows (i.e., the fastest). For more detail, check out our blog post [Index Selection in CockroachDB](https://www.cockroachlabs.com/blog/index-selection-cockroachdb-2/).
-
-To override CockroachDB's index selection, you can also force [queries to use a specific index](table-expressions.html#force-index-selection) (also known as "index hinting").
-
-### Storage
-
-CockroachDB stores indexes directly in your key-value store. You can find more information in our blog post [Mapping Table Data to Key-Value Storage](https://www.cockroachlabs.com/blog/sql-in-cockroachdb-mapping-table-data-to-key-value-storage/).
-
-### Locking
-
-Tables are not locked during index creation thanks to CockroachDB's [schema change procedure](https://www.cockroachlabs.com/blog/how-online-schema-changes-are-possible-in-cockroachdb/).
-
-### Performance
-
-Indexes create a trade-off: they greatly improve the speed of queries, but slightly slow down writes (because new values have to be copied and sorted). The first index you create has the largest impact, but additional indexes only introduce marginal overhead.
-
-### Comparisons
-
-Currently, inverted indexes only support equality comparisons using the `=` operator. If you require comparisons using `>`, `<=`, etc., you can create a computed column from your JSON payload and then create a regular index on that computed column. For example, to write a query where the value of "foo" is greater than three, you would:
-
-1. Create your table with a computed column:
-
-    {% include copy-clipboard.html %}
-    ~~~ sql
-    > CREATE TABLE test (
-        id INT,
-        data JSONB,
-        foo INT AS ((data->>'foo')::INT) STORED
-      );
-    ~~~
-
-2. Create an index on your computed column:
-
-    {% include copy-clipboard.html %}
-    ~~~ sql
-    > CREATE INDEX test_idx ON test (foo);
-    ~~~
-
-3. Execute your query with your comparison:
-
-    {% include copy-clipboard.html %}
-    ~~~ sql
-    > SELECT * FROM test WHERE foo > 3;
-    ~~~
-
-## Example
-
-In this example, let's create a table with a `JSONB` column and an inverted index:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE users (
-    profile_id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
-    last_updated TIMESTAMP DEFAULT now(),
-    user_profile JSONB,
-    INVERTED INDEX user_details (user_profile)
-  );
-~~~
-
-Then, insert a few rows of data:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> INSERT INTO users (user_profile) VALUES
-    ('{"first_name": "Lola", "last_name": "Dog", "location": "NYC", "online" : true, "friends" : 547}'),
-    ('{"first_name": "Ernie", "status": "Looking for treats", "location" : "Brooklyn"}'),
-    ('{"first_name": "Carl", "last_name": "Kimball", "location": "NYC", "breed": "Boston Terrier"}');
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT *, jsonb_pretty(user_profile) FROM users;
-~~~
-~~~
-+--------------------------------------+----------------------------------+--------------------------------------------------------------------------+------------------------------------+
-|              profile_id              |           last_updated           |                               user_profile                               |            jsonb_pretty            |
-+--------------------------------------+----------------------------------+--------------------------------------------------------------------------+------------------------------------+
-| 81330a51-80b2-44aa-b793-1b8d84ba69c9 | 2018-03-13 18:26:24.521541+00:00 | {"breed": "Boston Terrier", "first_name": "Carl", "last_name":           | {                                  |
-|                                      |                                  | "Kimball", "location": "NYC"}                                            |                                    |
-|                                      |                                  |                                                                          |     "breed": "Boston Terrier",     |
-|                                      |                                  |                                                                          |     "first_name": "Carl",          |
-|                                      |                                  |                                                                          |     "last_name": "Kimball",        |
-|                                      |                                  |                                                                          |     "location": "NYC"              |
-|                                      |                                  |                                                                          | }                                  |
-| 81c87adc-a49c-4bed-a59c-3ac417756d09 | 2018-03-13 18:26:24.521541+00:00 | {"first_name": "Ernie", "location": "Brooklyn", "status": "Looking for   | {                                  |
-|                                      |                                  | treats"}                                                                 |                                    |
-|                                      |                                  |                                                                          |     "first_name": "Ernie",         |
-|                                      |                                  |                                                                          |     "location": "Brooklyn",        |
-|                                      |                                  |                                                                          |     "status": "Looking for treats" |
-|                                      |                                  |                                                                          | }                                  |
-| ec0a4942-b0aa-4a04-80ae-591b3f57721e | 2018-03-13 18:26:24.521541+00:00 | {"first_name": "Lola", "friends": 547, "last_name": "Dog", "location":   | {                                  |
-|                                      |                                  | "NYC", "online": true}                                                   |                                    |
-|                                      |                                  |                                                                          |     "first_name": "Lola",          |
-|                                      |                                  |                                                                          |     "friends": 547,                |
-|                                      |                                  |                                                                          |     "last_name": "Dog",            |
-|                                      |                                  |                                                                          |     "location": "NYC",             |
-|                                      |                                  |                                                                          |     "online": true                 |
-|                                      |                                  |                                                                          | }                                  |
-+--------------------------------------+----------------------------------+--------------------------------------------------------------------------+------------------------------------+
-~~~
-
-Now, run a query that filters on the `JSONB` column:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM users WHERE user_profile @> '{"location":"NYC"}';
-~~~
-~~~
-+--------------------------------------+----------------------------------+--------------------------------------------------------------------------+
-|              profile_id              |           last_updated           |                               user_profile                               |
-+--------------------------------------+----------------------------------+--------------------------------------------------------------------------+
-| 81330a51-80b2-44aa-b793-1b8d84ba69c9 | 2018-03-13 18:26:24.521541+00:00 | {"breed": "Boston Terrier", "first_name": "Carl", "last_name":           |
-|                                      |                                  | "Kimball", "location": "NYC"}                                            |
-| ec0a4942-b0aa-4a04-80ae-591b3f57721e | 2018-03-13 18:26:24.521541+00:00 | {"first_name": "Lola", "friends": 547, "last_name": "Dog", "location":   |
-|                                      |                                  | "NYC", "online": true}                                                   |
-+--------------------------------------+----------------------------------+--------------------------------------------------------------------------+
-(2 rows)
-~~~
-
-## See Also
-
-- [`JSONB`](jsonb.html)
-- [JSON tutorial](demo-json-support.html)
-- [Computed Columns](computed-columns.html)
-- [`CREATE INDEX`](create-index.html)
-- [`DROP INDEX`](drop-index.html)
-- [`RENAME INDEX`](rename-index.html)
-- [`SHOW INDEX`](show-index.html)
-- [Indexes](indexes.html)
-- [SQL Statements](sql-statements.html)
diff --git a/src/current/v2.0/joins.md b/src/current/v2.0/joins.md
deleted file mode 100644
index 5c7469c4e1e..00000000000
--- a/src/current/v2.0/joins.md
+++ /dev/null
@@ -1,121 +0,0 @@
----
-title: Join Expressions
-summary: Join expressions combine data from two or more table expressions.
-toc: true
----
-
-Join expressions, also called "joins", combine the results of two or
-more table expressions based on conditions on the values of particular columns.
-
-Join expressions define a data source in the `FROM` sub-clause of [simple `SELECT` clauses](select-clause.html), or as a parameter to [`TABLE`](selection-queries.html#table-clause). Joins are a particular kind of [table expression](table-expressions.html).
-
-
-## Synopsis
-
      - {% include {{ page.version.version }}/sql/diagrams/joined_table.html %} -
      - -
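-
-For instance, here is a minimal sketch (the `customers` and `orders` tables are hypothetical) of a join expression used as a data source in a `FROM` clause:
-
-~~~ sql
-> SELECT c.name, o.total
-    FROM customers AS c
-    JOIN orders AS o ON c.id = o.customer;
-~~~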
-
-## Parameters
-
-Parameter | Description
-----------|------------
-`joined_table` | Another join expression.
-`table_ref` | A [table expression](table-expressions.html).
-`a_expr` | A [scalar expression](scalar-expressions.html) to use as an [`ON` join condition](#supported-join-conditions).
-`name` | A column name to use as a [`USING` join condition](#supported-join-conditions).
-
-## Supported Join Types
-
-CockroachDB supports the following uses of `JOIN`.
-
-### Inner Joins
-
-Only the rows from the left and right operand that match the condition are returned.
-
-~~~
-<table expr> [ INNER ] JOIN <table expr> ON <scalar expr>
-<table expr> [ INNER ] JOIN <table expr> USING(<colname>, <colname>, ...)
-<table expr> NATURAL [ INNER ] JOIN <table expr>
-<table expr> CROSS JOIN <table expr>
-~~~
-
-### Left Outer Joins
-
-For every left row where there is no match on the right, `NULL` values are returned for the columns on the right.
-
-~~~
-<table expr> LEFT [ OUTER ] JOIN <table expr> ON <scalar expr>
-<table expr> LEFT [ OUTER ] JOIN <table expr> USING(<colname>, <colname>, ...)
-<table expr> NATURAL LEFT [ OUTER ] JOIN <table expr>
-~~~
-
-### Right Outer Joins
-
-For every right row where there is no match on the left, `NULL` values are returned for the columns on the left.
-
-~~~
-<table expr> RIGHT [ OUTER ] JOIN <table expr> ON <scalar expr>
-<table expr> RIGHT [ OUTER ] JOIN <table expr> USING(<colname>, <colname>, ...)
-<table expr> NATURAL RIGHT [ OUTER ] JOIN <table expr>
-~~~
-
-### Full Outer Joins
-
-For every row on one side of the join where there is no match on the other side, `NULL` values are returned for the columns on the non-matching side.
-
-~~~
-<table expr> FULL [ OUTER ] JOIN <table expr> ON <scalar expr>
-<table expr> FULL [ OUTER ] JOIN <table expr> USING(<colname>, <colname>, ...)
-<table expr> NATURAL FULL [ OUTER ] JOIN <table expr>
-~~~
-
-## Supported Join Conditions
-
-CockroachDB supports the following conditions to match rows in a join:
-
-- No condition with `CROSS JOIN`: each row on the left is considered
-  to match every row on the right.
-- `ON` predicates: a Boolean [scalar expression](scalar-expressions.html)
-  is evaluated to determine whether the operand rows match.
-- `USING`: the named columns are compared pairwise from the left and
-  right rows; left and right rows are considered to match if the
-  columns are equal pairwise.
-- `NATURAL`: generates an implicit `USING` condition using all the
-  column names that are present in both the left and right table
-  expressions.
-
-{{site.data.alerts.callout_danger}}NATURAL is supported for
-compatibility with PostgreSQL; its use in new applications is
-discouraged, because its results can silently change in unpredictable
-ways when new columns are added to one of the join
-operands.{{site.data.alerts.end}}
-
-
-## Performance Best Practices
-
-{{site.data.alerts.callout_info}}CockroachDB is currently undergoing major changes to evolve and improve the performance of queries using joins. The restrictions and workarounds listed in this section will be lifted or made unnecessary over time.{{site.data.alerts.end}}
-
-- Joins over [interleaved tables](interleave-in-parent.html) are usually (but not always) processed more effectively than over non-interleaved tables.
-
-- When no indexes can be used to satisfy a join, CockroachDB may load into memory all the rows that satisfy the condition on one of the join operands before starting to return result rows. This may cause joins to fail if the join condition or other `WHERE` clauses are insufficiently selective.
-
-- Outer joins are generally processed less efficiently than inner joins. Prefer using inner joins whenever possible. Full outer joins are the least optimized.
-
-- Use [`EXPLAIN`](explain.html) over queries containing joins to verify that indexes are used.
-
-- See [Index Best Practices](performance-best-practices-overview.html#indexes-best-practices).
-
-## See Also
-
-- [Scalar Expressions](scalar-expressions.html)
-- [Table Expressions](table-expressions.html)
-- [Simple `SELECT` Clause](select-clause.html)
-- [Selection Queries](selection-queries.html)
-- [`EXPLAIN`](explain.html)
-- [Performance Best Practices - Overview](performance-best-practices-overview.html)
-- [SQL join operation (Wikipedia)](https://en.wikipedia.org/wiki/Join_(SQL))
-- [CockroachDB's first implementation of SQL joins (CockroachDB Blog)](https://www.cockroachlabs.com/blog/cockroachdbs-first-join/)
-- [On the Way to Better SQL Joins in CockroachDB (CockroachDB Blog)](https://www.cockroachlabs.com/blog/better-sql-joins-in-cockroachdb/)
diff --git a/src/current/v2.0/jsonb.md b/src/current/v2.0/jsonb.md
deleted file mode 100644
index 806795d924c..00000000000
--- a/src/current/v2.0/jsonb.md
+++ /dev/null
@@ -1,200 +0,0 @@
----
-title: JSONB
-summary: The JSONB data type stores JSON (JavaScript Object Notation) data.
-toc: true
----
-
-New in v2.0: The `JSONB` [data type](data-types.html) stores JSON (JavaScript Object Notation) data as a binary representation of the `JSONB` value, which eliminates whitespace, duplicate keys, and key ordering. `JSONB` supports [inverted indexes](inverted-indexes.html).
-
-{{site.data.alerts.callout_success}}For a hands-on demonstration of storing and querying JSON data from a third-party API, see the JSON tutorial.{{site.data.alerts.end}}
-
-
-## Alias
-
-In CockroachDB, `JSON` is an alias for `JSONB`.
- -{{site.data.alerts.callout_info}}In PostgreSQL, JSONB and JSON are two different data types. In CockroachDB, the JSONB / JSON data type is similar in behavior to the JSONB data type in PostgreSQL. -{{site.data.alerts.end}} - -## Considerations - -- The [primary key](primary-key.html), [foreign key](foreign-key.html), and [unique](unique.html) [constraints](constraints.html) cannot be used on `JSONB` values. -- A standard [index](indexes.html) cannot be created on a `JSONB` column; you must use an [inverted index](inverted-indexes.html). - -## Syntax - -The syntax for the `JSONB` data type follows the format specified in [RFC8259](https://tools.ietf.org/html/rfc8259). A constant value of type `JSONB` can be expressed using an -[interpreted literal](sql-constants.html#interpreted-literals) or a -string literal -[annotated with](scalar-expressions.html#explicitly-typed-expressions) -type `JSONB`. - -There are six types of `JSONB` values: - -- `null` -- Boolean -- String -- Number (i.e., [`decimal`](decimal.html), **not** the standard `int64`) -- Array (i.e., an ordered sequence of `JSONB` values) -- Object (i.e., a mapping from strings to `JSONB` values) - -Examples: - -- `'{"type": "account creation", "username": "harvestboy93"}'` -- `'{"first_name": "Ernie", "status": "Looking for treats", "location" : "Brooklyn"}'` - -{{site.data.alerts.callout_info}}If duplicate keys are included in the input, only the last value is kept.{{site.data.alerts.end}} - -## Size - -The size of a `JSONB` value is variable, but it's recommended to keep values under 1 MB to ensure performance. Above that threshold, [write amplification](https://en.wikipedia.org/wiki/Write_amplification) and other considerations may cause significant performance degradation. - -## `JSONB` Functions - -Function | Description ----------|------------ -`jsonb_array_elements()` | Expands a `JSONB` array to a set of `JSONB` values. -`jsonb_build_object(...)` | Builds a `JSONB` object out of a variadic argument list that alternates between keys and values. -`jsonb_each()` | Expands the outermost `JSONB` object into a set of key-value pairs. -`jsonb_object_keys()` | Returns sorted set of keys in the outermost `JSONB` object. -`jsonb_pretty()` | Returns the given `JSONB` value as a `STRING` indented and with newlines. See the [example](#retrieve-formatted-jsonb-data) below. - -For the full list of supported `JSONB` functions, see [Functions and Operators](functions-and-operators.html#jsonb-functions). - -## `JSONB` Operators - -Operator | Description | Example | ----------|-------------|---------| -`->` | Access a `JSONB` field, returning a `JSONB` value. | `SELECT '{"foo":"bar"}'::JSONB->'foo' = '"bar"'::JSONB;` -`->>` | Access a `JSONB` field, returning a string. | `SELECT '{"foo":"bar"}'::JSONB->>'foo' = 'bar'::STRING;` -`@>` | Tests whether the left `JSONB` field contains the right `JSONB` field. | `SELECT ('{"foo": {"baz": 3}, "bar": 2}'::JSONB @> '{"foo": {"baz":3}}'::JSONB ) = true;` - -For the full list of supported `JSONB` operators, see [Functions and Operators](functions-and-operators.html). 
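-
-As an illustrative sketch (using the `users` table created in the examples below), the operators above can be combined in a single query:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT user_profile->>'first_name' FROM users WHERE user_profile @> '{"location": "NYC"}';
-~~~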
- -## Examples - -### Create a Table with a `JSONB` Column - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE users ( - profile_id UUID PRIMARY KEY DEFAULT gen_random_uuid(), - last_updated TIMESTAMP DEFAULT now(), - user_profile JSONB - ); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SHOW COLUMNS FROM users; -~~~ -~~~ -+--------------+-----------+-------+-------------------+-------------+ -| Field | Type | Null | Default | Indices | -+--------------+-----------+-------+-------------------+-------------+ -| profile_id | UUID | false | gen_random_uuid() | {"primary"} | -| last_updated | TIMESTAMP | true | now() | {} | -| user_profile | JSON | true | NULL | {} | -+--------------+-----------+-------+-------------------+-------------+ -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO users (user_profile) VALUES - ('{"first_name": "Lola", "last_name": "Dog", "location": "NYC", "online" : true, "friends" : 547}'), - ('{"first_name": "Ernie", "status": "Looking for treats", "location" : "Brooklyn"}'); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM users; -~~~ -~~~ -+--------------------------------------+----------------------------------+--------------------------------------------------------------------------+ -| profile_id | last_updated | user_profile | -+--------------------------------------+----------------------------------+--------------------------------------------------------------------------+ -| 33c0a5d8-b93a-4161-a294-6121ee1ade93 | 2018-02-27 16:39:28.155024+00:00 | {"first_name": "Lola", "friends": 547, "last_name": "Dog", "location": | -| | | "NYC", "online": true} | -| 6a7c15c9-462e-4551-9e93-f389cf63918a | 2018-02-27 16:39:28.155024+00:00 | {"first_name": "Ernie", "location": "Brooklyn", "status": "Looking for | -| | | treats"} | -+--------------------------------------+----------------------------------+--------------------------------------------------------------------------+ -~~~ - -### Retrieve Formatted `JSONB` Data - -To retrieve `JSONB` data with easier-to-read formatting, use the `jsonb_pretty()` function. For example, retrieve data from the table you created in the [first example](#create-a-table-with-a-jsonb-column): - -{% include copy-clipboard.html %} -~~~ sql -> SELECT profile_id, last_updated, jsonb_pretty(user_profile) FROM users; -~~~ -~~~ -+--------------------------------------+----------------------------------+------------------------------------+ -| profile_id | last_updated | jsonb_pretty | -+--------------------------------------+----------------------------------+------------------------------------+ -| 33c0a5d8-b93a-4161-a294-6121ee1ade93 | 2018-02-27 16:39:28.155024+00:00 | { | -| | | "first_name": "Lola", | -| | | "friends": 547, | -| | | "last_name": "Dog", | -| | | "location": "NYC", | -| | | "online": true | -| | | } | -| 6a7c15c9-462e-4551-9e93-f389cf63918a | 2018-02-27 16:39:28.155024+00:00 | { | -| | | "first_name": "Ernie", | -| | | "location": "Brooklyn", | -| | | "status": "Looking for treats" | -| | | } | -+--------------------------------------+----------------------------------+------------------------------------+ -~~~ - -### Retrieve Specific Fields from a `JSONB` Value - -To retrieve a specific field from a `JSONB` value, use the `->` operator. 
For example, retrieve a field from the table you created in the [first example](#create-a-table-with-a-jsonb-column): - -{% include copy-clipboard.html %} -~~~ sql -> SELECT user_profile->'first_name',user_profile->'location' FROM users; -~~~ -~~~ -+----------------------------+--------------------------+ -| user_profile->'first_name' | user_profile->'location' | -+----------------------------+--------------------------+ -| "Lola" | "NYC" | -| "Ernie" | "Brooklyn" | -+----------------------------+--------------------------+ -~~~ - -You can also use the `->>` operator to return `JSONB` field values as `STRING` values: - -{% include copy-clipboard.html %} -~~~ sql -> SELECT user_profile->>'first_name', user_profile->>'location' FROM users; -~~~ -~~~ -+-----------------------------+---------------------------+ -| user_profile->>'first_name' | user_profile->>'location' | -+-----------------------------+---------------------------+ -| Lola | NYC | -| Ernie | Brooklyn | -+-----------------------------+---------------------------+ -~~~ - -For the full list of functions and operators we support, see [Functions and Operators](functions-and-operators.html). - -### Create a Table with a `JSONB` Column and a Computed Column - -{% include {{ page.version.version }}/computed-columns/jsonb.md %} - -## Supported Casting & Conversion - -`JSONB` values can be [cast](data-types.html#data-type-conversions-casts) to the following data type: - -- `STRING` - -## See Also - -- [JSON tutorial](demo-json-support.html) -- [Inverted Indexes](inverted-indexes.html) -- [Computed Columns](computed-columns.html) -- [Data Types](data-types.html) -- [Functions and Operators](functions-and-operators.html) diff --git a/src/current/v2.0/keywords-and-identifiers.md b/src/current/v2.0/keywords-and-identifiers.md deleted file mode 100644 index e1ab13e979d..00000000000 --- a/src/current/v2.0/keywords-and-identifiers.md +++ /dev/null @@ -1,47 +0,0 @@ ---- -title: Keywords & Identifiers -toc: false ---- - -SQL statements consist of two fundamental components: - -- [__Keywords__](#keywords): Words with specific meaning in SQL like `CREATE`, `INDEX`, and `BOOL` -- [__Identifiers__](#identifiers): Names for things like databases and some functions - -## Keywords - -Keywords make up SQL's vocabulary and can have specific meaning in statements. Each SQL keyword that CockroachDB supports is on one of four lists: - -- [Reserved Keywords](sql-grammar.html#reserved_keyword) -- [Type Function Name Keywords](sql-grammar.html#type_func_name_keyword) -- [Column Name Keywords](sql-grammar.html#col_name_keyword) -- [Unreserved Keywords](sql-grammar.html#unreserved_keyword) - -Reserved keywords have fixed meanings and are not typically allowed as identifiers. All other types of keywords are considered non-reserved; they have special meanings in certain contexts and can be used as identifiers in other contexts. - -### Keyword Uses - -Most users asking about keywords want to know more about them in terms of: - -- __Names of objects__, covered on this page in [Identifiers](#identifiers) -- __Syntax__, covered in our pages [SQL Statements](sql-statements.html) and [SQL Grammar](sql-grammar.html) - -## Identifiers - -Identifiers are most commonly used as names of objects like databases, tables, or columns—because of this, the terms "name" and "identifier" are often used interchangeably. However, identifiers also have less-common uses, such as changing column labels with `SELECT`. 
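-
-For instance, here is a minimal sketch (the table and column names are hypothetical) of an identifier used as a column label:
-
-~~~ sql
-> SELECT name AS "Dog Name" FROM dogs;
-~~~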
- -### Rules for Identifiers - -In our [SQL grammar](sql-grammar.html), all values that accept an `identifier` must: - -- Begin with a Unicode letter or an underscore (_). Subsequent characters can be letters, underscores, digits (0-9), or dollar signs ($). -- Not equal any [SQL keyword](#keywords) unless the keyword is accepted by the element's syntax. For example, [`name`](sql-grammar.html#name) accepts Unreserved or Column Name keywords. - -To bypass either of these rules, simply surround the identifier with double-quotes ("). You can also use double-quotes to preserve case-sensitivity in database, table, view, and column names. However, all references to such identifiers must also include double-quotes. - -{{site.data.alerts.callout_info}}Some statements have additional requirements for identifiers. For example, each table in a database must have a unique name. These requirements are documented on each statement's page.{{site.data.alerts.end}} - -## See Also - -- [SQL Statements](sql-statements.html) -- [Full SQL Grammar](sql-grammar.html) diff --git a/src/current/v2.0/known-limitations.md b/src/current/v2.0/known-limitations.md deleted file mode 100644 index 2af0e9da188..00000000000 --- a/src/current/v2.0/known-limitations.md +++ /dev/null @@ -1,302 +0,0 @@ ---- -title: Known Limitations in CockroachDB v2.0 -summary: Learn about newly identified limitations in CockroachDB as well as unresolved limitations identified in earlier releases. -toc: true ---- - -This page describes newly identified limitations in the CockroachDB {{page.release_info.version}} release as well as unresolved limitations identified in earlier releases. - -## New Limitations - -### Changes to the default replication zone are not applied to existing replication zones - -{% include {{page.version.version}}/known-limitations/system-range-replication.md %} - -### Silent validation error with `DECIMAL` values - -Under the following conditions, the value received by CockroachDB will be different than that sent by the client and may cause incorrect data to be inserted or read from the database, without a visible error message: - -1. A query uses placeholders (e.g., `$1`) to pass values to the server. -2. A value of type [`DECIMAL`](decimal.html) is passed. -3. The decimal value is encoded using the binary format. - -Most client drivers and frameworks use the text format to pass placeholder values and are thus unaffected by this limitation. However, we know that the [Ecto framework](https://github.com/elixir-ecto/ecto) for Elixir is affected, and others may be as well. If in doubt, use [SQL statement logging](query-behavior-troubleshooting.html#cluster-wide-execution-logs) to control how CockroachDB receives decimal values from your client. - -### Enterprise backup/restore during rolling upgrades - -{{site.data.alerts.callout_info}}Resolved as of v2.0.1. See #24515.{{site.data.alerts.end}} - -In the upgrade process, after upgrading all binaries to v2.0, it's recommended to monitor the cluster's stability and performance for at least one day and only then finalize the upgrade by increasing the `version` cluster setting. However, in the window during which binaries are running v2.0 but the cluster version is still not increased, it is not possible to run enterprise [`BACKUP`](backup.html) and [`RESTORE`](restore.html) jobs. - -### Write and update limits for a single statement - -A single statement can perform at most 64MiB of combined updates. When a statement exceeds these limits, its transaction gets aborted. 
Currently, `INSERT INTO ... SELECT FROM` and `CREATE TABLE AS SELECT` queries may encounter these limits. - -To increase these limits, you can update the [cluster-wide setting](cluster-settings.html) `kv.raft.command.max_size`, but note that increasing this setting can affect the memory utilization of nodes in the cluster. For `INSERT INTO .. SELECT FROM` queries in particular, another workaround is to manually page through the data you want to insert using separate transactions. - -In the v1.1 release, the limit referred to a whole transaction (i.e., the sum of changes done by all statements) and capped both the number and the size of update. In this release, there's only a size limit, and it applies independently to each statement. Note that even though not directly restricted any more, large transactions can have performance implications on the cluster. - -### Memory flags with non-integer values and a unit suffix - -{{site.data.alerts.callout_info}}Resolved as of v2.0.1. See #24388.{{site.data.alerts.end}} - -The `--cache` and `--max-sql-memory` flags of the [`cockroach start`](start-a-node.html) command do not support non-integer values with a unit suffix, for example, `--cache=1.5GiB`. - -As a workaround, use integer values or a percentage, for example, `--cache=1536MiB`. - -### Import with a high amount of disk contention - -[`IMPORT`](import.html) can sometimes fail with a "context canceled" error, or can restart itself many times without ever finishing. If this is happening, it is likely due to a high amount of disk contention. This can be mitigated by setting the `kv.bulk_io_write.max_rate` [cluster setting](cluster-settings.html) to a value below your max disk write speed. For example, to set it to 10MB/s, execute: - -~~~ sql -> SET CLUSTER SETTING kv.bulk_io_write.max_rate = '10MB'; -~~~ - -### Check constraints with `INSERT ... ON CONFLICT` - -{{site.data.alerts.callout_info}}Resolved as of v2.0.4. See #26699.{{site.data.alerts.end}} - -[`CHECK`](check.html) constraints are not properly enforced on updated values resulting from [`INSERT ... ON CONFLICT`](insert.html) statements. Consider the following example: - -~~~ sql -> CREATE TABLE ab (a INT PRIMARY KEY, b INT, CHECK (b < 1)); -~~~ - -A simple `INSERT` statement that fails the Check constraint fails as it should: - -~~~ sql -> INSERT INTO ab (a,b) VALUES (1, 12312); -~~~ - -~~~ -pq: failed to satisfy CHECK constraint (b < 1) -~~~ - -However, the same statement with `INSERT ... 
ON CONFLICT` incorrectly succeeds and results in a row that fails the constraint: - -~~~ sql -> INSERT INTO ab (a, b) VALUES (1,0); -- create some initial valid value -~~~ - -~~~ sql -> INSERT INTO ab (a, b) VALUES (1,0) ON CONFLICT (a) DO UPDATE SET b = 123132; -~~~ - -~~~ sql -> SELECT * FROM ab; -~~~ - -~~~ -+---+--------+ -| a | b | -+---+--------+ -| 1 | 123132 | -+---+--------+ -(1 row) -~~~ - -### Referring to a CTE by name more than once - -{% include {{ page.version.version }}/known-limitations/cte-by-name.md %} - -### Using CTEs with data-modifying statements - -{% include {{ page.version.version }}/known-limitations/cte-with-dml.md %} - -### Using CTEs with views - -{% include {{ page.version.version }}/known-limitations/cte-with-view.md %} - -### Using CTEs with `VALUES` clauses - -{% include {{ page.version.version }}/known-limitations/cte-in-values-clause.md %} - -### Using CTEs with Set Operations - -{% include {{ page.version.version }}/known-limitations/cte-in-set-expression.md %} - -### Assigning latitude/longitude for the Node Map - -{% include {{ page.version.version }}/known-limitations/node-map.md %} - -### Placeholders in `PARTITION BY` - -{% include {{ page.version.version }}/known-limitations/partitioning-with-placeholders.md %} - -### Adding a column with sequence-based `DEFAULT` values - -It is currently not possible to [add a column](add-column.html) to a table when the column uses a [sequence](create-sequence.html) as the [`DEFAULT`](default-value.html) value, for example: - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE t (x INT); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO t(x) VALUES (1), (2), (3); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> CREATE SEQUENCE s; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> ALTER TABLE t ADD COLUMN y INT DEFAULT nextval('s'); -~~~ - -~~~ -ERROR: nextval(): unimplemented: cannot evaluate scalar expressions containing sequence operations in this context -SQLSTATE: 0A000 -~~~ - -[Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/42508) - -## Unresolved Limitations - -### Database and table renames are not transactional - -Database and table renames using [`RENAME DATABASE`](rename-database.html) and [`RENAME TABLE`](rename-table.html) are not transactional. - -Specifically, when run inside a [`BEGIN`](begin-transaction.html) ... [`COMMIT`](commit-transaction.html) block, it’s possible for a rename to be half-done - not persisted in storage, but visible to other nodes or other transactions. For more information, see [Table renaming considerations](rename-table.html#table-renaming-considerations). For an issue tracking this limitation, see [cockroach#12123](https://github.com/cockroachdb/cockroach/issues/12123). - -### Available capacity metric in the Admin UI - -{% include v2.0/misc/available-capacity-metric.md %} - -### Schema changes within transactions - -Within a single [transaction](transactions.html): - -- DDL statements cannot be mixed with DML statements. As a workaround, you can split the statements into separate transactions. -- A [`CREATE TABLE`](create-table.html) statement containing [`FOREIGN KEY`](foreign-key.html) or [`INTERLEAVE`](interleave-in-parent.html) clauses cannot be followed by statements that reference the new table. -- A table cannot be dropped and then recreated with the same name. This is not possible within a single transaction because `DROP TABLE` does not immediately drop the name of the table. 
As a workaround, split the [`DROP TABLE`](drop-table.html) and [`CREATE TABLE`](create-table.html) statements into separate transactions. - -### Schema changes between executions of prepared statements - -When the schema of a table targeted by a prepared statement changes after the prepared statement is created, future executions of the prepared statement could result in an error. For example, adding a column to a table referenced in a prepared statement with a `SELECT *` clause will result in an error: - -{% include_cached copy-clipboard.html %} -~~~ sql -CREATE TABLE users (id INT PRIMARY KEY); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -PREPARE prep1 AS SELECT * FROM users; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -ALTER TABLE users ADD COLUMN name STRING; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -INSERT INTO users VALUES (1, 'Max Roach'); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -EXECUTE prep1; -~~~ - -~~~ -ERROR: cached plan must not change result type -SQLSTATE: 0A000 -~~~ - -It's therefore recommended to explicitly list result columns instead of using `SELECT *` in prepared statements, when possible. - - -### `INSERT ON CONFLICT` vs. `UPSERT` - -When inserting/updating all columns of a table, and the table has no secondary indexes, we recommend using an [`UPSERT`](upsert.html) statement instead of the equivalent [`INSERT ON CONFLICT`](insert.html) statement. Whereas `INSERT ON CONFLICT` always performs a read to determine the necessary writes, the `UPSERT` statement writes without reading, making it faster. - -This issue is particularly relevant when using a simple SQL table of two columns to [simulate direct KV access](frequently-asked-questions.html#can-i-use-cockroachdb-as-a-key-value-store). In this case, be sure to use the `UPSERT` statement. - -### Using `\|` to perform a large input in the SQL shell - -In the [built-in SQL shell](use-the-built-in-sql-client.html), using the [`\|`](use-the-built-in-sql-client.html#sql-shell-commands) operator to perform a large number of inputs from a file can cause the server to close the connection. This is because `\|` sends the entire file as a single query to the server, which can exceed the upper bound on the size of a packet the server can accept from any client (16MB). - -As a workaround, [execute the file from the command line](use-the-built-in-sql-client.html#execute-sql-statements-from-a-file) with `cat data.sql | cockroach sql` instead of from within the interactive shell. - -### New values generated by `DEFAULT` expressions during `ALTER TABLE ADD COLUMN` - -When executing an [`ALTER TABLE ADD COLUMN`](add-column.html) statement with a [`DEFAULT`](default-value.html) expression, new values generated: - -- use the default [search path](sql-name-resolution.html#search-path) regardless of the search path configured in the current session via `SET SEARCH_PATH`. -- use the UTC time zone regardless of the time zone configured in the current session via [`SET TIME ZONE`](set-vars.html). -- have no default database regardless of the default database configured in the current session via [`SET DATABASE`](set-vars.html), so you must specify the database of any tables they reference. -- use the transaction timestamp for the `statement_timestamp()` function regardless of the time at which the `ALTER` statement was issued. 
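-
-For example, here is a minimal sketch (the `events` table and column names are hypothetical) of the time zone behavior described above:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SET TIME ZONE 'America/New_York';
-
-> ALTER TABLE events ADD COLUMN added_at TIMESTAMP DEFAULT now();
-~~~
-
-The `added_at` values generated for existing rows use the UTC time zone, despite the session time zone set above.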
- -### Load-based lease rebalancing in uneven latency deployments - -When nodes are started with the [`--locality`](start-a-node.html#flags-changed-in-v2-0) flag, CockroachDB attempts to place the replica lease holder (the replica that client requests are forwarded to) on the node closest to the source of the request. This means as client requests move geographically, so too does the replica lease holder. - -However, you might see increased latency caused by a consistently high rate of lease transfers between datacenters in the following case: - -- Your cluster runs in datacenters which are very different distances away from each other. -- Each node was started with a single tier of `--locality`, e.g., `--locality=datacenter=a`. -- Most client requests get sent to a single datacenter because that's where all your application traffic is. - -To detect if this is happening, open the [Admin UI](admin-ui-access-and-navigate.html), select the **Queues** dashboard, hover over the **Replication Queue** graph, and check the **Leases Transferred / second** data point. If the value is consistently larger than 0, you should consider stopping and restarting each node with additional tiers of locality to improve request latency. - -For example, let's say that latency is 10ms from nodes in datacenter A to nodes in datacenter B but is 100ms from nodes in datacenter A to nodes in datacenter C. To ensure A's and B's relative proximity is factored into lease holder rebalancing, you could restart the nodes in datacenter A and B with a common region, `--locality=region=foo,datacenter=a` and `--locality=region=foo,datacenter=b`, while restarting nodes in datacenter C with a different region, `--locality=region=bar,datacenter=c`. - -### Overload resolution for collated strings - -Many string operations are not properly overloaded for [collated strings](collate.html), for example: - -{% include copy-clipboard.html %} -~~~ sql -> SELECT 'string1' || 'string2'; -~~~ - -~~~ -+------------------------+ -| 'string1' || 'string2' | -+------------------------+ -| string1string2 | -+------------------------+ -(1 row) -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT ('string1' collate en) || ('string2' collate en); -~~~ - -~~~ -pq: unsupported binary operator: || -~~~ - -### Max size of a single column family - -When creating or updating a row, if the combined size of all values in a single [column family](column-families.html) exceeds the max range size (64MiB by default) for the table, the operation may fail, or cluster performance may suffer. - -As a workaround, you can either [manually split a table's columns into multiple column families](column-families.html#manual-override), or you can [create a table-specific zone configuration](configure-replication-zones.html#create-a-replication-zone-for-a-table) with an increased max range size. - -### Simultaneous client connections and running queries on a single node - -When a node has both a high number of client connections and running queries, the node may crash due to memory exhaustion. This is due to CockroachDB not accurately limiting the number of clients and queries based on the amount of available RAM on the node. - -To prevent memory exhaustion, monitor each node's memory usage and ensure there is some margin between maximum CockroachDB memory usage and available system RAM. For more details about memory usage in CockroachDB, see [this blog post](https://www.cockroachlabs.com/blog/memory-usage-cockroachdb/). 
- -### SQL subexpressions and memory usage - -Many SQL subexpressions (e.g., `ORDER BY`, `UNION`/`INTERSECT`/`EXCEPT`, `GROUP BY`, subqueries) accumulate intermediate results in RAM on the node processing the query. If the operator attempts to process more rows than can fit into RAM, the node will either crash or report a memory capacity error. For more details about memory usage in CockroachDB, see [this blog post](https://www.cockroachlabs.com/blog/memory-usage-cockroachdb/). - -### Query planning for `OR` expressions - -Given a query like `SELECT * FROM foo WHERE a > 1 OR b > 2`, even if there are appropriate indexes to satisfy both `a > 1` and `b > 2`, the query planner performs a full table or index scan because it cannot use both conditions at once. - -### Privileges for `DELETE` and `UPDATE` - -Every [`DELETE`](delete.html) or [`UPDATE`](update.html) statement constructs a `SELECT` statement, even when no `WHERE` clause is involved. As a result, the user executing `DELETE` or `UPDATE` requires both the `DELETE` and `SELECT` or `UPDATE` and `SELECT` [privileges](privileges.html) on the table. - -### `cockroach dump` does not support cyclic foreign key references - -{% include {{ page.version.version }}/known-limitations/dump-cyclic-foreign-keys.md %} diff --git a/src/current/v2.0/kubernetes-performance.md b/src/current/v2.0/kubernetes-performance.md deleted file mode 100644 index dcc5de65b0e..00000000000 --- a/src/current/v2.0/kubernetes-performance.md +++ /dev/null @@ -1,560 +0,0 @@ ---- -title: CockroachDB Performance on Kubernetes -summary: How running CockroachDB in Kubernetes affects its performance and how to get the best possible performance when running in Kubernetes. -toc: true ---- - -Kubernetes provides many useful abstractions for deploying and operating distributed systems, but some of the abstractions come with a performance overhead and an increase in underlying system complexity. This page explains potential bottlenecks to be aware of when [running CockroachDB in Kubernetes](orchestrate-cockroachdb-with-kubernetes.html) and shows you how to optimize your deployment for better performance. - -
      - -## Prerequisites - -Before you focus on optimizing a Kubernetes-orchestrated CockroachDB cluster: - -1. Go through the documentation for [running a CockroachDB cluster on Kubernetes](orchestrate-cockroachdb-with-kubernetes.html) to familiarize yourself with the necessary Kubernetes terminology and deployment abstractions. -2. Verify that CockroachDB performs up to your requirements for your workload on identical hardware without Kubernetes. You may find that you need to [modify your workload](performance-best-practices-overview.html) or use [different machine specs](recommended-production-settings.html#hardware) to achieve the performance you need, and it's better to determine that up front than after spending a bunch of time trying to optimize your Kubernetes deployment. - -## Performance factors - -There are a number of independent factors that affect the performance you observe when running CockroachDB on Kubernetes. Some are more significant than others or easier to fix than others, so feel free to pick and choose the improvements that best fit your situation. Note that most of these changes are easiest to make before you create your CockroachDB cluster. If you already have a running CockroachDB cluster in Kubernetes that you need to modify while keeping it running, extra work may be needed and extra care and testing is strongly recommended. - -In a number of the sections below, we have shown how to modify excerpts from our provided Kubernetes configuration YAML files. You can find the most up-to-date versions of these files on Github, [one for running CockroachDB in secure mode](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/cockroachdb-statefulset-secure.yaml) and one for [running CockroachDB in insecure mode](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/cockroachdb-statefulset.yaml). - -You can also use a [performance-optimized configuration file for secure mode](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/performance/cockroachdb-statefulset-secure.yaml) or [insecure mode](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/performance/cockroachdb-statefulset-insecure.yaml). Be sure to modify the file wherever there is a `TODO` comment. - -### Version of CockroachDB - -Because CockroachDB is under very active development, there are typically substantial performance gains in each release. If you aren't running the latest release and aren't getting the performance you desire, you should try the latest and see how much it helps. - -### Client workload - -Your workload is the single most important factor in database performance. Read through our [SQL performance best practices](performance-best-practices-overview.html) to determine whether there are any easy changes that you can make to speed up your application. - -### Machine size - -The size of the machines you're using isn't a Kubernetes-specific concern, but it's always a good place to start if you want more performance. See our [hardware recommendations](recommended-production-settings.html#hardware) for specific suggestions, but using machines with more CPU will almost always allow for greater throughput. Be aware that because Kubernetes runs a set of processes on every machine in a cluster, you typically will get more bang for your buck by using fewer large machines than more small machines. 
- -### Disk type - -CockroachDB makes heavy use of the disks you provide it, so using faster disks is an easy way to improve your cluster's performance. Our provided configuration does not specify what type of disks it wants, so in most environments Kubernetes will auto-provision disks of the default type. In the common cloud environments (AWS, GCP, Azure) this means you'll get slow disks that aren't optimized for database workloads (e.g., HDDs on GCE, SSDs without provisioned IOPS on AWS). However, we [strongly recommend using SSDs](recommended-production-settings.html#hardware) for the best performance, and Kubernetes makes it relatively easy to use them. - -#### Creating a different disk type - -Kubernetes exposes the disk types used by its volume provisioner via its [`StorageClass` API object](https://kubernetes.io/docs/concepts/storage/storage-classes/). Each cloud environment has its own default `StorageClass`, but you can easily change the default or create a new named class which you can then ask for when asking for volumes. To do this, pick the type of volume provisioner you want to use from the list in the [Kubernetes documentation](https://kubernetes.io/docs/concepts/storage/storage-classes/), take the example YAML file they provide, modify it to have the disk type you want, then run `kubectl create -f `. For example, in order to use the `pd-ssd` disk type on Google Compute Engine or Google Kubernetes Engine, you can use a `StorageClass` file like this: - -~~~ yaml -apiVersion: storage.k8s.io/v1 -kind: StorageClass -metadata: - name: -provisioner: kubernetes.io/gce-pd -parameters: - type: pd-ssd -~~~ - -You can then use this new disk type either by configuring the CockroachDB YAML file to request it or by making it the default. You may also want to set additional parameters as documented in the list of Kubernetes storage classes, such as configuring the `iopsPerGB` if you're creating a `StorageClass` for AWS's `io1` Provisioned IOPS volume type. - -#### Configuring the disk type used by CockroachDB - -To use a new `StorageClass` without making it the default in your cluster, you have to modify your application's YAML file to ask for it. In the CockroachDB `StatefulSet` configuration, that means adding a line to its `VolumeClaimTemplates` section. For example, that would mean taking these lines of the CockroachDB config file: - -~~~ yaml - volumeClaimTemplates: - - metadata: - name: datadir - spec: - accessModes: - - "ReadWriteOnce" - resources: - requests: - storage: 1Gi -~~~ - -And adding a `storageClassName` field to the `spec`, changing them to: - -~~~ yaml - volumeClaimTemplates: - - metadata: - name: datadir - spec: - accessModes: - - "ReadWriteOnce" - storageClassName: - resources: - requests: - storage: 1Gi -~~~ - -If you make this change then run `kubectl create -f` on your YAML file, Kubernetes should create volumes for you using your new `StorageClass`. - -#### Changing the default disk type - -If you want your new `StorageClass` to be the default for all volumes in your cluster, you have to run a couple of commands to inform Kubernetes of what you want. First, get the names of your `StorageClass`es. Then remove the current default and add yours as the new default. 
- -{% include copy-clipboard.html %} -~~~ shell -$ kubectl get storageclasses -~~~ - -~~~ -NAME PROVISIONER -ssd kubernetes.io/gce-pd -standard (default) kubernetes.io/gce-pd -~~~ - -{% include copy-clipboard.html %} -~~~ shell -$ kubectl patch storageclass standard -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}' -~~~ - -~~~ -storageclass "standard" patched -~~~ - -{% include copy-clipboard.html %} -~~~ shell -$ kubectl patch storageclass ssd -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}' -~~~ - -~~~ -storageclass "ssd" patched -~~~ - -Note that if you are running an older version of Kubernetes, you may need to use a beta version of the annotation instead of the form used above. In particular, on v1.8 of Kubernetes you need to use `storageclass.beta.kubernetes.io/is-default-class`. To determine for sure which to use, run `kubectl describe storageclass` and copy the annotation used by the current default. - -### Disk size - -On some cloud providers (notably including all GCP disks and the AWS io1 disk type), the number of IOPS available to a disk is directly correlated to the size of the disk. In such cases, increasing the size of your disks can make for significantly better CockroachDB performance, as well as less risk of filling them up. Doing so is easy -- before you create your CockroachDB cluster, modify the `VolumeClaimTemplate` in the CockroachDB YAML file to ask for more space. For example, to give each CockroachDB instance 1TB of disk space, you'd change: - -~~~ yaml - volumeClaimTemplates: - - metadata: - name: datadir - spec: - accessModes: - - "ReadWriteOnce" - resources: - requests: - storage: 1Gi -~~~ - -To instead be: - -~~~ yaml - volumeClaimTemplates: - - metadata: - name: datadir - spec: - accessModes: - - "ReadWriteOnce" - resources: - requests: - storage: 1024Gi -~~~ - -Since [GCE disk IOPS scale linearly with disk size](https://cloud.google.com/compute/docs/disks/performance#type_comparison), a 1TiB disk gives 1024 times as many IOPS as a 1GiB disk, which can make a very large difference for write-heavy workloads. - -### Local disks - -Up to this point, we have been assuming that you will be running CockroachDB in a `StatefulSet`, using auto-provisioned remotely attached disks. However, using local disks typically provides better performance than remotely attached disks, such as SSD Instance Store Volumes instead of EBS Volumes on AWS or Local SSDs instead of Persistent Disks on GCE. `StatefulSet`s have historically not supported using local disks, but [beta support for using "local" `PersistentVolume`s was added in Kubernetes v1.10](https://kubernetes.io/docs/concepts/storage/volumes/#local). We do not recommend using this for production data until the feature is more mature, but it's a promising development. - -There is also the option of using local disks if you do not run CockroachDB in a `StatefulSet`, but instead use a `DaemonSet`. For more details on what this entails, see the section on [Running in a DaemonSet](#running-in-a-daemonset). - -Note that when running with local disks, there is a greater chance of experiencing a disk failure than when using the cloud providers' network-attached disks that are often replicated underneath the covers. Consequently, you may want to [configure replication zones](configure-replication-zones.html) to increase the replication factor of your data to 5 from its default of 3 when using local disks. 
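-
-For example, here is a sketch of raising the default replication factor to 5 with the v2.0 CLI (assuming an insecure cluster reachable from your shell; see [Configure Replication Zones](configure-replication-zones.html) for the exact flags your deployment needs):
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ echo 'num_replicas: 5' | cockroach zone set .default --insecure -f -
-~~~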
-
-### Resource requests and limits
-
-When you ask Kubernetes to run a pod, either directly or indirectly through another resource type such as a `StatefulSet`, you can tell it to reserve certain amounts of CPU and/or memory for each container in the pod, or to limit the CPU and/or memory of each container. Doing one or both of these can have different implications depending on how utilized your Kubernetes cluster is. For the authoritative information on this topic, see the [Kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/).
-
-#### Resource requests
-
-Resource requests allow you to reserve a certain amount of CPU or memory for your container. If you add resource requests to your CockroachDB YAML file, Kubernetes will schedule each CockroachDB pod onto a node with sufficient unreserved resources and will ensure the pods are guaranteed the reserved resources using the applicable Linux container primitives. If you are running other workloads in your Kubernetes cluster, setting resource requests is very strongly recommended to ensure good performance, because if you do not set them, CockroachDB could be starved of CPU cycles or OOM-killed before less important processes.
-
-To determine how many resources are usable on your Kubernetes nodes, you can run:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ kubectl describe nodes
-~~~
-
-~~~
-Name:               gke-perf-default-pool-aafee20c-k4t8
-[...]
-Capacity:
- cpu:     4
- memory:  15393536Ki
- pods:    110
-Allocatable:
- cpu:     3920m
- memory:  12694272Ki
- pods:    110
-[...]
-Non-terminated Pods:         (2 in total)
-  Namespace    Name                                            CPU Requests  CPU Limits  Memory Requests  Memory Limits
-  ---------    ----                                            ------------  ----------  ---------------  -------------
-  kube-system  kube-dns-778977457c-kqtlr                       260m (6%)     0 (0%)      110Mi (0%)       170Mi (1%)
-  kube-system  kube-proxy-gke-perf-default-pool-aafee20c-k4t8  100m (2%)     0 (0%)      0 (0%)           0 (0%)
-Allocated resources:
-  (Total limits may be over 100 percent, i.e., overcommitted.)
-  CPU Requests  CPU Limits  Memory Requests  Memory Limits
-  ------------  ----------  ---------------  -------------
-  360m (9%)     0 (0%)      110Mi (0%)       170Mi (1%)
-~~~
-
-This will output a lot of information for each of the nodes in your cluster, but if you focus on the right parts you'll see how many "allocatable" resources are available on each node and how many resources are already being used by other pods. The "allocatable" resources are how much CPU and memory Kubernetes is willing to provide to pods running on the machine. The difference between the node's "capacity" and its "allocatable" resources is taken up by the operating system and Kubernetes' management processes. The "m" in "3920m" stands for "milli-CPUs", meaning "thousandths of a CPU".
-
-You'll also see a number of pods running here that you may not have realized were in your cluster. Kubernetes runs a handful of pods in the `kube-system` namespace that are part of the cluster infrastructure. These may make it tough to reserve all the allocatable space on your nodes for CockroachDB, since some of them are essential for the Kubernetes cluster's health. If you want to run CockroachDB on every node in your cluster, you'll have to leave room for these processes.
If you are only running CockroachDB on a subset of the nodes in your cluster, you can choose to take up all the "allocatable" space other than what is being used by the `kube-system` pods that are on all the nodes in the cluster, such as `kube-proxy` or the `fluentd` logging agent.
-
-Note that it will be difficult to truly use up all of the allocatable space in current versions of Kubernetes (v1.10 or older), because you'd have to manually preempt the `kube-system` pods that are already on the nodes you want CockroachDB to run on (by deleting them). This should become easier in future versions of Kubernetes when its [Pod Priority](https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/) feature gets promoted from alpha to beta. Once that feature is more widely available, you could set the CockroachDB pods to a higher priority, causing the Kubernetes scheduler to preempt and reschedule the `kube-system` pods onto other machines.
-
-Once you've picked out an amount of CPU and memory to reserve for CockroachDB, you'll have to configure the resource request in your CockroachDB YAML file. The request should go underneath the `containers` heading. For example, to use most of the available resources on the machines described above, you'd change these lines of your YAML config file:
-
-~~~ yaml
-      containers:
-      - name: cockroachdb
-        image: {{page.release_info.docker_image}}:{{page.release_info.version}}
-        imagePullPolicy: IfNotPresent
-        ports:
-        - containerPort: 26257
-          name: grpc
-        - containerPort: 8080
-          name: http
-~~~
-
-To be:
-
-~~~ yaml
-      containers:
-      - name: cockroachdb
-        image: {{page.release_info.docker_image}}:{{page.release_info.version}}
-        imagePullPolicy: IfNotPresent
-        resources:
-          requests:
-            cpu: "3500m"
-            memory: "12300Mi"
-        ports:
-        - containerPort: 26257
-          name: grpc
-        - containerPort: 8080
-          name: http
-~~~
-
-When you create the `StatefulSet`, you'll want to check to make sure that all the CockroachDB pods are scheduled successfully. If you see any get stuck in the pending state, run `kubectl describe pod <pod-name>` and check the `Events` for information about why they're still pending. You may need to manually preempt pods on one or more nodes by running `kubectl delete pod` on them to make room for the CockroachDB pods. As long as the pods you delete were created by a higher-level Kubernetes object such as a `Deployment` or a `StatefulSet`, they'll be safely recreated on another node.
-
-#### Resource limits
-
-Resource limits are conceptually similar to resource requests, but serve a different purpose. They let you cap the resources used by a pod to no more than the provided limit, which can have a couple of different uses. For one, it makes for more predictable performance, because your pods will not be allowed to use any excess capacity on their machines, meaning that they will not have more resources available to them at some times (during lulls in traffic) than at others (busy periods when the other pods on a machine are also fully utilizing their reserved resources). Secondly, it also increases the ["Quality of Service" guaranteed by the Kubernetes runtime](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/node/resource-qos.md) on Kubernetes versions 1.8 and below, making the pods less likely to be preempted when a machine is oversubscribed.
Finally, memory limits in particular bound the amount of memory that the container knows is available to it, which helps when you specify percentages for the CockroachDB `--cache` and `--max-sql-memory` flags, as our default configuration file does.
-
-Setting resource limits works about the same as setting resource requests. If you wanted to set resource limits in addition to requests on the config from the [Resource Requests](#resource-requests) section above, you'd change the config to:
-
-~~~ yaml
-      containers:
-      - name: cockroachdb
-        image: {{page.release_info.docker_image}}:{{page.release_info.version}}
-        imagePullPolicy: IfNotPresent
-        resources:
-          requests:
-            cpu: "3500m"
-            memory: "12300Mi"
-          limits:
-            cpu: "3500m"
-            memory: "12300Mi"
-        ports:
-        - containerPort: 26257
-          name: grpc
-        - containerPort: 8080
-          name: http
-~~~
-
-The pods would then be restricted to use only the resources they have reserved, and guaranteed not to be preempted except in very exceptional circumstances. This typically will not give you better performance on an under-utilized Kubernetes cluster, but will give you more predictable performance as other workloads are run.
-
-{{site.data.alerts.callout_danger}}While setting memory limits is strongly recommended, setting CPU limits can hurt tail latencies as currently implemented by Kubernetes. We recommend not setting CPU limits at all unless you have explicitly enabled the non-default Static CPU Management Policy when setting up your Kubernetes cluster, and even then only setting integer (non-fractional) CPU limits and memory limits exactly equal to the corresponding requests.{{site.data.alerts.end}}
-
-#### Default resource requests and limits
-
-Note that even if you do not manually set resource requests yourself, you're likely unknowingly using them anyway. In many installations of Kubernetes, a [`LimitRange`](https://kubernetes.io/docs/tasks/administer-cluster/cpu-default-namespace/) is preconfigured for the `default` namespace that applies a default CPU request of `100m`, or one-tenth of a CPU. You can see this configuration by running:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ kubectl describe limitranges
-~~~
-
-~~~
-Name:       limits
-Namespace:  default
-Type        Resource  Min  Max  Default Request  Default Limit  Max Limit/Request Ratio
-----        --------  ---  ---  ---------------  -------------  -----------------------
-Container   cpu       -    -    100m             -              -
-~~~
-
-Experimentally, this does not appear to have a noticeable effect on CockroachDB's performance when a Kubernetes cluster isn't heavily utilized, but do not be surprised if you see CPU requests on your pods that you didn't set.
-
-### Other pods on the same machines as CockroachDB
-
-As discussed in the above section on [Resource Requests and Limits](#resource-requests-and-limits), there will always be pods other than just CockroachDB running in your Kubernetes cluster, even if you do not create any other pods of your own.
You can see them at any time by running:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ kubectl get pods --all-namespaces
-~~~
-
-~~~
-NAMESPACE     NAME                                              READY     STATUS    RESTARTS   AGE
-kube-system   event-exporter-v0.1.7-5c4d9556cf-6v7lf            2/2       Running   0          2m
-kube-system   fluentd-gcp-v2.0.9-6rvmk                          2/2       Running   0          2m
-kube-system   fluentd-gcp-v2.0.9-m2xgp                          2/2       Running   0          2m
-kube-system   fluentd-gcp-v2.0.9-sfgps                          2/2       Running   0          2m
-kube-system   fluentd-gcp-v2.0.9-szwwn                          2/2       Running   0          2m
-kube-system   heapster-v1.4.3-968544ffd-5tsb8                   3/3       Running   0          1m
-kube-system   kube-dns-778977457c-4s7vv                         3/3       Running   0          1m
-kube-system   kube-dns-778977457c-ls6fq                         3/3       Running   0          2m
-kube-system   kube-dns-autoscaler-7db47cb9b7-x2cc4              1/1       Running   0          2m
-kube-system   kube-proxy-gke-test-default-pool-828d39a7-dbn0    1/1       Running   0          2m
-kube-system   kube-proxy-gke-test-default-pool-828d39a7-nr06    1/1       Running   0          2m
-kube-system   kube-proxy-gke-test-default-pool-828d39a7-rc4m    1/1       Running   0          2m
-kube-system   kube-proxy-gke-test-default-pool-828d39a7-trd1    1/1       Running   0          2m
-kube-system   kubernetes-dashboard-768854d6dc-v7ng8             1/1       Running   0          2m
-kube-system   l7-default-backend-6497bcdb4d-2kbh4               1/1       Running   0          2m
-~~~
-
-These ["cluster add-ons"](https://github.com/kubernetes/kubernetes/tree/master/cluster/addons) provide a variety of basic services like managing DNS entries for services within the cluster, powering the Kubernetes dashboard UI, or collecting logs or metrics from all the pods running in the cluster. If you do not like having them take up space in your cluster, you can prevent some of them from running by configuring your Kubernetes cluster appropriately. For example, on GKE, you can create a cluster with the minimal set of addons by running:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ gcloud container clusters create <your-cluster-name> --no-enable-cloud-logging --no-enable-cloud-monitoring --addons=""
-~~~
-
-However, essentials like `kube-proxy` and `kube-dns` are effectively required to have a compliant Kubernetes cluster. This means that you'll always have some pods that aren't yours running in your cluster, so it's important to understand and account for the possible effects of CockroachDB having to share a machine with other processes. The more processes there are on the same machine as a CockroachDB pod, the worse and less predictable its performance will likely be. To protect against this, it's strongly recommended to run with [Resource Requests](#resource-requests) on your CockroachDB pods to provide some level of CPU and memory isolation.
-
-Setting resource requests isn't a panacea, though. There can still be contention for shared resources like network I/O or, in [exceptional](https://sysdig.com/blog/container-isolation-gone-wrong/) cases, internal kernel data structures. For these reasons and because of the Kubernetes infrastructure processes running on each machine, CockroachDB running on Kubernetes simply cannot reach quite the same levels of performance as running directly on dedicated machines. Thankfully, it can at least get quite close if you use Kubernetes wisely.
-
-If for some reason setting appropriate resource requests still isn't getting you the performance you expect, you might want to consider going all the way to [dedicated nodes](#dedicated-nodes).
-
-#### Client applications on the same machines as CockroachDB
-
-Running client applications such as benchmarking applications on the same machines as CockroachDB can be even worse than just having Kubernetes system pods on the same machines.
They are very likely to end up competing for resources, because when the applications come under heavier load than usual, so will the CockroachDB processes. The best way to avoid this is to [set resource requests and limits](#resource-requests-and-limits), but if you are unwilling or unable to do that for some reason, you can also set [anti-affinity scheduling policies](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity) on your client applications. Anti-affinity policies are placed in the pod spec, so if you wanted to change our provided example load generator app, you'd change [these lines](https://github.com/cockroachdb/cockroach/blob/98c506c48f3517d1ac1aadb6a09e1b23ad672c37/cloud/kubernetes/example-app.yaml#L11-L12):
-
-~~~ yaml
-    spec:
-      containers:
-~~~
-
-To be:
-
-~~~ yaml
-    spec:
-      affinity:
-        podAntiAffinity:
-          preferredDuringSchedulingIgnoredDuringExecution:
-          - weight: 100
-            podAffinityTerm:
-              labelSelector:
-                matchExpressions:
-                - key: app
-                  operator: In
-                  values:
-                  - loadgen
-              topologyKey: kubernetes.io/hostname
-          - weight: 99
-            podAffinityTerm:
-              labelSelector:
-                matchExpressions:
-                - key: app
-                  operator: In
-                  values:
-                  - cockroachdb
-              topologyKey: kubernetes.io/hostname
-      containers:
-~~~
-
-This configuration will first prefer to put the `loadgen` pods on different nodes from each other, which is important for the fault tolerance of the `loadgen` pods themselves. As a secondary priority, it will attempt to put the pods on nodes that do not already have a running CockroachDB pod. This will ensure the best possible balance of fault tolerance and performance for the load generator and CockroachDB cluster.
-
-### Networking
-
-[Kubernetes asks a lot of the network that it runs on](https://kubernetes.io/docs/concepts/cluster-administration/networking/) in order to provide a routable IP address and an isolated Linux network namespace to each pod in the cluster, among its other requirements. While this document isn't nearly large enough to properly explain the details, and those details themselves can depend heavily on specifically how you have set up the network for your cluster, it suffices to say that Docker and Kubernetes' networking abstractions often come with a performance penalty for high-throughput distributed applications such as CockroachDB.
-
-If you really want to eke more performance out of your cluster, networking is a good target to at least experiment with. You can either replace your cluster's networking solution with a more performant one or bypass most of the networking overhead by using the host machines' networks directly.
-
-#### Networking solutions
-
-If you aren't using a hosted Kubernetes service, you'll typically have to choose how to set up the network when you're creating a Kubernetes cluster. There are [a lot of solutions out there](https://kubernetes.io/docs/concepts/cluster-administration/networking/#how-to-achieve-this), and they can have significantly different performance characteristics and functionality. We do not endorse any networking software or configurations in particular, but want to call out that your choice can have a meaningful effect on performance compared to running CockroachDB outside of Kubernetes.
-
-#### Using the host's network
-
-If you are already content with your cluster's networking setup or do not want to have to mess with it, Kubernetes does offer an escape hatch for exceptional cases that lets you avoid network performance overhead -- the `hostNetwork` setting, which allows you to run pods using their host machine's network directly, bypassing the layers of abstraction. This comes with a number of downsides, of course. For example, two pods using `hostNetwork` on the same machine cannot use the same ports, and it can also have serious security implications if your machines are reachable on the public Internet. If you want to give it a try, though, to see what effect it has on your workload, you just have to add two lines to the CockroachDB YAML configuration file and to any client applications that desperately need better performance, changing:
-
-~~~ yaml
-    spec:
-      affinity:
-~~~
-
-To be:
-
-~~~ yaml
-    spec:
-      hostNetwork: true
-      dnsPolicy: ClusterFirstWithHostNet
-      affinity:
-~~~
-
-`hostNetwork: true` tells Kubernetes to put the pods in the host machine's network namespace, using its IP address, hostname, and entire networking stack. The `dnsPolicy: ClusterFirstWithHostNet` line tells Kubernetes to configure the pods to still be able to use the cluster's DNS infrastructure for service discovery.
-
-This will not work miracles, so use it with caution. In our testing, it pretty reliably gives about a 6% improvement in database throughput when running [our `kv` load generator](https://hub.docker.com/r/cockroachdb/loadgen-kv/) against a 3-node cluster on GKE.
-
-### Running in a DaemonSet
-
-In all of the examples so far, we've been using the standard CockroachDB `StatefulSet` configuration file and tweaking it slightly. An alternative that comes with a different set of tradeoffs is to completely switch from using a `StatefulSet` for orchestration to using a `DaemonSet`. A [`DaemonSet`](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) is a Kubernetes type that runs a pod on all nodes matching some selection criteria.
-
-This comes with a few main benefits -- it's a more natural abstraction for cordoning CockroachDB off onto [dedicated nodes](#dedicated-nodes), it naturally pairs with [using the host's network](#using-the-hosts-network) since you're already coupling CockroachDB processes one-to-one with nodes, and it allows you to use [local disks](#local-disks) without relying on the beta support for using local disks with `StatefulSet`s. The biggest tradeoff is that you're limiting Kubernetes' ability to help your cluster recover from failures. It cannot create new pods to replace pods on nodes that fail, because it's already running a CockroachDB pod on all the matching nodes. This matches the behavior of running CockroachDB directly on a set of physical machines that are only manually replaced by human operators.
-
-To set up a CockroachDB `DaemonSet`, a little more work is needed than for a `StatefulSet`. We will use [the provided `DaemonSet` configuration file template from the CockroachDB Github repository](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/performance/cockroachdb-daemonset-insecure.yaml) as our base.
-
-First of all, unless you want CockroachDB running on every machine in your Kubernetes cluster, you should pick out which nodes you want to run CockroachDB on using either [node labels](#node-labels) or [node taints](#node-taints).
Once you have chosen or created the nodes, configure them and the `DaemonSet` YAML file appropriately as described in the relevant [Dedicated Nodes](#dedicated-nodes) section.
-
-Then, you must set the addresses in the CockroachDB `--join` flag in the YAML file. The file defaults to [using the host's network](#using-the-hosts-network), so we need to use the host machines' IP addresses or hostnames as join addresses. Pick out two or three of them to include and replace the list (`10.128.0.4,10.128.0.5,10.128.0.3`) in the provided file. Be aware that if the machines you choose are removed from the Kubernetes cluster, you will need to update your `--join` flag values, or else new CockroachDB instances will not be able to join the cluster.
-
-Then, pick the directory on the host in which you would like to store CockroachDB's data, and replace the `path: /tmp/cockroach-data` line in the config file with your desired directory. If you're using local SSD, this should be wherever the SSDs are mounted on the machines.
-
-After taking those steps and making any other desired modifications, you should be all set to create the `DaemonSet`:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ kubectl create -f cockroachdb-daemonset.yaml
-~~~
-
-~~~
-daemonset "cockroachdb" created
-~~~
-
-To initialize the cluster, pick one of the pod names and run:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ kubectl exec -it <pod-name> -- ./cockroach init --insecure
-~~~
-
-~~~
-Cluster successfully initialized
-~~~
-
-### Dedicated nodes
-
-If your Kubernetes cluster is made up of heterogeneous hardware, it's very possible that you'd like to make sure CockroachDB only runs on certain machines. If you want to get as much performance as possible out of a set of machines, you might also want to make sure that nothing other than CockroachDB is run on them.
-
-#### Node labels
-
-Node labels and node selectors are a way to tell Kubernetes which nodes you want a pod to be allowed on. To label a node, use the `kubectl label node` command as follows, substituting in your node's name and your preferred key-value pair for the label:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ kubectl label node <node-name> <key>=<value>
-~~~
-
-Some Kubernetes installation tools allow you to automatically apply labels to certain nodes. For example, when creating a new [GKE Node Pool](https://cloud.google.com/kubernetes-engine/docs/concepts/node-pools), you can use the `--node-labels` flag to the `gcloud container node-pools create` command.
-
-Once you have set up labels for all the nodes you want, you can then [use a `NodeSelector`](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector) to control where your pods are allowed to be scheduled. For example, in the `DaemonSet` file from the above example, you would change the lines:
-
-~~~ yaml
-    spec:
-      hostNetwork: true
-      containers:
-~~~
-
-To be:
-
-~~~ yaml
-    spec:
-      nodeSelector:
-        <key>: <value>
-      hostNetwork: true
-      containers:
-~~~
-
-#### Node taints
-
-Alternatively, if you want to make sure that CockroachDB is the only thing running on a set of machines, you're better off using a pair of complementary features called [`Taints` and `Tolerations`](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/) to instruct Kubernetes not to schedule anything else on them.
You can set them up in a very similar fashion to how you set up node labels and node selectors:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ kubectl taint node <node-name> <key>=<value>:NoSchedule
-~~~
-
-Just like for [node labels](#node-labels), some Kubernetes installation tools allow you to automatically apply taints to certain nodes. For example, when creating a new [GKE Node Pool](https://cloud.google.com/kubernetes-engine/docs/concepts/node-pools), you can use the `--node-taints` flag to the `gcloud container node-pools create` command.
-
-Once you have applied the appropriate `Taint`s to each machine that should run only CockroachDB, add the corresponding `Toleration`s to your CockroachDB config file. For example, in the `DaemonSet` file from the above example, you would change the lines:
-
-~~~ yaml
-    spec:
-      hostNetwork: true
-      containers:
-~~~
-
-To be:
-
-~~~ yaml
-    spec:
-      tolerations:
-      - key: <key>
-        operator: "Equal"
-        value: <value>
-        effect: "NoSchedule"
-      hostNetwork: true
-      containers:
-~~~
-
-Note that this will only prevent non-CockroachDB pods from running on these machines. It will not prevent CockroachDB from running on all the other machines, so in most cases you would also pair corresponding [node labels](#node-labels) and node selectors with them to create truly dedicated nodes, making for a resulting config file snippet that looks like:
-
-~~~ yaml
-    spec:
-      tolerations:
-      - key: <key>
-        operator: "Equal"
-        value: <value>
-        effect: "NoSchedule"
-      nodeSelector:
-        <key>: <value>
-      hostNetwork: true
-      containers:
-~~~
-
-## Modifying an existing CockroachDB cluster
-
-Kubernetes makes it easy to modify some, but not all, of an existing resource's configuration. Certain changes are easy, such as changing the CPU and memory requests, adding nodes to a cluster, or upgrading to a new CockroachDB Docker image. Others are very difficult and error-prone to do in place, such as changing from a `StatefulSet` to a `DaemonSet`. To update a resource's configuration, there are a few commands available to you.
-
-* If you have configuration files with the desired modifications in them, you can just run:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ kubectl apply -f <your-file>.yaml
-    ~~~
-* If you want to open up a text editor and manually make the desired changes to your `StatefulSet`'s YAML configuration file, run:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ kubectl edit statefulset cockroachdb
-    ~~~
-
-    For a `DaemonSet`, run:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ kubectl edit daemonset cockroachdb
-    ~~~
-
-* If you want a one-liner, construct the appropriate JSON and run something like:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ kubectl patch statefulset cockroachdb --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/image", "value":"{{page.release_info.docker_image}}:VERSION"}]'
-    ~~~
-
-See [the Kubernetes documentation on in-place updates](https://kubernetes.io/docs/concepts/cluster-administration/manage-deployment/#in-place-updates-of-resources) or the `kubectl --help` output for more information on these commands.
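-
-As one concrete example of an easy change, adding nodes to a `StatefulSet`-based cluster is just a matter of increasing its replica count. A minimal sketch, assuming the default `cockroachdb` `StatefulSet` name and a target of 4 CockroachDB pods:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ kubectl scale statefulset cockroachdb --replicas=4
-~~~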
- -## See Also - -- [Orchestrate CockroachDB with Kubernetes](orchestrate-cockroachdb-with-kubernetes.html) -- [Production Checklist](recommended-production-settings.html) -- [SQL Performance Best Practices](performance-best-practices-overview.html) -- [Troubleshooting Performance Issues](query-behavior-troubleshooting.html#performance-issues) diff --git a/src/current/v2.0/learn-cockroachdb-sql.md b/src/current/v2.0/learn-cockroachdb-sql.md deleted file mode 100644 index 962fdf29476..00000000000 --- a/src/current/v2.0/learn-cockroachdb-sql.md +++ /dev/null @@ -1,409 +0,0 @@ ---- -title: Learn CockroachDB SQL -summary: Learn some of the most essential CockroachDB SQL statements. -toc: true ---- - -This page walks you through some of the most essential CockroachDB SQL statements. For a complete list and related details, see [SQL Statements](sql-statements.html). - -{{site.data.alerts.callout_info}}CockroachDB aims to provide standard SQL with extensions, but some standard SQL functionality is not yet available. See our SQL Feature Support page for more details.{{site.data.alerts.end}} - - -## Create a Database - -CockroachDB comes with a single default `system` database, which contains CockroachDB metadata and is read-only. To create a new database, use [`CREATE DATABASE`](create-database.html) followed by a database name: - -{% include copy-clipboard.html %} -~~~ sql -> CREATE DATABASE bank; -~~~ - -Database names must follow [these identifier rules](keywords-and-identifiers.html#identifiers). To avoid an error in case the database already exists, you can include `IF NOT EXISTS`: - -{% include copy-clipboard.html %} -~~~ sql -> CREATE DATABASE IF NOT EXISTS bank; -~~~ - -When you no longer need a database, use [`DROP DATABASE`](drop-database.html) followed by the database name to remove the database and all its objects: - -{% include copy-clipboard.html %} -~~~ sql -> DROP DATABASE bank; -~~~ - -## Show Databases - -To see all databases, use the [`SHOW DATABASES`](show-databases.html) statement: - -{% include copy-clipboard.html %} -~~~ sql -> SHOW DATABASES; -~~~ - -~~~ -+--------------------+ -| Database | -+--------------------+ -| bank | -| system | -+--------------------+ -(2 rows) -~~~ - -## Set the Default Database - -To set the default database, use the [`SET`](set-vars.html#examples) statement: - -{% include copy-clipboard.html %} -~~~ sql -> SET DATABASE = bank; -~~~ - -When working with the default database, you do not need to reference it explicitly in statements. To see which database is currently the default, use the `SHOW DATABASE` statement (note the singular form): - -{% include copy-clipboard.html %} -~~~ sql -> SHOW DATABASE; -~~~ - -~~~ -+----------+ -| database | -+----------+ -| bank | -+----------+ -(1 row) -~~~ - -## Create a Table - -To create a table, use [`CREATE TABLE`](create-table.html) followed by a table name, the column names, and the [data type](data-types.html) and [constraint](constraints.html), if any, for each column: - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE accounts ( - id INT PRIMARY KEY, - balance DECIMAL -); -~~~ - -Table and column names must follow [these rules](keywords-and-identifiers.html#identifiers). Also, when you do not explicitly define a [primary key](primary-key.html), CockroachDB will automatically add a hidden `rowid` column as the primary key. 
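-
-As a quick illustration of the hidden primary key, here is a minimal sketch (the `logs` table is a hypothetical example, and the exact output may vary slightly by version):
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE logs (message STRING);
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SHOW COLUMNS FROM logs;
-~~~
-
-~~~
-+---------+--------+-------+----------------+-----------+
-|  Field  |  Type  | Null  |    Default     |  Indices  |
-+---------+--------+-------+----------------+-----------+
-| message | STRING | true  | NULL           | {}        |
-| rowid   | INT    | false | unique_rowid() | {primary} |
-+---------+--------+-------+----------------+-----------+
-(2 rows)
-~~~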
- -To avoid an error in case the table already exists, you can include `IF NOT EXISTS`: - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE IF NOT EXISTS accounts ( - id INT PRIMARY KEY, - balance DECIMAL -); -~~~ - -To show all of the columns from a table, use [`SHOW COLUMNS FROM`](show-columns.html) followed by the table name: - -{% include copy-clipboard.html %} -~~~ sql -> SHOW COLUMNS FROM accounts; -~~~ - -~~~ -+---------+---------+-------+---------+-----------+ -| Field | Type | Null | Default | Indices | -+---------+---------+-------+---------+-----------+ -| id | INT | false | NULL | {primary} | -| balance | DECIMAL | true | NULL | {} | -+---------+---------+-------+---------+-----------+ -(2 rows) -~~~ - -When you no longer need a table, use [`DROP TABLE`](drop-table.html) followed by the table name to remove the table and all its data: - -{% include copy-clipboard.html %} -~~~ sql -> DROP TABLE accounts; -~~~ - -## Show Tables - -To see all tables in the active database, use the [`SHOW TABLES`](show-tables.html) statement: - -{% include copy-clipboard.html %} -~~~ sql -> SHOW TABLES; -~~~ - -~~~ -+----------+ -| Table | -+----------+ -| accounts | -| users | -+----------+ -(2 rows) -~~~ - -To view tables in a database that's not active, use `SHOW TABLES FROM` followed by the name of the database: - -{% include copy-clipboard.html %} -~~~ sql -> SHOW TABLES FROM animals; -~~~ - -~~~ -+-----------+ -| Table | -+-----------+ -| aardvarks | -| elephants | -| frogs | -| moles | -| pandas | -| turtles | -+-----------+ -(6 rows) -~~~ - -## Insert Rows into a Table - -To insert a row into a table, use [`INSERT INTO`](insert.html) followed by the table name and then the column values listed in the order in which the columns appear in the table: - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO accounts VALUES (1, 10000.50); -~~~ - -If you want to pass column values in a different order, list the column names explicitly and provide the column values in the corresponding order: - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO accounts (balance, id) VALUES - (25000.00, 2); -~~~ - -To insert multiple rows into a table, use a comma-separated list of parentheses, each containing column values for one row: - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO accounts VALUES - (3, 8100.73), - (4, 9400.10); -~~~ - -[Default values](default-value.html) are used when you leave specific columns out of your statement, or when you explicitly request default values. For example, both of the following statements would create a row with `balance` filled with its default value, in this case `NULL`: - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO accounts (id) VALUES - (5); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO accounts (id, balance) VALUES - (6, DEFAULT); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM accounts WHERE id in (5, 6); -~~~ - -~~~ -+----+---------+ -| id | balance | -+----+---------+ -| 5 | NULL | -| 6 | NULL | -+----+---------+ -(2 rows) -~~~ - -## Create an Index -[Indexes](indexes.html) help locate data without having to look through every row of a table. They're automatically created for the [primary key](primary-key.html) of a table and any columns with a [Unique constraint](unique.html). - -To create an index for non-unique columns, use [`CREATE INDEX`](create-index.html) followed by an optional index name and an `ON` clause identifying the table and column(s) to index. 
For each column, you can choose whether to sort ascending (`ASC`) or descending (`DESC`). - -{% include copy-clipboard.html %} -~~~ sql -> CREATE INDEX balance_idx ON accounts (balance DESC); -~~~ - -You can create indexes during table creation as well; just include the `INDEX` keyword followed by an optional index name and the column(s) to index: - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE accounts ( - id INT PRIMARY KEY, - balance DECIMAL, - INDEX balance_idx (balance) -); -~~~ - -## Show Indexes on a Table - -To show the indexes on a table, use [`SHOW INDEX FROM`](show-index.html) followed by the name of the table: - -{% include copy-clipboard.html %} -~~~ sql -> SHOW INDEX FROM accounts; -~~~ - -~~~ -+----------+-------------+--------+-----+---------+-----------+---------+----------+ -| Table | Name | Unique | Seq | Column | Direction | Storing | Implicit | -+----------+-------------+--------+-----+---------+-----------+---------+----------+ -| accounts | primary | true | 1 | id | ASC | false | false | -| accounts | balance_idx | false | 1 | balance | DESC | false | false | -| accounts | balance_idx | false | 2 | id | ASC | false | true | -+----------+-------------+--------+-----+---------+-----------+---------+----------+ -(3 rows) -~~~ - -## Query a Table - -To query a table, use [`SELECT`](select-clause.html) followed by a comma-separated list of the columns to be returned and the table from which to retrieve the data: - -{% include copy-clipboard.html %} -~~~ sql -> SELECT balance FROM accounts; -~~~ - -~~~ -+----------+ -| balance | -+----------+ -| 10000.50 | -| 25000.00 | -| 8100.73 | -| 9400.10 | -| NULL | -| NULL | -+----------+ -(6 rows) -~~~ - -To retrieve all columns, use the `*` wildcard: - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM accounts; -~~~ - -~~~ -+----+----------+ -| id | balance | -+----+----------+ -| 1 | 10000.50 | -| 2 | 25000.00 | -| 3 | 8100.73 | -| 4 | 9400.10 | -| 5 | NULL | -| 6 | NULL | -+----+----------+ -(6 rows) -~~~ - -To filter the results, add a `WHERE` clause identifying the columns and values to filter on: - -{% include copy-clipboard.html %} -~~~ sql -> SELECT id, balance FROM accounts WHERE balance > 9000; -~~~ - -~~~ -+----+---------+ -| id | balance | -+----+---------+ -| 2 | 25000 | -| 1 | 10000.5 | -| 4 | 9400.1 | -+----+---------+ -(3 rows) -~~~ - -To sort the results, add an `ORDER BY` clause identifying the columns to sort by. For each column, you can choose whether to sort ascending (`ASC`) or descending (`DESC`). 
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT id, balance FROM accounts ORDER BY balance DESC;
-~~~
-
-~~~
-+----+---------+
-| id | balance |
-+----+---------+
-|  2 |   25000 |
-|  1 | 10000.5 |
-|  4 |  9400.1 |
-|  3 | 8100.73 |
-|  5 | NULL    |
-|  6 | NULL    |
-+----+---------+
-(6 rows)
-~~~
-
-## Update Rows in a Table
-
-To update rows in a table, use [`UPDATE`](update.html) followed by the table name, a `SET` clause identifying the columns to update and their new values, and a `WHERE` clause identifying the rows to update:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> UPDATE accounts SET balance = balance - 5.50 WHERE balance < 10000;
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM accounts;
-~~~
-
-~~~
-+----+----------+
-| id | balance  |
-+----+----------+
-|  1 | 10000.50 |
-|  2 | 25000.00 |
-|  3 |  8095.23 |
-|  4 |  9394.60 |
-|  5 | NULL     |
-|  6 | NULL     |
-+----+----------+
-(6 rows)
-~~~
-
-If a table has a primary key, you can use that in the `WHERE` clause to reliably update specific rows; otherwise, each row matching the `WHERE` clause is updated. When there's no `WHERE` clause, all rows in the table are updated.
-
-## Delete Rows in a Table
-
-To delete rows from a table, use [`DELETE FROM`](delete.html) followed by the table name and a `WHERE` clause identifying the rows to delete:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> DELETE FROM accounts WHERE id in (5, 6);
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM accounts;
-~~~
-
-~~~
-+----+----------+
-| id | balance  |
-+----+----------+
-|  1 | 10000.50 |
-|  2 | 25000.00 |
-|  3 |  8095.23 |
-|  4 |  9394.60 |
-+----+----------+
-(4 rows)
-~~~
-
-Just as with the `UPDATE` statement, if a table has a primary key, you can use that in the `WHERE` clause to reliably delete specific rows; otherwise, each row matching the `WHERE` clause is deleted. When there's no `WHERE` clause, all rows in the table are deleted.
-
-## What's Next?
-
-- Explore all [SQL Statements](sql-statements.html)
-- [Use the built-in SQL client](use-the-built-in-sql-client.html) to execute statements from a shell or directly from the command line
-- [Install the client driver](install-client-drivers.html) for your preferred language and [build an app](build-an-app-with-cockroachdb.html)
-- [Explore core CockroachDB features](demo-data-replication.html) like automatic replication, rebalancing, and fault tolerance
diff --git a/src/current/v2.0/limit-offset.md b/src/current/v2.0/limit-offset.md
deleted file mode 100644
index a4a6432cce9..00000000000
--- a/src/current/v2.0/limit-offset.md
+++ /dev/null
@@ -1,44 +0,0 @@
----
-title: Limiting Query Results
-summary: LIMIT and OFFSET restrict an operation to a few rows.
-toc: true
----
-
-The `LIMIT` and `OFFSET` clauses restrict the operation of:
-
-- A [selection query](selection-queries.html), including when it occurs as part of [`INSERT`](insert.html) or [`UPSERT`](upsert.html).
-- [`UPDATE`](update.html) and [`DELETE`](delete.html) statements.
-
-
-## Synopsis
-
-<div>
-  {% include {{ page.version.version }}/sql/diagrams/limit_clause.html %}
-</div>
-
-<div>
-  {% include {{ page.version.version }}/sql/diagrams/offset_clause.html %}
-</div>
-
-`LIMIT` restricts the operation to only retrieve `limit_val` number of rows.
-
-`OFFSET` restricts the operation to skip the first `offset_value` number of rows. It is often used in conjunction with `LIMIT` to "paginate" through retrieved rows.
-
-For PostgreSQL compatibility, CockroachDB also supports `FETCH FIRST limit_val ROWS ONLY` and `FETCH NEXT limit_val ROWS ONLY` as aliases for `LIMIT`. If `limit_val` is omitted, then one row is fetched.
-
-## Examples
-
-For example uses with `SELECT`, see [Limiting Row Count and Pagination](selection-queries.html#limiting-row-count-and-pagination).
-
-## See Also
-
-- [`DELETE`](delete.html)
-- [`UPDATE`](update.html)
-- [`INSERT`](insert.html)
-- [`UPSERT`](upsert.html)
-- [Selection Queries](selection-queries.html)
diff --git a/src/current/v2.0/manage-long-running-queries.md b/src/current/v2.0/manage-long-running-queries.md
deleted file mode 100644
index 66048ea1d5a..00000000000
--- a/src/current/v2.0/manage-long-running-queries.md
+++ /dev/null
@@ -1,71 +0,0 @@
----
-title: Manage Long-Running Queries
-summary: Learn how to identify and cancel long-running queries.
-toc: true
----
-
-This page shows you how to identify and, if necessary, cancel SQL queries that are taking longer than expected to process.
-
-{{site.data.alerts.callout_info}}Schema changes (statements beginning with ALTER) cannot currently be canceled. However, to monitor the progress of schema changes, you can use SHOW JOBS.{{site.data.alerts.end}}
-
-
-## Identify Long-Running Queries
-
-Use the [`SHOW QUERIES`](show-queries.html) statement to list details about currently active SQL queries, including each query's `start` timestamp:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SHOW QUERIES;
-~~~
-
-~~~
-+----------------------------------+---------+----------+----------------------------------+-------------------------------------------+---------------------+------------------+-------------+-----------+
-|             query_id             | node_id | username |              start               |                   query                   |   client_address    | application_name | distributed |   phase   |
-+----------------------------------+---------+----------+----------------------------------+-------------------------------------------+---------------------+------------------+-------------+-----------+
-| 14db657443230c3e0000000000000001 | 1       | root     | 2017-08-16 18:00:50.675151+00:00 | UPSERT INTO test.kv(k, v) VALUES ($1, $2) | 192.168.12.56:54119 | test_app         | false       | executing |
-| 14db657443b68c7d0000000000000001 | 1       | root     | 2017-08-16 18:00:50.684818+00:00 | UPSERT INTO test.kv(k, v) VALUES ($1, $2) | 192.168.12.56:54123 | test_app         | false       | executing |
-| 14db65744382c2340000000000000001 | 1       | root     | 2017-08-16 18:00:50.681431+00:00 | UPSERT INTO test.kv(k, v) VALUES ($1, $2) | 192.168.12.56:54103 | test_app         | false       | executing |
-| 14db657443c9dc660000000000000001 | 1       | root     | 2017-08-16 18:00:50.686083+00:00 | SHOW CLUSTER QUERIES                      | 192.168.12.56:54108 | cockroach        | NULL        | preparing |
-| 14db657443e30a850000000000000003 | 3       | root     | 2017-08-16 18:00:50.68774+00:00  | UPSERT INTO test.kv(k, v) VALUES ($1, $2) | 192.168.12.58:54118 | test_app         | false       | executing |
-| 14db6574439f477d0000000000000003 | 3       | root     | 2017-08-16 18:00:50.6833+00:00   | UPSERT INTO test.kv(k, v) VALUES ($1, $2) | 192.168.12.58:54122 | test_app         | false       | executing |
-| 14db6574435817d20000000000000002 | 2       | root     | 2017-08-16 18:00:50.678629+00:00 | UPSERT INTO test.kv(k, v) VALUES ($1, $2) | 192.168.12.57:54121 | test_app         | false       | executing |
-| 14db6574433c621f0000000000000002 | 2       | root     | 2017-08-16 18:00:50.676813+00:00 | UPSERT INTO test.kv(k, v) VALUES ($1, $2) | 192.168.12.57:54124 | test_app         | false       | executing |
-| 14db6574436f71d50000000000000002 | 2       | root     | 2017-08-16 18:00:50.680165+00:00 | UPSERT INTO test.kv(k, v) VALUES ($1, $2) | 192.168.12.57:54117 | test_app         | false       | executing |
-+----------------------------------+---------+----------+----------------------------------+-------------------------------------------+---------------------+------------------+-------------+-----------+
-(9 rows)
-~~~
-
-You can also filter for queries that have been running for a certain amount of time. For example, to find queries that have been running for more than 3 hours, you would run the following:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM [SHOW CLUSTER QUERIES]
-      WHERE start < (now() - INTERVAL '3 hours');
-~~~
-
-## Cancel Long-Running Queries
-
-Once you've identified a long-running query via [`SHOW QUERIES`](show-queries.html), note the `query_id` and use it with the [`CANCEL QUERY`](cancel-query.html) statement:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CANCEL QUERY '14dacc1f9a781e3d0000000000000001';
-~~~
-
-When a query is successfully canceled, CockroachDB sends a `query execution canceled` error to the client that issued the query.
-
-- If the canceled query was a single, stand-alone statement, no further action is required by the client.
-- If the canceled query was part of a larger, multi-statement [transaction](transactions.html), the client should then issue a [`ROLLBACK`](rollback-transaction.html) statement.
-
-## Improve Query Performance
-
-After canceling a long-running query, use the [`EXPLAIN`](explain.html) statement to examine it. It's possible that the query was slow because it performs a full table scan. In these cases, you can likely improve the query's performance by [adding an index](create-index.html).
-
-*(More guidance around query performance optimization forthcoming.)*
-
-## See Also
-
-- [`SHOW QUERIES`](show-queries.html)
-- [`CANCEL QUERY`](cancel-query.html)
-- [`EXPLAIN`](explain.html)
-- [Query Behavior Troubleshooting](query-behavior-troubleshooting.html)
diff --git a/src/current/v2.0/manual-deployment.md b/src/current/v2.0/manual-deployment.md
deleted file mode 100644
index e5f7e7ccc7d..00000000000
--- a/src/current/v2.0/manual-deployment.md
+++ /dev/null
@@ -1,22 +0,0 @@
----
-title: Manual Deployment
-summary: Learn how to deploy CockroachDB manually on-premises or on popular cloud platforms.
-toc: false
----
-
-Use the following guides to deploy CockroachDB manually on-premises or on popular cloud platforms:
-
-- [On-Premises](deploy-cockroachdb-on-premises.html)
-- [Amazon Web Services (AWS)](deploy-cockroachdb-on-aws.html)
-- [Digital Ocean](deploy-cockroachdb-on-digital-ocean.html)
-- [Google Cloud Platform (GCE)](deploy-cockroachdb-on-google-cloud-platform.html)
-- [Microsoft Azure](deploy-cockroachdb-on-microsoft-azure.html)
-
-{{site.data.alerts.callout_success}}If you're just getting started with CockroachDB, you might want to use a local cluster to learn the basics of the database.{{site.data.alerts.end}}
-
-## See Also
-
-- [Production Checklist](recommended-production-settings.html)
-- [Orchestrated Deployment](orchestration.html)
-- [Monitoring and Alerting](monitoring-and-alerting.html)
-- [Local Deployment](start-a-local-cluster.html)
diff --git a/src/current/v2.0/monitor-cockroachdb-with-prometheus.md b/src/current/v2.0/monitor-cockroachdb-with-prometheus.md
deleted file mode 100644
index 269942d8894..00000000000
--- a/src/current/v2.0/monitor-cockroachdb-with-prometheus.md
+++ /dev/null
@@ -1,188 +0,0 @@
----
-title: Monitor CockroachDB with Prometheus
-summary: How to pull CockroachDB's time series metrics into Prometheus.
-toc: true
----
-
-CockroachDB generates detailed time series metrics for each node in a cluster. This page shows you how to pull these metrics into [Prometheus](https://prometheus.io/), an open source tool for storing, aggregating, and querying time series data. It also shows you how to connect [Grafana](https://grafana.com/) and [Alertmanager](https://prometheus.io/docs/alerting/alertmanager/) to Prometheus for flexible data visualizations and notifications.
-
-{{site.data.alerts.callout_success}}For details about other monitoring options, see Monitoring and Alerting.{{site.data.alerts.end}}
-
-
-## Before You Begin
-
-- Make sure you have already started a CockroachDB cluster, either [locally](start-a-local-cluster.html) or in a [production environment](manual-deployment.html).
-
-- Note that all files used in this tutorial can be found in the [`monitoring`](https://github.com/cockroachdb/cockroach/tree/master/monitoring) directory of the CockroachDB repository.
-
-## Step 1. Install Prometheus
-
-1. Download the [2.x Prometheus tarball](https://prometheus.io/download/) for your OS.
-
-2. Extract the binary and add it to your `PATH`. This makes it easy to start Prometheus from any shell.
-
-3. Make sure Prometheus installed successfully:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ prometheus --version
-    ~~~
-
-    ~~~
-    prometheus, version 2.2.1 (branch: HEAD, revision: bc6058c81272a8d938c05e75607371284236aadc)
-      build user:       root@149e5b3f0829
-      build date:       20180314-14:21:40
-      go version:       go1.10
-    ~~~
-
-## Step 2. Configure Prometheus
-
-1. Download the starter [Prometheus configuration file](https://github.com/cockroachdb/cockroach/blob/master/monitoring/prometheus.yml) for CockroachDB:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ wget https://raw.githubusercontent.com/cockroachdb/cockroach/master/monitoring/prometheus.yml \
-    -O prometheus.yml
-    ~~~
-
-    When you examine the configuration file, you'll see that it is set up to scrape the time series metrics of a single, insecure local node every 10 seconds:
-    - `scrape_interval: 10s` defines the scrape interval.
-    - `metrics_path: '/_status/vars'` defines the Prometheus-specific CockroachDB endpoint for scraping time series metrics.
-    - `scheme: 'http'` specifies that the cluster being scraped is insecure.
-    - `targets: ['localhost:8080']` specifies the hostname and `http-port` of the CockroachDB node to collect time series metrics on.
-
-2. Edit the configuration file to match your deployment scenario:
-
-    Scenario | Config Change
-    ---------|--------------
-    Multi-node local cluster | Expand the `targets` field to include `'localhost:<http-port>'` for each additional node.
-    Production cluster | Change the `targets` field to include `'<hostname>:<http-port>'` for each node in the cluster. Also, be sure your network configuration allows TCP communication on the specified ports.
-    Secure cluster | Uncomment `scheme: 'https'` and comment out `scheme: 'http'`.
-
-3. Create a `rules` directory and download the [aggregation rules](https://github.com/cockroachdb/cockroach/blob/master/monitoring/rules/aggregation.rules.yml) and [alerting rules](https://github.com/cockroachdb/cockroach/blob/master/monitoring/rules/alerts.rules.yml) for CockroachDB into it:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ mkdir rules
-    ~~~
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ wget -P rules https://raw.githubusercontent.com/cockroachdb/cockroach/master/monitoring/rules/aggregation.rules.yml
-    ~~~
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ wget -P rules https://raw.githubusercontent.com/cockroachdb/cockroach/master/monitoring/rules/alerts.rules.yml
-    ~~~
-
-## Step 3. Start Prometheus
-
-1. Start the Prometheus server, with the `--config.file` flag pointing to the configuration file:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ prometheus --config.file=prometheus.yml
-    ~~~
-
-    ~~~
-    INFO[0000] Starting prometheus (version=1.4.1, branch=master, revision=2a89e8733f240d3cd57a6520b52c36ac4744ce12)  source=main.go:77
-    INFO[0000] Build context (go=go1.7.3, user=root@e685d23d8809, date=20161128-10:02:41)  source=main.go:78
-    INFO[0000] Loading configuration file prometheus.yml  source=main.go:250
-    INFO[0000] Loading series map and head chunks...  source=storage.go:354
-    INFO[0000] 0 series loaded.  source=storage.go:359
-    INFO[0000] Listening on :9090  source=web.go:248
-    INFO[0000] Starting target manager...  source=targetmanager.go:63
-    ~~~
-
-2. Point your browser to `http://<hostname>:9090`, where you can use the Prometheus UI to query, aggregate, and graph CockroachDB time series metrics.
-
-    Prometheus auto-completes CockroachDB time series metrics for you, but if you want to see a full listing, with descriptions, point your browser to `http://<hostname>:8080/_status/vars`.
-
-    For more details on using the Prometheus UI, see their [official documentation](https://prometheus.io/docs/introduction/getting_started/).
-
-## Step 4. Send notifications with Alertmanager
-
-Active monitoring helps you spot problems early, but it is also essential to send notifications when there are events that require investigation or intervention. In step 2, you already downloaded CockroachDB's starter [alerting rules](https://github.com/cockroachdb/cockroach/blob/master/monitoring/rules/alerts.rules.yml). Now, download, configure, and start [Alertmanager](https://prometheus.io/docs/alerting/alertmanager/).
-
-1. Download the [latest Alertmanager tarball](https://prometheus.io/download/#alertmanager) for your OS.
-
-2. Extract the binary and add it to your `PATH`. This makes it easy to start Alertmanager from any shell.
-
-3. Make sure Alertmanager installed successfully:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ alertmanager --version
-    ~~~
-
-    ~~~
-    alertmanager, version 0.15.0-rc.1 (branch: HEAD, revision: acb111e812530bec1ac6d908bc14725793e07cf3)
-      build user:       root@f278953f13ef
-      build date:       20180323-13:07:06
-      go version:       go1.10
-    ~~~
-
-4. [Edit the Alertmanager configuration file](https://prometheus.io/docs/alerting/configuration/) that came with the binary, `simple.yml`, to specify the desired receivers for notifications.
-
-5. Start the Alertmanager server, with the `--config.file` flag pointing to the configuration file:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ alertmanager --config.file=simple.yml
-    ~~~
-
-6. Point your browser to `http://<hostname>:9093`, where you can use the Alertmanager UI to define rules for [silencing alerts](https://prometheus.io/docs/alerting/alertmanager/#silences).
-
-## Step 5. Visualize metrics in Grafana
-
-Although Prometheus lets you graph metrics, [Grafana](https://grafana.com/) is a much more powerful visualization tool that integrates with Prometheus easily.
-
-1. [Install and start Grafana for your OS](https://grafana.com/grafana/download).
-
-2. Point your browser to `http://<hostname>:3000` and log into the Grafana UI with the default username/password, `admin/admin`, or create your own account.
-
-3. [Add Prometheus as a datasource](http://docs.grafana.org/datasources/prometheus/), and configure the datasource as follows:
-
-    Field | Definition
-    ------|-----------
-    Name | Prometheus
-    Default | True
-    Type | Prometheus
-    Url | `http://<hostname>:9090`
-    Access | Direct
-
-4. Download the starter [Grafana dashboards](https://github.com/cockroachdb/cockroach/tree/master/monitoring/grafana-dashboards) for CockroachDB:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    # runtime dashboard: node status, including uptime, memory, and cpu.
-    $ wget https://raw.githubusercontent.com/cockroachdb/cockroach/master/monitoring/grafana-dashboards/runtime.json
-    ~~~
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    # storage dashboard: storage availability.
-    $ wget https://raw.githubusercontent.com/cockroachdb/cockroach/master/monitoring/grafana-dashboards/storage.json
-    ~~~
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    # sql dashboard: sql queries/transactions.
-    $ wget https://raw.githubusercontent.com/cockroachdb/cockroach/master/monitoring/grafana-dashboards/sql.json
-    ~~~
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    # replicas dashboard: replica information and operations.
-    $ wget https://raw.githubusercontent.com/cockroachdb/cockroach/master/monitoring/grafana-dashboards/replication.json
-    ~~~
-
-5. [Add the dashboards to Grafana](http://docs.grafana.org/reference/export_import/#importing-a-dashboard).
-
-## See Also
-
-- [Monitoring and Alerting](monitoring-and-alerting.html)
diff --git a/src/current/v2.0/monitoring-and-alerting.md b/src/current/v2.0/monitoring-and-alerting.md
deleted file mode 100644
index 72165a8febd..00000000000
--- a/src/current/v2.0/monitoring-and-alerting.md
+++ /dev/null
@@ -1,180 +0,0 @@
----
-title: Monitoring and Alerting
-summary: Monitor the health and performance of a cluster and alert on critical events and metrics.
-toc: true
-
----
-
-Despite CockroachDB's various [built-in safeguards against failure](high-availability.html), it is critical to actively monitor the overall health and performance of a cluster running in production and to create alerting rules that promptly send notifications when there are events that require investigation or intervention.
-
-This page explains available monitoring tools and critical events and metrics to alert on.
-
-
-## Monitoring Tools
-
-### Admin UI
-
-The [built-in Admin UI](admin-ui-overview.html) gives you essential metrics about a cluster's health, such as the number of live, dead, and suspect nodes, the number of unavailable ranges, and the queries per second and service latency across the cluster. It is accessible from every node at `http://<host>:<http-port>`, or `http://<host>:8080` by default.
-
-{{site.data.alerts.callout_danger}}Because the Admin UI is built into CockroachDB, if a cluster becomes unavailable, most of the Admin UI becomes unavailable as well. Therefore, it's essential to plan additional methods of monitoring cluster health as described below.{{site.data.alerts.end}}
-
-### Prometheus Endpoint
-
-Every node of a CockroachDB cluster exports granular timeseries metrics at `http://<host>:<http-port>/_status/vars`. The metrics are formatted for easy integration with [Prometheus](https://prometheus.io/), an open source tool for storing, aggregating, and querying timeseries data, but the format is easy to parse and can be massaged to work with other third-party monitoring systems (e.g., [Sysdig](https://sysdig.atlassian.net/wiki/plugins/servlet/mobile?contentId=64946336#content/view/64946336) and [Stackdriver](https://github.com/GoogleCloudPlatform/k8s-stackdriver/tree/master/prometheus-to-sd)).
-
-For a tutorial on using Prometheus, see [Monitor CockroachDB with Prometheus](monitor-cockroachdb-with-prometheus.html).
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ curl http://localhost:8080/_status/vars
-~~~
-
-~~~
-# HELP gossip_infos_received Number of received gossip Info objects
-# TYPE gossip_infos_received counter
-gossip_infos_received 0
-# HELP sys_cgocalls Total number of cgo calls
-# TYPE sys_cgocalls gauge
-sys_cgocalls 3501
-# HELP sys_cpu_sys_percent Current system cpu percentage
-# TYPE sys_cpu_sys_percent gauge
-sys_cpu_sys_percent 1.098855319644276e-10
-# HELP replicas_quiescent Number of quiesced replicas
-# TYPE replicas_quiescent gauge
-replicas_quiescent{store="1"} 20
-...
-~~~
-
-{{site.data.alerts.callout_info}}In addition to using the exported timeseries data to monitor a cluster via an external system, you can write alerting rules against them to make sure you are promptly notified of critical events or issues that may require intervention or investigation. See Events to Alert On for more details.{{site.data.alerts.end}}
-
-### Health Endpoints
-
-CockroachDB provides two HTTP endpoints for checking the health of individual nodes.
-
-### Raw Status Endpoints
-
-Several endpoints return raw status metrics in JSON at `http://<host>:<http-port>/#/debug`. Feel free to investigate and use these endpoints, but note that they are subject to change.
-
-### Node Status Command
-
-The [`cockroach node status`](view-node-details.html) command gives you metrics about the health and status of each node.
-
-- With the `--ranges` flag, you get granular range and replica details, including unavailability and under-replication.
-- With the `--stats` flag, you get granular disk usage details.
-- With the `--decommission` flag, you get details about the [node decommissioning](remove-nodes.html) process.
-- With the `--all` flag, you get all of the above.
-
-## Events to Alert On
-
-Active monitoring helps you spot problems early, but it is also essential to create alerting rules that promptly send notifications when there are events that require investigation or intervention. This section identifies the most important events to create alerting rules for, with the [Prometheus Endpoint](#prometheus-endpoint) metrics to use for detecting the events.
-
-{{site.data.alerts.callout_success}}If you use Prometheus for monitoring, you can also use our pre-defined alerting rules with Alertmanager. See Monitor CockroachDB with Prometheus for guidance.{{site.data.alerts.end}}
-
-### Node is down
-
-- **Rule:** Send an alert when a node has been down for 5 minutes or more.
-
-- **How to detect:** If a node is down, its `_status/vars` endpoint will return a `Connection refused` error. Otherwise, the `liveness_livenodes` metric will be the total number of live nodes in the cluster.
-
-### Node is restarting too frequently
-
-- **Rule:** Send an alert if a node has restarted more than 5 times in 10 minutes.
-
-- **How to detect:** Calculate this using the number of times the `sys_uptime` metric in the node's `_status/vars` output was reset back to zero. The `sys_uptime` metric gives you the length of time, in seconds, that the `cockroach` process has been running.
-
-### Node is running low on disk space
-
-- **Rule:** Send an alert when a node has less than 15% of free space remaining.
-
-- **How to detect:** Divide the `capacity_available` metric by the `capacity` metric in the node's `_status/vars` output; the result is the fraction of disk space that is still free.
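-
-    As a quick spot check from the command line, the following sketch (assuming a single store and a node listening on `localhost:8080`) scrapes both metrics and prints that fraction:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    # Pull capacity and capacity_available from the Prometheus endpoint and divide them.
-    # With multiple stores, this sketch keeps only the last store's values.
-    $ curl -s http://localhost:8080/_status/vars \
-        | awk '/^capacity\{/ {cap=$2} /^capacity_available\{/ {avail=$2} END {print avail/cap}'
-    ~~~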
-
-### Node is not executing SQL
-
-- **Rule:** Send an alert when a node is not executing SQL despite having connections.
-
-- **How to detect:** The `sql_conns` metric in the node's `_status/vars` output will be greater than `0` while the `sql_query_count` metric will be `0`. You can also break this down by statement type using `sql_select_count`, `sql_insert_count`, `sql_update_count`, and `sql_delete_count`.
-
-### CA certificate expires soon
-
-- **Rule:** Send an alert when the CA certificate on a node will expire in less than a year.
-
-- **How to detect:** Calculate this using the `security_certificate_expiration_ca` metric in the node's `_status/vars` output.
-
-### Node certificate expires soon
-
-- **Rule:** Send an alert when a node's certificate will expire in less than a year.
-
-- **How to detect:** Calculate this using the `security_certificate_expiration_node` metric in the node's `_status/vars` output.
-
-## See Also
-
-- [Production Checklist](recommended-production-settings.html)
-- [Manual Deployment](manual-deployment.html)
-- [Orchestrated Deployment](orchestration.html)
-- [Local Deployment](start-a-local-cluster.html)
diff --git a/src/current/v2.0/multi-active-availability.md b/src/current/v2.0/multi-active-availability.md
deleted file mode 100644
index 0a70e5256eb..00000000000
--- a/src/current/v2.0/multi-active-availability.md
+++ /dev/null
@@ -1,66 +0,0 @@
----
-title: Multi-Active Availability
-summary: Learn about CockroachDB's high availability model, known as Multi-Active Availability.
-toc: true
----
-
-CockroachDB's availability model is described as "Multi-Active Availability." In essence, multi-active availability provides benefits similar to traditional notions of high availability, but also lets you read and write from every node in your cluster without generating any conflicts.
-
-
-## What is High Availability?
-
-High availability lets an application continue running even if a system hosting one of its services fails. This is achieved by scaling the application's services horizontally, i.e., replicating the service across many machines or systems. If any one of them fails, the others can simply step in and perform the same service.
-
-Before diving into the details of CockroachDB's multi-active availability, we'll review the two most common high availability designs: [Active-Passive](#active-passive) and [Active-Active](#active-active) systems.
-
-### Active-Passive
-
-In active-passive systems, all traffic is routed to a single, "active" replica. Changes to the replica's state are then copied to a backup "passive" replica, in an attempt to always mirror the active replica as closely as possible.
-
-However, this design has downsides:
-
-- If you use asynchronous replication, you cannot guarantee that any data is ever successfully replicated to passive followers––meaning you can easily lose data. Depending on your industry, this could have pretty dire consequences.
-- If you use synchronous replication and any passive replicas fail, you have to either sacrifice availability for the entire application or risk inconsistencies.
-
-### Active-Active
-
-In active-active systems, multiple replicas run identical services, and traffic is routed to all of them. If any replica fails, the others simply handle the traffic that would've been routed to it.
-
-For databases, though, active-active replication is incredibly difficult to instrument for most workloads. For example, if you let multiple replicas handle writes for the same keys, how do you keep them consistent?
-
-#### Example: Conflicts with Active-Active Replication
-
-For this example, we have 2 replicas (**A**, **B**) in an active-active high availability cluster.
-
-1. **A** receives a write for key `xyz` of `'123'`, and then immediately fails.
-2. **B** receives a read of key `xyz`, and returns a `NULL` because it cannot find the key.
-3. **B** then receives a write for key `xyz` of `'456'`.
-4. **A** is restarted and attempts to rejoin **B**––but what do you do about key `xyz`? There's an inconsistency in the system without a clear way to resolve it.
-
-{{site.data.alerts.callout_info}}In this example, the cluster remained active the entire time. But in terms of the CAP theorem, this is an AP system; it favored being available instead of consistent when partitions occur.{{site.data.alerts.end}}
-
-## What is Multi-Active Availability?
-
-Multi-active availability is CockroachDB's version of high availability (keeping your application online in the face of partial failures), which we've designed to avoid the downsides of both active-passive and traditional active-active systems.
-
-Like active-active designs, all replicas can handle traffic, including both reads and writes. However, CockroachDB improves upon that design by also ensuring that data remains consistent across them, which we achieve by using "consensus replication." In this design, replication requests are sent to at least 3 replicas, and are only considered committed when a majority of replicas acknowledge that they've received it. This means that you can still have failures without compromising availability.
-
-To prevent conflicts and guarantee your data's consistency, clusters that lose a majority of replicas stop responding because they've lost the ability to reach a consensus on the state of your data. When a majority of replicas are restarted, your database resumes operation.
-
-### Consistency Example
-
-For this example, we have 3 CockroachDB nodes (**A**, **B**, **C**) in a multi-active availability cluster.
-
-1. **A** receives a write on `xyz` of `'123'`. It communicates this write to nodes **B** and **C**, who confirm that they've received the write, as well. Once **A** receives the first confirmation, the change is committed.
-2. **A** fails.
-3. **B** receives a read of key `xyz`, and returns the result `'123'`.
-4. **C** then receives an update for key `xyz` to the value `'456'`.
It communicates this write to node **B**, who confirms that it has received the write, as well. After receiving the confirmation, the change is committed.
-5. **A** is restarted and rejoins the cluster. It receives an update that the key `xyz` had its value changed to `'456'`.
-
-{{site.data.alerts.callout_info}}In this example, if nodes B or C failed at any time, the cluster would have stopped responding. In terms of the CAP theorem, this is a CP system; it favored being consistent instead of available when partitions occur.{{site.data.alerts.end}}
-
-## What's next?
-
-To get a greater understanding of how CockroachDB is a survivable system that enforces strong consistency, check out our [architecture documentation](architecture/overview.html).
-
-To see Multi-Active Availability in action, see this [availability demo](demo-fault-tolerance-and-recovery.html).
diff --git a/src/current/v2.0/not-null.md b/src/current/v2.0/not-null.md
deleted file mode 100644
index 8ebd13239ed..00000000000
--- a/src/current/v2.0/not-null.md
+++ /dev/null
@@ -1,74 +0,0 @@
----
-title: Not Null Constraint
-summary: The NOT NULL constraint specifies the column may not contain NULL values.
-toc: true
----
-
-The Not Null [constraint](constraints.html) specifies a column may not contain *NULL* values.
-
-
-## Details
-
-- `INSERT` or `UPDATE` statements containing *NULL* values are rejected. This includes `INSERT` statements that omit values for any column that does not have a [Default Value constraint](default-value.html).
-
-    For example, if the table `foo` has columns `a` and `b` (and `b` *does not* have a Default Value), when you run the following command:
-
-    ~~~ sql
-    > INSERT INTO foo (a) VALUES (1);
-    ~~~
-
-    CockroachDB tries to write a *NULL* value into column `b`. If that column has the Not Null constraint, the `INSERT` statement is rejected.
-
-- You can only define the Not Null constraint when [creating a table](#syntax); you cannot add it to an existing table. However, you can [migrate data](constraints.html#table-migrations-to-add-or-change-immutable-constraints) from your current table to a new table with the constraint you want to use.
-    {{site.data.alerts.callout_info}}In the future we plan to support adding the Not Null constraint to existing tables.{{site.data.alerts.end}}
-
-- For more information about *NULL*, see [Null Handling](null-handling.html).
-
-## Syntax
-
-You can only apply the Not Null constraint to individual columns.
-
-<div>
      -{% include {{ page.version.version }}/sql/diagrams/not_null_column_level.html %} -
      - -| Parameter | Description | -|-----------|-------------| -| `table_name` | The name of the table you're creating. | -| `column_name` | The name of the constrained column. | -| `column_type` | The constrained column's [data type](data-types.html). | -| `column_constraints` | Any other column-level [constraints](constraints.html) you want to apply to this column. | -| `column_def` | Definitions for any other columns in the table. | -| `table_constraints` | Any table-level [constraints](constraints.html) you want to apply. | - -## Usage Example - -~~~ sql -> CREATE TABLE IF NOT EXISTS customers ( - customer_id INT PRIMARY KEY, - cust_name STRING(30) NULL, - cust_email STRING(100) NOT NULL - ); - -> INSERT INTO customers (customer_id, cust_name, cust_email) VALUES (1, 'Smith', NULL); -~~~ -~~~ -pq: null value in column "cust_email" violates not-null constraint -~~~ -~~~ sql -> INSERT INTO customers (customer_id, cust_name) VALUES (1, 'Smith'); -~~~ -~~~ -pq: null value in column "cust_email" violates not-null constraint -~~~ - -## See Also - -- [Constraints](constraints.html) -- [`DROP CONSTRAINT`](drop-constraint.html) -- [Check constraint](check.html) -- [Default Value constraint](default-value.html) -- [Foreign Key constraint](foreign-key.html) -- [Primary Key constraint](primary-key.html) -- [Unique constraint](unique.html) -- [`SHOW CONSTRAINTS`](show-constraints.html) diff --git a/src/current/v2.0/null-handling.md b/src/current/v2.0/null-handling.md deleted file mode 100644 index 7a1117bc668..00000000000 --- a/src/current/v2.0/null-handling.md +++ /dev/null @@ -1,411 +0,0 @@ ---- -title: NULL Handling -summary: Learn how NULL values are handled in CockroachDB SQL. -toc: true ---- - -This page summarizes how `NULL` values are handled in CockroachDB -SQL. Each topic is demonstrated via the [built-in SQL -client](use-the-built-in-sql-client.html). - -{{site.data.alerts.callout_info}}When using the built-in client, NULL values are displayed using the word NULL. This distinguishes them from a character field that contains an empty string ("").{{site.data.alerts.end}} - - -## NULLs and Simple Comparisons - -Any simple comparison between a value and `NULL` results in -`NULL`. The remaining cases are described in the next section. - -This behavior is consistent with PostgreSQL as well as all other major RDBMS's. 
-
-~~~ sql
-> CREATE TABLE t1(
-    a INT,
-    b INT,
-    c INT
-);
-
-> INSERT INTO t1 VALUES(1, 0, 0);
-> INSERT INTO t1 VALUES(2, 0, 1);
-> INSERT INTO t1 VALUES(3, 1, 0);
-> INSERT INTO t1 VALUES(4, 1, 1);
-> INSERT INTO t1 VALUES(5, NULL, 0);
-> INSERT INTO t1 VALUES(6, NULL, 1);
-> INSERT INTO t1 VALUES(7, NULL, NULL);
-
-> SELECT * FROM t1;
-~~~
-~~~
-+---+------+------+
-| a | b    | c    |
-+---+------+------+
-| 1 | 0    | 0    |
-| 2 | 0    | 1    |
-| 3 | 1    | 0    |
-| 4 | 1    | 1    |
-| 5 | NULL | 0    |
-| 6 | NULL | 1    |
-| 7 | NULL | NULL |
-+---+------+------+
-~~~
-~~~ sql
-> SELECT * FROM t1 WHERE b < 10;
-~~~
-~~~
-+---+---+---+
-| a | b | c |
-+---+---+---+
-| 1 | 0 | 0 |
-| 2 | 0 | 1 |
-| 3 | 1 | 0 |
-| 4 | 1 | 1 |
-+---+---+---+
-~~~
-~~~ sql
-> SELECT * FROM t1 WHERE NOT b > 10;
-~~~
-~~~
-+---+---+---+
-| a | b | c |
-+---+---+---+
-| 1 | 0 | 0 |
-| 2 | 0 | 1 |
-| 3 | 1 | 0 |
-| 4 | 1 | 1 |
-+---+---+---+
-~~~
-~~~ sql
-> SELECT * FROM t1 WHERE b < 10 OR c = 1;
-~~~
-~~~
-+---+------+---+
-| a | b    | c |
-+---+------+---+
-| 1 | 0    | 0 |
-| 2 | 0    | 1 |
-| 3 | 1    | 0 |
-| 4 | 1    | 1 |
-| 6 | NULL | 1 |
-+---+------+---+
-~~~
-~~~ sql
-> SELECT * FROM t1 WHERE b < 10 AND c = 1;
-~~~
-~~~
-+---+---+---+
-| a | b | c |
-+---+---+---+
-| 2 | 0 | 1 |
-| 4 | 1 | 1 |
-+---+---+---+
-~~~
-~~~ sql
-> SELECT * FROM t1 WHERE NOT (b < 10 AND c = 1);
-~~~
-~~~
-+---+------+---+
-| a | b    | c |
-+---+------+---+
-| 1 | 0    | 0 |
-| 3 | 1    | 0 |
-| 5 | NULL | 0 |
-+---+------+---+
-~~~
-~~~ sql
-> SELECT * FROM t1 WHERE NOT (c = 1 AND b < 10);
-~~~
-~~~
-+---+------+---+
-| a | b    | c |
-+---+------+---+
-| 1 | 0    | 0 |
-| 3 | 1    | 0 |
-| 5 | NULL | 0 |
-+---+------+---+
-~~~
-
-Use the `IS NULL` or `IS NOT NULL` clauses when checking for `NULL` values.
-
-~~~ sql
-> SELECT * FROM t1 WHERE b IS NULL AND c IS NOT NULL;
-~~~
-~~~
-+---+------+---+
-| a | b    | c |
-+---+------+---+
-| 5 | NULL | 0 |
-| 6 | NULL | 1 |
-+---+------+---+
-~~~
-
-## NULLs and Conditional Operators
-
-The [conditional operators](scalar-expressions.html#conditional-expressions) (including `IF`, `COALESCE`, and `IFNULL`) evaluate only some of their operands, depending on the value of a condition operand, so their result can be non-`NULL` even when some operands are `NULL`.
-
-For example, `COALESCE(1, NULL)` will always return `1` even though
-the second operand is `NULL`.
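-
-To see this in the built-in SQL client, here is a quick sketch of the behavior (the output below is illustrative; exact formatting may vary by version):
-
-~~~ sql
-> SELECT COALESCE(NULL, 1, 2), IFNULL(NULL, 5);
-~~~
-~~~
-+----------------------+-----------------+
-| COALESCE(NULL, 1, 2) | IFNULL(NULL, 5) |
-+----------------------+-----------------+
-| 1                    | 5               |
-+----------------------+-----------------+
-~~~
-
-Both expressions skip past the `NULL` operand and return the first non-`NULL` value.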
-
-## NULLs and Ternary Logic
-
-`AND`, `OR`, and `IS` implement ternary logic, as follows.
-
-| Expression        | Result  |
-|-------------------|---------|
-| `FALSE AND FALSE` | `FALSE` |
-| `FALSE AND TRUE`  | `FALSE` |
-| `FALSE AND NULL`  | `FALSE` |
-| `TRUE AND FALSE`  | `FALSE` |
-| `TRUE AND TRUE`   | `TRUE`  |
-| `TRUE AND NULL`   | `NULL`  |
-| `NULL AND FALSE`  | `FALSE` |
-| `NULL AND TRUE`   | `NULL`  |
-| `NULL AND NULL`   | `NULL`  |
-
-| Expression       | Result  |
-|------------------|---------|
-| `FALSE OR FALSE` | `FALSE` |
-| `FALSE OR TRUE`  | `TRUE`  |
-| `FALSE OR NULL`  | `NULL`  |
-| `TRUE OR FALSE`  | `TRUE`  |
-| `TRUE OR TRUE`   | `TRUE`  |
-| `TRUE OR NULL`   | `TRUE`  |
-| `NULL OR FALSE`  | `NULL`  |
-| `NULL OR TRUE`   | `TRUE`  |
-| `NULL OR NULL`   | `NULL`  |
-
-| Expression       | Result  |
-|------------------|---------|
-| `FALSE IS FALSE` | `TRUE`  |
-| `FALSE IS TRUE`  | `FALSE` |
-| `FALSE IS NULL`  | `FALSE` |
-| `TRUE IS FALSE`  | `FALSE` |
-| `TRUE IS TRUE`   | `TRUE`  |
-| `TRUE IS NULL`   | `FALSE` |
-| `NULL IS FALSE`  | `FALSE` |
-| `NULL IS TRUE`   | `FALSE` |
-| `NULL IS NULL`   | `TRUE`  |
-
-## NULLs and Arithmetic
-
-Arithmetic operations involving a `NULL` value will yield a `NULL` result.
-
-~~~ sql
-> SELECT a, b, c, b*0, b*c, b+c FROM t1;
-~~~
-~~~
-+---+------+------+-------+-------+-------+
-| a | b    | c    | b * 0 | b * c | b + c |
-+---+------+------+-------+-------+-------+
-| 1 | 0    | 0    | 0     | 0     | 0     |
-| 2 | 0    | 1    | 0     | 0     | 1     |
-| 3 | 1    | 0    | 0     | 0     | 1     |
-| 4 | 1    | 1    | 0     | 1     | 2     |
-| 5 | NULL | 0    | NULL  | NULL  | NULL  |
-| 6 | NULL | 1    | NULL  | NULL  | NULL  |
-| 7 | NULL | NULL | NULL  | NULL  | NULL  |
-+---+------+------+-------+-------+-------+
-~~~
-
-## NULLs and Aggregate Functions
-
-Aggregate [functions](functions-and-operators.html) are those that operate on a set of rows and return a single value. The example data has been repeated here to make it easier to understand the results.
-
-~~~ sql
-> SELECT * FROM t1;
-~~~
-~~~
-+---+------+------+
-| a | b    | c    |
-+---+------+------+
-| 1 | 0    | 0    |
-| 2 | 0    | 1    |
-| 3 | 1    | 0    |
-| 4 | 1    | 1    |
-| 5 | NULL | 0    |
-| 6 | NULL | 1    |
-| 7 | NULL | NULL |
-+---+------+------+
-~~~
-~~~ sql
-> SELECT COUNT(*), COUNT(b), SUM(b), AVG(b), MIN(b), MAX(b) FROM t1;
-~~~
-~~~
-+----------+----------+--------+--------------------+--------+--------+
-| COUNT(*) | COUNT(b) | SUM(b) | AVG(b)             | MIN(b) | MAX(b) |
-+----------+----------+--------+--------------------+--------+--------+
-| 7        | 4        | 2      | 0.5000000000000000 | 0      | 1      |
-+----------+----------+--------+--------------------+--------+--------+
-~~~
-
-Note the following:
-
-- `NULL` values are not included in the `COUNT()` of a column. `COUNT(*)` returns 7 while `COUNT(b)` returns 4.
-
-- `NULL` values are not considered as high or low values in `MIN()` or `MAX()`.
-
-- `AVG(b)` returns `SUM(b)/COUNT(b)`, which differs from `SUM(b)/COUNT(*)` because `NULL` values are not included in `COUNT(b)`. See [NULLs as Other Values](#nulls-as-other-values) for more details.
-
-
-## NULL as a Distinct Value
-
-`NULL` values are considered distinct from other values and are included in the list of distinct values from a column.
-
-~~~ sql
-> SELECT DISTINCT b FROM t1;
-~~~
-~~~
-+------+
-| b    |
-+------+
-| 0    |
-| 1    |
-| NULL |
-+------+
-~~~
-
-However, counting the number of distinct values excludes `NULL`s, which is consistent with the `COUNT()` function.
-
-~~~ sql
-> SELECT COUNT(DISTINCT b) FROM t1;
-~~~
-~~~
-+-------------------+
-| count(DISTINCT b) |
-+-------------------+
-| 2                 |
-+-------------------+
-~~~
-
-## NULLs as Other Values
-
-In some cases, you may want to include `NULL` values in arithmetic or aggregate function calculations. To do so, use the `IFNULL()` function to substitute a value for `NULL` during calculations.
-
-For example, let's say you want to calculate the average value of column `b` as being the `SUM()` of all numbers in `b` divided by the total number of rows, regardless of whether `b`'s value is `NULL`. In this case, you would use `AVG(IFNULL(b, 0))`, where `IFNULL(b, 0)` substitutes a value of zero (0) for `NULL`s during the calculation.
- -~~~ sql -> SELECT COUNT(*), COUNT(b), SUM(b), AVG(b), AVG(IFNULL(b, 0)), MIN(b), MAX(b) FROM t1; -~~~ -~~~ -+----------+----------+--------+--------------------+--------------------+--------+--------+ -| COUNT(*) | COUNT(b) | SUM(b) | AVG(b) | AVG(IFNULL(b, 0)) | MIN(b) | MAX(b) | -+----------+----------+--------+--------------------+--------------------+--------+--------+ -| 7 | 4 | 2 | 0.5000000000000000 | 0.2857142857142857 | 0 | 1 | -+----------+----------+--------+--------------------+--------------------+--------+--------+ -~~~ - -## NULLs and Set Operations - -`NULL` values are considered as part of a `UNION` [set operation](selection-queries.html#set-operations). - -~~~ sql -> SELECT b FROM t1 UNION SELECT b FROM t1; -~~~ -~~~ -+------+ -| b | -+------+ -| 0 | -| 1 | -| NULL | -+------+ -~~~ - - -## NULLs and Sorting - -When [sorting a column](query-order.html) containing `NULL` values, CockroachDB sorts `NULL` values first with `ASC` and last with `DESC`. This differs from PostgreSQL, which sorts `NULL` values last with `ASC` and first with `DESC`. - -Note that the `NULLS FIRST` and `NULLS LAST` options of the `ORDER BY` clause are not implemented in CockroachDB, so you cannot change where `NULL` values appear in the sort order. - -~~~ sql -> SELECT * FROM t1 ORDER BY b ASC; -~~~ -~~~ -+---+------+------+ -| a | b | c | -+---+------+------+ -| 6 | NULL | 1 | -| 5 | NULL | 0 | -| 7 | NULL | NULL | -| 1 | 0 | 0 | -| 2 | 0 | 1 | -| 4 | 1 | 1 | -| 3 | 1 | 0 | -+---+------+------+ -~~~ -~~~ sql -> SELECT * FROM t1 ORDER BY b DESC; -~~~ -~~~ -+---+------+------+ -| a | b | c | -+---+------+------+ -| 4 | 1 | 1 | -| 3 | 1 | 0 | -| 2 | 0 | 1 | -| 1 | 0 | 0 | -| 7 | NULL | NULL | -| 6 | NULL | 1 | -| 5 | NULL | 0 | -+---+------+------+ -~~~ - -## NULLs and Unique Constraints - -`NULL` values are not considered unique. Therefore, if a table has a Unique constraint on one or more columns that are optional (nullable), it is possible to insert multiple rows with `NULL` values in those columns, as shown in the example below. - -~~~ sql -> CREATE TABLE t2(a INT, b INT UNIQUE); - -> INSERT INTO t2 VALUES(1, 1); -> INSERT INTO t2 VALUES(2, NULL); -> INSERT INTO t2 VALUES(3, NULL); - -> SELECT * FROM t2; -~~~ -~~~ -+---+------+ -| a | b | -+---+------+ -| 1 | 1 | -| 2 | NULL | -| 3 | NULL | -+---+------+ -~~~ - -## NULLs and CHECK Constraints - -A [Check constraint](check.html) expression that evaluates to `NULL` is considered to pass, allowing for concise expressions like `discount < price` without worrying about adding `OR discount IS NULL` clauses. When non-null validation is desired, the usual Not Null constraint can be used along side a Check constraint. 
- -~~~ sql -> CREATE TABLE products (id STRING PRIMARY KEY, price INT NOT NULL CHECK (price > 0), discount INT, CHECK (discount <= price)); - -> INSERT INTO products (id, price) VALUES ('ncc-1701-d', 100); -> INSERT INTO products (id, price, discount) VALUES ('ncc-1701-a', 100, 50); - -> SELECT * FROM products; -~~~ -~~~ -+----------+-------+----------+ -| id | price | discount | -+----------+-------+----------+ -| ncc1701a | 100 | 50 | -| ncc1701d | 100 | NULL | -+----------+-------+----------+ -~~~ -~~~ sql -> INSERT INTO products (id, price) VALUES ('ncc-1701-b', -5); -~~~ -~~~ -failed to satisfy CHECK constraint (price > 0) -~~~ -~~~ sql -> INSERT INTO products (id, price, discount) VALUES ('ncc-1701-b', 100, 150); -~~~ -~~~ -failed to satisfy CHECK constraint (discount <= price) -~~~ diff --git a/src/current/v2.0/open-source.md b/src/current/v2.0/open-source.md deleted file mode 100644 index 212133c9d17..00000000000 --- a/src/current/v2.0/open-source.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Open Source -summary: CockroachDB is completely open source. -toc: false ---- - -Run on your laptop, development cluster, and public or private cloud without complex licensing, mock implementations, or inscrutable closed-source error output. Be a part of our vibrant community of developers and users! And if you really love databases, you can contribute to the design and implementation as it evolves. - -- Keep your options open and avoid vendor lock-in -- Easy experimentation and enhancement -- Bigger and more active community for support and troubleshooting -- Debug problems through your entire stack - -CockroachDB is open source diff --git a/src/current/v2.0/operational-faqs.md b/src/current/v2.0/operational-faqs.md deleted file mode 100644 index f90de03e8fe..00000000000 --- a/src/current/v2.0/operational-faqs.md +++ /dev/null @@ -1,133 +0,0 @@ ---- -title: Operational FAQs -summary: Get answers to frequently asked questions about operating CockroachDB. -toc: true -toc_not_nested: true ---- - - -## Why is my process hanging when I try to start it in the background? - -The first question that needs to be asked is whether or not you have previously -run a multi-node cluster using the same data directory. If you haven't, then you -should check out our [Cluster Setup Troubleshooting -docs](cluster-setup-troubleshooting.html). If you have previously started and -stopped a multi-node cluster and are now trying to bring it back up, you're in -the right place. - -In order to keep your data consistent, CockroachDB only works when at least a -majority of its nodes are running. This means that if only one node of a three -node cluster is running, that one node will not be able to do anything. The -`--background` flag of [`cockroach start`](start-a-node.html) causes the start -command to wait until the node has fully initialized and is able to start -serving queries. - -Together, these two facts mean that the `--background` flag will cause -`cockroach start` to hang until a majority of nodes are running. In order to -restart your cluster, you should either use multiple terminals so that you can -start multiple nodes at once or start each node in the background using your -shell's functionality (e.g., `cockroach start &`) instead of the `--background` -flag. - -## Why is memory usage increasing despite lack of traffic? 
- -Like most databases, CockroachDB caches the most recently accessed data in memory so that it can provide faster reads, and [its periodic writes of timeseries data](#why-is-disk-usage-increasing-despite-lack-of-writes) cause that cache size to increase until it hits its configured limit. For information about manually controlling the cache size, see [Recommended Production Settings](recommended-production-settings.html#cache-and-sql-memory-size). - -## Why is disk usage increasing despite lack of writes? - -The timeseries data used to power the graphs in the admin UI is stored within the cluster and accumulates for 30 days before it starts getting truncated. As a result, for the first 30 days or so of a cluster's life, you will see a steady increase in disk usage and the number of ranges even if you aren't writing data to the cluster yourself. - -## Can I reduce or disable the storage of timeseries data? New in v2.0 - -Yes. By default, CockroachDB stores timeseries data for the last 30 days for display in the Admin UI, but you can [reduce the interval for timeseries storage](#reduce-the-interval-for-timeseries-storage) or [disable timeseries storage entirely](#disable-timeseries-storage-entirely). - -{{site.data.alerts.callout_info}}After reducing or disabling timeseries storage, it can take up to 24 hours for timeseries data to be deleted and for the change to be reflected in Admin UI metrics.{{site.data.alerts.end}} - -### Reduce the interval for timeseries storage - -To reduce the interval for storage of timeseries data, change the `timeseries.resolution_10s.storage_duration` cluster setting to an [`INTERVAL`](interval.html) value less than `720h0m0s` (30 days). For example, to store timeseries data for the last 15 days, run the following [`SET CLUSTER SETTING`](set-cluster-setting.html) command: - -{% include copy-clipboard.html %} -~~~ sql -> SET CLUSTER SETTING timeseries.resolution_10s.storage_duration = '360h0m0s'; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SHOW CLUSTER SETTING timeseries.resolution_10s.storage_duration; -~~~ - -~~~ -+--------------------------------------------+ -| timeseries.resolution_10s.storage_duration | -+--------------------------------------------+ -| 360h | -+--------------------------------------------+ -(1 row) -~~~ - -### Disable timeseries storage entirely - -{{site.data.alerts.callout_info}}Disabling timeseries storage is recommended only if you exclusively use a third-party tool such as Prometheus for timeseries monitoring. Prometheus and other such tools do not rely on CockroachDB-stored timeseries data; instead, they ingest metrics exported by CockroachDB from memory and then store the data themselves.{{site.data.alerts.end}} - -To disable the storage of timeseries data entirely, run the following command: - -{% include copy-clipboard.html %} -~~~ sql -> SET CLUSTER SETTING timeseries.storage.enabled = false; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SHOW CLUSTER SETTING timeseries.storage.enabled; -~~~ - -~~~ -+----------------------------+ -| timeseries.storage.enabled | -+----------------------------+ -| false | -+----------------------------+ -(1 row) -~~~ - -If you want all existing timeseries data to be deleted, change the `timeseries.resolution_10s.storage_duration` cluster setting as well: - -{% include copy-clipboard.html %} -~~~ sql -> SET CLUSTER SETTING timeseries.resolution_10s.storage_duration = '0s'; -~~~ - -## Why would increasing the number of nodes not result in more operations per second? 
- -If queries operate on different data, then increasing the number -of nodes should improve the overall throughput (transactions/second or QPS). - -However, if your queries operate on the same data, you may be -observing transaction contention. See [Understanding and Avoiding -Transaction -Contention](performance-best-practices-overview.html#understanding-and-avoiding-transaction-contention) -for more details. - -## Why does CockroachDB collect anonymized cluster usage details by default? - -Collecting information about CockroachDB's real world usage helps us prioritize the development of product features. We choose our default as "opt-in" to strengthen the information we receive from our collection efforts, but we also make a careful effort to send only anonymous, aggregate usage statistics. See [Diagnostics Reporting](diagnostics-reporting.html) for a detailed look at what information is sent and how to opt-out. - -## What happens when node clocks are not properly synchronized? - -{% include {{ page.version.version }}/faq/clock-synchronization-effects.md %} - -## How can I tell how well node clocks are synchronized? - -{% include {{ page.version.version }}/faq/clock-synchronization-monitoring.html %} - -You can also see these metrics in [the Clock Offset graph](admin-ui-runtime-dashboard.html#clock-offset) on the Admin UI's Runtime dashboard as of the v2.0 release. - -## How do I prepare for planned node maintenance? - -{% include {{ page.version.version }}/faq/planned-maintenance.md %} - -## See Also - -- [Product FAQs](frequently-asked-questions.html) -- [SQL FAQs](sql-faqs.html) diff --git a/src/current/v2.0/orchestrate-a-local-cluster-with-kubernetes-insecure.md b/src/current/v2.0/orchestrate-a-local-cluster-with-kubernetes-insecure.md deleted file mode 100644 index 5403a588a1a..00000000000 --- a/src/current/v2.0/orchestrate-a-local-cluster-with-kubernetes-insecure.md +++ /dev/null @@ -1,178 +0,0 @@ ---- -title: Orchestration -summary: Orchestrate the deployment and management of an local cluster using Kubernetes. -toc: true ---- - -Other tutorials in this section feature the ways that CockroachDB automates operations for you. On top of this built-in automation, you can use a third-party [orchestration](orchestration.html) system to simplify and automate even more of your operations, from deployment to scaling to overall cluster management. - -This page walks you through a simple demonstration, using the open-source Kubernetes orchestration system. Starting with a few configuration files, you'll quickly create an insecure 3-node local cluster. You'll run a load generator against the cluster and then simulate node failure, watching how Kubernetes auto-restarts without the need for any manual intervention. You'll then scale the cluster with a single command before shutting the cluster down, again with a single command. - -{{site.data.alerts.callout_info}}To orchestrate a physically distributed cluster in production, see Orchestrated Deployment.{{site.data.alerts.end}} - - -## Before You Begin - -Before getting started, it's helpful to review some Kubernetes-specific terminology: - -Feature | Description ---------|------------ -[minikube](http://kubernetes.io/docs/getting-started-guides/minikube/) | This is the tool you'll use to run a Kubernetes cluster inside a VM on your local workstation. -[pod](http://kubernetes.io/docs/user-guide/pods/) | A pod is a group of one of more Docker containers. 
In this tutorial, all pods will run on your local workstation, each containing one Docker container running a single CockroachDB node. You'll start with 3 pods and grow to 4. -[StatefulSet](http://kubernetes.io/docs/concepts/abstractions/controllers/statefulsets/) | A StatefulSet is a group of pods treated as stateful units, where each pod has distinguishable network identity and always binds back to the same persistent storage on restart. StatefulSets are considered stable as of Kubernetes version 1.9 after reaching beta in version 1.5. -[persistent volume](http://kubernetes.io/docs/user-guide/persistent-volumes/) | A persistent volume is a piece of local storage mounted into a pod. The lifetime of a persistent volume is decoupled from the lifetime of the pod that's using it, ensuring that each CockroachDB node binds back to the same storage on restart.

      When using `minikube`, persistent volumes are external temporary directories that endure until they are manually deleted or until the entire Kubernetes cluster is deleted. -[persistent volume claim](http://kubernetes.io/docs/user-guide/persistent-volumes/#persistentvolumeclaims) | When pods are created (one per CockroachDB node), each pod will request a persistent volume claim to “claim” durable storage for its node. - -## Step 1. Start Kubernetes - -1. Follow Kubernetes' [documentation](https://kubernetes.io/docs/tasks/tools/install-minikube/) to install `minikube`, the tool used to run Kubernetes locally, for your OS. This includes installing a hypervisor and `kubectl`, the command-line tool used to managed Kubernetes from your local workstation. - - {{site.data.alerts.callout_info}}Make sure you install minikube version 0.21.0 or later. Earlier versions do not include a Kubernetes server that supports the maxUnavailability field and PodDisruptionBudget resource type used in the CockroachDB StatefulSet configuration.{{site.data.alerts.end}} - -2. Start a local Kubernetes cluster: - - {% include copy-clipboard.html %} - ~~~ shell - $ minikube start - ~~~ - -## Step 2. Start CockroachDB nodes - -When starting a cluster manually, you run the cockroach start command multiple times, once per node. In this step, you use a Kubernetes StatefulSet configuration instead, reducing the effort of starting 3 nodes to a single command. - -{% include {{ page.version.version }}/orchestration/start-cluster.md %} - -## Step 3. Initialize the cluster - -{% include {{ page.version.version }}/orchestration/initialize-cluster-insecure.md %} - -## Step 4. Test the cluster - -To test the cluster, launch a temporary pod for using the built-in SQL client, and then use a deployment configuration file to run a high-traffic load generator against the cluster from another pod. - -{% include {{ page.version.version }}/orchestration/test-cluster-insecure.md %} - -4. Use our [`example-app.yaml`](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/example-app.yaml) file to launch a pod and run a load generator against the cluster from the pod: - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl create -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/example-app.yaml - ~~~ - - ~~~ - deployment "example" created - ~~~ - -5. Verify that the pod for the load generator was added successfully: - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl get pods - ~~~ - - ~~~ - NAME READY STATUS RESTARTS AGE - cockroachdb-0 1/1 Running 0 28m - cockroachdb-1 1/1 Running 0 27m - cockroachdb-2 1/1 Running 0 10m - example-545f866f5-2gsrs 1/1 Running 0 25m - ~~~ - -## Step 5. Monitor the cluster - -To access the [Admin UI](admin-ui-overview.html) and monitor the cluster's state and the load generator's activity: - -1. Port-forward from your local machine to one of the pods: - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl port-forward cockroachdb-0 8080 - ~~~ - - ~~~ - Forwarding from 127.0.0.1:8080 -> 8080 - ~~~ - -2. Go to http://localhost:8080 and click **Metrics** on the left-hand navigation bar. - -3. On the **Overview** dashboard, note that there are 3 healthy nodes with many SQL inserts executing per second across them. - - CockroachDB Admin UI - -4. Click the **Databases** tab on the left to verify that the `bank` database you created manually, as well as the `kv` database created by the load generated, are listed. - -## Step 6. 
Simulate node failure - -{% include {{ page.version.version }}/orchestration/kubernetes-simulate-failure.md %} - -## Step 7. Scale the cluster - -1. Use the `kubectl scale` command to add a pod for another CockroachDB node: - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl scale statefulset cockroachdb --replicas=4 - ~~~ - - ~~~ - statefulset "cockroachdb" scaled - ~~~ - -2. Verify that the pod for a fourth node, `cockroachdb-3`, was added successfully: - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl get pods - ~~~ - - ~~~ - NAME READY STATUS RESTARTS AGE - cockroachdb-0 1/1 Running 0 28m - cockroachdb-1 1/1 Running 0 27m - cockroachdb-2 1/1 Running 0 10m - cockroachdb-3 1/1 Running 0 5s - example-545f866f5-2gsrs 1/1 Running 0 25m - ~~~ - -## Step 8. Stop the cluster - -- **If you plan to restart the cluster**, use the `minikube stop` command. This shuts down the minikube virtual machine but preserves all the resources you created: - - {% include copy-clipboard.html %} - ~~~ shell - $ minikube stop - ~~~ - - ~~~ - Stopping local Kubernetes cluster... - Machine stopped. - ~~~ - - You can restore the cluster to its previous state with `minikube start`. - -- **If you do not plan to restart the cluster**, use the `minikube delete` command. This shuts down and deletes the minikube virtual machine and all the resources you created, including persistent volumes: - - {% include copy-clipboard.html %} - ~~~ shell - $ minikube delete - ~~~ - - ~~~ - Deleting local Kubernetes cluster... - Machine deleted. - ~~~ - - {{site.data.alerts.callout_success}}To retain logs, copy them from each pod's stderr before deleting the cluster and all its resources. To access a pod's standard error stream, run kubectl logs <podname>.{{site.data.alerts.end}} - -## See Also - -Use a local cluster to explore these other core CockroachDB features: - -- [Data Replication](demo-data-replication.html) -- [Fault Tolerance & Recovery](demo-fault-tolerance-and-recovery.html) -- [Automatic Rebalancing](demo-automatic-rebalancing.html) -- [Cross-Cloud Migration](demo-automatic-cloud-migration.html) -- [Follow-the-Workload](demo-follow-the-workload.html) -- [JSON Support](demo-json-support.html) - -You might also want to learn how to [orchestrate a production deployment of CockroachDB with Kubernetes](orchestrate-cockroachdb-with-kubernetes.html). diff --git a/src/current/v2.0/orchestrate-cockroachdb-with-docker-swarm-insecure.md b/src/current/v2.0/orchestrate-cockroachdb-with-docker-swarm-insecure.md deleted file mode 100644 index 5a497375e6a..00000000000 --- a/src/current/v2.0/orchestrate-cockroachdb-with-docker-swarm-insecure.md +++ /dev/null @@ -1,323 +0,0 @@ ---- -title: Orchestrate CockroachDB with Docker Swarm -summary: How to orchestrate the deployment and management of an insecure three-node CockroachDB cluster as a Docker swarm. -toc: true - ---- - - - -This page shows you how to orchestrate the deployment and management of an insecure three-node CockroachDB cluster as a [swarm of Docker Engines](https://docs.docker.com/engine/swarm/). - -If you plan to use CockroachDB in production, we recommend using a secure cluster instead. Select **Secure** above for instructions. - - -## Before You Begin - -Before you begin, it's helpful to review some terminology: - -Feature | Description ---------|------------ -instance | A physical or virtual machine. In this tutorial, you'll use three, one per CockroachDB node. 
-[Docker Engine](https://docs.docker.com/engine/) | This is the core Docker application that creates and runs containers. In this tutorial, you'll install and start Docker Engine on each of your three instances. -[swarm](https://docs.docker.com/engine/swarm/key-concepts/#/swarm) | A swarm is a group of Docker Engines joined into a single, virtual host. -[swarm node](https://docs.docker.com/engine/swarm/how-swarm-mode-works/nodes/) | Each member of a swarm is considered a node. In this tutorial, each instance will be a swarm node, one as the master node and the two others as worker nodes. You'll submit service definitions to the master node, which will dispatch work to the worker nodes. -[service](https://docs.docker.com/engine/swarm/how-swarm-mode-works/services/) | A service is the definition of the tasks to execute on swarm nodes. In this tutorial, you'll define three services, each starting a CockroachDB node inside a container and joining it into a single cluster. Each service also ensures a stable network identity on restart via a resolvable DNS name. -[overlay network](https://docs.docker.com/engine/userguide/networking/#/an-overlay-network-with-docker-engine-swarm-mode) | An overlay network enables communication between the nodes of a swarm. In this tutorial, you'll create an overlay network and use it in each of your services. - -## Step 1. Create instances - -Create three instances, one for each node of your cluster. - -- For GCE-specific instructions, read through step 2 of [Deploy CockroachDB on GCE](deploy-cockroachdb-on-google-cloud-platform-insecure.html). -- For AWS-specific instructions, read through step 2 of [Deploy CockroachDB on AWS](deploy-cockroachdb-on-aws-insecure.html). - -Be sure to configure your network to allow TCP communication on these ports: - -- `26257` for inter-node communication (i.e., working as a cluster) and connecting with applications -- `8080` for exposing your Admin UI - -## Step 2. Install Docker Engine - -On each instance: - -1. [Install and start Docker Engine](https://docs.docker.com/engine/installation/). - -2. Confirm that the Docker daemon is running in the background: - - {% include copy-clipboard.html %} - ~~~ shell - $ sudo docker version - ~~~ - -## Step 3. Start the swarm - -1. On the instance where you want to run your manager node, [initialize the swarm](https://docs.docker.com/engine/swarm/swarm-tutorial/create-swarm/). - - Take note of the output for `docker swarm init` as it includes the command you'll use in the next step. It should look like this: - - {% include copy-clipboard.html %} - ~~~ shell - $ sudo docker swarm init --advertise-addr 10.142.0.2 - ~~~ - - ~~~ - Swarm initialized: current node (414z67gr5cgfalm4uriu4qdtm) is now a manager. - - To add a worker to this swarm, run the following command: - - $ docker swarm join \ - --token SWMTKN-1-5vwxyi6zl3cc62lqlhi1jrweyspi8wblh2i3qa7kv277fgy74n-e5eg5c7ioxypjxlt3rpqorh15 \ - 10.142.0.2:2377 - - To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions. - ~~~ - -2. On the other two instances, [create a worker node joined to the swarm](https://docs.docker.com/engine/swarm/swarm-tutorial/add-nodes/) by running the `docker swarm join` command in the output from step 1, for example: - - {% include copy-clipboard.html %} - ~~~ shell - $ sudo docker swarm join \ - --token SWMTKN-1-5vwxyi6zl3cc62lqlhi1jrweyspi8wblh2i3qa7kv277fgy74n-e5eg5c7ioxypjxlt3rpqorh15 \ - 10.142.0.2:2377 - ~~~ - - ~~~ - This node joined a swarm as a worker. - ~~~ - -3. 
On the instance running your manager node, verify that your swarm is running: - - {% include copy-clipboard.html %} - ~~~ shell - $ sudo docker node ls - ~~~ - - ~~~ - ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS - 414z67gr5cgfalm4uriu4qdtm * instance-1 Ready Active Leader - ae144s35dx1p1lcegh6bblyed instance-2 Ready Active - aivjg2joxyvzvbksjsix27khy instance-3 Ready Active - ~~~ - -## Step 4. Create an overlay network - -On the instance running your manager node, create an overlay network so that the containers in your swarm can talk to each other: - -{% include copy-clipboard.html %} -~~~ shell -$ sudo docker network create --driver overlay --attachable cockroachdb -~~~ - -The `--attachable` option enables non-swarm containers running on Docker to access services on the network, which makes the service easier to use interactively. - -## Step 5. Start the CockroachDB cluster - -1. On the instance running your manager node, create one swarm service for each CockroachDB node: - - {% include copy-clipboard.html %} - ~~~ - # Start the first service: - $ sudo docker service create \ - --replicas 1 \ - --name cockroachdb-1 \ - --hostname cockroachdb-1 \ - --network cockroachdb \ - --mount type=volume,source=cockroachdb-1,target=/cockroach/cockroach-data,volume-driver=local \ - --stop-grace-period 60s \ - --publish 8080:8080 \ - cockroachdb/cockroach:{{page.release_info.version}} start \ - --join=cockroachdb-1:26257,cockroachdb-2:26257,cockroachdb-3:26257 \ - --cache=.25 \ - --max-sql-memory=.25 \ - --logtostderr \ - --insecure - ~~~ - - {% include copy-clipboard.html %} - ~~~ - # Start the second service: - $ sudo docker service create \ - --replicas 1 \ - --name cockroachdb-2 \ - --hostname cockroachdb-2 \ - --network cockroachdb \ - --mount type=volume,source=cockroachdb-2,target=/cockroach/cockroach-data,volume-driver=local \ - --stop-grace-period 60s \ - cockroachdb/cockroach:{{page.release_info.version}} start \ - --join=cockroachdb-1:26257,cockroachdb-2:26257,cockroachdb-3:26257 \ - --cache=.25 \ - --max-sql-memory=.25 \ - --logtostderr \ - --insecure - ~~~ - - {% include copy-clipboard.html %} - ~~~ - # Start the third service: - $ sudo docker service create \ - --replicas 1 \ - --name cockroachdb-3 \ - --hostname cockroachdb-3 \ - --network cockroachdb \ - --mount type=volume,source=cockroachdb-3,target=/cockroach/cockroach-data,volume-driver=local \ - --stop-grace-period 60s \ - cockroachdb/cockroach:{{page.release_info.version}} start \ - --join=cockroachdb-1:26257,cockroachdb-2:26257,cockroachdb-3:26257 \ - --cache=.25 \ - --max-sql-memory=.25 \ - --logtostderr \ - --insecure - ~~~ - - These commands each create a service that starts a container, joins it to the overlay network, and starts a CockroachDB node inside the container mounted to a local volume for persistent storage. Let's look at each part: - - `sudo docker service create`: The Docker command to create a new service. - - `--replicas`: The number of containers controlled by the service. Since each service will control one container running one CockroachDB node, this will always be `1`. - - `--name`: The name for the service. - - `--hostname`: The hostname of the container. It will listen for connections on this address. - - `--network`: The overlay network for the container to join. See [Step 4. Create an overlay network](#step-4-create-an-overlay-network) for more details. - - `--mount`: This flag mounts a local volume with the same name as the service. 
This means that data and logs for the node running in this container will be stored in `/cockroach/cockroach-data` on the instance and will be reused on restart as long as restart happens on the same instance, which is not guaranteed. - {{site.data.alerts.callout_info}}If you plan on replacing or adding instances, it's recommended to use remote storage instead of local disk. To do so, create a remote volume for each CockroachDB instance using the volume driver of your choice, and then specify that volume driver instead of the volume-driver=local part of the command above, e.g., volume-driver=gce if using the GCE volume driver. - - `--stop-grace-period`: This flag sets a grace period to give CockroachDB enough time to shut down gracefully, when possible. - - `--publish`: This flag makes the Admin UI accessible at the IP of any instance running a swarm node on port `8080`. Note that, even though this flag is defined only in the first node's service, the swarm exposes this port on every swarm node using a routing mesh. See [Publishing ports](https://docs.docker.com/engine/swarm/services/#publish-ports) for more details. - - `cockroachdb/cockroach:{{page.release_info.version}} start ...`: The CockroachDB command to [start a node](start-a-node.html) in the container in insecure mode and instruct other cluster members to talk to each other using their persistent network addresses, which match the services' names. - -2. Verify that all three services were created successfully: - - {% include copy-clipboard.html %} - ~~~ shell - $ sudo docker service ls - ~~~ - - ~~~ - ID NAME MODE REPLICAS IMAGE - a6g0ur6857j6 cockroachdb-1 replicated 1/1 cockroachdb/cockroach:{{page.release_info.version}} - dr81a756gaa6 cockroachdb-2 replicated 1/1 cockroachdb/cockroach:{{page.release_info.version}} - il4m7op1afg9 cockroachdb-3 replicated 1/1 cockroachdb/cockroach:{{page.release_info.version}} - ~~~ - - {{site.data.alerts.callout_success}}The service definitions tell the CockroachDB nodes to log to stderr, so if you ever need access to a node's logs for troubleshooting, use sudo docker logs <container id> from the instance on which the container is running.{{site.data.alerts.end}} - -3. Now all the CockroachDB nodes are running, but we still have to explicitly tell them to initialize a new cluster together. To do so, use the `sudo docker run` command to run the `cockroach init` command against one of the nodes. The `cockroach init` command will initialize the cluster, bringing it into a usable state. - - {% include copy-clipboard.html %} - ~~~ shell - $ sudo docker run -it --rm --network=cockroachdb cockroachdb/cockroach:{{page.release_info.version}} init --host=cockroachdb-1 --insecure - ~~~ - - -## Step 6. Use the built-in SQL client - -1. Use the `sudo docker run` command to start a new container attached to the CockroachDB network, run the built-in SQL shell, and connect it to the cluster: - - {% include copy-clipboard.html %} - ~~~ shell - $ sudo docker run -it --rm --network=cockroachdb cockroachdb/cockroach:{{page.release_info.version}} sql --host=cockroachdb-1 --insecure - ~~~ - -2. Create an `insecurenodetest` database: - - {% include copy-clipboard.html %} - ~~~ sql - > CREATE DATABASE insecurenodetest; - ~~~ - -3. Use **CTRL-D**, **CTRL-C**, or `\q` to exit the SQL shell. - -## Step 7. Monitor the cluster - -To view your cluster's Admin UI, open a browser and go to `http://:8080`. 
- -{{site.data.alerts.callout_info}}It's possible to access the Admin UI from outside of the swarm because you published port 8080 externally in the first node's service definition.{{site.data.alerts.end}} - -On this page, verify that the cluster is running as expected: - -1. View **Node list** to ensure that all of your nodes successfully joined the cluster. - -2. Click the **Databases** tab on the left to verify that `insecurenodetest` is listed. - -## Step 8. Simulate node failure - -Since we have three service definitions, one for each node, Docker swarm will ensure that there are three nodes running at all times. If a node fails, Docker swarm will automatically create another node with the same network identity and storage. - -To see this in action: - -1. On any instance, use the `sudo docker ps` command to get the ID of the container running the CockroachDB node: - - {% include copy-clipboard.html %} - ~~~ shell - $ sudo docker ps | grep cockroachdb - ~~~ - - ~~~ - 9539871cc769 cockroachdb/cockroach:{{page.release_info.version}} "/cockroach/cockroach" 10 minutes ago Up 10 minutes 8080/tcp, 26257/tcp cockroachdb-0.1.0wigdh8lx0ylhuzm4on9bbldq - ~~~ - -2. Use `sudo docker kill` to remove the container, which implicitly stops the node: - - {% include copy-clipboard.html %} - ~~~ shell - $ sudo docker kill 9539871cc769 - ~~~ - -3. Verify that the node was restarted in a new container: - - {% include copy-clipboard.html %} - ~~~ shell - $ sudo docker ps | grep cockroachdb - ~~~ - - ~~~ - 4a58f86e3ced cockroachdb/cockroach:{{page.release_info.version}} "/cockroach/cockroach" 7 seconds ago Up 1 seconds 8080/tcp, 26257/tcp cockroachdb-0.1.cph86kmhhcp8xzq6a1nxtk9ng - ~~~ - -4. Back in the Admin UI, view the **Node list** and verify that all 3 nodes are live. - -## Step 9. Scale the cluster - -To increase the number of nodes in your CockroachDB cluster: - -1. Create an additional instance (see [Step 1](#step-1-create-instances)). -2. Install Docker Engine on the instance (see [Step 2](#step-2-install-docker-engine)). -3. Join the instance to the swarm as a worker node (see [Step 3.2](#step-3-start-the-swarm)). -4. Create a new service to start another node and join it to the CockroachDB cluster (see [Step 5.1](#step-5-start-the-cockroachdb-cluster)). - -## Step 10. Stop the cluster - -To stop the CockroachDB cluster, on the instance running your manager node, remove the services: - -{% include copy-clipboard.html %} -~~~ shell -$ sudo docker service rm cockroachdb-0 cockroachdb-1 cockroachdb-2 -~~~ - -~~~ -cockroachdb-0 -cockroachdb-1 -cockroachdb-2 -~~~ - -You may want to remove the persistent volumes used by the services as well. To do this, on each instance: - -{% include copy-clipboard.html %} -~~~ shell -# Identify the name of the local volume: -$ sudo docker volume ls -~~~ - -~~~ -cockroachdb-0 -~~~ - -{% include copy-clipboard.html %} -~~~ shell -# Remove the local volume: -$ sudo docker volume rm cockroachdb-0 -~~~ - -## See Also - -{% include {{ page.version.version }}/prod-deployment/prod-see-also.md %} diff --git a/src/current/v2.0/orchestrate-cockroachdb-with-docker-swarm.md b/src/current/v2.0/orchestrate-cockroachdb-with-docker-swarm.md deleted file mode 100644 index 0a9c8ce7af5..00000000000 --- a/src/current/v2.0/orchestrate-cockroachdb-with-docker-swarm.md +++ /dev/null @@ -1,554 +0,0 @@ ---- -title: Orchestrate CockroachDB with Docker Swarm -summary: How to orchestrate the deployment and management of a secure three-node CockroachDB cluster as a Docker swarm. 
-toc: true - ---- - -
      - - -
      - -This page shows you how to orchestrate the deployment and management of a secure three-node CockroachDB cluster as a [swarm of Docker Engines](https://docs.docker.com/engine/swarm/). - -If you are only testing CockroachDB, or you are not concerned with protecting network communication with TLS encryption, you can use an insecure cluster instead. Select **Insecure** above for instructions. - - -## Before You Begin - -Before you begin, it's helpful to review some terminology: - -Feature | Description ---------|------------ -instance | A physical or virtual machine. In this tutorial, you'll use three, one per CockroachDB node. -[Docker Engine](https://docs.docker.com/engine/) | This is the core Docker application that creates and runs containers. In this tutorial, you'll install and start Docker Engine on each of your three instances. -[swarm](https://docs.docker.com/engine/swarm/key-concepts/#/swarm) | A swarm is a group of Docker Engines joined into a single, virtual host. -[swarm node](https://docs.docker.com/engine/swarm/how-swarm-mode-works/nodes/) | Each member of a swarm is considered a node. In this tutorial, each instance will be a swarm node, one as the master node and the two others as worker nodes. You'll submit service definitions to the master node, which will dispatch work to the worker nodes. -[service](https://docs.docker.com/engine/swarm/how-swarm-mode-works/services/) | A service is the definition of the tasks to execute on swarm nodes. In this tutorial, you'll define three services, each starting a CockroachDB node inside a container and joining it into a single cluster. Each service also ensures a stable network identity on restart via a resolvable DNS name. -[secret](https://docs.docker.com/engine/swarm/secrets/) | A secret is Docker's mechanism for managing sensitive data that a container needs at runtime. Since CockroachDB uses TLS certificates to authenticate and encrypt inter-node and client/node communication, you'll create a secret per certificate and use the secrets in your services. -[overlay network](https://docs.docker.com/engine/userguide/networking/#/an-overlay-network-with-docker-engine-swarm-mode) | An overlay network enables communication between the nodes of a swarm. In this tutorial, you'll create an overlay network and use it in each of your services. - -## Step 1. Create instances - -Create three instances, one for each node of your cluster. - -- For GCE-specific instructions, read through step 2 of [Deploy CockroachDB on GCE](deploy-cockroachdb-on-google-cloud-platform-insecure.html). -- For AWS-specific instructions, read through step 2 of [Deploy CockroachDB on AWS](deploy-cockroachdb-on-aws-insecure.html). - -Be sure to configure your network to allow TCP communication on these ports: - -- `26257` for inter-node communication (i.e., working as a cluster) and connecting with applications -- `8080` for exposing your Admin UI - -## Step 2. Install Docker Engine - -On each instance: - -1. [Install and start Docker Engine](https://docs.docker.com/engine/installation/). - -2. Confirm that the Docker daemon is running in the background: - - {% include copy-clipboard.html %} - ~~~ shell - $ sudo docker version - ~~~ - -## Step 3. Start the swarm - -1. On the instance where you want to run your manager node, [initialize the swarm](https://docs.docker.com/engine/swarm/swarm-tutorial/create-swarm/). - - Take note of the output for `docker swarm init` as it includes the command you'll use in the next step. 
    It should look like this:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ sudo docker swarm init --advertise-addr 10.142.0.2
-    ~~~
-
-    ~~~
-    Swarm initialized: current node (414z67gr5cgfalm4uriu4qdtm) is now a manager.
-    To add a worker to this swarm, run the following command:
-    $ docker swarm join \
-    --token SWMTKN-1-5vwxyi6zl3cc62lqlhi1jrweyspi8wblh2i3qa7kv277fgy74n-e5eg5c7ioxypjxlt3rpqorh15 \
-    10.142.0.2:2377
-    To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
-    ~~~
-
-2. On the other two instances, [create a worker node joined to the swarm](https://docs.docker.com/engine/swarm/swarm-tutorial/add-nodes/) by running the `docker swarm join` command in the output from step 1, for example:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ sudo docker swarm join \
-    --token SWMTKN-1-5vwxyi6zl3cc62lqlhi1jrweyspi8wblh2i3qa7kv277fgy74n-e5eg5c7ioxypjxlt3rpqorh15 \
-    10.142.0.2:2377
-    ~~~
-
-    ~~~
-    This node joined a swarm as a worker.
-    ~~~
-
-3. On the instance running your manager node, verify that your swarm is running:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ sudo docker node ls
-    ~~~
-
-    ~~~
-    ID                           HOSTNAME    STATUS  AVAILABILITY  MANAGER STATUS
-    414z67gr5cgfalm4uriu4qdtm *  instance-1  Ready   Active        Leader
-    ae144s35dx1p1lcegh6bblyed    instance-2  Ready   Active
-    aivjg2joxyvzvbksjsix27khy    instance-3  Ready   Active
-    ~~~
-
-## Step 4. Create an overlay network
-
-On the instance running your manager node, create an overlay network so that the containers in your swarm can talk to each other:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ sudo docker network create --driver overlay --attachable cockroachdb
-~~~
-
-The `--attachable` option enables non-swarm containers running on Docker to access services on the network, which makes the service easier to use interactively.
-
-## Step 5. Create security resources
-
-A secure CockroachDB cluster uses TLS certificates for encrypted inter-node and client/node authentication and communication. In this step, you'll install CockroachDB on the instance running your manager node, use the [`cockroach cert`](create-security-certificates.html) command to generate certificate authority (CA), node, and client certificate and key pairs, and use the [`docker secret create`](https://docs.docker.com/engine/reference/commandline/secret_create/) command to assign these files to Docker [secrets](https://docs.docker.com/engine/swarm/secrets/) for use by your Docker services.
-
-1. On the instance running your manager node, install CockroachDB from our latest binary:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    # Get the latest CockroachDB tarball:
-    $ curl -O https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz
-    ~~~
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    # Extract the binary:
-    $ tar -xzf cockroach-{{ page.release_info.version }}.linux-amd64.tgz \
-    --strip=1 cockroach-{{ page.release_info.version }}.linux-amd64/cockroach
-    ~~~
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    # Move the binary:
-    $ sudo mv cockroach /usr/local/bin/
-    ~~~
-
-2. Create a `certs` directory and a safe directory to keep your CA key; an optional step to restrict access to the safe directory is sketched below:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ mkdir certs
-    ~~~
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ mkdir my-safe-directory
-    ~~~
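-
-    A minimal hardening sketch, assuming only your current user needs to read the CA key on this instance (optional, and not required by the rest of this tutorial):
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    # Restrict the CA key directory to the current user only
-    # (owner read/write/execute, no access for group or others):
-    $ chmod 700 my-safe-directory
-    ~~~
-
-3. 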
Create the CA certificate and key: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach cert create-ca \ - --certs-dir=certs \ - --ca-key=my-safe-directory/ca.key - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ ls certs - ~~~ - - ~~~ - ca.crt - ~~~ - -4. Create a Docker secret for the `ca.crt` file using the [`docker secret create`](https://docs.docker.com/engine/reference/commandline/secret_create/) command: - - {{site.data.alerts.callout_danger}}Store the ca.key file somewhere safe and keep a backup; if you lose it, you will not be able to add new nodes or clients to your cluster.{{site.data.alerts.end}} - - {% include copy-clipboard.html %} - ~~~ shell - $ sudo docker secret create ca-crt certs/ca.crt - ~~~ - - This command assigns a name to the secret (`ca-crt`) and identifies the location of the cockroach-generated CA certificate file. You can use a different secret name, if you like, but be sure to reference the correct name when starting the CockroachDB nodes in the next step. - -5. Create the certificate and key for the first node: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach cert create-node \ - cockroachdb-1 \ - localhost \ - 127.0.0.1 \ - --certs-dir=certs \ - --ca-key=my-safe-directory/ca.key - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ ls certs - ~~~ - - ~~~ - ca.crt - node.crt - node.key - ~~~ - - This command issues the certificate/key pair to the service name you will use for the node later (`cockroachdb-1`) as well as to local addresses that will make it easier to run the built-in SQL shell and other CockroachDB client commands in the same container as the node. - -6. Create Docker secrets for the first node's certificate and key: - - {% include copy-clipboard.html %} - ~~~ shell - $ sudo docker secret create cockroachdb-1-crt certs/node.crt - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ sudo docker secret create cockroachdb-1-key certs/node.key - ~~~ - - Again, these commands assign names to the secrets (`cockroachdb-1-crt` and `cockroachdb-1-key`) and identify the location of the cockroach-generated certificate and key files. - -7. Create the certificate and key for the second node, using the `--overwrite` flag to replace the files created for the first node: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach cert create-node --overwrite \ - cockroachdb-2 \ - localhost \ - 127.0.0.1 \ - --certs-dir=certs \ - --ca-key=my-safe-directory/ca.key - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ ls certs - ~~~ - - ~~~ - ca.crt - node.crt - node.key - ~~~ - -8. Create Docker secrets for the second node's certificate and key: - - {% include copy-clipboard.html %} - ~~~ shell - $ sudo docker secret create cockroachdb-2-crt certs/node.crt - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ sudo docker secret create cockroachdb-2-key certs/node.key - ~~~ - -9. Create the certificate and key for the third node, again using the `--overwrite` flag to replace the files created for the second node: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach cert create-node --overwrite \ - cockroachdb-3 \ - localhost \ - 127.0.0.1 \ - --certs-dir=certs \ - --ca-key=my-safe-directory/ca.key - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ ls certs - ~~~ - - ~~~ - ca.crt - node.crt - node.key - ~~~ - -10. 
Create Docker secrets for the third node's certificate and key: - - {% include copy-clipboard.html %} - ~~~ shell - $ sudo docker secret create cockroachdb-3-crt certs/node.crt - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ sudo docker secret create cockroachdb-3-key certs/node.key - ~~~ - -11. Create a client certificate and key for the `root` user: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach cert create-client \ - root \ - --certs-dir=certs \ - --ca-key=my-safe-directory/ca.key - ~~~ - -12. Create Docker secrets for the `root` user's certificate and key: - - {% include copy-clipboard.html %} - ~~~ shell - $ sudo docker secret create cockroachdb-root-crt certs/client.root.crt - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ sudo docker secret create cockroachdb-root-key certs/client.root.key - ~~~ - -## Step 6. Start the CockroachDB cluster - -1. On the instance running your manager node, create one swarm service for each CockroachDB node: - - {% include copy-clipboard.html %} - ~~~ - # Create the first service: - $ sudo docker service create \ - --replicas 1 \ - --name cockroachdb-1 \ - --hostname cockroachdb-1 \ - --network cockroachdb \ - --mount type=volume,source=cockroachdb-1,target=/cockroach/cockroach-data,volume-driver=local \ - --stop-grace-period 60s \ - --publish 8080:8080 \ - --secret source=ca-crt,target=ca.crt \ - --secret source=cockroachdb-1-crt,target=node.crt \ - --secret source=cockroachdb-1-key,target=node.key,mode=0600 \ - --secret source=cockroachdb-root-crt,target=client.root.crt \ - --secret source=cockroachdb-root-key,target=client.root.key,mode=0600 \ - cockroachdb/cockroach:{{page.release_info.version}} start \ - --join=cockroachdb-1:26257,cockroachdb-2:26257,cockroachdb-3:26257 \ - --cache=.25 \ - --max-sql-memory=.25 \ - --logtostderr \ - --certs-dir=/run/secrets - ~~~ - - {% include copy-clipboard.html %} - ~~~ - # Create the second service: - $ sudo docker service create \ - --replicas 1 \ - --name cockroachdb-2 \ - --hostname cockroachdb-2 \ - --network cockroachdb \ - --stop-grace-period 60s \ - --mount type=volume,source=cockroachdb-2,target=/cockroach/cockroach-data,volume-driver=local \ - --secret source=ca-crt,target=ca.crt \ - --secret source=cockroachdb-2-crt,target=node.crt \ - --secret source=cockroachdb-2-key,target=node.key,mode=0600 \ - --secret source=cockroachdb-root-crt,target=client.root.crt \ - --secret source=cockroachdb-root-key,target=client.root.key,mode=0600 \ - cockroachdb/cockroach:{{page.release_info.version}} start \ - --join=cockroachdb-1:26257,cockroachdb-2:26257,cockroachdb-3:26257 \ - --cache=.25 \ - --max-sql-memory=.25 \ - --logtostderr \ - --certs-dir=/run/secrets - ~~~ - - {% include copy-clipboard.html %} - ~~~ - # Create the third service: - $ sudo docker service create \ - --replicas 1 \ - --name cockroachdb-3 \ - --hostname cockroachdb-3 \ - --network cockroachdb \ - --mount type=volume,source=cockroachdb-3,target=/cockroach/cockroach-data,volume-driver=local \ - --stop-grace-period 60s \ - --secret source=ca-crt,target=ca.crt \ - --secret source=cockroachdb-3-crt,target=node.crt \ - --secret source=cockroachdb-3-key,target=node.key,mode=0600 \ - --secret source=cockroachdb-root-crt,target=client.root.crt \ - --secret source=cockroachdb-root-key,target=client.root.key,mode=0600 \ - cockroachdb/cockroach:{{page.release_info.version}} start \ - --join=cockroachdb-1:26257,cockroachdb-2:26257,cockroachdb-3:26257 \ - --cache=.25 \ - --max-sql-memory=.25 \ - --logtostderr \ - 
    --certs-dir=/run/secrets
-    ~~~
-
-    These commands each create a service that starts a container securely, joins it to the overlay network, and starts a CockroachDB node inside the container mounted to a local volume for persistent storage. Let's look at each part:
-    - `sudo docker service create`: The Docker command to create a new service.
-    - `--replicas`: The number of containers controlled by the service. Since each service will control one container running one CockroachDB node, this will always be `1`.
-    - `--name`: The name for the service.
-    - `--hostname`: The hostname of the container. It will listen for connections on this address.
-    - `--network`: The overlay network for the container to join. See [Step 4. Create an overlay network](#step-4-create-an-overlay-network) for more details.
-    - `--mount`: This flag mounts a local volume with the same name as the service. This means that data and logs for the node running in this container will be stored in `/cockroach/cockroach-data` on the instance and will be reused on restart as long as the restart happens on the same instance, which is not guaranteed.
-    {{site.data.alerts.callout_info}}If you plan on replacing or adding instances, it's recommended to use remote storage instead of local disk. To do so, create a remote volume for each CockroachDB instance using the volume driver of your choice, and then specify that volume driver instead of the volume-driver=local part of the command above, e.g., volume-driver=gce if using the GCE volume driver.{{site.data.alerts.end}}
-    - `--stop-grace-period`: This flag sets a grace period to give CockroachDB enough time to shut down gracefully, when possible.
-    - `--publish`: This flag makes the Admin UI accessible at the IP of any instance running a swarm node on port `8080`. Note that, even though this flag is defined only in the first node's service, the swarm exposes this port on every swarm node using a routing mesh. See [Publishing ports](https://docs.docker.com/engine/swarm/services/#publish-ports) for more details.
-    - `--secret`: These flags identify the secrets to use in securing the node. They must reference the secret names defined in step 5. For the node and client certificate and key secrets, the `source` field identifies the relevant secret, and the `target` field defines the name to be used in `cockroach start` and `cockroach sql` flags. For the node and client key secrets, the `mode` field also sets the file permissions to `0600`; if this isn't set, Docker will assign a default file permission of `0444`, which will not work with CockroachDB's built-in SQL client.
-    - `cockroachdb/cockroach:{{page.release_info.version}} start ...`: The CockroachDB command to [start a node](start-a-node.html) in the container in secure mode and instruct other cluster members to talk to each other using their persistent network addresses, which match the services' names.
-
-    To check where the swarm scheduled each service's container, see the sketch below.
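-
-    A minimal sketch using `docker service ps`, which reports the swarm node assigned to a service's task; repeating it for each of the three services confirms placement before moving on:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    # Show which swarm node is running the first service's container:
-    $ sudo docker service ps cockroachdb-1
-    ~~~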
-
-2. Verify that all three services were created successfully:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ sudo docker service ls
-    ~~~
-
-    ~~~
-    ID            NAME           MODE        REPLICAS  IMAGE
-    a6g0ur6857j6  cockroachdb-1  replicated  1/1       cockroachdb/cockroach:{{page.release_info.version}}
-    dr81a756gaa6  cockroachdb-2  replicated  1/1       cockroachdb/cockroach:{{page.release_info.version}}
-    il4m7op1afg9  cockroachdb-3  replicated  1/1       cockroachdb/cockroach:{{page.release_info.version}}
-    ~~~
-
-    {{site.data.alerts.callout_success}}The service definitions tell the CockroachDB nodes to log to stderr, so if you ever need access to a node's logs for troubleshooting, use sudo docker logs <container id> from the instance on which the container is running.{{site.data.alerts.end}}
-
-3. Now all the CockroachDB nodes are running, but we still have to explicitly tell them to initialize a new cluster together. To do so, use the `sudo docker run` command to run the `cockroach init` command against one of the nodes. The `cockroach init` command will initialize the cluster, bringing it into a usable state.
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ sudo docker run -it --rm --network cockroachdb --mount type=bind,source="$(pwd)/certs",target=/cockroach/certs,readonly cockroachdb/cockroach:{{page.release_info.version}} init --host=cockroachdb-1 --certs-dir=certs
-    ~~~
-
-    We mount the `certs` directory as a volume inside the container because it contains the `root` user's client certificate and key, which we need to talk to the cluster.
-
-## Step 7. Use the built-in SQL client
-
-1. Use the `sudo docker run` command to start a new container attached to the CockroachDB network, run the built-in SQL shell, and connect it to the cluster:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ sudo docker run -it --rm --network cockroachdb --mount type=bind,source="$(pwd)/certs",target=/cockroach/certs,readonly cockroachdb/cockroach:{{page.release_info.version}} sql --host=cockroachdb-1 --certs-dir=certs
-    ~~~
-
-2. Create a `securenodetest` database:
-
-    {% include copy-clipboard.html %}
-    ~~~ sql
-    > CREATE DATABASE securenodetest;
-    ~~~
-
-3. Use **CTRL-D**, **CTRL-C**, or `\q` to exit the SQL shell.
-
-## Step 8. Monitor the cluster
-
-To view your cluster's Admin UI, open a browser and go to `https://<any instance's external IP address>:8080`.
-
-{{site.data.alerts.callout_info}}It's possible to access the Admin UI from outside of the swarm because you published port 8080 externally in the first node's service definition. However, your browser will consider the CockroachDB-created certificate invalid, so you'll need to click through a warning message to get to the UI.{{site.data.alerts.end}}
-
-On this page, verify that the cluster is running as expected:
-
-1. View **Node List** to ensure that all of your nodes successfully joined the cluster.
-
-2. Click the **Databases** tab on the left to verify that `securenodetest` is listed.
-
-## Step 9. Simulate node failure
-
-Since we have three service definitions, one for each node, Docker Swarm will ensure that there are three nodes running at all times. If a node fails, Docker Swarm will automatically create another node with the same network identity and storage.
-
-To see this in action:
-
-1. On any instance, use the `sudo docker ps` command to get the ID of the container running the CockroachDB node:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ sudo docker ps | grep cockroachdb
-    ~~~
-
-    ~~~
-    32769a6dd664   cockroachdb/cockroach:{{page.release_info.version}}   "/cockroach/cockroach"   10 minutes ago   Up 10 minutes   8080/tcp, 26257/tcp   cockroachdb-2.1.0wigdh8lx0ylhuzm4on9bbldq
-    ~~~
-
-2. Use `sudo docker kill` to remove the container, which implicitly stops the node:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ sudo docker kill 32769a6dd664
-    ~~~
-
-3. Verify that the node was restarted in a new container:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ sudo docker ps | grep cockroachdb
-    ~~~
-
-    ~~~
-    4a58f86e3ced   cockroachdb/cockroach:{{page.release_info.version}}   "/cockroach/cockroach"   7 seconds ago   Up 1 seconds   8080/tcp, 26257/tcp   cockroachdb-2.1.cph86kmhhcp8xzq6a1nxtk9ng
-    ~~~
-
-4. Back in the Admin UI, view **Node List** and verify that all 3 nodes are live.
-
-## Step 10. Scale the cluster
-
-To increase the number of nodes in your CockroachDB cluster:
-
-1. Create an additional instance (see [Step 1](#step-1-create-instances)).
-2. Install Docker Engine on the instance (see [Step 2](#step-2-install-docker-engine)).
-3. Join the instance to the swarm as a worker node (see [Step 3.2](#step-3-start-the-swarm)).
-4. Create security resources for the node (see [Steps 5.7 and 5.8](#step-5-create-security-resources)).
-5. Create a new service to start another node and join it to the CockroachDB cluster (see [Step 6.1](#step-6-start-the-cockroachdb-cluster)).
-
-## Step 11. Stop the cluster
-
-To stop the CockroachDB cluster, on the instance running your manager node, remove the services:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ sudo docker service rm cockroachdb-1 cockroachdb-2 cockroachdb-3
-~~~
-
-~~~
-cockroachdb-1
-cockroachdb-2
-cockroachdb-3
-~~~
-
-You may want to remove the persistent volumes and secrets used by the services as well. To do this, on each instance:
-
-{% include copy-clipboard.html %}
-~~~ shell
-# Identify the name of the local volume:
-$ sudo docker volume ls
-~~~
-
-~~~
-cockroachdb-1
-~~~
-
-{% include copy-clipboard.html %}
-~~~ shell
-# Remove the local volume:
-$ sudo docker volume rm cockroachdb-1
-~~~
-
-{% include copy-clipboard.html %}
-~~~ shell
-# Identify the names of the secrets:
-$ sudo docker secret ls
-~~~
-
-~~~
-ca-crt
-cockroachdb-1-crt
-cockroachdb-1-key
-~~~
-
-{% include copy-clipboard.html %}
-~~~ shell
-# Remove the secrets:
-$ sudo docker secret rm ca-crt cockroachdb-1-crt cockroachdb-1-key
-~~~
-
-## See Also
-
-{% include {{ page.version.version }}/prod-deployment/prod-see-also.md %}
diff --git a/src/current/v2.0/orchestrate-cockroachdb-with-kubernetes-insecure.md b/src/current/v2.0/orchestrate-cockroachdb-with-kubernetes-insecure.md
deleted file mode 100644
index fef391f8175..00000000000
--- a/src/current/v2.0/orchestrate-cockroachdb-with-kubernetes-insecure.md
+++ /dev/null
@@ -1,136 +0,0 @@
----
-title: Orchestrate CockroachDB with Kubernetes (Insecure)
-summary: How to orchestrate the deployment, management, and monitoring of an insecure 3-node CockroachDB cluster with Kubernetes.
-toc: true -canonical: /stable/deploy-cockroachdb-with-kubernetes-insecure ---- - - - -This page shows you how to orchestrate the deployment, management, and monitoring of an insecure 3-node CockroachDB cluster in a single [Kubernetes](http://kubernetes.io/) cluster, using the [StatefulSet](http://kubernetes.io/docs/concepts/abstractions/controllers/statefulsets/) feature. - -To deploy across multiple Kubernetes clusters in different geographic regions instead, see [Kubernetes Multi-Cluster Deployment](orchestrate-cockroachdb-with-kubernetes-multi-cluster.html). Also, for details about potential performance bottlenecks to be aware of when running CockroachDB in Kubernetes and guidance on how to optimize your deployment for better performance, see [CockroachDB Performance on Kubernetes](kubernetes-performance.html). - -{{site.data.alerts.callout_danger}}If you plan to use CockroachDB in production, we strongly recommend using a secure cluster instead. Select Secure above for instructions.{{site.data.alerts.end}} - -## Before you begin - -Before getting started, it's helpful to review some Kubernetes-specific terminology and current limitations. - -### Kubernetes terminology - -Feature | Description ---------|------------ -instance | A physical or virtual machine. In this tutorial, you'll create GCE or AWS instances and join them into a single Kubernetes cluster from your local workstation. -[pod](http://kubernetes.io/docs/user-guide/pods/) | A pod is a group of one or more Docker containers. In this tutorial, each pod will run on a separate instance and include one Docker container running a single CockroachDB node. You'll start with 3 pods and grow to 4. -[StatefulSet](http://kubernetes.io/docs/concepts/abstractions/controllers/statefulsets/) | A StatefulSet is a group of pods treated as stateful units, where each pod has distinguishable network identity and always binds back to the same persistent storage on restart. StatefulSets are considered stable as of Kubernetes version 1.9 after reaching beta in version 1.5. -[persistent volume](http://kubernetes.io/docs/user-guide/persistent-volumes/) | A persistent volume is a piece of networked storage (Persistent Disk on GCE, Elastic Block Store on AWS) mounted into a pod. The lifetime of a persistent volume is decoupled from the lifetime of the pod that's using it, ensuring that each CockroachDB node binds back to the same storage on restart.

      This tutorial assumes that dynamic volume provisioning is available. When that is not the case, [persistent volume claims](http://kubernetes.io/docs/user-guide/persistent-volumes/#persistentvolumeclaims) need to be created manually. - -### Limitations - -{% include {{ page.version.version }}/orchestration/kubernetes-limitations.md %} - -## Step 1. Start Kubernetes - -{% include {{ page.version.version }}/orchestration/start-kubernetes.md %} - -## Step 2. Start CockroachDB nodes - -{% include {{ page.version.version }}/orchestration/start-cluster.md %} - -## Step 3. Initialize the cluster - -{% include {{ page.version.version }}/orchestration/initialize-cluster-insecure.md %} - -## Step 4. Use the built-in SQL client - -{% include {{ page.version.version }}/orchestration/test-cluster-insecure.md %} - -## Step 5. Access the Admin UI - -{% include {{ page.version.version }}/orchestration/monitor-cluster.md %} - -## Step 6. Simulate node failure - -{% include {{ page.version.version }}/orchestration/kubernetes-simulate-failure.md %} - -## Step 7. Set up monitoring and alerting - -{% include {{ page.version.version }}/orchestration/kubernetes-prometheus-alertmanager.md %} - -## Step 8. Maintain the cluster - -### Scale the cluster - -{% include {{ page.version.version }}/orchestration/kubernetes-scale-cluster.md %} - -3. Verify that a fourth pod was added successfully: - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl get pods - ~~~ - - ~~~ - NAME READY STATUS RESTARTS AGE - alertmanager-cockroachdb-0 2/2 Running 0 2m - alertmanager-cockroachdb-1 2/2 Running 0 2m - alertmanager-cockroachdb-2 2/2 Running 0 2m - cockroachdb-0 1/1 Running 0 9m - cockroachdb-1 1/1 Running 0 9m - cockroachdb-2 1/1 Running 0 7m - cockroachdb-3 0/1 Pending 0 5s - prometheus-cockroachdb-0 3/3 Running 1 5m - prometheus-operator-85dd478dbb-66lvb 1/1 Running 0 6m - ~~~ - -### Upgrade the cluster - -{% include {{ page.version.version }}/orchestration/kubernetes-upgrade-cluster.md %} - -### Stop the cluster - -To shut down the CockroachDB cluster: - -1. Delete all of the resources you created, including the logs and remote persistent volumes: - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl delete pods,statefulsets,services,persistentvolumeclaims,persistentvolumes,poddisruptionbudget,jobs,rolebinding,clusterrolebinding,role,clusterrole,serviceaccount,alertmanager,prometheus,prometheusrule,serviceMonitor -l app=cockroachdb - ~~~ - - ~~~ - pod "cockroachdb-0" deleted - pod "cockroachdb-1" deleted - pod "cockroachdb-2" deleted - pod "cockroachdb-3" deleted - service "alertmanager-cockroachdb" deleted - service "cockroachdb" deleted - service "cockroachdb-public" deleted - persistentvolumeclaim "datadir-cockroachdb-0" deleted - persistentvolumeclaim "datadir-cockroachdb-1" deleted - persistentvolumeclaim "datadir-cockroachdb-2" deleted - persistentvolumeclaim "datadir-cockroachdb-3" deleted - poddisruptionbudget "cockroachdb-budget" deleted - job "cluster-init" deleted - clusterrolebinding "prometheus" deleted - clusterrole "prometheus" deleted - serviceaccount "prometheus" deleted - alertmanager "cockroachdb" deleted - prometheus "cockroachdb" deleted - prometheusrule "prometheus-cockroachdb-rules" deleted - servicemonitor "cockroachdb" deleted - ~~~ - -2. 
Stop Kubernetes: - -{% include {{ page.version.version }}/orchestration/stop-kubernetes.md %} - -## See also - -- [Kubernetes Multi-Cluster Deployment](orchestrate-cockroachdb-with-kubernetes-multi-cluster.html) -- [Kubernetes Performance Guide](kubernetes-performance.html) -{% include {{ page.version.version }}/prod-deployment/prod-see-also.md %} diff --git a/src/current/v2.0/orchestrate-cockroachdb-with-kubernetes-multi-cluster.md b/src/current/v2.0/orchestrate-cockroachdb-with-kubernetes-multi-cluster.md deleted file mode 100644 index 38d78ed6708..00000000000 --- a/src/current/v2.0/orchestrate-cockroachdb-with-kubernetes-multi-cluster.md +++ /dev/null @@ -1,582 +0,0 @@ ---- -title: Orchestrate CockroachDB Across Multiple Kubernetes Clusters -summary: Orchestrate the deployment, management, and monitoring of CockroachDB across multiple Kubernetes clusters in different regions. -toc: true ---- - -This page shows you how to orchestrate a secure CockroachDB deployment across three [Kubernetes](http://kubernetes.io/) clusters, each in a different geographic region, using the [StatefulSet](http://kubernetes.io/docs/concepts/abstractions/controllers/statefulsets/) feature to manage the containers within each cluster and linking them together via DNS. - -To deploy in a single Kubernetes cluster instead, see [Kubernetes Single-Cluster Deployment](orchestrate-cockroachdb-with-kubernetes.html). Also, for details about potential performance bottlenecks to be aware of when running CockroachDB in Kubernetes and guidance on how to optimize your deployment for better performance, see [CockroachDB Performance on Kubernetes](kubernetes-performance.html). - -## Before you begin - -Before getting started, it's helpful to review some Kubernetes-specific terminology and current limitations. - -### Kubernetes terminology - -Feature | Description ---------|------------ -instance | A physical or virtual machine. In this tutorial, you'll run instances as part of three independent Kubernetes clusters, each in a different region. -[pod](http://kubernetes.io/docs/user-guide/pods/) | A pod is a group of one or more Docker containers. In this tutorial, each pod will run on a separate instance and include one Docker container running a single CockroachDB node. You'll start with 3 pods in each region and grow to 4. -[StatefulSet](http://kubernetes.io/docs/concepts/abstractions/controllers/statefulsets/) | A StatefulSet is a group of pods treated as stateful units, where each pod has distinguishable network identity and always binds back to the same persistent storage on restart. StatefulSets are considered stable as of Kubernetes version 1.9 after reaching beta in version 1.5. -[persistent volume](http://kubernetes.io/docs/user-guide/persistent-volumes/) | A persistent volume is a piece of networked storage (Persistent Disk on GCE, Elastic Block Store on AWS) mounted into a pod. The lifetime of a persistent volume is decoupled from the lifetime of the pod that's using it, ensuring that each CockroachDB node binds back to the same storage on restart.

This tutorial assumes that dynamic volume provisioning is available. When that is not the case, [persistent volume claims](http://kubernetes.io/docs/user-guide/persistent-volumes/#persistentvolumeclaims) need to be created manually.
-[RBAC](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) | RBAC, or Role-Based Access Control, is the system Kubernetes uses to manage permissions within the cluster. In order to take an action (e.g., `get` or `create`) on an API resource (e.g., a `pod`), the client must have a `Role` that allows it to do so.
-[namespace](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/) | A namespace provides a scope for resources and names within a Kubernetes cluster. Names of resources need to be unique within a namespace, but not across namespaces. Most Kubernetes client commands will use the `default` namespace by default, but can operate on resources in other namespaces as well if told to do so.
-[kubectl](https://kubernetes.io/docs/reference/kubectl/overview/) | `kubectl` is the command-line interface for running commands against Kubernetes clusters.
-[kubectl context](https://kubernetes.io/docs/reference/kubectl/cheatsheet/#kubectl-context-and-configuration) | A `kubectl` "context" specifies a Kubernetes cluster to connect to and authentication for doing so. You can set a context as the default using the `kubectl config use-context <context-name>` command such that all future `kubectl` commands will talk to that cluster, or you can specify the `--context=<context-name>` flag on almost any `kubectl` command to tell it which cluster you want to run the command against. We will make heavy use of the `--context` flag in these instructions in order to run commands against the different regions' Kubernetes clusters.
-
-### UX differences from running in a single cluster
-
-These instructions create a StatefulSet that runs CockroachDB in each of the Kubernetes clusters you provide to the configuration scripts. These StatefulSets can be scaled independently of each other by running `kubectl` commands against the appropriate cluster. These steps will also point each Kubernetes cluster's DNS server at the other clusters' DNS servers so that DNS lookups for certain zone-scoped suffixes (e.g., "*.us-west1-a.svc.cluster.local") can be deferred to the appropriate cluster's DNS server. However, in order to make this work, we create the StatefulSets in namespaces named after the zone in which the cluster is running. This means that in order to run a command against one of the pods, you have to run, e.g., `kubectl logs cockroachdb-0 --namespace=us-west1-a` instead of just `kubectl logs cockroachdb-0`. Alternatively, you can [configure your `kubectl` context to default to using that namespace for commands run against that cluster](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/#setting-the-namespace-preference).
-
-Note that the CockroachDB pods being in a non-default namespace means that if we didn't do anything about it then any client applications wanting to talk to CockroachDB from the default namespace would need to talk to a zone-scoped service name such as "cockroachdb-public.us-west1-a" rather than just the normal "cockroachdb-public" that they would use in a single-cluster setting.
However, the setup script used by these instructions sets up an additional [`ExternalName` service](https://kubernetes.io/docs/concepts/services-networking/service/#externalname) in the default namespace such that the clients in the default namespace can simply talk to the "cockroachdb-public" address.
-
-Finally, if you haven't often worked with multiple Kubernetes clusters before, you may find yourself forgetting to think about which cluster you want to run a given command against, and thus getting confusing results to your commands. Remember that you will either have to run `kubectl config use-context <context-name>` frequently to switch contexts between commands or you will have to append `--context=<context-name>` on most commands you run to ensure they are run on the correct cluster.
-
-### Limitations
-
-#### Kubernetes version
-
-Kubernetes 1.18 or higher is required.
-
-#### Exposing DNS servers
-
-In the approach documented here, the DNS servers from each Kubernetes cluster are hooked together by exposing them via a load balanced IP address that is visible to the public Internet. This is because [Google Cloud Platform's Internal Load Balancers do not currently support clients in one region using a load balancer in another region](https://cloud.google.com/load-balancing/docs/internal/#deploying_internal_load_balancing_with_clients_across_vpn_or_interconnect).
-
-None of the services in your Kubernetes cluster will be accessible publicly, but their names could leak out to a motivated attacker. If this is unacceptable, please let us know and we can demonstrate other options. [Your voice could also help convince Google to allow clients from one region to use an Internal Load Balancer in another](https://issuetracker.google.com/issues/111021512), eliminating the problem.
-
-## Step 1. Start Kubernetes clusters
-
-Our multi-region deployment approach relies on pod IP addresses being routable across three distinct Kubernetes clusters and regions. The hosted Google Kubernetes Engine (GKE) service satisfies this requirement, so that is the environment featured here. If you want to run on another cloud or on-premises, use this [basic network test](https://github.com/cockroachdb/cockroach/tree/master/cloud/kubernetes/multiregion#pod-to-pod-connectivity) to see if it will work.
-
-1. Complete the **Before You Begin** steps described in the [Google Kubernetes Engine Quickstart](https://cloud.google.com/kubernetes-engine/docs/quickstart) documentation.
-
-    This includes installing `gcloud`, which is used to create and delete Kubernetes Engine clusters, and `kubectl`, which is the command-line tool used to manage Kubernetes from your workstation.
-
-    {{site.data.alerts.callout_success}}The documentation offers the choice of using Google's Cloud Shell product or using a local shell on your machine. Choose to use a local shell if you want to be able to view the CockroachDB Admin UI using the steps in this guide.{{site.data.alerts.end}}
-
-2. From your local workstation, start the first Kubernetes cluster, specifying the [zone](https://cloud.google.com/compute/docs/regions-zones/) it should run in:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ gcloud container clusters create cockroachdb1 --zone=<gce-zone>
-    ~~~
-
-    ~~~
-    Creating cluster cockroachdb1...done.
-    ~~~
-
-    This creates GKE instances in the zone specified and joins them into a single Kubernetes cluster named `cockroachdb1`.
-
-    The process can take a few minutes, so do not move on to the next step until you see a `Creating cluster cockroachdb1...done` message and details about your cluster.
-
-3. Start the second Kubernetes cluster, specifying the [zone](https://cloud.google.com/compute/docs/regions-zones/) it should run in:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ gcloud container clusters create cockroachdb2 --zone=<gce-zone>
-    ~~~
-
-    ~~~
-    Creating cluster cockroachdb2...done.
-    ~~~
-
-4. Start the third Kubernetes cluster, specifying the [zone](https://cloud.google.com/compute/docs/regions-zones/) it should run in:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ gcloud container clusters create cockroachdb3 --zone=<gce-zone>
-    ~~~
-
-    ~~~
-    Creating cluster cockroachdb3...done.
-    ~~~
-
-5. Get the `kubectl` "contexts" for your clusters:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ kubectl config get-contexts
-    ~~~
-
-    ~~~
-    CURRENT   NAME                                              CLUSTER                                           AUTHINFO                                          NAMESPACE
-    *         gke_cockroach-shared_us-east1-b_cockroachdb1      gke_cockroach-shared_us-east1-b_cockroachdb1      gke_cockroach-shared_us-east1-b_cockroachdb1
-              gke_cockroach-shared_us-west1-a_cockroachdb2      gke_cockroach-shared_us-west1-a_cockroachdb2      gke_cockroach-shared_us-west1-a_cockroachdb2
-              gke_cockroach-shared_us-central1-a_cockroachdb3   gke_cockroach-shared_us-central1-a_cockroachdb3   gke_cockroach-shared_us-central1-a_cockroachdb3
-    ~~~
-
-    {{site.data.alerts.callout_info}}
-    All of the `kubectl` commands in this tutorial use the `--context` flag to tell `kubectl` which Kubernetes cluster to talk to. Each Kubernetes cluster operates independently; you have to tell each of them what to do separately, and when you want to get the status of something in a particular cluster, you have to make it clear to `kubectl` which cluster you're interested in.
-
-    The context with `*` in the `CURRENT` column indicates the cluster that `kubectl` will talk to by default if you do not specify the `--context` flag.
-    {{site.data.alerts.end}}
-
-6. Get the email address associated with your Google Cloud account:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ gcloud info | grep Account
-    ~~~
-
-    ~~~
-    Account: [your.google.cloud.email@example.org]
-    ~~~
-
-    {{site.data.alerts.callout_danger}}
-    This command returns your email address in all lowercase. However, in the next step, you must enter the address using the accurate capitalization. For example, if your address is YourName@example.com, you must use YourName@example.com and not yourname@example.com.
-    {{site.data.alerts.end}}
-
-7. For each Kubernetes cluster, [create the RBAC roles](https://cloud.google.com/kubernetes-engine/docs/how-to/role-based-access-control#prerequisites_for_using_role-based_access_control) CockroachDB needs for running on GKE, using the email address and relevant "context" name from the previous steps:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ kubectl create clusterrolebinding $USER-cluster-admin-binding --clusterrole=cluster-admin --user=<your.google.cloud.email@example.org> --context=<cluster-context-1>
-    ~~~
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ kubectl create clusterrolebinding $USER-cluster-admin-binding --clusterrole=cluster-admin --user=<your.google.cloud.email@example.org> --context=<cluster-context-2>
-    ~~~
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ kubectl create clusterrolebinding $USER-cluster-admin-binding --clusterrole=cluster-admin --user=<your.google.cloud.email@example.org> --context=<cluster-context-3>
-    ~~~
-
-## Step 2. Start CockroachDB
-
-1. Create a directory and download the required script and configuration files into it:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ mkdir multiregion
-    ~~~
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ cd multiregion
-    ~~~
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ curl -OOOOOOOOO \
-    https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/multiregion/{README.md,client-secure.yaml,cluster-init-secure.yaml,cockroachdb-statefulset-secure.yaml,dns-lb.yaml,example-app-secure.yaml,external-name-svc.yaml,setup.py,teardown.py}
-    ~~~
-
-2. At the top of the `setup.py` script, fill in the `contexts` map with the zones of your clusters and their "context" names, for example:
-
-    ~~~
-    contexts = {
-        'us-east1-b': 'gke_cockroach-shared_us-east1-b_cockroachdb1',
-        'us-west1-a': 'gke_cockroach-shared_us-west1-a_cockroachdb2',
-        'us-central1-a': 'gke_cockroach-shared_us-central1-a_cockroachdb3',
-    }
-    ~~~
-
-    You retrieved the `kubectl` "contexts" in an earlier step. To get them again, run:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ kubectl config get-contexts
-    ~~~
-
-3. In the `setup.py` script, fill in the `regions` map with the zones and corresponding regions of your clusters, for example:
-
-    ~~~
-    regions = {
-        'us-east1-b': 'us-east1',
-        'us-west1-a': 'us-west1',
-        'us-central1-a': 'us-central1',
-    }
-    ~~~
-
-    Setting regions is optional, but recommended, because it improves CockroachDB's ability to diversify data placement if you use more than one zone in the same region. If you aren't specifying regions, just leave the map empty.
-
-4. If you haven't already, [install CockroachDB locally and add it to your `PATH`](install-cockroachdb.html). The `cockroach` binary will be used to generate certificates.
-
-    If the `cockroach` binary is not on your `PATH`, in the `setup.py` script, set the `cockroach_path` variable to the path to the binary.
-
-5. Optionally, to optimize your deployment for better performance, review [CockroachDB Performance on Kubernetes](kubernetes-performance.html) and make the desired modifications to the `cockroachdb-statefulset-secure.yaml` file.
-
-6. Run the `setup.py` script:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ python setup.py
-    ~~~
-
-    As the script creates various resources and creates and initializes the CockroachDB cluster, you'll see a lot of output, eventually ending with `job "cluster-init-secure" created`. A quick way to sanity-check what the script created is sketched below.
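-
-    For example, since the script creates a namespace named after each cluster's zone, listing the namespaces in any one of the clusters should show the corresponding zone-named entry alongside the defaults. A minimal sketch, where `<cluster-context>` is one of the context names you filled into `contexts`:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    # List namespaces in one cluster; expect to see a namespace named
-    # after that cluster's zone (e.g., us-east1-b):
-    $ kubectl get namespaces --context=<cluster-context>
-    ~~~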
-
-7. Confirm that the CockroachDB pods in each cluster say `1/1` in the `READY` column, indicating that they've successfully joined the cluster:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ kubectl get pods --selector app=cockroachdb --all-namespaces --context=<cluster-context-1>
-    ~~~
-
-    ~~~
-    NAMESPACE    NAME            READY     STATUS    RESTARTS   AGE
-    us-east1-b   cockroachdb-0   1/1       Running   0          14m
-    us-east1-b   cockroachdb-1   1/1       Running   0          14m
-    us-east1-b   cockroachdb-2   1/1       Running   0          14m
-    ~~~
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ kubectl get pods --selector app=cockroachdb --all-namespaces --context=<cluster-context-2>
-    ~~~
-
-    ~~~
-    NAMESPACE       NAME            READY     STATUS    RESTARTS   AGE
-    us-central1-a   cockroachdb-0   1/1       Running   0          14m
-    us-central1-a   cockroachdb-1   1/1       Running   0          14m
-    us-central1-a   cockroachdb-2   1/1       Running   0          14m
-    ~~~
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ kubectl get pods --selector app=cockroachdb --all-namespaces --context=<cluster-context-3>
-    ~~~
-
-    ~~~
-    NAMESPACE    NAME            READY     STATUS    RESTARTS   AGE
-    us-west1-a   cockroachdb-0   1/1       Running   0          14m
-    us-west1-a   cockroachdb-1   1/1       Running   0          14m
-    us-west1-a   cockroachdb-2   1/1       Running   0          14m
-    ~~~
-
-    If only one Kubernetes cluster's pods are marked as `READY`, you likely also need to configure a network firewall rule that will allow the pods in the different clusters to talk to each other. You can run the following command to create a firewall rule allowing traffic on port 26257 (the port used by CockroachDB for inter-node traffic) within your private GCE network. It will not allow any traffic in from outside your private network:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ gcloud compute firewall-rules create allow-cockroach-internal --allow=tcp:26257 --source-ranges=10.0.0.0/8,172.16.0.0/12,192.168.0.0/16
-    ~~~
-
-    ~~~
-    Creating firewall...done.
-    NAME                      NETWORK  DIRECTION  PRIORITY  ALLOW      DENY
-    allow-cockroach-internal  default  INGRESS    1000      tcp:26257
-    ~~~
-
-{{site.data.alerts.callout_success}}
-In each Kubernetes cluster, the StatefulSet configuration sets all CockroachDB nodes to write to `stderr`, so if you ever need access to a pod/node's logs to troubleshoot, use `kubectl logs <pod-name> --namespace=<pod-namespace> --context=<cluster-context>` rather than checking the log on the persistent volume.
-{{site.data.alerts.end}}
-
-## Step 3. Use the built-in SQL client
-
-1. Use the `client-secure.yaml` file to launch a pod and keep it running indefinitely, specifying the context of the Kubernetes cluster to run it in:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ kubectl create -f client-secure.yaml --context=<cluster-context>
-    ~~~
-
-    ~~~
-    pod "cockroachdb-client-secure" created
-    ~~~
-
-    The pod uses the `root` client certificate created earlier by the `setup.py` script. Note that this will work from any of the three Kubernetes clusters as long as you use the correct namespace and context combination.
-
-2. Get a shell into the pod and start the CockroachDB [built-in SQL client](use-the-built-in-sql-client.html), again specifying the namespace and context of the Kubernetes cluster where the pod is running:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ kubectl exec -it cockroachdb-client-secure --context=<cluster-context> -- ./cockroach sql --certs-dir=/cockroach-certs --host=cockroachdb-public
-    ~~~
-
-    ~~~
-    # Welcome to the cockroach SQL interface.
-    # All statements must be terminated by a semicolon.
-    # To exit: CTRL + D.
-    #
-    # Server version: CockroachDB CCL v2.0.5 (x86_64-unknown-linux-gnu, built 2018/08/13 17:59:42, go1.10) (same version as client)
-    # Cluster ID: 99346e82-9817-4f62-b79b-fdd5d57f8bda
-    #
-    # Enter \? for a brief introduction.
-    #
-    warning: no current database set. Use SET database = <dbname> to change, CREATE DATABASE to make a new database.
-    root@cockroachdb-public:26257/>
-    ~~~
-
-3. Run some basic [CockroachDB SQL statements](learn-cockroachdb-sql.html):
-
-    {% include copy-clipboard.html %}
-    ~~~ sql
-    > CREATE DATABASE bank;
-    ~~~
-
-    {% include copy-clipboard.html %}
-    ~~~ sql
-    > CREATE TABLE bank.accounts (id INT PRIMARY KEY, balance DECIMAL);
-    ~~~
-
-    {% include copy-clipboard.html %}
-    ~~~ sql
-    > INSERT INTO bank.accounts VALUES (1, 1000.50);
-    ~~~
-
-    {% include copy-clipboard.html %}
-    ~~~ sql
-    > SELECT * FROM bank.accounts;
-    ~~~
-
-    ~~~
-    +----+---------+
-    | id | balance |
-    +----+---------+
-    |  1 |  1000.5 |
-    +----+---------+
-    (1 row)
-    ~~~
-
-4. Exit the SQL shell and pod:
-
-    {% include copy-clipboard.html %}
-    ~~~ sql
-    > \q
-    ~~~
-
-    The pod will continue running indefinitely, so any time you need to reopen the built-in SQL client or run any other [`cockroach` client commands](cockroach-commands.html), such as `cockroach node` or `cockroach zone`, repeat step 2 using the appropriate command.
-
-    If you'd prefer to delete the pod and recreate it when needed, run:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ kubectl delete pod cockroachdb-client-secure --context=<cluster-context>
-    ~~~
-
-## Step 4. Access the Web UI
-
-To access the cluster's [Web UI](admin-ui-overview.html):
-
-1. Port-forward from your local machine to a pod in one of your Kubernetes clusters:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ kubectl port-forward cockroachdb-0 8080 --namespace=<cluster-namespace> --context=<cluster-context>
-    ~~~
-
-    ~~~
-    Forwarding from 127.0.0.1:8080 -> 8080
-    ~~~
-
-    {{site.data.alerts.callout_info}}
-    The `port-forward` command must be run on the same machine as the web browser in which you want to view the Web UI. If you have been running these commands from a cloud instance or other non-local shell, you will not be able to view the UI without configuring `kubectl` locally and running the above `port-forward` command on your local machine.
-    {{site.data.alerts.end}}
-
-2. Go to https://localhost:8080.
-
-3. In the UI, check the **Node List** to verify that all nodes are running, and then click the **Databases** tab on the left to verify that `bank` is listed.
-
-## Step 5. Simulate datacenter failure
-
-One of the major benefits of running a multi-region cluster is that an entire datacenter or region can go down without affecting the availability of the CockroachDB cluster as a whole.
-
-To see this in action:
-
-1. Scale down one of the StatefulSets to zero pods, specifying the namespace and context of the Kubernetes cluster where it's running:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ kubectl scale statefulset cockroachdb --replicas=0 --namespace=<cluster-namespace> --context=<cluster-context>
-    ~~~
-
-    ~~~
-    statefulset "cockroachdb" scaled
-    ~~~
-
-2. In the Admin UI, the **Cluster Overview** will soon show the three nodes from that region as **Suspect**. If you wait for 5 minutes or more, they will be listed as **Dead**. Note that even though there are three dead nodes, the other nodes are all healthy, and any clients using the database in the other regions will continue to work just fine. You can also watch node status from the command line, as sketched below.
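-
-    A minimal sketch using the client pod from Step 3, assuming it is still running; the scaled-down nodes will stop updating their status in the `cockroach node status` output:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    # List the cluster's nodes; the nodes from the scaled-down region
-    # will stop reporting fresh status updates:
-    $ kubectl exec -it cockroachdb-client-secure --context=<cluster-context> -- ./cockroach node status --certs-dir=/cockroach-certs --host=cockroachdb-public
-    ~~~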
-
-3. When you're done verifying that the cluster still fully functions with one of the regions down, you can bring the region back up by running:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ kubectl scale statefulset cockroachdb --replicas=3 --namespace=<cluster-namespace> --context=<cluster-context>
-    ~~~
-
-    ~~~
-    statefulset "cockroachdb" scaled
-    ~~~
-
-## Step 6. Maintain the cluster
-
-### Scale the cluster
-
-Each of your Kubernetes clusters contains 3 nodes that pods can run on. To ensure that you do not have two pods on the same node (as recommended in our [production best practices](recommended-production-settings.html)), you need to add a new worker node and then edit your StatefulSet configuration to add another pod.
-
-1. [Resize your cluster](https://cloud.google.com/kubernetes-engine/docs/how-to/resizing-a-cluster).
-
-2. Use the `kubectl scale` command to add a pod to the StatefulSet in the Kubernetes cluster where you want to add a CockroachDB node:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ kubectl scale statefulset cockroachdb --replicas=4 --namespace=<cluster-namespace> --context=<cluster-context>
-    ~~~
-
-    ~~~
-    statefulset "cockroachdb" scaled
-    ~~~
-
-3. Verify that a fourth pod was added successfully:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ kubectl get pods --namespace=<cluster-namespace> --context=<cluster-context>
-    ~~~
-
-    ~~~
-    NAME                        READY     STATUS    RESTARTS   AGE
-    cockroachdb-0               1/1       Running   0          1h
-    cockroachdb-1               1/1       Running   0          1h
-    cockroachdb-2               1/1       Running   0          7m
-    cockroachdb-3               1/1       Running   0          44s
-    cockroachdb-client-secure   1/1       Running   0          26m
-    ~~~
-
-### Upgrade the cluster
-
-As new versions of CockroachDB are released, it's strongly recommended to upgrade to newer versions in order to pick up bug fixes, performance improvements, and new features. The [general CockroachDB upgrade documentation](upgrade-cockroach-version.html) provides best practices for how to prepare for and execute upgrades of CockroachDB clusters, but the mechanism of actually stopping and restarting processes in Kubernetes is somewhat special.
-
-Kubernetes knows how to carry out a safe rolling upgrade process of the CockroachDB nodes. When you tell it to change the Docker image used in the CockroachDB StatefulSet, Kubernetes will go one-by-one, stopping a node, restarting it with the new image, and waiting for it to be ready to receive client requests before moving on to the next one. For more information, see [the Kubernetes documentation](https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#updating-statefulsets).
-
-1. Decide how the upgrade will be finalized.
-
-    {{site.data.alerts.callout_info}}This step is relevant only when upgrading from v2.0.x to v2.1. For upgrades within the v2.1.x series, skip this step.{{site.data.alerts.end}}
-
-    By default, after all nodes are running the new version, the upgrade process will be **auto-finalized**. This will enable certain performance improvements and bug fixes introduced in v2.1. After finalization, however, it will no longer be possible to perform a downgrade to v2.0. In the event of a catastrophic failure or corruption, the only option will be to start a new cluster using the old binary and then restore from one of the backups created prior to performing the upgrade.
-
-    We recommend disabling auto-finalization so you can monitor the stability and performance of the upgraded cluster before finalizing the upgrade:
-
-    1. Get a shell into the pod with the `cockroach` binary created earlier and start the CockroachDB [built-in SQL client](use-the-built-in-sql-client.html):
-
-        {% include copy-clipboard.html %}
-        ~~~ shell
-        $ kubectl exec -it cockroachdb-client-secure --context=<cluster-context> -- ./cockroach sql --certs-dir=/cockroach-certs --host=cockroachdb-public
-        ~~~
-
-    2. Set the `cluster.preserve_downgrade_option` [cluster setting](cluster-settings.html):
-
-        {% include copy-clipboard.html %}
-        ~~~ sql
-        > SET CLUSTER SETTING cluster.preserve_downgrade_option = '2.0';
-        ~~~
-
-2. For each Kubernetes cluster, kick off the upgrade process by changing the desired Docker image. To do so, pick the version that you want to upgrade to, then run the following command, replacing "VERSION" with your desired new version and specifying the relevant namespace and "context" name for the Kubernetes cluster:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ kubectl patch statefulset cockroachdb --namespace=<cluster-namespace-1> --context=<cluster-context-1> --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/image", "value":"cockroachdb/cockroach:VERSION"}]'
-    ~~~
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ kubectl patch statefulset cockroachdb --namespace=<cluster-namespace-2> --context=<cluster-context-2> --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/image", "value":"cockroachdb/cockroach:VERSION"}]'
-    ~~~
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ kubectl patch statefulset cockroachdb --namespace=<cluster-namespace-3> --context=<cluster-context-3> --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/image", "value":"cockroachdb/cockroach:VERSION"}]'
-    ~~~
-
-3. If you then check the status of the pods in each Kubernetes cluster, you should see one of them being restarted:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ kubectl get pods --selector app=cockroachdb --all-namespaces --context=<cluster-context-1>
-    ~~~
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ kubectl get pods --selector app=cockroachdb --all-namespaces --context=<cluster-context-2>
-    ~~~
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ kubectl get pods --selector app=cockroachdb --all-namespaces --context=<cluster-context-3>
-    ~~~
-
-    This will continue until all of the pods have restarted and are running the new image.
-
-4. Finish the upgrade.
-
-    {{site.data.alerts.callout_info}}This step is relevant only when upgrading from v2.0.x to v2.1. For upgrades within the v2.1.x series, skip this step.{{site.data.alerts.end}}
-
-    If you disabled auto-finalization in step 1 above, monitor the stability and performance of your cluster for as long as you require to feel comfortable with the upgrade (generally at least a day). If during this time you decide to roll back the upgrade, repeat the rolling restart procedure with the old binary.
-
-    Once you are satisfied with the new version, re-enable auto-finalization:
-
-    1. Get a shell into the pod with the `cockroach` binary created earlier and start the CockroachDB [built-in SQL client](use-the-built-in-sql-client.html):
-
-        {% include copy-clipboard.html %}
-        ~~~ shell
-        $ kubectl exec -it cockroachdb-client-secure --context=<cluster-context> -- ./cockroach sql --certs-dir=/cockroach-certs --host=cockroachdb-public
-        ~~~
-
-    2. Re-enable auto-finalization:
-
-        {% include copy-clipboard.html %}
-        ~~~ sql
-        > RESET CLUSTER SETTING cluster.preserve_downgrade_option;
-        ~~~
-
-### Stop the cluster
-
-1. To delete all of the resources created in your clusters, copy the `contexts` map from `setup.py` into `teardown.py`, and then run `teardown.py`:
-
-    ~~~ shell
-    $ python teardown.py
-    ~~~
-
-    ~~~
-    namespace "us-east1-b" deleted
-    service "kube-dns-lb" deleted
-    configmap "kube-dns" deleted
-    pod "kube-dns-5dcfcbf5fb-l4xwt" deleted
-    pod "kube-dns-5dcfcbf5fb-tddp2" deleted
-    namespace "us-west1-a" deleted
-    service "kube-dns-lb" deleted
-    configmap "kube-dns" deleted
-    pod "kube-dns-5dcfcbf5fb-8csc9" deleted
-    pod "kube-dns-5dcfcbf5fb-zlzn7" deleted
-    namespace "us-central1-a" deleted
-    service "kube-dns-lb" deleted
-    configmap "kube-dns" deleted
-    pod "kube-dns-5dcfcbf5fb-6ngmw" deleted
-    pod "kube-dns-5dcfcbf5fb-lcfxd" deleted
-    ~~~
-
-2. Stop each Kubernetes cluster:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ gcloud container clusters delete cockroachdb1 --zone=<gce-zone>
-    ~~~
-
-    ~~~
-    Deleting cluster cockroachdb1...done.
-    ~~~
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ gcloud container clusters delete cockroachdb2 --zone=<gce-zone>
-    ~~~
-
-    ~~~
-    Deleting cluster cockroachdb2...done.
-    ~~~
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ gcloud container clusters delete cockroachdb3 --zone=<gce-zone>
-    ~~~
-
-    ~~~
-    Deleting cluster cockroachdb3...done.
-    ~~~
-
-## See also
-
-- [Kubernetes Single-Cluster Deployment](orchestrate-cockroachdb-with-kubernetes.html)
-- [Kubernetes Performance Guide](kubernetes-performance.html)
-{% include {{ page.version.version }}/prod-deployment/prod-see-also.md %}
diff --git a/src/current/v2.0/orchestrate-cockroachdb-with-kubernetes.md b/src/current/v2.0/orchestrate-cockroachdb-with-kubernetes.md
deleted file mode 100644
index 68afc43a6b0..00000000000
--- a/src/current/v2.0/orchestrate-cockroachdb-with-kubernetes.md
+++ /dev/null
@@ -1,490 +0,0 @@
----
-title: Orchestrate CockroachDB with Kubernetes
-summary: How to orchestrate the deployment, management, and monitoring of a secure 3-node CockroachDB cluster with Kubernetes.
-toc: true
-secure: true
-canonical: /stable/deploy-cockroachdb-with-kubernetes
----
-
      - - -
      - -This page shows you how to orchestrate the deployment, management, and monitoring of a secure 3-node CockroachDB cluster in a single [Kubernetes](http://kubernetes.io/) cluster, using the [StatefulSet](http://kubernetes.io/docs/concepts/abstractions/controllers/statefulsets/) feature. - -To deploy across multiple Kubernetes clusters in different geographic regions instead, see [Kubernetes Multi-Cluster Deployment](orchestrate-cockroachdb-with-kubernetes-multi-cluster.html). Also, for details about potential performance bottlenecks to be aware of when running CockroachDB in Kubernetes and guidance on how to optimize your deployment for better performance, see [CockroachDB Performance on Kubernetes](kubernetes-performance.html). - -## Before you begin - -Before getting started, it's helpful to review some Kubernetes-specific terminology and current limitations. - -### Kubernetes terminology - -Feature | Description ---------|------------ -instance | A physical or virtual machine. In this tutorial, you'll create GCE or AWS instances and join them into a single Kubernetes cluster from your local workstation. -[pod](http://kubernetes.io/docs/user-guide/pods/) | A pod is a group of one or more Docker containers. In this tutorial, each pod will run on a separate instance and include one Docker container running a single CockroachDB node. You'll start with 3 pods and grow to 4. -[StatefulSet](http://kubernetes.io/docs/concepts/abstractions/controllers/statefulsets/) | A StatefulSet is a group of pods treated as stateful units, where each pod has distinguishable network identity and always binds back to the same persistent storage on restart. StatefulSets are considered stable as of Kubernetes version 1.9 after reaching beta in version 1.5. -[persistent volume](http://kubernetes.io/docs/user-guide/persistent-volumes/) | A persistent volume is a piece of networked storage (Persistent Disk on GCE, Elastic Block Store on AWS) mounted into a pod. The lifetime of a persistent volume is decoupled from the lifetime of the pod that's using it, ensuring that each CockroachDB node binds back to the same storage on restart.

      This tutorial assumes that dynamic volume provisioning is available. When that is not the case, [persistent volume claims](http://kubernetes.io/docs/user-guide/persistent-volumes/#persistentvolumeclaims) need to be created manually. -[CSR](https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/) | A CSR, or Certificate Signing Request, is a request to have a TLS certificate signed by the Kubernetes cluster's built-in CA. As each pod is created, it issues a CSR for the CockroachDB node running in the pod, which must be manually checked and approved. The same is true for clients as they connect to the cluster. -[RBAC](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) | RBAC, or Role-Based Access Control, is the system Kubernetes uses to manage permissions within the cluster. In order to take an action (e.g., `get` or `create`) on an API resource (e.g., a `pod` or `CSR`), the client must have a `Role` that allows it to do so. This tutorial creates the RBAC resources necessary for CockroachDB to create and access certificates. - -### Limitations - -{% include {{ page.version.version }}/orchestration/kubernetes-limitations.md %} - -## Step 1. Start Kubernetes - -{% include {{ page.version.version }}/orchestration/start-kubernetes.md %} - -## Step 2. Start CockroachDB nodes - -{% include {{ page.version.version }}/orchestration/start-cluster.md %} - -## Step 3. Approve node certificates - -As each pod is created, it issues a Certificate Signing Request, or CSR, to have the node's certificate signed by the Kubernetes CA. You must manually check and approve each node's certificates, at which point the CockroachDB node is started in the pod. - -1. Get the name of the `Pending` CSR for pod 1: - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl get csr - ~~~ - - ~~~ - NAME AGE REQUESTOR CONDITION - default.node.cockroachdb-0 1m system:serviceaccount:default:default Pending - node-csr-0Xmb4UTVAWMEnUeGbW4KX1oL4XV_LADpkwjrPtQjlZ4 4m kubelet Approved,Issued - node-csr-NiN8oDsLhxn0uwLTWa0RWpMUgJYnwcFxB984mwjjYsY 4m kubelet Approved,Issued - node-csr-aU78SxyU69pDK57aj6txnevr7X-8M3XgX9mTK0Hso6o 5m kubelet Approved,Issued - ~~~ - - If you do not see a `Pending` CSR, wait a minute and try again. - -2. Examine the CSR for pod 1: - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl describe csr default.node.cockroachdb-0 - ~~~ - - ~~~ - Name: default.node.cockroachdb-0 - Labels: - Annotations: - CreationTimestamp: Thu, 09 Nov 2017 13:39:37 -0500 - Requesting User: system:serviceaccount:default:default - Status: Pending - Subject: - Common Name: node - Serial Number: - Organization: Cockroach - Subject Alternative Names: - DNS Names: localhost - cockroachdb-0.cockroachdb.default.svc.cluster.local - cockroachdb-public - IP Addresses: 127.0.0.1 - 10.48.1.6 - Events: - ~~~ - -3. If everything looks correct, approve the CSR for pod 1: - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl certificate approve default.node.cockroachdb-0 - ~~~ - - ~~~ - certificatesigningrequest "default.node.cockroachdb-0" approved - ~~~ - -4. Repeat steps 1-3 for the other 2 pods. - -## Step 4. Initialize the cluster - -1. Confirm that three pods are `Running` successfully. 
Note that they will not - be considered `Ready` until after the cluster has been initialized: - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl get pods - ~~~ - - ~~~ - NAME READY STATUS RESTARTS AGE - cockroachdb-0 0/1 Running 0 2m - cockroachdb-1 0/1 Running 0 2m - cockroachdb-2 0/1 Running 0 2m - ~~~ - -2. Confirm that the persistent volumes and corresponding claims were created successfully for all three pods: - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl get persistentvolumes - ~~~ - - ~~~ - NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM REASON AGE - pvc-52f51ecf-8bd5-11e6-a4f4-42010a800002 1Gi RWO Delete Bound default/datadir-cockroachdb-0 26s - pvc-52fd3a39-8bd5-11e6-a4f4-42010a800002 1Gi RWO Delete Bound default/datadir-cockroachdb-1 27s - pvc-5315efda-8bd5-11e6-a4f4-42010a800002 1Gi RWO Delete Bound default/datadir-cockroachdb-2 27s - ~~~ - -3. Use our [`cluster-init-secure.yaml`](https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cluster-init-secure.yaml) file to perform a one-time initialization that joins the nodes into a single cluster: - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl create -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cluster-init-secure.yaml - ~~~ - - ~~~ - job "cluster-init-secure" created - ~~~ - -4. Approve the CSR for the one-off pod from which cluster initialization happens: - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl certificate approve default.client.root - ~~~ - - ~~~ - certificatesigningrequest "default.client.root" approved - ~~~ - -5. Confirm that cluster initialization has completed successfully. The job - should be considered successful and the CockroachDB pods should soon be - considered `Ready`: - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl get job cluster-init-secure - ~~~ - - ~~~ - NAME DESIRED SUCCESSFUL AGE - cluster-init-secure 1 1 2m - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl get pods - ~~~ - - ~~~ - NAME READY STATUS RESTARTS AGE - cockroachdb-0 1/1 Running 0 3m - cockroachdb-1 1/1 Running 0 3m - cockroachdb-2 1/1 Running 0 3m - ~~~ - -{{site.data.alerts.callout_success}} -The StatefulSet configuration sets all CockroachDB nodes to log to `stderr`, so if you ever need access to a pod/node's logs to troubleshoot, use `kubectl logs ` rather than checking the log on the persistent volume. -{{site.data.alerts.end}} - -## Step 5. Use the built-in SQL client - -To use the built-in SQL client, you need to launch a pod that runs indefinitely with the `cockroach` binary inside it, check and approve the CSR for the pod, get a shell into the pod, and then start the built-in SQL client. - -1. From your local workstation, use our [`client-secure.yaml`](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/client-secure.yaml) file to launch a pod and keep it running indefinitely: - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl create -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/client-secure.yaml - ~~~ - - ~~~ - pod "cockroachdb-client-secure" created - ~~~ - - The pod uses the `root` client certificate created earlier to initialize the cluster, so there's no CSR approval required. - -2. 
Get a shell into the pod and start the CockroachDB [built-in SQL client](use-the-built-in-sql-client.html): - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl exec -it cockroachdb-client-secure -- ./cockroach sql --certs-dir=/cockroach-certs --host=cockroachdb-public - ~~~ - - ~~~ - # Welcome to the cockroach SQL interface. - # All statements must be terminated by a semicolon. - # To exit: CTRL + D. - # - # Server version: CockroachDB CCL v1.1.2 (linux amd64, built 2017/11/02 19:32:03, go1.8.3) (same version as client) - # Cluster ID: 3292fe08-939f-4638-b8dd-848074611dba - # - # Enter \? for a brief introduction. - # - root@cockroachdb-public:26257/> - ~~~ - -3. Run some basic [CockroachDB SQL statements](learn-cockroachdb-sql.html): - - {% include copy-clipboard.html %} - ~~~ sql - > CREATE DATABASE bank; - ~~~ - - {% include copy-clipboard.html %} - ~~~ sql - > CREATE TABLE bank.accounts (id INT PRIMARY KEY, balance DECIMAL); - ~~~ - - {% include copy-clipboard.html %} - ~~~ sql - > INSERT INTO bank.accounts VALUES (1, 1000.50); - ~~~ - - {% include copy-clipboard.html %} - ~~~ sql - > SELECT * FROM bank.accounts; - ~~~ - - ~~~ - +----+---------+ - | id | balance | - +----+---------+ - | 1 | 1000.5 | - +----+---------+ - (1 row) - ~~~ - -4. Exit the SQL shell and pod: - - {% include copy-clipboard.html %} - ~~~ sql - > \q - ~~~ - -{{site.data.alerts.callout_success}}This pod will continue running indefinitely, so any time you need to reopen the built-in SQL client or run any other cockroach client commands, such as cockroach node or cockroach zone, repeat step 2 using the appropriate cockroach command.

If you'd prefer to delete the pod and recreate it when needed, run kubectl delete pod cockroachdb-client-secure{{site.data.alerts.end}} - -## Step 6. Access the Admin UI - -{% include {{ page.version.version }}/orchestration/monitor-cluster.md %} - -## Step 7. Simulate node failure - -{% include {{ page.version.version }}/orchestration/kubernetes-simulate-failure.md %} - -## Step 8. Set up monitoring and alerting - -{% include {{ page.version.version }}/orchestration/kubernetes-prometheus-alertmanager.md %} - -## Step 9. Maintain the cluster - -### Scale the cluster - -{% include {{ page.version.version }}/orchestration/kubernetes-scale-cluster.md %} - -3. Get the name of the `Pending` CSR for the new pod: - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl get csr - ~~~ - - ~~~ - NAME AGE REQUESTOR CONDITION - default.client.root 1h system:serviceaccount:default:default Approved,Issued - default.node.cockroachdb-0 1h system:serviceaccount:default:default Approved,Issued - default.node.cockroachdb-1 1h system:serviceaccount:default:default Approved,Issued - default.node.cockroachdb-2 1h system:serviceaccount:default:default Approved,Issued - default.node.cockroachdb-3 2m system:serviceaccount:default:default Pending - node-csr-0Xmb4UTVAWMEnUeGbW4KX1oL4XV_LADpkwjrPtQjlZ4 1h kubelet Approved,Issued - node-csr-NiN8oDsLhxn0uwLTWa0RWpMUgJYnwcFxB984mwjjYsY 1h kubelet Approved,Issued - node-csr-aU78SxyU69pDK57aj6txnevr7X-8M3XgX9mTK0Hso6o 1h kubelet Approved,Issued - ~~~ - - If you do not see a `Pending` CSR, wait a minute and try again. - -4. Examine the CSR for the new pod: - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl describe csr default.node.cockroachdb-3 - ~~~ - - ~~~ - Name: default.node.cockroachdb-3 - Labels: - Annotations: - CreationTimestamp: Thu, 09 Nov 2017 13:39:37 -0500 - Requesting User: system:serviceaccount:default:default - Status: Pending - Subject: - Common Name: node - Serial Number: - Organization: Cockroach - Subject Alternative Names: - DNS Names: localhost - cockroachdb-3.cockroachdb.default.svc.cluster.local - cockroachdb-public - IP Addresses: 127.0.0.1 - 10.48.1.6 - Events: - ~~~ - -5. If everything looks correct, approve the CSR for the new pod: - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl certificate approve default.node.cockroachdb-3 - ~~~ - - ~~~ - certificatesigningrequest "default.node.cockroachdb-3" approved - ~~~ - -6. Verify that the new pod started successfully: - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl get pods - ~~~ - - ~~~ - NAME READY STATUS RESTARTS AGE - cockroachdb-0 1/1 Running 0 51m - cockroachdb-1 1/1 Running 0 47m - cockroachdb-2 1/1 Running 0 3m - cockroachdb-3 1/1 Running 0 1m - cockroachdb-client-secure 1/1 Running 0 15m - ~~~ - -7. Back in the Admin UI, view **Node List** to ensure that the fourth node successfully joined the cluster. - -### Upgrade the cluster - -{% include {{ page.version.version }}/orchestration/kubernetes-upgrade-cluster.md %} - -### Stop the cluster - -To shut down the CockroachDB cluster: - -1. 
Delete all of the resources associated with the `cockroachdb` label, including the logs, remote persistent volumes, and Prometheus and Alertmanager resources: - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl delete pods,statefulsets,services,persistentvolumeclaims,persistentvolumes,poddisruptionbudget,jobs,rolebinding,clusterrolebinding,role,clusterrole,serviceaccount,alertmanager,prometheus,prometheusrule,serviceMonitor -l app=cockroachdb - ~~~ - - ~~~ - pod "cockroachdb-0" deleted - pod "cockroachdb-1" deleted - pod "cockroachdb-2" deleted - service "alertmanager-cockroachdb" deleted - service "cockroachdb" deleted - service "cockroachdb-public" deleted - persistentvolumeclaim "datadir-cockroachdb-0" deleted - persistentvolumeclaim "datadir-cockroachdb-1" deleted - persistentvolumeclaim "datadir-cockroachdb-2" deleted - poddisruptionbudget "cockroachdb-budget" deleted - job "cluster-init-secure" deleted - rolebinding "cockroachdb" deleted - clusterrolebinding "cockroachdb" deleted - clusterrolebinding "prometheus" deleted - role "cockroachdb" deleted - clusterrole "cockroachdb" deleted - clusterrole "prometheus" deleted - serviceaccount "cockroachdb" deleted - serviceaccount "prometheus" deleted - alertmanager "cockroachdb" deleted - prometheus "cockroachdb" deleted - prometheusrule "prometheus-cockroachdb-rules" deleted - servicemonitor "cockroachdb" deleted - ~~~ - -2. Delete the pod created for `cockroach` client commands, if you didn't do so earlier: - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl delete pod cockroachdb-client-secure - ~~~ - - ~~~ - pod "cockroachdb-client-secure" deleted - ~~~ - -3. Get the names of the CSRs for the cluster: - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl get csr - ~~~ - - ~~~ - NAME AGE REQUESTOR CONDITION - default.client.root 1h system:serviceaccount:default:default Approved,Issued - default.node.cockroachdb-0 1h system:serviceaccount:default:default Approved,Issued - default.node.cockroachdb-1 1h system:serviceaccount:default:default Approved,Issued - default.node.cockroachdb-2 1h system:serviceaccount:default:default Approved,Issued - default.node.cockroachdb-3 12m system:serviceaccount:default:default Approved,Issued - node-csr-0Xmb4UTVAWMEnUeGbW4KX1oL4XV_LADpkwjrPtQjlZ4 1h kubelet Approved,Issued - node-csr-NiN8oDsLhxn0uwLTWa0RWpMUgJYnwcFxB984mwjjYsY 1h kubelet Approved,Issued - node-csr-aU78SxyU69pDK57aj6txnevr7X-8M3XgX9mTK0Hso6o 1h kubelet Approved,Issued - ~~~ - -4. Delete the CSRs that you created: - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl delete csr default.client.root default.node.cockroachdb-0 default.node.cockroachdb-1 default.node.cockroachdb-2 default.node.cockroachdb-3 - ~~~ - - ~~~ - certificatesigningrequest "default.client.root" deleted - certificatesigningrequest "default.node.cockroachdb-0" deleted - certificatesigningrequest "default.node.cockroachdb-1" deleted - certificatesigningrequest "default.node.cockroachdb-2" deleted - certificatesigningrequest "default.node.cockroachdb-3" deleted - ~~~ - -5. 
Get the names of the secrets for the cluster: - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl get secrets - ~~~ - - ~~~ - NAME TYPE DATA AGE - alertmanager-cockroachdb Opaque 1 1h - default-token-d9gff kubernetes.io/service-account-token 3 5h - default.client.root Opaque 2 5h - default.node.cockroachdb-0 Opaque 2 5h - default.node.cockroachdb-1 Opaque 2 5h - default.node.cockroachdb-2 Opaque 2 5h - default.node.cockroachdb-3 Opaque 2 5h - prometheus-operator-token-bpdv8 kubernetes.io/service-account-token 3 3h - ~~~ - -6. Delete the secrets that you created: - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl delete secrets alertmanager-cockroachdb default.client.root default.node.cockroachdb-0 default.node.cockroachdb-1 default.node.cockroachdb-2 default.node.cockroachdb-3 - ~~~ - - ~~~ - secret "alertmanager-cockroachdb" deleted - secret "default.client.root" deleted - secret "default.node.cockroachdb-0" deleted - secret "default.node.cockroachdb-1" deleted - secret "default.node.cockroachdb-2" deleted - secret "default.node.cockroachdb-3" deleted - ~~~ - -7. Stop Kubernetes: - -{% include {{ page.version.version }}/orchestration/stop-kubernetes.md %} - -## See also - -- [Kubernetes Single-Cluster Deployment](orchestrate-cockroachdb-with-kubernetes.html) -- [Kubernetes Performance Guide](kubernetes-performance.html) -{% include {{ page.version.version }}/prod-deployment/prod-see-also.md %} diff --git a/src/current/v2.0/orchestration.md b/src/current/v2.0/orchestration.md deleted file mode 100644 index 92a63f03f44..00000000000 --- a/src/current/v2.0/orchestration.md +++ /dev/null @@ -1,23 +0,0 @@ ---- -title: Orchestration -summary: Learn how to run CockroachDB with popular open-source orchestration systems. -toc: false -canonical: /stable/kubernetes-overview ---- - -Orchestration systems automate the deployment, scaling, and management of containerized applications. Combined with CockroachDB's [automated sharding](frequently-asked-questions.html#how-does-cockroachdb-scale) and [fault tolerance](frequently-asked-questions.html#how-does-cockroachdb-survive-failures), they have the potential to lower operator overhead to almost nothing. - -Use the following guides to run CockroachDB with popular open-source orchestration systems: - -- [Kubernetes Deployment](orchestrate-cockroachdb-with-kubernetes.html) -- [Kubernetes Performance Optimization](kubernetes-performance.html) -- [Docker Swarm Deployment](orchestrate-cockroachdb-with-docker-swarm.html) - -{{site.data.alerts.callout_success}}If you're just getting started with CockroachDB, you might want to orchestrate a local cluster to learn the basics of the database.{{site.data.alerts.end}} - -## See Also - -- [Production Checklist](recommended-production-settings.html) -- [Manual Deployment](manual-deployment.html) -- [Monitoring and Alerting](monitoring-and-alerting.html) -- [Local Deployment](start-a-local-cluster.html) diff --git a/src/current/v2.0/parallel-statement-execution.md b/src/current/v2.0/parallel-statement-execution.md deleted file mode 100644 index 299b97259e5..00000000000 --- a/src/current/v2.0/parallel-statement-execution.md +++ /dev/null @@ -1,124 +0,0 @@ ---- -title: Parallel Statement Execution -summary: The parallel statement execution feature allows parallel execution of multiple independent SQL statements within a transaction. 
-toc: true ---- - -CockroachDB supports parallel execution of [independent](parallel-statement-execution.html#when-to-use-parallel-statement-execution) [`INSERT`](insert.html), [`UPDATE`](update.html), [`UPSERT`](upsert.html), and [`DELETE`](delete.html) statements within a single [transaction](transactions.html). Executing statements in parallel helps reduce aggregate latency and improve performance. - - -## Why Use Parallel Statement Execution - -SQL engines traditionally execute the SQL statements in a transaction sequentially. The server executes each statement to completion and sends the return value of each statement to the client. Only after the client receives the return value of a statement, it sends the next SQL statement to be executed. - -In the case of a traditional single-node SQL database, statements are executed on the single machine, and so the execution does not result in any communication latency. However, in the case of a distributed and replicated database like CockroachDB, execution of statements can span multiple nodes. The coordination between nodes results in communication latency. Executing SQL statements sequentially results in higher cumulative latency. - -With parallel statement execution, however, multiple SQL statements within a transaction are executed at the same time, thereby reducing the aggregate latency. - -## How Parallel Statement Execution Works - -Let's understand how sequential and parallel execution works in the following scenario: - - -- Suppose we want to update a user's last name, favorite movie, and favorite song on a social networking application. -- The database has three tables that need to be updated: `users`, `favorite_movies`, and `favorite_songs`. - -Then the traditional transaction to update the user's information is as follows: - -~~~ sql -> BEGIN; -> UPDATE users SET last_name = 'Smith' WHERE id = 1; -> UPDATE favorite_movies SET movies = 'The Matrix' WHERE user_id = 1; -> UPDATE favorite_songs SET songs = 'All this time' WHERE user_id = 1; -> COMMIT; -~~~ - -While executing the SQL statements in the transaction sequentially, the server sends a return value after executing a statement. The client can send the next statement to be executed only after it receives the return value of the previous statement. This is often described as a "conversational API," as demonstrated by the following conceptual diagram: - -CockroachDB Parallel Statement Execution - -The SQL statements in our sample scenario can be executed in parallel since they are independent of each other. To execute statements in parallel, the client should be able to send the next statement to be executed without waiting for the return value of the earlier statement. In CockroachDB, on appending the `RETURNING NOTHING` clause with SQL statements, the server sends an acknowledgment immediately, instead of waiting to complete the statement execution and sending the return value to the client. The client sends the next statement to be executed on receiving the acknowledgment. This allows CockroachDB to execute the statements in parallel. The statements are executed in parallel until CockroachDB encounters a **barrier statement**. A barrier statement is any statement without the `RETURNING NOTHING` clause. The server executes a barrier statement sequentially. 
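For example, a `SELECT` can never carry a `RETURNING NOTHING` clause, so it always acts as a barrier. In the following sketch (reusing the tables from the example above), the server executes the `SELECT` only after both preceding `UPDATE` statements have finished:

~~~ sql
> BEGIN;
> UPDATE users SET last_name = 'Smith' WHERE id = 1 RETURNING NOTHING;
> UPDATE favorite_movies SET movies = 'The Matrix' WHERE user_id = 1 RETURNING NOTHING;
> SELECT last_name FROM users WHERE id = 1; -- barrier: waits for both UPDATEs above
> COMMIT;
~~~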
- -In our sample scenario, the transaction would be as follows: - -~~~ sql -> BEGIN; -> UPDATE users SET last_name = 'Smith' WHERE id = 1 RETURNING NOTHING; -> UPDATE favorite_movies SET movies = 'The Matrix' WHERE user_id = 1 RETURNING NOTHING; -> UPDATE favorite_songs SET songs = 'All this time' WHERE user_id = 1 RETURNING NOTHING; -> COMMIT; -~~~ - -In this case, because the `UPDATE` statements within the transaction are independent of each other, they can be executed in parallel without affecting the results. The `COMMIT` statement is the barrier statement and is executed sequentially. A barrier statement is executed only after all the parallel statements preceding it have finished executing. - -The following conceptual diagram shows how the transaction is executed sequentially and in parallel. The diagram also shows how executing statements in parallel reduces the aggregate latency. - -CockroachDB Parallel Statement Execution - -### Perceived delay in execution of barrier statements - -As stated earlier, the server executes a barrier statement only after all the preceding parallel statements have finished executing. So it may seem as if the barrier statement is taking longer to execute, but it is waiting on the parallel statements. Even then, the total time required for parallel execution of statements followed by the sequential execution of the barrier statement should be less than the time required for the sequential execution of all statements. - -Referring to the previous diagram, the server executes all `UPDATE` statements to completion before executing `COMMIT`. Hence it might seem as if `COMMIT` is taking longer to execute, but it is, in fact, waiting on the `UPDATE` statements to finish executing. - -### Error message mismatch - -With sequential execution, as soon as an error happens, the transaction is aborted and an error message is sent to the client. However, with parallel execution, the message is sent not when the error is encountered but after the next barrier statement. This can result in the client receiving an error message that doesn't match the statement being executed. The following diagram illustrates this concept: - -CockroachDB Parallel Statement Execution Error Mismatch - -### `RETURNING NOTHING` clause appended to dependent statements - -If two consecutive statements are not independent, and yet a `RETURNING NOTHING` clause is added to the statements, CockroachDB detects the dependence and executes the statements sequentially. This means that you can use the `RETURNING NOTHING` clause with SQL statements without worrying about their dependence. - -Revising our sample scenario, suppose we want to create a new user on the social networking app. We need to create entries for the last name of the user, their favorite movie, and favorite song. We need to insert entries into three tables: `users`, `favorite_movies`, and `favorite_songs`. The transaction would be as follows: - -~~~ sql -> BEGIN; -> INSERT INTO users (id, last_name) VALUES (2, 'Pavlo') RETURNING NOTHING; -> INSERT INTO favorite_movies (user_id, movies) VALUES (2, 'Godfather') RETURNING NOTHING; -> INSERT INTO favorite_songs (user_id, songs) VALUES (2, 'Remember') RETURNING NOTHING; -> COMMIT; -~~~ - -In this case, the second and third `INSERT` statements are dependent on the first `INSERT` statement because the movies and songs tables both have a foreign key constraint on the users table.
So even though we append the `RETURNING NOTHING` clause to the first statement, CockroachDB executes the first statement sequentially. After the first statement is executed to completion, the second and third `INSERT` statements are executed in parallel. The following conceptual diagram shows how the transaction is executed in sequential and parallel modes: - -CockroachDB Parallel Statement Hybrid Execution - -## When to Use Parallel Statement Execution - -SQL statements within a single transaction can be executed in parallel if the statements are independent. CockroachDB considers SQL statements within a single transaction to be independent if their execution can be safely reordered without affecting their results. - -For example, the following statements are considered independent since reordering the statements does not affect the results: - -~~~ sql -> INSERT INTO a VALUES (100); -> INSERT INTO b VALUES (100); -~~~ - -~~~ sql -> INSERT INTO a VALUES (100); -> INSERT INTO a VALUES (200); -~~~ - -The following pairs of statements are dependent since reordering them will affect their results: - -~~~ sql -> UPDATE a SET b = 2 WHERE y = 1; -> UPDATE a SET b = 3 WHERE y = 1; -~~~ - -~~~ sql -> UPDATE a SET y = true WHERE y = false; -> UPDATE a SET y = false WHERE y = true; -~~~ - - -{{site.data.alerts.callout_info}}Parallel statement execution in CockroachDB is different from parallel query execution in PostgreSQL. For PostgreSQL, parallel query execution refers to “creating multiple query processes that divide the workload of a single SQL statement and executing them in parallel”. For CockroachDB’s parallel statement execution, an individual SQL statement is not divided into processes. Instead, multiple independent SQL statements within a single transaction are executed in parallel.{{site.data.alerts.end}} - -## See Also - -- [`INSERT`](insert.html) -- [`UPDATE`](update.html) -- [`DELETE`](delete.html) -- [`UPSERT`](upsert.html) diff --git a/src/current/v2.0/partition-by.md b/src/current/v2.0/partition-by.md deleted file mode 100644 index 48415d5a0bc..00000000000 --- a/src/current/v2.0/partition-by.md +++ /dev/null @@ -1,93 +0,0 @@ ---- -title: PARTITION BY -summary: Use the ALTER TABLE statement to define partitions and subpartitions, repartition, or unpartition a table. -toc: true ---- - -New in v2.0: `PARTITION BY` is a subcommand of [`ALTER TABLE`](alter-table.html) that is used to define partitions and subpartitions on a table, and repartition or unpartition a table. - -{{site.data.alerts.callout_info}}Defining table partitions is an enterprise-only feature.{{site.data.alerts.end}} - -## Primary Key Requirements - -The [primary key required for partitioning](partitioning.html#partition-using-primary-key) is different from the conventional primary key: the unique identifier in the primary key must be prefixed with all columns you want to partition and subpartition the table on, in the order in which you want to nest your subpartitions. - -As of CockroachDB v2.0, you cannot alter the primary key after it has been defined while [creating the table](create-table.html#create-a-table-with-partitions-new-in-v2-0). If the primary key in your existing table does not meet the requirements, you will not be able to use the `ALTER TABLE` statement to define partitions or subpartitions on the existing table. - -## Synopsis - -
      -{% include {{ page.version.version }}/sql/diagrams/alter_table_partition_by.html %} -
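For example, a list-partitioning statement has this general shape (a sketch with hypothetical table, column, and partition names; see the examples below for complete statements):

~~~ sql
> ALTER TABLE table_name PARTITION BY LIST (column_name)
    (PARTITION partition_name VALUES IN ('value1', 'value2'),
     PARTITION DEFAULT VALUES IN (default));
~~~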
- -## Parameters - -Parameter | Description | ------------|-------------| -`table_name` | The name of the table you want to define partitions for. | -`name_list` | List of columns you want to define partitions on (in the order they are defined in the primary key). | -`list_partitions` | Name of list partition followed by the list of values to be included in the partition. -`range_partitions` | Name of range partition followed by the range of values to be included in the partition. - -## Required Privileges - -The user must have the `CREATE` [privilege](privileges.html) on the table. - -## Examples - -### Define a List Partition on an Existing Table - -Suppose we have an existing table named `students_by_list` in a global online learning portal, and the primary key of the table is defined as `(country, id)`. We can define partitions on the table by list: - -{% include copy-clipboard.html %} -~~~ sql -> ALTER TABLE students_by_list PARTITION BY LIST (country) - (PARTITION north_america VALUES IN ('CA','US'), - PARTITION australia VALUES IN ('AU','NZ'), - PARTITION DEFAULT VALUES IN (default)); -~~~ - -### Define a Range Partition on an Existing Table - -Suppose we have another existing table named `students_by_range` and the primary key of the table is defined as `(expected_graduation_date, id)`. We can define partitions on the table by range: - -{% include copy-clipboard.html %} -~~~ sql -> ALTER TABLE students_by_range PARTITION BY RANGE (expected_graduation_date) - (PARTITION graduated VALUES FROM (MINVALUE) TO ('2017-08-15'), - PARTITION current VALUES FROM ('2017-08-15') TO (MAXVALUE)); -~~~ - -### Define Subpartitions on an Existing Table - -Suppose we have yet another existing table named `students` with the primary key defined as `(country, expected_graduation_date, id)`. We can define partitions and subpartitions on the table: - -{% include copy-clipboard.html %} -~~~ sql -> ALTER TABLE students PARTITION BY LIST (country)( - PARTITION australia VALUES IN ('AU','NZ') PARTITION BY RANGE (expected_graduation_date)(PARTITION graduated_au VALUES FROM (MINVALUE) TO ('2017-08-15'), PARTITION current_au VALUES FROM ('2017-08-15') TO (MAXVALUE)), - PARTITION north_america VALUES IN ('US','CA') PARTITION BY RANGE (expected_graduation_date)(PARTITION graduated_us VALUES FROM (MINVALUE) TO ('2017-08-15'), PARTITION current_us VALUES FROM ('2017-08-15') TO (MAXVALUE)) - ); -~~~ - -### Repartition a Table - -{% include copy-clipboard.html %} -~~~ sql -> ALTER TABLE students_by_range PARTITION BY RANGE (expected_graduation_date) ( - PARTITION graduated VALUES FROM (MINVALUE) TO ('2018-08-15'), - PARTITION current VALUES FROM ('2018-08-15') TO (MAXVALUE)); -~~~ - -### Unpartition a Table - -{% include copy-clipboard.html %} -~~~ sql -> ALTER TABLE students PARTITION BY NOTHING; -~~~ - -## See Also - -- [`CREATE TABLE`](create-table.html) -- [`ALTER TABLE`](alter-table.html) -- [Define Table Partitions](partitioning.html) diff --git a/src/current/v2.0/partitioning.md b/src/current/v2.0/partitioning.md deleted file mode 100644 index f6285f716c9..00000000000 --- a/src/current/v2.0/partitioning.md +++ /dev/null @@ -1,551 +0,0 @@ ---- -title: Define Table Partitions -summary: Partitioning is an enterprise feature that gives you row-level control of how and where your data is stored. -toc: true ---- - -New in v2.0: CockroachDB allows you to define table partitions, thus giving you row-level control of how and where your data is stored.
Partitioning enables you to reduce latencies and costs and can assist in meeting regulatory requirements for your data. - -{{site.data.alerts.callout_info}}Table partitioning is an enterprise-only feature.{{site.data.alerts.end}} - - -## Why Use Table Partitioning - -Table partitioning helps you reduce latency and cost: - -- **Geo-partitioning** allows you to keep user data close to the user, which reduces the distance that the data needs to travel, thereby **reducing latency**. To geo-partition a table, define location-based partitions while creating a table, create location-specific zone configurations, and apply the zone configurations to the corresponding partitions. -- **Archival-partitioning** allows you to store infrequently-accessed data on slower and cheaper storage, thereby **reducing costs**. To archival-partition a table, define frequency-based partitions while creating a table, create frequency-specific zone configurations with appropriate storage devices constraints, and apply the zone configurations to the corresponding partitions. - -## How It Works - -Table partitioning involves a combination of CockroachDB features: - -- [Node Attributes](#node-attributes) -- [Enterprise License](#enterprise-license) -- [Table Creation](#table-creation) -- [Replication Zones](#replication-zones) - -### Node Attributes - -To store partitions in specific locations (e.g., geo-partitioning), or on machines with specific attributes (e.g., archival-partitioning), the nodes of your cluster must be [started](start-a-node.html) with the relevant flags: - -- Use the `--locality` flag to assign key-value pairs that describe the location of a node, for example, `--locality=region=east,datacenter=us-east-1`. -- Use the `--attrs` flag to specify node capability, which might include specialized hardware or number of cores, for example, `--attrs=ram:64gb`. -- Use the `attrs` field of the `--store` flag to specify disk type or capability, for example,`--store=path=/mnt/ssd01,attrs=ssd`. - -For more details about these flags, see the [`cockroach start`](start-a-node.html) documentation. - -### Enterprise License - -You must have a valid enterprise license to use table partitioning features. For details about requesting and setting a trial or full enterprise license, see [Enterprise Licensing](enterprise-licensing.html). - -Note that the following features do not work with an **expired license**: - -- Creating new table partitions or adding new zone configurations for partitions -- Changing the partitioning scheme on any table or index -- Changing the zone config for a partition - -However, the following features continue to work even with an expired enterprise license: - -- Querying a partitioned table (for example, `SELECT foo PARTITION`) -- Inserting or updating data in a partitioned table -- Dropping a partitioned table -- Unpartitioning a partitioned table -- Making non-partitioning changes to a partitioned table (for example, adding a column/index/foreign key/check constraint) - -### Table Creation - -You can define partitions and subpartitions over one or more columns of a table. During [table creation](create-table.html), you declare which values belong to each partition in one of two ways: - -- **List partitioning**: Enumerate all possible values for each partition. List partitioning is a good choice when the number of possible values is small. List partitioning is well-suited for geo-partitioning. 
- -**Range partitioning**: Specify a contiguous range of values for each partition by specifying lower and upper bounds. Range partitioning is a good choice when the number of possible values is too large to explicitly list out. Range partitioning is well-suited for archival-partitioning. - -#### Partition by List - -[`PARTITION BY LIST`](partition-by.html) lets you map one or more tuples to a partition. - -To partition a table by list, use the [`PARTITION BY LIST`](partition-by.html) syntax while creating the table. While defining a list partition, you can also set the `DEFAULT` partition that acts as a catch-all if none of the rows match the requirements for the defined partitions. - -See [Partition by List](#define-table-partitions-by-list) example below for more details. - -#### Partition by Range - -[`PARTITION BY RANGE`](partition-by.html) lets you map ranges of tuples to a partition. - -To define a table partition by range, use the [`PARTITION BY RANGE`](partition-by.html) syntax while creating the table. While defining a range partition, you can use CockroachDB-defined `MINVALUE` and `MAXVALUE` parameters to define the lower and upper bounds of the ranges, respectively. - -{{site.data.alerts.callout_info}}The lower bound of a range partition is inclusive, while the upper bound is exclusive. For range partitions, NULL is considered less than any other data, which is consistent with our key encoding ordering and ORDER BY behavior.{{site.data.alerts.end}} - -Partition values can be any SQL expression, but it’s only evaluated once. If you create a partition with value `< (now() - '1d')` on 2017-01-30, it would contain all values less than 2017-01-29. It would not update the next day; it would continue to contain values less than 2017-01-29. - -See [Partition by Range](#define-table-partitions-by-range) example below for more details. - -#### Partition using Primary Key - -The primary key required for partitioning is different from the conventional primary key. To define the primary key for partitioning, prefix the unique identifier(s) in the primary key with all columns you want to partition and subpartition the table on, in the order in which you want to nest your subpartitions. - -For instance, consider the database of a global online learning portal that has a table for students of all the courses across the world. If you want to geo-partition the table based on the countries of the students, then the primary key needs to be defined as: - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE students ( - id INT DEFAULT unique_rowid(), - name STRING, - email STRING, - country STRING, - expected_graduation_date DATE, - PRIMARY KEY (country, id)); -~~~ - -**Primary Key Considerations** - -- For v2.0, you cannot change the primary key after you create the table. Provision for all future subpartitions by including those columns in the primary key. In the example of the online learning portal, if you think you might want to subpartition based on `expected_graduation_date` in the future, define the primary key as `(country, expected_graduation_date, id)`. v2.1 will allow you to change the primary key. -- The order in which the columns are defined in the primary key is important. The partitions and subpartitions need to follow that order. In the example of the online learning portal, if you define the primary key as `(country, expected_graduation_date, id)`, the primary partition is by `country`, and then subpartition is by `expected_graduation_date`.
You can’t skip `country` and partition by `expected_graduation_date`. - -#### Partition using Secondary Index - -The primary key discussed above has two drawbacks: - -- It does not enforce that the identifier column is globally unique. -- It does not provide fast lookups on the identifier. - -To ensure uniqueness or fast lookups, create a unique, unpartitioned secondary index on the identifier. - -Indexes can also be partitioned, but are not required to be. Each partition is required to have a name that is unique among all partitions on that table and its indexes. For example, the following `CREATE INDEX` scenario will fail because it reuses the name of a partition of the primary key: - -~~~ sql -CREATE TABLE foo (a STRING PRIMARY KEY, b STRING) PARTITION BY LIST (a) ( - PARTITION bar VALUES IN ('bar'), - PARTITION default VALUES IN (DEFAULT) -); -CREATE INDEX foo_b_idx ON foo (b) PARTITION BY LIST (b) ( - PARTITION baz VALUES IN ('baz'), - PARTITION default VALUES IN (DEFAULT) -); -~~~ - -Consider using a naming scheme that uses the index name to avoid conflicts. For example, the partitions above could be named `primary_idx_bar`, `primary_idx_default`, `b_idx_baz`, `b_idx_default`. - -#### Define Partitions on Interleaved Tables - -For [interleaved tables](interleave-in-parent.html), partitions can be defined only on the root table of the interleave hierarchy, while children are interleaved the same as their parents. - -### Replication Zones - -On their own, partitions are inert and simply apply a label to the rows of the table that satisfy the criteria of the defined partitions. Applying functionality to a partition requires creating and applying [replication zone](configure-replication-zones.html) to the corresponding partitions. - -CockroachDB uses the most granular zone config available. Zone configs that target a partition are considered more granular than those that target a table or index, which in turn are considered more granular than those that target a database. - -## Examples - -### Define Table Partitions by List - -Consider a global online learning portal, RoachLearn, that has a database containing a table of students across the world. Suppose we have two datacenters: one in the United States and another in Australia. To reduce latency, we want to keep the students' data closer to their locations: - -- We want to keep the data of the students located in the United States and Canada in the United States datacenter. -- We want to keep the data of students located in Australia and New Zealand in the Australian datacenter. - -#### Step 1. Identify the partitioning method - -We want to geo-partition the table to keep the students' data closer to their locations. We can achieve this by partitioning on `country` and using the `PARTITION BY LIST` syntax. - -#### Step 2. Start each node with its datacenter location specified in the `--locality` flag - -{% include copy-clipboard.html %} -~~~ shell -# Start the node in the US datacenter: -$ cockroach start --insecure \ ---locality=datacenter=us1 \ ---store=node1 \ ---host= \ ---port=26257 \ ---http-port=8080 \ ---join=:26257,:26258 -~~~ - -{% include copy-clipboard.html %} -~~~ shell -# Start the node in the AUS datacenter: -$ cockroach start --insecure \ ---locality=datacenter=au1 \ ---store=node2 \ ---host= \ ---port=26258 \ ---http-port=8081 \ ---join=:26257,:26258 -~~~ - -#### Step 3. 
Set the enterprise license - -To set the enterprise license, see [Set the Trial or Enterprise License Key](enterprise-licensing.html#set-the-trial-or-enterprise-license-key). - -#### Step 4. Create a table with the appropriate partitions - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE students_by_list ( - id INT DEFAULT unique_rowid(), - name STRING, - email STRING, - country STRING, - expected_graduation_date DATE, - PRIMARY KEY (country, id)) - PARTITION BY LIST (country) - (PARTITION north_america VALUES IN ('CA','US'), - PARTITION australia VALUES IN ('AU','NZ'), - PARTITION DEFAULT VALUES IN (default)); -~~~ - -#### Step 5. Create and apply corresponding zone configurations - -Create appropriate zone configurations: - -{% include copy-clipboard.html %} -~~~ shell -$ cat > north_america.zone.yml -constraints: [+datacenter=us1] -~~~ - -{% include copy-clipboard.html %} -~~~ shell -$ cat > australia.zone.yml -constraints: [+datacenter=au1] -~~~ - -Apply zone configurations to corresponding partitions: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach zone set roachlearn.students_by_list.north_america --insecure -f north_america.zone.yml -~~~ - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach zone set roachlearn.students_by_list.australia --insecure -f australia.zone.yml -~~~ - -#### Step 6. Verify table partitions - -{% include copy-clipboard.html %} -~~~ sql -> SHOW EXPERIMENTAL_RANGES FROM TABLE students_by_list; -~~~ - -You should see the following output: - -~~~ sql -+-----------------+-----------------+----------+----------+--------------+ -| Start Key | End Key | Range ID | Replicas | Lease Holder | -+-----------------+-----------------+----------+----------+--------------+ -| NULL | /"AU" | 251 | {1,2,3} | 1 | -| /"AU" | /"AU"/PrefixEnd | 257 | {1,2,3} | 1 | -| /"AU"/PrefixEnd | /"CA" | 258 | {1,2,3} | 1 | -| /"CA" | /"CA"/PrefixEnd | 252 | {1,2,3} | 1 | -| /"CA"/PrefixEnd | /"NZ" | 253 | {1,2,3} | 1 | -| /"NZ" | /"NZ"/PrefixEnd | 256 | {1,2,3} | 1 | -| /"NZ"/PrefixEnd | /"US" | 259 | {1,2,3} | 1 | -| /"US" | /"US"/PrefixEnd | 254 | {1,2,3} | 1 | -| /"US"/PrefixEnd | NULL | 255 | {1,2,3} | 1 | -+-----------------+-----------------+----------+----------+--------------+ -(9 rows) - -Time: 7.209032ms -~~~ - -### Define Table Partitions by Range - -Suppose we want to store the data of current students on fast and expensive storage devices (e.g., SSD) and store the data of the graduated students on slower, cheaper storage devices (e.g., HDD). - -#### Step 1. Identify the partitioning method - -We want to archival-partition the table to keep newer data on faster devices and older data on slower devices. We can achieve this by partitioning the table by date and using the `PARTITION BY RANGE` syntax. - -#### Step 2. Set the enterprise license - -To set the enterprise license, see [Set the Trial or Enterprise License Key](enterprise-licensing.html#set-the-trial-or-enterprise-license-key). - -#### Step 3. Start each node with the appropriate storage device specified in the `--store` flag - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach start --insecure \ ---store=path=/mnt/1,attrs=ssd \ ---host= \ ---port=26257 \ ---http-port=8080 \ ---join=:26257,:26258 -~~~ - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach start --insecure \ ---store=path=/mnt/2,attrs=hdd \ ---host= \ ---port=26258 \ ---http-port=8081 \ ---join=:26257,:26258 -~~~ - -#### Step 4. 
Create a table with the appropriate partitions - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE students_by_range ( - id INT DEFAULT unique_rowid(), - name STRING, - email STRING, - country STRING, - expected_graduation_date DATE, - PRIMARY KEY (expected_graduation_date, id)) - PARTITION BY RANGE (expected_graduation_date) - (PARTITION graduated VALUES FROM (MINVALUE) TO ('2017-08-15'), - PARTITION current VALUES FROM ('2017-08-15') TO (MAXVALUE)); -~~~ - -#### Step 5. Create and apply corresponding zone configurations - -Create appropriate zone configurations: - -{% include copy-clipboard.html %} -~~~ shell -$ cat > current.zone.yml -constraints: [+ssd] -~~~ - -{% include copy-clipboard.html %} -~~~ shell -$ cat > graduated.zone.yml -constraints: [+hdd] -~~~ - -Apply zone configurations to corresponding partitions: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach zone set roachlearn.students_by_range.current --insecure -f current.zone.yml -~~~ - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach zone set roachlearn.students_by_range.graduated --insecure -f graduated.zone.yml -~~~ - -#### Step 6. Verify table partitions - -{% include copy-clipboard.html %} -~~~ sql -> SHOW EXPERIMENTAL_RANGES FROM TABLE students_by_range; -~~~ - -You should see the following output: - -~~~ sql -+-----------+---------+----------+----------+--------------+ -| Start Key | End Key | Range ID | Replicas | Lease Holder | -+-----------+---------+----------+----------+--------------+ -| NULL | /17393 | 244 | {1,2,3} | 1 | -| /17393 | NULL | 242 | {1,2,3} | 1 | -+-----------+---------+----------+----------+--------------+ -(2 rows) - -Time: 5.850903ms -~~~ - -### Define Subpartitions on a Table - -A list partition can itself be partitioned, forming a subpartition. There is no limit on the number of levels of subpartitioning; that is, list partitions can be infinitely nested. - -{{site.data.alerts.callout_info}}Range partitions cannot be subpartitioned.{{site.data.alerts.end}} - -Going back to RoachLearn's scenario, suppose we want to do all of the following: - -- Keep the students' data close to their location. -- Store the current students' data on faster storage devices. -- Store the graduated students' data on slower, cheaper storage devices (example: HDD). - -#### Step 1. Identify the Partitioning method - -We want to geo-partition as well as archival-partition the table. We can achieve this by partitioning the table first by location and then by date. - -#### Step 2. Start each node with the appropriate storage device specified in the `--store` flag - -Start a node in the US datacenter: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach start --insecure \ ---host= \ ---locality=datacenter=us1 \ ---store=path=/mnt/1,attrs=ssd \ ---store=path=/mnt/2,attrs=hdd \ -~~~ - -Start a node in the AUS datacenter: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach start --insecure \ ---host= \ ---locality=datacenter=au1 \ ---store=path=/mnt/3,attrs=ssd \ ---store=path=/mnt/4,attrs=hdd \ ---join=:26257 -~~~ - -Initialize the cluster: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach init --insecure --host= -~~~ - -#### Step 3. Set the enterprise license - -To set the enterprise license, see [Set the Trial or Enterprise License Key](enterprise-licensing.html#set-the-trial-or-enterprise-license-key). - -#### Step 4. 
Create a table with the appropriate partitions - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE students ( - id INT DEFAULT unique_rowid(), - name STRING, - email STRING, - country STRING, - expected_graduation_date DATE, - PRIMARY KEY (country, expected_graduation_date, id)) - PARTITION BY LIST (country)( - PARTITION australia VALUES IN ('AU','NZ') PARTITION BY RANGE (expected_graduation_date)(PARTITION graduated_au VALUES FROM (MINVALUE) TO ('2017-08-15'), PARTITION current_au VALUES FROM ('2017-08-15') TO (MAXVALUE)), - PARTITION north_america VALUES IN ('US','CA') PARTITION BY RANGE (expected_graduation_date)(PARTITION graduated_us VALUES FROM (MINVALUE) TO ('2017-08-15'), PARTITION current_us VALUES FROM ('2017-08-15') TO (MAXVALUE)) - ); -~~~ - -Subpartition names must be unique within a table. In our example, even though `graduated` and `current` are sub-partitions of distinct partitions, they still need to be uniquely named. Hence the names `graduated_au`, `graduated_us`, and `current_au` and `current_us`. - -#### Step 5. Create and apply corresponding zone configurations - -Create appropriate zone configurations: - -{% include copy-clipboard.html %} -~~~ shell -$ cat > current_us.zone.yml -constraints: [+ssd,+datacenter=us1] -~~~ - -{% include copy-clipboard.html %} -~~~ shell -$ cat > graduated_us.zone.yml -constraints: [+hdd,+datacenter=us1] -~~~ - -{% include copy-clipboard.html %} -~~~ shell -$ cat > current_au.zone.yml -constraints: [+ssd,+datacenter=au1] -~~~ - -{% include copy-clipboard.html %} -~~~ shell -$ cat > graduated_au.zone.yml -constraints: [+hdd,+datacenter=au1] -~~~ - -Apply zone configurations to corresponding partitions: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach zone set roachlearn.students.current_us --insecure -f current_us.zone.yml -~~~ - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach zone set roachlearn.students.graduated_us --insecure -f graduated_us.zone.yml -~~~ - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach zone set roachlearn.students.current_au --insecure -f current_au.zone.yml -~~~ - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach zone set roachlearn.students.graduated_au --insecure -f graduated_au.zone.yml -~~~ - -#### Step 6. Verify table partitions - -{% include copy-clipboard.html %} -~~~ sql -> SHOW EXPERIMENTAL_RANGES FROM TABLE students; -~~~ - -You should see the following output: - -~~~ sql -+-----------------+-----------------+----------+----------+--------------+ -| Start Key | End Key | Range ID | Replicas | Lease Holder | -+-----------------+-----------------+----------+----------+--------------+ -| NULL | /"AU" | 260 | {1,2,3} | 1 | -| /"AU" | /"AU"/17393 | 268 | {1,2,3} | 1 | -| /"AU"/17393 | /"AU"/PrefixEnd | 266 | {1,2,3} | 1 | -| /"AU"/PrefixEnd | /"CA" | 267 | {1,2,3} | 1 | -| /"CA" | /"CA"/17393 | 265 | {1,2,3} | 1 | -| /"CA"/17393 | /"CA"/PrefixEnd | 261 | {1,2,3} | 1 | -| /"CA"/PrefixEnd | /"NZ" | 262 | {1,2,3} | 3 | -| /"NZ" | /"NZ"/17393 | 284 | {1,2,3} | 3 | -| /"NZ"/17393 | /"NZ"/PrefixEnd | 282 | {1,2,3} | 3 | -| /"NZ"/PrefixEnd | /"US" | 283 | {1,2,3} | 3 | -| /"US" | /"US"/17393 | 281 | {1,2,3} | 3 | -| /"US"/17393 | /"US"/PrefixEnd | 263 | {1,2,3} | 1 | -| /"US"/PrefixEnd | NULL | 264 | {1,2,3} | 1 | -+-----------------+-----------------+----------+----------+--------------+ -(13 rows) - -Time: 11.586626ms -~~~ - -### Repartition a Table - -Consider the partitioned table of students of RoachLearn. 
Suppose the table has been partitioned on range to store the current students on fast and expensive storage devices (example: SSD) and store the data of the graduated students on slower, cheaper storage devices(example: HDD). Now suppose we want to change the date after which the students will be considered current to `2018-08-15`. We can achieve this by using the [`PARTITION BY`](partition-by.html) subcommand of the [`ALTER TABLE`](alter-table.html) command. - -{% include copy-clipboard.html %} -~~~ sql -> ALTER TABLE students_by_range PARTITION BY RANGE (expected_graduation_date) ( - PARTITION graduated VALUES FROM (MINVALUE) TO ('2018-08-15'), - PARTITION current VALUES FROM ('2018-08-15') TO (MAXVALUE)); -~~~ - -### Unpartition a Table - -You can remove the partitions on a table by using the [`PARTITION BY NOTHING`](partition-by.html) syntax: - -{% include copy-clipboard.html %} -~~~ sql -> ALTER TABLE students PARTITION BY NOTHING; -~~~ - -## Locality–Resilience Tradeoff - -There is a tradeoff between making reads/writes fast and surviving failures. Consider a partition with three replicas of `roachlearn.students` for Australian students. - -- If only one replica is pinned to an Australian datacenter, then reads may be fast (via leases follow the workload) but writes will be slow. -- If two replicas are pinned to an Australian datacenter, then reads and writes will be fast (as long as the cross-ocean link has enough bandwidth that the third replica doesn’t fall behind). If those two replicas are in the same datacenter, then the loss of one datacenter can lead to data unavailability, so some deployments may want two separate Australian datacenters. -- If all three replicas are in Australian datacenters, then three Australian datacenters are needed to be resilient to a datacenter loss. - -## How CockroachDB's Partitioning Differs from Other Databases - -Other databases use partitioning for three additional use cases: secondary indexes, sharding, and bulk loading/deleting. CockroachDB addresses these use-cases not by using partitioning, but in the following ways: - -- **Changes to secondary indexes:** CockroachDB solves these changes through online schema changes. Online schema changes are a superior feature to partitioning because they require zero-downtime and eliminate the potential for consistency problems. -- **Sharding:** CockroachDB automatically shards data as a part of its distributed database architecture. -- **Bulk Loading & Deleting:** CockroachDB does not have a feature that supports this use case as of now. - -## Known Limitations - -{% include {{ page.version.version }}/known-limitations/partitioning-with-placeholders.md %} - -## See Also - -- [`CREATE TABLE`](create-table.html) -- [`ALTER TABLE`](alter-table.html) -- [Computed Columns](computed-columns.html) diff --git a/src/current/v2.0/pause-job.md b/src/current/v2.0/pause-job.md deleted file mode 100644 index 9a2c2a960aa..00000000000 --- a/src/current/v2.0/pause-job.md +++ /dev/null @@ -1,55 +0,0 @@ ---- -title: PAUSE JOB -summary: The PAUSE JOB statement lets you temporarily halt the process of potentially long-running jobs. -toc: true ---- - -New in v1.1: The `PAUSE JOB` [statement](sql-statements.html) lets you pause [`BACKUP`](backup.html), [`RESTORE`](restore.html), and [`IMPORT`](import.html) jobs. - -After pausing jobs, you can resume them with [`RESUME JOB`](resume-job.html). 
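For example, pausing and later resuming a job by its ID (the ID shown here is the one from the example later on this page, as returned by `SHOW JOBS`):

~~~ sql
> PAUSE JOB 27536791415282;
> RESUME JOB 27536791415282;
~~~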
- -{{site.data.alerts.callout_info}}You cannot pause schema changes.{{site.data.alerts.end}} - - -## Required Privileges - -By default, only the `root` user can control a job. - -## Synopsis - -
      -{% include {{ page.version.version }}/sql/diagrams/pause_job.html %} -
      - -## Parameters - -Parameter | Description -----------|------------ -`job_id` | The ID of the job you want to pause, which can be found with [`SHOW JOBS`](show-jobs.html). - -## Examples - -### Pause a Restore Job - -~~~ sql -> SHOW JOBS; -~~~ -~~~ -+----------------+---------+-------------------------------------------+... -| id | type | description |... -+----------------+---------+-------------------------------------------+... -| 27536791415282 | RESTORE | RESTORE db.* FROM 'azure://backup/db/tbl' |... -+----------------+---------+-------------------------------------------+... -~~~ -~~~ sql -> PAUSE JOB 27536791415282; -~~~ - -## See Also - -- [`RESUME JOB`](resume-job.html) -- [`SHOW JOBS`](show-jobs.html) -- [`CANCEL JOB`](cancel-job.html) -- [`BACKUP`](backup.html) -- [`RESTORE`](restore.html) -- [`IMPORT`](import.html) \ No newline at end of file diff --git a/src/current/v2.0/performance-benchmarking-with-tpc-c.md b/src/current/v2.0/performance-benchmarking-with-tpc-c.md deleted file mode 100644 index ad77d4e6b4e..00000000000 --- a/src/current/v2.0/performance-benchmarking-with-tpc-c.md +++ /dev/null @@ -1,428 +0,0 @@ ---- -title: Performance Benchmarking with TPC-C -summary: Learn how to benchmark CockroachDB against TPC-C. -toc: true - ---- - -New in v2.0:This page walks you through [TPC-C](http://www.tpc.org/tpcc/) performance benchmarking on CockroachDB. It measures tpmC (new order transactions/minute) on two TPC-C datasets: - -- 1,000 warehouses (for a total dataset size of 200GB) on 3 nodes -- 10,000 warehouses (for a total dataset size of 2TB) on 30 nodes - -These two points on the spectrum show how CockroachDB scales from modest-sized production workloads to larger-scale deployments. This demonstrates how CockroachDB achieves high OLTP performance of over 128,000 tpmC on a TPC-C dataset over 2TB in size. - -## Benchmark a small cluster - -### Step 1. Create 3 Google Cloud Platform GCE instances - -1. [Create 3 instances](https://cloud.google.com/compute/docs/instances/create-start-instance) for your CockroachDB nodes. While creating each instance: - - Use the `n1-highcpu-16` machine type. - - For our TPC-C benchmarking, we use `n1-highcpu-16` machines. Currently, we believe this (or higher vCPU count machines) is the best configuration for CockroachDB under high traffic scenarios. - - [Create and mount a local SSD using a SCSI interface](https://cloud.google.com/compute/docs/disks/local-ssd#create_local_ssd). - - We attach a single local SSD to each virtual machine. Local SSDs are low latency disks attached to each VM, which maximizes performance. We chose this configuration because it best resembles what a bare metal deployment would look like, with machines directly connected to one physical disk each. We do not recommend using network-attached block storage. - - [Optimize the local SSD for write performance](https://cloud.google.com/compute/docs/disks/performance#optimize_local_ssd) (see the **Disable write cache flushing** section). - - To apply the Admin UI firewall rule you created earlier, click **Management, disk, networking, SSH keys**, select the **Networking** tab, and then enter `cockroachdb` in the **Network tags** field. - -2. Note the internal IP address of each `n1-highcpu-16` instance. You'll need these addresses when starting the CockroachDB nodes. - -3. Create a fourth instance for running the TPC-C benchmark. - -{{site.data.alerts.callout_danger}} -This configuration is intended for performance benchmarking only. 
-For production deployments, there are other important considerations, such as ensuring that data is balanced across at least three availability zones for resiliency. See the [Production Checklist](recommended-production-settings.html) for more details.
-{{site.data.alerts.end}}
-
-### Step 2. Start a 3-node cluster
-
-1. SSH to the first `n1-highcpu-16` instance.
-
-2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, extract the binary, and copy it into the `PATH`:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \
-    | tar -xz
-    ~~~
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/
-    ~~~
-
-    If you get a permissions error, prefix the command with `sudo`.
-
-3. Run the [`cockroach start`](start-a-node.html) command:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ cockroach start \
-    --insecure \
-    --advertise-host= \
-    --join=:26257,:26257,:26257 \
-    --cache=.25 \
-    --max-sql-memory=.25 \
-    --background
-    ~~~
-
-4. Repeat steps 1 - 3 for the other two `n1-highcpu-16` instances.
-
-5. On any of the `n1-highcpu-16` instances, run the [`cockroach init`](initialize-a-cluster.html) command:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ cockroach init --insecure --host=localhost
-    ~~~
-
-    Each node then prints helpful details to the [standard output](start-a-node.html#standard-output), such as the CockroachDB version, the URL for the Web UI, and the SQL URL for clients.
-
-### Step 3. Load data for the benchmark
-
-CockroachDB offers a pre-built `workload` binary for Linux that includes several load generators for simulating client traffic against your cluster. This step features CockroachDB's version of the TPC-C workload.
-
-1. SSH to the fourth instance (the one not running a CockroachDB node), download `workload`, and make it executable:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ wget https://edge-binaries.cockroachdb.com/cockroach/workload.LATEST ; chmod 755 workload.LATEST
-    ~~~
-
-2. Rename and copy `workload` into the `PATH`:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ cp -i workload.LATEST /usr/local/bin/workload
-    ~~~
-
-3. Start the TPC-C workload, pointing it at the [connection string of a node](connection-parameters.html#connect-using-a-url) and including any connection parameters:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ workload fixtures load tpcc \
-    --warehouses=1000 \
-    "postgres://root@:26257?sslmode=disable"
-    ~~~
-
-    This command runs the TPC-C workload against the cluster. It takes about an hour and loads 1,000 "warehouses" of data.
-
-    {{site.data.alerts.callout_success}}
-    For more `tpcc` options, use `workload run tpcc --help`. For details about other load generators included in `workload`, use `workload run --help`.
-    {{site.data.alerts.end}}
-
-4. To monitor the load generator's progress, open the [Admin UI](admin-ui-access-and-navigate.html) by pointing a browser to the address in the `admin` field in the standard output of any node on startup, and then follow along on the **Admin UI > Jobs** table.
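-
-    Alternatively, you can follow the same progress from the built-in SQL client on any instance; a minimal check, assuming your build reports import progress in [`SHOW JOBS`](show-jobs.html):
-
-    {% include copy-clipboard.html %}
-    ~~~ sql
-    > SHOW JOBS;
-    ~~~
-
-    The `fraction_completed` column shows how much of the fixture data has been loaded so far.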
-### Step 4. Run the benchmark
-
-Still on the fourth instance, run `workload` for five minutes against the other 3 instances:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ workload run tpcc \
---ramp=30s \
---warehouses=1000 \
---duration=300s \
---split \
---scatter \
-"postgres://root@:26257?sslmode=disable postgres://root@:26257?sslmode=disable postgres://root@:26257?sslmode=disable [...space separated list]"
-~~~
-
-### Step 5. Interpret the results
-
-Once the `workload` has finished running, you should see a final output line:
-
-~~~ shell
-_elapsed_______tpmC____efc__avg(ms)__p50(ms)__p90(ms)__p95(ms)__p99(ms)_pMax(ms)
-  298.9s    13154.0 102.3%     75.1     71.3    113.2    130.0    184.5    436.2
-~~~
-
-You will also see some audit checks and latency statistics for each individual query. For this run, some of those checks might indicate that they were `SKIPPED` due to insufficient data. For a more comprehensive test, run `workload` for a longer duration (e.g., two hours). The `tpmC` (new order transactions/minute) number is the headline number, and `efc` ("efficiency") tells you how close CockroachDB gets to the theoretical maximum `tpmC`.
-
-The [TPC-C specification](http://www.tpc.org/tpc_documents_current_versions/pdf/tpc-c_v5.11.0.pdf) has p90 latency requirements on the order of seconds, but as you can see here, CockroachDB far surpasses that requirement with p90 latencies in the hundreds of milliseconds.
-
-## Benchmark a large cluster
-
-The methodology for reproducing CockroachDB's 30-node, 10,000 warehouse TPC-C result is similar to that for the [3-node, 1,000 warehouse example](#benchmark-a-small-cluster). The only difference (besides the larger node count and dataset) is that you will use CockroachDB's [partitioning](partitioning.html) feature to ensure that replicas for any given section of data are located on the same nodes that will be queried by the load generator for that section of data. Partitioning helps distribute the workload evenly across the cluster.
-
-### Before you start
-
-Benchmarking a large cluster uses [partitioning](partitioning.html). You must have a valid enterprise license to use partitioning features. For details about requesting and setting a trial or full enterprise license, see [Enterprise Licensing](enterprise-licensing.html).
-
-### Step 1. Create 30 Google Cloud Platform GCE instances
-
-1. [Create 30 instances](https://cloud.google.com/compute/docs/instances/create-start-instance) for your CockroachDB nodes. While creating each instance:
-    - Use the `n1-highcpu-16` machine type.
-
-        For our TPC-C benchmarking, we use `n1-highcpu-16` machines. Currently, we believe this (or higher vCPU count machines) is the best configuration for CockroachDB under high traffic scenarios.
-    - [Create and mount a local SSD](https://cloud.google.com/compute/docs/disks/local-ssd#create_local_ssd).
-
-        We attach a single local SSD to each virtual machine. Local SSDs are low-latency disks attached to each VM, which maximizes performance. We chose this configuration because it best resembles what a bare-metal deployment would look like, with machines directly connected to one physical disk each. We do not recommend using network-attached block storage.
-    - [Optimize the local SSD for write performance](https://cloud.google.com/compute/docs/disks/performance#optimize_local_ssd) (see the **Disable write cache flushing** section).
-    - To apply the Admin UI firewall rule you created earlier, click **Management, disk, networking, SSH keys**, select the **Networking** tab, and then enter `cockroachdb` in the **Network tags** field.
-
-2. Note the internal IP address of each `n1-highcpu-16` instance. You'll need these addresses when starting the CockroachDB nodes.
-
-3. Create a 31st instance for running the TPC-C benchmark.
-
-{{site.data.alerts.callout_danger}}
-This configuration is intended for performance benchmarking only. For production deployments, there are other important considerations, such as ensuring that data is balanced across at least three availability zones for resiliency. See the [Production Checklist](recommended-production-settings.html) for more details.
-{{site.data.alerts.end}}
-
-### Step 2. Add an enterprise license
-
-For this benchmark, you will use partitioning, which is an enterprise feature. For details about requesting and setting a trial or full enterprise license, see [Enterprise Licensing](enterprise-licensing.html).
-
-To add an enterprise license to your cluster once it is started, [use the built-in SQL client](use-the-built-in-sql-client.html) as follows:
-
-1. SSH to the 31st instance (the one not running a CockroachDB node) and launch the built-in SQL client:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ cockroach sql --insecure
-    ~~~
-
-2. Add your enterprise license:
-
-    {% include copy-clipboard.html %}
-    ~~~ sql
-    > SET CLUSTER SETTING enterprise.license = '';
-    ~~~
-
-3. Exit the interactive shell, using `\q` or `ctrl-d`.
-
-### Step 3. Start a 30-node cluster
-
-1. SSH to the first `n1-highcpu-16` instance.
-
-2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, extract the binary, and copy it into the `PATH`:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \
-    | tar -xz
-    ~~~
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/
-    ~~~
-
-    If you get a permissions error, prefix the command with `sudo`.
-
-3. Run the [`cockroach start`](start-a-node.html) command:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ cockroach start \
-    --insecure \
-    --advertise-host= \
-    --join=:26257,:26257,:26257, [...] \
-    --cache=.25 \
-    --max-sql-memory=.25 \
-    --locality=rack=1 \
-    --background
-    ~~~
-
-    Each node will start with a [locality](start-a-node.html#locality) that includes an artificial "rack number" (e.g., `--locality=rack=1`). Use 10 racks for the 30 nodes, so that every tenth node is part of the same rack (e.g., `--locality=rack=2`, `--locality=rack=3`, ...) and each rack ends up with 3 nodes.
-
-4. Repeat steps 1 - 3 for the other 29 `n1-highcpu-16` instances.
-
-5. On any of the `n1-highcpu-16` instances, run the [`cockroach init`](initialize-a-cluster.html) command:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ cockroach init --insecure --host=localhost
-    ~~~
-
-    Each node then prints helpful details to the [standard output](start-a-node.html#standard-output), such as the CockroachDB version, the URL for the Web UI, and the SQL URL for clients.
-
-### Step 4. Load data for the benchmark
-
-CockroachDB offers a pre-built `workload` binary for Linux that includes several load generators for simulating client traffic against your cluster.
-This step features CockroachDB's version of the [TPC-C](http://www.tpc.org/tpcc/) workload.
-
-1. Still on the 31st instance (the one not running a CockroachDB node), download `workload`, and make it executable:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ wget https://edge-binaries.cockroachdb.com/cockroach/workload.LATEST ; chmod 755 workload.LATEST
-    ~~~
-
-2. Rename and copy `workload` into the `PATH`:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ cp -i workload.LATEST /usr/local/bin/workload
-    ~~~
-
-3. Start the TPC-C workload, pointing it at the [connection string of a node](connection-parameters.html#connect-using-a-url) and including any connection parameters:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ workload fixtures load tpcc \
-    --warehouses=10000 \
-    "postgres://root@:26257?sslmode=disable"
-    ~~~
-
-    This command runs the TPC-C workload against the cluster. It takes about an hour and loads 10,000 "warehouses" of data.
-
-    {{site.data.alerts.callout_success}}
-    For more `tpcc` options, use `workload run tpcc --help`. For details about other load generators included in `workload`, use `workload run --help`.
-    {{site.data.alerts.end}}
-
-4. To monitor the load generator's progress, open the [Admin UI](admin-ui-access-and-navigate.html) by pointing a browser to the address in the `admin` field in the standard output of any node on startup, and then follow along on the **Admin UI > Jobs** table.
-
-### Step 5. Increase the snapshot rate
-
-To [increase the snapshot rate](cluster-settings.html), which helps speed up this large-scale data movement:
-
-1. Still on the 31st instance, launch the built-in SQL client:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ cockroach sql --insecure
-    ~~~
-
-2. Set the cluster setting to increase the snapshot rate:
-
-    {% include copy-clipboard.html %}
-    ~~~ sql
-    > SET CLUSTER SETTING kv.snapshot_rebalance.max_rate='64MiB';
-    ~~~
-
-3. Exit the interactive shell, using `\q` or `ctrl-d`.
-
-### Step 6. Partition the database
-
-Next, [partition your database](partitioning.html) to divide all of the TPC-C tables and indexes into ten partitions, one per rack, and then use [zone configurations](configure-replication-zones.html) to pin those partitions to a particular rack.
-
-1. Still on the 31st instance, start the partitioning:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ ulimit -n 10000 && workload run tpcc \
-    --partitions=10 \
-    --split \
-    --scatter \
-    --warehouses=10000 \
-    --duration=1s \
-    "postgres://root@:26257?sslmode=disable"
-    ~~~
-
-    This command runs the TPC-C workload against the cluster for 1 second, long enough to add the partitions.
-
-    Partitioning the data will take at least 12 hours. It takes this long because all of the data (over 2TB replicated for TPC-C-10K) needs to be moved to the right locations.
-
-2. To watch the progress, follow along on the **Admin UI > Metrics > Queues > Replication Queue** graph. Change the timeframe to **Last 10 Min** to view a more granular graph.
-
-    Open the [Admin UI](admin-ui-access-and-navigate.html) by pointing a browser to the address in the `admin` field in the standard output of any node on startup.
-
-    Once the Replication Queue gets to `0` for all actions and stays there, the cluster should be finished rebalancing and is ready for testing.
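-
-3. Optionally, spot-check where the partitioned data landed before running the benchmark. A minimal check from the built-in SQL client, assuming your build supports the `SHOW EXPERIMENTAL_RANGES` statement and that the load generator created its tables in a `tpcc` database:
-
-    {% include copy-clipboard.html %}
-    ~~~ sql
-    > SHOW EXPERIMENTAL_RANGES FROM TABLE tpcc.warehouse;
-    ~~~
-
-    Once rebalancing is complete, the replicas reported for each partition's ranges should be concentrated on the nodes of a single rack.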
-### Step 7. Run the benchmark
-
-Still on the 31st instance, run `workload` for five minutes against the other 30 instances:
-
-~~~ shell
-$ ulimit -n 10000 && workload run tpcc \
---warehouses=10000 \
---ramp=30s \
---duration=300s \
---split \
---scatter \
-"postgres://root@:26257?sslmode=disable postgres://root@:26257?sslmode=disable postgres://root@:26257?sslmode=disable [...space separated list]"
-~~~
-
-### Step 8. Interpret the results
-
-Once the `workload` has finished running, you should see a final output line similar to the output in [Benchmark a small cluster](#benchmark-a-small-cluster). The `tpmC` should be about 10x higher, reflecting the increase in the number of warehouses:
-
-~~~ shell
-_elapsed_______tpmC____efc__avg(ms)__p50(ms)__p90(ms)__p95(ms)__p99(ms)_pMax(ms)
-  291.6s   131109.8 102.0%    115.3     88.1    184.5    268.4    637.5   4295.0
-~~~
-
-You will also see some audit checks and latency statistics for each individual query. For this run, some of those checks might indicate that they were `SKIPPED` due to insufficient data. For a more comprehensive test, run `workload` for a longer duration (e.g., two hours). The `tpmC` (new order transactions/minute) number is the headline number, and `efc` ("efficiency") tells you how close CockroachDB gets to the theoretical maximum `tpmC`.
-
-The [TPC-C specification](http://www.tpc.org/tpc_documents_current_versions/pdf/tpc-c_v5.11.0.pdf) has p90 latency requirements on the order of seconds, but as you can see here, CockroachDB far surpasses that requirement with p90 latencies in the hundreds of milliseconds.
-
-
-
-## See also
-
-- [Benchmarking CockroachDB 2.0: A Performance Report](https://www.cockroachlabs.com/guides/cockroachdb-performance/)
-- [SQL Performance Best Practices](performance-best-practices-overview.html)
-- [Deploy CockroachDB on Digital Ocean](deploy-cockroachdb-on-digital-ocean.html)
diff --git a/src/current/v2.0/performance-best-practices-overview.md b/src/current/v2.0/performance-best-practices-overview.md
deleted file mode 100644
index 485fb22904e..00000000000
--- a/src/current/v2.0/performance-best-practices-overview.md
+++ /dev/null
@@ -1,317 +0,0 @@
----
-title: SQL Performance Best Practices
-summary: Best practices for optimizing SQL performance in CockroachDB.
-toc: true
----
-
-This page provides best practices for optimizing SQL performance in CockroachDB.
-
-{{site.data.alerts.callout_success}}
-For a demonstration of some of these techniques, see [Performance Tuning](performance-tuning.html).
-{{site.data.alerts.end}}
-
-## Multi-Row DML Best Practices
-
-### Use Multi-Row DML instead of Multiple Single-Row DMLs
-
-For `INSERT`, `UPSERT`, and `DELETE` statements, a single multi-row DML is faster than multiple single-row DMLs. Whenever possible, use multi-row DML instead of multiple single-row DMLs.
-
-For more information, see:
-
-- [Insert Multiple Rows](insert.html#insert-multiple-rows-into-an-existing-table)
-- [Upsert Multiple Rows](upsert.html#upsert-multiple-rows)
-- [Delete Multiple Rows](delete.html#delete-specific-rows)
-- [How to improve IoT application performance with multi-row DML](https://www.cockroachlabs.com/blog/multi-row-dml/)
-
-### Use `TRUNCATE` instead of `DELETE` to Delete All Rows in a Table
-
-The [`TRUNCATE`](truncate.html) statement removes all rows from a table by dropping the table and recreating a new table with the same name. This performs better than using `DELETE`, which performs multiple transactions to delete all rows.
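-
-For example, to clear a hypothetical `events` table, either of the following statements removes every row, but the second avoids the row-by-row work:
-
-~~~ sql
-> -- Deletes all rows transactionally, row by row:
-> DELETE FROM events;
-
-> -- Drops and recreates the table, which is much faster for large tables:
-> TRUNCATE events;
-~~~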
-
-## Bulk Insert Best Practices
-
-### Use Multi-Row `INSERT` Statements for Bulk Inserts into Existing Tables
-
-To bulk-insert data into an existing table, batch multiple rows in one multi-row `INSERT` statement and do not include the `INSERT` statements within a transaction. Experimentally determine the optimal batch size for your application by monitoring the performance for different batch sizes (10 rows, 100 rows, 1000 rows). For more information, see [Insert Multiple Rows](insert.html#insert-multiple-rows-into-an-existing-table).
-
-### Use `IMPORT` instead of `INSERT` for Bulk Inserts into New Tables
-
-To bulk-insert data into a brand new table, the [`IMPORT`](import.html) statement performs better than `INSERT`.
-
-## Execute Statements in Parallel
-
-CockroachDB supports parallel execution of [independent](parallel-statement-execution.html#when-to-use-parallel-statement-execution) [`INSERT`](insert.html), [`UPDATE`](update.html), [`UPSERT`](upsert.html), and [`DELETE`](delete.html) statements within a single [transaction](transactions.html). Executing statements in parallel helps reduce aggregate latency and improve performance. To execute statements in parallel, append the `RETURNING NOTHING` clause to the statements in a transaction. For more information, see [Parallel Statement Execution](parallel-statement-execution.html).
-
-## Assign Column Families
-
-A column family is a group of columns in a table that is stored as a single key-value pair in the underlying key-value store.
-
-When a table is created, all columns are stored as a single column family. This default approach ensures efficient key-value storage and performance in most cases. However, when frequently updated columns are grouped with seldom updated columns, the seldom updated columns are nonetheless rewritten on every update. Especially when the seldom updated columns are large, it's therefore more performant to [assign them to a distinct column family](column-families.html).
-
-## Interleave Tables
-
-[Interleaving tables](interleave-in-parent.html) improves query performance by optimizing the key-value structure of closely related tables, attempting to keep data on the same key-value range if it's likely to be read and written together. This is particularly helpful when the tables are frequently joined on the columns that make up the interleaving relationship.
-
-## Unique ID Best Practices
-
-A traditional approach for generating unique IDs is one of the following:
-
-- Monotonically increase `INT` IDs by using transactions with roundtrip `SELECT`s.
-- Use the [`SERIAL`](serial.html) pseudo-type for a column to auto-generate unique IDs on insert.
-
-The first approach does not take advantage of the parallelization possible in a distributed database like CockroachDB.
-
-The bottleneck with the second approach is that IDs generated temporally near each other have similar values and are located physically near each other in a table. This can cause a hotspot for reads and writes in a table.
-
-The best practice in CockroachDB is to generate unique IDs using the `UUID` type, which generates random unique IDs in parallel, thus improving performance.
-
-### Use `UUID` to Generate Unique IDs
-
-{% include {{ page.version.version }}/faq/auto-generate-unique-ids.html %}
-
-### Use `INSERT` with the `RETURNING` Clause to Generate Unique IDs
-
-If something prevents you from using `UUID` to generate unique IDs, you might resort to using `INSERT`s with `SELECT`s to return IDs.
Instead, [use the `RETURNING` clause with the `INSERT` statement](insert.html#insert-and-return-values) for improved performance. - -#### Generate Monotonically-Increasing Unique IDs - -Suppose the table schema is as follows: - -~~~ sql -> CREATE TABLE X ( - ID1 INT, - ID2 INT, - ID3 INT DEFAULT 1, - PRIMARY KEY (ID1,ID2) - ); -~~~ - -The common approach would be to use a transaction with an `INSERT` followed by a `SELECT`: - -~~~ sql -> BEGIN; - -> INSERT INTO X VALUES (1,1,1) - ON CONFLICT (ID1,ID2) - DO UPDATE SET ID3=X.ID3+1; - -> SELECT * FROM X WHERE ID1=1 AND ID2=1; - -> COMMIT; -~~~ - -However, the performance best practice is to use a `RETURNING` clause with `INSERT` instead of the transaction: - -~~~ sql -> INSERT INTO X VALUES (1,1,1),(2,2,2),(3,3,3) - ON CONFLICT (ID1,ID2) - DO UPDATE SET ID3=X.ID3 + 1 - RETURNING ID1,ID2,ID3; -~~~ - -#### Generate Random Unique IDs - -Suppose the table schema is as follows: - -~~~ sql -> CREATE TABLE X ( - ID1 INT, - ID2 INT, - ID3 INT DEFAULT unique_rowid(), - PRIMARY KEY (ID1,ID2) - ); -~~~ - -The common approach to generate random Unique IDs is a transaction using a `SELECT` statement: - -~~~ sql -> BEGIN; - -> INSERT INTO X VALUES (1,1); - -> SELECT * FROM X WHERE ID1=1 AND ID2=1; - -> COMMIT; -~~~ - -However, the performance best practice is to use a `RETURNING` clause with `INSERT` instead of the transaction: - -~~~ sql -> INSERT INTO X VALUES (1,1),(2,2),(3,3) - RETURNING ID1,ID2,ID3; -~~~ - -## Indexes Best Practices - -### Use Secondary Indexes - -You can use secondary indexes to improve the performance of queries using columns not in a table's primary key. You can create them: - -- At the same time as the table with the `INDEX` clause of [`CREATE TABLE`](create-table.html#create-a-table-with-secondary-and-inverted-indexes-new-in-v2-0). In addition to explicitly defined indexes, CockroachDB automatically creates secondary indexes for columns with the [Unique constraint](unique.html). -- For existing tables with [`CREATE INDEX`](create-index.html). -- By applying the Unique constraint to columns with [`ALTER TABLE`](alter-table.html), which automatically creates an index of the constrained columns. - -To create the most useful secondary indexes, check out our [best practices](indexes.html#best-practices). - -### Use Indexes for Faster Joins - -CockroachDB supports both [merge joins](https://en.wikipedia.org/wiki/Sort-merge_join) and [hash joins](https://en.wikipedia.org/wiki/Hash_join). CockroachDB uses merge joins whenever possible because they are more performant than hash joins computationally and in terms of memory. However, merge joins are possible only when the tables being joined are indexed on the relevant columns; when this condition is not met, CockroachDB resorts to the slower hash joins. - -#### Why are merge joins faster than hash joins? - -Merge joins are computationally less expensive and do not require additional memory. They are performed on the indexed columns of two tables as follows: - -- CockroachDB takes one row from each table and compares them. -- If the rows are equal, CockroachDB returns the rows. -- If the rows are not equal, CockroachDB discards the lower-value row and repeats the process with the next row until all rows are processed. - -In contrast, hash joins are computationally expensive and require additional memory. They are performed on two tables as follows: - -- CockroachDB creates an in-memory hash table on the smaller table. 
-- CockroachDB then uses the hash table and scans the larger table to find matching rows from the smaller table.
-
-#### Why create indexes to perform merge joins?
-
-A merge join requires both tables to be indexed on the merge columns. If this condition is not met, CockroachDB resorts to the slower hash joins. So when you plan to `JOIN` two tables, first create indexes on the join columns and then use the `JOIN` operator.
-
-Also note that merge joins can be used only with [distributed query processing](https://www.cockroachlabs.com/blog/local-and-distributed-processing-in-cockroachdb/).
-
-### Drop Unused Indexes
-
-Though indexes improve read performance, they incur an overhead for every write. In some cases, like the use cases discussed above, the tradeoff is worth it. However, if an index is unused, it slows down DML operations. Therefore, [drop unused indexes](drop-index.html) whenever possible.
-
-## Join Best Practices
-
-See [Join Performance Best Practices](joins.html#performance-best-practices).
-
-## Subquery Best Practices
-
-See [Subquery Performance Best Practices](subqueries.html#performance-best-practices).
-
-## Table Scans Best Practices
-
-### Avoid `SELECT *` for Large Tables
-
-For large tables, avoid table scans (that is, reading the entire table data) whenever possible. Instead, define the required fields in a `SELECT` statement.
-
-#### Example
-
-Suppose the table schema is as follows:
-
-~~~ sql
-> CREATE TABLE accounts (
-   id INT,
-   customer STRING,
-   address STRING,
-   balance INT,
-   nominee STRING
-   );
-~~~
-
-Now, if we want to find the account balances of all customers, an inefficient query that scans the entire table would be:
-
-~~~ sql
-> SELECT * FROM ACCOUNTS;
-~~~
-
-This query retrieves all data stored in the table. A more efficient query would be:
-
-~~~ sql
-> SELECT CUSTOMER, BALANCE FROM ACCOUNTS;
-~~~
-
-This query returns only the account balances of the customers.
-
-### Avoid `SELECT DISTINCT` for Large Tables
-
-`SELECT DISTINCT` allows you to obtain unique entries from a query by removing duplicate entries. However, `SELECT DISTINCT` is computationally expensive. As a performance best practice, use [`SELECT` with the `WHERE` clause](select-clause.html#filter-rows) instead.
-
-### Use `AS OF SYSTEM TIME` to Decrease Conflicts with Long-Running Queries
-
-If you have long-running queries (such as analytics queries that perform full table scans) that can tolerate slightly out-of-date reads, consider using the [`... AS OF SYSTEM TIME` clause](select-clause.html#select-historical-data-time-travel). Using this clause, your query returns data as it appeared at a distinct point in the past and will not cause [conflicts](architecture/transaction-layer.html#transaction-conflicts) with other concurrent transactions, which can increase your application's performance.
-
-However, because `AS OF SYSTEM TIME` returns historical data, your reads might be stale.
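-
-For example, a long-running report against the `accounts` table defined earlier on this page could read from a fixed historical timestamp (the timestamp shown is illustrative):
-
-~~~ sql
-> SELECT customer, balance FROM accounts AS OF SYSTEM TIME '2018-10-01 12:00:00';
-~~~
-
-Because this read does not conflict with concurrent writes to `accounts`, it is a good fit for analytics queries that can tolerate slightly stale results.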
-
-## Understanding and Avoiding Transaction Contention
-
-Transaction contention occurs when the following three conditions are met:
-
-- There are multiple concurrent transactions or statements (sent by multiple clients connected simultaneously to a single CockroachDB cluster).
-- They operate on the same data, specifically over table rows with the same index key values (either on [primary keys](primary-key.html) or secondary [indexes](indexes.html), or via [interleaving](interleave-in-parent.html)) or using index key values that are close to each other, and thus place the indexed data on the same [data ranges](architecture/overview.html).
-- At least some of the transactions write or modify the data.
-
-A set of transactions that all contend on the same keys will be limited in performance to the maximum processing speed of a single node (limited horizontal scalability). Non-contended transactions are not affected in this way.
-
-There are two levels of contention:
-
-- Transactions that operate on the same range but different index key values will be limited by the overall hardware capacity of a single node (the range lease holder).
-
-- Transactions that operate on the same index key values (specifically, that operate on the same [column family](column-families.html) for a given index key) will be more strictly serialized to obey transaction isolation semantics.
-
-Transaction contention can also increase the rate of transaction restarts, and thus make the proper implementation of [client-side transaction retries](transactions.html#client-side-transaction-retries) more critical.
-
-To avoid contention, multiple strategies can be applied:
-
-- Use index key values with a more random distribution of values, so that transactions over different rows are more likely to operate on separate data ranges. See the [SQL FAQs](sql-faqs.html) on row IDs for suggestions.
-
-- Make transactions smaller, so that each transaction has less work to do. In particular, avoid multiple client-server exchanges per transaction. For example, use [common table expressions](common-table-expressions.html) to group multiple [`SELECT`](select-clause.html) and [`INSERT`](insert.html)/[`UPDATE`](update.html)/[`DELETE`](delete.html)/[`UPSERT`](upsert.html) clauses together in a single SQL statement.
-
-- When replacing values in a row, use [`UPSERT`](upsert.html) and specify values for all columns in the inserted rows. This will usually have the best performance under contention, compared to combinations of [`SELECT`](select-clause.html), [`INSERT`](insert.html), and [`UPDATE`](update.html).
-
-- Increase [normalization](https://en.wikipedia.org/wiki/Database_normalization) of the data to place parts of the same records that are modified by different transactions in different tables. Note however that this is a double-edged sword, because denormalization can also increase performance by creating multiple copies of often-referenced data in separate ranges.
-
-- If the application strictly requires operating on very few different index key values, consider using [`ALTER ... SPLIT AT`](split-at.html) so that each index key value can be served by a separate group of nodes in the cluster.
-
-It is always best to avoid contention as much as possible via the design of the schema and application. However, sometimes contention is unavoidable. To maximize performance in the presence of contention, you'll need to maximize the performance of a single range. To achieve this, multiple strategies can be applied:
-
-- Minimize the network distance between the replicas of a range, possibly using zone configs and partitioning.
-- Use the fastest storage devices available.
-- If the contending transactions operate on different keys within the - same range, add more CPU power (more cores) per node. Note however - that this is less likely to provide an improvement if the - transactions all operate on the same key. diff --git a/src/current/v2.0/performance-tuning.md b/src/current/v2.0/performance-tuning.md deleted file mode 100644 index e04f54f6171..00000000000 --- a/src/current/v2.0/performance-tuning.md +++ /dev/null @@ -1,2262 +0,0 @@ ---- -title: Performance Tuning -summary: Essential techniques for getting fast reads and writes in a single- and multi-region CockroachDB deployment. -toc: true ---- - -This tutorial shows you essential techniques for getting fast reads and writes in CockroachDB, starting with a single-region deployment and expanding into multiple regions. - -For a comprehensive list of tuning recommendations, only some of which are demonstrated here, see [SQL Performance Best Practices](performance-best-practices-overview.html). - -## Overview - -### Topology - -You'll start with a 3-node CockroachDB cluster in a single Google Compute Engine (GCE) zone, with an extra instance for running a client application workload: - -Perf tuning topology - -{{site.data.alerts.callout_info}} -Within a single GCE zone, network latency between instances should be sub-millisecond. -{{site.data.alerts.end}} - -You'll then scale the cluster to 9 nodes running across 3 GCE regions, with an extra instance in each region for a client application workload: - -Perf tuning topology - -To reproduce the performance demonstrated in this tutorial: - -- For each CockroachDB node, you'll use the [`n1-standard-4`](https://cloud.google.com/compute/docs/machine-types#standard_machine_types) machine type (4 vCPUs, 15 GB memory) with the Ubuntu 16.04 OS image and a [local SSD](https://cloud.google.com/compute/docs/disks/#localssds) disk. -- For running the client application workload, you'll use smaller instances, such as `n1-standard-1`. - -### Schema - -Your schema and data will be based on the fictional peer-to-peer vehicle-sharing app, MovR, that was featured in the [CockroachDB 2.0 demo](https://www.youtube.com/watch?v=v2QK5VgLx6E): - -Perf tuning schema - -A few notes about the schema: - -- There are just three self-explanatory tables: In essence, `users` represents the people registered for the service, `vehicles` represents the pool of vehicles for the service, and `rides` represents when and where users have participated. -- Each table has a composite primary key, with `city` being first in the key. Although not necessary initially in the single-region deployment, once you scale the cluster to multiple regions, these compound primary keys will enable you to [geo-partition data at the row level](partitioning.html#partition-using-primary-key) by `city`. As such, this tutorial demonstrates a schema designed for future scaling. -- The [`IMPORT`](import.html) feature you'll use to import the data does not support foreign keys, so you'll import the data without [foreign key constraints](foreign-key.html). However, the import will create the secondary indexes required to add the foreign keys later. -- The `rides` table contains both `city` and the seemingly redundant `vehicle_city`. This redundancy is necessary because, while it is not possible to apply more than one foreign key constraint to a single column, you will need to apply two foreign key constraints to the `rides` table, and each will require city as part of the constraint. 
The duplicate `vehicle_city`, which is kept in sync with `city` via a [`CHECK` constraint](check.html), lets you overcome [this limitation](https://github.com/cockroachdb/cockroach/issues/23580). - -### Important concepts - -To understand the techniques in this tutorial, and to be able to apply them in your own scenarios, it's important to first review some important [CockroachDB architectural concepts](architecture/overview.html): - -{% include {{ page.version.version }}/misc/basic-terms.md %} - -As mentioned above, when a query is executed, the cluster routes the request to the leaseholder for the range containing the relevant data. If the query touches multiple ranges, the request goes to multiple leaseholders. For a read request, only the leaseholder of the relevant range retrieves the data. For a write request, the Raft consensus protocol dictates that a majority of the replicas of the relevant range must agree before the write is committed. - -Let's consider how these mechanics play out in some hypothetical queries. - -#### Read scenario - -First, imagine a simple read scenario where: - -- There are 3 nodes in the cluster. -- There are 3 small tables, each fitting in a single range. -- Ranges are replicated 3 times (the default). -- A query is executed against node 2 to read from table 3. - -Perf tuning concepts - -In this case: - -1. Node 2 (the gateway node) receives the request to read from table 3. -2. The leaseholder for table 3 is on node 3, so the request is routed there. -3. Node 3 returns the data to node 2. -4. Node 2 responds to the client. - -If the query is received by the node that has the leaseholder for the relevant range, there are fewer network hops: - -Perf tuning concepts - -#### Write scenario - -Now imagine a simple write scenario where a query is executed against node 3 to write to table 1: - -Perf tuning concepts - -In this case: - -1. Node 3 (the gateway node) receives the request to write to table 1. -2. The leaseholder for table 1 is on node 1, so the request is routed there. -3. The leaseholder is the same replica as the Raft leader (as is typical), so it simultaneously appends the write to its own Raft log and notifies its follower replicas on nodes 2 and 3. -4. As soon as one follower has appended the write to its Raft log (and thus a majority of replicas agree based on identical Raft logs), it notifies the leader and the write is committed to the key-values on the agreeing replicas. In this diagram, the follower on node 2 acknowledged the write, but it could just as well have been the follower on node 3. Also note that the follower not involved in the consensus agreement usually commits the write very soon after the others. -5. Node 1 returns acknowledgement of the commit to node 3. -6. Node 3 responds to the client. - -Just as in the read scenario, if the write request is received by the node that has the leaseholder and Raft leader for the relevant range, there are fewer network hops: - -Perf tuning concepts - -#### Network and I/O bottlenecks - -With the above examples in mind, it's always important to consider network latency and disk I/O as potential performance bottlenecks. In summary: - -- For reads, hops between the gateway node and the leaseholder add latency. -- For writes, hops between the gateway node and the leaseholder/Raft leader, and hops between the leaseholder/Raft leader and Raft followers, add latency. In addition, since Raft log entries are persisted to disk before a write is committed, disk I/O is important. 
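-
-Once you have imported the dataset later in this tutorial, you can see these mechanics for yourself by checking where the leaseholders for a table's ranges live. A minimal check, assuming your build supports the `SHOW EXPERIMENTAL_RANGES` statement:
-
-~~~ sql
-> SHOW EXPERIMENTAL_RANGES FROM TABLE rides;
-~~~
-
-Comparing the lease holder reported for each range against the node your client connects to (the gateway) tells you whether reads of that data require an extra network hop.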
- -## Single-region deployment - - - -### Step 1. Configure your network - -CockroachDB requires TCP communication on two ports: - -- **26257** (`tcp:26257`) for inter-node communication (i.e., working as a cluster) -- **8080** (`tcp:8080`) for accessing the Web UI - -Since GCE instances communicate on their internal IP addresses by default, you do not need to take any action to enable inter-node communication. However, if you want to access the Web UI from your local network, you must [create a firewall rule for your project](https://cloud.google.com/vpc/docs/using-firewalls): - -Field | Recommended Value -------|------------------ -Name | **cockroachweb** -Source filter | IP ranges -Source IP ranges | Your local network's IP ranges -Allowed protocols | **tcp:8080** -Target tags | `cockroachdb` - -{{site.data.alerts.callout_info}} -The **tag** feature will let you easily apply the rule to your instances. -{{site.data.alerts.end}} - -### Step 2. Create instances - -You'll start with a 3-node CockroachDB cluster in the `us-east1-b` GCE zone, with an extra instance for running a client application workload. - -1. [Create 3 instances](https://cloud.google.com/compute/docs/instances/create-start-instance) for your CockroachDB nodes. While creating each instance: - - Select the `us-east1-b` [zone](https://cloud.google.com/compute/docs/regions-zones/). - - Use the `n1-standard-4` machine type (4 vCPUs, 15 GB memory). - - Use the Ubuntu 16.04 OS image. - - [Create and mount a local SSD](https://cloud.google.com/compute/docs/disks/local-ssd#create_local_ssd). - - To apply the Web UI firewall rule you created earlier, click **Management, disk, networking, SSH keys**, select the **Networking** tab, and then enter `cockroachdb` in the **Network tags** field. - -2. Note the internal IP address of each `n1-standard-4` instance. You'll need these addresses when starting the CockroachDB nodes. - -3. Create a separate instance for running a client application workload, also in the `us-east1-b` zone. This instance can be smaller, such as `n1-standard-1`. - -### Step 3. Start a 3-node cluster - -1. SSH to the first `n1-standard-4` instance. - -2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, extract the binary, and copy it into the `PATH`: - - {% include copy-clipboard.html %} - ~~~ shell - $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \ - | tar -xz - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ sudo cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/ - ~~~ - -3. Run the [`cockroach start`](start-a-node.html) command: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach start \ - --insecure \ - --advertise-host= \ - --join=:26257,:26257,:26257 \ - --locality=cloud=gce,region=us-east1,zone=us-east1-b \ - --cache=.25 \ - --max-sql-memory=.25 \ - --background - ~~~ - -4. Repeat steps 1 - 3 for the other two `n1-standard-4` instances. - -5. On any of the `n1-standard-4` instances, run the [`cockroach init`](initialize-a-cluster.html) command: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach init --insecure --host=localhost - ~~~ - - Each node then prints helpful details to the [standard output](start-a-node.html#standard-output), such as the CockroachDB version, the URL for the Web UI, and the SQL URL for clients. - -### Step 4. 
Import the Movr dataset - -Now you'll import Movr data representing users, vehicles, and rides in 3 eastern US cities (New York, Boston, and Washington DC) and 3 western US cities (Los Angeles, San Francisco, and Seattle). - -1. SSH to the fourth instance, the one not running a CockroachDB node. - -2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, and extract the binary: - - {% include copy-clipboard.html %} - ~~~ shell - $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \ - | tar -xz - ~~~ - -3. Copy the binary into the `PATH`: - - {% include copy-clipboard.html %} - ~~~ shell - $ sudo cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/ - ~~~ - -4. Start the [built-in SQL shell](use-the-built-in-sql-client.html), pointing it at one of the CockroachDB nodes: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach sql --insecure --host=
      - ~~~ - -5. Create the `movr` database and set it as the default: - - {% include copy-clipboard.html %} - ~~~ sql - > CREATE DATABASE movr; - ~~~ - - {% include copy-clipboard.html %} - ~~~ sql - > SET DATABASE = movr; - ~~~ - -6. Use the [`IMPORT`](import.html) statement to create and populate the `users`, `vehicles,` and `rides` tables: - - {% include copy-clipboard.html %} - ~~~ sql - > IMPORT TABLE users ( - id UUID NOT NULL, - city STRING NOT NULL, - name STRING NULL, - address STRING NULL, - credit_card STRING NULL, - CONSTRAINT "primary" PRIMARY KEY (city ASC, id ASC) - ) - CSV DATA ( - 'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/users/n1.0.csv' - ); - ~~~ - - ~~~ - +--------------------+-----------+--------------------+------+---------------+----------------+-------+ - | job_id | status | fraction_completed | rows | index_entries | system_records | bytes | - +--------------------+-----------+--------------------+------+---------------+----------------+-------+ - | 370636591722889217 | succeeded | 1 | 0 | 0 | 0 | 0 | - +--------------------+-----------+--------------------+------+---------------+----------------+-------+ - (1 row) - - Time: 3.409449563s - ~~~ - - {% include copy-clipboard.html %} - ~~~ sql - > IMPORT TABLE vehicles ( - id UUID NOT NULL, - city STRING NOT NULL, - type STRING NULL, - owner_id UUID NULL, - creation_time TIMESTAMP NULL, - status STRING NULL, - ext JSON NULL, - mycol STRING NULL, - CONSTRAINT "primary" PRIMARY KEY (city ASC, id ASC), - INDEX vehicles_auto_index_fk_city_ref_users (city ASC, owner_id ASC) - ) - CSV DATA ( - 'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/vehicles/n1.0.csv' - ); - ~~~ - - ~~~ - +--------------------+-----------+--------------------+------+---------------+----------------+-------+ - | job_id | status | fraction_completed | rows | index_entries | system_records | bytes | - +--------------------+-----------+--------------------+------+---------------+----------------+-------+ - | 370636877487505409 | succeeded | 1 | 0 | 0 | 0 | 0 | - +--------------------+-----------+--------------------+------+---------------+----------------+-------+ - (1 row) - - Time: 5.646142826s - ~~~ - - {% include copy-clipboard.html %} - ~~~ sql - > IMPORT TABLE rides ( - id UUID NOT NULL, - city STRING NOT NULL, - vehicle_city STRING NULL, - rider_id UUID NULL, - vehicle_id UUID NULL, - start_address STRING NULL, - end_address STRING NULL, - start_time TIMESTAMP NULL, - end_time TIMESTAMP NULL, - revenue DECIMAL(10,2) NULL, - CONSTRAINT "primary" PRIMARY KEY (city ASC, id ASC), - INDEX rides_auto_index_fk_city_ref_users (city ASC, rider_id ASC), - INDEX rides_auto_index_fk_vehicle_city_ref_vehicles (vehicle_city ASC, vehicle_id ASC), - CONSTRAINT check_vehicle_city_city CHECK (vehicle_city = city) - ) - CSV DATA ( - 'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/rides/n1.0.csv', - 'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/rides/n1.1.csv', - 'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/rides/n1.2.csv', - 'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/rides/n1.3.csv', - 'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/rides/n1.4.csv', - 'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/rides/n1.5.csv', - 'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/rides/n1.6.csv', - 
'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/rides/n1.7.csv', - 'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/rides/n1.8.csv', - 'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/rides/n1.9.csv' - ); - ~~~ - - ~~~ - +--------------------+-----------+--------------------+------+---------------+----------------+-------+ - | job_id | status | fraction_completed | rows | index_entries | system_records | bytes | - +--------------------+-----------+--------------------+------+---------------+----------------+-------+ - | 370636986413285377 | succeeded | 1 | 0 | 0 | 0 | 0 | - +--------------------+-----------+--------------------+------+---------------+----------------+-------+ - (1 row) - - Time: 42.781522085s - ~~~ - - {{site.data.alerts.callout_success}} - You can observe the progress of imports as well as all schema change operations (e.g., adding secondary indexes) on the [**Jobs** page](admin-ui-jobs-page.html) of the Web UI. - {{site.data.alerts.end}} - -7. Logically, there should be a number of [foreign key](foreign-key.html) relationships between the tables: - - Referencing columns | Referenced columns - --------------------|------------------- - `vehicles.city`, `vehicles.owner_id` | `users.city`, `users.id` - `rides.city`, `rides.rider_id` | `users.city`, `users.id` - `rides.vehicle_city`, `rides.vehicle_id` | `vehicles.city`, `vehicles.id` - - As mentioned earlier, it wasn't possible to put these relationships in place during `IMPORT`, but it was possible to create the required secondary indexes. Now, let's add the foreign key constraints: - - {% include copy-clipboard.html %} - ~~~ sql - > ALTER TABLE vehicles - ADD CONSTRAINT fk_city_ref_users - FOREIGN KEY (city, owner_id) - REFERENCES users (city, id); - ~~~ - - {% include copy-clipboard.html %} - ~~~ sql - > ALTER TABLE rides - ADD CONSTRAINT fk_city_ref_users - FOREIGN KEY (city, rider_id) - REFERENCES users (city, id); - ~~~ - - {% include copy-clipboard.html %} - ~~~ sql - > ALTER TABLE rides - ADD CONSTRAINT fk_vehicle_city_ref_vehicles - FOREIGN KEY (vehicle_city, vehicle_id) - REFERENCES vehicles (city, id); - ~~~ - -8. Exit the built-in SQL shell: - - {% include copy-clipboard.html %} - ~~~ sql - > \q - ~~~ - -### Step 5. Install the Python client - -When measuring SQL performance, it's best to run a given statement multiple times and look at the average and/or cumulative latency. For that purpose, you'll install and use a Python testing client. - -1. Still on the fourth instance, make sure all of the system software is up-to-date: - - {% include copy-clipboard.html %} - ~~~ shell - $ sudo apt-get update && sudo apt-get -y upgrade - ~~~ - -2. Install the `psycopg2` driver: - - {% include copy-clipboard.html %} - ~~~ shell - $ sudo apt-get install python-psycopg2 - ~~~ - -3. Download the Python client: - - {% include copy-clipboard.html %} - ~~~ shell - $ wget https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/{{ page.version.version }}/performance/tuning.py \ - && chmod +x tuning.py - ~~~ - - As you'll see below, this client lets you pass command-line flags: - - Flag | Description - -----|------------ - `--host` | The IP address of the target node. This is used in the client's connection string. - `--statement` | The SQL statement to execute. - `--repeat` | The number of times to repeat the statement. This defaults to 20. - - When run, the client prints the average time in seconds across all repetitions of the statement. 
-    Optionally, you can pass two other flags: `--times` to print the execution time in milliseconds for each repetition of the statement, and `--cumulative` to print the cumulative time in seconds for all repetitions. `--cumulative` is particularly useful when testing writes.
-
-    {{site.data.alerts.callout_success}}
-    To get similar help directly in your shell, use `./tuning.py --help`.
-    {{site.data.alerts.end}}
-
-### Step 6. Test/tune read performance
-
-- [Filtering by the primary key](#filtering-by-the-primary-key)
-- [Filtering by a non-indexed column (full table scan)](#filtering-by-a-non-indexed-column-full-table-scan)
-- [Filtering by a secondary index](#filtering-by-a-secondary-index)
-- [Filtering by a secondary index storing additional columns](#filtering-by-a-secondary-index-storing-additional-columns)
-- [Joining data from different tables](#joining-data-from-different-tables)
-- [Using `IN (list)` with a subquery](#using-in-list-with-a-subquery)
-- [Using `IN (list)` with explicit values](#using-in-list-with-explicit-values)
-
-#### Filtering by the primary key
-
-Retrieving a single row based on the primary key will usually return in 2ms or less:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ ./tuning.py \
---host=
      \ ---statement="SELECT * FROM rides WHERE city = 'boston' AND id = '000007ef-fa0f-4a6e-a089-ce74aa8d2276'" \ ---repeat=50 \ ---times -~~~ - -~~~ -Result: -['id', 'city', 'vehicle_city', 'rider_id', 'vehicle_id', 'start_address', 'end_address', 'start_time', 'end_time', 'revenue'] -['000007ef-fa0f-4a6e-a089-ce74aa8d2276', 'boston', 'boston', 'd66c386d-4b7b-48a7-93e6-f92b5e7916ab', '6628bbbc-00be-4891-bc00-c49f2f16a30b', '4081 Conner Courts\nSouth Taylor, VA 86921', '2808 Willis Wells Apt. 931\nMccoyberg, OH 10303-4879', '2018-07-20 01:46:46.003070', '2018-07-20 02:27:46.003070', '44.25'] - -Times (milliseconds): -[24.547100067138672, 0.7688999176025391, 0.6949901580810547, 0.8230209350585938, 0.698089599609375, 0.7278919219970703, 0.6978511810302734, 0.5998611450195312, 0.7150173187255859, 0.7338523864746094, 0.6768703460693359, 0.7460117340087891, 0.7028579711914062, 0.7121562957763672, 0.7579326629638672, 0.8080005645751953, 1.0089874267578125, 0.7259845733642578, 0.6411075592041016, 0.7269382476806641, 0.6339550018310547, 0.7460117340087891, 0.9441375732421875, 0.8139610290527344, 0.6990432739257812, 0.6339550018310547, 0.7319450378417969, 0.637054443359375, 0.6501674652099609, 0.7278919219970703, 0.7069110870361328, 0.5779266357421875, 0.6208419799804688, 0.9050369262695312, 0.7741451263427734, 0.5650520324707031, 0.6079673767089844, 0.6191730499267578, 0.7388591766357422, 0.5598068237304688, 0.6401538848876953, 0.6659030914306641, 0.6489753723144531, 0.621795654296875, 0.7548332214355469, 0.6010532379150391, 0.6990432739257812, 0.6699562072753906, 0.6210803985595703, 0.7240772247314453] - -Average time (milliseconds): -1.18108272552 -~~~ - -{{site.data.alerts.callout_info}} -When reading from a table or index for the first time in a session, the query will be slower than usual because the node issuing the query loads the schema of the table or index into memory first. For this reason, the first query took 24ms, whereas all others were sub-millisecond. -{{site.data.alerts.end}} - -Retrieving a subset of columns will usually be even faster: - -{% include copy-clipboard.html %} -~~~ shell -$ ./tuning.py \ ---host=
      \ ---statement="SELECT rider_id, vehicle_id \ -FROM rides \ -WHERE city = 'boston' AND id = '000007ef-fa0f-4a6e-a089-ce74aa8d2276'" \ ---repeat=50 \ ---times -~~~ - -~~~ -Result: -['rider_id', 'vehicle_id'] -['d66c386d-4b7b-48a7-93e6-f92b5e7916ab', '6628bbbc-00be-4891-bc00-c49f2f16a30b'] - -Times (milliseconds): -[1.2311935424804688, 0.7009506225585938, 0.5898475646972656, 0.6151199340820312, 0.5660057067871094, 0.6620883941650391, 0.5691051483154297, 0.5369186401367188, 0.5609989166259766, 0.5290508270263672, 0.5939006805419922, 0.5769729614257812, 0.5638599395751953, 0.5381107330322266, 0.61798095703125, 0.5879402160644531, 0.6008148193359375, 0.5900859832763672, 0.5190372467041016, 0.5409717559814453, 0.51116943359375, 0.5400180816650391, 0.5490779876708984, 0.4870891571044922, 0.5340576171875, 0.49591064453125, 0.5669593811035156, 0.4971027374267578, 0.5729198455810547, 0.514984130859375, 0.5309581756591797, 0.5099773406982422, 0.5550384521484375, 0.5328655242919922, 0.5559921264648438, 0.5319118499755859, 0.5059242248535156, 0.5719661712646484, 0.49614906311035156, 0.6041526794433594, 0.5080699920654297, 0.5240440368652344, 0.49591064453125, 0.5681514739990234, 0.5118846893310547, 0.5359649658203125, 0.5450248718261719, 0.5650520324707031, 0.5249977111816406, 0.5669593811035156] - -Average time (milliseconds): -0.566024780273 -~~~ - -#### Filtering by a non-indexed column (full table scan) - -You'll get generally poor performance when retrieving a single row based on a column that is not in the primary key or any secondary index: - -{% include copy-clipboard.html %} -~~~ shell -$ ./tuning.py \ ---host=
      \ ---statement="SELECT * FROM users WHERE name = 'Natalie Cunningham'" \ ---repeat=50 \ ---times -~~~ - -~~~ -Result: -['id', 'city', 'name', 'address', 'credit_card'] -['02cc9e5b-1e91-4cdb-87c4-726b4ea7219a', 'boston', 'Natalie Cunningham', '97477 Lee Path\nKimberlyport, CA 65960', '4532613656695680'] - -Times (milliseconds): -[31.939983367919922, 4.055023193359375, 3.988981246948242, 4.395008087158203, 4.045009613037109, 3.838062286376953, 6.09898567199707, 4.03904914855957, 3.9091110229492188, 5.933046340942383, 6.157875061035156, 6.323814392089844, 4.379987716674805, 3.982067108154297, 4.28009033203125, 4.118919372558594, 4.222869873046875, 4.041910171508789, 3.9288997650146484, 4.031896591186523, 4.085063934326172, 3.996133804321289, 4.001140594482422, 6.031990051269531, 5.98597526550293, 4.163026809692383, 5.931854248046875, 5.897998809814453, 3.9229393005371094, 3.8909912109375, 3.7729740142822266, 3.9768218994140625, 3.9958953857421875, 4.265069961547852, 4.204988479614258, 4.142999649047852, 4.3659210205078125, 6.074190139770508, 4.015922546386719, 4.418849945068359, 3.9381980895996094, 4.222869873046875, 4.694938659667969, 3.9060115814208984, 3.857851028442383, 3.8509368896484375, 3.969907760620117, 4.241943359375, 4.032135009765625, 3.9670467376708984] - -Average time (milliseconds): -4.99066352844 -~~~ - -To understand why this query performs poorly, use the SQL client built into the `cockroach` binary to [`EXPLAIN`](explain.html) the query plan: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql \ ---insecure \ ---host=
      \ ---database=movr \ ---execute="EXPLAIN SELECT * FROM users WHERE name = 'Natalie Cunningham';" -~~~ - -~~~ -+------+-------+---------------+ -| Tree | Field | Description | -+------+-------+---------------+ -| scan | | | -| | table | users@primary | -| | spans | ALL | -+------+-------+---------------+ -(3 rows) -~~~ - -The row with `spans | ALL` shows you that, without a secondary index on the `name` column, CockroachDB scans every row of the `users` table, ordered by the primary key (`city`/`id`), until it finds the row with the correct `name` value. - -#### Filtering by a secondary index - -To speed up this query, add a secondary index on `name`: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql \ ---insecure \ ---host=
      \ ---database=movr \ ---execute="CREATE INDEX on users (name);" -~~~ - -The query will now return much faster: - -{% include copy-clipboard.html %} -~~~ shell -$ ./tuning.py \ ---host=
      \ ---statement="SELECT * FROM users WHERE name = 'Natalie Cunningham'" \ ---repeat=50 \ ---times -~~~ - -~~~ -Result: -['id', 'city', 'name', 'address', 'credit_card'] -['02cc9e5b-1e91-4cdb-87c4-726b4ea7219a', 'boston', 'Natalie Cunningham', '97477 Lee Path\nKimberlyport, CA 65960', '4532613656695680'] - -Times (milliseconds): -[3.4589767456054688, 1.6651153564453125, 1.547098159790039, 1.9190311431884766, 1.7499923706054688, 1.6219615936279297, 1.5749931335449219, 1.7859935760498047, 1.5561580657958984, 1.6391277313232422, 1.5120506286621094, 1.5139579772949219, 1.6808509826660156, 1.708984375, 1.4798641204833984, 1.544952392578125, 1.653909683227539, 1.6129016876220703, 1.7309188842773438, 1.5811920166015625, 1.7628669738769531, 1.5459060668945312, 1.6429424285888672, 1.6558170318603516, 1.7898082733154297, 1.6138553619384766, 1.6868114471435547, 1.5490055084228516, 1.7120838165283203, 1.6911029815673828, 1.5289783477783203, 1.5990734100341797, 1.6109943389892578, 1.5058517456054688, 1.5058517456054688, 1.6798973083496094, 1.7499923706054688, 1.5850067138671875, 1.4929771423339844, 1.6651153564453125, 1.5921592712402344, 1.6739368438720703, 1.6529560089111328, 1.6019344329833984, 1.6429424285888672, 1.5649795532226562, 1.605987548828125, 1.550912857055664, 1.6069412231445312, 1.6779899597167969] - -Average time (milliseconds): -1.66565418243 -~~~ - -To understand why performance improved from 4.99ms (without index) to 1.66ms (with index), use [`EXPLAIN`](explain.html) to see the new query plan: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql \ ---insecure \ ---host=
      \
---database=movr \
---execute="EXPLAIN SELECT * FROM users WHERE name = 'Natalie Cunningham';"
-~~~
-
-~~~
-+------------+-------+-------------------------------------------------------+
-|    Tree    | Field |                      Description                      |
-+------------+-------+-------------------------------------------------------+
-| index-join |       |                                                       |
-| ├── scan   |       |                                                       |
-| │          | table | users@users_name_idx                                  |
-| │          | spans | /"Natalie Cunningham"-/"Natalie Cunningham"/PrefixEnd |
-| └── scan   |       |                                                       |
-|            | table | users@primary                                         |
-+------------+-------+-------------------------------------------------------+
-(6 rows)
-~~~
-
-This shows you that CockroachDB starts with the secondary index (`table | users@users_name_idx`). Because it is sorted by `name`, the query can jump directly to the relevant value (`spans | /"Natalie Cunningham"-/"Natalie Cunningham"/PrefixEnd`). However, the query needs to return values not in the secondary index, so CockroachDB grabs the primary key (`city`/`id`) stored with the `name` value (the primary key is always stored with entries in a secondary index), jumps to that value in the primary index, and then returns the full row.
-
-Thinking back to the [earlier discussion of ranges and leaseholders](#important-concepts), because the `users` table is small (under 64 MiB), the primary index and all secondary indexes are contained in a single range with a single leaseholder. If the table were bigger, however, the primary index and secondary index could reside in separate ranges, each with its own leaseholder. In this case, if the leaseholders were on different nodes, the query would require more network hops, further increasing latency.
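-
-To check this for yourself, you can see how a table currently maps to ranges and leaseholders with the [`SHOW EXPERIMENTAL_RANGES`](show-experimental-ranges.html) statement, which this tutorial uses more extensively below:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SHOW EXPERIMENTAL_RANGES FROM TABLE users;
-~~~
-
-A single row of output with `NULL` start and end keys means the entire table, including its secondary indexes, still fits in one range.
-
-#### Filtering by a secondary index storing additional columns
-
-When you have a query that filters by a specific column but retrieves a subset of the table's total columns, you can improve performance by [storing](indexes.html#storing-columns) those additional columns in the secondary index to prevent the query from needing to scan the primary index as well.
-
-For example, let's say you frequently retrieve a user's name and credit card number:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ ./tuning.py \
---host=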
      \ ---statement="SELECT name, credit_card FROM users WHERE name = 'Natalie Cunningham'" \ ---repeat=50 \ ---times -~~~ - -~~~ -Result: -['name', 'credit_card'] -['Natalie Cunningham', '4532613656695680'] - -Times (milliseconds): -[2.338886260986328, 1.7859935760498047, 1.9490718841552734, 1.550912857055664, 1.4331340789794922, 1.4619827270507812, 1.425027847290039, 1.8270015716552734, 1.6829967498779297, 1.6028881072998047, 1.628875732421875, 1.4889240264892578, 1.497030258178711, 1.5380382537841797, 1.486063003540039, 1.5859603881835938, 1.7290115356445312, 1.7409324645996094, 1.5869140625, 1.6489028930664062, 1.7418861389160156, 1.5971660614013672, 1.619100570678711, 1.6379356384277344, 1.6028881072998047, 1.6531944274902344, 1.667022705078125, 1.6241073608398438, 1.5468597412109375, 1.5778541564941406, 1.6779899597167969, 1.5718936920166016, 1.5950202941894531, 1.6407966613769531, 1.538991928100586, 1.8379688262939453, 1.7008781433105469, 1.837015151977539, 1.5687942504882812, 1.7828941345214844, 1.7290115356445312, 1.6810894012451172, 1.7969608306884766, 1.5821456909179688, 1.569986343383789, 1.5740394592285156, 1.8229484558105469, 1.7371177673339844, 1.7681121826171875, 1.6360282897949219] - -Average time (milliseconds): -1.65812492371 -~~~ - -With the current secondary index on `name`, CockroachDB still needs to scan the primary index to get the credit card number: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql \ ---insecure \ ---host=
      \ ---database=movr \ ---execute="EXPLAIN SELECT name, credit_card FROM users WHERE name = 'Natalie Cunningham';" -~~~ - -~~~ -+-----------------+-------+-------------------------------------------------------+ -| Tree | Field | Description | -+-----------------+-------+-------------------------------------------------------+ -| render | | | -| └── index-join | | | -| ├── scan | | | -| │ | table | users@users_name_idx | -| │ | spans | /"Natalie Cunningham"-/"Natalie Cunningham"/PrefixEnd | -| └── scan | | | -| | table | users@primary | -+-----------------+-------+-------------------------------------------------------+ -(7 rows) -~~~ - -Let's drop and recreate the index on `name`, this time storing the `credit_card` value in the index: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql \ ---insecure \ ---host=
      \ ---database=movr \ ---execute="DROP INDEX users_name_idx;" -~~~ - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql \ ---insecure \ ---host=
      \
---database=movr \
---execute="CREATE INDEX ON users (name) STORING (credit_card);"
-~~~
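-
-To confirm that the new index stores the `credit_card` column, you can inspect it with `SHOW INDEXES` (a quick check; the same statement is used again below):
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SHOW INDEXES FROM users;
-~~~
-
-In the output, the `Storing` column should be `true` for the `credit_card` row of `users_name_idx`.
-
-Now that `credit_card` values are stored in the index on `name`, CockroachDB only needs to scan that index:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql \
---insecure \
---host=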
      \ ---database=movr \ ---execute="EXPLAIN SELECT name, credit_card FROM users WHERE name = 'Natalie Cunningham';" -~~~ - -~~~ -+-----------+-------+-------------------------------------------------------+ -| Tree | Field | Description | -+-----------+-------+-------------------------------------------------------+ -| render | | | -| └── scan | | | -| | table | users@users_name_idx | -| | spans | /"Natalie Cunningham"-/"Natalie Cunningham"/PrefixEnd | -+-----------+-------+-------------------------------------------------------+ -(4 rows) -~~~ - -This results in even faster performance, reducing latency from 1.65ms (index without storing) to 1.04ms (index with storing): - -{% include copy-clipboard.html %} -~~~ shell -$ ./tuning.py \ ---host=
      \ ---statement="SELECT name, credit_card FROM users WHERE name = 'Natalie Cunningham'" \ ---repeat=50 \ ---times -~~~ - -~~~ -Result: -['name', 'credit_card'] -['Natalie Cunningham', '4532613656695680'] - -Times (milliseconds): -[1.8949508666992188, 1.2660026550292969, 1.2140274047851562, 1.110076904296875, 1.4989376068115234, 1.1739730834960938, 1.2331008911132812, 0.9701251983642578, 0.9019374847412109, 0.9038448333740234, 1.016855239868164, 0.9331703186035156, 0.9179115295410156, 0.9288787841796875, 0.888824462890625, 0.9429454803466797, 0.9410381317138672, 1.001119613647461, 0.9438991546630859, 0.9849071502685547, 1.0221004486083984, 1.013040542602539, 1.0149478912353516, 0.9579658508300781, 1.0061264038085938, 1.0559558868408203, 1.0788440704345703, 1.0411739349365234, 0.9610652923583984, 0.9639263153076172, 1.1239051818847656, 0.9639263153076172, 1.058816909790039, 0.949859619140625, 0.9739398956298828, 1.046895980834961, 0.9260177612304688, 1.0569095611572266, 1.033782958984375, 1.1029243469238281, 0.9710788726806641, 1.0311603546142578, 0.9870529174804688, 1.1179447174072266, 1.0349750518798828, 1.088857650756836, 1.1060237884521484, 1.0170936584472656, 1.0180473327636719, 1.0519027709960938] - -Average time (milliseconds): -1.04885578156 -~~~ - -#### Joining data from different tables - -Secondary indexes are crucial when [joining](joins.html) data from different tables as well. - -For example, let's say you want to count the number of users who started rides on a given day. To do this, you need to use a join to get the relevant rides from the `rides` table and then map the `rider_id` for each of those rides to the corresponding `id` in the `users` table, counting each mapping only once: - -{% include copy-clipboard.html %} -~~~ shell -$ ./tuning.py \ ---host=
      \ ---statement="SELECT count(DISTINCT users.id) \ -FROM users \ -INNER JOIN rides ON rides.rider_id = users.id \ -WHERE start_time BETWEEN '2018-07-20 00:00:00' AND '2018-07-21 00:00:00'" \ ---repeat=50 \ ---times -~~~ - -~~~ -Result: -['count'] -['1998'] - -Times (milliseconds): -[1663.2239818572998, 841.871976852417, 844.9788093566895, 1043.7190532684326, 1047.544002532959, 1049.0870475769043, 1079.737901687622, 1049.543857574463, 1069.1118240356445, 1104.2020320892334, 1071.1669921875, 1080.1141262054443, 1066.741943359375, 1071.8858242034912, 1073.8670825958252, 1054.008960723877, 1089.4761085510254, 1048.2399463653564, 1033.8318347930908, 1078.5980224609375, 1054.8391342163086, 1095.6230163574219, 1056.9767951965332, 1082.8359127044678, 1048.3272075653076, 1050.3859519958496, 1084.2180252075195, 1082.1950435638428, 1101.97114944458, 1079.9469947814941, 1065.234899520874, 1051.058053970337, 1105.48996925354, 1119.469165802002, 1089.8759365081787, 1082.5989246368408, 1074.9430656433105, 1067.4428939819336, 1066.5888786315918, 1069.6449279785156, 1067.9738521575928, 1082.4880599975586, 1037.9269123077393, 1042.2871112823486, 1130.7330131530762, 1150.7518291473389, 1165.3728485107422, 1136.9531154632568, 1120.3861236572266, 1126.8589496612549] - -Average time (milliseconds): -1081.04698181 -~~~ - -To understand what's happening, use [`EXPLAIN`](explain.html) to see the query plan: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql \ ---insecure \ ---host=
      \
---database=movr \
---execute="EXPLAIN SELECT count(DISTINCT users.id) \
-FROM users \
-INNER JOIN rides ON rides.rider_id = users.id \
-WHERE start_time BETWEEN '2018-07-20 00:00:00' AND '2018-07-21 00:00:00';"
-~~~
-
-~~~
-+---------------------+----------+-------------------+
-|        Tree         |  Field   |    Description    |
-+---------------------+----------+-------------------+
-| group               |          |                   |
-| └── render          |          |                   |
-|     └── join        |          |                   |
-|          │          | type     | inner             |
-|          │          | equality | (id) = (rider_id) |
-|          ├── scan   |          |                   |
-|          │          | table    | users@primary     |
-|          │          | spans    | ALL               |
-|          └── scan   |          |                   |
-|                     | table    | rides@primary     |
-|                     | spans    | ALL               |
-+---------------------+----------+-------------------+
-(11 rows)
-~~~
-
-Reading from the bottom up, you can see that CockroachDB first does a full table scan (`spans | ALL`) on `rides` to get all rows with a `start_time` in the specified range and then does another full table scan on `users` to find matching rows and calculate the count.
-
-Given that the `rides` table is large, its data is split across several ranges. Each range is replicated and has a leaseholder. At least some of these leaseholders are likely located on different nodes. This means that the full table scan of `rides` involves several network hops to various leaseholders before finally going to the leaseholder for `users` to do a full table scan there.
-
-To track this specifically, let's use the [`SHOW EXPERIMENTAL_RANGES`](show-experimental-ranges.html) statement to find out where the relevant leaseholders reside for `rides` and `users`:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql \
---insecure \
---host=
      \
---database=movr \
---execute="SHOW EXPERIMENTAL_RANGES FROM TABLE rides;"
-~~~
-
-~~~
-+------------------------------------------------------------------------+------------------------------------------------------------------------+----------+----------+--------------+
-| Start Key | End Key | Range ID | Replicas | Lease Holder |
-+------------------------------------------------------------------------+------------------------------------------------------------------------+----------+----------+--------------+
-| NULL | /"boston"/"\xfe\xdd?\xbb4\xabOV\x84\x00M\x89#-a6"/PrefixEnd | 23 | {1,2,3} | 1 |
-| /"boston"/"\xfe\xdd?\xbb4\xabOV\x84\x00M\x89#-a6"/PrefixEnd | /"los angeles"/"\xf1\xe8\x99eǵI\x16\xb9w\a\xd01\xcc\b\xa4"/PrefixEnd | 25 | {1,2,3} | 2 |
-| /"los angeles"/"\xf1\xe8\x99eǵI\x16\xb9w\a\xd01\xcc\b\xa4"/PrefixEnd | /"new york"/"\xebV\xf5\xe6P%L$\x92\xd2\xdf&\a\x81\xeeO"/PrefixEnd | 26 | {1,2,3} | 1 |
-| /"new york"/"\xebV\xf5\xe6P%L$\x92\xd2\xdf&\a\x81\xeeO"/PrefixEnd | /"san francisco"/"\xda\xc5B\xe0\x0e\fK)\x98:\xe6[@\x05\x91*"/PrefixEnd | 27 | {1,2,3} | 2 |
-| /"san francisco"/"\xda\xc5B\xe0\x0e\fK)\x98:\xe6[@\x05\x91*"/PrefixEnd | /"seattle"/"\xd4ˆ?\x98\x98FA\xa7m\x84\xba\xac\xf5\xbfI"/PrefixEnd | 28 | {1,2,3} | 3 |
-| /"seattle"/"\xd4ˆ?\x98\x98FA\xa7m\x84\xba\xac\xf5\xbfI"/PrefixEnd | /"washington dc"/"Ņ\x06\x9d\xc2LEq\xb8... | ... | ... | ... |
-| ... | ... | ... | ... | ... |
-+------------------------------------------------------------------------+------------------------------------------------------------------------+----------+----------+--------------+
-(7 rows)
-~~~
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql \
---insecure \
---host= \
---database=movr \
---execute="SHOW EXPERIMENTAL_RANGES FROM TABLE users;"
-~~~
-
-~~~
-+-----------+---------+----------+----------+--------------+
-| Start Key | End Key | Range ID | Replicas | Lease Holder |
-+-----------+---------+----------+----------+--------------+
-| NULL      | NULL    |       51 | {1,2,3}  |            2 |
-+-----------+---------+----------+----------+--------------+
-(1 row)
-~~~
-
-The results above tell us:
-
-- The `rides` table is split across 7 ranges, with four leaseholders on node 1, two leaseholders on node 2, and one leaseholder on node 3.
-- The `users` table is just a single range with its leaseholder on node 2.
-
-Now, given the `WHERE` condition of the join, the full table scan of `rides`, across all of its 7 ranges, is particularly wasteful. To speed up the query, you can create a secondary index on the `WHERE` condition (`rides.start_time`) storing the join key (`rides.rider_id`):
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql \
---insecure \
---host=
      \
---database=movr \
---execute="CREATE INDEX ON rides (start_time) STORING (rider_id);"
-~~~
-
-{{site.data.alerts.callout_info}}
-The `rides` table contains 1 million rows, so adding this index will take a few minutes.
-{{site.data.alerts.end}}
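-
-While the index is being added, you can check on the progress of the backfill job (a quick sketch; the exact output columns vary by version):
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SHOW JOBS;
-~~~
-
-Adding the secondary index reduced the query time from 1081.04ms to 71.89ms:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ ./tuning.py \
---host=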
      \ ---statement="SELECT count(DISTINCT users.id) \ -FROM users \ -INNER JOIN rides ON rides.rider_id = users.id \ -WHERE start_time BETWEEN '2018-07-20 00:00:00' AND '2018-07-21 00:00:00'" \ ---repeat=50 \ ---times -~~~ - -~~~ -Result: -['count'] -['1998'] - -Times (milliseconds): -[124.19795989990234, 83.74285697937012, 84.76495742797852, 76.9808292388916, 65.74702262878418, 62.478065490722656, 60.26411056518555, 59.99302864074707, 67.10195541381836, 73.45199584960938, 67.09504127502441, 60.45889854431152, 68.6960220336914, 61.94710731506348, 61.53106689453125, 60.44197082519531, 62.22796440124512, 89.34903144836426, 77.64196395874023, 71.43712043762207, 66.09010696411133, 63.668012619018555, 65.31286239624023, 77.1780014038086, 73.52113723754883, 68.84908676147461, 65.11712074279785, 65.34600257873535, 65.8869743347168, 76.90095901489258, 76.9491195678711, 69.39697265625, 64.23306465148926, 75.0880241394043, 69.34094429016113, 57.55496025085449, 65.79995155334473, 83.74285697937012, 75.32310485839844, 74.08809661865234, 77.33798027038574, 73.95505905151367, 71.85482978820801, 77.95405387878418, 74.30601119995117, 72.24106788635254, 75.28901100158691, 78.2630443572998, 74.97286796569824, 79.50282096862793] - -Average time (milliseconds): -71.8922615051 -~~~ - -To understand why performance improved, again use [`EXPLAIN`](explain.html) to see the new query plan: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql \ ---insecure \ ---host=
      \
---database=movr \
---execute="EXPLAIN SELECT count(DISTINCT users.id) \
-FROM users \
-INNER JOIN rides ON rides.rider_id = users.id \
-WHERE start_time BETWEEN '2018-07-20 00:00:00' AND '2018-07-21 00:00:00';"
-~~~
-
-~~~
-+---------------------+----------+-------------------------------------------------------+
-|        Tree         |  Field   |                      Description                      |
-+---------------------+----------+-------------------------------------------------------+
-| group               |          |                                                       |
-| └── render          |          |                                                       |
-|     └── join        |          |                                                       |
-|          │          | type     | inner                                                 |
-|          │          | equality | (id) = (rider_id)                                     |
-|          ├── scan   |          |                                                       |
-|          │          | table    | users@primary                                         |
-|          │          | spans    | ALL                                                   |
-|          └── scan   |          |                                                       |
-|                     | table    | rides@rides_start_time_idx                            |
-|                     | spans    | /2018-07-20T00:00:00Z-/2018-07-21T00:00:00.000000001Z |
-+---------------------+----------+-------------------------------------------------------+
-(11 rows)
-~~~
-
-Notice that CockroachDB now starts by using the `rides@rides_start_time_idx` secondary index to retrieve the relevant rides without needing to scan the full `rides` table.
-
-Let's check the ranges for the new index:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql \
---insecure \
---host=
      \ ---database=movr \ ---execute="SHOW EXPERIMENTAL_RANGES FROM INDEX rides@rides_start_time_idx;" -~~~ - -~~~ -+-----------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------+----------+----------+--------------+ -| Start Key | End Key | Range ID | Replicas | Lease Holder | -+-----------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------+----------+----------+--------------+ -| NULL | /2018-07-15T02:32:47.564891Z/"seattle"/"r\x8f\xbc\xd4\f\x18E\x9f\x85\xc2\"H\\\xe7k\xf1" | 34 | {1,2,3} | 1 | -| /2018-07-15T02:32:47.564891Z/"seattle"/"r\x8f\xbc\xd4\f\x18E\x9f\x85\xc2\"H\\\xe7k\xf1" | NULL | 35 | {1,2,3} | 1 | -+-----------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------+----------+----------+--------------+ -(2 rows) -~~~ - -This tells us that the index is stored in 2 ranges, with the leaseholders for both of them on node 1. Based on the output of `SHOW EXPERIMENTAL_RANGES FROM TABLE users` that we saw earlier, we already know that the leaseholder for the `users` table is on node 2. - -#### Using `IN (list)` with a subquery - -Now let's say you want to get the latest ride of each of the 5 most used vehicles. To do this, you might think to use a subquery to get the IDs of the 5 most frequent vehicles from the `rides` table, passing the results into the `IN` list of another query to get the most recent ride of each of the 5 vehicles: - -{% include copy-clipboard.html %} -~~~ shell -$ ./tuning.py \ ---host=
      \ ---statement="SELECT vehicle_id, max(end_time) \ -FROM rides \ -WHERE vehicle_id IN ( \ - SELECT vehicle_id \ - FROM rides \ - GROUP BY vehicle_id \ - ORDER BY count(*) DESC \ - LIMIT 5 \ -) \ -GROUP BY vehicle_id" \ ---repeat=20 \ ---times -~~~ - -~~~ -Result: -['vehicle_id', 'max'] -['c6541da5-9858-4e3f-9b49-992e206d2c50', '2018-08-02 02:14:50.543760'] -['78fdd6f8-c6a1-42df-a89f-cd65b7bb8be9', '2018-08-02 02:47:43.755989'] -['3c950d36-c2b8-48d0-87d3-e0d6f570af62', '2018-08-02 03:06:31.293184'] -['35752c4c-b878-4436-8330-8d7246406a55', '2018-08-02 03:08:49.823209'] -['0962cdca-9d85-457c-9616-cc2ae2d32008', '2018-08-02 03:01:25.414512'] - -Times (milliseconds): -[4368.9610958099365, 4373.898029327393, 4396.070957183838, 4382.591962814331, 4274.624824523926, 4369.847059249878, 4373.079061508179, 4287.877082824707, 4307.362079620361, 4368.865966796875, 4363.792896270752, 4310.600996017456, 4378.695011138916, 4340.383052825928, 4338.238000869751, 4373.046875, 4327.131986618042, 4386.303901672363, 4429.6300411224365, 4383.068084716797] - -Average time (milliseconds): -4356.7034483 -~~~ - -However, as you can see, this query is slow because, currently, when the `WHERE` condition of a query comes from the result of a subquery, CockroachDB scans the entire table, even if there is an available index. Use `EXPLAIN` to see this in more detail: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql \ ---insecure \ ---host=
      \
---database=movr \
---execute="EXPLAIN SELECT vehicle_id, max(end_time) \
-FROM rides \
-WHERE vehicle_id IN ( \
-  SELECT vehicle_id \
-  FROM rides \
-  GROUP BY vehicle_id \
-  ORDER BY count(*) DESC \
-  LIMIT 5 \
-) \
-GROUP BY vehicle_id;"
-~~~
-
-~~~
-+------------------------------------+-----------+--------------------------------------------------------------------------+
-|                Tree                |   Field   |                               Description                                |
-+------------------------------------+-----------+--------------------------------------------------------------------------+
-| root                               |           |                                                                          |
-| ├── group                          |           |                                                                          |
-| │    │                             | group by  | @1-@1                                                                    |
-| │    └── render                    |           |                                                                          |
-| │         └── scan                 |           |                                                                          |
-| │                                  | table     | rides@primary                                                            |
-| │                                  | spans     | ALL                                                                      |
-| └── subquery                       |           |                                                                          |
-|      │                             | id        | @S1                                                                      |
-|      │                             | sql       | (SELECT vehicle_id FROM rides GROUP BY vehicle_id ORDER BY count(*) DESC |
-|                                    |           | LIMIT 5)                                                                 |
-|      │                             | exec mode | all rows normalized                                                      |
-|      └── limit                     |           |                                                                          |
-|           └── sort                 |           |                                                                          |
-|                │                   | order     | -count                                                                   |
-|                │                   | strategy  | top 5                                                                    |
-|                └── group           |           |                                                                          |
-|                     │              | group by  | @1-@1                                                                    |
-|                     └── render     |           |                                                                          |
-|                          └── scan  |           |                                                                          |
-|                                    | table     | rides@primary                                                            |
-|                                    | spans     | ALL                                                                      |
-+------------------------------------+-----------+--------------------------------------------------------------------------+
-(21 rows)
-~~~
-
-This is a complex query plan, but the important thing to note is the full table scan of `rides@primary` above the `subquery`. This shows you that, after the subquery returns the IDs of the top 5 vehicles, CockroachDB scans the entire primary index to find the rows with `max(end_time)` for each `vehicle_id`, although you might expect CockroachDB to more efficiently use the secondary index on `vehicle_id` (CockroachDB is working to remove this limitation in a future version).
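-
-If you want to keep the operation in a single statement, one variation to experiment with is rewriting the `IN` subquery as an explicit join; whether the planner can use the secondary index for this form depends on your version, so treat this as a sketch and verify the plan with `EXPLAIN`:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT r.vehicle_id, max(r.end_time)
-  FROM rides AS r
-  JOIN (
-    SELECT vehicle_id
-    FROM rides
-    GROUP BY vehicle_id
-    ORDER BY count(*) DESC
-    LIMIT 5
-  ) AS top ON r.vehicle_id = top.vehicle_id
-  GROUP BY r.vehicle_id;
-~~~
-
-#### Using `IN (list)` with explicit values
-
-Because CockroachDB will not use an available secondary index when using `IN (list)` with a subquery, it's much more performant to have your application first select the top 5 vehicles:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ ./tuning.py \
---host=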
      \ ---statement="SELECT vehicle_id \ -FROM rides \ -GROUP BY vehicle_id \ -ORDER BY count(*) DESC \ -LIMIT 5" \ ---repeat=20 \ ---times -~~~ - -~~~ -Result: -['vehicle_id'] -['35752c4c-b878-4436-8330-8d7246406a55'] -['0962cdca-9d85-457c-9616-cc2ae2d32008'] -['c6541da5-9858-4e3f-9b49-992e206d2c50'] -['78fdd6f8-c6a1-42df-a89f-cd65b7bb8be9'] -['3c950d36-c2b8-48d0-87d3-e0d6f570af62'] - -Times (milliseconds): -[787.0969772338867, 782.2480201721191, 741.5878772735596, 790.3921604156494, 767.4920558929443, 733.0870628356934, 768.8038349151611, 754.1589736938477, 716.4630889892578, 726.3698577880859, 721.092939376831, 737.1737957000732, 747.978925704956, 736.1149787902832, 727.1649837493896, 725.5918979644775, 746.1550235748291, 752.6230812072754, 728.59787940979, 733.4978580474854] - -Average time (milliseconds): -746.184563637 -~~~ - -And then put the results into the `IN` list to get the most recent rides of the vehicles: - -{% include copy-clipboard.html %} -~~~ shell -$ ./tuning.py \ ---host=
      \ ---statement="SELECT vehicle_id, max(end_time) \ -FROM rides \ -WHERE vehicle_id IN ( \ - '35752c4c-b878-4436-8330-8d7246406a55', \ - '0962cdca-9d85-457c-9616-cc2ae2d32008', \ - 'c6541da5-9858-4e3f-9b49-992e206d2c50', \ - '78fdd6f8-c6a1-42df-a89f-cd65b7bb8be9', \ - '3c950d36-c2b8-48d0-87d3-e0d6f570af62' \ -) \ -GROUP BY vehicle_id;" \ ---repeat=20 \ ---times -~~~ - -~~~ -Result: -['vehicle_id', 'max'] -['3c950d36-c2b8-48d0-87d3-e0d6f570af62', '2018-08-02 03:06:31.293184'] -['78fdd6f8-c6a1-42df-a89f-cd65b7bb8be9', '2018-08-02 02:47:43.755989'] -['35752c4c-b878-4436-8330-8d7246406a55', '2018-08-02 03:08:49.823209'] -['0962cdca-9d85-457c-9616-cc2ae2d32008', '2018-08-02 03:01:25.414512'] -['c6541da5-9858-4e3f-9b49-992e206d2c50', '2018-08-02 02:14:50.543760'] - -Times (milliseconds): -[828.5520076751709, 826.6720771789551, 837.0990753173828, 865.441083908081, 870.556116104126, 842.6721096038818, 859.3161106109619, 861.4299297332764, 866.6350841522217, 833.0469131469727, 838.021993637085, 841.0389423370361, 878.7519931793213, 879.6770572662354, 861.1328601837158, 855.1840782165527, 856.5502166748047, 882.9760551452637, 873.0340003967285, 858.4709167480469] - -Average time (milliseconds): -855.812931061 -~~~ - -This approach reduced the query time from 4356.70ms (query with subquery) to 1601.99ms (2 distinct queries). - -### Step 7. Test/tune write performance - -- [Bulk inserting into an existing table](#bulk-inserting-into-an-existing-table) -- [Minimizing unused indexes](#minimizing-unused-indexes) -- [Retrieving the ID of a newly inserted row](#retrieving-the-id-of-a-newly-inserted-row) - -#### Bulk inserting into an existing table - -Moving on to writes, let's imagine that you have a batch of 100 new users to insert into the `users` table. The most obvious approach is to insert each row using 100 separate [`INSERT`](insert.html) statements: - -{{site.data.alerts.callout_info}} -For the purpose of demonstration, the command below inserts the same user 100 times, each time with a different unique ID. Note also that you're now adding the `--cumulative` flag to print the total time across all 100 inserts. -{{site.data.alerts.end}} - -{% include copy-clipboard.html %} -~~~ shell -$ ./tuning.py \ ---host=
      \ ---statement="INSERT INTO users VALUES (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347')" \ ---repeat=100 \ ---times \ ---cumulative -~~~ - -~~~ -Times (milliseconds): -[33.28299522399902, 13.558149337768555, 14.67585563659668, 8.835077285766602, 9.104013442993164, 8.157968521118164, 10.174989700317383, 8.877992630004883, 9.196996688842773, 8.93402099609375, 9.894132614135742, 9.97304916381836, 8.221149444580078, 9.334087371826172, 9.270191192626953, 8.980035781860352, 7.210969924926758, 8.212089538574219, 8.048057556152344, 7.8639984130859375, 7.489204406738281, 9.547948837280273, 9.073972702026367, 9.660005569458008, 9.325981140136719, 9.338140487670898, 9.240865707397461, 7.958889007568359, 8.417844772338867, 8.075952529907227, 7.896184921264648, 9.118080139160156, 8.161067962646484, 9.071111679077148, 8.996963500976562, 7.790803909301758, 7.8220367431640625, 9.695053100585938, 9.470939636230469, 8.415937423706055, 9.287118911743164, 9.29117202758789, 9.618043899536133, 9.107828140258789, 8.491039276123047, 7.998943328857422, 9.282827377319336, 7.735013961791992, 9.161949157714844, 9.70005989074707, 8.910894393920898, 9.124994277954102, 9.028911590576172, 9.568929672241211, 10.931968688964844, 8.813858032226562, 14.040946960449219, 7.773876190185547, 9.801864624023438, 7.989168167114258, 8.188962936401367, 9.398937225341797, 9.705066680908203, 9.213924407958984, 9.569168090820312, 9.19198989868164, 9.664058685302734, 9.52601432800293, 8.01396369934082, 8.30698013305664, 8.03995132446289, 8.166074752807617, 9.335994720458984, 7.915019989013672, 9.584903717041016, 8.049964904785156, 7.803916931152344, 8.125066757202148, 9.367942810058594, 9.21487808227539, 9.630918502807617, 9.505033493041992, 9.830951690673828, 8.285045623779297, 8.095979690551758, 9.876012802124023, 8.067131042480469, 9.438037872314453, 8.147001266479492, 8.9111328125, 9.560108184814453, 8.78596305847168, 9.341955184936523, 10.293006896972656, 9.062051773071289, 14.008045196533203, 9.293079376220703, 9.57798957824707, 14.974832534790039, 8.59689712524414] - -Average time (milliseconds): -9.41696166992 - -Cumulative time (milliseconds): -941.696166992 -~~~ - -The 100 inserts took 941.69ms to complete, which isn't bad. However, it's significantly faster to use a single `INSERT` statement with 100 comma-separated `VALUES` clauses: - -{% include copy-clipboard.html %} -~~~ shell -$ ./tuning.py \ ---host=
      \ ---statement="INSERT INTO users VALUES \ -(gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), \ -(gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), \ -(gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), \ -(gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), \ -(gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), (gen_random_uuid(), 'new york', 'Max Roach', 
'411 Drum Street', '173635282937347'), (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), \ -(gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), \ -(gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), \ -(gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), \ -(gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum 
 Street', '173635282937347'), (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), \
-(gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'), (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347')" \
---repeat=1 \
---cumulative
-~~~
-
-~~~
-Average time (milliseconds):
-18.965959549
-
-Cumulative time (milliseconds):
-18.965959549
-~~~
-
-As you can see, this multi-row `INSERT` technique reduced the total time for 100 inserts from 941.69ms to 18.96ms. It's useful to note that this technique is equally effective for [`UPSERT`](upsert.html) and [`DELETE`](delete.html) statements as well.
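-
-For example, a batched `UPSERT` or `DELETE` follows the same pattern (a sketch with illustrative values, not part of the tutorial's dataset):
-
-{% include copy-clipboard.html %}
-~~~ sql
--- Multi-row UPSERT: insert or update several rows in one statement.
-> UPSERT INTO users VALUES
-    (gen_random_uuid(), 'new york', 'Max Roach', '411 Drum Street', '173635282937347'),
-    (gen_random_uuid(), 'boston', 'Max Roach', '411 Drum Street', '173635282937347');
-
--- Batched DELETE: remove many rows in one statement rather than one per row.
-> DELETE FROM users WHERE name = 'Max Roach';
-~~~
-
-#### Minimizing unused indexes
-
-Earlier, we saw how important secondary indexes are for read performance. For writes, however, it's important to recognize the overhead that they create.
-
-Let's consider the `users` table:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql \
---insecure \
---host=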
      \
---database=movr \
---execute="SHOW INDEXES FROM users;"
-~~~
-
-~~~
-+-------+----------------+--------+-----+-------------+-----------+---------+----------+
-| Table |      Name      | Unique | Seq |   Column    | Direction | Storing | Implicit |
-+-------+----------------+--------+-----+-------------+-----------+---------+----------+
-| users | primary        | true   |   1 | city        | ASC       | false   | false    |
-| users | primary        | true   |   2 | id          | ASC       | false   | false    |
-| users | users_name_idx | false  |   1 | name        | ASC       | false   | false    |
-| users | users_name_idx | false  |   2 | credit_card | N/A       | true    | false    |
-| users | users_name_idx | false  |   3 | city        | ASC       | false   | true     |
-| users | users_name_idx | false  |   4 | id          | ASC       | false   | true     |
-+-------+----------------+--------+-----+-------------+-----------+---------+----------+
-(6 rows)
-~~~
-
-This table has the primary index (the full table) and a secondary index on `name` that is also storing `credit_card`. This means that whenever a row is inserted, or whenever `name`, `credit_card`, `city`, or `id` are modified in existing rows, both indexes are updated.
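-
-If your version supports `EXPLAIN` on mutation statements, you can also see both indexes involved in the plan for an update (a sketch; plan details vary by version):
-
-{% include copy-clipboard.html %}
-~~~ sql
-> EXPLAIN UPDATE users SET name = 'Carl Kimball' WHERE name LIKE 'C%';
-~~~
-
-To make this more concrete, let's count how many rows have a name that starts with `C` and then update those rows to all have the same name:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ ./tuning.py \
---host=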
      \ ---statement="SELECT count(*) \ -FROM users \ -WHERE name LIKE 'C%'" \ ---repeat=1 -~~~ - -~~~ -Result: -['count'] -['179'] - -Average time (milliseconds): -2.52413749695 -~~~ - -{% include copy-clipboard.html %} -~~~ shell -$ ./tuning.py \ ---host=
      \
---statement="UPDATE users \
-SET name = 'Carl Kimball' \
-WHERE name LIKE 'C%'" \
---repeat=1
-~~~
-
-~~~
-Average time (milliseconds):
-110.701799393
-~~~
-
-Because `name` is in both the `primary` and `users_name_idx` indexes, for each of the 179 rows, 2 keys were updated.
-
-Now, assuming that the `users_name_idx` index is no longer needed, let's drop the index and execute an equivalent statement:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql \
---insecure \
---host=
      \ ---database=movr \ ---execute="DROP INDEX users_name_idx;" -~~~ - -{% include copy-clipboard.html %} -~~~ shell -$ ./tuning.py \ ---host=
      \ ---statement="UPDATE users \ -SET name = 'Peedie Hirata' \ -WHERE name = 'Carl Kimball'" \ ---repeat=1 -~~~ - -~~~ -Average time (milliseconds): -21.7709541321 -~~~ - -Before, when both the primary and secondary indexes needed to be updated, the updates took 110.70ms. Now, after dropping the secondary index, an equivalent update took only 21.77ms. - -#### Retrieving the ID of a newly inserted row - -Now let's focus on the common case of inserting a row into a table and then retrieving the ID of the new row to do some follow-up work. One approach is to execute two statements, an `INSERT` to insert the row and then a `SELECT` to get the new ID: - -{% include copy-clipboard.html %} -~~~ shell -$ ./tuning.py \ ---host=
      \ ---statement="INSERT INTO users VALUES (gen_random_uuid(), 'new york', 'Toni Brooks', '800 Camden Lane, Brooklyn, NY 11218', '98244843845134960')" \ ---repeat=1 -~~~ - -~~~ -Average time (milliseconds): -9.97304916382 -~~~ - -{% include copy-clipboard.html %} -~~~ shell -$ ./tuning.py \ ---host=
      \ ---statement="SELECT id FROM users WHERE name = 'Toni Brooks'" \ ---repeat=1 -~~~ - -~~~ -Result: -['id'] -['cc83e0bd-2ea0-4507-a683-a707cfbe0aba'] - -Average time (milliseconds): -7.32207298279 -~~~ - -Combined, these statements are relatively fast, at 17.29ms, but an even more performant approach is to append `RETURNING id` to the end of the `INSERT`: - -{% include copy-clipboard.html %} -~~~ shell -$ ./tuning.py \ ---host=
      \
---statement="INSERT INTO users VALUES (gen_random_uuid(), 'new york', 'Brian Brooks', '800 Camden Lane, Brooklyn, NY 11218', '98244843845134960') \
-RETURNING id" \
---repeat=1
-~~~
-
-~~~
-Result:
-['id']
-['3d16500e-cb2e-462e-9c83-db0965d6deaf']
-
-Average time (milliseconds):
-9.48596000671
-~~~
-
-At just 9.48ms, this approach is faster because the write and the read execute in a single client-server round trip instead of two. Note also that, as discussed earlier, if the leaseholder for the table happens to be on a different node than the one the query is running against, that introduces additional network hops and latency.
-
-## Multi-region deployment
-
-Given that Movr is active on both US coasts, you'll now scale the cluster into two new GCE zones, `us-west1-a` (Oregon) and `us-west2-a` (Los Angeles), each with 3 nodes and an extra instance for simulating regional client traffic.
-
-### Step 8. Create more instances
-
-1. [Create 6 more instances](https://cloud.google.com/compute/docs/instances/create-start-instance), 3 in the `us-west1-a` zone (Oregon), and 3 in the `us-west2-a` zone (Los Angeles). While creating each instance:
-    - Use the `n1-standard-4` machine type (4 vCPUs, 15 GB memory).
-    - Use the Ubuntu 16.04 OS image.
-    - [Create and mount a local SSD](https://cloud.google.com/compute/docs/disks/local-ssd#create_local_ssd).
-    - To apply the Web UI firewall rule you created earlier, click **Management, disk, networking, SSH keys**, select the **Networking** tab, and then enter `cockroachdb` in the **Network tags** field.
-
-2. Note the internal IP address of each `n1-standard-4` instance. You'll need these addresses when starting the CockroachDB nodes.
-
-3. Create an additional instance in the `us-west1-a` and `us-west2-a` zones. These can be smaller, such as `n1-standard-1`.
-
-### Step 9. Scale the cluster
-
-1. SSH to one of the `n1-standard-4` instances in the `us-west1-a` zone.
-
-2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, extract the binary, and copy it into the `PATH`:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \
-    | tar -xz
-    ~~~
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ sudo cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/
-    ~~~
-
-3. Run the [`cockroach start`](start-a-node.html) command:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ cockroach start \
-    --insecure \
-    --advertise-host= \
-    --join= \
-    --locality=cloud=gce,region=us-west1,zone=us-west1-a \
-    --cache=.25 \
-    --max-sql-memory=.25 \
-    --background
-    ~~~
-
-4. Repeat steps 1 - 3 for the other two `n1-standard-4` instances in the `us-west1-a` zone.
-
-5. SSH to one of the `n1-standard-4` instances in the `us-west2-a` zone.
-
-6. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, extract the binary, and copy it into the `PATH`:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \
-    | tar -xz
-    ~~~
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ sudo cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/
-    ~~~
-
-7. Run the [`cockroach start`](start-a-node.html) command:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ cockroach start \
-    --insecure \
-    --advertise-host= \
-    --join= \
-    --locality=cloud=gce,region=us-west2,zone=us-west2-a \
-    --cache=.25 \
-    --max-sql-memory=.25 \
-    --background
-    ~~~
-
-8. Repeat steps 5 - 7 for the other two `n1-standard-4` instances in the `us-west2-a` zone.
-
-### Step 10. Install the Python client
-
-In each of the new zones, SSH to the instance not running a CockroachDB node, and install the Python client as described in [step 5](#step-5-install-the-python-client) above.
-
-### Step 11. Check rebalancing
-
-Since you started each node with the `--locality` flag set to its GCE zone, over the next few minutes, CockroachDB will rebalance data evenly across the zones.
-
-To check this, access the Web UI on any node at `:8080` and look at the **Node List**. You'll see that the range count is more or less even across all nodes:
-
-Perf tuning rebalancing
-
-For reference, here's how the nodes map to zones:
-
-Node IDs | Zone
----------|-----
-1-3 | `us-east1-b` (South Carolina)
-4-6 | `us-west1-a` (Oregon)
-7-9 | `us-west2-a` (Los Angeles)
-
-To verify even balancing at the range level, SSH to one of the instances not running CockroachDB and run the `SHOW EXPERIMENTAL_RANGES` statement:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql \
---insecure \
---host=
      \ ---database=movr \ ---execute="SHOW EXPERIMENTAL_RANGES FROM TABLE vehicles;" -~~~ - -~~~ -+-----------+---------+----------+----------+--------------+ -| Start Key | End Key | Range ID | Replicas | Lease Holder | -+-----------+---------+----------+----------+--------------+ -| NULL | NULL | 22 | {1,6,9} | 6 | -+-----------+---------+----------+----------+--------------+ -(1 row) -~~~ - -In this case, we can see that, for the single range containing `vehicles` data, one replica is in each zone, and the leaseholder is in the `us-west1-a` zone. - -### Step 12. Test performance - -In general, all of the tuning techniques featured in the single-region scenario above still apply in a multi-region deployment. However, the fact that data and leaseholders are spread across the US means greater latencies in many cases. - -#### Reads - -For example, imagine we are a Movr administrator in New York, and we want to get the IDs and descriptions of all New York-based bikes that are currently in use: - -1. SSH to the instance in `us-east1-b` with the Python client. - -2. Query for the data: - - {% include copy-clipboard.html %} - ~~~ shell - $ ./tuning.py \ - --host=
      \ - --statement="SELECT id, ext FROM vehicles \ - WHERE city = 'new york' \ - AND type = 'bike' \ - AND status = 'in_use'" \ - --repeat=50 \ - --times - ~~~ - - ~~~ - Result: - ['id', 'ext'] - ['0068ee24-2dfb-437d-9a5d-22bb742d519e', "{u'color': u'green', u'brand': u'Kona'}"] - ['01b80764-283b-4232-8961-a8d6a4121a08', "{u'color': u'green', u'brand': u'Pinarello'}"] - ['02a39628-a911-4450-b8c0-237865546f7f', "{u'color': u'black', u'brand': u'Schwinn'}"] - ['02eb2a12-f465-4575-85f8-a4b77be14c54', "{u'color': u'black', u'brand': u'Pinarello'}"] - ['02f2fcc3-fea6-4849-a3a0-dc60480fa6c2', "{u'color': u'red', u'brand': u'FujiCervelo'}"] - ['034d42cf-741f-428c-bbbb-e31820c68588', "{u'color': u'yellow', u'brand': u'Santa Cruz'}"] - ... - - Times (milliseconds): - [1123.0790615081787, 190.16599655151367, 127.28595733642578, 72.94511795043945, 72.0360279083252, 70.50704956054688, 70.83487510681152, 73.11201095581055, 72.81899452209473, 71.35510444641113, 71.6249942779541, 70.8611011505127, 72.17597961425781, 71.78997993469238, 70.75691223144531, 76.08985900878906, 72.6480484008789, 71.91896438598633, 70.59216499328613, 71.07686996459961, 71.86722755432129, 71.01583480834961, 71.29812240600586, 71.74086570739746, 72.67093658447266, 71.03395462036133, 71.78306579589844, 71.5029239654541, 70.33801078796387, 72.91483879089355, 71.23708724975586, 72.81684875488281, 71.70701026916504, 71.32506370544434, 71.68197631835938, 70.78695297241211, 72.80707359313965, 73.0600357055664, 71.69818878173828, 71.40707969665527, 70.53804397583008, 71.83694839477539, 70.08099555969238, 71.96617126464844, 71.03586196899414, 72.6020336151123, 71.23398780822754, 71.03800773620605, 72.12519645690918, 71.77996635437012] - - Average time (milliseconds): - 96.2521076202 - ~~~ - -As we saw earlier, the leaseholder for the `vehicles` table is in `us-west1-a` (Oregon), so our query had to go from the gateway node in `us-east1-b` all the way to the west coast and then back again before returning data to the client. - -For contrast, imagine we are now a Movr administrator in Seattle, and we want to get the IDs and descriptions of all Seattle-based bikes that are currently in use: - -1. SSH to the instance in `us-west1-a` with the Python client. - -2. Query for the data: - - {% include copy-clipboard.html %} - ~~~ shell - $ ./tuning.py \ - --host=
      \ - --statement="SELECT id, ext FROM vehicles \ - WHERE city = 'seattle' \ - AND type = 'bike' \ - AND status = 'in_use'" \ - --repeat=50 \ - --times - ~~~ - - ~~~ - Result: - ['id', 'ext'] - ['00078349-94d4-43e6-92be-8b0d1ac7ee9f', "{u'color': u'blue', u'brand': u'Merida'}"] - ['003f84c4-fa14-47b2-92d4-35a3dddd2d75', "{u'color': u'red', u'brand': u'Kona'}"] - ['0107a133-7762-4392-b1d9-496eb30ee5f9', "{u'color': u'yellow', u'brand': u'Kona'}"] - ['0144498b-4c4f-4036-8465-93a6bea502a3', "{u'color': u'blue', u'brand': u'Pinarello'}"] - ['01476004-fb10-4201-9e56-aadeb427f98a', "{u'color': u'black', u'brand': u'Merida'}"] - - Times (milliseconds): - [83.34112167358398, 35.54201126098633, 36.23318672180176, 35.546064376831055, 39.82996940612793, 35.067081451416016, 35.12001037597656, 34.34896469116211, 35.05301475524902, 35.52699089050293, 34.442901611328125, 33.95986557006836, 35.25996208190918, 35.26592254638672, 35.75301170349121, 35.50601005554199, 35.93301773071289, 32.97090530395508, 35.09712219238281, 35.33005714416504, 34.66916084289551, 34.97791290283203, 34.68203544616699, 34.09695625305176, 35.676002502441406, 33.01596641540527, 35.39609909057617, 33.804893493652344, 33.6918830871582, 34.37995910644531, 33.71405601501465, 35.18819808959961, 34.35802459716797, 34.191131591796875, 33.44106674194336, 34.84678268432617, 35.51292419433594, 33.80894660949707, 33.6911678314209, 36.14497184753418, 34.671783447265625, 35.28904914855957, 33.84900093078613, 36.21387481689453, 36.26894950866699, 34.7599983215332, 34.73687171936035, 34.715890884399414, 35.101890563964844, 35.4609489440918] - - Average time (milliseconds): - 35.9096717834 - ~~~ - -Because the leaseholder for `vehicles` is in the same zone as the client request, this query took just 35.90ms compared to the similar query in New York that took 96.25ms. - -#### Writes - -The geographic distribution of data impacts write performance as well. For example, imagine 100 people in New York and 100 people in Los Angeles want to create new Movr accounts: - -1. SSH to the instance in `us-east1-b` with the Python client. - -2. Create 100 NY-based users: - - {% include copy-clipboard.html %} - ~~~ shell - ./tuning.py \ - --host=
      \ - --statement="INSERT INTO users VALUES (gen_random_uuid(), 'new york', 'New Yorker', '111 East Street', '1736352379937347')" \ - --repeat=100 \ - --times - ~~~ - - ~~~ - Times (milliseconds): - [710.5610370635986, 75.03294944763184, 76.18403434753418, 76.6599178314209, 75.54292678833008, 77.10099220275879, 76.49803161621094, 76.12395286560059, 75.13093948364258, 76.4460563659668, 74.74899291992188, 76.11799240112305, 74.95307922363281, 75.22797584533691, 75.01792907714844, 76.11393928527832, 75.35195350646973, 76.23100280761719, 75.17099380493164, 76.05600357055664, 76.4470100402832, 76.4310359954834, 75.02388954162598, 76.38192176818848, 78.89008522033691, 76.27677917480469, 75.12402534484863, 74.9521255493164, 75.08397102355957, 76.21502876281738, 75.15192031860352, 77.74996757507324, 73.84800910949707, 85.68978309631348, 75.08993148803711, 77.28886604309082, 76.8439769744873, 76.6448974609375, 75.1500129699707, 76.38287544250488, 75.12092590332031, 76.92408561706543, 76.86591148376465, 76.45702362060547, 76.61795616149902, 75.77109336853027, 81.47501945495605, 83.72306823730469, 76.41983032226562, 75.19102096557617, 74.01609420776367, 77.21996307373047, 76.61914825439453, 75.56986808776855, 76.94005966186523, 75.74892044067383, 76.63488388061523, 76.73311233520508, 75.73890686035156, 75.3028392791748, 76.58910751342773, 76.70807838439941, 76.36213302612305, 75.05607604980469, 76.99084281921387, 79.19192314147949, 75.69003105163574, 76.53594017028809, 75.3641128540039, 76.4620304107666, 75.81305503845215, 76.84993743896484, 75.74915885925293, 77.1799087524414, 76.67183876037598, 75.85597038269043, 77.18396186828613, 78.25303077697754, 76.66516304016113, 75.4399299621582, 76.98297500610352, 75.69122314453125, 77.4688720703125, 81.50601387023926, 76.74908638000488, 76.9951343536377, 75.34193992614746, 76.82991027832031, 76.4460563659668, 75.76298713684082, 76.63083076477051, 75.43802261352539, 76.47705078125, 78.95708084106445, 75.60205459594727, 75.70815086364746, 76.48301124572754, 76.65586471557617, 75.71196556091309, 74.09906387329102] - - Average time (milliseconds): - 82.7817606926 - ~~~ - -3. SSH to the instance in `us-west2-a` with the Python client. - -4. Create 100 new Los Angeles-based users: - - {% include copy-clipboard.html %} - ~~~ shell - ./tuning.py \ - --host=
      \ - --statement="INSERT INTO users VALUES (gen_random_uuid(), 'los angeles', 'Los Angel', '111 West Street', '9822222379937347')" \ - --repeat=100 \ - --times - ~~~ - - ~~~ - Times (milliseconds): - [213.47904205322266, 140.0778293609619, 138.11588287353516, 138.22197914123535, 143.43595504760742, 139.0368938446045, 138.3199691772461, 138.7031078338623, 139.38307762145996, 139.53304290771484, 138.78607749938965, 140.59996604919434, 138.1399631500244, 138.94009590148926, 138.17405700683594, 137.9709243774414, 138.02003860473633, 137.82405853271484, 140.13099670410156, 139.08815383911133, 138.0600929260254, 139.01615142822266, 138.05103302001953, 137.76111602783203, 139.38617706298828, 137.42399215698242, 137.89701461791992, 138.40818405151367, 138.6868953704834, 139.13893699645996, 139.24717903137207, 138.7009620666504, 137.4349594116211, 137.24017143249512, 138.99493217468262, 138.77201080322266, 138.624906539917, 139.19997215270996, 139.4331455230713, 143.18394660949707, 138.0319595336914, 137.6488208770752, 137.27498054504395, 136.3968849182129, 139.0249729156494, 137.9079818725586, 139.37997817993164, 139.32204246520996, 140.045166015625, 137.9718780517578, 139.36805725097656, 139.6927833557129, 139.63794708251953, 138.016939163208, 145.32899856567383, 138.261079788208, 139.56904411315918, 139.6658420562744, 138.02599906921387, 139.7988796234131, 138.24796676635742, 139.9519443511963, 136.5041732788086, 139.43004608154297, 138.16499710083008, 138.2119655609131, 139.69111442565918, 140.30194282531738, 138.14496994018555, 140.00296592712402, 139.44697380065918, 139.35494422912598, 137.9709243774414, 140.78497886657715, 136.4901065826416, 138.44680786132812, 138.69094848632812, 139.2819881439209, 140.45214653015137, 138.3049488067627, 139.4188404083252, 139.9250030517578, 140.40303230285645, 138.7009620666504, 136.9321346282959, 139.20903205871582, 138.14496994018555, 140.14315605163574, 139.30511474609375, 139.58096504211426, 141.16501808166504, 138.66591453552246, 138.3810043334961, 137.39800453186035, 139.9540901184082, 138.4589672088623, 138.72814178466797, 138.3681297302246, 139.1599178314209, 139.29295539855957] - - Average time (milliseconds): - 139.702253342 - ~~~ - -On average, it took 82.78ms to create a user in New York and 139.70ms to create a user in Los Angeles. To better understand this discrepancy, let's look at the distribution of data for the `users` table: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql \ ---insecure \ ---host=
      \ ---database=movr \ ---execute="SHOW EXPERIMENTAL_RANGES FROM TABLE users;" -~~~ - -~~~ -+-----------+---------+----------+----------+--------------+ -| Start Key | End Key | Range ID | Replicas | Lease Holder | -+-----------+---------+----------+----------+--------------+ -| NULL | NULL | 51 | {2,6,7} | 2 | -+-----------+---------+----------+----------+--------------+ -(1 row) -~~~ - -For the single range containing `users` data, one replica is in each zone, with the leaseholder in the `us-east1-b` zone. This means that: - -- When creating a user in New York, the request doesn't have to leave the zone to reach the leaseholder. However, since a write requires consensus from its replica group, the write has to wait for confirmation from either the replica in `us-west1-a` (Oregon) or `us-west2-a` (Los Angeles) before committing and then returning confirmation to the client. -- When creating a user in Los Angeles, there are more network hops and, thus, increased latency. The request first needs to travel across the continent to the leaseholder in `us-east1-b`. It then has to wait for confirmation from either the replica in `us-west1-a` (Oregon) or `us-west2-a` (Los Angeles) before committing and then returning confirmation to the client back in the west. - -### Step 13. Partition data by city - -For this service, the most effective technique for improving read and write latency is to [geo-partition](partitioning.html) the data by city. In essence, this means changing the way data is mapped to ranges. Instead of an entire table and its indexes mapping to a specific range or set of ranges, all rows in the table and its indexes with a given city will map to a range or set of ranges. Once ranges are defined in this way, we can then use the [replication zone](configure-replication-zones.html) feature to pin partitions to specific locations, ensuring that read and write requests from users in a specific city do not have to leave that region. - -1. Partitioning is an enterprise feature, so start off by [registering for a 30-day trial license](https://www.cockroachlabs.com/get-cockroachdb/enterprise/). - -2. Once you've received the trial license, SSH to any node in your cluster and [apply the license](enterprise-licensing.html#set-the-trial-or-enterprise-license-key): - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach sql \ - --insecure \ - --host=
      \ - --execute="SET CLUSTER SETTING cluster.organization = '';" - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach sql \ - --insecure \ - --host=
      \ - --execute="SET CLUSTER SETTING enterprise.license = '';" - ~~~ - -3. Define partitions for all tables and their secondary indexes. - - Start with the `users` table: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach sql \ - --insecure \ - --database=movr \ - --host=
      \ - --execute="ALTER TABLE users \ - PARTITION BY LIST (city) ( \ - PARTITION new_york VALUES IN ('new york'), \ - PARTITION boston VALUES IN ('boston'), \ - PARTITION washington_dc VALUES IN ('washington dc'), \ - PARTITION seattle VALUES IN ('seattle'), \ - PARTITION san_francisco VALUES IN ('san francisco'), \ - PARTITION los_angeles VALUES IN ('los angeles') \ - );" - ~~~ - - Now define partitions for the `vehicles` table and its secondary indexes: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach sql \ - --insecure \ - --database=movr \ - --host=
      \ - --execute="ALTER TABLE vehicles \ - PARTITION BY LIST (city) ( \ - PARTITION new_york VALUES IN ('new york'), \ - PARTITION boston VALUES IN ('boston'), \ - PARTITION washington_dc VALUES IN ('washington dc'), \ - PARTITION seattle VALUES IN ('seattle'), \ - PARTITION san_francisco VALUES IN ('san francisco'), \ - PARTITION los_angeles VALUES IN ('los angeles') \ - );" - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach sql \ - --insecure \ - --database=movr \ - --host=
      \ - --execute="ALTER INDEX vehicles_auto_index_fk_city_ref_users \ - PARTITION BY LIST (city) ( \ - PARTITION new_york_idx VALUES IN ('new york'), \ - PARTITION boston_idx VALUES IN ('boston'), \ - PARTITION washington_dc_idx VALUES IN ('washington dc'), \ - PARTITION seattle_idx VALUES IN ('seattle'), \ - PARTITION san_francisco_idx VALUES IN ('san francisco'), \ - PARTITION los_angeles_idx VALUES IN ('los angeles') \ - );" - ~~~ - - Next, define partitions for the `rides` table and its secondary indexes: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach sql \ - --insecure \ - --database=movr \ - --host=
      \ - --execute="ALTER TABLE rides \ - PARTITION BY LIST (city) ( \ - PARTITION new_york VALUES IN ('new york'), \ - PARTITION boston VALUES IN ('boston'), \ - PARTITION washington_dc VALUES IN ('washington dc'), \ - PARTITION seattle VALUES IN ('seattle'), \ - PARTITION san_francisco VALUES IN ('san francisco'), \ - PARTITION los_angeles VALUES IN ('los angeles') \ - );" - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach sql \ - --insecure \ - --database=movr \ - --host=
      \ - --execute="ALTER INDEX rides_auto_index_fk_city_ref_users \ - PARTITION BY LIST (city) ( \ - PARTITION new_york_idx1 VALUES IN ('new york'), \ - PARTITION boston_idx1 VALUES IN ('boston'), \ - PARTITION washington_dc_idx1 VALUES IN ('washington dc'), \ - PARTITION seattle_idx1 VALUES IN ('seattle'), \ - PARTITION san_francisco_idx1 VALUES IN ('san francisco'), \ - PARTITION los_angeles_idx1 VALUES IN ('los angeles') \ - );" - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach sql \ - --insecure \ - --database=movr \ - --host=
      \ - --execute="ALTER INDEX rides_auto_index_fk_vehicle_city_ref_vehicles \ - PARTITION BY LIST (vehicle_city) ( \ - PARTITION new_york_idx2 VALUES IN ('new york'), \ - PARTITION boston_idx2 VALUES IN ('boston'), \ - PARTITION washington_dc_idx2 VALUES IN ('washington dc'), \ - PARTITION seattle_idx2 VALUES IN ('seattle'), \ - PARTITION san_francisco_idx2 VALUES IN ('san francisco'), \ - PARTITION los_angeles_idx2 VALUES IN ('los angeles') \ - );" - ~~~ - - Finally, drop an unused index on `rides` rather than partition it: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach sql \ - --insecure \ - --database=movr \ - --host=
      \ - --execute="DROP INDEX rides_start_time_idx;" - ~~~ - - {{site.data.alerts.callout_info}} - The `rides` table contains 1 million rows, so dropping this index will take a few minutes. - {{site.data.alerts.end}} - -4. Now [create replication zones](configure-replication-zones.html#create-a-replication-zone-for-a-table-or-secondary-index-partition-new-in-v2-0) to require city data to be stored on specific nodes based on node locality. - - City | Locality - -----|--------- - New York | `zone=us-east1-b` - Boston | `zone=us-east1-b` - Washington DC | `zone=us-east1-b` - Seattle | `zone=us-west1-a` - San Francisco | `zone=us-west2-a` - Los Angeles | `zone=us-west2-a` - - {{site.data.alerts.callout_info}} - Since our nodes are located in 3 specific GCE zones, we're only going to use the `zone=` portion of node locality. If we were using multiple zones per region, we would likely use the `region=` portion of the node locality instead. - {{site.data.alerts.end}} - - Start with the `users` table partitions: - - {% include copy-clipboard.html %} - ~~~ shell - $ echo 'constraints: [+zone=us-east1-b]' | \ - cockroach zone set movr.users.new_york \ - --insecure \ - --host=
      \ - -f - - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ echo 'constraints: [+zone=us-east1-b]' | \ - cockroach zone set movr.users.boston \ - --insecure \ - --host=
      \ - -f - - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ echo 'constraints: [+zone=us-east1-b]' | \ - cockroach zone set movr.users.washington_dc \ - --insecure \ - --host=
      \ - -f - - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ echo 'constraints: [+zone=us-west1-a]' | \ - cockroach zone set movr.users.seattle \ - --insecure \ - --host=
      \ - -f - - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ echo 'constraints: [+zone=us-west2-a]' | \ - cockroach zone set movr.users.san_francisco \ - --insecure \ - --host=
      \ - -f - - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ echo 'constraints: [+zone=us-west2-a]' | \ - cockroach zone set movr.users.los_angeles \ - --insecure \ - --host=
      \ - -f - - ~~~ - - Move on to the `vehicles` table and secondary index partitions: - - {% include copy-clipboard.html %} - ~~~ shell - $ echo 'constraints: [+zone=us-east1-b]' | \ - cockroach zone set movr.vehicles.new_york \ - --insecure \ - --host=
      \ - -f - - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ echo 'constraints: [+zone=us-east1-b]' | \ - cockroach zone set movr.vehicles.new_york_idx \ - --insecure \ - --host=
      \ - -f - - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ echo 'constraints: [+zone=us-east1-b]' | \ - cockroach zone set movr.vehicles.boston \ - --insecure \ - --host=
      \ - -f - - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ echo 'constraints: [+zone=us-east1-b]' | \ - cockroach zone set movr.vehicles.boston_idx \ - --insecure \ - --host=
      \ - -f - - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ echo 'constraints: [+zone=us-east1-b]' | \ - cockroach zone set movr.vehicles.washington_dc \ - --insecure \ - --host=
      \ - -f - - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ echo 'constraints: [+zone=us-east1-b]' | \ - cockroach zone set movr.vehicles.washington_dc_idx \ - --insecure \ - --host=
      \ - -f - - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ echo 'constraints: [+zone=us-west1-a]' | \ - cockroach zone set movr.vehicles.seattle \ - --insecure \ - --host=
      \ - -f - - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ echo 'constraints: [+zone=us-west1-a]' | \ - cockroach zone set movr.vehicles.seattle_idx \ - --insecure \ - --host=
      \ - -f - - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ echo 'constraints: [+zone=us-west2-a]' | \ - cockroach zone set movr.vehicles.san_francisco \ - --insecure \ - --host=
      \ - -f - - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ echo 'constraints: [+zone=us-west2-a]' | \ - cockroach zone set movr.vehicles.san_francisco_idx \ - --insecure \ - --host=
      \ - -f - - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ echo 'constraints: [+zone=us-west2-a]' | \ - cockroach zone set movr.vehicles.los_angeles \ - --insecure \ - --host=
      \ - -f - - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ echo 'constraints: [+zone=us-west2-a]' | \ - cockroach zone set movr.vehicles.los_angeles_idx \ - --insecure \ - --host=
      \ - -f - - ~~~ - - Finish with the `rides` table and secondary index partitions: - - {% include copy-clipboard.html %} - ~~~ shell - $ echo 'constraints: [+zone=us-east1-b]' | \ - cockroach zone set movr.rides.new_york \ - --insecure \ - --host=
      \ - -f - - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ echo 'constraints: [+zone=us-east1-b]' | \ - cockroach zone set movr.rides.new_york_idx1 \ - --insecure \ - --host=
      \ - -f - - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ echo 'constraints: [+zone=us-east1-b]' | \ - cockroach zone set movr.rides.new_york_idx2 \ - --insecure \ - --host=
      \ - -f - - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ echo 'constraints: [+zone=us-east1-b]' | \ - cockroach zone set movr.rides.boston \ - --insecure \ - --host=
      \ - -f - - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ echo 'constraints: [+zone=us-east1-b]' | \ - cockroach zone set movr.rides.boston_idx1 \ - --insecure \ - --host=
      \ - -f - - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ echo 'constraints: [+zone=us-east1-b]' | \ - cockroach zone set movr.rides.boston_idx2 \ - --insecure \ - --host=
      \ - -f - - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ echo 'constraints: [+zone=us-east1-b]' | \ - cockroach zone set movr.rides.washington_dc \ - --insecure \ - --host=
      \ - -f - - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ echo 'constraints: [+zone=us-east1-b]' | \ - cockroach zone set movr.rides.washington_dc_idx1 \ - --insecure \ - --host=
      \ - -f - - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ echo 'constraints: [+zone=us-east1-b]' | \ - cockroach zone set movr.rides.washington_dc_idx2 \ - --insecure \ - --host=
      \ - -f - - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ echo 'constraints: [+zone=us-west1-a]' | \ - cockroach zone set movr.rides.seattle \ - --insecure \ - --host=
      \ - -f - - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ echo 'constraints: [+zone=us-west1-a]' | \ - cockroach zone set movr.rides.seattle_idx1 \ - --insecure \ - --host=
      \ - -f - - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ echo 'constraints: [+zone=us-west1-a]' | \ - cockroach zone set movr.rides.seattle_idx2 \ - --insecure \ - --host=
      \ - -f - - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ echo 'constraints: [+zone=us-west2-a]' | \ - cockroach zone set movr.rides.san_francisco \ - --insecure \ - --host=
      \ - -f - - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ echo 'constraints: [+zone=us-west2-a]' | \ - cockroach zone set movr.rides.san_francisco_idx1 \ - --insecure \ - --host=
      \ - -f - - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ echo 'constraints: [+zone=us-west2-a]' | \ - cockroach zone set movr.rides.san_francisco_idx2 \ - --insecure \ - --host=
      \ - -f - - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ echo 'constraints: [+zone=us-west2-a]' | \ - cockroach zone set movr.rides.los_angeles \ - --insecure \ - --host=
      \ - -f - - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ echo 'constraints: [+zone=us-west2-a]' | \ - cockroach zone set movr.rides.los_angeles_idx1 \ - --insecure \ - --host=
      \ - -f - - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ echo 'constraints: [+zone=us-west2-a]' | \ - cockroach zone set movr.rides.los_angeles_idx2 \ - --insecure \ - --host=
      \ - -f - - ~~~ - -### Step 14. Check rebalancing after partitioning - -Over the next few minutes, CockroachDB will rebalance all partitions based on the constraints you defined. - -To check this at a high level, access the Web UI on any node at `:8080` and look at the **Node List**. You'll see that the range count is still close to even across all nodes but much higher than before partitioning: - -*Screenshot: Perf tuning rebalancing* - -To check at a more granular level, SSH to one of the instances not running CockroachDB and run the `SHOW EXPERIMENTAL_RANGES` statement on the `vehicles` table: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql \ ---insecure \ ---host=
      \ ---database=movr \ ---execute="SELECT * FROM \ -[SHOW EXPERIMENTAL_RANGES FROM TABLE vehicles] \ -WHERE \"Start Key\" IS NOT NULL \ - AND \"Start Key\" NOT LIKE '%Prefix%';" -~~~ - -~~~ -+------------------+----------------------------+----------+----------+--------------+ -| Start Key | End Key | Range ID | Replicas | Lease Holder | -+------------------+----------------------------+----------+----------+--------------+ -| /"boston" | /"boston"/PrefixEnd | 95 | {1,2,3} | 2 | -| /"los angeles" | /"los angeles"/PrefixEnd | 111 | {7,8,9} | 9 | -| /"new york" | /"new york"/PrefixEnd | 91 | {1,2,3} | 1 | -| /"san francisco" | /"san francisco"/PrefixEnd | 107 | {7,8,9} | 7 | -| /"seattle" | /"seattle"/PrefixEnd | 103 | {4,5,6} | 4 | -| /"washington dc" | /"washington dc"/PrefixEnd | 99 | {1,2,3} | 1 | -+------------------+----------------------------+----------+----------+--------------+ -(6 rows) -~~~ - -For reference, here's how the nodes map to zones: - -Node IDs | Zone ---------|----- -1-3 | `us-east1-b` (South Carolina) -4-6 | `us-west1-a` (Oregon) -7-9 | `us-west2-a` (Los Angeles) - -We can see that, after partitioning, the replicas for New York, Boston, and Washington DC are located on nodes 1-3 in `us-east1-b`, replicas for Seattle are located on nodes 4-6 in `us-west1-a`, and replicas for San Francisco and Los Angeles are located on nodes 7-9 in `us-west2-a`. - -### Step 15. Test performance after partitioning - -After partitioning, reads and writes for a specific city will be much faster because all replicas for that city are now located on the nodes closest to the city. - -To check this, let's repeat a few of the read and write queries that we executed before partitioning in [step 12](#step-12-test-performance). - -#### Reads - -Again imagine we are a Movr administrator in New York, and we want to get the IDs and descriptions of all New York-based bikes that are currently in use: - -1. SSH to the instance in `us-east1-b` with the Python client. - -2. Query for the data: - - {% include copy-clipboard.html %} - ~~~ shell - $ ./tuning.py \ - --host=
      \ - --statement="SELECT id, ext FROM vehicles \ - WHERE city = 'new york' \ - AND type = 'bike' \ - AND status = 'in_use'" \ - --repeat=50 \ - --times - ~~~ - - ~~~ - Result: - ['id', 'ext'] - ['0068ee24-2dfb-437d-9a5d-22bb742d519e', "{u'color': u'green', u'brand': u'Kona'}"] - ['01b80764-283b-4232-8961-a8d6a4121a08', "{u'color': u'green', u'brand': u'Pinarello'}"] - ['02a39628-a911-4450-b8c0-237865546f7f', "{u'color': u'black', u'brand': u'Schwinn'}"] - ['02eb2a12-f465-4575-85f8-a4b77be14c54', "{u'color': u'black', u'brand': u'Pinarello'}"] - ['02f2fcc3-fea6-4849-a3a0-dc60480fa6c2', "{u'color': u'red', u'brand': u'FujiCervelo'}"] - ['034d42cf-741f-428c-bbbb-e31820c68588', "{u'color': u'yellow', u'brand': u'Santa Cruz'}"] - ... - - Times (milliseconds): - [17.27890968322754, 9.554147720336914, 7.483959197998047, 7.407903671264648, 7.538795471191406, 7.39288330078125, 7.623910903930664, 7.172822952270508, 7.15184211730957, 7.201910018920898, 7.063865661621094, 7.602930068969727, 7.246971130371094, 6.966829299926758, 7.369041442871094, 7.277965545654297, 7.650852203369141, 7.177829742431641, 7.266998291015625, 7.150173187255859, 7.303953170776367, 7.1048736572265625, 7.218122482299805, 7.168054580688477, 7.258176803588867, 7.375955581665039, 7.013797760009766, 7.2078704833984375, 7.277965545654297, 7.352113723754883, 7.0400238037109375, 7.379055023193359, 7.227897644042969, 7.266044616699219, 6.883859634399414, 7.344961166381836, 7.222175598144531, 7.149934768676758, 7.241010665893555, 6.999969482421875, 7.40504264831543, 7.191896438598633, 7.192134857177734, 7.2231292724609375, 7.10296630859375, 7.291078567504883, 6.976127624511719, 7.338047027587891, 6.918191909790039, 7.070064544677734] - - Average time (milliseconds): - 7.48650074005 - ~~~ - -Before partitioning, this query took 96.25ms on average. After partitioning, the query took only 7.48ms on average. - -#### Writes - -Now let's again imagine 100 people in New York and 100 people in Los Angeles want to create new Movr accounts: - -1. SSH to the instance in `us-east1-b` with the Python client. - -2. Create 100 NY-based users: - - {% include copy-clipboard.html %} - ~~~ shell - ./tuning.py \ - --host=
      \ - --statement="INSERT INTO users VALUES (gen_random_uuid(), 'new york', 'New Yorker', '111 East Street', '1736352379937347')" \ - --repeat=100 \ - --times - ~~~ - - ~~~ - Times (milliseconds): - [9.378910064697266, 7.173061370849609, 9.769916534423828, 8.235931396484375, 9.124040603637695, 9.358882904052734, 8.581161499023438, 7.482051849365234, 8.441925048828125, 8.306026458740234, 8.775949478149414, 8.685827255249023, 6.851911544799805, 9.104013442993164, 9.664058685302734, 7.126092910766602, 8.738994598388672, 8.75997543334961, 9.040117263793945, 8.374929428100586, 8.384943008422852, 10.58506965637207, 8.538961410522461, 7.405996322631836, 9.508132934570312, 8.268117904663086, 11.46697998046875, 9.343147277832031, 8.31294059753418, 7.085084915161133, 8.779048919677734, 7.356166839599609, 8.732080459594727, 9.31406021118164, 8.460044860839844, 8.933067321777344, 8.610963821411133, 7.01904296875, 9.474039077758789, 8.276939392089844, 9.40704345703125, 9.205818176269531, 8.270025253295898, 7.443904876708984, 8.999824523925781, 8.215904235839844, 8.124828338623047, 8.324861526489258, 8.156061172485352, 8.740901947021484, 8.39996337890625, 7.437944412231445, 8.78000259399414, 8.615970611572266, 8.795022964477539, 8.683919906616211, 7.111072540283203, 7.770061492919922, 8.922100067138672, 9.526968002319336, 7.8411102294921875, 8.287191390991211, 10.084152221679688, 8.744001388549805, 8.032083511352539, 7.095098495483398, 8.343935012817383, 8.038997650146484, 8.939027786254883, 8.714914321899414, 6.999969482421875, 7.087945938110352, 9.23299789428711, 8.90803337097168, 7.808923721313477, 8.558034896850586, 7.122993469238281, 8.755922317504883, 8.379936218261719, 8.464813232421875, 8.405923843383789, 7.163047790527344, 9.139060974121094, 8.706092834472656, 7.130146026611328, 12.811899185180664, 9.733915328979492, 7.981061935424805, 9.001016616821289, 8.28409194946289, 7.188081741333008, 9.055137634277344, 9.569883346557617, 7.223844528198242, 8.78596305847168, 6.941080093383789, 8.934974670410156, 8.980989456176758, 7.564067840576172, 9.202003479003906] - - Average time (milliseconds): - 8.51003170013 - ~~~ - - Before partitioning, this query took 82.78ms on average. After partitioning, the query took only 8.51ms on average. - -3. SSH to the instance in `us-west2-a` with the Python client. - -4. Create 100 new Los Angeles-based users: - - {% include copy-clipboard.html %} - ~~~ shell - ./tuning.py \ - --host=
      \ - --statement="INSERT INTO users VALUES (gen_random_uuid(), 'los angeles', 'Los Angel', '111 West Street', '9822222379937347')" \ - --repeat=100 \ - --times - ~~~ - - ~~~ - Times (milliseconds): - [20.322084426879883, 14.09602165222168, 14.353036880493164, 25.568008422851562, 15.157938003540039, 27.19593048095703, 29.092073440551758, 14.515876770019531, 14.114141464233398, 19.414901733398438, 15.073060989379883, 13.965845108032227, 13.913869857788086, 15.218019485473633, 13.844013214111328, 14.110088348388672, 13.943910598754883, 13.73600959777832, 13.262033462524414, 14.648914337158203, 14.066219329833984, 13.91911506652832, 14.122962951660156, 14.724016189575195, 17.747879028320312, 16.537904739379883, 13.921022415161133, 14.027118682861328, 15.810012817382812, 14.811992645263672, 14.551877975463867, 14.912128448486328, 14.078140258789062, 14.576196670532227, 19.381046295166016, 14.536857604980469, 14.664888381958008, 14.539957046508789, 15.054941177368164, 17.20881462097168, 14.64700698852539, 14.211177825927734, 15.089988708496094, 14.193058013916016, 14.544010162353516, 14.680862426757812, 14.32490348815918, 15.841007232666016, 14.069080352783203, 14.59503173828125, 14.837026596069336, 14.315128326416016, 14.558792114257812, 14.645099639892578, 14.82701301574707, 14.699935913085938, 15.035152435302734, 14.724016189575195, 16.10708236694336, 14.612913131713867, 14.641046524047852, 14.706850051879883, 14.29295539855957, 14.779090881347656, 15.485048294067383, 17.444133758544922, 15.172004699707031, 20.865917205810547, 14.388084411621094, 14.241218566894531, 14.343976974487305, 14.602899551391602, 14.64390754699707, 13.908147811889648, 20.69687843322754, 15.130043029785156, 14.754056930541992, 14.123916625976562, 14.760017395019531, 14.25480842590332, 14.446020126342773, 14.229059219360352, 15.10000228881836, 14.275789260864258, 14.42098617553711, 14.935970306396484, 15.175819396972656, 27.69613265991211, 14.856815338134766, 14.902830123901367, 15.029191970825195, 15.143871307373047, 15.524148941040039, 14.510869979858398, 18.740177154541016, 14.97197151184082, 15.30003547668457, 15.158891677856445, 14.423847198486328, 35.25400161743164] - - Average time (milliseconds): - 15.7462859154 - ~~~ - - Before partitioning, this query took 139.70ms on average. After partitioning, the query took only 15.74ms on average. - -## See also - -- [SQL Performance Best Practices](performance-best-practices-overview.html) -- [Performance Benchmarking](performance-benchmarking-with-tpc-c.html) -- [Production Checklist](recommended-production-settings.html) diff --git a/src/current/v2.0/porting-postgres.md b/src/current/v2.0/porting-postgres.md deleted file mode 100644 index e40c1de502a..00000000000 --- a/src/current/v2.0/porting-postgres.md +++ /dev/null @@ -1,97 +0,0 @@ ---- -title: Porting from PostgreSQL -summary: Porting an application from PostgreSQL -toc: true ---- - -Although CockroachDB supports PostgreSQL syntax and drivers, it does not offer exact compatibility. This page documents the known list of differences between PostgreSQL and CockroachDB for identical input. That is, a SQL statement of the type listed here will behave differently than in PostgreSQL. Porting an existing application to CockroachDB will require changing these expressions. - -Note that some of these differences only apply to rare inputs, and so no change will be needed, even if the listed feature is being used. In these cases, it is safe to ignore the porting instructions. 
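-
-When porting and testing, it can help to first confirm which engine a given connection is actually talking to. A minimal sketch (version strings vary by release and platform):
-
-~~~ sql
-> SELECT version();
--- CockroachDB returns a version string beginning with "CockroachDB";
--- PostgreSQL returns one beginning with "PostgreSQL".
-~~~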
- -{{site.data.alerts.callout_info}}This document currently only covers how to rewrite SQL expressions. It does not discuss strategies for porting applications that use SQL features CockroachDB does not currently support, such as the ENUM type.{{site.data.alerts.end}} - - -### Overflow of `float` - -In PostgreSQL, the `float` type returns an error when it overflows or an expression would return Infinity: - -~~~ -postgres=# select 1e300::float * 1e10::float; -ERROR: value out of range: overflow -postgres=# select pow(0::float, -1::float); -ERROR: zero raised to a negative power is undefined -~~~ - -In CockroachDB, these expressions instead return Infinity: - -~~~ sql -SELECT 1e300::float * 1e10::float; -~~~ - -~~~ -+----------------------------+ -| 1e300::FLOAT * 1e10::FLOAT | -+----------------------------+ -| +Inf | -+----------------------------+ -~~~ - -~~~ sql -SELECT pow(0::float, -1::float); -~~~ - -~~~ -+---------------------------+ -| pow(0::FLOAT, - 1::FLOAT) | -+---------------------------+ -| +Inf | -+---------------------------+ -~~~ - -### Precedence of unary `~` - -In PostgreSQL, the unary `~` (bitwise not) operator has a low precedence. For example, the following query is parsed as `~ (1 + 2)` because `~` has a lower precedence than `+`: - -~~~ sql -SELECT ~1 + 2 -~~~ - -In CockroachDB, unary `~` has the same (high) precedence as unary `-`, so the above expression will be parsed as `(~1) + 2`. - -**Porting instructions:** Manually add parentheses around expressions that depend on the PostgreSQL behavior. - -### Precedence of bitwise operators - -In PostgreSQL, the operators `|` (bitwise OR), `#` (bitwise XOR), and `&` (bitwise AND) all have the same precedence. - -In CockroachDB, the precedence from highest to lowest is: `&`, `#`, `|`. - -**Porting instructions:** Manually add parentheses around expressions that depend on the PostgreSQL behavior. - -### Integer division - -In PostgreSQL, division of integers results in an integer. For example, the following query returns `1`, since `1 / 2` is truncated to `0`: - -~~~ sql -SELECT 1 + 1 / 2 -~~~ - -In CockroachDB, integer division results in a `decimal`. CockroachDB instead provides the `//` operator to perform floor division. - -**Porting instructions:** Change `/` to `//` in integer division where the result must be an integer. - -### Shift argument modulo - -In PostgreSQL, the shift operators (`<<`, `>>`) sometimes reduce their second argument modulo the bit size of the underlying type. For example, the following query returns `1` because the int type is 32 bits, and `32 % 32` is `0`, so this is the equivalent of `1 << 0`: - -~~~ sql -SELECT 1::int << 32 -~~~ - -In CockroachDB, no such modulo is performed. - -**Porting instructions:** Manually add a modulo to the second argument. Also note that CockroachDB's [`INT`](int.html) type is always 64 bits. For example: - -~~~ sql -SELECT 1::int << (x % 64) -~~~ diff --git a/src/current/v2.0/primary-key.md b/src/current/v2.0/primary-key.md deleted file mode 100644 index d5d21a0fcb9..00000000000 --- a/src/current/v2.0/primary-key.md +++ /dev/null @@ -1,118 +0,0 @@ ---- -title: Primary Key Constraint -summary: The Primary Key constraint specifies that the columns can be used to uniquely identify rows in a table. -toc: true ---- - -The Primary Key [constraint](constraints.html) specifies that the constrained columns' values must uniquely identify each row. 
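-
-For example, a minimal sketch (`customers` here is an illustrative table, not one used elsewhere on this page): each row must have an `id`, and no two rows can share the same `id`:
-
-~~~ sql
-> CREATE TABLE customers (
-    id INT PRIMARY KEY,
-    name STRING
-  );
-~~~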
- -Unlike other constraints, which have very specific uses, the Primary Key constraint *should be used for every table* because it provides an intrinsic structure to the table's data. This both makes the data easier to understand and improves query performance. - -{{site.data.alerts.callout_info}}A table's primary key can only be specified in the CREATE TABLE statement. It cannot be changed later using ALTER TABLE, though you can create a new table with the primary key you want and then migrate the data.{{site.data.alerts.end}} - - -## Details - -- Tables can only have one primary key. -- To ensure each row has a unique identifier, the Primary Key constraint combines the properties of both the [Unique](unique.html) and [Not Null](not-null.html) constraints. The properties of both constraints are necessary to make sure each row's primary key columns contain distinct sets of values. - - - The properties of the Unique constraint ensure that each value is distinct from all other values. - - However, because *NULL* values never equal other *NULL* values, the Unique constraint is not enough (two rows can appear the same if one of the values is *NULL*). To prevent the appearance of duplicated values, the Primary Key constraint also enforces the properties of the Not Null constraint. - -- The columns in the Primary Key constraint are used to create its `primary` [index](indexes.html), which CockroachDB uses by default to access the table's data. - - This index does not take up additional disk space (unlike secondary indexes, which do) because CockroachDB uses the `primary` index to structure the table's data in the key-value layer. For more information, see our blog post [SQL in CockroachDB: Mapping Table Data to Key-Value Storage](https://www.cockroachlabs.com/blog/sql-in-cockroachdb-mapping-table-data-to-key-value-storage/). - -- For optimal performance, we recommend defining a primary key for *every* table. - - If you create a table without defining a primary key, CockroachDB uses a unique identifier for each row, which it then uses for the `primary` index. Because you cannot meaningfully use this unique row identifier column to filter table data, it does not offer any performance optimization. This means you will always have improved performance by defining a primary key for a table. For more information, see our blog post [Index Selection in CockroachDB](https://www.cockroachlabs.com/blog/index-selection-cockroachdb-2/). - -## Syntax - -Primary Key constraints can be defined at the [table level](#table-level). However, if you only want the constraint to apply to a single column, it can be applied at the [column level](#column-level). - -### Column Level 
      -{% include {{ page.version.version }}/sql/diagrams/primary_key_column_level.html %} -
      - -| Parameter | Description | -|-----------|-------------| -| `table_name` | The name of the table you're creating. | -| `column_name` | The name of the Primary Key column. | -| `column_type` | The Primary Key column's [data type](data-types.html). | -| `column_constraints` | Any other column-level [constraints](constraints.html) you want to apply to this column. | -| `column_def` | Definitions for any other columns in the table. | -| `table_constraints` | Any table-level [constraints](constraints.html) you want to apply. | - -**Example** - -~~~ sql -> CREATE TABLE orders ( - order_id INT PRIMARY KEY, - order_date TIMESTAMP NOT NULL, - order_mode STRING(8), - customer_id INT, - order_status INT - ); -~~~ - -### Table Level - -
      -{% include {{ page.version.version }}/sql/diagrams/primary_key_table_level.html %} -
      - -| Parameter | Description | -|-----------|-------------| -| `table_name` | The name of the table you're creating. | -| `column_def` | Definitions for any other columns in the table. | -| `name` | The name you want to use for the constraint, which must be unique to its table and follow these [identifier rules](keywords-and-identifiers.html#identifiers). | -| `column_name` | The name of the column you want to use as the Primary Key.<br><br>The order in which you list columns here affects the structure of the `primary` index.| -| `table_constraints` | Any other table-level [constraints](constraints.html) you want to apply. | - -**Example** - -~~~ sql -> CREATE TABLE IF NOT EXISTS inventories ( - product_id INT, - warehouse_id INT, - quantity_on_hand INT NOT NULL, - PRIMARY KEY (product_id, warehouse_id) - ); -~~~ - -## Usage Example - -~~~ sql -> CREATE TABLE IF NOT EXISTS inventories ( - product_id INT, - warehouse_id INT, - quantity_on_hand INT NOT NULL, - PRIMARY KEY (product_id, warehouse_id) - ); - -> INSERT INTO inventories VALUES (1, 1, 100); - -> INSERT INTO inventories VALUES (1, 1, 200); -~~~ -~~~ -pq: duplicate key value (product_id,warehouse_id)=(1,1) violates unique constraint "primary" -~~~ -~~~ sql -> INSERT INTO inventories VALUES (1, NULL, 100); -~~~ -~~~ -pq: null value in column "warehouse_id" violates not-null constraint -~~~ - -## See Also - -- [Constraints](constraints.html) -- [Check constraint](check.html) -- [Default Value constraint](default-value.html) -- [Foreign Key constraint](foreign-key.html) -- [Not Null constraint](not-null.html) -- [Unique constraint](unique.html) -- [`SHOW CONSTRAINTS`](show-constraints.html) diff --git a/src/current/v2.0/privileges.md b/src/current/v2.0/privileges.md deleted file mode 100644 index da492db05ca..00000000000 --- a/src/current/v2.0/privileges.md +++ /dev/null @@ -1,46 +0,0 @@ ---- -title: Privileges -summary: Privileges are granted to roles and users at the database and table levels. They are not yet supported for other granularities such as columns or rows. -toc: true ---- - -In CockroachDB, privileges are granted to [roles](roles.html) and [users](create-and-manage-users.html) at the database and table levels. They are not yet supported for other granularities such as columns or rows. - -When a user connects to a database, either via the [built-in SQL client](use-the-built-in-sql-client.html) or a [client driver](install-client-drivers.html), CockroachDB checks the user and role's privileges for each statement executed. If the user does not have sufficient privileges for a statement, CockroachDB returns an error. - -For the privileges required by specific statements, see the documentation for the respective [SQL statement](sql-statements.html). - - -## Supported Privileges - -For a full list of supported privileges, see the [`GRANT`](grant.html) documentation. 
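-
-As noted above, a statement issued by a user without the required privilege fails with an error. A sketch of what this looks like (the exact message varies by statement and version):
-
-~~~ sql
-> SELECT * FROM bank.accounts;
-~~~
-
-~~~
-pq: user maxroach does not have SELECT privilege on table accounts
-~~~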
- -## Granting Privileges - -To grant privileges to a role or user, use the [`GRANT`](grant.html) statement, for example: - -~~~ sql -> GRANT SELECT, INSERT ON bank.accounts TO maxroach; -~~~ - -## Showing Privileges - -To show privileges granted to roles or users, use the [`SHOW GRANTS`](show-grants.html) statement, for example: - -~~~ sql -> SHOW GRANTS ON DATABASE bank FOR maxroach; -~~~ - -## Revoking Privileges - -To revoke privileges from roles or users, use the [`REVOKE`](revoke.html) statement, for example: - -~~~ sql -> REVOKE INSERT ON bank.accounts FROM maxroach; -~~~ - -## See Also - -- [Manage Users](create-and-manage-users.html) -- [Manage Roles](roles.html) -- [SQL Statements](sql-statements.html) diff --git a/src/current/v2.0/query-behavior-troubleshooting.md b/src/current/v2.0/query-behavior-troubleshooting.md deleted file mode 100644 index 2a6443f75d1..00000000000 --- a/src/current/v2.0/query-behavior-troubleshooting.md +++ /dev/null @@ -1,49 +0,0 @@ ---- -title: Troubleshoot Query Behavior -summary: Learn how to troubleshoot issues with specific queries with CockroachDB -toc: true ---- - -If a query returns an unexpected result or takes longer than expected to process, this page will help you troubleshoot the issue. - - -## Correctness Issues - -If your queries return unexpected results, there are several possibilities: - -- You’ve encountered a [known limitation](https://github.com/cockroachdb/cockroach/issues?q=is%3Aopen+is%3Aissue+label%3Aknown-limitation), [UX surprise](https://github.com/cockroachdb/cockroach/issues?utf8=%E2%9C%93&q=is%3Aopen%20is%3Aissue%20label%3Aux-surprise), or other problem with [SQL semantics](https://github.com/cockroachdb/cockroach/issues?utf8=%E2%9C%93&q=is%3Aopen%20is%3Aissue%20label%3Asql-semantics). Feel free to leave a comment on the existing issue indicating that you’ve encountered a problem as well. -- Your application has a bug. It's always worthwhile to check and double-check your application’s logic before filing an issue. That said, you can always [reach out for support](support-resources.html). -- CockroachDB has a bug. Please [file an issue](file-an-issue.html). - -## Performance Issues - -If queries are taking longer than expected to process, there are a few things you can check: - -- Review your deployment's monitoring. General network latency or partitioning events can affect query response times. - -- [Identify and cancel long-running queries](manage-long-running-queries.html). - -- [Turn on SQL logging](#sql-logging). - -If you're still unable to determine why the query executes slowly, please [file an issue](file-an-issue.html). - -## `bad connection` & `closed` Responses - -If you receive a response of `bad connection` or `closed`, this normally indicates that the node you connected to died. You can check this by connecting to another node in the cluster and running [`cockroach node status`](view-node-details.html#show-the-status-of-all-nodes). - -Once you find the downed node, you can check its [logs](debug-and-error-logs.html) (stored in `cockroach-data/logs` by default). - -Because this kind of behavior is entirely unexpected, you should [file an issue](file-an-issue.html). - -## SQL Logging - -{% include {{ page.version.version }}/faq/sql-query-logging.md %} - -## Something Else? 
- -Try searching the rest of our docs for answers or using our other [support resources](support-resources.html), including: - -- [CockroachDB Community Forum](https://forum.cockroachlabs.com) -- [CockroachDB Community Slack](https://cockroachdb.slack.com) -- [StackOverflow](http://stackoverflow.com/questions/tagged/cockroachdb) -- [CockroachDB Support Portal](https://support.cockroachlabs.com) diff --git a/src/current/v2.0/query-order.md b/src/current/v2.0/query-order.md deleted file mode 100644 index 6df4d86ca81..00000000000 --- a/src/current/v2.0/query-order.md +++ /dev/null @@ -1,253 +0,0 @@ ---- -title: Ordering Query Results -summary: The ORDER BY clause controls the order in which rows are returned or processed. -toc: true ---- - -The `ORDER BY` clause controls the order in which rows are returned or -processed. It can be used in any [selection -query](selection-queries.html), including -as an operand of [`INSERT`](insert.html) or [`UPSERT`](upsert.html), as -well as with [`DELETE`](delete.html) and [`UPDATE`](update.html) -statements. - - -## Synopsis 
      -{% include {{ page.version.version }}/sql/diagrams/sort_clause.html %} -
      - -## Parameters - -The `ORDER BY` clause takes a comma-separated list of ordering specifications. -Each ordering specification is composed of a column selection followed optionally -by the keyword `ASC` or `DESC`. - -Each **column selection** can take one of the following forms: - -- A simple column selection, determined as follows: - 1. The name of a column label configured with `AS` earlier in the [`SELECT` clause](select-clause.html). This uses the value computed by the `SELECT` clause as the sorting key. - 2. A positive integer number, designating one of the columns in the data source, either the `FROM` clause of the `SELECT` clause where it happens or the table being written to by `DELETE` or `UPDATE`. This uses the corresponding input value from the data source as the sorting key. - 3. An arbitrary [scalar expression](scalar-expressions.html). This uses the result of evaluating that expression as the sorting key. -- The notation `PRIMARY KEY <table_name>`. This uses the primary key column(s) of the given table as the sorting key. This table must be part of the data source. -- The notation `INDEX <table_name>@<index_name>`. This uses the columns indexed by the given index as the sorting key. This table must be part of the data source. - -The optional keyword `ASC` after a column selection indicates to use -the sorting key as-is; because ascending order is the default, specifying `ASC` has no effect. - -The optional keyword `DESC` inverts the sort direction of the column -selection that immediately precedes it. - -## Order Preservation - -In general, the order of the intermediate results of a query is not guaranteed, -even if `ORDER BY` is specified. In other words, the `ORDER BY` clause is only -effective at the top-level statement. For example, it is *ignored* by the query -planner when present in a sub-query in a `FROM` clause as follows: - -~~~ sql -> SELECT * FROM a, b ORDER BY a.x; -- valid, effective -> SELECT * FROM (SELECT * FROM a ORDER BY a.x), b; -- ignored, ineffective -~~~ - -However, when combining queries with -[sub-queries](table-expressions.html#subqueries-as-table-expressions), -some combinations will make the `ORDER BY` clause in a sub-query -significant: - -1. The ordering of the operand of a `WITH ORDINALITY` clause - (within the `FROM` operand of a `SELECT` clause) is preserved, - to control the numbering of the rows. -2. The ordering of the operand of a stand-alone `LIMIT` or `OFFSET` clause (within - a `FROM` operand of a `SELECT` clause) is preserved, to determine - which rows are kept in the result. -3. The ordering of the data source for an [`INSERT`](insert.html) - statement or an [`UPSERT`](upsert.html) statement that also uses - `LIMIT` is preserved, to determine which rows are processed. -4. The ordering of a sub-query used in a scalar expression - is preserved. - -For example, using `WITH ORDINALITY`: - -~~~ sql -> SELECT * FROM (SELECT * FROM a ORDER BY a.x) WITH ORDINALITY; - -- ensures that the rows are numbered in the order of column a.x. -~~~ - -For example, using a stand-alone `LIMIT` clause in `FROM`: - -~~~ sql -> SELECT * FROM a, ((SELECT * FROM b ORDER BY b.x) LIMIT 1); - -- ensures that only the first row of b in the order of column b.x - -- is used in the cross join. -~~~ - -For example, using `LIMIT` in `INSERT`: - -~~~ sql -> INSERT INTO a (SELECT * FROM b ORDER BY b.x) LIMIT 1; - -- ensures that only the first row of b in the order of column b.x - -- is inserted into a. 
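-  -- Without the inner ORDER BY, the LIMIT would keep an arbitrary
-  -- row of b, so the row inserted into a would be non-deterministic.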
-~~~ - -For example, using a sub-query in scalar context: - -~~~ sql -> SELECT ARRAY(SELECT a.x FROM a ORDER BY a.x); - -- ensures that the array is constructed using the values of a.x in sorted order. -> SELECT ARRAY[1, 2, 3] = ARRAY(SELECT a.x FROM a ORDER BY a.x); - -- ensures that the values on the right-hand side are compared in the order of column a.x. -~~~ - -## Ordering of Rows Without `ORDER BY` - -Without `ORDER BY`, rows are processed or returned in a -non-deterministic order. "Non-deterministic" means that the actual order -can depend on the logical plan, the order of data on disk, the topology -of the CockroachDB cluster, and is generally variable over time. - -## Sorting Using Simple Column Selections - -Consider the following table: - -~~~ sql -> CREATE TABLE a(a INT); -> INSERT INTO a VALUES (1), (3), (2); -~~~ - -The following statements are equivalent: - -~~~ sql -> SELECT a AS b FROM a ORDER BY b; -- first form: refers to an AS alias. -> SELECT a FROM a ORDER BY 1; -- second form: refers to a column position. -> SELECT a FROM a ORDER BY a; -- third form: refers to a column in the data source. -~~~ - -~~~ -+---------+ -| a | -+---------+ -| 1 | -| 2 | -| 3 | -+---------+ -(3 rows) -~~~ - -Note that the order of the rules matters. If there is ambiguity, the `AS` aliases -take priority over the data source columns, for example: - -~~~ sql -> CREATE TABLE ab(a INT, b INT); -> SELECT a AS b, b AS c FROM ab ORDER BY b; -- orders by column a, renamed to b -> SELECT a, b FROM ab ORDER BY b; -- orders by column b -~~~ - -It is also possible to sort using an arbitrary scalar expression computed for each row, for example: - -~~~ sql -> SELECT a, b FROM ab ORDER BY a + b; -- orders by the result of computing a+b. -~~~ - -## Sorting Using Multiple Columns - -When more than one ordering specification is given, the later specifications are used -to order rows that are equal over the earlier specifications, for example: - -~~~ sql -> CREATE TABLE ab(a INT, b INT); -> SELECT a, b FROM ab ORDER BY b, a; -~~~ - -This sorts the results by column `b` and then, if multiple rows have the same value -in column `b`, orders those rows by column `a`. - -## Inverting the Sort Order - -The keyword `DESC` ("descending") can be added after an ordering specification to -invert its order. This can be specified separately for each specification, for example: - -~~~ sql -> CREATE TABLE ab(a INT, b INT); -> SELECT a, b FROM ab ORDER BY b DESC, a; -- sorts on b descending, then a ascending. -~~~ - -## Sorting In Primary Key Order - -The `ORDER BY PRIMARY KEY` notation guarantees that the results are -presented in primary key order. - -The particular advantage is that for queries using the primary index, -this guarantees the order while also guaranteeing there will not be an -additional sorting computation to achieve it, for example: - -~~~ sql -> CREATE TABLE kv(k INT PRIMARY KEY, v INT); -> SELECT k, v FROM kv ORDER BY PRIMARY KEY kv; -- guarantees ordering by column k. -~~~ - -If a primary key uses the keyword `DESC` already, then its meaning -will be flipped (cancelled) if the `ORDER BY` clause also uses -`DESC`, for example: - -~~~ sql -> CREATE TABLE ab(a INT, b INT, PRIMARY KEY (b DESC, a ASC)); -> SELECT * FROM ab ORDER BY b DESC; -- orders by b descending, then a ascending. - -- The primary index may be used to optimize. - -> SELECT * FROM ab ORDER BY PRIMARY KEY ab DESC; -- orders by b ascending, then a descending. - -- The index order is inverted. 
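-  -- (The DESC in the primary key definition and the DESC in the
-  -- ORDER BY clause cancel each other out for column b.)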
-~~~ - -## Sorting In Index Order - -The `ORDER BY INDEX` notation guarantees that the results are presented -in the order of a given index. - -The particular advantage is that for queries using that index, this -guarantees the order while also guaranteeing there will not be an -additional sorting computation to achieve it, for example: - -~~~ sql -> CREATE TABLE kv(k INT PRIMARY KEY, v INT, INDEX v_idx(v)); -> SELECT k, v FROM kv ORDER BY INDEX kv@v_idx; -- guarantees ordering by column v. -~~~ - -If an index uses the keyword `DESC` already, then its meaning -will be flipped (cancelled) if the `ORDER BY` clause also uses -`DESC`, for example: - -~~~ sql -> CREATE TABLE ab(a INT, b INT, INDEX b_idx (b DESC, a ASC)); -> SELECT * FROM ab ORDER BY b DESC; -- orders by b descending, then a ascending. - -- The index b_idx may be used to optimize. - -> SELECT * FROM ab ORDER BY INDEX ab@b_idx DESC; -- orders by b ascending, then a descending. - -- The index order is inverted. -~~~ - -## Processing Order During Aggregations - -CockroachDB currently processes aggregations (e.g., `SELECT ... GROUP BY`) -in non-deterministic order. - -For most aggregation functions, like `MIN`, `MAX`, -`COUNT`, the order does not matter because the functions are commutative -and produce the same result regardless. However, for the few aggregation -functions that are not commutative (e.g., `array_agg()`, `json_agg()`, -and `concat_agg()`), this implies the result of the aggregation will not be -deterministic. - -This is a [known limitation](https://github.com/cockroachdb/cockroach/issues/23620) -that may be lifted in the future. - -## See Also - -- [Selection Queries](selection-queries.html) -- [Scalar Expressions](scalar-expressions.html) -- [`INSERT`](insert.html) -- [`UPSERT`](upsert.html) -- [`DELETE`](delete.html) -- [`UPDATE`](update.html) diff --git a/src/current/v2.0/recommended-production-settings.md b/src/current/v2.0/recommended-production-settings.md deleted file mode 100644 index 0606cd9d824..00000000000 --- a/src/current/v2.0/recommended-production-settings.md +++ /dev/null @@ -1,440 +0,0 @@ ---- -title: Production Checklist -summary: Recommended settings for production deployments of CockroachDB. -toc: true ---- - -This page provides important recommendations for production deployments of CockroachDB. - -## Cluster Topology - -### Terminology - -To properly plan your cluster's topology, it's important to review some basic CockroachDB-specific terminology: - -Term | Definition ------|------------ -**Cluster** | Your CockroachDB deployment, which acts as a single logical application that contains one or more databases. -**Node** | An individual machine running CockroachDB. Many nodes join together to create your cluster. -**Range** | CockroachDB stores all user data and almost all system data in a giant sorted map of key-value pairs. This keyspace is divided into "ranges", contiguous chunks of the keyspace, so that every key can always be found in a single range. -**Replica** | CockroachDB replicates each range (3 times by default) and stores each replica on a different node. -**Range Lease** | For each range, one of the replicas holds the "range lease". This replica, referred to as the "leaseholder", is the one that receives and coordinates all read and write requests for the range. - -### Basic Topology Recommendations - -- Run each node on a separate machine. Since CockroachDB replicates across nodes, running more than one node per machine increases the risk of data loss if a machine fails. 
Likewise, if a machine has multiple disks or SSDs, run one node with multiple `--store` flags and not one node per disk. For more details about stores, see [Start a Node](start-a-node.html). - -- When deploying in a single datacenter: - - To be able to tolerate the failure of any 1 node, use at least 3 nodes with the [default 3-way replication factor](configure-replication-zones.html#view-the-default-replication-zone). In this case, if 1 node fails, each range retains 2 of its 3 replicas, a majority. - - To be able to tolerate 2 simultaneous node failures, use at least 5 nodes, [increase the default replication factor](configure-replication-zones.html#edit-the-default-replication-zone) to 5, and [increase the replication factor for important internal data](configure-replication-zones.html#create-a-replication-zone-for-a-system-range) to 5 as well. In this case, if 2 nodes fail at the same time, each range retains 3 of its 5 replicas, a majority. - -- When deploying across multiple datacenters in one or more regions: - - To be able to tolerate the failure of 1 entire datacenter, use at least 3 datacenters and set `--locality` on each node to spread data evenly across datacenters (see next bullet for more details). In this case, if 1 datacenter goes offline, the 2 remaining datacenters retain a majority of replicas. - - When starting each node, use the [`--locality`](start-a-node.html#locality) flag to describe the node's location, for example, `--locality=region=west,datacenter=us-west-1`. The key-value pairs should be ordered from most to least inclusive, and the keys and order of key-value pairs must be the same on all nodes. - - CockroachDB spreads the replicas of each piece of data across as diverse a set of localities as possible, with the order determining the priority. However, locality can also be used to influence the location of data replicas in various ways using [replication zones](configure-replication-zones.html#replication-constraints). - - When there is high latency between nodes, CockroachDB uses locality to move range leases closer to the current workload, reducing network round trips and improving read performance, also known as ["follow-the-workload"](demo-follow-the-workload.html). In a deployment across more than 3 datacenters, however, to ensure that all data benefits from "follow-the-workload", you must [increase the replication factor](configure-replication-zones.html#edit-the-default-replication-zone) to match the total number of datacenters. - - Locality is also a prerequisite for using the [table partitioning](partitioning.html) and [**Node Map**](enable-node-map.html) enterprise features. - -- When running a cluster of 5 nodes or more, it's safest to [increase the replication factor for important internal data](configure-replication-zones.html#create-a-replication-zone-for-a-system-range) to 5, even if you do not do so for user data. For the cluster as a whole to remain available, the ranges for this internal data must always retain a majority of their replicas. - -## Hardware - -### Basic Hardware Recommendations - -- Nodes should have sufficient CPU, RAM, network, and storage capacity to handle your workload. It's important to test and tune your hardware setup before deploying to production. - -- At a bare minimum, each node should have **2 GB of RAM and one entire core**. More data, complex workloads, higher concurrency, and faster performance require additional resources. 
      - {{site.data.alerts.callout_danger}}Avoid "burstable" or "shared-core" virtual machines that limit the load on a single core.{{site.data.alerts.end}} - -- For best performance: - - Use SSDs over HDDs. - - Use larger/more powerful nodes. Adding more CPU is usually more beneficial than adding more RAM. - -- For best resilience: - - Use many smaller nodes instead of fewer larger ones. Recovery from a failed node is faster when data is spread across more nodes. - - Use [zone configs](configure-replication-zones.html) to increase the replication factor from 3 (the default) to 5. This is especially recommended if you are using local disks rather than a cloud provider's network-attached disks that are often replicated underneath the covers, because local disks have a greater risk of failure. You can do this for the [entire cluster](configure-replication-zones.html#edit-the-default-replication-zone) or for specific [databases](configure-replication-zones.html#create-a-replication-zone-for-a-database), [tables](configure-replication-zones.html#create-a-replication-zone-for-a-table), or [rows](configure-replication-zones.html#create-a-replication-zone-for-a-table-or-secondary-index-partition-new-in-v2-0) (enterprise-only). - {{site.data.alerts.callout_danger}} - {% include {{page.version.version}}/known-limitations/system-range-replication.md %} - {{site.data.alerts.end}} - -### Cloud-Specific Recommendations - -Cockroach Labs recommends the following cloud-specific configurations based on our own internal testing. Before using configurations not recommended here, be sure to test them exhaustively. - -#### AWS - -- Use `m` (general purpose), `c` (compute-optimized), or `i` (storage-optimized) [instances](https://aws.amazon.com/ec2/instance-types/). For example, Cockroach Labs has used `m3.large` instances (2 vCPUs and 7.5 GiB of RAM per instance) for internal testing. -- **Do not** use ["burstable" `t2` instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/t2-instances.html), which limit the load on a single core. -- Use [Provisioned IOPS SSD-backed (io1) EBS volumes](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html#EBSVolumeTypes_piops) or [SSD Instance Store volumes](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ssd-instance-store.html). - -#### Azure - -- Use storage-optimized [Ls-series](https://docs.microsoft.com/en-us/azure/virtual-machines/linux/sizes-storage) VMs. For example, Cockroach Labs has used `Standard_L4s` VMs (4 vCPUs and 32 GiB of RAM per VM) for internal testing. -- Use [Premium Storage](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/premium-storage) or local SSD storage with a Linux filesystem such as `ext4` (not the Windows `ntfs` filesystem). Note that [the size of a Premium Storage disk affects its IOPS](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/premium-storage#premium-storage-disk-limits). -- If you choose local SSD storage, on reboot, the VM can come back with the `ntfs` filesystem. Be sure your automation monitors for this and reformats the disk to the Linux filesystem you chose initially (see the sketch after this list). -- **Do not** use ["burstable" B-series](https://docs.microsoft.com/en-us/azure/virtual-machines/linux/b-series-burstable) VMs, which limit the load on a single core. Also, Cockroach Labs has experienced data corruption issues on A-series VMs and irregular disk performance on D-series VMs, so we recommend avoiding those as well. 
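-
-For the local SSD caveat above, a boot-time check along these lines can catch the `ntfs` case before the node starts (a sketch only; `/dev/sdb1` is a hypothetical device name, and the reformat command is commented out because it destroys data):
-
-~~~ shell
-# Inspect the filesystem type reported for the local SSD.
-$ lsblk -f /dev/sdb1
-# If FSTYPE shows ntfs, reformat to the Linux filesystem you chose initially:
-# mkfs.ext4 /dev/sdb1
-~~~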
- -#### Digital Ocean - -- Use any [droplets](https://www.digitalocean.com/pricing/) except standard droplets with only 1 GB of RAM, which is below our minimum requirement. All Digital Ocean droplets use SSD storage. - -#### GCE - -- Use `n1-standard` or `n1-highcpu` [predefined VMs](https://cloud.google.com/compute/pricing#predefined_machine_types), or [custom VMs](https://cloud.google.com/compute/pricing#custommachinetypepricing). For example, Cockroach Labs has used custom VMs (8 vCPUs and 16 GiB of RAM per VM) for internal testing. -- **Do not** use `f1` or `g1` [shared-core machines](https://cloud.google.com/compute/docs/machine-types#sharedcore), which limit the load on a single core. -- Use [Local SSDs](https://cloud.google.com/compute/docs/disks/#localssds) or [SSD persistent disks](https://cloud.google.com/compute/docs/disks/#pdspecs). Note that [the IOPS of SSD persistent disks depends both on the disk size and number of CPUs on the machine](https://cloud.google.com/compute/docs/disks/performance#optimizessdperformance). - -## Security - -An insecure cluster comes with serious risks: - -- Your cluster is open to any client that can access any node's IP addresses. -- Any user, even `root`, can log in without providing a password. -- Any user, connecting as `root`, can read or write any data in your cluster. -- There is no network encryption or authentication, and thus no confidentiality. - -Therefore, to deploy CockroachDB in production, it is strongly recommended to use TLS certificates to authenticate the identity of nodes and clients and to encrypt in-flight data between nodes and clients. You can use either the built-in [`cockroach cert` commands](create-security-certificates.html) or [`openssl` commands](create-security-certificates-openssl.html) to generate security certificates for your deployment. Regardless of which option you choose, you'll need the following files: - -- A certificate authority (CA) certificate and key, used to sign all of the other certificates. -- A separate certificate and key for each node in your deployment, with the common name `node`. -- A separate certificate and key for each client and user you want to connect to your nodes, with the common name set to the username. The default user is `root`. - - Alternatively, CockroachDB supports [password authentication](create-and-manage-users.html#user-authentication), although we typically recommend using client certificates instead. - -## Networking - -### Networking flags - -When [starting a node](start-a-node.html), two main flags are used to control its network connections: - -- `--host` determines which address(es) to listen on for connections from other nodes and clients. -- `--advertise-host` determines which address to tell other nodes to use. - -The effect depends on how these two flags are used in combination: - -| | `--host` not specified | `--host` specified | -|-|----------------------------|------------------------| -| **`--advertise-host` not specified** | Node listens on all of its IP addresses and advertises its canonical hostname to other nodes. | Node listens on the IP address or hostname specified in `--host` and advertises this value to other nodes. -| **`--advertise-host` specified** | Node listens on all of its IP addresses and advertises the value specified in `--advertise-host` to other nodes. **Recommended for most cases.** | Node listens on the IP address or hostname specified in `--host` and advertises the value specified in `--advertise-host` to other nodes. 
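-
-For example, here is a minimal sketch of the two most common combinations (the IP addresses are hypothetical placeholders; substitute your own):
-
-~~~ shell
-# Private network: listen on and advertise the node's private IP address.
-$ cockroach start --insecure --host=10.0.0.1
-
-# Public network: listen on all interfaces, but advertise one stable public
-# IP address that routes to this node.
-$ cockroach start --insecure --advertise-host=203.0.113.10
-~~~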
-
-{{site.data.alerts.callout_success}}
-When using hostnames, make sure they resolve properly (e.g., via DNS or `/etc/hosts`). In particular, be careful about the value advertised to other nodes, either via `--advertise-host` or via `--host` when `--advertise-host` is not specified.
-{{site.data.alerts.end}}
-
-### Cluster on a single network
-
-When running a cluster on a single network, the setup depends on whether the network is private. In a private network, machines have addresses restricted to the network, not accessible to the public internet. Using these addresses is more secure and usually provides lower latency than public addresses.
-
-Private? | Recommended setup
----------|------------------
-Yes | Start each node with `--host` set to its private IP address and do not specify `--advertise-host`. This will tell other nodes to use the private IP address advertised. Load balancers/clients in the private network must use it as well.
-No | Start each node with `--advertise-host` set to a stable public IP address that routes to the node and do not specify `--host`. This will tell other nodes to use the specific IP address advertised, but load balancers/clients will be able to use any address that routes to the node.

-    If load balancers/clients are outside the network, also configure firewalls to allow external traffic to reach the cluster.
-
-### Cluster spanning multiple networks
-
-When running a cluster across multiple networks, the setup depends on whether nodes can reach each other across the networks.
-
-Nodes reachable across networks? | Recommended setup
----------------------------------|------------------
-Yes | This is typical when all networks are on the same cloud. In this case, use the relevant [single network setup](#cluster-on-a-single-network) above.
-No | This is typical when networks are on different clouds. In this case, set up a [VPN](https://en.wikipedia.org/wiki/Virtual_private_network), [VPC](https://en.wikipedia.org/wiki/Virtual_private_cloud), [NAT](https://en.wikipedia.org/wiki/Network_address_translation), or another such solution to provide unified routing across the networks. Then start each node with `--advertise-host` set to the address that is reachable from other networks and do not specify `--host`. This will tell other nodes to use the specific IP address advertised, but load balancers/clients will be able to use any address that routes to the node.
-
-## Load Balancing
-
-Each CockroachDB node is an equally suitable SQL gateway to a cluster, but to ensure client performance and reliability, it's important to use load balancing:
-
-- **Performance:** Load balancers spread client traffic across nodes. This prevents any one node from being overwhelmed by requests and improves overall cluster performance (queries per second).
-
-- **Reliability:** Load balancers decouple client health from the health of a single CockroachDB node. To ensure that traffic is not directed to failed nodes or nodes that are not ready to receive requests, load balancers should use [CockroachDB's readiness health check](monitoring-and-alerting.html#health-ready-1).
-    {{site.data.alerts.callout_success}}With a single load balancer, client connections are resilient to node failure, but the load balancer itself is a single point of failure. It's therefore best to make load balancing resilient as well by using multiple load balancing instances, with a mechanism like floating IPs or DNS to select load balancers for clients.{{site.data.alerts.end}}
-
-For guidance on load balancing, see the tutorial for your deployment environment:
-
-Environment | Featured Approach
-------------|---------------------
-[On-Premises](deploy-cockroachdb-on-premises.html#step-6-set-up-haproxy-load-balancers) | Use HAProxy.
-[AWS](deploy-cockroachdb-on-aws.html#step-4-set-up-load-balancing) | Use Amazon's managed load balancing service.
-[Azure](deploy-cockroachdb-on-microsoft-azure.html#step-4-set-up-load-balancing) | Use Azure's managed load balancing service.
-[Digital Ocean](deploy-cockroachdb-on-digital-ocean.html#step-3-set-up-load-balancing) | Use Digital Ocean's managed load balancing service.
-[GCE](deploy-cockroachdb-on-google-cloud-platform.html#step-4-set-up-tcp-proxy-load-balancing) | Use GCE's managed TCP proxy load balancing service.
-
-## Monitoring and Alerting
-
-{% include {{ page.version.version }}/prod-deployment/monitor-cluster.md %}
-
-## Clock Synchronization
-
-{% include {{ page.version.version }}/faq/clock-synchronization-effects.md %}
-
-## Cache and SQL Memory Size
-
-Changed in v1.1: By default, each node's cache size and temporary SQL memory size are each `128MiB`.
These defaults were chosen to facilitate development and testing, where users are likely to run multiple CockroachDB nodes on a single computer. When running a production cluster with one node per host, however, it's recommended to increase these values: - -- Increasing a node's **cache size** will improve the node's read performance. -- Increasing a node's **SQL memory size** will increase the number of simultaneous client connections it allows (the `128MiB` default allows a maximum of 6200 simultaneous connections) as well as the node's capacity for in-memory processing of rows when using `ORDER BY`, `GROUP BY`, `DISTINCT`, joins, and window functions. - -To manually increase a node's cache size and SQL memory size, start the node using the [`--cache`](start-a-node.html#flags-changed-in-v2-0) and [`--max-sql-memory`](start-a-node.html#flags-changed-in-v2-0) flags: - -~~~ shell -$ cockroach start --cache=.25 --max-sql-memory=.25 -~~~ - -## File Descriptors Limit - -CockroachDB can use a large number of open file descriptors, often more than is available by default. Therefore, please note the following recommendations. - -For each CockroachDB node: - -- At a **minimum**, the file descriptors limit must be 1956 (1700 per store plus 256 for networking). If the limit is below this threshold, the node will not start. -- It is **recommended** to set the file descriptors limit to unlimited; otherwise, the recommended limit is at least 15000 (10000 per store plus 5000 for networking). This higher limit ensures performance and accommodates cluster growth. -- When the file descriptors limit is not high enough to allocate the recommended amounts, CockroachDB allocates 10000 per store and the rest for networking; if this would result in networking getting less than 256, CockroachDB instead allocates 256 for networking and evenly splits the rest across stores. - -### Increase the File Descriptors Limit - - - -
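-As a quick sanity check before and after making the changes below, you can view the limits that processes launched from your current shell will inherit. These commands work in typical shells on both macOS and Linux:
-
-~~~ shell
-# Soft limit on open file descriptors for the current shell.
-$ ulimit -n
-
-# Hard limit, which is the value CockroachDB actually uses.
-$ ulimit -Hn
-~~~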
      - - - -
      - -
-
-- [Yosemite and later](#yosemite-and-later)
-- [Older versions](#older-versions)
-
-#### Yosemite and later
-
-To adjust the file descriptors limit for a single process in Mac OS X Yosemite and later, you must create a property list configuration file with the hard limit set to the recommendation mentioned [above](#file-descriptors-limit). Note that CockroachDB always uses the hard limit, so it's not technically necessary to adjust the soft limit, although we do so in the steps below.
-
-For example, for a node with 3 stores, we would set the hard limit to at least 35000 (10000 per store and 5000 for networking) as follows:
-
-1. Check the current limits:
-
-    ~~~ shell
-    $ launchctl limit maxfiles
-    maxfiles    10240    10240
-    ~~~
-
-    The last two columns are the soft and hard limits, respectively. If `unlimited` is listed as the hard limit, note that the hidden default limit for a single process is actually 10240.
-
-2. Create `/Library/LaunchDaemons/limit.maxfiles.plist` and add the following contents, with the last two strings in the `ProgramArguments` array set to 35000:
-
-    ~~~ xml
-    <?xml version="1.0" encoding="UTF-8"?>
-    <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
-    <plist version="1.0">
-      <dict>
-        <key>Label</key>
-        <string>limit.maxfiles</string>
-        <key>ProgramArguments</key>
-        <array>
-          <string>launchctl</string>
-          <string>limit</string>
-          <string>maxfiles</string>
-          <string>35000</string>
-          <string>35000</string>
-        </array>
-        <key>RunAtLoad</key>
-        <true/>
-        <key>ServiceIPC</key>
-        <false/>
-      </dict>
-    </plist>
-    ~~~
-
-    Make sure the plist file is owned by `root:wheel` and has permissions `-rw-r--r--`. These permissions should be in place by default.
-
-3. Restart the system for the new limits to take effect.
-
-4. Check the current limits:
-
-    ~~~ shell
-    $ launchctl limit maxfiles
-    maxfiles    35000    35000
-    ~~~
-
-#### Older versions
-
-To adjust the file descriptors limit for a single process in OS X versions earlier than Yosemite, edit `/etc/launchd.conf` and increase the hard limit to the recommendation mentioned [above](#file-descriptors-limit). Note that CockroachDB always uses the hard limit, so it's not technically necessary to adjust the soft limit, although we do so in the steps below.
-
-For example, for a node with 3 stores, we would set the hard limit to at least 35000 (10000 per store and 5000 for networking) as follows:
-
-1. Check the current limits:
-
-    ~~~ shell
-    $ launchctl limit maxfiles
-    maxfiles    10240    10240
-    ~~~
-
-    The last two columns are the soft and hard limits, respectively. If `unlimited` is listed as the hard limit, note that the hidden default limit for a single process is actually 10240.
-
-2. Edit (or create) `/etc/launchd.conf` and add a line that looks like the following, with the last value set to the new hard limit:
-
-    ~~~
-    limit maxfiles 35000 35000
-    ~~~
-
-3. Save the file, and restart the system for the new limits to take effect.
-
-4. Verify the new limits:
-
-    ~~~ shell
-    $ launchctl limit maxfiles
-    maxfiles    35000    35000
-    ~~~
-
      - -
-
-- [Per-Process Limit](#per-process-limit)
-- [System-Wide Limit](#system-wide-limit)
-
-#### Per-Process Limit
-
-To adjust the file descriptors limit for a single process on Linux, enable PAM user limits and set the hard limit to the recommendation mentioned [above](#file-descriptors-limit). Note that CockroachDB always uses the hard limit, so it's not technically necessary to adjust the soft limit, although we do so in the steps below.
-
-For example, for a node with 3 stores, we would set the hard limit to at least 35000 (10000 per store and 5000 for networking) as follows:
-
-1. Make sure the following line is present in both `/etc/pam.d/common-session` and `/etc/pam.d/common-session-noninteractive`:
-
-    ~~~ shell
-    session    required   pam_limits.so
-    ~~~
-
-2. Edit `/etc/security/limits.conf` and append the following lines to the file:
-
-    ~~~ shell
-    *    soft    nofile    35000
-    *    hard    nofile    35000
-    ~~~
-
-    Note that `*` can be replaced with the username that will be running the CockroachDB server.
-
-3. Save and close the file.
-
-4. Restart the system for the new limits to take effect.
-
-5. Verify the new limits:
-
-    ~~~ shell
-    $ ulimit -a
-    ~~~
-
-Alternatively, if you're using [Systemd](https://en.wikipedia.org/wiki/Systemd):
-
-1. Edit the service definition to configure the maximum number of open files:
-
-    ~~~ ini
-    [Service]
-    ...
-    LimitNOFILE=35000
-    ~~~
-
-2. Reload Systemd for the new limit to take effect:
-
-    ~~~ shell
-    $ systemctl daemon-reload
-    ~~~
-
-#### System-Wide Limit
-
-You should also confirm that the file descriptors limit for the entire Linux system is at least 10 times higher than the per-process limit documented above (e.g., at least 150000).
-
-1. Check the system-wide limit:
-
-    ~~~ shell
-    $ cat /proc/sys/fs/file-max
-    ~~~
-
-2. If necessary, increase the system-wide limit in the `proc` file system:
-
-    ~~~ shell
-    $ echo 150000 > /proc/sys/fs/file-max
-    ~~~
-
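-
-The `echo` command above takes effect immediately but does not survive a reboot. One common way to persist the system-wide limit (a sketch; the exact configuration file can vary by distribution) is via `sysctl`:
-
-~~~ shell
-# Persist the system-wide file descriptors limit across reboots.
-$ echo 'fs.file-max = 150000' | sudo tee -a /etc/sysctl.conf
-
-# Apply the setting immediately, without a reboot.
-$ sudo sysctl -p
-~~~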
      -
      - -CockroachDB does not yet provide a Windows binary. Once that's available, we will also provide documentation on adjusting the file descriptors limit on Windows. - -
-
-#### Attributions
-
-This section, "File Descriptors Limit", is in part derivative of the chapter *Open File Limits* from the Riak LV 2.1.4 documentation, used under a Creative Commons Attribution 3.0 Unported License.
-
-## Orchestration / Kubernetes
-
-When running CockroachDB on Kubernetes, making the following minimal customizations will result in better, more reliable performance:
-
-* Use [SSDs instead of traditional HDDs](kubernetes-performance.html#disk-type).
-* Configure CPU and memory [resource requests and limits](kubernetes-performance.html#resource-requests-and-limits).
-
-For more information and additional customization suggestions, see our full detailed guide to [CockroachDB Performance on Kubernetes](kubernetes-performance.html).
diff --git a/src/current/v2.0/release-savepoint.md b/src/current/v2.0/release-savepoint.md
deleted file mode 100644
index 5f990f95bd8..00000000000
--- a/src/current/v2.0/release-savepoint.md
+++ /dev/null
@@ -1,54 +0,0 @@
----
-title: RELEASE SAVEPOINT cockroach_restart
-summary: Commit a transaction's changes once there are no retryable errors with the RELEASE SAVEPOINT cockroach_restart statement in CockroachDB.
-toc: true
----
-
-When using [client-side transaction retries](transactions.html#client-side-transaction-retries), the `RELEASE SAVEPOINT cockroach_restart` statement commits the transaction.
-
-If statements in the transaction [generated any non-retryable errors](transactions.html#error-handling), `RELEASE SAVEPOINT cockroach_restart` is equivalent to [`ROLLBACK`](rollback-transaction.html), which aborts the transaction and discards *all* updates made by its statements.
-
-Although `RELEASE SAVEPOINT cockroach_restart` commits the transaction, you must still issue a [`COMMIT`](commit-transaction.html) statement to prepare the connection for the next transaction.
-
-{{site.data.alerts.callout_danger}}CockroachDB’s SAVEPOINT implementation only supports the cockroach_restart savepoint and does not support all savepoint functionality, such as nested transactions.{{site.data.alerts.end}}
-
-
-## Synopsis
-
      -{% include {{ page.version.version }}/sql/diagrams/release_savepoint.html %} -
-
-## Required Privileges
-
-No [privileges](privileges.html) are required to release a savepoint. However, privileges are required for each statement within a transaction.
-
-## Examples
-
-### Commit a Transaction
-
-After declaring `SAVEPOINT cockroach_restart`, commit the transaction with `RELEASE SAVEPOINT cockroach_restart` and then prepare the connection for the next transaction with `COMMIT`.
-
-~~~ sql
-> BEGIN;
-
-> SAVEPOINT cockroach_restart;
-
-> UPDATE products SET inventory = 0 WHERE sku = '8675309';
-
-> INSERT INTO orders (customer, sku, status) VALUES (1001, '8675309', 'new');
-
-> RELEASE SAVEPOINT cockroach_restart;
-
-> COMMIT;
-~~~
-
-{{site.data.alerts.callout_danger}}This example assumes you're using client-side intervention to handle transaction retries.{{site.data.alerts.end}}
-
-## See Also
-
-- [Transactions](transactions.html)
-- [`SAVEPOINT`](savepoint.html)
-- [`ROLLBACK`](rollback-transaction.html)
-- [`BEGIN`](begin-transaction.html)
-- [`COMMIT`](commit-transaction.html)
diff --git a/src/current/v2.0/remove-nodes.md b/src/current/v2.0/remove-nodes.md
deleted file mode 100644
index 9cd6e339e62..00000000000
--- a/src/current/v2.0/remove-nodes.md
+++ /dev/null
@@ -1,410 +0,0 @@
----
-title: Decommission Nodes
-summary: Permanently remove one or more nodes from a cluster.
-toc: true
----
-
-This page shows you how to decommission and permanently remove one or more nodes from a CockroachDB cluster. You might do this, for example, when downsizing a cluster or reacting to hardware failures.
-
-For information about temporarily stopping a node (e.g., for planned maintenance), see [Stop a Node](stop-a-node.html).
-
-
-## Overview
-
-### How It Works
-
-When you decommission a node, CockroachDB lets the node finish in-flight requests, rejects any new requests, and transfers all **range replicas** and **range leases** off the node so that it can be safely shut down.
-
-Basic terms:
-
-- **Range:** CockroachDB stores all user data and almost all system data in a giant sorted map of key-value pairs. This keyspace is divided into "ranges", contiguous chunks of the keyspace, so that every key can always be found in a single range.
-- **Range Replica:** CockroachDB replicates each range (3 times by default) and stores each replica on a different node.
-- **Range Lease:** For each range, one of the replicas holds the "range lease". This replica, referred to as the "leaseholder", is the one that receives and coordinates all read and write requests for the range.
-
-### Considerations
-
-Before decommissioning a node, make sure other nodes are available to take over the range replicas from the node. If no other nodes are available, the decommission process will hang indefinitely. See the [Examples](#examples) below for more details.
-
-### Examples
-
-#### 3-node cluster with 3-way replication
-
-In this scenario, each range is replicated 3 times, with each replica on a different node:
-
      Decommission Scenario 1
      - -If you try to decommission a node, the process will hang indefinitely because the cluster cannot move the decommissioned node's replicas to the other 2 nodes, which already have a replica of each range: - -
      Decommission Scenario 1
      - -To successfully decommission a node, you need to first add a 4th node: - -
      Decommission Scenario 1
      - -#### 5-node cluster with 3-way replication - -In this scenario, like in the scenario above, each range is replicated 3 times, with each replica on a different node: - -
      Decommission Scenario 1
      - -If you decommission a node, the process will run successfully because the cluster will be able to move the node's replicas to other nodes without doubling up any range replicas: - -
      Decommission Scenario 1
      - -#### 5-node cluster with 5-way replication for a specific table - -In this scenario, a [custom replication zone](configure-replication-zones.html#create-a-replication-zone-for-a-table) has been set to replicate a specific table 5 times (range 6), while all other data is replicated 3 times: - -
      Decommission Scenario 1
      - -If you try to decommission a node, the cluster will successfully rebalance all ranges but range 6. Since range 6 requires 5 replicas (based on the table-specific replication zone), and since CockroachDB will not allow more than a single replica of any range on a single node, the decommission process will hang indefinitely: - -
      Decommission Scenario 1
      - -To successfully decommission a node, you need to first add a 6th node: - -
      Decommission Scenario 1
      - -## Remove a Single Node (Live) - -
      - - -
      - -### Before You Begin - -Confirm that there are enough nodes to take over the replicas from the node you want to remove. See some [Example scenarios](#examples) above. - -### Step 1. Check the node before decommissioning - -Open the Admin UI, click **Metrics** on the left, select the **Replication** dashboard, and hover over the **Replicas per Store** and **Leaseholders per Store** graphs: - -
      Decommission a single live node
      - -
      Decommission a single live node
      - -### Step 2. Decommission and remove the node - -SSH to the machine where the node is running and execute the [`cockroach quit`](stop-a-node.html) command with the `--decommission` flag and other required flags: - -
      -~~~ shell -$ cockroach quit --decommission --certs-dir=certs --host=
      -~~~ -
      - -
      -~~~ shell -$ cockroach quit --decommission --insecure --host=
      -~~~ -
      - -You'll then see the decommissioning status print to `stderr` as it changes: - -~~~ -+----+---------+-------------------+--------------------+-------------+ -| id | is_live | gossiped_replicas | is_decommissioning | is_draining | -+----+---------+-------------------+--------------------+-------------+ -| 4 | true | 73 | false | false | -+----+---------+-------------------+--------------------+-------------+ -(1 row) -+----+---------+-------------------+--------------------+-------------+ -| id | is_live | gossiped_replicas | is_decommissioning | is_draining | -+----+---------+-------------------+--------------------+-------------+ -| 4 | true | 73 | true | true | -+----+---------+-------------------+--------------------+-------------+ -(1 row) -~~~ - -Once the node has been fully decommissioned and stopped, you'll see a confirmation: - -~~~ -+----+---------+-------------------+--------------------+-------------+ -| id | is_live | gossiped_replicas | is_decommissioning | is_draining | -+----+---------+-------------------+--------------------+-------------+ -| 4 | true | 13 | true | true | -+----+---------+-------------------+--------------------+-------------+ -(1 row) -+----+---------+-------------------+--------------------+-------------+ -| id | is_live | gossiped_replicas | is_decommissioning | is_draining | -+----+---------+-------------------+--------------------+-------------+ -| 4 | true | 0 | true | true | -+----+---------+-------------------+--------------------+-------------+ -(1 row) -All target nodes report that they hold no more data. Please verify cluster health before removing the nodes. -ok -~~~ - -### Step 3. Check the node and cluster after decommissioning - -In the Admin UI **Replication** dashboard, again hover over the **Replicas per Store** and **Leaseholders per Store** graphs. For the node that you decommissioned, the counts should be 0: - -
      Decommission a single live node
      - -
      Decommission a single live node
      - -Then view **Node List** on the **Overview** page and make sure all nodes but the one you removed are healthy (green): - -
      Decommission a single live node
      - -In about 5 minutes, you'll see the removed node listed under **Decommissioned Nodes**: - -
      Decommission a single live node
      - -New in v2.0: At this point, the node will no longer appear in timeseries graphs unless you are viewing a time range during which the node was live. However, it will never disappear from the **Decommissioned Nodes** list. - -Also, if the node is restarted, it will not accept any client connections, and the cluster will not rebalance any data to it; to make the cluster utilize the node again, you'd have to [recommission](#recommission-nodes) it. - -## Remove a Single Node (Dead) - -
      - - -
      - -Once a node has been dead for 5 minutes, CockroachDB automatically transfers the range replicas and range leases on the node to available live nodes. However, if it is restarted, the cluster will rebalance replicas and leases to it. - -To prevent the cluster from rebalancing data to a dead node if it comes back online, do the following: - -### Step 1. Identify the ID of the dead node - -Open the Admin UI and select the **Node List** view. Note the ID of the node listed under **Dead Nodes**: - -
      Decommission a single dead node
      - -### Step 2. Mark the dead node as decommissioned - -SSH to any live node in the cluster and run the [`cockroach node decommission`](view-node-details.html) command with the ID of the node to officially decommission: - -
      -~~~ shell -$ cockroach node decommission 4 --certs-dir=certs --host=
      -~~~ -
      - -
      -~~~ shell -$ cockroach node decommission 4 --insecure --host=
      -~~~ -
      - -~~~ -+----+---------+-------------------+--------------------+-------------+ -| id | is_live | gossiped_replicas | is_decommissioning | is_draining | -+----+---------+-------------------+--------------------+-------------+ -| 4 | false | 12 | true | true | -+----+---------+-------------------+--------------------+-------------+ -(1 row) -Decommissioning finished. Please verify cluster health before removing the nodes. -~~~ - -New in v2.0: If you go back to the **Nodes List** page, in about 5 minutes, you'll see the node move from the **Dead Nodes** to **Decommissioned Nodes** list. At this point, the node will no longer appear in timeseries graphs unless you are viewing a time range during which the node was live. However, it will never disappear from the **Decommissioned Nodes** list. - -
      Decommission a single live node
      - -Also, if the node is ever restarted, it will not accept any client connections, and the cluster will not rebalance any data to it; to make the cluster utilize the node again, you'd have to [recommission](#recommission-nodes) it. - -## Remove Multiple Nodes - -
      - - -
      - -### Before You Begin - -Confirm that there are enough nodes to take over the replicas from the nodes you want to remove. See some [Example scenarios](#examples) above. - -### Step 1. Identify the IDs of the nodes to decommission - -Open the Admin UI and select the **Node List** view, or go to **Metrics** on the left and click **View nodes list** in the **Summary** area. Note the IDs of the nodes that you want to decommission: - -
      Decommission multiple nodes
      - -### Step 2. Check the nodes before decommissioning - -Select the **Replication** dashboard, and hover over the **Replicas per Store** and **Leaseholders per Store** graphs: - -
      Decommission multiple nodes
      - -
      Decommission multiple nodes
      - -### Step 3. Decommission the nodes - -SSH to any live node in the cluster and run the [`cockroach node decommission`](view-node-details.html) command with the IDs of the nodes to officially decommission: - -
      -~~~ shell -$ cockroach node decommission 4 5 --certs-dir=certs --host=
      -~~~ -
      - -
      -~~~ shell -$ cockroach node decommission 4 5 --insecure --host=
      -~~~ -
      - -You'll then see the decommissioning status print to `stderr` as it changes: - -~~~ -+----+---------+-------------------+--------------------+-------------+ -| id | is_live | gossiped_replicas | is_decommissioning | is_draining | -+----+---------+-------------------+--------------------+-------------+ -| 4 | true | 8 | true | false | -| 5 | true | 9 | true | false | -+----+---------+-------------------+--------------------+-------------+ -(2 rows) -+----+---------+-------------------+--------------------+-------------+ -| id | is_live | gossiped_replicas | is_decommissioning | is_draining | -+----+---------+-------------------+--------------------+-------------+ -| 4 | true | 8 | true | true | -| 5 | true | 9 | true | true | -+----+---------+-------------------+--------------------+-------------+ -(2 rows) -~~~ - -Once the nodes have been fully decommissioned, you'll see a confirmation: - -~~~ -+----+---------+-------------------+--------------------+-------------+ -| id | is_live | gossiped_replicas | is_decommissioning | is_draining | -+----+---------+-------------------+--------------------+-------------+ -| 4 | true | 0 | true | true | -| 5 | true | 0 | true | true | -+----+---------+-------------------+--------------------+-------------+ -(2 rows) -Decommissioning finished. Please verify cluster health before removing the nodes. -~~~ - -### Step 4. Check the nodes and cluster after decommissioning - -In the Admin UI **Replication** dashboard, again hover over the **Replicas per Store** and **Leaseholders per Store** graphs. For the nodes that you decommissioned, the counts should be 0: - -
      Decommission multiple nodes
      - -
      Decommission multiple nodes
      - -Then click **View nodes list** in the **Summary** area and make sure all nodes are healthy (green) and the decommissioned nodes have 0 replicas: - -
      Decommission multiple nodes
      - -New in v2.0: In about 5 minutes, you'll see the node move to the **Decommissioned Nodes** list, and the node will no longer appear in timeseries graphs unless you are viewing a time range during which the node was live. However, it will never disappear from the **Decommissioned Nodes** list. - -
      Decommission multiple nodes
      - -### Step 5. Remove the decommissioned nodes - -At this point, although the decommissioned nodes are live, the cluster will not rebalance any data to them, and the nodes will not accept any client connections. However, to officially remove the nodes from the cluster, you still need to stop them. - -For each decommissioned node, SSH to the machine running the node and execute the `cockroach quit` command: - -
      -~~~ shell -$ cockroach quit --certs-dir=certs --host=
      -~~~ -
      - -
      -~~~ shell -$ cockroach quit --insecure --host=
      -~~~ -
      - -## Recommission Nodes - -
      - - -
      - -If you accidentally decommissioned any nodes, or otherwise want decommissioned nodes to rejoin a cluster as active members, do the following: - -### Step 1. Identify the IDs of the decommissioned nodes - -Open the Admin UI and select the **Node List** view. Note the IDs of the nodes listed under **Decommissioned Nodes**: - -
      Decommission a single dead node
      - -### Step 2. Recommission the nodes - -SSH to one of the live nodes and execute the [`cockroach node recommission`](view-node-details.html) command with the IDs of the nodes to recommission: - -
      -~~~ shell -$ cockroach node recommission 4 --certs-dir=certs --host=
      -~~~ -
      - -
-~~~ shell
-$ cockroach node recommission 4 --insecure --host=
      -~~~ -
      - -~~~ -+----+---------+-------------------+--------------------+-------------+ -| id | is_live | gossiped_replicas | is_decommissioning | is_draining | -+----+---------+-------------------+--------------------+-------------+ -| 4 | false | 12 | false | true | -+----+---------+-------------------+--------------------+-------------+ -(1 row) -The affected nodes must be restarted for the change to take effect. -~~~ - -### Step 3. Restart the recommissioned nodes - -SSH to each machine with a recommissioned node and run the same `cockroach start` command that you used to initially start the node, for example: - -
      -~~~ shell -$ cockroach start --certs-dir=certs --host=
      --join=
      :26257 --background -~~~ -
      - -
      -~~~ shell -$ cockroach start --insecure --host=
      --join=
      :26257 --background -~~~ -
      - -On the **Nodes List** page, you should very soon see the recommissioned nodes listed under **Live Nodes** and, after a few minutes, you should see replicas rebalanced to it. - -## Check the Status of Decommissioning Nodes - -To check the progress of decommissioning nodes, you can run the `cockroach node status` command with the `--decommission` flag: - -
      - - -

      - -
      -~~~ shell -$ cockroach node status --decommission --certs-dir=certs --host=
      -~~~ -
      - -
      -~~~ shell -$ cockroach node status --decommission --insecure --host=
      -~~~ -
      - -~~~ -+----+-----------------------+---------+---------------------+---------------------+---------+-------------------+--------------------+-------------+ -| id | address | build | updated_at | started_at | is_live | gossiped_replicas | is_decommissioning | is_draining | -+----+-----------------------+---------+---------------------+---------------------+---------+-------------------+--------------------+-------------+ -| 1 | 165.227.60.76:26257 | 91a299d | 2017-09-07 18:16:03 | 2017-09-07 16:30:13 | true | 134 | false | false | -| 2 | 192.241.239.201:26257 | 91a299d | 2017-09-07 18:16:05 | 2017-09-07 16:30:45 | true | 134 | false | false | -| 3 | 67.207.91.36:26257 | 91a299d | 2017-09-07 18:16:06 | 2017-09-07 16:31:06 | true | 136 | false | false | -| 4 | 138.197.12.74:26257 | 91a299d | 2017-09-07 18:16:03 | 2017-09-07 16:44:23 | true | 1 | true | true | -| 5 | 174.138.50.192:26257 | 91a299d | 2017-09-07 18:16:07 | 2017-09-07 17:12:57 | true | 3 | true | true | -+----+-----------------------+---------+---------------------+---------------------+---------+-------------------+--------------------+-------------+ -(5 rows) -~~~ - -## See Also - -- [Temporarily Stop a Node](stop-a-node.html) diff --git a/src/current/v2.0/rename-column.md b/src/current/v2.0/rename-column.md deleted file mode 100644 index bd401970432..00000000000 --- a/src/current/v2.0/rename-column.md +++ /dev/null @@ -1,69 +0,0 @@ ---- -title: RENAME COLUMN -summary: The RENAME COLUMN statement changes the name of a column in a table. -toc: true ---- - -The `RENAME COLUMN` [statement](sql-statements.html) changes the name of a column in a table. - -{{site.data.alerts.callout_info}}It is not possible to rename a column referenced by a view. For more details, see View Dependencies.{{site.data.alerts.end}} - - -## Synopsis - -
      -{% include {{ page.version.version }}/sql/diagrams/rename_column.html %} -
      - -## Required Privileges - -The user must have the `CREATE` [privilege](privileges.html) on the table. - -## Parameters - -| Parameter | Description | -|-----------|-------------| -| `IF EXISTS` | Rename the column only if a column of `current_name` exists; if one does not exist, do not return an error. | -| `table_name` | The name of the table with the column you want to use. | -| `current_name` | The current name of the column. | -| `name` | The [`name`](sql-grammar.html#name) you want to use for the column, which must be unique to its table and follow these [identifier rules](keywords-and-identifiers.html#identifiers). | - -## Viewing Schema Changes - -{% include {{ page.version.version }}/misc/schema-change-view-job.md %} - -## Example - -### Rename a Column - -~~~ sql -> SELECT * FROM users; -~~~ -~~~ -+----+-------+-------+ -| id | name | title | -+----+-------+-------+ -| 1 | Tom | cat | -| 2 | Jerry | rat | -+----+-------+-------+ -~~~ -~~~ sql -> ALTER TABLE users RENAME COLUMN title TO species; -~~~ -~~~ sql -> SELECT * FROM users; -~~~ -~~~ -+----+-------+---------+ -| id | name | species | -+----+-------+---------+ -| 1 | Tom | cat | -| 2 | Jerry | rat | -+----+-------+---------+ -~~~ - -## See Also - -- [`RENAME DATABASE`](rename-database.html) -- [`RENAME TABLE`](rename-table.html) -- [`ALTER TABLE`](alter-table.html) diff --git a/src/current/v2.0/rename-database.md b/src/current/v2.0/rename-database.md deleted file mode 100644 index d24e971e8a6..00000000000 --- a/src/current/v2.0/rename-database.md +++ /dev/null @@ -1,105 +0,0 @@ ---- -title: RENAME DATABASE -summary: The RENAME DATABASE statement changes the name of a database. -toc: true ---- - -The `RENAME DATABASE` [statement](sql-statements.html) changes the name of a database. - -{{site.data.alerts.callout_info}}It is not possible to rename a database referenced by a view. For more details, see View Dependencies.{{site.data.alerts.end}} - -{{site.data.alerts.callout_danger}} -Database renames **are not transactional**. For more information, see [Database renaming considerations](#database-renaming-considerations). -{{site.data.alerts.end}} - -## Synopsis - -
      -{% include {{ page.version.version }}/sql/diagrams/rename_database.html %} -
-
-## Required Privileges
-
-Only the `root` user can rename databases.
-
-## Parameters
-
-Parameter | Description
-----------|------------
-`name` | The first instance of `name` is the current name of the database. The second instance is the new name for the database. The new name [must be unique](#rename-fails-new-name-already-in-use) and follow these [identifier rules](keywords-and-identifiers.html#identifiers).
-
-## Database renaming considerations
-
-Database renames are not transactional. There are three phases during a rename:
-
-1. The `system.namespace` table is updated. This phase is transactional, and will be rolled back if the transaction aborts.
-2. The database descriptor (an internal data structure) is updated, and announced to every other node. This phase is **not** transactional. The rename will be announced to other nodes only if the transaction commits, but there is no guarantee on how much time this operation will take.
-3. Once the new name has propagated to every node in the cluster, another internal transaction is run that declares the old name ready for reuse in another context.
-
-This yields a surprising and undesirable behavior: when run inside a [`BEGIN`](begin-transaction.html) ... [`COMMIT`](commit-transaction.html) block, it’s possible for a rename to be half-done: not persisted in storage, but visible to other nodes or other transactions. This violates A, C, and I in [ACID](https://en.wikipedia.org/wiki/ACID_(computer_science)). Only D is guaranteed: If the transaction commits successfully, the new name will persist after that.
-
-This is a [known limitation](known-limitations.html#database-and-table-renames-are-not-transactional). For an issue tracking this limitation, see [cockroach#12123](https://github.com/cockroachdb/cockroach/issues/12123).
-
-## Examples
-
-### Rename a Database
-
-~~~ sql
-> SHOW DATABASES;
-~~~
-~~~
-+----------+
-| Database |
-+----------+
-| db1      |
-| db2      |
-| system   |
-+----------+
-~~~
-~~~ sql
-> ALTER DATABASE db1 RENAME TO db3;
-~~~
-~~~
-RENAME DATABASE
-~~~
-~~~ sql
-> SHOW DATABASES;
-~~~
-~~~
-+----------+
-| Database |
-+----------+
-| db2      |
-| db3      |
-| system   |
-+----------+
-~~~
-
-### Rename Fails (New Name Already In Use)
-
-~~~ sql
-> SHOW DATABASES;
-~~~
-~~~
-+----------+
-| Database |
-+----------+
-| db2      |
-| db3      |
-| system   |
-+----------+
-~~~
-~~~ sql
-> ALTER DATABASE db2 RENAME TO db3;
-~~~
-~~~
-pq: the new database name "db3" already exists
-~~~
-
-## See Also
-
-- [`CREATE DATABASE`](create-database.html)
-- [`SHOW DATABASES`](show-databases.html)
-- [`SET DATABASE`](set-vars.html)
-- [`DROP DATABASE`](drop-database.html)
-- [Other SQL Statements](sql-statements.html)
diff --git a/src/current/v2.0/rename-index.md b/src/current/v2.0/rename-index.md
deleted file mode 100644
index a9bdf808064..00000000000
--- a/src/current/v2.0/rename-index.md
+++ /dev/null
@@ -1,74 +0,0 @@
----
-title: RENAME INDEX
-summary: The RENAME INDEX statement changes the name of an index for a table.
-toc: true
----
-
-The `RENAME INDEX` [statement](sql-statements.html) changes the name of an index for a table.
-
-{{site.data.alerts.callout_info}}It is not possible to rename an index referenced by a view. For more details, see View Dependencies.{{site.data.alerts.end}}
-
-
-## Synopsis
-
      -{% include {{ page.version.version }}/sql/diagrams/rename_index.html %} -
      - -## Required Privileges - -The user must have the `CREATE` [privilege](privileges.html) on the table. - -## Parameters - -| Parameter | Description | -|-----------|-------------| -| `IF EXISTS` | Rename the index only if an index `current_name` exists; if one does not exist, do not return an error. | -| `table_name` | The name of the table with the index you want to use | -| `index_name` | The current name of the index | -| `name` | The [`name`](sql-grammar.html#name) you want to use for the index, which must be unique to its table and follow these [identifier rules](keywords-and-identifiers.html#identifiers). | - -## Example - -### Rename an Index - -~~~ sql -> SHOW INDEXES FROM users; -~~~ -~~~ -+-------+----------------+--------+-----+--------+-----------+---------+----------+ -| Table | Name | Unique | Seq | Column | Direction | Storing | Implicit | -+-------+----------------+--------+-----+--------+-----------+---------+----------+ -| users | primary | true | 1 | id | ASC | false | false | -| users | users_name_idx | false | 1 | name | ASC | false | false | -| users | users_name_idx | false | 2 | id | ASC | false | true | -+-------+----------------+--------+-----+--------+-----------+---------+----------+ -(3 rows) -~~~ -~~~ sql -> ALTER INDEX users@users_name_idx RENAME TO name_idx; -~~~ -~~~ -RENAME INDEX -~~~ -~~~ sql -> SHOW INDEXES FROM users; -~~~ -~~~ -+-------+----------+--------+-----+--------+-----------+---------+----------+ -| Table | Name | Unique | Seq | Column | Direction | Storing | Implicit | -+-------+----------+--------+-----+--------+-----------+---------+----------+ -| users | primary | true | 1 | id | ASC | false | false | -| users | name_idx | false | 1 | name | ASC | false | false | -| users | name_idx | false | 2 | id | ASC | false | true | -+-------+----------+--------+-----+--------+-----------+---------+----------+ -(3 rows) -~~~ - -## See Also - -- [Indexes](indexes.html) -- [`CREATE INDEX`](create-index.html) -- [`RENAME COLUMN`](rename-column.html) -- [`RENAME DATABASE`](rename-database.html) -- [`RENAME TABLE`](rename-table.html) diff --git a/src/current/v2.0/rename-sequence.md b/src/current/v2.0/rename-sequence.md deleted file mode 100644 index 2e98bb4051b..00000000000 --- a/src/current/v2.0/rename-sequence.md +++ /dev/null @@ -1,129 +0,0 @@ ---- -title: RENAME SEQUENCE -summary: The RENAME SEQUENCE statement changes the name of a sequence. -toc: true ---- - -New in v2.0: The `RENAME TO` [statement](sql-statements.html) is part of [`ALTER SEQUENCE`](alter-sequence.html), and changes the name of a sequence. - -{{site.data.alerts.callout_danger}}You cannot rename a sequence that's being used in a table. To rename the sequence, drop the DEFAULT expressions that reference the sequence, rename the sequence, and add the DEFAULT expressions back.{{site.data.alerts.end}} - -{{site.data.alerts.callout_info}}To understand how CockroachDB changes schema elements without requiring table locking or other user-visible downtime, see Online Schema Changes in CockroachDB.{{site.data.alerts.end}} - - -## Required Privileges - -The user must have the `CREATE` [privilege](privileges.html) on the parent database. - -## Synopsis - -
      {% include {{ page.version.version }}/sql/diagrams/rename_sequence.html %}
      - -## Parameters - - - - Parameter | Description ------------|------------ -`IF EXISTS` | Rename the sequence only if it exists; if it does not exist, do not return an error. -`current_name` | The current name of the sequence you want to modify. -`new_name` | The new name of the sequence, which must be unique to its database and follow these [identifier rules](keywords-and-identifiers.html#identifiers).

      Name changes do not propagate to the table(s) using the sequence. - -## Examples - -### Rename a Sequence - -In this example, we will change the name of sequence `customer_seq` to `customer_number`. - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM information_schema.sequences; -~~~ -~~~ -+------------------+-----------------+--------------------+-----------+-------------------+-------------------------+---------------+-------------+----------------------+---------------------+-----------+--------------+ -| sequence_catalog | sequence_schema | sequence_name | data_type | numeric_precision | numeric_precision_radix | numeric_scale | start_value | minimum_value | maximum_value | increment | cycle_option | -+------------------+-----------------+--------------------+-----------+-------------------+-------------------------+---------------+-------------+----------------------+---------------------+-----------+--------------+ -| def | db_2 | test_4 | INT | 64 | 2 | 0 | 1 | 1 | 9223372036854775807 | 1 | NO | -| def | test_db | customer_seq | INT | 64 | 2 | 0 | 101 | 1 | 9223372036854775807 | 2 | NO | -| def | test_db | desc_customer_list | INT | 64 | 2 | 0 | 1000 | -9223372036854775808 | -1 | -2 | NO | -| def | test_db | test_sequence3 | INT | 64 | 2 | 0 | 1 | 1 | 9223372036854775807 | 1 | NO | -+------------------+-----------------+--------------------+-----------+-------------------+-------------------------+---------------+-------------+----------------------+---------------------+-----------+--------------+ -(4 rows) -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> ALTER SEQUENCE test_db.customer_seq RENAME TO test_db.customer_number; -~~~ -~~~ -RENAME SEQUENCE -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM information_schema.sequences; -~~~ -~~~ -+------------------+-----------------+--------------------+-----------+-------------------+-------------------------+---------------+-------------+----------------------+---------------------+-----------+--------------+ -| sequence_catalog | sequence_schema | sequence_name | data_type | numeric_precision | numeric_precision_radix | numeric_scale | start_value | minimum_value | maximum_value | increment | cycle_option | -+------------------+-----------------+--------------------+-----------+-------------------+-------------------------+---------------+-------------+----------------------+---------------------+-----------+--------------+ -| def | db_2 | test_4 | INT | 64 | 2 | 0 | 1 | 1 | 9223372036854775807 | 1 | NO | -| def | test_db | customer_number | INT | 64 | 2 | 0 | 101 | 1 | 9223372036854775807 | 2 | NO | -| def | test_db | desc_customer_list | INT | 64 | 2 | 0 | 1000 | -9223372036854775808 | -1 | -2 | NO | -| def | test_db | test_sequence3 | INT | 64 | 2 | 0 | 1 | 1 | 9223372036854775807 | 1 | NO | -+------------------+-----------------+--------------------+-----------+-------------------+-------------------------+---------------+-------------+----------------------+---------------------+-----------+--------------+ -(4 rows) -~~~ - -### Move a Sequence - -In this example, we will move the sequence we renamed in the first example (`customer_number`) to a different database. 
- -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM information_schema.sequences; -~~~ -~~~ -+------------------+-----------------+--------------------+-----------+-------------------+-------------------------+---------------+-------------+----------------------+---------------------+-----------+--------------+ -| sequence_catalog | sequence_schema | sequence_name | data_type | numeric_precision | numeric_precision_radix | numeric_scale | start_value | minimum_value | maximum_value | increment | cycle_option | -+------------------+-----------------+--------------------+-----------+-------------------+-------------------------+---------------+-------------+----------------------+---------------------+-----------+--------------+ -| def | db_2 | test_4 | INT | 64 | 2 | 0 | 1 | 1 | 9223372036854775807 | 1 | NO | -| def | test_db | customer_number | INT | 64 | 2 | 0 | 101 | 1 | 9223372036854775807 | 2 | NO | -| def | test_db | desc_customer_list | INT | 64 | 2 | 0 | 1000 | -9223372036854775808 | -1 | -2 | NO | -| def | test_db | test_sequence3 | INT | 64 | 2 | 0 | 1 | 1 | 9223372036854775807 | 1 | NO | -+------------------+-----------------+--------------------+-----------+-------------------+-------------------------+---------------+-------------+----------------------+---------------------+-----------+--------------+ -(4 rows) -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> ALTER SEQUENCE test_db.customer_number RENAME TO db_2.customer_number; -~~~ -~~~ -RENAME SEQUENCE -~~~ -~~~ sql -> SELECT * FROM information_schema.sequences; -~~~ -~~~ -+------------------+-----------------+--------------------+-----------+-------------------+-------------------------+---------------+-------------+----------------------+---------------------+-----------+--------------+ -| sequence_catalog | sequence_schema | sequence_name | data_type | numeric_precision | numeric_precision_radix | numeric_scale | start_value | minimum_value | maximum_value | increment | cycle_option | -+------------------+-----------------+--------------------+-----------+-------------------+-------------------------+---------------+-------------+----------------------+---------------------+-----------+--------------+ -| def | db_2 | test_4 | INT | 64 | 2 | 0 | 1 | 1 | 9223372036854775807 | 1 | NO | -| def | db_2 | customer_number | INT | 64 | 2 | 0 | 101 | 1 | 9223372036854775807 | 2 | NO | -| def | test_db | desc_customer_list | INT | 64 | 2 | 0 | 1000 | -9223372036854775808 | -1 | -2 | NO | -| def | test_db | test_sequence3 | INT | 64 | 2 | 0 | 1 | 1 | 9223372036854775807 | 1 | NO | -+------------------+-----------------+--------------------+-----------+-------------------+-------------------------+---------------+-------------+----------------------+---------------------+-----------+--------------+ -(4 rows) -~~~ - -## See Also - -- [`CREATE SEQUENCE`](create-sequence.html) -- [`ALTER SEQUENCE`](alter-sequence.html) -- [`DROP SEQUENCE`](drop-sequence.html) -- [Functions and Operators](functions-and-operators.html) diff --git a/src/current/v2.0/rename-table.md b/src/current/v2.0/rename-table.md deleted file mode 100644 index be19ce102d0..00000000000 --- a/src/current/v2.0/rename-table.md +++ /dev/null @@ -1,151 +0,0 @@ ---- -title: RENAME TABLE -summary: The RENAME TABLE statement changes the name of a table. -toc: true ---- - -The `RENAME TABLE` [statement](sql-statements.html) changes the name of a table. It can also be used to move a table from one database to another. 
- -{{site.data.alerts.callout_info}}It is not possible to rename a table referenced by a view. For more details, see View Dependencies.{{site.data.alerts.end}} - -{{site.data.alerts.callout_danger}} -Table renames **are not transactional**. For more information, see [Table renaming considerations](#table-renaming-considerations). -{{site.data.alerts.end}} - -## Required privileges - -The user must have the `DROP` [privilege](privileges.html) on the table and the `CREATE` on the parent database. When moving a table from one database to another, the user must have the `CREATE` privilege on both the source and target databases. - -## Synopsis - -
      -{% include {{ page.version.version }}/sql/diagrams/rename_table.html %} -
      - -## Parameters - -| Parameter | Description | -|-----------|-------------| -| `IF EXISTS` | Rename the table only if a table with the current name exists; if one does not exist, do not return an error. | -| `current_name` | The current name of the table. | -| `new_name` | The new name of the table, which must be unique within its database and follow these [identifier rules](keywords-and-identifiers.html#identifiers). When the parent database is not set as the default, the name must be formatted as `database.name`.

-The [`UPSERT`](upsert.html) and [`INSERT ON CONFLICT`](insert.html) statements use a temporary table called `excluded` to handle uniqueness conflicts during execution. It's therefore not recommended to use the name `excluded` for any of your tables. |
-
-## Viewing Schema Changes
-
-{% include {{ page.version.version }}/misc/schema-change-view-job.md %}
-
-## Table renaming considerations
-
-Table renames are not transactional. There are three phases during a rename:
-
-1. The `system.namespace` table is updated. This phase is transactional, and will be rolled back if the transaction aborts.
-2. The table descriptor (an internal data structure) is updated, and announced to every other node. This phase is **not** transactional. The rename will be announced to other nodes only if the transaction commits, but there is no guarantee on how much time this operation will take.
-3. Once the new name has propagated to every node in the cluster, another internal transaction is run that declares the old name ready for reuse in another context.
-
-This yields a surprising and undesirable behavior: when run inside a [`BEGIN`](begin-transaction.html) ... [`COMMIT`](commit-transaction.html) block, it’s possible for a rename to be half-done: not persisted in storage, but visible to other nodes or other transactions. This violates A, C, and I in [ACID](https://en.wikipedia.org/wiki/ACID_(computer_science)). Only D is guaranteed: If the transaction commits successfully, the new name will persist after that.
-
-This is a [known limitation](known-limitations.html#database-and-table-renames-are-not-transactional). For an issue tracking this limitation, see [cockroach#12123](https://github.com/cockroachdb/cockroach/issues/12123).
-
-## Examples
-
-### Rename a table
-
-~~~ sql
-> SHOW TABLES FROM db1;
-~~~
-~~~
-+--------+
-| Table  |
-+--------+
-| table1 |
-| table2 |
-+--------+
-~~~
-~~~ sql
-> ALTER TABLE db1.table1 RENAME TO db1.tablea;
-~~~
-~~~ sql
-> SHOW TABLES FROM db1;
-~~~
-~~~
-+--------+
-| Table  |
-+--------+
-| table2 |
-| tablea |
-+--------+
-~~~
-
-To avoid an error in case the table does not exist, you can include `IF EXISTS`:
-
-~~~ sql
-> ALTER TABLE IF EXISTS db1.table1 RENAME TO db1.table2;
-~~~
-
-### Move a table
-
-To move a table from one database to another, use the above syntax but specify the source database after `ALTER TABLE` and the target database after `RENAME TO`:
-
-~~~ sql
-> SHOW DATABASES;
-~~~
-~~~
-+----------+
-| Database |
-+----------+
-| db1      |
-| db2      |
-| system   |
-+----------+
-~~~
-~~~ sql
-> SHOW TABLES FROM db1;
-~~~
-~~~
-+--------+
-| Table  |
-+--------+
-| table2 |
-| tablea |
-+--------+
-~~~
-~~~ sql
-> SHOW TABLES FROM db2;
-~~~
-~~~
-+-------+
-| Table |
-+-------+
-+-------+
-~~~
-~~~ sql
-> ALTER TABLE db1.tablea RENAME TO db2.tablea;
-~~~
-~~~ sql
-> SHOW TABLES FROM db1;
-~~~
-~~~
-+--------+
-| Table  |
-+--------+
-| table2 |
-+--------+
-~~~
-~~~ sql
-> SHOW TABLES FROM db2;
-~~~
-~~~
-+--------+
-| Table  |
-+--------+
-| tablea |
-+--------+
-~~~
-
-## See Also
-
-- [`CREATE TABLE`](create-table.html)
-- [`ALTER TABLE`](alter-table.html)
-- [`SHOW TABLES`](show-tables.html)
-- [`DROP TABLE`](drop-table.html)
-- [Other SQL Statements](sql-statements.html)
diff --git a/src/current/v2.0/reset-cluster-setting.md b/src/current/v2.0/reset-cluster-setting.md
deleted file mode 100644
index 677e60ca157..00000000000
--- a/src/current/v2.0/reset-cluster-setting.md
+++ /dev/null
@@ -1,68 +0,0 @@
----
-title: RESET CLUSTER SETTING
-summary: The RESET CLUSTER SETTING statement resets a cluster setting to its default value.
-toc: true
----
-
-The `RESET` [statement](sql-statements.html) resets a [cluster setting](set-cluster-setting.html) to its default value.
-
-
-## Required Privileges
-
-Only the `root` user can modify cluster settings.
-
-## Synopsis
-
      -{% include {{ page.version.version }}/sql/diagrams/reset_csetting.html %} -
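-As a sketch (using the setting from the example below), the following two statements are equivalent ways to restore a setting's default value: - -~~~ sql -> RESET CLUSTER SETTING sql.metrics.statement_details.enabled; - -> SET CLUSTER SETTING sql.metrics.statement_details.enabled TO DEFAULT; -~~~ -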
- -## Parameters - -| Parameter | Description | -|-----------|-------------| -| `var_name` | The name of the [cluster setting](cluster-settings.html) (case-insensitive). | - -## Example - -{{site.data.alerts.callout_success}}You can use SET CLUSTER SETTING .. TO DEFAULT to reset a cluster setting as well.{{site.data.alerts.end}} - -~~~ sql -> SET CLUSTER SETTING sql.metrics.statement_details.enabled = false; -~~~ - -~~~ sql -> SHOW CLUSTER SETTING sql.metrics.statement_details.enabled; -~~~ - -~~~ -+---------------------------------------+ -| sql.metrics.statement_details.enabled | -+---------------------------------------+ -| false | -+---------------------------------------+ -(1 row) -~~~ - -~~~ sql -> RESET CLUSTER SETTING sql.metrics.statement_details.enabled; -~~~ - -~~~ sql -> SHOW CLUSTER SETTING sql.metrics.statement_details.enabled; -~~~ - -~~~ -+---------------------------------------+ -| sql.metrics.statement_details.enabled | -+---------------------------------------+ -| true | -+---------------------------------------+ -(1 row) -~~~ - -## See Also - -- [`SET CLUSTER SETTING`](set-cluster-setting.html) -- [`SHOW CLUSTER SETTING`](show-cluster-setting.html) -- [Cluster settings](cluster-settings.html) diff --git a/src/current/v2.0/reset-vars.md b/src/current/v2.0/reset-vars.md deleted file mode 100644 index 8524108d2ea..00000000000 --- a/src/current/v2.0/reset-vars.md +++ /dev/null @@ -1,65 +0,0 @@ ---- -title: RESET (session variable) -summary: The RESET statement resets a session variable to its default value. -toc: true ---- - -The `RESET` [statement](sql-statements.html) resets a [session variable](set-vars.html) to its default value for the client session. - - -## Required Privileges - -No [privileges](privileges.html) are required to reset a session setting. - -## Synopsis - -
      {% include {{ page.version.version }}/sql/diagrams/reset_session.html %}
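-As a sketch (using the session variable from the example below), these two statements are equivalent: - -~~~ sql -> RESET default_transaction_isolation; - -> SET default_transaction_isolation TO DEFAULT; -~~~ -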
- -## Parameters - -| Parameter | Description | -|-----------|-------------| -| `session_var` | The name of the [session variable](set-vars.html#supported-variables). | - -## Example - -{{site.data.alerts.callout_success}}You can use SET .. TO DEFAULT to reset a session variable as well.{{site.data.alerts.end}} - -~~~ sql -> SET default_transaction_isolation = SNAPSHOT; -~~~ - -~~~ sql -> SHOW default_transaction_isolation; -~~~ - -~~~ -+-------------------------------+ -| default_transaction_isolation | -+-------------------------------+ -| SNAPSHOT | -+-------------------------------+ -(1 row) -~~~ - -~~~ sql -> RESET default_transaction_isolation; -~~~ - -~~~ sql -> SHOW default_transaction_isolation; -~~~ - -~~~ -+-------------------------------+ -| default_transaction_isolation | -+-------------------------------+ -| SERIALIZABLE | -+-------------------------------+ -(1 row) -~~~ - -## See Also - -- [`SET` (session variable)](set-vars.html) -- [`SHOW` (session variables)](show-vars.html) diff --git a/src/current/v2.0/restore-data.md b/src/current/v2.0/restore-data.md deleted file mode 100644 index f7081372280..00000000000 --- a/src/current/v2.0/restore-data.md +++ /dev/null @@ -1,20 +0,0 @@ ---- -title: Restore Data -summary: Learn how to back up and restore a CockroachDB cluster. -toc: false ---- - -How you restore your cluster's data depends on how it was originally [backed up](back-up-data.html): - -Backup Type | Restore using... -------------|----------------- -[`cockroach dump`](sql-dump.html) | [Import data](import-data.html) -[`BACKUP`](backup.html)
(*[enterprise license](https://www.cockroachlabs.com/pricing/) only*) | [`RESTORE`](restore.html) - -If you created a backup from another database and want to import it into CockroachDB, see [Import data](import-data.html). - -## See Also - -- [Back up Data](back-up-data.html) -- [Use the Built-in SQL Client](use-the-built-in-sql-client.html) -- [Other Cockroach Commands](cockroach-commands.html) diff --git a/src/current/v2.0/restore.md b/src/current/v2.0/restore.md deleted file mode 100644 index 8eba2372ce0..00000000000 --- a/src/current/v2.0/restore.md +++ /dev/null @@ -1,225 +0,0 @@ ---- -title: RESTORE -summary: Restore your CockroachDB cluster from backups stored on cloud storage services such as AWS S3, Google Cloud Storage, or NFS. -toc: true ---- - -{{site.data.alerts.callout_danger}}The RESTORE feature is only available to enterprise users. For non-enterprise restores, see Restore Data.{{site.data.alerts.end}} - -The `RESTORE` [statement](sql-statements.html) restores your cluster's schemas and data from [an enterprise `BACKUP`](backup.html) stored on services such as AWS S3, Google Cloud Storage, NFS, or HTTP storage. - -Because CockroachDB is designed with high fault tolerance, restores are designed primarily for disaster recovery, i.e., restarting your cluster if it loses a majority of its nodes. Isolated issues (such as small-scale node outages) do not require any intervention. - - -## Functional Details - -### Restore Targets - -You can restore entire tables (which automatically includes their indexes) or [views](views.html) from a backup. This process uses the data stored in the backup to create entirely new tables or views in the [target database](#target-database). - -The notion of "restoring a database" simply restores all of the tables and views that belong to the database, but does not create the database. For more information, see [Target Database](#target-database). - -{{site.data.alerts.callout_info}}RESTORE only offers table-level granularity; it does not support restoring subsets of a table.{{site.data.alerts.end}} - -Because this process is designed for disaster recovery, CockroachDB expects that the tables do not currently exist in the [target database](#target-database). This means the target database must not have tables or views with the same name as the restored table or view. If any of the restore target's names are being used, you can: - -- [`DROP TABLE`](drop-table.html), [`DROP VIEW`](drop-view.html), or [`DROP SEQUENCE`](drop-sequence.html) and then restore them. Note that a sequence cannot be dropped while it is being used in a column's `DEFAULT` expression, so those expressions must be dropped before the sequence is dropped, and recreated after the sequence is recreated. The `setval` [function](functions-and-operators.html#sequence-functions) can be used to set the value of the sequence to what it was previously (see the sketch after the synopsis below). -- [Restore the table or view into a different database](#into_db). - -### Object Dependencies - -Dependent objects must be restored at the same time as the objects they depend on. - -Object | Depends On --------|----------- -Table with [foreign key](foreign-key.html) constraints | The table it `REFERENCES` (however, this dependency can be [removed during the restore](#skip_missing_foreign_keys)). -Table with a [sequence](create-sequence.html) | The sequence. -[Views](views.html) | The tables used in the view's `SELECT` statement. 
-[Interleaved tables](interleave-in-parent.html) | The parent table in the [interleaved hierarchy](interleave-in-parent.html#interleaved-hierarchy). - -### Target Database - -By default, tables and views are restored into a database with the name of the database from which they were backed up. However, also consider: - -- You can choose to [change the target database](#into_db). - -- If the target database no longer exists, you must [create it](create-database.html). - -The target database must not have tables or views with the same name as the tables or views you're restoring. - -### Users and Privileges - -Table and view users/privileges are not restored. Restored tables and views instead inherit the privileges of the database into which they're restored. - -However, every backup includes `system.users`, so you can [restore users and their passwords](#restoring-users-from-system-users-backup). - -Table-level privileges must be [granted to users](grant.html) after the restore is complete. - -### Restore Types - -You can either restore from a full backup or from a full backup with incremental backups, based on the backup files you include. - -Restore Type | Parameters -----|---------- -**Full backup** | Include only the path to the full backup. -**Full backup +
incremental backups** | Include the path to the full backup as the first argument and the subsequent incremental backups from oldest to newest as the following arguments. - -### Point-in-time Restore New in v2.0 - -{% include {{ page.version.version }}/misc/beta-warning.md %} - -If the full or incremental backup was taken [with revision history](backup.html#backups-with-revision-history-new-in-v2-0), you can restore the data as it existed at the specified point-in-time within the revision history captured by that backup. - -If you do not specify a point-in-time, the data will be restored to the backup timestamp; that is, the restore will work as if the data was backed up without revision history. - -## Performance - -The `RESTORE` process minimizes its impact on the cluster's performance by distributing work to all nodes. Subsets of the restored data (known as ranges) are evenly distributed among randomly selected nodes, with each range initially restored to only one node. Once the range is restored, the node begins replicating it to other nodes. - -{{site.data.alerts.callout_info}}When a RESTORE fails or is canceled, partially restored data is properly cleaned up. This can have a minor, temporary impact on cluster performance.{{site.data.alerts.end}} - -## Viewing and Controlling Restore Jobs - -After CockroachDB successfully initiates a restore, it registers the restore as a job, which you can view with [`SHOW JOBS`](show-jobs.html). - -After the restore has been initiated, you can control it with [`PAUSE JOB`](pause-job.html), [`RESUME JOB`](resume-job.html), and [`CANCEL JOB`](cancel-job.html). - -## Synopsis - -
      -{% include {{ page.version.version }}/sql/diagrams/restore.html %} -
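-As noted in the `DROP SEQUENCE` bullet under [Restore Targets](#restore-targets), a sequence referenced by a column's `DEFAULT` expression must be detached before it can be dropped, and reattached after it is recreated. A minimal sketch, assuming a hypothetical table `t` whose `id` column is backed by a sequence `t_seq` (the names and the `110` value are illustrative only): - -~~~ sql -> ALTER TABLE t ALTER COLUMN id DROP DEFAULT; -- detach the hypothetical sequence - -> DROP SEQUENCE t_seq; - --- ... drop and restore the table, then recreate and reattach the sequence ... - -> CREATE SEQUENCE t_seq; - -> ALTER TABLE t ALTER COLUMN id SET DEFAULT nextval('t_seq'); - -> SELECT setval('t_seq', 110); -- restore the sequence's previous value -~~~ -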
      - -{{site.data.alerts.callout_info}}The RESTORE statement cannot be used within a transaction.{{site.data.alerts.end}} - -## Required Privileges - -Only the `root` user can run `RESTORE`. - -## Parameters - -| Parameter | Description | -|-----------|-------------| -| `table_pattern` | The table or [view](views.html) you want to restore. | -| `database_name` | The name of the database you want to restore (i.e., restore all tables and views in the database). You can restore an entire database only if you had backed up the entire database. | -| `full_backup_location` | The URL where the full backup is stored.

      For information about this URL structure, see [Backup File URLs](#backup-file-urls). | -| `incremental_backup_location` | The URL where an incremental backup is stored.

      Lists of incremental backups must be sorted from oldest to newest. The newest incremental backup's timestamp must be within the table's garbage collection period.

      For information about this URL structure, see [Backup File URLs](#backup-file-urls).

For more information about garbage collection, see [Configure Replication Zones](configure-replication-zones.html#replication-zone-format). | -| `AS OF SYSTEM TIME timestamp` | New in v2.0: Restore data as it existed as of [`timestamp`](as-of-system-time.html). You can restore point-in-time data only if you had taken a full or incremental backup [with revision history](backup.html#backups-with-revision-history-new-in-v2-0). | -| `kv_option_list` | Control the restore's behavior with [these options](#restore-option-list). | - -### Backup File URLs - -The URL for each of your backup's locations must use the following format: - -{% include {{ page.version.version }}/misc/external-urls.md %} - -### Restore Option List - -You can include the following options as key-value pairs in the `kv_option_list` to control the restore process's behavior. - -#### `into_db` - -- **Description**: If you want to restore a table or view into a database other than the one it originally existed in, you can [change the target database](#restore-into-a-different-database). This is useful if you want to restore a table that currently exists, but do not want to drop it. -- **Key**: `into_db` -- **Value**: The name of the database you want to use -- **Example**: `WITH into_db = 'newdb'` - -#### `skip_missing_foreign_keys` - -- **Description**: If you want to restore a table with a foreign key but do not want to restore the table it references, you can drop the Foreign Key constraint from the table and then have it restored. -- **Key**: `skip_missing_foreign_keys` -- **Value**: *No value* -- **Example**: `WITH skip_missing_foreign_keys` - -#### `skip_missing_sequences` - -New in v2.0 - -- **Description**: If you want to restore a table that depends on a sequence but do not want to restore the sequence it references, you can drop the sequence dependency from a table (i.e., the `DEFAULT` expression that uses the sequence) and then have it restored. -- **Key**: `skip_missing_sequences` -- **Value**: *No value* -- **Example**: `WITH skip_missing_sequences` - -## Examples - -### Restore a Single Table - -~~~ sql -> RESTORE bank.customers FROM 'gs://acme-co-backup/database-bank-2017-03-27-weekly'; -~~~ - -### Restore Multiple Tables - -~~~ sql -> RESTORE bank.customers, bank.accounts FROM 'gs://acme-co-backup/database-bank-2017-03-27-weekly'; -~~~ - -### Restore an Entire Database - -~~~ sql -> RESTORE DATABASE bank FROM 'gs://acme-co-backup/database-bank-2017-03-27-weekly'; -~~~ - -{{site.data.alerts.callout_info}}RESTORE DATABASE can only be used if the entire database was backed up.{{site.data.alerts.end}} - -### Point-in-time Restore New in v2.0 - -~~~ sql -> RESTORE bank.customers FROM 'gs://acme-co-backup/database-bank-2017-03-27-weekly' \ -AS OF SYSTEM TIME '2017-02-26 10:00:00'; -~~~ - -### Restore from Incremental Backups - -~~~ sql -> RESTORE bank.customers \ -FROM 'gs://acme-co-backup/database-bank-2017-03-27-weekly', 'gs://acme-co-backup/database-bank-2017-03-28-nightly', 'gs://acme-co-backup/database-bank-2017-03-29-nightly'; -~~~ - -### Point-in-time Restore from Incremental Backups New in v2.0 - -~~~ sql -> RESTORE bank.customers \ -FROM 'gs://acme-co-backup/database-bank-2017-03-27-weekly', 'gs://acme-co-backup/database-bank-2017-03-28-nightly', 'gs://acme-co-backup/database-bank-2017-03-29-nightly' \ -AS OF SYSTEM TIME '2017-02-28 10:00:00'; -~~~ - -### Restore into a Different Database - -By default, tables and views are restored to the database they originally belonged to. 
However, using the [`into_db`](#into_db) option, you can control the target database. - -~~~ sql -> RESTORE bank.customers \ -FROM 'gs://acme-co-backup/database-bank-2017-03-27-weekly' \ -WITH into_db = 'newdb'; -~~~ - -### Remove the Foreign Key Before Restore - -By default, tables with [Foreign Key](foreign-key.html) constraints must be restored at the same time as the tables they reference. However, using the [`skip_missing_foreign_keys`](#skip_missing_foreign_keys) option you can remove the Foreign Key constraint from the table and then restore it. - -~~~ sql -> RESTORE bank.accounts \ -FROM 'gs://acme-co-backup/database-bank-2017-03-27-weekly' \ -WITH skip_missing_foreign_keys; -~~~ - -### Restoring Users from `system.users` Backup - -Every full backup contains the `system.users` table, which you can use to restore your cluster's usernames and their hashed passwords. However, to restore them, you must restore the `system.users` table into a new database because you cannot drop the existing `system.users` table. - -After it's restored into a new database, you can write the restored `users` table data to the cluster's existing `system.users` table. - -~~~ sql -> RESTORE system.users \ -FROM 'azure://acme-co-backup/table-users-2017-03-27-full?AZURE_ACCOUNT_KEY=hash&AZURE_ACCOUNT_NAME=acme-co' \ -WITH into_db = 'newdb'; - -> INSERT INTO system.users SELECT * FROM newdb.users; - -> DROP TABLE newdb.users; -~~~ - -## See Also - -- [`BACKUP`](backup.html) -- [Configure Replication Zones](configure-replication-zones.html) diff --git a/src/current/v2.0/resume-job.md b/src/current/v2.0/resume-job.md deleted file mode 100644 index 377cda88de1..00000000000 --- a/src/current/v2.0/resume-job.md +++ /dev/null @@ -1,59 +0,0 @@ ---- -title: RESUME JOB -summary: The RESUME JOB statement lets you resume jobs that were previously paused with PAUSE JOB. -toc: true ---- - - The `RESUME JOB` [statement](sql-statements.html) lets you resume [paused](pause-job.html) [`BACKUP`](backup.html), [`RESTORE`](restore.html), and [`IMPORT`](import.html) jobs. - -{{site.data.alerts.callout_info}}You cannot pause schema changes.{{site.data.alerts.end}} - - -## Required Privileges - -By default, only the `root` user can control a job. - -## Synopsis - -
      -{% include {{ page.version.version }}/sql/diagrams/resume_job.html %} -
- -## Parameters - -Parameter | Description -----------|------------ -`job_id` | The ID of the job you want to resume, which can be found with [`SHOW JOBS`](show-jobs.html). - -## Examples - -### Pause & Resume a Restore Job - -~~~ sql -> SHOW JOBS; -~~~ -~~~ -+----------------+---------+-------------------------------------------+... -| id | type | description |... -+----------------+---------+-------------------------------------------+... -| 27536791415282 | RESTORE | RESTORE db.* FROM 'azure://backup/db/tbl' |... -+----------------+---------+-------------------------------------------+... -~~~ -~~~ sql -> PAUSE JOB 27536791415282; -~~~ - -Once you're ready for the restore to resume: - -~~~ sql -> RESUME JOB 27536791415282; -~~~ - -## See Also - -- [`PAUSE JOB`](pause-job.html) -- [`SHOW JOBS`](show-jobs.html) -- [`CANCEL JOB`](cancel-job.html) -- [`BACKUP`](backup.html) -- [`RESTORE`](restore.html) -- [`IMPORT`](import.html) \ No newline at end of file diff --git a/src/current/v2.0/revoke-roles.md b/src/current/v2.0/revoke-roles.md deleted file mode 100644 index 377a5f5734e..00000000000 --- a/src/current/v2.0/revoke-roles.md +++ /dev/null @@ -1,98 +0,0 @@ ---- -title: REVOKE <roles> -summary: The REVOKE <roles> statement revokes role membership from users and/or roles. -toc: true ---- - -New in v2.0: The `REVOKE <roles>` [statement](sql-statements.html) lets you revoke a [role's](roles.html) or [user's](create-and-manage-users.html) membership in a role. - -{{site.data.alerts.callout_info}}REVOKE <roles> is an enterprise-only feature.{{site.data.alerts.end}} - - -## Synopsis - -
      {% include {{ page.version.version }}/sql/diagrams/revoke_roles.html %}
- -## Required Privileges - -The user revoking role membership must be a role admin (i.e., members with the `ADMIN OPTION`) or a superuser (i.e., a member of the `admin` role). - -## Considerations - -- The `root` user cannot be revoked from the `admin` role. - -## Parameters - -Parameter | Description -----------|------------ -`ADMIN OPTION` | Revoke the user's role admin status. -`role_name` | The name of the role from which you want to remove members. To revoke members from multiple roles, use a comma-separated list of role names. -`user_name` | The name of the [user](create-and-manage-users.html) or [role](roles.html) from whom you want to revoke membership. To revoke multiple members, use a comma-separated list of user and/or role names. - -## Examples - -### Revoke Role Membership - -{% include copy-clipboard.html %} -~~~ sql -> SHOW GRANTS ON ROLE design; -~~~ -~~~ -+--------+---------+---------+ -| role | member | isAdmin | -+--------+---------+---------+ -| design | barkley | false | -| design | ernie | true | -| design | lola | false | -| design | lucky | false | -+--------+---------+---------+ -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> REVOKE design FROM lola; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SHOW GRANTS ON ROLE design; -~~~ -~~~ -+--------+---------+---------+ -| role | member | isAdmin | -+--------+---------+---------+ -| design | barkley | false | -| design | ernie | true | -| design | lucky | false | -+--------+---------+---------+ -~~~ - -### Revoke the Admin Option - -To revoke a user's or role's admin option from a role (without revoking the membership): -{% include copy-clipboard.html %} -~~~ sql -> REVOKE ADMIN OPTION FOR design FROM ernie; - -> SHOW GRANTS ON ROLE design; -~~~ -~~~ -+--------+---------+---------+ -| role | member | isAdmin | -+--------+---------+---------+ -| design | barkley | false | -| design | ernie | false | -| design | lucky | false | -+--------+---------+---------+ -~~~ - -## See Also - -- [Privileges](privileges.html) -- [`GRANT <roles>` (Enterprise)](grant-roles.html) -- [`GRANT <privileges>`](grant.html) -- [`REVOKE <privileges>`](revoke.html) -- [`SHOW GRANTS`](show-grants.html) -- [`SHOW ROLES`](show-roles.html) -- [`CREATE USER`](create-user.html) -- [`DROP USER`](drop-user.html) -- [Roles](roles.html) -- [Other SQL Statements](sql-statements.html) diff --git a/src/current/v2.0/revoke.md b/src/current/v2.0/revoke.md deleted file mode 100644 index 421abd010ed..00000000000 --- a/src/current/v2.0/revoke.md +++ /dev/null @@ -1,158 +0,0 @@ ---- -title: REVOKE <privileges> -summary: The REVOKE statement revokes privileges from users and/or roles. -toc: true ---- - -The `REVOKE <privileges>` [statement](sql-statements.html) revokes [privileges](privileges.html) from [users](create-and-manage-users.html) and/or [roles](roles.html). - -For the list of privileges that can be granted to and revoked from users and roles, see [`GRANT`](grant.html). - - -## Synopsis - -
      {% include {{ page.version.version }}/sql/diagrams/revoke_privileges.html %}
      - -## Required Privileges - -The user revoking privileges must have the [`GRANT`](grant.html) privilege on the target databases or tables. - -## Parameters - -Parameter | Description -----------|------------ -`table_name` | The name of the table for which you want to revoke privileges. To revoke privileges for multiple tables, use a comma-separated list of table names. To revoke privileges for all tables, use `*`. -`database_name` | The name of the database for which you want to revoke privileges. To revoke privileges for multiple databases, use a comma-separated list of database names.

Privileges revoked for databases will be revoked for any new tables created in the databases. -`user_name` | A comma-separated list of [users](create-and-manage-users.html) and/or [roles](roles.html) from whom you want to revoke privileges. - - -## Examples - -### Revoke Privileges on Databases - -~~~ sql -> SHOW GRANTS ON DATABASE db1, db2; -~~~ - -~~~ -+----------+------------+------------+ -| Database | User | Privileges | -+----------+------------+------------+ -| db1 | betsyroach | CREATE | -| db1 | maxroach | CREATE | -| db1 | root | ALL | -| db2 | betsyroach | CREATE | -| db2 | maxroach | CREATE | -| db2 | root | ALL | -+----------+------------+------------+ -(6 rows) -~~~ - -~~~ sql -> REVOKE CREATE ON DATABASE db1, db2 FROM maxroach, betsyroach; -~~~ - -~~~ sql -> SHOW GRANTS ON DATABASE db1, db2; -~~~ - -~~~ -+----------+------+------------+ -| Database | User | Privileges | -+----------+------+------------+ -| db1 | root | ALL | -| db2 | root | ALL | -+----------+------+------------+ -(2 rows) -~~~ - -{{site.data.alerts.callout_info}} Note that any tables that previously inherited the database-level privileges retain the privileges.{{site.data.alerts.end}} - -### Revoke Privileges on Specific Tables in a Database - -~~~ sql -> SHOW GRANTS ON TABLE db1.t1, db1.t2; -~~~ - -~~~ -+-------+------------+------------+ -| Table | User | Privileges | -+-------+------------+------------+ -| t1 | betsyroach | CREATE | -| t1 | betsyroach | DELETE | -| t1 | maxroach | CREATE | -| t1 | root | ALL | -| t2 | betsyroach | CREATE | -| t2 | betsyroach | DELETE | -| t2 | maxroach | CREATE | -| t2 | root | ALL | -+-------+------------+------------+ -(8 rows) -~~~ - -~~~ sql -> REVOKE CREATE ON TABLE db1.t1, db1.t2 FROM betsyroach; -~~~ - -~~~ sql -> SHOW GRANTS ON TABLE db1.t1, db1.t2; -~~~ - -~~~ -+-------+------------+------------+ -| Table | User | Privileges | -+-------+------------+------------+ -| t1 | betsyroach | DELETE | -| t1 | maxroach | CREATE | -| t1 | root | ALL | -| t2 | betsyroach | DELETE | -| t2 | maxroach | CREATE | -| t2 | root | ALL | -+-------+------------+------------+ -(6 rows) -~~~ - -### Revoke Privileges on All Tables in a Database - -~~~ sql -> SHOW GRANTS ON TABLE db2.t1, db2.t2; -~~~ - -~~~ -+-------+------------+------------+ -| Table | User | Privileges | -+-------+------------+------------+ -| t1 | betsyroach | DELETE | -| t1 | root | ALL | -| t2 | betsyroach | DELETE | -| t2 | root | ALL | -+-------+------------+------------+ -(4 rows) -~~~ - -~~~ sql -> REVOKE DELETE ON db2.* FROM betsyroach; - -> SHOW GRANTS ON TABLE db2.t1, db2.t2; -~~~ - -~~~ -+-------+------+------------+ -| Table | User | Privileges | -+-------+------+------------+ -| t1 | root | ALL | -| t2 | root | ALL | -+-------+------+------------+ -(2 rows) -~~~ - -## See Also - -- [Privileges](privileges.html) -- [`GRANT <privileges>`](grant.html) -- [`GRANT <roles>` (Enterprise)](grant-roles.html) -- [`REVOKE <roles>` (Enterprise)](revoke-roles.html) -- [`SHOW GRANTS`](show-grants.html) -- [`SHOW ROLES`](show-roles.html) -- [`CREATE USER`](create-user.html) -- [`DROP USER`](drop-user.html) -- [Roles](roles.html) -- [Other SQL Statements](sql-statements.html) diff --git a/src/current/v2.0/roles.md b/src/current/v2.0/roles.md deleted file mode 100644 index 249ea760a56..00000000000 --- a/src/current/v2.0/roles.md +++ /dev/null @@ -1,255 +0,0 @@ ---- -title: Manage Roles -summary: Roles are SQL groups that contain any number of users and roles as members. -toc: true ---- - -New in v2.0: Roles are SQL groups that contain any number of users and roles as members. 
To create and manage your cluster's roles, use the following statements: - -- [`CREATE ROLE` (Enterprise)](create-role.html) -- [`DROP ROLE` (Enterprise)](drop-role.html) -- [`GRANT <roles>` (Enterprise)](grant-roles.html) -- [`REVOKE <roles>` (Enterprise)](revoke-roles.html) -- [`GRANT <privileges>`](grant.html) -- [`REVOKE <privileges>`](revoke.html) -- [`SHOW ROLES`](show-roles.html) -- [`SHOW GRANTS`](show-grants.html) - - -## Terminology - -To get started, basic role terminology is outlined below: - -Term | Description -----|------------ -Role | A group containing any number of [users](create-and-manage-users.html) or other roles. -Role admin | A member of the role that's allowed to modify role membership. To create a role admin, use [`WITH ADMIN OPTION`](grant-roles.html#grant-the-admin-option). -Superuser / Admin | A member of the `admin` role. Only superusers can [`CREATE ROLE`](create-role.html) or [`DROP ROLE`](drop-role.html). The `admin` role is created by default and cannot be dropped. -`root` | A user that exists by default as a member of the `admin` role. The `root` user must always be a member of the `admin` role. -Inherit | The behavior that grants a role's privileges to its members. -Direct member | A user or role that is an immediate member of the role.

      Example: `A` is a member of `B`. -Indirect member | A user or role that is a member of the role by association.

      Example: `A` is a member of `C` ... is a member of `B` where "..." is an arbitrary number of memberships. - -## Example - -For the purpose of this example, you need: - -- An [enterprise license](enterprise-licensing.html) -- One CockroachDB node running in insecure mode: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach start \ - --insecure \ - --store=roles \ - --host=localhost - ~~~ - -In a new terminal, as the `root` user, use the [`cockroach user`](create-and-manage-users.html) command to create a new user, `maxroach`: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach user set maxroach --insecure -~~~ - -As the `root` user, open the [built-in SQL client](use-the-built-in-sql-client.html): - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure -~~~ - -Create a database and set it as the default: - -{% include copy-clipboard.html %} -~~~ sql -> CREATE DATABASE test_roles; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SET DATABASE = test_roles; -~~~ - -Now, let's [create a role](create-role.html): - -{% include copy-clipboard.html %} -~~~ sql -> CREATE ROLE system_ops; -~~~ - -See what roles are in our databases: - -{% include copy-clipboard.html %} -~~~ sql -> SHOW ROLES; -~~~ -~~~ -+------------+ -| rolename | -+------------+ -| admin | -| system_ops | -+------------+ -~~~ - -Next, grant privileges to the `system_ops` role you created: - -{% include copy-clipboard.html %} -~~~ sql -> GRANT CREATE, SELECT ON DATABASE test_roles TO system_ops; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SHOW GRANTS ON DATABASE test_roles; -~~~ -~~~ -+------------+--------------------+------------+------------+ -| Database | Schema | User | Privileges | -+------------+--------------------+------------+------------+ -| test_roles | crdb_internal | admin | ALL | -| test_roles | crdb_internal | root | ALL | -| test_roles | crdb_internal | system_ops | CREATE | -| test_roles | crdb_internal | system_ops | SELECT | -| test_roles | information_schema | admin | ALL | -| test_roles | information_schema | root | ALL | -| test_roles | information_schema | system_ops | CREATE | -| test_roles | information_schema | system_ops | SELECT | -| test_roles | pg_catalog | admin | ALL | -| test_roles | pg_catalog | root | ALL | -| test_roles | pg_catalog | system_ops | CREATE | -| test_roles | pg_catalog | system_ops | SELECT | -| test_roles | public | admin | ALL | -| test_roles | public | root | ALL | -| test_roles | public | system_ops | CREATE | -| test_roles | public | system_ops | SELECT | -+------------+--------------------+------------+------------+ -~~~ - -Now, add the `maxroach` user to the `system_ops` role: - -{% include copy-clipboard.html %} -~~~ sql -> GRANT system_ops TO maxroach; -~~~ - -To test the privileges you just added to the `system_ops` role, use `\q` or `ctrl-d` to exit the interactive shell, and then open the shell again as the `maxroach` user (who is a member of the `system_ops` role): - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --user=maxroach --database=test_roles --insecure -~~~ - -Create a table: - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE employees ( - id UUID DEFAULT uuid_v4()::UUID PRIMARY KEY, - profile JSONB - ); -~~~ - -You were able to create the table because `maxroach` has `CREATE` privileges. 
Now, try to drop the table: - -{% include copy-clipboard.html %} -~~~ sql -> DROP TABLE employees; -~~~ -~~~ -pq: user maxroach does not have DROP privilege on relation employees -~~~ - -You cannot drop the table because your current user (`maxroach`) is a member of the `system_ops` role, which doesn't have `DROP` privileges. - -`maxroach` has `CREATE` and `SELECT` privileges, so try a `SHOW` statement: - -{% include copy-clipboard.html %} -~~~ sql -> SHOW GRANTS ON TABLE employees; -~~~ -~~~ -+------------+--------+-----------+------------+------------+ -| Database | Schema | Table | User | Privileges | -+------------+--------+-----------+------------+------------+ -| test_roles | public | employees | admin | ALL | -| test_roles | public | employees | root | ALL | -| test_roles | public | employees | system_ops | CREATE | -| test_roles | public | employees | system_ops | SELECT | -+------------+--------+-----------+------------+------------+ -~~~ - -Let's switch back to the `root` user to test more of the SQL statements related to roles. Log out of the `maxroach` user by exiting the interactive shell. To exit the interactive shell, use `\q` or `ctrl-d`. - -Open `cockroach sql` as the `root` user: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure -~~~ - -Now that you're logged in as the `root` user, revoke privileges and then drop the `system_ops` role. - -{% include copy-clipboard.html %} -~~~ sql -> REVOKE ALL ON DATABASE test_roles FROM system_ops; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SHOW GRANTS ON DATABASE test_roles; -~~~ -~~~ -+------------+--------------------+-------+------------+ -| Database | Schema | User | Privileges | -+------------+--------------------+-------+------------+ -| test_roles | crdb_internal | admin | ALL | -| test_roles | crdb_internal | root | ALL | -| test_roles | information_schema | admin | ALL | -| test_roles | information_schema | root | ALL | -| test_roles | pg_catalog | admin | ALL | -| test_roles | pg_catalog | root | ALL | -| test_roles | public | admin | ALL | -| test_roles | public | root | ALL | -+------------+--------------------+-------+------------+ -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> REVOKE ALL ON TABLE test_roles.* FROM system_ops; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SHOW GRANTS ON TABLE test_roles.*; -~~~ -~~~ -+------------+--------+-----------+-------+------------+ -| Database | Schema | Table | User | Privileges | -+------------+--------+-----------+-------+------------+ -| test_roles | public | employees | admin | ALL | -| test_roles | public | employees | root | ALL | -+------------+--------+-----------+-------+------------+ -~~~ - -{{site.data.alerts.callout_info}}All of a role or user's privileges must be revoked before it can be dropped.{{site.data.alerts.end}} - -{% include copy-clipboard.html %} -~~~ sql -> DROP ROLE system_ops; -~~~ - -## See Also - -- [`CREATE ROLE`](create-role.html) -- [`DROP ROLE`](drop-role.html) -- [`SHOW ROLES`](show-roles.html) -- [`GRANT `](grant.html) -- [`GRANT ` (Enterprise)](grant-roles.html) -- [`REVOKE `](revoke.html) -- [`REVOKE ` (Enterprise)](revoke-roles.html) -- [`SHOW GRANTS`](show-grants.html) -- [Manage Users](create-and-manage-users.html) -- [Privileges](privileges.html) -- [Other Cockroach Commands](cockroach-commands.html) diff --git a/src/current/v2.0/rollback-transaction.md b/src/current/v2.0/rollback-transaction.md deleted file mode 100644 index fe650d36ca3..00000000000 --- 
a/src/current/v2.0/rollback-transaction.md +++ /dev/null @@ -1,77 +0,0 @@ ---- -title: ROLLBACK -summary: Abort the current transaction, discarding all updates made by statements included in the transaction with the ROLLBACK statement in CockroachDB. -toc: true ---- - -The `ROLLBACK` [statement](sql-statements.html) aborts the current [transaction](transactions.html), discarding all updates made by statements included in the transaction. - -When using [client-side transaction retries](transactions.html#client-side-transaction-retries), use `ROLLBACK TO SAVEPOINT cockroach_restart` to handle a transaction that needs to be retried (identified via the `40001` error code or `retry transaction` string in the error message), and then re-execute the statements you want the transaction to contain. - - -## Synopsis - -
      -{% include {{ page.version.version }}/sql/diagrams/rollback_transaction.html %} -
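-As a sketch of the retry pattern described below, the savepoint lets your application roll back and re-issue a transaction's statements on a `40001` / `retry transaction` error without abandoning the transaction (the `accounts` statement is borrowed from the example below; detecting the error happens in your client code): - -~~~ sql -> BEGIN; - -> SAVEPOINT cockroach_restart; - -> UPDATE accounts SET balance = 2500 WHERE name = 'Marciela'; - --- on a 40001 error: ROLLBACK TO SAVEPOINT cockroach_restart; then re-issue the statements - -> RELEASE SAVEPOINT cockroach_restart; - -> COMMIT; -~~~ -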
- -## Required Privileges - -No [privileges](privileges.html) are required to roll back a transaction. However, privileges are required for each statement within a transaction. - -## Parameters - -| Parameter | Description | -|-----------|-------------| -| `TO SAVEPOINT cockroach_restart` | If using [client-side transaction retries](transactions.html#client-side-transaction-retries), retry the transaction. You should execute this statement when a transaction returns a `40001` / `retry transaction` error. | - -## Example - -### Rollback a Transaction - -Typically, your application conditionally executes rollbacks, but you can see their behavior by using `ROLLBACK` instead of `COMMIT` directly through SQL. - -~~~ sql -> SELECT * FROM accounts; -~~~ -~~~ -+----------+---------+ -| name | balance | -+----------+---------+ -| Marciela | 1000 | -+----------+---------+ -~~~ -~~~ sql -> BEGIN; - -> UPDATE accounts SET balance = 2500 WHERE name = 'Marciela'; - -> ROLLBACK; - -> SELECT * FROM accounts; -~~~ -~~~ -+----------+---------+ -| name | balance | -+----------+---------+ -| Marciela | 1000 | -+----------+---------+ -~~~ - -### Retry a Transaction - -To use [client-side transaction retries](transactions.html#client-side-transaction-retries), your application must execute `ROLLBACK TO SAVEPOINT cockroach_restart` after detecting a `40001` / `retry transaction` error. - -~~~ sql -> ROLLBACK TO SAVEPOINT cockroach_restart; -~~~ - -For examples of retrying transactions in your application, check out the transaction code samples in our [Build an App with CockroachDB](build-an-app-with-cockroachdb.html) tutorials. - -## See Also - -- [Transactions](transactions.html) -- [`BEGIN`](begin-transaction.html) -- [`COMMIT`](commit-transaction.html) -- [`SAVEPOINT`](savepoint.html) -- [`RELEASE SAVEPOINT`](release-savepoint.html) diff --git a/src/current/v2.0/rotate-certificates.md b/src/current/v2.0/rotate-certificates.md deleted file mode 100644 index 4ca76e8b533..00000000000 --- a/src/current/v2.0/rotate-certificates.md +++ /dev/null @@ -1,152 +0,0 @@ ---- -title: Rotate Security Certificates -summary: Rotate the security certificates of a secure CockroachDB cluster by creating and reloading new certificates. -toc: true -toc_not_nested: true ---- - -New in v1.1: CockroachDB allows you to rotate security certificates without restarting nodes. - -{{site.data.alerts.callout_success}}For an introduction to how security certificates work in a secure CockroachDB cluster, see Create Security Certificates.{{site.data.alerts.end}} - - -## When to Rotate Certificates - -You may need to rotate the node, client, or CA certificates in the following scenarios: - -- The node, client, or CA certificates are expiring soon. -- Your organization's compliance policy requires periodic certificate rotation. -- The key (for a node, client, or CA) is compromised. -- You need to modify the contents of a certificate, for example, to add another DNS name or the IP address of a load balancer through which a node can be reached. In this case, you would need to rotate only the node certificates. - -## Rotate Client Certificates - -1. Create a new client certificate and key: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach cert create-client <username> \ - --certs-dir=certs \ - --ca-key=my-safe-directory/ca.key - ~~~ - -2. Upload the new client certificate and key to the client using your preferred method. - -3. Have the client use the new client certificate. 
- - This step is application-specific and may require restarting the client. - -## Rotate Node Certificates - -To rotate a node certificate, you create a new node certificate and key and reload them on the node. - -1. Create a new node certificate and key: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach cert create-node \ - <node hostname> \ - <node address> \ - <other node hostnames and addresses...> \ - --certs-dir=certs \ - --ca-key=my-safe-directory/ca.key \ - --overwrite - ~~~ - - Since you must create the new certificate and key in the same directory as the existing certificate and key, use the `--overwrite` flag to overwrite the existing files. Also, be sure to specify all addresses at which the node can be reached. - -2. Upload the node certificate and key to the node: - - {% include copy-clipboard.html %} - ~~~ shell - $ scp certs/node.crt \ - certs/node.key \ - <username>@<node address>:~/certs - ~~~ - -3. Reload the node certificate without restarting the node by issuing a `SIGHUP` signal to the `cockroach` process: - - {% include copy-clipboard.html %} - ~~~ shell - pkill -SIGHUP -x cockroach - ~~~ - - The `SIGHUP` signal must be sent by the same user running the process (e.g., run with `sudo` if the `cockroach` process is running under user `root`). - -4. Verify that certificate rotation was successful using the **Local Node Certificates** page in the Admin UI: `https://<node address>
:8080/#/reports/certificates/local`. - - Scroll to the node certificate details and confirm that the **Valid Until** field shows the new certificate expiration time. - -## Rotate the CA Certificate - -To rotate the CA certificate, you create a new CA key and a combined CA certificate that contains the new CA certificate followed by the old CA certificate, and then you reload the new combined CA certificate on the nodes and clients. Once all nodes and clients have the combined CA certificate, you then create new node and client certificates signed with the new CA certificate and reload those certificates on the nodes and clients as well. - -For more background, see [Why CockroachDB creates a combined CA certificate](rotate-certificates.html#why-cockroachdb-creates-a-combined-ca-certificate) and [Why rotate CA certificates in advance](rotate-certificates.html#why-rotate-ca-certificates-in-advance). - -1. Rename the existing CA key: - - {% include copy-clipboard.html %} - ~~~ shell - $ mv my-safe-directory/ca.key my-safe-directory/ca.old.key - ~~~ - -2. Create a new CA certificate and key, using the `--overwrite` flag to overwrite the old CA certificate: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach cert create-ca \ - --certs-dir=certs \ - --ca-key=my-safe-directory/ca.key \ - --overwrite - ~~~ - - This results in the [combined CA certificate](rotate-certificates.html#why-cockroachdb-creates-a-combined-ca-certificate), `ca.crt`, which contains the new certificate followed by the old certificate. - - {{site.data.alerts.callout_danger}}The CA key is never loaded automatically by cockroach commands, so it should be created in a separate directory, identified by the --ca-key flag.{{site.data.alerts.end}} - -3. Upload the new CA certificate to each node: - - {% include copy-clipboard.html %} - ~~~ shell - $ scp certs/ca.crt \ - <username>@<node address>:~/certs - ~~~ - -4. Upload the new CA certificate to each client using your preferred method. - -5. On each node, reload the CA certificate without restarting the node by issuing a `SIGHUP` signal to the `cockroach` process: - - {% include copy-clipboard.html %} - ~~~ shell - pkill -SIGHUP -x cockroach - ~~~ - - The `SIGHUP` signal must be sent by the same user running the process (e.g., run with `sudo` if the `cockroach` process is running under user `root`). - -6. Reload the CA certificate on each client. - - This step is application-specific and may require restarting the client. - -7. Verify that certificate rotation was successful using the **Local Node Certificates** page in the Admin UI: `https://<node address>
:8080/#/reports/certificates/local`. - - The details of the old as well as new CA certificates should be shown. Confirm that the **Valid Until** field of the new CA certificate shows the new certificate expiration time. - -8. Once you are confident that all nodes and clients have the new CA certificate, [rotate the node certificates](#rotate-node-certificates) and [rotate the client certificates](#rotate-client-certificates). - -### Why CockroachDB creates a combined CA certificate - -When the CA certificate is rotated, the nodes have the new CA certificate as soon as the certs directory is rescanned, and the clients pick up the new CA certificate when they are restarted. However, until the node and client certificates are rotated, the node and client certificates are still signed with the old CA certificate. Thus the nodes and clients are unable to verify each other's identity using the new CA certificate. - -To overcome this issue, CockroachDB takes advantage of the fact that multiple CA certificates can be active at the same time: when verifying the identity of another node or a client, a node or client can check against any of the CA certificates uploaded to it. So instead of creating only the new certificate when rotating the CA certificate, CockroachDB creates a combined CA certificate that contains the new CA certificate followed by the old CA certificate. As node and client certificates are rotated, the combined CA certificate is used to verify old as well as new node and client certificates. - -### Why rotate CA certificates in advance - -When you rotate node and client certificates after rotating the CA certificate, the node and client certificates are signed using the new CA certificate. The nodes use the new node and CA certificates as soon as the certs directory on the node is rescanned. However, the clients use the new CA and client certificates only when the clients are restarted. Thus node certificates signed by the new CA certificate are not accepted by clients that do not have the new CA certificate yet. To ensure all nodes and clients have the latest CA certificate, rotate CA certificates on a completely different schedule; ideally, months before changing the node and client certificates. - -## See Also - -- [Create Security Certificates](create-security-certificates.html) -- [Manual Deployment](manual-deployment.html) -- [Orchestrated Deployment](orchestration.html) -- [Local Deployment](secure-a-cluster.html) -- [Other Cockroach Commands](cockroach-commands.html) diff --git a/src/current/v2.0/savepoint.md b/src/current/v2.0/savepoint.md deleted file mode 100644 index 02d8ef33685..00000000000 --- a/src/current/v2.0/savepoint.md +++ /dev/null @@ -1,50 +0,0 @@ ---- -title: SAVEPOINT -summary: Identify your intent to retry aborted transactions with the SAVEPOINT cockroach_restart statement in CockroachDB. -toc: true ---- - -The `SAVEPOINT cockroach_restart` statement defines the intent to retry [transactions](transactions.html) using the CockroachDB-provided function for client-side transaction retries. For more information, see [Transaction Retries](transactions.html#transaction-retries). - -{{site.data.alerts.callout_danger}}CockroachDB’s SAVEPOINT implementation only supports the cockroach_restart savepoint and does not support all savepoint functionality, such as nested transactions.{{site.data.alerts.end}} - - -## Synopsis - -
      -{% include {{ page.version.version }}/sql/diagrams/savepoint.html %} -
- -## Required Privileges - -No [privileges](privileges.html) are required to create a savepoint. However, privileges are required for each statement within a transaction. - -## Example - -### Create Savepoint - -After you `BEGIN` the transaction, you must create the savepoint to indicate that, if the transaction contends with another transaction for resources and "loses", you intend to use [the function for client-side transaction retries](transactions.html#transaction-retries). - -~~~ sql -> BEGIN; - -> SAVEPOINT cockroach_restart; - -> UPDATE products SET inventory = 0 WHERE sku = '8675309'; - -> INSERT INTO orders (customer, sku, status) VALUES (1001, '8675309', 'new'); - -> RELEASE SAVEPOINT cockroach_restart; - -> COMMIT; -~~~ - -When using `SAVEPOINT`, your application must also include functions to execute retries with [`ROLLBACK TO SAVEPOINT cockroach_restart`](rollback-transaction.html#retry-a-transaction). - -## See Also - -- [Transactions](transactions.html) -- [`RELEASE SAVEPOINT`](release-savepoint.html) -- [`ROLLBACK`](rollback-transaction.html) -- [`BEGIN`](begin-transaction.html) -- [`COMMIT`](commit-transaction.html) diff --git a/src/current/v2.0/scalar-expressions.md b/src/current/v2.0/scalar-expressions.md deleted file mode 100644 index 62e91a33297..00000000000 --- a/src/current/v2.0/scalar-expressions.md +++ /dev/null @@ -1,775 +0,0 @@ ---- -title: Scalar Expressions -summary: Scalar expressions allow the computation of new values from basic parts. -toc: true -key: sql-expressions.html ---- - -Most SQL statements can contain *scalar expressions* that compute new -values from data. For example, in the query `SELECT ceil(price) FROM -items`, the expression `ceil(price)` computes the rounded-up value of -the values from the `price` column. - -Scalar expressions produce values suitable to store in a single table -cell (one column of one row). They can be contrasted with -[table expressions](table-expressions.html) and [selection queries](selection-queries.html), which produce results -structured as a table. - -The following sections provide details on each kind of scalar expression. - - -## Constants - -Constant expressions represent a simple value that doesn't change. -They are described further in section [SQL Constants](sql-constants.html). - -## Column References - -An expression in a query can refer to columns in the current data source in two ways: - -- Using the name of the column, e.g., `price` in `SELECT price FROM - items`. - - - If the name of a column is also a - [SQL keyword](keywords-and-identifiers.html#keywords), the name - must be appropriately quoted. For example: `SELECT "Default" FROM - configuration`. - - - If the name is ambiguous (e.g., when joining across multiple - tables), it is possible to disambiguate by prefixing the column - name by the table name. For example, `SELECT items.price FROM - items`. - -- Using the ordinal position of the column. For example, `SELECT @1 - FROM items` selects the first column in `items`. - - *This is a CockroachDB SQL extension.* - - {{site.data.alerts.callout_danger}} - Ordinal references should be used with care in production - code! During schema updates, column ordinal positions can change and - invalidate existing queries that use ordinal positions based on a - previous version of the schema. - {{site.data.alerts.end}} - -## Unary and Binary Operations - -An expression prefixed by a unary operator, or two expressions -separated by a binary operator, form a new expression. 
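-For example, combining column references and literals with unary and binary operators (a brief sketch over the `items` table used above): - -~~~ sql -> SELECT -price AS negated, price * 1.08 AS with_tax, price >= 10 AS expensive FROM items; -~~~ -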
-For a full list of CockroachDB operators, with details about their order of precedence and which data types are valid operands for each operator, see [Functions and Operators](functions-and-operators.html#operators). - -### Value Comparisons - -The standard operators `<` (less than), `>` (greater than), `<=` -(less than or equal to), `>=` (greater than or equal to), `=` -(equals), `<>` and `!=` (not equal to), `IS` (identical to), and `IS -NOT` (not identical to) can be applied to any pair of values from a -single data type, as well as some pairs of values from different data -types. - -See also [this section on which data types are valid operands -for each operator](functions-and-operators.html#operators). - -The following special rules apply: - -- `NULL` is always ordered smaller than every other value, even itself. -- `NULL` is never equal to anything via `=`, even `NULL`. To check - whether a value is `NULL`, use the `IS` operator or the conditional - expression `IFNULL(..)`. - -See also [NULLs and Ternary Logic](null-handling.html#nulls-and-ternary-logic). - -#### Typing rule - -All comparisons accept any combination of argument types and result in type `BOOL`. - -#### Comparison with NaN - -CockroachDB recognizes the special value `NaN` -([Not-a-Number](https://en.wikipedia.org/wiki/NaN)) for scalars of -type [`FLOAT`](float.html) or [`DECIMAL`](decimal.html). - -As per the [IEEE 754](https://en.wikipedia.org/wiki/IEEE_754) -standard, `NaN` is considered to be different from every other numeric -value in comparisons. - -There are two exceptions however, made for compatibility with PostgreSQL: - -- `NaN` is considered to be equal to itself in comparisons. IEEE 754 - specifies that `NaN` is different from itself. -- `NaN` is considered to be smaller than every other value, including - `-INFINITY`. IEEE 754 specifies that `NaN` does not order with any - other value, i.e., `x <= NaN` and `x >= NaN` are both false for every - value of `x` including infinities. - -These exceptions exist so that the value `NaN` can be used in `WHERE` -clauses and indexes. - -For example: - -~~~sql -> SELECT FLOAT 'NaN' < 1, 1 < FLOAT 'NaN', FLOAT 'NaN' < FLOAT 'NaN'; -+-----------------+-----------------+---------------------------+ -| FLOAT 'NaN' < 1 | 1 < FLOAT 'NaN' | FLOAT 'NaN' < FLOAT 'NaN' | -+-----------------+-----------------+---------------------------+ -| true | false | false | -+-----------------+-----------------+---------------------------+ -> SELECT FLOAT 'NaN' = FLOAT 'NaN' AS result; -+--------+ -| result | -+--------+ -| true | -+--------+ -> SELECT FLOAT 'NaN' < FLOAT '-INFINITY' AS result; -+--------+ -| result | -+--------+ -| true | -+--------+ -~~~ - -### Multi-Valued Comparisons - -Syntax: - -~~~ -<expr> <comparison> ANY <expr> -<expr> <comparison> SOME <expr> -<expr> <comparison> ALL <expr> -~~~ - -The value comparison operators `<`, `>`, `=`, `<=`, `>=`, `<>` and -`!=`, as well as the pattern matching operators `[NOT] LIKE` and -`[NOT] ILIKE`, can be applied to compare a single value on the left to -multiple values on the right. - -This is done by combining the operator with the keywords `ANY`/`SOME` or `ALL`. - -The right operand can be either an array, a tuple, or a [subquery](subqueries.html). - -The result of the comparison is true if and only if: - -- For `ANY`/`SOME`, the comparison of the left value is true for any - element on the right. -- For `ALL`, the comparison of the left value is true for every - element on the right. 
For example: - -~~~sql -> SELECT 12 = ANY (10, 12, 13); -- returns true -> SELECT 12 = ALL (10, 12, 13); -- returns false -> SELECT 1 = ANY ARRAY[2, 3, 1]; -- using an array -> SELECT 1 = ALL (SELECT * FROM rows); -- using a tuple generated by a subquery -~~~ - -#### Typing rule - -The comparison between the type on the left and the element type of -the right operand must be possible. - -### Set Membership - -Syntax: - -~~~ -<expr> IN <expr> -<expr> IN ( ... subquery ... ) - -<expr> NOT IN <expr> -<expr> NOT IN ( ... subquery ... ) -~~~ - -Returns `TRUE` if and only if the value of the left operand is part of -the result of evaluating the right operand. In the subquery form, any -[selection query](selection-queries.html) can be used. - -For example: - -~~~sql -> SELECT a IN (1, 2, 3) FROM sometable; -> SELECT a IN (SELECT * FROM allowedvalues) FROM sometable; -> SELECT ('x', 123) IN (SELECT * FROM rows); -~~~ - -{{site.data.alerts.callout_info}}See also Subqueries for more details and performance best practices.{{site.data.alerts.end}} - -#### Typing rule - -`IN` requires its right operand to be a homogeneous tuple type and its left operand -to match the tuple element type. The result has type `BOOL`. - -### String Pattern Matching - -Syntax: - -~~~ -<expr> LIKE <expr> -<expr> ILIKE <expr> -<expr> NOT LIKE <expr> -<expr> NOT ILIKE <expr> -~~~ - -Evaluates both expressions as strings, then tests whether the string on the left -matches the pattern given on the right. Returns `TRUE` if a match is found -or `FALSE` otherwise, or the inverted value for the `NOT` variants. - -Patterns can contain `_` to match any single -character, or `%` to match any sequence of zero or more characters. -`ILIKE` causes the match to be tested case-insensitively. - -For example: - -~~~sql -> SELECT 'monday' LIKE '%day' AS a, 'tuesday' LIKE 'tue_day' AS b, 'wednesday' ILIKE 'W%' AS c; -~~~ -~~~ -+------+------+------+ -| a | b | c | -+------+------+------+ -| true | true | true | -+------+------+------+ -~~~ - -#### Typing rule - -The operands must be either both `STRING` or both `BYTES`. The result has type `BOOL`. - -### String Matching Using POSIX Regular Expressions - -Syntax: - -~~~ -<expr> ~ <expr> -<expr> ~* <expr> -<expr> !~ <expr> -<expr> !~* <expr> -~~~ - -Evaluates both expressions as strings, then tests whether the string -on the left matches the pattern given on the right. Returns `TRUE` if -a match is found or `FALSE` otherwise, or the inverted value for the -`!` variants. - -The variants with an asterisk `*` use case-insensitive matching; -otherwise the matching is case-sensitive. - -The pattern is expressed using -[POSIX regular expression syntax](https://en.wikipedia.org/wiki/Regular_expression). Unlike -`LIKE` patterns, a regular expression is allowed to match anywhere -inside a string, not only at the beginning. - -For example: - -~~~sql -> SELECT 'monday' ~ 'onday' AS a, 'tuEsday' ~ 't[uU][eE]sday' AS b, 'wednesday' ~* 'W.*y' AS c; -~~~ -~~~ -+------+------+------+ -| a | b | c | -+------+------+------+ -| true | true | true | -+------+------+------+ -~~~ - -#### Typing rule - -The operands must be either both `STRING` or both `BYTES`. The result has type `BOOL`. - -### String Matching Using SQL Regular Expressions - -Syntax: - -~~~ -<expr> SIMILAR TO <expr> -<expr> NOT SIMILAR TO <expr> -~~~ - -Evaluates both expressions as strings, then tests whether the string on the left -matches the pattern given on the right. Returns `TRUE` if a match is found -or `FALSE` otherwise, or the inverted value for the `NOT` variant. - -The pattern is expressed using the SQL standard's definition of a regular expression. 
-This is a mix of SQL `LIKE` patterns and POSIX regular expressions: - -- `_` and `%` denote any character or any string, respectively. -- `.` matches specifically the period character, unlike in POSIX where it is a wildcard. -- Most of the other POSIX syntax applies as usual. -- The pattern matches the entire string (as in `LIKE`, unlike POSIX regular expressions). - -For example: - -~~~sql -> SELECT 'monday' SIMILAR TO '_onday' AS a, 'tuEsday' SIMILAR TO 't[uU][eE]sday' AS b, 'wednesday' SIMILAR TO 'w%y' AS c; -~~~ -~~~ -+------+------+------+ -| a | b | c | -+------+------+------+ -| true | true | true | -+------+------+------+ -~~~ - -#### Typing rule - -The operands must be either both `STRING` or both `BYTES`. The result has type `BOOL`. - -## Function Calls and SQL Special Forms - -General syntax: - -~~~ -<name> ( <arguments...> ) -~~~ - -A built-in function name followed by an opening parenthesis, followed -by a comma-separated list of expressions, followed by a closing -parenthesis. - -This applies the named function to the arguments between -parentheses. When the function's namespace is not prefixed, the -[name resolution rules](sql-name-resolution.html) determine which -function is called. - -See also [the separate section on supported built-in functions](functions-and-operators.html). - -In addition, the following SQL special forms are also supported: - -{% include {{ page.version.version }}/sql/function-special-forms.md %} - -#### Typing rule - -In general, a function call requires the arguments to be of the types -accepted by the function, and returns a value of the type determined -by the function. - -However, the typing of function calls is complicated by the fact that -SQL supports function overloading. [See our blog post for more details](https://www.cockroachlabs.com/blog/revisiting-sql-typing-in-cockroachdb/). - -## Subscripted Expressions - -It is possible to access one item in an array value using the `[` ... `]` operator. - -For example, if the name `a` refers to an array of 10 -values, `a[3]` will retrieve the 3rd value. The first value has index -1. - -If the index is less than or equal to 0, or larger than the size of the array, then -the result of the subscripted expression is `NULL`. - -#### Typing rule - -The subscripted expression must have an array type; the index expression -must have type `INT`. The result has the element type of the -subscripted expression. - -## Conditional Expressions - -Expressions can test a condition and, depending on whether -or which condition is satisfied, evaluate to one or more additional -operands. - -These expression formats share the following property: some of their -operands are only evaluated if a condition is true. This matters -especially when an operand would be invalid otherwise. For example, -`IF(a=0, 0, x/a)` returns 0 if `a` is 0, and `x/a` otherwise. - -### `IF` Expressions - -Syntax: - -~~~ -IF ( <cond>, <expr1>, <expr2> ) -~~~ - -Evaluates `<cond>`, then evaluates `<expr1>` if the condition is true, -or `<expr2>` otherwise. - -The expression corresponding to the case when the condition is false -is not evaluated. - -#### Typing rule - -The condition must have type `BOOL`, and the two remaining expressions -must have the same type. The result has the same type as the -expression that was evaluated. 
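-For example, echoing the division guard from the introduction above (a sketch over the hypothetical `sometable`): - -~~~ sql -> SELECT IF(a = 0, 0, x / a) AS safe_ratio FROM sometable; -~~~ -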
-
-### `NULLIF` Expressions
-
-Syntax:
-
-~~~
-NULLIF ( <expr1>, <expr2> )
-~~~
-
-Equivalent to: `IF ( <expr1> = <expr2>, NULL, <expr1> )`
-
-#### Typing rule
-
-Both operands must have the same type, which is also the type of the result.
-
-### `COALESCE` and `IFNULL` Expressions
-
-Syntax:
-
-~~~
-IFNULL ( <expr1>, <expr2> )
-COALESCE ( <expr1> [, <expr2> [, <expr3> ] ...] )
-~~~
-
-`COALESCE` evaluates the first expression first. If its value is not
-`NULL`, its value is returned directly. Otherwise, it returns the
-result of applying `COALESCE` on the remaining expressions. If all the
-expressions are `NULL`, `NULL` is returned.
-
-Arguments to the right of the first non-null argument are not evaluated.
-
-`IFNULL(a, b)` is equivalent to `COALESCE(a, b)`.
-
-#### Typing rule
-
-The operands must have the same type, which is also the type of the result.
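-
-For example, the following queries illustrate both forms with constant
-operands:
-
-~~~sql
-> SELECT COALESCE(NULL, NULL, 3, 4); -- returns 3
-> SELECT IFNULL(NULL, 5);            -- returns 5
-> SELECT NULLIF(2, 2);               -- returns NULL
-> SELECT NULLIF(2, 3);               -- returns 2
-~~~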
-
-## Logical operators
-
-The Boolean operators `AND`, `OR` and `NOT` are available.
-
-Syntax:
-
-~~~
-NOT <expr>
-<expr1> AND <expr2>
-<expr1> OR <expr2>
-~~~
-
-`AND` and `OR` are commutative. Moreover, the input to `AND`
-and `OR` is not evaluated in any particular order. Some operand may
-not even be evaluated at all if the result can be fully ascertained using
-only the other operand.
-
-{{site.data.alerts.callout_info}}This is different from the left-to-right "short-circuit logic" found in other programming languages. When it is essential to force evaluation order, use a conditional expression.{{site.data.alerts.end}}
-
-See also [NULLs and Ternary Logic](null-handling.html#nulls-and-ternary-logic).
-
-### Typing rule
-
-The operands must have type `BOOL`. The result has type `BOOL`.
-
-## Aggregate Expressions
-
-An aggregate expression has the same syntax as a function call, with a special
-case for `COUNT`:
-
-~~~
-<name> ( <expr> )
-COUNT ( * )
-~~~
-
-The difference between aggregate expressions and function calls is
-that the former use
-[aggregate functions](functions-and-operators.html#aggregate-functions)
-and can only appear in the list of rendered expressions in a
-[`SELECT` clause](select-clause.html).
-
-An aggregate expression computes a combined value, depending on
-which aggregate function is used, across all the rows currently
-selected.
-
-#### Typing rule
-
-[The operand and return types are determined like for regular function calls](#function-calls-and-sql-special-forms).
-
-## Window Function Calls
-
-A window function call has the syntax of a function call followed by an `OVER` clause:
-
-~~~
-<name> ( <expr> ) OVER <window>
-<name> ( * ) OVER <window>
-~~~
-
-It represents the application of a window or aggregate function over a
-subset ("window") of the rows selected by a query.
-
-#### Typing rule
-
-[The operand and return types are determined like for regular function calls](#function-calls-and-sql-special-forms).
-
-## Explicit Type Coercions
-
-Syntax:
-
-~~~
-<expr> :: <type>
-CAST ( <expr> AS <type> )
-~~~
-
-Evaluates the expression and converts the resulting value to the
-specified type. An error is reported if the conversion is invalid.
-
-For example: `CAST(now() AS DATE)`
-
-Note that in many cases a type annotation is preferable to a type
-coercion. See the section on
-[type annotations](#explicitly-typed-expressions) below for more
-details.
-
-#### Typing rule
-
-The operand can have any type.
-The result has the type specified in the `CAST` expression.
-
-As a special case, if the operand is a literal, a constant expression
-or a placeholder, the `CAST` type is used to guide the typing of the
-operand. [See our blog post for more details](https://www.cockroachlabs.com/blog/revisiting-sql-typing-in-cockroachdb/).
-
-## Collation Expressions
-
-Syntax:
-
-~~~
-<expr> COLLATE <collation>
-~~~
-
-Evaluates the expression and converts its result to a collated string
-with the specified collation.
-
-For example: `'a' COLLATE de`
-
-#### Typing rule
-
-The operand must have type `STRING`. The result has type `COLLATEDSTRING`.
-
-## Array Constructors
-
-Syntax:
-
-~~~
-ARRAY[ <expr>, <expr>, ... ]
-~~~
-
-Evaluates to an array containing the specified values.
-
-For example:
-
-~~~sql
-> SELECT ARRAY[1,2,3] AS a;
-~~~
-~~~
-+---------+
-|    a    |
-+---------+
-| {1,2,3} |
-+---------+
-~~~
-
-The data type of the array is inferred from the values of the provided
-expressions. All the positions in the array must have the same data type.
-
-If there are no expressions specified (empty array), or
-all the values are `NULL`, then the type of the array must be
-specified explicitly using a type annotation. For example:
-
-~~~sql
-> SELECT ARRAY[]:::int[];
-~~~
-
-{{site.data.alerts.callout_info}}To convert the results of a subquery to an array, use ARRAY(...) instead.{{site.data.alerts.end}}
-
-#### Typing rule
-
-The operands must all have the same type.
-The result has the array type with the operand type as element type.
-
-## Tuple Constructor
-
-Syntax:
-
-~~~
-(<expr>, <expr>, ...)
-ROW (<expr>, <expr>, ...)
-~~~
-
-Evaluates to a tuple containing the values of the provided expressions.
-
-For example:
-
-~~~sql
-> SELECT ('x', 123, 12.3) AS a;
-~~~
-~~~
-+----------------+
-|       a        |
-+----------------+
-| ('x',123,12.3) |
-+----------------+
-~~~
-
-The data type of the resulting tuple is inferred from the values.
-Each position in a tuple can have a distinct data type.
-
-#### Typing rule
-
-The operands can have any type.
-The result has a tuple type whose item types are the types of the operands.
-
-## Explicitly Typed Expressions
-
-Syntax:
-
-~~~
-<expr> ::: <type>
-ANNOTATE_TYPE( <expr>, <type> )
-~~~
-
-Evaluates to the given expression, requiring the expression to have
-the given type. If the expression doesn't have the given type, an
-error is returned.
-
-Type annotations are especially useful to guide the arithmetic on
-numeric values.
For example: - -~~~sql -> SELECT (1 / 0):::FLOAT; --> +Inf -> SELECT (1 / 0); --> error "division by zero" -> SELECT (1 / 0)::FLOAT; --> error "division by zero" -~~~ - -Type annotations are also different from cast expressions (see above) in -that they do not cause the value to be converted. For example, -`now()::DATE` converts the current timestamp to a date value (and -discards the current time), whereas `now():::DATE` triggers an error -message (that `now()` does not have type `DATE`). - -Check our blog for -[more information about context-dependent typing](https://www.cockroachlabs.com/blog/revisiting-sql-typing-in-cockroachdb/). - -#### Typing rule - -The operand must be implicitly coercible to the given type. -The result has the given type. - -## Subquery Expressions - -### Scalar Subqueries - -Syntax: - -~~~ -( ... subquery ... ) -~~~ - -Evaluates the subquery, asserts that it returns a single row and single column, -and then evaluates to the value of that single cell. Any [selection query](selection-queries.html) -can be used as subquery. - -For example: - -~~~sql -> SELECT (SELECT COUNT(*) FROM users) > (SELECT COUNT(*) FROM admins); -~~~ - -returns `TRUE` if there are more rows in table `users` than in table -`admins`. - -{{site.data.alerts.callout_info}}See also Subqueries for more details and performance best practices.{{site.data.alerts.end}} - -#### Typing rule - -The operand must have a table type with only one column. -The result has the type of that single column. - -### Existence Test on the Result of Subqueries - -Syntax: - -~~~ -EXISTS ( ... subquery ... ) -NOT EXISTS ( ... subquery ... ) -~~~ - -Evaluates the subquery and then returns `TRUE` or `FALSE` depending on -whether the subquery returned any row (for `EXISTS`) or didn't return -any row (for `NOT EXISTS`). Any [selection query](selection-queries.html) -can be used as subquery. - -{{site.data.alerts.callout_info}}See also Subqueries for more details and performance best practices.{{site.data.alerts.end}} - -#### Typing rule - -The operand can have any table type. The result has type `BOOL`. - -### Conversion of Subquery Results to an Array - -Syntax: - -~~~ -ARRAY( ... subquery ... ) -~~~ - -Evaluates the subquery and converts its results to an array. Any -[selection query](selection-queries.html) can be used as subquery. - -{{site.data.alerts.callout_info}}See also Subqueries for more details and performance best practices.{{site.data.alerts.end}} - -{{site.data.alerts.callout_info}}To convert a list of scalar expressions to an array, use ARRAY[...] instead.{{site.data.alerts.end}} - -## See Also - -- [Constants](sql-constants.html) -- [Selection Queries](selection-queries.html) -- [Table Expressions](table-expressions.html) -- [Data Types](data-types.html) -- [Functions and Operators](functions-and-operators.html) -- [Subqueries](subqueries.html) diff --git a/src/current/v2.0/secure-a-cluster.md b/src/current/v2.0/secure-a-cluster.md deleted file mode 100644 index a0b4bf3e96e..00000000000 --- a/src/current/v2.0/secure-a-cluster.md +++ /dev/null @@ -1,342 +0,0 @@ ---- -title: Start a Local Cluster (Secure) -summary: Run a secure multi-node CockroachDB cluster locally, using TLS certificates to encrypt network communication. -toc: true ---- - - - -Once you’ve [installed CockroachDB](install-cockroachdb.html), it’s simple to start a secure multi-node cluster locally, using [TLS certificates](create-security-certificates.html) to encrypt network communication. 
-
-{{site.data.alerts.callout_info}}Running multiple nodes on a single host is useful for testing out CockroachDB, but it's not recommended for production deployments. To run a physically distributed cluster in production, see Manual Deployment or Orchestrated Deployment.{{site.data.alerts.end}}
-
-
-## Before You Begin
-
-Make sure you have already [installed CockroachDB](install-cockroachdb.html).
-
-## Step 1. Create security certificates
-
-You can use either `cockroach cert` commands or [`openssl` commands](create-security-certificates-openssl.html) to generate security certificates. This section features the `cockroach cert` commands.
-
-{% include copy-clipboard.html %}
-~~~ shell
-# Create a certs directory and safe directory for the CA key.
-# If using the default certificate directory (`${HOME}/.cockroach-certs`), make sure it is empty.
-$ mkdir certs
-~~~
-
-{% include copy-clipboard.html %}
-~~~
-$ mkdir my-safe-directory
-~~~
-
-{% include copy-clipboard.html %}
-~~~ shell
-# Create the CA key pair:
-$ cockroach cert create-ca \
---certs-dir=certs \
---ca-key=my-safe-directory/ca.key
-~~~
-
-{% include copy-clipboard.html %}
-~~~ shell
-# Create a client key pair for the root user:
-$ cockroach cert create-client \
-root \
---certs-dir=certs \
---ca-key=my-safe-directory/ca.key
-~~~
-
-{% include copy-clipboard.html %}
-~~~ shell
-# Create a key pair for the nodes:
-$ cockroach cert create-node \
-localhost \
-$(hostname) \
---certs-dir=certs \
---ca-key=my-safe-directory/ca.key
-~~~
-
-- The first command makes a new directory for the certificates.
-- The second command creates the Certificate Authority (CA) certificate and key: `ca.crt` and `ca.key`.
-- The third command creates the client certificate and key, in this case for the `root` user: `client.root.crt` and `client.root.key`. These files will be used to secure communication between the built-in SQL shell and the cluster (see step 4).
-- The fourth command creates the node certificate and key: `node.crt` and `node.key`. These files will be used to secure communication between nodes. Typically, you would generate these separately for each node since each node has unique addresses; in this case, however, since all nodes will be running locally, you need to generate only one node certificate and key.
-
-## Step 2. Start the first node
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach start \
---certs-dir=certs \
---host=localhost \
---http-host=localhost
-~~~
-
-~~~
-CockroachDB node starting at {{ now | date: "%Y-%m-%d %H:%M:%S.%6 +0000 UTC" }}
-build: CCL {{page.release_info.version}} @ {{page.release_info.build_time}}
-admin: https://ROACHs-MBP:8080
-sql: postgresql://root@ROACHs-MBP:26257?sslcert=%2FUsers%2F...
-logs: cockroach-data/logs
-store[0]: path=cockroach-data
-status: restarted pre-existing node
-clusterID: {dab8130a-d20b-4753-85ba-14d8956a294c}
-nodeID: 1
-~~~
-
-This command starts a node in secure mode, accepting most [`cockroach start`](start-a-node.html) defaults.
-
-- The `--certs-dir` directory points to the directory holding certificates and keys.
-- Since this is a purely local cluster, `--host=localhost` tells the node to listen only on `localhost`, with default ports used for internal and client traffic (`26257`) and for HTTP requests from the Admin UI (`8080`).
-- The Admin UI defaults to listening on all interfaces. The `--http-host` flag is therefore used to restrict Admin UI access to the specified interface, in this case, `localhost`.
-- Node data is stored in the `cockroach-data` directory. -- The [standard output](start-a-node.html#standard-output) gives you helpful details such as the CockroachDB version, the URL for the admin UI, and the SQL URL for clients. - -## Step 3. Add nodes to the cluster - -At this point, your cluster is live and operational. With just one node, you can already connect a SQL client and start building out your database. In real deployments, however, you'll always want 3 or more nodes to take advantage of CockroachDB's [automatic replication](demo-data-replication.html), [rebalancing](demo-automatic-rebalancing.html), and [fault tolerance](demo-fault-tolerance-and-recovery.html) capabilities. This step helps you simulate a real deployment locally. - -In a new terminal, add the second node: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach start \ ---certs-dir=certs \ ---store=node2 \ ---host=localhost \ ---port=26258 \ ---http-port=8081 \ ---http-host=localhost \ ---join=localhost:26257 -~~~ - -In a new terminal, add the third node: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach start \ ---certs-dir=certs \ ---store=node3 \ ---host=localhost \ ---port=26259 \ ---http-port=8082 \ ---http-host=localhost \ ---join=localhost:26257 -~~~ - -The main difference in these commands is that you use the `--join` flag to connect the new nodes to the cluster, specifying the address and port of the first node, in this case `localhost:26257`. Since you're running all nodes on the same machine, you also set the `--store`, `--port`, and `--http-port` flags to locations and ports not used by other nodes, but in a real deployment, with each node on a different machine, the defaults would suffice. - -## Step 4. Test the cluster - -Now that you've scaled to 3 nodes, you can use any node as a SQL gateway to the cluster. To demonstrate this, open a new terminal and connect the [built-in SQL client](use-the-built-in-sql-client.html) to node 1: - -{{site.data.alerts.callout_info}}The SQL client is built into the cockroach binary, so nothing extra is needed.{{site.data.alerts.end}} - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql \ ---certs-dir=certs -# Welcome to the cockroach SQL interface. -# All statements must be terminated by a semicolon. -# To exit: CTRL + D. -~~~ - -Run some basic [CockroachDB SQL statements](learn-cockroachdb-sql.html): - -{% include copy-clipboard.html %} -~~~ sql -> CREATE DATABASE bank; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE bank.accounts (id INT PRIMARY KEY, balance DECIMAL); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO bank.accounts VALUES (1, 1000.50); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM bank.accounts; -~~~ - -~~~ -+----+---------+ -| id | balance | -+----+---------+ -| 1 | 1000.5 | -+----+---------+ -(1 row) -~~~ - -Exit the SQL shell on node 1: - -{% include copy-clipboard.html %} -~~~ sql -> \q -~~~ - -Then connect the SQL shell to node 2, this time specifying the node's non-default port: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql \ ---certs-dir=certs \ ---port=26258 -# Welcome to the cockroach SQL interface. -# All statements must be terminated by a semicolon. -# To exit: CTRL + D. 
-~~~
-
-{{site.data.alerts.callout_info}}In a real deployment, all nodes would likely use the default port 26257, and so you wouldn't need to set the --port flag.{{site.data.alerts.end}}
-
-Now run the same `SELECT` query:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM bank.accounts;
-~~~
-
-~~~
-+----+---------+
-| id | balance |
-+----+---------+
-|  1 |  1000.5 |
-+----+---------+
-(1 row)
-~~~
-
-As you can see, node 1 and node 2 behaved identically as SQL gateways.
-
-Exit the SQL shell on node 2:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> \q
-~~~
-
-## Step 5. Monitor the cluster
-
-Access the [Admin UI](admin-ui-overview.html) for your cluster by pointing a browser to `https://localhost:8080`, or to the address in the `admin` field in the standard output of any node on startup. Then click **Metrics** on the left-hand navigation bar.
-
-Note that your browser will consider the CockroachDB-created certificate invalid; you'll need to click through a warning message to get to the UI.
-
-CockroachDB Admin UI
-
-As mentioned earlier, CockroachDB automatically replicates your data behind the scenes. To verify that data written in the previous step was replicated successfully, scroll down to the **Replicas per Node** graph and hover over the line:
-
-CockroachDB Admin UI
-
-The replica count on each node is identical, indicating that all data in the cluster was replicated 3 times (the default).
-
-{{site.data.alerts.callout_info}}Capacity metrics can be incorrect when running multiple nodes on a single machine. For more details, see this limitation. {{site.data.alerts.end}}
-
-{{site.data.alerts.callout_success}}For more insight into how CockroachDB automatically replicates and rebalances data, and tolerates and recovers from failures, see our replication, rebalancing, fault tolerance demos.{{site.data.alerts.end}}
-
-## Step 6. Stop the cluster
-
-Once you're done with your test cluster, switch to the terminal running the first node and press **CTRL-C** to stop the node.
-
-At this point, with 2 nodes still online, the cluster remains operational because a majority of replicas are available. To verify that the cluster has tolerated this "failure", connect the built-in SQL shell to node 2 or 3. You can do this in the same terminal or in a new terminal.
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql \
---certs-dir=certs \
---port=26258
-# Welcome to the cockroach SQL interface.
-# All statements must be terminated by a semicolon.
-# To exit: CTRL + D.
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM bank.accounts;
-~~~
-
-~~~
-+----+---------+
-| id | balance |
-+----+---------+
-|  1 |  1000.5 |
-+----+---------+
-(1 row)
-~~~
-
-Exit the SQL shell:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> \q
-~~~
-
-Now stop nodes 2 and 3 by switching to their terminals and pressing **CTRL-C**.
-
-{{site.data.alerts.callout_success}}For node 3, the shutdown process will take longer (about a minute) and will eventually force stop the node. This is because, with only 1 of 3 nodes left, a majority of replicas are not available, and so the cluster is no longer operational. To speed up the process, press CTRL-C a second time.{{site.data.alerts.end}}
-
-If you do not plan to restart the cluster, you may want to remove the nodes' data stores:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ rm -rf cockroach-data node2 node3
-~~~
-
-## Step 7. 
Restart the cluster - -If you decide to use the cluster for further testing, you'll need to restart at least 2 of your 3 nodes from the directories containing the nodes' data stores. - -Restart the first node from the parent directory of `cockroach-data/`: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach start \ ---certs-dir=certs \ ---host=localhost \ ---http-host=localhost -~~~ - -{{site.data.alerts.callout_info}}With only 1 node back online, the cluster will not yet be operational, so you will not see a response to the above command until after you restart the second node. -{{site.data.alerts.end}} - -In a new terminal, restart the second node from the parent directory of `node2/`: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach start \ ---certs-dir=certs \ ---store=node2 \ ---host=localhost \ ---port=26258 \ ---http-port=8081 \ ---http-host=localhost \ ---join=localhost:26257 -~~~ - -In a new terminal, restart the third node from the parent directory of `node3/`: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach start \ ---certs-dir=certs \ ---store=node3 \ ---host=localhost \ ---port=26259 \ ---http-port=8082 \ ---http-host=localhost \ ---join=localhost:26257 -~~~ - -## What's Next? - -- Learn more about [CockroachDB SQL](learn-cockroachdb-sql.html) and the [built-in SQL client](use-the-built-in-sql-client.html) -- [Install the client driver](install-client-drivers.html) for your preferred language -- Learn how to use [Client Connection Parameters](connection-parameters.html) to connect your app to your secure cluster -- [Build an app with CockroachDB](build-an-app-with-cockroachdb.html) -- [Explore core CockroachDB features](demo-data-replication.html) like automatic replication, rebalancing, and fault tolerance diff --git a/src/current/v2.0/select-clause.md b/src/current/v2.0/select-clause.md deleted file mode 100644 index 7727254dad6..00000000000 --- a/src/current/v2.0/select-clause.md +++ /dev/null @@ -1,442 +0,0 @@ ---- -title: Simple SELECT Clause -summary: The Simple SELECT clause loads or computes data from various sources. -toc: true -key: select.html ---- - -The simple `SELECT` clause is the main SQL syntax to read and process -existing data. - -When used as a stand-alone statement, the simple `SELECT` clause is -also called "the `SELECT` statement". However, it is also a -[selection clause](selection-queries.html#selection-clauses) that can be combined -with other constructs to form more complex [selection queries](selection-queries.html). - - -## Synopsis - -
      -{% include {{ page.version.version }}/sql/diagrams/simple_select_clause.html %} -
      - -
-
-{{site.data.alerts.callout_success}}The simple SELECT clause also has other applications not covered here, such as executing functions like SELECT current_timestamp();.{{site.data.alerts.end}}
-
-## Required Privileges
-
-The user must have the `SELECT` [privilege](privileges.html) on the tables used as operands.
-
-## Parameters
-
-Parameter | Description
-----------|-------------
-`DISTINCT` or `ALL` | See [Eliminate Duplicate Rows](#eliminate-duplicate-rows).
-`DISTINCT ON ( a_expr [, ...] )` | `DISTINCT ON` followed by a list of [scalar expressions](scalar-expressions.html) within parentheses. See [Eliminate Duplicate Rows](#eliminate-duplicate-rows).
-`target_elem` | A [scalar expression](scalar-expressions.html) to compute a column in each result row, or `*` to automatically retrieve all columns from the `FROM` clause.<br><br>If `target_elem` contains an [aggregate function](functions-and-operators.html#aggregate-functions), a `GROUP BY` clause can be used to further control the aggregation.
-`table_ref` | The [table expression](table-expressions.html) you want to retrieve data from.<br><br>Using two or more table expressions in the `FROM` sub-clause, separated with a comma, is equivalent to a [`CROSS JOIN`](joins.html) expression.
-`AS OF SYSTEM TIME timestamp` | Retrieve data as it existed [as of `timestamp`](as-of-system-time.html).<br><br>**Note**: Because `AS OF SYSTEM TIME` returns historical data, your reads might be stale.
-`WHERE a_expr` | Only retrieve rows that return `TRUE` for `a_expr`, which must be a [scalar expression](scalar-expressions.html) that returns Boolean values using columns (e.g., `<column> = <value>`).
-`GROUP BY a_expr` | When using [aggregate functions](functions-and-operators.html#aggregate-functions) in `target_elem` or `HAVING`, list the column groupings after `GROUP BY`.
-`HAVING a_expr` | Only retrieve aggregate function groups that return `TRUE` for `a_expr`, which must be a [scalar expression](scalar-expressions.html) that returns Boolean values using an aggregate function (e.g., `<aggregate function> = <value>`).<br><br>`HAVING` works like the `WHERE` clause, but for aggregate functions.
-`WINDOW window_definition_list` | A list of [window function definitions](window-functions.html).
-
-## Eliminate Duplicate Rows
-
-The `DISTINCT` subclause specifies that duplicate rows should be removed.
-
-By default, or when `ALL` is specified, `SELECT` returns all the rows
-selected, without removing duplicates. When `DISTINCT` is specified,
-duplicate rows are eliminated.
-
-Without `ON`, two rows are considered duplicates if they are equal on
-all the results computed by `SELECT`.
-
-With `ON`, two rows are considered duplicates if they are equal only
-using the [scalar expressions](scalar-expressions.html) listed with `ON`. When two rows are considered duplicates according to `DISTINCT ON`, the values from the first `FROM` row in the order specified by [`ORDER BY`](query-order.html) are used to compute the remaining target expressions. If `ORDER BY` is not specified, CockroachDB will pick any one of the duplicate rows as the first row, non-deterministically.
-
-## Examples
-
-### Choose Columns
-
-#### Retrieve Specific Columns
-
-Retrieve specific columns by naming them in a comma-separated list:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT id, name, balance
-FROM accounts;
-~~~
-~~~
-+----+-----------------------+---------+
-| id |         name          | balance |
-+----+-----------------------+---------+
-|  1 | Bjorn Fairclough      |    1200 |
-|  2 | Bjorn Fairclough      |    2500 |
-|  3 | Arturo Nevin          |     250 |
-[ truncated ]
-+----+-----------------------+---------+
-~~~
-
-#### Retrieve All Columns
-
-Retrieve all columns by using `*`:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT *
-FROM accounts;
-~~~
-~~~
-+----+-----------------------+---------+----------+--------------+
-| id |         name          | balance |   type   | state_opened |
-+----+-----------------------+---------+----------+--------------+
-|  1 | Bjorn Fairclough      |    1200 | checking | AL           |
-|  2 | Bjorn Fairclough      |    2500 | savings  | AL           |
-|  3 | Arturo Nevin          |     250 | checking | AK           |
-[ truncated ]
-+----+-----------------------+---------+----------+--------------+
-~~~
-
-### Filter Rows
-
-#### Filter on a Single Condition
-
-Filter rows with expressions that use columns and return Boolean values in the `WHERE` clause:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT name, balance
-FROM accounts
-WHERE balance < 300;
-~~~
-~~~
-+------------------+---------+
-|       name       | balance |
-+------------------+---------+
-| Arturo Nevin     |     250 |
-| Akbar Jinks      |     250 |
-| Andrea Maas      |     250 |
-+------------------+---------+
-~~~
-
-#### Filter on Multiple Conditions
-
-To use multiple `WHERE` filters, join them with `AND` or `OR`.
You can also create negative filters with `NOT`:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT *
-FROM accounts
-WHERE balance > 2500 AND NOT type = 'checking';
-~~~
-~~~
-+----+-------------------+---------+---------+--------------+
-| id |       name        | balance |  type   | state_opened |
-+----+-------------------+---------+---------+--------------+
-|  4 | Tullia Romijnders |    3000 | savings | AK           |
-| 62 | Ruarc Mathews     |    3000 | savings | OK           |
-+----+-------------------+---------+---------+--------------+
-~~~
-
-#### Select Distinct Rows
-
-Columns without the [Primary Key](primary-key.html) or [Unique](unique.html) constraints can have multiple instances of the same value:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT name
-FROM accounts
-WHERE state_opened = 'VT';
-~~~
-~~~
-+----------------+
-|      name      |
-+----------------+
-| Sibylla Malone |
-| Sibylla Malone |
-+----------------+
-~~~
-
-Using `DISTINCT`, you can remove all but one instance of duplicate values from your retrieved data:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT DISTINCT name
-FROM accounts
-WHERE state_opened = 'VT';
-~~~
-~~~
-+----------------+
-|      name      |
-+----------------+
-| Sibylla Malone |
-+----------------+
-~~~
-
-#### Filter Values with a List
-
-Using `WHERE <column> IN (<comma separated list of values>)` performs an `OR` search for listed values in the specified column:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT name, balance, state_opened
-FROM accounts
-WHERE state_opened IN ('AZ', 'NY', 'WA');
-~~~
-~~~
-+-----------------+---------+--------------+
-|      name       | balance | state_opened |
-+-----------------+---------+--------------+
-| Naseem Joossens |     300 | AZ           |
-| Aygün Sanna     |     900 | NY           |
-| Carola Dahl     |     800 | NY           |
-| Edna Barath     |     750 | WA           |
-| Edna Barath     |    2200 | WA           |
-+-----------------+---------+--------------+
-~~~
-
-### Rename Columns in Output
-
-Instead of outputting a column's name in the retrieved table, you can change its label using `AS`:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT name AS NY_accounts, balance
-FROM accounts
-WHERE state_opened = 'NY';
-~~~
-~~~
-+-------------+---------+
-| NY_accounts | balance |
-+-------------+---------+
-| Aygün Sanna |     900 |
-| Carola Dahl |     800 |
-+-------------+---------+
-~~~
-
-This *does not* change the name of the column in the table. To do that, use [`RENAME COLUMN`](rename-column.html).
-
-### Search for String Values
-
-Search for partial [string](string.html) matches in columns using `LIKE`, which supports the following wildcard operators:
-
-- `%` matches 0 or more characters.
-- `_` matches exactly 1 character.
-
-For example:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT id, name, type
-FROM accounts
-WHERE name LIKE 'Anni%';
-~~~
-~~~
-+----+----------------+----------+
-| id |      name      |   type   |
-+----+----------------+----------+
-| 58 | Annibale Karga | checking |
-| 59 | Annibale Karga | savings  |
-+----+----------------+----------+
-~~~
-
-### Aggregate Functions
-
-[Aggregate functions](functions-and-operators.html#aggregate-functions) perform calculations on retrieved rows.
-
-#### Perform Aggregate Function on Entire Column
-
-By using an aggregate function as a `target_elem`, you can perform the calculation on the entire column.
-
-~~~sql
-> SELECT MIN(balance)
-FROM accounts;
-~~~
-~~~
-+--------------+
-| MIN(balance) |
-+--------------+
-|          250 |
-+--------------+
-~~~
-
-You can also use the retrieved value as part of an expression.
For example, you can use the result in the `WHERE` clause to select additional rows that were not part of the aggregate function itself:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT id, name, balance
-FROM accounts
-WHERE balance = (
-      SELECT
-      MIN(balance)
-      FROM accounts
-);
-~~~
-~~~
-+----+------------------+---------+
-| id |       name       | balance |
-+----+------------------+---------+
-|  3 | Arturo Nevin     |     250 |
-| 10 | Henrik Brankovic |     250 |
-| 26 | Odalys Ziemniak  |     250 |
-| 35 | Vayu Soun        |     250 |
-+----+------------------+---------+
-~~~
-
-#### Perform Aggregate Function on Retrieved Rows
-
-By filtering the statement, you can perform the calculation only on retrieved rows:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT SUM(balance)
-FROM accounts
-WHERE state_opened IN ('AZ', 'NY', 'WA');
-~~~
-~~~
-+--------------+
-| SUM(balance) |
-+--------------+
-|         4950 |
-+--------------+
-~~~
-
-#### Filter Columns Fed into Aggregate Functions
-
-You can use `FILTER (WHERE <Boolean expression>)` in the `target_elem` to filter which rows are processed by an aggregate function; those that return `FALSE` or `NULL` for the `FILTER` clause's Boolean expression are not fed into the aggregate function:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT count(*) AS unfiltered, count(*) FILTER (WHERE balance > 1500) AS filtered FROM accounts;
-~~~
-~~~
-+------------+----------+
-| unfiltered | filtered |
-+------------+----------+
-|         84 |       14 |
-+------------+----------+
-~~~
-
-#### Create Aggregate Groups
-
-Instead of performing an aggregate function on the entire set of retrieved rows, you can split the rows into groups and then perform the aggregate function on each of them.
-
-When creating aggregate groups, each column used as a `target_elem` must be included in `GROUP BY`.
-
-For example:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT state_opened AS state, SUM(balance) AS state_balance
-FROM accounts
-WHERE state_opened IN ('AZ', 'NY', 'WA')
-GROUP BY state_opened;
-~~~
-~~~
-+-------+---------------+
-| state | state_balance |
-+-------+---------------+
-| AZ    |           300 |
-| NY    |          1700 |
-| WA    |          2950 |
-+-------+---------------+
-~~~
-
-#### Filter Aggregate Groups
-
-To filter aggregate groups, use `HAVING`, which is the equivalent of the `WHERE` clause for aggregate groups; its expression must evaluate to a Boolean value.
-
-For example:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT state_opened, AVG(balance) as avg
-FROM accounts
-GROUP BY state_opened
-HAVING AVG(balance) BETWEEN 1700 AND 50000;
-~~~
-~~~
-+--------------+---------+
-| state_opened |   avg   |
-+--------------+---------+
-| AR           | 3700.00 |
-| UT           | 1750.00 |
-| OH           | 2500.00 |
-| AL           | 1850.00 |
-+--------------+---------+
-~~~
-
-#### Use Aggregate Functions in Having Clause
-
-Aggregate functions can also be used in the `HAVING` clause without needing to be included as a `target_elem`.
-
-For example:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT name, state_opened
-FROM accounts
-WHERE state_opened in ('LA', 'MO')
-GROUP BY name, state_opened
-HAVING COUNT(name) > 1;
-~~~
-~~~
-+----------------+--------------+
-|      name      | state_opened |
-+----------------+--------------+
-| Yehoshua Kleid | MO           |
-+----------------+--------------+
-~~~
-
-
-### Select Historical Data (Time-Travel)
-
-CockroachDB lets you find data as it was stored at a given point in
-time using `AS OF SYSTEM TIME` with various [supported
-formats](as-of-system-time.html). This can also be advantageous for
-performance. For more details, see [`AS OF SYSTEM
-TIME`](as-of-system-time.html).
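-
-For example, the following sketch reads the `accounts` table as of a fixed
-timestamp in the past (the literal shown is illustrative; see the linked page
-for the supported timestamp formats):
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT id, name, balance
-FROM accounts
-AS OF SYSTEM TIME '2018-01-01 00:00:00';
-~~~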
-
-## Advanced Uses of `SELECT` Clauses
-
-CockroachDB supports numerous ways to combine results from `SELECT`
-clauses together.
-
-See [Selection Queries](selection-queries.html) for
-details. A few examples follow.
-
-### Sorting and Limiting Query Results
-
-To order the results of a `SELECT` clause or limit the number of rows
-in the result, you can combine it with `ORDER BY` or `LIMIT` /
-`OFFSET` to form a [selection query](selection-queries.html) or
-[subquery](table-expressions.html#subqueries-as-table-expressions).
-
-See [Ordering Query Results](query-order.html) and [Limiting Query
-Results](limit-offset.html) for more details.
-
-{{site.data.alerts.callout_info}}When ORDER BY is not included in a query, rows are not sorted by any consistent criteria. Instead, CockroachDB returns them as the coordinating node receives them.<br><br>Also, CockroachDB sorts NULL values first with ASC and last with DESC. This differs from PostgreSQL, which sorts NULL values last with ASC and first with DESC.{{site.data.alerts.end}}
-
-### Combining Results From Multiple Queries
-
-Results from two or more queries can be combined together as follows:
-
-- Using [join expressions](joins.html) to combine rows according to conditions on specific columns.
-- Using [set operations](selection-queries.html#set-operations) to combine rows using inclusion/exclusion rules.
-
-## See Also
-
-- [Scalar Expressions](scalar-expressions.html)
-- [Selection Queries](selection-queries.html)
-  - [Selection Clauses](selection-queries.html#selection-clauses)
-  - [Set Operations](selection-queries.html#set-operations)
-- [Table Expressions](table-expressions.html)
-- [Ordering Query Results](query-order.html)
-- [Limiting Query Results](limit-offset.html)
-- [SQL Performance Best Practices](performance-best-practices-overview.html)
diff --git a/src/current/v2.0/selection-queries.md b/src/current/v2.0/selection-queries.md
deleted file mode 100644
index 5def7a1342c..00000000000
--- a/src/current/v2.0/selection-queries.md
+++ /dev/null
@@ -1,474 +0,0 @@
----
-title: Selection Queries
-summary: Selection queries can read and process data in CockroachDB.
-toc: true
-key: selection-clauses.html
----
-
-Selection queries read and process data in CockroachDB. They are more
-general than [simple `SELECT` clauses](select-clause.html): they can
-group one or more [selection clauses](#selection-clauses) with [set
-operations](#set-operations) and can request a [specific
-ordering](query-order.html) or [row limit](limit-offset.html).
-
-Selection queries can occur:
-
-- At the top level of a query like other [SQL statements](sql-statements.html).
-- Between parentheses as a [subquery](table-expressions.html#subqueries-as-table-expressions).
-- As [operand to other statements](#using-selection-queries-with-other-statements) that take tabular data as input, for example [`INSERT`](insert.html), [`UPSERT`](upsert.html), [`CREATE TABLE AS`](create-table-as.html) or [`ALTER ... SPLIT AT`](split-at.html). A complete example follows the synopsis below.
-
-
-## Synopsis
-
      {% include {{ page.version.version }}/sql/diagrams/select.html %}
      - -
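-
-For example, the following selection query (reusing the `accounts` and
-`mortgages` tables from the examples below) combines a set operation with
-`ORDER BY` and `LIMIT`, and could equally be used as an operand to a
-statement such as `INSERT`:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT name FROM accounts
-UNION
-SELECT name FROM mortgages
-ORDER BY name
-LIMIT 10;
-~~~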
      - -## Parameters - -Parameter | Description -----------|------------ -`common_table_expr` | See [Common Table Expressions](common-table-expressions.html). -`select_clause` | A valid [selection clause](#selection-clauses), either simple or using [set operations](#set-operations). -`sort_clause` | An optional `ORDER BY` clause. See [Ordering Query Results](query-order.html) for details. -`limit_clause` | An optional `LIMIT` clause. See [Limiting Query Results](limit-offset.html) for details. -`offset_clause` | An optional `OFFSET` clause. See [Limiting Query Results](limit-offset.html) for details. - -The optional `LIMIT` and `OFFSET` clauses can appear in any order, but must appear after `ORDER BY`, if also present. - -{{site.data.alerts.callout_info}}Because the WITH, ORDER BY, LIMIT and OFFSET sub-clauses are all optional, any simple selection clause is also a valid selection query.{{site.data.alerts.end}} - -## Selection Clauses - -Selection clauses are the main component of a selection query. They -define tabular data. There are four specific syntax forms collectively named selection clauses: - -Form | Usage ------|-------- -[`SELECT`](#select-clause) | Load or compute tabular data from various sources. This is the most common selection clause. -[`VALUES`](#values-clause) | List tabular data by the client. -[`TABLE`](#table-clause) | Load tabular data from the database. -[Set Operations](#set-operations) | Combine tabular data from two or more selection clauses. - -{{site.data.alerts.callout_info}}To perform joins or other relational operations over selection clauses, use a table expression and convert it back into a selection clause with TABLE or SELECT.{{site.data.alerts.end}} - -### Synopsis - -
      -{% include {{ page.version.version }}/sql/diagrams/select_clause.html %} -
      - -
      - -### Overview - - -### `VALUES` Clause - -#### Syntax - -
      -{% include {{ page.version.version }}/sql/diagrams/values_clause.html %} -
      - -A `VALUES` clause defines tabular data defined by the expressions -listed within parentheses. Each parenthesis group defines a single row -in the resulting table. - -The columns of the resulting table data have automatically generated -names. [These names can be modified with -`AS`](table-expressions.html#aliased-table-expressions) when the -`VALUES` clause is used as a sub-query. - -#### Example - -{% include copy-clipboard.html %} -~~~sql -> VALUES (1, 2, 3), (4, 5, 6); -~~~ - -~~~ -+---------+---------+---------+ -| column1 | column2 | column3 | -+---------+---------+---------+ -| 1 | 2 | 3 | -| 4 | 5 | 6 | -+---------+---------+---------+ -~~~ - -### `TABLE` Clause - -#### Syntax - -
      -{% include {{ page.version.version }}/sql/diagrams/table_clause.html %} -
      - -
      - -A `TABLE` clause reads tabular data from a specified table. The -columns of the resulting table data are named after the schema of the -table. - -In general, `TABLE x` is equivalent to `SELECT * FROM x`, but it is -shorter to type. - -{{site.data.alerts.callout_info}}Any table expression between parentheses is a valid operand for TABLE, not just -simple table or view names.{{site.data.alerts.end}} - -#### Example - -{% include copy-clipboard.html %} -~~~sql -> CREATE TABLE employee_copy AS TABLE employee; -~~~ - -This statement copies the content from table `employee` into a new -table. However, note that the `TABLE` clause does not preserve the indexing, -foreign key, or constraint and default information from the schema of the -table it reads from, so in this example, the new table `employee_copy` -will likely have a simpler schema than `employee`. - -Other examples: - -{% include copy-clipboard.html %} -~~~sql -> TABLE employee; -~~~ - -{% include copy-clipboard.html %} -~~~sql -> INSERT INTO employee_copy TABLE employee; -~~~ - -### `SELECT` Clause - -See [Simple `SELECT` Clause](select-clause.html) for more -details. - -## Set Operations - -Set operations combine data from two [selection -clauses](#selection-clauses). They are valid as operand to other -set operations or as main component in a selection query. - -### Synopsis - -
      {% include {{ page.version.version }}/sql/diagrams/set_operation.html %}
      - -
      - -### Set Operators - -SQL lets you compare the results of multiple [selection clauses](#selection-clauses). You can think of each of the set operators as representing a Boolean operator: - -- `UNION` = OR -- `INTERSECT` = AND -- `EXCEPT` = NOT - -By default, each of these comparisons displays only one copy of each value (similar to `SELECT DISTINCT`). However, each function also lets you add an `ALL` to the clause to display duplicate values. - -### Union: Combine Two Queries - -`UNION` combines the results of two queries into one result. - -{% include copy-clipboard.html %} -~~~ sql -> SELECT name -FROM accounts -WHERE state_opened IN ('AZ', 'NY') -UNION -SELECT name -FROM mortgages -WHERE state_opened IN ('AZ', 'NY'); -~~~ -~~~ -+-----------------+ -| name | -+-----------------+ -| Naseem Joossens | -| Ricarda Caron | -| Carola Dahl | -| Aygün Sanna | -+-----------------+ -~~~ - -To show duplicate rows, you can use `ALL`. - -{% include copy-clipboard.html %} -~~~ sql -> SELECT name -FROM accounts -WHERE state_opened IN ('AZ', 'NY') -UNION ALL -SELECT name -FROM mortgages -WHERE state_opened IN ('AZ', 'NY'); -~~~ -~~~ -+-----------------+ -| name | -+-----------------+ -| Naseem Joossens | -| Ricarda Caron | -| Carola Dahl | -| Naseem Joossens | -| Aygün Sanna | -| Carola Dahl | -+-----------------+ -~~~ - -### Intersect: Retrieve Intersection of Two Queries - -`INTERSECT` finds only values that are present in both query operands. - -{% include copy-clipboard.html %} -~~~ sql -> SELECT name -FROM accounts -WHERE state_opened IN ('NJ', 'VA') -INTERSECT -SELECT name -FROM mortgages; -~~~ -~~~ -+-----------------+ -| name | -+-----------------+ -| Danijel Whinery | -| Agar Archer | -+-----------------+ -~~~ - -### Except: Exclude One Query's Results from Another - -`EXCEPT` finds values that are present in the first query operand but not the second. - -{% include copy-clipboard.html %} -~~~ sql -> SELECT name -FROM mortgages -EXCEPT -SELECT name -FROM accounts; -~~~ -~~~ -+------------------+ -| name | -+------------------+ -| Günay García | -| Karla Goddard | -| Cybele Seaver | -+------------------+ -~~~ - -## Ordering Results - -The following sections provide examples. For more details, see [Ordering Query Results](query-order.html). - -### Order Retrieved Rows by One Column - -~~~ sql -> SELECT * -FROM accounts -WHERE balance BETWEEN 350 AND 500 -ORDER BY balance DESC; -~~~ -~~~ -+----+--------------------+---------+----------+--------------+ -| id | name | balance | type | state_opened | -+----+--------------------+---------+----------+--------------+ -| 12 | Raniya Žitnik | 500 | savings | CT | -| 59 | Annibale Karga | 500 | savings | ND | -| 27 | Adelbert Ventura | 500 | checking | IA | -| 86 | Theresa Slaski | 500 | checking | WY | -| 73 | Ruadh Draganov | 500 | checking | TN | -| 16 | Virginia Ruan | 400 | checking | HI | -| 43 | Tahirih Malinowski | 400 | checking | MS | -| 50 | Dusan Mallory | 350 | savings | NV | -+----+--------------------+---------+----------+--------------+ -~~~ - -### Order Retrieved Rows by Multiple Columns - -Columns are sorted in the order you list them in `sortby_list`. For example, `ORDER BY a, b` sorts the rows by column `a` and then sorts rows with the same `a` value by their column `b` values. 
- -~~~ sql -> SELECT * -FROM accounts -WHERE balance BETWEEN 350 AND 500 -ORDER BY balance DESC, name ASC; -~~~ -~~~ -+----+--------------------+---------+----------+--------------+ -| id | name | balance | type | state_opened | -+----+--------------------+---------+----------+--------------+ -| 27 | Adelbert Ventura | 500 | checking | IA | -| 59 | Annibale Karga | 500 | savings | ND | -| 12 | Raniya Žitnik | 500 | savings | CT | -| 73 | Ruadh Draganov | 500 | checking | TN | -| 86 | Theresa Slaski | 500 | checking | WY | -| 43 | Tahirih Malinowski | 400 | checking | MS | -| 16 | Virginia Ruan | 400 | checking | HI | -| 50 | Dusan Mallory | 350 | savings | NV | -+----+--------------------+---------+----------+--------------+ -~~~ - -## Limiting Row Count and Pagination - -The following sections provide examples. For more details, see [Limiting Query Results](limit-offset.html). - -### Limit Number of Retrieved Results - -You can reduce the number of results with `LIMIT`. - -~~~ sql -> SELECT id, name -FROM accounts -LIMIT 5; -~~~ -~~~ -+----+------------------+ -| id | name | -+----+------------------+ -| 1 | Bjorn Fairclough | -| 2 | Bjorn Fairclough | -| 3 | Arturo Nevin | -| 4 | Arturo Nevin | -| 5 | Naseem Joossens | -+----+------------------+ -~~~ - -### Paginate Through Limited Results - -If you want to limit the number of results, but go beyond the initial set, use `OFFSET` to proceed to the next set of results. This is often used to paginate through large tables where not all of the values need to be immediately retrieved. - -~~~ sql -> SELECT id, name -FROM accounts -LIMIT 5 -OFFSET 5; -~~~ -~~~ -+----+------------------+ -| id | name | -+----+------------------+ -| 6 | Juno Studwick | -| 7 | Juno Studwick | -| 8 | Eutychia Roberts | -| 9 | Ricarda Moriarty | -| 10 | Henrik Brankovic | -+----+------------------+ -~~~ - -## Composability - -[Selection clauses](#selection-clauses) are defined in the context of selection queries. [Table expressions](table-expressions.html) are defined in the context of the `FROM` sub-clause of [`SELECT`](select-clause.html). Nevertheless, they can be integrated with one another to form more complex queries or statements. - -### Using Any Selection Clause as a Selection Query - -Any [selection clause](#selection-clauses) can be used as a -selection query with no change. - -For example, the construct [`SELECT * FROM accounts`](select-clause.html) is a selection clause. 
It is also a valid selection query, and thus can be used as a stand-alone statement by appending a semicolon: - -~~~sql -> SELECT * FROM accounts; -~~~ -~~~ -+----+-----------------------+---------+----------+--------------+ -| id | name | balance | type | state_opened | -+----+-----------------------+---------+----------+--------------+ -| 1 | Bjorn Fairclough | 1200 | checking | AL | -| 2 | Bjorn Fairclough | 2500 | savings | AL | -| 3 | Arturo Nevin | 250 | checking | AK | -[ truncated ] -+----+-----------------------+---------+----------+--------------+ -~~~ - -Likewise, the construct [`VALUES (1), (2), (3)`](#values-clause) is also a selection -clause and thus can also be used as a selection query on its own: - -~~~sql -> VALUES (1), (2), (3); -~~~ -~~~ -+---------+ -| column1 | -+---------+ -| 1 | -| 2 | -| 3 | -+---------+ -(3 rows) -~~~ - -### Using Any Table Expression as Selection Clause - -Any [table expression](table-expressions.html) can be used as a selection clause (and thus also a selection query) by prefixing it with `TABLE` or by using it as an operand to `SELECT * FROM`. - -For example, the [simple table name](table-expressions.html#table-or-view-names) `customers` is a table expression, which designates all rows in that table. The expressions [`TABLE accounts`](selection-queries.html#table-clause) and [`SELECT * FROM accounts`](select-clause.html) are valid selection clauses. - -Likewise, the [SQL join expression](joins.html) `customers c JOIN orders o ON c.id = o.customer_id` is a table expression. You can turn it into a valid selection clause, and thus a valid selection query as follows: - -~~~sql -> TABLE (customers c JOIN orders o ON c.id = o.customer_id); -> SELECT * FROM customers c JOIN orders o ON c.id = o.customer_id; -~~~ - -### Using Any Selection Query as Table Expression - -Any selection query (or [selection clause](#selection-clauses)) can be used as a [table -expression](table-expressions.html) by enclosing it between parentheses, which forms a -[subquery](table-expressions.html#subqueries-as-table-expressions). - -For example, the following construct is a selection query, but is not a valid table expression: - -~~~sql -> SELECT * FROM customers ORDER BY name LIMIT 5 -~~~ - -To make it valid as operand to `FROM` or another table expression, you can enclose it between parentheses as follows: - -~~~sql -> SELECT id FROM (SELECT * FROM customers ORDER BY name LIMIT 5); -> SELECT o.id - FROM orders o - JOIN (SELECT * FROM customers ORDER BY name LIMIT 5) AS c - ON o.customer_id = c.id; -~~~ - -### Using Selection Queries With Other Statements - -Selection queries are also valid as operand in contexts that require tabular data. - -For example: - -| Statement | Example using `SELECT` | Example using `VALUES` | Example using `TABLE` | -|----------------|-----------------------------------|------------------------------------|-------------------------------| -| [`INSERT`](insert.html) | `INSERT INTO foo SELECT * FROM bar` | `INSERT INTO foo VALUES (1), (2), (3)` | `INSERT INTO foo TABLE bar` -| [`UPSERT`](upsert.html) | `UPSERT INTO foo SELECT * FROM bar` | `UPSERT INTO foo VALUES (1), (2), (3)` | `UPSERT INTO foo TABLE bar` -| [`CREATE TABLE AS`](create-table-as.html) | `CREATE TABLE foo AS SELECT * FROM bar` | `CREATE TABLE foo AS VALUES (1),(2),(3)` | `CREATE TABLE foo AS TABLE bar` -| [`ALTER ... 
SPLIT AT`](split-at.html) | `ALTER TABLE foo SPLIT AT SELECT * FROM bar` | `ALTER TABLE foo SPLIT AT VALUES (1),(2),(3)` | `ALTER TABLE foo SPLIT AT TABLE bar` -| Subquery in a [table expression](table-expressions.html) | `SELECT * FROM (SELECT * FROM bar)` | `SELECT * FROM (VALUES (1),(2),(3))` | `SELECT * FROM (TABLE bar)` -| Subquery in a [scalar expression](scalar-expressions.html) | `SELECT * FROM foo WHERE x IN (SELECT * FROM bar)` | `SELECT * FROM foo WHERE x IN (VALUES (1),(2),(3))` | `SELECT * FROM foo WHERE x IN (TABLE bar)` - -## Known Limitations - -{{site.data.alerts.callout_info}} The following limitations may be lifted -in a future version of CockroachDB.{{site.data.alerts.end}} - -### Using `VALUES` Clauses with Common Table Expressions - -{% include {{ page.version.version }}/known-limitations/cte-in-values-clause.md %} - -### Using Set Operations with Common Table Expressions - -{% include {{ page.version.version }}/known-limitations/cte-in-set-expression.md %} - -## See Also - -- [Simple `SELECT` Clause](select-clause.html) -- [Table Expressions](table-expressions.html) -- [Ordering Query Results](query-order.html) -- [Limiting Query Results](limit-offset.html) diff --git a/src/current/v2.0/serial.md b/src/current/v2.0/serial.md deleted file mode 100644 index c81cfb5c0bf..00000000000 --- a/src/current/v2.0/serial.md +++ /dev/null @@ -1,164 +0,0 @@ ---- -title: SERIAL -summary: The SERIAL pseudo-type produces integer values automatically. -toc: true ---- - -The `SERIAL` pseudo [data type](data-types.html) is a keyword that can -be used *in lieu* of a real data type when defining table columns. It -is approximately equivalent to using an [integer type](int.html) with -a [`DEFAULT` expression](default-value.html) that generates different -values every time it is evaluated. This default expression in turn -ensures that inserts that do not specify this column will receive an -automatically generated value instead of `NULL`. - -{{site.data.alerts.callout_info}} -`SERIAL` is provided only for compatibility with PostgreSQL. New applications should use real data types and a suitable `DEFAULT` expression. - -In most cases, we recommend using the [`UUID`](uuid.html) data type with the `gen_random_uuid()` function as the default value, which generates 128-bit values (larger than `SERIAL`'s maximum of 64 bits) and more uniformly scatters them across all of a table's underlying key-value ranges. UUIDs ensure more effectively that multiple nodes share the insert load when a UUID column is used in an index or primary key. - -See [this FAQ entry](sql-faqs.html#how-do-i-auto-generate-unique-row-ids-in-cockroachdb) for more details. -{{site.data.alerts.end}} - -## Behavior - -The keyword `SERIAL` is recognized in `CREATE TABLE` and is -automatically translated to a real data type and a [`DEFAULT` -expression](default-value.html) using `unique_rowid()` during table -creation. - -The result of this translation is then used internally by CockroachDB, -and can be observed using [`SHOW CREATE TABLE`](show-create-table.html). - -The chosen `DEFAULT` expression ensures that different values are -automatically generated for the column during row insertion. These -are not guaranteed to increase monotonically, see [this section -below](#auto-incrementing-is-not-always-sequential) for details. - -{{site.data.alerts.callout_info}} -The particular choice of `DEFAULT` expression when clients use the -`SERIAL` keyword is subject to change in future versions of -CockroachDB. 
Applications that wish to use `unique_rowid()`
-specifically must use the full explicit syntax `INT DEFAULT
-unique_rowid()` and avoid `SERIAL` altogether.
-{{site.data.alerts.end}}
-
-For compatibility with PostgreSQL, CockroachDB recognizes the following keywords as aliases to `SERIAL`:
-
-- `SERIAL2`
-- `SERIAL4`
-- `SERIAL8`
-- `SMALLSERIAL`
-- `BIGSERIAL`
-
-{{site.data.alerts.callout_danger}}
-`SERIAL2` and `SERIAL4` are the same as `SERIAL` and store 8-byte values, not 2- or 4-byte values as their names might suggest.
-{{site.data.alerts.end}}
-
-{{site.data.alerts.callout_info}}
-This behavior is updated in CockroachDB v2.1.
-{{site.data.alerts.end}}
-
-### Automatically generated values
-
-The default expression `unique_rowid()` produces a 64-bit integer from
-the current timestamp and ID of the node executing the
-[`INSERT`](insert.html) or [`UPSERT`](upsert.html) operation.
-This behavior is statistically likely to be globally unique except in
-extreme cases (see [this FAQ
-entry](sql-faqs.html#how-do-i-auto-generate-unique-row-ids-in-cockroachdb)
-for more details).
-
-Also, because value generation using `unique_rowid()` does not require
-inter-node coordination, its performance scales unimpeded when
-multiple SQL clients are writing to the table from different nodes.
-
-## Examples
-
-### Use `SERIAL` to Auto-Generate Primary Keys
-
-In this example, we create a table with the `SERIAL` column as the primary key so we can auto-generate unique IDs on insert.
-
-~~~ sql
-> CREATE TABLE serial (a SERIAL PRIMARY KEY, b STRING, c BOOL);
-~~~
-
-The [`SHOW COLUMNS`](show-columns.html) statement shows that the `SERIAL` type is just an alias for `INT` with `unique_rowid()` as the default.
-
-~~~ sql
-> SHOW COLUMNS FROM serial;
-~~~
-
-~~~
-+-------+------------+-------+----------------+
-| Field |    Type    | Null  |    Default     |
-+-------+------------+-------+----------------+
-| a     | INT        | false | unique_rowid() |
-| b     | STRING     | true  | NULL           |
-| c     | BOOL       | true  | NULL           |
-+-------+------------+-------+----------------+
-~~~
-
-When we insert rows without values in column `a` and display the new rows, we see that each row has defaulted to a unique value in column `a`.
-
-~~~ sql
-> INSERT INTO serial (b,c) VALUES ('red', true), ('yellow', false), ('pink', true);
-> INSERT INTO serial (a,b,c) VALUES (123, 'white', false);
-> SELECT * FROM serial;
-~~~
-
-~~~
-+--------------------+--------+-------+
-|         a          |   b    |   c   |
-+--------------------+--------+-------+
-| 148656994422095873 | red    | true  |
-| 148656994422161409 | yellow | false |
-| 148656994422194177 | pink   | true  |
-|                123 | white  | false |
-+--------------------+--------+-------+
-~~~
-
-## Auto-Incrementing Is Not Always Sequential
-
-It's a common misconception that the auto-incrementing types in PostgreSQL and MySQL generate strictly sequential values. In fact, each insert increases the sequence by one, even when the insert is not committed. This means that auto-incrementing types may leave gaps in a sequence.
-
-To experience this for yourself, run through the following example in PostgreSQL:
-
-1. Create a table with a `SERIAL` column.
-
-    ~~~ sql
-    > CREATE TABLE increment (a SERIAL PRIMARY KEY);
-    ~~~
-
-2. Run four transactions for inserting rows.
-
-    ~~~ sql
-    > BEGIN; INSERT INTO increment DEFAULT VALUES; ROLLBACK;
-    > BEGIN; INSERT INTO increment DEFAULT VALUES; COMMIT;
-    > BEGIN; INSERT INTO increment DEFAULT VALUES; ROLLBACK;
-    > BEGIN; INSERT INTO increment DEFAULT VALUES; COMMIT;
-    ~~~
-
-3. View the rows created.
- - ~~~ sql - > SELECT * FROM increment; - ~~~ - ~~~ - +---+ - | a | - +---+ - | 2 | - | 4 | - +---+ - ~~~ - - Since each insert increased the sequence in column `a` by one, the first committed insert got the value `2`, and the second committed insert got the value `4`. As you can see, the values aren't strictly sequential, and the last value doesn't give an accurate count of rows in the table. - -In summary, the `SERIAL` type in PostgreSQL and CockroachDB, and the `AUTO_INCREMENT` type in MySQL, all behave the same in that they do not create strict sequences. CockroachDB will likely create more gaps than these other databases, but will generate these values much faster. - - -## See also - -- [FAQ: How do I auto-generate unique row IDs in CockroachDB?](sql-faqs.html#how-do-i-auto-generate-unique-row-ids-in-cockroachdb) -- [Data Types](data-types.html) diff --git a/src/current/v2.0/set-cluster-setting.md b/src/current/v2.0/set-cluster-setting.md deleted file mode 100644 index 6cb738ba126..00000000000 --- a/src/current/v2.0/set-cluster-setting.md +++ /dev/null @@ -1,110 +0,0 @@ ---- -title: SET CLUSTER SETTING -summary: The SET CLUSTER SETTING statement configures one cluster setting. -toc: true ---- - -The `SET CLUSTER SETTING` [statement](sql-statements.html) modifies a [cluster-wide setting](cluster-settings.html). - -{{site.data.alerts.callout_danger}}Many cluster settings are intended for tuning CockroachDB internals. Before changing these settings, we strongly encourage you to discuss your goals with Cockroach Labs; otherwise, you use them at your own risk.{{site.data.alerts.end}} - - -## Required Privileges - -Only the `root` user can modify cluster settings. - -## Synopsis - -
      -{% include {{ page.version.version }}/sql/diagrams/set_cluster_setting.html %} -
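As a quick illustration of the grammar above, the following statements set a setting to an explicit value and then restore its default (this uses the same `diagnostics.reporting.enabled` setting as the examples below; any setting listed in [Cluster settings](cluster-settings.html) works the same way):

~~~ sql
> SET CLUSTER SETTING diagnostics.reporting.enabled = false;
> SET CLUSTER SETTING diagnostics.reporting.enabled = DEFAULT;
~~~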
      - -{{site.data.alerts.callout_info}}The SET CLUSTER SETTING statement is unrelated to the other SET TRANSACTION and SET (session variable) statements.{{site.data.alerts.end}} - -## Parameters - -| Parameter | Description | -|-----------|-------------| -| `var_name` | The name of the [cluster setting](cluster-settings.html) (case-insensitive). | -| `var_value` | The value for the [cluster setting](cluster-settings.html). | -| `DEFAULT` | Reset the [cluster setting](cluster-settings.html) to its default value.

      The [`RESET CLUSTER SETTING`](reset-cluster-setting.html) resets a cluster setting as well. | - -## Examples - -### Change the Default Distributed Execution Parameter - -You can configure a cluster so that new sessions automatically try to run queries [in a distributed fashion](https://www.cockroachlabs.com/blog/local-and-distributed-processing-in-cockroachdb/): - -~~~ sql -> SET CLUSTER SETTING sql.defaults.distsql = 1; -~~~ - -You can also disable distributed execution for all new sessions: - -~~~ sql -> SET CLUSTER SETTING sql.defaults.distsql = 0; -~~~ - -### Disable Automatic Diagnostic Reporting - -You can opt out of -[automatic diagnostic reporting](diagnostics-reporting.html) of usage -data to Cockroach Labs using the following: - -~~~ sql -> SET CLUSTER SETTING diagnostics.reporting.enabled = false; -> SHOW CLUSTER SETTING diagnostics.reporting.enabled; -~~~ - -~~~ -+-------------------------------+ -| diagnostics.reporting.enabled | -+-------------------------------+ -| false | -+-------------------------------+ -(1 row) -~~~ - -### Reset a Setting to Its Default Value - -{{site.data.alerts.callout_success}}You can use RESET CLUSTER SETTING to reset a cluster setting as well.{{site.data.alerts.end}} - -~~~ sql -> SET CLUSTER SETTING sql.metrics.statement_details.enabled = false; -~~~ - -~~~ sql -> SHOW CLUSTER SETTING sql.metrics.statement_details.enabled; -~~~ - -~~~ -+---------------------------------------+ -| sql.metrics.statement_details.enabled | -+---------------------------------------+ -| false | -+---------------------------------------+ -(1 row) -~~~ - -~~~ sql -> SET CLUSTER SETTING sql.metrics.statement_details.enabled = DEFAULT; -~~~ - -~~~ sql -> SHOW CLUSTER SETTING sql.metrics.statement_details.enabled; -~~~ - -~~~ -+---------------------------------------+ -| sql.metrics.statement_details.enabled | -+---------------------------------------+ -| true | -+---------------------------------------+ -(1 row) -~~~ - -## See Also - -- [`SET` (session variable)](set-vars.html) -- [`SHOW CLUSTER SETTING`](show-cluster-setting.html) -- [Cluster settings](cluster-settings.html) diff --git a/src/current/v2.0/set-transaction.md b/src/current/v2.0/set-transaction.md deleted file mode 100644 index db4aa4a3c8e..00000000000 --- a/src/current/v2.0/set-transaction.md +++ /dev/null @@ -1,114 +0,0 @@ ---- -title: SET TRANSACTION -summary: The SET TRANSACTION statement sets the transaction isolation level and/or priority for the current session or for an individual transaction. -toc: true ---- - -The `SET TRANSACTION` [statement](sql-statements.html) sets the transaction isolation level or priority after you [`BEGIN`](begin-transaction.html) it but before executing the first statement that manipulates a database. - -{{site.data.alerts.callout_info}}You can also set the session's default isolation level.{{site.data.alerts.end}} - - -## Synopsis - -
      {% include {{ page.version.version }}/sql/diagrams/set_transaction.html %}
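For instance, a minimal sketch of the syntax above lowers the priority of a transaction immediately after opening it (the fuller examples below combine this with an isolation level and savepoints):

~~~ sql
> BEGIN;
> SET TRANSACTION PRIORITY LOW;
~~~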
      - -## Required Privileges - -No [privileges](privileges.html) are required to set the transaction isolation level or priority. However, privileges are required for each statement within a transaction. - -## Parameters - -| Parameter | Description | -|-----------|-------------| -| `ISOLATION LEVEL` | By default, transactions in CockroachDB implement the strongest ANSI isolation level: `SERIALIZABLE`. At this isolation level, transactions will never result in anomalies. The `SNAPSHOT` isolation level is still supported as well for backwards compatibility, but you should avoid using it. It provides little benefit in terms of performance and can result in inconsistent state under certain complex workloads. For more information, see [Transactions: Isolation Levels](transactions.html#isolation-levels).

      New in v2.0: The current isolation level is also exposed as the [session variable](show-vars.html) `transaction_isolation`.

      **Default**: `SERIALIZABLE` | -| `PRIORITY` | If you do not want the transaction to run with `NORMAL` priority, you can set it to `LOW` or `HIGH`.

      Transactions with higher priority are less likely to need to be retried.

      For more information, see [Transactions: Priorities](transactions.html#transaction-priorities).

      The current priority is also exposed as the [session variable](show-vars.html) `transaction_priority`.

      **Default**: `NORMAL` | -| `READ` | New in v2.0: Set the transaction access mode to `READ ONLY` or `READ WRITE`. The current transaction access mode is also exposed as the [session variable](show-vars.html) `transaction_read_only`.

      **Default**: `READ WRITE`| - -## Examples - -### Set Isolation & Priority - -You can set a transaction's isolation level to `SNAPSHOT`, as well as its priority to `LOW` or `HIGH`. - -{% include copy-clipboard.html %} -~~~ sql -> BEGIN; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SET TRANSACTION ISOLATION LEVEL SNAPSHOT, PRIORITY HIGH; -~~~ - -{{site.data.alerts.callout_success}}You can also set both transaction options as a space-separated list, e.g., SET TRANSACTION ISOLATION LEVEL SNAPSHOT PRIORITY HIGH.{{site.data.alerts.end}} - -{% include copy-clipboard.html %} -~~~ sql -> SAVEPOINT cockroach_restart; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> UPDATE products SET inventory = 0 WHERE sku = '8675309'; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO orders (customer, sku, status) VALUES (1001, '8675309', 'new'); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> RELEASE SAVEPOINT cockroach_restart; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> COMMIT; -~~~ - -{{site.data.alerts.callout_danger}}This example assumes you're using client-side intervention to handle transaction retries.{{site.data.alerts.end}} - -### Set Session's Default Isolation - -You can also set the default isolation level for all transactions in the client's current session using `SET DEFAULT_TRANSACTION_ISOLATION TO `. - -~~~ sql -> SHOW DEFAULT_TRANSACTION_ISOLATION; -~~~ -~~~ -+-------------------------------+ -| default_transaction_isolation | -+-------------------------------+ -| SERIALIZABLE | -+-------------------------------+ -(1 row) -~~~ -~~~ sql -> SET DEFAULT_TRANSACTION_ISOLATION TO SNAPSHOT; -~~~ -~~~ -SET -~~~ -~~~ sql -> SHOW DEFAULT_TRANSACTION_ISOLATION; -~~~ -~~~ -+-------------------------------+ -| default_transaction_isolation | -+-------------------------------+ -| SNAPSHOT | -+-------------------------------+ -(1 row) -~~~ - -## See Also - -- [`SET`](set-vars.html) -- [Transaction parameters](transactions.html#transaction-parameters) -- [`BEGIN`](begin-transaction.html) -- [`COMMIT`](commit-transaction.html) -- [`SAVEPOINT`](savepoint.html) -- [`RELEASE SAVEPOINT`](release-savepoint.html) -- [`ROLLBACK`](rollback-transaction.html) diff --git a/src/current/v2.0/set-vars.md b/src/current/v2.0/set-vars.md deleted file mode 100644 index 5d579e79642..00000000000 --- a/src/current/v2.0/set-vars.md +++ /dev/null @@ -1,220 +0,0 @@ ---- -title: SET (session variable) -summary: The SET statement modifies the current configuration variables for the client session. -toc: true ---- - -The `SET` [statement](sql-statements.html) can modify one of the session configuration variables. These can also be queried via [`SHOW`](show-vars.html). - -{{site.data.alerts.callout_danger}}In some cases, client drivers can drop and restart the connection to the server. When this happens, any session configurations made with SET statements are lost. It is therefore more reliable to configure the session in the client's connection string. For examples in different languages, see the Build an App with CockroachDB tutorials.{{site.data.alerts.end}} - - -## Required Privileges - -No [privileges](privileges.html) are required to modify the session settings. - -## Synopsis - -
      -{% include {{ page.version.version }}/sql/diagrams/set_var.html %} -
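As a minimal sketch of the syntax above, the following sets and then inspects a single variable (the application name `test_app` is purely illustrative):

~~~ sql
> SET application_name = 'test_app';
> SHOW application_name;
~~~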
- -{{site.data.alerts.callout_info}}The SET statement for session settings is unrelated to the other SET TRANSACTION and SET CLUSTER SETTING statements.{{site.data.alerts.end}} - -## Parameters - -The `SET` statement accepts two parameters: the -variable name and the value to use to modify the variable. - -The variable name is case-insensitive. The value can be a list of one or more items. For example, the variable `search_path` is multi-valued. - -### Supported Variables - -| Variable name | Description | Initial value | Can be viewed with [`SHOW`](show-vars.html)? | -|---------------|--------------|---------------|----------------------------------------------| -| `application_name` | The current application name for statistics collection. | Empty string | Yes | -| `database` | The [current database](sql-name-resolution.html#current-database). | Database in connection string, or empty if not specified | Yes | -| `default_transaction_isolation` | New in v2.0: The default transaction isolation level for the current session. See [Transaction parameters](transactions.html#transaction-parameters) and [`SET TRANSACTION`](set-transaction.html) for more details. | Settings in connection string, or "`SERIALIZABLE`" if not specified | Yes | -| `default_transaction_read_only` | The default transaction access mode for the current session. If set to `on`, only read operations are allowed in transactions in the current session; if set to `off`, both read and write operations are allowed. See [`SET TRANSACTION`](set-transaction.html) for more details. | `off` | Yes | -| `sql_safe_updates` | If `true`, disallow potentially unsafe SQL statements, including `DELETE` without a `WHERE` clause, `UPDATE` without a `WHERE` clause, and `ALTER TABLE ... DROP COLUMN`. See [Allow Potentially Unsafe SQL Statements](use-the-built-in-sql-client.html#allow-potentially-unsafe-sql-statements) for more details. | `true` for interactive sessions from the [built-in SQL client](use-the-built-in-sql-client.html) unless `--safe-updates=false` is specified,
      `false` for sessions from other clients | Yes | -| `search_path` | Changed in v2.0: A list of schemas that will be searched to resolve unqualified table or function names. For more details, see [Name Resolution](sql-name-resolution.html). | "`{public}`" | Yes | -| `server_version_num` | New in v2.0: The version of PostgreSQL that CockroachDB emulates. | Version-dependent | Yes | -| `timezone` | The default time zone for the current session.

      This value can be a string representation of a local system-defined time zone (e.g., `'EST'`, `'America/New_York'`) or a positive or negative numeric offset from UTC (e.g., `-7`, `+7`). Also, `DEFAULT`, `LOCAL`, or `0` sets the session time zone to `UTC`.

      See [Setting the Time Zone](#set-time-zone) for more details.

      Changed in v2.0: This session variable was named `"time zone"` (with a space) in CockroachDB 1.x. It has been renamed for compatibility with PostgreSQL. | `UTC` | Yes | -| `tracing` | The trace recording state.

      See [`SET TRACING`](#set-tracing) for more details. | `off` | Yes | -| `transaction_isolation` | The isolation level of the current transaction. See [Transaction parameters](transactions.html#transaction-parameters) for more details.

      Changed in v2.0: This session variable was called `transaction isolation level` (with spaces) in CockroachDB 1.x. It has been renamed for compatibility with PostgreSQL. | `SERIALIZABLE` | Yes | -| `transaction_priority` | The priority of the current transaction. See [Transaction parameters](transactions.html#transaction-parameters) for more details.

      Changed in v2.0: This session variable was called `transaction priority` (with a space) in CockroachDB 1.x. It has been renamed for compatibility with PostgreSQL. | `NORMAL` | Yes | -| `transaction_read_only` | New in v2.0: The access mode of the current transaction. See [Set Transaction](set-transaction.html) for more details. | `off` | Yes | -| `transaction_status` | The state of the current transaction. See [Transactions](transactions.html) for more details.

      Changed in v2.0: This session variable was called `transaction status` (with a space) in CockroachDB 1.x. It has been renamed for compatibility with PostgreSQL. | `NoTxn` | Yes | -| `client_encoding` | Ignored; recognized for compatibility with PostgreSQL clients. Only possible value is "`UTF8`". | N/A | No | -| `client_min_messages` | Ignored; recognized for compatibility with PostgreSQL clients. Only possible value is "`on`". | N/A | No | -| `extra_float_digits` | Ignored; recognized for compatibility with PostgreSQL clients. | N/A | No | -| `standard_conforming_strings` | Ignored; recognized for compatibility with PostgreSQL clients. | N/A | No | - -Special syntax cases: - -| Syntax | Equivalent to | Notes | -|--------|---------------|-------| -| `USE ...` | `SET database = ...` | This is provided as convenience for users with a MySQL/MSSQL background. -| `SET NAMES ...` | `SET client_encoding = ...` | This is provided for compatibility with PostgreSQL clients. -| `SET SESSION CHARACTERISTICS AS TRANSACTION ISOLATION LEVEL ...` | `SET default_transaction_isolation = ...` | This is provided for compatibility with standard SQL. -| `SET TIME ZONE ...` | `SET timezone = ...` | This is provided for compatibility with PostgreSQL clients. - -## Examples - -### Set Simple Variables - -The following demonstrates how `SET` can be used to configure the -default database for the current session: - -~~~ sql -> SET database = bank; -> SHOW database; -~~~ - -~~~ -+----------+ -| database | -+----------+ -| bank | -+----------+ -(1 row) -~~~ - -### Set Variables to Values Containing Spaces - -The following demonstrates how to use quoting to use values containing spaces: - -~~~ sql -> SET database = "database name with spaces"; -> SHOW database; -~~~ - -~~~ -+---------------------------+ -| database | -+---------------------------+ -| database name with spaces | -+---------------------------+ -(1 row) -~~~ - -### Set Variables to a List of Values - -The following demonstrates how to assign a list of values: - -~~~ sql -> SET search_path = pg_catalog,public; -> SHOW search_path; -~~~ - -~~~ -+---------------------------+ -| search_path | -+---------------------------+ -| pg_catalog, public | -+---------------------------+ -(1 row) -~~~ - -### Reset a Variable to Its Default Value - -{{site.data.alerts.callout_success}}You can use RESET to reset a session variable as well.{{site.data.alerts.end}} - -~~~ sql -> SET default_transaction_isolation = SNAPSHOT; -~~~ - -~~~ sql -> SHOW default_transaction_isolation; -~~~ - -~~~ -+-------------------------------+ -| default_transaction_isolation | -+-------------------------------+ -| SNAPSHOT | -+-------------------------------+ -(1 row) -~~~ - -~~~ sql -> SET default_transaction_isolation = DEFAULT; -~~~ - -~~~ sql -> SHOW default_transaction_isolation; -~~~ - -~~~ -+-------------------------------+ -| default_transaction_isolation | -+-------------------------------+ -| SERIALIZABLE | -+-------------------------------+ -(1 row) -~~~ - -## `SET TIME ZONE` - -{{site.data.alerts.callout_danger}}As a best practice, we recommend not using this setting and avoid setting a session time for your database. We instead recommend converting UTC values to the appropriate time zone on the client side.{{site.data.alerts.end}} - -You can control your client's default time zone for the current session with SET TIME ZONE. This will apply a session offset to all [`TIMESTAMP WITH TIME ZONE`](timestamp.html) values. 
- -{{site.data.alerts.callout_info}}Without SET TIME ZONE, CockroachDB uses UTC as the default time zone.{{site.data.alerts.end}} - -### Parameters - -The time zone value indicates the time zone for the current session. - -This value can be a string representation of a local system-defined -time zone (e.g., `'EST'`, `'America/New_York'`) or a positive or -negative numeric offset from UTC (e.g., `-7`, `+7`). Also, `DEFAULT`, -`LOCAL`, or `0` sets the session time zone to `UTC`. - -### Example: Set the Default Time Zone via `SET TIME ZONE` - -~~~ sql -> SET TIME ZONE 'EST'; -- same as SET "timezone" = 'EST' -> SHOW TIME ZONE; -~~~ -~~~ shell -+-----------+ -| time zone | -+-----------+ -| EST | -+-----------+ -(1 row) -~~~ -~~~ sql -> SET TIME ZONE DEFAULT; -- same as SET "timezone" = DEFAULT -> SHOW TIME ZONE; -~~~ -~~~ shell -+-----------+ -| time zone | -+-----------+ -| UTC | -+-----------+ -(1 row) -~~~ - -## `SET TRACING` - -`SET TRACING` changes the trace recording state of the current session. A trace recording can be inspected with the [`SHOW TRACE FOR SESSION`](show-trace.html) statement. - - Value | Description --------|------------ -`off` | Trace recording is disabled. -`cluster` | Trace recording is enabled; distributed traces are collected. -`on` | Same as `cluster`. -`kv` | Same as `cluster` except that "kv messages" are collected instead of regular trace messages. See [`SHOW TRACE`](show-trace.html). -`local` | Trace recording is enabled; only trace messages issued by the local node are collected. - -## See Also - -- [`RESET`](reset-vars.html) -- [`SET TRANSACTION`](set-transaction.html) -- [`SET CLUSTER SETTING`](set-cluster-setting.html) -- [`SHOW` (session variable)](show-vars.html) -- [The `TIMESTAMP` and `TIMESTAMPTZ` data types.](timestamp.html) -- [`SHOW TRACE`](show-trace.html) diff --git a/src/current/v2.0/show-backup.md b/src/current/v2.0/show-backup.md deleted file mode 100644 index 71834afab7f..00000000000 --- a/src/current/v2.0/show-backup.md +++ /dev/null @@ -1,65 +0,0 @@ ---- -title: SHOW BACKUP -summary: The SHOW BACKUP statement lists the contents of a backup. -toc: true ---- - -New in v1.1: The `SHOW BACKUP` [statement](sql-statements.html) lists the contents of an enterprise backup created with the [`BACKUP`](backup.html) statement. - - -## Required Privileges - -Only the `root` user can run `SHOW BACKUP`. - -## Synopsis - -
      -{% include {{ page.version.version }}/sql/diagrams/show_backup.html %} -
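For instance, assuming a backup was previously written to a node's local file store, inspecting it would look something like this (the `nodelocal` path is purely illustrative; see [Backup File URLs](backup.html#backup-file-urls) for the supported schemes):

~~~ sql
> SHOW BACKUP 'nodelocal:///tpch-backup';
~~~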
      - -## Parameters - -Parameter | Description -----------|------------ -`location` | The location of the backup to inspect. For more details, see [Backup File URLs](backup.html#backup-file-urls). - -## Response - -The following fields are returned. - -Field | Description -------|------------ -`database` | The database name. -`table` | The table name. -`start_time` | The time at which the backup was started. For a full backup, this will be empty. -`end_time` | The time at which the backup was completed. -`size_bytes` | The size of the backup, in bytes. - -## Example - -~~~ sql -> SHOW BACKUP 'azure://acme-co-backup/tpch-2017-03-27-full?AZURE_ACCOUNT_KEY=hash&AZURE_ACCOUNT_NAME=acme-co'; -~~~ - -~~~ -+----------+----------+------------+----------------------------------+------------+ -| database | table | start_time | end_time | size_bytes | -+----------+----------+------------+----------------------------------+------------+ -| tpch | nation | | 2017-03-27 13:54:31.371103+00:00 | 3828 | -| tpch | region | | 2017-03-27 13:54:31.371103+00:00 | 6626 | -| tpch | part | | 2017-03-27 13:54:31.371103+00:00 | 8128 | -| tpch | supplier | | 2017-03-27 13:54:31.371103+00:00 | 2834 | -| tpch | partsupp | | 2017-03-27 13:54:31.371103+00:00 | 3884 | -| tpch | customer | | 2017-03-27 13:54:31.371103+00:00 | 12736 | -| tpch | orders | | 2017-03-27 13:54:31.371103+00:00 | 6020 | -| tpch | lineitem | | 2017-03-27 13:54:31.371103+00:00 | 729811 | -+----------+----------+------------+----------------------------------+------------+ -(8 rows) - -Time: 32.540353ms -~~~ - -## See Also - -- [`BACKUP`](backup.html) -- [`RESTORE`](restore.html) diff --git a/src/current/v2.0/show-cluster-setting.md b/src/current/v2.0/show-cluster-setting.md deleted file mode 100644 index 34e1aed7a62..00000000000 --- a/src/current/v2.0/show-cluster-setting.md +++ /dev/null @@ -1,90 +0,0 @@ ---- -title: SHOW CLUSTER SETTING -summary: The SHOW CLUSTER SETTING statement displays the current cluster settings. -toc: true ---- - -The `SHOW CLUSTER SETTING` [statement](sql-statements.html) can -display the value of either one or all of the -[cluster settings](cluster-settings.html). These can also be configured -via [`SET CLUSTER SETTING`](set-cluster-setting.html). - - -## Required Privileges Changed in v2.0 - -Only the `root` user can display cluster settings. - -## Synopsis - -
      -{% include {{ page.version.version }}/sql/diagrams/show_cluster_setting.html %} -
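For example, the two forms of the statement look as follows (both are covered in more detail in the examples below):

~~~ sql
> SHOW CLUSTER SETTING diagnostics.reporting.enabled;
> SHOW ALL CLUSTER SETTINGS;
~~~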
- -{{site.data.alerts.callout_info}}The SHOW statement for cluster settings is unrelated to the other SHOW statements: SHOW (session variable), SHOW CREATE TABLE, SHOW CREATE VIEW, SHOW USERS, SHOW DATABASES, SHOW COLUMNS, SHOW GRANTS, and SHOW CONSTRAINTS.{{site.data.alerts.end}} - -## Parameters - -| Parameter | Description | -|-----------|-------------| -| `any_name` | The name of the [cluster setting](cluster-settings.html) (case-insensitive). | - -## Examples - -### Showing the Value of a Single Cluster Setting - -~~~ sql -> SHOW CLUSTER SETTING diagnostics.reporting.enabled; -~~~ - -~~~ -+-------------------------------+ -| diagnostics.reporting.enabled | -+-------------------------------+ -| true | -+-------------------------------+ -(1 row) -~~~ - -~~~ sql -> SHOW CLUSTER SETTING sql.defaults.distsql; -~~~ - -~~~ -+----------------------+ -| sql.defaults.distsql | -+----------------------+ -| 1 | -+----------------------+ -(1 row) -~~~ - -### Showing the Value of All Cluster Settings - -~~~ sql -> SHOW ALL CLUSTER SETTINGS; -~~~ - -~~~ -+-------------------------------+---------------+------+--------------------------------------------------------+ -| name | current_value | type | description | -+-------------------------------+---------------+------+--------------------------------------------------------+ -| diagnostics.reporting.enabled | true | b | enable reporting diagnostic metrics to cockroach labs | -| ... | ... | ... | ... | -+-------------------------------+---------------+------+--------------------------------------------------------+ -(24 rows) -~~~ - -## See Also - -- [`SET CLUSTER SETTING`](set-cluster-setting.html) -- [`RESET CLUSTER SETTING`](reset-cluster-setting.html) -- [Cluster settings](cluster-settings.html) -- [`SHOW` (session variable)](show-vars.html) -- [`SHOW COLUMNS`](show-columns.html) -- [`SHOW CONSTRAINTS`](show-constraints.html) -- [`SHOW CREATE TABLE`](show-create-table.html) -- [`SHOW CREATE VIEW`](show-create-view.html) -- [`SHOW DATABASES`](show-databases.html) -- [`SHOW GRANTS`](show-grants.html) -- [`SHOW INDEX`](show-index.html) -- [`SHOW USERS`](show-users.html) diff --git a/src/current/v2.0/show-columns.md b/src/current/v2.0/show-columns.md deleted file mode 100644 index 90be1ba8cf9..00000000000 --- a/src/current/v2.0/show-columns.md +++ /dev/null @@ -1,73 +0,0 @@ ---- -title: SHOW COLUMNS -summary: The SHOW COLUMNS statement shows details about columns in a table, including each column's name, type, default value, and whether or not it's nullable. -toc: true ---- - -The `SHOW COLUMNS` [statement](sql-statements.html) shows details about columns in a table, including each column's name, type, default value, and whether or not it's nullable. - - -## Required Privileges - -The user must have any [privilege](privileges.html) on the target table. - -## Synopsis - -
      -{% include {{ page.version.version }}/sql/diagrams/show_columns.html %} -
      - -## Parameters - -Parameter | Description -----------|------------ -`table_name` | The name of the table for which to show columns. - -## Response - -The following fields are returned for each column. - -Field | Description -------|------------ -`Field` | The name of the column. -`Type` | The [data type](data-types.html) of the column. -`Null` | Whether or not the column accepts `NULL`. Possible values: `true` or `false`. -`Default` | The default value for the column, or an expression that evaluates to a default value. -`Indices` | The list of [indexes](indexes.html) that the column is involved in, as an array. - -## Example - -~~~ sql -> CREATE TABLE orders ( - id INT PRIMARY KEY DEFAULT unique_rowid(), - date TIMESTAMP NOT NULL, - priority INT DEFAULT 1, - customer_id INT UNIQUE, - status STRING DEFAULT 'open', - CHECK (priority BETWEEN 1 AND 5), - CHECK (status in ('open', 'in progress', 'done', 'cancelled')), - FAMILY (id, date, priority, customer_id, status) -); - -> SHOW COLUMNS FROM orders; -~~~ - -~~~ -+-------------+-----------+-------+----------------+----------------------------------+ -| Field | Type | Null | Default | Indices | -+-------------+-----------+-------+----------------+----------------------------------+ -| id | INT | false | unique_rowid() | {primary,orders_customer_id_key} | -| date | TIMESTAMP | false | NULL | {} | -| priority | INT | true | 1 | {} | -| customer_id | INT | true | NULL | {orders_customer_id_key} | -| status | STRING | true | 'open' | {} | -+-------------+-----------+-------+----------------+----------------------------------+ -(5 rows) -~~~ - -## See Also - -- [`CREATE TABLE`](create-table.html) -- [Information Schema](information-schema.html) -- [Other SQL Statements](sql-statements.html) - diff --git a/src/current/v2.0/show-constraints.md b/src/current/v2.0/show-constraints.md deleted file mode 100644 index f807bec1e70..00000000000 --- a/src/current/v2.0/show-constraints.md +++ /dev/null @@ -1,79 +0,0 @@ ---- -title: SHOW CONSTRAINTS -summary: The SHOW CONSTRAINTS statement lists the constraints on a table. -toc: true ---- - -The `SHOW CONSTRAINTS` [statement](sql-statements.html) lists all named [constraints](constraints.html) as well as any unnamed Check constraints on a table. - -{{site.data.alerts.callout_danger}}The SHOW CONSTRAINTS statement is under development; the exact output will continue to change.{{site.data.alerts.end}} - - -## Required Privileges - -The user must have any [privilege](privileges.html) on the target table. - -## Aliases - -`SHOW CONSTRAINT` is an alias for `SHOW CONSTRAINTS`. - -## Synopsis - -
      -{% include {{ page.version.version }}/sql/diagrams/show_constraints.html %} -
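Because `SHOW CONSTRAINT` is an alias, the two statements below are equivalent (the `orders` table is the one created in the example that follows):

~~~ sql
> SHOW CONSTRAINTS FROM orders;
> SHOW CONSTRAINT FROM orders;
~~~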
      - -## Parameters - -Parameter | Description -----------|------------ -`table_name` | The name of the table for which to show constraints. - -## Response - -The following fields are returned for each constraint. - -{{site.data.alerts.callout_danger}}The SHOW CONSTRAINTS statement is under development; the exact output will continue to change.{{site.data.alerts.end}} - -Field | Description -------|------------ -`Table` | The name of the table. -`Name` | The name of the constraint. -`Type` | The type of constraint. -`Column(s)` | The columns to which the constraint applies. For [Check constraints](check.html), column information will be in `Details` and this field will be `NULL`. -`Details` | The conditions for a Check constraint. - -## Example - -~~~ sql -> CREATE TABLE orders ( - id INT PRIMARY KEY, - date TIMESTAMP NOT NULL, - priority INT DEFAULT 1, - customer_id INT UNIQUE, - status STRING DEFAULT 'open', - CHECK (priority BETWEEN 1 AND 5), - CHECK (status in ('open', 'in progress', 'done', 'cancelled')), - FAMILY (id, date, priority, customer_id, status) -); - -> SHOW CONSTRAINTS FROM orders; -~~~ -~~~ -+--------+------------------------+-------------+---------------+--------------------------------------------------------+ -| Table | Name | Type | Column(s) | Details | -+--------+------------------------+-------------+---------------+--------------------------------------------------------+ -| orders | | CHECK | NULL | status IN ('open', 'in progress', 'done', 'cancelled') | -| orders | | CHECK | NULL | priority BETWEEN 1 AND 5 | -| orders | orders_customer_id_key | UNIQUE | [customer_id] | NULL | -| orders | primary | PRIMARY KEY | [id] | NULL | -+--------+------------------------+-------------+---------------+--------------------------------------------------------+ -(4 rows) -~~~ - -## See Also - -- [Constraints](constraints.html) -- [`CREATE TABLE`](create-table.html) -- [Information Schema](information-schema.html) -- [Other SQL Statements](sql-statements.html) diff --git a/src/current/v2.0/show-create-sequence.md b/src/current/v2.0/show-create-sequence.md deleted file mode 100644 index f532cf449f0..00000000000 --- a/src/current/v2.0/show-create-sequence.md +++ /dev/null @@ -1,57 +0,0 @@ ---- -title: SHOW CREATE SEQUENCE -summary: The SHOW CREATE SEQUENCE statement shows the CREATE SEQUENCE statement that would create a copy of the specified sequence. -toc: true ---- - -New in v2.0: The `SHOW CREATE SEQUENCE` [statement](sql-statements.html) shows the `CREATE SEQUENCE` statement that would create a copy of the specified sequence. - - -## Required Privileges - -The user must have any [privilege](privileges.html) on the target sequence. - -## Synopsis - -
      -{% include {{ page.version.version }}/sql/diagrams/show_create_sequence.html %} -
      - -## Parameters - -Parameter | Description -----------|------------ -`sequence_name` | The name of the sequence for which to show the `CREATE SEQUENCE` statement. - -## Response - -Field | Description -------|------------ -`Sequence` | The name of the sequence. -`CreateSequence` | The [`CREATE SEQUENCE`](create-sequence.html) statement for creating a copy of the specified sequence. - -## Example - -~~~ sql -> CREATE SEQUENCE desc_customer_list START -1 INCREMENT -2; -~~~ - -~~~ sql -> SHOW CREATE SEQUENCE desc_customer_list; -~~~ - -~~~ -+--------------------+----------------------------------------------------------------------------------------------------+ -| Sequence | CreateSequence | -+--------------------+----------------------------------------------------------------------------------------------------+ -| desc_customer_list | CREATE SEQUENCE desc_customer_list MINVALUE -9223372036854775808 MAXVALUE -1 INCREMENT -2 START -1 | -+--------------------+----------------------------------------------------------------------------------------------------+ -~~~ - -## See Also - -- [`CREATE SEQUENCE`](create-sequence.html) -- [`ALTER SEQUENCE`](alter-sequence.html) -- [`DROP SEQUENCE`](drop-sequence.html) -- [Information Schema](information-schema.html) -- [Other SQL Statements](sql-statements.html) diff --git a/src/current/v2.0/show-create-table.md b/src/current/v2.0/show-create-table.md deleted file mode 100644 index 031a34c04f3..00000000000 --- a/src/current/v2.0/show-create-table.md +++ /dev/null @@ -1,122 +0,0 @@ ---- -title: SHOW CREATE TABLE -summary: The SHOW CREATE TABLE statement shows the CREATE TABLE statement that would create a copy of the specified table. -toc: true ---- - -The `SHOW CREATE TABLE` [statement](sql-statements.html) shows the `CREATE TABLE` statement that would create a copy of the specified table. - - -## Required Privileges - -The user must have any [privilege](privileges.html) on the target table. - -## Synopsis - -
      -{% include {{ page.version.version }}/sql/diagrams/show_create_table.html %} -
- -## Parameters - -Parameter | Description -----------|------------ -`table_name` | The name of the table for which to show the `CREATE TABLE` statement. - -## Response - -Field | Description -------|------------ -`Table` | The name of the table. -`CreateTable` | The [`CREATE TABLE`](create-table.html) statement for creating a copy of the specified table. - -## Example - -~~~ sql -> CREATE TABLE customers (id INT PRIMARY KEY, email STRING UNIQUE); -~~~ - -~~~ sql -> CREATE TABLE products (sku STRING PRIMARY KEY, price DECIMAL(9,2)); -~~~ - -~~~ sql -> CREATE TABLE orders ( - id INT PRIMARY KEY, - product STRING NOT NULL REFERENCES products, - quantity INT, - customer INT NOT NULL CONSTRAINT valid_customer REFERENCES customers (id), - CONSTRAINT id_customer_unique UNIQUE (id, customer), - INDEX (product), - INDEX (customer) -); -~~~ - -~~~ sql -> SHOW CREATE TABLE customers; -~~~ - - -~~~ -+-----------+----------------------------------------------------+ -| Table | CreateTable | -+-----------+----------------------------------------------------+ -| customers | CREATE TABLE customers ( | -| | id INT NOT NULL, | -| | email STRING NULL, | -| | CONSTRAINT "primary" PRIMARY KEY (id ASC), | -| | UNIQUE INDEX customers_email_key (email ASC), | -| | FAMILY "primary" (id, email) | -| | ) | -+-----------+----------------------------------------------------+ -(1 row) -~~~ - -~~~ sql -> SHOW CREATE TABLE products; -~~~ - -~~~ -+----------+--------------------------------------------------+ -| Table | CreateTable | -+----------+--------------------------------------------------+ -| products | CREATE TABLE products ( | -| | sku STRING NOT NULL, | -| | price DECIMAL(9,2) NULL, | -| | CONSTRAINT "primary" PRIMARY KEY (sku ASC), | -| | FAMILY "primary" (sku, price) | -| | ) | -+----------+--------------------------------------------------+ -(1 row) -~~~ - -~~~ sql -> SHOW CREATE TABLE orders; -~~~ - -~~~ -+--------+------------------------------------------------------------------------------------------+ -| Table | CreateTable | -+--------+------------------------------------------------------------------------------------------+ -| orders | CREATE TABLE orders ( | -| | id INT NOT NULL, | -| | product STRING NOT NULL, | -| | quantity INT NULL, | -| | customer INT NOT NULL, | -| | CONSTRAINT "primary" PRIMARY KEY (id ASC), | -| | UNIQUE INDEX id_customer_unique (id ASC, customer ASC), | -| | CONSTRAINT fk_product_ref_products FOREIGN KEY (product) REFERENCES products (sku), | -| | INDEX orders_product_idx (product ASC), | -| | CONSTRAINT valid_customer FOREIGN KEY (customer) REFERENCES customers (id), | -| | INDEX orders_customer_idx (customer ASC), | -| | FAMILY "primary" (id, product, quantity, customer) | -| | ) | -+--------+------------------------------------------------------------------------------------------+ -(1 row) -~~~ - -## See Also - -- [`CREATE TABLE`](create-table.html) -- [Information Schema](information-schema.html) -- [Other SQL Statements](sql-statements.html) diff --git a/src/current/v2.0/show-create-view.md b/src/current/v2.0/show-create-view.md deleted file mode 100644 index fe5b2a782c0..00000000000 --- a/src/current/v2.0/show-create-view.md +++ /dev/null @@ -1,76 +0,0 @@ ---- -title: SHOW CREATE VIEW -summary: The SHOW CREATE VIEW statement shows the CREATE VIEW statement that would create a copy of the specified view. 
-toc: true ---- - -The `SHOW CREATE VIEW` [statement](sql-statements.html) shows the `CREATE VIEW` statement that would create a copy of the specified [view](views.html). - - -## Required Privileges - -The user must have any [privilege](privileges.html) on the target view. - -## Synopsis - -
      -{% include {{ page.version.version }}/sql/diagrams/show_create_view.html %} -
- -## Parameters - -Parameter | Description -----------|------------ -`view_name` | The name of the view for which to show the `CREATE VIEW` statement. - -## Response - -Field | Description -------|------------ -`View` | The name of the view. -`CreateView` | The [`CREATE VIEW`](create-view.html) statement for creating a copy of the specified view. - -## Examples - -### Show the `CREATE VIEW` statement for a view - -~~~ sql -> SHOW CREATE VIEW bank.user_accounts; -~~~ - -~~~ -+--------------------+---------------------------------------------------------------------------+ -| View | CreateView | -+--------------------+---------------------------------------------------------------------------+ -| bank.user_accounts | CREATE VIEW "bank.user_accounts" AS SELECT type, email FROM bank.accounts | -+--------------------+---------------------------------------------------------------------------+ -(1 row) -~~~ - -### Show just a view's `SELECT` statement - -To get just a view's `SELECT` statement, you can query the `views` table in the built-in `information_schema` database and filter on the view name: - -~~~ sql -> SELECT view_definition - FROM information_schema.views - WHERE table_name = 'user_accounts'; -~~~ - -~~~ -+---------------------------------------+ -| view_definition | -+---------------------------------------+ -| SELECT type, email FROM bank.accounts | -+---------------------------------------+ -(1 row) -~~~ - -## See Also - -- [Views](views.html) -- [`CREATE VIEW`](create-view.html) -- [`ALTER VIEW`](alter-view.html) -- [`DROP VIEW`](drop-view.html) -- [Information Schema](information-schema.html) -- [Other SQL Statements](sql-statements.html) diff --git a/src/current/v2.0/show-databases.md b/src/current/v2.0/show-databases.md deleted file mode 100644 index 1b778f31779..00000000000 --- a/src/current/v2.0/show-databases.md +++ /dev/null @@ -1,40 +0,0 @@ ---- -title: SHOW DATABASES -summary: The SHOW DATABASES statement lists all databases in the CockroachDB cluster. -keywords: reflection -toc: true ---- - -The `SHOW DATABASES` [statement](sql-statements.html) lists all databases in the CockroachDB cluster. - - -## Synopsis - -
      -{% include {{ page.version.version }}/sql/diagrams/show_databases.html %} -
- -## Required Privileges - -No [privileges](privileges.html) are required to list the databases in the CockroachDB cluster. - -## Example - -~~~ sql -> SHOW DATABASES; -~~~ -~~~ -+--------------------+ -| Database | -+--------------------+ -| bank | -| system | -+--------------------+ -(2 rows) -~~~ - -## See Also - -- [`SHOW SCHEMAS`](show-schemas.html) -- [Information Schema](information-schema.html) -- [Other SQL Statements](sql-statements.html) diff --git a/src/current/v2.0/show-experimental-ranges.md b/src/current/v2.0/show-experimental-ranges.md deleted file mode 100644 index 89be272c2b1..00000000000 --- a/src/current/v2.0/show-experimental-ranges.md +++ /dev/null @@ -1,124 +0,0 @@ ---- -title: SHOW EXPERIMENTAL_RANGES -summary: The SHOW EXPERIMENTAL_RANGES statement shows information about the ranges that make up a specific table's data. -toc: true ---- - -The `SHOW EXPERIMENTAL_RANGES` [statement](sql-statements.html) shows information about the [ranges](architecture/overview.html#glossary) that make up a specific table's data, including: - -- The start and end keys for the range(s) -- The range ID(s) -- Which nodes contain the range [replicas](architecture/overview.html#glossary) -- Which node contains the range that is the [leaseholder](architecture/overview.html#glossary) - -This information is useful for verifying that: - -- The ["follow-the-workload"](demo-follow-the-workload.html) feature is operating as expected. -- Range splits specified by the [`SPLIT AT`](split-at.html) statement were created as expected. - -{% include {{ page.version.version }}/misc/experimental-warning.md %} - - -## Synopsis - -
      - {% include {{ page.version.version }}/sql/diagrams/show_ranges.html %} -
      - -## Required Privileges - -The user must have the `SELECT` [privilege](privileges.html) on the target table. - -## Parameters - -Parameter | Description -----------|------------ -[`table_name`](sql-grammar.html#table_name) | The name of the table you want range information about. -[`table_name_with_index`](sql-grammar.html#table_name_with_index) | The name of the index you want range information about. - -## Examples - -The examples in this section operate on a hypothetical "user credit information" table filled with placeholder data, running on a 5-node cluster. - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE credit_users ( - id INT PRIMARY KEY, - area_code INTEGER NOT NULL, - name STRING UNIQUE NOT NULL, - address STRING NOT NULL, - zip_code INTEGER NOT NULL, - credit_score INTEGER NOT NULL -); -~~~ - -We added a secondary [index](indexes.html) to the table on the `area_code` column: - -{% include copy-clipboard.html %} -~~~ sql -> CREATE INDEX areaCode on credit_users(area_code); -~~~ - -Next, we ran a couple of [`SPLIT AT`s](split-at.html) on the table and the index: - -{% include copy-clipboard.html %} -~~~ sql -> ALTER TABLE credit_users SPLIT AT VALUES (5), (10), (15); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> ALTER INDEX credit_users@areaCode SPLIT AT VALUES (400), (600), (999); -~~~ - -{{site.data.alerts.callout_info}} -In the example output below, a `NULL` in the *Start Key* column means "beginning of table". -A `NULL` in the *End Key* column means "end of table". -{{site.data.alerts.end}} - -### Show Ranges for a Table (Primary Index) - -{% include copy-clipboard.html %} -~~~ sql -> SHOW EXPERIMENTAL_RANGES FROM TABLE credit_users; -~~~ - -~~~ -+-----------+---------+----------+----------+--------------+ -| Start Key | End Key | Range ID | Replicas | Lease Holder | -+-----------+---------+----------+----------+--------------+ -| NULL | /5 | 158 | {2,3,5} | 5 | -| /5 | /10 | 159 | {3,4,5} | 5 | -| /10 | /15 | 160 | {2,4,5} | 5 | -| /15 | NULL | 161 | {2,3,5} | 5 | -+-----------+---------+----------+----------+--------------+ -(4 rows) -~~~ - -### Show Ranges for an Index - -{% include copy-clipboard.html %} -~~~ sql -> SHOW EXPERIMENTAL_RANGES FROM INDEX credit_users@areaCode; -~~~ - -~~~ -+-----------+---------+----------+-----------+--------------+ -| Start Key | End Key | Range ID | Replicas | Lease Holder | -+-----------+---------+----------+-----------+--------------+ -| NULL | /400 | 135 | {2,4,5} | 2 | -| /400 | /600 | 136 | {2,4,5} | 4 | -| /600 | /999 | 137 | {1,3,4,5} | 3 | -| /999 | NULL | 72 | {2,3,4,5} | 4 | -+-----------+---------+----------+-----------+--------------+ -(4 rows) -~~~ - -## See Also - -- [`SPLIT AT`](split-at.html) -- [`CREATE TABLE`](create-table.html) -- [`CREATE INDEX`](create-index.html) -- [Indexes](indexes.html) -+ [Follow-the-Workload](demo-follow-the-workload.html) -+ [Architecture Overview](architecture/overview.html) diff --git a/src/current/v2.0/show-grants.md b/src/current/v2.0/show-grants.md deleted file mode 100644 index 0134fc36abe..00000000000 --- a/src/current/v2.0/show-grants.md +++ /dev/null @@ -1,244 +0,0 @@ ---- -title: SHOW GRANTS -summary: The SHOW GRANTS statement lists the privileges granted to users. -keywords: reflection -toc: true ---- - -The `SHOW GRANTS` [statement](sql-statements.html) lists the [privileges](privileges.html) granted to users. - - -## Synopsis - -
      {% include {{ page.version.version }}/sql/diagrams/show_grants.html %}
      - -## Required Privileges - -No [privileges](privileges.html) are required to view privileges granted to users. For `SHOW GRANTS ON ROLES`, the user must have the [`SELECT`](select-clause.html) [privilege](privileges.html) on the system table. - -## Parameters - -Parameter | Description -----------|------------ -`role_name` | A comma-separated list of role names. -`table_name` | A comma-separated list of table names. Alternately, to list privileges for all tables, use `*`. -`database_name` | A comma-separated list of database names. -`user_name` | An optional, comma-separated list of grantees. - -## Examples - -### Show All Grants New in v2.0 - -To list all grants for all users and roles on all databases and tables: - -{% include copy-clipboard.html %} -~~~ sql -> SHOW GRANTS; -~~~ -~~~ -+------------+--------------------+------------------+------------+------------+ -| Database | Schema | Table | User | Privileges | -+------------+--------------------+------------------+------------+------------+ -| system | crdb_internal | NULL | admin | GRANT | -| system | crdb_internal | NULL | admin | SELECT | -| system | crdb_internal | NULL | root | GRANT | -... -| test_roles | public | employees | system_ops | CREATE | -+------------+--------------------+------------------+------------+------------+ -(167 rows) -~~~ - -### Show a Specific User or Role's Grants New in v2.0 - -{% include copy-clipboard.html %} -~~~ sql -> SHOW GRANTS FOR maxroach; -~~~ -~~~ -+------------+--------------------+-------+----------+------------+ -| Database | Schema | Table | User | Privileges | -+------------+--------------------+-------+----------+------------+ -| test_roles | crdb_internal | NULL | maxroach | DELETE | -| test_roles | information_schema | NULL | maxroach | DELETE | -| test_roles | pg_catalog | NULL | maxroach | DELETE | -| test_roles | public | NULL | maxroach | DELETE | -+------------+--------------------+-------+----------+------------+ -~~~ - -### Show Grants on Databases - -**Specific database, all users and roles:** - -{% include copy-clipboard.html %} -~~~ sql -> SHOW GRANTS ON DATABASE db2: -~~~ -~~~ shell -+----------+--------------------+------------+------------+ -| Database | Schema | User | Privileges | -+----------+--------------------+------------+------------+ -| db2 | crdb_internal | admin | ALL | -| db2 | crdb_internal | betsyroach | CREATE | -| db2 | crdb_internal | root | ALL | -| db2 | information_schema | admin | ALL | -| db2 | information_schema | betsyroach | CREATE | -| db2 | information_schema | root | ALL | -| db2 | pg_catalog | admin | ALL | -| db2 | pg_catalog | betsyroach | CREATE | -| db2 | pg_catalog | root | ALL | -| db2 | public | admin | ALL | -| db2 | public | betsyroach | CREATE | -| db2 | public | root | ALL | -+----------+--------------------+------------+------------+ -~~~ - -**Specific database, specific user or role:** - -{% include copy-clipboard.html %} -~~~ sql -> SHOW GRANTS ON DATABASE db2 FOR betsyroach; -~~~ -~~~ shell -+----------+--------------------+------------+------------+ -| Database | Schema | User | Privileges | -+----------+--------------------+------------+------------+ -| db2 | crdb_internal | betsyroach | CREATE | -| db2 | information_schema | betsyroach | CREATE | -| db2 | pg_catalog | betsyroach | CREATE | -| db2 | public | betsyroach | CREATE | -+----------+--------------------+------------+------------+ -~~~ - -### Show Grants on Tables - -**Specific tables, all users and roles:** - -{% include copy-clipboard.html %} -~~~ sql -> 
SHOW GRANTS ON TABLE test_roles.employees; -~~~ - -~~~ shell -+------------+--------+-----------+------------+------------+ -| Database | Schema | Table | User | Privileges | -+------------+--------+-----------+------------+------------+ -| test_roles | public | employees | admin | ALL | -| test_roles | public | employees | root | ALL | -| test_roles | public | employees | system_ops | CREATE | -+------------+--------+-----------+------------+------------+ -~~~ - -**Specific tables, specific role or user:** - -{% include copy-clipboard.html %} -~~~ sql -> SHOW GRANTS ON TABLE test_roles.employees FOR system_ops; -~~~ -~~~ shell -+------------+--------+-----------+------------+------------+ -| Database | Schema | Table | User | Privileges | -+------------+--------+-----------+------------+------------+ -| test_roles | public | employees | system_ops | CREATE | -+------------+--------+-----------+------------+------------+ -~~~ - -**All tables, all users and roles:** - -{% include copy-clipboard.html %} -~~~ sql -> SHOW GRANTS ON TABLE test_roles.*; -~~~ - -~~~ shell -+------------+--------+-----------+------------+------------+ -| Database | Schema | Table | User | Privileges | -+------------+--------+-----------+------------+------------+ -| test_roles | public | employees | admin | ALL | -| test_roles | public | employees | root | ALL | -| test_roles | public | employees | system_ops | CREATE | -+------------+--------+-----------+------------+------------+ -~~~ - -**All tables, specific users or roles:** - -{% include copy-clipboard.html %} -~~~ sql -> SHOW GRANTS ON TABLE test_roles.* FOR system_ops; -~~~ - -~~~ shell -+------------+--------+-----------+------------+------------+ -| Database | Schema | Table | User | Privileges | -+------------+--------+-----------+------------+------------+ -| test_roles | public | employees | system_ops | CREATE | -+------------+--------+-----------+------------+------------+ -~~~ - -### Show Role Memberships New in v2.0 - -**All members of all roles:** - -{% include copy-clipboard.html %} -~~~ sql -SHOW GRANTS ON ROLE; -~~~ -~~~ -+--------+---------+---------+ -| role | member | isAdmin | -+--------+---------+---------+ -| admin | root | true | -| design | ernie | false | -| design | lola | false | -| dev | barkley | false | -| dev | carl | false | -| docs | carl | false | -| hr | finance | false | -| hr | lucky | false | -+--------+---------+---------+ -~~~ - -**Members of a specific role:** - -{% include copy-clipboard.html %} -~~~ sql -SHOW GRANTS ON ROLE design; -~~~ -~~~ -+--------+--------+---------+ -| role | member | isAdmin | -+--------+--------+---------+ -| design | ernie | false | -| design | lola | false | -+--------+--------+---------+ -~~~ - -**Roles of a specific user or role:** - -{% include copy-clipboard.html %} -~~~ sql -SHOW GRANTS ON ROLE FOR carl; -~~~ -~~~ -+------+--------+---------+ -| role | member | isAdmin | -+------+--------+---------+ -| dev | carl | false | -| docs | carl | false | -+------+--------+---------+ -~~~ - -## See Also - -- [`CREATE ROLE`](create-role.html) -- [`DROP ROLE`](drop-role.html) -- [`SHOW ROLES`](show-roles.html) -- [`GRANT `](grant.html) -- [`GRANT ` (Enterprise)](grant-roles.html) -- [`REVOKE `](revoke.html) -- [`REVOKE ` (Enterprise)](revoke-roles.html) -- [`SHOW GRANTS`](show-grants.html) -- [Manage Users](create-and-manage-users.html) -- [Manage Roles](roles.html) -- [Privileges](privileges.html) -- [Other Cockroach Commands](cockroach-commands.html) -- [Information 
Schema](information-schema.html) diff --git a/src/current/v2.0/show-index.md b/src/current/v2.0/show-index.md deleted file mode 100644 index 897be970abe..00000000000 --- a/src/current/v2.0/show-index.md +++ /dev/null @@ -1,82 +0,0 @@ ---- -title: SHOW INDEX -summary: The SHOW INDEX statement returns index information for a table. -toc: true ---- - -The `SHOW INDEX` [statement](sql-statements.html) returns index information for a table. - - -## Required Privileges - -The user must have any [privilege](privileges.html) on the target table. - -## Aliases - -In CockroachDB, the following are aliases for `SHOW INDEX`: - -- `SHOW INDEXES` -- `SHOW KEYS` - -## Synopsis - -
      -{% include {{ page.version.version }}/sql/diagrams/show_index.html %} -
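Because of the aliases listed above, the following statements are all equivalent (using the `t1` table from the example below):

~~~ sql
> SHOW INDEX FROM t1;
> SHOW INDEXES FROM t1;
> SHOW KEYS FROM t1;
~~~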
      - -## Parameters - -Parameter | Description -----------|------------ -`table_name` | The name of the table for which you want to show indexes. - -## Response - -The following fields are returned for each column in each index. - -Field | Description -----------|------------ -`Table` | The name of the table. -`Name` | The name of the index. -`Unique` | Whether or not values in the indexed column are unique. Possible values: `true` or `false`. -`Seq` | The position of the column in the index, starting with 1. -`Column` | The indexed column. -`Direction` | How the column is sorted in the index. Possible values: `ASC` or `DESC` for indexed columns; `N/A` for stored columns. -`Storing` | Whether or not the `STORING` clause was used to index the column during [index creation](create-index.html). Possible values: `true` or `false`. -`Implicit` | Whether or not the column is part of the index despite not being explicitly included during [index creation](create-index.html). Possible values: `true` or `false`

      At this time, [primary key](primary-key.html) columns are the only columns that get implicitly included in secondary indexes. The inclusion of primary key columns improves performance when retrieving columns not in the index. - -## Examples - -~~~ sql -> CREATE TABLE t1 ( - a INT PRIMARY KEY, - b DECIMAL, - c TIMESTAMP, - d STRING - ); - -> CREATE INDEX b_c_idx ON t1 (b, c) STORING (d); - -> SHOW INDEX FROM t1; -~~~ - -~~~ -+-------+---------+--------+-----+--------+-----------+---------+----------+ -| Table | Name | Unique | Seq | Column | Direction | Storing | Implicit | -+-------+---------+--------+-----+--------+-----------+---------+----------+ -| t1 | primary | true | 1 | a | ASC | false | false | -| t1 | b_c_idx | false | 1 | b | ASC | false | false | -| t1 | b_c_idx | false | 2 | c | ASC | false | false | -| t1 | b_c_idx | false | 3 | d | N/A | true | false | -| t1 | b_c_idx | false | 4 | a | ASC | false | true | -+-------+---------+--------+-----+--------+-----------+---------+----------+ -(5 rows) -~~~ - -## See Also - -- [`CREATE INDEX`](create-index.html) -- [`DROP INDEX`](drop-index.html) -- [`RENAME INDEX`](rename-index.html) -- [Information Schema](information-schema.html) -- [Other SQL Statements](sql-statements.html) diff --git a/src/current/v2.0/show-jobs.md b/src/current/v2.0/show-jobs.md deleted file mode 100644 index 7c608f19001..00000000000 --- a/src/current/v2.0/show-jobs.md +++ /dev/null @@ -1,84 +0,0 @@ ---- -title: SHOW JOBS -summary: The SHOW JOBS statement lists all currently active schema changes and backup/restore jobs. -toc: true ---- - -New in v1.1: The `SHOW JOBS` [statement](sql-statements.html) lists all of the types of long-running tasks your cluster has performed, including: - -- Schema changes through `ALTER TABLE`. -- [`IMPORT`](import.html). -- Enterprise [`BACKUP`](backup.html) and [`RESTORE`](restore.html). - -These details can help you understand the status of crucial tasks that can impact the performance of your cluster, as well as help you control them. - -{{site.data.alerts.callout_info}} The SHOW JOBS statement shows only long-running tasks. For an exhaustive list of jobs running in the cluster, use the SQL Audit Logging (Experimental) feature.{{site.data.alerts.end}} - - -## Required Privileges - -By default, only the `root` user can execute `SHOW JOBS`. - -## Synopsis - -
      -{% include {{ page.version.version }}/sql/diagrams/show_jobs.html %} -
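Once you have a job's `id` from this statement (see the response fields below), you can pass it to the job-control statements; the ID here is the one from the example output below:

~~~ sql
> PAUSE JOB 27536791415282;
> RESUME JOB 27536791415282;
~~~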
- -## Response - -The following fields are returned for each job: - -Field | Description -------|------------ -`id` | A unique ID to identify each job. This value is used if you want to control jobs (i.e., [pause](pause-job.html), [resume](resume-job.html), or [cancel](cancel-job.html) it). -`type` | The type of job. Possible values: `SCHEMA CHANGE`, [`BACKUP`](backup.html), [`RESTORE`](restore.html), or [`IMPORT`](import.html). -`description` | The command that started the job. -`username` | The user who started the job. -`status` | The job's current state. Possible values: `pending`, `running`, `paused`, `failed`, `succeeded`, or `canceled`. -`created` | The `TIMESTAMP` when the job was created. -`started` | The `TIMESTAMP` when the job first began running. -`finished` | The `TIMESTAMP` when the job reached the `succeeded`, `failed`, or `canceled` state. -`modified` | The `TIMESTAMP` when the job record was last modified. -`fraction_completed` | The fraction (between `0.00` and `1.00`) of the job that's been completed. -`error` | If the job `failed`, the error generated by the failure. - -## Examples - -### Show Jobs - -~~~ sql -> SHOW JOBS; -~~~ -~~~ -+----------------+---------+-------------------------------------------+... -| id | type | description |... -+----------------+---------+-------------------------------------------+... -| 27536791415282 | RESTORE | RESTORE db.* FROM 'azure://backup/db/tbl' |... -+----------------+---------+-------------------------------------------+... -~~~ - -### Filter Jobs - -You can filter jobs by using `SHOW JOBS` as the data source for a [`SELECT`](select-clause.html) statement, and then filtering the values with the `WHERE` clause. - -~~~ sql -> SELECT * FROM [SHOW JOBS] WHERE type = 'RESTORE' AND status IN ('running', 'failed') ORDER BY created DESC; -~~~ -~~~ -+----------------+---------+-------------------------------------------+... -| id | type | description |... -+----------------+---------+-------------------------------------------+... -| 27536791415282 | RESTORE | RESTORE db.* FROM 'azure://backup/db/tbl' |... -+----------------+---------+-------------------------------------------+... -~~~ - - -## See Also - -- [`PAUSE JOB`](pause-job.html) -- [`RESUME JOB`](resume-job.html) -- [`CANCEL JOB`](cancel-job.html) -- [`ALTER TABLE`](alter-table.html) -- [`BACKUP`](backup.html) -- [`RESTORE`](restore.html) diff --git a/src/current/v2.0/show-queries.md b/src/current/v2.0/show-queries.md deleted file mode 100644 index 32fc4c9ae1f..00000000000 --- a/src/current/v2.0/show-queries.md +++ /dev/null @@ -1,209 +0,0 @@ ---- -title: SHOW QUERIES -summary: The SHOW QUERIES statement lists all currently active queries across the cluster or on the local node. -toc: true ---- - -New in v1.1: The `SHOW QUERIES` [statement](sql-statements.html) lists details about currently active SQL queries, including: - -- The internal ID of the query -- The node executing the query -- The SQL query itself -- How long the query has been running -- The client address, application name, and user that issued the query - -These details let you monitor the progress of active queries and, if necessary, identify those that may need to be [cancelled](cancel-query.html) to prevent unwanted resource consumption. - -{{site.data.alerts.callout_info}}Schema changes and BACKUP/RESTORE statements are not executed as queries internally and so are not listed by SHOW QUERIES. 
To monitor such statements, use SHOW JOBS instead.{{site.data.alerts.end}} - - -## Required Privileges - -No [privileges](privileges.html) are required to execute this statement. However, note that non-`root` users see only their own currently active queries, whereas the `root` user sees all users' currently active queries. - -## Synopsis - -
      {% include {{ page.version.version }}/sql/diagrams/show_queries.html %}
      - -- To list the active queries across all nodes of the cluster, use `SHOW QUERIES` or `SHOW CLUSTER QUERIES`. -- To list the active queries just on the local node, use `SHOW LOCAL QUERIES`. - -## Response - -The following fields are returned for each query: - -Field | Description -------|------------ -`query_id` | The ID of the query. -`node_id` | The ID of the node connected to. -`username` | The username of the connected user. -`start` | The timestamp at which the query started. -`query` | The SQL query. -`client_address` | The address and port of the client that issued the SQL query. -`application_name` | The [application name](set-vars.html#supported-variables) specified by the client, if any. For queries from the [built-in SQL client](use-the-built-in-sql-client.html), this will be `cockroach`. -`distributed` | If `true`, the query is being executed by the Distributed SQL (DistSQL) engine. If `false`, the query is being executed by the standard "local" SQL engine. If `NULL`, the query is being prepared and it's not yet known which execution engine will be used. -`phase` | The phase of the query's execution. If `preparing`, the statement is being parsed and planned. If `executing`, the statement is being executed. - -## Examples - -### List Queries Across the Cluster - -{% include copy-clipboard.html %} -~~~ sql -> SHOW CLUSTER QUERIES; -~~~ - -~~~ -+----------------------------------+---------+----------+----------------------------------+-------------------------------------------+---------------------+------------------+-------------+-----------+ -| query_id | node_id | username | start | query | client_address | application_name | distributed | phase | -+----------------------------------+---------+----------+----------------------------------+-------------------------------------------+---------------------+------------------+-------------+-----------+ -| 14db657443230c3e0000000000000001 | 1 | root | 2017-08-16 18:00:50.675151+00:00 | UPSERT INTO test.kv(k, v) VALUES ($1, $2) | 192.168.12.56:54119 | test_app | false | executing | -| 14db657443b68c7d0000000000000001 | 1 | root | 2017-08-16 18:00:50.684818+00:00 | UPSERT INTO test.kv(k, v) VALUES ($1, $2) | 192.168.12.56:54123 | test_app | false | executing | -| 14db65744382c2340000000000000001 | 1 | root | 2017-08-16 18:00:50.681431+00:00 | UPSERT INTO test.kv(k, v) VALUES ($1, $2) | 192.168.12.56:54103 | test_app | false | executing | -| 14db657443c9dc660000000000000001 | 1 | root | 2017-08-16 18:00:50.686083+00:00 | SHOW CLUSTER QUERIES | 192.168.12.56:54108 | cockroach | NULL | preparing | -| 14db657443e30a850000000000000003 | 3 | root | 2017-08-16 18:00:50.68774+00:00 | UPSERT INTO test.kv(k, v) VALUES ($1, $2) | 192.168.12.58:54118 | test_app | false | executing | -| 14db6574439f477d0000000000000003 | 3 | root | 2017-08-16 18:00:50.6833+00:00 | UPSERT INTO test.kv(k, v) VALUES ($1, $2) | 192.168.12.58:54122 | test_app | false | executing | -| 14db6574435817d20000000000000002 | 2 | root | 2017-08-16 18:00:50.678629+00:00 | UPSERT INTO test.kv(k, v) VALUES ($1, $2) | 192.168.12.57:54121 | test_app | false | executing | -| 14db6574433c621f0000000000000002 | 2 | root | 2017-08-16 18:00:50.676813+00:00 | UPSERT INTO test.kv(k, v) VALUES ($1, $2) | 192.168.12.57:54124 | test_app | false | executing | -| 14db6574436f71d50000000000000002 | 2 | root | 2017-08-16 18:00:50.680165+00:00 | UPSERT INTO test.kv(k, v) VALUES ($1, $2) | 192.168.12.57:54117 | test_app | false | executing | 
-+----------------------------------+---------+----------+----------------------------------+-------------------------------------------+---------------------+------------------+-------------+-----------+ -(9 rows) -~~~ - -Alternatively, you can use `SHOW QUERIES` to receive the same response. - -### List Queries on the Local Node - -{% include copy-clipboard.html %} -~~~ sql -> SHOW LOCAL QUERIES; -~~~ - -~~~ -+----------------------------------+---------+----------+----------------------------------+-------------------------------------------+---------------------+------------------+-------------+-----------+ -| query_id | node_id | username | start | query | client_address | application_name | distributed | phase | -+----------------------------------+---------+----------+----------------------------------+-------------------------------------------+---------------------+------------------+-------------+-----------+ -| 14db657cd9005cb90000000000000001 | 1 | root | 2017-08-16 18:01:27.5492+00:00 | UPSERT INTO test.kv(k, v) VALUES ($1, $2) | 192.168.12.56:54103 | test_app | false | executing | -| 14db657cd8d7d9a50000000000000001 | 1 | root | 2017-08-16 18:01:27.546538+00:00 | UPSERT INTO test.kv(k, v) VALUES ($1, $2) | 192.168.12.56:54119 | test_app | false | executing | -| 14db657cd8e966c40000000000000001 | 1 | root | 2017-08-16 18:01:27.547696+00:00 | UPSERT INTO test.kv(k, v) VALUES ($1, $2) | 192.168.12.56:54123 | test_app | false | executing | -| 14db657cd92ad8f80000000000000001 | 1 | root | 2017-08-16 18:01:27.551986+00:00 | SHOW LOCAL QUERIES | 192.168.12.56:54122 | cockroach | NULL | preparing | -+----------------------------------+---------+----------+----------------------------------+-------------------------------------------+---------------------+------------------+-------------+-----------+ -(4 rows) -~~~ - -### Filter for Specific Queries - -You can use a [`SELECT`](select-clause.html) statement to filter the list of active queries by one or more of the [response fields](#response). 
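-
-For instance, to surface only queries that are still being prepared, filter on the `phase` values documented in the [response fields](#response):
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT query_id, node_id, query FROM [SHOW CLUSTER QUERIES] WHERE phase = 'preparing';
-~~~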
- -#### Show all queries on node 2 - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM [SHOW CLUSTER QUERIES] - WHERE node_id = 2; -~~~ - -~~~ -+----------------------------------+---------+----------+----------------------------------+-------------------------------------------+---------------------+------------------+-------------+-----------+ -| query_id | node_id | username | start | query | client_address | application_name | distributed | phase | -+----------------------------------+---------+----------+----------------------------------+-------------------------------------------+---------------------+------------------+-------------+-----------+ -| 14db6574435817d20000000000000002 | 2 | root | 2017-08-16 18:00:50.678629+00:00 | UPSERT INTO test.kv(k, v) VALUES ($1, $2) | 192.168.12.57:54121 | test_app | false | executing | -| 14db6574433c621f0000000000000002 | 2 | root | 2017-08-16 18:00:50.676813+00:00 | UPSERT INTO test.kv(k, v) VALUES ($1, $2) | 192.168.12.57:54124 | test_app | false | executing | -| 14db6574436f71d50000000000000002 | 2 | root | 2017-08-16 18:00:50.680165+00:00 | UPSERT INTO test.kv(k, v) VALUES ($1, $2) | 192.168.12.57:54117 | test_app | false | executing | -+----------------------------------+---------+----------+----------------------------------+-------------------------------------------+---------------------+------------------+-------------+-----------+ -(3 rows) -~~~ - -#### Show all queries that have been running for more than 3 hours - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM [SHOW CLUSTER QUERIES] - WHERE start < (now() - INTERVAL '3 hours'); -~~~ - -~~~ -+----------------------------------+---------+----------+----------------------------------+----------------------------------+--------------------+------------------+-------------+-----------+ -| query_id | node_id | username | start | query | client_address | application_name | distributed | phase | -+----------------------------------+---------+----------+----------------------------------+----------------------------------+--------------------+------------------+-------------+-----------+ -| 14dacc1f9a781e3d0000000000000001 | 2 | mroach | 2017-08-10 11:34:32.778412+00:00 | SELECT * FROM test.kv ORDER BY k | 192.168.0.72:56194 | test_app | false | executing | -+----------------------------------+---------+----------+----------------------------------+----------------------------------+--------------------+------------------+-------------+-----------+ -~~~ - -#### Show all queries from a specific address and user - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM [SHOW CLUSTER QUERIES] - WHERE client_address = '192.168.0.72:56194' - AND username = 'mroach'; -~~~ - -~~~ -+----------------------------------+---------+----------+----------------------------------+----------------------------------+--------------------+------------------+-------------+-----------+ -| query_id | node_id | username | start | query | client_address | application_name | distributed | phase | -+----------------------------------+---------+----------+----------------------------------+----------------------------------+--------------------+------------------+-------------+-----------+ -| 14dacc1f9a781e3d0000000000000001 | 2 | mroach | 2017-08-10 14:08:22.878113+00:00 | SELECT * FROM test.kv ORDER BY k | 192.168.0.72:56194 | test_app | false | executing | 
-+----------------------------------+---------+----------+----------------------------------+----------------------------------+--------------------+------------------+-------------+-----------+ -~~~ - -#### Exclude queries from the built-in SQL client - -To exclude queries from the [built-in SQL client](use-the-built-in-sql-client.html), filter for queries that do not show `cockroach` as the `application_name`: - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM [SHOW CLUSTER QUERIES] - WHERE application_name != 'cockroach'; -~~~ - -~~~ -+----------------------------------+---------+----------+----------------------------------+-------------------------------------------+---------------------+------------------+-------------+-----------+ -| query_id | node_id | username | start | query | client_address | application_name | distributed | phase | -+----------------------------------+---------+----------+----------------------------------+-------------------------------------------+---------------------+------------------+-------------+-----------+ -| 14db657443230c3e0000000000000001 | 1 | root | 2017-08-16 18:00:50.675151+00:00 | UPSERT INTO test.kv(k, v) VALUES ($1, $2) | 192.168.12.56:54119 | test_app | false | executing | -| 14db657443b68c7d0000000000000001 | 1 | root | 2017-08-16 18:00:50.684818+00:00 | UPSERT INTO test.kv(k, v) VALUES ($1, $2) | 192.168.12.56:54123 | test_app | false | executing | -| 14db65744382c2340000000000000001 | 1 | root | 2017-08-16 18:00:50.681431+00:00 | UPSERT INTO test.kv(k, v) VALUES ($1, $2) | 192.168.12.56:54103 | test_app | false | executing | -| 14db657443e30a850000000000000003 | 3 | root | 2017-08-16 18:00:50.68774+00:00 | UPSERT INTO test.kv(k, v) VALUES ($1, $2) | 192.168.12.58:54118 | test_app | false | executing | -| 14db6574439f477d0000000000000003 | 3 | root | 2017-08-16 18:00:50.6833+00:00 | UPSERT INTO test.kv(k, v) VALUES ($1, $2) | 192.168.12.58:54122 | test_app | false | executing | -| 14db6574435817d20000000000000002 | 2 | root | 2017-08-16 18:00:50.678629+00:00 | UPSERT INTO test.kv(k, v) VALUES ($1, $2) | 192.168.12.57:54121 | test_app | false | executing | -| 14db6574433c621f0000000000000002 | 2 | root | 2017-08-16 18:00:50.676813+00:00 | UPSERT INTO test.kv(k, v) VALUES ($1, $2) | 192.168.12.57:54124 | test_app | false | executing | -| 14db6574436f71d50000000000000002 | 2 | root | 2017-08-16 18:00:50.680165+00:00 | UPSERT INTO test.kv(k, v) VALUES ($1, $2) | 192.168.12.57:54117 | test_app | false | executing | -+----------------------------------+---------+----------+----------------------------------+-------------------------------------------+---------------------+------------------+-------------+-----------+ -(8 rows) -~~~ - -### Cancel a Query - -When you see a query that is taking too long to complete, you can use the [`CANCEL QUERY`](cancel-query.html) statement to stop it. 
- -For example, let's say you use `SHOW CLUSTER QUERIES` to find queries that have been running for more than 3 hours: - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM [SHOW CLUSTER QUERIES] - WHERE start < (now() - INTERVAL '3 hours'); -~~~ - -~~~ -+----------------------------------+---------+----------+----------------------------------+----------------------------------+--------------------+------------------+-------------+-----------+ -| query_id | node_id | username | start | query | client_address | application_name | distributed | phase | -+----------------------------------+---------+----------+----------------------------------+----------------------------------+--------------------+------------------+-------------+-----------+ -| 14dacc1f9a781e3d0000000000000001 | 2 | mroach | 2017-08-10 11:34:32.778412+00:00 | SELECT * FROM test.kv ORDER BY k | 192.168.0.72:56194 | test_app | false | executing | -+----------------------------------+---------+----------+----------------------------------+----------------------------------+--------------------+------------------+-------------+-----------+ -~~~ - -To cancel this long-running query, and stop it from consuming resources, you note the `query_id` and use it with the `CANCEL QUERY` statement: - -{% include copy-clipboard.html %} -~~~ sql -> CANCEL QUERY '14dacc1f9a781e3d0000000000000001'; -~~~ - -## See Also - -- [Manage Long-Running Queries](manage-long-running-queries.html) -- [`CANCEL QUERY`](cancel-query.html) -- [`SHOW SESSIONS`](show-sessions.html) -- [`SHOW JOBS`](show-jobs.html) -- [Other SQL Statements](sql-statements.html) diff --git a/src/current/v2.0/show-roles.md b/src/current/v2.0/show-roles.md deleted file mode 100644 index f86dbeb80b5..00000000000 --- a/src/current/v2.0/show-roles.md +++ /dev/null @@ -1,42 +0,0 @@ ---- -title: SHOW ROLES -summary: The SHOW ROLES statement lists the roles for all databases. -toc: true ---- - -New in v2.0: The `SHOW ROLES` [statement](sql-statements.html) lists the roles for all databases. - - -## Synopsis - -
      {% include {{ page.version.version }}/sql/diagrams/show_roles.html %}
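-
-Assuming the square-bracket pattern used by the other `SHOW` statements in these docs also applies here, the output can be filtered with a [`SELECT`](select-clause.html) statement; a sketch, using the `dev_ops` role from the example below:
-
-~~~ sql
-> SELECT * FROM [SHOW ROLES] WHERE rolename = 'dev_ops';
-~~~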
-
-## Required Privileges
-
-The user must have the [`SELECT`](select-clause.html) [privilege](privileges.html) on the system table.
-
-## Example
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SHOW ROLES;
-~~~
-~~~
-+----------+
-| rolename |
-+----------+
-| admin    |
-| dev_ops  |
-+----------+
-~~~
-
-## See Also
-
-- [`CREATE ROLE` (Enterprise)](create-role.html)
-- [`DROP ROLE` (Enterprise)](drop-role.html)
-- [`GRANT <privileges>`](grant.html)
-- [`GRANT <roles>` (Enterprise)](grant-roles.html)
-- [`REVOKE <privileges>`](revoke.html)
-- [`REVOKE <roles>` (Enterprise)](revoke-roles.html)
diff --git a/src/current/v2.0/show-schemas.md b/src/current/v2.0/show-schemas.md
deleted file mode 100644
--- a/src/current/v2.0/show-schemas.md
+++ /dev/null
----
-title: SHOW SCHEMAS
-summary: The SHOW SCHEMAS statement lists the schemas in a database.
-toc: true
----
-
-New in v2.0: The `SHOW SCHEMAS` [statement](sql-statements.html) lists all [schemas](sql-name-resolution.html#logical-schemas-and-namespaces) in a database.
-
-## Required Privileges
-
-No [privileges](privileges.html) are required to list the schemas in a database.
-
-## Synopsis
-
      -{% include {{ page.version.version }}/sql/diagrams/show_schemas.html %} -
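-
-To list the schemas of a database other than the current one, pass its name directly, as the `name` parameter described below allows (using the `bank` database from the example on this page):
-
-~~~ sql
-> SHOW SCHEMAS FROM bank;
-~~~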
      - -## Parameters - -Parameter | Description -----------|------------ -`name` | The name of the database for which to show [schemas](sql-name-resolution.html#logical-schemas-and-namespaces). When omitted, the schemas in the [current database](sql-name-resolution.html#current-database) are listed. - -## Example - -~~~ sql -> SET DATABASE = bank; -~~~ - -~~~ sql -> SHOW SCHEMAS; -~~~ - -~~~ -+--------------------+ -| Schema | -+--------------------+ -| crdb_internal | -| information_schema | -| pg_catalog | -| public | -+--------------------+ -(4 rows) -~~~ - -## See Also - -- [Logical Schemas and Namespaces](sql-name-resolution.html) -- [`SHOW DATABASES`](show-databases.html) -- [Information Schema](information-schema.html) -- [Other SQL Statements](sql-statements.html) diff --git a/src/current/v2.0/show-sessions.md b/src/current/v2.0/show-sessions.md deleted file mode 100644 index fb703a73003..00000000000 --- a/src/current/v2.0/show-sessions.md +++ /dev/null @@ -1,193 +0,0 @@ ---- -title: SHOW SESSIONS -summary: The SHOW SESSIONS statement lists all currently active sessions across the cluster or on the local node. -toc: true ---- - -New in v1.1: The `SHOW SESSIONS` [statement](sql-statements.html) lists details about currently active sessions, including: - -- The address of the client that opened the session -- The node connected to -- How long the connection has been open -- Which queries are active in the session -- Which query has been running longest in the session - -These details let you monitor the overall state of client connections and identify those that may need further investigation or adjustment. - - -## Required Privileges - -No [privileges](privileges.html) are required to execute this statement. However, note that non-`root` users see only their own currently active sessions, whereas the `root` user sees all users' currently active sessions. - -## Synopsis - -
      {% include {{ page.version.version }}/sql/diagrams/show_sessions.html %}
      - -- To list the active sessions across all nodes of the cluster, use `SHOW SESSIONS` or `SHOW CLUSTER SESSIONS`. -- To list the active sessions just on the local node, use `SHOW LOCAL SESSIONS`. - -## Response - -The following fields are returned for each session: - -Field | Description -------|------------ -`node_id` | The ID of the node connected to. -`username` | The username of the connected user. -`client_address` | The address and port of the connected client. -`application_name` | The [application name](set-vars.html#supported-variables) specified by the client, if any. For sessions from the [built-in SQL client](use-the-built-in-sql-client.html), this will be `cockroach`. -`active_queries` | The SQL queries currently active in the session. -`last_active_query` | The most recently completed SQL query in the session. -`session_start` | The timestamp at which the session started. -`oldest_query_start` | The timestamp at which the oldest currently active SQL query in the session started. -`kv_txn` | The ID of the current key-value transaction for the session. - -## Examples - -### List Active Sessions Across the Cluster - -{% include copy-clipboard.html %} -~~~ sql -> SHOW CLUSTER SESSIONS; -~~~ - -~~~ -+---------+----------+--------------------+------------------+---------------------------------------------+--------------------------------------------|----------------------------------+----------------------------------+--------------------------------------+ -| node_id | username | client_address | application_name | active_queries | last_active_query | session_start | oldest_query_start | kv_txn | -+---------+----------+--------------------+------------------+---------------------------------------------+--------------------------------------------+----------------------------------+----------------------------------+--------------------------------------| -| 2 | mroach | 192.168.0.72:56194 | test_app | UPSERT INTO test.kv(k, v) VALUES ($1, $2); | SELECT k, v FROM test.kv WHERE k = $1; | 2017-08-10 14:08:22.878113+00:00 | 2017-08-10 14:08:44.648985+00:00 | 81fbdd4d-394c-4784-b540-97cd73910dba | -| 2 | mroach | 192.168.0.72:56201 | test_app | UPSERT INTO test.kv(k, v) VALUES ($1, $2); | SELECT k, v FROM test.kv WHERE k = $1; | 2017-08-10 14:08:22.878306+00:00 | 2017-08-10 14:08:44.653135+00:00 | 5aa6f141-5cae-468f-b16a-dfe8d4fb4bea | -| 2 | mroach | 192.168.0.72:56198 | test_app | UPSERT INTO test.kv(k, v) VALUES ($1, $2); | SELECT k, v FROM test.kv WHERE k = $1; | 2017-08-10 14:08:22.878464+00:00 | 2017-08-10 14:08:44.643749+00:00 | d8fedb88-fc21-4720-aabe-cd43ec204d88 | -| 3 | broach | 192.168.0.73:56199 | test_app | SELECT k, v FROM test.kv WHERE k = $1; | UPSERT INTO test.kv(k, v) VALUES ($1, $2); | 2017-08-10 14:08:22.878048+00:00 | 2017-08-10 14:08:44.655709+00:00 | NULL | -| 3 | broach | 192.168.0.73:56196 | test_app | UPSERT INTO test.kv(k, v) VALUES ($1, $2); | SELECT k, v FROM test.kv WHERE k = $1; | 2017-08-10 14:08:22.878166+00:00 | 2017-08-10 14:08:44.647464+00:00 | aded7717-94e1-4ac4-9d37-8765e3418e32 | -| 1 | lroach | 192.168.0.71:56180 | test_app | UPSERT INTO test.kv(k, v) VALUES ($1, $2); | SELECT k, v FROM test.kv WHERE k = $1; | 2017-08-10 14:08:22.87337+00:00 | 2017-08-10 14:08:44.64788+00:00 | f691c5dd-b29e-48ed-a1dd-6d7f71faa82e | -| 1 | lroach | 192.168.0.71:56197 | test_app | UPSERT INTO test.kv(k, v) VALUES ($1, $2); | SELECT k, v FROM test.kv WHERE k = $1; | 2017-08-10 14:08:22.877932+00:00 | 2017-08-10 14:08:44.644786+00:00 | 
86ae25ea-9abf-4f5e-ad96-0522178f4ce6 | -| 1 | lroach | 192.168.0.71:56200 | test_app | UPSERT INTO test.kv(k, v) VALUES ($1, $2); | SELECT k, v FROM test.kv WHERE k = $1; | 2017-08-10 14:08:22.878534+00:00 | 2017-08-10 14:08:44.653524+00:00 | 8ad972b6-4347-4128-9e52-8553f3491963 | -| 1 | root | 127.0.0.1:56211 | cockroach | SHOW CLUSTER SESSIONS; | | 2017-08-10 14:08:27.666826+00:00 | 2017-08-10 14:08:44.653355+00:00 | NULL | -+---------+----------+--------------------+------------------+---------------------------------------------+--------------------------------------------+----------------------------------+----------------------------------|--------------------------------------+ -(9 rows) -~~~ - -Alternatively, you can use `SHOW SESSIONS` to receive the same response. - -### List Active Sessions on the Local Node - -{% include copy-clipboard.html %} -~~~ sql -> SHOW LOCAL SESSIONS; -~~~ - -~~~ -+---------+----------+--------------------+------------------+---------------------------------------------+--------------------------------------------|----------------------------------+----------------------------------+--------------------------------------+ -| node_id | username | client_address | application_name | active_queries | last_active_query | session_start | oldest_query_start | kv_txn | -+---------+----------+--------------------+------------------+---------------------------------------------+--------------------------------------------+----------------------------------+----------------------------------+--------------------------------------| -| 1 | lroach | 192.168.0.71:56180 | test_app | UPSERT INTO test.kv(k, v) VALUES ($1, $2); | SELECT k, v FROM test.kv WHERE k = $1; | 2017-08-10 14:08:22.87337+00:00 | 2017-08-10 14:08:44.64788+00:00 | f691c5dd-b29e-48ed-a1dd-6d7f71faa82e | -| 1 | lroach | 192.168.0.71:56197 | test_app | UPSERT INTO test.kv(k, v) VALUES ($1, $2); | SELECT k, v FROM test.kv WHERE k = $1; | 2017-08-10 14:08:22.877932+00:00 | 2017-08-10 14:08:44.644786+00:00 | 86ae25ea-9abf-4f5e-ad96-0522178f4ce6 | -| 1 | lroach | 192.168.0.71:56200 | test_app | UPSERT INTO test.kv(k, v) VALUES ($1, $2); | SELECT k, v FROM test.kv WHERE k = $1; | 2017-08-10 14:08:22.878534+00:00 | 2017-08-10 14:08:44.653524+00:00 | 8ad972b6-4347-4128-9e52-8553f3491963 | -| 1 | root | 127.0.0.1:56211 | cockroach | SHOW CLUSTER SESSIONS; | | 2017-08-10 14:08:27.666826+00:00 | 2017-08-10 14:08:44.653355+00:00 | NULL | -+---------+----------+--------------------+------------------+---------------------------------------------+--------------------------------------------+----------------------------------+----------------------------------|--------------------------------------+ -(4 rows) -~~~ - -### Filter for Specific Sessions - -You can use a [`SELECT`](select-clause.html) statement to filter the list of currently active sessions by one or more of the [response fields](#response). 
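-
-For instance, to find sessions that have been open for more than 1 hour, filter on the `session_start` field documented above:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT node_id, username, session_start FROM [SHOW CLUSTER SESSIONS] WHERE session_start < (now() - INTERVAL '1 hour');
-~~~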
- -#### Show sessions associated with a specific user - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM [SHOW CLUSTER SESSIONS] WHERE username = 'mroach'; -~~~ - -~~~ -+---------+----------+--------------------+------------------+---------------------------------------------+--------------------------------------------|----------------------------------+----------------------------------+--------------------------------------+ -| node_id | username | client_address | application_name | active_queries | last_active_query | session_start | oldest_query_start | kv_txn | -+---------+----------+--------------------+------------------+---------------------------------------------+--------------------------------------------+----------------------------------+----------------------------------+--------------------------------------| -| 2 | mroach | 192.168.0.72:56194 | test_app | UPSERT INTO test.kv(k, v) VALUES ($1, $2); | SELECT k, v FROM test.kv WHERE k = $1; | 2017-08-10 14:08:22.878113+00:00 | 2017-08-10 14:08:44.648985+00:00 | 81fbdd4d-394c-4784-b540-97cd73910dba | -| 2 | mroach | 192.168.0.72:56201 | test_app | UPSERT INTO test.kv(k, v) VALUES ($1, $2); | SELECT k, v FROM test.kv WHERE k = $1; | 2017-08-10 14:08:22.878306+00:00 | 2017-08-10 14:08:44.653135+00:00 | 5aa6f141-5cae-468f-b16a-dfe8d4fb4bea | -| 2 | mroach | 192.168.0.72:56198 | test_app | UPSERT INTO test.kv(k, v) VALUES ($1, $2); | SELECT k, v FROM test.kv WHERE k = $1; | 2017-08-10 14:08:22.878464+00:00 | 2017-08-10 14:08:44.643749+00:00 | d8fedb88-fc21-4720-aabe-cd43ec204d88 | -+---------+----------+--------------------+------------------+---------------------------------------------+--------------------------------------------+----------------------------------+----------------------------------|--------------------------------------+ -(3 rows) -~~~ - -#### Exclude sessions from the built-in SQL client - -To exclude sessions from the [built-in SQL client](use-the-built-in-sql-client.html), filter for sessions that do not show `cockroach` as the `application_name`: - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM [SHOW CLUSTER SESSIONS] - WHERE application_name != 'cockroach'; -~~~ - -~~~ -+---------+----------+--------------------+------------------+---------------------------------------------+--------------------------------------------|----------------------------------+----------------------------------+--------------------------------------+ -| node_id | username | client_address | application_name | active_queries | last_active_query | session_start | oldest_query_start | kv_txn | -+---------+----------+--------------------+------------------+---------------------------------------------+--------------------------------------------+----------------------------------+----------------------------------+--------------------------------------| -| 2 | mroach | 192.168.0.72:56194 | test_app | UPSERT INTO test.kv(k, v) VALUES ($1, $2); | SELECT k, v FROM test.kv WHERE k = $1; | 2017-08-10 14:08:22.878113+00:00 | 2017-08-10 14:08:44.648985+00:00 | 81fbdd4d-394c-4784-b540-97cd73910dba | -| 2 | mroach | 192.168.0.72:56201 | test_app | UPSERT INTO test.kv(k, v) VALUES ($1, $2); | SELECT k, v FROM test.kv WHERE k = $1; | 2017-08-10 14:08:22.878306+00:00 | 2017-08-10 14:08:44.653135+00:00 | 5aa6f141-5cae-468f-b16a-dfe8d4fb4bea | -| 2 | mroach | 192.168.0.72:56198 | test_app | UPSERT INTO test.kv(k, v) VALUES ($1, $2); | SELECT k, v FROM test.kv WHERE k = $1; | 2017-08-10 14:08:22.878464+00:00 | 2017-08-10 
14:08:44.643749+00:00 | d8fedb88-fc21-4720-aabe-cd43ec204d88 | -| 3 | broach | 192.168.0.73:56199 | test_app | SELECT k, v FROM test.kv WHERE k = $1; | UPSERT INTO test.kv(k, v) VALUES ($1, $2); | 2017-08-10 14:08:22.878048+00:00 | 2017-08-10 14:08:44.655709+00:00 | NULL | -| 3 | broach | 192.168.0.73:56196 | test_app | UPSERT INTO test.kv(k, v) VALUES ($1, $2); | SELECT k, v FROM test.kv WHERE k = $1; | 2017-08-10 14:08:22.878166+00:00 | 2017-08-10 14:08:44.647464+00:00 | aded7717-94e1-4ac4-9d37-8765e3418e32 | -| 1 | lroach | 192.168.0.71:56180 | test_app | UPSERT INTO test.kv(k, v) VALUES ($1, $2); | SELECT k, v FROM test.kv WHERE k = $1; | 2017-08-10 14:08:22.87337+00:00 | 2017-08-10 14:08:44.64788+00:00 | f691c5dd-b29e-48ed-a1dd-6d7f71faa82e | -| 1 | lroach | 192.168.0.71:56197 | test_app | UPSERT INTO test.kv(k, v) VALUES ($1, $2); | SELECT k, v FROM test.kv WHERE k = $1; | 2017-08-10 14:08:22.877932+00:00 | 2017-08-10 14:08:44.644786+00:00 | 86ae25ea-9abf-4f5e-ad96-0522178f4ce6 | -| 1 | lroach | 192.168.0.71:56200 | test_app | UPSERT INTO test.kv(k, v) VALUES ($1, $2); | SELECT k, v FROM test.kv WHERE k = $1; | 2017-08-10 14:08:22.878534+00:00 | 2017-08-10 14:08:44.653524+00:00 | 8ad972b6-4347-4128-9e52-8553f3491963 | -+---------+----------+--------------------+------------------+---------------------------------------------+--------------------------------------------+----------------------------------+----------------------------------|--------------------------------------+ -(8 rows) -~~~ - -### Identify and Cancel a Problematic Query - -If a session has been open for a long time and you are concerned that the oldest active SQL query may be problematic, you can use the [`SHOW QUERIES`](show-queries.html) statement to further investigate the query and then, if necessary, use the [`CANCEL QUERY`](cancel-query.html) statement to cancel it. - -For example, let's say you run `SHOW SESSIONS` and notice that the following session has been open for more than 2 hours: - -~~~ -+---------+----------+--------------------+------------------+------------------------------------+--------------------|----------------------------------+----------------------------------+--------+ -| node_id | username | client_address | application_name | active_queries | last_active_query | session_start | oldest_query_start | kv_txn | -+---------+----------+--------------------+------------------+------------------------------------+--------------------+----------------------------------+----------------------------------|--------+ -| 2 | mroach | 192.168.0.72:56194 | test_app | SELECT * FROM test.kv ORDER BY k; | | 2017-08-10 14:08:22.878113+00:00 | 2017-08-10 14:08:22.878113+00:00 | NULL | -+---------+----------+--------------------+------------------+------------------------------------+--------------------|----------------------------------+----------------------------------+--------+ -~~~ - -Since the `oldest_query_start` timestamp is the same as the `session_start` timestamp, you are concerned that the `SELECT` query shown in `active_queries` has been running for too long and may be consuming too many resources. 
So you use the [`SHOW QUERIES`](show-queries.html) statement to get more information about the query, filtering based on details you already have:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM [SHOW CLUSTER QUERIES]
-      WHERE client_address = '192.168.0.72:56194'
-          AND username = 'mroach'
-          AND query = 'SELECT * FROM test.kv ORDER BY k';
-~~~
-
-~~~
-+----------------------------------+---------+----------+----------------------------------+----------------------------------+--------------------+------------------+-------------+-----------+
-| query_id                         | node_id | username | start                            | query                            | client_address     | application_name | distributed | phase     |
-+----------------------------------+---------+----------+----------------------------------+----------------------------------+--------------------+------------------+-------------+-----------+
-| 14dacc1f9a781e3d0000000000000001 | 2       | mroach   | 2017-08-10 14:08:22.878113+00:00 | SELECT * FROM test.kv ORDER BY k | 192.168.0.72:56194 | test_app         | false       | executing |
-+----------------------------------+---------+----------+----------------------------------+----------------------------------+--------------------+------------------+-------------+-----------+
-~~~
-
-Using the `start` field, you confirm that the query has been running since the start of the session and decide that this is too long. So to cancel the query, and stop it from consuming resources, you note the `query_id` and use it with the [`CANCEL QUERY`](cancel-query.html) statement:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CANCEL QUERY '14dacc1f9a781e3d0000000000000001';
-~~~
-
-Alternatively, if you know that you want to cancel the query based on the details in `SHOW SESSIONS`, you could execute a single [`CANCEL QUERY`](cancel-query.html) statement with a nested `SELECT` statement that returns the `query_id`:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CANCEL QUERY (SELECT query_id FROM [SHOW CLUSTER QUERIES]
-      WHERE client_address = '192.168.0.72:56194'
-          AND username = 'mroach'
-          AND query = 'SELECT * FROM test.kv ORDER BY k');
-~~~
-
-## See Also
-
-- [`SHOW QUERIES`](show-queries.html)
-- [`CANCEL QUERY`](cancel-query.html)
-- [Other SQL Statements](sql-statements.html)
diff --git a/src/current/v2.0/show-tables.md b/src/current/v2.0/show-tables.md
deleted file mode 100644
index 4b54c9eabb9..00000000000
--- a/src/current/v2.0/show-tables.md
+++ /dev/null
@@ -1,110 +0,0 @@
----
-title: SHOW TABLES
-summary: The SHOW TABLES statement lists the tables in a schema or database.
-keywords: reflection
-toc: true
----
-
-The `SHOW TABLES` [statement](sql-statements.html) lists the tables or [views](views.html) in a schema or database.
-
-{{site.data.alerts.callout_info}}While a table or view is being dropped, SHOW TABLES will list the object with a (dropped) suffix.{{site.data.alerts.end}}
-
-## Synopsis
-
      -{% include {{ page.version.version }}/sql/diagrams/show_tables.html %} -
-
-## Required Privileges
-
-No [privileges](privileges.html) are required to list the tables in a schema or database.
-
-## Parameters
-
-Parameter | Description
-----------|------------
-`name` | Changed in v2.0: The name of the schema or database for which to show tables. When omitted, the tables of the [current schema](sql-name-resolution.html#current-schema) in the [current database](sql-name-resolution.html#current-database) are listed.
-
-`SHOW TABLES` will attempt to find a schema with the specified name first. If that fails, it will try to find a database with that name instead, and list the tables of its `public` schema. For more details, see [Name Resolution](sql-name-resolution.html).
-
-## Examples
-
-These examples assume that the `bank` database has been set as the current database for the session, either via the [`SET`](set-vars.html) statement or in the client's connection string.
-
-### Show Tables in the Current Database
-
-~~~ sql
-> SHOW TABLES;
-~~~
-
-~~~
-+---------------+
-| Table         |
-+---------------+
-| accounts      |
-| user_accounts |
-+---------------+
-(2 rows)
-~~~
-
-This uses the [current schema](sql-name-resolution.html#current-schema) `public`, set by default in `search_path`.
-
-### Show Tables in a Different Schema
-
-~~~ sql
-> SHOW TABLES FROM information_schema;
-> SHOW TABLES FROM bank.information_schema; -- also possible
-~~~
-
-~~~
-+-----------------------------------+
-| Table                             |
-+-----------------------------------+
-| administrable_role_authorizations |
-| applicable_roles                  |
-| column_privileges                 |
-| columns                           |
-| constraint_column_usage           |
-| enabled_roles                     |
-| key_column_usage                  |
-| referential_constraints           |
-| role_table_grants                 |
-| schema_privileges                 |
-| schemata                          |
-| sequences                         |
-| statistics                        |
-| table_constraints                 |
-| table_privileges                  |
-| tables                            |
-| user_privileges                   |
-| views                             |
-+-----------------------------------+
-(18 rows)
-~~~
-
-### Show Tables in a Different Database
-
-~~~ sql
-> SHOW TABLES FROM startrek.public;
-> SHOW TABLES FROM startrek; -- also possible
-~~~
-
-~~~
-+-------------------+
-| Table             |
-+-------------------+
-| episodes          |
-| quotes            |
-| quotes_per_season |
-+-------------------+
-(3 rows)
-~~~
-
-## See Also
-
-- [`SHOW DATABASES`](show-databases.html)
-- [`SHOW SCHEMAS`](show-schemas.html)
-- [`CREATE TABLE`](create-table.html)
-- [`CREATE VIEW`](create-view.html)
-- [Information Schema](information-schema.html)
diff --git a/src/current/v2.0/show-trace.md b/src/current/v2.0/show-trace.md
deleted file mode 100644
index baa2c66793c..00000000000
--- a/src/current/v2.0/show-trace.md
+++ /dev/null
@@ -1,407 +0,0 @@
----
-title: SHOW TRACE
-summary: The SHOW TRACE statement returns details about how CockroachDB executed a statement or series of statements.
-toc: true
----
-
-The `SHOW TRACE` [statement](sql-statements.html) returns details about how CockroachDB executed a statement or series of statements. These details include messages and timing information from all nodes involved in the execution, providing visibility into the actions taken by CockroachDB across all of its software layers.
-
-You can use `SHOW TRACE` to debug why a query is not performing as expected, to add more information to bug reports, or to generally learn more about how CockroachDB works.
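-
-As a quick first taste, tracing a single statement looks like this (the table `foo` reappears in the examples below):
-
-~~~ sql
-> SHOW TRACE FOR SELECT * FROM foo;
-~~~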
-
-## Usage Overview
-
-There are two distinct ways to use `SHOW TRACE`:
-
-Statement | Usage
-----------|------
-[`SHOW TRACE FOR <stmt>`](#show-trace-for-stmt) | Execute a single [explainable](sql-grammar.html#explainable_stmt) statement and return a trace of its actions.
-[`SHOW TRACE FOR SESSION`](#show-trace-for-session) | Return a trace of all executed statements recorded during a session.
-
-### SHOW TRACE FOR <stmt>
-
-This use of `SHOW TRACE` executes a single [explainable](explain.html#explainable-statements) statement and then returns messages and timing information from all nodes involved in its execution. It's important to note the following:
-
-- `SHOW TRACE FOR <stmt>` executes the target statement and, once execution has completed, then returns a trace of the actions taken. For example, tracing an `INSERT` statement inserts data and then returns a trace, a `DELETE` statement deletes data and then returns a trace, etc. This is different from the [`EXPLAIN`](explain.html) statement, which does not execute its target statement but instead returns details about its predicted execution plan.
-    - The target statement must be an [explainable](explain.html#explainable-statements) statement. Non-explainable statements are not supported.
-    - The target statement is always executed with the local SQL engine. Due to this [known limitation](https://github.com/cockroachdb/cockroach/issues/16562), the trace will not reflect the way in which some statements would have been executed when not the target of `SHOW TRACE FOR <stmt>`. This limitation does not apply to `SHOW TRACE FOR SESSION`.
-
-- If the target statement encounters errors, those errors are not returned to the client. Instead, they are included in the trace. This has the following important implications for [transaction retries](transactions.html#transaction-retries):
-    - Normally, individual statements (considered implicit transactions) and multi-statement transactions batched by the client are [automatically retried](transactions.html#automatic-retries) by CockroachDB when [retryable errors](transactions.html#error-handling) are encountered due to contention. However, when such statements are the target of `SHOW TRACE FOR <stmt>`, CockroachDB does **not** automatically retry.
-    - When each statement in a multi-statement transaction is sent individually (as opposed to being batched), if one of the statements is the target of `SHOW TRACE FOR <stmt>`, retryable errors encountered by that statement will not be returned to the client.
-
-    {{site.data.alerts.callout_success}}Given these implications, when you expect transaction retries or want to trace across retries, it's recommended to use SHOW TRACE FOR SESSION.{{site.data.alerts.end}}
-
-- When tracing an individual statement (i.e., an implicit transaction), the tracing might change the way in which the statement commits its data; tracing inhibits the "one-phase commit" optimization for transactions that touch a single range. The trace will not reflect the committing of the transaction. `SHOW TRACE FOR SESSION` does not have this effect.
-
-### SHOW TRACE FOR SESSION
-
-This use of `SHOW TRACE` returns messages and timing information for all statements recorded during a session. It's important to note the following:
-
-- `SHOW TRACE FOR SESSION` only returns the traces of the most recent recording, or of a currently active recording.
-    - To start recording traces during a session, enable the `tracing` session variable via [`SET tracing = on;`](set-vars.html#set-tracing).
-    - To stop recording traces during a session, disable the `tracing` session variable via [`SET tracing = off;`](set-vars.html#set-tracing).
-
-- In contrast to `SHOW TRACE FOR <stmt>`, recording traces during a session does not affect the execution of the statements traced. This means that errors encountered by statements during a recording are returned to clients. CockroachDB will [automatically retry](transactions.html#automatic-retries) individual statements (considered implicit transactions) and multi-statement transactions sent as a single batch when [retryable errors](transactions.html#error-handling) are encountered due to contention. Also, clients will receive retryable errors required to handle [client-side retries](transactions.html#client-side-intervention). As a result, traces of all transaction retries will be captured during a recording.
-
-- `SHOW TRACE FOR <stmt>` overwrites the last recorded trace. This means that if you enable session recording, disable session recording, execute `SHOW TRACE FOR <stmt>`, and then execute `SHOW TRACE FOR SESSION`, the response will be the trace for `SHOW TRACE FOR <stmt>`, not for the previously recorded session.
-
-## Required Privileges
-
-For `SHOW TRACE FOR <stmt>`, the user must have the appropriate [privileges](privileges.html) for the statement being traced. For `SHOW TRACE FOR SESSION`, no privileges are required.
-
-## Syntax
-
      {% include {{ page.version.version }}/sql/diagrams/show_trace.html %}
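-
-The optional keywords described in the table below combine with the base statement; a sketch, assuming the keyword order shown in the diagram above (`COMPACT`, then `KV`), with the table `foo` borrowed from the examples below:
-
-~~~ sql
-> SHOW COMPACT KV TRACE FOR SELECT * FROM foo;
-~~~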
-
-## Parameters
-
-Parameter | Description
-----------|------------
-`KV` | If specified, the returned messages are restricted to those describing requests to and responses from the underlying key-value [storage layer](architecture/storage-layer.html), including per-result-row messages.<br><br>For `SHOW KV TRACE FOR <stmt>`, per-result-row messages are included.<br><br>For `SHOW KV TRACE FOR SESSION`, per-result-row messages are included only if the session was/is recording with `SET tracing = kv;`.
-`COMPACT` | New in v2.0: If specified, fewer columns are returned by the statement. See [Response](#response) for more details.
-`explainable_stmt` | The statement to execute and trace. Only [explainable](explain.html#explainable-statements) statements are supported.
-
-## Trace Description
-
-CockroachDB's definition of a "trace" is a specialization of [OpenTracing's](https://opentracing.io/docs/overview/what-is-tracing/#what-is-opentracing) definition. Internally, CockroachDB uses OpenTracing libraries for tracing, which also means that it can be easily integrated with OpenTracing-compatible trace collectors; for example, Lightstep and Zipkin are already supported.
-
-Concept | Description
---------|------------
-**trace** | Information about the sub-operations performed as part of a high-level operation (a query or a transaction). This information is internally represented as a tree of "spans", with a special "root span" representing a query execution in the case of `SHOW TRACE FOR <stmt>` or a whole SQL transaction in the case of `SHOW TRACE FOR SESSION`.
-**span** | A named, timed operation that describes a contiguous segment of work in a trace. Each span links to "child spans", representing sub-operations; their children would be sub-sub-operations of the grandparent span, etc.<br><br>Different spans can represent (sub-)operations that executed either sequentially or in parallel with respect to each other. (This possibly-parallel nature of execution is one of the important things that a trace is supposed to describe.) The operations described by a trace may be _distributed_, that is, different spans may describe operations executed by different nodes.
-**message** | A string with timing information. Each span can contain a list of these. They are produced by CockroachDB's logging infrastructure and are the same messages that can be found in node [log files](debug-and-error-logs.html) except that a trace contains messages across all severity levels, whereas log files, by default, do not. Thus, a trace is much more verbose than logs but only contains messages produced in the context of one particular traced operation.
-
-To further clarify these concepts, let's look at a visualization of a trace for one statement. This particular trace is visualized by [Lightstep](http://lightstep.com/) (docs on integrating Lightstep with CockroachDB coming soon). The image only shows spans, but in the tool, it would be possible to drill down to messages. You can see names of operations and sub-operations, along with parent-child relationships and timing information, and it's easy to see which operations are executed in parallel.
-
-*(Image: Lightstep example)*
-
-## Response
-
-{{site.data.alerts.callout_info}}The format of the SHOW TRACE response may change in future versions.{{site.data.alerts.end}}
-
-CockroachDB outputs traces in linear tabular format. Each result row represents either a span start (identified by the `=== SPAN START: <operation> ===` message) or a log message from a span. Rows are generally listed in their timestamp order (i.e., the order in which the events they represent occurred) with the exception that messages from child spans are interleaved in the parent span according to their timing. Messages from sibling spans, however, are not interleaved with respect to one another.
-
-The following diagram shows the order in which messages from different spans would be interleaved in an example trace. Each box is a span; inner boxes are child spans. The numbers indicate the order in which the log messages would appear in the virtual table.
-
-~~~
- +-----------------------+
- | 1                     |
- | +-------------------+ |
- | | 2                 | |
- | | +----+            | |
- | | |    | +----+     | |
- | | | 3  | | 4  |     | |
- | | |    | |    | 5   | |
- | | |    | |    | ++  | |
- | | +----+ |    |     | |
- | |        +----+     | |
- | | 6                 | |
- | +-------------------+ |
- | 7                     |
- +-----------------------+
-~~~
-
-Each row contains the following columns:
-
-Column | Type | Description
--------|------|------------
-`timestamp` | timestamptz | The absolute time when the message occurred.
-`age` | interval | The age of the message relative to the beginning of the trace (i.e., the beginning of the statement execution in the case of `SHOW TRACE FOR <stmt>` and the beginning of the recording in the case of `SHOW TRACE FOR SESSION`).
-`message` | string | The log message.
-`tag` | string | Meta-information about the message's context. This is the same information that appears in the beginning of log file messages in between square brackets (e.g., `[client=[::1]:49985,user=root,n1]`).
-`loc` | string | New in v2.0: The file:line location of the line of code that produced the message. Only some of the messages have this field set; it depends on specifically how the message was logged. The `--vmodule` flag passed to the node producing the message also affects what rows get this field populated. Generally, if `--vmodule=<file>=<level>` is specified, messages produced by that file will have the field populated.
-`operation` | string | The name of the operation (or sub-operation) on whose behalf the message was logged.
-`span` | int | The index of the span within the virtual list of all spans if they were ordered by the span's start time.
-
-{{site.data.alerts.callout_info}}If the COMPACT keyword was specified, only the age, message, tag and operation columns are returned. In addition, the value of the loc column is prepended to message.{{site.data.alerts.end}}
-
-## Examples
-
-### Trace a simple `SELECT`
-
-~~~ sql
-> SHOW TRACE FOR SELECT * FROM foo;
-~~~
-
-~~~
-+----------------------------------+---------------+-------------------------------------------------------+------------------------------------------------+-----+-----------------------------------+------+
-| timestamp                        | age           | message                                               | tag                                            | loc | operation                         | span |
-+----------------------------------+---------------+-------------------------------------------------------+------------------------------------------------+-----+-----------------------------------+------+
-| 2018-03-08 21:22:18.266373+00:00 | 0s            | === SPAN START: sql txn ===                           |                                                |     | sql txn                           | 0    |
-| 2018-03-08 21:22:18.267341+00:00 | 967µs713ns    | === SPAN START: session recording ===                 |                                                |     | session recording                 | 5    |
-| 2018-03-08 21:22:18.267343+00:00 | 969µs760ns    | === SPAN START: starting plan ===                     |                                                |     | starting plan                     | 1    |
-| 2018-03-08 21:22:18.267367+00:00 | 993µs551ns    | === SPAN START: consuming rows ===                    |                                                |     | consuming rows                    | 2    |
-| 2018-03-08 21:22:18.267384+00:00 | 1ms10µs504ns  | Scan /Table/51/{1-2}                                  | [n1,client=[::1]:58264,user=root]              |     | sql txn                           | 0    |
-| 2018-03-08 21:22:18.267434+00:00 | 1ms60µs392ns  | === SPAN START: dist sender ===                       |                                                |     | dist sender                       | 3    |
-| 2018-03-08 21:22:18.267444+00:00 | 1ms71µs136ns  | querying next range at /Table/51/1                    | [client=[::1]:58264,user=root,txn=76d25cda,n1] |     | dist sender                       | 3    |
-| 2018-03-08 21:22:18.267462+00:00 | 1ms88µs421ns  | r20: sending batch 1 Scan to (n1,s1):1                | [client=[::1]:58264,user=root,txn=76d25cda,n1] |     | dist sender                       | 3    |
-| 2018-03-08 21:22:18.267465+00:00 | 1ms91µs570ns  | sending request to local server                       | [client=[::1]:58264,user=root,txn=76d25cda,n1] |     | dist sender                       | 3    |
-| 2018-03-08 21:22:18.267467+00:00 | 1ms93µs707ns  | === SPAN START: /cockroach.roachpb.Internal/Batch === |                                                |     | /cockroach.roachpb.Internal/Batch | 4    |
-| 2018-03-08 21:22:18.267469+00:00 | 1ms96µs103ns  | 1 Scan                                                | [n1]                                           |     | /cockroach.roachpb.Internal/Batch | 4    |
-| 2018-03-08 21:22:18.267471+00:00 | 1ms97µs437ns  | read has no clock uncertainty                         | [n1]                                           |     | /cockroach.roachpb.Internal/Batch | 4    |
-| 2018-03-08 21:22:18.267474+00:00 | 1ms101µs60ns  | executing 1 requests                                  | [n1,s1]                                        |     | /cockroach.roachpb.Internal/Batch | 4    |
-| 2018-03-08 21:22:18.267479+00:00 | 1ms105µs912ns | read-only path                                        | [n1,s1,r20/1:/Table/5{1-2}]                    |     | /cockroach.roachpb.Internal/Batch | 4    |
-| 2018-03-08 21:22:18.267483+00:00 | 1ms110µs94ns  | command queue                                         | [n1,s1,r20/1:/Table/5{1-2}]                    |     | /cockroach.roachpb.Internal/Batch | 4    |
-| 2018-03-08 21:22:18.267487+00:00 | 1ms114µs240ns | waiting for read lock                                 | [n1,s1,r20/1:/Table/5{1-2}]                    |     | /cockroach.roachpb.Internal/Batch | 4    |
-| 2018-03-08 21:22:18.26752+00:00  | 1ms146µs596ns | read completed                                        | [n1,s1,r20/1:/Table/5{1-2}]                    |     | /cockroach.roachpb.Internal/Batch | 4    |
-| 2018-03-08 21:22:18.267566+00:00 | 1ms192µs724ns | plan completed execution                              | [n1,client=[::1]:58264,user=root]              |     | consuming rows                    | 2    |
-| 2018-03-08 21:22:18.267568+00:00 | 1ms195µs60ns  | resources released, stopping trace                    | [n1,client=[::1]:58264,user=root]              |     | consuming rows                    | 2    |
-+----------------------------------+---------------+-------------------------------------------------------+------------------------------------------------+-----+-----------------------------------+------+
-(19 rows)
-~~~
-
-{{site.data.alerts.callout_success}}You can use SHOW TRACE as the data source for a SELECT statement, and then
filter the values with the WHERE clause. For example, to see only messages about spans starting, you might execute SELECT * FROM [SHOW TRACE FOR <stmt>] WHERE message LIKE '=== SPAN START%'.{{site.data.alerts.end}}
-
-### Trace conflicting transactions
-
-In this example, we use two terminals concurrently to generate conflicting transactions.
-
-1. In terminal 1, create a table:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE t (k INT);
-~~~
-
-2. Still in terminal 1, open a transaction and perform a write without closing the transaction:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> BEGIN;
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> INSERT INTO t VALUES (1);
-~~~
-
-Press enter one more time to send these statements to the server.
-
-3. In terminal 2, execute and trace a conflicting read:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT age, span, message FROM [SHOW TRACE FOR SELECT * FROM t];
-~~~
-
-You'll see that this statement is blocked until the transaction in terminal 1 finishes.
-
-4. Back in terminal 1, finish the transaction:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> COMMIT;
-~~~
-
-5. Back in terminal 2, you'll see the completed trace:
-
-{{site.data.alerts.callout_success}}Check the lines starting with #Annotation for insights into how the conflict is traced.{{site.data.alerts.end}}
-
-~~~ shell
-+-------------------+--------+-------------------------------------------------------------------------------------------------------+
-| age | span | message |
-+-------------------+--------+-------------------------------------------------------------------------------------------------------+
-| 0s | (0,0) | === SPAN START: sql txn implicit === |
-| 409µs750ns | (0,1) | === SPAN START: starting plan === |
-| 417µs68ns | (0,2) | === SPAN START: consuming rows === |
-| 446µs968ns | (0,0) | querying next range at /Table/61/1 |
-| 474µs387ns | (0,0) | r42: sending batch 1 Scan to (n1,s1):1 |
-| 491µs800ns | (0,0) | sending request to local server |
-| 503µs260ns | (0,3) | === SPAN START: /cockroach.roachpb.Internal/Batch === |
-| 506µs135ns | (0,3) | 1 Scan |
-| 508µs385ns | (0,3) | read has no clock uncertainty |
-| 512µs176ns | (0,3) | executing 1 requests |
-| 518µs675ns | (0,3) | read-only path |
-| 525µs357ns | (0,3) | command queue |
-| 531µs990ns | (0,3) | waiting for read lock |
-| # Annotation: The following line identifies the conflict, and some of the lines below it describe the conflict resolution |
-| 603µs363ns | (0,3) | conflicting intents on /Table/61/1/285895906846146561/0 |
-| 611µs228ns | (0,3) | replica.Send got error: conflicting intents on /Table/61/1/285895906846146561/0 |
-| # Annotation: The read is now going to wait for the writer to finish by executing a PushTxn request.
| - | 615µs680ns | (0,3) | pushing 1 transaction(s) | - | 630µs734ns | (0,3) | querying next range at /Table/61/1/285895906846146561/0 | - | 646µs292ns | (0,3) | r42: sending batch 1 PushTxn to (n1,s1):1 | - | 658µs613ns | (0,3) | sending request to local server | - | 665µs738ns | (0,4) | === SPAN START: /cockroach.roachpb.Internal/Batch === | - | 668µs765ns | (0,4) | 1 PushTxn | - | 671µs770ns | (0,4) | executing 1 requests | - | 677µs182ns | (0,4) | read-write path | - | 681µs909ns | (0,4) | command queue | - | 693µs718ns | (0,4) | applied timestamp cache | - | 794µs20ns | (0,4) | evaluated request | - | 807µs125ns | (0,4) | replica.Send got error: failed to push "sql txn" id=23fce0c4 key=/Table/61/1/285895906846146561/0 ... | - | 812µs917ns | (0,4) | 62cddd0b pushing 23fce0c4 (1 pending) | - | 4s348ms604µs506ns | (0,4) | result of pending push: "sql txn" id=23fce0c4 key=/Table/61/1/285895906846146561/0 rw=true pri=0 ... | - | # Annotation: The writer is detected to have finished. | - | 4s348ms609µs635ns | (0,4) | push request is satisfied | - | 4s348ms657µs576ns | (0,3) | 23fce0c4-1d22-4321-9779-35f0f463b2d5 is now COMMITTED | - | # Annotation: The write has committed. Some cleanup follows. | - | 4s348ms659µs899ns | (0,3) | resolving intents [wait=true] | - | 4s348ms669µs431ns | (0,17) | === SPAN START: storage.intentResolve: resolving intents === | - | 4s348ms726µs582ns | (0,17) | querying next range at /Table/61/1/285895906846146561/0 | - | 4s348ms746µs398ns | (0,17) | r42: sending batch 1 ResolveIntent to (n1,s1):1 | - | 4s348ms758µs761ns | (0,17) | sending request to local server | - | 4s348ms769µs344ns | (0,18) | === SPAN START: /cockroach.roachpb.Internal/Batch === | - | 4s348ms772µs713ns | (0,18) | 1 ResolveIntent | - | 4s348ms776µs159ns | (0,18) | executing 1 requests | - | 4s348ms781µs364ns | (0,18) | read-write path | - | 4s348ms786µs536ns | (0,18) | command queue | - | 4s348ms797µs901ns | (0,18) | applied timestamp cache | - | 4s348ms868µs521ns | (0,18) | evaluated request | - | 4s348ms875µs924ns | (0,18) | acquired {raft,replica}mu | - | 4s349ms150µs556ns | (0,18) | applying command | - | 4s349ms232µs373ns | (0,3) | read-only path | - | 4s349ms237µs724ns | (0,3) | command queue | - | 4s349ms241µs857ns | (0,3) | waiting for read lock | - | # Annotation: This is where we would have been if there hadn't been a conflict. | - | 4s349ms280µs702ns | (0,3) | read completed | - | 4s349ms330µs707ns | (0,2) | output row: [1] | - | 4s349ms333µs718ns | (0,2) | output row: [1] | - | 4s349ms336µs53ns | (0,2) | output row: [1] | - | 4s349ms338µs212ns | (0,2) | output row: [1] | - | 4s349ms339µs111ns | (0,2) | plan completed execution | - | 4s349ms341µs476ns | (0,2) | resources released, stopping trace | - +-------------------+--------+-------------------------------------------------------------------------------------------------------+ - ~~~ - -### Trace a transaction retry - -In this example, we use session tracing to show an [automatic transaction retry](transactions.html#automatic-retries). Like in the previous example, we'll have to use two terminals because retries are induced by unfortunate interactions between transactions. - -1. 
In terminal 1, unset the `smart_prompt` shell option, turn on trace recording, and then start a transaction: - - {% include copy-clipboard.html %} - - ~~~ sql - > \unset smart_prompt - ~~~ - - {% include copy-clipboard.html %} - ~~~ sql - > SET tracing = cluster; - ~~~ - - {% include copy-clipboard.html %} - ~~~ sql - > BEGIN; - ~~~ - - Starting a transaction gets us an early timestamp, i.e., we "lock" the snapshot of the data on which the transaction is going to operate. - -2. In terminal 2, perform a read: - - {% include copy-clipboard.html %} - ~~~ sql - > SELECT * FROM t; - ~~~ - - This read is performed at a timestamp higher than the timestamp of the transaction running in terminal 1. Because we're running at the [`SERIALIZABLE` transaction isolation level](architecture/transaction-layer.html#isolation-levels) (the default), if the system allows terminal 1's transaction to commit, it will have to ensure that ordering terminal 1's transaction *before* terminal 2's transaction is valid; this will become relevant in a second. - -3. Back in terminal 1, execute and trace a conflicting write: - - {% include copy-clipboard.html %} - ~~~ sql - > INSERT INTO t VALUES (1); - ~~~ - - At this point, the system will detect the conflict and realize that the transaction can no longer commit because allowing it to commit would mean that we have changed history with respect to terminal 2's read. As a result, it will automatically retry the transaction so it can be serialized *after* terminal 2's transaction. The trace will reflect this retry. - -4. Turn off trace recording and request the trace: - - {% include copy-clipboard.html %} - ~~~ sql - > SET tracing = off; - ~~~ - - {% include copy-clipboard.html %} - ~~~ sql - > SELECT age, message FROM [SHOW TRACE FOR SESSION]; - ~~~ - - {{site.data.alerts.callout_success}}Check the lines starting with #Annotation for insights into how the retry is traced.{{site.data.alerts.end}} - - ~~~ shell - +--------------------+---------------------------------------------------------------------------------------------------------------+ - | age | message | - +--------------------+---------------------------------------------------------------------------------------------------------------+ - | 0s | === SPAN START: sql txn implicit === | - | 123µs317ns | AutoCommit. err: | - | | txn: "sql txn implicit" id=64d34fbc key=/Min rw=false pri=0.02500536 iso=SERIALIZABLE stat=COMMITTED ... | - | 1s767ms959µs448ns | === SPAN START: sql txn === | - | 1s767ms989µs448ns | executing 1/1: BEGIN TRANSACTION | - | # Annotation: First execution of INSERT. | - | 13s536ms79µs67ns | executing 1/1: INSERT INTO t VALUES (1) | - | 13s536ms134µs682ns | client.Txn did AutoCommit. err: | - | | txn: "unnamed" id=329e7307 key=/Min rw=false pri=0.01354772 iso=SERIALIZABLE stat=COMMITTED epo=0 ... 
| - | 13s536ms143µs145ns | added table 't' to table collection | - | 13s536ms305µs103ns | query not supported for distSQL: mutations not supported | - | 13s536ms365µs919ns | querying next range at /Table/61/1/285904591228600321/0 | - | 13s536ms400µs155ns | r42: sending batch 1 CPut, 1 BeginTxn to (n1,s1):1 | - | 13s536ms422µs268ns | sending request to local server | - | 13s536ms434µs962ns | === SPAN START: /cockroach.roachpb.Internal/Batch === | - | 13s536ms439µs916ns | 1 CPut, 1 BeginTxn | - | 13s536ms442µs413ns | read has no clock uncertainty | - | 13s536ms447µs42ns | executing 2 requests | - | 13s536ms454µs413ns | read-write path | - | 13s536ms462µs456ns | command queue | - | 13s536ms497µs475ns | applied timestamp cache | - | 13s536ms637µs637ns | evaluated request | - | 13s536ms646µs468ns | acquired {raft,replica}mu | - | 13s536ms947µs970ns | applying command | - | 13s537ms34µs667ns | coordinator spawns | - | 13s537ms41µs171ns | === SPAN START: [async] kv.TxnCoordSender: heartbeat loop === | - | # Annotation: The conflict is about to be detected in the form of a retriable error. | - | 13s537ms77µs356ns | automatically retrying transaction: sql txn (id: b4bd1f60-30d9-4465-bdb6-6b553aa42a96) because of error: | - | HandledRetryableTxnError: serializable transaction timestamp pushed (detected by SQL Executor) | - | # Annotation: Second execution of INSERT. | - | 13s537ms83µs369ns | executing 1/1: INSERT INTO t VALUES (1) | - | 13s537ms109µs516ns | client.Txn did AutoCommit. err: | - | | txn: "unnamed" id=1228171b key=/Min rw=false pri=0.02917782 iso=SERIALIZABLE stat=COMMITTED epo=0 | - | ts=1507321556.991937203,0 orig=1507321556.991937203,0 max=1507321557.491937203,0 wto=false rop=false | - | 13s537ms111µs738ns | releasing 1 tables | - | 13s537ms116µs944ns | added table 't' to table collection | - | 13s537ms163µs155ns | query not supported for distSQL: writing txn | - | 13s537ms192µs584ns | querying next range at /Table/61/1/285904591231418369/0 | - | 13s537ms209µs601ns | r42: sending batch 1 CPut to (n1,s1):1 | - | 13s537ms224µs219ns | sending request to local server | - | 13s537ms233µs350ns | === SPAN START: /cockroach.roachpb.Internal/Batch === | - | 13s537ms236µs572ns | 1 CPut | - | 13s537ms238µs39ns | read has no clock uncertainty | - | 13s537ms241µs255ns | executing 1 requests | - | 13s537ms245µs473ns | read-write path | - | 13s537ms248µs915ns | command queue | - | 13s537ms261µs543ns | applied timestamp cache | - | 13s537ms309µs401ns | evaluated request | - | 13s537ms315µs302ns | acquired {raft,replica}mu | - | 13s537ms580µs149ns | applying command | - | 18s378ms239µs968ns | executing 1/1: COMMIT TRANSACTION | - | 18s378ms291µs929ns | querying next range at /Table/61/1/285904591228600321/0 | - | 18s378ms322µs473ns | r42: sending batch 1 EndTxn to (n1,s1):1 | - | 18s378ms348µs650ns | sending request to local server | - | 18s378ms364µs928ns | === SPAN START: /cockroach.roachpb.Internal/Batch === | - | 18s378ms370µs772ns | 1 EndTxn | - | 18s378ms373µs902ns | read has no clock uncertainty | - | 18s378ms378µs613ns | executing 1 requests | - | 18s378ms386µs573ns | read-write path | - | 18s378ms394µs316ns | command queue | - | 18s378ms417µs576ns | applied timestamp cache | - | 18s378ms588µs396ns | evaluated request | - | 18s378ms597µs715ns | acquired {raft,replica}mu | - | 18s383ms388µs599ns | applying command | - | 18s383ms494µs709ns | coordinator stops | - | 23s169ms850µs906ns | === SPAN START: sql txn implicit === | - | 23s169ms885µs921ns | executing 1/1: SET tracing = off | - | 
23s169ms919µs90ns | query not supported for distSQL: SET / SET CLUSTER SETTING should never distribute | - +--------------------+---------------------------------------------------------------------------------------------------------------+ - ~~~ - -## See Also - -- [`EXPLAIN`](explain.html) -- [`SET (session settings)`](set-vars.html) diff --git a/src/current/v2.0/show-users.md b/src/current/v2.0/show-users.md deleted file mode 100644 index 6ecb9c8696a..00000000000 --- a/src/current/v2.0/show-users.md +++ /dev/null @@ -1,39 +0,0 @@ ---- -title: SHOW USERS -summary: The SHOW USERS statement lists the users for all databases. -toc: true ---- - -The `SHOW USERS` [statement](sql-statements.html) lists the users for all databases. - - -## Synopsis - -
      -{% include {{ page.version.version }}/sql/diagrams/show_users.html %} -
      - -## Required Privileges - -The user must have the [`SELECT`](select-clause.html) [privilege](privileges.html) on the system table. - -## Example - -~~~ sql -> SHOW USERS; -~~~ - -~~~ -+------------+ -| username | -+------------+ -| jpointsman | -| maxroach | -| root | -+------------+ -~~~ - -## See Also - -- [`CREATE USER`](create-user.html) -- [Manage Users](create-and-manage-users.html) diff --git a/src/current/v2.0/show-vars.md b/src/current/v2.0/show-vars.md deleted file mode 100644 index 6112bc9a22d..00000000000 --- a/src/current/v2.0/show-vars.md +++ /dev/null @@ -1,127 +0,0 @@ ---- -title: SHOW (session settings) -summary: The SHOW statement displays the current settings for the client session. -toc: true ---- - -The `SHOW` [statement](sql-statements.html) can display the value of either one or all of -the session setting variables. Some of these can also be configured via [`SET`](set-vars.html). - - -## Required Privileges - -No [privileges](privileges.html) are required to display the session settings. - -## Synopsis - -
      -{% include {{ page.version.version }}/sql/diagrams/show_var.html %} -
-
-{{site.data.alerts.callout_info}}The SHOW statement for session settings is unrelated to the other SHOW statements: SHOW CLUSTER SETTING, SHOW CREATE TABLE, SHOW CREATE VIEW, SHOW USERS, SHOW DATABASES, SHOW COLUMNS, SHOW GRANTS, and SHOW CONSTRAINTS.{{site.data.alerts.end}}
-
-## Parameters
-
-The `SHOW <var>` statement accepts a single parameter: the variable name.
-
-The variable name is case insensitive. It may be enclosed in double quotes; this is useful if the variable name itself contains spaces.
-
-### Supported variables
-
-| Variable name | Description | Initial value | Can be modified with [`SET`](set-vars.html)? |
-|---------------|-------------|---------------|-----------------------------------------------|
-| `application_name` | The current application name for statistics collection. | Empty string, or `cockroach` for sessions from the [built-in SQL client](use-the-built-in-sql-client.html) | Yes |
-| `database` | The [current database](sql-name-resolution.html#current-database). | Database in connection string, or empty if not specified | Yes |
-| `default_transaction_isolation` | The default transaction isolation level for the current session. See [Transaction parameters](transactions.html#transaction-parameters) for more details. | Settings in connection string, or `SERIALIZABLE` if not specified | Yes |
-| `default_transaction_read_only` | New in v2.0: The default transaction access mode for the current session. If set to `on`, only read operations are allowed in transactions in the current session; if set to `off`, both read and write operations are allowed. See [`SET TRANSACTION`](set-transaction.html) for more details. | `off` | Yes |
-| `distsql` | | `auto` | |
-| `node_id` | New in v1.1: The ID of the node currently connected to.<br><br>This variable is particularly useful for verifying load balanced connections. | Node-dependent | No |
-| `search_path` | Changed in v2.0: A list of schemas that will be searched to resolve unqualified table or function names. For more details, see [Name Resolution](sql-name-resolution.html). | `{public}` | Yes |
-| `server_version` | The version of PostgreSQL that CockroachDB emulates. | Version-dependent | No |
-| `server_version_num` | New in v2.0: The version of PostgreSQL that CockroachDB emulates. | Version-dependent | Yes |
-| `session_user` | The user connected for the current session. | User in connection string | No |
-| `sql_safe_updates` | If `false`, potentially unsafe SQL statements are allowed, including `DROP` of a non-empty database and all dependent objects, `DELETE` without a `WHERE` clause, `UPDATE` without a `WHERE` clause, and `ALTER TABLE .. DROP COLUMN`. See [Allow Potentially Unsafe SQL Statements](use-the-built-in-sql-client.html#allow-potentially-unsafe-sql-statements) for more details. | `true` for interactive sessions from the [built-in SQL client](use-the-built-in-sql-client.html),<br>`false` for sessions from other clients | Yes |
-| `timezone` | The default time zone for the current session.<br><br>Changed in v2.0: This session variable was named `"time zone"` (with a space) in CockroachDB 1.x. It has been renamed for compatibility with PostgreSQL. | `UTC` | Yes |
-| `tracing` | | `off` | |
-| `transaction_isolation` | The isolation level of the current transaction. See [Transaction parameters](transactions.html#transaction-parameters) for more details.<br><br>Changed in v2.0: This session variable was called `transaction isolation level` (with spaces) in CockroachDB 1.x. It has been renamed for compatibility with PostgreSQL. | `SERIALIZABLE` | Yes |
-| `transaction_priority` | The priority of the current transaction. See [Transaction parameters](transactions.html#transaction-parameters) for more details.<br><br>Changed in v2.0: This session variable was called `transaction priority` (with a space) in CockroachDB 1.x. It has been renamed for compatibility with PostgreSQL. | `NORMAL` | Yes |
-| `transaction_read_only` | New in v2.0: The access mode of the current transaction. See [Set Transaction](set-transaction.html) for more details. | `off` | Yes |
-| `transaction_status` | The state of the current transaction. See [Transactions](transactions.html) for more details.<br><br>
      Changed in v2.0: This session variable was called `transaction status` (with a space) in CockroachDB 1.x. It has been renamed for compatibility with PostgreSQL. | `NoTxn` | No | -| `client_encoding` | (Reserved; exposed only for ORM compatibility.) | `UTF8` | No | -| `client_min_messages` | (Reserved; exposed only for ORM compatibility.) | (Reserved) | No | -| `datestyle` | (Reserved; exposed only for ORM compatibility.) | `ISO` | No | -| `extra_float_digits` | (Reserved; exposed only for ORM compatibility.) | (Reserved) | No | -| `intervalstyle` | New in v2.0: (Reserved; exposed only for ORM compatibility.) | `postgres` | No | -| `max_index_keys` | (Reserved; exposed only for ORM compatibility.) | (Reserved) | No | -| `standard_conforming_strings` | (Reserved; exposed only for ORM compatibility.) | (Reserved) | No | - -Special syntax cases supported for compatibility: - -| Syntax | Equivalent to | -|--------|---------------| -| `SHOW TRANSACTION PRIORITY` | `SHOW "transaction priority"` | -| `SHOW TRANSACTION ISOLATION LEVEL` | `SHOW "transaction isolation level"` | -| `SHOW TIME ZONE` | `SHOW "timezone"` | -| `SHOW TRANSACTION STATUS` | `SHOW "transaction status"` | - -## Examples - -### Showing the Value of a Single Session Variable - -~~~ sql -> SHOW DATABASE; -~~~ - -~~~ -+----------+ -| database | -+----------+ -| test | -+----------+ -(1 row) -~~~ - -### Showing the Value of all Session Variables - -~~~ sql -> SHOW ALL; -~~~ - -~~~ -+-------------------------------+--------------+ -| Variable | Value | -+-------------------------------+--------------+ -| application_name | | -| client_encoding | UTF8 | -| client_min_messages | | -| database | | -| default_transaction_isolation | SERIALIZABLE | -| distsql | off | -| extra_float_digits | | -| max_index_keys | 32 | -| node_id | 1 | -| search_path | pg_catalog | -| server_version | 9.5.0 | -| session_user | root | -| standard_conforming_strings | on | -| timezone | UTC | -| transaction isolation level | SERIALIZABLE | -| transaction priority | NORMAL | -| transaction status | NoTxn | -+-------------------------------+--------------+ -(16 rows) -~~~ - -## See Also - -- [`SET` (session variable)](set-vars.html) -- [Transactions](transactions.html) and [Transaction parameters](transactions.html#transaction-parameters) -- [`SHOW CLUSTER SETTING`](show-cluster-setting.html) -- [`SHOW COLUMNS`](show-columns.html) -- [`SHOW CONSTRAINTS`](show-constraints.html) -- [`SHOW CREATE TABLE`](show-create-table.html) -- [`SHOW CREATE VIEW`](show-create-view.html) -- [`SHOW DATABASES`](show-databases.html) -- [`SHOW GRANTS`](show-grants.html) -- [`SHOW INDEX`](show-index.html) -- [`SHOW USERS`](show-users.html) diff --git a/src/current/v2.0/simplified-deployment.md b/src/current/v2.0/simplified-deployment.md deleted file mode 100644 index 0fe01df93f4..00000000000 --- a/src/current/v2.0/simplified-deployment.md +++ /dev/null @@ -1,15 +0,0 @@ ---- -title: Simplified Deployment -summary: Deploying CockroachDB is simple and straightforward. -toc: false ---- - -Deploying and maintaining databases has forever been a difficult and expensive prospect. Simplicity is one of our foremost design goals. CockroachDB is self contained and eschews external dependencies. There are no explicit roles like primaries or secondaries to get in the way. Instead, every CockroachDB node is symmetric and equally important, meaning no single points of failure in the architecture. 
- -- No external dependencies -- Self-organizes using gossip network -- Dead-simple configuration without “knobs” -- Symmetric nodes are ideally suited to container-based deployments -- Every node provides access to centralized admin console - -CockroachDB is simple to deploy diff --git a/src/current/v2.0/split-at.md b/src/current/v2.0/split-at.md deleted file mode 100644 index 885b983a24b..00000000000 --- a/src/current/v2.0/split-at.md +++ /dev/null @@ -1,246 +0,0 @@ ---- -title: SPLIT AT -summary: The SPLIT AT statement forces a key-value layer range split at the specified row in a table or index. -toc: true ---- - -The `SPLIT AT` [statement](sql-statements.html) forces a key-value layer range split at the specified row in a table or index. - - -## Synopsis - -
      {% include {{ page.version.version }}/sql/diagrams/split_table_at.html %}
      - -
      {% include {{ page.version.version }}/sql/diagrams/split_index_at.html %}
      - -## Required Privileges - -The user must have the `INSERT` [privilege](privileges.html) on the table or index. - -## Parameters - -| Parameter | Description | -|-----------|-------------| -| `table_name`
      `table_name @ index_name` | The name of the table or index that should be split. | -| `select_stmt` | A [selection query](selection-queries.html) that produces one or more rows at which to split the table or index. | - -## Why Manually Split a Range? - -The key-value layer of CockroachDB is broken into sections of contiguous -key-space known as ranges. By default, CockroachDB attempts to keep ranges below -a size of 64MiB. To do this, the system will automatically [split](architecture/distribution-layer.html#range-splits) -a range if it grows larger than this limit. For most use cases, this automatic -range splitting is sufficient, and you should never need to worry about -when or where the system decides to split ranges. - -However, there are reasons why you may want to perform manual splits on -the ranges that store tables or indexes: - -- When a table only consists of a single range, all writes and reads to the - table will be served by that range's [leaseholder](architecture/replication-layer.html#leases). - If a table only holds a small amount of data but is serving a large amount of traffic, - load distribution can become unbalanced. Splitting the table's ranges manually - can allow the load on the table to be more evenly distributed across multiple - nodes. For tables consisting of more than a few ranges, load will naturally - be distributed across multiple nodes and this will not be a concern. - -- When a table is created, it will only consist of a single range. If you know - that a new table will immediately receive significant write - traffic, you may want to preemptively split the table based on the expected - distribution of writes before applying the load. This can help avoid reduced - workload performance that results when automatic splits are unable to keep up - with write traffic. - -Note that when a table is [truncated](truncate.html), it is essentially re-created in a single new empty range, and the old ranges that used to constitute the table are garbage collected. Any pre-splitting you have performed on the old version of the table will not carry over to the new version. The new table will need to be pre-split again. 
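-
-For example, assuming the `kv` table from the examples below had been pre-split before being truncated, the same splits might be re-applied afterwards along these lines (a sketch only; choose split points that match the expected distribution of writes):
-
-{% include copy-clipboard.html %}
-~~~ sql
-> TRUNCATE kv;
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> ALTER TABLE kv SPLIT AT VALUES (10), (20), (30);
-~~~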
- -## Examples - -### Split a Table - -{% include copy-clipboard.html %} -~~~ sql -> SHOW EXPERIMENTAL_RANGES FROM TABLE kv; -~~~ - -~~~ -+-----------+---------+----------+----------+--------------+ -| Start Key | End Key | Range ID | Replicas | Lease Holder | -+-----------+---------+----------+----------+--------------+ -| NULL | NULL | 72 | {1} | 1 | -+-----------+---------+----------+----------+--------------+ -(1 row) -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> ALTER TABLE kv SPLIT AT VALUES (10), (20), (30); -~~~ - -~~~ -+------------+----------------+ -| key | pretty | -+------------+----------------+ -| \u0209\x92 | /Table/64/1/10 | -| \u0209\x9c | /Table/64/1/20 | -| \u0209\xa6 | /Table/64/1/30 | -+------------+----------------+ -(3 rows) -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SHOW EXPERIMENTAL_RANGES FROM TABLE kv; -~~~ - -~~~ -+-----------+---------+----------+----------+--------------+ -| Start Key | End Key | Range ID | Replicas | Lease Holder | -+-----------+---------+----------+----------+--------------+ -| NULL | /10 | 72 | {1} | 1 | -| /10 | /20 | 73 | {1} | 1 | -| /20 | /30 | 74 | {1} | 1 | -| /30 | NULL | 75 | {1} | 1 | -+-----------+---------+----------+----------+--------------+ -(4 rows) -~~~ - -### Split a Table with a Composite Primary Key - -You may want to split a table with a composite primary key (e.g., when working with [partitions](partitioning.html#partition-using-primary-key)). - -Given the table - -{% include copy-clipboard.html %} -~~~ sql -CREATE TABLE t (k1 INT, k2 INT, v INT, w INT, PRIMARY KEY (k1, k2)); -~~~ - -we can split it at its primary key like so: - -{% include copy-clipboard.html %} -~~~ sql -ALTER TABLE t SPLIT AT VALUES (5,1), (5,2), (5,3); -~~~ - -~~~ -+------------+-----------------+ -| key | pretty | -+------------+-----------------+ -| \xbc898d89 | /Table/52/1/5/1 | -| \xbc898d8a | /Table/52/1/5/2 | -| \xbc898d8b | /Table/52/1/5/3 | -+------------+-----------------+ -(3 rows) -~~~ - -To see more information about the range splits, run: - -{% include copy-clipboard.html %} -~~~ sql -SHOW EXPERIMENTAL_RANGES FROM TABLE t; -~~~ - -~~~ -+-----------+---------+----------+----------+--------------+ -| Start Key | End Key | Range ID | Replicas | Lease Holder | -+-----------+---------+----------+----------+--------------+ -| NULL | /5/1 | 151 | {2,3,5} | 5 | -| /5/1 | /5/2 | 152 | {2,3,5} | 5 | -| /5/2 | /5/3 | 153 | {2,3,5} | 5 | -| /5/3 | NULL | 154 | {2,3,5} | 5 | -+-----------+---------+----------+----------+--------------+ -(4 rows) -~~~ - -Alternatively, you could split at a prefix of the primary key columns. 
For example, to add a split before all keys that start with `3`, run: - -{% include copy-clipboard.html %} -~~~ sql -> ALTER TABLE t SPLIT AT VALUES (3); -~~~ - -~~~ -+----------+---------------+ -| key | pretty | -+----------+---------------+ -| \xcd898b | /Table/69/1/3 | -+----------+---------------+ -(1 row) -~~~ - -Conceptually, this means that the second range will include keys that start with `3` through `∞`: - -{% include copy-clipboard.html %} -~~~ sql -SHOW EXPERIMENTAL_RANGES FROM TABLE t; -~~~ - -~~~ -+-----------+---------+----------+----------+--------------+ -| Start Key | End Key | Range ID | Replicas | Lease Holder | -+-----------+---------+----------+----------+--------------+ -| NULL | /3 | 155 | {2,3,5} | 5 | -| /3 | NULL | 165 | {2,3,5} | 5 | -+-----------+---------+----------+----------+--------------+ -(2 rows) -~~~ - -### Split an Index - -{% include copy-clipboard.html %} -~~~ sql -> CREATE INDEX secondary ON kv (v); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SHOW EXPERIMENTAL_RANGES FROM INDEX kv@secondary; -~~~ - -~~~ -+-----------+---------+----------+----------+--------------+ -| Start Key | End Key | Range ID | Replicas | Lease Holder | -+-----------+---------+----------+----------+--------------+ -| NULL | NULL | 75 | {1} | 1 | -+-----------+---------+----------+----------+--------------+ -(1 row) -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> ALTER INDEX kv@secondary SPLIT AT (SELECT v FROM kv LIMIT 3); -~~~ - -~~~ -+---------------------+-----------------+ -| key | pretty | -+---------------------+-----------------+ -| \u020b\x12a\x00\x01 | /Table/64/3/"a" | -| \u020b\x12b\x00\x01 | /Table/64/3/"b" | -| \u020b\x12c\x00\x01 | /Table/64/3/"c" | -+---------------------+-----------------+ -(3 rows) -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SHOW EXPERIMENTAL_RANGES FROM INDEX kv@secondary; -~~~ - -~~~ -+-----------+---------+----------+----------+--------------+ -| Start Key | End Key | Range ID | Replicas | Lease Holder | -+-----------+---------+----------+----------+--------------+ -| NULL | /"a" | 75 | {1} | 1 | -| /"a" | /"b" | 76 | {1} | 1 | -| /"b" | /"c" | 77 | {1} | 1 | -| /"c" | NULL | 78 | {1} | 1 | -+-----------+---------+----------+----------+--------------+ -(4 rows) -~~~ - -## See Also - -- [Selection Queries](selection-queries.html) -- [Distribution Layer](architecture/distribution-layer.html) -- [Replication Layer](architecture/replication-layer.html) diff --git a/src/current/v2.0/sql-audit-logging.md b/src/current/v2.0/sql-audit-logging.md deleted file mode 100644 index 99fc5ca134d..00000000000 --- a/src/current/v2.0/sql-audit-logging.md +++ /dev/null @@ -1,186 +0,0 @@ ---- -title: SQL Audit Logging -summary: Use the EXPERIMENTAL_AUDIT setting to turn SQL audit logging on or off for a table. -toc: true ---- - -SQL audit logging gives you detailed information about queries being executed against your system. This feature is especially useful when you want to log all queries that are run against a table containing personally identifiable information (PII). - -This page has an example showing: - -- How to turn audit logging on and off. -- Where the audit log files live. -- What the audit log files look like. - -For reference material, including a detailed description of the audit log file format, see [`ALTER TABLE ... EXPERIMENTAL_AUDIT`](experimental-audit.html). - -{% include {{ page.version.version }}/misc/experimental-warning.md %} - - -## Step 1. 
Create sample tables - -Use the statements below to create: - -- A `customers` table which contains PII such as name, address, etc. -- An `orders` table with a foreign key into `customers`, which does not expose any PII - -Later, we'll show how to turn on audit logs for the `customers` table. - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE customers ( - id UUID PRIMARY KEY DEFAULT gen_random_uuid(), - name STRING NOT NULL, - address STRING NOT NULL, - national_id INT NOT NULL, - telephone INT NOT NULL, - email STRING UNIQUE NOT NULL -); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE orders ( - id UUID PRIMARY KEY DEFAULT gen_random_uuid(), - product_id INT NOT NULL, - delivery_status STRING check (delivery_status='processing' or delivery_status='in-transit' or delivery_status='delivered') NOT NULL, - customer_id UUID NOT NULL REFERENCES customers (id) -); -~~~ - -## Step 2. Turn on auditing for the `customers` table - -We turn on auditing for a table using the [`EXPERIMENTAL_AUDIT`](experimental-audit.html) subcommand of [`ALTER TABLE`](alter-table.html). - -{% include copy-clipboard.html %} -~~~ sql -> ALTER TABLE customers EXPERIMENTAL_AUDIT SET READ WRITE; -~~~ - -{{site.data.alerts.callout_info}} -To turn on auditing for more than one table, issue a separate `ALTER` statement for each table. -{{site.data.alerts.end}} - -## Step 3. Populate the `customers` table - -Now that we have auditing turned on, let's add some customer data: - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO customers (name, address, national_id, telephone, email) VALUES ( - 'Pritchard M. Cleveland', - '23 Crooked Lane, Garden City, NY USA 11536', - 778124477, - 12125552000, - 'pritchmeister@aol.com' -); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO customers (name, address, national_id, telephone, email) VALUES ( - 'Vainglorious K. Snerptwiddle III', - '44 Straight Narrows, Garden City, NY USA 11536', - 899127890, - 16465552000, - 'snerp@snerpy.net' -); -~~~ - -Now let's verify that our customers were added successfully: - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM customers; -~~~ - -~~~ -+--------------------------------------+----------------------------------+------------------------------------------------+-------------+-------------+-----------------------+ -| id | name | address | national_id | telephone | email | -+--------------------------------------+----------------------------------+------------------------------------------------+-------------+-------------+-----------------------+ -| 4bd266fc-0b62-4cc4-8c51-6997675884cd | Vainglorious K. Snerptwiddle III | 44 Straight Narrows, Garden City, NY USA 11536 | 899127890 | 16465552000 | snerp@snerpy.net | -| 988f54f0-b4a5-439b-a1f7-284358633250 | Pritchard M. Cleveland | 23 Crooked Lane, Garden City, NY USA 11536 | 778124477 | 12125552000 | pritchmeister@aol.com | -+--------------------------------------+----------------------------------+------------------------------------------------+-------------+-------------+-----------------------+ -(2 rows) -~~~ - -## Step 4. Check the audit log - -By default, the active audit log file is named `cockroach-sql-audit.log` and is stored in CockroachDB's standard log directory. To store the audit log files in a specific directory, pass the `--sql-audit-dir` flag to [`cockroach start`](start-a-node.html). Like the other log files, it's rotated according to the `--log-file-max-size` setting. 
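-
-For example, a node might be started with a dedicated audit log directory along these lines (a sketch; the path is a placeholder, and any other `cockroach start` flags your deployment requires are omitted):
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach start --insecure --sql-audit-dir=/path/to/audit-logs
-~~~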
- -When we look at the audit log for this example, we see the following lines showing every command we've run so far, as expected. - -~~~ -I180321 20:54:21.381565 351 sql/exec_log.go:163 [n1,client=127.0.0.1:60754,user=root] 2 exec "cockroach sql" {"customers"[76]:READWRITE} "ALTER TABLE customers EXPERIMENTAL_AUDIT SET READ WRITE" {} 4.811 0 OK -I180321 20:54:26.315985 351 sql/exec_log.go:163 [n1,client=127.0.0.1:60754,user=root] 3 exec "cockroach sql" {"customers"[76]:READWRITE} "INSERT INTO customers(\"name\", address, national_id, telephone, email) VALUES ('Pritchard M. Cleveland', '23 Crooked Lane, Garden City, NY USA 11536', 778124477, 12125552000, 'pritchmeister@aol.com')" {} 6.319 1 OK -I180321 20:54:30.080592 351 sql/exec_log.go:163 [n1,client=127.0.0.1:60754,user=root] 4 exec "cockroach sql" {"customers"[76]:READWRITE} "INSERT INTO customers(\"name\", address, national_id, telephone, email) VALUES ('Vainglorious K. Snerptwiddle III', '44 Straight Narrows, Garden City, NY USA 11536', 899127890, 16465552000, 'snerp@snerpy.net')" {} 2.809 1 OK -I180321 20:54:39.377395 351 sql/exec_log.go:163 [n1,client=127.0.0.1:60754,user=root] 5 exec "cockroach sql" {"customers"[76]:READ} "SELECT * FROM customers" {} 1.236 2 OK -~~~ - -{{site.data.alerts.callout_info}} -For reference documentation of the audit log file format, see [`ALTER TABLE ... EXPERIMENTAL_AUDIT`](experimental-audit.html). -{{site.data.alerts.end}} - -## Step 5. Populate the `orders` table - -Unlike the `customers` table, `orders` doesn't have any PII, just a Product ID and a delivery status. (Note the use of the [`CHECK` constraint](check.html) as a workaround for the as-yet-unimplemented `ENUM` - see [SQL feature support](sql-feature-support.html) for more information.) - -Let's populate the `orders` table with some placeholder data using [`CREATE SEQUENCE`](create-sequence.html): - -{% include copy-clipboard.html %} -~~~ sql -> CREATE SEQUENCE product_ids_asc START 1 INCREMENT 1; -~~~ - -Evaluate the below a few times to generate data; note that this would error if [`SELECT`](select-clause.html) returned multiple results, but it doesn't in this case. - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO orders (product_id, delivery_status, customer_id) VALUES ( - nextval('product_ids_asc'), - 'processing', - (SELECT id FROM customers WHERE name ~ 'Cleve') -); -~~~ - -Let's verify that our orders were added successfully: - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM orders ORDER BY product_id; -~~~ - -~~~ -+--------------------------------------+------------+-----------------+--------------------------------------+ -| id | product_id | delivery_status | customer_id | -+--------------------------------------+------------+-----------------+--------------------------------------+ -| 6e85c390-3bbf-48da-9c2f-a73a0ab9c2ce | 1 | processing | df053c68-fcb0-4a80-ad25-fef9d3b408ca | -| e93cdaee-d5eb-428c-bc1b-a7367f334f99 | 2 | processing | df053c68-fcb0-4a80-ad25-fef9d3b408ca | -| f05a1b0f-5847-424d-b8c8-07faa6b6e46b | 3 | processing | df053c68-fcb0-4a80-ad25-fef9d3b408ca | -| 86f619d6-9f18-4c84-8ead-68cd07a1ee37 | 4 | processing | df053c68-fcb0-4a80-ad25-fef9d3b408ca | -| 882c0fc8-64e7-4fab-959d-a4ff74f170c0 | 5 | processing | df053c68-fcb0-4a80-ad25-fef9d3b408ca | -+--------------------------------------+------------+-----------------+--------------------------------------+ -(5 rows) -~~~ - -## Step 6. 
Check the audit log again - -Because we used a `SELECT` against the `customers` table to generate the placeholder data for `orders`, those queries will also show up in the audit log as follows: - -~~~ -I180321 21:01:59.677273 351 sql/exec_log.go:163 [n1,client=127.0.0.1:60754,user=root] 7 exec "cockroach sql" {"customers"[76]:READ, "customers"[76]:READ} "INSERT INTO orders(product_id, delivery_status, customer_id) VALUES (nextval('product_ids_asc'), 'processing', (SELECT id FROM customers WHERE \"name\" ~ 'Cleve'))" {} 5.183 1 OK -I180321 21:04:07.497555 351 sql/exec_log.go:163 [n1,client=127.0.0.1:60754,user=root] 8 exec "cockroach sql" {"customers"[76]:READ, "customers"[76]:READ} "INSERT INTO orders(product_id, delivery_status, customer_id) VALUES (nextval('product_ids_asc'), 'processing', (SELECT id FROM customers WHERE \"name\" ~ 'Cleve'))" {} 5.219 1 OK -I180321 21:04:08.730379 351 sql/exec_log.go:163 [n1,client=127.0.0.1:60754,user=root] 9 exec "cockroach sql" {"customers"[76]:READ, "customers"[76]:READ} "INSERT INTO orders(product_id, delivery_status, customer_id) VALUES (nextval('product_ids_asc'), 'processing', (SELECT id FROM customers WHERE \"name\" ~ 'Cleve'))" {} 5.392 1 OK -~~~ - -{{site.data.alerts.callout_info}} -For reference documentation of the audit log file format, see [`ALTER TABLE ... EXPERIMENTAL_AUDIT`](experimental-audit.html). -{{site.data.alerts.end}} - -## See Also - -- [`ALTER TABLE ... EXPERIMENTAL_AUDIT`](experimental-audit.html) -- [`cockroach start` logging flags](start-a-node.html#logging) -- [SQL FAQ - generating unique row IDs](sql-faqs.html#how-do-i-auto-generate-unique-row-ids-in-cockroachdb) -- [`CREATE SEQUENCE`](create-sequence.html) -- [SQL Feature Support](sql-feature-support.html) diff --git a/src/current/v2.0/sql-constants.md b/src/current/v2.0/sql-constants.md deleted file mode 100644 index 25929a3240d..00000000000 --- a/src/current/v2.0/sql-constants.md +++ /dev/null @@ -1,236 +0,0 @@ ---- -title: Constant Values -summary: SQL Constants represent a simple value that doesn't change. -toc: true ---- - -SQL Constants represent a simple value that doesn't change. - - -## Introduction - -There are five categories of constants in CockroachDB: - -- [String literals](#string-literals): these define string values but their actual data type will - be inferred from context, for example, `'hello'`. -- [Numeric literals](#numeric-literals): these define numeric values but their actual data - type will be inferred from context, for example, `-12.3`. -- [Byte array literals](#byte-array-literals): these define byte array values with data type - `BYTES`, for example, `b'hello'`. -- [Interpreted literals](#interpreted-literals): these define arbitrary values with an explicit - type, for example, `INTERVAL '3 days'`. -- [Named constants](#named-constants): these have predefined values with a predefined - type, for example, `TRUE` or `NULL`. - -## String literals - -CockroachDB supports two formats for string literals: - -- [Standard SQL string literals](#standard-sql-string-literals). -- [String literals with C escape sequences](#string-literals-with-character-escapes). - -These format also allow arbitrary Unicode characters encoded as UTF-8. - -In any case, the actual data type of a string literal is determined -using the context where it appears. 
- -For example: - -| Expression | Data type of the string literal | -|------------|---------------------------------| -| `length('hello')` | `STRING` | -| `now() + '3 day'` | `INTERVAL` | -| `INSERT INTO tb(date_col) VALUES ('2013-01-02')` | `DATE` | - -In general, the data type of a string literal is that demanded by the -context if there is no ambiguity, or `STRING` otherwise. - -Check our blog for -[more information about the typing of string literals](https://www.cockroachlabs.com/blog/revisiting-sql-typing-in-cockroachdb/). - -### Standard SQL string literals - -SQL string literals are formed by an arbitrary sequence of characters -enclosed between single quotes (`'`), for example, `'hello world'`. - -To include a single quote in the string, use a double single quote. -For example: - -~~~sql -> SELECT 'hello' as a, 'it''s a beautiful day' as b; -~~~ -~~~ -+-------+----------------------+ -| a | b | -+-------+----------------------+ -| hello | it's a beautiful day | -+-------+----------------------+ -~~~ - -For compatibility with the SQL standard, CockroachDB also recognizes -the following special syntax: two simple string literals separated by -a newline character are automatically concatenated together to form a -single constant. For example: - -~~~sql -> SELECT 'hello' -' world!' as a; -~~~ -~~~ -+--------------+ -| a | -+--------------+ -| hello world! | -+--------------+ -~~~ - -This special syntax only works if the two simple literals are -separated by a newline character. For example `'hello' ' world!'` -doesn't work. This is mandated by the SQL standard. - -### String literals with character escapes - -CockroachDB also supports string literals containing escape sequences -like in the programming language C. These are constructed by prefixing -the string literal with the letter `e`, for example, -`e'hello\nworld!'`. - -The following escape sequences are supported: - -Escape Sequence | Interpretation -----------------|--------------- -`\a` | ASCII code 7 (BEL) -`\b` | backspace (ASCII 8) -`\t` | tab (ASCII 9) -`\n` | newline (ASCII 10) -`\v` | vertical tab (ASCII 11) -`\f` | form feed (ASCII 12) -`\r` | carriage return (ASCII 13) -`\xHH` | hexadecimal byte value -`\ooo` | octal byte value -`\uXXXX` | 16-bit hexadecimal Unicode character value -`\UXXXXXXXX` | 32-bit hexadecimal Unicode character value - -For example, the `e'x61\141\u0061'` escape string represents the -hexadecimal byte, octal byte, and 16-bit hexadecimal Unicode character -values equivalent to the `'aaa'` string literal. - -## Numeric literals - -Numeric literals can have the following forms: - -~~~ -[+-]9999 -[+-]9999.[9999][e[+-]999] -[+-][9999].9999[e[+-]999] -[+-]9999e[+-]999 -[+-]0xAAAA -~~~ - -Some examples: - -~~~ -+4269 -3.1415 --.001 -6.626e-34 -50e6 -0xcafe111 -~~~ - -The actual data type of a numeric constant depends both on the context -where it is used, its literal format, and its numeric value. - -| Syntax | Possible data types | -|--------|---------------------| -| Contains a decimal separator | `FLOAT`, `DECIMAL` | -| Contains an exponent | `FLOAT`, `DECIMAL` | -| Contains a value outside of the range -2^63...(2^63)-1 | `FLOAT`, `DECIMAL` | -| Otherwise | `INT`, `DECIMAL`, `FLOAT` | - -Of the possible data types, which one is actually used is then further -refined depending on context. - -Check our blog for -[more information about the typing of numeric literals](https://www.cockroachlabs.com/blog/revisiting-sql-typing-in-cockroachdb/). 
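-
-For a quick illustration of this context-dependent refinement (a sketch; the comments describe the default type resolution outlined above):
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT 1 + 2;   -- both constants resolve as INT
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT 1 + 2.5; -- here 1 resolves as DECIMAL to match 2.5
-~~~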
- -## Byte array literals - -CockroachDB supports two formats for byte array literals: - -- [Byte array literals with C escape sequences](#byte-array-literals-with-character-escapes) -- [Hexadecimal-encoded byte array literals](#hexadecimal-encoded-byte-array-literals) - -### Byte array literals with character escapes - -This uses the same syntax as [string literals containing character escapes](#string-literals-with-character-escapes), -using a `b` prefix instead of `e`. Any character escapes are interpreted like they -would be for string literals. - -For example: `b'hello,\x32world'` - -The two differences between byte array literals and string literals -with character escapes are as follows: - -- Byte array literals always have data type `BYTES`, whereas the data - type of a string literal depends on context. -- Byte array literals may contain invalid UTF-8 byte sequences, - whereas string literals must always contain valid UTF-8 sequences. - -### Hexadecimal-encoded byte array literals - -This is a CockroachDB-specific extension to express byte array -literals: the delimiter `x'` followed by an arbitrary sequence of -hexadecimal digits, followed by a closing `'`. - -For example, all the following formats are equivalent to `b'cat'`: - -- `x'636174'` -- `X'636174'` - -## Interpreted literals - -A constant of any data type can be formed using either of the following formats: - -~~~ -type 'string' -'string':::type -~~~ - -The value of the string part is used as input for the conversion function to the -specified data type, and the result is used as a constant with that data type. - -Examples: - -~~~ -DATE '2013-12-23' -BOOL 'FALSE' -'42.69':::INT -'TRUE':::BOOL -'3 days':::INTERVAL -~~~ - -Additionally, for compatibility with PostgreSQL, the notation -`'string'::type` and `CAST('string' AS type)` is also recognized as an -interpreted literal. These are special cases of -[cast expressions](scalar-expressions.html). - -For more information about the allowable format of interpreted -literals, refer to the "Syntax" section of the respective data types: -[`DATE`](date.html#syntax), [`INET`](inet.html#syntax), [`INTERVAL`](interval.html#syntax), [`TIME`](time.html#syntax), -[`TIMESTAMP`/`TIMESTAMPTZ`](timestamp.html#syntax). - -## Named constants - -CockroachDB recognizes the following SQL named constants: - -- `TRUE` and `FALSE`, the two possible values of data type `BOOL`. -- `NULL`, the special SQL symbol that indicates "no value present". - -Note that `NULL` is a valid constant for any type: its actual data -type during expression evaluation is determined based on context. - -## See Also - -- [Scalar Expressions](scalar-expressions.html) -- [Data Types](data-types.html) diff --git a/src/current/v2.0/sql-dump.md b/src/current/v2.0/sql-dump.md deleted file mode 100644 index 3a51c1e1fd5..00000000000 --- a/src/current/v2.0/sql-dump.md +++ /dev/null @@ -1,359 +0,0 @@ ---- -title: SQL Dump (Export) -summary: Learn how to dump schemas and data from a CockroachDB cluster. -toc: true ---- - -The `cockroach dump` [command](cockroach-commands.html) outputs the SQL statements required to recreate tables, views, and sequences. This command can be used to back up or export each database in a cluster. The output should also be suitable for importing into other relational databases, with minimal adjustments. 
-
-{{site.data.alerts.callout_success}}CockroachDB enterprise license users can also back up their cluster's data using BACKUP.{{site.data.alerts.end}}
-
-
-## Considerations
-
-When `cockroach dump` is executed:
-
-- Table, sequence, and view schemas and table data are dumped as they appeared at the time that the command is started. Any changes after the command starts will not be included in the dump.
-- Table and view schemas are dumped in the order in which they can successfully be recreated. As of v2.0, this is true of sequences as well. However, `cockroach dump` does not support circular foreign keys. See this [known limitation](#known-limitations) for more details.
-- If the dump takes longer than the [`ttlseconds`](configure-replication-zones.html) replication setting for the table (25 hours by default), the dump may fail.
-- Reads, writes, and schema changes can happen while the dump is in progress, but will not affect the output of the dump.
-
-{{site.data.alerts.callout_info}}The user must have the SELECT privilege on the target table(s).{{site.data.alerts.end}}
-
-## Synopsis
-
-~~~ shell
-# Dump the schemas and data of specific tables to stdout:
-$ cockroach dump <database> <table> <table...> <flags>
-
-# Dump just the data of specific tables to stdout:
-$ cockroach dump <database> <table> <table...> --dump-mode=data <other flags>
-
-# Dump just the schemas of specific tables to stdout:
-$ cockroach dump <database> <table> <table...> --dump-mode=schema <other flags>
-
-# Dump the schemas and data of all tables in a database to stdout:
-$ cockroach dump <database> <flags>
-
-# Dump just the schemas of all tables in a database to stdout:
-$ cockroach dump <database> --dump-mode=schema <other flags>
-
-# Dump just the data of all tables in a database to stdout:
-$ cockroach dump <database> --dump-mode=data <other flags>
-
-# Dump to a file:
-$ cockroach dump <database> <table> <table...> <flags> > dump-file.sql
-
-# View help:
-$ cockroach dump --help
-~~~
-
-## Flags
-
-The `dump` command supports the following [general-use](#general) and [logging](#logging) flags.
-
-### General
-
-Flag | Description
------|------------
-`--as-of` | Dump table schema and/or data as they appear at the specified [timestamp](timestamp.html). See this [example](#dump-table-data-as-of-a-specific-time) for a demonstration.<br><br>Note that historical data is available only within the garbage collection window, which is determined by the [`ttlseconds`](configure-replication-zones.html) replication setting for the table (25 hours by default). If this timestamp is earlier than that window, the dump will fail.<br><br>**Default:** Current time
-`--dump-mode` | Whether to dump table and view schemas, table data, or both.<br><br>To dump just table and view schemas, set this to `schema`. To dump just table data, set this to `data`. To dump both table and view schemas and table data, leave this flag out or set it to `both`.<br><br>New in v1.1: Table and view schemas are dumped in the order in which they can successfully be recreated. For example, if a database includes a table, a second table with a foreign key dependency on the first, and a view that depends on the second table, the dump will list the schema for the first table, then the schema for the second table, and then the schema for the view.<br><br>
      **Default:** `both` -`--echo-sql` | New in v1.1: Reveal the SQL statements sent implicitly by the command-line utility. - -### Client Connection - -{% include {{ page.version.version }}/sql/connection-parameters-with-url.md %} - -See [Client Connection Parameters](connection-parameters.html) for more details. - -{{site.data.alerts.callout_info}}The user specified with --user must -have the SELECT privilege on the target -tables.{{site.data.alerts.end}} - -### Logging - -By default, the `dump` command logs errors to `stderr`. - -If you need to troubleshoot this command's behavior, you can change its [logging behavior](debug-and-error-logs.html). - -## Examples - -{{site.data.alerts.callout_info}}These examples use our sample startrek database, which you can add to a cluster via the cockroach gen command. Also, the examples assume that the maxroach user has been granted the SELECT privilege on all target tables. {{site.data.alerts.end}} - -### Dump a table's schema and data - -~~~ shell -$ cockroach dump startrek episodes --insecure --user=maxroach > backup.sql -~~~ - -~~~ shell -$ cat backup.sql -~~~ - -~~~ -CREATE TABLE episodes ( - id INT NOT NULL, - season INT NULL, - num INT NULL, - title STRING NULL, - stardate DECIMAL NULL, - CONSTRAINT "primary" PRIMARY KEY (id), - FAMILY "primary" (id, season, num), - FAMILY fam_1_title (title), - FAMILY fam_2_stardate (stardate) -); - -INSERT INTO episodes (id, season, num, title, stardate) VALUES - (1, 1, 1, 'The Man Trap', 1531.1), - (2, 1, 2, 'Charlie X', 1533.6), - (3, 1, 3, 'Where No Man Has Gone Before', 1312.4), - (4, 1, 4, 'The Naked Time', 1704.2), - (5, 1, 5, 'The Enemy Within', 1672.1), - (6, 1, 6, e'Mudd\'s Women', 1329.8), - (7, 1, 7, 'What Are Little Girls Made Of?', 2712.4), - (8, 1, 8, 'Miri', 2713.5), - (9, 1, 9, 'Dagger of the Mind', 2715.1), - (10, 1, 10, 'The Corbomite Maneuver', 1512.2), - ... -~~~ - -### Dump just a table's schema - -~~~ shell -$ cockroach dump startrek episodes --insecure --user=maxroach --dump-mode=schema > backup.sql -~~~ - -~~~ shell -$ cat backup.sql -~~~ - -~~~ -CREATE TABLE episodes ( - id INT NOT NULL, - season INT NULL, - num INT NULL, - title STRING NULL, - stardate DECIMAL NULL, - CONSTRAINT "primary" PRIMARY KEY (id), - FAMILY "primary" (id, season, num), - FAMILY fam_1_title (title), - FAMILY fam_2_stardate (stardate) -); -~~~ - -### Dump just a table's data - -~~~ shell -$ cockroach dump startrek episodes --insecure --user=maxroach --dump-mode=data > backup.sql -~~~ - -~~~ shell -$ cat backup.sql -~~~ - -~~~ -INSERT INTO episodes (id, season, num, title, stardate) VALUES - (1, 1, 1, 'The Man Trap', 1531.1), - (2, 1, 2, 'Charlie X', 1533.6), - (3, 1, 3, 'Where No Man Has Gone Before', 1312.4), - (4, 1, 4, 'The Naked Time', 1704.2), - (5, 1, 5, 'The Enemy Within', 1672.1), - (6, 1, 6, e'Mudd\'s Women', 1329.8), - (7, 1, 7, 'What Are Little Girls Made Of?', 2712.4), - (8, 1, 8, 'Miri', 2713.5), - (9, 1, 9, 'Dagger of the Mind', 2715.1), - (10, 1, 10, 'The Corbomite Maneuver', 1512.2), - ... 
-~~~ - -### Dump all tables in a database - -~~~ shell -$ cockroach dump startrek --insecure --user=maxroach > backup.sql -~~~ - -~~~ shell -$ cat backup.sql -~~~ - -~~~ -CREATE TABLE episodes ( - id INT NOT NULL, - season INT NULL, - num INT NULL, - title STRING NULL, - stardate DECIMAL NULL, - CONSTRAINT "primary" PRIMARY KEY (id), - FAMILY "primary" (id, season, num), - FAMILY fam_1_title (title), - FAMILY fam_2_stardate (stardate) -); - -CREATE TABLE quotes ( - quote STRING NULL, - characters STRING NULL, - stardate DECIMAL NULL, - episode INT NULL, - INDEX quotes_episode_idx (episode), - FAMILY "primary" (quote, rowid), - FAMILY fam_1_characters (characters), - FAMILY fam_2_stardate (stardate), - FAMILY fam_3_episode (episode) -); - -INSERT INTO episodes (id, season, num, title, stardate) VALUES - (1, 1, 1, 'The Man Trap', 1531.1), - (2, 1, 2, 'Charlie X', 1533.6), - (3, 1, 3, 'Where No Man Has Gone Before', 1312.4), - (4, 1, 4, 'The Naked Time', 1704.2), - (5, 1, 5, 'The Enemy Within', 1672.1), - (6, 1, 6, e'Mudd\'s Women', 1329.8), - (7, 1, 7, 'What Are Little Girls Made Of?', 2712.4), - (8, 1, 8, 'Miri', 2713.5), - (9, 1, 9, 'Dagger of the Mind', 2715.1), - (10, 1, 10, 'The Corbomite Maneuver', 1512.2), - ... - -INSERT INTO quotes (quote, characters, stardate, episode) VALUES - ('"... freedom ... is a worship word..." "It is our worship word too."', 'Cloud William and Kirk', NULL, 52), - ('"Beauty is transitory." "Beauty survives."', 'Spock and Kirk', NULL, 72), - ('"Can you imagine how life could be improved if we could do away with jealousy, greed, hate ..." "It can also be improved by eliminating love, tenderness, sentiment -- the other side of the coin"', 'Dr. Roger Corby and Kirk', 2712.4, 7), - ... -~~~ - -### Dump fails (user does not have `SELECT` privilege) - -In this example, the `dump` command fails for a user that does not have the `SELECT` privilege on the `episodes` table. - -~~~ shell -$ cockroach dump startrek episodes --insecure --user=leslieroach > backup.sql -~~~ - -~~~ shell -Error: pq: user leslieroach has no privileges on table episodes -Failed running "dump" -~~~ - -### Restore a table from a backup file - -In this example, a user that has the `CREATE` privilege on the `startrek` database uses the [`cockroach sql`](use-the-built-in-sql-client.html) command to recreate a table, based on a file created by the `dump` command. - -~~~ shell -$ cat backup.sql -~~~ - -~~~ -CREATE TABLE quotes ( - quote STRING NULL, - characters STRING NULL, - stardate DECIMAL NULL, - episode INT NULL, - INDEX quotes_episode_idx (episode), - FAMILY "primary" (quote, rowid), - FAMILY fam_1_characters (characters), - FAMILY fam_2_stardate (stardate), - FAMILY fam_3_episode (episode) -); - -INSERT INTO quotes (quote, characters, stardate, episode) VALUES - ('"... freedom ... is a worship word..." "It is our worship word too."', 'Cloud William and Kirk', NULL, 52), - ('"Beauty is transitory." "Beauty survives."', 'Spock and Kirk', NULL, 72), - ('"Can you imagine how life could be improved if we could do away with jealousy, greed, hate ..." "It can also be improved by eliminating love, tenderness, sentiment -- the other side of the coin"', 'Dr. Roger Corby and Kirk', 2712.4, 7), - ... 
-~~~ - -~~~ shell -$ cockroach sql --insecure --database=startrek --user=maxroach < backup.sql -~~~ - -~~~ shell -CREATE TABLE -INSERT 100 -INSERT 100 -~~~ - -### Dump table data as of a specific time - -In this example, we assume there were several inserts into a table both before and after `2017-03-07 19:55:00`. - -First, let's use the built-in SQL client to view the table at the current time: - -~~~ shell -$ cockroach sql --insecure --execute="SELECT * FROM db1.dump_test" -~~~ - -~~~ -+--------------------+------+ -| id | name | -+--------------------+------+ -| 225594758537183233 | a | -| 225594758537248769 | b | -| 225594758537281537 | c | -| 225594758537314305 | d | -| 225594758537347073 | e | -| 225594758537379841 | f | -| 225594758537412609 | g | -| 225594758537445377 | h | -| 225594991654174721 | i | -| 225594991654240257 | j | -| 225594991654273025 | k | -| 225594991654305793 | l | -| 225594991654338561 | m | -| 225594991654371329 | n | -| 225594991654404097 | o | -| 225594991654436865 | p | -+--------------------+------+ -(16 rows) -~~~ - -Next, let's use a [time-travel query](select-clause.html#select-historical-data-time-travel) to view the contents of the table as of `2017-03-07 19:55:00`: - -~~~ shell -$ cockroach sql --insecure --execute="SELECT * FROM db1.dump_test AS OF SYSTEM TIME '2017-03-07 19:55:00'" -~~~ - -~~~ -+--------------------+------+ -| id | name | -+--------------------+------+ -| 225594758537183233 | a | -| 225594758537248769 | b | -| 225594758537281537 | c | -| 225594758537314305 | d | -| 225594758537347073 | e | -| 225594758537379841 | f | -| 225594758537412609 | g | -| 225594758537445377 | h | -+--------------------+------+ -(8 rows) -~~~ - -Finally, let's use `cockroach dump` with the `--as-of` flag set to dump the contents of the table as of `2017-03-07 19:55:00`. - -~~~ shell -$ cockroach dump db1 dump_test --insecure --dump-mode=data --as-of='2017-03-07 19:55:00' -~~~ - -~~~ -INSERT INTO dump_test (id, name) VALUES - (225594758537183233, 'a'), - (225594758537248769, 'b'), - (225594758537281537, 'c'), - (225594758537314305, 'd'), - (225594758537347073, 'e'), - (225594758537379841, 'f'), - (225594758537412609, 'g'), - (225594758537445377, 'h'); -~~~ - -As you can see, the results of the dump are identical to the earlier time-travel query. - -## Known Limitations - -{% include {{ page.version.version }}/known-limitations/dump-cyclic-foreign-keys.md %} - -## See Also - -- [Import Data](import-data.html) -- [Use the Built-in SQL Client](use-the-built-in-sql-client.html) -- [Other Cockroach Commands](cockroach-commands.html) diff --git a/src/current/v2.0/sql-faqs.md b/src/current/v2.0/sql-faqs.md deleted file mode 100644 index 8f08cbda821..00000000000 --- a/src/current/v2.0/sql-faqs.md +++ /dev/null @@ -1,131 +0,0 @@ ---- -title: SQL FAQs -summary: Get answers to frequently asked questions about CockroachDB SQL. -toc: true -toc_not_nested: true ---- - - -## How do I bulk insert data into CockroachDB? - -Currently, you can bulk insert data with batches of [`INSERT`](insert.html) statements not exceeding a few MB. The size of your rows determines how many you can use, but 1,000 - 10,000 rows typically works best. For more details, see [Import Data](import-data.html). - -## How do I auto-generate unique row IDs in CockroachDB? - -{% include {{ page.version.version }}/faq/auto-generate-unique-ids.html %} - -## How do I generate unique, slowly increasing sequential numbers in CockroachDB? 
- -{% include {{ page.version.version }}/faq/sequential-numbers.md %} - -## What are the differences between `UUID`, sequences, and `unique_rowid()`? - -{% include {{ page.version.version }}/faq/differences-between-numberings.md %} - -## How do I order writes to a table to closely follow time in CockroachDB? - -{% include {{ page.version.version }}/faq/sequential-transactions.md %} - -## How do I get the last ID/SERIAL value inserted into a table? - -There’s no function in CockroachDB for returning last inserted values, but you can use the [`RETURNING` clause](insert.html#insert-and-return-values) of the `INSERT` statement. - -For example, this is how you’d use `RETURNING` to return a value auto-generated via `unique_rowid()` or [`SERIAL`](serial.html): - -~~~ sql -> CREATE TABLE users (id SERIAL, name STRING); - -> INSERT INTO users (name) VALUES ('mike') RETURNING id; -~~~ - -## What is transaction contention? - -Transaction contention occurs when transactions issued from multiple -clients at the same time operate on the same data. -This can cause transactions to wait on each other and decrease -performance, like when many people try to check out with the same -cashier at a store. - -For more information about contention, see [Understanding and Avoiding -Transaction -Contention](performance-best-practices-overview.html#understanding-and-avoiding-transaction-contention). - -## Does CockroachDB support `JOIN`? - -[CockroachDB supports uncorrelated SQL joins](joins.html). - -At this time, `LATERAL` (correlated) joins are not yet supported. - -## When should I use interleaved tables? - -[Interleaving tables](interleave-in-parent.html) improves query performance by optimizing the key-value structure of closely related tables, attempting to keep data on the same key-value range if it's likely to be read and written together. - -{% include {{ page.version.version }}/faq/when-to-interleave-tables.html %} - -## Does CockroachDB support JSON or Protobuf datatypes? - -Yes, as of v2.0, the [`JSONB`](jsonb.html) data type is supported. - -## How do I know which index CockroachDB will select for a query? - -To see which indexes CockroachDB is using for a given query, you can use the [`EXPLAIN`](explain.html) statement, which will print out the query plan, including any indexes that are being used: - -~~~ sql -> EXPLAIN SELECT col1 FROM tbl1; -~~~ - -If you'd like to tell the query planner which index to use, you can do so via some [special syntax for index hints](table-expressions.html#force-index-selection): - -~~~ sql -> SELECT col1 FROM tbl1@idx1; -~~~ - -## How do I log SQL queries? - -{% include {{ page.version.version }}/faq/sql-query-logging.md %} - -## Does CockroachDB support a UUID type? - -Yes. For more details, see [`UUID`](uuid.html). - -## How does CockroachDB sort results when `ORDER BY` is not used? - -When an [`ORDER BY`](query-order.html) clause is not used in a query, rows are processed or returned in a -non-deterministic order. "Non-deterministic" means that the actual order -can depend on the logical plan, the order of data on disk, the topology -of the CockroachDB cluster, and is generally variable over time. - -## Why are my `INT` columns returned as strings in JavaScript? - -In CockroachDB, all `INT`s are represented with 64 bits of precision, but JavaScript numbers only have 53 bits of precision. This means that large integers stored in CockroachDB are not exactly representable as JavaScript numbers. 
For example, JavaScript will round the integer `235191684988928001` to the nearest representable value, `235191684988928000`. Notice that the last digit is different. This is particularly problematic when using the `unique_rowid()` [function](functions-and-operators.html), since `unique_rowid()` nearly always returns integers that require more than 53 bits of precision to represent.
-
-To avoid this loss of precision, Node's [`pg` driver](https://github.com/brianc/node-postgres) will, by default, return all CockroachDB `INT`s as strings.
-
-~~~ javascript
-// Schema: CREATE TABLE users (id INT DEFAULT unique_rowid(), name STRING);
-pgClient.query("SELECT id FROM users WHERE name = 'Roach' LIMIT 1", function(err, res) {
-  var idString = res.rows[0].id;
-  // idString === '235191684988928001'
-  // typeof idString === 'string'
-});
-~~~
-
-To perform another query using the value of `idString`, you can simply use `idString` directly, even where an `INT` type is expected. The string will automatically be coerced into a CockroachDB `INT`.
-
-~~~ javascript
-pgClient.query("UPDATE users SET name = 'Ms. Roach' WHERE id = $1", [idString], function(err, res) {
-  // All should be well!
-});
-~~~
-
-If you instead need to perform arithmetic on `INT`s in JavaScript, you will need to use a big integer library like [Long.js](https://www.npmjs.com/package/long). Do _not_ use the built-in `parseInt` function.
-
-~~~ javascript
-parseInt(idString, 10) + 1; // WRONG: returns 235191684988928000
-require('long').fromString(idString).add(1).toString(); // GOOD: returns '235191684988928002'
-~~~
-
-## See Also
-
-- [Product FAQs](frequently-asked-questions.html)
-- [Operational FAQs](operational-faqs.html)
diff --git a/src/current/v2.0/sql-feature-support.md b/src/current/v2.0/sql-feature-support.md
deleted file mode 100644
index 1bba4edf623..00000000000
--- a/src/current/v2.0/sql-feature-support.md
+++ /dev/null
@@ -1,171 +0,0 @@
----
-title: SQL Feature Support in CockroachDB v2.0
-summary: Summary of CockroachDB's conformance to the SQL standard and which common extensions it supports.
-toc: true
----
-
-Making CockroachDB easy to use is a top priority for us, so we chose to implement SQL. However, even though SQL has a standard, no database implements all of it, nor do any of them have standard implementations of all features.
-
-To understand which standard SQL features we support (as well as common extensions to the standard), use the table below.
-
-- **Component** lists the components that are commonly considered part of SQL.
-- **Supported** shows CockroachDB's level of support for the component.
-- **Type** indicates whether the component is part of the SQL *Standard* or is an *Extension* created by us or others.
-- **Details** provides greater context about the component. 
- - - -## Features - -### Row Values - -| Component | Supported | Type | Details | -|-----------|-----------|------|---------| -| Identifiers | ✓ | Standard | [Identifiers documentation](keywords-and-identifiers.html#identifiers) | -| `INT` | ✓ | Standard | [`INT` documentation](int.html) | -| `FLOAT`, `REAL` | ✓ | Standard | [`FLOAT` documentation](float.html) | -| `BOOLEAN` | ✓ | Standard | [`BOOL` documentation](bool.html) | -| `DECIMAL`, `NUMERIC` | ✓ | Standard | [`DECIMAL` documentation](decimal.html) | -| `NULL` | ✓ | Standard | [*NULL*-handling documentation](null-handling.html) | -| `BYTES` | ✓ | CockroachDB Extension | [`BYTES` documentation](bytes.html) | -| Automatic key generation | ✓ | Common Extension | [Automatic key generation FAQ](sql-faqs.html#how-do-i-auto-generate-unique-row-ids-in-cockroachdb) | -| `STRING`, `CHARACTER` | ✓ | Standard | [`STRING` documentation](string.html) | -| `COLLATE` | ✓ | Standard | [`COLLATE` documentation](collate.html) | -| `AUTO INCREMENT` | Alternative | Common Extension | [Automatic key generation FAQ](sql-faqs.html#how-do-i-auto-generate-unique-row-ids-in-cockroachdb) | -| Key-value pairs | Alternative | Extension | [Key-Value FAQ](frequently-asked-questions.html#can-i-use-cockroachdb-as-a-key-value-store) | -| New in v1.1: `ARRAY` | ✓ | Standard | [`ARRAY` documentation](array.html) | -| New in v1.1: `UUID` | ✓ | PostgreSQL Extension | [`UUID` documentation](uuid.html) | -| New in v2.0: JSON | ✓ | Common Extension | [`JSONB` documentation](jsonb.html) | -| New in v2.0: `TIME` | ✓ | Standard | [`TIME` documentation](time.html) | -| XML | ✗ | Standard | XML data can be stored as `BYTES`, but we do not offer XML parsing. | -| `UNSIGNED INT` | ✗ | Common Extension | `UNSIGNED INT` causes numerous casting issues, so we do not plan to support it. | -| `SET`, `ENUM` | ✗ | MySQL, PostgreSQL Extension | Only allow rows to contain values from a defined set of terms. 
| 
-| `INET` | ✓ | PostgreSQL Extension | [`INET` documentation](inet.html) |
-
-### Constraints
-
-| Component | Supported | Type | Details |
-|-----------|-----------|------|---------|
-| Not Null | ✓ | Standard | [Not Null documentation](not-null.html) |
-| Unique | ✓ | Standard | [Unique documentation](unique.html) |
-| Primary Key | ✓ | Standard | [Primary Key documentation](primary-key.html) |
-| Check | ✓ | Standard | [Check documentation](check.html) |
-| Foreign Key | ✓ | Standard | [Foreign Key documentation](foreign-key.html) |
-| Default Value | ✓ | Standard | [Default Value documentation](default-value.html) |
-
-### Transactions
-
-| Component | Supported | Type | Details |
-|-----------|-----------|------|---------|
-| Transactions (ACID semantics) | ✓ | Standard | [Transactions documentation](transactions.html) |
-| `BEGIN` | ✓ | Standard | [`BEGIN` documentation](begin-transaction.html) |
-| `COMMIT` | ✓ | Standard | [`COMMIT` documentation](commit-transaction.html) |
-| `ROLLBACK` | ✓ | Standard | [`ROLLBACK` documentation](rollback-transaction.html) |
-| `SAVEPOINT` | ✓ | CockroachDB Extension | While `SAVEPOINT` is part of the SQL standard, we only support [our extension of it](transactions.html#transaction-retries) |
-
-### Indexes
-
-| Component | Supported | Type | Details |
-|-----------|-----------|------|---------|
-| Indexes | ✓ | Common Extension | [Indexes documentation](indexes.html) |
-| Multi-column indexes | ✓ | Common Extension | We do not limit the number of columns an index can include |
-| Covering indexes | ✓ | Common Extension | [Storing Columns documentation](create-index.html#store-columns) |
-| New in v2.0: Inverted indexes | ✓ | Common Extension | [Inverted Indexes documentation](inverted-indexes.html) |
-| Multiple indexes per query | Planned | Common Extension | Use multiple indexes to filter the table's values for a single query |
-| Full-text indexes | Planned | Common Extension | [GitHub issue tracking full-text index support](https://github.com/cockroachdb/cockroach/issues/7821) |
-| Prefix/Expression Indexes | Potential | Common Extension | Apply expressions (such as `LOWER()`) to values before indexing them |
-| Geospatial indexes | Potential | Common Extension | Improves performance of queries calculating geospatial data |
-| Hash indexes | ✗ | Common Extension | Improves performance of queries looking for single, exact values |
-| Partial indexes | ✗ | Common Extension | Only index specific rows from indexed columns |
-
-### Schema Changes
-
-| Component | Supported | Type | Details |
-|-----------|-----------|------|---------|
-| `ALTER TABLE` | ✓ | Standard | [`ALTER TABLE` documentation](alter-table.html) |
-| Database renames | ✓ | Standard | [`RENAME DATABASE` documentation](rename-database.html) |
-| Table renames | ✓ | Standard | [`RENAME TABLE` documentation](rename-table.html) |
-| Column renames | ✓ | Standard | [`RENAME COLUMN` documentation](rename-column.html) |
-| Adding columns | ✓ | Standard | [`ADD COLUMN` documentation](add-column.html) |
-| Removing columns | ✓ | Standard | [`DROP COLUMN` documentation](drop-column.html) |
-| Adding constraints | ✓ | Standard | [`ADD CONSTRAINT` documentation](add-constraint.html) |
-| Removing constraints | ✓ | Standard | [`DROP CONSTRAINT` documentation](drop-constraint.html) |
-| Index renames | ✓ | Standard | [`RENAME INDEX` documentation](rename-index.html) |
-| Adding indexes | ✓ | Standard | [`CREATE INDEX` documentation](create-index.html) |
-| Removing indexes | ✓ | Standard | 
[`DROP INDEX` documentation](drop-index.html) |
-
-### Statements
-
-| Component | Supported | Type | Details |
-|-----------|-----------|------|---------|
-| Common statements | ✓ | Standard | [SQL Statements documentation](sql-statements.html) |
-| `UPSERT` | ✓ | PostgreSQL, MSSQL Extension | [`UPSERT` documentation](upsert.html) |
-| `EXPLAIN` | ✓ | Common Extension | [`EXPLAIN` documentation](explain.html) |
-| `SELECT INTO` | Alternative | Common Extension | You can replicate similar functionality using [`CREATE TABLE`](create-table.html) and then `INSERT INTO ... SELECT ...`. |
-
-### Clauses
-
-| Component | Supported | Type | Details |
-|-----------|-----------|------|---------|
-| Common clauses | ✓ | Standard | [SQL Grammar documentation](sql-grammar.html) |
-| `LIMIT` | ✓ | Common Extension | Limit the number of rows a statement returns. |
-| `LIMIT` with `OFFSET` | ✓ | Common Extension | Skip a number of rows, and then limit the size of the return set. |
-| `RETURNING` | ✓ | Common Extension | Retrieve a table of the rows affected by a statement. |
-
-### Table Expressions
-
-| Component | Supported | Type | Details |
-|-----------|-----------|------|---------|
-| Table and View references | ✓ | Standard | [Table expressions documentation](table-expressions.html#table-or-view-names) |
-| `AS` in table expressions | ✓ | Standard | [Aliased table expressions documentation](table-expressions.html#aliased-table-expressions) |
-| `JOIN` (`INNER`, `LEFT`, `RIGHT`, `FULL`, `CROSS`) | [Functional](https://www.cockroachlabs.com/blog/better-sql-joins-in-cockroachdb/) | Standard | [Join expressions documentation](table-expressions.html#join-expressions) |
-| Sub-queries as table expressions | Partial | Standard | Non-correlated subqueries are [supported](table-expressions.html#subqueries-as-table-expressions); correlated are not. 
| 
-| Table generator functions | Partial | PostgreSQL Extension | [Table generator functions documentation](table-expressions.html#table-generator-functions) |
-| `WITH ORDINALITY` | ✓ | CockroachDB Extension | [Ordinality annotation documentation](table-expressions.html#ordinality-annotation) |
-
-### Scalar Expressions and Boolean Formulas
-
-| Component | Supported | Type | Details |
-|-----------|-----------|------|---------|
-| Common functions | ✓ | Standard | [Function calls and SQL special forms documentation](scalar-expressions.html#function-calls-and-sql-special-forms) |
-| Common operators | ✓ | Standard | [Operators documentation](scalar-expressions.html#unary-and-binary-operations) |
-| `IF`/`CASE`/`NULLIF` | ✓ | Standard | [Conditional expressions documentation](scalar-expressions.html#conditional-expressions) |
-| `COALESCE`/`IFNULL` | ✓ | Standard | [Conditional expressions documentation](scalar-expressions.html#conditional-expressions) |
-| `AND`/`OR`/`NOT` | ✓ | Standard | [Logical operators documentation](scalar-expressions.html#logical-operators) |
-| `LIKE`/`ILIKE` | ✓ | Standard | [String pattern matching documentation](scalar-expressions.html#string-pattern-matching) |
-| `SIMILAR TO` | ✓ | Standard | [SQL regexp pattern matching documentation](scalar-expressions.html#string-matching-using-sql-regular-expressions) |
-| Matching using POSIX regular expressions | ✓ | Common Extension | [POSIX regexp pattern matching documentation](scalar-expressions.html#string-matching-using-posix-regular-expressions) |
-| `EXISTS` | Partial | Standard | Non-correlated subqueries are [supported](scalar-expressions.html#existence-test-on-the-result-of-subqueries); correlated are not. Currently works only with small data sets. |
-| Scalar subqueries | Partial | Standard | Non-correlated subqueries are [supported](scalar-expressions.html#scalar-subqueries); correlated are not. Currently works only with small data sets. 
| -| Bitwise arithmetic | ✓ | Common Extension | [Operators documentation](scalar-expressions.html#unary-and-binary-operations) | -| Array constructors and subscripting | Partial | PostgreSQL Extension | Array expression documentation: [Constructor syntax](scalar-expressions.html#array-constructors) and [Subscripting](scalar-expressions.html#subscripted-expressions) | -| `COLLATE`| ✓ | Standard | [Collation expressions documentation](scalar-expressions.html#collation-expressions) | -| Column ordinal references | ✓ | CockroachDB Extension | [Column references documentation](scalar-expressions.html#column-references) | -| Type annotations | ✓ | CockroachDB Extension | [Type annotations documentation](scalar-expressions.html#explicitly-typed-expressions) | - -### Permissions - -| Component | Supported | Type | Details | -|-----------|-----------|------|---------| -| Users | ✓ | Standard | [`GRANT` documentation](grant.html) | -| Privileges | ✓ | Standard | [Privileges documentation](privileges.html) | - -### Miscellaneous - -| Component | Supported | Type | Details | -|-----------|-----------|------|---------| -| Column families | ✓ | CockroachDB Extension | [Column Families documentation](column-families.html) | -| Interleaved tables | ✓ | CockroachDB Extension | [Interleaved Tables documentation](interleave-in-parent.html) | -| Parallel Statement Execution | ✓ | CockroachDB Extension | [Parallel Statement Execution documentation](parallel-statement-execution.html) | -| Information Schema | ✓ | Standard | [Information Schema documentation](information-schema.html) -| Views | ✓ | Standard | [Views documentation](views.html) | -| Window functions | ✓ | Standard | [Window Functions documentation](window-functions.html) | -| Common Table Expressions | Partial | Common Extension | [Common Table Expressions documentation](common-table-expressions.html) | -| Stored Procedures | Planned | Common Extension | Execute a procedure explicitly. | -| Cursors | ✗ | Standard | Traverse a table's rows. | -| Triggers | ✗ | Standard | Execute a set of commands whenever a specified event occurs. | -| New in v2.0: Sequences | ✓ | Common Extension | [`CREATE SEQUENCE` documentation](create-sequence.html) | diff --git a/src/current/v2.0/sql-grammar.md b/src/current/v2.0/sql-grammar.md deleted file mode 100644 index 3a897be476d..00000000000 --- a/src/current/v2.0/sql-grammar.md +++ /dev/null @@ -1,44 +0,0 @@ ---- -title: SQL Grammar -summary: The full SQL grammar for CockroachDB, generated automatically from the CockroachDB code. -toc: false -back_to_top: true ---- - - - -{{site.data.alerts.callout_success}} -This page describes the full CockroachDB SQL grammar. However, as a starting point, it's best to reference our SQL statements pages first, which provide detailed explanations and examples. -{{site.data.alerts.end}} - -{% comment %} -TODO: clean up the SQL diagrams not to link to these missing nonterminals. -{% endcomment %} - - - - - - - - - - - - - - -
      - {% include {{ page.version.version }}/sql/diagrams/stmt_block.html %} -
      diff --git a/src/current/v2.0/sql-name-resolution.md b/src/current/v2.0/sql-name-resolution.md deleted file mode 100644 index 030a46bcf98..00000000000 --- a/src/current/v2.0/sql-name-resolution.md +++ /dev/null @@ -1,257 +0,0 @@ ---- -title: Name Resolution -summary: Table and function names can exist in multiple places. Resolution decides which one to use. -toc: true ---- - -Changed in v2.0: A query can specify a table name without a database or schema name (e.g., `SELECT * FROM orders`). How does CockroachDB know which `orders` table is being considered and in which schema? - -This page details how CockroachDB performs **name resolution** to answer this question. - - -## Logical Schemas And Namespaces - -New in v2.0: A CockroachDB cluster can store multiple databases, and each database can store multiple tables/views/sequences. This **two-level structure for stored data** is commonly called the "logical schema" in relational database management systems. - -Meanwhile, CockroachDB aims to provide compatibility with PostgreSQL -client applications and thus supports PostgreSQL's semantics for SQL -queries. To achieve this, CockroachDB supports a **three-level -structure for names**. This is called the "naming hierarchy". - -In the naming hierarchy, the path to a stored object has three components: - -- database name (also called "catalog") -- schema name -- object name - -The schema name for all stored objects in any given database is always -`public`. There is only a single schema available for stored -objects because CockroachDB only supports a two-level storage -structure. - -In addition to `public`, CockroachDB also supports a fixed set of -virtual schemas, available in every database, that provide ancillary, non-stored -data to client applications. For example, -[`information_schema`](information-schema.html) is provided for -compatibility with the SQL standard. - -The list of all databases can be obtained with [`SHOW -DATABASES`](show-databases.html). The list of all schemas for a given -database can be obtained with [`SHOW SCHEMAS`](show-schemas.html). The -list of all objects for a given schema can be obtained with other -`SHOW` statements. - -## How Name Resolution Works - -Name resolution occurs separately to **look up existing objects** and to -**decide the full name of a new object**. - -The rules to look up an existing object are as follows: - -1. If the name already fully specifies the database and schema, use that information. -2. If the name has a single component prefix, try to find a schema with the prefix name in the [current database](#current-database). If that fails, try to find the object in the `public` schema of a database with the prefix name. -3. If the name has no prefix, use the [search path](#search-path) with the [current database](#current-database). - -Similarly, the rules to decide the full name of a new object are as follows: - -1. If the name already fully specifies the database and schema, use that. -2. If the name has a single component prefix, try to find a schema with that name. If no such schema exists, use the `public` schema in the database with the prefix name. -3. If the name has no prefix, use the [current schema](#current-schema) in the [current database](#current-database). - -## Parameters for Name Resolution - -### Current Database - -The current database is used when a name is unqualified or has only one component prefix. It is the current value of the `database` session variable. 
-
-- You can view the current value of the `database` session variable with [`SHOW
-database`](show-vars.html) and change it with [`SET database`](set-vars.html).
-
-- You can inspect the list of valid database names that can be specified in `database` with [`SHOW DATABASES`](show-databases.html).
-
-- For client apps that connect to CockroachDB using a URL of the form `postgres://...`, the initial value of the `database` session variable can be set using the path component of the URL. For example, `postgres://node/mydb` sets `database` to `mydb` when the connection is established.
-
-### Search Path
-
-The search path is used when a name is unqualified (has no prefix). It lists the schemas where objects are looked up. Its first element is also the [current schema](#current-schema) where new objects are created.
-
-- You can set the current search path with [`SET search_path`](set-vars.html) and inspect it with [`SHOW
-search_path`](show-vars.html).
-
-- You can inspect the list of valid schemas that can be listed in `search_path` with [`SHOW SCHEMAS`](show-schemas.html).
-
-- By default, the search path contains `public` and `pg_catalog`. For compatibility with PostgreSQL, `pg_catalog` is forced to be present in `search_path` at all times, even when not specified with
-`SET search_path`.
-
-### Current Schema
-
-The current schema is used as the target schema when creating a new object if the name is unqualified (has no prefix).
-
-- The current schema is always the first value of `search_path`, for compatibility with PostgreSQL.
-
-- You can inspect the current schema using the special built-in function/identifier `current_schema()`.
-
-## Examples
-
-The examples below use the following logical schema as a starting point:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE DATABASE mydb;
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE mydb.mytable(x INT);
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SET database = mydb;
-~~~
-
-### Lookup with Unqualified Names
-
-An unqualified name is a name with no prefix, that is, a simple identifier.
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM mytable;
-~~~
-
-This uses the search path over the current database. The search path
-is `public` by default, in the current database. The resolved name is
-`mydb.public.mytable`.
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SET database = system;
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM mytable;
-~~~
-
-~~~
-pq: relation "mytable" does not exist
-~~~
-
-This uses the search path over the current database, which is now
-`system`. No schema in the search path contains table `mytable`, so the
-lookup fails with an error.
-
-### Lookup with Fully Qualified Names
-
-A fully qualified name is a name with two prefix components, that is,
-three identifiers separated by periods.
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM mydb.public.mytable;
-~~~
-
-Both the database and schema components are specified. The lookup
-succeeds if and only if the object exists at that specific location.
-
-### Lookup with Partially Qualified Names
-
-A partially qualified name is a name with one prefix component, that is, two identifiers separated by a period. When a name is partially qualified, CockroachDB first tries to use the prefix as a schema name; if that fails, it uses it as a database name. 
- -For example: - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM public.mytable; -~~~ - -This looks up `mytable` in the `public` schema of the current -database. If the current database is `mydb`, the lookup succeeds. - -For compatibility with CockroachDB 1.x, and to ease development in -multi-database scenarios, CockroachDB also allows queries to specify -a database name in a partially qualified name. For example: - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM mydb.mytable; -~~~ - -In that case, CockroachDB will first attempt to find a schema called -`mydb` in the current database. When no such schema exists (which is -the case with the starting point in this section), it then tries to -find a database called `mydb` and uses the `public` schema in that. In -this example, this rule applies and the fully resolved name is -`mydb.public.mytable`. - -### Using the Search Path to Use Tables Across Schemas - -Suppose that a client frequently accesses a stored table as well as a virtual table in the [Information Schema](information-schema.html). Because `information_schema` is not in the search path by default, all queries that need to access it must mention it explicitly. - -For example: - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM mydb.information_schema.schemata; -- valid -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM information_schema.schemata; -- valid; uses mydb implicitly -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM schemata; -- invalid; information_schema not in search_path -~~~ - -For clients that use `information_schema` often, you can add it to the -search path to simplify queries. For example: - -{% include copy-clipboard.html %} -~~~ sql -> SET search_path = public, information_schema; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM schemata; -- now valid, uses search_path -~~~ - -## Databases with Special Names - -When resolving a partially qualified name with just one component -prefix, CockroachDB will look up a schema with the given prefix name -first, and only look up a database with that name if the schema lookup -fails. This matters in the (likely uncommon) case where you wish your -database to be called `information_schema`, `public`, `pg_catalog` -or `crdb_internal`. - -For example: - -~~~sql -> CREATE DATABASE public; -> SET database = mydb; -> CREATE TABLE public.mypublictable (x INT); -~~~ - -The [`CREATE TABLE`](create-table.html) statement in this example uses a partially -qualified name. Because the `public` prefix designates a valid schema -in the current database, the full name of `mypublictable` becomes -`mydb.public.mypublictable`. The table is created in database `mydb`. - -To create the table in database `public`, one would instead use a -fully qualified name, as follows: - -~~~sql -> CREATE DATABASE public; -> CREATE TABLE public.public.mypublictable (x INT); -~~~ - -## See Also - -- [`SET`](set-vars.html) -- [`SHOW`](show-vars.html) -- [`SHOW DATABASES`](show-databases.html) -- [`SHOW SCHEMAS`](show-schemas.html) -- [Information Schema](information-schema.html) diff --git a/src/current/v2.0/sql-statements.md b/src/current/v2.0/sql-statements.md deleted file mode 100644 index 66b787faa7b..00000000000 --- a/src/current/v2.0/sql-statements.md +++ /dev/null @@ -1,148 +0,0 @@ ---- -title: SQL Statements -summary: Overview of SQL statements supported by CockroachDB. -toc: true ---- - -CockroachDB supports the following SQL statements. 
Click a statement for more details. - -{{site.data.alerts.callout_success}}In the built-in SQL shell, use \h [statement] to get inline help about a specific statement.{{site.data.alerts.end}} - - -## Data Manipulation Statements - -Statement | Usage -----------|------------ -[`CREATE TABLE AS`](create-table-as.html) | Create a new table in a database using the results from a [selection query](selection-queries.html). -[`DELETE`](delete.html) | Delete specific rows from a table. -[`EXPLAIN`](explain.html) | View debugging and analysis details for a statement that operates over tabular data. -[`IMPORT`](import.html) | Import an entire table's data via CSV files. -[`INSERT`](insert.html) | Insert rows into a table. -[`SELECT`](select-clause.html) | Select specific rows and columns from a table and optionally compute derived values. -[`SHOW TRACE`](show-trace.html) | Execute a statement and then return a trace of its actions through all of CockroachDB's software layers. -[`TABLE`](selection-queries.html#table-clause) | Select all rows and columns from a table. -[`TRUNCATE`](truncate.html) | Delete all rows from specified tables. -[`UPDATE`](update.html) | Update rows in a table. -[`UPSERT`](upsert.html) | Insert rows that do not violate uniqueness constraints; update rows that do. -[`VALUES`](selection-queries.html#values-clause) | Return rows containing specific values. - -## Data Definition Statements - -Statement | Usage -----------|------------ -[`ADD COLUMN`](add-column.html) | Add columns to a table. -[`ADD CONSTRAINT`](add-constraint.html) | Add a constraint to a column. -[`ALTER COLUMN`](alter-column.html) | Change a column's [Default constraint](default-value.html) or drop the [Not Null constraint](not-null.html). -[`ALTER DATABASE`](alter-database.html) | Apply a schema change to a database. -[`ALTER INDEX`](alter-index.html) | Apply a schema change to an index. -[`ALTER SEQUENCE`](alter-sequence.html) | New in v2.0: Apply a schema change to a sequence. -[`ALTER TABLE`](alter-table.html) | Apply a schema change to a table. -[`ALTER USER`](alter-user.html) | New in v2.0: Add or change a user's password. -[`ALTER VIEW`](alter-view.html) | Rename a view. -[`CREATE DATABASE`](create-database.html) | Create a new database. -[`CREATE INDEX`](create-index.html) | Create an index for a table. -[`CREATE SEQUENCE`](create-sequence.html) | New in v2.0: Create a new sequence. -[`CREATE TABLE`](create-table.html) | Create a new table in a database. -[`CREATE TABLE AS`](create-table-as.html) | Create a new table in a database using the results from a [selection query](selection-queries.html). -[`CREATE VIEW`](create-view.html) | Create a new [view](views.html) in a database. -[`DROP COLUMN`](drop-column.html) | Remove columns from a table. -[`DROP CONSTRAINT`](drop-constraint.html) | Remove constraints from a column. -[`DROP DATABASE`](drop-database.html) | Remove a database and all its objects. -[`DROP INDEX`](drop-index.html) | Remove an index for a table. -[`DROP SEQUENCE`](drop-sequence.html) | New in v2.0: Remove a sequence. -[`DROP TABLE`](drop-table.html) | Remove a table. -[`DROP VIEW`](drop-view.html)| Remove a view. -[`EXPERIMENTAL_AUDIT`](experimental-audit.html) | Turn SQL audit logging on or off for a table. -[`RENAME COLUMN`](rename-column.html) | Rename a column in a table. -[`RENAME DATABASE`](rename-database.html) | Rename a database. -[`RENAME INDEX`](rename-index.html) | Rename an index for a table. -[`RENAME SEQUENCE`](rename-sequence.html) | Rename a sequence. 
-[`RENAME TABLE`](rename-table.html) | Rename a table or move a table between databases.
-[`SHOW COLUMNS`](show-columns.html) | View details about columns in a table.
-[`SHOW CONSTRAINTS`](show-constraints.html) | List constraints on a table.
-[`SHOW CREATE SEQUENCE`](show-create-sequence.html) | New in v2.0: View the `CREATE SEQUENCE` statement that would create a copy of the specified sequence.
-[`SHOW CREATE TABLE`](show-create-table.html) | View the `CREATE TABLE` statement that would create a copy of the specified table.
-[`SHOW CREATE VIEW`](show-create-view.html) | View the `CREATE VIEW` statement that would create a copy of the specified view.
-[`SHOW DATABASES`](show-databases.html) | List databases in the cluster.
-[`SHOW INDEX`](show-index.html) | View index information for a table.
-[`SHOW SCHEMAS`](show-schemas.html) | New in v2.0: List the schemas in a database.
-[`SHOW TABLES`](show-tables.html) | List tables or views in a database or virtual schema.
-[`SHOW EXPERIMENTAL_RANGES`](show-experimental-ranges.html) | Show range information about a specific table or index.
-[`SPLIT AT`](split-at.html) | Force a key-value layer range split at the specified row in the table or index.
-[`VALIDATE CONSTRAINT`](validate-constraint.html) | Check whether values in a column match a [constraint](constraints.html) on the column.
-
-## Transaction Management Statements
-
-Statement | Usage
-----------|------------
-[`BEGIN`](begin-transaction.html) | Initiate a [transaction](transactions.html).
-[`COMMIT`](commit-transaction.html) | Commit the current [transaction](transactions.html).
-[`RELEASE SAVEPOINT`](release-savepoint.html) | When using the CockroachDB-provided function for client-side [transaction retries](transactions.html#transaction-retries), commit the transaction's changes once there are no retryable errors.
-[`ROLLBACK`](rollback-transaction.html) | Discard all updates made by the current [transaction](transactions.html) or, when using the CockroachDB-provided function for client-side [transaction retries](transactions.html#transaction-retries), roll back to the `cockroach_restart` savepoint and retry the transaction.
-[`SAVEPOINT`](savepoint.html) | When using the CockroachDB-provided function for client-side [transaction retries](transactions.html#transaction-retries), start a retryable transaction.
-[`SET TRANSACTION`](set-transaction.html) | Set the isolation level or priority for the session or for an individual [transaction](transactions.html).
-[`SHOW`](show-vars.html) | View the current [transaction settings](transactions.html).
-
-## Access Management Statements
-
-Statement | Usage
-----------|------------
-[`CREATE ROLE`](create-role.html) | New in v2.0: Create SQL [roles](roles.html), which are groups containing any number of roles and users as members.
-[`CREATE USER`](create-user.html) | Create SQL users, which lets you control [privileges](privileges.html) on your databases and tables.
-[`DROP ROLE`](drop-role.html) | New in v2.0: Remove one or more SQL [roles](roles.html).
-[`DROP USER`](drop-user.html) | Remove one or more SQL users.
-[`GRANT <privileges>`](grant.html) | Grant privileges to [users](create-and-manage-users.html) or [roles](roles.html).
-[`GRANT <roles>`](grant-roles.html) | New in v2.0: Add a [role](roles.html) or [user](create-and-manage-users.html) as a member to a role.
-[`REVOKE <privileges>`](revoke.html) | Revoke privileges from [users](create-and-manage-users.html) or [roles](roles.html). 
-[`REVOKE <roles>`](revoke-roles.html) | New in v2.0: Revoke a [role](roles.html) or [user's](create-and-manage-users.html) membership in a role.
-[`SHOW GRANTS`](show-grants.html) | View privileges granted to users.
-[`SHOW ROLES`](show-roles.html) | List the roles for all databases.
-[`SHOW USERS`](show-users.html) | List the users for all databases.
-
-## Session Management Statements
-
-Statement | Usage
-----------|------------
-[`RESET`](reset-vars.html) | Reset a session variable to its default value.
-[`SET`](set-vars.html) | Set a current session variable.
-[`SET TRANSACTION`](set-transaction.html) | Set the isolation level or priority for an individual [transaction](transactions.html).
-[`SHOW`](show-vars.html) | List the current session or transaction settings.
-
-## Cluster Management Statements
-
-Statement | Usage
-----------|------------
-[`RESET CLUSTER SETTING`](reset-cluster-setting.html) | Reset a cluster setting to its default value.
-[`SET CLUSTER SETTING`](set-cluster-setting.html) | Set a cluster-wide setting.
-[`SHOW ALL CLUSTER SETTINGS`](show-cluster-setting.html) | List the current cluster-wide settings.
-[`SHOW SESSIONS`](show-sessions.html) | List details about currently active sessions.
-
-## Query Management Statements
-
-Statement | Usage
-----------|------------
-[`CANCEL QUERY`](cancel-query.html) | Cancel a running SQL query.
-[`SHOW QUERIES`](show-queries.html) | List details about current active SQL queries.
-
-## Job Management Statements
-
-Jobs in CockroachDB represent tasks that might not complete immediately, such as schema changes or enterprise backups or restores.
-
-Statement | Usage
-----------|------------
-[`CANCEL JOB`](cancel-job.html) | Cancel a `BACKUP`, `RESTORE`, or `IMPORT` job.
-[`PAUSE JOB`](pause-job.html) | Pause a `BACKUP`, `RESTORE`, or `IMPORT` job.
-[`RESUME JOB`](resume-job.html) | Resume paused `BACKUP`, `RESTORE`, or `IMPORT` jobs.
-[`SHOW JOBS`](show-jobs.html) | View information on jobs.
-
-## Backup & Restore Statements (Enterprise)
-
-The following statements are available only to [enterprise](https://www.cockroachlabs.com/product/cockroachdb/) users.
-
-{{site.data.alerts.callout_info}}For non-enterprise users, see Back up Data and Restore Data.{{site.data.alerts.end}}
-
-Statement | Usage
-----------|------------
-[`BACKUP`](backup.html) | Create disaster recovery backups of databases and tables.
-[`RESTORE`](restore.html) | Restore databases and tables using your backups.
-[`SHOW BACKUP`](show-backup.html) | List the contents of a backup.
diff --git a/src/current/v2.0/sql.md b/src/current/v2.0/sql.md
deleted file mode 100644
index 8ebc30575ce..00000000000
--- a/src/current/v2.0/sql.md
+++ /dev/null
@@ -1,15 +0,0 @@
----
-title: SQL
-summary: CockroachDB's external API is Standard SQL with extensions.
-toc: false
----
-
-At the lowest level, CockroachDB is a distributed, strongly consistent, transactional key-value store, but the external API is [Standard SQL with extensions](sql-feature-support.html). This provides developers with familiar relational concepts such as schemas, tables, columns, and indexes and the ability to structure, manipulate, and query data using well-established and time-proven tools and processes. Also, since CockroachDB supports the PostgreSQL wire protocol, it’s simple to get your application talking to CockroachDB; just find your [PostgreSQL language-specific driver](install-client-drivers.html) and start building. 
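-
-As a quick illustration, here is a minimal sketch of those relational concepts in action (the database, table, and index names below are illustrative only, not part of any tutorial):
-
-~~~ sql
-> CREATE DATABASE db1;
-
-> CREATE TABLE db1.users (id INT PRIMARY KEY, name STRING);
-
-> CREATE INDEX name_idx ON db1.users (name);
-
-> INSERT INTO db1.users VALUES (1, 'carl');
-
-> SELECT name FROM db1.users WHERE id = 1;
-~~~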
- -## See Also - -- [SQL Feature Support](sql-feature-support.html) -- [Learn CockroachDB SQL](learn-cockroachdb-sql.html) -- [Use the Built-In SQL Client](use-the-built-in-sql-client.html) -- [SQL in CockroachDB: Mapping Table Data to Key-Value Storage](https://www.cockroachlabs.com/blog/sql-in-cockroachdb-mapping-table-data-to-key-value-storage/) -- [Index Selection in CockroachDB](https://www.cockroachlabs.com/blog/index-selection-cockroachdb-2/) \ No newline at end of file diff --git a/src/current/v2.0/start-a-local-cluster-in-docker.md b/src/current/v2.0/start-a-local-cluster-in-docker.md deleted file mode 100644 index 967136441ae..00000000000 --- a/src/current/v2.0/start-a-local-cluster-in-docker.md +++ /dev/null @@ -1,268 +0,0 @@ ---- -title: Start a Cluster in Docker (Insecure) -summary: Run an insecure multi-node CockroachDB cluster across multiple Docker containers on a single host. -toc: false -allowed_hashes: [os-mac, os-linux, os-windows] ---- - - - -
      - - - -
      - -Once you've [installed the official CockroachDB Docker image](install-cockroachdb.html), it's simple to run an insecure multi-node cluster across multiple Docker containers on a single host, using Docker volumes to persist node data. - -{{site.data.alerts.callout_danger}}Running a stateful application like CockroachDB in Docker is more complex and error-prone than most uses of Docker and is not recommended for production deployments. To run a physically distributed cluster in containers, use an orchestration tool like Kubernetes or Docker Swarm. See Orchestration for more details.{{site.data.alerts.end}} - - - -
      -{% include {{ page.version.version }}/start-in-docker/mac-linux-steps.md %} - -## Step 5. Monitor the cluster - -When you started the first container/node, you mapped the node's default HTTP port `8080` to port `8080` on the host. To check out the Admin UI metrics for your cluster, point your browser to that port on `localhost`, i.e., `http://localhost:8080`, and click **Metrics** on the left-hand navigation bar. - -CockroachDB Admin UI - -As mentioned earlier, CockroachDB automatically replicates your data behind-the-scenes. To verify that data written in the previous step was replicated successfully, scroll down to the **Replicas per Store** graph and hover over the line: - -CockroachDB Admin UI - -The replica count on each node is identical, indicating that all data in the cluster was replicated 3 times (the default). - -{{site.data.alerts.callout_success}}For more insight into how CockroachDB automatically replicates and rebalances data, and tolerates and recovers from failures, see our replication, rebalancing, fault tolerance demos.{{site.data.alerts.end}} - -## Step 6. Stop the cluster - -Use the `docker stop` and `docker rm` commands to stop and remove the containers (and therefore the cluster): - -{% include copy-clipboard.html %} -~~~ shell -$ docker stop roach1 roach2 roach3 -~~~ - -{% include copy-clipboard.html %} -~~~ shell -$ docker rm roach1 roach2 roach3 -~~~ - -If you do not plan to restart the cluster, you may want to remove the nodes' data stores: - -{% include copy-clipboard.html %} -~~~ shell -$ rm -rf cockroach-data -~~~ -
      - -
      -{% include {{ page.version.version }}/start-in-docker/mac-linux-steps.md %} - -## Step 5. Monitor the cluster - -When you started the first container/node, you mapped the node's default HTTP port `8080` to port `8080` on the host. To check out the Admin UI metrics for your cluster, point your browser to that port on `localhost`, i.e., `http://localhost:8080` and click **Metrics** on the left. - -CockroachDB Admin UI - -As mentioned earlier, CockroachDB automatically replicates your data behind-the-scenes. To verify that data written in the previous step was replicated successfully, scroll down to the **Replicas per Store** graph and hover over the line: - -CockroachDB Admin UI - -The replica count on each node is identical, indicating that all data in the cluster was replicated 3 times (the default). - -{{site.data.alerts.callout_success}}For more insight into how CockroachDB automatically replicates and rebalances data, and tolerates and recovers from failures, see our replication, rebalancing, fault tolerance demos.{{site.data.alerts.end}} - -## Step 6. Stop the cluster - -Use the `docker stop` and `docker rm` commands to stop and remove the containers (and therefore the cluster): - -{% include copy-clipboard.html %} -~~~ shell -$ docker stop roach1 roach2 roach3 -~~~ - -{% include copy-clipboard.html %} -~~~ shell -$ docker rm roach1 roach2 roach3 -~~~ - -If you do not plan to restart the cluster, you may want to remove the nodes' data stores: - -{% include copy-clipboard.html %} -~~~ shell -$ rm -rf cockroach-data -~~~ -
      - -
      -## Before You Begin - -If you have not already installed the official CockroachDB Docker image, go to [Install CockroachDB](install-cockroachdb.html) and follow the instructions under **Use Docker**. - -## Step 1. Create a bridge network - -Since you'll be running multiple Docker containers on a single host, with one CockroachDB node per container, you need to create what Docker refers to as a [bridge network](https://docs.docker.com/engine/userguide/networking/#/a-bridge-network). The bridge network will enable the containers to communicate as a single cluster while keeping them isolated from external networks. - -
      PS C:\Users\username> docker network create -d bridge roachnet
      - -We've used `roachnet` as the network name here and in subsequent steps, but feel free to give your network any name you like. - -## Step 2. Start the first node - -{{site.data.alerts.callout_info}}Be sure to replace <username> in the -v flag with your actual username.{{site.data.alerts.end}} - -
      PS C:\Users\username> docker run -d `
      ---name=roach1 `
      ---hostname=roach1 `
      ---net=roachnet `
      --p 26257:26257 -p 8080:8080 `
      --v "//c/Users/<username>/cockroach-data/roach1:/cockroach/cockroach-data" `
      -{{page.release_info.docker_image}}:{{page.release_info.version}} start --insecure
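-
-# Optional: before continuing, you can verify that the roach1 container
-# is up (docker ps is a standard Docker client command; this check is a
-# suggested addition, not a required step):
-PS C:\Users\username> docker ps --filter name=roach1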
      - -This command creates a container and starts the first CockroachDB node inside it. Let's look at each part: - -- `docker run`: The Docker command to start a new container. -- `-d`: This flag runs the container in the background so you can continue the next steps in the same shell. -- `--name`: The name for the container. This is optional, but a custom name makes it significantly easier to reference the container in other commands, for example, when opening a Bash session in the container or stopping the container. -- `--hostname`: The hostname for the container. You will use this to join other containers/nodes to the cluster. -- `--net`: The bridge network for the container to join. See step 1 for more details. -- `-p 26257:26257 -p 8080:8080`: These flags map the default port for inter-node and client-node communication (`26257`) and the default port for HTTP requests to the Admin UI (`8080`) from the container to the host. This enables inter-container communication and makes it possible to call up the Admin UI from a browser. -- `-v "//c/Users//cockroach-data/roach1:/cockroach/cockroach-data"`: This flag mounts a host directory as a data volume. This means that data and logs for this node will be stored in `Users//cockroach-data/roach1` on the host and will persist after the container is stopped or deleted. For more details, see Docker's Bind Mounts topic. -- `{{page.release_info.docker_image}}:{{page.release_info.version}} start --insecure`: The CockroachDB command to [start a node](start-a-node.html) in the container in insecure mode. - -## Step 3. Add nodes to the cluster - -At this point, your cluster is live and operational. With just one node, you can already connect a SQL client and start building out your database. In real deployments, however, you'll always want 3 or more nodes to take advantage of CockroachDB's [automatic replication](demo-data-replication.html), [rebalancing](demo-automatic-rebalancing.html), and [fault tolerance](demo-fault-tolerance-and-recovery.html) capabilities. - -To simulate a real deployment, scale your cluster by adding two more nodes: - -{{site.data.alerts.callout_info}}Again, be sure to replace <username> in the -v flag with your actual username.{{site.data.alerts.end}} - -
      # Start the second container/node:
      -PS C:\Users\username> docker run -d `
      ---name=roach2 `
      ---hostname=roach2 `
      ---net=roachnet `
      --v "//c/Users/<username>/cockroach-data/roach2:/cockroach/cockroach-data" `
      -{{page.release_info.docker_image}}:{{page.release_info.version}} start --insecure --join=roach1
      -
      -# Start the third container/node:
      -PS C:\Users\username> docker run -d `
      ---name=roach3 `
      ---hostname=roach3 `
      ---net=roachnet `
      --v "//c/Users/<username>/cockroach-data/roach3:/cockroach/cockroach-data" `
      -{{page.release_info.docker_image}}:{{page.release_info.version}} start --insecure --join=roach1
      - -These commands add two more containers and start CockroachDB nodes inside them, joining them to the first node. There are only a few differences to note from step 2: - -- `-v`: This flag mounts a host directory as a data volume. Data and logs for these nodes will be stored in `Users//cockroach-data/roach2` and `Users//cockroach-data/roach3` on the host and will persist after the containers are stopped or deleted. -- `--join`: This flag joins the new nodes to the cluster, using the first container's `hostname`. Note that since each node is in a unique container, using identical default ports won’t cause conflicts. - -## Step 4. Test the cluster - -Now that you've scaled to 3 nodes, you can use any node as a SQL gateway to the cluster. To demonstrate this, use the `docker exec` command to start the [built-in SQL shell](use-the-built-in-sql-client.html) in the first container: - -
      PS C:\Users\username> docker exec -it roach1 ./cockroach sql --insecure
      -# Welcome to the cockroach SQL interface.
      -# All statements must be terminated by a semicolon.
      -# To exit: CTRL + D.
      - -Run some basic [CockroachDB SQL statements](learn-cockroachdb-sql.html): - -~~~ sql -> CREATE DATABASE bank; - -> CREATE TABLE bank.accounts (id INT PRIMARY KEY, balance DECIMAL); - -> INSERT INTO bank.accounts VALUES (1, 1000.50); - -> SELECT * FROM bank.accounts; -~~~ - -~~~ -+----+---------+ -| id | balance | -+----+---------+ -| 1 | 1000.5 | -+----+---------+ -(1 row) -~~~ - -Exit the SQL shell on node 1: - -~~~ sql -> \q -~~~ - -Then start the SQL shell in the second container: - -
      PS C:\Users\username> docker exec -it roach2 ./cockroach sql --insecure
      -# Welcome to the cockroach SQL interface.
      -# All statements must be terminated by a semicolon.
      -# To exit: CTRL + D.
      - -Now run the same `SELECT` query: - -~~~ sql -> SELECT * FROM bank.accounts; -~~~ - -~~~ -+----+---------+ -| id | balance | -+----+---------+ -| 1 | 1000.5 | -+----+---------+ -(1 row) -~~~ - -As you can see, node 1 and node 2 behaved identically as SQL gateways. - -When you're done, exit the SQL shell on node 2: - -~~~ sql -> \q -~~~ - -## Step 5. Monitor the cluster - -When you started the first container/node, you mapped the node's default HTTP port `8080` to port `8080` on the host. To check out the [Admin UI](admin-ui-overview.html) metrics for your cluster, point your browser to that port on `localhost`, i.e., `http://localhost:8080` and click **Metrics** on the left. - -CockroachDB Admin UI - -As mentioned earlier, CockroachDB automatically replicates your data behind-the-scenes. To verify that data written in the previous step was replicated successfully, scroll down to the **Replicas per Store** graph and hover over the line: - -CockroachDB Admin UI - -The replica count on each node is identical, indicating that all data in the cluster was replicated 3 times (the default). - -{{site.data.alerts.callout_success}}For more insight into how CockroachDB automatically replicates and rebalances data, and tolerates and recovers from failures, see our replication, rebalancing, fault tolerance demos.{{site.data.alerts.end}} - -## Step 6. Stop the cluster - -Use the `docker stop` and `docker rm` commands to stop and remove the containers (and therefore the cluster): - -
      # Stop the containers:
      -PS C:\Users\username> docker stop roach1 roach2 roach3
      -
      -# Remove the containers:
      -PS C:\Users\username> docker rm roach1 roach2 roach3
      - -If you do not plan to restart the cluster, you may want to remove the nodes' data stores: - -
      PS C:\Users\username> Remove-Item cockroach-data -recurse
      - -
      - 
-## What's Next?
-
-- Learn more about [CockroachDB SQL](learn-cockroachdb-sql.html) and the [built-in SQL client](use-the-built-in-sql-client.html)
-- [Install the client driver](install-client-drivers.html) for your preferred language
-- [Build an app with CockroachDB](build-an-app-with-cockroachdb.html)
-- [Explore core CockroachDB features](demo-data-replication.html) like automatic replication, rebalancing, and fault tolerance
diff --git a/src/current/v2.0/start-a-local-cluster.md b/src/current/v2.0/start-a-local-cluster.md
deleted file mode 100644
index 3463a64130d..00000000000
--- a/src/current/v2.0/start-a-local-cluster.md
+++ /dev/null
@@ -1,270 +0,0 @@
----
-title: Start a Local Cluster (Insecure)
-summary: Run an insecure multi-node CockroachDB cluster locally with each node listening on a different port.
-toc: true
-toc_not_nested: true
----
-
-
-
-Once you’ve [installed CockroachDB](install-cockroachdb.html), it’s simple to start an insecure multi-node cluster locally.
-
-{{site.data.alerts.callout_info}}Running multiple nodes on a single host is useful for testing out CockroachDB, but it's not recommended for production deployments. To run a physically distributed cluster in production, see Manual Deployment or Orchestrated Deployment.{{site.data.alerts.end}}
-
-
-## Before You Begin
-
-Make sure you have already [installed CockroachDB](install-cockroachdb.html).
-
-## Step 1. Start the first node
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach start --insecure \
---host=localhost
-~~~
-
-~~~
-CockroachDB node starting at {{ now | date: "%Y-%m-%d %H:%M:%S.%6 +0000 UTC" }}
-build: CCL {{page.release_info.version}} @ {{page.release_info.build_time}}
-admin: http://localhost:8080
-sql: postgresql://root@localhost:26257?sslmode=disable
-logs: cockroach-data/logs
-store[0]: path=cockroach-data
-status: initialized new cluster
-clusterID: {dab8130a-d20b-4753-85ba-14d8956a294c}
-nodeID: 1
-~~~
-
-This command starts a node in insecure mode, accepting most [`cockroach start`](start-a-node.html) defaults.
-
-- The `--insecure` flag makes communication unencrypted.
-- Since this is a purely local cluster, `--host=localhost` tells the node to listen only on `localhost`, with default ports used for internal and client traffic (`26257`) and for HTTP requests from the Admin UI (`8080`).
-- Node data is stored in the `cockroach-data` directory.
-- The [standard output](start-a-node.html#standard-output) gives you helpful details such as the CockroachDB version, the URL for the admin UI, and the SQL URL for clients.
-
-## Step 2. Add nodes to the cluster
-
-At this point, your cluster is live and operational. With just one node, you can already connect a SQL client and start building out your database. In real deployments, however, you'll always want 3 or more nodes to take advantage of CockroachDB's [automatic replication](demo-data-replication.html), [rebalancing](demo-automatic-rebalancing.html), and [fault tolerance](demo-fault-tolerance-and-recovery.html) capabilities. This step helps you simulate a real deployment locally. 
- -In a new terminal, add the second node: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach start \ ---insecure \ ---store=node2 \ ---host=localhost \ ---port=26258 \ ---http-port=8081 \ ---join=localhost:26257 -~~~ - -In a new terminal, add the third node: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach start \ ---insecure \ ---store=node3 \ ---host=localhost \ ---port=26259 \ ---http-port=8082 \ ---join=localhost:26257 -~~~ - -The main difference in these commands is that you use the `--join` flag to connect the new nodes to the cluster, specifying the address and port of the first node, in this case `localhost:26257`. Since you're running all nodes on the same machine, you also set the `--store`, `--port`, and `--http-port` flags to locations and ports not used by other nodes, but in a real deployment, with each node on a different machine, the defaults would suffice. - -## Step 3. Test the cluster - -Now that you've scaled to 3 nodes, you can use any node as a SQL gateway to the cluster. To demonstrate this, open a new terminal and connect the [built-in SQL client](use-the-built-in-sql-client.html) to node 1: - -{{site.data.alerts.callout_info}}The SQL client is built into the cockroach binary, so nothing extra is needed.{{site.data.alerts.end}} - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure -~~~ - -Run some basic [CockroachDB SQL statements](learn-cockroachdb-sql.html): - -{% include copy-clipboard.html %} -~~~ sql -> CREATE DATABASE bank; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE bank.accounts (id INT PRIMARY KEY, balance DECIMAL); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO bank.accounts VALUES (1, 1000.50); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM bank.accounts; -~~~ - -~~~ -+----+---------+ -| id | balance | -+----+---------+ -| 1 | 1000.5 | -+----+---------+ -(1 row) -~~~ - -Exit the SQL shell on node 1: - -{% include copy-clipboard.html %} -~~~ sql -> \q -~~~ - -Then connect the SQL shell to node 2, this time specifying the node's non-default port: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure --port=26258 -~~~ - -{{site.data.alerts.callout_info}}In a real deployment, all nodes would likely use the default port 26257, and so you wouldn't need to set the --port flag.{{site.data.alerts.end}} - -Now run the same `SELECT` query: - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM bank.accounts; -~~~ - -~~~ -+----+---------+ -| id | balance | -+----+---------+ -| 1 | 1000.5 | -+----+---------+ -(1 row) -~~~ - -As you can see, node 1 and node 2 behaved identically as SQL gateways. - -Exit the SQL shell on node 2: - -{% include copy-clipboard.html %} -~~~ sql -> \q -~~~ - -## Step 4. Monitor the cluster - -Access the [Admin UI](admin-ui-overview.html) for your cluster by pointing a browser to `http://localhost:8080`, or to the address in the `admin` field in the standard output of any node on startup. Then click **Metrics** on the left-hand navigation bar. - -CockroachDB Admin UI - -As mentioned earlier, CockroachDB automatically replicates your data behind-the-scenes. To verify that data written in the previous step was replicated successfully, scroll down to the **Replicas per Node** graph and hover over the line: - -CockroachDB Admin UI - -The replica count on each node is identical, indicating that all data in the cluster was replicated 3 times (the default). 
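-
-If you prefer the SQL shell to the Admin UI, a rough cross-check is also possible in SQL. This is a minimal sketch, assuming the `bank.accounts` table created in Step 3: [`SHOW EXPERIMENTAL_RANGES`](show-experimental-ranges.html) lists the table's ranges along with the nodes holding each range's replicas.
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SHOW EXPERIMENTAL_RANGES FROM TABLE bank.accounts;
-~~~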
- -{{site.data.alerts.callout_info}}Capacity metrics can be incorrect when running multiple nodes on a single machine. For more details, see this limitation. {{site.data.alerts.end}} - -{{site.data.alerts.callout_success}}For more insight into how CockroachDB automatically replicates and rebalances data, and tolerates and recovers from failures, see our replication, rebalancing, fault tolerance demos.{{site.data.alerts.end}} - -## Step 5. Stop the cluster - -Once you're done with your test cluster, switch to the terminal running the first node and press **CTRL-C** to stop the node. - -At this point, with 2 nodes still online, the cluster remains operational because a majority of replicas are available. To verify that the cluster has tolerated this "failure", connect the built-in SQL shell to nodes 2 or 3. You can do this in the same terminal or in a new terminal. - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure --port=26258 -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM bank.accounts; -~~~ - -~~~ -+----+---------+ -| id | balance | -+----+---------+ -| 1 | 1000.5 | -+----+---------+ -(1 row) -~~~ - -Exit the SQL shell: - -{% include copy-clipboard.html %} -~~~ sql -> \q -~~~ - -Now stop nodes 2 and 3 by switching to their terminals and pressing **CTRL-C**. - -{{site.data.alerts.callout_success}}For node 3, the shutdown process will take longer (about a minute) and will eventually force stop the node. This is because, with only 1 of 3 nodes left, a majority of replicas are not available, and so the cluster is no longer operational. To speed up the process, press CTRL-C a second time.{{site.data.alerts.end}} - -If you do not plan to restart the cluster, you may want to remove the nodes' data stores: - -{% include copy-clipboard.html %} -~~~ shell -$ rm -rf cockroach-data node2 node3 -~~~ - -## Step 6. Restart the cluster - -If you decide to use the cluster for further testing, you'll need to restart at least 2 of your 3 nodes from the directories containing the nodes' data stores. - -Restart the first node from the parent directory of `cockroach-data/`: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach start \ ---insecure \ ---host=localhost -~~~ - -{{site.data.alerts.callout_info}}With only 1 node back online, the cluster will not yet be operational, so you will not see a response to the above command until after you restart the second node. -{{site.data.alerts.end}} - -In a new terminal, restart the second node from the parent directory of `node2/`: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach start \ ---insecure \ ---store=node2 \ ---host=localhost \ ---port=26258 \ ---http-port=8081 \ ---join=localhost:26257 -~~~ - -In a new terminal, restart the third node from the parent directory of `node3/`: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach start \ ---insecure \ ---store=node3 \ ---host=localhost \ ---port=26259 \ ---http-port=8082 \ ---join=localhost:26257 -~~~ - -## What's Next? - -- Learn more about [CockroachDB SQL](learn-cockroachdb-sql.html) and the [built-in SQL client](use-the-built-in-sql-client.html) -- [Install the client driver](install-client-drivers.html) for your preferred language -- [Build an app with CockroachDB](build-an-app-with-cockroachdb.html) -- [Explore core CockroachDB features](demo-data-replication.html) like automatic replication, rebalancing, fault tolerance, and cloud migration. 
diff --git a/src/current/v2.0/start-a-node.md b/src/current/v2.0/start-a-node.md
deleted file mode 100644
index 9f112266a21..00000000000
--- a/src/current/v2.0/start-a-node.md
+++ /dev/null
@@ -1,307 +0,0 @@
----
-title: Start a Node
-summary: To start a new CockroachDB cluster, or add a node to an existing cluster, run the cockroach start command.
-toc: true
----
-
-This page explains the `cockroach start` [command](cockroach-commands.html), which you use to start nodes as a new cluster or add nodes to an existing cluster. For a full walk-through of the cluster startup and initialization process, see one of the [Manual Deployment](manual-deployment.html) tutorials.
-
-{{site.data.alerts.callout_info}}Node-level settings are defined by flags passed to the cockroach start command and cannot be changed without stopping and restarting the node. In contrast, some cluster-wide settings are defined via SQL statements and can be updated anytime after a cluster has been started. For more details, see Cluster Settings.{{site.data.alerts.end}}
-
-
-## Synopsis
-
-~~~ shell
-# Start a single-node cluster:
-$ cockroach start <flags>
-
-# Start a multi-node cluster:
-$ cockroach start <flags> &
-$ cockroach init <flags>
-
-# Add a node to a cluster:
-$ cockroach start <flags>
-
-# View help:
-$ cockroach start --help
-~~~
-
-## Flags
-
-The `start` command supports the following [general-use](#general) and
-[logging](#logging) flags. Changed in v2.0: All flags must be specified each time the
-node is started, as they will not be remembered, with the exception of
-the `--join` flag. Nevertheless, we recommend specifying
-_all_ flags every time, including the `--join` flag, as that will
-allow restarted nodes to join the cluster even if their data directory
-was destroyed.
-
-{{site.data.alerts.callout_success}}When adding a node to an existing cluster, include the --join flag.{{site.data.alerts.end}}
-
-### General
-
-Flag | Description
------|-----------
-`--advertise-host` | The IP address or hostname to tell other nodes to use. If it is a hostname, it must be resolvable from all nodes; if it is an IP address, it must be routable from all nodes.

This flag's effect depends on how it is used in combination with `--host`. For more details, see [Networking](recommended-production-settings.html#networking).
-`--attrs` | Arbitrary strings, separated by colons, specifying node capability, which might include specialized hardware or number of cores, for example:

      `--attrs=ram:64gb`

      These can be used to influence the location of data replicas. See [Configure Replication Zones](configure-replication-zones.html#replication-constraints) for full details. -`--background` | Set this to start the node in the background. This is better than appending `&` to the command because control is returned to the shell only once the node is ready to accept requests.

      **Note:** `--background` is suitable for writing automated test suites or maintenance procedures that need a temporary server process running in the background. It is not intended to be used to start a long-running server, because it does not fully detach from the controlling terminal. Consider using a service manager or a tool like [daemon(8)](https://www.freebsd.org/cgi/man.cgi?query=daemon&sektion=8) instead. -`--cache` | The total size for caches, shared evenly if there are multiple storage devices. This can be a percentage (notated as a decimal or with `%`) or any bytes-based unit, for example:

      `--cache=.25`
      `--cache=25%`
      `--cache=1000000000 ----> 1000000000 bytes`
      `--cache=1GB ----> 1000000000 bytes`
      `--cache=1GiB ----> 1073741824 bytes`

      Note: If you use the `%` notation, you might need to escape the `%` sign, for instance, while configuring CockroachDB through `systemd` service files. For this reason, it's recommended to use the decimal notation instead.

      Changed in v1.1: **Default:** `128MiB`

      The default cache size is reasonable for local development clusters. For production deployments, this should be increased to 25% or higher. See [Recommended Production Settings](recommended-production-settings.html#cache-and-sql-memory-size) for more details. -`--certs-dir` | The path to the [certificate directory](create-security-certificates.html). The directory must contain valid certificates if running in secure mode.

      **Default:** `${HOME}/.cockroach-certs/` -`--external-io-dir` | New in v2.0: The path of the external IO directory with which the local file access paths are prefixed while performing backup and restore operations using local node directories or NFS drives. If set to `disabled`, backups and restores using local node directories and NFS drives are disabled.

      **Default:** `extern` subdirectory of the first configured [`store`](#store).

      To set the `--external-io-dir` flag to the locations you want to use without needing to restart nodes, create symlinks to the desired locations from within the `extern` directory. -`--host` | The IP address or hostname to listen on for connections from other nodes and clients. The node will also advertise itself to other nodes using this value if `--advertise-host` is not specified.

      This flag's effect depends on how it is used in combination with `--advertise-host`. For more details, see [Networking](recommended-production-settings.html#networking).

      **Default:** Listen on all IP addresses and advertise the node's canonical hostname to other nodes -`--http-host` | The hostname or IP address to listen on for Admin UI HTTP requests.

      **Default:** same as `--host` -`--http-port` | The port to bind to for Admin UI HTTP requests.

      **Default:** `8080` -`--insecure` | Run in insecure mode. If this flag is not set, the `--certs-dir` flag must point to valid certificates.

      Note the following risks: An insecure cluster is open to any client that can access any node's IP addresses; any user, even `root`, can log in without providing a password; any user, connecting as `root`, can read or write any data in your cluster; and there is no network encryption or authentication, and thus no confidentiality.

      **Default:** `false` -`--join`
      `-j` | The addresses for connecting the node to a cluster.

      Changed in v1.1: When starting a multi-node cluster for the first time, set this flag to the addresses of 3-5 of the initial nodes. Then run the [`cockroach init`](initialize-a-cluster.html) command against any of the nodes to complete cluster startup. See the [example](#start-a-multi-node-cluster) below for more details.

When starting a single-node cluster, leave this flag out. This will cause the node to initialize a new single-node cluster without needing to run the `cockroach init` command. See the [example](#start-a-single-node-cluster) below for more details.

      When adding a node to an existing cluster, set this flag to 3-5 of the nodes already in the cluster; it's easiest to use the same list of addresses that was used to start the initial nodes. -`--listening-url-file` | The file to which the node's SQL connection URL will be written on successful startup, in addition to being printed to the [standard output](#standard-output).

      This is particularly helpful in identifying the node's port when an unused port is assigned automatically (`--port=0`). -`--locality` | Arbitrary key-value pairs that describe the location of the node. Locality might include country, region, datacenter, rack, etc. For more details, see [Locality](#locality) below. -`--max-disk-temp-storage` | New in v1.1: The maximum on-disk storage capacity available to store temporary data for SQL queries that exceed the memory budget (see `--max-sql-memory`). This ensures that JOINs, sorts, and other memory-intensive SQL operations are able to spill intermediate results to disk. This can be a percentage (notated as a decimal or with `%`) or any bytes-based unit (e.g., `.25`, `25%`, `500GB`, `1TB`, `1TiB`).

      Note: If you use the `%` notation, you might need to escape the `%` sign, for instance, while configuring CockroachDB through `systemd` service files. For this reason, it's recommended to use the decimal notation instead. Also, if expressed as a percentage, this value is interpreted relative to the size of the first store. However, the temporary space usage is never counted towards any store usage; therefore, when setting this value, it's important to ensure that the size of this temporary storage plus the size of the first store doesn't exceed the capacity of the storage device.

      New in v2.0: The temporary files are located in the path specified by the `--temp-dir` flag, or in the subdirectory of the first store (see `--store`) by default.

      **Default:** `32GiB` -`--max-offset` | The maximum allowed clock offset for the cluster. If observed clock offsets exceed this limit, servers will crash to minimize the likelihood of reading inconsistent data. Increasing this value will increase the time to recovery of failures as well as the frequency of uncertainty-based read restarts.

      Note that this value must be the same on all nodes in the cluster and cannot be changed with a [rolling upgrade](upgrade-cockroach-version.html). In order to change it, first stop every node in the cluster. Then once the entire cluster is offline, restart each node with the new value.

      **Default:** `500ms` -`--max-sql-memory` | The maximum in-memory storage capacity available to store temporary data for SQL queries, including prepared queries and intermediate data rows during query execution. This can be a percentage (notated as a decimal or with `%`) or any bytes-based unit, for example:

      `--max-sql-memory=.25`
      `--max-sql-memory=25%`
`--max-sql-memory=10000000000 ----> 10000000000 bytes`
      `--max-sql-memory=1GB ----> 1000000000 bytes`
      `--max-sql-memory=1GiB ----> 1073741824 bytes`

      New in v2.0: The temporary files are located in the path specified by the `--temp-dir` flag, or in the subdirectory of the first store (see `--store`) by default.

      Note: If you use the `%` notation, you might need to escape the `%` sign, for instance, while configuring CockroachDB through `systemd` service files. For this reason, it's recommended to use the decimal notation instead.

      Changed in v1.1: **Default:** `128MiB`

      The default SQL memory size is reasonable for local development clusters. For production deployments, this should be increased to 25% or higher. See [Recommended Production Settings](recommended-production-settings.html#cache-and-sql-memory-size) for more details. -`--pid-file` | The file to which the node's process ID will be written on successful startup. When this flag is not set, the process ID is not written to file. -`--port`
      `-p` | The port to bind to for internal and client communication.

      To have an unused port assigned automatically, pass `--port=0`.

      **Env Variable:** `COCKROACH_PORT`
      **Default:** `26257` -`--store`
      `-s` | The file path to a storage device and, optionally, store attributes and maximum size. When using multiple storage devices for a node, this flag must be specified separately for each device, for example:

      `--store=/mnt/ssd01 --store=/mnt/ssd02`

      For more details, see [Store](#store) below. -`--temp-dir` | New in v2.0: The path of the node's temporary store directory. On node start up, the location for the temporary files is printed to the standard output.

      **Default:** Subdirectory of the first [store](#store) - -### Locality - -The `--locality` flag accepts arbitrary key-value pairs that describe the location of the node. Locality might include country, region, datacenter, rack, etc. The key-value pairs should be ordered from most to least inclusive, and the keys and order of key-value pairs must be the same on all nodes. It's typically better to include more pairs than fewer. - -- CockroachDB spreads the replicas of each piece of data across as diverse a set of localities as possible, with the order determining the priority. However, locality can also be used to influence the location of data replicas in various ways using [replication zones](configure-replication-zones.html#replication-constraints). - -- When there is high latency between nodes (e.g., cross-datacenter deployments), CockroachDB uses locality to move range leases closer to the current workload, reducing network round trips and improving read performance, also known as ["follow-the-workload"](demo-follow-the-workload.html). In a deployment across more than 3 datacenters, however, to ensure that all data benefits from "follow-the-workload", you must increase your replication factor to match the total number of datacenters. - -- Locality is also a prerequisite for using the [table partitioning](partitioning.html) and [**Node Map**](enable-node-map.html) enterprise features. - -#### Example - -~~~ shell -# Locality flag for nodes in US East datacenter: ---locality=region=us,datacenter=us-east - -# Locality flag for nodes in US Central datacenter: ---locality=region=us,datacenter=us-central - -# Locality flag for nodes in US West datacenter: ---locality=region=us,datacenter=us-west -~~~ - -### Store - -The `--store` flag supports the following fields. Note that commas are used to separate fields, and so are forbidden in all field values. - -{{site.data.alerts.callout_info}}In-memory storage is not suitable for production deployments at this time.{{site.data.alerts.end}} - -Field | Description -------|------------ -`type` | For in-memory storage, set this field to `mem`; otherwise, leave this field out. The `path` field must not be set when `type=mem`. -`path` | The file path to the storage device. When not setting `attr` or `size`, the `path` field label can be left out:

      `--store=/mnt/ssd01`

      When either of those fields are set, however, the `path` field label must be used:

      `--store=path=/mnt/ssd01,size=20GB`

      **Default:** `cockroach-data` -`attrs` | Arbitrary strings, separated by colons, specifying disk type or capability. These can be used to influence the location of data replicas. See [Configure Replication Zones](configure-replication-zones.html#replication-constraints) for full details.

In most cases, node-level `--locality` or `--attrs` are preferable to store-level attributes, but this field can be used to match capabilities for storage of individual databases or tables. For example, an OLTP database would probably want to allocate space for its tables only on solid-state devices, whereas append-only time series might prefer cheaper spinning drives. Typical attributes include whether the store is flash (`ssd`) or spinning disk (`hdd`), as well as speeds and other specs, for example:

      `--store=path=/mnt/hda1,attrs=hdd:7200rpm` -`size` | The maximum size allocated to the node. When this size is reached, CockroachDB attempts to rebalance data to other nodes with available capacity. When there's no capacity elsewhere, this limit will be exceeded. Also, data may be written to the node faster than the cluster can rebalance it away; in this case, as long as capacity is available elsewhere, CockroachDB will gradually rebalance data down to the store limit.

      The `size` can be specified either in a bytes-based unit or as a percentage of hard drive space (notated as a decimal or with `%`), for example:

      `--store=path=/mnt/ssd01,size=10000000000 ----> 10000000000 bytes`
      `--store=path=/mnt/ssd01,size=20GB ----> 20000000000 bytes`
      `--store=path=/mnt/ssd01,size=20GiB ----> 21474836480 bytes`
      `--store=path=/mnt/ssd01,size=0.02TiB ----> 21474836480 bytes`
      `--store=path=/mnt/ssd01,size=20% ----> 20% of available space`
      `--store=path=/mnt/ssd01,size=0.2 ----> 20% of available space`
      `--store=path=/mnt/ssd01,size=.2 ----> 20% of available space`

      **Default:** 100%

      For an in-memory store, the `size` field is required and must be set to the true maximum bytes or percentage of available memory, for example:

      `--store=type=mem,size=20GB`
      `--store=type=mem,size=90%`

      Note: If you use the `%` notation, you might need to escape the `%` sign, for instance, while configuring CockroachDB through `systemd` service files. For this reason, it's recommended to use the decimal notation instead. - -### Logging - -By default, `cockroach start` writes all messages to log files, and prints nothing to `stderr`. However, you can control the process's [logging](debug-and-error-logs.html) behavior with the following flags: - -{% include {{ page.version.version }}/misc/logging-flags.md %} - -#### Defaults - -`cockroach start` uses the equivalent values for these logging flags by default: - -- `--log-dir=/logs` -- `--logtostderr=NONE` - -This means, by default, CockroachDB writes all messages to log files, and never prints to `stderr`. - -## Standard Output - -When you run `cockroach start`, some helpful details are printed to the standard output: - -~~~ shell -CockroachDB node starting at {{ now | date: "%Y-%m-%d %H:%M:%S.%6 +0000 UTC" }} -build: CCL {{page.release_info.version}} @ {{page.release_info.build_time}} -admin: http://ROACHs-MBP:8080 -sql: postgresql://root@ROACHs-MBP:26257?sslmode=disable -logs: node1/logs -temp dir: /node1/cockroach-temp430873933 -external I/O path: /node1/extern -attrs: ram:64gb -locality: datacenter=us-east1 -store[0]: path=node1,attrs=ssd -status: initialized new cluster -clusterID: 7b9329d0-580d-4035-8319-53ba8b74b213 -nodeID: 1 -~~~ - -{{site.data.alerts.callout_success}}These details are also written to the INFO log in the /logs directory in case you need to refer to them at a later time.{{site.data.alerts.end}} - -Field | Description -------|------------ -`build` | The version of CockroachDB you are running. -`admin` | The URL for accessing the Admin UI. -`sql` | The connection URL for your client. -`logs` | The directory containing debug log data. -`temp dir` | The temporary store directory of the node. -`external I/O path` | The external IO directory with which the local file access paths are prefixed while performing [backup](backup.html) and [restore](restore.html) operations using local node directories or NFS drives. -`attrs` | If node-level attributes were specified in the `--attrs` flag, they are listed in this field. These details are potentially useful for [configuring replication zones](configure-replication-zones.html). -`locality` | If values describing the locality of the node were specified in the `--locality` field, they are listed in this field. These details are potentially useful for [configuring replication zones](configure-replication-zones.html). -`store[n]` | The directory containing store data, where `[n]` is the index of the store, e.g., `store[0]` for the first store, `store[1]` for the second store.

      If store-level attributes were specified in the `attrs` field of the [`--store`](#store) flag, they are listed in this field as well. These details are potentially useful for [configuring replication zones](configure-replication-zones.html). -`status` | Whether the node is the first in the cluster (`initialized new cluster`), joined an existing cluster for the first time (`initialized new node, joined pre-existing cluster`), or rejoined an existing cluster (`restarted pre-existing node`). -`clusterID` | The ID of the cluster.

      When trying to join a node to an existing cluster, if this ID is different than the ID of the existing cluster, the node has started a new cluster. This may be due to conflicting information in the node's data directory. For additional guidance, see the [troubleshooting](common-errors.html#node-belongs-to-cluster-cluster-id-but-is-attempting-to-connect-to-a-gossip-network-for-cluster-another-cluster-id) docs. -`nodeID` | The ID of the node. - -## Examples - -### Start a single-node cluster - -
      - - -
      - -To start a single-node cluster, run the `cockroach start` command without the `--join` flag: - -
      -~~~ -$ cockroach start \ ---certs-dir=certs \ ---host= \ ---cache=.25 \ ---max-sql-memory=.25 -~~~ -
      - -
      -~~~ -$ cockroach start \ ---insecure \ ---host= \ ---cache=.25 \ ---max-sql-memory=.25 -~~~ -
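-
-Whichever mode you chose, one quick way to confirm that the node is accepting SQL connections is the built-in client. A minimal sketch, assuming insecure mode and the default port (for a secure node, replace `--insecure` with `--certs-dir=certs`):
-
-~~~ shell
-# Sketch: confirm the node is accepting SQL connections.
-$ cockroach sql --insecure --execute="SHOW DATABASES;"
-~~~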
      - -### Start a multi-node cluster - -
      - - -
-
-To start a multi-node cluster, run the `cockroach start` command for each node, setting the `--join` flag to the addresses of 3-5 of the initial nodes:
-
      -~~~ -$ cockroach start \ ---certs-dir=certs \ ---host= \ ---join=:26257,:26257,:26257 \ ---cache=.25 \ ---max-sql-memory=.25 -~~~ - -~~~ -$ cockroach start \ ---certs-dir=certs \ ---host= \ ---join=:26257,:26257,:26257 \ ---cache=.25 \ ---max-sql-memory=.25 -~~~ - -~~~ -$ cockroach start \ ---certs-dir=certs \ ---host= \ ---join=:26257,:26257,:26257 \ ---cache=.25 \ ---max-sql-memory=.25 -~~~ -
      - -
      -~~~ -$ cockroach start \ ---insecure \ ---host= \ ---join=:26257,:26257,:26257 \ ---cache=.25 \ ---max-sql-memory=.25 -~~~ - -~~~ -$ cockroach start \ ---insecure \ ---host= \ ---join=:26257,:26257,:26257 \ ---cache=.25 \ ---max-sql-memory=.25 -~~~ - -~~~ -$ cockroach start \ ---insecure \ ---host= \ ---join=:26257,:26257,:26257 \ ---cache=.25 \ ---max-sql-memory=.25 -~~~ -
      - -Then run the [`cockroach init`](initialize-a-cluster.html) command against any node to perform a one-time cluster initialization: - -
      -~~~ -$ cockroach init \ ---certs-dir=certs \ ---host=
      -~~~ -
      - -
      -~~~ -$ cockroach init \ ---insecure \ ---host=
      -~~~ -
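-
-After `cockroach init` completes, you can confirm that all of the initial nodes joined. One option is the `cockroach node status` command; a sketch, where `<node address>` is a placeholder for the address of any node (use `--certs-dir=certs` instead of `--insecure` for a secure cluster):
-
-~~~ shell
-# Sketch: list the nodes that have joined the cluster.
-$ cockroach node status --insecure --host=<node address>
-~~~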
      - -### Add a node to a cluster - -
      - - -
-
-To add a node to an existing cluster, run the `cockroach start` command, setting the `--join` flag to the addresses of 3-5 of the nodes already in the cluster:
-
      -~~~ -$ cockroach start \ ---certs-dir=certs \ ---host= \ ---join=:26257,:26257,:26257 \ ---cache=.25 \ ---max-sql-memory=.25 -~~~ -
      - -
      -~~~ -$ cockroach start \ ---insecure \ ---host= \ ---join=:26257,:26257,:26257 \ ---cache=.25 \ ---max-sql-memory=.25 -~~~ -
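-
-If the new node is in a different datacenter than the existing nodes, you would typically also pass the [`--locality`](#locality) flag described above, so that replication and lease placement can account for the new topology. A sketch with hypothetical placeholder addresses (`<new node address>`, `<node1 address>`, and so on):
-
-~~~ shell
-$ cockroach start \
---insecure \
---host=<new node address> \
---join=<node1 address>:26257,<node2 address>:26257,<node3 address>:26257 \
---locality=region=us,datacenter=us-west \
---cache=.25 \
---max-sql-memory=.25
-~~~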
      - -## See Also - -- [Initialize a Cluster](initialize-a-cluster.html) -- [Manual Deployment](manual-deployment.html) -- [Orchestrated Deployment](orchestration.html) -- [Local Deployment](start-a-local-cluster.html) -- [Other Cockroach Commands](cockroach-commands.html) diff --git a/src/current/v2.0/stop-a-node.md b/src/current/v2.0/stop-a-node.md deleted file mode 100644 index 47ba66c976c..00000000000 --- a/src/current/v2.0/stop-a-node.md +++ /dev/null @@ -1,126 +0,0 @@ ---- -title: Stop a Node -summary: This page shows you how to use the cockroach quit command to temporarily stop a node that you plan to restart. -toc: true ---- - -This page shows you how to use the `cockroach quit` [command](cockroach-commands.html) to temporarily stop a node that you plan to restart, for example, during the process of [upgrading your cluster's version of CockroachDB](upgrade-cockroach-version.html) or to perform planned maintenance (e.g., upgrading system software). - -For information about permanently removing nodes to downsize a cluster or react to hardware failures, see [Remove Nodes](remove-nodes.html). - - -## Overview - -### How It Works - -When you stop a node, it performs the following steps: - -- Finishes in-flight requests. Note that this is a best effort that times out after the duration specified by the `server.shutdown.query_wait` [cluster setting](cluster-settings.html). -- Transfers all **range leases** and Raft leadership to other nodes. -- Gossips its draining state to the cluster, so that other nodes do not try to distribute query planning to the draining node, and no leases are transferred to the draining node. Note that this is a best effort that times out after the duration specified by the `server.shutdown.drain_wait` [cluster setting](cluster-settings.html), so other nodes may not receive the gossip info in time. -- No new ranges are transferred to the draining node, to avoid a possible loss of quorum after the node shuts down. - -If the node then stays offline for a certain amount of time (5 minutes by default), the cluster considers the node dead and starts to transfer its **range replicas** to other nodes as well. - -After that, if the node comes back online, its range replicas will determine whether or not they are still valid members of replica groups. If a range replica is still valid and any data in its range has changed, it will receive updates from another replica in the group. If a range replica is no longer valid, it will be removed from the node. - -Basic terms: - -- **Range**: CockroachDB stores all user data and almost all system data in a giant sorted map of key value pairs. This keyspace is divided into "ranges", contiguous chunks of the keyspace, so that every key can always be found in a single range. -- **Range Replica:** CockroachDB replicates each range (3 times by default) and stores each replica on a different node. -- **Range Lease:** For each range, one of the replicas holds the "range lease". This replica, referred to as the "leaseholder", is the one that receives and coordinates all read and write requests for the range. - -### Considerations - -{% include {{ page.version.version }}/faq/planned-maintenance.md %} - -## Synopsis - -~~~ shell -# Temporarily stop a node: -$ cockroach quit - -# View help: -$ cockroach quit --help -~~~ - -## Flags - -The `quit` command supports the following [general-use](#general) and [logging](#logging) flags. 
- -### General - -Flag | Description ------|------------ -`--decommission` | If specified, the node will be permanently removed instead of temporarily stopped. See [Remove Nodes](remove-nodes.html) for more details. - -### Client Connection - -{% include {{ page.version.version }}/sql/connection-parameters.md %} - -See [Client Connection Parameters](connection-parameters.html) for more details. - -### Logging - -By default, the `quit` command logs errors to `stderr`. - -If you need to troubleshoot this command's behavior, you can change its [logging behavior](debug-and-error-logs.html). - -## Examples - -### Stop a Node from the Machine Where It's Running - -1. SSH to the machine where the node is running. - -2. If the node is running in the background and you are using a process manager for automatic restarts, use the process manager to stop the `cockroach` process without restarting it. - - If the node is running in the background and you are not using a process manager, send a kill signal to the `cockroach` process, for example: - - ~~~ shell - $ pkill cockroach - ~~~ - - If the node is running in the foreground, press `CTRL-C`. - -3. Verify that the `cockroach` process has stopped: - - ~~~ shell - $ ps aux | grep cockroach - ~~~ - - Alternately, you can check the node's logs for the message `server drained and shutdown completed`. - -### Stop a Node from Another Machine - -
      - - -
      - -
      -1. [Install the `cockroach` binary](install-cockroachdb.html) on a machine separate from the node. - -2. Create a `certs` directory and copy the CA certificate and the client certificate and key for the `root` user into the directory. - -3. Run the `cockroach quit` command without the `--decommission` flag: - - ~~~ shell - $ cockroach quit --certs-dir=certs --host=
      - ~~~ -
      - -
      -1. [Install the `cockroach` binary](install-cockroachdb.html) on a machine separate from the node. - -2. Run the `cockroach quit` command without the `--decommission` flag: - - ~~~ shell - $ cockroach quit --insecure --host=
      - ~~~ -
      - -## See Also - -- [Other Cockroach Commands](cockroach-commands.html) -- [Permanently Remove Nodes from a Cluster](remove-nodes.html) -- [Upgrade a Cluster's Version](upgrade-cockroach-version.html) diff --git a/src/current/v2.0/string.md b/src/current/v2.0/string.md deleted file mode 100644 index f8af4ee51aa..00000000000 --- a/src/current/v2.0/string.md +++ /dev/null @@ -1,104 +0,0 @@ ---- -title: STRING -summary: The STRING data type stores a string of Unicode characters. -toc: true ---- - -The `STRING` [data type](data-types.html) stores a string of Unicode characters. - - - - -## Aliases - -In CockroachDB, the following are aliases for `STRING`: - -- `CHARACTER` -- `CHAR` -- `VARCHAR` -- `TEXT` - -And the following are aliases for `STRING(n)`: - -- `CHARACTER(n)` -- `CHARACTER VARYING(n)` -- `CHAR(n)` -- `CHAR VARYING(n)` -- `VARCHAR(n)` - -## Length - -To limit the length of a string column, use `STRING(n)`, where `n` is the maximum number of Unicode code points (normally thought of as "characters") allowed. - -When inserting a string: - -- If the value exceeds the column's length limit, CockroachDB gives an error. -- If the value is cast as a string with a length limit (e.g., `CAST('hello world' AS STRING(5))`), CockroachDB truncates to the limit. -- If the value is under the column's length limit, CockroachDB does **not** add padding. This applies to `STRING(n)` and all its aliases. - -## Syntax - -A value of type `STRING` can be expressed using a variety of formats. -See [string literals](sql-constants.html#string-literals) for more details. - -When printing out a `STRING` value in the [SQL shell](use-the-built-in-sql-client.html), the shell uses the simple -SQL string literal format if the value doesn't contain special character, -or the escaped format otherwise. - -### Collations - -`STRING` values accept [collations](collate.html), which lets you sort strings according to language- and country-specific rules. - -## Size - -The size of a `STRING` value is variable, but it's recommended to keep values under 64 kilobytes to ensure performance. Above that threshold, [write amplification](https://en.wikipedia.org/wiki/Write_amplification) and other considerations may cause significant performance degradation. - -## Examples - -~~~ sql -> CREATE TABLE strings (a STRING PRIMARY KEY, b STRING(4), c TEXT); - -> SHOW COLUMNS FROM strings; -~~~ -~~~ -+-------+-----------+-------+---------+ -| Field | Type | Null | Default | -+-------+-----------+-------+---------+ -| a | STRING | false | NULL | -| b | STRING(4) | true | NULL | -| c | STRING | true | NULL | -+-------+-----------+-------+---------+ -~~~ -~~~ sql -> INSERT INTO strings VALUES ('a1b2c3d4', 'e5f6', 'g7h8i9'); - -> SELECT * FROM strings; -~~~ -~~~ -+----------+------+--------+ -| a | b | c | -+----------+------+--------+ -| a1b2c3d4 | e5f6 | g7h8i9 | -+----------+------+--------+ -~~~ - -## Supported Casting & Conversion - -`STRING` values can be [cast](data-types.html#data-type-conversions-casts) to any of the following data types: - -Type | Details ------|-------- -`BOOL` | Requires supported [`BOOL`](bool.html) string format, e.g., `'true'`. -`BYTES` | For more details, [see here](bytes.html#supported-conversions). -`DATE` | Requires supported [`DATE`](date.html) string format, e.g., `'2016-01-25'`. -`DECIMAL` | Requires supported [`DECIMAL`](decimal.html) string format, e.g., `'1.1'`. -`FLOAT` | Requires supported [`FLOAT`](float.html) string format, e.g., `'1.1'`. 
-`INET` | Requires supported [`INET`](inet.html) string format, e.g, `'192.168.0.1'`. -`INT` | Requires supported [`INT`](int.html) string format, e.g., `'10'`. -`INTERVAL` | Requires supported [`INTERVAL`](interval.html) string format, e.g., `'1h2m3s4ms5us6ns'`. -`TIME` | New in v2.0: Requires supported [`TIME`](time.html) string format, e.g., `'01:22:12'` (microsecond precision). -`TIMESTAMP` | Requires supported [`TIMESTAMP`](timestamp.html) string format, e.g., `'2016-01-25 10:10:10.555555'`. - -## See Also - -[Data Types](data-types.html) diff --git a/src/current/v2.0/strong-consistency.md b/src/current/v2.0/strong-consistency.md deleted file mode 100644 index 8653d09b30a..00000000000 --- a/src/current/v2.0/strong-consistency.md +++ /dev/null @@ -1,47 +0,0 @@ ---- -title: Strong Consistency -summary: CockroachDB implements consistent replication via majority consensus between replicas. -toc: false ---- - -CockroachDB replicates your data multiple times and guarantees consistency between replicas. - -Key properties: - -- CockroachDB guarantees serializable SQL transactions - [as long as system clocks are synchronized with NTP](https://www.cockroachlabs.com/blog/living-without-atomic-clocks/) -- No downtime for server restarts, machine failures, or datacenter outages -- Local or wide-area replication with no stale reads on failover -- Employs Raft, a popular successor to Paxos - -How does this work? - -- Stored data is versioned with MVCC, so reads simply limit - their scope to the data visible at the time the read transaction started. - -- Writes are serviced using the - [Raft consensus algorithm](https://raft.github.io/), a popular - alternative to - Paxos. - A consensus algorithm guarantees that any majority of replicas - together always agree on whether an update was committed - successfully. Updates (writes) must reach a majority of replicas (2 - out of 3 by default) before they are considered committed. - - To ensure that a write transaction does not interfere with - read transactions that start after it, CockroachDB also uses - a [timestamp cache](https://www.cockroachlabs.com/blog/serializable-lockless-distributed-isolation-cockroachdb/) - which remembers when data was last read by ongoing transactions. - - This ensures that clients always observe serializable consistency - with regards to other concurrent transactions. - -Strong consistency in CockroachDB - -## See Also - -- [Serializable, Lockless, Distributed: Isolation in CockroachDB](https://www.cockroachlabs.com/blog/serializable-lockless-distributed-isolation-cockroachdb/) -- [Consensus, Made Thrive](https://www.cockroachlabs.com/blog/consensus-made-thrive/) -- [Trust, But Verify: How CockroachDB Checks Replication](https://www.cockroachlabs.com/blog/trust-but-verify-cockroachdb-checks-replication/) -- [Living Without Atomic Clocks](https://www.cockroachlabs.com/blog/living-without-atomic-clocks/) -- [The CockroachDB Architecture Document](https://github.com/cockroachdb/cockroach/blob/master/docs/design.md) diff --git a/src/current/v2.0/subqueries.md b/src/current/v2.0/subqueries.md deleted file mode 100644 index e633c3e9150..00000000000 --- a/src/current/v2.0/subqueries.md +++ /dev/null @@ -1,84 +0,0 @@ ---- -title: Subqueries -summary: Subqueries enable the use of the results from a query within another query. -toc: false ---- - -SQL subqueries enable reuse of the results from a [selection query](selection-queries.html) within another query. 
-
-CockroachDB supports two kinds of subqueries:
-
-- **Relational** subqueries which appear as operands in [selection queries](selection-queries.html) or [table expressions](table-expressions.html).
-- **Scalar** subqueries which appear as an operand in a [scalar expression](scalar-expressions.html).
-
-## Data Writes in Subqueries
-
-When a subquery contains a data-modifying statement (`INSERT`,
-`DELETE`, etc.), the data modification is always executed to
-completion even if the surrounding query only uses a subset of the
-result rows.
-
-This is true both for subqueries defined using the `(...)` or `[...]`
-notations, and those defined using
-[`WITH`](common-table-expressions.html).
-
-For example:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT *
-  FROM [INSERT INTO t(x) VALUES (1), (2), (3) RETURNING x]
-  LIMIT 1;
-~~~
-
-This query always inserts 3 rows into `t`, even though the surrounding
-query only observes 1 row using [`LIMIT`](limit-offset.html).
-
-## Correlated Subqueries
-
-A subquery is said to be "correlated" when it uses table or column
-names defined in the surrounding query.
-
-At this time, CockroachDB only supports non-correlated subqueries: all the table and column names listed in the subquery must be defined in the subquery itself.
-
-If you find yourself wanting to use a correlated subquery, consider that a correlated subquery can often be transformed into a non-correlated subquery using a [join expression](joins.html).
-
-For example:
-
-{% include copy-clipboard.html %}
-~~~ sql
-# Find every customer with at least one order.
-> SELECT c.name
-  FROM customers c
-  WHERE EXISTS(SELECT * FROM orders o WHERE o.customer_id = c.id);
-~~~
-
-The subquery is correlated because it uses `c` defined in the
-surrounding query. It is thus not yet supported by CockroachDB;
-however, it can be transformed to the equivalent query:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT DISTINCT ON(c.id) c.name
-  FROM customers c CROSS JOIN orders o
-  WHERE c.id = o.customer_id;
-~~~
-
-See also [this question on Stack Overflow: Procedurally transform subquery into join](https://stackoverflow.com/questions/1772609/procedurally-transform-subquery-into-join).
-
-{{site.data.alerts.callout_info}}CockroachDB is currently undergoing major changes to introduce support for correlated subqueries. This limitation is expected to be lifted in a future release.{{site.data.alerts.end}}
-
-## Performance Best Practices
-
-{{site.data.alerts.callout_info}}CockroachDB is currently undergoing major changes to evolve and improve the performance of subqueries. The restrictions and workarounds listed in this section will be lifted or made unnecessary over time.{{site.data.alerts.end}}
-
-- Scalar subqueries currently disable distributed execution of the query. To ensure maximum performance on queries that process a large number of rows, make the client application compute the subquery results ahead of time and pass these results directly in the surrounding query, as in the sketch below.
-
-- The results of scalar subqueries are currently loaded entirely into memory when the execution of the surrounding query starts. To prevent execution errors due to memory exhaustion, ensure that subqueries return as few results as possible.
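-
-For example, a query gated on a scalar subquery can be split into two statements issued by the client. A sketch, using a hypothetical `orders` table:
-
-{% include copy-clipboard.html %}
-~~~ sql
-# Single statement; the scalar subquery disables distributed execution:
-> SELECT id FROM orders WHERE amount > (SELECT AVG(amount) FROM orders);
-
-# Workaround: have the client compute the scalar value first...
-> SELECT AVG(amount) FROM orders;
-
-# ...then inline the returned value (say it was 150.25) in the main query:
-> SELECT id FROM orders WHERE amount > 150.25;
-~~~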
-
-## See Also
-
-- [Selection Queries](selection-queries.html)
-- [Scalar Expressions](scalar-expressions.html)
-- [Table Expressions](table-expressions.html)
-- [Performance Best Practices - Overview](performance-best-practices-overview.html)
diff --git a/src/current/v2.0/support-resources.md b/src/current/v2.0/support-resources.md
deleted file mode 100644
index f24ce21740b..00000000000
--- a/src/current/v2.0/support-resources.md
+++ /dev/null
@@ -1,18 +0,0 @@
----
-title: Support Resources
-summary: There are various ways to reach out for support from Cockroach Labs and our community.
-toc: false
----
-
-For each major release of CockroachDB, Cockroach Labs provides maintenance support for at least 365 days and assistance support for at least an additional 180 days. For more details, see the [Release Support Policy](../releases/release-support-policy.html).
-
-If you're having an issue with CockroachDB, you can reach out for support from Cockroach Labs and our community:
-
-- [Troubleshooting documentation](troubleshooting-overview.html)
-- [CockroachDB Community Forum](https://forum.cockroachlabs.com)
-- [CockroachDB Community Slack](https://cockroachdb.slack.com)
-- [StackOverflow](http://stackoverflow.com/questions/tagged/cockroachdb)
-- [File a GitHub issue](file-an-issue.html)
-- [CockroachDB Support Portal](https://support.cockroachlabs.com)
-
-Because CockroachDB is open source, we also rely on contributions from users like you. If you know how to help users who might be struggling with a problem, we hope you will!
diff --git a/src/current/v2.0/table-expressions.md b/src/current/v2.0/table-expressions.md
deleted file mode 100644
index be37514e045..00000000000
--- a/src/current/v2.0/table-expressions.md
+++ /dev/null
@@ -1,391 +0,0 @@
----
-title: Table Expressions
-summary: Table expressions define a data source in selection clauses.
-toc: true
----
-
-Table expressions define a data source in the `FROM` sub-clause of
-[simple `SELECT` clauses](select-clause.html), or as a parameter to
-[`TABLE`](selection-queries.html#table-clause).
-
-[SQL Joins](joins.html) are a particular kind of table
-expression.
-
-
-## Synopsis
-
      - {% include {{ page.version.version }}/sql/diagrams/table_ref.html %}
      -
      - -## Parameters - -Parameter | Description -----------|------------ -`table_name` | A [table or view name](#table-or-view-names). -`table_alias_name` | A name to use in an [aliased table expression](#aliased-table-expressions). -`name` | One or more aliases for the column names, to use in an [aliased table expression](#aliased-table-expressions). -`scan_parameters` | Optional syntax to [force index selection](#force-index-selection). -`func_application` | [Results from a function](#results-from-a-function). -`explainable_stmt` | [Use the result rows](#using-the-output-of-other-statements) of an [explainable statement](explain.html#explainable-statements). -`select_stmt` | A [selection query](selection-queries.html) to use as [subquery](#subqueries-as-table-expressions). -`joined_table` | A [join expression](joins.html). - -## Table Expressions Language - -The synopsis above really defines a mini-language to construct -complex table expressions from simpler parts. - -Construct | Description | Examples -----------|-------------|------------ -`table_name [@ scan_parameters]` | [Access a table or view](#access-a-table-or-view). | `accounts`, `accounts@name_idx` -`function_name ( exprs ... )` | Generate tabular data using a [scalar function](#scalar-function-as-data-source) or [table generator function](#table-generator-functions). | `sin(1.2)`, `generate_series(1,10)` -`
<table expr> [AS] name [( name [, ...] )]` | [Rename a table and optionally columns](#aliased-table-expressions). | `accounts a`, `accounts AS a`, `accounts AS a(id, b)`
-`<table expr> WITH ORDINALITY` | [Enumerate the result rows](#ordinality-annotation). | `accounts WITH ORDINALITY`
-`<table expr> JOIN <table expr> ON ...` | [Join expression](joins.html). | `orders o JOIN customers c ON o.customer_id = c.id`
-`(... subquery ...)` | A [selection query](selection-queries.html) used as [subquery](#subqueries-as-table-expressions). | `(SELECT * FROM customers c)`
-`[... statement ...]` | [Use the result rows](#using-the-output-of-other-statements) of an [explainable statement](explain.html#explainable-statements).

      This is a CockroachDB extension. | `[SHOW COLUMNS FROM accounts]` - -The following sections provide details on each of these options. - -## Table Expressions That Generate Data - -The following sections describe primary table expressions that produce -data. - -### Access a Table or View - -#### Table or View Names - -Syntax: - -~~~ -identifier -identifier.identifier -identifier.identifier.identifier -~~~ - -A single SQL identifier in a table expression context designates -the contents of the table, [view](views.html), or sequence with that name -in the current database, as configured by [`SET DATABASE`](set-vars.html). - -Changed in v2.0: If the name is composed of two or more identifiers, [name resolution](sql-name-resolution.html) rules apply. - -For example: - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM users; -- uses table `users` in the current database -> SELECT * FROM mydb.users; -- uses table `users` in database `mydb` -~~~ - -#### Force Index Selection - -By using the explicit index annotation, you can override [CockroachDB's index selection](https://www.cockroachlabs.com/blog/index-selection-cockroachdb-2/) and use a specific [index](indexes.html) when reading from a named table. - -{{site.data.alerts.callout_info}}Index selection can impact performance, but does not change the result of a query.{{site.data.alerts.end}} - -{% include copy-clipboard.html %} -~~~ sql -> SHOW INDEXES FROM accounts; -~~~ -~~~ -+----------+-------------------+--------+-----+--------+-----------+---------+----------+ -| Table | Name | Unique | Seq | Column | Direction | Storing | Implicit | -+----------+-------------------+--------+-----+--------+-----------+---------+----------+ -| accounts | primary | true | 1 | id | ASC | false | false | -| accounts | accounts_name_idx | false | 1 | name | ASC | false | false | -| accounts | accounts_name_idx | false | 2 | id | ASC | false | true | -+----------+-------------------+--------+-----+--------+-----------+---------+----------+ -(3 rows) -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT name, balance -FROM accounts@accounts_name_idx -WHERE name = 'Edna Barath'; -~~~ -~~~ -+-------------+---------+ -| name | balance | -+-------------+---------+ -| Edna Barath | 750 | -| Edna Barath | 2200 | -+-------------+---------+ -~~~ - -### Access a Common Table Expression - -A single identifier in a table expression context can refer to a -[common table expression](common-table-expressions.html) defined -earlier. - -For example: - -{% include copy-clipboard.html %} -~~~ sql -> WITH a AS (SELECT * FROM users) - SELECT * FROM a; -- "a" refers to "WITH a AS .." -~~~ - -### Results From a Function - -A table expression can use the results from a function application as -a data source. - -Syntax: - -~~~ -name ( arguments... ) -~~~ - -The name of a function, followed by an opening parenthesis, followed -by zero or more [scalar expressions](scalar-expressions.html), followed by -a closing parenthesis. - -The resolution of the function name follows the same rules as the -resolution of table names. See [Name -Resolution](sql-name-resolution.html) for more details. - -#### Scalar Function as Data Source - -New in v2.0 - -When a [function returning a single -value](scalar-expressions.html#function-calls-and-sql-special-forms) is -used as a table expression, it is interpreted as tabular data with a -single column and single row containing the function results. 
-
-For example:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM sin(3.2)
-~~~
-~~~
-+-----------------------+
-|          sin          |
-+-----------------------+
-| -0.058374143427580086 |
-+-----------------------+
-~~~
-
-{{site.data.alerts.callout_info}}CockroachDB only supports this syntax for compatibility with PostgreSQL. The canonical syntax to evaluate scalar functions is as a direct target of SELECT, for example SELECT sin(3.2).{{site.data.alerts.end}}
-
-
-#### Table Generator Functions
-
-Some functions directly generate tabular data with multiple rows from
-a single function application. This is also called a "set-returning
-function".
-
-For example:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM generate_series(1, 3)
-~~~
-~~~
-+-----------------+
-| generate_series |
-+-----------------+
-|               1 |
-|               2 |
-|               3 |
-+-----------------+
-~~~
-
-{{site.data.alerts.callout_info}}Currently CockroachDB only supports a small set of generator functions compatible with the PostgreSQL set-generating functions of the same name.{{site.data.alerts.end}}
-
-## Operators That Extend a Table Expression
-
-The following sections describe table expressions that change the
-metadata around tabular data, or add more data, without modifying the
-data of the underlying operand.
-
-### Aliased Table Expressions
-
-Aliased table expressions rename tables and columns temporarily in
-the context of the current query.
-
-Syntax:
-
-~~~
-<table expr> AS <name>
-<table expr> AS <name>(<colname>, <colname>, ...)
-~~~
-
-In the first form, the table expression is equivalent to its left operand
-with a new name for the entire table, and the columns retain their original names.
-
-In the second form, the columns are also renamed.
-
-For example:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT c.x FROM (SELECT COUNT(*) AS x FROM users) AS c;
-> SELECT c.x FROM (SELECT COUNT(*) FROM users) AS c(x);
-~~~
-
-### Ordinality Annotation
-
-Syntax:
-
-~~~
-<table expr> WITH ORDINALITY
-~~~
-
-Designates a data source equivalent to the table expression operand with
-an extra "Ordinality" column that enumerates every row in the data source.
-
-For example:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM (VALUES('a'),('b'),('c'));
-~~~
-~~~
-+---------+
-| column1 |
-+---------+
-| a       |
-| b       |
-| c       |
-+---------+
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM (VALUES ('a'), ('b'), ('c')) WITH ORDINALITY;
-~~~
-~~~
-+---------+------------+
-| column1 | ordinality |
-+---------+------------+
-| a       |          1 |
-| b       |          2 |
-| c       |          3 |
-+---------+------------+
-~~~
-
-{{site.data.alerts.callout_info}}WITH ORDINALITY necessarily prevents some optimizations of the surrounding query. Use it sparingly if performance is a concern, and always check the output of EXPLAIN in case of doubt. {{site.data.alerts.end}}
-
-## Join Expressions
-
-Join expressions combine the results of two or more table expressions
-based on conditions on the values of particular columns.
-
-See [Join Expressions](joins.html) for more details.
-
-## Using Other Queries as Table Expressions
-
-The following sections describe how to use the results produced by
-another SQL query or statement as a table expression.
-
-### Subqueries as Table Expressions
-
-Any [selection
-query](selection-queries.html) enclosed
-between parentheses can be used as a table expression, including
-[simple `SELECT` clauses](select-clause.html). This is called a
-"[subquery](subqueries.html)".
-
-Syntax:
-
-~~~
-( ... subquery ... )
-~~~
-
-For example:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT c+2 FROM (SELECT COUNT(*) AS c FROM users);
-> SELECT * FROM (VALUES(1), (2), (3));
-> SELECT firstname || ' ' || lastname FROM (TABLE employees);
-~~~
-
-{{site.data.alerts.callout_info}}
-• See also [Subqueries](subqueries.html) for more details and performance best practices.
-• To use other statements that produce data in a table expression, for example SHOW, use the [square bracket notation](#using-the-output-of-other-statements).
-{{site.data.alerts.end}}
-
-
-### Using the Output of Other Statements
-
-Syntax:
-
-~~~
-[ <explainable statement> ]
-~~~
-
-An [explainable statement](explain.html#explainable-statements)
-between square brackets in a table expression context designates the
-output of executing said statement.
-
-{{site.data.alerts.callout_info}}This is a CockroachDB extension. This
-syntax complements the subquery syntax using
-parentheses, which is restricted to selection queries. It was introduced to enable use of any explainable statement as a subquery, including SHOW and other non-query statements.{{site.data.alerts.end}}
-
-For example:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT "Field" FROM [SHOW COLUMNS FROM customer];
-~~~
-~~~
-+---------+
-|  Field  |
-+---------+
-| id      |
-| name    |
-| address |
-+---------+
-~~~
-
-The following statement inserts Albert into the `employee` table and
-immediately creates a matching entry in the `management` table with the
-auto-generated employee ID, without requiring an extra round trip with the SQL
-client:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> INSERT INTO management(manager, reportee)
-  VALUES ((SELECT id FROM employee WHERE name = 'Diana'),
-          (SELECT id FROM [INSERT INTO employee(name) VALUES ('Albert') RETURNING id]));
-~~~
-
-## Composability
-
-Table expressions are used in the [`SELECT`](select-clause.html) and
-[`TABLE`](selection-queries.html#table-clause) variants of [selection
-clauses](selection-queries.html#selection-clauses), and thus can appear anywhere
-a selection clause is possible. For example:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT ... FROM <table expr>, <table expr>, ...
-> TABLE <table expr>
-> INSERT INTO ... SELECT ... FROM <table expr>, <table expr>, ...
-> INSERT INTO ... TABLE <table expr>
-> CREATE TABLE ... AS SELECT ... FROM <table expr>, <table expr>, ...
-> UPSERT INTO ... SELECT ... FROM <table expr>, <table expr>, ...
-~~~
-
-For more options to compose query results, see [Selection Queries](selection-queries.html).
-
-## See Also
-
-- [Constants](sql-constants.html)
-- [Selection Queries](selection-queries.html)
-  - [Selection Clauses](selection-queries.html#selection-clauses)
-- [Explainable Statements](explain.html#explainable-statements)
-- [Scalar Expressions](scalar-expressions.html)
-- [Data Types](data-types.html)
-- [Subqueries](subqueries.html)
diff --git a/src/current/v2.0/time.md b/src/current/v2.0/time.md
deleted file mode 100644
index b6388f9101f..00000000000
--- a/src/current/v2.0/time.md
+++ /dev/null
@@ -1,87 +0,0 @@
----
-title: TIME
-summary: The TIME data type stores a time of day without a time zone.
-toc: true
----
-New in v2.0: The `TIME` [data type](data-types.html) stores the time of day without a time zone.
-
-
-## Aliases
-
-In CockroachDB, the following is an alias for `TIME`:
-
-- `TIME WITHOUT TIME ZONE`
-
-## Syntax
-
-A constant value of type `TIME` can be expressed using an
-[interpreted literal](sql-constants.html#interpreted-literals), or a
-string literal
-[annotated with](scalar-expressions.html#explicitly-typed-expressions)
-type `TIME` or
-[coerced to](scalar-expressions.html#explicit-type-coercions) type
-`TIME`.
-
-The string format for time is `HH:MM:SS.SSSSSS`. For example: `TIME '05:40:00.000001'`.
-
-CockroachDB also supports using uninterpreted
-[string literals](sql-constants.html#string-literals) in contexts
-where a `TIME` value is otherwise expected.
-
-The fractional portion of `TIME` is optional and is rounded to microseconds (i.e., six digits after the decimal) for compatibility with the PostgreSQL wire protocol.
-
-## Size
-
-A `TIME` column supports values up to 8 bytes in width, but the total storage size is likely to be larger due to CockroachDB metadata.
-
-## Example
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE time (a INT PRIMARY KEY, b TIME);
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SHOW COLUMNS FROM time;
-~~~
-~~~
-+-------+------+-------+---------+-------------+
-| Field | Type | Null  | Default |   Indices   |
-+-------+------+-------+---------+-------------+
-| a     | INT  | false | NULL    | {"primary"} |
-| b     | TIME | true  | NULL    | {}          |
-+-------+------+-------+---------+-------------+
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> INSERT INTO time VALUES (1, TIME '05:40:00');
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM time;
-~~~
-~~~
-+---+---------------------------+
-| a |             b             |
-+---+---------------------------+
-| 1 | 0000-01-01 05:40:00+00:00 |
-+---+---------------------------+
-~~~
-{{site.data.alerts.callout_info}}The cockroach sql shell displays the date and timezone due to the Go SQL driver it uses. Other client drivers may behave similarly.
In such cases, however, the date and timezone are not relevant and are not stored in the database.{{site.data.alerts.end}}
-
-## Supported Casting & Conversion
-
-`TIME` values can be [cast](data-types.html#data-type-conversions-casts) to any of the following data types:
-
-Type | Details
------|--------
-`INTERVAL` | Converts to the span of time since midnight (00:00)
-`STRING` | Converts to format `'HH:MM:SS.SSSSSS'` (microsecond precision)
-
-## See Also
-
-- [Data Types](data-types.html)
-- [SQL Feature Support](sql-feature-support.html)
diff --git a/src/current/v2.0/timestamp.md b/src/current/v2.0/timestamp.md
deleted file mode 100644
index 7ceb259acb3..00000000000
--- a/src/current/v2.0/timestamp.md
+++ /dev/null
@@ -1,113 +0,0 @@
----
-title: TIMESTAMP / TIMESTAMPTZ
-summary: The TIMESTAMP and TIMESTAMPTZ data types store a date and time pair in UTC.
-toc: true
----
-
-The `TIMESTAMP` and `TIMESTAMPTZ` [data types](data-types.html) store a date and time pair in UTC.
-
-
-## Time Zone Details
-
-`TIMESTAMP` has two variants:
-
-- `TIMESTAMP WITH TIME ZONE` converts `TIMESTAMP` values from UTC to the client's session time zone (unless another time zone is specified for the value). However, it is conceptually important to note that `TIMESTAMP WITH TIME ZONE` *does not* store any time zone data.
-
-    {{site.data.alerts.callout_info}}The default session time zone is UTC, which means that by default `TIMESTAMP WITH TIME ZONE` values display in UTC.{{site.data.alerts.end}}
-
-- `TIMESTAMP WITHOUT TIME ZONE` presents all `TIMESTAMP` values in UTC.
-
-The difference between these two types is that `TIMESTAMP WITH TIME ZONE` uses the client's session time zone, while the other simply does not. This behavior extends to functions like `now()` and `extract()` on `TIMESTAMP WITH TIME ZONE` values.
-
-### Best Practices
-
-We recommend always using the `...WITH TIME ZONE` variant because the `...WITHOUT TIME ZONE` variant can sometimes lead to unexpected behaviors when it ignores a session offset. However, we also recommend you avoid setting a session time for your database.
-
-## Aliases
-
-In CockroachDB, the following are aliases:
-
-- `TIMESTAMP`, `TIMESTAMP WITHOUT TIME ZONE`
-- `TIMESTAMPTZ`, `TIMESTAMP WITH TIME ZONE`
-
-## Syntax
-
-A constant value of type `TIMESTAMP`/`TIMESTAMPTZ` can be expressed using an
-[interpreted literal](sql-constants.html#interpreted-literals), or a
-string literal
-[annotated with](scalar-expressions.html#explicitly-typed-expressions)
-type `TIMESTAMP`/`TIMESTAMPTZ` or
-[coerced to](scalar-expressions.html#explicit-type-coercions) type
-`TIMESTAMP`/`TIMESTAMPTZ`.
-
-`TIMESTAMP` constants can be expressed using the
-following string literal formats:
-
-Format | Example
--------|--------
-Date only | `TIMESTAMP '2016-01-25'`
-Date and Time | `TIMESTAMP '2016-01-25 10:10:10.555555'`
-ISO 8601 | `TIMESTAMP '2016-01-25T10:10:10.555555'`
-
-To express a `TIMESTAMPTZ` value (with time zone offset from UTC), use
-the following format: `TIMESTAMPTZ '2016-01-25 10:10:10.555555-05:00'`
-
-When it is unambiguous, a simple unannotated string literal can also
-be automatically interpreted as type `TIMESTAMP` or `TIMESTAMPTZ`.
-
-Note that the fractional portion is optional and is rounded to
-microseconds (6 digits after decimal) for compatibility with the
-PostgreSQL wire protocol.
-
-## Size
-
-A `TIMESTAMP` column supports values up to 12 bytes in width, but the total storage size is likely to be larger due to CockroachDB metadata.
-
-## Examples
-
-~~~ sql
-> CREATE TABLE timestamps (a INT PRIMARY KEY, b TIMESTAMPTZ);
-
-> SHOW COLUMNS FROM timestamps;
-~~~
-~~~
-+-------+--------------------------+-------+---------+
-| Field | Type | Null | Default |
-+-------+--------------------------+-------+---------+
-| a | INT | false | NULL |
-| b | TIMESTAMP WITH TIME ZONE | true | NULL |
-+-------+--------------------------+-------+---------+
-(2 rows)
-~~~
-~~~ sql
-> INSERT INTO timestamps VALUES (1, TIMESTAMPTZ '2016-03-26 10:10:10-05:00'), (2, TIMESTAMPTZ '2016-03-26');
-
-> SELECT * FROM timestamps;
-~~~
-~~~
-+---+---------------------------+
-| a | b |
-+---+---------------------------+
-| 1 | 2016-03-26 15:10:10+00:00 |
-| 2 | 2016-03-26 00:00:00+00:00 |
-+---+---------------------------+
-# Note that the first timestamp is UTC-05:00, which is the equivalent of EST.
-~~~
-
-## Supported Casting & Conversion
-
-`TIMESTAMP` values can be [cast](data-types.html#data-type-conversions-casts) to any of the following data types:
-
-Type | Details
------|--------
-`INT` | Converts to number of seconds since the Unix epoch (Jan. 1, 1970). This is a CockroachDB experimental feature which may be changed without notice.
-`SERIAL` | Converts to number of seconds since the Unix epoch (Jan. 1, 1970). This is a CockroachDB experimental feature which may be changed without notice.
-`DECIMAL` | Converts to number of seconds since the Unix epoch (Jan. 1, 1970). This is a CockroachDB experimental feature which may be changed without notice.
-`FLOAT` | Converts to number of seconds since the Unix epoch (Jan. 1, 1970). This is a CockroachDB experimental feature which may be changed without notice.
-`TIME` | New in v2.0: Converts to the time portion (HH:MM:SS) of the timestamp
-`DATE` | ––
-`STRING` | ––
-
-## See Also
-
-[Data Types](data-types.html)
diff --git a/src/current/v2.0/transactions.md b/src/current/v2.0/transactions.md
deleted file mode 100644
index 330c887de7c..00000000000
--- a/src/current/v2.0/transactions.md
+++ /dev/null
@@ -1,279 +0,0 @@
----
-title: Transactions
-summary: CockroachDB supports bundling multiple SQL statements into a single all-or-nothing transaction.
-toc: true
----
-
-CockroachDB supports bundling multiple SQL statements into a single all-or-nothing transaction. Each transaction guarantees [ACID semantics](https://en.wikipedia.org/wiki/ACID) spanning arbitrary tables and rows, even when data is distributed. If a transaction succeeds, all mutations are applied together with virtual simultaneity. If any part of a transaction fails, the entire transaction is aborted, and the database is left unchanged. CockroachDB guarantees that while a transaction is pending, it is isolated from other concurrent transactions with serializable [isolation](#isolation-levels).
-
-{{site.data.alerts.callout_info}}For a detailed discussion of CockroachDB transaction semantics, see How CockroachDB Does Distributed Atomic Transactions and Serializable, Lockless, Distributed: Isolation in CockroachDB. Note that the explanation of the transaction model described in these blog posts is slightly out of date. See the Transaction Retries section for more details.{{site.data.alerts.end}}
-
-## SQL Statements
-
-Each of the following SQL statements controls transactions in some way.
-
-| Statement | Function |
-|-----------|----------|
-| [`BEGIN`](begin-transaction.html) | Initiate a transaction, as well as control its [priority](#transaction-priorities) and [isolation level](#isolation-levels). |
| [`SET TRANSACTION`](set-transaction.html) | Control a transaction's [priority](#transaction-priorities) and [isolation level](#isolation-levels). |
-| [`SAVEPOINT cockroach_restart`](savepoint.html) | Declare the transaction as [retryable](#client-side-transaction-retries). This lets you retry the transaction if it doesn't succeed because a higher priority transaction concurrently or recently accessed the same values. |
-| [`RELEASE SAVEPOINT cockroach_restart`](release-savepoint.html) | Commit a [retryable transaction](#client-side-transaction-retries). |
-| [`COMMIT`](commit-transaction.html) | Commit a non-retryable transaction or clear the connection after committing a retryable transaction. |
-| [`ROLLBACK TO SAVEPOINT cockroach_restart`](rollback-transaction.html) | Handle [retryable errors](#error-handling) by rolling back a transaction's changes and increasing its priority. |
-| [`ROLLBACK`](rollback-transaction.html) | Abort a transaction and roll the database back to its state before the transaction began. |
-| [`SHOW`](show-vars.html) | Display the current transaction settings. |
-
-## Syntax
-
-In CockroachDB, a transaction is set up by surrounding SQL statements with the [`BEGIN`](begin-transaction.html) and [`COMMIT`](commit-transaction.html) statements.
-
-To use [client-side transaction retries](#client-side-transaction-retries), you should also include the `SAVEPOINT cockroach_restart`, `ROLLBACK TO SAVEPOINT cockroach_restart`, and `RELEASE SAVEPOINT cockroach_restart` statements.
-
-~~~ sql
-> BEGIN;
-
-> SAVEPOINT cockroach_restart;
-
-<transaction statements>
-
-> RELEASE SAVEPOINT cockroach_restart;
-
-> COMMIT;
-~~~
-
-At any time before it's committed, you can abort the transaction by executing the [`ROLLBACK`](rollback-transaction.html) statement.
-
-Clients using transactions must also include logic to handle [retries](#transaction-retries).
-
-## Error Handling
-
-To handle errors in transactions, you should check for the following types of server-side errors:
-
-Type | Description
------|------------
-**Retryable Errors** | Errors with the code `40001` or string `retry transaction`, which indicate that a transaction failed because it conflicted with another concurrent or recent transaction accessing the same data. The transaction needs to be retried by the client. See [client-side transaction retries](#client-side-transaction-retries) for more details.
-**Ambiguous Errors** | Errors with the code `40003` that are returned in response to `RELEASE SAVEPOINT` (or `COMMIT` when not using `SAVEPOINT`), which indicate that the state of the transaction is ambiguous, i.e., you cannot assume it either committed or failed. How you handle these errors depends on how you want to resolve the ambiguity. See [here](common-errors.html#result-is-ambiguous) for more about this kind of error.
-**SQL Errors** | All other errors, which indicate that a statement in the transaction failed. For example, violating the Unique constraint generates a `23505` error. After encountering these errors, you can either issue a `COMMIT` or `ROLLBACK` to abort the transaction and revert the database to its state before the transaction began.

      If you want to attempt the same set of statements again, you must begin a completely new transaction. - -## Transaction Contention - -Transactions in CockroachDB lock data resources that are written during their execution. When a pending write from one transaction conflicts with a write of a concurrent transaction, the concurrent transaction must wait for the earlier transaction to complete before proceeding. When a dependency cycle is detected between transactions, the transaction with the higher priority aborts the dependent transaction to avoid deadlock, which must be retried. - -For more details about transaction contention and best practices for avoiding contention, see [Understanding and Avoiding Transaction Contention](performance-best-practices-overview.html#understanding-and-avoiding-transaction-contention). - -## Transaction Retries - -Transactions may require retries if they experience deadlock or [read/write contention](performance-best-practices-overview.html#understanding-and-avoiding-transaction-contention) with other concurrent transactions which cannot be resolved without allowing potential [serializable anomalies](https://en.wikipedia.org/wiki/Serializability). (However, it's possible to mitigate read-write conflicts by performing reads using [`AS OF SYSTEM TIME`](performance-best-practices-overview.html#use-as-of-system-time-to-decrease-conflicts-with-long-running-queries).) - -There are two cases for handling transaction retries: - -- [Automatic retries](#automatic-retries), which CockroachDB processes for you. -- [Client-side intervention](#client-side-intervention), which your application must handle. - -### Automatic Retries - -CockroachDB automatically retries individual statements (implicit transactions) -and transactions sent from the client as a single batch, as long as the size of -the results being produced for the client (including protocol overhead) is less -than 16KiB. Once that buffer overflows, CockroachDB starts streaming results back to -the client, at which point automatic retries cannot be performed any more. As -long as the results of a single statement or batch of statements are known to -stay clear of this limit, the client does not need to worry about transaction -retries. - -In future versions of CockroachDB, we plan on providing stronger guarantees for -read-only queries that return at most one row, regardless of the size of that -row. - -#### Individual Statements - -Individual statements are treated as implicit transactions, and so they fall -under the rules described above. If the results are small enough, they will be -automatically retried. In particular, `INSERT/UPDATE/DELETE` statements without -a `RETURNING` clause are guaranteed to have minuscule result sizes. -For example, the following statement would be automatically retried by CockroachDB: - -~~~ sql -> DELETE FROM customers WHERE id = 1; -~~~ - -#### Batched Statements - -Transactions can be sent from the client as a single batch. Batching implies that CockroachDB receives multiple statements without being asked to return results in between them; instead, CockroachDB returns results after executing all of the statements (except if the accumulated results overflow an internal buffer, in which case they are returned sooner and automatic retries can no longer be performed). - -Batching is generally controlled by your driver or client's behavior. Technically, it can be achieved in two ways, both supporting automatic retries: - -1. 
When the client/driver is using the [PostgreSQL Extended Query protocol](https://www.postgresql.org/docs/10/static/protocol-flow.html#PROTOCOL-FLOW-EXT-QUERY), a batch is made up of all queries sent in between two `Sync` messages. Many drivers support such batches through explicit batching constructs.
-
-2. When the client/driver is using the [PostgreSQL Simple Query protocol](https://www.postgresql.org/docs/10/static/protocol-flow.html#id-1.10.5.7.4), a batch is made up of semicolon-separated strings sent as a unit to CockroachDB. For example, in Go, this code would send a single batch (which would be automatically retried):
-
-    ~~~ go
-    db.Exec(`BEGIN;
-
-    DELETE FROM customers WHERE id = 1;
-
-    DELETE FROM orders WHERE customer = 1;
-
-    COMMIT;`)
-    ~~~
-
-{{site.data.alerts.callout_info}}
-Within a batch of statements, CockroachDB infers that the statements are not
-conditional on the results of previous statements, so it can retry all of them.
-Of course, if the transaction relies on conditional logic (e.g., statement 2 is
-executed only for some results of statement 1), then the transaction cannot be
-all sent to CockroachDB as a single batch. In these common cases, CockroachDB
-cannot retry, say, statement 2 in isolation. Since results for statement 1 have
-already been delivered to the client by the time statement 2 is forcing the
-transaction to retry, the client needs to be involved in retrying the whole
-transaction and so you should write your transactions to use
-[client-side intervention](#client-side-intervention).
-{{site.data.alerts.end}}
-
-### Client-Side Intervention
-
-Your application should include client-side retry handling when the statements are sent individually, such as:
-
-~~~ sql
-> BEGIN;
-
-> UPDATE products SET inventory = 0 WHERE sku = '8675309';
-
-> INSERT INTO orders (customer, status) VALUES (1, 'new');
-
-> COMMIT;
-~~~
-
-To indicate a transaction must be retried, CockroachDB surfaces an error with the code `40001` and an error message that begins with the string `retry transaction`.
-
-To handle these types of errors, you have two options:
-
-- *Recommended*: Use the `SAVEPOINT cockroach_restart` statements to create retryable transactions. Retryable transactions can improve performance because their priority is increased each time they are retried, making them more likely to succeed the longer they're in your system.
-
-    For more information, see [Client-Side Transaction Retries](#client-side-transaction-retries).
-
-- Abort the transaction using `ROLLBACK`, and then reissue all of the statements in the transaction. This does *not* automatically increase the transaction's priority, so it's possible in high-contention workloads for transactions to take an incredibly long time to succeed.
-
-#### Client-Side Transaction Retries
-
-As one way to improve the performance of [contended transactions](performance-best-practices-overview.html#understanding-and-avoiding-transaction-contention), CockroachDB includes a set of statements that let you retry those transactions. Retrying transactions has the benefit of increasing their priority each time they're retried, increasing their likelihood to succeed.
-
-Retried transactions are also issued at a later timestamp, which means the transaction operates on a later snapshot of the database, so reads might return updated data.
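Before the individual statements are described, here is a rough sketch of one complete retry cycle at the SQL level (the `accounts` table and the conflicting statement are hypothetical; each statement is explained in detail below):

~~~ sql
> BEGIN;

> SAVEPOINT cockroach_restart;

> UPDATE accounts SET balance = balance - 100 WHERE id = 1;
-- Suppose this statement fails with error code 40001 ("retry transaction").
-- Roll back to the savepoint, which also increases the transaction's priority:

> ROLLBACK TO SAVEPOINT cockroach_restart;

-- Reissue the statements, then attempt to commit again:
> UPDATE accounts SET balance = balance - 100 WHERE id = 1;

> RELEASE SAVEPOINT cockroach_restart;

> COMMIT;
~~~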
- -Implementing client-side retries requires three statements: - -- [`SAVEPOINT cockroach_restart`](savepoint.html) declares the client's intent to retry the transaction if there are contention errors. It must be executed after `BEGIN` but before the first statement that manipulates a database. - -- [`ROLLBACK TO SAVEPOINT cockroach_restart`](rollback-transaction.html#retry-a-transaction) is used when your application detects `40001` / `retry transaction` errors. It provides you a chance to "retry" the transaction by rolling the database's state back to the beginning of the transaction and increasing the transaction's priority. - - After issuing `ROLLBACK TO SAVEPOINT cockroach_restart`, you must issue any statements you want the transaction to contain. Typically, this means recalculating values and reissuing a similar set of statements to the previous attempt. - -- [`RELEASE SAVEPOINT cockroach_restart`](release-savepoint.html) commits the transaction. At this point, CockroachDB checks to see if the transaction contends with others for access to the same values; the highest priority transaction succeeds, and the others return `40001` / `retry transaction` errors. - - You must also execute `COMMIT` afterward to clear the connection for the next transaction. - -You can find examples of this in the [Syntax](#syntax) section of this page or in our [Build an App with CockroachDB](build-an-app-with-cockroachdb.html) tutorials. - -{{site.data.alerts.callout_success}}If you're building an application in the following languages, we have packages to make client-side retries simpler: -{{site.data.alerts.end}} - -It's also important to note that retried transactions are restarted at a later timestamp. This means that the transaction operates on a later snapshot of the database and related reads might retrieve updated data. - -For greater detail, here's the process a retryable transaction goes through. - -1. The transaction starts with the `BEGIN` statement. - -2. The `SAVEPOINT cockroach_restart` statement declares the intention to retry the transaction in the case of contention errors. Note that CockroachDB's savepoint implementation does not support all savepoint functionality, such as nested transactions. - -3. The statements in the transaction are executed. - -4. If a statement returns a retryable error (identified via the `40001` error code or `retry transaction` string at the start of the error message), you can issue the [`ROLLBACK TO SAVEPOINT cockroach_restart`](rollback-transaction.html) statement to restart the transaction. Alternately, the original `SAVEPOINT cockroach_restart` statement can be reissued to restart the transaction. - - You must now issue the statements in the transaction again. - - In cases where you do not want the application to retry the transaction, you can simply issue `ROLLBACK` at this point. Any other statements will be rejected by the server, as is generally the case after an error has been encountered and the transaction has not been closed. - -5. Once the transaction executes all statements without encountering contention errors, execute [`RELEASE SAVEPOINT cockroach_restart`](release-savepoint.html) to commit the changes. If this succeeds, all changes made by the transaction become visible to subsequent transactions and are guaranteed to be durable if a crash occurs. 
-
-    In some cases, the `RELEASE SAVEPOINT` statement itself can fail with a retryable error, mainly because transactions in CockroachDB only realize that they need to be restarted when they attempt to commit. If this happens, the retryable error is handled as described in step 4.
-
-## Transaction Parameters
-
-Each transaction is controlled by two parameters: its priority and its
-isolation level. The following two sections detail these further.
-
-### Transaction Priorities
-
-Every transaction in CockroachDB is assigned an initial **priority**. By default, that priority is `NORMAL`, but for transactions that should be given preference in [high-contention scenarios](performance-best-practices-overview.html#understanding-and-avoiding-transaction-contention), the client can set the priority within the [`BEGIN`](begin-transaction.html) statement:
-
-~~~ sql
-> BEGIN PRIORITY <LOW | NORMAL | HIGH>;
-~~~
-
-Alternately, the client can set the priority immediately after the transaction is started as follows:
-
-~~~ sql
-> SET TRANSACTION PRIORITY <LOW | NORMAL | HIGH>;
-~~~
-
-The client can also display the current priority of the transaction with [`SHOW TRANSACTION PRIORITY`](show-vars.html).
-
-{{site.data.alerts.callout_info}}When two transactions contend for the same resources indirectly, they may create a dependency cycle leading to a deadlock situation, where both transactions are waiting on the other to finish. In these cases, CockroachDB allows the transaction with higher priority to abort the other, which must then retry. On retry, the transaction inherits the higher priority. This means that each retry makes a transaction more likely to succeed in the event it again experiences deadlock.{{site.data.alerts.end}}
-
-### Isolation Levels
-
-CockroachDB efficiently supports the strongest ANSI transaction isolation level: `SERIALIZABLE`. All other ANSI transaction isolation levels (e.g., `READ UNCOMMITTED`, `READ COMMITTED`, and `REPEATABLE READ`) are automatically upgraded to `SERIALIZABLE`. Weaker isolation levels have historically been used to maximize transaction throughput. However, [recent research](http://www.bailis.org/papers/acidrain-sigmod2017.pdf) has demonstrated that the use of weak isolation levels results in substantial vulnerability to concurrency-based attacks. CockroachDB continues to support an additional non-ANSI isolation level, `SNAPSHOT`, although it is deprecated. Clients can explicitly set a transaction's isolation when starting the transaction:
-
-~~~ sql
-> BEGIN ISOLATION LEVEL <SERIALIZABLE | SNAPSHOT>;
-~~~
-
-Alternately, the client can set the isolation level immediately after the transaction is started:
-
-~~~ sql
-> SET TRANSACTION ISOLATION LEVEL <SERIALIZABLE | SNAPSHOT>;
-~~~
-
-The client can also display the current isolation level of the transaction with [`SHOW TRANSACTION ISOLATION LEVEL`](show-vars.html).
-
-{{site.data.alerts.callout_info}}For a detailed discussion of isolation in CockroachDB transactions, see Serializable, Lockless, Distributed: Isolation in CockroachDB.{{site.data.alerts.end}}
-
-#### Serializable Isolation
-
-With `SERIALIZABLE` isolation, a transaction behaves as though it has the entire database all to itself for the duration of its execution. This means that no concurrent writers can affect the transaction unless they commit before it starts, and no concurrent readers can be affected by the transaction until it has successfully committed. This is the strongest level of isolation provided by CockroachDB and it's the default.
-
-Unlike `SNAPSHOT`, `SERIALIZABLE` isolation permits no anomalies.
In order to prevent [write skew](https://en.wikipedia.org/wiki/Snapshot_isolation) anomalies, `SERIALIZABLE` isolation may require transaction restarts. - -#### Snapshot Isolation - -With `SNAPSHOT` isolation (**deprecated**), a transaction behaves as if it were reading the state of the database consistently at a fixed point in time. Unlike the `SERIALIZABLE` level, `SNAPSHOT` isolation permits the write skew anomaly. This isolation level is still supported for backwards compatibility, but you should avoid using it. It provides little benefit in terms of performance and can result in inconsistent state under certain complex workloads. Concurrency-based attacks can coerce inconsistencies into meaningfully adverse effects to system state. For this same reason, CockroachDB upgrades all requests for the much weaker ANSI `READ UNCOMMITTED`, `READ COMMITTED`, and `REPEATABLE READ` isolation levels into `SERIALIZABLE`. - -### Comparison to ANSI SQL Isolation Levels - -CockroachDB uses slightly different isolation levels than [ANSI SQL isolation levels](https://en.wikipedia.org/wiki/Isolation_(database_systems)#Isolation_levels). - -#### Aliases - -- `READ UNCOMMITTED`, `READ COMMITTED`, and `REPEATABLE READ` are aliases for `SERIALIZABLE`. - -#### Comparison - -- The CockroachDB `SERIALIZABLE` level is stronger than the ANSI SQL `READ UNCOMMITTED`, `READ COMMITTED`, and `REPEATABLE READ` levels and equivalent to the ANSI SQL `SERIALIZABLE` level. -- The CockroachDB `SNAPSHOT` level (**deprecated**) is stronger than the ANSI SQL `READ UNCOMMITTED` and `READ COMMITTED` levels. - -For more information about the relationship between these levels, see [A Critique of ANSI SQL Isolation Levels](https://arxiv.org/ftp/cs/papers/0701/0701157.pdf). - -## See Also - -- [`BEGIN`](begin-transaction.html) -- [`COMMIT`](commit-transaction.html) -- [`ROLLBACK`](rollback-transaction.html) -- [`SAVEPOINT`](savepoint.html) -- [`RELEASE SAVEPOINT`](release-savepoint.html) -- [`SHOW`](show-vars.html) -- [Retryable function code samples](build-an-app-with-cockroachdb.html) diff --git a/src/current/v2.0/troubleshooting-overview.md b/src/current/v2.0/troubleshooting-overview.md deleted file mode 100644 index 07eb30887ce..00000000000 --- a/src/current/v2.0/troubleshooting-overview.md +++ /dev/null @@ -1,21 +0,0 @@ ---- -title: Troubleshooting Overview -summary: Initial steps to take if you run in to issues with CockroachDB. -toc: false ---- - -If you run into issues with CockroachDB, there are a few initial steps you can always take: - -1. Check your [logs](debug-and-error-logs.html) for errors related to your issue. - - Logs are generated on a per-node basis, so you must either identify the node where the issue occurred or [collect the logs from all active nodes in your cluster](debug-zip.html). - - Alternately, you can [stop](stop-a-node.html) and [restart](start-a-node.html) problematic nodes with the `--logtostderr` flag to print logs to your terminal through `stderr`, letting you see all cluster activities as it occurs. - -2. Check our list of [common errors](common-errors.html) for a solution. - -3. If the problem doesn't match a common error, try the following pages: - - [Troubleshoot Cluster Setup](cluster-setup-troubleshooting.html) helps start your cluster and scale it by adding nodes. - - [Troubleshoot Query Behavior](query-behavior-troubleshooting.html) helps with unexpected query results. - -4. 
If you cannot resolve the issue easily yourself, the following tools can help you get unstuck:
-    - [Support Resources](support-resources.html) identifies ways you can get help with troubleshooting.
-    - [File an Issue](file-an-issue.html) provides details about filing issues that you're unable to resolve.
diff --git a/src/current/v2.0/truncate.md b/src/current/v2.0/truncate.md
deleted file mode 100644
index f4bd7864992..00000000000
--- a/src/current/v2.0/truncate.md
+++ /dev/null
@@ -1,133 +0,0 @@
----
-title: TRUNCATE
-summary: The TRUNCATE statement deletes all rows from specified tables.
-toc: true
----
-
-The `TRUNCATE` [statement](sql-statements.html) deletes all rows from specified tables.
-
-{{site.data.alerts.callout_info}}The TRUNCATE statement removes all rows from a table by dropping the table and recreating a new table with the same name. For large tables, this is much more performant than deleting each of the rows. However, for smaller tables, it's more performant to use a DELETE statement without a WHERE clause.{{site.data.alerts.end}}
-
-
-## Synopsis
-
-
      {% include {{ page.version.version }}/sql/diagrams/truncate.html %}
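As a quick, hypothetical illustration of the guidance in the callout above (both table names are made up):

~~~ sql
> TRUNCATE big_table;       -- drops and recreates the table internally; efficient for large tables
> DELETE FROM small_table;  -- deletes row by row; often faster for small tables
~~~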
      - -## Required Privileges - -The user must have the `DROP` [privilege](privileges.html) on the table. - -## Parameters - -Parameter | Description -----------|------------ -`table_name` | The name of the table to truncate. -`CASCADE` | Truncate all tables with [Foreign Key](foreign-key.html) dependencies on the table being truncated.

      `CASCADE` does not list dependent tables it truncates, so should be used cautiously. -`RESTRICT` | _(Default)_ Do not truncate the table if any other tables have [Foreign Key](foreign-key.html) dependencies on it. - -## Examples - -### Truncate a Table (No Foreign Key Dependencies) - -~~~ sql -> SELECT * FROM t1; -~~~ - -~~~ -+----+------+ -| id | name | -+----+------+ -| 1 | foo | -| 2 | bar | -+----+------+ -(2 rows) -~~~ - -~~~ sql -> TRUNCATE t1; - -> SELECT * FROM t1; -~~~ - -~~~ -+----+------+ -| id | name | -+----+------+ -+----+------+ -(0 rows) -~~~ - -### Truncate a Table and Dependent Tables - -In these examples, the `orders` table has a [Foreign Key](foreign-key.html) relationship to the `customers` table. Therefore, it's only possible to truncate the `customers` table while simultaneously truncating the dependent `orders` table, either using `CASCADE` or explicitly. - -#### Truncate Dependent Tables Using `CASCADE` - -{{site.data.alerts.callout_danger}}CASCADE truncates all dependent tables without listing them, which can lead to inadvertent and difficult-to-recover losses. To avoid potential harm, we recommend truncating tables explicitly in most cases. See Truncate Dependent Tables Explicitly for more details.{{site.data.alerts.end}} - -~~~ sql -> TRUNCATE customers; -~~~ - -~~~ -pq: "customers" is referenced by foreign key from table "orders" -~~~ - -~~~sql -> TRUNCATE customers CASCADE; - -> SELECT * FROM customers; -~~~ - -~~~ -+----+-------+ -| id | email | -+----+-------+ -+----+-------+ -(0 rows) -~~~ - -~~~ sql -> SELECT * FROM orders; -~~~ - -~~~ -+----+----------+------------+ -| id | customer | orderTotal | -+----+----------+------------+ -+----+----------+------------+ -(0 rows) -~~~ - -#### Truncate Dependent Tables Explicitly - -~~~ sql -> TRUNCATE customers, orders; - -> SELECT * FROM customers; -~~~ - -~~~ -+----+-------+ -| id | email | -+----+-------+ -+----+-------+ -(0 rows) -~~~ - -~~~ sql -> SELECT * FROM orders; -~~~ - -~~~ -+----+----------+------------+ -| id | customer | orderTotal | -+----+----------+------------+ -+----+----------+------------+ -(0 rows) -~~~ - -## See Also - -- [`DELETE](delete.html) -- [Foreign Key constraint](foreign-key.html) diff --git a/src/current/v2.0/unique.md b/src/current/v2.0/unique.md deleted file mode 100644 index 1ca7de4b3dd..00000000000 --- a/src/current/v2.0/unique.md +++ /dev/null @@ -1,120 +0,0 @@ ---- -title: Unique Constraint -summary: The Unique constraint specifies that each non-NULL value in the constrained column must be unique. -toc: true ---- - -The Unique [constraint](constraints.html) specifies that each non-*NULL* value in the constrained column must be unique. - - -## Details - -- You can insert *NULL* values into columns with the Unique constraint because *NULL* is the absence of a value, so it is never equal to other *NULL* values and not considered a duplicate value. This means that it's possible to insert rows that appear to be duplicates if one of the values is *NULL*. - - If you need to strictly enforce uniqueness, use the [Not Null constraint](not-null.html) in addition to the Unique constraint. You can also achieve the same behavior through the table's [Primary Key](primary-key.html). - -- Columns with the Unique constraint automatically have an [index](indexes.html) created with the name `
<table name>_<column name(s)>_key`. To avoid having two identical indexes, you should not create indexes that exactly match the Unique constraint's columns and order.

      The Unique constraint depends on the automatically created index, so dropping the index also drops the Unique constraint. -- When using the Unique constraint on multiple columns, the collective values of the columns must be unique. This *does not* mean that each value in each column must be unique, as if you had applied the Unique constraint to each column individually. -- You can define the Unique constraint when [creating a table](#syntax), or you can add it to existing tables through [`ADD CONSTRAINT`](add-constraint.html#add-the-unique-constraint). - -## Syntax - -Unique constraints can be defined at the [table level](#table-level). However, if you only want the constraint to apply to a single column, it can be applied at the [column level](#column-level). - -### Column Level - -
      -{% include {{ page.version.version }}/sql/diagrams/unique_column_level.html %} -
      - -| Parameter | Description | -|-----------|-------------| -| `table_name` | The name of the table you're creating. | -| `column_name` | The name of the constrained column. | -| `column_type` | The constrained column's [data type](data-types.html). | -| `column_constraints` | Any other column-level [constraints](constraints.html) you want to apply to this column. | -| `column_def` | Definitions for any other columns in the table. | -| `table_constraints` | Any table-level [constraints](constraints.html) you want to apply. | - -**Example** - -~~~ sql -> CREATE TABLE warehouses ( - warehouse_id INT PRIMARY KEY NOT NULL, - warehouse_name STRING(35) UNIQUE, - location_id INT - ); -~~~ - -### Table Level - -
      -{% include {{ page.version.version }}/sql/diagrams/unique_table_level.html %} -
      - -| Parameter | Description | -|-----------|-------------| -| `table_name` | The name of the table you're creating. | -| `column_def` | Definitions for any other columns in the table. | -| `name` | The name you want to use for the constraint, which must be unique to its table and follow these [identifier rules](keywords-and-identifiers.html#identifiers). | -| `column_name` | The name of the column you want to constrain.| -| `table_constraints` | Any other table-level [constraints](constraints.html) you want to apply. | - -**Example** - -~~~ sql -> CREATE TABLE logon ( - login_id INT PRIMARY KEY, - customer_id INT, - logon_date TIMESTAMP, - UNIQUE (customer_id, logon_date) - ); -~~~ - -## Usage Example - -~~~ sql -> CREATE TABLE IF NOT EXISTS logon ( - login_id INT PRIMARY KEY, - customer_id INT NOT NULL, - sales_id INT, - UNIQUE (customer_id, sales_id) - ); - -> INSERT INTO logon (login_id, customer_id, sales_id) VALUES (1, 2, 1); - -> INSERT INTO logon (login_id, customer_id, sales_id) VALUES (2, 2, 1); -~~~ -~~~ -duplicate key value (customer_id,sales_id)=(2,1) violates unique constraint "logon_customer_id_sales_id_key" -~~~ - -As mentioned in the [details](#details) above, it is possible when using the Unique constraint alone to insert *NULL* values in a way that causes rows to appear to have rows with duplicate values. - -~~~ sql -> INSERT INTO logon (login_id, customer_id, sales_id) VALUES (3, 2, NULL); - -> INSERT INTO logon (login_id, customer_id, sales_id) VALUES (4, 2, NULL); - -> SELECT customer_id, sales_id FROM logon; -~~~ -~~~ -+-------------+----------+ -| customer_id | sales_id | -+-------------+----------+ -| 2 | 1 | -| 2 | NULL | -| 2 | NULL | -+-------------+----------+ -~~~ - -## See Also - -- [Constraints](constraints.html) -- [`DROP CONSTRAINT`](drop-constraint.html) -- [Check constraint](check.html) -- [Default Value constraint](default-value.html) -- [Foreign Key constraint](foreign-key.html) -- [Not Null constraint](not-null.html) -- [Primary Key constraint](primary-key.html) -- [`SHOW CONSTRAINTS`](show-constraints.html) diff --git a/src/current/v2.0/update.md b/src/current/v2.0/update.md deleted file mode 100644 index a500155108e..00000000000 --- a/src/current/v2.0/update.md +++ /dev/null @@ -1,425 +0,0 @@ ---- -title: UPDATE -summary: The UPDATE statement updates one or more rows in a table. -toc: true ---- - -The `UPDATE` [statement](sql-statements.html) updates rows in a table. - -{{site.data.alerts.callout_danger}}If you update a row that contains a column referenced by a foreign key constraint and has an ON UPDATE action, all of the dependent rows will also be updated.{{site.data.alerts.end}} - - -## Required Privileges - -The user must have the `SELECT` and `UPDATE` [privileges](privileges.html) on the table. - -## Synopsis - -
      -{% include {{ page.version.version }}/sql/diagrams/update.html %} -
      - -
      - -## Parameters - -Parameter | Description -----------|------------ -`common_table_expr` | See [Common Table Expressions](common-table-expressions.html). -`table_name` | The name of the table that contains the rows you want to update. -`AS table_alias_name` | An alias for the table name. When an alias is provided, it completely hides the actual table name. -`column_name` | The name of the column whose values you want to update. -`a_expr` | The new value you want to use, the [aggregate function](functions-and-operators.html#aggregate-functions) you want to perform, or the [scalar expression](scalar-expressions.html) you want to use. -`DEFAULT` | To fill columns with their [default values](default-value.html), use `DEFAULT VALUES` in place of `a_expr`. To fill a specific column with its default value, leave the value out of the `a_expr` or use `DEFAULT` at the appropriate position. -`column_name` | The name of a column to update. -`select_stmt` | A [selection query](selection-queries.html). Each value must match the [data type](data-types.html) of its column on the left side of `=`. -`WHERE a_expr`| `a_expr` must be a [scalar expression](scalar-expressions.html) that returns Boolean values using columns (e.g., ` = `). Update rows that return `TRUE`.

      **Without a `WHERE` clause in your statement, `UPDATE` updates all rows in the table.** -`sort_clause` | An `ORDER BY` clause. See [Ordering Query Results](query-order.html) for more details. -`limit_clause` | A `LIMIT` clause. See [Limiting Query Results](limit-offset.html) for more details. -`RETURNING target_list` | Return values based on rows updated, where `target_list` can be specific column names from the table, `*` for all columns, or computations using [scalar expressions](scalar-expressions.html).

      To return nothing in the response, not even the number of rows updated, use `RETURNING NOTHING`. - -## Examples - -### Update a Single Column in a Single Row - -~~~ sql -> SELECT * FROM accounts; -~~~ -~~~ -+----+----------+----------+ -| id | balance | customer | -+----+----------+----------+ -| 1 | 10000.50 | Ilya | -| 2 | 4000.0 | Julian | -| 3 | 8700.0 | Dario | -| 4 | 3400.0 | Nitin | -+----+----------+----------+ -(4 rows) -~~~ - -~~~ sql -> UPDATE accounts SET balance = 5000.0 WHERE id = 2; - -> SELECT * FROM accounts; -~~~ -~~~ -+----+----------+----------+ -| id | balance | customer | -+----+----------+----------+ -| 1 | 10000.50 | Ilya | -| 2 | 5000.0 | Julian | -| 3 | 8700.0 | Dario | -| 4 | 3400.0 | Nitin | -+----+----------+----------+ -(4 rows) -~~~ - -### Update Multiple Columns in a Single Row - -~~~ sql -> UPDATE accounts SET (balance, customer) = (9000.0, 'Kelly') WHERE id = 2; - -> SELECT * FROM accounts; -~~~ -~~~ -+----+----------+----------+ -| id | balance | customer | -+----+----------+----------+ -| 1 | 10000.50 | Ilya | -| 2 | 9000.0 | Kelly | -| 3 | 8700.0 | Dario | -| 4 | 3400.0 | Nitin | -+----+----------+----------+ -(4 rows) -~~~ - -~~~ sql -> UPDATE accounts SET balance = 6300.0, customer = 'Stanley' WHERE id = 3; - -> SELECT * FROM accounts; -~~~ - -~~~ -+----+----------+----------+ -| id | balance | customer | -+----+----------+----------+ -| 1 | 10000.50 | Ilya | -| 2 | 9000.0 | Kelly | -| 3 | 6300.0 | Stanley | -| 4 | 3400.0 | Nitin | -+----+----------+----------+ -(4 rows) -~~~ - -### Update Using `SELECT` Statement -~~~ sql -> UPDATE accounts SET (balance, customer) = - (SELECT balance, customer FROM accounts WHERE id = 2) - WHERE id = 4; - -> SELECT * FROM accounts; -~~~ - -~~~ -+----+----------+----------+ -| id | balance | customer | -+----+----------+----------+ -| 1 | 10000.50 | Ilya | -| 2 | 9000.0 | Kelly | -| 3 | 6300.0 | Stanley | -| 4 | 9000.0 | Kelly | -+----+----------+----------+ -(4 rows) -~~~ - -### Update with Default Values - -~~~ sql -> UPDATE accounts SET balance = DEFAULT where customer = 'Stanley'; - -> SELECT * FROM accounts; -~~~ -~~~ -+----+----------+----------+ -| id | balance | customer | -+----+----------+----------+ -| 1 | 10000.50 | Ilya | -| 2 | 9000.0 | Kelly | -| 3 | NULL | Stanley | -| 4 | 9000.0 | Kelly | -+----+----------+----------+ -(4 rows) -~~~ - -### Update All Rows - -{{site.data.alerts.callout_danger}}If you do not use the WHERE clause to specify the rows to be updated, the values for all rows will be updated.{{site.data.alerts.end}} - -~~~ sql -> UPDATE accounts SET balance = 5000.0; - -> SELECT * FROM accounts; -~~~ -~~~ -+----+---------+----------+ -| id | balance | customer | -+----+---------+----------+ -| 1 | 5000.0 | Ilya | -| 2 | 5000.0 | Kelly | -| 3 | 5000.0 | Stanley | -| 4 | 5000.0 | Kelly | -+----+---------+----------+ -(4 rows) -~~~ - -### Update and Return Values - -In this example, the `RETURNING` clause returns the `id` value of the row updated. The language-specific versions assume that you have installed the relevant [client drivers](install-client-drivers.html). - -{{site.data.alerts.callout_success}}This use of RETURNING mirrors the behavior of MySQL's last_insert_id() function.{{site.data.alerts.end}} - -{{site.data.alerts.callout_info}}When a driver provides a query() method for statements that return results and an exec() method for statements that do not (e.g., Go), it's likely necessary to use the query() method for UPDATE statements with RETURNING.{{site.data.alerts.end}} - -
      - - - - - -
      - -
      -

      - -~~~ sql -> UPDATE accounts SET balance = DEFAULT WHERE id = 1 RETURNING id; -~~~ - -~~~ -+----+ -| id | -+----+ -| 1 | -+----+ -(1 row) -~~~ - -
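The `RETURNING` list can also name several columns or computed [scalar expressions](scalar-expressions.html), mirroring a `SELECT` target list. A minimal sketch (the arithmetic and alias are illustrative additions, not part of the original example):

~~~ sql
> UPDATE accounts SET balance = balance + 100 WHERE id = 2 RETURNING id, balance AS new_balance;
~~~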
      - -
      -

      - -~~~ python -# Import the driver. -import psycopg2 - -# Connect to the "bank" database. -conn = psycopg2.connect( - database='bank', - user='root', - host='localhost', - port=26257 -) - -# Make each statement commit immediately. -conn.set_session(autocommit=True) - -# Open a cursor to perform database operations. -cur = conn.cursor() - -# Update a row in the "accounts" table -# and return the "id" value. -cur.execute( - 'UPDATE accounts SET balance = DEFAULT WHERE id = 1 RETURNING id' -) - -# Print out the returned value. -rows = cur.fetchall() -print('ID:') -for row in rows: - print([str(cell) for cell in row]) - -# Close the database connection. -cur.close() -conn.close() -~~~ - -The printed value would look like: - -~~~ -ID: -['1'] -~~~ - -
      - -
      -

      - -~~~ ruby -# Import the driver. -require 'pg' - -# Connect to the "bank" database. -conn = PG.connect( - user: 'root', - dbname: 'bank', - host: 'localhost', - port: 26257 -) - -# Update a row in the "accounts" table -# and return the "id" value. -conn.exec( - 'UPDATE accounts SET balance = DEFAULT WHERE id = 1 RETURNING id' -) do |res| - -# Print out the returned value. -puts "ID:" - res.each do |row| - puts row - end -end - -# Close communication with the database. -conn.close() -~~~ - -The printed value would look like: - -~~~ -ID: -{"id"=>"1"} -~~~ - -
      - -
      -

      - -~~~ go -package main - -import ( - "database/sql" - "fmt" - "log" - - _ "github.com/lib/pq" -) - -func main() { - //Connect to the "bank" database. - db, err := sql.Open( - "postgres", - "postgresql://root@localhost:26257/bank?sslmode=disable" - ) - if err != nil { - log.Fatal("error connecting to the database: ", err) - } - - // Update a row in the "accounts" table - // and return the "id" value. - rows, err := db.Query( - "UPDATE accounts SET balance = DEFAULT WHERE id = 1 RETURNING id", - ) - if err != nil { - log.Fatal(err) - } - - // Print out the returned value. - defer rows.Close() - fmt.Println("ID:") - for rows.Next() { - var id int - if err := rows.Scan(&id); err != nil { - log.Fatal(err) - } - fmt.Printf("%d\n", id) - } -} -~~~ - -The printed value would look like: - -~~~ -ID: -1 -~~~ - -
      - -
      -

-
-~~~ js
-var async = require('async');
-
-// Require the driver.
-var pg = require('pg');
-
-// Connect to the "bank" database.
-var config = {
-  user: 'root',
-  host: 'localhost',
-  database: 'bank',
-  port: 26257
-};
-
-pg.connect(config, function (err, client, done) {
-  // Closes communication with the database and exits.
-  var finish = function () {
-    done();
-    process.exit();
-  };
-
-  if (err) {
-    console.error('could not connect to cockroachdb', err);
-    finish();
-  }
-  async.waterfall([
-    function (next) {
-      // Update a row in the "accounts" table
-      // and return the "id" value.
-      client.query(
-        `UPDATE accounts SET balance = DEFAULT WHERE id = 1 RETURNING id`,
-        next
-      );
-    }
-  ],
-  function (err, results) {
-    if (err) {
-      console.error('error updating and selecting from accounts', err);
-      finish();
-    }
-    // Print out the returned value.
-    console.log('ID:');
-    results.rows.forEach(function (row) {
-      console.log(row);
-    });
-
-    finish();
-  });
-});
-~~~
-
-The printed value would look like:
-
-~~~
-ID:
-{ id: '1' }
-~~~
-
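Conversely, if you do not need any response at all, not even the number of rows updated, the parameters table above allows `RETURNING NOTHING`; a minimal sketch:

~~~ sql
> UPDATE accounts SET balance = DEFAULT WHERE id = 1 RETURNING NOTHING;
~~~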
-
-## See Also
-
-- [`DELETE`](delete.html)
-- [`INSERT`](insert.html)
-- [`UPSERT`](upsert.html)
-- [`TRUNCATE`](truncate.html)
-- [`ALTER TABLE`](alter-table.html)
-- [`DROP TABLE`](drop-table.html)
-- [`DROP DATABASE`](drop-database.html)
-- [Other SQL Statements](sql-statements.html)
-- [Limiting Query Results](limit-offset.html)
diff --git a/src/current/v2.0/upgrade-cockroach-version.md b/src/current/v2.0/upgrade-cockroach-version.md
deleted file mode 100644
index ca469cfd5f6..00000000000
--- a/src/current/v2.0/upgrade-cockroach-version.md
+++ /dev/null
@@ -1,232 +0,0 @@
----
-title: Upgrade to CockroachDB v2.0
-summary: Learn how to upgrade your CockroachDB cluster to a new version.
-toc: true
-toc_not_nested: true
----
-
-Because of CockroachDB's [multi-active availability](multi-active-availability.html) design, you can perform a "rolling upgrade" of your CockroachDB cluster. This means that you can upgrade nodes one at a time without interrupting the cluster's overall health and operations.
-
-{{site.data.alerts.callout_info}}
-This page shows you how to upgrade to the latest v2.0 release ({{page.release_info.version}}) from v1.1.x, or from any patch release in the v2.0.x series. To upgrade within the v1.1.x series, see [the v1.1 version of this page](../v1.1/upgrade-cockroach-version.html).
-{{site.data.alerts.end}}
-
-## Step 1. Verify that you can upgrade
-
-When upgrading, you can skip patch releases, **but you cannot skip full releases**. Therefore, if you are upgrading from v1.0.x to v2.0:
-
-1. First [upgrade to v1.1](../v1.1/upgrade-cockroach-version.html). Be sure to complete all the steps, including the [finalization step](../v1.1/upgrade-cockroach-version.html#finalize-the-upgrade) (i.e., `SET CLUSTER SETTING version = '1.1';`).
-
-2. Then return to this page and perform a second rolling upgrade to v2.0.
-
-If you are upgrading from v1.1.x or from any v2.0.x patch release, you do not have to go through intermediate releases; continue to step 2.
-
-## Step 2. Prepare to upgrade
-
-Before starting the upgrade, complete the following steps.
-
-1. Make sure your cluster is behind a [load balancer](recommended-production-settings.html#load-balancing), or your clients are configured to talk to multiple nodes. If your application communicates with a single node, stopping that node to upgrade its CockroachDB binary will cause your application to fail.
-
-2. Verify the cluster's overall health by running the [`cockroach node status`](view-node-details.html) command against any node in the cluster.
-
-    In the response:
-    - If any nodes that should be live are not listed, identify why the nodes are offline and restart them before beginning your upgrade.
-    - Make sure the `build` field shows the same version of CockroachDB for all nodes. If any nodes are behind, upgrade them to the cluster's current version first, and then start this process over.
-    - Make sure `ranges_unavailable` and `ranges_underreplicated` show `0` for all nodes. If there are unavailable or underreplicated ranges in your cluster, performing a rolling upgrade increases the risk that ranges will lose a majority of their replicas and cause cluster unavailability. Therefore, it's important to identify and resolve the cause of range unavailability and underreplication before beginning your upgrade.
-    {{site.data.alerts.callout_success}}
-    Pass the `--ranges` or `--all` flag to include these range details in the response.
-    {{site.data.alerts.end}}
-
-3. 
Capture the cluster's current state by running the [`cockroach debug zip`](debug-zip.html) command against any node in the cluster. If the upgrade does not go according to plan, the captured details will help you and Cockroach Labs troubleshoot issues. - -4. [Back up the cluster](back-up-data.html). If the upgrade does not go according to plan, you can use the data to restore your cluster to its previous state. - -## Step 3. Perform the rolling upgrade - -For each node in your cluster, complete the following steps. - -{{site.data.alerts.callout_success}} -We recommend creating scripts to perform these steps instead of performing them manually. -{{site.data.alerts.end}} - -{{site.data.alerts.callout_danger}} -Upgrade only one node at a time, and wait at least one minute after a node rejoins the cluster to upgrade the next node. Simultaneously upgrading more than one node increases the risk that ranges will lose a majority of their replicas and cause cluster unavailability. -{{site.data.alerts.end}} - -1. Connect to the node. - -2. Terminate the `cockroach` process. - - Without a process manager like `systemd`, use this command: - - {% include copy-clipboard.html %} - ~~~ shell - $ pkill cockroach - ~~~ - - If you are using `systemd` as the process manager, use this command to stop a node without `systemd` restarting it: - - {% include copy-clipboard.html %} - ~~~ shell - $ systemctl stop - ~~~ - - Then verify that the process has stopped: - - {% include copy-clipboard.html %} - ~~~ shell - $ ps aux | grep cockroach - ~~~ - - Alternately, you can check the node's logs for the message `server drained and shutdown completed`. - -3. Download and install the CockroachDB binary you want to use: - -
      - - -
      -

      - -
      - {% include copy-clipboard.html %} - ~~~ shell - $ curl https://binaries.cockroachdb.com/cockroach-{{page.release_info.version}}.darwin-10.9-amd64.tgz - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ tar -xzf cockroach-{{page.release_info.version}}.darwin-10.9-amd64.tgz - ~~~ -
      - -
      - {% include copy-clipboard.html %} - ~~~ shell - $ curl https://binaries.cockroachdb.com/cockroach-{{page.release_info.version}}.linux-amd64.tgz - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ tar -xzf cockroach-{{page.release_info.version}}.linux-amd64.tgz - ~~~ -
      - -4. If you use `cockroach` in your `$PATH`, rename the outdated `cockroach` binary, and then move the new one into its place: - -
      - - -
      -

      - -
      - {% include copy-clipboard.html %} - ~~~ shell - i="$(which cockroach)"; mv "$i" "$i"_old - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ cp -i cockroach-{{page.release_info.version}}.darwin-10.9-amd64/cockroach /usr/local/bin/cockroach - ~~~ -
      - -
      - {% include copy-clipboard.html %} - ~~~ shell - i="$(which cockroach)"; mv "$i" "$i"_old - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ cp -i cockroach-{{page.release_info.version}}.linux-amd64/cockroach /usr/local/bin/cockroach - ~~~ -
      - -5. Start the node to have it rejoin the cluster. - - Without a process manager like `systemd`, use this command: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach start --join=[IP address of any other node] [other flags] - ~~~ - `[other flags]` includes any flags you [use to a start node](start-a-node.html), such as it `--host`. - - If you are using `systemd` as the process manager, run this command to start the node: - - {% include copy-clipboard.html %} - ~~~ shell - $ systemctl start - ~~~ - -6. Verify the node has rejoined the cluster through its output to `stdout` or through the [admin UI](admin-ui-access-and-navigate.html). - -7. If you use `cockroach` in your `$PATH`, you can remove the old binary: - - {% include copy-clipboard.html %} - ~~~ shell - $ rm /usr/local/bin/cockroach_old - ~~~ - - If you leave versioned binaries on your servers, you do not need to do anything. - -8. Wait at least one minute after the node has rejoined the cluster, and then repeat these steps for the next node. - -## Step 4. Monitor the upgraded cluster - -After upgrading all nodes in the cluster, monitor the cluster's stability and performance for at least one day. - -{{site.data.alerts.callout_danger}} -During this phase, avoid using any new v2.0 features. Doing so may prevent you from being able to perform a rolling downgrade to v1.1, if necessary. Also, it is not recommended to run enterprise [`BACKUP`](backup.html) and [`RESTORE`](restore.html) jobs during this phase, as some features like detecting schema changes or ensuring correct target expansion may behave differently in mixed version clusters. -{{site.data.alerts.end}} - -## Step 5. Finalize or revert the upgrade - -Once you have monitored the upgraded cluster for at least one day: - -- If you are satisfied with the new version, complete the steps under [Finalize the upgrade](#finalize-the-upgrade). - -- If you are experiencing problems, follow the steps under [Revert the upgrade](#revert-the-upgrade). - -### Finalize the upgrade - -{{site.data.alerts.callout_info}} -These final steps are required after upgrading from v1.1.x to v2.0. For upgrades within the v2.0.x series, you do not need to take any further action. -{{site.data.alerts.end}} - -1. Start the [`cockroach sql`](use-the-built-in-sql-client.html) shell against any node in the cluster. - -2. Use the `crdb_internal.node_executable_version()` [built-in function](functions-and-operators.html) to check the CockroachDB version running on the node: - - {% include copy-clipboard.html %} - ~~~ sql - > SELECT crdb_internal.node_executable_version(); - ~~~ - - Make sure the version matches your expectations. Since you upgraded each node, this version should be running on all other nodes as well. - -3. Use the same function to finalize the upgrade: - - {% include copy-clipboard.html %} - ~~~ sql - > SET CLUSTER SETTING version = crdb_internal.node_executable_version(); - ~~~ - - This step enables certain performance improvements and bug fixes that were introduced in v2.0. Note, however, that after completing this step, it will no longer be possible to perform a rolling downgrade to v1.1. In the event of a catastrophic failure or corruption due to usage of new features requiring v2.0, the only option is to start a new cluster using the old binary and then restore from one of the backups created prior to finalizing the upgrade. - -### Revert the upgrade - -1. 
Run the [`cockroach debug zip`](debug-zip.html) command against any node in the cluster to capture your cluster's state. - -2. [Reach out for support](support-resources.html) from Cockroach Labs, sharing your debug zip. - -3. If necessary, downgrade the cluster by repeating the [rolling upgrade process](#step-3-perform-the-rolling-upgrade), but this time switching each node back to the previous version. - -## See Also - -- [View Node Details](view-node-details.html) -- [Collect Debug Information](debug-zip.html) -- [View Version Details](view-version-details.html) -- [Release notes for our latest version](../releases/{{page.version.version}}.html) diff --git a/src/current/v2.0/upsert.md b/src/current/v2.0/upsert.md deleted file mode 100644 index 8483e3219a7..00000000000 --- a/src/current/v2.0/upsert.md +++ /dev/null @@ -1,210 +0,0 @@ ---- -title: UPSERT -summary: The UPSERT statement inserts rows when values do not violate uniqueness constraints, and it updates rows when values do violate uniqueness constraints. -toc: true ---- - -The `UPSERT` [statement](sql-statements.html) is short-hand for [`INSERT ON CONFLICT`](insert.html#on-conflict-clause). It inserts rows in cases where specified values do not violate uniqueness constraints, and it updates rows in cases where values do violate uniqueness constraints. - - -## Considerations - -- `UPSERT` considers uniqueness only for [Primary Key](primary-key.html) columns. `INSERT ON CONFLICT` is more flexible and can be used to consider uniqueness for other columns. For more details, see [How `UPSERT` Transforms into `INSERT ON CONFLICT`](#how-upsert-transforms-into-insert-on-conflict) below. - -- When inserting/updating all columns of a table, and the table has no secondary indexes, `UPSERT` will be faster than the equivalent `INSERT ON CONFLICT` statement, as it will write without first reading. This may be particularly useful if you are using a simple SQL table of two columns to [simulate direct KV access](frequently-asked-questions.html#can-i-use-cockroachdb-as-a-key-value-store). - -- A single [multi-row `UPSERT`](#upsert-multiple-rows) statement is faster than multiple single-row `UPSERT` statements. Whenever possible, use multi-row `UPSERT` instead of multiple single-row `UPSERT` statements. - -## Required Privileges - -The user must have the `INSERT` and `UPDATE` [privileges](privileges.html) on the table. - -## Synopsis - -
      -{% include {{ page.version.version }}/sql/diagrams/upsert.html %} -
      - -
      - -## Parameters - -Parameter | Description -----------|------------ -`common_table_expr` | See [Common Table Expressions](common-table-expressions.html). -`table_name` | The name of the table. -`AS table_alias_name` | An alias for the table name. When an alias is provided, it completely hides the actual table name. -`column_name` | The name of a column to populate during the insert. -`select_stmt` | A [selection query](selection-queries.html). Each value must match the [data type](data-types.html) of its column. Also, if column names are listed after `INTO`, values must be in corresponding order; otherwise, they must follow the declared order of the columns in the table. -`DEFAULT VALUES` | To fill all columns with their [default values](default-value.html), use `DEFAULT VALUES` in place of `select_stmt`. To fill a specific column with its default value, leave the value out of the `select_stmt` or use `DEFAULT` at the appropriate position. -`RETURNING target_list` | Return values based on rows inserted, where `target_list` can be specific column names from the table, `*` for all columns, or computations using [scalar expressions](scalar-expressions.html).

      Within a [transaction](transactions.html), use `RETURNING NOTHING` to return nothing in the response, not even the number of rows affected. - -## How `UPSERT` Transforms into `INSERT ON CONFLICT` - -`UPSERT` considers uniqueness only for [primary key](primary-key.html) columns. For example, assuming that columns `a` and `b` are the primary key, the following `UPSERT` and `INSERT ON CONFLICT` statements are equivalent: - -~~~ sql -> UPSERT INTO t (a, b, c) VALUES (1, 2, 3); - -> INSERT INTO t (a, b, c) - VALUES (1, 2, 3) - ON CONFLICT (a, b) - DO UPDATE SET c = excluded.c; -~~~ - -`INSERT ON CONFLICT` is more flexible and can be used to consider uniqueness for columns not in the primary key. For more details, see the [Upsert that Fails (Conflict on Non-Primary Key)](#upsert-that-fails-conflict-on-non-primary-key) example below. - -## Examples - -### Upsert a Row (No Conflict) - -In this example, the `id` column is the primary key. Because the inserted `id` value does not conflict with the `id` value of any existing row, the `UPSERT` statement inserts a new row into the table. - -~~~ sql -> SELECT * FROM accounts; -~~~ -~~~ -+----+----------+ -| id | balance | -+----+----------+ -| 1 | 10000.5 | -| 2 | 20000.75 | -+----+----------+ -~~~ -~~~ sql -> UPSERT INTO accounts (id, balance) VALUES (3, 6325.20); - -> SELECT * FROM accounts; -~~~ -~~~ -+----+----------+ -| id | balance | -+----+----------+ -| 1 | 10000.5 | -| 2 | 20000.75 | -| 3 | 6325.2 | -+----+----------+ -~~~ - -### Upsert Multiple Rows - -In this example, the `UPSERT` statement inserts multiple rows into the table. - -~~~ sql -> SELECT * FROM accounts; -~~~ -~~~ -+----+----------+ -| id | balance | -+----+----------+ -| 1 | 10000.5 | -| 2 | 20000.75 | -| 3 | 6325.2 | -+----+----------+ -~~~ -~~~ sql -> UPSERT INTO accounts (id, balance) VALUES (4, 1970.4), (5, 2532.9), (6, 4473.0); - -> SELECT * FROM accounts; -~~~ -~~~ -+----+----------+ -| id | balance | -+----+----------+ -| 1 | 10000.5 | -| 2 | 20000.75 | -| 3 | 6325.2 | -| 4 | 1970.4 | -| 5 | 2532.9 | -| 6 | 4473.0 | -+----+----------+ -~~~ - -### Upsert that Updates a Row (Conflict on Primary Key) - -In this example, the `id` column is the primary key. Because the inserted `id` value is not unique, the `UPSERT` statement updates the row with the new `balance`. - -~~~ sql -> SELECT * FROM accounts; -~~~ -~~~ -+----+----------+ -| id | balance | -+----+----------+ -| 1 | 10000.5 | -| 2 | 20000.75 | -| 3 | 6325.2 | -| 4 | 1970.4 | -| 5 | 2532.9 | -| 6 | 4473.0 | -+----+----------+ -~~~ -~~~ sql -> UPSERT INTO accounts (id, balance) VALUES (3, 7500.83); - -> SELECT * FROM accounts; -~~~ -~~~ -+----+----------+ -| id | balance | -+----+----------+ -| 1 | 10000.5 | -| 2 | 20000.75 | -| 3 | 7500.83 | -| 4 | 1970.4 | -| 5 | 2532.9 | -| 6 | 4473.0 | -+----+----------+ -~~~ - -### Upsert that Fails (Conflict on Non-Primary Key) - -`UPSERT` will not update rows when the uniquness conflict is on columns not in the primary key. In this example, the `a` column is the primary key, but the `b` column also has the [Unique constraint](unique.html). Because the inserted `b` value is not unique, the `UPSERT` fails. 
- -~~~ sql -> SELECT * FROM unique_test; -~~~ -~~~ -+---+---+ -| a | b | -+---+---+ -| 1 | 1 | -| 2 | 2 | -| 3 | 3 | -+---+---+ -~~~ -~~~ sql -> UPSERT INTO unique_test VALUES (4, 1); -~~~ -~~~ -pq: duplicate key value (b)=(1) violates unique constraint "unique_test_b_key" -~~~ - -In such a case, you would need to use the [`INSERT ON CONFLICT`](insert.html) statement to specify the `b` column as the column with the Unique constraint. - -~~~ sql -> INSERT INTO unique_test VALUES (4, 1) ON CONFLICT (b) DO UPDATE SET a = excluded.a; - -> SELECT * FROM unique_test; -~~~ -~~~ -+---+---+ -| a | b | -+---+---+ -| 2 | 2 | -| 3 | 3 | -| 4 | 1 | -+---+---+ -~~~ - -## See Also - -- [Selection Queries](selection-queries.html) -- [`DELETE`](delete.html) -- [`INSERT`](insert.html) -- [`UPDATE`](update.html) -- [`TRUNCATE`](truncate.html) -- [`ALTER TABLE`](alter-table.html) -- [`DROP TABLE`](drop-table.html) -- [`DROP DATABASE`](drop-database.html) -- [Other SQL Statements](sql-statements.html) diff --git a/src/current/v2.0/use-the-built-in-sql-client.md b/src/current/v2.0/use-the-built-in-sql-client.md deleted file mode 100644 index fd87aceadc9..00000000000 --- a/src/current/v2.0/use-the-built-in-sql-client.md +++ /dev/null @@ -1,697 +0,0 @@ ---- -title: Use the Built-in SQL Client -summary: CockroachDB comes with a built-in client for executing SQL statements from an interactive shell or directly from the command line. -toc: true --- - -CockroachDB comes with a built-in client for executing SQL statements from an interactive shell or directly from the command line. To use this client, run the `cockroach sql` [command](cockroach-commands.html) as described below. - -To exit the interactive shell, use `\q` or `ctrl-d`. - - -## Synopsis - -{% include copy-clipboard.html %} -~~~ shell -# Start the interactive SQL shell: -$ cockroach sql - -# Execute SQL from the command line: -$ cockroach sql --execute="<sql statement>;<sql statement>" --execute="<sql statement>" -$ echo "<sql statement>;<sql statement>" | cockroach sql -$ cockroach sql < file-containing-statements.sql - -# View help: -$ cockroach sql --help -~~~ - -## Flags - -The `sql` command supports the following types of flags: - -- [General Use](#general) -- [Client Connection](#client-connection) -- [Logging](#logging) - -### General - -- To start an interactive SQL shell, run `cockroach sql` with all appropriate connection flags or use just the [`--url` flag](#sql-flag-url), which includes [connection details](connection-parameters.html#connect-using-a-url). -- To execute SQL statements from the command line, use the [`--execute` flag](#sql-flag-execute). - -Flag | Description -----|------------ -`--database`
      `-d` | A database name to use as [current database](sql-name-resolution.html#current-database) in the newly created session. -`--echo-sql` | New in v1.1: Reveal the SQL statements sent implicitly by the command-line utility. For a demonstration, see the [example](#reveal-the-sql-statements-sent-implicitly-by-the-command-line-utility) below.

      This can also be enabled within the interactive SQL shell via the `\set echo` [shell command](#sql-shell-commands). - `--execute`
      `-e` | Execute SQL statements directly from the command line, without opening a shell. This flag can be set multiple times, and each instance can contain one or more statements separated by semi-colons. If an error occurs in any statement, the command exits with a non-zero status code and further statements are not executed. The results of each statement are printed to the standard output (see `--format` for formatting options).

      For a demonstration of this and other ways to execute SQL from the command line, see the [example](#execute-sql-statements-from-the-command-line) below. - `--format` | How to display table rows printed to the standard output. Possible values: `tsv`, `csv`, `pretty`, `raw`, `records`, `sql`, `html`.

      **Default:** `pretty` for [interactive sessions](#interactive-sessions), `tsv` for non-interactive sessions

      Corresponds to the [`display_format`](#sql-option-display-format) SQL shell option for use in interactive sessions. -`--safe-updates` | Changed in v2.0: Disallow potentially unsafe SQL statements, including `DELETE` without a `WHERE` clause, `UPDATE` without a `WHERE` clause, and `ALTER TABLE ... DROP COLUMN`.

      **Default:** `true` for [interactive sessions](#interactive-sessions), `false` otherwise.

      Potentially unsafe SQL statements can also be allowed/disallowed for an entire session via the `sql_safe_updates` [session variable](set-vars.html). - -### Client Connection - -{% include {{ page.version.version }}/sql/connection-parameters-with-url.md %} - -See [Client Connection Parameters](connection-parameters.html) for more details. - -### Logging - -By default, the `sql` command logs errors to `stderr`. - -If you need to troubleshoot this command's behavior, you can change its [logging behavior](debug-and-error-logs.html). - -## SQL Shell Welcome - -When the SQL shell connects (or reconnects) to a CockroachDB node, it prints a welcome text with some tips and CockroachDB version and cluster details: - -~~~ shell -# Welcome to the cockroach SQL interface. -# All statements must be terminated by a semicolon. -# To exit: CTRL + D. -# -# Server version: CCL {{page.release_info.version}} (darwin amd64, built 2017/07/13 11:43:06, go1.8) (same version as client) -# Cluster ID: 7fb9f5b4-a801-4851-92e9-c0db292d03f1 -# -# Enter \? for a brief introduction. -# -> -~~~ - -New in v1.1: The **Version** and **Cluster ID** details are particularly noteworthy: - -- When the client and server versions of CockroachDB are the same, the shell prints the `Server version` followed by `(same version as client)`. -- When the client and server versions are different, the shell prints both the `Client version` and `Server version`. In this case, you may want to [plan an upgrade](upgrade-cockroach-version.html) of older client or server versions. -- Since every CockroachDB cluster has a unique ID, you can use the `Cluster ID` field to verify that your client is always connecting to the correct cluster. - -## Interactive Sessions - -`cockroach sql` distinguishes between an *interactive session* and other sessions that generate terminal output. An interactive session is one where there's a user manually entering queries and looking at the results. This is detected using the following criteria: - -- The `cockroach sql` command is used -- Standard input is a terminal, not redirected -- The [`--execute` flag](#sql-flag-execute) is not used - -When an interactive session is detected, the following options have their defaults changed: - -+ The [`--format` flag](#sql-flag-format) (and its corresponding [`display_format` option](#sql-option-display-format)) default to `pretty`. -+ The [`errexit` option](#sql-option-errexit) defaults to `false`. -+ The [`check_syntax` option](#sql-option-check-syntax) defaults to `true`. -+ The [`smart_prompt` option](#sql-option-smart-prompt) defaults to `true`. - -## SQL Shell Commands - -The following commands can be used within the interactive SQL shell: - -Command | Usage ---------|------------ -`\q`
      `ctrl-d` | Exit the shell.

When no text follows the prompt, `ctrl-c` exits the shell as well; otherwise, `ctrl-c` clears the line. -`\!` | Run an external command and print its results to `stdout`. See the [example](#run-external-commands-from-the-sql-shell) below. -\| | Run the output of an external command as SQL statements. See the [example](#run-external-commands-from-the-sql-shell) below. -`\set <option>`, `\unset <option>` | Enable or disable a client-side shell option, such as `echo` or `display_format`.
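- -For example, to turn on statement echoing for the rest of the session (the same `echo` option used in the [example](#reveal-the-sql-statements-sent-implicitly-by-the-command-line-utility) below): - -{% include copy-clipboard.html %} -~~~ sql -> \set echo -~~~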
- -## Examples - -### Control how table rows are printed to the standard output - -When the standard output is a terminal, `--format` defaults to `pretty`. You can set it to another format; for example, `html` prints rows as an HTML table: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure \ ---format=html \ ---execute="SELECT '🐥' AS chick, '🐢' AS turtle" \ ---user=maxroach \ ---host=12.345.67.89 \ ---port=26257 \ ---database=critterdb -~~~ - -~~~ -<table> -<thead><tr><th>chick</th><th>turtle</th></tr></thead> -<tbody> -<tr><td>🐥</td><td>🐢</td></tr> -</tbody> -</table>
      -~~~ - -When piping output to another command or a file, `--format` defaults to `tsv`: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure \ ---execute="SELECT '🐥' AS chick, '🐢' AS turtle" > out.txt \ ---user=maxroach \ ---host=12.345.67.89 \ ---port=26257 \ ---database=critterdb -~~~ - -{% include copy-clipboard.html %} -~~~ shell -$ cat out.txt -~~~ - -~~~ -1 row -chick turtle -🐥 🐢 -~~~ - -However, you can explicitly set `--format` to another format, for example, `pretty`: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure \ ---format=pretty \ ---execute="SELECT '🐥' AS chick, '🐢' AS turtle" > out.txt \ ---user=maxroach \ ---host=12.345.67.89 \ ---port=26257 \ ---database=critterdb -~~~ - -{% include copy-clipboard.html %} -~~~ shell -$ cat out.txt -~~~ - -~~~ -+-------+--------+ -| chick | turtle | -+-------+--------+ -| 🐥 | 🐢 | -+-------+--------+ -(1 row) -~~~ - -### Make the output of `SHOW` statements selectable - -To make it possible to select from the output of `SHOW` statements, set `--format` to `raw`: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure \ ---format=raw \ ---user=maxroach \ ---host=12.345.67.89 \ ---port=26257 \ ---database=critterdb -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SHOW CREATE TABLE customers; -~~~ - -~~~ -# 2 columns -# row 1 -## 14 -test.customers -## 185 -CREATE TABLE customers ( - id INT NOT NULL, - email STRING NULL, - CONSTRAINT "primary" PRIMARY KEY (id ASC), - UNIQUE INDEX customers_email_key (email ASC), - FAMILY "primary" (id, email) -) -# 1 row -~~~ - -When `--format` is not set to `raw`, you can use the `display_format` [SQL shell option](#sql-shell-options) to change the output format within the interactive session: - -{% include copy-clipboard.html %} -~~~ sql -> \set display_format raw -~~~ - -~~~ -# 2 columns -# row 1 -## 14 -test.customers -## 185 -CREATE TABLE customers ( - id INT NOT NULL, - email STRING NULL, - CONSTRAINT "primary" PRIMARY KEY (id ASC), - UNIQUE INDEX customers_email_key (email ASC), - FAMILY "primary" (id, email) -) -# 1 row -~~~ - -### Execute SQL statements from a file - -In this example, we show and then execute the contents of a file containing SQL statements. - -{% include copy-clipboard.html %} -~~~ shell -$ cat > statements.sql -~~~ - -{% include copy-clipboard.html %} -~~~ -CREATE TABLE roaches (name STRING, country STRING); -INSERT INTO roaches VALUES ('American Cockroach', 'United States'), ('Brownbanded Cockroach', 'United States'); -~~~ - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure \ ---user=maxroach \ ---host=12.345.67.89 \ ---port=26257 \ ---database=critterdb \ -< statements.sql -~~~ - -~~~ -CREATE TABLE -INSERT 2 -~~~ - -### Run external commands from the SQL shell - -In this example, we use `\!` to look at the rows in a CSV file before creating a table and then using `\|` to insert those rows into the table. - -{{site.data.alerts.callout_info}}This example works only if the values in the CSV file are numbers. For values in other formats, use an online CSV-to-SQL converter or make your own import program.{{site.data.alerts.end}} - -{% include copy-clipboard.html %} -~~~ sql -> \! 
cat test.csv -~~~ - -~~~ -12, 13, 14 -10, 20, 30 -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE csv (x INT, y INT, z INT); -~~~ - -{% include copy-clipboard.html %} -~~~ -> \| IFS=","; while read a b c; do echo "insert into csv values ($a, $b, $c);"; done < test.csv; -~~~ - -{% include copy-clipboard.html %} -~~~ -> SELECT * FROM csv; -~~~ - -~~~ -+----+----+----+ -| x | y | z | -+----+----+----+ -| 12 | 13 | 14 | -| 10 | 20 | 30 | -+----+----+----+ -~~~ - -In this example, we create a table and then use `\|` to programmatically insert values. - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE for_loop (x INT); -~~~ - -{% include copy-clipboard.html %} -~~~ -> \| for ((i=0;i<10;++i)); do echo "INSERT INTO for_loop VALUES ($i);"; done -~~~ - -{% include copy-clipboard.html %} -~~~ -> SELECT * FROM for_loop; -~~~ - -~~~ -+---+ -| x | -+---+ -| 0 | -| 1 | -| 2 | -| 3 | -| 4 | -| 5 | -| 6 | -| 7 | -| 8 | -| 9 | -+---+ -~~~ - -### Allow potentially unsafe SQL statements - -The `--safe-updates` flag defaults to `true`. This prevents SQL statements that may have broad, undesired side effects. For example, by default, we cannot use `DELETE` without a `WHERE` clause to delete all rows from a table: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure --execute="SELECT * FROM db1.t1" -~~~ - -~~~ -+----+------+ -| id | name | -+----+------+ -| 1 | a | -| 2 | b | -| 3 | c | -| 4 | d | -| 5 | e | -| 6 | f | -| 7 | g | -| 8 | h | -| 9 | i | -| 10 | j | -+----+------+ -(10 rows) -~~~ - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure --execute="DELETE FROM db1.t1" -~~~ - -~~~ -Error: pq: rejected: DELETE without WHERE clause (sql_safe_updates = true) -Failed running "sql" -~~~ - -However, to allow an "unsafe" statement, you can set `--safe-updates=false`: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure --safe-updates=false --execute="DELETE FROM db1.t1" -~~~ - -~~~ -DELETE 10 -~~~ - -{{site.data.alerts.callout_info}}Potentially unsafe SQL statements can also be allowed/disallowed for an entire session via the sql_safe_updates session variable.{{site.data.alerts.end}} - -### Reveal the SQL statements sent implicitly by the command-line utility - -In this example, we use the `--execute` flag to execute statements from the command line and the `--echo-sql` flag to reveal SQL statements sent implicitly: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure \ ---execute="CREATE TABLE t1 (id INT PRIMARY KEY, name STRING)" \ ---execute="INSERT INTO t1 VALUES (1, 'a'), (2, 'b'), (3, 'c')" \ ---user=maxroach \ ---host=12.345.67.89 \ ---port=26257 \ ---database=db1 \ ---echo-sql -~~~ - -~~~ -# Server version: CockroachDB CCL f8f3c9317 (darwin amd64, built 2017/09/13 15:05:35, go1.8) (same version as client) -# Cluster ID: 847a4ba5-c78a-465a-b1a0-59fae3aab520 -> SET sql_safe_updates = TRUE -> CREATE TABLE t1 (id INT PRIMARY KEY, name STRING) -CREATE TABLE -> INSERT INTO t1 VALUES (1, 'a'), (2, 'b'), (3, 'c') -INSERT 3 -~~~ - -In this example, we start the interactive SQL shell and enable the `echo` shell option to reveal SQL statements sent implicitly: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure \ ---user=maxroach \ ---host=12.345.67.89 \ ---port=26257 \ ---database=db1 -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> \set echo -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO db1.t1 VALUES (4, 'd'), (5, 'e'), (6, 'f'); -~~~ - -~~~ -> 
INSERT INTO db1.t1 VALUES (4, 'd'), (5, 'e'), (6, 'f'); -INSERT 3 - -Time: 2.426534ms - -> SHOW TRANSACTION STATUS -> SHOW DATABASE -~~~ - -## See Also - -- [Client Connection Parameters](connection-parameters.html) -- [Other Cockroach Commands](cockroach-commands.html) -- [SQL Statements](sql-statements.html) -- [Learn CockroachDB SQL](learn-cockroachdb-sql.html) -- [Import Data](import-data.html) diff --git a/src/current/v2.0/uuid.md b/src/current/v2.0/uuid.md deleted file mode 100644 index 8436acaccd3..00000000000 --- a/src/current/v2.0/uuid.md +++ /dev/null @@ -1,119 +0,0 @@ ---- -title: UUID -summary: The UUID data type stores 128-bit Universally Unique Identifiers. -toc: true --- - -New in v1.1: The `UUID` (Universally Unique Identifier) [data type](data-types.html) stores a 128-bit value that is [unique across both space and time](https://www.ietf.org/rfc/rfc4122.txt). - -{{site.data.alerts.callout_success}}To auto-generate unique row IDs, we recommend using UUID with the gen_random_uuid() function as the default value. See the example below for more details.{{site.data.alerts.end}} - - -## Syntax -A `UUID` value can be expressed using the following formats: - -Format | Description -------|------------- -Standard [RFC4122](http://www.ietf.org/rfc/rfc4122.txt)-specified format | Hyphen-separated groups of 8, 4, 4, 4, 12 hexadecimal digits.

      Example: `acde070d-8c4c-4f0d-9d8a-162843c10333` -With braces | The standard [RFC4122](http://www.ietf.org/rfc/rfc4122.txt)-specified format with braces.

      Example: `{acde070d-8c4c-4f0d-9d8a-162843c10333}` -As `BYTES` | `UUID` value specified as bytes.

      Example: `b'kafef00ddeadbeed'` -`UUID` used as a URN | `UUID` can be used as a Uniform Resource Name (URN). In that case, the format is [specified](https://www.ietf.org/rfc/rfc2141.txt) as "urn:uuid:" followed by standard [RFC4122](http://www.ietf.org/rfc/rfc4122.txt)-specified format.

Example: `urn:uuid:63616665-6630-3064-6465-616462656564` - -## Size -A `UUID` value is 128 bits in width, but the total storage size is likely to be larger due to CockroachDB metadata. - -## Examples - -### Create a table with manually-entered `UUID` values - -#### Create a table with `UUID` in standard [RFC4122](http://www.ietf.org/rfc/rfc4122.txt)-specified format - -~~~ sql -> CREATE TABLE v (token uuid); - -> INSERT INTO v VALUES ('63616665-6630-3064-6465-616462656562'); - -> SELECT * FROM v; -~~~ - -~~~ -+--------------------------------------+ -| token | -+--------------------------------------+ -| 63616665-6630-3064-6465-616462656562 | -+--------------------------------------+ -(1 row) -~~~ - -#### Insert a `UUID` in standard [RFC4122](http://www.ietf.org/rfc/rfc4122.txt)-specified format with braces - -~~~ sql -> INSERT INTO v VALUES ('{63616665-6630-3064-6465-616462656563}'); - -> SELECT * FROM v; -~~~ - -~~~ -+--------------------------------------+ -| token | -+--------------------------------------+ -| 63616665-6630-3064-6465-616462656562 | -| 63616665-6630-3064-6465-616462656563 | -+--------------------------------------+ -(2 rows) -~~~ - -#### Insert a `UUID` in byte format - -~~~ sql -> INSERT INTO v VALUES (b'kafef00ddeadbeed'); - -> SELECT * FROM v; -~~~ - -~~~ -+--------------------------------------+ -| token | -+--------------------------------------+ -| 63616665-6630-3064-6465-616462656562 | -| 63616665-6630-3064-6465-616462656563 | -| 6b616665-6630-3064-6465-616462656564 | -+--------------------------------------+ -(3 rows) -~~~ - -#### Insert a `UUID` used as a URN - -~~~ sql -> INSERT INTO v VALUES ('urn:uuid:63616665-6630-3064-6465-616462656564'); - -> SELECT * FROM v; -~~~ - -~~~ -+--------------------------------------+ -| token | -+--------------------------------------+ -| 63616665-6630-3064-6465-616462656562 | -| 63616665-6630-3064-6465-616462656563 | -| 6b616665-6630-3064-6465-616462656564 | -| 63616665-6630-3064-6465-616462656564 | -+--------------------------------------+ -(4 rows) -~~~ - -### Create a table with auto-generated unique row IDs - -{% include {{ page.version.version }}/faq/auto-generate-unique-ids.html %} - -## Supported Casting & Conversion - -`UUID` values can be [cast](data-types.html#data-type-conversions-casts) to the following data type: - -Type | Details -----|-------- -`BYTES` | Requires supported [`BYTES`](bytes.html) string format, e.g., `b'\141\061\142\062\143\063'`. - -## See Also - -[Data Types](data-types.html) diff --git a/src/current/v2.0/validate-constraint.md b/src/current/v2.0/validate-constraint.md deleted file mode 100644 index 70b29e1f856..00000000000 --- a/src/current/v2.0/validate-constraint.md +++ /dev/null @@ -1,49 +0,0 @@ ---- -title: VALIDATE CONSTRAINT -summary: Use the VALIDATE CONSTRAINT statement to check whether values in a column match a constraint on the column. -toc: true --- - -The `VALIDATE CONSTRAINT` [statement](sql-statements.html) is part of `ALTER TABLE` and checks whether values in a column match a [constraint](constraints.html) on the column. - -This statement is especially useful after applying a constraint to an existing column via [`ADD CONSTRAINT`](add-constraint.html). In this case, `VALIDATE CONSTRAINT` can be used to find values already in the column that do not match the constraint. - - -## Required Privileges - -The user must have the `CREATE` [privilege](privileges.html) on the table. - -## Synopsis - -
      -{% include {{ page.version.version }}/sql/diagrams/validate_constraint.html %} -
- -## Parameters - -| Parameter | Description | -|-------------------+-----------------------------------------------------------------------------| -| `table_name` | The name of the table that contains the constraint you'd like to validate. | -| `constraint_name` | The name of the constraint on `table_name` you'd like to validate. | - -## Examples - -In [`ADD CONSTRAINT`](add-constraint.html), we [added a foreign key constraint](add-constraint.html#add-the-foreign-key-constraint-with-cascade) like so: - -~~~ sql -ALTER TABLE orders ADD CONSTRAINT customer_fk FOREIGN KEY (customer_id) REFERENCES customers (id) ON DELETE CASCADE; -~~~ - -To ensure that the data added to the `orders` table prior to the creation of the `customer_fk` constraint conforms to that constraint, run the following: - -~~~ sql -ALTER TABLE orders VALIDATE CONSTRAINT customer_fk; -~~~ - -{{site.data.alerts.callout_info}}Constraints defined within a CREATE TABLE statement are considered validated, because an empty table trivially meets its constraints.{{site.data.alerts.end}} - -## See Also - -- [Constraints](constraints.html) -- [`ADD CONSTRAINT`](add-constraint.html) -- [`CREATE TABLE`](create-table.html) diff --git a/src/current/v2.0/view-node-details.md b/src/current/v2.0/view-node-details.md deleted file mode 100644 index 094adc78e9d..00000000000 --- a/src/current/v2.0/view-node-details.md +++ /dev/null @@ -1,222 +0,0 @@ ---- -title: View Node Details -summary: To view details for each node in the cluster, use the cockroach node command with the appropriate subcommands and flags. -toc: true --- - -To view details for each node in the cluster, use the `cockroach node` [command](cockroach-commands.html) with the appropriate subcommands and flags. - -New in v1.1: The `cockroach node` command is also used in the process of decommissioning nodes for permanent removal. See [Remove Nodes](remove-nodes.html) for more details. - - -## Subcommands - -Subcommand | Usage -----------|------ -`ls` | List the ID of each node in the cluster, excluding those that have been decommissioned and are offline. -`status` | View the status of one or all nodes, excluding nodes that have been decommissioned and taken offline. Depending on flags used, this can include details about range/replicas, disk usage, and decommissioning progress. -`decommission` | New in v1.1: Decommission nodes for permanent removal. See [Remove Nodes](remove-nodes.html) for more details. -`recommission` | New in v1.1: Recommission nodes that were accidentally decommissioned. See [Recommission Nodes](remove-nodes.html#recommission-nodes) for more details. 
- -## Synopsis - -~~~ shell -# List the IDs of active and inactive nodes: -$ cockroach node ls - -# Show status details for active and inactive nodes: -$ cockroach node status - -# Show status and range/replica details for active and inactive nodes: -$ cockroach node status --ranges - -# Show status and disk usage details for active and inactive nodes: -$ cockroach node status --stats - -# Show status and decommissioning details for active and inactive nodes: -$ cockroach node status --decommission - -# Show complete status details for active and inactive nodes: -$ cockroach node status --all - -# Show status details for a specific node: -$ cockroach node status <node id> - -# Decommission nodes: -$ cockroach node decommission <node ids> - -# Recommission nodes: -$ cockroach node recommission <node ids> - -# View help: -$ cockroach node --help -$ cockroach node ls --help -$ cockroach node status --help -$ cockroach node decommission --help -$ cockroach node recommission --help -~~~ - -## Flags - -All `node` subcommands support the following [general-use](#general) and [logging](#logging) flags. - -### General - -Flag | Description -----|------------ -`--format` | How to display table rows printed to the standard output. Possible values: `tsv`, `csv`, `pretty`, `records`, `sql`, `html`.

      **Default:** `tsv` - -The `node ls` subcommand also supports the following general flags: - -Flag | Description ------|------------ -`--timeout` | New in v2.0: Set the duration of time that the subcommand is allowed to run before it returns an error and prints partial information. The timeout is specified with a suffix of `s` for seconds, `m` for minutes, and `h` for hours. If this flag is not set, the subcommand may hang. - -The `node status` subcommand also supports the following general flags: - -Flag | Description ------|------------ -`--all` | Show all node details. -`--decommission` | Show node decommissioning details. -`--ranges` | Show node details for ranges and replicas. -`--stats` | Show node disk usage details. -`--timeout` | New in v2.0: Set the duration of time that the subcommand is allowed to run before it returns an error and prints partial information. The timeout is specified with a suffix of `s` for seconds, `m` for minutes, and `h` for hours. If this flag is not set, the subcommand may hang. - -The `node decommission` subcommand also supports the following general flag: - -Flag | Description ------|------------ -`--wait` | When to return to the client. Possible values: `all`, `none`.

      If `all`, the command returns to the client only after all specified nodes are fully decommissioned. If any specified nodes are offline, the command will not return to the client until those nodes are back online.

      If `none`, the command does not wait for decommissioning to finish; it returns to the client after starting the decommissioning process on all specified nodes that are online. Any specified nodes that are offline will automatically be marked as decommissioned; if they come back online, the cluster will recognize this status and will not rebalance data to the nodes.

      **Default:** `all` - -### Client Connection - -{% include {{ page.version.version }}/sql/connection-parameters-with-url.md %} - -See [Client Connection Parameters](connection-parameters.html) for more details. - -### Logging - -By default, the `node` command logs errors to `stderr`. - -If you need to troubleshoot this command's behavior, you can change its [logging behavior](debug-and-error-logs.html). - -## Response - -The `cockroach node` subcommands return the following fields for each node. - -### `node ls` - -Field | Description -------|------------ -`id` | The ID of the node. - -### `node status` - -Field | Description -------|------------ -`id` | The ID of the node.

      **Required flag:** None -`address` | The address of the node.

      **Required flag:** None -`build` | The version of CockroachDB running on the node. If the binary was built from source, this will be the SHA hash of the commit used.

      **Required flag:** None -`updated_at` | The date and time when the node last recorded the information displayed in this command's output. When healthy, a new status should be recorded every 10 seconds or so, but when unhealthy this command's stats may be much older.

      **Required flag:** None -`started_at` | The date and time when the node was started.

      **Required flag:** None -`replicas_leaders` | The number of range replicas on the node that are the Raft leader for their range. See `replicas_leaseholders` below for more details.

      **Required flag:** `--ranges` or `--all` -`replicas_leaseholders` | The number of range replicas on the node that are the leaseholder for their range. A "leaseholder" replica handles all read requests for a range and directs write requests to the range's Raft leader (usually the same replica as the leaseholder).

      **Required flag:** `--ranges` or `--all` -`ranges` | The number of ranges that have replicas on the node.

      **Required flag:** `--ranges` or `--all` -`ranges_unavailable` | The number of unavailable ranges that have replicas on the node.

      **Required flag:** `--ranges` or `--all` -`ranges_underreplicated` | The number of underreplicated ranges that have replicas on the node.

      **Required flag:** `--ranges` or `--all` -`live_bytes` | The amount of live data used by both applications and the CockroachDB system. This excludes historical and deleted data.

      **Required flag:** `--stats` or `--all` -`key_bytes` | The amount of live and non-live data from keys in the key-value storage layer. This does not include data used by the CockroachDB system.

      **Required flag:** `--stats` or `--all` -`value_bytes` | The amount of live and non-live data from values in the key-value storage layer. This does not include data used by the CockroachDB system.

      **Required flag:** `--stats` or `--all` -`intent_bytes` | The amount of non-live data associated with uncommitted (or recently-committed) transactions.

      **Required flag:** `--stats` or `--all` -`system_bytes` | The amount of data used just by the CockroachDB system.

      **Required flag:** `--stats` or `--all` -`is_live` | If `true`, the node is currently live.

      **Required flag:** None -`gossiped_replicas` | The number of replicas on the node that are active members of a range. After decommissioning, this should be 0.

      **Required flag:** `--decommission` or `--all` -`is_decommissioning` | If `true`, the node is marked for decommissioning. See [Remove Nodes](remove-nodes.html) for more details.

      **Required flag:** `--decommission` or `--all` -`is_draining` | If `true`, the range replicas and range leases are being moved off the node. This happens when a live node is being decommissioned. See [Remove Nodes](remove-nodes.html) for more details.

      **Required flag:** `--decommission` or `--all` - -### `node decommission` - -Field | Description -------|------------ -`id` | The ID of the node. -`is_live` | If `true`, the node is live. -`gossiped_replicas` | The number of replicas on the node that are active members of a range. After decommissioning, this should be 0. -`is_decommissioning` | If `true`, the node is marked for decommissioning. See [Remove Nodes](remove-nodes.html) for more details. -`is_draining` | If `true`, the range replicas and range leases are being moved off the node. This happens when a live node is being decommissioned. See [Remove Nodes](remove-nodes.html) for more details. - -### `node recommission` - -Field | Description -------|------------ -`id` | The ID of the node. -`is_live` | If `true`, the node is live. -`gossiped_replicas` | The number of replicas on the node that are active members of a range. After decommissioning, this should be 0. -`is_decommissioning` | If `true`, the node is marked for decommissioning. See [Remove Nodes](remove-nodes.html) for more details. -`is_draining` | If `true`, the range replicas and range leases are being moved off the node. This happens when a live node is being decommissioned. See [Remove Nodes](remove-nodes.html) for more details. - -## Examples - -### List node IDs - -~~~ shell -$ cockroach node ls --insecure -~~~ - -~~~ -+----+ -| id | -+----+ -| 1 | -| 2 | -| 3 | -| 4 | -| 5 | -+----+ -~~~ - -### Show the status of a single node - -~~~ shell -$ cockroach node status 1 --insecure -~~~ - -~~~ -+----+-----------------------+---------+---------------------+---------------------+---------+ -| id | address | build | updated_at | started_at | is_live | -+----+-----------------------+---------+---------------------+---------------------+---------+ -| 1 | 165.227.60.76:26257 | 91a299d | 2017-09-07 18:16:03 | 2017-09-07 16:30:13 | true | -+----+-----------------------+---------+---------------------+---------------------+---------+ -(1 row) -~~~ - -### Show the status of all nodes - -~~~ shell -$ cockroach node status --insecure -~~~ - -~~~ -+----+-----------------------+---------+---------------------+---------------------+---------+ -| id | address | build | updated_at | started_at | is_live | -+----+-----------------------+---------+---------------------+---------------------+---------+ -| 1 | 165.227.60.76:26257 | 91a299d | 2017-09-07 18:16:03 | 2017-09-07 16:30:13 | true | -| 2 | 192.241.239.201:26257 | 91a299d | 2017-09-07 18:16:05 | 2017-09-07 16:30:45 | true | -| 3 | 67.207.91.36:26257 | 91a299d | 2017-09-07 18:16:06 | 2017-09-07 16:31:06 | true | -| 4 | 138.197.12.74:26257 | 91a299d | 2017-09-07 18:16:03 | 2017-09-07 16:44:23 | true | -| 5 | 174.138.50.192:26257 | 91a299d | 2017-09-07 18:10:07 | 2017-09-07 17:12:57 | false | -+----+-----------------------+---------+---------------------+---------------------+---------+ -(5 rows) -~~~ - -### Decommission nodes - -See [Remove Nodes](remove-nodes.html) - -### Recommission nodes - -See [Recommission Nodes](remove-nodes.html#recommission-nodes) - -## See Also - -- [Other Cockroach Commands](cockroach-commands.html) -- [Remove Nodes](remove-nodes.html) diff --git a/src/current/v2.0/view-version-details.md b/src/current/v2.0/view-version-details.md deleted file mode 100644 index c126706b8bb..00000000000 --- a/src/current/v2.0/view-version-details.md +++ /dev/null @@ -1,41 +0,0 @@ ---- -title: View Version Details -summary: To view version details for a specific cockroach binary, run the cockroach version command. 
-toc: false ---- - -To view version details for a specific `cockroach` binary, run the `cockroach version` [command](cockroach-commands.html): - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach version -~~~ - -~~~ -Build Tag: {{page.release_info.version}} -Build Time: {{page.release_info.build_time}} -Distribution: CCL -Platform: darwin amd64 -Go Version: go1.8.3 -C Compiler: 4.2.1 Compatible Clang 3.8.0 (tags/RELEASE_380/final) -Build SHA-1: 5b757262d33d814bda1deb2af20161a1f7749df3 -Build Type: release -~~~ - -The `cockroach version` command outputs the following fields: - -Field | Description -------|------------ -`Build Tag` | The CockroachDB version. -`Build Time` | The date and time when the binary was built. -`Distribution` | The scope of the binary. If `CCL`, the binary contains open-source and enterprise functionality covered by the CockroachDB Community License. If `OSS`, the binary contains only open-source functionality.

To obtain a pure open-source binary, you must [build from source](install-cockroachdb.html) using the `make buildoss` command. -`Platform` | The platform that the binary can run on. -`Go Version` | The version of Go in which the source code is written. -`C Compiler` | The C compiler used to build the binary. -`Build SHA-1` | The SHA-1 hash of the commit used to build the binary. -`Build Type` | The type of release. If `release`, `release-gnu`, or `release-musl`, the binary is for a [production release](../releases/#production-releases). If `development`, the binary is for a [testing release](../releases/#testing-releases). - -## See Also - -- [Install CockroachDB](install-cockroachdb.html) -- [Other Cockroach Commands](cockroach-commands.html) diff --git a/src/current/v2.0/views.md b/src/current/v2.0/views.md deleted file mode 100644 index 7e7316c82ec..00000000000 --- a/src/current/v2.0/views.md +++ /dev/null @@ -1,350 +0,0 @@ ---- -title: Views -summary: -toc: true --- - -A view is a stored [selection query](selection-queries.html) and provides a shorthand name for it. CockroachDB's views are **non-materialized**: they do not store the results of the underlying queries. Instead, the underlying query is executed anew every time the view is used. - - -## Why Use Views? - -There are various reasons to use views, including: - -- [Hide query complexity](#hide-query-complexity) -- [Limit access to underlying data](#limit-access-to-underlying-data) - -### Hide query complexity - -When you have a complex query that, for example, joins several tables or performs complex calculations, you can store the query as a view and then select from the view as you would from a standard table. - -#### Example - -Let's say you're using our [sample `startrek` database](generate-cockroachdb-resources.html#generate-example-data), which contains two tables, `episodes` and `quotes`. There's a foreign key constraint between the `episodes.id` column and the `quotes.episode` column. To count the number of famous quotes per season, you could run the following join: - -~~~ sql -> SELECT startrek.episodes.season, count(*) - FROM startrek.quotes - JOIN startrek.episodes - ON startrek.quotes.episode = startrek.episodes.id - GROUP BY startrek.episodes.season; -~~~ - -~~~ -+--------+----------+ -| season | count(*) | -+--------+----------+ -| 2 | 76 | -| 3 | 46 | -| 1 | 78 | -+--------+----------+ -(3 rows) -~~~ - -Alternatively, to make it much easier to run this complex query, you could create a view: - -~~~ sql -> CREATE VIEW startrek.quotes_per_season (season, quotes) - AS SELECT startrek.episodes.season, count(*) - FROM startrek.quotes - JOIN startrek.episodes - ON startrek.quotes.episode = startrek.episodes.id - GROUP BY startrek.episodes.season; -~~~ - -~~~ -CREATE VIEW -~~~ - -Then, executing the query is as easy as `SELECT`ing from the view: - -~~~ sql -> SELECT * FROM startrek.quotes_per_season; -~~~ - -~~~ -+--------+--------+ -| season | quotes | -+--------+--------+ -| 2 | 76 | -| 3 | 46 | -| 1 | 78 | -+--------+--------+ -(3 rows) -~~~ - -### Limit access to underlying data - -When you do not want to grant a user access to all the data in one or more standard tables, you can create a view that contains only the columns and/or rows that the user should have access to, and then grant the user permissions on the view. 
- -#### Example - -Let's say you have a `bank` database containing an `accounts` table: - -~~~ sql -> SELECT * FROM bank.accounts; -~~~ - -~~~ -+----+----------+---------+-----------------+ -| id | type | balance | email | -+----+----------+---------+-----------------+ -| 1 | checking | 1000 | max@roach.com | -| 2 | savings | 10000 | max@roach.com | -| 3 | checking | 15000 | betsy@roach.com | -| 4 | checking | 5000 | lilly@roach.com | -| 5 | savings | 50000 | ben@roach.com | -+----+----------+---------+-----------------+ -(5 rows) -~~~ - -You want a particular user, `bob`, to be able to see the types of accounts each user has without seeing the balance in each account, so you create a view to expose just the `type` and `email` columns: - -~~~ sql -> CREATE VIEW bank.user_accounts - AS SELECT type, email - FROM bank.accounts; -~~~ - -~~~ -CREATE VIEW -~~~ - -You then make sure `bob` does not have privileges on the underlying `bank.accounts` table: - -~~~ sql -> SHOW GRANTS ON bank.accounts; -~~~ - -~~~ -+----------+------+------------+ -| Table | User | Privileges | -+----------+------+------------+ -| accounts | root | ALL | -| accounts | toti | SELECT | -+----------+------+------------+ -(2 rows) -~~~ - -Finally, you grant `bob` privileges on the `bank.user_accounts` view: - -~~~ sql -> GRANT SELECT ON bank.user_accounts TO bob; -~~~ - -Now, `bob` will get a permissions error when trying to access the underlying `bank.accounts` table but will be allowed to query the `bank.user_accounts` view: - -~~~ sql -> SELECT * FROM bank.accounts; -~~~ - -~~~ -pq: user bob does not have SELECT privilege on table accounts -~~~ - -~~~ sql -> SELECT * FROM bank.user_accounts; -~~~ - -~~~ -+----------+-----------------+ -| type | email | -+----------+-----------------+ -| checking | max@roach.com | -| savings | max@roach.com | -| checking | betsy@roach.com | -| checking | lilly@roach.com | -| savings | ben@roach.com | -+----------+-----------------+ -(5 rows) -~~~ - -## How Views Work - -### Creating Views - -To create a view, use the [`CREATE VIEW`](create-view.html) statement: - -~~~ sql -> CREATE VIEW bank.user_accounts - AS SELECT type, email - FROM bank.accounts; -~~~ - -~~~ -CREATE VIEW -~~~ - -{{site.data.alerts.callout_info}}Any selection query is valid as operand to CREATE VIEW, not just simple SELECT clauses.{{site.data.alerts.end}} - -### Listing Views - -Once created, views are listed alongside regular tables in the database: - -~~~ sql -> SHOW TABLES FROM bank; -~~~ - -~~~ -+---------------+ -| Table | -+---------------+ -| accounts | -| user_accounts | -+---------------+ -(2 rows) -~~~ - -To list just views, you can query the `views` table in the [Information Schema](information-schema.html): - -~~~ sql -> SELECT * FROM bank.information_schema.views; -> SELECT * FROM startrek.information_schema.views; -~~~ - -~~~ -+---------------+-------------------+----------------------+---------------------------------------------+--------------+--------------+--------------------+----------------------+----------------------+----------------------------+ -| table_catalog | table_schema | table_name | view_definition | check_option | is_updatable | is_insertable_into | is_trigger_updatable | is_trigger_deletable | is_trigger_insertable_into | -+---------------+-------------------+----------------------+---------------------------------------------+--------------+--------------+--------------------+----------------------+----------------------+----------------------------+ -| bank | public | user_accounts 
| SELECT type, email FROM bank.accounts | NULL | NULL | NULL | NULL | NULL | NULL | -+---------------+-------------------+----------------------+---------------------------------------------+--------------+--------------+--------------------+----------------------+----------------------+----------------------------+ -(1 row) -+---------------+-------------------+----------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------+--------------+--------------------+----------------------+----------------------+----------------------------+ -| table_catalog | table_schema | table_name | view_definition | check_option | is_updatable | is_insertable_into | is_trigger_updatable | is_trigger_deletable | is_trigger_insertable_into | -+---------------+-------------------+----------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------+--------------+--------------------+----------------------+----------------------+----------------------------+ -| startrek | public | quotes_per_season | SELECT startrek.episodes.season, count(*) FROM startrek.quotes JOIN startrek.episodes ON startrek.quotes.episode = startrek.episodes.id GROUP BY startrek.episodes.season | NULL | NULL | NULL | NULL | NULL | NULL | -+---------------+-------------------+----------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------+--------------+--------------------+----------------------+----------------------+----------------------------+ -(1 row) -~~~ - -### Querying Views - -To query a view, target it with a [table -expression](table-expressions.html#table-or-view-names), for example -using a [`SELECT` clause](select-clause.html), just as you would with -a stored table: - -~~~ sql -> SELECT * FROM bank.user_accounts; -~~~ - -~~~ -+----------+-----------------+ -| type | email | -+----------+-----------------+ -| checking | max@roach.com | -| savings | max@roach.com | -| checking | betsy@roach.com | -| checking | lilly@roach.com | -| savings | ben@roach.com | -+----------+-----------------+ -(5 rows) -~~~ - -`SELECT`ing a view executes the view's stored `SELECT` statement, which returns the relevant data from the underlying table(s). 
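- -Because a view is queried like a table, you can also filter, order, or join it. A minimal sketch using the `user_accounts` view above (the result follows from the sample data shown earlier): - -~~~ sql -> SELECT * FROM bank.user_accounts WHERE type = 'savings'; -~~~ - -~~~ -+---------+---------------+ -| type | email | -+---------+---------------+ -| savings | max@roach.com | -| savings | ben@roach.com | -+---------+---------------+ -(2 rows) -~~~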
To inspect the `SELECT` statement executed by the view, use the [`SHOW CREATE VIEW`](show-create-view.html) statement: - -~~~ sql -> SHOW CREATE VIEW bank.user_accounts; -~~~ - -~~~ -+--------------------+---------------------------------------------------------------------------+ -| View | CreateView | -+--------------------+---------------------------------------------------------------------------+ -| bank.user_accounts | CREATE VIEW "bank.user_accounts" AS SELECT type, email FROM bank.accounts | -+--------------------+---------------------------------------------------------------------------+ -(1 row) -~~~ - -You can also inspect the `SELECT` statement executed by a view by querying the `views` table in the [Information Schema](information-schema.html): - -~~~ sql -> SELECT view_definition FROM bank.information_schema.views WHERE table_name = 'user_accounts'; -~~~ - -~~~ -+----------------------------------------+ -| view_definition | -+----------------------------------------+ -| SELECT type, email FROM bank.accounts | -+----------------------------------------+ -(1 row) -~~~ - -### View Dependencies - -A view depends on the objects targeted by its underlying query. Attempting to rename an object referenced in a view's stored query therefore results in an error: - -~~~ sql -> ALTER TABLE bank.accounts RENAME TO bank.accts; -~~~ - -~~~ -pq: cannot rename table "bank.accounts" because view "user_accounts" depends on it -~~~ - -Likewise, attempting to drop an object referenced in a view's stored query results in an error: - -~~~ sql -> DROP TABLE bank.accounts; -~~~ - -~~~ -pq: cannot drop table "accounts" because view "user_accounts" depends on it -~~~ - -~~~ sql -> ALTER TABLE bank.accounts DROP COLUMN email; -~~~ - -~~~ -pq: cannot drop column email because view "bank.user_accounts" depends on it -~~~ - -There is an exception to the rule above, however: when [dropping a table](drop-table.html) or [dropping a view](drop-view.html), you can use the `CASCADE` keyword to drop all dependent objects as well: - -~~~ sql -> DROP TABLE bank.accounts CASCADE; -~~~ - -~~~ -DROP TABLE -~~~ - -{{site.data.alerts.callout_danger}}CASCADE drops all dependent objects without listing them, which can lead to inadvertent and difficult-to-recover losses. To avoid potential harm, we recommend dropping objects individually in most cases.{{site.data.alerts.end}} - -### Renaming Views - -To rename a view, use the [`ALTER VIEW`](alter-view.html) statement: - -~~~ sql -> ALTER VIEW bank.user_accounts RENAME TO bank.user_accts; -~~~ - -~~~ -RENAME VIEW -~~~ - -It is not possible to change the stored query executed by the view. Instead, you must drop the existing view and create a new view. - -### Removing Views - -To remove a view, use the [`DROP VIEW`](drop-view.html) statement: - -~~~ sql -> DROP VIEW bank.user_accounts; -~~~ - -~~~ -DROP VIEW -~~~ - -## See Also - -- [Selection Queries](selection-queries.html) -- [Simple `SELECT` Clauses](select-clause.html) -- [`CREATE VIEW`](create-view.html) -- [`SHOW CREATE VIEW`](show-create-view.html) -- [`GRANT`](grant.html) -- [`ALTER VIEW`](alter-view.html) -- [`DROP VIEW`](drop-view.html) diff --git a/src/current/v2.0/window-functions.md b/src/current/v2.0/window-functions.md deleted file mode 100644 index 6bf53c6ca10..00000000000 --- a/src/current/v2.0/window-functions.md +++ /dev/null @@ -1,9 +0,0 @@ ---- -title: Window Functions -summary: A window function performs a calculation across a set of table rows that are somehow related to the current row. 
-toc: false ---- - -CockroachDB supports the application of an aggregate or window function over the subset ("window") of the rows selected by a query. - -Docs on this feature are coming soon. In the meantime, see the [PostgreSQL documentation](https://www.postgresql.org/docs/9.6/static/tutorial-window.html) for an introduction to this topic.
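- -As a minimal sketch of the idea (assuming the `bank.accounts` table used elsewhere in these docs, and following the PostgreSQL semantics linked above), a window function computes an aggregate across each row's window without collapsing rows the way `GROUP BY` does: - -~~~ sql -> SELECT type, email, balance, -         avg(balance) OVER (PARTITION BY type) AS avg_type_balance -  FROM bank.accounts; -~~~ - -Each row keeps its own `balance`, while `avg_type_balance` reports the average balance across all rows that share the row's `type`.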