Commit 94c4d43

committed
Data Lake wording improvements
1 parent e3e9f99 commit 94c4d43

6 files changed: +10 −13 lines changed


docs/quix-cloud/managed-services/blob-storage.md

Lines changed: 2 additions & 2 deletions
@@ -16,15 +16,15 @@ Connect your cluster to a bucket/container so Quix can enable **Quix Lake** or a
 ???+ info "Quix Lake at a glance"
     **Summary** - Quix Lake persists Kafka topic data as **Avro/Parquet** in your own bucket (S3, GCS, Azure Blob, MinIO), partitioned for fast discovery and full-fidelity **Replay**.

-    **Why it exists** - Preserve exact Kafka messages (timestamps, headers, partitions, offsets, gaps) with indexed metadata so **Catalog**, **Replay**, **Sinks**, and future services operate on open formats you control.
+    **Why it exists** - Preserve exact Kafka messages (timestamps, headers, partitions, offsets, gaps) with indexed metadata so **API**, **Replay**, **Sinks**, and future services operate on open formats you control.

     **Key properties**

     - **Portable** - open Avro & Parquet
     - **Efficient** - Hive-style partitions + Parquet metadata
     - **Flexible** - historical + live workflows
     - **Replay** - preserves order, partitions, timestamps, headers, gaps

-    **Flow** - **Ingest** (Avro) → **Index** (Parquet metadata) → **Discover** (Data Catalog & Metadata API) → **Replay** (full fidelity back to Kafka) → **Use** (explore, combine historical + live, run queries/export).
+    **Flow** - **Ingest** (Avro) → **Index** (Parquet metadata) → **Discover** (Data Lake API & Metadata API) → **Replay** (full fidelity back to Kafka) → **Use** (explore, combine historical + live, run queries/export).

     [Learn more about Quix Lake →](../quix-cloud/quixlake/overview.md)

docs/quix-cloud/managed-services/replay.md

Lines changed: 2 additions & 2 deletions
@@ -123,15 +123,15 @@ Per-file reads, computed waits, gap trimming, queue length, and throttling-usefu
 ## How it works (high-level)

-1. Queries the **Quix Lake Catalog** to locate Avro segments for your topic, keys, partitions, and time window.
+1. Queries the **Quix Lake API** to locate Avro segments for your topic, keys, partitions, and time window.
 2. Streams records in order, applying **timestampsType** and **replaySpeed**.
 3. Optional controls adjust gaps, throughput caps, key remapping, and partition routing.
 4. Produces records to the **destinationTopic** in Kafka.

 ## Configuration

-Managed services use simplified config. Quix maps these keys to underlying environment variables and wiring (including the **Quix Lake Catalog API URL**, which is injected automatically).
+Managed services use simplified config. Quix maps these keys to underlying environment variables and wiring (including the **Quix Lake API URL**, which is injected automatically).

 ### Required

docs/quix-cloud/managed-services/sink.md

Lines changed: 2 additions & 2 deletions
@@ -19,7 +19,7 @@ Identifier: `DataLake.Sink`
 * Consumes from a Kafka topic (single or many sinks per environment)
 * Rolls **Avro** segments and writes them under the **Raw** prefix using a stable, partitioned layout (topic, key, partition, date)
-* Emits **Parquet** index files under **Metadata** so the **Quix Lake Catalog** and APIs can list and filter datasets without scanning Avro
+* Emits **Parquet** index files under **Metadata** so the **Quix Lake UI and API** can list and filter datasets without scanning Avro
 * Optionally accepts **custom metadata** you attach later via the Metadata API
 **Example object names**

@@ -130,7 +130,7 @@ deployments:
 * **Parquet (Index)**
   Compact descriptors with `Path`, `Topic`, `Key`, `Partition`, `TimestampStart/End`, `OffsetStart/End`, `RecordCount`, `FileSizeBytes`, `CreatedAt`, `DeletedAt?`.
 * **Parquet (Custom metadata, optional)**
-  Your key–value annotations (`Topic`, `Key`, `MetadataKey`, `MetadataValue`, `UpdatedUtc`) used for search and grouping in the Catalog.
+  Your key–value annotations (`Topic`, `Key`, `MetadataKey`, `MetadataValue`, `UpdatedUtc`) used for search and grouping in the UI.

 See [Open format](../quixlake/open-format.md) for full schemas and layout.
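To make the sink's behavior concrete, here is a rough sketch of a Hive-style partitioned object key and of filtering index rows by time window without touching Avro. The path template and prefix names below are assumptions for illustration, not the documented layout — see the open-format page for the real schemas.

```python
from datetime import datetime, timezone

def segment_key(topic, key, partition, ts_ms):
    """Build a hypothetical Hive-style object key for a rolled Avro segment."""
    day = datetime.fromtimestamp(ts_ms / 1000, tz=timezone.utc).strftime("%Y-%m-%d")
    # Assumed template: raw/ prefix + topic/key/partition/date partitions
    return f"raw/topic={topic}/key={key}/partition={partition}/date={day}/segment-{ts_ms}.avro"

def segments_in_window(index_rows, start_ms, end_ms):
    """Select index rows whose [TimestampStart, TimestampEnd] overlaps the window.

    Mimics how compact Parquet descriptors let callers locate segments
    without scanning the Avro files themselves.
    """
    return [r for r in index_rows
            if r["TimestampStart"] <= end_ms and r["TimestampEnd"] >= start_ms]
```

The overlap test uses only the `TimestampStart`/`TimestampEnd` columns from the index descriptors, which is the point of emitting them alongside the raw data.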

docs/quix-cloud/quixlake/api.md

Lines changed: 2 additions & 5 deletions
@@ -13,7 +13,7 @@ See the UI page: [Quix Lake User Interface](./user-interface.md).
 * Authenticate with a **Bearer** JWT in the `Authorization` header.
 * Most routes are namespaced by **workspace** and **topic**.
-* You can open the in-product Swagger from the Catalog header in the UI.
+* You can open the in-product Swagger from the catalog header in the Data Lake UI.

 ![Open API](./images/user-interface-open-swagger.png)

@@ -34,7 +34,7 @@ Together, these endpoints back the catalog’s search grid, topic/key lists, and
 ### Search stream metadata

 `POST /{workspaceId}/{topic}/search`
-Searches stream metadata and returns matches with a total count for paging and analytics. Results mirror the catalog grid. Supports free text, exact/prefix/suffix/fuzzy matching, time windows, sorting, paging, and optional inclusion of all tag fields.
+Searches stream metadata and returns matches with a total count for paging and analytics. Results mirror the catalog grid in the Data Lake UI. Supports free text, exact/prefix/suffix/fuzzy matching, time windows, sorting, paging, and optional inclusion of all tag fields.
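A minimal sketch of constructing a call to this endpoint, with paging derived from the total count the response reports. The request body field names (`query`, `page`, `pageSize`) are illustrative assumptions, not the documented request schema; only the route shape and Bearer auth come from the docs.

```python
import json
import math

def build_search_request(base_url, workspace_id, topic, token,
                         query, page=0, page_size=50):
    """Assemble URL, headers, and a hypothetical JSON body for the search route."""
    url = f"{base_url}/{workspace_id}/{topic}/search"
    headers = {
        "Authorization": f"Bearer {token}",  # Bearer JWT, per the docs
        "Content-Type": "application/json",
    }
    body = json.dumps({"query": query, "page": page, "pageSize": page_size})
    return url, headers, body

def total_pages(total_count, page_size):
    """Derive the page count from the response's total count."""
    return math.ceil(total_count / page_size)
```

With a total count of 101 and a page size of 50, a client would fetch three pages.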

 **Behavior notes**
@@ -85,9 +85,6 @@ Returns workspace identifiers that have discoverable data for the caller.
 The search response includes a total-count header so clients can page results consistently with the UI.


-Here’s a refined introduction for the **Data** section that aligns in tone and clarity with the improved **Catalog** section:
-
-

 ## Data endpoints

docs/quix-cloud/quixlake/open-format.md

Lines changed: 1 addition & 1 deletion
@@ -53,7 +53,7 @@ Quix Lake stores Kafka messages and metadata as **open files** in your blob stor
 === "Index metadata (Parquet)"

-    Compact **Parquet** descriptors summarize where raw Avro files live so Catalog/UI/APIs can **discover datasets without scanning Avro**.
+    Compact **Parquet** descriptors summarize where raw Avro files live so UI and APIs can **discover datasets without scanning Avro**.

     **Columns**

docs/quix-cloud/quixlake/overview.md

Lines changed: 1 addition & 1 deletion
@@ -53,7 +53,7 @@ See **[open format](./open-format.md)** for the full layout and schemas.
 * **Query externally** using DuckDB, Spark, Trino, Athena, or BigQuery over Avro and Parquet
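As a sketch of what "query externally" can look like, here is a helper that builds the kind of SQL DuckDB could run over the Parquet index files via its `read_parquet` function. The bucket name and object glob are placeholders, not the real Quix Lake paths, and the column names are the ones documented for the index descriptors.

```python
def index_scan_sql(bucket, topic):
    """Build an illustrative DuckDB query over Parquet index files.

    Aggregates RecordCount per Key; the s3:// glob layout is an assumption.
    """
    glob = f"s3://{bucket}/metadata/topic={topic}/*.parquet"  # assumed layout
    return (
        f"SELECT Key, SUM(RecordCount) AS records "
        f"FROM read_parquet('{glob}') "
        f"GROUP BY Key ORDER BY records DESC"
    )
```

The same SQL shape would work in Trino or Athena with their respective Parquet table functions or external tables instead of `read_parquet`.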
 !!! tip "Cross-environment access"
-    With the right permissions, you can browse datasets written by other environments using the Environment switcher in the Catalog.
+    With the right permissions, you can browse datasets written by other environments using the Environment switcher in the Data Lake UI.

 ## How it works
