"Gentle Introduction to Arrow / Record Batches" #11336 #18051

<!---
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at

  http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
-->

# Introduction to `Arrow` & RecordBatches

```{contents}
:local:
:depth: 2
```

This guide helps DataFusion users understand [Apache Arrow]—a language-independent, in-memory columnar format and development platform for analytics. It defines a standardized columnar representation that enables different systems and languages (e.g., Rust and Python) to share data with zero-copy interchange, avoiding serialization overhead. A core building block is the [`RecordBatch`] format. While you may never need to work with Arrow directly, this knowledge becomes valuable when using DataFusion's extension points or debugging performance issues.

**Why Arrow is central to DataFusion**: Arrow provides the unified type system that makes DataFusion possible. When you query a CSV file, join it with a Parquet file, and aggregate results from JSON—it all works seamlessly because every data source is converted to Arrow's common representation. This unified type system, combined with Arrow's columnar format, enables DataFusion to execute efficient vectorized operations across any combination of data sources while benefiting from zero-copy data sharing between query operators.

## Why Columnar? The Arrow Advantage

Apache Arrow is an open **specification** that defines a common way to organize analytical data in memory. Think of it as a set of best practices that different systems agree to follow, not a database or programming language.

### Row-oriented vs Columnar Layout

Quick visual: row-major (left) vs Arrow's columnar layout (right). For a deeper primer, see the [arrow2 guide].

```
Traditional Row Storage:           Arrow Columnar Storage:
┌────┬──────┬─────┐                ┌─────────┬─────────┬──────────┐
│ id │ name │ age │                │   id    │  name   │   age    │
├────┼──────┼─────┤                ├─────────┼─────────┼──────────┤
│ 1  │ A    │ 30  │                │ [1,2,3] │ [A,B,C] │[30,25,35]│
│ 2  │ B    │ 25  │                └─────────┴─────────┴──────────┘
│ 3  │ C    │ 35  │                     ↑         ↑          ↑
└────┴──────┴─────┘                Int32Array StringArray Int32Array
(read entire rows)                 (process entire columns at once)
```

### Why This Matters

- **Unified Type System**: All data sources (CSV, Parquet, JSON) convert to the same Arrow types, enabling seamless cross-format queries
- **Vectorized Execution**: Process entire columns at once using SIMD instructions (see the sketch after this list)
- **Cache Efficiency**: Scanning specific columns doesn't load unnecessary data into CPU cache
- **Zero-Copy Data Sharing**: Systems can share Arrow data without conversion overhead

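To make "process entire columns at once" concrete, here is a minimal sketch of a vectorized operation using Arrow's compute kernels (via the `arrow` crate, re-exported as `datafusion::arrow`; the values are made up for illustration):

```rust
use datafusion::arrow::array::Int32Array;
use datafusion::arrow::compute::sum;

fn main() {
    // One contiguous Arrow column, like the `age` column in the diagram above
    let ages = Int32Array::from(vec![30, 25, 35]);

    // The kernel processes the whole column in a single call instead of
    // visiting values row by row, which lets the compiler use SIMD
    assert_eq!(sum(&ages), Some(90));
}
```
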
Arrow is widely adopted for in-memory analytics precisely because of its **columnar format**—systems that natively store or process data in Arrow (DataFusion, Polars, InfluxDB 3.0), and runtimes that convert to Arrow for interchange (DuckDB, Spark, pandas), all organize data by column rather than by row. This cross-language, cross-platform adoption of the columnar model enables seamless data flow between systems with minimal conversion overhead.

Within this columnar design, Arrow's standard unit for packaging data is the **RecordBatch**—the key to making columnar format practical for real-world query engines.

## What is a RecordBatch? (And Why Batch?)

A **[`RecordBatch`]** represents a horizontal slice of a table—a collection of equal-length columnar arrays that conform to a defined schema. Each column within the slice is a contiguous Arrow array, and all columns have the same number of rows (length). This chunked, immutable unit enables efficient streaming and parallel execution.

Think of it as having two perspectives:

- **Columnar inside**: Each column (`id`, `name`, `age`) is a contiguous array optimized for vectorized operations
- **Row-chunked externally**: The batch represents a chunk of rows (e.g., rows 1-1000), making it a manageable unit for streaming

RecordBatches are **immutable snapshots**—once created, they cannot be modified. Any transformation produces a _new_ RecordBatch, enabling safe parallel processing without locks or coordination overhead.

This design allows DataFusion to process streams of row-based chunks while gaining maximum performance from the columnar layout. Let's see how this works in practice.

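As a small illustration of both perspectives, the hypothetical `head` helper below (not a DataFusion API) takes the first rows of an existing batch. `RecordBatch::slice` returns a new batch that is a zero-copy view over the same immutable column buffers, so nothing is modified in place:

```rust
use arrow_array::RecordBatch;

/// Return the first `n` rows of `batch` as a new RecordBatch.
/// The original batch is untouched; the slice shares its underlying buffers.
fn head(batch: &RecordBatch, n: usize) -> RecordBatch {
    batch.slice(0, n.min(batch.num_rows()))
}
```
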
### Configuring Batch Size

DataFusion uses a default batch size of 8192 rows per RecordBatch, balancing memory efficiency with vectorization benefits. You can adjust this via the session configuration or the [`datafusion.execution.batch_size`] configuration setting:

```rust
use datafusion::execution::config::SessionConfig;
use datafusion::prelude::*;

let config = SessionConfig::new().with_batch_size(8192); // default value
let ctx = SessionContext::new_with_config(config);
```

You can also query and modify this setting using SQL:

```sql
SHOW datafusion.execution.batch_size;
SET datafusion.execution.batch_size TO 1024;
```

See [configuration settings] for more details.

## From files to Arrow

When you call [`read_csv`], [`read_parquet`], [`read_json`] or [`read_avro`], DataFusion decodes those formats into Arrow arrays and streams them to operators as RecordBatches.

The example below shows how to read data from different file formats. Each `read_*` method returns a [`DataFrame`] that represents a query plan. When you call [`.collect()`], DataFusion executes the plan and returns results as a `Vec<RecordBatch>`—the actual columnar data in Arrow format.

```rust
use datafusion::prelude::*;

#[tokio::main]
async fn main() -> datafusion::error::Result<()> {
    let ctx = SessionContext::new();

    // Pick ONE of these per run (each returns a new DataFrame):
    let df = ctx.read_csv("data.csv", CsvReadOptions::new()).await?;
    // let df = ctx.read_parquet("data.parquet", ParquetReadOptions::default()).await?;
    // let df = ctx.read_json("data.ndjson", NdJsonReadOptions::default()).await?; // requires "json" feature; expects newline-delimited JSON (NDJSON)
    // let df = ctx.read_avro("data.avro", AvroReadOptions::default()).await?; // requires "avro" feature

    let batches = df
        .select(vec![col("id")])?
        .filter(col("id").gt(lit(10)))?
        .collect()
        .await?; // Vec<RecordBatch>

    Ok(())
}
```

## Streaming Through the Engine

DataFusion processes queries as pull-based pipelines where operators request batches from their inputs. This streaming approach enables early result production, bounds memory usage (spilling to disk only when necessary), and naturally supports parallel execution across multiple CPU cores.

```
A user's query: SELECT name FROM 'data.parquet' WHERE id > 10

The DataFusion Pipeline:
┌─────────────┐    ┌──────────────┐    ┌────────────────┐    ┌──────────────────┐    ┌──────────┐
│   Parquet   │───▶│     Scan     │───▶│     Filter     │───▶│    Projection    │───▶│ Results  │
│    File     │    │   Operator   │    │    Operator    │    │     Operator     │    │          │
└─────────────┘    └──────────────┘    └────────────────┘    └──────────────────┘    └──────────┘
                    (reads data)         (id > 10)          (keeps "name" col)
                    RecordBatch   ───▶   RecordBatch  ───▶    RecordBatch     ───▶  RecordBatch
```

In this pipeline, [`RecordBatch`]es are the "packages" of columnar data that flow between the different stages of query execution. Each operator processes batches incrementally, enabling the system to produce results before reading the entire input.

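To observe this incremental behavior from user code, here is a sketch (assuming the `futures` crate is available for `StreamExt::next`) that pulls batches one at a time with `DataFrame::execute_stream` rather than materializing everything with `.collect()`:

```rust
use datafusion::prelude::*;
use futures::StreamExt;

#[tokio::main]
async fn main() -> datafusion::error::Result<()> {
    let ctx = SessionContext::new();

    let df = ctx
        .read_parquet("data.parquet", ParquetReadOptions::default())
        .await?
        .filter(col("id").gt(lit(10)))?
        .select(vec![col("name")])?;

    // `execute_stream` starts the pipeline; each `next()` call pulls one
    // RecordBatch through scan -> filter -> projection
    let mut stream = df.execute_stream().await?;
    while let Some(batch) = stream.next().await {
        let batch = batch?; // each item is a Result<RecordBatch>
        println!("received a batch with {} rows", batch.num_rows());
    }
    Ok(())
}
```
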
## Minimal: build a RecordBatch in Rust

Sometimes you need to create Arrow data programmatically rather than reading from files. This example shows the core building blocks: creating typed arrays (like [`Int32Array`] for numbers), defining a [`Schema`] that describes your columns, and assembling them into a [`RecordBatch`].

Note: You'll see [`Arc`] used frequently in the code—Arrow arrays are wrapped in `Arc` (atomically reference-counted pointers) to enable cheap, thread-safe sharing across operators and tasks. [`ArrayRef`] is simply a type alias for `Arc<dyn Array>`.

```rust
use std::sync::Arc;

use arrow_array::{ArrayRef, Int32Array, RecordBatch, StringArray};
use arrow_schema::{ArrowError, DataType, Field, Schema};

fn make_batch() -> Result<RecordBatch, ArrowError> {
    let ids = Int32Array::from(vec![1, 2, 3]);
    let names = StringArray::from(vec![Some("alice"), None, Some("carol")]);

    let schema = Arc::new(Schema::new(vec![
        Field::new("id", DataType::Int32, false),
        Field::new("name", DataType::Utf8, true),
    ]));

    let cols: Vec<ArrayRef> = vec![Arc::new(ids), Arc::new(names)];
    RecordBatch::try_new(schema, cols)
}
```

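To read values back out of the batch built above, columns must be downcast from the type-erased `ArrayRef` to their concrete array types. The `inspect` helper below is a hypothetical sketch, not part of the original example:

```rust
use arrow_array::{Array, Int32Array, RecordBatch, StringArray};

fn inspect(batch: &RecordBatch) {
    println!("{} rows x {} columns", batch.num_rows(), batch.num_columns());

    // Columns come back as `ArrayRef` (Arc<dyn Array>); downcast to use
    // the typed accessors
    let ids = batch
        .column(0)
        .as_any()
        .downcast_ref::<Int32Array>()
        .expect("id column is Int32");
    let names = batch
        .column(1)
        .as_any()
        .downcast_ref::<StringArray>()
        .expect("name column is Utf8");

    for row in 0..batch.num_rows() {
        let name = if names.is_null(row) { "NULL" } else { names.value(row) };
        println!("id = {}, name = {}", ids.value(row), name);
    }
}
```
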
## Query an in-memory batch with DataFusion

Once you have one or more [`RecordBatch`]es, you can query them with DataFusion using a [`MemTable`]. This is useful for testing, processing data from external systems, or combining in-memory data with other sources. The example below creates a batch, wraps it in a [`MemTable`], registers it as a named table, and queries it using SQL—demonstrating how Arrow serves as the bridge between your data and DataFusion's query engine.

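A minimal sketch of such an example, reusing the `make_batch` function from the previous section (module paths such as `datafusion::datasource::MemTable` are assumptions that may vary between DataFusion versions):

```rust
use std::sync::Arc;

use datafusion::datasource::MemTable;
use datafusion::prelude::*;

#[tokio::main]
async fn main() -> datafusion::error::Result<()> {
    // Build a RecordBatch as shown in the previous section
    let batch = make_batch()?;
    let schema = batch.schema();

    // A MemTable holds one or more partitions, each a Vec<RecordBatch>
    let table = MemTable::try_new(schema, vec![vec![batch]])?;

    // Register it under a name and query it with SQL
    let ctx = SessionContext::new();
    ctx.register_table("people", Arc::new(table))?;

    let df = ctx.sql("SELECT id, name FROM people WHERE id > 1").await?;
    df.show().await?;

    Ok(())
}
```
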
Review comment: Technically there are ways to mutate the data in place (for example https://docs.rs/arrow/latest/arrow/array/struct.PrimitiveArray.html#method.unary_mut), but that is an advanced use case and, if mentioned at all, could be just a reference.

Review comment: As part of a follow-on PR we can also copy some of the introductory material from https://jorgecarleitao.github.io/arrow2/main/guide/arrow.html#what-is-apache-arrow, which I think is well written, though maybe it is too "database centric" 🤔