generated from codecrafters-io/course-template
# Kafka/Produce API extension #45
Status: Open. ryan-gang wants to merge 46 commits into `main` from `kafka-produce`.
Changes from 32 commits (46 commits total).
Commits (all by ryan-gang):

- f4cfd49 feat: add stage descriptions for Produce API extension
- d6db37a feat: enhance stage description for the Produce API, adding interacti…
- 239ce70 feat: update stage description for handling Produce requests to non-e…
- 1e0dc22 feat: update stage description to clarify handling of Produce request…
- 62cb2bc feat: enhance stage description for handling Produce requests to inva…
- 65f3f7f feat: add TODO note for enhancing cluster metadata binspec details in…
- 03a3cb8 feat: enhance Produce stage description to clarify single record prod…
- 982ec2e refactor: streamline Produce stage description for single record prod…
- 7043210 feat: update Produce stage description to clarify Kafka's on-disk log…
- 17ea069 feat: enhance Produce stage description to clarify handling of multip…
- 42396b8 feat: update Produce stage description to clarify handling of multipl…
- 01cf9dc feat: refine Produce stage descriptions to clarify handling of multip…
- 2d085e6 feat: add TODO for testing multiple requests in Produce stage, focusi…
- 27111ee feat: add new challenge extension for producing messages in Kafka, in…
- 1ce465d chore: rename stage description files to match expected format
- 0f0f4b3 feat: update Produce stage description to include validation for both…
- 70fd9c5 feat: add support for producing messages to multiple topics and parti…
- 5a213be feat: update Produce stage name and description to clarify handling o…
- 41a2c4d fix: clarify language in Produce stage descriptions regarding request…
- 4ea0ea6 fix: update stage names in course-definition.yml for improved clarity…
- 23d9849 fix: enhance clarity in Produce stage descriptions by refining langua…
- 1f8ddfa fix: improve clarity in Produce stage descriptions by refining langua…
- 3514f03 fix: refine language in Produce stage descriptions for consistency in…
- fe38264 fix: enhance clarity in Produce stage descriptions by refining langua…
- 80be2e8 refactor: apply suggestions from code review
- 07abacc fix: enhance clarity in Produce stage description by refining languag…
- 4335570 fix: update Produce stage name for improved clarity by removing refer…
- 539d2d6 fix: update links in Produce stage descriptions for improved accuracy…
- c77abf3 fix: refine Produce stage descriptions for improved clarity by simpli…
- eb78513 chore: add additional stage 3 in between
- f5f5329 fix: simplify Produce stage description by hardcoding error response …
- 7b40866 feat: add detailed description for handling Produce requests with val…
- 170f1ee docs: add links to interactive protocol inspector for Kafka's log fil…
- 30bf549 fix: clarify Produce request description by specifying that data span…
- 543120f fix: improve clarity in stage descriptions by refining language and u…
- fb20251 fix: update error code references in stage descriptions to improve co…
- 51e6793 fix: enhance clarity in stage descriptions by consistently formatting…
- b20bc2c fix: update stage descriptions to clarify Produce request handling wi…
- 1096865 refactor: apply suggestions from code review
- 848e932 docs: remove interactive protocol inspector links for Produce request…
- 7b3f26b fix: update stage descriptions to reflect changes in Produce API vers…
- da5378c fix: update stage description to specify that a single `Produce` requ…
- 5f3ce1d fix: enhance stage descriptions for producing messages by adding inte…
- 02caa7b fix: clarify stage description for producing messages by specifying t…
- 9970aa6 refactor: apply suggestions from code review
- 0140d31 fix: update links in stage description for producing messages to refl…
In this stage, you'll add an entry for the `Produce` API to the APIVersions response.

## The Produce API

The [Produce API](https://kafka.apache.org/protocol#The_Messages_Produce) (API key `0`) is used to produce messages to a Kafka topic.

We've created an interactive protocol inspector for the request & response structures for `Produce`:

- 🔎 [Produce Request (v11)](https://binspec.org/kafka-produce-request-v11)
- 🔎 [Produce Response (v11)](https://binspec.org/kafka-produce-response-v11)

In this stage, you'll only need to add an entry for the `Produce` API to the APIVersions response you implemented in earlier stages. This lets clients know that your broker supports the `Produce` API. We'll get to responding to `Produce` requests in later stages.

## Tests

The tester will execute your program like this:

```bash
./your_program.sh /tmp/server.properties
```

It'll then connect to your server on port 9092 and send a valid `APIVersions` (v4) request.

The tester will validate that:

- The first 4 bytes of your response (the "message length") are valid.
- The correlation ID in the response header matches the correlation ID in the request header.
- The error code in the response body is `0` (NO_ERROR).
- The response body contains at least one entry for the API key `0` (Produce).
- The `MaxVersion` for the Produce API is at least 11.

## Notes

- You don't have to implement support for handling `Produce` requests in this stage. We'll get to this in later stages.
- You'll still need to include the entry for `APIVersions` in your response to pass earlier stages.
- The `MaxVersion` values for `Produce` and `APIVersions` are different. For `APIVersions`, it is 4. For `Produce`, it is 11.
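To make the notes above concrete, here's a minimal Python sketch (the helper name is ours, and a `MinVersion` of `0` is an assumption for illustration) of encoding one entry of the APIVersions (v4) `api_keys` array: INT16 api key, INT16 min version, INT16 max version, followed by an empty tagged-fields section.

```python
import struct

def encode_api_version_entry(api_key: int, min_version: int, max_version: int) -> bytes:
    """Encode one api_keys entry of an APIVersions (v4) response:
    api_key (INT16), min_version (INT16), max_version (INT16),
    then an empty tagged-fields section (a single 0x00 byte)."""
    return struct.pack(">hhh", api_key, min_version, max_version) + b"\x00"

# Advertise both APIVersions (key 18, max version 4) and Produce (key 0,
# max version 11), per the notes above.
entries = [
    encode_api_version_entry(18, 0, 4),
    encode_api_version_entry(0, 0, 11),
]
```

This is only a sketch of the entry encoding; the full response still needs the header, the compact-array length prefix, and the trailing `throttle_time_ms` and tagged-fields bytes from earlier stages.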
In this stage, you'll add support for handling Produce requests to invalid topics or partitions.

## Produce API Response for Invalid Topics or Partitions

When a Kafka broker receives a Produce request, it needs to validate that both the topic and partition exist. If either the topic or partition doesn't exist, it returns an appropriate error code and response.

For this stage, you can hardcode the error response: assume that all Produce requests are for invalid topics or partitions and return the error code `3` (UNKNOWN_TOPIC_OR_PARTITION). In the next stage, you'll implement handling success responses.

We've created an interactive protocol inspector for the request & response structures for `Produce`:

- 🔎 [Produce Request (v11)](https://binspec.org/kafka-produce-request-v11)
- 🔎 [Produce Response (v11) - Invalid Topic](https://binspec.org/kafka-produce-error-response-v11-invalid-topic)

## Tests

The tester will execute your program like this:

```bash
./your_program.sh /tmp/server.properties
```

It'll then connect to your server on port 9092 and send a `Produce` (v11) request with either an invalid topic name or a valid topic but an invalid partition.

The tester will validate that:

- The first 4 bytes of your response (the "message length") are valid.
- The correlation ID in the response header matches the correlation ID in the request header.
- The `error_code` in the response body is `3` (UNKNOWN_TOPIC_OR_PARTITION).
- The `throttle_time_ms` field in the response is `0`.
- Inside the topic response:
  - The `name` field matches the topic name in the request.
- Inside the partition response:
  - The `index` field matches the partition in the request.
  - The `base_offset` field is `-1`.
  - The `log_append_time_ms` field is `-1`.
  - The `log_start_offset` field is `-1`.

## Notes

- You'll need to parse the `Produce` request in this stage to get the topic name and partition to send in the response.
- For this stage, simply hardcode the error response without implementing actual validation logic.
- The official docs for the `Produce` request can be found [here](https://kafka.apache.org/protocol.html#The_Messages_Produce). Make sure to scroll down to the "Produce Response (Version: 11)" section.
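Because the error response is hardcoded, the partition-level fields the tester checks can be packed directly. A minimal sketch (the helper name is ours, and the v11 partition response also carries `record_errors` and `error_message` fields that this sketch omits) of the fixed-width fields:

```python
import struct

UNKNOWN_TOPIC_OR_PARTITION = 3

def encode_partition_error(index: int) -> bytes:
    """Pack the fixed-width fields of a hardcoded Produce partition error:
    index (INT32), error_code (INT16), base_offset (INT64),
    log_append_time_ms (INT64), log_start_offset (INT64).
    The three INT64 fields are all -1, matching the tester's expectations
    for an invalid topic or partition."""
    return struct.pack(">ihqqq", index, UNKNOWN_TOPIC_OR_PARTITION, -1, -1, -1)
```

The `index` still has to come from parsing the request, which is why the first note above says you need to parse the `Produce` request even in this hardcoded stage.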
In this stage, you'll add support for responding to Produce requests with a valid topic.

## Produce API Response for Valid Topics

When a Kafka broker receives a Produce request, it needs to validate that both the topic and partition exist. If either the topic or partition doesn't exist, it returns an appropriate error code and response.

The broker performs validation in this order:

1. **Topic validation**: Check if the topic exists by reading the `__cluster_metadata` topic's log file.
2. **Partition validation**: If the topic exists, check if the partition exists within that topic.

### Topic Validation

To validate that a topic exists, the broker reads the `__cluster_metadata` topic's log file, located at `/tmp/kraft-combined-logs/__cluster_metadata-0/00000000000000000000.log`. Inside the log file, the broker finds the topic's metadata, which is a `record` (inside a RecordBatch) with a payload of type `TOPIC_RECORD`. If there exists a `TOPIC_RECORD` with the given topic name and topic ID, the topic exists.

### Partition Validation

To validate that a partition exists, the broker reads the same `__cluster_metadata` topic's log file and finds the partition's metadata, which is a `record` (inside a RecordBatch) with a payload of type `PARTITION_RECORD`. If there exists a `PARTITION_RECORD` with the given partition index, the UUID of the topic it is associated with, and the UUID of the directory it is associated with, the partition exists.

We've created an interactive protocol inspector for the request & response structures for `Produce`:

- 🔎 [Produce Request (v11)](https://binspec.org/kafka-produce-request-v11)
- 🔎 [Produce Response (v11)](https://binspec.org/kafka-produce-response-v11)

We've also created an interactive protocol inspector for the `__cluster_metadata` topic's log file:

- 🔎 [__cluster_metadata log file](https://binspec.org/kafka-topic-log)

In this stage, you'll need to implement the response for a `Produce` request with a valid topic. In later stages, you'll handle successfully producing messages to valid topics and partitions and persist messages to disk using Kafka's RecordBatch format.

## Tests

The tester will execute your program like this:

```bash
./your_program.sh /tmp/server.properties
```

It'll then connect to your server on port 9092 and send a `Produce` (v11) request with either an invalid topic name or a valid topic but an invalid partition.

The tester will validate that:

- The first 4 bytes of your response (the "message length") are valid.
- The correlation ID in the response header matches the correlation ID in the request header.
- The `error_code` in the response body is `3` (UNKNOWN_TOPIC_OR_PARTITION).
- The `throttle_time_ms` field in the response is `0`.
- Inside the topic response:
  - The `name` field matches the topic name in the request.
- Inside the partition response:
  - The `index` field matches the partition in the request.
  - The `base_offset` field is `-1`.
  - The `log_append_time_ms` field is `-1`.
  - The `log_start_offset` field is `-1`.

## Notes

- You'll need to parse the `Produce` request in this stage to get the topic name and partition to send in the response.
- The official docs for the `Produce` request can be found [here](https://kafka.apache.org/protocol.html#The_Messages_Produce). Make sure to scroll down to the "Produce Response (Version: 11)" section.
- The official Kafka docs don't cover the structure of records inside the `__cluster_metadata` topic, but you can find the definitions in the Kafka source code [here](https://github.com/apache/kafka/tree/5b3027dfcbcb62d169d4b4421260226e620459af/metadata/src/main/resources/common/metadata).
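The topic-first-then-partition validation order described above can be sketched like this. The dataclasses stand in for records already decoded from the `__cluster_metadata` log file; their shapes are simplified assumptions for illustration, not the actual binary payload layouts:

```python
from dataclasses import dataclass

@dataclass
class TopicRecord:
    """Simplified stand-in for a decoded TOPIC_RECORD payload."""
    name: str
    topic_id: bytes  # 16-byte topic UUID

@dataclass
class PartitionRecord:
    """Simplified stand-in for a decoded PARTITION_RECORD payload."""
    index: int
    topic_id: bytes  # UUID of the topic this partition belongs to

def validate(topic: str, partition: int, topics, partitions) -> int:
    """Return 0 (NO_ERROR) if both topic and partition exist,
    else 3 (UNKNOWN_TOPIC_OR_PARTITION). The topic is checked first,
    matching the validation order described above."""
    match = next((t for t in topics if t.name == topic), None)
    if match is None:
        return 3
    if not any(p.index == partition and p.topic_id == match.topic_id
               for p in partitions):
        return 3
    return 0
```

Note how the partition check links the `PARTITION_RECORD` back to the topic through the topic UUID, as described in the Partition Validation section.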
In this stage, you'll add support for successfully producing a single record.

## Single Record Production

When a Kafka broker receives a Produce request, it needs to validate that the topic and partition exist (using the `__cluster_metadata` topic's log file), store the record in the appropriate log file using Kafka's on-disk format, and return a successful response with the assigned offset.

The record must be persisted to the topic's log file at `<log-dir>/<topic-name>-<partition-index>/00000000000000000000.log` using Kafka's RecordBatch format.

We've created an interactive protocol inspector for the request & response structures for `Produce`:

- 🔎 [Produce Request (v11)](https://binspec.org/kafka-produce-request-v11)
- 🔎 [Produce Response (v11)](https://binspec.org/kafka-produce-response-v11)

Kafka's on-disk log format uses the same [RecordBatch](https://binspec.org/kafka-record-batches) format that is used in `Produce` and `Fetch` requests.

You can refer to the following interactive protocol inspector for Kafka's log file format:

- 🔎 [A sample topic's log file](https://binspec.org/kafka-topic-log)

## Tests

The tester will execute your program like this:

```bash
./your_program.sh /tmp/server.properties
```

It'll then connect to your server on port 9092 and send multiple successive `Produce` (v11) requests with single records.

The tester will validate that:

- The first 4 bytes of your response (the "message length") are valid.
- The correlation ID in the response header matches the correlation ID in the request header.
- The error code in the response body is `0` (NO_ERROR).
- The `throttle_time_ms` field in the response is `0`.
- Inside the topic response:
  - The `name` field matches the topic name in the request.
- Inside the partition response:
  - The `index` field matches the partition in the request.
  - The `base_offset` field contains the assigned offset for the record. (The `base_offset` is the offset of the record in the partition, not the offset of the batch. So 0 for the first record, 1 for the second record, and so on.)
  - The `log_append_time_ms` field is `-1` (signifying that the timestamp is the latest).
  - The `log_start_offset` field is `0`.
- The record is persisted to the appropriate log file on disk at `<log-dir>/<topic-name>-<partition-index>/00000000000000000000.log`.

## Notes

- On-disk log files must be stored in `RecordBatch` format.
- The offset assignment should start from `0` for new partitions and increment for each subsequent record.
- The official docs for the `Produce` request can be found [here](https://kafka.apache.org/protocol.html#The_Messages_Produce). Make sure to scroll down to the "Produce Response (Version: 11)" section.
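Since the `Produce` request already carries a complete RecordBatch, one way to persist it (a sketch under the assumption that the batch bytes can be reused as-is; the helper names are ours) is to rewrite the batch's leading `base_offset` field with the broker-assigned offset and append the result to the partition's log file:

```python
import struct

def log_path(log_dir: str, topic: str, partition: int) -> str:
    """Path of the partition's log file, as described above:
    <log-dir>/<topic-name>-<partition-index>/00000000000000000000.log"""
    return f"{log_dir}/{topic}-{partition}/00000000000000000000.log"

def assign_base_offset(batch: bytes, next_offset: int) -> bytes:
    """The first field of a RecordBatch is base_offset (INT64, big-endian).
    Rewrite it with the broker-assigned offset and return the bytes to
    append to the log file."""
    return struct.pack(">q", next_offset) + batch[8:]
```

In Kafka's v2 batch format the CRC covers the bytes after the CRC field itself, so rewriting `base_offset` does not invalidate the checksum; verify this against the RecordBatch inspector linked above.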
In this stage, you'll add support for producing multiple records in a single `Produce` request.

## Batch Processing

When a Kafka broker receives a `Produce` request containing a `RecordBatch` with multiple records, it needs to validate that the topic and partition exist, assign sequential offsets to each record within the batch, and store the entire batch atomically to the log file. The `RecordBatch` containing multiple records must be stored as a single unit to the topic's log file at `<log-dir>/<topic-name>-<partition-index>/00000000000000000000.log`.

We've created an interactive protocol inspector for the request & response structures for `Produce`:

- 🔎 [Produce Request (v11)](https://binspec.org/kafka-produce-request-v11)
- 🔎 [Produce Response (v11)](https://binspec.org/kafka-produce-response-v11)

## Tests

The tester will execute your program like this:

```bash
./your_program.sh /tmp/server.properties
```

It'll then connect to your server on port 9092 and send a `Produce` (v11) request containing a RecordBatch with multiple records.

The tester will validate that:

- The first 4 bytes of your response (the "message length") are valid.
- The correlation ID in the response header matches the correlation ID in the request header.
- The error code in the response body is `0` (NO_ERROR).
- The `throttle_time_ms` field in the response is `0`.
- Inside the topic response:
  - The `name` field matches the topic name in the request.
- Inside the partition response:
  - The `index` field matches the partition in the request.
  - The `base_offset` field is `0` (the base offset for the batch).
  - The `log_append_time_ms` field is `-1` (signifying that the timestamp is the latest).
  - The `log_start_offset` field is `0`.
- The records are persisted to the appropriate log file on disk at `<log-dir>/<topic-name>-<partition-index>/00000000000000000000.log` with sequential offsets.

## Notes

- Records within a batch must be assigned sequential offsets (e.g., if the base offset is 5, records get offsets 5, 6, 7).
- The official docs for the `Produce` request can be found [here](https://kafka.apache.org/protocol.html#The_Messages_Produce). Make sure to scroll down to the "Produce Response (Version: 11)" section.
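The sequential-offset rule in the notes can be derived from the batch header itself: a RecordBatch stores `last_offset_delta`, which for a batch of N records is N - 1. A sketch, assuming the standard v2 RecordBatch layout (the helper name is ours):

```python
import struct

def next_base_offset(current_base: int, batch: bytes) -> int:
    """Compute the base offset for the batch that follows this one.
    last_offset_delta (INT32) sits after base_offset (8 bytes),
    batch_length (4), partition_leader_epoch (4), magic (1), crc (4)
    and attributes (2), i.e. at byte offset 23."""
    (last_offset_delta,) = struct.unpack_from(">i", batch, 23)
    return current_base + last_offset_delta + 1
```

So a 3-record batch written at base offset 5 fills offsets 5, 6, and 7, and the next batch starts at 8, matching the example in the notes.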
In this stage, you'll add support for producing to multiple partitions of the same topic in a single request.

## Partition Routing

When a Kafka broker receives a Produce request targeting multiple partitions of the same topic, it needs to validate that the topic and all partitions exist, write records to each partition's log file independently, and return a response containing results for all partitions. Each partition maintains its own offset sequence independently, so partition 0 and partition 1 can both have records starting at offset 0.

We've created an interactive protocol inspector for the request & response structures for `Produce`:

- 🔎 [Produce Request (v11)](https://binspec.org/kafka-produce-request-v11)
- 🔎 [Produce Response (v11)](https://binspec.org/kafka-produce-response-v11)

## Tests

The tester will execute your program like this:

```bash
./your_program.sh /tmp/server.properties
```

It'll then connect to your server on port 9092 and send a `Produce` (v11) request targeting multiple partitions of the same topic. The request will contain multiple RecordBatches, one for each partition. Each RecordBatch will contain a single record.

The tester will validate that:

- The first 4 bytes of your response (the "message length") are valid.
- The correlation ID in the response header matches the correlation ID in the request header.
- The error code in the response body is `0` (NO_ERROR).
- The `throttle_time_ms` field in the response is `0`.
- There is a single topic present in the response.
- Inside the topic response:
  - The `name` field matches the topic name in the request.
  - Each partition in the request has a corresponding partition response.
- Inside each partition response:
  - The `index` field matches the partition in the request.
  - The `base_offset` field contains the assigned offset for that partition.
  - The error code is `0` (NO_ERROR).
  - The `log_append_time_ms` field is `-1` (signifying that the timestamp is the latest).
  - The `log_start_offset` field is `0`.
- Records are persisted to the correct partition log files on disk.
- Offset assignment is independent per partition (partition 0 and partition 1 can both have offset 0).

## Notes

- The response must include entries for all requested partitions.
- On-disk log files must be stored in `RecordBatch` format with proper CRC validation.
- The official docs for the `Produce` request can be found [here](https://kafka.apache.org/protocol.html#The_Messages_Produce). Make sure to scroll down to the "Produce Response (Version: 11)" section.
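The per-partition independence described above boils down to keeping a separate offset counter per (topic, partition) pair. A minimal in-memory sketch (the names are ours; a real broker would recover each counter from what's already in the partition's log file rather than keep it only in memory):

```python
from collections import defaultdict

# One offset counter per (topic, partition); producing to one partition
# never advances another partition's sequence.
next_offsets = defaultdict(int)

def produce(topic: str, partition: int, record_count: int) -> int:
    """Assign the base offset for a batch of record_count records and
    advance this partition's counter. The return value is what goes in
    the partition response's base_offset field."""
    key = (topic, partition)
    base = next_offsets[key]
    next_offsets[key] += record_count
    return base
```

With this structure, partition 0 and partition 1 of the same topic both start at offset 0, exactly as the tester expects.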
@ryan-gang did we settle on v11 vs v12? What was the conclusion?
Yes, we are going to go with v12. The Request and Response structures are identical, so changes to the stage descriptions and binspec would be very minimal. For the tester, changes would also be limited, but we would have to change the version of Kafka we use for testing on CI and locally, and the fixtures are going to change a lot. So I thought I would ship the tester with v11, then in a separate PR, upgrade the version, update CI, and update the fixtures.