This module provisions an IBM Event Streams instance and supports configuring topics, partitions, throughput, storage size, cleanup policy, retention time, retention size, segment size, and schemas.
The Event Streams service supports payload data encryption that uses a root key CRN of a key management service, such as Key Protect or Hyper Protect Crypto Services (HPCS). You specify the root key CRN with the `kms_key_crn` input. For more information, see Managing encryption in Event Streams.
Before you run the module, configure an authorization policy to allow the Event Streams service to access the key management service instance with the reader role. For more information, see Using authorizations to grant access between services.
You can't manage the authorization policy in the same Terraform state file as the Event Streams instance. When you run a `terraform destroy` command, the instance is only soft deleted: it remains as a reclamation resource for a period to support recovery. The authorization policy must still exist when the instance is hard deleted or reclaimed; otherwise, unregistering the instance from the root key fails on the backend. If the policy doesn't exist at that point, the only way to unregister the instance, which is a prerequisite for deleting the root key, is to open a support case. For more information, see Using a customer-managed key.
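The required authorization policy can be managed in a separate Terraform configuration, kept out of this module's state. A minimal sketch, assuming the IBM provider's `ibm_iam_authorization_policy` resource, Key Protect as the key management service (`kms`; use `hs-crypto` for HPCS), and Event Streams' IAM service name `messagehub`; the target instance GUID is a hypothetical placeholder:

```hcl
# Managed in a separate Terraform state from the Event Streams instance.
resource "ibm_iam_authorization_policy" "es_kms_policy" {
  source_service_name         = "messagehub"                  # Event Streams' IAM service name
  target_service_name         = "kms"                         # Key Protect; "hs-crypto" for HPCS
  target_resource_instance_id = "<key-protect-instance-guid>" # hypothetical placeholder
  roles                       = ["Reader"]
}
```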
### Usage

```hcl
module "event_streams" {
  source            = "terraform-ibm-modules/event-streams/ibm"
  version           = "latest" # Replace "latest" with a release version to lock into a specific release
  es_name           = "my-event-streams" # name for the new Event Streams instance (required)
  resource_group_id = "event-streams-rg"
  plan              = "standard"
  topics = [
    {
      name       = "topic-1"
      partitions = 1
      config = {
        "cleanup.policy"  = "delete"
        "retention.ms"    = "86400000"
        "retention.bytes" = "10485760"
        "segment.bytes"   = "10485760"
      }
    },
    {
      name       = "topic-2"
      partitions = 1
      config = {
        "cleanup.policy"  = "compact,delete"
        "retention.ms"    = "86400000"
        "retention.bytes" = "1073741824"
        "segment.bytes"   = "536870912"
      }
    }
  ]
  schemas = [
    {
      schema_id = "my-es-schema_1"
      schema = {
        type = "string"
        name = "name_1"
      }
    },
    {
      schema_id = "my-es-schema_2"
      schema = {
        type = "string"
        name = "name_2"
      }
    },
    {
      schema_id = "my-es-schema_3"
      schema = {
        type = "string"
        name = "name_3"
      }
    }
  ]
}
```
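After the module is applied, its outputs (see the outputs table in this document) can be wired into client configuration. A short sketch using the output names from that table:

```hcl
# Expose connection details from the module for Kafka clients.
output "event_streams_crn" {
  value = module.event_streams.crn
}

output "kafka_brokers" {
  description = "Broker endpoints for Kafka clients"
  value       = module.event_streams.kafka_brokers_sasl
}

output "kafka_http_url" {
  description = "Event Streams REST API endpoint"
  value       = module.event_streams.kafka_http_url
}
```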
### Required access policies

You need the following permissions to run this module:

- Account Management
    - **Resource Group** service
        - `Viewer` platform access
- IAM Services
    - **Event Streams** service
        - `Editor` platform access
        - `Manager` service access
### Requirements

Name | Version |
---|---|
terraform | >= 1.3.0, <1.6.0 |
ibm | >= 1.56.1, < 2.0.0 |
### Modules

Name | Source | Version |
---|---|---|
cbr_rule | terraform-ibm-modules/cbr/ibm//modules/cbr-rule-module | 1.18.1 |
### Resources

Name | Type |
---|---|
ibm_event_streams_schema.es_schema | resource |
ibm_event_streams_topic.es_topic | resource |
ibm_resource_instance.es_instance | resource |
### Inputs

Name | Description | Type | Default | Required |
---|---|---|---|---|
cbr_rules | (Optional, list) List of CBR rules to create | `list(object({...}))` | `[]` | no |
create_timeout | Creation timeout value of the Event Streams module. Use 3h when creating an enterprise instance; add 1h for each level of non-default throughput and 30m for each level of non-default storage_size | `string` | `"3h"` | no |
delete_timeout | Deletion timeout value of the Event Streams module | `string` | `"15m"` | no |
es_name | The name to give the IBM Event Streams instance created by this module | `string` | n/a | yes |
kms_key_crn | The root key CRN of a key management service, such as Key Protect or Hyper Protect Crypto Services (HPCS), to use for payload data encryption. Only used if var.kms_encryption_enabled is set to true. An authorization policy that allows the Event Streams service to access the key management service instance as a Reader MUST be configured in advance, and should not be managed in the same Terraform state as the Event Streams instance. See https://cloud.ibm.com/docs/account?topic=account-serviceauth | `string` | `null` | no |
plan | Plan for the Event Streams instance: lite, standard, or enterprise-3nodes-2tb | `string` | `"standard"` | no |
region | IBM Cloud region where Event Streams will be created | `string` | `"us-south"` | no |
resource_group_id | The resource group ID where the Event Streams instance will be created | `string` | n/a | yes |
schemas | The list of schema objects, each containing a schema ID and the schema definition | `list(object({...}))` | `[]` | no |
service_endpoints | Whether to enable the public, private, or both service endpoints. Supported values are 'public', 'private', or 'public-and-private' | `string` | `"public"` | no |
storage_size | Storage size of Event Streams in GB. Enterprise instances only. Options: 2048, 4096, 6144, 8192, 10240, 12288. When throughput is 300, storage_size starts at 4096; when throughput is 450, storage_size starts at 6144. Storage capacity cannot be scaled down after the instance is created | `number` | `2048` | no |
tags | List of tags associated with the Event Streams instance | `list(string)` | `[]` | no |
throughput | Throughput capacity in MB per second. Enterprise instances only. Options: 150, 300, 450 | `number` | `150` | no |
topics | List of topics. The lite plan allows only one topic | `list(object({...}))` | `[]` | no |
update_timeout | Update timeout value of the Event Streams module. Use 1h when updating an enterprise instance; add 1h for each level of non-default throughput and 30m for each level of non-default storage_size | `string` | `"1h"` | no |
### Outputs

Name | Description |
---|---|
crn | Event Streams instance CRN |
guid | Event Streams instance GUID |
id | Event Streams instance ID |
kafka_brokers_sasl | (Array of strings) Kafka brokers used for interacting with the Kafka native API |
kafka_http_url | The API endpoint for interacting with the Event Streams REST API |
You can report issues and request features for this module in GitHub issues in the module repo. See Report an issue or request a feature.
To set up your local development environment, see Local development setup in the project documentation.