@yannrouillard

Summary

This PR implements proactive rate limiting for the Auth0 Terraform Provider, mirroring the approach used in the Okta Terraform Provider. The implementation provides intelligent throttling to prevent hitting Auth0 API rate limits by monitoring usage and proactively sleeping when approaching capacity thresholds.

Key Features

  • Configurable Rate Limiting: New max_api_capacity parameter (1-100%) with environment variable support
  • Proactive Throttling: Prevents API limit violations by monitoring usage patterns
  • Auth0-Specific Integration: Supports Auth0's x-ratelimit-* headers and rate limit policy (a header-parsing sketch follows this list)
  • Comprehensive Endpoint Mapping: Maps API endpoints to appropriate rate limit buckets
  • Context-Aware: Supports request cancellation and timeout handling
  • High Test Coverage: 98.1% coverage for rate limiting, 93.5% for transport
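
To make the header support above concrete, here is a minimal sketch of how the x-ratelimit-* response headers could be parsed into a status value. The RateLimitStatus type and ParseRateLimitHeaders function are illustrative names, not necessarily what this PR uses:

package ratelimit

import (
    "net/http"
    "strconv"
    "time"
)

// RateLimitStatus is one snapshot of Auth0's rate limit headers.
// Field names are illustrative; the PR's actual struct may differ.
type RateLimitStatus struct {
    Limit     int       // x-ratelimit-limit: requests allowed in the current window
    Remaining int       // x-ratelimit-remaining: requests left in the current window
    Reset     time.Time // x-ratelimit-reset: when the window resets (Unix seconds)
}

// ParseRateLimitHeaders extracts the rate limit headers from a response.
// It returns ok=false if any header is missing or malformed.
func ParseRateLimitHeaders(h http.Header) (RateLimitStatus, bool) {
    limit, errLimit := strconv.Atoi(h.Get("X-Ratelimit-Limit"))
    remaining, errRemaining := strconv.Atoi(h.Get("X-Ratelimit-Remaining"))
    resetUnix, errReset := strconv.ParseInt(h.Get("X-Ratelimit-Reset"), 10, 64)
    if errLimit != nil || errRemaining != nil || errReset != nil {
        return RateLimitStatus{}, false
    }
    return RateLimitStatus{
        Limit:     limit,
        Remaining: remaining,
        Reset:     time.Unix(resetUnix, 0),
    }, true
}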

Configuration

provider "auth0" {
  domain           = var.auth0_domain
  client_id        = var.auth0_client_id
  client_secret    = var.auth0_client_secret
  max_api_capacity = 70  # Use only 70% of available rate limit capacity
}

Or via environment variable:

export AUTH0_MAX_API_CAPACITY=70

Implementation Details

  • Rate Limit Manager (internal/ratelimit): Tracks API usage per endpoint and bucket
  • Governed Transport (internal/transport): HTTP transport wrapper with throttling logic (a simplified sketch follows this list)
  • Provider Integration: Seamless integration with existing configuration system
  • Backward Compatible: Rate limiting is disabled by default (100% capacity)
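
As a rough illustration of how the governed transport described above could wrap the standard HTTP client (the GovernedTransport type and the limiter interface here are hypothetical names, not the PR's exact API), the wrapper waits on the rate limit manager before forwarding each request and feeds the response headers back into it afterwards:

package transport

import "net/http"

// limiter is the minimal behaviour the governed transport needs from the
// rate limit manager; the real interface in internal/ratelimit may differ.
type limiter interface {
    // Wait blocks until the bucket for this request is under the configured
    // capacity threshold, or returns an error if the context is cancelled.
    Wait(req *http.Request) error
    // Record updates bucket state from the response's x-ratelimit-* headers.
    Record(req *http.Request, resp *http.Response)
}

// GovernedTransport throttles outgoing requests before delegating to the
// underlying RoundTripper.
type GovernedTransport struct {
    Base    http.RoundTripper
    Limiter limiter
}

func (t *GovernedTransport) RoundTrip(req *http.Request) (*http.Response, error) {
    // Proactively sleep (or abort on context cancellation) if this endpoint's
    // bucket is close to the allowed share of Auth0's rate limit.
    if err := t.Limiter.Wait(req); err != nil {
        return nil, err
    }
    resp, err := t.Base.RoundTrip(req)
    if err != nil {
        return nil, err
    }
    // Feed the latest rate limit headers back into the manager.
    t.Limiter.Record(req, resp)
    return resp, nil
}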

Performance Impact

  • Zero overhead when disabled (default behavior)
  • Minimal latency when enabled: requests are only throttled when approaching limits
  • Intelligent sleep duration calculation based on Auth0's reset windows (sketched after this list)
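
One plausible shape for that sleep calculation, reusing the RateLimitStatus sketch from the Key Features section (the throttleDelay function and its exact formula are assumptions, not lifted from the PR): stay silent while usage is within the max_api_capacity share of the current window, otherwise wait until Auth0's reset time.

package ratelimit

import "time"

// throttleDelay returns how long to sleep before the next request.
// maxCapacity is the configured max_api_capacity percentage (1-100).
// Illustrative sketch only; the PR's actual logic may differ.
func throttleDelay(status RateLimitStatus, maxCapacity int, now time.Time) time.Duration {
    if maxCapacity >= 100 || status.Limit == 0 {
        return 0 // rate limiting disabled, or no header data seen yet
    }
    // Requests we allow ourselves to spend in this window.
    budget := status.Limit * maxCapacity / 100
    used := status.Limit - status.Remaining
    if used < budget {
        return 0 // still within our share of the window
    }
    // Over budget: wait until Auth0 resets the window.
    if delay := status.Reset.Sub(now); delay > 0 {
        return delay
    }
    return 0
}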

Test Plan

  • Unit tests for rate limit manager with comprehensive edge cases (an illustrative example follows this list)
  • Transport layer tests with various header formats and error conditions
  • Provider configuration validation and integration tests
  • Context cancellation and timeout handling tests
  • Regex pattern validation for Auth0 ID formats
  • Coverage analysis confirming high test coverage (98.1%/93.5%)
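
For instance, a table-driven unit test over the throttleDelay sketch above could look like the following (again purely illustrative, not the PR's actual test suite):

package ratelimit

import (
    "testing"
    "time"
)

func TestThrottleDelay(t *testing.T) {
    now := time.Now()
    reset := now.Add(time.Minute)
    cases := []struct {
        name        string
        status      RateLimitStatus
        maxCapacity int
        wantSleep   bool
    }{
        {"disabled at 100%", RateLimitStatus{Limit: 10, Remaining: 0, Reset: reset}, 100, false},
        {"under budget", RateLimitStatus{Limit: 10, Remaining: 8, Reset: reset}, 70, false},
        {"over budget", RateLimitStatus{Limit: 10, Remaining: 1, Reset: reset}, 70, true},
    }
    for _, tc := range cases {
        t.Run(tc.name, func(t *testing.T) {
            got := throttleDelay(tc.status, tc.maxCapacity, now)
            if (got > 0) != tc.wantSleep {
                t.Errorf("throttleDelay() = %v, want sleep = %v", got, tc.wantSleep)
            }
        })
    }
}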

Testing Commands

# Run rate limiting tests
go test ./internal/ratelimit/... -v -cover

# Run transport tests  
go test ./internal/transport/... -v -cover

# Run all tests
make test

Manual Testing

The implementation can be tested by:

  1. Setting max_api_capacity to a low value (e.g., 10%)
  2. Making rapid API calls through Terraform
  3. Observing the throttling logs and confirming that rate limits are respected

This implementation follows Auth0's rate limit policy documentation and provides a robust foundation for preventing API limit violations in large-scale Terraform deployments.

Commit Message

This implementation mirrors the rate limiting approach used in the Okta Terraform Provider, providing proactive throttling to prevent hitting Auth0 API rate limits.

Key features:
- Configurable max_api_capacity parameter (1-100%) with default 100%
- Proactive request throttling when approaching rate limit thresholds
- Support for Auth0's x-ratelimit-* headers and alternative formats
- Comprehensive endpoint mapping based on Auth0's rate limit policy
- Regex-based ID normalization for consistent bucket classification
- Context-aware request cancellation and timeout handling
- Extensive test coverage (98.1% ratelimit, 93.5% transport)

The rate limiting is disabled by default (100% capacity) and can be enabled
by setting max_api_capacity to a lower percentage or using the
AUTH0_MAX_API_CAPACITY environment variable.

Components added:
- internal/ratelimit: Core rate limit management and status tracking
- internal/transport: HTTP transport wrapper with throttling logic
- Provider configuration: max_api_capacity parameter
- Configuration integration: Rate-limited HTTP client setup
- Comprehensive test suites for both packages
- Usage example and documentation updates