Hedera Stats is a PostgreSQL-based analytics platform that provides quantitative statistical measurements for the Hedera network. The platform calculates and stores metrics using SQL functions and procedures, leveraging open-source methodologies, Hedera mirror node data, third-party data sources, and Hgraph's GraphQL API. These statistics include network performance metrics, NFT analytics, account activities, and economic indicators, enabling transparent and consistent analysis of the Hedera ecosystem.
All metrics are pre-computed and stored in the `ecosystem.metric` table, with automated updates via pg_cron and visualization through Grafana dashboards.
- PostgreSQL database (v14+) with the following extensions:
  - `pg_http` - for external API calls (HBAR price data, etc.)
  - `pg_cron` - for automated metric updates (requires superuser privileges)
  - `timestamp9` - for Hedera's nanosecond-precision timestamps
- Hedera Mirror Node or access to Hgraph's GraphQL API
- Prometheus (`promtool`) for `avg_time_to_consensus` (view docs)
- DeFiLlama API for decentralized finance metrics (view docs)
- Grafana (optional) for visualization dashboards
Clone this repository:

```sh
git clone https://github.com/hgraph-io/hedera-stats.git
cd hedera-stats
```
Install Prometheus CLI (`promtool`):

```sh
curl -L -O https://github.com/prometheus/prometheus/releases/download/v3.1.0/prometheus-3.1.0.linux-amd64.tar.gz
tar -xvf prometheus-3.1.0.linux-amd64.tar.gz
# one way to add the tool to the PATH
cp prometheus-3.1.0.linux-amd64/promtool /usr/bin
```
Set up your database:

- Create schema and tables:

  ```sh
  # Execute the setup script
  psql -d your_database -f src/setup/up.sql
  ```

- Load metric functions and procedures:
  - Execute SQL scripts from the `src/metrics/` directories
  - Load job procedures from `src/jobs/procedures/`
  - Initialize metric descriptions from `src/metric_descriptions.sql`
Configure environment variables (example `.env`):

```sh
DATABASE_URL="postgresql://user:password@localhost:5432/hedera_stats"
HGRAPH_API_KEY="your_api_key"
```
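As a convenience, the `DATABASE_URL` connection string can be split into individual connection parameters with the standard library alone. This is an illustrative helper, not part of the repository; the URL below simply mirrors the `.env` example above.

```python
# Hypothetical helper: split a postgresql:// connection string (as used in
# DATABASE_URL) into its component parts using only the standard library.
from urllib.parse import urlparse

def parse_database_url(url: str) -> dict:
    """Break a postgresql:// URL into connection parameters."""
    parts = urlparse(url)
    return {
        "user": parts.username,
        "password": parts.password,
        "host": parts.hostname,
        "port": parts.port,
        "dbname": parts.path.lstrip("/"),
    }

params = parse_database_url(
    "postgresql://user:password@localhost:5432/hedera_stats"
)
```

The resulting dictionary maps directly onto the keyword arguments most PostgreSQL client libraries accept.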
Schedule automated updates:

- pg_cron for metric updates:

  ```sh
  # Edit src/jobs/pg_cron_metrics.sql
  # Replace <database_name> and <database_user> placeholders
  psql -d your_database -f src/jobs/pg_cron_metrics.sql
  ```

- Time-to-consensus updates (if using Prometheus), added via `crontab -e`:

  ```
  1 * * * * cd /path/to/hedera-stats/src/time-to-consensus && bash ./run.sh >> ./.raw/cron.log 2>&1
  ```
```
hedera-stats/
├── src/
│   ├── dashboard/           # Grafana dashboards and SQL queries
│   ├── jobs/                # Automated data loading and scheduling
│   ├── metrics/             # Metric calculation SQL functions
│   ├── setup/               # Database schema setup
│   └── time-to-consensus/   # Consensus time metrics ETL
├── CLAUDE.md                # AI assistant guidance
├── LICENSE
└── README.md
```
- `ecosystem.metric` - Central table storing all calculated metrics with time ranges
- `ecosystem.metric_total` - Standard return type for metric functions: `(int8range, total)`
  - `int8range`: PostgreSQL range type for timestamp boundaries
  - `total`: bigint value representing the metric count/value
- `ecosystem.metric_description` - Metadata and descriptions for each metric
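To make the `int8range` boundaries concrete: given the nanosecond-precision timestamps mentioned in the prerequisites (`timestamp9`), a plausible reading of a `metric_total` row is a pair of UNIX-epoch nanosecond bounds plus a bigint total. The sketch below is illustrative only and assumes that interpretation; the exact encoding is defined by the repo's SQL.

```python
# Illustrative sketch (not part of the repo): interpret an
# ecosystem.metric_total-style row, assuming the int8range bounds are
# UNIX-epoch nanoseconds. Note: Python datetimes carry only microsecond
# precision, so sub-microsecond detail is dropped.
from datetime import datetime, timezone

def ns_to_datetime(ns: int) -> datetime:
    """Convert a nanosecond epoch timestamp to an aware UTC datetime."""
    return datetime.fromtimestamp(ns / 1_000_000_000, tz=timezone.utc)

# Example row: a one-hour window starting 2024-01-01T00:00:00Z, total = 1234
lower_ns = 1_704_067_200_000_000_000
upper_ns = lower_ns + 3_600 * 1_000_000_000
total = 1234

window = (ns_to_datetime(lower_ns), ns_to_datetime(upper_ns))
```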
- External Data Sources → PostgreSQL via `pg_http` or mirror node tables
- Metric Functions → Calculate metrics using SQL/PL/pgSQL
- Job Procedures → Orchestrate metric loading via stored procedures
- `pg_cron` Jobs → Automate execution on schedule
- `ecosystem.metric` table → Store pre-computed results
- Grafana Dashboards → Visualize metrics via SQL queries
Metrics categories include:
- Accounts & Network Participants
- NFT-specific Metrics
- Network Performance & Economic Metrics
View all metrics & documentation →
```sql
-- Query the pre-computed metrics from the ecosystem.metric table
SELECT *
FROM ecosystem.metric
WHERE name = '<metric_name>'
ORDER BY timestamp_range DESC
LIMIT 20;

-- Test a metric function directly
SELECT * FROM ecosystem.metric_activity_active_accounts('mainnet', 'hour');

-- Check job status
SELECT * FROM cron.job_run_details ORDER BY start_time DESC LIMIT 10;
```
Use Grafana to visualize metrics:

- Import `Hedera_KPI_Dashboard.json` from `src/dashboard`.
- SQL queries provided in the same directory serve as data sources.
Query available metrics dynamically via the GraphQL API (test in our developer playground):

```graphql
query AvailableMetrics {
  ecosystem_metric(distinct_on: name) {
    name
    description {
      description
      methodology
    }
  }
}
```
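The same query can also be issued programmatically over HTTP. The sketch below builds such a request with the standard library only; the endpoint URL and `X-API-KEY` header name are assumptions for illustration (only the `hgraph.io` host is mentioned in this README), so check Hgraph's API documentation for the real values.

```python
# Sketch of preparing the AvailableMetrics query as an HTTP POST.
# The endpoint path and auth header name are assumptions -- consult
# Hgraph's API docs before use. No network I/O is performed here.
import json
import urllib.request

QUERY = """
query AvailableMetrics {
  ecosystem_metric(distinct_on: name) {
    name
    description {
      description
      methodology
    }
  }
}
"""

def build_request(endpoint: str, api_key: str) -> urllib.request.Request:
    """Assemble a GraphQL POST request; the caller sends it with urlopen()."""
    payload = json.dumps({"query": QUERY}).encode()
    return urllib.request.Request(
        endpoint,
        data=payload,
        headers={
            "Content-Type": "application/json",
            "X-API-KEY": api_key,  # assumed header name
        },
        method="POST",
    )

req = build_request("https://hgraph.io/graphql", "your_api_key")  # hypothetical URL
```

Sending the request is then a matter of `urllib.request.urlopen(req)` and decoding the JSON response body.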
- Verify you're querying the correct API endpoint:
  - Staging environment (`hgraph.dev`) may have incomplete data.
  - Production endpoint (`hgraph.io`) requires an API key.
- Use broader granularity (day/month) for extensive periods.
- Limit result size with `limit` and `order_by`.
- Cache frequently accessed data.
- Full Hedera Stats Documentation →
- Hedera Mirror Node Docs
- Hedera Transaction Result Codes
- Hedera Transaction Types
- DeFiLlama API Documentation
- PostgreSQL Documentation
We welcome contributions!
- Fork this repository.
- Create your feature branch (`git checkout -b feature/new-metric`).
- Commit changes (`git commit -am 'Add new metric'`).
- Push to the branch (`git push origin feature/new-metric`).
- Submit a Pull Request detailing your changes.