Component Overview
EVM Blockchains
- Supports all EVM-compatible chains (Ethereum, Polygon, Arbitrum, Base, Optimism, Rootstock, etc.)
- Connects via RPC/WebSocket to blockchain nodes
- Real-time transaction monitoring
Core Components
- Blockchain Listeners: Connect to EVM nodes via WebSocket (preferred) or HTTP polling, handling multiple chains simultaneously
- Rule Loader: Parses YAML rules and validates syntax
- Rule Evaluator: Evaluates conditions against transactions using expression engine
- Transaction Analyzer: Fetches receipts (if enabled), extracts function selectors from input data (if enabled), analyzes gas usage (if enabled)
- Alerter: Routes findings to configured channels (log, file, Slack, Discord, Email, Webhooks)
- Exporters: Formats findings for external systems (NDJSON for log aggregators, SARIF for CI/CD)
- gRPC Server: Core API for internal communication
- REST API: HTTP API for external integrations and dashboards
- Prometheus Metrics: Performance and operational metrics
- Health Checks: System health monitoring
Data Flow
- Ingestion: Blockchain listeners connect to EVM nodes and stream transactions in real-time
- Rule Loading: YAML rules are parsed, validated, and loaded into the rule engine
- Analysis: Transactions are analyzed by the transaction analyzer (parallel processing with worker pool)
- Evaluation: Rule evaluator checks conditions against analyzed transaction data (parallel evaluation)
- Output: Findings are routed to configured alert channels and exported in various formats
- Observability: Metrics and health checks provide operational visibility
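The Rule Loading step above consumes YAML rules. The sketch below shows what such a rule might look like; the `tx.value` field name appears elsewhere in these docs, but the surrounding schema (`id`, `severity`, `conditions`, `alert`) is assumed here for illustration and may differ from Blocklight's actual rule format:

```yaml
# Illustrative rule shape -- only the tx.* field names come from these docs.
id: large-transfer
description: Flag native transfers above 100 ETH
enabled: true
severity: high
conditions:
  - tx.value > 100000000000000000000   # 100 ETH in wei
alert:
  channels: [slack, log]
```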
Performance Architecture
Blocklight is designed for high-performance transaction processing, with a focus on parallel execution and efficient resource utilization. However, performance ultimately depends on your RPC provider’s capabilities.
Worker Pool Architecture
Blocklight uses a worker pool pattern to process transactions in parallel:
- Configurable Workers: Default 16 workers (configurable via `go_core.workers` in `config.yaml`)
- Parallel Processing: Each worker processes transactions independently from a shared channel
- Non-Blocking: Workers don’t block each other, allowing concurrent rule evaluation
Expected throughput:
- With fast RPC: ~5-20 transactions/second per worker
- Theoretical maximum: ~80-320 transactions/second (16 workers)
- Real-world: ~50-150 transactions/second (limited by RPC latency)
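The shared-channel worker pool described above can be sketched in Go. `Tx` and `runWorkers` are illustrative names for this sketch, not Blocklight's actual API:

```go
package main

import (
	"fmt"
	"sync"
)

// Tx is a hypothetical transaction type; the real type carries far more fields.
type Tx struct{ Hash string }

// runWorkers fans transactions out over n workers that all read from one
// shared channel. A slow transaction only stalls its own goroutine, so the
// rest of the pool keeps evaluating rules concurrently.
func runWorkers(n int, txs []Tx) int {
	jobs := make(chan Tx)
	var wg sync.WaitGroup
	var mu sync.Mutex
	processed := 0

	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for tx := range jobs {
				// Rule evaluation would happen here.
				_ = tx
				mu.Lock()
				processed++
				mu.Unlock()
			}
		}()
	}

	for _, tx := range txs {
		jobs <- tx
	}
	close(jobs) // workers drain the channel, then exit their range loops
	wg.Wait()
	return processed
}

func main() {
	fmt.Println(runWorkers(16, make([]Tx, 100))) // prints 100
}
```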
Conditional Analysis
Blocklight only performs expensive RPC analysis when necessary:
- Smart Detection: Scans enabled rules to detect whether analysis is required
- Analysis Fields: Only analyzes if rules use `tx.status`, `tx.gas_used`, `tx.logs_count`, or `tx.logs`
- Performance Gain: Rules using only basic fields (e.g., `tx.value`, `tx.from`, `tx.to`) skip analysis entirely
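The detection pass can be sketched as a scan over rule conditions for the four analysis fields listed above. This is a simplified substring check; the real engine presumably inspects parsed expressions, and `needsAnalysis` is an assumed name:

```go
package main

import (
	"fmt"
	"strings"
)

// analysisFields are the condition fields that require an extra RPC fetch
// (the receipt); the names come from the list above.
var analysisFields = []string{"tx.status", "tx.gas_used", "tx.logs_count", "tx.logs"}

// needsAnalysis reports whether any rule condition references a field that
// can only be filled by additional RPC calls. If none do, the whole
// analysis step is skipped.
func needsAnalysis(conditions []string) bool {
	for _, cond := range conditions {
		for _, field := range analysisFields {
			if strings.Contains(cond, field) {
				return true
			}
		}
	}
	return false
}

func main() {
	fmt.Println(needsAnalysis([]string{"tx.value > 1000000"})) // false: basic fields only
	fmt.Println(needsAnalysis([]string{"tx.status == 0"}))     // true: needs the receipt
}
```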
RPC Dependency & Timeouts
RPC performance is the primary bottleneck:
- Timeout Protection: All RPC calls use configurable timeouts (default: 5 seconds)
- Non-Blocking Errors: Analysis failures don’t block rule evaluation
- Graceful Degradation: Rules that don’t require analysis continue working even if RPC fails
| Tier | Latency | Throughput | Cost |
|---|---|---|---|
| Free (Public RPC) | 200-2000ms | ~1-5 tx/sec | Free |
| Paid (Alchemy/Infura) | 50-200ms | ~50-200 tx/sec | $50-500/month |
| Dedicated Node | 10-50ms | ~500+ tx/sec | $500+/month |
Optimizations
Blocklight includes several performance optimizations:
- Caching: Parsed transaction values (`tx.value`, `tx.gas_price`) are cached to avoid redundant parsing
- Condition Routing: A priority-based routing table replaces 20+ sequential string checks
- Pre-compiled Regex: Regular expressions are compiled once at startup
- Rule Filtering: Disabled rules are filtered before evaluation loops
- Shallow Copy: Metadata uses shallow copy (deep copy only for analysis map)
Scaling Recommendations
For High-Volume Production:
- Increase Workers: Set `workers: 32-64` (if your RPC can handle it)
- Use Paid RPC: Upgrade to an Alchemy/Infura paid tier or a dedicated node
- WebSocket Preferred: Use `ws_url` instead of `rpc_url` for lower latency
- Batch Processing: Configure an appropriate `batch_size` for your RPC limits
- Monitor Metrics: Use Prometheus metrics to identify bottlenecks
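A high-volume tuning of `config.yaml` might look like the fragment below. The key names (`go_core.workers`, `ws_url`, `batch_size`) come from these docs, but the nesting and the example values are assumptions to illustrate the recommendations above:

```yaml
# Illustrative high-volume tuning -- structure and values are assumptions.
go_core:
  workers: 32            # raise only if your RPC tier can absorb the load
chains:
  - name: ethereum
    ws_url: wss://eth-mainnet.example.com/ws   # prefer WebSocket over rpc_url
    batch_size: 100                            # match your provider's limits
```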
Performance Monitoring
Monitor performance using Prometheus metrics:
- `blocklight_transactions_processed_total`: Total transactions processed
- `blocklight_findings_generated_total`: Total findings generated
- `blocklight_avg_evaluation_time_ms`: Average rule evaluation time
- `blocklight_workers_active`: Active worker count
Next Steps
Getting Started
Install Blocklight and create your first detection rule.
Deployment
Deploy Blocklight in production with Docker.
Configuration
Configure Blocklight for your environment.
Observability
Set up monitoring and metrics.