Blocklight follows a modular, event-driven architecture designed for high performance and extensibility.
## Component Overview

### EVM Blockchains
- Supports all EVM-compatible chains (Ethereum, Polygon, Arbitrum, Base, Optimism, Rootstock, etc.)
- Connects via RPC/WebSocket to blockchain nodes
- Real-time transaction monitoring
### Ingestion
- Blockchain Listeners: Connect to EVM nodes via WebSocket (preferred) or HTTP polling
- Handles multiple chains simultaneously
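The listener implementation itself isn't reproduced in this document. As a rough illustration of the pattern, a WebSocket subscription to new blocks using go-ethereum's `ethclient` might look like the sketch below (the endpoint URL is a placeholder and error handling is simplified):

```go
package main

import (
    "context"
    "log"

    "github.com/ethereum/go-ethereum/core/types"
    "github.com/ethereum/go-ethereum/ethclient"
)

func main() {
    // WebSocket endpoint (placeholder). Plain HTTP endpoints do not support
    // subscriptions, which is why WebSocket is the preferred transport.
    client, err := ethclient.Dial("wss://eth-mainnet.example.com/ws")
    if err != nil {
        log.Fatalf("dial: %v", err)
    }

    heads := make(chan *types.Header)
    sub, err := client.SubscribeNewHead(context.Background(), heads)
    if err != nil {
        log.Fatalf("subscribe: %v", err)
    }

    for {
        select {
        case err := <-sub.Err():
            log.Fatalf("subscription error: %v", err)
        case h := <-heads:
            // Fetch the full block and hand its transactions to the pipeline.
            block, err := client.BlockByHash(context.Background(), h.Hash())
            if err != nil {
                log.Printf("block fetch failed: %v", err)
                continue
            }
            log.Printf("block %v: %d txs", block.Number(), len(block.Transactions()))
        }
    }
}
```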
### Rule Engine
- Rule Loader: Parses YAML rules and validates syntax
- Rule Evaluator: Evaluates conditions against transactions using an expression engine
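Blocklight's loader code isn't shown here; a minimal sketch of loading and validating rules of the shape used in the examples further down this page, using `gopkg.in/yaml.v3`, could look like this (the `Rule` struct and the `enabled` field are assumptions):

```go
package rules

import (
    "fmt"
    "os"

    "gopkg.in/yaml.v3"
)

// Rule mirrors the YAML rule shape shown in the examples further down this
// page; the "enabled" field is an assumption.
type Rule struct {
    Name      string `yaml:"rule"`
    Condition string `yaml:"condition"`
    Enabled   *bool  `yaml:"enabled"`
}

// LoadRules parses a rules file and performs basic structural validation.
func LoadRules(path string) ([]Rule, error) {
    data, err := os.ReadFile(path)
    if err != nil {
        return nil, fmt.Errorf("read rules file: %w", err)
    }
    var rules []Rule
    if err := yaml.Unmarshal(data, &rules); err != nil {
        return nil, fmt.Errorf("parse rules YAML: %w", err)
    }
    for i, r := range rules {
        if r.Name == "" || r.Condition == "" {
            return nil, fmt.Errorf("rule %d: name and condition are required", i)
        }
    }
    return rules, nil
}
```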
### Analysis
- Transaction Analyzer: Fetches receipts, extracts function selectors from input data, and analyzes gas usage (each step only when enabled; see the sketch below)
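As a hedged illustration of those analysis steps (the `Analysis` struct and field names are invented for this sketch, not Blocklight's actual types):

```go
package analysis

import (
    "context"
    "encoding/hex"

    "github.com/ethereum/go-ethereum/core/types"
    "github.com/ethereum/go-ethereum/ethclient"
)

// Analysis holds the receipt-derived fields; the struct is illustrative.
type Analysis struct {
    Status    uint64
    GasUsed   uint64
    LogsCount int
    Selector  string
}

// analyzeTx fetches the receipt and extracts the 4-byte function selector
// from the transaction's input data.
func analyzeTx(ctx context.Context, client *ethclient.Client, tx *types.Transaction) (*Analysis, error) {
    receipt, err := client.TransactionReceipt(ctx, tx.Hash())
    if err != nil {
        return nil, err // the caller decides how to degrade
    }
    a := &Analysis{
        Status:    receipt.Status,
        GasUsed:   receipt.GasUsed,
        LogsCount: len(receipt.Logs),
    }
    // The function selector is the first 4 bytes of calldata, if present.
    if data := tx.Data(); len(data) >= 4 {
        a.Selector = "0x" + hex.EncodeToString(data[:4])
    }
    return a, nil
}
```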
### Output
- Alerter: Routes findings to configured channels (log, file, Slack, Discord, Email, Webhooks)
- Exporters: Formats findings for external systems (NDJSON for log aggregators, SARIF for CI/CD)
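For instance, the NDJSON export format is simply one JSON object per line; a minimal writer (with an illustrative `Finding` shape, not Blocklight's actual schema) might be:

```go
package export

import (
    "encoding/json"
    "io"
)

// Finding is an illustrative shape; Blocklight's actual finding schema may differ.
type Finding struct {
    Rule    string `json:"rule"`
    Chain   string `json:"chain"`
    TxHash  string `json:"tx_hash"`
    Message string `json:"message"`
}

// WriteNDJSON emits one JSON object per line, the framing most log
// aggregators expect.
func WriteNDJSON(w io.Writer, findings []Finding) error {
    enc := json.NewEncoder(w) // Encode appends a newline after each value
    for _, f := range findings {
        if err := enc.Encode(f); err != nil {
            return err
        }
    }
    return nil
}
```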
### API Layer
- gRPC Server: Core API for internal communication
- REST API: HTTP API for external integrations and dashboards
### Observability
- Prometheus Metrics: Performance and operational metrics
- Health Checks: System health monitoring
## Data Flow
- Ingestion: Blockchain listeners connect to EVM nodes and stream transactions in real-time
- Rule Loading: YAML rules are parsed, validated, and loaded into the rule engine
- Analysis: Transactions are analyzed by the transaction analyzer (parallel processing with worker pool)
- Evaluation: Rule evaluator checks conditions against analyzed transaction data (parallel evaluation)
- Output: Findings are routed to configured alert channels and exported in various formats
- Observability: Metrics and health checks provide operational visibility
Blocklight is designed for high-performance transaction processing with a focus on parallel execution and efficient resource utilization. However, performance ultimately depends on your RPC provider’s capabilities.
## Worker Pool Architecture
Blocklight uses a worker pool pattern to process transactions in parallel:
- Configurable Workers: Default 16 workers (configurable via `go_core.workers` in `config.yaml`)
- Parallel Processing: Each worker processes transactions independently from a shared channel
- Non-Blocking: Workers don’t block each other, allowing concurrent rule evaluation
```yaml
go_core:
  workers: 16  # Number of parallel rule evaluators (min: 1, max: 64)
```
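A simplified, self-contained sketch of the pattern (the `Tx` type and function names are placeholders, not Blocklight's internals):

```go
package main

import (
    "fmt"
    "sync"
)

// Tx is a placeholder for Blocklight's internal transaction representation.
type Tx struct{ Hash string }

// startWorkers launches n goroutines that independently drain a shared
// channel, so a slow rule evaluation in one worker never blocks the others.
func startWorkers(n int, txs <-chan Tx, evaluate func(Tx)) *sync.WaitGroup {
    var wg sync.WaitGroup
    for i := 0; i < n; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for tx := range txs {
                evaluate(tx) // rule evaluation happens here
            }
        }()
    }
    return &wg
}

func main() {
    txs := make(chan Tx, 128)
    wg := startWorkers(16, txs, func(tx Tx) { // 16 mirrors the go_core.workers default
        fmt.Println("evaluating", tx.Hash)
    })

    for i := 0; i < 5; i++ {
        txs <- Tx{Hash: fmt.Sprintf("0x%02x", i)}
    }
    close(txs)
    wg.Wait()
}
```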
**Performance Impact:**
- With fast RPC: ~5-20 transactions/second per worker
- Theoretical maximum: ~80-320 transactions/second (16 workers × 5-20 tx/sec each)
- Real-world: ~50-150 transactions/second (limited by RPC latency)
## Conditional Analysis
Blocklight only performs expensive RPC analysis when necessary:
- Smart Detection: Scans enabled rules to detect if analysis is required
- Analysis Fields: Only analyzes if rules use `tx.status`, `tx.gas_used`, `tx.logs_count`, or `tx.logs`
- Performance Gain: Rules using only basic fields (e.g., `tx.value`, `tx.from`, `tx.to`) skip analysis entirely
**Example:**
```yaml
# This rule does NOT require analysis (uses only basic fields)
- rule: High Value Transfer
  condition: tx.value > 100 ether and tx.to != null
  # No RPC calls needed!

# This rule DOES require analysis (uses receipt fields)
- rule: Failed High Gas Transaction
  condition: tx.status == 0 and tx.gas_used > 1000000
  # RPC call to fetch receipt required
```
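The detection logic itself is internal to Blocklight; conceptually it amounts to scanning enabled rule conditions for the analysis-only fields, roughly like the sketch below (reusing the illustrative `Rule` type from the loader sketch earlier):

```go
package rules

import "strings"

// analysisFields are the condition fields that require a receipt fetch.
var analysisFields = []string{"tx.status", "tx.gas_used", "tx.logs_count", "tx.logs"}

// needsAnalysis reports whether any enabled rule references a field that can
// only be populated by fetching the transaction receipt over RPC.
func needsAnalysis(rules []Rule) bool {
    for _, r := range rules {
        if r.Enabled != nil && !*r.Enabled {
            continue // disabled rules never trigger analysis
        }
        for _, field := range analysisFields {
            if strings.Contains(r.Condition, field) {
                return true
            }
        }
    }
    return false
}
```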
## RPC Dependency & Timeouts
RPC performance is the primary bottleneck:
- Timeout Protection: All RPC calls use configurable timeouts (default: 5 seconds)
- Non-Blocking Errors: Analysis failures don’t block rule evaluation
- Graceful Degradation: Rules that don’t require analysis continue working even if RPC fails
```yaml
analysis:
  transaction:
    timeout_seconds: 5  # RPC timeout (min: 1, max: 60)
```
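Conceptually, the receipt fetch from the analyzer sketch earlier is wrapped in a context carrying the configured timeout and allowed to fail softly, along these lines (the function name and logging are illustrative):

```go
package analysis

import (
    "context"
    "log"
    "time"

    "github.com/ethereum/go-ethereum/common"
    "github.com/ethereum/go-ethereum/core/types"
    "github.com/ethereum/go-ethereum/ethclient"
)

// fetchReceipt bounds the RPC call with the configured timeout. On failure it
// returns nil instead of an error, so rules that don't need receipt fields
// can still be evaluated against the transaction.
func fetchReceipt(client *ethclient.Client, txHash common.Hash, timeout time.Duration) *types.Receipt {
    ctx, cancel := context.WithTimeout(context.Background(), timeout)
    defer cancel()

    receipt, err := client.TransactionReceipt(ctx, txHash)
    if err != nil {
        log.Printf("receipt fetch failed (continuing without analysis): %v", err)
        return nil
    }
    return receipt
}
```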
**RPC Performance Tiers:**
| Tier | Latency | Throughput | Cost |
|---|---|---|---|
| Free (Public RPC) | 200-2000ms | ~1-5 tx/sec | Free |
| Paid (Alchemy/Infura) | 50-200ms | ~50-200 tx/sec | $50-500/month |
| Dedicated Node | 10-50ms | ~500+ tx/sec | $500+/month |
**Important:** Your RPC provider’s performance directly impacts Blocklight’s throughput. For production deployments, use paid RPC providers (Alchemy, Infura, QuickNode) or dedicated nodes for optimal performance.
## Optimizations
Blocklight includes several performance optimizations:
- Caching: Parsed transaction values (`tx.value`, `tx.gas_price`) are cached to avoid redundant parsing
- Condition Routing: Priority-based routing table replaces 20+ sequential string checks
- Pre-compiled Regex: Regular expressions are compiled once at startup
- Rule Filtering: Disabled rules are filtered before evaluation loops
- Shallow Copy: Metadata uses shallow copy (deep copy only for analysis map)
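As an illustration of the caching and pre-compiled regex points (not Blocklight's actual code), per-transaction value caching and startup-time regex compilation can look like this:

```go
package engine

import (
    "math/big"
    "regexp"
    "sync"
)

// Compiled once at startup rather than inside the evaluation hot path
// (the pattern itself is just an example).
var hexAddressRe = regexp.MustCompile(`^0x[0-9a-fA-F]{40}$`)

// txValues lazily parses and caches numeric fields for a single transaction,
// so every rule evaluated against it reuses the same parsed big.Int values.
type txValues struct {
    once     sync.Once
    raw      map[string]string // e.g. "tx.value", "tx.gas_price" as decimal strings
    value    *big.Int
    gasPrice *big.Int
}

func (c *txValues) parse() {
    c.once.Do(func() {
        c.value, _ = new(big.Int).SetString(c.raw["tx.value"], 10)
        c.gasPrice, _ = new(big.Int).SetString(c.raw["tx.gas_price"], 10)
    })
}

func (c *txValues) Value() *big.Int    { c.parse(); return c.value }
func (c *txValues) GasPrice() *big.Int { c.parse(); return c.gasPrice }
```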
## Scaling Recommendations
**For High-Volume Production:**
- Increase Workers: Set `workers` to 32-64 (if your RPC can handle it)
- Use Paid RPC: Upgrade to Alchemy/Infura paid tier or dedicated node
- WebSocket Preferred: Use `ws_url` instead of `rpc_url` for lower latency
- Batch Processing: Configure an appropriate `batch_size` for your RPC limits
- Monitor Metrics: Use Prometheus metrics to identify bottlenecks
**Example Production Config:**
```yaml
go_core:
  workers: 32  # Higher worker count for paid RPC

chains:
  ethereum:
    ws_url: wss://eth-mainnet.g.alchemy.com/v2/${ALCHEMY_API_KEY}  # WebSocket for lower latency
    batch_size: 100

analysis:
  transaction:
    timeout_seconds: 3  # Aggressive timeout for fast RPC
```
Monitor performance using Prometheus metrics:
- `blocklight_transactions_processed_total`: Total transactions processed
- `blocklight_findings_generated_total`: Total findings generated
- `blocklight_avg_evaluation_time_ms`: Average rule evaluation time
- `blocklight_workers_active`: Active worker count
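These metrics follow standard Prometheus conventions; for reference, registering and exposing equivalent metrics with `prometheus/client_golang` looks roughly like the sketch below (the metric names match the list above, but the label sets and registration details are illustrative, not Blocklight's actual code):

```go
package observability

import (
    "net/http"

    "github.com/prometheus/client_golang/prometheus"
    "github.com/prometheus/client_golang/prometheus/promauto"
    "github.com/prometheus/client_golang/prometheus/promhttp"
)

var (
    txProcessed = promauto.NewCounter(prometheus.CounterOpts{
        Name: "blocklight_transactions_processed_total",
        Help: "Total transactions processed",
    })
    findingsGenerated = promauto.NewCounter(prometheus.CounterOpts{
        Name: "blocklight_findings_generated_total",
        Help: "Total findings generated",
    })
    workersActive = promauto.NewGauge(prometheus.GaugeOpts{
        Name: "blocklight_workers_active",
        Help: "Active worker count",
    })
)

// serveMetrics exposes the default registry for Prometheus to scrape.
func serveMetrics(addr string) error {
    http.Handle("/metrics", promhttp.Handler())
    return http.ListenAndServe(addr, nil)
}
```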
See Observability for detailed metrics documentation.
## Next Steps