Your Cosmos blockchain infrastructure is now live and optimized. We have deployed and fine-tuned three high-performance full nodes (Neutron, dYdX, and Pryzm) specifically configured for your requirements.
All endpoints are secured with TLS and accessible via:

Neutron:
- RPC: https://neutron-rpc.bryanlabs.net
- WSS: wss://neutron-rpc.bryanlabs.net/websocket
- API: https://neutron-api.bryanlabs.net
- GRPC: neutron-grpc.bryanlabs.net:443

dYdX:
- RPC: https://dydx-rpc.bryanlabs.net
- WSS: wss://dydx-rpc.bryanlabs.net/websocket
- API: https://dydx-api.bryanlabs.net
- GRPC: dydx-grpc.bryanlabs.net:443

Pryzm:
- RPC: https://pryzm-rpc.bryanlabs.net
- WSS: wss://pryzm-rpc.bryanlabs.net/websocket
- API: https://pryzm-api.bryanlabs.net
- GRPC: pryzm-grpc.bryanlabs.net:443
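To spot-check the endpoints after handover, a quick smoke test against the Neutron hostnames could look like the sketch below; it assumes curl, jq, and grpcurl are installed locally, and that gRPC server reflection is enabled. The same pattern applies to the dYdX and Pryzm hostnames.

```bash
# CometBFT RPC: current sync state and latest block height
curl -s https://neutron-rpc.bryanlabs.net/status | jq '.result.sync_info'

# Cosmos REST API: application name and version reported by the node
curl -s https://neutron-api.bryanlabs.net/cosmos/base/tendermint/v1beta1/node_info \
  | jq '.application_version | {app_name, version}'

# gRPC over TLS: list exposed services (relies on gRPC server reflection)
grpcurl neutron-grpc.bryanlabs.net:443 list
```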
| Chain   | SDK Version              | Block Time | Status    |
|---------|--------------------------|------------|-----------|
| Neutron | v0.50.13-neutron-rpc     | 1.1s       | Optimized |
| dYdX    | v0.50.6-0.20250708185419 | 1.1s       | Optimized |
| Pryzm   | v0.47.17                 | 5.6s       | Optimized |
Your infrastructure is backed by enterprise-grade connectivity through our dedicated network:
Network Specifications:
- Autonomous System Number (ASN): 401711 (ARIN registered)
- Service Level Agreement: 99.9% uptime guarantee
- Port Speeds: 1 Gbps to 100 Gbps capabilities
- Routing: Multi-path routing for enhanced redundancy
Geographic Presence:
- Primary Peering Hub: Ashburn, VA (Equinix IX)
- Additional Locations: Reston, VA • Baltimore, MD • Silver Spring, MD
- Operations Center: 24/7 monitoring from Silver Spring, MD
Direct Peering Partners:
- Cloud Providers: AWS, Google Cloud, Microsoft Azure, IBM Cloud
- CDN Networks: Cloudflare, Apple, Netflix, Meta, GitHub, Cisco
- Internet Exchanges: Equinix IX (Ashburn), NYIIX, FCIX
- Satellite Connectivity: SpaceX Starlink integration
Technical Advantage: Enterprise-grade connectivity powered by DACS-IX peering fabric ensures optimal performance and redundancy for blockchain infrastructure.
Your infrastructure includes a comprehensive monitoring stack built on enterprise-grade observability tools, providing complete visibility into node performance, network activity, storage utilization, and service health.
Core Stack:
- Prometheus - Metrics collection and time-series database
- Grafana - Interactive dashboards and visualization
- Loki - Log aggregation and search
- Node Exporters - System-level metrics (CPU, memory, disk, network)
- CometBFT Exporters - Blockchain-specific metrics (block height, sync status, peers)
- HAProxy Exporter - Load balancer and service health metrics
- TopoLVM Exporter - Storage volume monitoring
Key Features:
- Dedicated Instance - Your own isolated Grafana instance at grafana-icf.bryanlabs.net
- Real-time Metrics - 30-second collection intervals for most metrics
- Interactive Dashboards - Filter, search, and drill down into specific data
- Historical Data - Full retention for performance trending analysis
1. HAProxy Services Dashboard (Most Important)
Purpose: Real-time service health monitoring
- Service Status - Up/down status for all RPC, API, and GRPC endpoints
- Request Activity - Live traffic and request patterns per service
- Error Detection - HTTP response codes and service issues
- Usage Analytics - Which endpoints are being used and by whom
- Session Tracking - Connection data and bandwidth per service
2. Node Health & Synchronization Dashboard
Purpose: Blockchain node health and synchronization
- Sync Status - Real-time block synchronization monitoring (positive = ahead, negative = behind); a quick CLI check is sketched below this list
- Block Monitoring - Block production, validation, and processing times
- Interactive Log Search - Real-time log filtering and search capabilities
- Peer Connectivity - Network peer status and connection health
- Transaction Monitoring - Mempool status and transaction processing
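Outside of Grafana, the same sync signal can be spot-checked from the command line against the public RPC; this is a sketch assuming curl, jq, and GNU date are available.

```bash
# A healthy node reports catching_up=false and a latest block only a few seconds old
STATUS=$(curl -s https://neutron-rpc.bryanlabs.net/status)
echo "$STATUS" | jq '.result.sync_info.catching_up'

# Approximate lag in seconds between wall-clock time and the latest block time
BLOCK_TIME=$(echo "$STATUS" | jq -r '.result.sync_info.latest_block_time')
echo "lag: $(( $(date +%s) - $(date -d "$BLOCK_TIME" +%s) ))s"
```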
3. Storage Dashboard
Purpose: Storage utilization and capacity planning
- Total Capacity - 11.6TB total storage across high-performance NVMe drives
- Usage Trends - Storage growth patterns per chain
- Performance Metrics - I/O performance and throughput
- Per-Chain Breakdown - Storage allocation by blockchain
4. Network Dashboard
Purpose: Network utilization monitoring
- Per-Chain Bandwidth - Network usage by each blockchain
- Peer Synchronization - Data transfer for blockchain synchronization
- Traffic Patterns - Ingress/egress network patterns
- Real-time Updates - 5-second refresh for live network monitoring
Proactive Issue Detection:
- Sync lag alerts
- Service health monitoring with immediate visibility
Performance Optimization:
- Monitor endpoint performance
- Track resource utilization trends
Operational Excellence:
- 24/7 monitoring
- Historical data
- Complete transparency into infrastructure performance
Based on your feedback about needing event indexing, week-long data retention, and heavy usage of the GetTxResponse, GetTxResponsesForEvent, and BankQueryBalance endpoints, we've made specific optimizations to ensure optimal performance for your application.
Your Need: "We rely on event indexing as well"
Our Optimization:
```toml
# Targeted event indexing for applications (app.toml)
index-events = [
  "tx", "message.action", "message.sender", "message.module",
  "transfer.recipient", "transfer.sender", "transfer.amount",
  "coin_spent.spender", "coin_received.receiver", "coin_received.amount"
]
```
Why: We've configured indexing for the events applications most commonly need (transfers, message routing, and balance changes) rather than indexing everything. These optimizations can be adjusted as needed; see Future Optimization Opportunities below for additional customization options.
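With these keys indexed, event-based lookups can go straight to the RPC's tx_search endpoint. A sketch (the recipient address is a hypothetical placeholder; curl and jq assumed):

```bash
# Count recent transfers to a given address using the indexed transfer.recipient key
# (neutron1exampleaddress is a placeholder, not a real account)
curl -s -G 'https://neutron-rpc.bryanlabs.net/tx_search' \
  --data-urlencode "query=\"transfer.recipient='neutron1exampleaddress'\"" \
  --data-urlencode 'per_page=5' --data-urlencode 'order_by="desc"' \
  | jq '.result.total_count'
```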
Your Need: "Would be great if we could retain a week of data"
Our Optimization:
```
# Custom pruning for 1-week retention (chain-specific)

# Fast chains (1s blocks):
pruning:
  strategy: custom
  keepRecent: 604800
  interval: 5000
  minRetainBlocks: 604800

# Slower chains (6s blocks):
pruning-keep-recent = 100800   # 1 week at 6s blocks = 100,800 blocks
pruning-interval = 1000        # Less frequent pruning needed
```
Why: We pre-calculated exact block retention by analyzing the last 1000 blocks for each chain to determine average block times. Fast chains (1s blocks) need 604,800 blocks for one week, while slower chains (6s blocks) only need 100,800 blocks. This ensures precise 7-day data retention regardless of chain speed.
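The retention window can be verified from the outside by comparing the earliest and latest heights the node still serves; a sketch assuming curl and jq (multiply the height span by the ~1.1s block time from the table above to get the window in seconds):

```bash
# Earliest and latest block heights currently retained by the Neutron node
curl -s https://neutron-rpc.bryanlabs.net/status \
  | jq '.result.sync_info | {earliest_block_height, latest_block_height}'
# (latest - earliest) x block time should come out to roughly 7 days (~604,800s)
```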
Your Need: "We use three endpoints heavily: GetTxResponse, GetTxResponsesForEvent, and BankQueryBalance"
Our Optimizations:
For GetTxResponse (Transaction Lookups):
```toml
# Faster transaction queries
inter-block-cache = true
iavl-cache-size = 3000000   # Optimized for 1s blocks
```
For GetTxResponsesForEvent (Event-based Queries):
```toml
# Faster event queries with proper indexing
query-gas-limit = 2000000000   # Handle complex event queries
# + Event indexing configuration above
```
For BankQueryBalance (Balance Queries):
```toml
# IAVL optimizations for state queries
iavl-disable-fastnode = false   # Keep fastnode for faster balance lookups
iavl-lazy-loading = false       # Immediate loading for faster queries
```
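For reference, the three heavy endpoints map onto the REST API as shown below. This is a sketch: the tx hash and address are hypothetical placeholders, and the event-query parameter name differs by SDK version (events on v0.47 chains such as Pryzm, query on v0.50 chains such as Neutron and dYdX).

```bash
API=https://neutron-api.bryanlabs.net

# GetTxResponse: single transaction lookup by hash (placeholder hash)
curl -s "$API/cosmos/tx/v1beta1/txs/PLACEHOLDER_TX_HASH" | jq '.tx_response.code'

# GetTxResponsesForEvent: transactions matching an event (v0.50-style query parameter)
curl -s "$API/cosmos/tx/v1beta1/txs?query=transfer.recipient='neutron1exampleaddress'&limit=5" \
  | jq '.total'

# BankQueryBalance: balance of a single denom for an account (placeholder address)
curl -s "$API/cosmos/bank/v1beta1/balances/neutron1exampleaddress/by_denom?denom=untrn" \
  | jq '.balance'
```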
Your Problem: "We've had issues with lagging nodes and general downtime"
Our Optimizations:
Mempool Optimization:
```toml
[mempool]
max-txs = 15000
recheck = true
keep-invalid-txs-in-cache = false
size = 15000
max_txs_bytes = 1073741824
cache_size = 30000
```
Network Performance:
```toml
# Fast chains (1s blocks) - optimized for high throughput
send_rate = 5120000
recv_rate = 5120000
timeout_broadcast_tx_commit = "10s"

# Slower chains (6s blocks) - conservative settings
send_rate = 2048000
recv_rate = 2048000
timeout_broadcast_tx_commit = "60s"
```
Connection Management:
```toml
# Optimized for your current needs with room for growth
max_subscription_clients = 100
max_open_connections = 100
experimental_close_on_slow_client = true

# API optimizations
max-open-connections = 1500
rpc-read-timeout = 15
rpc-write-timeout = 15
```
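These reliability settings can be sanity-checked externally, since peer count and mempool backlog are both exposed by the public RPC (a sketch assuming curl and jq):

```bash
# Peer count: lagging nodes often correlate with few or flapping peers
curl -s https://neutron-rpc.bryanlabs.net/net_info | jq '.result.n_peers'

# Mempool backlog: number and total bytes of unconfirmed transactions
curl -s https://neutron-rpc.bryanlabs.net/num_unconfirmed_txs | jq '.result | {n_txs, total_bytes}'
```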
Current Status: Infrastructure uses optimized exact height queries achieving <50ms response times.
For Advanced Use Cases: If applications require range queries (tx.height>X), we can implement targeted optimizations; a sample range query follows the list below. Several strategies are available:
- IAVL Cache Tuning - Immediate performance boost
- PostgreSQL Indexer - Dramatic improvement for range queries
- Selective Event Indexing - Reduced index size optimization
- Resource Scaling - Enhanced compute if needed
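For context, a height-range query against the current setup looks like the sketch below (the heights are arbitrary placeholders; curl and jq assumed):

```bash
# tx_search over a block-height range (placeholder heights)
curl -s -G 'https://neutron-rpc.bryanlabs.net/tx_search' \
  --data-urlencode 'query="tx.height>12000000 AND tx.height<12000100"' \
  --data-urlencode 'per_page=10' \
  | jq '.result.total_count'
```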
Please let us know if you notice any issues or want different tuning, and we'll update ASAP. We're confident that our deep knowledge of the Cosmos SDK and our ability to iterate quickly will ensure you can always query the data you need from nodes that remain available and responsive.
Key Advantages:
- Targeted optimizations based on your specific endpoints
- Fast iteration - changes deployed quickly
- Performance monitoring via our benchmark tool
- Resource efficient - optimized for multi-chain environment
- Measurable results - clear performance metrics
If you encounter any issues or have questions:
- Telegram: Contact Dan Bryan directly in our existing Telegram channel
- GitHub Issues: Create an issue in this repository and tag @danb; I'll resolve it promptly
Response Times:
- Telegram: Usually within a few hours during business hours
- GitHub Issues: Typically resolved within 24 hours
- Critical Issues: Immediate response via Telegram
As part of your delivery, we've included a custom benchmark tool that validates the performance optimizations and confirms your infrastructure is operating at peak efficiency.
Based on your application requirements:
- Data Retention - Confirms 1+ week of historical blockchain data availability
- GetTxResponse - Individual transaction lookup latency and functionality
- GetTxResponsesForEvent - Event-based query performance + indexing verification
- BankQueryBalance - Account balance query response times
- Event Indexing - Validates application-relevant events are properly indexed
- Protocol Coverage - Tests RPC, REST API, and GRPC endpoints
Prerequisites:
- Go 1.23+ - Install Go
- Git - For cloning the repository
Installation & Usage:
```bash
# Clone the repository
git clone https://github.com/bryanlabs/icf-nodes
cd icf-nodes

# Build both tools
make build

# Test individual chains with benchmark tool
./benchmark neutron
./benchmark dydx
./benchmark pryzm

# Test all enabled chains
./benchmark all

# Or run directly with Go
go run ./cmd/benchmark all

# Run stress tests
./stress neutron 100 60          # 100 workers for 60 seconds
go run ./cmd/stress pryzm 50 30  # Or run directly
```
Configuration:
The benchmark tool uses the included config.yaml file, which defines:
- Chain endpoints (RPC, API, GRPC, WebSocket URLs)
- Block times (pre-calculated from chain analysis)
- Performance thresholds (pass/warn/fail criteria)
- Chain-specific optimizations (SDK versions, query styles)
This repository includes two separate testing tools:
- cmd/benchmark/ - Performance benchmarking tool
- cmd/stress/ - Aggressive load testing tool
The tools are organized in separate directories to avoid naming conflicts:
```bash
# Build both tools
make build

# Run the benchmark tool
make benchmark ARGS=neutron
# or directly:
go run ./cmd/benchmark neutron
./benchmark neutron              # If already built

# Run the stress test tool
make stress ARGS='neutron 100 60'
# or directly:
go run ./cmd/stress neutron 100 60
./stress neutron 100 60          # If already built
```
Project Structure:
```
icf-nodes/
├── cmd/
│   ├── benchmark/
│   │   └── main.go        # Benchmark tool
│   └── stress/
│       └── main.go        # Stress test tool
├── config.yaml            # Benchmark configuration
├── stress-config.yaml     # Stress test configuration
└── Makefile               # Build automation
```