Building Real-Time Dashboards with SignalR and .NET 8: Step by Step

Most SignalR tutorials teach you how to build a chat app. That's great for learning the basics, but it tells you nothing about what happens when you need to push 100K+ transactions per day to a live dashboard without melting your server.
1. The Problem: Why Naive Real-Time Dashboards Break
I've built real-time dashboards in production for financial systems — transaction monitoring, compliance dashboards, live reporting panels. The patterns I use are fundamentally different from what you'll find in a typical Medium tutorial.
The "obvious" approach to a real-time dashboard looks like this:
- Transaction comes in → save to database
- Immediately broadcast to all connected clients via SignalR
- Each client re-renders the chart
This works in a demo. At a peak of 10 transactions per second, it means:
- 10 SignalR broadcasts/second × N connected clients
- 10 Chart.js re-renders/second per client
- 10 database queries/second if you're computing metrics live
2. Architecture Overview
The system runs as a single Blazor Server application with four background services that form a processing pipeline: one generates transactions, one persists them, one pre-computes metrics, and one broadcasts batches to connected clients.
Each component has a single responsibility and communicates through well-defined channels. Let me break down the three patterns that make this work.
3. Pattern 1: Batched Broadcasting
This is the single most important architectural decision. Instead of broadcasting per-transaction, we collect transactions in a buffer and send them as a batch every 500ms.
Why 500ms?
Human perception on dashboards plateaus around 200–500ms. Below 200ms, users can't distinguish individual updates. Above 1000ms, it feels laggy. At 500ms we get:
- Max 2 broadcasts/second regardless of transaction volume
- Single serialization per batch instead of per-transaction
- Client updates the chart once per batch, not per-transaction
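A minimal sketch of that batching loop, using only the BCL. The names are mine, not the repo's; the broadcast delegate stands in for the SignalR IHubContext call:

```csharp
using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Channels;
using System.Threading.Tasks;

// Hypothetical batching loop: in the real app this would run inside a
// BackgroundService, and 'broadcast' would wrap IHubContext.Clients.All.
static async Task RunBatchBroadcasterAsync(
    ChannelReader<string> reader,
    Func<IReadOnlyList<string>, Task> broadcast,
    TimeSpan interval,
    CancellationToken ct)
{
    using var timer = new PeriodicTimer(interval);
    var buffer = new List<string>();
    while (await timer.WaitForNextTickAsync(ct))
    {
        while (reader.TryRead(out var tx)) buffer.Add(tx); // drain everything since last tick
        if (buffer.Count == 0) continue;                   // idle tick: skip the broadcast
        await broadcast(buffer.ToArray());                 // one serialization, one send
        buffer.Clear();
    }
}
```

An empty tick sends nothing, so an idle system costs zero broadcasts; a burst of thousands of transactions still costs at most two sends per second.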
4. Pattern 2: Pre-Computed Metrics
The dashboard shows metrics like total volume, success rate, TPS, and flagged transactions. Computing these from raw data on every refresh is a death sentence at scale.
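The fix is a loop that aggregates once per interval and caches the result. A sketch with assumed names: the aggregate delegate stands in for the real MySQL aggregation query, and publish for the cache write that broadcasters read from.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

// Hypothetical pre-compute loop: one aggregation query per tick, no matter
// how many clients are connected. The cached tuple is what gets broadcast.
static async Task RunMetricsLoopAsync(
    Func<Task<(decimal TotalVolume, double SuccessRate, int Flagged)>> aggregate,
    Action<(decimal TotalVolume, double SuccessRate, int Flagged)> publish,
    TimeSpan interval,
    CancellationToken ct)
{
    using var timer = new PeriodicTimer(interval);
    while (await timer.WaitForNextTickAsync(ct))
        publish(await aggregate());   // single query per tick, cached for everyone
}
```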
Instead, a background service pre-computes everything on a fixed interval and caches the results. The broadcaster reads from cache — zero database queries per client refresh. Whether you have 5 connected clients or 500, the database load is the same: one aggregation query every 500ms.
5. Pattern 3: Channel<T> as Internal Message Bus
The generator produces transactions. The persister writes them to the database. The broadcaster sends them to clients. These three services need to communicate without tight coupling. Channel<T> provides exactly this — an async-friendly, bounded producer-consumer queue with built-in backpressure.
Why Channel<T> over ConcurrentQueue?
- Backpressure: a bounded channel suspends the producer when the buffer is full — no unbounded memory growth
- Async enumeration: await foreach over the reader instead of polling loops
- Cancellation: native CancellationToken support
- Multiple readers: the persister and the broadcaster can both read from the same channel (each item goes to exactly one reader, so true fan-out takes a channel per consumer)
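A compressed sketch of the channel setup: bounded capacity, writes that suspend when full, and async enumeration on the consumer side. Capacity and names are illustrative.

```csharp
using System;
using System.Threading.Channels;
using System.Threading.Tasks;

// Bounded channel: when the buffer is full, WriteAsync suspends the producer
// (backpressure) instead of letting memory grow without limit.
var channel = Channel.CreateBounded<int>(new BoundedChannelOptions(1_000)
{
    FullMode = BoundedChannelFullMode.Wait   // suspend the producer rather than drop items
});

// Producer side (e.g. the transaction generator):
var producer = Task.Run(async () =>
{
    for (int i = 1; i <= 10; i++)
        await channel.Writer.WriteAsync(i);  // awaits if the buffer is full
    channel.Writer.Complete();               // signals consumers that the stream ended
});

// Consumer side (e.g. the persister): async enumeration, no polling loop.
int sum = 0;
await foreach (var tx in channel.Reader.ReadAllAsync())
    sum += tx;

await producer;
Console.WriteLine(sum); // prints 55 (1 + 2 + … + 10)
```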
6. The SignalR Hub: Keep It Thin
The hub itself does almost nothing — by design. All logic lives in the background services:
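A sketch of what a thin hub can look like. The interface, DTOs, and hub name here are illustrative, not necessarily the repo's:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR;

// Typed client contract: client method names become compile-checked calls
// (names here are assumptions, not the repo's actual identifiers).
public interface IDashboardClient
{
    Task ReceiveTransactionBatch(IReadOnlyList<TransactionDto> batch);
    Task ReceiveMetrics(MetricsDto metrics);
}

public sealed record TransactionDto(string Id, decimal Amount, bool Success);
public sealed record MetricsDto(decimal TotalVolume, double SuccessRate);

// The hub stays empty on purpose: background services push through an injected
// IHubContext<DashboardHub, IDashboardClient>; the hub only tracks connections.
public sealed class DashboardHub : Hub<IDashboardClient>
{
}
```

Background services inject IHubContext<DashboardHub, IDashboardClient> and call Clients.All.ReceiveTransactionBatch(...); the hub class itself never grows.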
The typed client interface gives compile-time safety — no magic strings for method names.
7. Database Strategy: Batch Writes
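The write path reduces to one trigger rule: flush at 100 rows or after 1 second, whichever fires first. A framework-free sketch of that rule (names are mine; the returned batch would go to EF Core or a bulk-insert API, and time is passed in so the logic stays testable):

```csharp
using System;
using System.Collections.Generic;

// Hypothetical flush trigger: emit a batch at 100 rows or after 1 second.
var pending = new List<string>();
var lastFlush = DateTime.UtcNow;

IReadOnlyList<string>? AddAndMaybeFlush(string row, DateTime now,
                                        int maxCount = 100, double maxSeconds = 1.0)
{
    pending.Add(row);
    if (pending.Count < maxCount && (now - lastFlush).TotalSeconds < maxSeconds)
        return null;                      // neither trigger fired yet
    var batch = pending.ToArray();        // hand this to the bulk insert
    pending.Clear();
    lastFlush = now;
    return batch;
}

// Count trigger: the 100th row produces a batch.
var t0 = DateTime.UtcNow;
IReadOnlyList<string>? flushed = null;
for (int i = 0; i < 100; i++)
    flushed = AddAndMaybeFlush($"row-{i}", t0);
Console.WriteLine(flushed?.Count); // prints 100
```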
Individual insert calls at 100 TPS create unnecessary overhead. Instead, the persister accumulates transactions and flushes them in bulk. Flush triggers: 100 transactions accumulated OR 1 second elapsed — whichever comes first. This gives roughly a 10x throughput improvement over individual inserts.
8. Client-Side: Blazor + Chart.js
The Blazor component connects to the hub and updates Chart.js through JavaScript interop:
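A sketch of the client wiring. The hub route, JS helper name, and class name are assumptions, not the repo's actual identifiers:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR.Client;
using Microsoft.JSInterop;

// Hypothetical client wrapper: in the repo this logic would live in a Blazor
// component's OnInitializedAsync with IJSRuntime injected by the framework.
public sealed class DashboardClient : IAsyncDisposable
{
    private readonly IJSRuntime _js;
    private HubConnection? _connection;

    public DashboardClient(IJSRuntime js) => _js = js;

    public async Task StartAsync()
    {
        _connection = new HubConnectionBuilder()
            .WithUrl("https://localhost:5001/dashboardHub") // assumed hub route
            .WithAutomaticReconnect()                        // retries with increasing delays
            .Build();

        // One interop call per batch; the client never re-renders per transaction.
        _connection.On<object[]>("ReceiveTransactionBatch", async batch =>
            await _js.InvokeVoidAsync("dashboard.appendBatch", (object)batch));

        await _connection.StartAsync();
    }

    public async ValueTask DisposeAsync()
    {
        if (_connection is not null) await _connection.DisposeAsync();
    }
}
```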
Key decisions on the client
- Keep only 100 recent transactions in memory — older ones drop off
- Chart.js sliding window of 5 minutes (max 300 data points)
- The chart update runs once per batch, not per transaction
- Automatic reconnection with exponential backoff
9. Scaling Beyond: What Changes at 500K, 1M+
The architecture in this repo handles ~100K daily transactions on a single instance. Here's what you'd change as you scale:
| Scale | Change | Why |
|---|---|---|
| 500K/day | MySQL date partitioning | Query performance on large tables |
| 500K/day | Redis for metrics cache | Share cache across multiple instances |
| 1M/day | SignalR Redis Backplane | Horizontal scaling of SignalR |
| 1M/day | MySQL read replica | Separate read/write load |
| 10M/day | Kafka replaces Channel<T> | Cross-service communication |
| 10M/day | Azure SignalR Service | Managed, auto-scaling WebSockets |
| 10M/day | ClickHouse/TimescaleDB | Purpose-built for time-series aggregation |
10. Performance Targets
| Metric | Target |
|---|---|
| Dashboard refresh latency | < 100ms |
| SignalR broadcast (hub → client) | < 50ms |
| Concurrent connections | 500+ (single instance) |
| Transaction throughput | 100+ TPS simulated |
| Memory steady state | < 512MB |
11. Run It Yourself
The dashboard starts generating simulated transactions immediately. Open the app and watch the metrics flow.
→ Access the live demo
12. Key Takeaways
- Never broadcast per-transaction. Batch at 500ms intervals — users can't tell the difference, your server can.
- Pre-compute everything. Dashboard metrics should come from cache, not live queries.
- Channel<T> is underrated. It's the perfect internal message bus for .NET — async, bounded, backpressure-aware.
- Keep the hub thin. Business logic belongs in services, not in the SignalR hub.
- Design for the next scale. The patterns in this repo work at 100K/day; the same patterns with different infrastructure work at 10M/day.
This architecture is based on patterns I've implemented in production for financial transaction monitoring systems. The source code is MIT licensed — clone it, learn from it, adapt it for your use case.
→ Full Source Code on GitHub
→ Contact — No sales pitch, just an honest technical conversation.