
Building Real-Time Dashboards with SignalR and .NET 8: Step by Step


Most SignalR tutorials teach you how to build a chat app. That's great for learning the basics, but it tells you nothing about what happens when you need to push 100K+ transactions per day to a live dashboard without melting your server.


1. The Problem: Why Naive Real-Time Dashboards Break

I've built real-time dashboards in production for financial systems — transaction monitoring, compliance dashboards, live reporting panels. The patterns I use are fundamentally different from what you'll find in a typical Medium tutorial.

The "obvious" approach to a real-time dashboard looks like this:

  • Transaction comes in → save to database
  • Immediately broadcast to all connected clients via SignalR
  • Each client re-renders the chart

At 10 transactions per second, this means:

  • 10 SignalR broadcasts/second × N connected clients
  • 10 Chart.js re-renders/second per client
  • 10 database queries/second if you're computing metrics live

At 100 TPS? That's 100 serializations, 100 × N network writes, and 100 chart renders per second. The browser freezes, the server chokes, and your dashboard becomes a slideshow. The fix isn't "use a faster server." It's architecture.

2. Architecture Overview

The system runs as a single Blazor Server application with four background services that form a processing pipeline:
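To make the pipeline concrete, here is a minimal Program.cs sketch of how the pieces could be wired up. All type names (Transaction, TransactionGeneratorService, TransactionPersisterService, MetricsAggregatorService, TransactionBatchBroadcaster, DashboardHub) are illustrative stand-ins used throughout the sketches in this article, not necessarily the identifiers in the repo.

```csharp
// Program.cs sketch: one bounded channel as the internal bus, four hosted services.
using System.Threading.Channels;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddRazorComponents()
    .AddInteractiveServerComponents();       // Blazor Server
builder.Services.AddSignalR();
builder.Services.AddMemoryCache();
// DbContext registration (e.g. MySQL via EF Core) omitted for brevity.

// Bounded channel used as the internal message bus (Pattern 3).
var channel = Channel.CreateBounded<Transaction>(new BoundedChannelOptions(10_000)
{
    FullMode = BoundedChannelFullMode.Wait   // block producers instead of growing memory
});
builder.Services.AddSingleton(channel);
builder.Services.AddSingleton(channel.Reader);
builder.Services.AddSingleton(channel.Writer);

// The four background services that form the pipeline.
builder.Services.AddHostedService<TransactionGeneratorService>();   // simulates transactions
builder.Services.AddHostedService<TransactionPersisterService>();   // batch writes (Section 7)
builder.Services.AddHostedService<MetricsAggregatorService>();      // pre-computed metrics (Pattern 2)
builder.Services.AddHostedService<TransactionBatchBroadcaster>();   // batched SignalR pushes (Pattern 1)

var app = builder.Build();

app.MapHub<DashboardHub>("/dashboardHub");
// (Blazor component endpoints omitted for brevity.)

app.Run();

// Illustrative transaction shape reused by the later sketches.
public sealed record Transaction(
    Guid Id, decimal Amount, bool Succeeded, bool Flagged, DateTime Timestamp);
```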

Each component has a single responsibility and communicates through well-defined channels. Let me break down the three patterns that make this work.


3. Pattern 1: Batched Broadcasting

This is the single most important architectural decision. Instead of broadcasting per-transaction, we collect transactions in a buffer and send them as a batch every 500ms.

Why 500ms?

Human perception on dashboards plateaus around 200–500ms. Below 200ms, users can't distinguish individual updates. Above 1000ms, it feels laggy. At 500ms we get:

  • Max 2 broadcasts/second regardless of transaction volume
  • Single serialization per batch instead of per-transaction
  • Client updates the chart once per batch, not per-transaction
At 100 TPS, this reduces SignalR overhead by ~98% compared to per-transaction broadcasting.
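A minimal sketch of what such a batching broadcaster can look like, assuming the illustrative types from the Program.cs sketch above (DashboardHub and IDashboardClient appear in the hub sketch further down). The repo's actual wiring may differ.

```csharp
// Illustrative batching broadcaster. All names are assumptions.
using System.Threading.Channels;
using Microsoft.AspNetCore.SignalR;
using Microsoft.Extensions.Hosting;

public sealed class TransactionBatchBroadcaster : BackgroundService
{
    private readonly ChannelReader<Transaction> _reader;
    private readonly IHubContext<DashboardHub, IDashboardClient> _hub;

    public TransactionBatchBroadcaster(
        ChannelReader<Transaction> reader,
        IHubContext<DashboardHub, IDashboardClient> hub)
    {
        _reader = reader;
        _hub = hub;
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        var buffer = new List<Transaction>();
        using var timer = new PeriodicTimer(TimeSpan.FromMilliseconds(500));

        while (await timer.WaitForNextTickAsync(stoppingToken))
        {
            // Drain everything that arrived since the last tick.
            while (_reader.TryRead(out var tx))
                buffer.Add(tx);

            if (buffer.Count == 0)
                continue;

            // One serialization and one broadcast per 500ms window,
            // no matter how many transactions came in.
            await _hub.Clients.All.ReceiveTransactionBatch(buffer.ToList());
            buffer.Clear();
        }
    }
}
```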

4. Pattern 2: Pre-Computed Metrics

The dashboard shows metrics like total volume, success rate, TPS, and flagged transactions. Computing these from raw data on every refresh is a death sentence at scale.

Instead, a background service pre-computes everything on a fixed interval and caches the results:
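A sketch of such an aggregator, assuming EF Core with a hypothetical DashboardDbContext and the illustrative Transaction shape from earlier; the exact aggregation, entity columns, and cache key are assumptions.

```csharp
// Illustrative metrics aggregator. DashboardDbContext and the Transaction
// columns (Amount, Succeeded, Flagged, Timestamp) are assumed names.
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.Caching.Memory;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

public sealed record DashboardMetrics(
    decimal TotalVolume,
    double SuccessRate,
    double TransactionsPerSecond,
    int FlaggedCount);

public sealed class MetricsAggregatorService : BackgroundService
{
    private readonly IServiceScopeFactory _scopeFactory;
    private readonly IMemoryCache _cache;

    public MetricsAggregatorService(IServiceScopeFactory scopeFactory, IMemoryCache cache)
    {
        _scopeFactory = scopeFactory;
        _cache = cache;
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        using var timer = new PeriodicTimer(TimeSpan.FromMilliseconds(500));

        while (await timer.WaitForNextTickAsync(stoppingToken))
        {
            using var scope = _scopeFactory.CreateScope();
            var db = scope.ServiceProvider.GetRequiredService<DashboardDbContext>();

            // One aggregation query per tick over a sliding 5-minute window,
            // regardless of how many clients are connected.
            var since = DateTime.UtcNow.AddMinutes(-5);
            var snapshot = await db.Transactions
                .Where(t => t.Timestamp >= since)
                .GroupBy(t => 1)
                .Select(g => new
                {
                    Count = g.Count(),
                    Volume = g.Sum(t => t.Amount),
                    Succeeded = g.Sum(t => t.Succeeded ? 1 : 0),
                    Flagged = g.Sum(t => t.Flagged ? 1 : 0)
                })
                .FirstOrDefaultAsync(stoppingToken);

            var metrics = snapshot is null || snapshot.Count == 0
                ? new DashboardMetrics(0, 1, 0, 0)
                : new DashboardMetrics(
                    TotalVolume: snapshot.Volume,
                    SuccessRate: snapshot.Succeeded / (double)snapshot.Count,
                    TransactionsPerSecond: snapshot.Count / 300.0,
                    FlaggedCount: snapshot.Flagged);

            // The broadcaster and newly connected clients read this cache entry,
            // never the database.
            _cache.Set("dashboard:metrics", metrics);
        }
    }
}
```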
The broadcaster reads from cache — zero database queries per client refresh. Whether you have 5 connected clients or 500, the database load is the same: one aggregation query every 500ms.

5. Pattern 3: Channel<T> as Internal Message Bus

One background service generates transactions, another persists them, and a third broadcasts them to clients. These three services need to communicate without tight coupling. Channel<T> provides exactly this — an async-friendly, bounded producer-consumer queue with built-in backpressure:
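For example, the producer side (the simulated transaction generator) might write into the bounded channel registered in the Program.cs sketch like this; the service name and the simulated values are assumptions.

```csharp
// Illustrative producer writing into the bounded channel.
using System.Threading.Channels;
using Microsoft.Extensions.Hosting;

public sealed class TransactionGeneratorService : BackgroundService
{
    private readonly ChannelWriter<Transaction> _writer;

    public TransactionGeneratorService(ChannelWriter<Transaction> writer) => _writer = writer;

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            var tx = new Transaction(
                Id: Guid.NewGuid(),
                Amount: (decimal)Math.Round(Random.Shared.NextDouble() * 1000, 2),
                Succeeded: Random.Shared.NextDouble() > 0.05,
                Flagged: Random.Shared.NextDouble() < 0.01,
                Timestamp: DateTime.UtcNow);

            // On a bounded channel with FullMode.Wait this call waits when the
            // buffer is full: backpressure instead of unbounded memory growth.
            await _writer.WriteAsync(tx, stoppingToken);

            // Roughly 100 simulated transactions per second.
            await Task.Delay(TimeSpan.FromMilliseconds(10), stoppingToken);
        }
    }
}
```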

Why Channel<T> over ConcurrentQueue?

  • Backpressure: WriteAsync on a bounded channel waits when the buffer is full — no unbounded memory growth
  • Async enumeration: ReadAllAsync instead of polling loops
  • Cancellation: Native CancellationToken support
  • Multiple consumers: Both the persisting service and the broadcasting service can read from the same channel

6. The SignalR Hub: Keep It Thin

The hub itself does almost nothing — by design. All logic lives in the background services:
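A sketch of a thin hub with a strongly typed client interface; the interface and method names are assumptions that match the earlier broadcaster sketch.

```csharp
// Thin hub: no business logic, just the connection surface.
using Microsoft.AspNetCore.SignalR;

// Methods the server can invoke on connected clients. A typed interface replaces
// Clients.All.SendAsync("SomeMethodName", ...) and its magic strings.
public interface IDashboardClient
{
    Task ReceiveTransactionBatch(IReadOnlyList<Transaction> batch);
    Task ReceiveMetrics(DashboardMetrics metrics);
}

public sealed class DashboardHub : Hub<IDashboardClient>
{
    // Background services push through IHubContext<DashboardHub, IDashboardClient>;
    // the hub mostly exists so clients have something to connect to.
    // OnConnectedAsync could send the latest cached snapshot to a new client.
}
```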

The typed client interface ensures compile-time safety — no magic method-name strings passed to SendAsync.

7. Database Strategy: Batch Writes

Individual insert calls at 100 TPS create unnecessary overhead. Instead, the persistence service accumulates transactions and flushes them in bulk:
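A sketch of that batch writer, assuming EF Core and the illustrative names used in the earlier sketches.

```csharp
// Illustrative batch-writing persister. Flush thresholds follow the article:
// 100 transactions or 1 second, whichever comes first.
using System.Threading.Channels;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

public sealed class TransactionPersisterService : BackgroundService
{
    private const int MaxBatchSize = 100;
    private static readonly TimeSpan MaxBatchAge = TimeSpan.FromSeconds(1);

    private readonly ChannelReader<Transaction> _reader;
    private readonly IServiceScopeFactory _scopeFactory;

    public TransactionPersisterService(ChannelReader<Transaction> reader, IServiceScopeFactory scopeFactory)
    {
        _reader = reader;
        _scopeFactory = scopeFactory;
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        var batch = new List<Transaction>(MaxBatchSize);
        var lastFlush = DateTime.UtcNow;

        // Async enumeration over the channel: no polling loop.
        // (The age check runs as items arrive, which is fine at the simulated throughput.)
        await foreach (var tx in _reader.ReadAllAsync(stoppingToken))
        {
            batch.Add(tx);

            if (batch.Count >= MaxBatchSize || DateTime.UtcNow - lastFlush >= MaxBatchAge)
            {
                await FlushAsync(batch, stoppingToken);
                batch.Clear();
                lastFlush = DateTime.UtcNow;
            }
        }
    }

    private async Task FlushAsync(IReadOnlyList<Transaction> batch, CancellationToken ct)
    {
        using var scope = _scopeFactory.CreateScope();
        var db = scope.ServiceProvider.GetRequiredService<DashboardDbContext>();

        // One AddRange and one SaveChanges per batch instead of one round trip per row.
        db.Transactions.AddRange(batch);
        await db.SaveChangesAsync(ct);
    }
}
```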
Flush triggers: 100 transactions accumulated OR 1 second elapsed — whichever comes first. This gives ~10x throughput improvement over individual inserts.

8. Client-Side: Blazor + Chart.js

The Blazor component connects to the hub and updates Chart.js through JavaScript interop:
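A sketch of the component's code-behind, assuming the hub is mapped at /dashboardHub and using the batch handler shown in the next sketch; the repo's component is likely organized differently.

```csharp
// Dashboard.razor.cs sketch: connects to the hub and forwards batches to Chart.js.
// The route "/dashboardHub" and the handler name are assumptions.
using Microsoft.AspNetCore.Components;
using Microsoft.AspNetCore.SignalR.Client;
using Microsoft.JSInterop;

public partial class Dashboard : ComponentBase, IAsyncDisposable
{
    [Inject] private NavigationManager Nav { get; set; } = default!;
    [Inject] private IJSRuntime Js { get; set; } = default!;

    private HubConnection? _hub;
    private readonly List<Transaction> _recent = new();

    protected override async Task OnAfterRenderAsync(bool firstRender)
    {
        if (!firstRender) return;

        _hub = new HubConnectionBuilder()
            .WithUrl(Nav.ToAbsoluteUri("/dashboardHub"))
            .WithAutomaticReconnect()   // retries with increasing delays after a drop
            .Build();

        // One handler per server-side interface method.
        _hub.On<List<Transaction>>("ReceiveTransactionBatch", OnBatchReceivedAsync);

        await _hub.StartAsync();
    }

    public async ValueTask DisposeAsync()
    {
        if (_hub is not null)
            await _hub.DisposeAsync();
    }
}
```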

Key decisions on the client

  • Keep only 100 recent transactions in memory — older ones drop off
  • Chart.js sliding window of 5 minutes (max 300 data points)
  • One chart-update interop call per batch, not per transaction (see the sketch below)
  • Automatic reconnection with exponential backoff
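The batch handler referenced above might look like this; updateDashboardChart is an assumed JavaScript function registered by the page, not an API the repo necessarily exposes under that name.

```csharp
// Continuation of the Dashboard component: one interop call per batch.
public partial class Dashboard
{
    private const int MaxRecentTransactions = 100;

    private async Task OnBatchReceivedAsync(List<Transaction> batch)
    {
        // Keep only the most recent 100 transactions in memory.
        _recent.AddRange(batch);
        if (_recent.Count > MaxRecentTransactions)
            _recent.RemoveRange(0, _recent.Count - MaxRecentTransactions);

        // Single Chart.js update per batch via JS interop; the JS side appends
        // the new points and trims its own 5-minute window.
        await Js.InvokeVoidAsync("updateDashboardChart", batch);

        // Re-render the Blazor part of the UI (counters, tables) once per batch.
        await InvokeAsync(StateHasChanged);
    }
}
```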

9. Scaling Beyond: What Changes at 500K, 1M+

The architecture in this repo handles ~100K daily transactions on a single instance. Here's what you'd change as you scale:

| Scale | Change | Why |
|---|---|---|
| 500K/day | MySQL date partitioning | Query performance on large tables |
| 500K/day | Redis for metrics cache | Share cache across multiple instances |
| 1M/day | SignalR Redis backplane | Horizontal scaling of SignalR |
| 1M/day | MySQL read replica | Separate read/write load |
| 10M/day | Kafka replaces Channel<T> | Cross-service communication |
| 10M/day | Azure SignalR Service | Managed, auto-scaling WebSockets |
| 10M/day | ClickHouse/TimescaleDB | Purpose-built for time-series aggregation |

The patterns stay the same — batching, pre-computation, producer-consumer. Only the infrastructure changes.

10. Performance Targets

| Metric | Target |
|---|---|
| Dashboard refresh latency | < 100ms |
| SignalR broadcast (hub → client) | < 50ms |
| Concurrent connections | 500+ (single instance) |
| Transaction throughput | 100+ TPS simulated |
| Memory steady state | < 512MB |

Real benchmark numbers are available in the repo.

11. Run It Yourself

The dashboard starts generating simulated transactions immediately. Open the app in your browser and watch the metrics flow, or try the live demo.

12. Key Takeaways

  1. Never broadcast per-transaction. Batch at 500ms intervals — users can't tell the difference, your server can.
  2. Pre-compute everything. Dashboard metrics should come from cache, not live queries.
  3. Channel<T> is underrated. It's the perfect internal message bus for .NET — async, bounded, backpressure-aware.
  4. Keep the hub thin. Business logic belongs in services, not in the SignalR hub.
  5. Design for the next scale. The patterns in this repo work at 100K/day; the same patterns with different infrastructure work at 10M/day.

This architecture is based on patterns I've implemented in production for financial transaction monitoring systems. The source code is MIT licensed — clone it, learn from it, adapt it for your use case.

Full Source Code on GitHub

