Distributed Caching with Redis: Patterns That Scale

TL;DR Redis caching patterns for ASP.NET Core: IDistributedCache configuration, cache invalidation strategies, TTL decisions, and connection resilience.

Common Questions This Answers

  • How do I configure Redis with ASP.NET Core?
  • When should I use distributed cache vs output cache?
  • How do I invalidate cached data across multiple servers?
  • What are the cache-aside pattern pitfalls?
  • How do I handle Redis connection failures gracefully?

Definition

Distributed caching stores data in a shared cache accessible by all application instances. Unlike in-memory caching (IMemoryCache), distributed caches survive app restarts, scale across servers, and provide consistent data across your deployment. Redis is the dominant choice for production distributed caching.

Terms Used

  • Cache-aside: Application code checks cache first, falls back to source, then populates cache
  • TTL (Time-to-Live): Automatic expiration after a duration
  • Sliding expiration: TTL resets on each access
  • Absolute expiration: Fixed expiration regardless of access
  • Cache stampede: Multiple requests simultaneously rebuilding expired cache
  • Write-through: Cache updated on every write operation
  • Write-behind: Cache writes batched and persisted asynchronously

Reader Contract

After reading this article, you will:

  1. Configure Redis distributed caching in ASP.NET Core
  2. Implement cache-aside pattern correctly
  3. Choose between TTL strategies
  4. Handle cache invalidation across instances
  5. Build resilient Redis connections

Prerequisites: Basic ASP.NET Core knowledge, understanding of OutputCache vs response caching (see OutputCache Production Patterns).

Time to implement: 30 minutes for basic setup, 2-4 hours for production hardening.

Quick Start (10 Minutes)

Install the Redis caching package and configure:

// Program.cs
builder.Services.AddStackExchangeRedisCache(options =>
{
    options.Configuration = builder.Configuration.GetConnectionString("Redis");
    options.InstanceName = "MyApp:";
});

// appsettings.json
{
  "ConnectionStrings": {
    "Redis": "localhost:6379,abortConnect=false,connectTimeout=5000"
  }
}

Use IDistributedCache in your services:

public class ProductService(IDistributedCache cache, AppDbContext db)
{
    public async Task<Product?> GetProductAsync(int id, CancellationToken ct)
    {
        var cacheKey = $"product:{id}";

        var cached = await cache.GetStringAsync(cacheKey, ct);
        if (cached is not null)
        {
            return JsonSerializer.Deserialize<Product>(cached);
        }

        var product = await db.Products.FindAsync([id], ct);
        if (product is not null)
        {
            await cache.SetStringAsync(
                cacheKey,
                JsonSerializer.Serialize(product),
                new DistributedCacheEntryOptions
                {
                    AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(10)
                },
                ct);
        }

        return product;
    }
}

IDistributedCache Interface

The IDistributedCache interface provides four core operations:

Method | Purpose
Get/GetAsync | Retrieve bytes from cache
Set/SetAsync | Store bytes with options
Remove/RemoveAsync | Delete cache entry
Refresh/RefreshAsync | Reset sliding expiration

The interface works with byte[]. Extension methods provide GetString/SetString for text data.

DistributedCacheEntryOptions

Control expiration with three properties:

var options = new DistributedCacheEntryOptions
{
    // Cache expires 10 minutes after creation
    AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(10),

    // OR: Cache expires at specific time
    AbsoluteExpiration = DateTimeOffset.UtcNow.AddHours(1),

    // Reset expiration on each access (use with AbsoluteExpiration)
    SlidingExpiration = TimeSpan.FromMinutes(2)
};

Sliding expiration alone is dangerous. Without an absolute cap, frequently accessed data never expires. Always combine sliding with absolute expiration:

var options = new DistributedCacheEntryOptions
{
    SlidingExpiration = TimeSpan.FromMinutes(5),
    AbsoluteExpirationRelativeToNow = TimeSpan.FromHours(1)
};

Cache-Aside Pattern

Cache-aside is the standard pattern for distributed caching:

1. Check cache for data
2. If cache miss, load from source
3. Store in cache
4. Return data

Production Implementation

Handle serialization and null values properly:

public class CachedProductService(
    IDistributedCache cache,
    AppDbContext db,
    ILogger<CachedProductService> logger)
{
    private static readonly JsonSerializerOptions JsonOptions = new()
    {
        PropertyNamingPolicy = JsonNamingPolicy.CamelCase
    };

    public async Task<Product?> GetProductAsync(int id, CancellationToken ct)
    {
        var cacheKey = $"product:{id}";

        try
        {
            var cached = await cache.GetStringAsync(cacheKey, ct);
            if (cached is not null)
            {
                // Handle explicit null marker for "known missing"
                if (cached == "null")
                {
                    return null;
                }
                return JsonSerializer.Deserialize<Product>(cached, JsonOptions);
            }
        }
        catch (Exception ex)
        {
            // Cache failure should not break the app
            logger.LogWarning(ex, "Cache read failed for {Key}", cacheKey);
        }

        var product = await db.Products
            .AsNoTracking()
            .FirstOrDefaultAsync(p => p.Id == id, ct);

        try
        {
            // Cache both hits and misses to prevent repeated DB queries
            var value = product is not null
                ? JsonSerializer.Serialize(product, JsonOptions)
                : "null";

            await cache.SetStringAsync(cacheKey, value, GetOptions(), ct);
        }
        catch (Exception ex)
        {
            logger.LogWarning(ex, "Cache write failed for {Key}", cacheKey);
        }

        return product;
    }

    private static DistributedCacheEntryOptions GetOptions() => new()
    {
        SlidingExpiration = TimeSpan.FromMinutes(5),
        AbsoluteExpirationRelativeToNow = TimeSpan.FromHours(1)
    };
}

Cache Negative Results

Always cache "not found" results. Without this, queries for non-existent data hit the database every time:

// Bad: Only caches successful lookups
if (product is not null)
{
    await cache.SetStringAsync(key, Serialize(product), options, ct);
}

// Good: Cache null results with shorter TTL
var value = product is not null
    ? Serialize(product)
    : "null";
await cache.SetStringAsync(key, value, options, ct);

Use a shorter TTL for negative caching to allow recovery when data is created.
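Continuing the snippet above, the TTL can be chosen from the lookup result. This is a sketch; the durations are illustrative, and the product, value, key, and ct variables come from the example above:

```csharp
// Sketch: shorter TTL for "not found" so newly created data appears quickly.
var negativeAwareOptions = new DistributedCacheEntryOptions
{
    AbsoluteExpirationRelativeToNow = product is not null
        ? TimeSpan.FromHours(1)     // positive result: safe to cache longer
        : TimeSpan.FromSeconds(30)  // negative result: allow fast recovery
};
await cache.SetStringAsync(key, value, negativeAwareOptions, ct);
```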

Cache Invalidation Strategies

Cache invalidation is famously difficult. Choose your strategy based on consistency requirements.

Strategy 1: TTL-Based Expiration

Let caches expire naturally. Simplest approach, works when stale data is acceptable.

When to use: Read-heavy data that changes infrequently (product catalogs, configuration)

Tradeoff: Users see stale data until TTL expires

// 5-minute TTL: data may be stale for up to 5 minutes
await cache.SetStringAsync(key, value, new()
{
    AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(5)
}, ct);

Strategy 2: Explicit Invalidation

Remove cache entries when source data changes:

public class ProductService(IDistributedCache cache, AppDbContext db)
{
    public async Task UpdateProductAsync(Product product, CancellationToken ct)
    {
        db.Products.Update(product);
        await db.SaveChangesAsync(ct);

        // Invalidate cache after successful write
        await cache.RemoveAsync($"product:{product.Id}", ct);

        // Invalidate related caches
        await cache.RemoveAsync($"products:category:{product.CategoryId}", ct);
    }
}

Challenge: You must track all cache keys affected by a change. Missing one creates inconsistency.
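One way to reduce that risk is to centralize key construction so the read path and the invalidation path can never drift apart. A minimal sketch; the class and method names are illustrative:

```csharp
// Sketch: a single source of truth for cache key formats.
// Both reads and invalidation call these helpers, so a renamed or
// reformatted key only has to change in one place.
public static class CacheKeys
{
    public static string Product(int id) => $"product:{id}";
    public static string ProductsByCategory(int categoryId) => $"products:category:{categoryId}";
}
```

Usage: `await cache.RemoveAsync(CacheKeys.Product(product.Id), ct);` replaces hand-built string interpolation at every call site.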

Strategy 3: Cache Tags with Redis

Group related cache entries for bulk invalidation. Redis supports this through key patterns:

public class TaggedCache(IConnectionMultiplexer redis)
{
    private readonly IDatabase _db = redis.GetDatabase();

    public async Task SetWithTagsAsync<T>(
        string key,
        T value,
        string[] tags,
        TimeSpan expiration)
    {
        var json = JsonSerializer.Serialize(value);

        // Store the value
        await _db.StringSetAsync(key, json, expiration);

        // Track key in tag sets
        foreach (var tag in tags)
        {
            await _db.SetAddAsync($"tag:{tag}", key);
            await _db.KeyExpireAsync($"tag:{tag}", expiration);
        }
    }

    public async Task InvalidateTagAsync(string tag)
    {
        var keys = await _db.SetMembersAsync($"tag:{tag}");

        foreach (var key in keys)
        {
            await _db.KeyDeleteAsync(key.ToString());
        }

        await _db.KeyDeleteAsync($"tag:{tag}");
    }
}

// Usage
await taggedCache.SetWithTagsAsync(
    $"product:{id}",
    product,
    ["products", $"category:{product.CategoryId}"],
    TimeSpan.FromHours(1));

// Invalidate all products in a category
await taggedCache.InvalidateTagAsync($"category:{categoryId}");

Strategy 4: Pub/Sub Invalidation

For multi-instance deployments, broadcast invalidation messages:

public class CacheInvalidationService(IConnectionMultiplexer redis)
{
    private const string Channel = "cache:invalidate";

    public async Task PublishInvalidationAsync(string cacheKey)
    {
        var subscriber = redis.GetSubscriber();
        await subscriber.PublishAsync(Channel, cacheKey);
    }

    public void SubscribeToInvalidations(Action<string> onInvalidate)
    {
        var subscriber = redis.GetSubscriber();
        subscriber.Subscribe(Channel, (_, message) =>
        {
            if (message.HasValue)
            {
                onInvalidate(message.ToString());
            }
        });
    }
}

Use Pub/Sub when you have local memory caches that need synchronization with Redis, or when you need real-time invalidation across services.
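As a sketch of that local-cache synchronization, each instance can subscribe at startup and evict its in-memory copy whenever an invalidation is broadcast. This assumes the CacheInvalidationService from the example above is registered in DI alongside IMemoryCache:

```csharp
// Sketch: evict the local IMemoryCache entry when any instance broadcasts
// an invalidation for that key. Follows the IHostedService pattern used
// elsewhere in this article.
public class LocalCacheInvalidator(
    CacheInvalidationService invalidation,
    IMemoryCache memoryCache) : IHostedService
{
    public Task StartAsync(CancellationToken ct)
    {
        invalidation.SubscribeToInvalidations(key => memoryCache.Remove(key));
        return Task.CompletedTask;
    }

    public Task StopAsync(CancellationToken ct) => Task.CompletedTask;
}
```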

TTL Decision Framework

Data Type | TTL | Strategy
User session | 30 min sliding | Sliding + 24h absolute
Product catalog | 1 hour | TTL only, stale OK
Inventory count | 30 seconds | Short TTL, accuracy matters
User preferences | 1 hour sliding | Sliding + 1 week absolute
Search results | 5 minutes | TTL only, frequent changes
Static config | 24 hours | TTL + explicit invalidation

Rules of thumb:

  • High-read, low-change: longer TTL (hours)
  • Accuracy-critical: short TTL (seconds to minutes)
  • User-specific: sliding expiration with absolute cap
  • Expensive to compute: longer TTL, explicit invalidation

Connection Pooling and Resilience

StackExchange.Redis manages connection pooling automatically. Configure for resilience:

builder.Services.AddStackExchangeRedisCache(options =>
{
    options.ConfigurationOptions = new ConfigurationOptions
    {
        EndPoints = { "redis.example.com:6380" },
        Password = builder.Configuration["Redis:Password"],
        Ssl = true,
        AbortOnConnectFail = false,  // Critical for resilience
        ConnectTimeout = 5000,
        SyncTimeout = 5000,
        AsyncTimeout = 5000,
        ConnectRetry = 3,
        ReconnectRetryPolicy = new ExponentialRetry(5000)
    };
    options.InstanceName = "MyApp:";
});

Key Configuration Options

Option | Recommendation | Why
AbortOnConnectFail | false | App starts even if Redis is unavailable
ConnectTimeout | 5000ms | Balances fast failure against network latency
SyncTimeout | 5000ms | Timeout for synchronous operations
ConnectRetry | 3 | Retries the connection on failure
Ssl | true | Always in production

Handling Connection Failures

Redis should be a performance optimization, not a critical dependency:

public class ResilientCacheService(
    IDistributedCache cache,
    ILogger<ResilientCacheService> logger)
{
    public async Task<T?> GetOrCreateAsync<T>(
        string key,
        Func<CancellationToken, Task<T?>> factory,
        DistributedCacheEntryOptions options,
        CancellationToken ct) where T : class
    {
        // Try cache first
        try
        {
            var cached = await cache.GetStringAsync(key, ct);
            if (cached is not null)
            {
                return JsonSerializer.Deserialize<T>(cached);
            }
        }
        catch (Exception ex)
        {
            logger.LogWarning(ex, "Cache read failed, falling back to source");
        }

        // Cache miss or failure - get from source
        var result = await factory(ct);

        // Try to cache result
        if (result is not null)
        {
            try
            {
                var json = JsonSerializer.Serialize(result);
                await cache.SetStringAsync(key, json, options, ct);
            }
            catch (Exception ex)
            {
                logger.LogWarning(ex, "Cache write failed");
            }
        }

        return result;
    }
}

Serialization Choices

JSON (Default)

Simple, human-readable, works everywhere:

private static readonly JsonSerializerOptions Options = new()
{
    PropertyNamingPolicy = JsonNamingPolicy.CamelCase,
    DefaultIgnoreCondition = JsonIgnoreCondition.WhenWritingNull
};

var json = JsonSerializer.Serialize(product, Options);
await cache.SetStringAsync(key, json, cacheOptions, ct);

MessagePack (High Performance)

For large objects or high throughput, MessagePack offers 2-4x better performance:

// Install MessagePack NuGet package
using MessagePack;

[MessagePackObject]
public class CachedProduct
{
    [Key(0)]
    public int Id { get; set; }

    [Key(1)]
    public string Name { get; set; } = "";

    [Key(2)]
    public decimal Price { get; set; }
}

// Serialize
var bytes = MessagePackSerializer.Serialize(product);
await cache.SetAsync(key, bytes, cacheOptions, ct);

// Deserialize (GetAsync returns null on a cache miss)
var cached = await cache.GetAsync(key, ct);
var product = cached is not null
    ? MessagePackSerializer.Deserialize<CachedProduct>(cached)
    : null;

Tradeoff: MessagePack requires explicit attribute decoration and is not human-readable in Redis.

When to Cache What

Scenario | Cache Strategy
Database query results | Cache-aside with TTL
API responses from external services | Cache-aside with short TTL
Computed aggregations | Cache-aside with explicit invalidation
User sessions | Sliding expiration with absolute cap
Feature flags | Short TTL (30s), tolerate brief staleness
Full-page output | Use OutputCache instead

OutputCache vs IDistributedCache

Use OutputCache for:

  • Full HTTP responses
  • CDN-style caching
  • VaryByQuery/Header requirements

Use IDistributedCache for:

  • Arbitrary data caching
  • Fine-grained control
  • Cross-service cache sharing
  • Custom invalidation logic

See OutputCache Production Patterns for output caching details.

Copy/Paste Artifact: Production Redis Setup

// Program.cs - Production Redis configuration
builder.Services.AddStackExchangeRedisCache(options =>
{
    var redisConfig = builder.Configuration.GetSection("Redis");

    options.ConfigurationOptions = new ConfigurationOptions
    {
        EndPoints = { redisConfig["Endpoint"] ?? "localhost:6379" },
        Password = redisConfig["Password"],
        Ssl = builder.Environment.IsProduction(),
        AbortOnConnectFail = false,
        ConnectTimeout = 5000,
        SyncTimeout = 5000,
        AsyncTimeout = 5000,
        ConnectRetry = 3,
        DefaultDatabase = 0
    };

    options.InstanceName = $"{builder.Environment.ApplicationName}:";
});

// Register resilient cache wrapper
builder.Services.AddSingleton<ResilientCacheService>();

// appsettings.Production.json
{
  "Redis": {
    "Endpoint": "your-redis.redis.cache.windows.net:6380",
    "Password": "from-key-vault"
  }
}

Common Failure Modes

Cache Stampede

When cache expires, many requests simultaneously hit the database:

// Problem: 100 concurrent requests all miss cache, all query DB
var cached = await cache.GetStringAsync(key, ct);
if (cached is null)
{
    var data = await db.ExpensiveQuery(); // 100 concurrent queries!
    await cache.SetStringAsync(key, Serialize(data), options, ct);
}

Solution: Use a distributed lock or cache early refresh:

// Refresh cache before expiration
var options = new DistributedCacheEntryOptions
{
    AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(10)
};

// Background job refreshes at 8 minutes, before 10-minute expiration
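For the distributed-lock approach, a sketch using StackExchange.Redis's LockTakeAsync. The timeouts, key names, and method name are illustrative; callers that lose the lock race wait briefly and re-read the cache instead of hitting the database:

```csharp
// Sketch: only one caller rebuilds the expired entry; the rest re-check
// the cache after a short delay.
public async Task<string?> GetWithStampedeProtectionAsync(
    IDatabase db, string key, Func<Task<string>> factory)
{
    var cached = await db.StringGetAsync(key);
    if (cached.HasValue) return cached;

    var lockKey = $"lock:{key}";
    var token = Guid.NewGuid().ToString();  // identifies this lock holder

    if (await db.LockTakeAsync(lockKey, token, TimeSpan.FromSeconds(10)))
    {
        try
        {
            var value = await factory();
            await db.StringSetAsync(key, value, TimeSpan.FromMinutes(10));
            return value;
        }
        finally
        {
            await db.LockReleaseAsync(lockKey, token);
        }
    }

    // Another caller holds the lock: wait briefly, then re-read the cache.
    await Task.Delay(100);
    return await db.StringGetAsync(key);
}
```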

Serialization Mismatch

Cached data becomes unreadable after model changes:

// v1: Product has Name
// v2: Product has Name and Description
// Old cache entries fail deserialization

Solution: Version your cache keys or handle deserialization failures gracefully:

var cacheKey = $"product:v2:{id}";  // Version in key
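The graceful-failure alternative treats an undeserializable entry as a cache miss rather than an error. A sketch of a helper that would slot into the cache-aside read shown earlier (helper name is illustrative):

```csharp
// Sketch: a stale-schema entry falls through to the database reload path
// instead of throwing.
Product? DeserializeOrMiss(string cached)
{
    try
    {
        return JsonSerializer.Deserialize<Product>(cached, JsonOptions);
    }
    catch (JsonException)
    {
        // Old entry no longer matches the current model: treat as a miss.
        return null;
    }
}
```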

Memory Pressure

Caching too much data exhausts Redis memory:

Solution: Set maxmemory-policy in Redis to evict old entries:

maxmemory 256mb
maxmemory-policy allkeys-lru

Thundering Herd on Startup

After a deployment or a Redis flush, the cache starts cold and every instance falls back to the database simultaneously:

Solution: Stagger cache population or pre-warm critical caches:

// Startup cache warming
public class CacheWarmupService(
    IDistributedCache cache,
    AppDbContext db) : IHostedService
{
    public async Task StartAsync(CancellationToken ct)
    {
        // Warm critical caches on startup
        var popularProducts = await db.Products
            .OrderByDescending(p => p.ViewCount)
            .Take(100)
            .ToListAsync(ct);

        foreach (var product in popularProducts)
        {
            await cache.SetStringAsync(
                $"product:{product.Id}",
                JsonSerializer.Serialize(product),
                new() { AbsoluteExpirationRelativeToNow = TimeSpan.FromHours(1) },
                ct);
        }
    }

    public Task StopAsync(CancellationToken ct) => Task.CompletedTask;
}

Checklist

Before deploying Redis caching:

  • AbortOnConnectFail = false configured
  • Cache operations wrapped in try/catch
  • Negative results cached with shorter TTL
  • Sliding expiration has absolute cap
  • Cache keys include version or type prefix
  • Connection string in secure configuration
  • SSL/TLS enabled in production
  • Timeouts configured appropriately
  • Monitoring for cache hit/miss ratio
  • Redis maxmemory policy configured

FAQ

Should I use IDistributedCache or IConnectionMultiplexer directly?

Use IDistributedCache for simple cache-aside patterns. Use IConnectionMultiplexer when you need Redis-specific features like Pub/Sub, Lua scripts, or transactions.

How do I handle cache in unit tests?

Use MemoryDistributedCache in tests. It implements IDistributedCache without Redis:

services.AddDistributedMemoryCache();

What about HybridCache in .NET 9+?

HybridCache combines L1 memory cache with L2 distributed cache. Consider it for new projects targeting .NET 9+. It handles stampede protection automatically.
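A minimal sketch, assuming the Microsoft.Extensions.Caching.Hybrid package and the ProductService shape used earlier in this article; consult the HybridCache docs for the exact option types:

```csharp
// Program.cs
builder.Services.AddHybridCache();

// GetOrCreateAsync wraps the whole cache-aside pattern, with built-in
// stampede protection: concurrent misses share one factory invocation.
public class ProductService(HybridCache cache, AppDbContext db)
{
    public async Task<Product?> GetProductAsync(int id, CancellationToken ct) =>
        await cache.GetOrCreateAsync(
            $"product:{id}",
            async token => await db.Products.FindAsync([id], token),
            cancellationToken: ct);
}
```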

How do I monitor cache effectiveness?

Track hit/miss ratio with custom metrics or Redis INFO command:

var server = redis.GetServer("localhost", 6379);
var stats = server.Info("stats").First();  // the "Stats" section
var hits = long.Parse(stats.First(kv => kv.Key == "keyspace_hits").Value);
var misses = long.Parse(stats.First(kv => kv.Key == "keyspace_misses").Value);
var hitRatio = hits / (double)(hits + misses);

What to do next

Identify your highest-traffic read queries and evaluate them for caching. Start with data that changes infrequently and is expensive to compute.

For more on caching strategies, read OutputCache Production Patterns.

If you want help designing a caching strategy for your application, reach out via Contact.

Author notes

Decisions:

  • Use TTL-based expiration as the primary invalidation strategy. Rationale: simpler to implement and reason about than event-driven invalidation.
  • Configure connection resilience with exponential backoff. Rationale: Redis connection failures are transient; automatic reconnection prevents cascading failures.
  • Use cache-aside pattern over write-through. Rationale: more flexible and doesn't require cache to be in the write path.

Observations:

  • Cache stampedes occur when many requests miss cache simultaneously after TTL expiration.
  • Connection pool exhaustion manifests as timeout exceptions during traffic spikes.
  • Stale data bugs often trace back to missing invalidation after write operations.