Serilog tutorials show you how to log. Production requires you to know what not to log, how to correlate requests, and how to configure for different environments. This article covers the configuration that survives real workloads.
Common questions this answers
- How do I configure Serilog differently for Development vs Production?
- How do I add correlation IDs to trace requests across logs?
- How do I prevent sensitive data from appearing in logs?
- Should I use appsettings.json or fluent API for configuration?
Definition (what this means in practice)
Structured logging captures log data as key-value properties rather than plain text. Serilog is a .NET logging library built around this concept. Production configuration means environment-aware log levels, request correlation, sensitive data filtering, and sink selection based on operational needs.
In practice, this means configuring via appsettings.json for flexibility, adding middleware for correlation IDs, and establishing patterns that prevent accidental data exposure.
Terms used
- Structured logging: logging with typed properties that can be queried and filtered.
- Sink: a destination for log events (Console, File, Seq, Application Insights, etc.).
- Enricher: a component that adds properties to every log event.
- Correlation ID: a unique identifier that ties together all logs from a single request.
- Message template: Serilog's format string syntax with named placeholders.
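To make the vocabulary concrete, here is a minimal structured logging call (a sketch; assumes Serilog and Serilog.Sinks.Console are installed):

```csharp
using Serilog;

Log.Logger = new LoggerConfiguration()
    .WriteTo.Console()
    .CreateLogger();

// Named placeholders in the message template become queryable
// properties on the event, unlike string interpolation, which
// bakes the values into flat text.
Log.Information("Loaded article {Slug} in {Elapsed} ms", "my-slug", 45);

Log.CloseAndFlush();
```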
Reader contract
This article is for:
- Engineers deploying ASP.NET Core applications to production.
- Teams establishing logging standards.
You will leave with:
- Production-ready appsettings.json configuration.
- Correlation ID middleware you can copy/paste.
- A sink selection decision table.
This is not for:
- Serilog beginners (assumes basic familiarity).
- Platform-specific sink deep-dives.
Quick start (10 minutes)
If you do nothing else, do this:
Verified on: ASP.NET Core (.NET 10), Serilog.AspNetCore 10.x.
- Install packages: Serilog.AspNetCore plus a sink (for example Serilog.Sinks.Console).
- If you want JSON/appsettings configuration, add Serilog.Settings.Configuration.
- Configure in appsettings.json (not Program.cs).
- Add UseSerilogRequestLogging() for HTTP request logs.
- Set Production log level to Warning or higher for noisy namespaces.
- Add correlation ID middleware.
dotnet add package Serilog.AspNetCore
dotnet add package Serilog.Sinks.Console
dotnet add package Serilog.Settings.Configuration
// Program.cs
using Serilog;
var builder = WebApplication.CreateBuilder(args);
builder.Host.UseSerilog((context, services, configuration) =>
configuration.ReadFrom.Configuration(context.Configuration));
var app = builder.Build();
app.UseSerilogRequestLogging();
// ... rest of pipeline
app.Run();
Notes:
- ReadFrom.Configuration(...) comes from Serilog.Settings.Configuration.
- Your sinks/enrichers are additional packages (for example, Serilog.Sinks.File, Serilog.Enrichers.Thread).
Why appsettings over fluent API
Serilog supports both fluent API configuration in code and JSON configuration in appsettings.json. For production systems, prefer appsettings.json.
Benefits of appsettings.json:
- Change log levels without redeploying.
- Environment-specific overrides (appsettings.Production.json).
- Configuration is visible and auditable.
- Operations teams can adjust without code changes.
When fluent API makes sense:
- Complex conditional logic.
- Dynamic sink configuration.
- Libraries that configure logging internally.
For most applications, appsettings.json provides the flexibility you need.
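For comparison, here is a sketch of the same minimal setup via the fluent API (the Console sink call assumes Serilog.Sinks.Console is installed):

```csharp
using Serilog;
using Serilog.Events;

Log.Logger = new LoggerConfiguration()
    .MinimumLevel.Information()
    .MinimumLevel.Override("Microsoft.AspNetCore", LogEventLevel.Warning)
    .Enrich.FromLogContext()
    .WriteTo.Console()
    .CreateLogger();
```

Note the trade-off: these levels are now baked into the binary, so changing them means a redeploy.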
Environment-aware configuration
Production and Development have different logging needs. Development wants verbose output for debugging. Production wants concise output to reduce noise and cost.
appsettings.json (base)
{
"Serilog": {
"Using": ["Serilog.Sinks.Console"],
"MinimumLevel": {
"Default": "Information",
"Override": {
"Microsoft.AspNetCore": "Warning",
"Microsoft.EntityFrameworkCore": "Warning",
"System": "Warning"
}
},
"WriteTo": [
{ "Name": "Console" }
],
"Enrich": ["FromLogContext", "WithMachineName", "WithThreadId"]
}
}
appsettings.Development.json
{
"Serilog": {
"MinimumLevel": {
"Default": "Debug",
"Override": {
"Microsoft.AspNetCore": "Information",
"Microsoft.EntityFrameworkCore.Database.Command": "Information"
}
}
}
}
appsettings.Production.json
{
"Serilog": {
"MinimumLevel": {
"Default": "Warning",
"Override": {
"Microsoft.AspNetCore": "Warning",
"Microsoft.EntityFrameworkCore": "Error"
}
}
}
}
The base configuration is overridden per environment. In Development, you see EF Core SQL commands. In Production, you only see warnings and errors.
Microsoft guidance aligns with this approach: configure log levels via configuration (for example appsettings.{ENVIRONMENT}.json) and tune categories/namespaces by environment.
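Which overlay file wins is driven by the host environment; ASPNETCORE_ENVIRONMENT selects it at startup:

```shell
# The generic host loads appsettings.json first, then overlays
# appsettings.${ASPNETCORE_ENVIRONMENT}.json on top of it.
export ASPNETCORE_ENVIRONMENT=Production
echo "Overlay: appsettings.${ASPNETCORE_ENVIRONMENT}.json"
```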
Correlation: built-in trace context first
Before you invent a custom correlation header, know that ASP.NET Core and .NET already have correlation primitives:
- HttpContext.TraceIdentifier uniquely identifies a request and is useful for logging/diagnostics.
- .NET uses System.Diagnostics.Activity for tracing. When a W3C traceparent header is present on inbound requests, trace/span identifiers flow through the request and can be included in log scopes.
Practical production rule:
- Prefer W3C trace context (traceparent) for cross-service correlation.
- Add a custom header (like X-Correlation-ID) only if you have a specific operational need and a clear propagation standard.
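For reference, a W3C traceparent header carries a version, a 16-byte trace ID, an 8-byte parent span ID, and flags:

```text
traceparent: 00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01
```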
Correlation ID middleware
Correlation IDs let you trace a request across all log entries. Without them, debugging distributed systems or even simple request flows becomes guesswork.
The middleware
using Serilog.Context;
public class CorrelationIdMiddleware(RequestDelegate next)
{
private const string CorrelationIdHeader = "X-Correlation-ID";
public async Task InvokeAsync(HttpContext context)
{
var correlationId = GetOrCreateCorrelationId(context);
// Add to response headers for client visibility
context.Response.OnStarting(() =>
{
context.Response.Headers.TryAdd(CorrelationIdHeader, correlationId);
return Task.CompletedTask;
});
// Push to Serilog LogContext for all logs in this request
using (LogContext.PushProperty("CorrelationId", correlationId))
{
await next(context);
}
}
private static string GetOrCreateCorrelationId(HttpContext context)
{
if (context.Request.Headers.TryGetValue(CorrelationIdHeader, out var values))
{
var existing = values.ToString();
if (!string.IsNullOrWhiteSpace(existing) && existing.Length <= 128)
{
return existing;
}
}
return Guid.NewGuid().ToString("D");
}
}
Registration
// Program.cs - add early in the pipeline
app.UseMiddleware<CorrelationIdMiddleware>();
app.UseSerilogRequestLogging();
Result
Every log entry within a request now includes CorrelationId. You can filter logs by this value to see the complete request flow.
[INF] HTTP GET /articles responded 200 in 45ms {CorrelationId: "a1b2c3d4-..."}
[INF] Loaded article "my-slug" from database {CorrelationId: "a1b2c3d4-..."}
Also capture trace/span IDs (works with distributed tracing)
If you use distributed tracing (or anything that emits Activity), attach trace IDs to your request completion event:
using System.Diagnostics;
app.UseSerilogRequestLogging(options =>
{
options.EnrichDiagnosticContext = (diagnosticContext, httpContext) =>
{
diagnosticContext.Set("RequestId", httpContext.TraceIdentifier);
var activity = Activity.Current;
if (activity is not null)
{
diagnosticContext.Set("TraceId", activity.TraceId.ToString());
diagnosticContext.Set("SpanId", activity.SpanId.ToString());
}
};
});
This gives you correlation within a process (RequestId) and across services (TraceId).
Sensitive data filtering
Logs should never contain passwords, tokens, credit card numbers, or PII. Serilog provides several mechanisms to prevent accidental exposure.
Destructuring policies
Control how complex objects are logged:
// Program.cs
builder.Host.UseSerilog((context, services, configuration) =>
configuration
.ReadFrom.Configuration(context.Configuration)
.Destructure.ByTransforming<UserDto>(u => new
{
u.Id,
u.Email, // Include
Password = "***REDACTED***" // Never log
}));
Filter expressions
Exclude specific events globally (expression-based filters require the Serilog.Expressions package):
{
"Serilog": {
"Filter": [
{
"Name": "ByExcluding",
"Args": {
"expression": "RequestPath like '/health%'"
}
}
]
}
}
What to filter
| Data type | Action |
|---|---|
| Passwords | Never log |
| API keys/tokens | Never log |
| Credit card numbers | Never log |
| Email addresses | Log only if necessary, consider hashing |
| IP addresses | Log for security, consider retention policy |
| Request bodies | Filter sensitive fields or skip entirely |
| Health check requests | Exclude to reduce noise |
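If your policy allows hashed emails, one option is to hash the value before it reaches the logger. This is a sketch; HashForLogs and the salt handling are illustrative, adapt them to your key management:

```csharp
using System.Security.Cryptography;
using System.Text;

static string HashForLogs(string email, string salt)
{
    // The same input + salt always yields the same hash, so events
    // for one user can still be correlated without exposing the address.
    var bytes = SHA256.HashData(Encoding.UTF8.GetBytes(salt + email.Trim().ToLowerInvariant()));
    return Convert.ToHexString(bytes)[..16]; // truncated hex is enough to correlate
}

// logger.LogInformation("Password reset requested for {EmailHash}",
//     HashForLogs(email, salt));
```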
Complete PII redaction implementation
For comprehensive redaction, create a custom enricher that scrubs known sensitive patterns:
using System.Text.RegularExpressions;
using Serilog.Core;
using Serilog.Events;
public class SensitiveDataMaskingEnricher : ILogEventEnricher
{
private static readonly HashSet<string> SensitivePropertyNames = new(StringComparer.OrdinalIgnoreCase)
{
"password", "pwd", "secret", "token", "apikey", "api_key",
"connectionstring", "connection_string", "authorization",
"creditcard", "credit_card", "ssn", "socialsecurity"
};
private static readonly Regex CreditCardPattern = new(
@"\b(?:\d[ -]*?){13,16}\b",
RegexOptions.Compiled);
private static readonly Regex EmailPattern = new(
@"\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b",
RegexOptions.Compiled);
public void Enrich(LogEvent logEvent, ILogEventPropertyFactory propertyFactory)
{
var propertiesToUpdate = new List<LogEventProperty>();
foreach (var property in logEvent.Properties)
{
if (SensitivePropertyNames.Contains(property.Key))
{
propertiesToUpdate.Add(
propertyFactory.CreateProperty(property.Key, "[REDACTED]"));
}
else if (property.Value is ScalarValue scalar &&
scalar.Value is string stringValue)
{
var masked = MaskSensitivePatterns(stringValue);
if (masked != stringValue)
{
propertiesToUpdate.Add(
propertyFactory.CreateProperty(property.Key, masked));
}
}
}
foreach (var prop in propertiesToUpdate)
{
logEvent.AddOrUpdateProperty(prop);
}
}
private static string MaskSensitivePatterns(string value)
{
// Mask credit card numbers
value = CreditCardPattern.Replace(value, "[CARD-REDACTED]");
// Optionally mask emails (configure based on your policy)
// value = EmailPattern.Replace(value, "[EMAIL-REDACTED]");
return value;
}
}
// Registration
builder.Host.UseSerilog((context, services, configuration) =>
configuration
.ReadFrom.Configuration(context.Configuration)
.Enrich.With<SensitiveDataMaskingEnricher>());
Request/Response body filtering
If you must log request bodies, filter sensitive fields:
using System.Text.Json;
using System.Text.Json.Nodes;
public class SafeRequestBodyLoggingMiddleware(RequestDelegate next, ILogger<SafeRequestBodyLoggingMiddleware> logger)
{
private static readonly HashSet<string> SensitiveFields = new(StringComparer.OrdinalIgnoreCase)
{
"password", "token", "secret", "creditCard", "ssn", "apiKey"
};
public async Task InvokeAsync(HttpContext context)
{
if (context.Request.ContentType?.Contains("application/json") == true)
{
context.Request.EnableBuffering();
using var reader = new StreamReader(context.Request.Body, leaveOpen: true);
var body = await reader.ReadToEndAsync();
context.Request.Body.Position = 0;
var sanitized = SanitizeJson(body);
logger.LogDebug("Request body: {RequestBody}", sanitized);
}
await next(context);
}
private static string SanitizeJson(string json)
{
try
{
var root = JsonNode.Parse(json);
Sanitize(root);
return root?.ToJsonString() ?? "null";
}
catch (JsonException)
{
return "[Invalid JSON]";
}
}
private static void Sanitize(JsonNode? node)
{
switch (node)
{
case JsonObject obj:
{
// Snapshot keys first; we mutate the object while walking it
var keys = new List<string>();
foreach (var pair in obj) keys.Add(pair.Key);
foreach (var key in keys)
{
if (SensitiveFields.Contains(key))
obj[key] = "[REDACTED]";
else
Sanitize(obj[key]);
}
break;
}
case JsonArray array:
foreach (var item in array) Sanitize(item);
break;
}
}
}
Defense in depth
Do not rely solely on filtering. Also:
- Review log output during development.
- Use code review to catch logging of sensitive types.
- Audit production logs periodically.
- Use log sampling for high-volume endpoints.
EF Core: do not enable sensitive data logging in production
EF Core can include application data in logs/exceptions when sensitive data logging is enabled. Treat this as a development-only switch unless you have strong compensating controls.
builder.Services.AddDbContext<MyDbContext>(options =>
{
options.UseSqlServer(builder.Configuration.GetConnectionString("Default"));
if (builder.Environment.IsDevelopment())
{
options.EnableSensitiveDataLogging();
}
});
Request logging middleware
Serilog.AspNetCore provides UseSerilogRequestLogging() which logs HTTP requests with timing, status codes, and more.
This is different from ASP.NET Core HTTP Logging middleware:
- Serilog request logging: one structured event per request (good default).
- HTTP Logging middleware: can log headers and bodies (higher risk of PII leakage + perf overhead).
HTTP logging middleware (use sparingly)
ASP.NET Core has a built-in HTTP logging middleware that can log request/response properties, headers, and bodies.
Production guidance:
- Avoid logging request/response bodies by default.
- Prefer logging a small set of safe headers.
- If you enable it, use redaction and test performance impact.
Also note: the Microsoft.Extensions.Logging APIs are synchronous. If a destination is slow, avoid writing to it directly on the request thread; buffer or queue instead, or use an async wrapper sink with a clear backpressure plan.
using Microsoft.AspNetCore.HttpLogging;
builder.Services.AddHttpLogging(options =>
{
options.LoggingFields = HttpLoggingFields.RequestPropertiesAndHeaders |
HttpLoggingFields.ResponsePropertiesAndHeaders |
HttpLoggingFields.Duration;
// Only log specific headers; everything else is redacted.
options.RequestHeaders.Add("User-Agent");
options.ResponseHeaders.Add("Content-Type");
});
var app = builder.Build();
app.UseHttpLogging();
Basic usage
app.UseSerilogRequestLogging();
Customization
app.UseSerilogRequestLogging(options =>
{
// Customize the message template
options.MessageTemplate =
"HTTP {RequestMethod} {RequestPath} responded {StatusCode} in {Elapsed:0.0000}ms";
// Add additional properties
options.EnrichDiagnosticContext = (diagnosticContext, httpContext) =>
{
diagnosticContext.Set("RequestHost", httpContext.Request.Host.Value);
diagnosticContext.Set("UserAgent", httpContext.Request.Headers.UserAgent.ToString());
};
// Adjust log level based on status code
options.GetLevel = (httpContext, elapsed, ex) =>
{
if (ex is not null || httpContext.Response.StatusCode >= 500)
return Serilog.Events.LogEventLevel.Error;
if (httpContext.Response.StatusCode >= 400)
return Serilog.Events.LogEventLevel.Warning;
return Serilog.Events.LogEventLevel.Information;
};
});
Sink selection
Choose sinks based on your operational environment and requirements.
| Sink | Use when | Considerations |
|---|---|---|
| Console | Development, containerized apps | Ephemeral; use with log aggregator |
| File | Simple deployments, audit trails | Manage rotation and retention |
| Seq | Team needs searchable structured logs | Self-hosted or cloud; excellent for development |
| Application Insights | Azure deployments | Integrated with Azure Monitor |
| Elasticsearch | Large-scale log aggregation | Requires infrastructure |
| Datadog/Splunk | Enterprise observability platforms | Vendor-specific sinks available |
Package installation
# Console
dotnet add package Serilog.Sinks.Console
# File
dotnet add package Serilog.Sinks.File
# Seq
dotnet add package Serilog.Sinks.Seq
# Application Insights
dotnet add package Serilog.Sinks.ApplicationInsights
Multi-sink configuration
{
"Serilog": {
"WriteTo": [
{ "Name": "Console" },
{
"Name": "File",
"Args": {
"path": "logs/app-.log",
"rollingInterval": "Day",
"retainedFileCountLimit": 7
}
}
]
}
}
Enterprise sink configurations
Production deployments often require integration with enterprise observability platforms.
Azure Application Insights
dotnet add package Serilog.Sinks.ApplicationInsights
{
"Serilog": {
"Using": ["Serilog.Sinks.ApplicationInsights"],
"WriteTo": [
{
"Name": "ApplicationInsights",
"Args": {
"connectionString": "[Your Connection String]",
"telemetryConverter": "Serilog.Sinks.ApplicationInsights.TelemetryConverters.TraceTelemetryConverter, Serilog.Sinks.ApplicationInsights"
}
}
]
}
}
Best practices:
- Use connection string (not instrumentation key)
- Set appropriate sampling to control costs
- Correlate with Application Insights distributed tracing
Elasticsearch / OpenSearch
dotnet add package Serilog.Sinks.Elasticsearch
{
"Serilog": {
"Using": ["Serilog.Sinks.Elasticsearch"],
"WriteTo": [
{
"Name": "Elasticsearch",
"Args": {
"nodeUris": "https://elasticsearch.example.com:9200",
"indexFormat": "app-logs-{0:yyyy.MM.dd}",
"autoRegisterTemplate": true,
"autoRegisterTemplateVersion": "ESv7",
"numberOfReplicas": 1,
"numberOfShards": 2
}
}
]
}
}
Best practices:
- Use index lifecycle management (ILM) for retention
- Set appropriate shard count based on volume
- Use bulk insert settings for high throughput
Datadog
dotnet add package Serilog.Sinks.Datadog.Logs
{
"Serilog": {
"Using": ["Serilog.Sinks.Datadog.Logs"],
"WriteTo": [
{
"Name": "DatadogLogs",
"Args": {
"apiKey": "[Your API Key]",
"source": "csharp",
"service": "my-service",
"host": "my-host",
"tags": ["env:production", "version:1.0.0"]
}
}
]
}
}
Best practices:
- Use environment variables for API keys
- Set service/host/env tags for filtering
- Enable log correlation with APM traces
Splunk
dotnet add package Serilog.Sinks.Splunk
{
"Serilog": {
"Using": ["Serilog.Sinks.Splunk"],
"WriteTo": [
{
"Name": "EventCollector",
"Args": {
"splunkHost": "https://splunk.example.com:8088",
"eventCollectorToken": "[Your HEC Token]",
"source": "my-app",
"sourceType": "_json",
"index": "main"
}
}
]
}
}
Best practices:
- Use HTTPS for HEC endpoint
- Set appropriate index based on retention requirements
- Configure batching for high-volume scenarios
Enterprise sink comparison
| Platform | Strengths | Considerations |
|---|---|---|
| Application Insights | Azure integration, APM correlation | Azure-centric, cost at scale |
| Elasticsearch | Open source, powerful queries | Infrastructure overhead |
| Datadog | Full observability platform, easy setup | Vendor lock-in, cost |
| Splunk | Enterprise features, compliance | Complex pricing, learning curve |
| Seq | Developer-friendly, structured log UI | Self-hosted, smaller scale |
High-throughput configuration
For applications generating 10,000+ logs/second, wrap sinks with async buffering:
dotnet add package Serilog.Sinks.Async
builder.Host.UseSerilog((context, services, configuration) =>
configuration
.ReadFrom.Configuration(context.Configuration)
.WriteTo.Async(a => a.Elasticsearch(/* config */))
.WriteTo.Async(a => a.Console()));
Async sink settings:
- bufferSize: defaults to 10,000 events (increase for spiky loads).
- blockWhenFull: false by default (drops events when the buffer is full).
- Monitor the dropped-event count in production.
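These settings map to arguments on WriteTo.Async in code; for example:

```csharp
builder.Host.UseSerilog((context, services, configuration) =>
    configuration
        .ReadFrom.Configuration(context.Configuration)
        .WriteTo.Async(
            a => a.Console(),
            bufferSize: 50_000,      // headroom for bursts
            blockWhenFull: false));  // drop events rather than stall request threads
```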
Log enrichment
Enrichers add contextual properties to every log event. Use them for information that should always be present.
Built-in enrichers
{
"Serilog": {
"Enrich": ["FromLogContext", "WithMachineName", "WithThreadId", "WithProcessId"]
}
}
Requires packages:
dotnet add package Serilog.Enrichers.Environment
dotnet add package Serilog.Enrichers.Thread
dotnet add package Serilog.Enrichers.Process
Custom enrichment
// Add application version to all logs
.Enrich.WithProperty("Application", "MyApp")
.Enrich.WithProperty("Version", Assembly.GetExecutingAssembly().GetName().Version?.ToString())
FromLogContext
The FromLogContext enricher is essential. It enables LogContext.PushProperty() used by the correlation ID middleware and other request-scoped properties.
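Any code on the request path can push additional scoped properties the same way (OrderId here is a hypothetical property for illustration):

```csharp
using Serilog.Context;

using (LogContext.PushProperty("OrderId", order.Id))
{
    logger.LogInformation("Processing payment");  // event includes OrderId
    logger.LogInformation("Payment authorized");  // event includes OrderId
}
// The property is popped here; later events no longer carry OrderId.
```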
Copy/paste artifact: production appsettings.json
{
"Serilog": {
"Using": [
"Serilog.Sinks.Console",
"Serilog.Sinks.File"
],
"MinimumLevel": {
"Default": "Information",
"Override": {
"Microsoft.AspNetCore": "Warning",
"Microsoft.EntityFrameworkCore": "Warning",
"Microsoft.EntityFrameworkCore.Database.Command": "Warning",
"System": "Warning",
"System.Net.Http.HttpClient": "Warning"
}
},
"WriteTo": [
{
"Name": "Console",
"Args": {
"outputTemplate": "[{Timestamp:HH:mm:ss} {Level:u3}] {Message:lj} {Properties:j}{NewLine}{Exception}"
}
},
{
"Name": "File",
"Args": {
"path": "logs/app-.log",
"rollingInterval": "Day",
"retainedFileCountLimit": 14,
"outputTemplate": "{Timestamp:yyyy-MM-dd HH:mm:ss.fff zzz} [{Level:u3}] {Message:lj} {Properties:j}{NewLine}{Exception}"
}
}
],
"Enrich": ["FromLogContext", "WithMachineName", "WithThreadId"]
}
}
Copy/paste artifact: correlation ID middleware
using Serilog.Context;
namespace YourNamespace.Middleware;
public class CorrelationIdMiddleware(RequestDelegate next)
{
private const string CorrelationIdHeader = "X-Correlation-ID";
public async Task InvokeAsync(HttpContext context)
{
var correlationId = GetOrCreateCorrelationId(context);
context.Response.OnStarting(() =>
{
context.Response.Headers.TryAdd(CorrelationIdHeader, correlationId);
return Task.CompletedTask;
});
using (LogContext.PushProperty("CorrelationId", correlationId))
{
await next(context);
}
}
private static string GetOrCreateCorrelationId(HttpContext context)
{
if (context.Request.Headers.TryGetValue(CorrelationIdHeader, out var values))
{
var existing = values.ToString();
if (!string.IsNullOrWhiteSpace(existing) && existing.Length <= 128)
{
return existing;
}
}
return Guid.NewGuid().ToString("D");
}
}
Common failure modes
- Logging sensitive data (passwords, tokens, PII) without filtering.
- Using Debug level in Production, creating log volume and cost issues.
- Missing correlation IDs, making request tracing impossible.
- Configuring only in code, preventing runtime log level changes.
- Not suppressing noisy framework logs in Production.
- Logging to Console only in containerized apps without aggregation.
Checklist
- Serilog configured via appsettings.json (not just fluent API).
- Environment-specific overrides in place (Development vs Production).
- Correlation ID middleware added early in pipeline.
- Sensitive data filtering reviewed and tested.
- Framework namespaces (Microsoft.*, System.*) suppressed appropriately.
- UseSerilogRequestLogging() added for HTTP request logs.
- Production sink strategy defined (Console + aggregator, or direct to platform).
FAQ
Should I use async sinks?
For high-throughput applications, consider wrapping sinks with Serilog.Sinks.Async to prevent logging from blocking request processing. For most applications, synchronous sinks are adequate.
How do I view structured logs locally?
Use Seq (free for local development). It provides a searchable UI for structured logs. Alternatively, use the Console sink with a JSON formatter.
What log level should Production use?
Start with Information for your application code and Warning for framework namespaces. Adjust based on log volume and operational needs. Some teams use Warning as the default and selectively enable Information for specific namespaces.
How do I log to multiple sinks?
Add multiple entries to the WriteTo array in appsettings.json. Each sink can have its own minimum level via the restrictedToMinimumLevel argument.
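For example, a full console stream plus an error-only file:

```json
{
  "Serilog": {
    "WriteTo": [
      { "Name": "Console" },
      {
        "Name": "File",
        "Args": {
          "path": "logs/errors-.log",
          "rollingInterval": "Day",
          "restrictedToMinimumLevel": "Error"
        }
      }
    ]
  }
}
```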
Should I log request bodies?
Generally no. Request bodies may contain sensitive data and increase log volume significantly. If you must log them, filter sensitive fields and consider only logging for specific endpoints.
How do I correlate logs across services?
Pass the X-Correlation-ID header between services. Each service's correlation ID middleware will pick it up and include it in logs. This creates a distributed trace across your system.
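On the outgoing side, a DelegatingHandler can forward the inbound correlation ID to downstream HTTP calls. This is a sketch (CorrelationForwardingHandler is a name invented here); it forwards the header received on the current request, so if your middleware generates a new ID, also store it where the handler can read it:

```csharp
public class CorrelationForwardingHandler(IHttpContextAccessor accessor) : DelegatingHandler
{
    private const string Header = "X-Correlation-ID";

    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        // Copy the inbound correlation ID, if any, onto the outbound request.
        var inbound = accessor.HttpContext?.Request.Headers[Header].ToString();
        if (!string.IsNullOrWhiteSpace(inbound) && !request.Headers.Contains(Header))
        {
            request.Headers.TryAddWithoutValidation(Header, inbound);
        }
        return base.SendAsync(request, cancellationToken);
    }
}

// Registration:
// builder.Services.AddHttpContextAccessor();
// builder.Services.AddTransient<CorrelationForwardingHandler>();
// builder.Services.AddHttpClient("downstream")
//     .AddHttpMessageHandler<CorrelationForwardingHandler>();
```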
What to do next
Add Serilog.AspNetCore to your project and configure via appsettings.json. Add the correlation ID middleware. Review your logs for any sensitive data exposure.
For more on building production-quality ASP.NET Core applications, read EF Core Performance Mistakes That Ship to Production.
If you want help establishing logging standards for your team, reach out via Contact.
References
- Serilog Documentation
- Serilog.AspNetCore GitHub
- Serilog Configuration
- ASP.NET Core Logging
- HTTP logging in ASP.NET Core
- EF Core logging and sensitive data
- .NET distributed tracing concepts
- Add distributed tracing instrumentation (W3C trace context)
- .NET logging and tracing
Author notes
Decisions:
- Prefer appsettings.json over fluent API. Rationale: enables runtime configuration changes without redeployment.
- Include correlation ID middleware by default. Rationale: essential for debugging and distributed tracing.
- Suppress framework logs in Production. Rationale: reduces noise and log storage costs.
Observations:
- Teams that skip correlation IDs spend significantly more time debugging production issues.
- Sensitive data in logs often comes from logging entire request/response objects.
- Log volume in Production is often 10-100x higher than necessary due to Debug-level defaults.