EF Core makes data access easy. It also makes performance mistakes easy. These are the patterns that pass code review and then slow down production.
Common questions this answers
- How do I detect N+1 queries before they ship?
- When should I use .AsNoTracking() vs tracking queries?
- What causes cartesian explosion and how do I fix it?
- How do I know if my queries are hitting indexes?
Definition (what this means in practice)
EF Core performance mistakes are query patterns that work in development but degrade under production load. They compile, they pass tests, and they ship. Then they cause latency spikes, database connection exhaustion, and memory pressure.
In practice, this means logging queries during development, reviewing generated SQL before merge, and having a validation ladder that catches regressions.
Terms used
- N+1 problem: one query to load a list, then N additional queries to load related data for each item.
- Tracking: EF Core monitors entity changes for SaveChanges. Useful for writes, wasteful for reads.
- Cartesian explosion: a JOIN that multiplies rows, causing duplicate data transfer.
- Projection: selecting specific columns into a DTO instead of loading full entities.
- Split query: breaking a multi-include query into separate queries to avoid cartesian explosion.
Reader contract
This article is for:
- Engineers shipping ASP.NET Core apps with EF Core.
- Anyone reviewing EF Core queries in pull requests.
You will leave with:
- Detection patterns for the 6 most common EF Core performance mistakes.
- A decision matrix for tracking vs no-tracking.
- Copy/paste validation commands.
This is not for:
- EF Core beginners (assumes working knowledge).
- Write-heavy OLTP optimization (separate concern).
Quick start (10 minutes)
If you do nothing else, do this before your next PR:
Verified on: ASP.NET Core (.NET 10), EF Core 10.
- Enable query logging in development.
- Run your list endpoint and count the queries.
- If you see N+1 (one query per item), switch to eager loading (.Include()/.ThenInclude()) or project to scalars/DTOs (avoid projecting collection navigations).
- Use .AsNoTracking() for read-only queries.
- Check that your WHERE and ORDER BY columns have indexes.
// appsettings.Development.json - enable EF Core query logging
{
"Logging": {
"LogLevel": {
"Microsoft.EntityFrameworkCore.Database.Command": "Information"
}
}
}
The N+1 problem: detection and fix
N+1 is the most common EF Core performance killer. It happens when you load a list and then access a navigation property in a loop.
How N+1 happens
// BAD: N+1 - one query for authors, then one query per author for book counts
var authors = await db.Authors
.AsNoTracking()
.ToListAsync();
foreach (var author in authors)
{
var bookCount = await db.Books
.AsNoTracking()
.Where(b => b.AuthorId == author.Id)
.CountAsync();
Console.WriteLine($"{author.Name}: {bookCount} books");
}
With 100 authors, this executes 101 queries.
Detection
Enable query logging and look for repeated SELECT statements with different parameter values. If you see the same query shape executing once per item, you have N+1.
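If you want the SQL in console or test output without editing appsettings, DbContextOptionsBuilder.LogTo is another option. A minimal sketch, assuming a typical Program.cs registration (AppDbContext and connectionString are placeholders):
// Development-only: print each executed command so repeated query shapes (N+1) stand out
builder.Services.AddDbContext<AppDbContext>(options =>
    options.UseSqlServer(connectionString)
        .LogTo(Console.WriteLine, LogLevel.Information)   // LogLevel comes from Microsoft.Extensions.Logging
        .EnableSensitiveDataLogging());                    // includes parameter values; keep this out of production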
Fix patterns
Option 1: Eager loading with Include
// GOOD: Single query with JOIN
var authors = await db.Authors
.Include(a => a.Books)
.ToListAsync();
Option 2: Projection (preferred for read-only)
// BETTER: Only load what you need
var authors = await db.Authors
.Select(a => new
{
a.Name,
BookCount = a.Books.Count
})
.ToListAsync();
Projection avoids loading full entities and lets SQL Server do the counting.
Tracking vs no-tracking decision matrix
By default, EF Core tracks entity instances it materializes. Tracking enables change detection and identity resolution, but also adds overhead (change tracker bookkeeping + snapshotting). For read-only queries, Microsoft recommends using no-tracking queries.
| Scenario | Use Tracking | Use AsNoTracking |
|---|---|---|
| Loading entity to update it | Yes | No |
| Loading entity to delete it | Yes | No |
| List page (display only) | No | Yes |
| Detail page (display only) | No | Yes |
| Search results | No | Yes |
| Reports and exports | No | Yes |
| API GET endpoints | No | Yes |
The pattern
// Read-only: always use AsNoTracking
var articles = await db.Articles
.AsNoTracking()
.Where(a => a.IsPublished)
.OrderByDescending(a => a.PublishedAt)
.Take(20)
.ToListAsync();
// Write path: use tracking (default)
var article = await db.Articles
.FirstOrDefaultAsync(a => a.Id == id);
if (article is not null)
{
article.Title = newTitle;
await db.SaveChangesAsync();
}
Memory impact
Microsoft publishes a benchmark where AsNoTracking reduced allocations (e.g., from ~380 KB to ~233 KB) and improved throughput for one representative query; your numbers will vary with graph shape, result size, and identity resolution needs.
If you want no-tracking but still need identity resolution (deduplicated instances), EF Core also supports .AsNoTrackingWithIdentityResolution().
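A minimal sketch of that variant, assuming Book exposes an Author reference navigation (an assumption; the earlier examples only show AuthorId):
// No tracking, but books that share an author materialize a single Author instance per key
var books = await db.Books
    .AsNoTrackingWithIdentityResolution()
    .Include(b => b.Author)
    .ToListAsync();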
Cartesian explosion and split queries
When you use multiple .Include() calls on collections, EF Core generates a single query with multiple JOINs. This can cause cartesian explosion where rows multiply.
The problem
// Potentially dangerous: multiple collection includes
var blogs = await db.Blogs
.Include(b => b.Posts)
.Include(b => b.Tags)
.ToListAsync();
If a blog has 10 posts and 5 tags, the result set contains 50 rows for that blog. Data for the blog entity is duplicated in every row.
Detection
Look for queries where the result row count is much higher than the entity count. If you expect 100 blogs but see 5000 rows, you have cartesian explosion.
Fix: Split queries
// GOOD: Split into separate queries
var blogs = await db.Blogs
.Include(b => b.Posts)
.Include(b => b.Tags)
.AsSplitQuery()
.ToListAsync();
Split queries execute one SQL query for the root entity set and an additional query per included collection navigation. This avoids cartesian explosion, but adds extra round trips and can require buffering; also, multiple queries can observe inconsistent results if rows change concurrently (mitigate with an appropriate transaction if needed).
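A minimal sketch of that mitigation, assuming the database has snapshot isolation enabled (an assumption; a serializable transaction also works at higher locking cost):
// Each split query reads the same snapshot, at the cost of holding a transaction open
using var transaction = await db.Database.BeginTransactionAsync(System.Data.IsolationLevel.Snapshot);

var blogs = await db.Blogs
    .Include(b => b.Posts)
    .Include(b => b.Tags)
    .AsSplitQuery()
    .ToListAsync();

await transaction.CommitAsync();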
Decision table
| Situation | Use Single Query | Use Split Query |
|---|---|---|
| One collection include | Yes | No |
| Multiple collection includes | No | Yes |
| High-latency database connection | Yes | No |
| Large collections (100+ items) | No | Yes |
Projection-first patterns
Loading full entities when you only need a few fields wastes bandwidth and memory.
The problem
// BAD: Loading everything when you need 3 fields
var articles = await db.Articles
.Where(a => a.IsPublished)
.ToListAsync();
// Then in the view, you only use Title, Slug, and PublishedAt
If Article has a large text/binary column (e.g., body content), you just transferred data you will never render.
Even when you don't have huge columns, projecting only what you need is a core EF Core efficiency recommendation.
Important: projecting collection navigations can lead to unexpected extra roundtrips in some query shapes. Prefer projecting scalars/aggregates (e.g., counts) or use eager loading when you need related collections.
The fix
// GOOD: Project to exactly what you need
var articles = await db.Articles
.AsNoTracking()
.Where(a => a.IsPublished)
.OrderByDescending(a => a.PublishedAt)
.Select(a => new ArticleListItem
{
Title = a.Title,
Slug = a.Slug,
PublishedAt = a.PublishedAt,
ReadingTimeMinutes = a.ReadingTimeMinutes
})
.ToListAsync();
Projection also helps EF Core generate more efficient SQL because it only selects the columns you specify.
Pagination that scales
Loading unbounded result sets is a denial-of-service waiting to happen.
The problem
// BAD: No limit
var allArticles = await db.Articles.ToListAsync();
With 10,000 articles, this loads everything into memory.
The fix
// GOOD: Always paginate
const int PageSize = 20;
var page = Math.Max(1, requestedPage);
var articles = await db.Articles
.AsNoTracking()
.Where(a => a.IsPublished)
.OrderByDescending(a => a.PublishedAt)
.Skip((page - 1) * PageSize)
.Take(PageSize)
.ToListAsync();
Pagination checklist
- Always have a Take() or FirstOrDefault().
- Clamp page numbers to valid ranges.
- Consider keyset pagination for large datasets (more efficient than offset).
Context pooling: when it matters
Creating a DbContext is cheap. But "cheap" times 10,000 requests per second adds up.
When to use pooling
Context pooling helps in high-throughput scenarios where context creation overhead is measurable. For most applications, it is not necessary.
// Enable context pooling
builder.Services.AddDbContextPool<AppDbContext>(options =>
options.UseSqlServer(connectionString));
When to skip pooling
- Low to medium traffic applications.
- When you need per-request context configuration.
- When context lifetime management is complex.
Measure before optimizing. Pooling adds constraints (pooled contexts are reset between uses).
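A minimal sketch of what those constraints mean for the context type itself (the entity sets are illustrative):
// A pooled context must have a public constructor that takes only DbContextOptions;
// instances are reset and reused across requests, so avoid stashing per-request state in fields.
public sealed class AppDbContext : DbContext
{
    public AppDbContext(DbContextOptions<AppDbContext> options) : base(options) { }

    public DbSet<Article> Articles => Set<Article>();
    public DbSet<Author> Authors => Set<Author>();
}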
Query-to-index mapping
Indexes are how you make WHERE and ORDER BY fast. If your query patterns do not match your indexes, you are doing table scans.
| Query pattern | Required index |
|---|---|
| WHERE Slug = @slug | Unique index on Slug |
| WHERE IsPublished = 1 ORDER BY PublishedAt DESC | Composite index (IsPublished, PublishedAt DESC) |
| WHERE SeriesId = @id ORDER BY SeriesOrder | Composite index (SeriesId, SeriesOrder) |
| WHERE TagId = @id (via join table) | Index on join table foreign key |
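On the EF Core side, these indexes can be declared in the model so migrations create them. A minimal sketch for the first two rows, assuming the Article entity from earlier (property names are assumptions):
// In AppDbContext.OnModelCreating: declare the indexes the query patterns above rely on
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder.Entity<Article>(article =>
    {
        article.HasIndex(a => a.Slug).IsUnique();                   // WHERE Slug = @slug
        article.HasIndex(a => new { a.IsPublished, a.PublishedAt }) // WHERE IsPublished ... ORDER BY PublishedAt
               .IsDescending(false, true);                          // PublishedAt descending (EF Core 7+)
    });
}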
Verification
Check your query plans. In SQL Server Management Studio, enable "Include Actual Execution Plan" and look for Index Seeks (good) vs Index Scans or Table Scans (bad).
Copy/paste artifact: EF Core query validation ladder
Run this checklist before merging EF Core changes:
EF Core Query Validation Ladder
1. Enable query logging
- Set Microsoft.EntityFrameworkCore.Database.Command to Information
- Run the endpoint and capture logs
2. Count queries
- List endpoint should be 1-3 queries, not N+1
- Detail endpoint should be 1-2 queries
3. Check for AsNoTracking
- Every read-only query should have .AsNoTracking()
- Heuristic: search for common query materializers and review call sites (e.g., `ToListAsync`, `FirstOrDefaultAsync`) for tracking intent
4. Check for pagination
- Every list query should have .Take() or equivalent
- No unbounded .ToListAsync() on large tables
5. Check for projection
- List endpoints should use .Select() to a DTO
- Avoid loading full entities with large text/binary columns
6. Verify indexes exist
- Every WHERE column should have an index
- Every ORDER BY column should have an index
- Composite indexes for multi-column filters
Copy/paste artifact: query logging configuration
{
"Logging": {
"LogLevel": {
"Default": "Information",
"Microsoft.AspNetCore": "Warning",
"Microsoft.EntityFrameworkCore.Database.Command": "Information"
}
}
}
Common failure modes
- N+1 queries that work fine with 10 items but timeout with 1000.
- Removing .AsNoTracking() during a refactor.
- Adding .Include() chains that cause cartesian explosion.
- Loading full entities for list pages with large text columns.
- Missing indexes on foreign key columns.
- Pagination that uses offset on millions of rows (use keyset instead).
Checklist
- Query logging is enabled in development.
- List endpoints execute a bounded number of queries (no N+1).
- Read-only queries use .AsNoTracking().
- List queries use projection to DTOs.
- All queries have pagination with .Take().
- Indexes exist for WHERE and ORDER BY columns.
FAQ
How do I find N+1 queries in an existing codebase?
Enable query logging, exercise every list endpoint, and look for repeated query patterns. Tools like MiniProfiler can also visualize query counts per request.
Is AsNoTracking always faster?
For read-only scenarios, Microsoft guidance is that no-tracking queries are generally quicker because EF Core doesn't set up change tracking. However, tracking can be beneficial when identity resolution (deduplicating repeated rows into a single instance) matters. Measure on your query shape.
For queries where you will modify and save the entity, tracking is required (or you take on manual attach/update responsibility).
When should I use AsSplitQuery?
When you have multiple collection includes and see cartesian explosion (row count much higher than entity count). Start with single query and switch to split if you measure problems.
How do I know if my indexes are being used?
Check the query execution plan. In SQL Server, use SET STATISTICS IO ON or view the actual execution plan. Look for Index Seek operations.
Should I use raw SQL instead of EF Core for performance?
Rarely. Well-written EF Core queries generate efficient SQL. Use raw SQL only for complex queries that EF Core cannot express or when you have measured a specific bottleneck.
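If you do drop to SQL, EF Core can still parameterize and compose it. A minimal sketch using FromSql (EF Core 7+; the SQL text and date filter are illustrative):
// FromSql parameterizes {minDate} automatically and stays composable with LINQ operators
var recent = await db.Articles
    .FromSql($"SELECT * FROM Articles WHERE PublishedAt >= {minDate}")
    .AsNoTracking()
    .OrderByDescending(a => a.PublishedAt)
    .Take(20)
    .ToListAsync();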
What is keyset pagination and when should I use it?
Keyset pagination filters on the last key value you already returned (a WHERE clause on the ordering columns) instead of skipping rows with OFFSET. Use it when paginating large datasets (100K+ rows), where OFFSET has to scan and discard every skipped row.
// Keyset pagination example (assumes PublishedAt is effectively unique; otherwise add a tie-breaker column to the key)
var articles = await db.Articles
.AsNoTracking()
.Where(a => a.PublishedAt < lastSeenDate)
.OrderByDescending(a => a.PublishedAt)
.Take(20)
.ToListAsync();
What to do next
Enable query logging in your development environment today. Run your list endpoints and count the queries. If you see N+1, fix it before it ships.
For more on building fast ASP.NET Core applications, read Performance Defaults That Beat Clever Optimizations.
If you want help reviewing EF Core query patterns in your codebase, reach out via Contact.
References
- EF Core Efficient Querying
- EF Core Tracking vs No-Tracking
- EF Core Split Queries
- EF Core Performance Introduction
- Advanced Performance Topics - EF Core (DbContext pooling, compiled queries)
- Pagination - EF Core (keyset pagination)
Author notes
Decisions:
- Recommend projection over Include for list endpoints. Rationale: projection reduces data transfer and avoids cartesian explosion.
- Recommend AsNoTracking as default for reads. Rationale: Microsoft recommends no-tracking for read-only queries to avoid change tracking overhead; use AsNoTrackingWithIdentityResolution when you need deduped instances without tracking.
- Recommend split queries for multiple collection includes. Rationale: avoids row multiplication while keeping queries simple.
Observations (measure on your system):
- N+1 fixes can collapse "N extra roundtrips" down to 0.
- Projection can reduce network transfer and allocations by not selecting large columns.
- No-tracking can reduce change-tracker overhead for read-heavy endpoints.