AI-assisted development accelerates the part where you type. It also accelerates the part where you ship vulnerabilities. Security has to be a post-diff practice, because the diff is the artifact.
You're reading Part 3 of 5 in the AI-assisted development series. Previous: Part 2: A Spec-Driven AI Workflow That Holds Up in Production. Next: Part 4: Performance Defaults That Beat Clever Optimizations.
This series moves from workflow -> safety -> performance -> publishing, using DAP iQ as the working system.
Common questions this answers
- What are the most common AI-generated security regressions in ASP.NET Core?
- How do you define trust boundaries for headers, input, and output?
- What review checklist catches the high-risk mistakes fast?
Definition (what this means in practice)
Security boundaries are the rules that separate attacker-controlled inputs from trusted data. For AI-assisted development, the boundary is enforced after the diff: in review, validation, and runtime guardrails.
In practice, this means reviewing every AI-generated diff for trust boundary violations and running the security checklist before merge.
Terms used
- Trust boundary: a point where attacker-controlled data could influence behavior.
- Guardrail: a small, enforced rule (max length, normalization, allowlists, timeouts).
- SSRF: server-side request forgery (the server makes outbound requests to attacker-chosen destinations).
- XSS: cross-site scripting (attacker-controlled script runs in a user's browser).
Reader contract
This article is for:
- ASP.NET Core engineers shipping production web apps.
- Reviewers trying to prevent AI-generated security bugs.
You will leave with:
- A threat model table that maps AI failure modes to guardrails.
- Concrete code patterns for forwarded headers, route guards, and rate limits.
- A PR checklist for AI-generated diffs.
This is not for:
- "security is a separate team" org charts.
- apps that do not know their trust boundaries.
Why this exists
I want the speed benefits of AI-assisted development without lowering the security bar. The only sustainable approach is to define trust boundaries and review rules that apply to every diff.
Default rule
Treat AI-generated diffs as untrusted until reviewed and validated.
Quick start (10 minutes)
If you want immediate value, apply this checklist to your next AI-assisted PR:
Verified on: ASP.NET Core (.NET 10).
- List every trust boundary the diff touches: input, output, headers, auth, network.
- Reject any diff that adds a new trust assumption without tests.
- Require max length and normalization for every route parameter.
- Require rate limiting for every state change.
- Grep for raw output rendering (Html.Raw) and header usage (X-Forwarded-For).
Threat model: where AI breaks first
Treat AI-generated changes as untrusted until proven otherwise. In web apps, failures cluster in a small set of places.
| Surface | AI failure mode | Impact | Guardrail | How to test |
|---|---|---|---|---|
| Input | Removes max length checks | DOS, expensive queries | Explicit max length + normalization | Fuzz long inputs, verify 404/400 |
| Output | Switches to raw rendering | XSS | Encode by default; disable raw HTML in markdown | Try <script> payloads in content |
| Headers | Trusts client headers directly | Spoofed IP/scheme | ForwardedHeaders + KnownProxies + KnownIPNetworks (KnownNetworks on older runtimes) | Send fake headers, confirm ignored |
| Network | Adds server-side fetch | SSRF | No outbound fetch or allowlist-only | Try http://169.254.169.254/ |
| Auth | Adds "temporary" bypass | Account compromise | No debug bypasses in prod | Scan for AllowAnonymous changes |
| Logging | Logs tokens/PII | Data exposure | Redaction rules | Review log statements, run with sample data |
DAP iQ examples:
- Slug routes have max length guards to keep parsing and query costs predictable.
- Rate limiting is applied at the MVC route layer, not at static assets.
Trust boundaries: forwarded headers and IP-based policies
If you use client IP for anything, you must define where that IP comes from. Do not trust client-supplied headers.
DAP iQ configures forwarded headers with known internal networks so ASP.NET Core only honors forwarded values from trusted proxies.
The rate limiter then reads Connection.RemoteIpAddress, not X-Forwarded-For.
Middleware ordering matters.
UseForwardedHeaders() must run early, before auth, rate limiting, and anything that reads scheme or IP.
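A minimal ordering sketch (standard ASP.NET Core middleware calls; adjust to your actual pipeline):

```csharp
var app = builder.Build();

// Forwarded headers first: everything downstream sees the proxy-validated IP and scheme.
app.UseForwardedHeaders();

app.UseHttpsRedirection();
app.UseAuthentication();
app.UseAuthorization();
app.UseRateLimiter();

app.MapControllers();

app.Run();
```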
Prefer explicit proxies when you can. Networks are a fallback when proxy IPs are not stable.
Option A (recommended): explicit reverse proxy IPs
```csharp
using System.Net;
using Microsoft.AspNetCore.HttpOverrides;

builder.Services.Configure<ForwardedHeadersOptions>(options =>
{
    options.ForwardedHeaders = ForwardedHeaders.XForwardedFor | ForwardedHeaders.XForwardedProto;
    options.KnownProxies.Add(IPAddress.Parse("203.0.113.10")); // replace with your proxy
});

app.UseForwardedHeaders();
```
Option B (fallback): RFC1918 networks
```csharp
using System.Net;
using Microsoft.AspNetCore.HttpOverrides;

builder.Services.Configure<ForwardedHeadersOptions>(options =>
{
    options.ForwardedHeaders = ForwardedHeaders.XForwardedFor | ForwardedHeaders.XForwardedProto;

    // If your reverse proxy lives in private address space and its IP is not stable.
    options.KnownIPNetworks.Add(new IPNetwork(IPAddress.Parse("10.0.0.0"), 8));
    options.KnownIPNetworks.Add(new IPNetwork(IPAddress.Parse("172.16.0.0"), 12));
    options.KnownIPNetworks.Add(new IPNetwork(IPAddress.Parse("192.168.0.0"), 16));
});

app.UseForwardedHeaders();
```
Optional note: some platforms present proxy IPs as IPv4-mapped IPv6 (for example ::ffff:10.0.0.0/104). If you see that in Connection.RemoteIpAddress, add those networks explicitly.
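If that applies, the addition is one line per mapped network (assumes System.Net.IPNetwork on .NET 8+):

```csharp
// Hypothetical: only needed when proxies appear as IPv4-mapped IPv6 addresses.
options.KnownIPNetworks.Add(IPNetwork.Parse("::ffff:10.0.0.0/104"));
```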
If AI suggests "just read X-Forwarded-For", treat that as a bug.
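In review, the contrast is easy to grep for (sketch):

```csharp
// Anti-pattern: attacker-controlled unless a trusted proxy set it.
var spoofable = context.Request.Headers["X-Forwarded-For"].ToString();

// Correct: after UseForwardedHeaders with KnownProxies/KnownIPNetworks,
// the middleware has already folded trusted forwarded values into this.
var clientIp = context.Connection.RemoteIpAddress;
```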
Baseline security headers (content site)
If the assistant adds a new endpoint or rendering path, make it pass a basic headers baseline.
Minimal baseline:
- Content-Security-Policy: start restrictive and add only what you need
- X-Content-Type-Options: nosniff
- Referrer-Policy
- Permissions-Policy
Example (minimal, safe defaults):
```csharp
app.Use(async (context, next) =>
{
    context.Response.Headers.XContentTypeOptions = "nosniff";
    context.Response.Headers["Referrer-Policy"] = "strict-origin-when-cross-origin";
    context.Response.Headers["X-Frame-Options"] = "DENY";
    context.Response.Headers["Permissions-Policy"] = "geolocation=(), microphone=(), camera=()";
    await next();
});
```
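The sketch above omits Content-Security-Policy because the right policy is site-specific. A restrictive starting point for a content site might look like this (an assumption to tune, not the DAP iQ policy):

```csharp
// Hypothetical starting policy; loosen only with evidence from CSP violation reports.
context.Response.Headers["Content-Security-Policy"] =
    "default-src 'self'; script-src 'self'; style-src 'self'; " +
    "img-src 'self' data:; frame-ancestors 'none'; base-uri 'self'";
```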
Copy/paste artifact: AI-assisted security review checklist
- Inputs: every route param has max length + normalization.
- Outputs: no new Html.Raw and no raw HTML enabled in markdown.
- Headers: forwarded headers are enabled with KnownProxies/KnownIPNetworks; no direct X-Forwarded-* reads.
- State changes: every POST is CSRF-protected and rate limited.
- Secrets: no tokens/keys/PII in logs.
- Network: no new server-side fetch unless allowlisted.
Route guardrails: slug normalization and max length
AI loves to remove "defensive" code. In production, defensive code is the line between stable and expensive.
Add a slug helper and use it everywhere you accept slugs:
```csharp
public static class Slug
{
    public const int MaxLength = 200;

    public static bool IsValid(string value)
    {
        if (string.IsNullOrWhiteSpace(value) || value.Length > MaxLength) return false;

        for (int i = 0; i < value.Length; i++)
        {
            char c = value[i];
            bool ok = (c >= 'a' && c <= 'z') || (c >= '0' && c <= '9') || c == '-';
            if (!ok) return false;
        }

        return true;
    }

    public static string Normalize(string value) => value.Trim().ToLowerInvariant();
}
```
Then the route guard becomes (the repository call is illustrative):

```csharp
public async Task<IActionResult> Read(string slug)
{
    slug = Slug.Normalize(slug);
    if (!Slug.IsValid(slug))
    {
        return NotFound();
    }

    // Query with the validated slug; hypothetical repository call.
    var post = await _posts.GetBySlugAsync(slug);
    return post is null ? NotFound() : View(post);
}
```
If you do canonical redirects, do them deliberately and test them. Never let an assistant "fix" routing without you owning the SEO implications.
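If you do redirect, the deliberate version is small. A hypothetical fragment for the Read action above (compare before normalizing in place):

```csharp
// Hypothetical: serve one canonical URL per post instead of duplicates.
var canonical = Slug.Normalize(slug);
if (Slug.IsValid(canonical) && !string.Equals(slug, canonical, StringComparison.Ordinal))
{
    return RedirectToActionPermanent(nameof(Read), new { slug = canonical });
}
```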
Output encoding: treat content as hostile
AI loves to "help" by switching to raw output rendering. That is a common XSS footgun.
DAP iQ renders long-form content by converting Markdown to HTML through a pipeline that disables raw HTML. That gives you safe-ish HTML output with a narrower attack surface.
```csharp
_pipeline = new MarkdownPipelineBuilder()
    .DisableHtml()
    .UseAdvancedExtensions()
    .UseAutoLinks()
    .Build();
```
This is not a replacement for review. It is a boundary that makes unsafe diffs harder to land.
What can still go wrong (even with DisableHtml()):
- unsafe Html.Raw(...) usage in Razor views
- unsafe attribute construction (string concatenation into href/src/style)
- allowing arbitrary iframes or embeds without allowlists
- weakening CSP or security headers to "make it work"
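One cheap way to pin the boundary down is a regression test. A minimal xUnit sketch (assumes Markdig):

```csharp
using Markdig;
using Xunit;

public class MarkdownHardeningTests
{
    [Fact]
    public void Pipeline_escapes_raw_html_instead_of_rendering_it()
    {
        var pipeline = new MarkdownPipelineBuilder()
            .DisableHtml()
            .UseAdvancedExtensions()
            .UseAutoLinks()
            .Build();

        var html = Markdown.ToHtml("<script>alert(1)</script>", pipeline);

        // DisableHtml() treats raw HTML as literal text, so it must come out encoded.
        Assert.DoesNotContain("<script>", html);
        Assert.Contains("&lt;script&gt;", html);
    }
}
```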
SSRF boundary: do not add server-side fetching casually
If the assistant suggests "fetch this URL" for previews, thumbnails, or metadata, pause. Server-side fetching is SSRF until proven otherwise.
If you must fetch, the baseline requirements are:
- allowlist hosts
- disallow link-local and RFC1918 ranges
- short timeouts
- no redirects
- size limits
If you cannot implement that, do not implement the fetch.
Allowlist-only skeleton (shape of a safe solution):
```csharp
using System.Linq;
using System.Net;
using System.Net.Http;

// Assumes the HttpClient was created with redirects disabled and no built-in timeout, e.g.:
//   new HttpClient(new SocketsHttpHandler { AllowAutoRedirect = false })
//   { Timeout = Timeout.InfiniteTimeSpan }
// Without AllowAutoRedirect = false, the 3xx check below never fires because
// HttpClient follows redirects silently.
static readonly HashSet<string> AllowedHosts = new(StringComparer.OrdinalIgnoreCase)
{
    "example.com",
    "cdn.example.com"
};

static async Task<byte[]> FetchAllowlistedBytesAsync(HttpClient http, Uri uri, CancellationToken ct)
{
    const int MaxBytes = 256 * 1024;

    if (!AllowedHosts.Contains(uri.Host)) throw new InvalidOperationException("host not allowlisted");

    // Resolve first and reject private/link-local targets. DNS rebinding can still
    // race this check; pin the resolved IP if you need a stronger guarantee.
    var ips = await Dns.GetHostAddressesAsync(uri.DnsSafeHost, ct);
    if (ips.Any(IsPrivateOrLinkLocal)) throw new InvalidOperationException("private/link-local IP blocked");

    using var req = new HttpRequestMessage(HttpMethod.Get, uri);
    using var timeout = CancellationTokenSource.CreateLinkedTokenSource(ct);
    timeout.CancelAfter(TimeSpan.FromSeconds(3));

    using var resp = await http.SendAsync(req, HttpCompletionOption.ResponseHeadersRead, timeout.Token);
    if ((int)resp.StatusCode is >= 300 and < 400) throw new InvalidOperationException("redirects not allowed");

    var len = resp.Content.Headers.ContentLength;
    if (len is > MaxBytes) throw new InvalidOperationException("response too large");

    await using var stream = await resp.Content.ReadAsStreamAsync(timeout.Token);
    using var ms = new MemoryStream(capacity: len is > 0 and <= MaxBytes ? (int)len : 0);
    var buffer = new byte[16 * 1024];
    int read;
    while ((read = await stream.ReadAsync(buffer, timeout.Token)) > 0)
    {
        // Content-Length can lie; enforce the cap on actual bytes read.
        if (ms.Length + read > MaxBytes) throw new InvalidOperationException("response too large");
        ms.Write(buffer, 0, read);
    }

    return ms.ToArray();
}

static bool IsPrivateOrLinkLocal(IPAddress ip) =>
    IPAddress.IsLoopback(ip)
    || ip.IsIPv6LinkLocal
    || ip.IsIPv6SiteLocal
    || (ip.AddressFamily == System.Net.Sockets.AddressFamily.InterNetwork &&
        ip.GetAddressBytes() is var b &&
        (b[0] == 10                                    // 10.0.0.0/8
         || (b[0] == 172 && b[1] >= 16 && b[1] <= 31)  // 172.16.0.0/12
         || (b[0] == 192 && b[1] == 168)               // 192.168.0.0/16
         || (b[0] == 169 && b[1] == 254)));            // 169.254.0.0/16 (link-local)
```
Rate limiting on state changes
State changes are where abuse costs money. They are also where assistants frequently propose "quick endpoints" without guardrails.
DAP iQ uses a strict rate limit for likes toggles: one toggle per hour per (client IP, slug).
```csharp
options.AddPolicy("likes-write-per-ip-slug-1-per-60m", httpContext =>
{
    var clientIp = GetRateLimitClientIpAddress(httpContext);
    if (clientIp is null)
    {
        return RateLimitPartition.GetNoLimiter("missing-client-ip");
    }

    var slug = httpContext.Request.RouteValues["slug"]?.ToString();
    var partitionKey = string.IsNullOrWhiteSpace(slug)
        ? clientIp
        : $"{clientIp}:{slug.Trim().ToLowerInvariant()}";

    return RateLimitPartition.GetFixedWindowLimiter(
        partitionKey: partitionKey,
        factory: _ => new FixedWindowRateLimiterOptions
        {
            PermitLimit = 1,
            Window = TimeSpan.FromMinutes(60),
            QueueProcessingOrder = QueueProcessingOrder.OldestFirst,
            QueueLimit = 0
        });
});
```
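GetRateLimitClientIpAddress is a DAP iQ helper. A minimal sketch of its likely shape (an assumption, not the actual implementation):

```csharp
// Reads the connection IP that UseForwardedHeaders already validated.
// Never reads X-Forwarded-For directly.
static string? GetRateLimitClientIpAddress(HttpContext httpContext) =>
    httpContext.Connection.RemoteIpAddress?.ToString();
```

Attach the policy with [EnableRateLimiting("likes-write-per-ip-slug-1-per-60m")] on the action (or RequireRateLimiting on the endpoint) so the constraint binds to writes only.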
AI-assisted development is safer when write endpoints have stronger constraints than read endpoints.
Logging rules: do not leak secrets
Common failure mode: logging expands under the banner of debugging until it includes raw headers, tokens, and PII. In production, that is a data exposure incident waiting to happen.
Baseline:
- do not log raw headers
- do not log request bodies by default
- do not log secrets or connection strings
- if you must log identifiers, log hashed or truncated values
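For that last rule, a hypothetical helper (the name and truncation length are assumptions):

```csharp
using System.Security.Cryptography;
using System.Text;

// Logs a short, stable fingerprint instead of the raw identifier:
// enough to correlate events, useless to an attacker reading logs.
static string LogSafeId(string value)
{
    var hash = SHA256.HashData(Encoding.UTF8.GetBytes(value));
    return Convert.ToHexString(hash, 0, 6); // first 12 hex chars
}
```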
Reference implementation: PR checklist for AI-generated diffs
Copy/paste this into your PR template.
[AI security checklist]
- [ ] Any change touching headers/IP uses ForwardedHeaders + trusted proxies/networks.
- [ ] Any new route parameter has max length + normalization.
- [ ] Any state-changing endpoint has rate limiting.
- [ ] No new server-side fetch without SSRF controls (allowlist, IP blocks, timeouts, no redirects, size limits).
- [ ] No new Html.Raw or raw string interpolation into HTML.
- [ ] Logs do not include secrets, tokens, headers, or PII.
- [ ] Validation commands were run and results recorded.
Common failure modes
- Trusting X-Forwarded-For directly.
- Switching rendering to raw HTML without a sanitization story.
- Removing max length guards because they "feel defensive".
- Logging user input and headers without filtering.
- Adding server-side fetch code without SSRF controls.
Checklist
- Review the diff for trust boundary violations (headers, IP, auth, network).
- Verify every route parameter has max length limits.
- Confirm markdown rendering does not allow raw HTML injection.
- Require rate limiting on state-changing endpoints.
- Flag any new outbound network calls as high risk.
FAQ
Is disabling raw HTML in markdown enough to prevent XSS?
No. It reduces the surface. You still need review and safe rendering patterns.
Should I trust forwarded headers in production?
Only if you restrict which proxies you trust. If you cannot restrict, treat forwarded headers as attacker-controlled.
What is the most common forwarded-headers regression in AI diffs?
Reading X-Forwarded-For or X-Forwarded-Proto directly.
Treat it as a bug unless ForwardedHeaders is configured and restricted.
Do I need CSRF protection on every POST?
Yes for browser-based apps. If it is a public endpoint, require CSRF (or an explicit same-origin strategy) and rate limit it.
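A minimal wiring sketch for MVC, using the framework's auto-validation filter so every unsafe method is covered by default:

```csharp
using Microsoft.AspNetCore.Mvc;

builder.Services.AddControllersWithViews(options =>
{
    // Validates antiforgery tokens on POST/PUT/PATCH/DELETE; GET/HEAD/OPTIONS/TRACE are exempt.
    options.Filters.Add(new AutoValidateAntiforgeryTokenAttribute());
});
```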
Should a content site use a Content-Security-Policy?
Often yes. Start restrictive and relax only with evidence.
How do I keep slug routes from becoming a DOS vector?
Max length, normalization, and an allowed character set. Reject early before you hit the database.
Do I need SSRF defenses if the feature is "internal"?
Yes. SSRF bugs are commonly introduced by internal tooling and preview features.
What is the minimum safe shape for server-side fetching?
Allowlist hostnames, block private/link-local IP ranges, enforce timeouts, disallow redirects, and cap response size.
Do I need rate limiting on read endpoints?
Sometimes. Start with state changes. If reads are expensive, add caching and then consider rate limits.
Why are max length checks such a big deal?
They cap worst-case work. AI tends to remove them. Attackers tend to find the missing ones.
What to do next
Read Part 4: Performance Defaults That Beat Clever Optimizations. Browse the AI-assisted development series for the full sequence. If you only adopt one habit, make security review a required post-diff step for AI-assisted changes. If you want a second set of eyes on a boundary review, message via Contact.
References
- ASP.NET Core Proxy and Load Balancer Configuration
- ASP.NET Core Rate Limiting Middleware
- ASP.NET Core Cross-Site Scripting (XSS)
- Model validation in ASP.NET Core MVC and Razor Pages
- Configure options for the ASP.NET Core Kestrel web server
- HttpClient Guidelines
- OWASP SSRF Attack Overview
- OWASP XSS Prevention (Community)
Author notes
Decisions:
- Use forwarded headers only with known internal networks. Rationale: prevents header spoofing in production.
- Disable raw HTML in Markdown rendering. Rationale: reduces XSS attack surface for content.
- Rate limit likes writes per (IP, slug). Rationale: cheap defense against abuse and automation.
Observations:
- Before: it was easy to accidentally trust client headers when moving fast.
- After: forwarded headers config + IP normalization made rate limiting stable.
- Observed: markdown hardening provided a consistent boundary for content rendering.