Keeping Your Website Off the Naughty List: A Practical Guide to DDoS Defense for Umbraco

As the holiday season brings both happy traffic spikes and the occasional unwelcome visitor, it’s the perfect time to make sure your website can handle the pressure. This guide walks you through real-world strategies for building DDoS resilience into your Umbraco (or any) site — from understanding attack layers and optimizing caching to setting up smart firewall rules and throttling policies.

DDoS attacks have become a recurring issue over the past couple of years, especially for public or semi-public sites in Denmark. If you’ve been part of a team whose site has been hit, you’ve probably also experienced the long hours, frantic debugging, and the rush to get everything back online.

Over the last three years, I’ve been involved in designing, implementing, and running a website that’s faced over 20 DDoS attacks since going live in the summer of 2023. We knew this would happen, prepared for it, and fortunately, kept the site running through every attack. This is an attempt to condense and relay some of our learnings.

What You Need

No matter how fast your code is or how many servers you spin up, you won’t be able to handle a serious DDoS attack without some extra help beyond your main web server.

At a minimum, you’ll need a DDoS protection service, a CDN, and a Web Application Firewall (WAF). They come bundled together in solutions like Cloudflare or Azure Front Door, which pair nicely with Umbraco setups, but you can also mix and match individual services from AWS or from European alternatives like Myra Security.

Why You Need It

Before diving into how we’ve configured our Umbraco setup to handle DDoS attacks with our chosen protection layer, let’s take a quick look at why this extra infrastructure is essential, and what it actually does.

DDoS Protection for Layer 3/4 Attacks

DDoS attacks come in many forms. If your web server is connected directly to the internet, it may be exposed through network protocols like ICMP or SSDP, protocols most of us rarely think about. The problem is that they can be exploited for powerful network-level DDoS attacks that flood your server’s connection. No matter how well your website or code is optimized, it won’t help: the attack exhausts the capacity to receive requests before your server even gets a chance to respond.

These attacks operate at Layer 3 or 4 of the network stack, and a DDoS protection service shields your network from this kind of overload. Once that’s covered, attackers are limited to Layer 7 (HTTP) attacks, that is: floods of web requests. To stay responsive, you’ll need to stop as many of these malicious HTTP calls as possible before they hit your server. That’s where the next two components come in.

CDN for Traffic Distribution and Caching

A CDN (Content Delivery Network) uses a global network of servers, known as Points of Presence (POPs), to distribute and cache your content. Instead of every user request going straight to your web server, requests are routed to the nearest POP — which can often serve the content directly from its cache.

By including cache headers in your responses, you can tell both browsers and CDN POPs how long they’re allowed to cache your files. For example, allowing a CSS file to be cached for 365 days means that file might not trigger new server requests for an entire year. Setting solid cache rules can reduce the amount of traffic reaching your server during an attack by orders of magnitude.

Web Application Firewall for Blocking Known Bad Traffic

Even DDoS attacks that are more than just floods of GET requests to the root of your website are often surprisingly unsophisticated. They rely on rented botnets sending massive numbers of POST requests, or requests with slightly tweaked query strings, to bypass caching. These tricks work because:

  • POST request responses usually aren’t cached by CDNs.
  • Different query strings make the CDN see each as a separate URL, forcing more requests to your server.

If you know which URLs and methods are actually legitimate for your site, you can use your Web Application Firewall (WAF) to filter out junk traffic. By writing rules that block common illegitimate patterns, you can prevent a huge portion of attack traffic from ever reaching your servers.

How to Prepare

Being ready for a DDoS attack starts long before any malicious traffic hits your site. A bit of thoughtful design, smart configuration, and awareness of how your site behaves under pressure can go a long way toward keeping things stable.

Design for Protection

Because POST requests and server-side query string processing can bypass caching, they’re natural weak spots during an attack. Every POST request or unique query string forces the CDN to forward traffic directly to your origin server — exactly what you don’t want when traffic spikes.

To reduce risk, design your site so it relies less on these mechanisms. For instance:

  • Move lightweight form handling and logic into the browser using JavaScript.
  • Use AJAX calls to communicate with your backend, rather than traditional form submissions.
  • If possible, cache API responses that don’t contain user-specific data.
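
For the third point, here’s a minimal sketch of what a cache-friendly endpoint can look like (the endpoint name and data are made up for illustration): a read-only API action whose response carries a public cache header, so browsers and CDN POPs can absorb repeated AJAX calls instead of your origin.


// Hypothetical endpoint - a read-only API action with no user-specific data,
// emitting Cache-Control: public, max-age=300 so the CDN can serve repeat calls.
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api/openinghours")]
public class OpeningHoursController : ControllerBase
{
    [HttpGet]
    [ResponseCache(Duration = 300, Location = ResponseCacheLocation.Any)]
    public IActionResult Get() => Ok(new { Monday = "09-17", Saturday = "10-14" });
}

A hypothetical cacheable API endpoint serving non-user-specific data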

The more content and functionality you can safely cache at the edge (CDN POPs), the fewer requests will ever reach your server, dramatically improving stability during a DDoS surge.

Know Your Site

You may not be able to eliminate POST requests or query strings completely, and that’s okay. The key is to understand exactly where and why they’re used. Map out:

  • Which URLs accept POST requests.
  • Which endpoints depend on query strings.
  • Which APIs or features must always reach the backend.

With this knowledge in hand, you can fine-tune your CDN and WAF configurations. For example:

  • Allow only the POST endpoints that truly need it.
  • Configure your CDN to ignore irrelevant query strings when caching.
  • Use your WAF to block traffic that doesn’t match legitimate patterns.

A clear understanding of your site’s behavior lets you build precise, layered defenses, turning a potential vulnerability into a well-controlled gateway.

How to Protect Your Site

Here’s how to set up your site so it’s actually tough to take down. With the right configurations and caching strategies in place, you can cut the number of attack requests that reach your server to a fraction.

Layer 3/4 Attacks

This part’s easy: use a service that includes built-in Layer 3/4 protection. These are network-level attacks that flood your connection before traffic even gets to your app. Services like Cloudflare and Azure Front Door handle this automatically with no special setup required. Just make sure your site is fully onboarded and routed through one of these providers.

Traffic Dispersion and Caching

A CDN distributes load and caches your content to keep attacks (and legitimate traffic spikes) from overloading your origin server. On Umbraco Cloud, caching can be as simple as checking “Enable Cache” in the settings. That immediately routes traffic through Cloudflare, caching your content at multiple global Points of Presence (POPs). Congratulations, you’ve already started reducing the attack surface.

But to make the CDN truly effective, you’ll want a smarter caching strategy. The goal is to:

  • Cache as much content as possible.
  • Cache it for as long as possible.
  • Avoid serving stale content to users.

Most Umbraco sites contain three main types of content, each with its own caching approach:

  1. Static content - JavaScript, CSS, fonts, images, etc.
  2. Media files - images and files from Umbraco’s media library.
  3. Dynamic pages - content rendered on-the-fly by Umbraco.

Caching Static Resources

For static files, aim for long-term caching - ideally a full year. To make this safe, use versioned or hashed filenames, so uploading a new version also updates the file name (for example, main.af4732.js). You can automate this with your frontend build process or keep it simple by versioning folders in your templates.

Organize your static assets in a single folder (like /dist) to easily target them with caching rules.
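
As a sketch of the folder-versioning approach (the AssetPaths helper below is hypothetical, not part of Umbraco), a single constant bumped on each release changes every asset URL, so stale cached copies are simply never requested again:


// Hypothetical helper - bump AssetVersion on each release so every asset URL
// changes and old CDN/browser copies are bypassed.
public static class AssetPaths
{
    public const string AssetVersion = "2025-01-15";

    public static string Dist(string file) => $"/dist/{AssetVersion}/{file}";
}

// In a Razor template: <script src="@AssetPaths.Dist("main.js")"></script>

Folder versioning with a hypothetical helper class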

Then add cache headers. The “gold standard” caching rule looks like this:


cache-control: public, max-age=31536000, immutable

Strongest possible cache control header

  • public allows the CDN to store the file.
  • max-age=31536000 keeps it cached for a year.
  • immutable tells browsers the file won’t change.

You can set this header either with a URL rewrite rule:


<rewrite>
  <outboundRules>
    <rule name="Set Cache-Control - 1yr immutable for /dist/*">
      <match serverVariable="RESPONSE_Cache_Control" pattern=".*" />
      <conditions>
        <add input="{REQUEST_URI}" pattern="^\/dist\/.*" />
      </conditions>
      <action type="Rewrite" value="public, max-age=31536000, immutable" />
    </rule>
  </outboundRules>
</rewrite>

UrlRewrite rule for setting outbound cache-control header for static files

Or directly in Startup.cs:


// Cache dist folder files for 1 year
// Requires: using System.IO; using Microsoft.Extensions.FileProviders;
var distFolder = Path.Combine(Directory.GetCurrentDirectory(), "wwwroot", "dist");
app.UseStaticFiles(new StaticFileOptions
{
    FileProvider = new PhysicalFileProvider(distFolder),
    RequestPath = "/dist",
    OnPrepareResponse = ctx =>
    {
        // The indexer replaces any Cache-Control value set earlier in the pipeline
        ctx.Context.Response.Headers["Cache-Control"] = "public, max-age=31536000, immutable";
    }
});

Adding a cache-control header for static files in Startup.cs

Caching Media Files

Umbraco media items can be replaced without changing their filenames, which makes aggressive caching risky: you could end up serving outdated images. Unless you’re 100% sure your editors won’t replace files while keeping the same name, use a shorter cache lifetime for /media paths, and make sure your editors understand the implications.

If you can enforce proper versioning or have a disciplined editorial process, you can expand your caching rule to cover media items:


<conditions logicalGrouping="MatchAny">
	<add input="{REQUEST_URI}" pattern="^\/dist\/.*" />
	<add input="{REQUEST_URI}" pattern="^\/media\/.*" />
</conditions>

Using multiple file patterns for the same cache-control header; logicalGrouping="MatchAny" is required, since no single URL can match both patterns

Most modern Umbraco sites use ImageSharp for on-the-fly image resizing. Make sure to configure its caching properly in appsettings.json:


"Umbraco": {
	"CMS": {
		"Imaging":  {
			"Cache": {
				"BrowserMaxAge": "30.00:00:00",
				"CacheMaxAge": "365.00:00:00",
				"CacheHashLength": 20
			}
		}
	}
}

Configuring ImageSharp cache settings

Here:

  • BrowserMaxAge controls the cache header sent to browsers (and CDNs).
  • CacheMaxAge sets how long ImageSharp keeps its internal copies.

The example above produces a header like Cache-Control: public, max-age=2592000, immutable, caching the images at the CDN for 30 days. But if you’ve got your editors on board, why not cache those for a year as well?

Caching Dynamic Content

At first, caching dynamic content sounds counterintuitive; after all, it changes often. But during a DDoS attack, even small caching windows can make an enormous difference.

If your site normally receives hundreds of identical requests per second for the same few URLs, even a 10-second cache window means only a handful of requests will actually hit your origin. Everything else will be served from the CDN.

In Startup.cs, you can set this like so:



// In Startup.cs
app.Use((context, next) =>
{
    // This is a good place to add other security-related headers as well
    context.Response.Headers["Cache-Control"] = "public, max-age=10";
    return next(context);
});

Adding a 10-second cache-control header to all output

This simple rule ensures that your app only processes a tiny fraction of incoming traffic while keeping user-visible content fresh.

⚠️ Important:
When you override this header later in Startup.cs for static files, remember to replace it rather than append to it: use the indexer, like ctx.Context.Response.Headers["Cache-Control"] = "...", as we did in the static file snippet.

Handling Query Strings in the CDN

CDNs can treat URLs with different query strings in one of two ways:

  • Respect query strings – each variation is cached separately.
  • Ignore query strings – all variations use the same cached copy.

Attackers often exploit this by adding random query strings (like ?v=123abc) to force the CDN to forward every request to your origin. If your site doesn’t rely on query strings for server-side functionality, configure the CDN to ignore them — it’ll deliver the same cached version regardless of the query.

Since DDoS bots typically don’t run JavaScript, query strings used for client-side tracking (such as UTM parameters) won’t matter; they can safely be ignored.

⚠️ Important:
For /media images processed by ImageSharp, do not ignore query strings. They’re used for image transformation parameters and include HMAC verification to ensure security. Ignoring them would break legitimate image delivery.

With these steps in place (network protection, smart caching, and query string management), we’re already in a much better position. The next step is blocking as much unwanted traffic as possible.

Blocking Unwanted Traffic

Even with good caching in place, you might still get swamped. POST requests and requests whose query strings are processed server-side can be a problem, as can requests to legitimate endpoints that consume server-side resources.

Here are some battle-tested firewall rule concepts that have worked well in real-world scenarios:

Unwanted Requests

Block requests that you know your site will never handle. Attackers and automated probes often request files like .php, .asp, or known CMS paths looking for vulnerabilities. If your site doesn’t use these technologies, simply block such requests at the firewall, instead of letting them hit your web app and trigger 404s.
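
Ideally this rule lives in the WAF, but a cheap app-level fallback can catch anything that slips through. A minimal sketch for Startup.cs (the extension list is an example; tune it to your own site):


// Illustrative app-level fallback - the WAF should catch these first.
// Requires: using System; using System.Linq; using Microsoft.AspNetCore.Http;
string[] blockedExtensions = { ".php", ".asp", ".cgi" };

app.Use(async (context, next) =>
{
    var path = context.Request.Path.Value ?? string.Empty;
    if (blockedExtensions.Any(ext => path.EndsWith(ext, StringComparison.OrdinalIgnoreCase)))
    {
        // Short-circuit before the request reaches Umbraco's 404 handling
        context.Response.StatusCode = StatusCodes.Status403Forbidden;
        return;
    }
    await next(context);
});

App-level fallback rule blocking requests for technologies the site doesn’t use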

Unknown POST Requests

POST requests are a common attack vector because they skip CDN caching, so it’s important to handle them explicitly. The following two rules alone have taken care of several DDoS attacks in our setups:

  1. Allow only known POST endpoints by explicitly whitelisting them.
  2. Add a catch-all rule that blocks all other POST traffic.

This simple two-rule setup filters out a large volume of illegitimate requests while keeping your genuine form submissions working normally. If attackers sniff out legitimate POST endpoints and use those in an attack, you can at least block it by temporarily disabling the allow rule. If you can live with POST submissions being unavailable for a while, that may be better than the entire site being unavailable.
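
The same two-rule idea can also be mirrored at the application level as defense in depth. A minimal sketch for Startup.cs (the endpoint paths are made up for illustration; list your own known POST endpoints instead):


// Defense-in-depth sketch - paths are illustrative, not from a real site.
// Requires: using System.Linq; using Microsoft.AspNetCore.Http;
var allowedPostPaths = new PathString[] { "/umbraco/surface/contactform/submit", "/api/newsletter" };

app.Use(async (context, next) =>
{
    if (HttpMethods.IsPost(context.Request.Method)
        && !allowedPostPaths.Any(p => context.Request.Path.StartsWithSegments(p)))
    {
        // Catch-all: block any POST that isn't explicitly allowed
        context.Response.StatusCode = StatusCodes.Status403Forbidden;
        return;
    }
    await next(context);
});

App-level mirror of the allow-then-block POST rules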

Non‑Conformant Image Requests

If you’re using ImageSharp in Umbraco for image transformations, make sure HMAC protection is enabled. This ensures each image URL carries a secure signature.

Then, set up a rule that blocks any image request under /media that:

  • Includes a query string but lacks a valid HMAC parameter, or
  • Uses query parameters outside your known, allowed ImageSharp configuration.

This helps stop attackers from generating endless transformation requests and consuming your server’s resources.

Umbraco Backoffice

The Umbraco backoffice should never be publicly accessible if you can avoid it.

Best practices:

  • Restrict access by IP allowlisting (e.g., limit access to your company’s VPN or static external IPs).
  • Use Umbraco’s load balancing capabilities to completely disable the backoffice on Internet-facing environments.
  • Let editors connect through a private management environment instead.

This not only strengthens security but also keeps admin endpoints out of harm’s way during spikes or attacks.
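
If WAF-level IP allowlisting isn’t available to you, the same restriction can be approximated in Startup.cs. A minimal sketch (the IP is a placeholder, and behind a CDN or proxy you must configure forwarded headers first so RemoteIpAddress reflects the real client):


// Sketch with a placeholder IP - behind a CDN/proxy, configure forwarded
// headers first so RemoteIpAddress reflects the real client.
// Requires: using System.Linq; using System.Net; using Microsoft.AspNetCore.Http;
var allowedBackofficeIps = new[] { IPAddress.Parse("203.0.113.10") }; // e.g. your VPN egress IP

app.Use(async (context, next) =>
{
    if (context.Request.Path.StartsWithSegments("/umbraco"))
    {
        var remoteIp = context.Connection.RemoteIpAddress;
        if (remoteIp is null || !allowedBackofficeIps.Contains(remoteIp))
        {
            // Pretend the backoffice doesn't exist for everyone else
            context.Response.StatusCode = StatusCodes.Status404NotFound;
            return;
        }
    }
    await next(context);
});

Restricting /umbraco to known IPs at the application level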

Throttle the Rest

Even with all these protections, some unwanted traffic might still get through.
A throttling rule acts as a safety net, limiting the number of requests a single IP can make per minute.

But be careful — this can be tricky to tune. For example: One IP might represent a single company (e.g., a corporate NAT gateway), so legitimate users might share it. If each page load triggers ~25 requests (HTML, JS, CSS, images, etc.), just 10 users from the same IP could produce 250 requests per minute — potentially hitting your throttle limit.

To calibrate safely:

  1. Start with the rule in logging mode (many WAFs support this).
  2. Collect data for a few days or weeks.
  3. Adjust thresholds and move to enforcement once you’re confident it won’t block normal users.

If your logs show throttling across static assets (JS, CSS, fonts), that’s usually a sign your limits are too strict, because DDoS attacks typically do not request static assets.

Also note that these rules may block AI bot scrapers. For the past six months or more, we’ve frequently seen these scrapers requesting large numbers of pages at such high frequencies that throttling rules kick in. Whether that’s a problem or a feature may depend on your site’s visibility goals or stance on data harvesting!
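
Throttling usually belongs in the WAF, but if you want an application-level safety net as well, ASP.NET Core’s built-in rate limiting (.NET 7 and later) can enforce a similar per-IP limit. A minimal sketch with purely illustrative numbers, shown in minimal-hosting (Program.cs) style:


// Illustrative per-IP fixed-window limit - calibrate the numbers against your
// own traffic, just like the WAF rule.
// Requires: using System.Threading.RateLimiting;
builder.Services.AddRateLimiter(options =>
{
    options.RejectionStatusCode = StatusCodes.Status429TooManyRequests;
    options.GlobalLimiter = PartitionedRateLimiter.Create<HttpContext, string>(context =>
        RateLimitPartition.GetFixedWindowLimiter(
            partitionKey: context.Connection.RemoteIpAddress?.ToString() ?? "unknown",
            factory: _ => new FixedWindowRateLimiterOptions
            {
                PermitLimit = 300,               // max requests per IP per window
                Window = TimeSpan.FromMinutes(1)
            }));
});

// ...and after building the app:
app.UseRateLimiter();

A per-IP fixed-window throttle using ASP.NET Core’s built-in rate limiter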

In Summary

Blocking rules complement caching by dealing with traffic that can’t be offloaded. The key steps are to:

  • Know your legitimate endpoints, and block as many illegitimate requests as possible.
  • For endpoints that process POST requests or query strings, explicitly allow only the ones you know.
  • Throttle everything else.

And remember: many traditional POST submissions can be replaced with client-side API calls that keep your backend endpoints lean and cache-friendly. The fewer endpoints with heavy processing your server has, the harder it becomes for attackers to take it down using sheer traffic volume.