Lecture 7: NGINX Caching and Best Practices

Learning Objectives

Prerequisites


Section 1: NGINX Caching Fundamentals and Setup

Introduction: The Role of NGINX in Modern Web Architecture

Welcome to our deep dive into NGINX caching. In the landscape of web performance, NGINX stands out as a versatile, high-performance tool. Originally created to solve the "C10k problem"—handling ten thousand concurrent connections on a single server—NGINX has evolved from a simple web server into a multi-purpose tool: a reverse proxy, a load balancer, and, for our focus today, a highly effective caching server. Its event-driven, asynchronous architecture makes it incredibly efficient with system resources, particularly CPU and memory. For context, a minimal NGINX caching setup for a small on-premise environment can comfortably run on a single CPU core with just 512 MB of RAM (Jainandunsing, 2025).

It's crucial to distinguish NGINX's role from other caching solutions you may have studied, such as Redis or Memcached. While Redis and Memcached are in-memory key-value stores, excelling at caching application data, database query results, and user session objects, NGINX operates at the HTTP level. As a reverse proxy, it sits between the client (the user's browser) and your backend application servers. This strategic position allows it to intercept HTTP requests and serve responses directly from its cache without ever needing to contact the backend server, dramatically reducing response times and server load. Jainandunsing (2025) notes that while NGINX doesn't directly cache user sessions like Redis, it excels at caching HTTP session-related content, such as authenticated pages and API responses, by inspecting headers or cookies.

Core Concepts: The `ngx_http_proxy_module`

NGINX's caching capabilities are primarily provided by the `ngx_http_proxy_module`. To enable caching, you must understand a set of core directives that define how the cache operates. Let's dissect the most important ones.

proxy_cache_path

This is the foundational directive; it defines the cache itself. It must be declared in the `http` context of your NGINX configuration (i.e., outside of any `server` or `location` block). It configures the physical path on the disk where cached files will be stored and sets up a shared memory zone to hold the cache keys and metadata.

A typical `proxy_cache_path` declaration looks like this:

proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=my_cache:10m max_size=10g inactive=60m use_temp_path=off;

Let's break down its parameters:

  • `/var/cache/nginx`: the directory on disk where cached responses are stored.
  • `levels=1:2`: creates a two-level directory hierarchy beneath the cache path, so large numbers of cached files are not crowded into a single directory.
  • `keys_zone=my_cache:10m`: names the shared memory zone (`my_cache`) that holds cache keys and metadata, and sets its size; as a rule of thumb, one megabyte can track roughly 8,000 keys.
  • `max_size=10g`: the upper limit for the on-disk cache; once it is exceeded, a cache manager process removes the least recently used entries.
  • `inactive=60m`: entries that have not been accessed within this period are removed, even if they have not yet expired.
  • `use_temp_path=off`: writes files directly into the cache directory rather than staging them in a temporary location first, avoiding an unnecessary copy.

proxy_cache

This directive, used within a `server` or `location` block, enables caching and specifies which cache zone to use. The name must match the name defined in `keys_zone` in the `proxy_cache_path` directive.

proxy_cache my_cache;

Once this is set, NGINX will start caching eligible responses for requests matching this location.

proxy_cache_valid

This directive sets the default caching time for different HTTP response codes. You can have multiple instances of this directive.

proxy_cache_valid 200 302 60m;
proxy_cache_valid 404 1m;

In this example, responses with a `200 OK` or `302 Found` status will be cached for 60 minutes, and `404 Not Found` responses will be cached for 1 minute to avoid repeatedly hitting the backend for non-existent resources. Note that these times apply only when the backend response does not set its own caching policy: by default, NGINX honors the `X-Accel-Expires`, `Cache-Control`, and `Expires` headers from the upstream, and those take precedence over `proxy_cache_valid`.
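If you want the lifetimes above to apply even when the backend sends its own caching headers, you can tell NGINX to ignore those headers. A minimal sketch, placed alongside the `proxy_cache_valid` directives shown above:

# Ignore the backend's caching headers so that proxy_cache_valid controls lifetimes.
# Use with care: the backend can no longer shorten or extend cache times per response.
proxy_ignore_headers X-Accel-Expires Cache-Control Expires;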

proxy_cache_bypass and proxy_no_cache

These directives give you fine-grained control over when to skip caching. They both take one or more string parameters. If any parameter is not empty and not "0", the condition is met.

A common use case is to bypass the cache for logged-in users, identified by a session cookie:

proxy_cache_bypass $cookie_sessionid;
proxy_no_cache $cookie_sessionid;

Here, if the `sessionid` cookie is present in the request, NGINX will bypass the cache to get a fresh response from the backend (`proxy_cache_bypass`), and it will not save that personalized response to the cache (`proxy_no_cache`).
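Both directives accept multiple parameters, so conditions can be combined. As a hypothetical extension of the example above, a `nocache` query argument could let developers force a fresh response while the session-cookie rule continues to protect logged-in users:

# Bypass and skip caching if a session cookie is present OR if the request carries
# a ?nocache=... query argument (hypothetical debugging convention, any non-empty,
# non-"0" value triggers it).
proxy_cache_bypass $cookie_sessionid $arg_nocache;
proxy_no_cache $cookie_sessionid $arg_nocache;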

add_header X-Cache-Status

This is not a caching directive per se, but it is indispensable for debugging. It adds a custom header to the response sent to the client, indicating the cache status.

add_header X-Cache-Status $upstream_cache_status;

The `$upstream_cache_status` variable can have several values:

  • MISS: the response was not found in the cache and was fetched from the backend (it may then be stored).
  • HIT: the response was served directly from the cache.
  • EXPIRED: the cached entry had expired, so a fresh response was fetched from the backend.
  • STALE: an expired entry was served because a fresh one could not be obtained (see `proxy_cache_use_stale`).
  • UPDATING: an expired entry was served while a fresh one is being fetched.
  • REVALIDATED: a conditional request confirmed that the expired entry is still valid (requires `proxy_cache_revalidate`).
  • BYPASS: the cache was skipped because a `proxy_cache_bypass` condition matched.

Installation and Configuration Walkthrough

Let's put theory into practice. We'll set up a basic NGINX caching reverse proxy on an Ubuntu/Debian system.

  1. Install NGINX:
    First, update your package list and install the NGINX package.
    sudo apt update
    sudo apt install nginx -y
  2. Create the Cache Directory:
    Next, create the directory we specified in our theoretical `proxy_cache_path` and assign ownership to the NGINX user (`www-data`).
    sudo mkdir -p /var/cache/nginx
    sudo chown -R www-data:www-data /var/cache/nginx
    Failure to set the correct permissions is one of the most common setup errors.
  3. Configure NGINX:
    Now, edit the main NGINX configuration file to add the `proxy_cache_path` directive.
    sudo nano /etc/nginx/nginx.conf
    Inside the `http` block, but outside any `server` blocks, add:
    http {
        # ... other http settings ...
        
        proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=my_cache:10m max_size=1g inactive=60m use_temp_path=off;
        
        # ... include sites-enabled/*; ...
    }
    Next, configure your specific site to use the cache. Edit your site's configuration file:
    sudo nano /etc/nginx/sites-available/default
    Modify the `server` block to act as a caching proxy. Assume your backend application is running on `http://127.0.0.1:8080`.
    server {
        listen 80;
        server_name your_domain.com;
    
        location / {
            proxy_pass http://127.0.0.1:8080;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
    
            # Caching directives
            proxy_cache my_cache;
            proxy_cache_valid 200 302 10m;
            proxy_cache_valid 404 1m;
            
            # Bypass for logged-in users
            proxy_cache_bypass $cookie_sessionid;
            proxy_no_cache $cookie_sessionid;
    
            # Add cache status header for debugging
            add_header X-Cache-Status $upstream_cache_status;
        }
    }
  4. Test Configuration and Restart NGINX:
    Always check your configuration for syntax errors before restarting the service.
    sudo nginx -t
    If it reports success, restart NGINX to apply the changes.
    sudo systemctl restart nginx
  5. Verify Caching Behavior:
    Use a command-line tool like `curl` to inspect the response headers. The `-I` flag requests only the headers.
    curl -I http://your_domain.com/some-page
    The first time you run this command, you should see `X-Cache-Status: MISS`. Run it a second time, and you should see `X-Cache-Status: HIT`. Success! Your cache is working.
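One more practical note: open-source NGINX has no built-in purge command (a purge API is part of the commercial NGINX Plus), so the simplest way to reset the cache while experimenting is to delete the cached files and reload NGINX. A minimal sketch, assuming the cache directory used in this walkthrough:

# Clear the on-disk cache and reload NGINX; the next requests will be MISSes again.
sudo rm -rf /var/cache/nginx/*
sudo systemctl reload nginx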

Example: Full Basic Caching Configuration

Here is a consolidated view of the necessary configuration snippets for a simple caching setup.

In `/etc/nginx/nginx.conf` (inside the `http` block):

# Defines the cache storage path, memory zone, size, and other parameters.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=my_cache:10m max_size=1g inactive=60m use_temp_path=off;

In `/etc/nginx/sites-available/default` (or your site's config):

server {
    listen 80;
    server_name example.com;

    location / {
        # Backend application server address
        proxy_pass http://127.0.0.1:8080;
        
        # Pass essential headers to the backend
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;

        # Enable caching using the 'my_cache' zone
        proxy_cache my_cache;
        
        # Define cache validity for different response codes
        proxy_cache_valid 200 302 10m; # Cache successful responses for 10 minutes
        proxy_cache_valid any 1m;      # Cache other responses (like 404s) for 1 minute

        # Add a header to see the cache status (HIT, MISS, etc.)
        add_header X-Cache-Status $upstream_cache_status;
    }
}

Verification Commands:

# First request - should be a MISS
$ curl -I http://example.com/
HTTP/1.1 200 OK
Server: nginx/1.18.0 (Ubuntu)
...
X-Cache-Status: MISS
...

# Second request - should be a HIT
$ curl -I http://example.com/
HTTP/1.1 200 OK
Server: nginx/1.18.0 (Ubuntu)
...
X-Cache-Status: HIT
...

Did You Know?

NGINX was created by a Russian software engineer, Igor Sysoev, in the early 2000s. He was working at Rambler, a popular Russian web portal, and was frustrated with the performance of existing web servers like Apache under heavy load. The primary challenge he aimed to solve was the "C10k problem"—how to architect a server to handle ten thousand concurrent client connections. NGINX's event-driven, non-blocking architecture was his solution, and it proved so effective that it was open-sourced in 2004 and quickly became a cornerstone of high-performance web infrastructure worldwide.

Section 1 Summary

Reflective Questions

  1. What are the potential consequences of making the `keys_zone` in `proxy_cache_path` too small for a high-traffic website with many unique pages?
  2. You are caching API responses. A `GET /api/users/123` response is cached. If a `PUT` request updates user 123's data, how would you ensure the cache is invalidated or updated? What challenges does this present?
  3. Explain the difference between the `inactive` parameter in `proxy_cache_path` and the duration set in `proxy_cache_valid`. In what scenario would a cached item be removed due to `inactive` even if its `proxy_cache_valid` time has not yet passed?

Section 2: Security and SSL/TLS Implementation

The Importance of Securing Your Caching Layer

As your NGINX instance becomes a critical part of your infrastructure, serving content to users, its security becomes paramount. A compromised caching server can lead to several severe issues, including session hijacking, data leakage between users, and cache poisoning attacks, where an attacker injects malicious content into your cache, which is then served to legitimate users. Furthermore, since the cache often sits at the edge of your network, it's a prime target for attackers. Therefore, securing it involves multiple layers: encrypting data in transit, hardening the server configuration, controlling network access, and carefully managing how authenticated or personalized content is handled.

Implementing SSL/TLS with Let's Encrypt

The first and most fundamental step in securing your web traffic is encrypting it with SSL/TLS, enabling HTTPS. In the past, this was a costly and complex process involving purchasing certificates from a Certificate Authority (CA). Today, thanks to Let's Encrypt, it's free and automated. We'll use the `certbot` tool, which automates the process of obtaining, installing, and renewing Let's Encrypt certificates.

Step-by-Step SSL/TLS Setup with Certbot:

  1. Install Certbot: The recommended way to install Certbot on modern Debian/Ubuntu systems is using `snap`, and the snap package already includes the NGINX plugin that allows Certbot to read and modify your NGINX configuration. If you don't have snap, install it first.
    sudo snap install core; sudo snap refresh core
    sudo snap install --classic certbot
    sudo ln -s /snap/bin/certbot /usr/bin/certbot
    Alternatively, on systems where you prefer the distribution packages over snap, install Certbot together with its NGINX plugin via apt:
    sudo apt install certbot python3-certbot-nginx
  2. Obtain and Install the Certificate: Run Certbot with the `--nginx` flag. It will parse your NGINX configuration files, identify the `server_name` directives, and ask you which domain(s) you want to enable HTTPS for.
    sudo certbot --nginx
    The tool will guide you through a few prompts:
    • Enter your email address (for renewal notices and security alerts).
    • Agree to the Terms of Service.
    • Choose whether to share your email with the Electronic Frontier Foundation (EFF).
    • Select the domain name(s) from your NGINX configuration to activate HTTPS for.
  3. Choose Redirect Behavior: Certbot will ask if you want to redirect all HTTP traffic to HTTPS. This is highly recommended for security. Choosing this option will cause Certbot to add a rewrite rule to your NGINX configuration, ensuring all users are on a secure connection.
  4. Automatic Renewal: The Certbot package automatically sets up a cron job or systemd timer that will attempt to renew your certificates before they expire (Let's Encrypt certificates are valid for 90 days). You can perform a dry run to test the renewal process:
    sudo certbot renew --dry-run

After Certbot completes, it will have modified your site's NGINX configuration file. It will add a new `server` block for listening on port 443 (HTTPS) and will include `ssl_certificate` and `ssl_certificate_key` directives pointing to the newly obtained certificate files. It will also handle the redirect from port 80 if you selected that option.
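The exact result depends on your configuration and Certbot version, but the modified site file will typically look something like the sketch below, with `example.com` standing in for your domain:

server {
    server_name example.com;

    # ... your existing location blocks remain here ...

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}

server {
    # HTTP-to-HTTPS redirect added when you choose the redirect option
    if ($host = example.com) {
        return 301 https://$host$request_uri;
    }

    listen 80;
    server_name example.com;
    return 404; # managed by Certbot
}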

Hardening SSL/TLS Configuration

While Certbot provides a good default configuration, you can further harden your server's security by specifying stronger protocols, ciphers, and enabling important security headers. You can create a separate snippet file for these settings to keep your main configuration clean.

Create a file, e.g., /etc/nginx/snippets/ssl-params.conf, and add the following:

# Use modern TLS protocols
ssl_protocols TLSv1.2 TLSv1.3;

# Use a strong set of cipher suites
ssl_ciphers 'ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384';
ssl_prefer_server_ciphers off;

# Enable HSTS (HTTP Strict Transport Security)
# Tells browsers to only connect via HTTPS for the next 6 months.
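# Note: browser HSTS preload lists require a max-age of at least one year
# (31536000 seconds), so raise this value before submitting the domain for preloading.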
add_header Strict-Transport-Security "max-age=15768000; includeSubDomains; preload" always;

# Other security headers
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-XSS-Protection "1; mode=block" always;

Then, include this snippet in your `server` block for port 443:

server {
    listen 443 ssl http2;
    server_name your_domain.com;

    # ... ssl_certificate and ssl_certificate_key from certbot ...

    include /etc/nginx/snippets/ssl-params.conf;

    # ... rest of your server configuration ...
}

Firewall Configuration with UFW

A firewall is a critical security layer that controls network traffic to and from your server. We'll use UFW (Uncomplicated Firewall), a user-friendly frontend for `iptables`.

  1. Set Default Policies: First, deny all incoming traffic and allow all outgoing traffic. This is a secure default posture.
    sudo ufw default deny incoming
    sudo ufw default allow outgoing
  2. Allow Essential Services: You need to allow SSH traffic so you don't lock yourself out. You also need to allow web traffic on ports 80 and 443.
    sudo ufw allow ssh
    sudo ufw allow 'Nginx Full'
    The `'Nginx Full'` profile allows traffic on both port 80 (HTTP) and 443 (HTTPS).
  3. Enable the Firewall:
    sudo ufw enable
    Confirm with 'y' when prompted. You can check the status at any time with `sudo ufw status`.
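Optionally, the plain SSH rule can be replaced with a rate-limited one. UFW's `limit` action still allows SSH connections but temporarily blocks an address that opens too many connections in a short window, which blunts brute-force login attempts:

# Rate-limit SSH instead of allowing it unconditionally
sudo ufw delete allow ssh
sudo ufw limit ssh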

Securely Caching Authenticated Content

Caching content for logged-in users is powerful but fraught with risk. If you cache a page containing personal user information and serve it to another user, you have a major data breach. The key to doing this safely is to ensure that the cache key is unique for each user.

By default, NGINX's cache key is `$scheme$proxy_host$request_uri`. For a public page, this is fine. But for an authenticated page, this key is the same for all users, which is dangerous.

We can customize the cache key using the proxy_cache_key directive to include a user-specific identifier, such as a session cookie.

# Customize the cache key to include the session ID cookie
proxy_cache_key "$scheme$host$request_uri$cookie_sessionid";

With this directive, a request for `/my-account` from a user with `sessionid=abc` will generate a different cache entry than a request from a user with `sessionid=xyz`. This effectively creates a private cache for each user session. While this prevents data leakage, be aware of the implications: you will now store a separate copy of the page for every user who visits it, which can rapidly consume cache storage.

For highly sensitive pages (e.g., "edit payment info"), it's often best to bypass the cache entirely using the techniques we discussed in Section 1:

location /account/billing {
    # This content is too sensitive to cache
    proxy_cache_bypass 1;
    proxy_no_cache 1;
    proxy_pass http://127.0.0.1:8080;
    # ... other proxy settings ...
}

A balanced approach is to use a unique cache key for personalized but non-sensitive content (like a user's dashboard) and completely bypass the cache for critical sections.

Example: Secure NGINX Server Block

This example combines an SSL configuration from Certbot, our hardened SSL parameters, and a location block that uses a custom cache key for authenticated content.

# Redirect HTTP to HTTPS
server {
    listen 80;
    server_name example.com www.example.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;
    server_name example.com www.example.com;

    # SSL Certificate paths provided by Certbot
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    
    # Include hardened SSL/TLS parameters and security headers
    include /etc/nginx/snippets/ssl-params.conf;

    root /var/www/html;
    index index.html;

    location / {
        # General caching for anonymous users
        proxy_pass http://127.0.0.1:8080;
        proxy_cache my_cache;
        proxy_cache_valid 200 10m;
        
        # Bypass for authenticated users (identified by sessionid cookie)
        proxy_cache_bypass $cookie_sessionid;
        proxy_no_cache $cookie_sessionid;
        
        add_header X-Cache-Status $upstream_cache_status;
    }
    
    location /dashboard {
        # Cache this section per-user
        proxy_pass http://127.0.0.1:8080;
        proxy_cache my_cache;
        proxy_cache_valid 200 5m;
        
        # This is the crucial part for per-user caching
        proxy_cache_key "$scheme$host$request_uri$cookie_sessionid";
        
        add_header X-Cache-Status $upstream_cache_status;
    }
}

Did You Know?

Let's Encrypt, the free Certificate Authority, was founded in 2014 by the Internet Security Research Group (ISRG) with a mission to encrypt the entire web. Before its launch, obtaining an SSL/TLS certificate was often a manual, expensive process, creating a barrier for many website owners. By providing free, automated certificates, Let's Encrypt has been a major driving force behind the massive increase in HTTPS adoption, with over 300 million active certificates, making the web a significantly safer place for everyone.

Section 2 Summary

Reflective Questions

  1. What are the potential downsides of including the `preload` directive in the `Strict-Transport-Security` (HSTS) header? What steps would you need to take if you wanted to disable HTTPS after preloading?
  2. You have configured `proxy_cache_key` to include a session cookie. What happens to cache efficiency if your backend application generates a new session ID for a user on every single request?
  3. A cache poisoning attack succeeds on your server. An attacker manages to cache a malicious JavaScript file at `/assets/main.js`. What security headers discussed in this section might help mitigate the damage of such an attack, even after the malicious file is served to a user's browser?

Section 3: Performance Optimization and Advanced Techniques

Beyond the Basics: Tuning for Peak Performance

Having a functional cache is the first step; making it exceptionally fast and resilient is the next. Performance optimization in NGINX caching isn't about a single magic setting. It's about understanding the interplay between memory, disk I/O, and network traffic, and then tuning specific directives to best suit your application's access patterns. Key performance indicators (KPIs) to monitor are the cache hit ratio (the percentage of requests served from the cache), the server's response time (latency), and resource utilization (CPU, memory, disk I/O). Our goal is to maximize the hit ratio while minimizing latency and resource usage.

Fine-tuning `proxy_cache_path`: The Heart of Performance

We introduced `proxy_cache_path` in Section 1, but its parameters have a profound impact on performance that warrants a deeper look.

proxy_cache_path /data/nginx_cache levels=1:2 keys_zone=my_cache:100m max_size=50g inactive=24h use_temp_path=off;

From a performance standpoint, the parameters that matter most are:

  • `keys_zone`: the shared memory zone must be large enough to track every cached item; at roughly 8,000 keys per megabyte, the 100 MB zone above can track on the order of 800,000 entries. If the zone fills up, NGINX evicts the least recently used keys, which lowers your hit ratio even when disk space remains.
  • `max_size`: bounds disk usage. The cache manager trims least recently used items once the limit is exceeded, so size it to leave headroom on the volume and, ideally, place the cache on fast storage (an SSD) so disk I/O does not become the bottleneck.
  • `inactive`: how long an unrequested item stays on disk. A long window such as `24h` suits content with a long tail of occasional requests; a short window keeps the cache lean but causes more misses for infrequently accessed pages.
  • `use_temp_path=off`: writes responses directly into the cache directory instead of copying them from a temporary area, saving a write and a copy per cached response.

Advanced Caching Strategies

Once your cache is tuned, you can implement more sophisticated strategies to improve resilience and performance under heavy load.

Serving Stale Content for High Availability

What happens when your backend server goes down? By default, NGINX will return an error (e.g., `502 Bad Gateway`) to the user. However, with the proxy_cache_use_stale directive, you can configure NGINX to serve an expired (stale) version of the content from its cache if it's unable to get a fresh version from the backend. This is a massive win for user experience and site availability.

proxy_cache_use_stale error timeout invalid_header updating http_500 http_502 http_503 http_504;

This configuration tells NGINX to use a stale item if it encounters a communication error with the backend, a timeout, or receives one of the specified 5xx error codes. The `updating` parameter is particularly interesting: if a client requests an expired item, NGINX can serve the stale version to that client immediately while it sends a single request to the backend to update the cache in the background. Subsequent requests will then receive the fresh content once it's available.
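Background refreshing is enabled with the `proxy_cache_background_update` directive (available since NGINX 1.11.10) in combination with the `updating` parameter. A minimal sketch, assuming the `my_cache` zone and the backend address used earlier:

location / {
    proxy_pass http://127.0.0.1:8080;   # assumed backend from Section 1
    proxy_cache my_cache;
    proxy_cache_valid 200 302 10m;

    # Serve expired items on backend errors, timeouts, and while an update is in progress
    proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;

    # Refresh the expired item with a background subrequest instead of making the client wait
    proxy_cache_background_update on;
}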

Preventing the Thundering Herd: Cache Locking

Consider a scenario where a very popular but uncached page is requested by thousands of users at once (e.g., after the cache expires). This is known as the "thundering herd" or "cache stampede" problem. All thousands of requests will result in a `MISS`, and NGINX will forward all of them to your backend server simultaneously, potentially overwhelming it.

The proxy_cache_lock directive solves this elegantly. When enabled, if multiple clients request a file that is not in the cache, only the first request is allowed through to the backend server. The other requests are held (they "wait") until that first request has populated the cache. Once the item is in the cache, the waiting requests are all served from the cache. This ensures only one request hits your backend for any given resource at a time.

location /popular-articles/ {
    proxy_pass http://backend;
    proxy_cache my_cache;
    
    # Enable cache locking
    proxy_cache_lock on;
    
    # Set a timeout for how long a request will wait for the lock to be released
    proxy_cache_lock_timeout 5s;
}

proxy_cache_lock_timeout is a safety net. If the first request doesn't populate the cache within this time, the waiting requests are released to the backend server so they don't time out; their responses, however, are not added to the cache.

Cache Splitting for Different Content Types

Not all content is created equal. Small, frequently accessed HTML files have very different caching characteristics from large, infrequently accessed video files. You can create multiple cache zones to handle them differently. For instance, you could have one small, fast cache on an SSD for API responses and HTML, and another larger cache on a traditional HDD for media assets.

# In http block
proxy_cache_path /mnt/ssd/nginx_cache levels=1:2 keys_zone=fast_cache:50m max_size=5g;
proxy_cache_path /mnt/hdd/nginx_cache levels=1:2 keys_zone=large_cache:20m max_size=500g inactive=30d;

# In server block
server {
    # ...
    location ~ \.(html|json)$ {
        proxy_pass http://backend;
        proxy_cache fast_cache;
        proxy_cache_valid 200 5m;
    }
    
    location ~ \.(mp4|zip|iso)$ {
        proxy_pass http://backend;
        proxy_cache large_cache;
        proxy_cache_valid 200 7d;
    }
}

Monitoring and Logging

You cannot optimize what you cannot measure. Enhancing your logs to include cache status is the first step. You can define a custom log format that includes the `$upstream_cache_status` variable.

In your `http` block in `nginx.conf`:

log_format cache_log '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for" '
                      'Cache-Status: $upstream_cache_status';

Then, use this format in your `server` block's `access_log` directive:

access_log /var/log/nginx/access.log cache_log;

With this in place, you can easily analyze your logs with tools like `grep`, `awk`, or dedicated log analysis software to calculate your cache hit ratio. For real-time monitoring, tools like Netdata or a combination of Prometheus with the `nginx-prometheus-exporter` can provide detailed dashboards showing hit/miss ratios, cache sizes, and other critical metrics.
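As a quick example of such an analysis, the following one-liner computes a rough hit ratio from an access log written with the `cache_log` format defined above (it assumes the cache status is the last field on each line):

# Count HIT entries versus total requests and print the hit ratio
awk '{ total++; if ($NF == "HIT") hits++ }
     END { if (total) printf "Cache hit ratio: %.1f%% (%d of %d requests)\n", hits*100/total, hits, total }' /var/log/nginx/access.log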

Example: Highly Optimized Caching Configuration

This configuration snippet demonstrates a combination of advanced techniques: serving stale content, cache locking, and a custom log format for monitoring.

In `/etc/nginx/nginx.conf` (inside the `http` block):

# Define a custom log format that includes the cache status
log_format cache_log_format '$remote_addr - [$time_local] "$request" $status '
                           '($body_bytes_sent) "$http_referer" "$http_user_agent" '
                           'Cache: $upstream_cache_status';

# Define the cache path and parameters
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=my_cache:10m max_size=10g inactive=60m;

In `/etc/nginx/sites-available/default`:

server {
    listen 80;
    server_name example.com;
    
    # Use the custom log format
    access_log /var/log/nginx/access.log cache_log_format;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_cache my_cache;
        
        # Performance and Resilience Settings
        proxy_cache_lock on;                 # Prevent thundering herd
        proxy_cache_lock_timeout 5s;
        
        # Serve stale content if backend is down or slow
        proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
        
        # Update cache in the background while serving stale content
        proxy_cache_background_update on;
        
        proxy_cache_valid 200 302 10m;
        proxy_cache_valid 404 1m;
        
        add_header X-Cache-Status $upstream_cache_status;
    }
}
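To verify the resilience settings, you can warm the cache with a request, stop the backend application, and request the same page again after the cached entry has expired. With the configuration above, NGINX should keep answering from the cache rather than returning a `502`, and the `X-Cache-Status` header should report `STALE` (or `UPDATING` while a background refresh is in flight):

# Warm the cache, then re-check the cache status header (expect MISS, then HIT on a repeat request)
curl -sI http://example.com/ | grep -i x-cache-status
# ... stop the backend application and wait for the cached entry to expire ...
curl -sI http://example.com/ | grep -i x-cache-status   # expect STALE instead of an error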

Did You Know?

The "thundering herd problem" is not unique to web caches. It's a classic computer science problem that occurs in any system where multiple processes or threads wait for an event. When the event occurs, they all "wake up" and stampede towards the same resource, overwhelming it. This can happen with database connections, file locks, and network sockets. Cache locking in NGINX is a specific and highly effective implementation of a general solution pattern known as "request coalescing" or "request collapsing."

Section 3 Summary

Reflective Questions

  1. Under what specific circumstances would enabling `proxy_cache_use_stale` be a bad business or technical decision? Consider an application that deals with real-time stock prices or live auction bidding.
  2. You've enabled `proxy_cache_lock`. A request comes in for an uncached item that takes 10 seconds for your backend to generate. Your `proxy_cache_lock_timeout` is set to 5 seconds. Describe the sequence of events for the first request and for a second request for the same item that arrives 2 seconds after the first.
  3. Your cache hit ratio is unexpectedly low. Besides checking your logs for the `$upstream_cache_status`, what NGINX configuration directives would you investigate first as potential culprits for preventing content from being cached effectively?

Glossary

Reverse Proxy
A server that sits in front of one or more web servers, forwarding client (e.g., browser) requests to the appropriate server. It provides a single point of access and can handle tasks like load balancing, SSL termination, and caching.
Cache HIT
An event where a requested item is found in the cache and is served directly from it, without contacting the backend server.
Cache MISS
An event where a requested item is not found in the cache, requiring a request to be sent to the backend server to retrieve it.
SSL/TLS
Secure Sockets Layer / Transport Layer Security. Protocols for encrypting network communication between clients and servers, providing privacy and data integrity. TLS is the modern successor to SSL.
Let's Encrypt
A non-profit Certificate Authority that provides free X.509 certificates for TLS encryption via an automated process.
Cache Key
A unique identifier, typically derived from the request URL and other variables, that NGINX uses to store and look up an item in the cache.
Thundering Herd Problem
A situation where a large number of concurrent processes or threads are awakened by an event and rush to access a single resource, overwhelming it. In caching, this happens when a popular item expires.
HSTS (HTTP Strict Transport Security)
A web security policy mechanism whereby a web server declares that browsers (or other complying user agents) should only interact with it using secure HTTPS connections, and never via the insecure HTTP protocol.

References

Jainandunsing, K. (2025). Caching Servers Hardware Requirements & Software Configurations. (Version 1.0). [Internal Course Document].

NGINX, Inc. (2023). NGINX Docs: Module ngx_http_proxy_module. Retrieved from https://nginx.org/en/docs/http/ngx_http_proxy_module.html

Sysoev, I. (2004). [nginx-devel] Nginx-0.1.0. Mail-Archive.com. Retrieved from https://www.mail-archive.com/nginx-devel@sysoev.ru/msg00000.html
