How to Enable Basic Auth with ALB Ingress in Kubernetes (Step-by-Step Guide)

Introduction
You've succeeded. Your observability game is stronger than ever after you installed the powerful Kube-Prometheus-Stack in your EKS cluster. You have an abundance of metrics, dashboards, and alerts. However, as you look at your Prometheus user interface (UI), which is accessible online through an AWS Application Load Balancer (ALB), the unsettling realization that "Wait... anyone can see this" begins to creep in.
Your Prometheus server scrapes data from every corner of your cluster. It holds, to put it mildly, sensitive operational metrics, service names, and configuration details. Left exposed on the open internet, that data is a security incident waiting to happen.
"Easy," you say, "I'll just add Basic Authentication." Then you open your Ingress YAML, prepare to add a few annotations, and run into a wall.
The core problem: AWS ALB, for all its power, does not natively support Basic Authentication
Unlike NGINX Ingress or Traefik, which handle Basic Auth with a few straightforward annotations, ALB delegates authentication to more sophisticated, enterprise-grade OIDC providers such as Amazon Cognito, Okta, or PingFederate. If all you need is a basic username-and-password "gate" to keep automated scanners and casual observers out, OIDC, for all its power, is frequently overkill.
Are you stuck, then? Do you have to decide between exposing your metrics and completing a week-long OIDC integration project?
Absolutely not. We're engineers, and we solve problems with clever layers of abstraction. The solution is elegant: If the load balancer won't be our security guard, we'll hire one to stand right inside the front door.
If you're using the kube-prometheus-stack Helm chart, you'll need a workaround to protect the metrics UI without leaving it public.
This guide shows you how to add an NGINX sidecar container to your Prometheus pod and enable Basic Auth for the Prometheus Ingress behind an ALB. The NGINX proxy handles authentication before any traffic reaches Prometheus.
Why ALB Doesn’t Support Basic Auth
The AWS Application Load Balancer (ALB) operates at Layer 7 (the application layer), yet it does not support Basic Auth natively. It is built for SSL termination, routing, and WAF integration, not for managing credentials.
Consequently, if you want username-and-password protection on an endpoint behind an ALB, you must solve the problem at the application layer; in our case, with NGINX.
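For contrast, the only authentication ALB offers natively is OIDC or Cognito, wired up through AWS Load Balancer Controller annotations. Here's a rough, hedged sketch of what that looks like (the endpoints and secret name are placeholders, not working values):
# Sketch only: ALB's built-in auth options. Note there is no "basic" auth-type.
alb.ingress.kubernetes.io/auth-type: oidc
alb.ingress.kubernetes.io/auth-idp-oidc: '{"issuer":"https://idp.example.com","authorizationEndpoint":"https://idp.example.com/authorize","tokenEndpoint":"https://idp.example.com/token","userInfoEndpoint":"https://idp.example.com/userinfo","secretName":"my-oidc-client-secret"}'
That missing basic option is the gap the rest of this guide works around.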
Architecture Overview
Let's picture the traffic flow to get a sense of what's happening underneath:
ALB → Ingress (Kubernetes) → NGINX sidecar → Prometheus container
- ALB terminates SSL and routes traffic to Kubernetes.
- NGINX sidecar handles HTTP Basic Auth.
- Prometheus container receives traffic only after authentication.
This ensures that unauthenticated users are blocked at the NGINX layer.
Preparing Your Kubernetes Cluster
Before we start modifying configurations, ensure you have the following in place:
- AWS EKS cluster running
- ALB Ingress Controller installed and working
- Helm CLI installed
- kube-prometheus-stack Helm chart deployed or ready to deploy
If you're starting from scratch, follow the official AWS ALB Ingress Controller docs to set up.
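A quick, hedged sanity check for these prerequisites (this assumes the controller was installed under its default name in kube-system; adjust to your setup):
# Cluster is reachable and nodes are Ready
kubectl get nodes
# AWS Load Balancer Controller is running
kubectl get deployment -n kube-system aws-load-balancer-controller
# Helm is installed and can see the chart
helm version
helm search repo kube-prometheus-stack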
Modifying the Helm Chart of Kube-Prometheus-Stack
We’re going to inject the NGINX proxy by modifying the values.yaml file used to install or upgrade the kube-prometheus-stack.
Specifically, we’ll add an extra container to Prometheus as a sidecar, configure volume mounts for NGINX files (config, auth, health), and ensure the right service port is exposed.
Let’s go through this piece by piece.
Creating the NGINX Sidecar
First, we need to create the configuration files for our Nginx guard. This includes its rulebook (nginx.conf), its list of allowed guests (basic-auth), and the "Go Away" sign (401.html).
We also need a "health" page. The ALB needs to ping our pod to see if it's "healthy" and ready to receive traffic. We'll create a simple health.html page for Nginx to serve.
1. Create the basic-auth file
This file stores your username and hashed password. We'll use the htpasswd utility; if you don't have it installed, you can generate the file with a Docker container instead (see the sketch below):
# This command will prompt you to enter a password for 'your-username'
# It creates a new file named 'basic-auth'
htpasswd -c basic-auth your-username
Repeat the command for as many users and file names as you need; just drop the -c flag when adding users to an existing file, since -c creates (and overwrites) it.
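If htpasswd isn't available locally, here's a sketch of the Docker route using the official httpd image, which bundles the utility (username and password are placeholders):
# -n prints the result to stdout instead of writing a file; -b takes the password inline
docker run --rm httpd:alpine htpasswd -nb your-username 'your-password' > basic-auth
# Append a second user: no -c, and >> instead of >
docker run --rm httpd:alpine htpasswd -nb another-user 'another-password' >> basic-auth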
2. Create custom pages and nginx.conf using extraSecret in values.yaml
Instead of creating a Secret manually with kubectl, the kube-prometheus-stack Helm chart offers a convenient extraSecret field within the prometheus section of its values.yaml. This lets us define our configuration files directly in the Helm values, making deployment and management much cleaner.
- Create the custom HTML pages 401.html and health.html. These provide a friendlier experience than the default Nginx error pages.
- Create the nginx.conf file. This is the brain of our operation: it tells Nginx how to handle incoming requests, including the crucial "smart" health check.
- Create the basic-auth file containing the usernames and password hashes of every user NGINX should accept.
Here's how you'll embed your files into your values.yaml:
# Inside your kube-prometheus-stack values.yaml
prometheus:
  # ... other prometheus settings ...
  ## ExtraSecret can be used to store various data in an extra secret
  ## (use it for example to store hashed basic auth credentials)
  extraSecret:
    name: "nginx-sidecar-secret" # Custom name for our secret
    annotations: {}
    data:
      health.html: |
        <!DOCTYPE html>
        <html lang="en">
        <head>
          <meta charset="UTF-8">
          <meta name="viewport" content="width=device-width, initial-scale=1.0">
          <title>Health Check</title>
          <style>
            body {
              font-family: monospace;
              text-align: center;
              margin-top: 50px;
              background-color: #e0ffe0;
              color: #008000;
            }
            h1 {
              font-size: 2em;
            }
          </style>
        </head>
        <body>
          <h1>01001000 01100101 01101100 01101100 01101111! (That's "Hello" in binary)</h1>
          <p>System Status: Online. (Probably. I haven't checked the flux capacitor lately.)</p>
          <p>(If you're reading this, congrats! You've successfully navigated the internet. Give yourself a pat on the back.)</p>
        </body>
        </html>
      401.html: |
        <!DOCTYPE html>
        <html lang="en">
        <head>
          <meta charset="UTF-8">
          <meta name="viewport" content="width=device-width, initial-scale=1.0">
          <title>401 - Unauthorized Access (but with a smile)</title>
          <style>
            body {
              font-family: sans-serif;
              text-align: center;
              margin-top: 50px;
              background-color: #f4f4f4;
              color: #333;
            }
            h1 {
              font-size: 3em;
              margin-bottom: 20px;
            }
            p {
              font-size: 1.2em;
              margin-bottom: 10px;
            }
            .container {
              max-width: 600px;
              margin: 0 auto;
              padding: 20px;
              background-color: white;
              border-radius: 8px;
              box-shadow: 0 2px 5px rgba(0, 0, 0, 0.1);
            }
            .emoji {
              font-size: 2em;
              margin-bottom: 15px;
            }
          </style>
        </head>
        <body>
          <div class="container">
            <div class="emoji">🔒</div>
            <h1>Whoops! Looks like you've hit a locked door.</h1>
            <p>This area is for authorized personnel only (secret handshake knowledge is a plus, but credentials are required).</p>
            <p>It seems your digital key isn't quite right.</p>
            <p>Possible reasons:</p>
            <ul>
              <li>You haven't entered your username and password yet (oops!).</li>
              <li>You've entered them incorrectly (we all have those days).</li>
              <li>You're trying to access something you shouldn't (we won't tell).</li>
            </ul>
            <p>Don't worry, it happens to the best of us. Just click the button below to try again.</p>
            <a href="/" style="background-color: #4CAF50; /* Green */border: none;color: white;padding: 15px 32px;text-align: center;text-decoration: none;display: inline-block;font-size: 16px;border-radius: 5px;">Try Again</a>
            <p style="margin-top: 20px; font-size: small;">If you believe you should have access, please contact the system administrator. (They're probably drinking coffee, so be gentle.)</p>
          </div>
        </body>
        </html>
      # Replace with your actual username and hashed htpasswd password.
      # Keep comments out of the file body itself: a trailing "# ..." inside
      # the literal block would become part of the hash and break auth.
      basic-auth: |
        admin:$apr1$XXurHbQt$gbW8KyLGbgkfK0ZU2CeQu1
      nginx.conf: |
        pid /tmp/nginx.pid;
        error_log /dev/stdout info; # Log errors to standard output for Kubernetes
        events {
          worker_connections 4096;
        }
        http {
          include mime.types;
          access_log /dev/stdout; # Log access to standard output
          # Nginx temporary file paths must be writable
          client_body_temp_path /tmp/client_temp;
          proxy_temp_path /tmp/proxy_temp_path;
          fastcgi_temp_path /tmp/fastcgi_temp;
          uwsgi_temp_path /tmp/uwsgi_temp;
          scgi_temp_path /tmp/scgi_temp;
          server {
            listen 8181; # Nginx listens on port 8181
            listen [::]:8181;
            server_name _; # Catch-all server name
            error_page 401 /401.html; # Use our custom 401 page
            location / {
              auth_basic "Protected by Admin!";
              auth_basic_user_file basic-auth; # Relative path, resolved against /etc/nginx
              proxy_pass http://localhost:9090; # Proxy to Prometheus
            }
            location = /401.html {
              internal; # For internal routing only
              root /usr/share/nginx/html;
            }
            location = /health {
              root /usr/share/nginx/html;
              try_files /health.html =200;
            }
          }
        }
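Once the chart is applied (step 5), it's worth sanity-checking that the Secret was rendered with all four keys; a minimal check, with the namespace as a placeholder:
# List the keys in the generated secret (expect: 401.html, basic-auth, health.html, nginx.conf)
kubectl describe secret nginx-sidecar-secret -n <your-prometheus-namespace>
# Decode one file to eyeball it (note the escaped dot in the jsonpath key)
kubectl get secret nginx-sidecar-secret -n <your-prometheus-namespace> \
  -o jsonpath='{.data.nginx\.conf}' | base64 -d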
3. Integrating the Nginx Sidecar into the Prometheus Pod
This is where we tell the Prometheus deployment to include our Nginx security guard. We need to:
- Define a Volume that reads from our nginx-sidecar-secret.
- Add our Nginx sidecar container to the Pod's definition.
- Configure that sidecar to mount the individual files from the Volume into the paths Nginx expects.
Inside your values.yaml, under the prometheus.prometheusSpec section:
# Inside your kube-prometheus-stack values.yaml
prometheus:
  # ... other prometheus settings ...
  prometheusSpec:
    # ... other prometheusSpec settings ...
    # ==========================================================
    # STEP 3A: ADD THE VOLUME FROM OUR SECRET
    # ==========================================================
    # This makes the "nginx-sidecar-secret" available as a
    # volume named "nginx-files" that containers in this Pod can use.
    volumes:
      - name: nginx-files
        secret:
          secretName: nginx-sidecar-secret # The secret name we defined in extraSecret
    # ==========================================================
    # STEP 3B: ADD THE NGINX SIDECAR CONTAINER
    # ==========================================================
    # This list normally just has the 'prometheus' and 'config-reloader' containers.
    # We are adding our 'nginx-sidecar' to it.
    containers:
      # This is our new container
      - name: nginx-sidecar
        image: nginx:stable-alpine-slim # Use a lightweight, stable Nginx image
        securityContext:
          runAsUser: 101 # Run as the 'nginx' user (non-root) for security best practice
        ports:
          # This is the port our Service and Ingress will target
          - containerPort: 8181
            name: nginx-sidecar
            protocol: TCP
        # ========================================================
        # STEP 3C: MOUNT THE CONFIG FILES INTO THE SIDECAR
        # ========================================================
        # This is the magic. We mount *specific files* from our
        # "nginx-files" volume into the exact paths Nginx expects.
        volumeMounts:
          - name: nginx-files
            readOnly: true
            mountPath: "/etc/nginx/nginx.conf" # Mount the nginx.conf
            subPath: nginx.conf # from the 'nginx.conf' key in the secret
          - name: nginx-files
            readOnly: true
            mountPath: "/etc/nginx/basic-auth" # Mount the basic-auth file
            subPath: basic-auth
          - name: nginx-files
            readOnly: true
            mountPath: "/usr/share/nginx/html/401.html" # Mount the 401 page
            subPath: 401.html
          - name: nginx-files
            readOnly: true
            mountPath: "/usr/share/nginx/html/health.html" # Mount the health page
            subPath: health.html
    # ... other prometheusSpec settings ...
Here’s what’s happening:
- The NGINX sidecar listens on port 8181.
- It mounts a secret volume containing the configuration, credentials, and static pages.
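After the pod is recreated, you can confirm the sidecar actually joined it; a quick check (pod name and namespace are placeholders):
# Expect something like: prometheus config-reloader nginx-sidecar
kubectl get pod <your-new-prometheus-pod-name> -n <your-prometheus-namespace> \
  -o jsonpath='{.spec.containers[*].name}'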
4. Pointing the Service & Ingress at Nginx
Our Pod is now listening on two ports: 9090 (Prometheus) and 8181 (Nginx). By default, the Prometheus Service and Ingress point at 9090. We need to change that: all external traffic should be forced through our Nginx guard on 8181. Instead of the ALB sending traffic directly to Prometheus, it will send it to the NGINX container first; NGINX handles the Basic Auth logic and then proxies approved requests to Prometheus.
We'll continue editing our values.yaml:
# Inside your kube-prometheus-stack values.yaml
prometheus:
  # ... other prometheus settings (including the prometheusSpec from above) ...
  # ==========================================================
  # STEP 4A: TELL THE SERVICE ABOUT THE NEW PORT
  # ==========================================================
  # The Ingress sends traffic to the Service. The Service
  # needs to know how to handle traffic for port 8181.
  service:
    # ... other service settings ...
    # We add our new port here. This makes port 8181 available
    # via the Kubernetes Service and targets our sidecar.
    additionalPorts:
      - name: nginx-sidecar
        port: 8181
        targetPort: nginx-sidecar # This matches the 'name' of the containerPort in our sidecar
        protocol: TCP
  # ==========================================================
  # STEP 4B: CONFIGURE THE INGRESS
  # ==========================================================
  # This is the final piece. We tell the ALB Ingress
  # to target our new Nginx port and use its health check path.
  ingress:
    enabled: true
    ingressClassName: "alb" # Ensure you're using the AWS ALB Ingress Controller
    # This is the CRITICAL change. Instead of the default 'web' port
    # (which is 9090), we tell the Ingress to send ALL traffic
    # to our newly defined Nginx sidecar port 8181.
    servicePort: 8181
    # --- Your ALB Annotations ---
    annotations:
      # Standard ALB setup: certificate, scheme, SSL redirect
      alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:ap-south-1:548xxxxxxx0:certificate/axxxxxxx9-a58f-xxxx-98f0-xxxxxx
      alb.ingress.kubernetes.io/group.name: org-dev
      alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
      alb.ingress.kubernetes.io/scheme: internet-facing
      alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-TLS13-1-2-2021-06
      alb.ingress.kubernetes.io/ssl-redirect: "443"
      alb.ingress.kubernetes.io/subnets: subnet-xxxxxxx,subnet-yyyyyyyyyy
      alb.ingress.kubernetes.io/load-balancer-attributes: deletion_protection.enabled=true
      # --- THE TWO MOST IMPORTANT ANNOTATIONS FOR THIS TO WORK ---
      alb.ingress.kubernetes.io/target-type: ip
      alb.ingress.kubernetes.io/healthcheck-path: /health
    labels: {}
    hosts:
      - prometheus.example.com # Your domain here
    paths: []
    # - / # Uncomment if your chart version needs it
    pathType: Prefix
Let's pause and appreciate those two key annotations:
- alb.ingress.kubernetes.io/target-type: ip: This tells the ALB to register the Pods' direct IP addresses as targets. It bypasses kube-proxy for data-plane traffic, is the modern, efficient way to run ALBs on EKS, and is what robustly enables the ALB to target a specific port on a Pod.
- alb.ingress.kubernetes.io/healthcheck-path: /health: This is the master stroke. We're telling the ALB, "To see if this Pod is healthy, check the /health path." And who answers at /health? Our Nginx sidecar, which happily serves the health.html file.
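Once everything is deployed (next step), you can confirm that the health check path is intentionally open: a hedged check, assuming the host from the Ingress:
curl -i https://prometheus.example.com/health   # expect 200 OK, with no Basic Auth prompt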
5. Deploy and Celebrate
You're ready. Apply your changes using Helm:
# Add the chart repository first if you haven't already:
# helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm upgrade --install prometheus prometheus-community/kube-prometheus-stack \
  -f your-values.yaml \
  -n <your-prometheus-namespace>
Wait a few minutes for the ALB to provision and for the Prometheus Pods to be recreated (this time with the sidecar). You can watch the progress:
# Watch the pod get recreated
kubectl get pods -n <your-prometheus-namespace> -w
# Once running, check the logs of the sidecar
kubectl logs -f <your-new-prometheus-pod-name> -c nginx-sidecar -n <your-prometheus-namespace>
Now, open https://prometheus.example.com in your browser.
Success! You'll be greeted by a Basic Auth pop-up. Enter the credentials you created with htpasswd, and you'll be granted access. If you click cancel, you'll see your custom 401.html error page.
You've successfully used Nginx to enable Basic Auth for your Prometheus Ingress with ALB.
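Prefer the terminal to the browser? The same checks with curl (hostname and credentials are yours to substitute):
# Without credentials: blocked at the NGINX layer with our custom 401 page
curl -i https://prometheus.example.com/
# With credentials: proxied through to Prometheus
curl -i -u your-username:your-password https://prometheus.example.com/graph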
Handling Health Checks Gracefully
One of the trickiest parts of using a sidecar proxy like NGINX is health checks. ALB needs a dedicated path to determine if your service is healthy. In our case, we added this in the Ingress annotations:
alb.ingress.kubernetes.io/healthcheck-path: /health
That path is served by the NGINX sidecar using a static HTML page (health.html). Because it's just a static file, NGINX answers /health with 200 OK even when Prometheus itself is down, as long as the pod is running. This is where things get tricky.
Important Caveat: Because NGINX always returns 200 OK for /health, your ALB will think everything is fine even if Prometheus has crashed or is unreachable. That’s misleading and dangerous for production-grade monitoring.
Improving Health Check Accuracy
To improve the accuracy of the health check, instead of serving a static /health page from NGINX, you can configure it to proxy the health check to Prometheus’ actual health endpoint: /-/healthy.
Here’s how you’d modify the nginx.conf:
location = /health {
  proxy_pass http://127.0.0.1:9090/-/healthy;
}
With this setup:
- ALB hits /health.
- NGINX proxies the request to http://localhost:9090/-/healthy.
- If Prometheus is healthy, it returns 200.
- If Prometheus is down, NGINX returns an error to ALB.
That way, your health checks become more truthful, ensuring you don't silently serve broken services.
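You can watch this behavior without going through the ALB by port-forwarding straight to the sidecar; a rough sketch (pod name and namespace are placeholders):
kubectl port-forward <your-new-prometheus-pod-name> 8181:8181 -n <your-prometheus-namespace> &
curl -i http://localhost:8181/health   # 200 while Prometheus answers /-/healthy, 502 once it stops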
Visualizing the Traffic Flow
Let's understand the "before" and "after" of our traffic flow.
Default Flow (The "Before"):
- User visits prometheus.example.com.
- ALB receives the request on port 443.
- ALB forwards the request to the Prometheus Kubernetes Service on its port (e.g., 9090).
- The Service routes the traffic to the Prometheus Pod on its containerPort (9090).
- Prometheus serves the UI, no questions asked.
User -> ALB -> Kubernetes Service (Port 9090) -> Prometheus Pod (Port 9090)
New Secured Flow (The "After"):
- User visits prometheus.example.com.
- ALB receives the request on port 443.
- Our new Ingress rule tells the ALB to forward the request to the Prometheus Service on a new port: 8181.
- The Service is configured to route traffic destined for port 8181 to our Prometheus Pod's containerPort 8181.
- Inside the Pod, port 8181 isn't Prometheus. It's our Nginx Sidecar container.
- Nginx receives the request and inspects it.
- Case 1: No Auth: Nginx returns a 401 Unauthorized error (and our nice custom error page).
- Case 2: Valid Auth: Nginx accepts the request and proxies it internally (via localhost:9090) to the Prometheus container running in the exact same Pod.
- Prometheus serves the UI, completely unaware it was ever protected.
User -> ALB -> K8s Service (Port 8181) -> Prometheus Pod
Inside Pod: Port 8181 (Nginx Sidecar) -> (Auth Check) -> localhost:9090 (Prometheus Container)
This is the beauty of the sidecar pattern. The Prometheus container itself is unmodified. We're just wrapping it in a layer of security.
Security & Best Practices
- Avoid hardcoding credentials — use Kubernetes Secrets.
- Use HTTPS everywhere (SSL termination at ALB).
- Periodically rotate credentials in your basic-auth file.
- Consider OIDC or OAuth2 proxy for production-grade security.
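For the rotation point, a minimal sketch of the workflow, assuming credentials live in the extraSecret block from earlier (names are placeholders):
# 1. Generate a fresh hash and paste the output over the old line
#    in the basic-auth key of your values.yaml
htpasswd -nb your-username 'your-new-password'
# 2. Roll the release; because the files are subPath mounts, the pod
#    must be recreated before NGINX sees the new credentials
helm upgrade prometheus prometheus-community/kube-prometheus-stack \
  -f your-values.yaml -n <your-prometheus-namespace>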
Final Thoughts
This Nginx sidecar pattern is a powerful technique that extends far beyond just Prometheus. Any application you deploy behind an ALB Ingress that lacks native authentication (or rate-limiting, or custom headers) can be secured using this exact method.
By wrapping our application in a small, configurable proxy, we've bridged the gap between the features of our load balancer and the requirements of our application, all without modifying the application itself. It's a clean, reusable, and "Kubernetes-native" way to solve a very common problem.
Of course, this solution is a powerful pattern, but every Kubernetes environment has its own quirks and challenges. If you found this guide helpful and are tackling a similar complex use case, whether it's custom authentication, complex ingress routing, or just trying to tame the EKS beast, I'm always open to discussing it. Feel free to reach out to me at hello@dheeth.blog for a consultation or just to trade war stories.
Happy monitoring!