API Gateway Pattern: The Front Door to Your Services
Learn what an API gateway does, when to use one, and how to set it up. Covers routing, authentication, rate limiting, and tools like Kong, AWS API Gateway, and Traefik.
You have ten microservices. Every client — mobile app, web app, third-party partner — needs to talk to all of them. They all need authentication. They all need rate limiting. They all need logging.
You could add all of this to every service. Or you could put a single component in front of them all.
That's the API gateway.
What Does an API Gateway Do?
An API gateway is the single entry point for all client requests. Clients talk to one URL. The gateway figures out which service to call, forwards the request, and returns the response.
Without gateway:

```
Mobile App → /users    → User Service
Mobile App → /orders   → Order Service
Mobile App → /products → Product Service
(auth, rate limiting, logging in each service)
```

With gateway:

```
Mobile App → API Gateway → /users    → User Service
                         → /orders   → Order Service
                         → /products → Product Service
(auth, rate limiting, logging done once, in the gateway)
```

The gateway handles cross-cutting concerns so your services don't have to.
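At its core, the routing job is a prefix-to-backend lookup. A minimal sketch in Python (the service addresses are made up for illustration):

```python
# Map path prefixes to backend base URLs (illustrative addresses).
ROUTES = {
    "/users": "http://user-service:8001",
    "/orders": "http://order-service:8002",
    "/products": "http://product-service:8003",
}

def resolve(path: str):
    """Return the backend base URL for a request path, or None (404)."""
    for prefix, backend in ROUTES.items():
        if path == prefix or path.startswith(prefix + "/"):
            return backend
    return None

print(resolve("/orders/42"))  # http://order-service:8002
print(resolve("/unknown"))    # None
```

Real gateways add matching on method, host, and headers, but prefix routing is the common case.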
What the Gateway Handles
Routing: Map /api/users/* to the user service, /api/orders/* to the order service. Services can change their internal URL structure without clients knowing.
Authentication: Verify JWTs or API keys once, at the gateway. Services trust that the gateway already checked. No auth code in each service.
Rate limiting: Enforce limits per client at one place. 100 requests/minute per IP, 10,000/day per API key — the gateway tracks and enforces this.
SSL termination: Clients connect over HTTPS. The gateway decrypts. Services talk to each other over plain HTTP inside your private network — faster, simpler certs.
Request/response transformation: Strip internal headers before responding to clients. Add request IDs for tracing. Transform response formats.
Load balancing: Multiple instances of a service — the gateway distributes traffic across them.
Logging and monitoring: One place to log every request, response time, and status code. One place to see traffic patterns.
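Several of these concerns reduce to small, well-known algorithms. Rate limiting, for instance, is usually a token bucket tracked per client key. A minimal Python sketch (names and limits are illustrative, not taken from any particular gateway):

```python
import time

class TokenBucket:
    """Allows `rate` requests/second on average, with bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill proportionally to elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

buckets = {}  # one bucket per client IP or API key

def check(client_id: str) -> bool:
    # 100 requests/minute with a burst allowance of 20 (illustrative numbers)
    bucket = buckets.setdefault(client_id, TokenBucket(rate=100 / 60, capacity=20))
    return bucket.allow()
```

In production the bucket state lives in shared storage (often Redis) so every gateway instance enforces the same limit.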
Simple Example: Nginx as Gateway
For small setups, Nginx does basic gateway work:
```nginx
# Rate-limit state must be defined at the http level, outside server {}
limit_req_zone $binary_remote_addr zone=api:10m rate=100r/m;

server {
    listen 443 ssl;
    server_name api.example.com;

    # Applies to every location in this server block
    limit_req zone=api burst=20;

    # Route to user service
    location /api/users/ {
        proxy_pass http://user-service:8001/;
        proxy_set_header Host $host;
        proxy_set_header X-Request-ID $request_id;
    }

    # Route to order service
    location /api/orders/ {
        proxy_pass http://order-service:8002/;
        proxy_set_header Host $host;
        proxy_set_header X-Request-ID $request_id;
    }
}
```

Note that `limit_req_zone` is only valid in the `http` context, and putting `limit_req` in its own `location /api/` block wouldn't work — nginx matches exactly one location per request, so the more specific `/api/users/` and `/api/orders/` blocks would bypass it. Declaring `limit_req` at the server level covers all locations.

Good for: simple routing, rate limiting, SSL. Not great for: complex auth logic, dynamic configuration without reloading.
Kong: Production API Gateway
Kong is a popular open-source gateway built on Nginx. You configure it via API or declarative YAML — no code changes needed.
```yaml
# kong.yml — declarative configuration
_format_version: "3.0"

services:
  - name: user-service
    url: http://user-service:8001
    routes:
      - name: users-route
        paths:
          - /api/users

  - name: order-service
    url: http://order-service:8002
    routes:
      - name: orders-route
        paths:
          - /api/orders

plugins:
  - name: jwt              # auth via JWT
    config:
      secret_is_base64: false
  - name: rate-limiting    # rate limiting
    config:
      minute: 100
      hour: 1000
      policy: redis
  - name: correlation-id   # adds a request ID header
    config:
      header_name: X-Request-ID
```

Kong plugins handle auth, rate limiting, logging, request transformation — all configured, not coded.
AWS API Gateway
If you're on AWS, API Gateway integrates with Lambda, ECS, and other services. You define routes and link them to backends.
```json
{
  "openapi": "3.0.1",
  "paths": {
    "/users/{userId}": {
      "get": {
        "x-amazon-apigateway-integration": {
          "type": "http_proxy",
          "uri": "http://user-service.internal/users/{userId}",
          "httpMethod": "GET"
        }
      }
    }
  }
}
```

AWS API Gateway gives you: built-in auth (Cognito, Lambda authorizers), usage plans per API key, CloudWatch logging, and WAF integration — no infrastructure to manage.
Good when: you're already deep in AWS and want zero ops overhead.
Traefik: Gateway for Containers
Traefik reads Docker/Kubernetes labels and configures itself automatically. No config files to update when you add a new service.
```yaml
# docker-compose.yml
services:
  api-gateway:
    image: traefik:v3.0
    command:
      - "--providers.docker=true"
      - "--entrypoints.web.address=:80"
    ports:
      - "80:80"
    volumes:
      # The Docker provider needs socket access to discover containers
      - /var/run/docker.sock:/var/run/docker.sock:ro

  user-service:
    image: user-service:latest
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.users.rule=PathPrefix(`/api/users`)"
      # auth and rate-limit middlewares must be defined elsewhere
      - "traefik.http.routers.users.middlewares=auth,rate-limit"

  order-service:
    image: order-service:latest
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.orders.rule=PathPrefix(`/api/orders`)"
```

Traefik auto-discovers services as containers start and stop. Good fit for Kubernetes.
Authentication at the Gateway
The gateway verifies the token, then passes user info to services via headers:
```python
# Gateway middleware (pseudocode — PUBLIC_KEY, forward_request, and
# Response are placeholders for your framework's equivalents)
import jwt

def auth_middleware(request):
    token = request.headers.get("Authorization", "").replace("Bearer ", "")
    try:
        payload = jwt.decode(token, PUBLIC_KEY, algorithms=["RS256"])
        # Add user info to the forwarded request
        request.headers["X-User-ID"] = str(payload["user_id"])
        request.headers["X-User-Role"] = payload["role"]
        return forward_request(request)
    except jwt.InvalidTokenError:
        return Response(status=401, body="Unauthorized")
```

The order service trusts X-User-ID — it never touches token verification. Auth lives in one place.
When Not to Use a Gateway
Small monolith: One service with a few endpoints doesn't need a gateway. Just put auth middleware in your app.
Added latency: Every request goes through an extra hop. For latency-sensitive internal APIs, direct service-to-service calls may be better.
Single point of failure: If the gateway goes down, everything goes down. You need redundancy (multiple gateway instances + load balancer in front).
Not a substitute for service security: Services behind the gateway should still validate inputs. Don't assume internal traffic is safe.
Gateway vs Service Mesh
These solve different problems:
API gateway: North-south traffic (clients → services). One entry point, one place for auth and rate limiting.
Service mesh (Istio, Linkerd): East-west traffic (service → service). Handles mTLS between services, circuit breaking, retries internally.
In production microservices systems, you often use both: gateway for client traffic, service mesh for internal service communication.
Key Takeaways
- API gateway is the single entry point for all clients — routes to the right service
- Handles cross-cutting concerns once: auth, rate limiting, logging, SSL termination
- Nginx for simple setups, Kong for feature-rich self-hosted, AWS API Gateway for AWS-native, Traefik for containers
- Pass user info from gateway to services via headers — services trust the gateway
- You still need redundancy — a gateway is a potential single point of failure
- Don't replace service-level security with gateway security
An API gateway removes boilerplate from every service. The complexity doesn't disappear — it consolidates into one place you control.
Related reading: Rate Limiting Your API · REST API Design Best Practices · Circuit Breaker Pattern