Redis Caching Explained: Speed Up Your Backend

Learn how Redis caching works, when to use it, and common patterns like cache-aside, write-through, and TTL. With Python examples.

By Akash Sharma·5 min read
#redis
#caching
#backend
#system design
#python
#performance
#database

Your app makes the same database query hundreds of times per second. Each time, it goes all the way to the database, waits for a result, and sends it back. That's slow and expensive.

Caching fixes this. Store the result of an expensive operation and reuse it. Redis is the most popular tool for this.

Why Redis? What Makes It Fast

Redis stores data in memory (RAM), not on disk. RAM access is roughly 100,000× faster than disk. That's why Redis can handle over 100,000 operations per second on a single server.

Redis is also simple by design. It stores key-value pairs:

  • "user:123"{"name": "Alice", "email": "alice@example.com"}
  • "product:456:price"29.99

That simplicity makes it extremely fast and easy to reason about.
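
For a concrete feel, here is what those two entries look like with the redis-py client. A minimal sketch, assuming a Redis server running locally on the default port:

python
import json
import redis
 
r = redis.Redis(host="localhost", port=6379)
 
# Values are plain strings/bytes; serialize structured data yourself
r.set("user:123", json.dumps({"name": "Alice", "email": "alice@example.com"}))
r.set("product:456:price", 29.99)
 
print(json.loads(r.get("user:123")))      # {'name': 'Alice', 'email': 'alice@example.com'}
print(float(r.get("product:456:price")))  # 29.99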

The Cache-Aside Pattern (Most Common)

This is the pattern you'll use 90% of the time:

  1. Check cache first
  2. Cache hit → return the cached value
  3. Cache miss → fetch from database, store in cache, return value
python
import redis
import json
 
r = redis.Redis(host='localhost', port=6379)
 
def get_user(user_id: int):
    cache_key = f"user:{user_id}"
    
    # Step 1: Check cache
    cached = r.get(cache_key)
    if cached:
        return json.loads(cached)  # Cache hit
    
    # Step 2: Cache miss — fetch from DB (db is whatever database client you use)
    user = db.query("SELECT * FROM users WHERE id = ?", user_id)
    
    # Step 3: Store in cache with 1-hour TTL
    r.setex(cache_key, 3600, json.dumps(user))
    
    return user

The setex command stores the value with an expiry time (TTL — Time To Live). After 1 hour, Redis automatically deletes it. The next request fetches fresh data from the database.
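
If you want to watch the countdown, the TTL command reports how many seconds a key has left. A tiny sketch (the key and value are illustrative):

python
r.setex("user:123", 3600, '{"name": "Alice"}')
print(r.ttl("user:123"))   # e.g. 3600: seconds until Redis deletes the key
print(r.ttl("user:999"))   # -2 means the key doesn't exist (expired or never set)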

Choosing the Right TTL

TTL is how long your cached data stays valid. Pick it based on how often the underlying data changes.

  Data type                          Suggested TTL
  Static content (product catalog)   24 hours or more
  User profiles                      1–4 hours
  Session data                       30 minutes
  Trending content (top 10 posts)    1–5 minutes
  Real-time data                     No cache

Setting TTL too long: users see stale data. Setting it too short: your cache doesn't help much.

Cache Invalidation: Keeping Data Fresh

When data changes in your database, the cached copy is stale. You have two options:

Option 1: Let it expire. Set a reasonable TTL and accept that cached data might be slightly out of date. Works fine for most cases.

Option 2: Delete the cache key on update. When you update a user's profile in the database, also delete that user's cache key. The next request will fetch fresh data.

python
def update_user(user_id: int, data: dict):
    # Update database
    db.execute("UPDATE users SET ... WHERE id = ?", user_id)
    
    # Invalidate the cache
    r.delete(f"user:{user_id}")

There's a famous line, usually attributed to Phil Karlton: "There are only two hard things in computer science: cache invalidation and naming things." It's funny because it's true. Be careful with complex invalidation logic — it's easy to introduce bugs.

Other Redis Caching Patterns

Write-through cache: Every write goes to both cache and database simultaneously. Reads always hit cache. Keeps cache consistent, but writes are slower.
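
A minimal write-through sketch, reusing the r client and the hypothetical db helper from the earlier examples (the SQL and column names are only illustrative):

python
def save_user(user_id: int, user: dict):
    # Write-through: every write hits the database...
    db.execute("UPDATE users SET name = ?, email = ? WHERE id = ?",
               user["name"], user["email"], user_id)
    # ...and the cache in the same code path, so reads can always trust the cache
    r.setex(f"user:{user_id}", 3600, json.dumps(user))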

Read-through cache: The cache itself is responsible for fetching from the database on a miss. Your app always talks to the cache. Makes your app simpler, but requires cache library support.
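
Redis doesn't do read-through on its own, but a thin wrapper can play that role. A rough sketch, with a hypothetical ReadThroughCache class and a loader callback standing in for whatever cache library you'd actually use:

python
class ReadThroughCache:
    """Tiny read-through wrapper: the cache, not the caller, loads on a miss."""
 
    def __init__(self, client, loader, ttl: int = 3600):
        self.client = client   # redis client
        self.loader = loader   # callback that fetches from the database on a miss
        self.ttl = ttl
 
    def get(self, key: str):
        cached = self.client.get(key)
        if cached is not None:
            return json.loads(cached)
        value = self.loader(key)   # the cache goes to the database, not the app
        self.client.setex(key, self.ttl, json.dumps(value))
        return value
 
# The application only ever talks to the cache:
users = ReadThroughCache(r, loader=lambda key: db.query(
    "SELECT * FROM users WHERE id = ?", key.split(":")[1]))
user = users.get("user:123")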

Cache stampede (thundering herd): When a popular cache key expires, thousands of requests simultaneously miss the cache and all hit the database. Solution: use a lock or add random jitter to TTL:

python
import random
 
# Add up to 5 minutes of random TTL to spread expiry
ttl = 3600 + random.randint(0, 300)
r.setex(cache_key, ttl, json.dumps(data))
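
The lock option can use Redis itself. A rough sketch with SET NX (set only if the key doesn't exist) so that exactly one request recomputes the value; the key name, the query, and the db helper are illustrative and continue the earlier examples:

python
import time
 
def get_trending(cache_key: str = "trending:posts", ttl: int = 300):
    cached = r.get(cache_key)
    if cached:
        return json.loads(cached)
 
    # Cache miss: try to take a short-lived lock. nx=True means "only set if
    # the key doesn't exist", so exactly one request wins; ex=10 caps how long
    # the lock survives if the winner crashes.
    if r.set(cache_key + ":lock", "1", nx=True, ex=10):
        try:
            data = db.query("SELECT * FROM posts ORDER BY score DESC LIMIT 10")
            r.setex(cache_key, ttl, json.dumps(data))
            return data
        finally:
            r.delete(cache_key + ":lock")
 
    # Everyone else waits a moment and reads what the winner cached,
    # instead of piling onto the database.
    time.sleep(0.1)
    cached = r.get(cache_key)
    return json.loads(cached) if cached else None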

Redis Beyond Simple Caching

Redis is also useful for:

  • Rate limiting: Count requests per user per minute with Redis INCR and a TTL (see the sketch after this list)
  • Session storage: Store user sessions instead of database sessions
  • Distributed locks: Prevent two servers from doing the same work simultaneously
  • Pub/Sub messaging: Simple message queuing between services
  • Leaderboards: Sorted sets for ranking
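
A fixed-window rate limiter is only a few lines. A rough sketch, reusing the r client; the key format and the 100-per-minute limit are made up for illustration:

python
import time
 
def allow_request(user_id: int, limit: int = 100) -> bool:
    """Allow at most `limit` requests per user per minute (fixed window)."""
    window = int(time.time() // 60)   # current minute
    key = f"ratelimit:{user_id}:{window}"
    count = r.incr(key)               # INCR creates the key at 1 if it doesn't exist
    if count == 1:
        r.expire(key, 60)             # first request in the window sets the TTL
    return count <= limit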

When NOT to Use Cache

Caching isn't always the answer. Avoid it when:

  • Data changes extremely frequently (real-time prices, sensor readings)
  • Data must always be perfectly accurate (bank balances during transactions)
  • You have very few users (premature optimization)
  • The database is already fast enough

Key Takeaways

  • Redis stores data in RAM, making it ~100,000× faster than disk-based databases
  • Cache-aside is the most common pattern: check cache, fall back to DB on miss, then store in cache
  • Always set a TTL — stale data is a feature, not a bug (within reason)
  • Invalidate cache on writes when freshness matters
  • Add jitter to TTL to prevent cache stampedes on popular keys
  • Redis is also useful for rate limiting, sessions, and distributed locks

Adding Redis caching to a slow endpoint is often the fastest win you can get in backend performance.

Related reading: Rate Limiting with Redis · Database Indexing Explained
