# Message Queues Explained: Kafka vs RabbitMQ
Learn how message queues work and when to use Kafka vs RabbitMQ. Covers async processing, event streaming, and real-world use cases with examples.
Your user signs up. You need to: send a welcome email, create their profile in 3 different services, notify the analytics system, and trigger an onboarding flow.
If you do all this synchronously in the signup request, it takes seconds and can fail midway. Message queues let you do this reliably, in parallel, without the user waiting.
## What Is a Message Queue?
A message queue is a buffer between services. Instead of Service A calling Service B directly, A puts a message in a queue. B picks it up and processes it whenever it's ready.
Without queue:

```
User signup → Email Service → Profile Service → Analytics → [wait...] → Response
```

With queue:

```
User signup → Queue → Response (fast!)
                ↓ (async)
        Email Service   (processes later)
        Profile Service (processes later)
        Analytics       (processes later)
```

This gives you:
- Decoupling: Services don't need to know about each other
- Resilience: If the Email Service is down, messages wait in the queue. No data is lost.
- Load buffering: The queue absorbs traffic spikes. Services process at their own pace.
- Parallel processing: Multiple consumers can process messages simultaneously
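The buffering idea above can be sketched with Python's standard-library `queue.Queue`: the signup handler returns immediately while a worker drains the queue in the background. This is a toy stand-in for a real broker, not production code.

```python
import queue
import threading

email_queue = queue.Queue()  # stands in for the broker

def signup(user_id, email):
    """Handle a signup: enqueue the work and return immediately."""
    email_queue.put({"user_id": user_id, "email": email})
    return "ok"  # the user gets a fast response

sent = []

def email_worker():
    """Consumer: drains the queue at its own pace."""
    while True:
        msg = email_queue.get()
        if msg is None:  # shutdown signal
            break
        sent.append(f"welcome -> {msg['email']}")
        email_queue.task_done()

worker = threading.Thread(target=email_worker)
worker.start()

signup(123, "alice@example.com")
signup(456, "bob@example.com")

email_queue.put(None)  # tell the worker to stop
worker.join()
print(sent)  # both signups processed asynchronously, in order
```

The signup path never waits on the email work; if the worker is slow or briefly stopped, messages simply accumulate in the queue.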
## RabbitMQ: The Traditional Message Broker
RabbitMQ is a message broker — it receives, routes, and delivers messages. Think of it as a smart postal system.
Core concepts:
- Producer: Sends messages to an exchange
- Exchange: Routes messages to queues based on rules
- Queue: Stores messages until consumed
- Consumer: Reads and processes messages from queues
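The exchange/queue split can be illustrated with a plain-Python sketch (no broker involved): a "direct"-style exchange routes each message to whichever queues are bound with a matching routing key. The class and key names here are made up for illustration.

```python
from collections import defaultdict, deque

class DirectExchange:
    """Toy direct exchange: routes messages by exact routing-key match."""
    def __init__(self):
        self.bindings = defaultdict(list)  # routing_key -> [queue, ...]

    def bind(self, q, routing_key):
        self.bindings[routing_key].append(q)

    def publish(self, routing_key, message):
        # Deliver to every queue bound with this key
        for q in self.bindings[routing_key]:
            q.append(message)

email_queue = deque()
audit_queue = deque()

exchange = DirectExchange()
exchange.bind(email_queue, "user.signup")
exchange.bind(audit_queue, "user.signup")   # two queues can share a key
exchange.bind(audit_queue, "user.deleted")

exchange.publish("user.signup", {"user_id": 123})
exchange.publish("user.deleted", {"user_id": 99})

print(len(email_queue))  # 1 — only the signup
print(len(audit_queue))  # 2 — the signup and the deletion
```

Producers only know the exchange and a routing key; which queues (and therefore which consumers) receive the message is decided by the bindings.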
```python
import json
import pika

# --- Producer: send a message ---
connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.queue_declare(queue='email_queue')
channel.basic_publish(
    exchange='',
    routing_key='email_queue',
    body='{"user_id": 123, "email": "alice@example.com"}',
)
connection.close()

# --- Consumer (typically a separate process, with its own connection) ---
connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.queue_declare(queue='email_queue')

def send_welcome_email(ch, method, properties, body):
    user_data = json.loads(body)
    email_service.send_welcome(user_data['email'])
    ch.basic_ack(delivery_tag=method.delivery_tag)  # Acknowledge success

channel.basic_consume(queue='email_queue', on_message_callback=send_welcome_email)
channel.start_consuming()
```

RabbitMQ is great for:
- Task queues (send email, resize image, process payment)
- Work distribution across multiple workers
- Request/reply patterns
- When messages should be deleted after consumption
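The work-distribution item in that list — one queue, many workers, each message handled exactly once — can be sketched with threads and a shared standard-library queue (a model of the pattern, not a RabbitMQ client):

```python
import queue
import threading

tasks = queue.Queue()
for i in range(6):
    tasks.put(f"resize image {i}")

results = []
lock = threading.Lock()

def worker(name):
    """Pull tasks until the queue is empty; each task goes to exactly one worker."""
    while True:
        try:
            task = tasks.get_nowait()
        except queue.Empty:
            return
        with lock:
            results.append((name, task))
        tasks.task_done()

threads = [threading.Thread(target=worker, args=(f"w{i}",)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(results))                         # 6 — every task processed
print(len({task for _, task in results}))   # 6 — no task processed twice
```

This is the opposite of Kafka's fan-out behavior below: here, competing consumers split the work, so adding workers increases throughput without duplicating effort.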
## Kafka: The Event Streaming Platform
Kafka is different. It's not just a queue — it's an event log. Messages (events) are written to topics and retained for a configurable period (days or weeks).
Multiple consumers can read the same message independently. Consumers track their own position (offset) in the log.
```python
import json
from kafka import KafkaProducer, KafkaConsumer

# Producer: publish an event
producer = KafkaProducer(bootstrap_servers='localhost:9092')
producer.send('user-signups', b'{"user_id": 123, "email": "alice@example.com"}')

# Consumer group A: sends welcome emails
# (each consumer loop blocks forever, so in practice each group
# runs in its own process or service)
consumer = KafkaConsumer('user-signups', group_id='email-service')
for message in consumer:
    user_data = json.loads(message.value)
    email_service.send_welcome(user_data['email'])

# Consumer group B: updates analytics (reads the same events)
analytics_consumer = KafkaConsumer('user-signups', group_id='analytics-service')
for message in analytics_consumer:
    analytics.track_signup(json.loads(message.value))
```

Both consumer groups read every event independently. This is what makes Kafka powerful for event-driven architectures.
Kafka is great for:
- Event streaming (user activity, clickstreams, logs)
- Multiple independent consumers for the same events
- Audit logs (events are retained, not deleted on consumption)
- Real-time data pipelines (process millions of events per second)
- Event sourcing
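The log-plus-offsets model behind all of this can be sketched in a few lines of plain Python: the topic is an append-only list, and each consumer group tracks its own read position, so groups never interfere and any group can replay from an earlier offset. The `TopicLog` class is a made-up illustration, not Kafka's API.

```python
class TopicLog:
    """Toy Kafka-style topic: an append-only log with per-group offsets."""
    def __init__(self):
        self.events = []
        self.offsets = {}  # group_id -> next offset to read

    def append(self, event):
        self.events.append(event)

    def poll(self, group_id):
        """Return unread events for this group and advance its offset."""
        start = self.offsets.get(group_id, 0)
        batch = self.events[start:]
        self.offsets[group_id] = len(self.events)
        return batch

    def seek(self, group_id, offset):
        """Replay: rewind a group's offset, like seeking in Kafka."""
        self.offsets[group_id] = offset

topic = TopicLog()
topic.append({"user_id": 123})
topic.append({"user_id": 456})

print(topic.poll("email-service"))       # both events
print(topic.poll("analytics-service"))   # both events again, independently
print(topic.poll("email-service"))       # [] — this group is caught up

topic.seek("email-service", 0)
print(topic.poll("email-service"))       # both events, replayed
```

Note what a RabbitMQ-style queue cannot do here: once a message is consumed and deleted, a second group (or a replay) has nothing left to read.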
## Kafka vs RabbitMQ: The Key Differences
| Feature | RabbitMQ | Kafka |
|---|---|---|
| Model | Push-based broker | Pull-based log |
| Message retention | Deleted after consumption | Retained for days/weeks |
| Multiple consumers | One consumer gets each message | All consumer groups get all messages |
| Throughput | High (thousands/sec) | Very high (millions/sec) |
| Ordering | Per-queue ordering | Per-partition ordering |
| Complexity | Easier to set up | More complex |
| Best for | Task queues, routing | Event streaming, analytics |
## When to Use Each
Use RabbitMQ when:
- You have specific task workers (send email, process payment, resize image)
- You want smart routing (send to different queues based on content)
- Messages should be consumed once and deleted
- You're just getting started with queues — it's easier to learn
Use Kafka when:
- Multiple teams/services need the same event data independently
- You need an audit trail or ability to replay events
- Processing millions of events per second
- Building real-time analytics, monitoring, or data pipelines
## A Simple Alternative: Redis as a Queue
For simple task queues, Redis can work with libraries like Celery (Python) or Faktory.
```python
# Celery with Redis as the broker
from celery import Celery

app = Celery('tasks', broker='redis://localhost:6379')

@app.task
def send_welcome_email(user_id, email):
    email_service.send_welcome(email)

# Call it asynchronously — returns immediately, a worker runs it
send_welcome_email.delay(123, 'alice@example.com')
```

This works great for moderate scale (thousands of tasks per minute). For higher throughput or multiple independent consumers, use Kafka. For complex routing, use RabbitMQ.
## Key Takeaways
- Message queues decouple services, improve resilience, and buffer traffic spikes
- RabbitMQ is a broker — messages go to queues, consumed once, then deleted
- Kafka is an event log — events are retained, multiple consumer groups read independently
- Use RabbitMQ for task queues (background jobs, email, payments)
- Use Kafka for event streaming (analytics, audit logs, real-time pipelines)
- For simple use cases, Redis + Celery is the easiest starting point
Message queues are one of the most reliable tools in a backend engineer's toolkit.