Message Queues
Throughput benchmarks and delivery guarantees
⚡ Quick Decision
Choose Redis Pub/Sub When:
• Need ultra-low latency (<1ms)
• Real-time notifications
• Can tolerate message loss
Choose Kafka When:
• Need high throughput (2M+ msg/sec)
• Event streaming & replay
• Analytics pipelines
Choose RabbitMQ When:
• Need complex routing
• Task queue processing
• Guaranteed delivery matters
💡 For implementation guides and code examples, see our technology deep dives: Kafka, RabbitMQ, Amazon SQS
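To make the "can tolerate message loss" trade-off concrete, here is a minimal Redis Pub/Sub sketch, assuming the redis-py client and a Redis instance on localhost (both assumptions, not from this page). A subscriber that is offline when publish() is called simply never sees the message.

```python
# Minimal Redis Pub/Sub sketch (assumes redis-py and Redis at localhost:6379).
# Pub/Sub is fire-and-forget: offline subscribers miss the message entirely.
import redis

r = redis.Redis(host="localhost", port=6379)

# Publisher side: returns the number of subscribers that received the message.
receivers = r.publish("notifications", '{"user_id": 42, "event": "login"}')
print(f"delivered to {receivers} subscriber(s)")

# Subscriber side (typically a separate process):
pubsub = r.pubsub()
pubsub.subscribe("notifications")
for message in pubsub.listen():
    if message["type"] == "message":
        print("got:", message["data"])
        break
```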
Performance Benchmarks
Throughput and latency vary dramatically with system design and delivery guarantees; the figures below reflect typical production deployments.
Redis Pub/Sub (real-time notifications): 1M+ msg/sec throughput, <1ms latency, memory-only persistence
RabbitMQ (task processing): 20K msg/sec throughput, 1-5ms latency, disk + memory persistence
Apache Kafka (event streaming): 2M+ msg/sec throughput, 2-10ms latency, distributed log persistence
Amazon SQS (cloud microservices): 300K msg/sec throughput, 20-100ms latency, replicated persistence
Google Pub/Sub (analytics pipelines): 1M+ msg/sec throughput, 10-50ms latency, replicated persistence
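Throughput figures like Kafka's 2M+ msg/sec depend heavily on producer batching, compression, and acknowledgment settings. The sketch below, assuming the confluent-kafka Python client and a broker at localhost:9092 (both assumptions, not from this page), shows the knobs that typically matter.

```python
# Batched Kafka producer sketch (assumes confluent-kafka and a broker at localhost:9092).
# Throughput in the table above depends heavily on settings like these.
from confluent_kafka import Producer

producer = Producer({
    "bootstrap.servers": "localhost:9092",
    "linger.ms": 10,           # wait up to 10 ms to fill a batch before sending
    "batch.size": 131072,      # 128 KB batches amortize per-request overhead
    "compression.type": "lz4", # smaller payloads on the wire
    "acks": "1",               # leader-only acks trade durability for latency
})

def on_delivery(err, msg):
    # Called from poll()/flush() once the broker has (or has not) accepted the message.
    if err is not None:
        print("delivery failed:", err)

for i in range(100_000):
    producer.produce("events", value=f'{{"seq": {i}}}', key=str(i), on_delivery=on_delivery)
    producer.poll(0)           # serve delivery callbacks without blocking

producer.flush()               # block until all buffered messages are sent
```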
Delivery Guarantees vs Performance
Reliability: 95% at-most-once vs 99.99% exactly-once
Throughput (relative): 100% at-most-once vs 60% exactly-once
Latency (relative): 100% at-most-once vs 40% exactly-once
At-most-once (metrics, logs): 95% reliability, lowest complexity
At-least-once (email notifications): 99.9% reliability, medium complexity
Exactly-once (financial transactions): 99.99% reliability, highest complexity
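Much of the gap between at-most-once and at-least-once comes down to when the consumer acknowledges. A hedged sketch using the confluent-kafka Python client (the client choice, topic name, and handle_payment() stub are illustrative assumptions): committing offsets only after processing gives at-least-once semantics, so duplicates are possible but messages are not lost.

```python
# At-least-once Kafka consumer sketch (assumes confluent-kafka and topic "payments").
from confluent_kafka import Consumer

def handle_payment(payload: bytes) -> None:
    # Placeholder for real processing (write to a DB, call an API, etc.).
    print("processing", payload)

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "billing-workers",
    "enable.auto.commit": False,     # we commit manually, after processing
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["payments"])

try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None or msg.error():
            continue
        handle_payment(msg.value())                      # do the work first
        consumer.commit(message=msg, asynchronous=False) # then commit the offset
finally:
    consumer.close()
```

Flipping the order (commit, then process) turns this into at-most-once: a crash after the commit drops the in-flight message instead of redelivering it.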
Capacity Planning Numbers
Typical Message Sizes
JSON event (user actions, API calls): 1-5 KB
Database change (row updates, deletes): 2-10 KB
Image metadata (upload notifications): 5-20 KB
Video processing (job specifications): 50-200 KB
Queue Sizing Rules
• Queue depth: 10-100x peak msg/sec
• Consumer lag: <30 sec for real-time
• Retention: 7-30 days for replay
• Partitions: 2x consumer count
• Dead-letter queue: route messages after 3 failed retries
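A quick back-of-the-envelope sketch applying the sizing rules above; the workload inputs (10K msg/sec peak, 2 KB average message, 12 consumers, 7-day retention) are illustrative assumptions, not figures from this page.

```python
# Back-of-the-envelope capacity planning using the rules above.
# Workload inputs are assumed for illustration.
peak_msgs_per_sec = 10_000   # peak ingest rate
avg_msg_size_kb = 2          # e.g. a small JSON event from the size table
retention_days = 7           # low end of the 7-30 day replay window
consumers = 12               # planned consumer instances

queue_depth = peak_msgs_per_sec * 100    # 10-100x peak msg/sec (upper bound)
partitions = consumers * 2               # 2x consumer count
retention_gb = peak_msgs_per_sec * avg_msg_size_kb * 86_400 * retention_days / 1_048_576

print(f"queue depth to tolerate: ~{queue_depth:,} messages")
print(f"partitions: {partitions}")
print(f"retention storage: ~{retention_gb:,.0f} GB per replica")
```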
System Selection Decision Tree
Step 1: What's Your Primary Use Case?
🔄 Task Processing
Background jobs, work queues, request handling
→ Consider: RabbitMQ, Amazon SQS
📊 Event Streaming
Analytics, logs, real-time data pipelines
→ Consider: Kafka, Pulsar, Kinesis
Step 2: What Are Your Constraints?
Latency Critical (<5ms)
• Redis Pub/Sub
• ZeroMQ
• Chronicle Queue
Throughput Critical (>100K/sec)
• Apache Kafka
• Apache Pulsar
• Redis Streams
Simplicity Critical
• Amazon SQS
• Google Pub/Sub
• Azure Service Bus
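When simplicity wins, the entire integration can be a few API calls. A minimal sketch using boto3 against an existing SQS queue (the queue URL below is a placeholder, and AWS credentials are assumed): send, receive with long polling, then delete to acknowledge.

```python
# Minimal SQS producer/consumer sketch (assumes boto3, AWS credentials,
# and an existing queue; the URL below is a placeholder).
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/example-queue"

# Send a message.
sqs.send_message(QueueUrl=queue_url, MessageBody='{"task": "resize", "image_id": 7}')

# Receive with long polling (up to 20 s), then delete to acknowledge.
resp = sqs.receive_message(
    QueueUrl=queue_url,
    MaxNumberOfMessages=10,
    WaitTimeSeconds=20,
)
for msg in resp.get("Messages", []):
    print("processing:", msg["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```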
Cost & Operational Considerations
💰 Cost per Million Messages
Self-hosted Redis: $0.05-0.30
Amazon SQS: $0.40
Self-hosted Kafka: $0.10-0.50
Confluent Cloud: $1.00-3.00
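For a sense of scale, the per-million rates above translate into monthly bills like the sketch below; the 500M messages/month volume is an assumed example, not a figure from this page.

```python
# Rough monthly cost from the per-million-message rates above.
# The monthly volume is an assumed example; real bills add storage, egress, and ops time.
monthly_messages = 500_000_000  # 500M messages/month (assumption)
millions = monthly_messages / 1_000_000

cost_per_million = {
    "Self-hosted Redis": (0.05, 0.30),
    "Amazon SQS": (0.40, 0.40),
    "Self-hosted Kafka": (0.10, 0.50),
    "Confluent Cloud": (1.00, 3.00),
}

for system, (low, high) in cost_per_million.items():
    print(f"{system}: ${low * millions:,.0f} - ${high * millions:,.0f} per month")
```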
⚙️ Operational Complexity
Cloud Managed (SQS/Pub/Sub): Low
RabbitMQ: Medium
Redis Pub/Sub: Medium
Kafka Cluster: High
⚠️ Hidden Costs to Consider
• Monitoring and observability tooling
• Multi-region replication bandwidth
• Storage costs for message retention
• Engineer time for cluster management