This document contains the complete problem bank with solutions and walkthroughs for the Message Queues & Event Streaming interviewer skill.
Question: "We have a video rendering pipeline. Users upload videos, and we put a job on a queue for worker servers to process. Should we use Kafka or RabbitMQ?"
Key Distinction: This is a work queue pattern, not an event streaming pattern.
Ideal Answer: Use RabbitMQ (or AWS SQS). This is a classic "work queue" pattern. We want multiple workers to pull jobs, process them, and delete them from the queue. We don't care about the ordering of the jobs, and we don't need to replay them.
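The work-queue semantics above can be sketched without a broker. This is a minimal simulation using Python's standard library: jobs are pulled by a competing consumer, processed, and then deleted (the equivalent of an ACK), which is exactly the delivery model RabbitMQ or SQS provides. The job IDs and worker name are illustrative.

```python
import queue

# Simulate a work queue: jobs are pulled by competing workers and
# removed once processed -- no ordering or replay guarantees needed.
job_queue = queue.Queue()
for job_id in ("video-1", "video-2", "video-3"):
    job_queue.put(job_id)

processed = []

def worker(name):
    # Each worker pulls whatever job is next; a real broker
    # (RabbitMQ/SQS) delivers each job to exactly one worker.
    while not job_queue.empty():
        job = job_queue.get()
        processed.append((name, job))
        job_queue.task_done()  # analogous to ACK: the job is gone for good

worker("worker-A")
```

After the run, every job has been processed exactly once and the queue is empty, mirroring the delete-on-success behavior the answer describes.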
Key Concepts:
Question: "A consumer reads a message from a RabbitMQ queue. Due to a bug in the JSON payload, the consumer throws an exception and crashes. The message is not ACKed. What happens next, and how do we stop the system from being stuck forever?"
Root Cause: An unacknowledged message gets requeued by RabbitMQ. The next consumer picks it up, crashes on the same bad payload, requeues it -- creating an infinite loop.
Ideal Answer:
Use a Dead Letter Queue (DLQ). Configure the consumer to catch the exception, log it, and explicitly NACK (reject) the message without requeuing, OR configure the queue with a max_deliveries policy. Once the limit is hit, the broker moves the message to a DLQ where engineers can inspect the bad payload.
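The max-deliveries/DLQ policy can be sketched as follows. This is a simulation, not broker configuration: `MAX_DELIVERIES`, the message shape, and the queue lists are assumptions chosen to show the flow (real brokers expose the limit via queue policy, e.g. RabbitMQ's dead-letter exchange settings).

```python
import json
from collections import defaultdict

MAX_DELIVERIES = 3  # assumed policy limit
main_queue = [{"id": "msg-1", "payload": "{bad json"}]
dead_letter_queue = []
delivery_counts = defaultdict(int)

def process(message):
    # Raises on the malformed payload, simulating the consumer bug.
    json.loads(message["payload"])

while main_queue:
    message = main_queue.pop(0)
    delivery_counts[message["id"]] += 1
    try:
        process(message)
    except Exception:
        if delivery_counts[message["id"]] >= MAX_DELIVERIES:
            dead_letter_queue.append(message)  # park it for inspection
        else:
            main_queue.append(message)         # requeue for another attempt
```

The loop terminates: after three failed deliveries the poison message lands in the DLQ instead of cycling forever, which is the fix the answer calls for.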
Key Concepts:
Question: "In Kafka, how do we guarantee that all events for a specific user_id are processed in the exact order they were generated?"
Key Insight: Kafka only guarantees ordering within a single Partition, not across the entire topic.
Ideal Answer:
When the Producer sends the message, it must use the user_id as the message Key. Kafka hashes the key (hash(user_id) % num_partitions) to determine the partition. Because User A always hashes to the same partition, and a partition is consumed sequentially by a single worker thread, ordering is guaranteed.
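The key-to-partition mapping can be sketched in a few lines. Kafka actually uses murmur2 over the key bytes; MD5 stands in here only because it is a stable hash available in the standard library (Python's built-in `hash()` is salted per process, so it would not be deterministic). The user IDs are illustrative.

```python
import hashlib

NUM_PARTITIONS = 4

def partition_for(key: str) -> int:
    # Stable hash of the key, then modulo the partition count --
    # the same scheme as Kafka's default partitioner (which uses murmur2).
    digest = hashlib.md5(key.encode()).digest()
    return int.from_bytes(digest[:4], "big") % NUM_PARTITIONS

# Every event keyed by the same user_id lands on the same partition,
# so a single consumer sees that user's events in production order.
events = [("user-42", "created"), ("user-42", "paid"), ("user-7", "created")]
assignments = [partition_for(user_id) for user_id, _ in events]
```

Both `user-42` events map to the same partition, which is why keying by `user_id` preserves per-user ordering.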
Key Concepts:
Question: "Our worker reads a message, charges the user's credit card via Stripe, and then crashes before acknowledging the message. What happens next?"

Root Cause: The message broker will redeliver the message since it was never ACKed. The worker will process it again, potentially charging the user twice.
Ideal Answer:
Implement idempotency. Before processing, generate or use the message's unique ID as an idempotency key. Store it in a database table (processed_payments). On redelivery, check if the key exists -- if so, skip processing and ACK the message. Also pass the idempotency key to Stripe so they deduplicate on their end.
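The idempotency check can be sketched like this. A `set` stands in for the `processed_payments` database table, and the message ID, amount, and return values are assumptions for the demo; the comment marks where the real Stripe call with `idempotency_key` would go.

```python
processed_payments = set()  # stands in for a DB table with a unique key
charges = []

def handle_message(message):
    key = message["id"]  # use the broker's message ID as the idempotency key
    if key in processed_payments:
        return "skipped"  # redelivery: already charged, just ACK
    # Real code would call Stripe here, passing the same key, e.g.
    # stripe.Charge.create(..., idempotency_key=key)
    charges.append(message["amount"])
    processed_payments.add(key)
    return "charged"

msg = {"id": "msg-123", "amount": 999}
first = handle_message(msg)
second = handle_message(msg)  # simulated redelivery after a crash
```

The redelivered message is detected and skipped, so the card is charged exactly once even though the broker delivered the message twice.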
Key Concepts:
Question: "We have a Kafka topic with 4 partitions. Our team wants to add 10 consumer instances to our consumer group to speed up processing. Will this work?"
Ideal Answer: No. In a Kafka consumer group, each partition is assigned to exactly one consumer. With 4 partitions and 10 consumers, only 4 consumers will be active -- the other 6 will sit idle. To scale to 10 consumers, you need at least 10 partitions.
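The assignment math can be sketched directly. This round-robin assignment is a simplification of Kafka's pluggable assignor strategies, but it shows the same constraint: each partition goes to exactly one consumer, so surplus consumers receive nothing.

```python
def assign_partitions(num_partitions, consumers):
    # Each partition is assigned to exactly one consumer in the group;
    # with fewer partitions than consumers, the extras sit idle.
    assignment = {c: [] for c in consumers}
    for p in range(num_partitions):
        assignment[consumers[p % len(consumers)]].append(p)
    return assignment

consumers = [f"consumer-{i}" for i in range(10)]
assignment = assign_partitions(4, consumers)
idle = [c for c, parts in assignment.items() if not parts]
```

With 4 partitions and 10 consumers, exactly 6 consumers end up idle, matching the answer: to keep 10 consumers busy you need at least 10 partitions.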
Key Concepts:
"Hey, welcome! Let's get into it. We're building an order processing system for an e-commerce platform. When a user places an order, we need to charge their card, update inventory, send a confirmation email, and notify the warehouse. Should we use Kafka or RabbitMQ for this? Walk me through your thinking."
Generate scorecard based on the Evaluation Rubric. Highlight strengths, improvement areas, and recommended resources.