Unconventional RPC: Leveraging Apache Kafka and Karafka

In the world of distributed systems, developers are constantly seeking innovative solutions to streamline communication between services. An intriguing approach has emerged, challenging conventional wisdom by implementing Remote Procedure Call (RPC) patterns using Apache Kafka and the Karafka framework. This unconventional method offers a unique perspective on utilizing existing infrastructure in creative ways, potentially reducing operational complexity and maintenance overhead.

The Concept

At its core, RPC creates the illusion of executing a function locally when it’s actually running on a remote server. Traditionally, Apache Kafka, designed as an event log optimized for throughput, might seem an unlikely candidate for RPC implementation. However, this novel approach demonstrates that sometimes thinking outside the box can yield surprising results.

Architecture Overview

The RPC pattern built on Kafka requires a delicate balance of synchronous and asynchronous communication. From the client’s perspective, the process appears synchronous – send a request, wait for a response. Behind the scenes, however, the underlying operations within Kafka remain asynchronous.

Key Components:

  • Two Kafka topics: one for commands (requests) and another for results (responses)
  • A client-side consumer operating without a consumer group
  • A commands consumer in the RPC server for processing requests and publishing results
  • A synchronization mechanism using mutexes and condition variables

Implementation Flow

  1. The client generates a unique correlation ID for each RPC call
  2. The command is published to Kafka
  3. Client execution is blocked using a mutex and condition variable
  4. The message flows through various stages:
    • Command topic persistence
    • Consumer polling and processing
    • Result publishing
    • Result topic persistence
    • Client-side consumer matching of correlation ID and response

Design Considerations

This architecture makes specific trade-offs to balance performance and reliability. Single-partition topics ensure strict ordering, which simplifies correlating responses with requests (note that ordering alone does not give exactly-once semantics; that would additionally require Kafka's transactional and idempotence features). The custom consumer approach avoids consumer group rebalancing delays, while the synchronization mechanism bridges the gap between Kafka's asynchronous nature and the desired synchronous behavior.

Implementation Components

Topic Configuration

Two essential topics are defined:

  1. Command topic for receiving and processing RPC requests
  2. Results topic marked as inactive (using a custom iterator instead of a standard consumer group consumer)
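The two topics above can be declared in Karafka's routing. This is a minimal sketch; the topic names (`commands`, `responses`) and the consumer class name are illustrative assumptions, not taken from the original code.

```ruby
require 'karafka' # assumes the karafka gem is installed

class KarafkaApp < Karafka::App
  routes.draw do
    topic :commands do
      # A single partition keeps strict ordering, simplifying correlation
      config(partitions: 1)
      consumer CommandsConsumer
    end

    topic :responses do
      config(partitions: 1)
      # Marked inactive: no consumer-group consumer subscribes here;
      # the client reads this topic directly with its own iterator
      active false
    end
  end
end
```

Marking the results topic `active false` tells Karafka not to manage it with a standard consumer group, leaving it free for the client's custom reader.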

Command Consumer

The consumer handles incoming commands and publishes results back to the results topic. For demonstration purposes, it uses a simple eval to process commands, though production implementations would require proper command validation, deserialization, and secure processing logic.
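A command consumer along those lines might look as follows. The payload keys (`command`, `correlation_id`) and topic name are assumptions for illustration, and the `eval` is demonstration-only, mirroring the caveat above.

```ruby
require 'json'
require 'karafka' # assumes the karafka gem is installed

class CommandsConsumer < Karafka::BaseConsumer
  def consume
    messages.each do |message|
      payload = JSON.parse(message.raw_payload)

      # DEMO ONLY: never eval untrusted input in production;
      # replace with validated, deserialized command handling
      result = eval(payload['command'])

      # Publish the result keyed by the original correlation id
      Karafka.producer.produce_async(
        topic: 'responses',
        payload: {
          'correlation_id' => payload['correlation_id'],
          'result' => result
        }.to_json
      )
    end
  end
end
```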

Synchronization Mechanism

A synchronization mechanism using Ruby’s mutex and condition variables bridges Kafka’s asynchronous nature with synchronous RPC behavior. This mechanism maintains a registry of pending requests and coordinates the blocking and unblocking of client threads based on correlation IDs.
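A minimal sketch of such a registry in plain Ruby is shown below. The class and method names are illustrative; each pending request gets a slot guarded by its own `Mutex` and `ConditionVariable`, and the listener thread signals the slot when a matching correlation ID arrives.

```ruby
require 'timeout'

# Registry of pending RPC requests, keyed by correlation id.
# Illustrative sketch; names are assumptions, not the original API.
class ResponseRegistry
  Slot = Struct.new(:mutex, :cond, :value, :fulfilled)

  def initialize
    @mutex = Mutex.new
    @slots = {}
  end

  # Called by the client before publishing a command
  def register(correlation_id)
    slot = Slot.new(Mutex.new, ConditionVariable.new, nil, false)
    @mutex.synchronize { @slots[correlation_id] = slot }
    slot
  end

  # Called by the response listener when a matching result arrives
  def fulfill(correlation_id, value)
    slot = @mutex.synchronize { @slots.delete(correlation_id) }
    return unless slot

    slot.mutex.synchronize do
      slot.value = value
      slot.fulfilled = true
      slot.cond.signal
    end
  end

  # Blocks the calling thread until the result arrives or timeout expires
  def await(slot, timeout: 5)
    slot.mutex.synchronize do
      slot.cond.wait(slot.mutex, timeout) unless slot.fulfilled
      raise Timeout::Error, 'no response' unless slot.fulfilled

      slot.value
    end
  end
end
```

Checking `fulfilled` under the slot's mutex both before and after the wait guards against the race where the response lands between registration and the wait call.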

The Client

The client implementation consists of two main components:

  1. A response listener that continuously checks for matching results
  2. A blocking command dispatcher that waits for responses
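These two components might be sketched as below. The sketch assumes a `registry` object implementing the pending-request registry described above (hypothetical `register`/`fulfill`/`await` methods), and uses Karafka Pro's `Iterator` to read the responses topic without a consumer group; a plain rdkafka consumer would work similarly. Topic names and payload keys are assumptions.

```ruby
require 'json'
require 'securerandom'
require 'karafka' # assumes the karafka gem (Pro for Iterator) is installed

class RpcClient
  def initialize(registry)
    @registry = registry
  end

  # 1. Response listener: runs in a background thread and unblocks
  #    whichever caller registered the matching correlation id
  def start_listener
    Thread.new do
      iterator = Karafka::Pro::Iterator.new('responses')
      iterator.each do |message|
        payload = JSON.parse(message.raw_payload)
        @registry.fulfill(payload['correlation_id'], payload['result'])
      end
    end
  end

  # 2. Blocking dispatcher: publish the command, then wait on the slot
  def call(command, timeout: 5)
    correlation_id = SecureRandom.uuid
    slot = @registry.register(correlation_id)

    Karafka.producer.produce_sync(
      topic: 'commands',
      payload: {
        'correlation_id' => correlation_id,
        'command' => command
      }.to_json
    )

    @registry.await(slot, timeout: timeout)
  end
end
```

Usage would then be a listener started once at boot, followed by ordinary-looking calls, e.g. `client.start_listener` and later `client.call('1 + 1')`, which blocks until the matching response arrives.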

Usage and Performance

To use this RPC implementation, a response listener is started in a background thread. Synchronous RPC calls can then be made from the application; each call blocks until its response arrives, creating the illusion of a regular synchronous method call.

In local testing, this implementation achieved impressive performance, with roundtrip times as low as 3ms. However, it’s crucial to note that these results assume ideal conditions and minimal command processing time. Real-world usage would require additional error handling, timeouts, and more robust command processing logic.

Considerations and Limitations

While this approach demonstrates the flexibility of Kafka and Karafka, it’s important to recognize its limitations. This implementation serves as a proof-of-concept and learning resource, lacking production-ready features such as proper timeout handling, resource cleanup after timeouts, error propagation, retries, message validation, security measures, and comprehensive metrics/monitoring.

Conclusion

This innovative RPC pattern using Apache Kafka and Karafka challenges preconceptions about messaging systems and RPC. It showcases how deep understanding of tools can reveal capabilities beyond their primary use cases. While not suitable for every situation, this approach provides a pragmatic alternative when adding new infrastructure components isn’t desirable, particularly in environments where Kafka is already a central part of the infrastructure.
