Apache Cassandra Migration: 3.x to 4.x Episode: 1 Basics

Well, I am a big fan of Apache's tools. After Kafka and ZooKeeper, Cassandra is my third tool and my first database. My colleague and I have previously posted a blog on Kafka as well; please give it a read, you will find it useful too.

While working casually like any other day, I got a call from my manager about a Cassandra migration, that too in 14 days. Frankly speaking, I was afraid because I had zero knowledge of the Cassandra database. On top of that, I had to upgrade a running cluster.

I accepted the challenge and completed it with no downtime. So let’s see how.

I will start my Cassandra learning journey in this blog, cover the DC/DR setup of Cassandra in the next, and the migration itself in the last blog.

Continue reading “Apache Cassandra Migration: 3.x to 4.x Episode: 1 Basics”

Platform Engineering’s Impact on IT and DevOps 

Delve into the fundamental concepts of Platform Engineering and its profound implications for IT and DevOps teams.

In an era of ever-evolving digital landscapes, businesses and organizations are continually seeking ways to streamline operations, enhance collaboration and accelerate the delivery of innovative solutions to their customers. This is where Platform Engineering emerges as a game-changer, revolutionizing the way we approach IT infrastructure and DevOps implementation.

In this blog, we’ll delve into the fundamental concepts of Platform Engineering and its profound implications for IT and DevOps teams. We’ll uncover how Platform Engineering fosters a culture of agility, efficiency and scalability, ultimately empowering businesses to thrive in today’s fast-paced and competitive market.

So, let’s embark on this journey into the world of Platform Engineering and discover how it reshapes the landscape of IT and DevOps. Let’s dive in!

Continue reading “Platform Engineering’s Impact on IT and DevOps”

Multi-Account Management using AWS Control Tower

Introduction

As an organization grows rapidly over time, the complexity of its cloud infrastructure, its security concerns, and its need for better resource management grow with it. This creates a need for a more efficient and secure way to manage workloads. To address these problems, we can use multiple AWS accounts in our AWS environment. Some use cases where we can segregate AWS accounts are as follows: Continue reading “Multi-Account Management using AWS Control Tower”

Continuation Of Redis Throughput and Management

As promised in our previous blog on Redis Performance Tuning and Best Practices, we have explored more best practices and optimizations for Redis as a cache and database management system. In this blog, we will share some new findings and optimizations we have learned since that previous post.

We know that Redis is a high-speed and flexible data store that can fulfill different cache and database requirements. But if a system is not configured and tested correctly, even a fast and reliable one can quickly hit its limits. Here we will talk about the different needs of Redis as a system and how we can optimize it further to use it to its full potential.

While consulting and collaborating with different Redis architects from Redis Labs, I learned different ways of designing a performant, highly available, and secure Redis architecture. Based on my learning, I would categorize it into these dimensions:

  • Right-sizing and deployment of Redis setup.
  • Proxy and connection pooling.
  • Use the correct data type for storing keys.
  • Sharding and replication strategy.
Continue reading “Continuation Of Redis Throughput and Management”

Unlocking Debezium: Exploring the Fundamentals of Real-Time Change Data Capture with Debezium and Harnessing its Power in Docker Containers

Introduction

In a fast-moving, data-driven environment, applications are expected to respond instantly to changes happening inside databases. Batch-based systems struggle to meet this demand, especially when businesses rely on real-time dashboards, alerts, and event-driven workflows. This is where change data capture becomes an essential architectural component. Debezium provides a reliable way to stream database changes in real time and integrates seamlessly with Apache Kafka.

This article walks through the fundamentals of Debezium and demonstrates how PostgreSQL changes can be streamed to Kafka using Docker. The focus is on practical understanding with a working setup rather than just theory.

Understanding Change Data Capture

Change Data Capture, often referred to as CDC, is a mechanism that tracks inserts, updates, and deletes occurring in a database as they happen. Instead of repeatedly querying tables or running heavy batch jobs, CDC captures only the data that has changed.

This approach allows downstream systems to consume fresh data with minimal delay while keeping database load low. CDC is widely used in analytics platforms, event driven microservices, and data replication pipelines.
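For contrast, a polling-based approach typically re-runs a query like the hypothetical one below on a schedule, rescanning the table each time and still missing deletes entirely; CDC avoids this by reading changes from the database log instead (the orders table and updated_at column here are only illustrative, not part of this post's setup).

-- Hypothetical polling query, not part of the setup in this post
SELECT * FROM orders WHERE updated_at > '2024-01-01 00:00:00';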

What Debezium Brings to the Table

Debezium is an open source CDC platform developed by Red Hat. It works by reading database transaction logs, which already record every data modification. By leveraging these logs, Debezium captures changes efficiently and reliably.

Debezium supports multiple databases such as PostgreSQL, MySQL, SQL Server, Oracle, and MongoDB. It publishes each change as a structured event into Kafka topics, making the data available for real time processing.

How Debezium Works Behind the Scenes

Debezium uses a log-based CDC approach. Instead of polling database tables, it connects directly to the database log. Every insert, update, or delete operation is converted into a change event.

Each database has its own Debezium connector that understands how to read its transaction log. These connectors push standardized events to Kafka. Kafka then acts as a durable and scalable streaming backbone.

Each event includes details such as database name, table name, primary key, before and after values, and timestamps. This rich metadata makes the events suitable for analytics, auditing, and synchronization.
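As a rough illustration, a single insert event might look along these lines. The field names before, after, source, op, and ts_ms are part of Debezium's standard event envelope; the row values and timestamps shown here are only placeholders.

{
  "before": null,
  "after": { "name": "Opstree", "age": 30 },
  "source": { "db": "postgres", "schema": "public", "table": "transaction", "ts_ms": 1700000000000 },
  "op": "c",
  "ts_ms": 1700000000123
}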

Use Cases for Debezium:

  1. Microservices Architecture: Debezium plays a crucial role in event-driven microservices architectures, where each microservice can react to specific changes in the data. By consuming the change events, services can update their local view of data or trigger further actions.

  2. Data Synchronization: Debezium can be used to keep multiple databases in sync by replicating changes from one database to another in real-time. This is especially useful in scenarios where data needs to be replicated across geographically distributed systems or in cases where different databases serve specific purposes within an organization.

  3. Stream Processing and Analytics: Debezium’s real-time change data capture capabilities make it an excellent choice for streaming data processing and analytics. By consuming the change events from Debezium, organizations can perform real-time analysis, monitoring, and aggregations on the data. This can be particularly beneficial for applications such as fraud detection, real-time dashboards, and personalized recommendations.

  4. Data Warehousing and ETL (Extract, Transform, Load): Debezium can play a vital role in populating data warehouses or data lakes by capturing and transforming the change events into the desired format. It eliminates the need for batch processing or periodic data extraction, enabling near real-time data updates in analytical systems.

  5. Data Integration and Replication: Debezium simplifies data integration by providing a reliable and efficient way to replicate data changes across different systems. It allows organizations to easily integrate and synchronize data between legacy systems, modern applications, and cloud-based services. This is particularly valuable in scenarios involving hybrid cloud architectures or when migrating from one database platform to another.

  6. Audit Trail and Compliance: Debezium’s ability to capture every data manipulation operation in a database’s log makes it an ideal solution for generating an audit trail. Organizations can use Debezium to track and record all changes made to critical data, ensuring compliance with regulations and providing a reliable historical record of data modifications.

Setting Up PostgreSQL, Kafka, and Debezium Using Docker

To simplify the setup, Docker and Docker Compose are used. This allows all required services to run together without manual installation.

Before starting, make sure Docker and Docker Compose are available on your system.
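A quick way to confirm that both tools are installed is to check their versions.

docker --version
docker-compose --version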

Clone the repository that contains the Docker Compose configuration for PostgreSQL, Kafka, ZooKeeper, and Debezium.

Use this repository link in your browser or Git client.

 
https://github.com/sunil9837/Debezium-Setup.git
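From the command line, the clone looks like this:

git clone https://github.com/sunil9837/Debezium-Setup.git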

After cloning the repository, navigate into the project directory.

 
cd Debezium-Setup

Bring up all required containers in detached mode using Docker Compose.

 
docker-compose up -d

Once the containers are running, PostgreSQL, Kafka, ZooKeeper, and Kafka Connect will be available inside the Docker network.
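You can confirm that everything came up by listing the services; the exact service and container names depend on the compose file in the repository.

docker-compose ps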

Creating a Test Table in PostgreSQL

To validate streaming, a simple table is created in PostgreSQL.

First, access the PostgreSQL container shell.

 
docker exec -it ubuntu_db_1 bash

Log in to the PostgreSQL database.

 
psql -U postgres -d postgres

Create a table for testing.

 
CREATE TABLE transaction ( name VARCHAR(100), age INTEGER );

This table will be monitored by Debezium for real time changes.
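Debezium’s PostgreSQL connector relies on logical decoding, so the database must run with wal_level set to logical. The PostgreSQL images used in typical Debezium demo setups usually configure this already, but it can be verified from the same psql session.

SHOW wal_level;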

Activating the Debezium Connector

Debezium connectors are created by sending a configuration request to Kafka Connect. The configuration is stored in a JSON file inside the repository.
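The exact contents of debezium.json come from the cloned repository, but a minimal PostgreSQL connector configuration generally looks along these lines. The connector name, hostname, and credentials below are illustrative assumptions, and older Debezium releases use database.server.name instead of topic.prefix for the emp prefix.

{
  "name": "transaction-connector",
  "config": {
    "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
    "database.hostname": "db",
    "database.port": "5432",
    "database.user": "postgres",
    "database.password": "postgres",
    "database.dbname": "postgres",
    "topic.prefix": "emp",
    "table.include.list": "public.transaction"
  }
}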

The request is sent to the Kafka Connect REST endpoint using an HTTP client command.

 
curl -i -X POST \
  -H "Accept: application/json" \
  -H "Content-Type: application/json" \
  http://localhost:8083/connectors/ \
  --data "@debezium.json"

If the configuration is correct, Kafka Connect responds with a success message confirming that the connector has been registered. From this point onward, Debezium starts reading changes from PostgreSQL.
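To double-check the registration, the Kafka Connect REST API can also be queried: the first command lists all registered connectors, and the second (using whatever connector name is defined in debezium.json) reports its status.

curl -s http://localhost:8083/connectors/

curl -s http://localhost:8083/connectors/<connector-name>/status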

Verifying Kafka Topics

Kafka automatically creates a topic for each monitored table. To verify this, list all topics in the Kafka cluster.

 
docker exec -it \
  $(docker ps | grep ubuntu_kafka_1 | awk '{print $1}') \
  /kafka/bin/kafka-topics.sh \
  --bootstrap-server localhost:9092 --list

You should see a topic corresponding to the PostgreSQL table created earlier.

Monitoring Real Time Events Using Kafka Consumer

Kafka provides a console consumer utility that allows you to read messages from a topic in real time. This helps verify whether change events are flowing correctly.

Start the Kafka console consumer for the table topic.

 
docker exec -it \
  $(docker ps | grep ubuntu_kafka_1 | awk '{print $1}') \
  /kafka/bin/kafka-console-consumer.sh \
  --bootstrap-server localhost:9092 \
  --topic emp.public.transaction

If you want to read all events from the beginning of the topic, you can include the --from-beginning option.
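For example, the same consumer command with the flag added:

docker exec -it \
  $(docker ps | grep ubuntu_kafka_1 | awk '{print $1}') \
  /kafka/bin/kafka-console-consumer.sh \
  --bootstrap-server localhost:9092 \
  --topic emp.public.transaction \
  --from-beginning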

Testing the End-to-End Streaming

Insert a record into the PostgreSQL table.

 
INSERT INTO transaction (name, age) VALUES ('Opstree', 30);

As soon as the record is inserted, a new event appears in the Kafka console consumer. The message includes the new column values along with metadata such as the source database, schema, table, and timestamps.

This confirms that PostgreSQL changes are successfully streaming to Kafka through Debezium.

Conclusion

Debezium provides a robust and production-ready solution for implementing change data capture. By reading database transaction logs and streaming events through Apache Kafka, it enables real-time data pipelines with minimal latency.

This approach is well suited for microservices communication, analytics platforms, data synchronization, and compliance auditing. As organizations continue to adopt event-driven architectures, Debezium remains a key building block for real-time systems.

Reference:

https://debezium.io/documentation/reference/stable/tutorial.html

https://debezium.io/documentation/reference/stable/architecture.html

https://www.infoq.com/presentations/data-streaming-kafka-debezium/

Blog Pundits: Deepak Gupta, Naveen Verma and Sandeep Rawat

OpsTree is an End-to-End DevOps Solution Provider.

Connect with Us