In a fast-moving, data-driven environment, applications are expected to respond instantly to changes happening inside databases. Batch-based systems struggle to meet this demand, especially when businesses rely on real-time dashboards, alerts, and event-driven workflows. This is where change data capture becomes an essential architectural component. Debezium provides a reliable way to stream database changes in real time and integrates seamlessly with Apache Kafka.
This article walks through the fundamentals of Debezium and demonstrates how PostgreSQL changes can be streamed to Kafka using Docker. The focus is on practical understanding with a working setup rather than just theory.
Understanding Change Data Capture
Change Data Capture, often referred to as CDC, is a mechanism that tracks inserts, updates, and deletes occurring in a database as they happen. Instead of repeatedly querying tables or running heavy batch jobs, CDC captures only the data that has changed.
This approach allows downstream systems to consume fresh data with minimal delay while keeping database load low. CDC is widely used in analytics platforms, event-driven microservices, and data replication pipelines.
What Debezium Brings to the Table
Debezium is an open-source CDC platform developed by Red Hat. It works by reading database transaction logs, which already record every data modification. By leveraging these logs, Debezium captures changes efficiently and reliably.
Debezium supports multiple databases such as PostgreSQL, MySQL, SQL Server, Oracle, and MongoDB. It publishes each change as a structured event into Kafka topics, making the data available for real-time processing.
How Debezium Works Behind the Scenes
Debezium uses a log-based CDC approach. Instead of polling database tables, it connects directly to the database log. Every insert, update, or delete operation is converted into a change event.
Each database has its own Debezium connector that understands how to read its transaction log. These connectors push standardized events to Kafka. Kafka then acts as a durable and scalable streaming backbone.
Each event includes details such as database name, table name, primary key, before and after values, and timestamps. This rich metadata makes the events suitable for analytics, auditing, and synchronization.
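To make this concrete, below is a trimmed, hypothetical illustration of the payload of a Debezium insert event; the exact envelope varies with connector version and serialization settings:

{
  "before": null,
  "after": { "id": 1, "name": "Alice" },
  "source": { "db": "postgres", "schema": "public", "table": "customers" },
  "op": "c",
  "ts_ms": 1700000000000
}

The op field encodes the operation (c for create, u for update, d for delete), while before and after carry the row state on either side of the change.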
Use Cases for Debezium
Microservices Architecture: Debezium plays a crucial role in event-driven microservices architectures, where each microservice can react to specific changes in the data. By consuming the change events, services can update their local view of the data or trigger further actions.
Data Synchronization: Debezium can be used to keep multiple databases in sync by replicating changes from one database to another in real-time. This is especially useful in scenarios where data needs to be replicated across geographically distributed systems or in cases where different databases serve specific purposes within an organization.
Stream Processing and Analytics: Debezium’s real-time change data capture capabilities make it an excellent choice for streaming data processing and analytics. By consuming the change events from Debezium, organizations can perform real-time analysis, monitoring, and aggregations on the data. This can be particularly beneficial for applications such as fraud detection, real-time dashboards, and personalized recommendations.
Data Warehousing and ETL (Extract, Transform, Load): Debezium can play a vital role in populating data warehouses or data lakes by capturing and transforming the change events into the desired format. It eliminates the need for batch processing or periodic data extraction, enabling near real-time data updates in analytical systems.
Data Integration and Replication: Debezium simplifies data integration by providing a reliable and efficient way to replicate data changes across different systems. It allows organizations to easily integrate and synchronize data between legacy systems, modern applications, and cloud-based services. This is particularly valuable in scenarios involving hybrid cloud architectures or when migrating from one database platform to another.
Audit Trail and Compliance: Debezium’s ability to capture every data manipulation operation in a database’s log makes it an ideal solution for generating an audit trail. Organizations can use Debezium to track and record all changes made to critical data, ensuring compliance with regulations and providing a reliable historical record of data modifications.
Setting Up PostgreSQL, Kafka, and Debezium Using Docker
To simplify the setup, Docker and Docker Compose are used. This allows all required services to run together without manual installation.
Before starting, make sure Docker and Docker Compose are available on your system.
Clone the repository that contains the Docker Compose configuration for PostgreSQL, Kafka, ZooKeeper, and Debezium.
git clone https://github.com/sunil9837/Debezium-Setup.git
After cloning the repository, navigate into the project directory.
cd Debezium-Setup
Bring up all required containers in detached mode using Docker Compose.
docker-compose up -d
Once the containers are running, PostgreSQL, Kafka, ZooKeeper, and Kafka Connect will be available inside the Docker network.
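For reference, a minimal Compose file for this stack, built on Debezium's official images, typically looks like the sketch below. Image versions, service names, and credentials here are assumptions; the repository's actual file may differ.

version: "2"
services:
  zookeeper:
    image: debezium/zookeeper:1.9
  kafka:
    image: debezium/kafka:1.9
    depends_on:
      - zookeeper
    environment:
      ZOOKEEPER_CONNECT: zookeeper:2181
  db:
    image: postgres:13
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
    # Logical decoding must be enabled for Debezium to read the WAL
    command: postgres -c wal_level=logical
  connect:
    image: debezium/connect:1.9
    ports:
      - "8083:8083"
    depends_on:
      - kafka
      - db
    environment:
      BOOTSTRAP_SERVERS: kafka:9092
      GROUP_ID: "1"
      CONFIG_STORAGE_TOPIC: connect_configs
      OFFSET_STORAGE_TOPIC: connect_offsets
      STATUS_STORAGE_TOPIC: connect_statuses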
Creating a Test Table in PostgreSQL
To validate streaming, a simple table is created in PostgreSQL.
First, access the PostgreSQL container shell.
docker exec -it ubuntu_db_1 bash
Log in to the PostgreSQL database.
psql -U postgres -d postgres
Create a table for testing.
CREATE TABLE transaction (
    id SERIAL PRIMARY KEY,  -- Debezium keys change events by the primary key
    name VARCHAR(100),
    age INTEGER
);
This table will be monitored by Debezium for real time changes.
Activating the Debezium Connector
Debezium connectors are created by sending a configuration request to Kafka Connect. The configuration is stored in a JSON file inside the repository.
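As an illustration, a minimal PostgreSQL connector configuration resembles the following sketch. The connector name, hostname, and credentials are assumptions based on the compose setup, and newer Debezium releases use topic.prefix in place of database.server.name; check the JSON file in the repository for the actual values.

{
  "name": "transaction-connector",
  "config": {
    "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
    "plugin.name": "pgoutput",
    "database.hostname": "db",
    "database.port": "5432",
    "database.user": "postgres",
    "database.password": "postgres",
    "database.dbname": "postgres",
    "database.server.name": "dbserver1"
  }
}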
The request is sent to the Kafka Connect REST endpoint using an HTTP client command.
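For example, assuming the configuration is saved as connector.json and Kafka Connect is exposed on port 8083:

curl -X POST -H "Content-Type: application/json" --data @connector.json http://localhost:8083/connectors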
If the configuration is correct, Kafka Connect responds with a success message confirming that the connector has been registered. From this point onward, Debezium starts reading changes from PostgreSQL.
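The registration can also be verified at any time through Kafka Connect's REST API, which lists registered connectors and reports their status (the connector name below comes from the sketch above):

curl http://localhost:8083/connectors
curl http://localhost:8083/connectors/transaction-connector/status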
Verifying Kafka Topics
Kafka automatically creates a topic for each monitored table. To verify this, list all topics in the Kafka cluster.
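Assuming the Kafka service follows the same container naming pattern as the database (giving a name such as ubuntu_kafka_1) and Debezium's Kafka image, where the CLI scripts live under /kafka/bin, topics can be listed with:

docker exec -it ubuntu_kafka_1 /kafka/bin/kafka-topics.sh --bootstrap-server localhost:9092 --list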
You should see a topic corresponding to the PostgreSQL table created earlier; with the configuration sketched above, it would be named dbserver1.public.transaction.
Monitoring Real-Time Events Using the Kafka Consumer
Kafka provides a console consumer utility that allows you to read messages from a topic in real time. This helps verify whether change events are flowing correctly.
Start the Kafka console consumer for the table topic.
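With the same naming assumptions as above, and the topic name derived from the connector's server name, schema, and table:

docker exec -it ubuntu_kafka_1 /kafka/bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic dbserver1.public.transaction --from-beginning

While the consumer is running, insert a test record from the psql session to generate a change event:

INSERT INTO transaction (name, age) VALUES ('Alice', 30);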
As soon as the record is inserted, a new event appears in the Kafka console consumer. The message includes the primary key, column values, and metadata such as timestamps.
This confirms that PostgreSQL changes are successfully streaming to Kafka through Debezium.
Conclusion
Debezium provides a robust and production-ready solution for implementing change data capture. By reading database transaction logs and streaming events through Apache Kafka, it enables real-time data pipelines with minimal latency.
This approach is well suited for microservices communication, analytics platforms, data synchronization, and compliance auditing. As organizations continue to adopt event-driven architectures, Debezium remains a key building block for real-time systems.