Several factors affect database performance, and one of the most critical is how efficiently your application manages database connections. When multiple clients connect to PostgreSQL simultaneously, creating a new connection for each request is resource-intensive and slow. This is where connection pooling comes into play: existing connections are reused instead of a new one being opened every time, reducing overhead and improving performance. In this blog, we’ll explore PgBouncer, a lightweight PostgreSQL connection pooler, and how to set it up for your environment. Continue reading “Complete Guide to Fixing PostgreSQL Performance with PgBouncer Connection Pooling”
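As a quick preview, PgBouncer’s behavior is driven by a single config file. Here is a minimal pgbouncer.ini sketch (the database name, paths, and pool sizes are hypothetical; tune them for your workload):

```ini
[databases]
; Clients connect to PgBouncer, which proxies them to Postgres on 5432
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = *
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
; transaction pooling returns a server connection to the pool
; as soon as each transaction finishes
pool_mode = transaction
max_client_conn = 1000
default_pool_size = 20
```

With something like this in place, applications point at port 6432 instead of 5432, and PgBouncer reuses a small pool of server connections behind the scenes.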
Tag: database migration
Cassandra to ScyllaDB Migration Without Any Downtime
Enterprises need multiple things to run their business successfully. One of the most critical is the data store they use for the data that applications and analytics platforms consume. To keep the business healthy, companies need reliable databases, and they choose them according to their tech budget and expertise.
While consulting with an enterprise, we found they ran Cassandra to support their NoSQL data-store operations. Cassandra had been working really well, but as the company grew, they struggled to support the NoSQL store. They experienced cascading latencies from Cassandra hot partitions as traffic increased with events and campaigns. Garbage collection was also becoming a bottleneck, heavily impacting database performance and, in turn, application performance. Another significant reason was that they wanted to avoid managing the database themselves and were looking for an expert company that could manage it for them without significant application changes.
Why ScyllaDB?
While exploring different solutions, ScyllaDB caught our attention. We were curious about it and ran multiple proofs of concept on ScyllaDB. We finally decided this would be the right choice for our environment and scale. A few primary reasons for our decision were:
Continue reading “Cassandra to ScyllaDB Migration Without Any Downtime”
Apache Cassandra Migration: 3.x to 4.x Ep: 2 DC and DR Setup
Well, in my previous blog, we learned Cassandra’s basics. If you have not read it yet, you should go through it; it covers the fundamentals of Cassandra that will be useful in your daily operations on the database.
So now we will deep-dive into Cassandra’s DC/DR Setup.

A DC/DR setup is necessary in a production environment, where you never know when an issue can occur. You need an immediate fallback when your cluster is down, and you should always have another cluster ready to respond.
Cassandra is a database, and we want a database to remain up in any and every situation to avoid downtime for our applications. A disaster-recovery setup for your databases is just as necessary as the one you maintain for your applications. So let’s get started with this super easy approach, which will take only a few minutes to get your DR setup ready.
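As a preview of the approach, Cassandra drives multi-DC replication through the keyspace’s replication strategy. A minimal sketch (the keyspace and data center names are hypothetical; the DC names must match what your snitch reports):

```sql
-- Replicate the keyspace to both the primary and the DR data center,
-- keeping three copies of each row in each DC.
ALTER KEYSPACE my_keyspace
WITH replication = {
  'class': 'NetworkTopologyStrategy',
  'dc_primary': 3,
  'dc_dr': 3
};
```

After altering the keyspace, existing data is streamed to the new data center by running `nodetool rebuild -- dc_primary` on each node in the DR DC.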
Continue reading “Apache Cassandra Migration: 3.x to 4.x Ep: 2 DC and DR Setup”
Apache Cassandra Migration: 3.x to 4.x Episode: 1 Basics

Well, I am a big fan of Apache tools; after Kafka and ZooKeeper, Cassandra is the third one I have worked with, and my first database. My colleague and I have previously posted a blog on Kafka too. Please give it a read as well; you will find it useful.
So, while working casually like any other day, I got a call from my manager about a Cassandra migration, and that too within 14 days. Frankly speaking, I was afraid, because I had zero knowledge of the Cassandra database. On top of that, I needed to upgrade a running cluster.
So I accepted this challenge and completed it with no downtime. Let’s see how.
I will start my journey with learning Cassandra in this blog, cover Cassandra’s DC/DR setup in the next, and the migration in the last blog.
Continue reading “Apache Cassandra Migration: 3.x to 4.x Episode: 1 Basics”
Migrate your data between various Databases
Data Migration Service
We will discuss AWS Data Migration Service (DMS) in this article. It migrates your data in real-time, and that too without the help of any Database Administrator. It can handle different kinds of data across databases like MySQL, PostgreSQL, and MongoDB. The service supports homogeneous migrations such as Oracle to Oracle, and also heterogeneous migrations between different database platforms.
Let’s discuss some important features of AWS DMS:
- Migrates the database securely, quickly and accurately.
- No downtime required; it works as a schema converter as well.
- Supports various types of databases like MySQL, MongoDB, PostgreSQL, etc.
- Migrates real-time data and also synchronizes ongoing changes.
- Data validation is available to verify the database.
- Compatible with a wide range of database platforms like RDS, Google Cloud SQL, on-premises, etc.
- Inexpensive (pricing is based on the compute resources used during the migration process).
Note: We’ve performed the migration from AWS RDS to GCP Cloud SQL; you can choose the database source and destination as per your requirement.
- Create replication instance:
A replication instance initiates the connection between the source and target databases, transfers the data, and caches any changes that occur on the source database during the initial data load. Use the fields below to configure the parameters of your new replication instance, including network and security information, encryption details, and the instance class as per your requirement. After completing all mandatory fields, click Next, and you will be redirected to the “Replication Instances” tab. Grab a coffee quickly while the instance is getting ready.
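If you prefer the command line over the console, the same step can be scripted with the AWS CLI. A minimal sketch (the identifier, instance class, and storage size are hypothetical; adjust them to your workload):

```bash
# Create a small DMS replication instance. We make it publicly
# accessible here so it can reach the external Cloud SQL target
# over the internet (an assumption about your network setup).
aws dms create-replication-instance \
  --replication-instance-identifier rds-to-cloudsql-instance \
  --replication-instance-class dms.t3.medium \
  --allocated-storage 50 \
  --publicly-accessible
```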
Hope you are ready with your coffee, because the instance is ready now.
- Now we will create two endpoints, “Source” and “Target”.
2.1 Create Source Endpoint:
Click the “Run test” tab after completing all fields, and make sure your replication instance IP is whitelisted under the source DB’s security group.
2.2 Create Target Endpoint:
Click the “Run test” tab again after completing all fields, and make sure your replication instance IP is whitelisted under the target DB’s authorization settings.
Now we have the Replication Instance, Source Endpoint, and Target Endpoint ready.
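For reference, both endpoints can also be created from the CLI. A sketch with hypothetical hostnames and credentials (MySQL is assumed here; use the engine matching your databases):

```bash
# Source endpoint: the AWS RDS database we are migrating from
aws dms create-endpoint \
  --endpoint-identifier rds-source \
  --endpoint-type source \
  --engine-name mysql \
  --server-name mydb.abc123.us-east-1.rds.amazonaws.com \
  --port 3306 \
  --username admin \
  --password 'REPLACE_ME'

# Target endpoint: the GCP Cloud SQL database we are migrating to
aws dms create-endpoint \
  --endpoint-identifier cloudsql-target \
  --endpoint-type target \
  --engine-name mysql \
  --server-name 203.0.113.10 \
  --port 3306 \
  --username admin \
  --password 'REPLACE_ME'
```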
- Finally, we’ll create a “Replication Task” to start the replication. Fill in the fields like:
- Task Name: any name
- Replication Instance: The instance we’ve created above
- Source Endpoint: The source database
- Target Endpoint: The target database
- Migration Type: Here I chose “Migrate existing data and replicate ongoing changes” because we needed ongoing changes.
After completing all the fields, create the task, and you will be redirected to the “Tasks” tab.
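The equivalent CLI call, again as a sketch (the ARNs and the table-mappings file are placeholders you would substitute from the earlier steps):

```bash
# Create the task; full-load-and-cdc migrates the existing data
# and then keeps replicating ongoing changes.
aws dms create-replication-task \
  --replication-task-identifier rds-to-cloudsql-task \
  --source-endpoint-arn <source-endpoint-arn> \
  --target-endpoint-arn <target-endpoint-arn> \
  --replication-instance-arn <replication-instance-arn> \
  --migration-type full-load-and-cdc \
  --table-mappings file://table-mappings.json
```

Here table-mappings.json is a small JSON document that selects which schemas and tables the task should migrate.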
