Cassandra to ScyllaDB Migration Without Any Downtime

Enterprises depend on many things to run their business successfully, and one of the most critical is the data store that backs their applications and analytics platforms. To keep the business healthy, companies need reliable databases, and they choose them according to their technology budget and in-house expertise.

While consulting with an enterprise, we found they were running Cassandra to support their NoSQL data-store operations. Cassandra had been working well, but as the company grew, the NoSQL store became harder to support. They experienced cascading latencies from Cassandra hot partitions as traffic spiked during events and campaigns. Garbage collection was also becoming a bottleneck, heavily impacting database performance and, in turn, application performance. Another significant reason was that they wanted to stop managing the database themselves and were looking for an expert company that could manage it for them without significant application changes.

Why ScyllaDB?

While exploring different database solutions, ScyllaDB caught our attention. We were curious about it and ran multiple proofs of concept on ScyllaDB before deciding it would be the right choice for our environment and scale. A few primary reasons for our decision were:

Continue reading “Cassandra to ScyllaDB Migration Without Any Downtime”

Continuation Of Redis Throughput and Management

As promised in our previous blog on Redis performance tuning and best practices, we have continued exploring best practices and optimizations for Redis as a cache and database management system. This blog shares the new findings and optimizations we have learned since that previous post.

We know that Redis is a fast and flexible data store that can fulfill a variety of cache and database requirements. But if a system is not configured and tested correctly, even a fast and reliable one can quickly hit its limits. Here we will talk about the different needs Redis serves as a system and how we can optimize it further to use it to its full potential.

While consulting and collaborating with Redis architects from Redis Labs, I learned different ways of designing a performant, highly available, and secure Redis architecture. Based on that learning, I would categorize the work into these dimensions:

  • Right-sizing and deployment of the Redis setup.
  • Proxy and connection pooling.
  • Using the correct data type for storing keys (see the sketch after this list).
  • Sharding and replication strategy.
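
To make the connection-pooling and data-type points more concrete, here is a minimal sketch using the redis-py client. The host, pool size, and the user:1001 key are illustrative assumptions, not values from any specific setup.

```python
import redis

# Reuse one connection pool across the application instead of opening a new
# connection per request; max_connections here is an illustrative assumption.
pool = redis.ConnectionPool(host="localhost", port=6379, db=0, max_connections=50)
r = redis.Redis(connection_pool=pool, decode_responses=True)

# Prefer a single hash per entity over many flat string keys: one key to
# expire, fewer round trips, and a smaller keyspace.
r.hset("user:1001", mapping={"name": "Asha", "plan": "pro", "visits": 1})
r.hincrby("user:1001", "visits", 1)
r.expire("user:1001", 3600)

print(r.hgetall("user:1001"))
```
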
Continue reading “Continuation Of Redis Throughput and Management”

All Redis Setup Under 7 Minutes!

Redis is a popular open-source in-memory database that supports multiple data structures such as strings, hashes, lists, and sets. But as with other tools, a standalone Redis can only be scaled up to a point. That is why Redis offers a cluster mode, in which we scale Redis horizontally by adding nodes and distributing the data among them.
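
As a quick illustration of cluster mode, here is a minimal sketch assuming a recent redis-py client and a local cluster; the host, port, and key names are placeholders.

```python
from redis.cluster import RedisCluster

# Connect through any one node; the client discovers the rest of the cluster
# topology. The host and port here are illustrative assumptions.
rc = RedisCluster(host="127.0.0.1", port=7000, decode_responses=True)

# Keys are routed to nodes by hash slot; a hash tag like {user:1001} keeps
# related keys on the same node so multi-key operations remain valid.
rc.set("{user:1001}:profile", "asha")
rc.set("{user:1001}:session", "abc123")
print(rc.get("{user:1001}:profile"))
```
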

Generally, we categorize the Redis setup into three different types:

  • Standalone
  • Leader-Follower (Replication)
  • Leader-Leader (Sharding)

Standalone Setup

In a standalone setup, the complexity is minimal, but we cannot scale the solution as the data grows. Failover and high availability are also not supported.
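
The sketch below shows what that means for an application talking to a standalone instance; the host, port, and key are illustrative assumptions.

```python
import redis

# A standalone setup is a single Redis process: the application talks to one
# endpoint, and there is no built-in failover if that node goes down.
r = redis.Redis(host="127.0.0.1", port=6379, socket_timeout=2, decode_responses=True)

try:
    r.ping()                      # simple liveness check against the only node
    r.set("greeting", "hello")
    print(r.get("greeting"))
except redis.exceptions.ConnectionError:
    # With a single node there is nothing to fail over to; the application
    # has to handle the outage itself.
    print("standalone Redis is unreachable")
```
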

Continue reading “All Redis Setup Under 7 Minutes!”

Kubernetes CSI: Container Storage Interface – Part 1

There are many ways to categorize applications, but we usually group them into two major types: stateless and stateful applications.

For a clearer perspective, API-based applications are generally stateless, while databases are stateful. In simple terms, a stateless application does not save or persist client data between requests, whereas a stateful application saves data about each client and uses it for subsequent requests.
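
As a rough, toy illustration of the difference (a sketch for this post, not code from any real system):

```python
# The stateless handler derives its response purely from the request, so any
# replica can serve any call.
def stateless_handler(request: dict) -> dict:
    return {"result": request["a"] + request["b"]}

# The stateful handler remembers each client, so requests from the same client
# must reach an instance that holds (or can load) this state.
sessions: dict = {}

def stateful_handler(client_id: str, request: dict) -> dict:
    sessions[client_id] = sessions.get(client_id, 0) + 1
    return {"result": request["a"] + request["b"],
            "requests_seen": sessions[client_id]}

print(stateless_handler({"a": 1, "b": 2}))
print(stateful_handler("client-42", {"a": 1, "b": 2}))
```
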

In the old days, before containers and container orchestrators existed, there was one common way to manage both types of applications: servers. For example, API and database applications were both hosted on servers, just with different configurations depending on their resource requirements.

Continue reading “Kubernetes CSI: Container Storage Interface – Part 1”

AWS Gateway LoadBalancer: A Load Balancer that we deserve

Nowadays, load balancing is a basic requirement for application systems to perform optimally while meeting important goals such as scalability and high availability. Every cloud provides LBaaS (Load Balancing as a Service) so that consumers don't have to worry about setting up and managing load balancers themselves.

But clouds do not offer a single type of load balancer for every use case, because different use cases require different types of load balancers. For example, Layer 4 and Layer 7 traffic are handled by different load balancers.

Recently, AWS added a new member to its load balancer family and named it the "Gateway Load Balancer". The Gateway Load Balancer is a load-balancing service provided by AWS for sending traffic to appliances, applications, firewalls, and other targets that are not part of the current VPC.
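
To give a feel for how this looks in practice, here is a minimal boto3 sketch of creating a Gateway Load Balancer. The names, subnet, and VPC IDs are placeholders, and a real deployment would also need a Gateway Load Balancer endpoint plus route-table entries, which are omitted here.

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

# Create the Gateway Load Balancer itself (Type="gateway", not ALB/NLB).
glb = elbv2.create_load_balancer(
    Name="appliance-gwlb",
    Type="gateway",
    Subnets=["subnet-0123456789abcdef0"],  # subnet holding the appliance fleet
)
glb_arn = glb["LoadBalancers"][0]["LoadBalancerArn"]

# GWLB target groups use the GENEVE protocol on port 6081 to reach appliances.
tg = elbv2.create_target_group(
    Name="appliance-targets",
    Protocol="GENEVE",
    Port=6081,
    VpcId="vpc-0123456789abcdef0",
    TargetType="instance",
)

# A GWLB listener has no protocol/port of its own; it forwards all traffic
# to the appliance target group.
elbv2.create_listener(
    LoadBalancerArn=glb_arn,
    DefaultActions=[{"Type": "forward",
                     "TargetGroupArn": tg["TargetGroups"][0]["TargetGroupArn"]}],
)
```
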

Continue reading “AWS Gateway LoadBalancer: A Load Balancer that we deserve”