Managing Application Logs

A major issue people face in managing a big system is log file management. In our setup we were primarily facing two issues:
1.) We had around 10-15 different applications, and it was messy to track the logs as you had to log in to each of those systems to view them.
2.) The other issue was the cleaning up of log files.
The resolution for the second issue was quite easy; one solution is to write a script that deletes log files older than, say, n days and add that script to crontab to execute at some frequency, say daily. This approach had an issue: with the addition of a new system you have to do this setup every time. As a one-time solution to this problem we created a job in our CI system (Jenkins) which can be configured to run at some frequency & then reads the details of each machine and the location of the log files that need to be cleaned. The second approach gave us the flexibility to manage the cleaning of log files from a single place.
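A minimal sketch of what that cleanup job does, assuming the machine/log-directory details are supplied as a simple list (the paths, the n-days threshold, & the crontab line below are placeholders, not our actual configuration):

```python
import os
import time

# Hypothetical inputs; in our setup the Jenkins job reads these
# details (machine, log file locations) from its configuration.
LOG_DIRS = ["/var/log/app1", "/var/log/app2"]
MAX_AGE_DAYS = 15  # delete files older than n days

def clean_old_logs(directory, max_age_days):
    """Delete files in `directory` last modified more than max_age_days ago."""
    cutoff = time.time() - max_age_days * 86400
    for name in os.listdir(directory):
        path = os.path.join(directory, name)
        if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
            os.remove(path)
            print("deleted", path)

if __name__ == "__main__":
    # the standalone alternative: a crontab entry such as
    #   0 2 * * * python /opt/scripts/clean_logs.py
    for log_dir in LOG_DIRS:
        clean_old_logs(log_dir, MAX_AGE_DAYS)
```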

For the first issue we obviously had to look for a tool, & the first Google hit 🙂 suggested log.io, which seemed to meet all our requirements. The one-line definition goes like this: log.io is a real-time log monitoring tool through which you can monitor multiple log files in a single browser window.
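To give an idea of how the multiple-files-in-one-window part works: every machine runs a small harvester process that ships the chosen files to the central log.io server. In the 0.3.x version we set up, the harvester configuration looked roughly like this; treat the node name, stream names, paths, & host as placeholders:

```javascript
/* ~/.log.io/harvester.conf -- illustrative sketch, not our exact config */
exports.config = {
  nodeName: "app_server_1",            // how this machine appears in the UI
  logStreams: {
    app: ["/var/log/app1/app.log"],    // a stream can watch several files
    apache: [
      "/var/log/httpd/access_log",
      "/var/log/httpd/error_log"
    ]
  },
  server: {
    host: "logio.example.com",         // machine running the log.io server
    port: 28777                        // default harvester port
  }
};
```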

I referred to the link given below to configure log.io:
http://linuxdo.blogspot.in/2012/05/install-logio-on-centos.html

I’m not going into the details of setting up log.io or how it works, but if you have any confusion you can leave a comment.

For your reference I’m attaching an image of the log.io instance we are using.

So happy log tracking 🙂

Build & Release Challenges : Manual DB Updates Part 2


This blog was supposed to be about the new system I thought of building to solve the problem I discussed in my previous blog. Well, to your disappointment, this blog will not be about that; the reason is that the scope of the problem changed. In this blog I’ll discuss the new scope, how the discussion about it moved forward, and the current state, which means that I’m still not able to solve this problem & suggestions are welcome :).

I’ll again state the problem, which was simple enough: “database updates were not automated in non-prod environments as the same db scripts were modified during development”. You can refer to the previous blog for more details about this problem. To solve it I came up with an incremental db update approach; as per this approach, all new modifications are done as a new sql update, which means that if you had a file 1.sql and you need to modify it, a new file 1′.sql should be committed. This way the system doesn’t have to track changes inside files; it just has to maintain which files have already been executed, find the new files that need to be executed, and execute only those. This solution can work very well in a normal setup; in fact, in my last assignment I used exactly this approach to automate db updates across all environments.
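A minimal sketch of that bookkeeping, assuming the list of executed scripts lives in a table in the same database (the names, paths, & the sqlite3 driver below are stand-ins for illustration, not the actual system):

```python
import os
import sqlite3  # stand-in for the real database driver

SCRIPTS_DIR = "db/scripts"  # hypothetical location of 1.sql, 2.sql, ...

def run_new_scripts(conn):
    """Execute only those .sql files that have never been executed before."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS executed_scripts (name TEXT PRIMARY KEY)"
    )
    done = {row[0] for row in conn.execute("SELECT name FROM executed_scripts")}
    # a real system would order by an explicit sequence number,
    # not by plain file-name sort
    for name in sorted(os.listdir(SCRIPTS_DIR)):
        if not name.endswith(".sql") or name in done:
            continue  # not a script, or already executed
        with open(os.path.join(SCRIPTS_DIR, name)) as f:
            conn.executescript(f.read())  # sqlite3-specific call
        conn.execute("INSERT INTO executed_scripts (name) VALUES (?)", (name,))
        conn.commit()
        print("executed", name)

if __name__ == "__main__":
    run_new_scripts(sqlite3.connect("app.db"))
```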

The incremental db updates can’t be run in our current setup, the reason being that we have a very huge database, of the order of 100 GB. You can easily imagine that we can’t afford to run the same script with slight modifications, i.e. a first script adding a column of size 20, then another script changing its size to 40, and finally one renaming it to some other name. Instead, a single script should be created by consolidating all these scripts.
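To make that concrete, here is the column example above written out with hypothetical table & column names (MySQL-style syntax assumed), first as the incremental scripts and then as the single consolidated script we would rather run on a ~100 GB database:

```sql
-- Committed incrementally during development:
ALTER TABLE orders ADD COLUMN note VARCHAR(20);            -- 1.sql
ALTER TABLE orders MODIFY COLUMN note VARCHAR(40);         -- 1'.sql
ALTER TABLE orders CHANGE COLUMN note remark VARCHAR(40);  -- 1''.sql

-- Consolidated into a single script for the release:
ALTER TABLE orders ADD COLUMN remark VARCHAR(40);
```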

The first solution that came to my mind after this new issue emerged was that for non-prod deployments we should already have the database dump of the previous release, preferably a cold dump. During deployment three steps would be performed: first load the previous release’s db dump, then run all the consolidated scripts, and finally do the code deployment. Initially this solution looked good enough, but the QA team raised a concern: loading the previous release’s dump meant that all the test data they had created on the QA server would be lost, and I was back at square one :).
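For reference, the three steps as I pictured them, driving the mysql client from a small wrapper (the database name, file names, & deploy command are placeholders):

```python
import subprocess

DB = "appdb"  # hypothetical database name

def deploy(previous_dump, consolidated_scripts, code_deploy_cmd):
    # Step 1: load the previous release's (cold) dump
    with open(previous_dump) as dump:
        subprocess.run(["mysql", DB], stdin=dump, check=True)
    # Step 2: run the consolidated scripts of the release
    for script in consolidated_scripts:
        with open(script) as sql:
            subprocess.run(["mysql", DB], stdin=sql, check=True)
    # Step 3: do the code deployment
    subprocess.run(code_deploy_cmd, check=True)

deploy("dumps/release_1.2.sql",
       ["scripts/consolidated_1.3.sql"],
       ["./deploy_app.sh", "1.3"])
```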

Another solution that could be implemented was to have a rollback script for each & every script committed. This convention has the advantage of supporting incremental updates, i.e. whenever a script is updated, first its corresponding rollback script is executed & then the updated script is executed. This solution has its own challenges: the first is that it’s really difficult to write a rollback script for each & every script; another is that you have to carefully manage the script files so that there is no tight coupling between them, as executing the rollback of one script may impact another script. A third issue, although less significant, is that you have to deal with data loss.
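An illustration of the convention with hypothetical scripts; note how the rollback itself causes the data loss mentioned above:

```sql
-- 1.sql
ALTER TABLE orders ADD COLUMN note VARCHAR(20);

-- 1_rollback.sql
ALTER TABLE orders DROP COLUMN note;

-- When 1.sql is later modified to use VARCHAR(40), the update runs as:
--   1) execute 1_rollback.sql  (drops the column & whatever data it held)
--   2) execute the new 1.sql   (recreates the column with the new size)
```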

We could also have used a hybrid approach, that is, a combination of incremental & full db updates. Till the QA phase we can use the incremental db update mechanism, in which all new script modifications are done as new scripts and executed incrementally, but for staging & production deployments db updates will be done as a full update, which requires human intervention, i.e. consolidation of the scripts. This approach had 2 challenges: the first & foremost was that it involved manual intervention, & the second major issue was that we were duplicating the db scripts.

So these were the few approaches we thought of, & none was able to solve our problem completely, so we are still struggling with fully automating the db update process. Again, any suggestions are most welcome 🙂

Build & Release Challenges : Manual DB Updates

The first problem that I’m gonna discuss is manual db updates. In our current application we do have automated DB update execution in the production environment, but not in the rest of the environments, i.e. dev, qa, stage, performance test, etc.

The process we use for automated script execution in the production environment is that we create a release folder; this release folder contains all the sql scripts for the release along with a meta file. The release meta file contains the list of all the scripts that need to be executed; the current system reads this meta file & executes all the scripts of the release. This process is fair enough for the production system since a release is deployed there only once. In production we don’t have to track whether a script got executed or not, i.e. all the script execution is treated as atomic: either all the scripts are executed or none is.
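Roughly what that mechanism looks like; the layout & names below are illustrative, not our exact system. There is deliberately no bookkeeping: the release is deployed to production exactly once, so the runner simply executes everything the meta file lists, in order:

```python
import os
import sqlite3  # stand-in for the real database driver

def run_release(conn, release_dir):
    """Execute every script listed in the release meta file, in order.

    Illustrative release folder layout:
        release-1.2/
            meta.txt       <- one script file name per line, execution order
            001_schema.sql
            002_data.sql
    """
    with open(os.path.join(release_dir, "meta.txt")) as meta:
        scripts = [line.strip() for line in meta if line.strip()]
    for name in scripts:
        with open(os.path.join(release_dir, name)) as f:
            conn.executescript(f.read())  # sqlite3-specific call
        print("executed", name)

run_release(sqlite3.connect("app.db"), "release-1.2")
```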

The drawback of atomic execution is the reason this approach cannot be applied to the rest of the environments, since db updates will always be incremental there. In all environments apart from production, a release will be deployed multiple times; with each release new db scripts can be added to the system, and only those new scripts need to be executed.

The new system that I’m trying to develop will have incremental db update capability, with the following characteristics (a first cut of the tracking table behind them is sketched after the list):

  • It should be able to keep track of the script name for later reference.
  • It should store the release mapping, i.e. the release to which each script belongs.
  • It should store the sequence of the script to enforce the order of execution.
  • The system should also maintain whether the script has already been executed or not.
  • The system should be able to handle error scenarios, i.e. if a script execution fails a corrective action should be taken by the system.
  • It should be extensible enough that various kinds of reports can be generated from it.
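The sketch mentioned above; the table & column names are my own, not a finalized schema:

```sql
CREATE TABLE script_execution_log (
    script_name  VARCHAR(255) NOT NULL,  -- script name, kept for later reference
    release_name VARCHAR(64)  NOT NULL,  -- release to which the script belongs
    sequence_no  INT          NOT NULL,  -- enforces the order of execution
    executed     BOOLEAN      NOT NULL DEFAULT FALSE,
    status       VARCHAR(32),            -- e.g. SUCCESS / FAILED, for error handling
    executed_at  TIMESTAMP NULL,         -- raw material for reports
    PRIMARY KEY (release_name, script_name)
);
```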

In the next blog I’ll be talking about the actual system & how it is built.
