Initial thoughts for an automation testing framework/utility

My first exposure to Selenium was in 2010/2011 & I was quite impressed with it; the way you can use Selenium for testing a web application was totally awesome. At that time I was working with Xebia, and our team was working on a website revamp for a Dutch travel company. We were using Selenium for all the regression & functional testing of the website, and around 80% of the website testing was done by Selenium alone.

One of the challenges with Selenium is that you have to write code for each test scenario, & if you don’t manage your test scenarios/cases effectively, the management of Selenium test cases becomes a task in itself. Even at that time we tried to make maximum use of Java to keep the Selenium test cases as structured & object oriented as possible so that they could be extended & managed easily. I always had a desire to do some improvement in that area so that managing Selenium test cases becomes easier.

In my current company most of the testing is done manually. Since I had prior experience with Selenium & had experienced the power it brings to your testing, I was pretty determined to bring the Selenium advantage to our company. Of late an automation testing team was set up in our company as well, working on leveraging the power of Selenium in testing, but it hit the same problem: you have to write a lot of code. The other challenge the automation team was facing was that the UI of the site was changing very frequently, so whatever work they did was back to zero after a few iterations.

Last week, along with my team, I started doing some head banging to see if we could do something out of the box, and during a normal discussion with my team members one idea struck us. The manual QA team of our company is very strong & they know the whole application inside out, but they have so much work assigned to them that they can’t spend their time on Selenium. We wanted to club together the knowledge of our manual testing team & the power of Selenium.

As a POC we built a very simple utility that reads a meta information file and executes the commands listed in that file. As an example, if they want to open a page, one of the lines of the meta file will contain a command like “open url”; similarly, if they have to click a button, the command will be something like “click” followed by the identifier of the button. This utility was doing exactly what we wanted it to do. We are still in the POC phase, where we are trying to include as many commands as possible.
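Just to give an idea, a meta file for a simple login scenario could look something like the below (the command names & syntax here are only for illustration, the actual commands are still evolving as part of the POC):

    open http://example.com/login
    type username_field testuser
    type password_field secret123
    click login_button
    verify text Welcome testuser

The idea is that the utility reads this file line by line & maps each command to the corresponding Selenium action, so the manual QA team only has to write these simple commands & never has to touch the Selenium code itself.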

Let me know your thoughts on this approach; suggestions are most welcome.

Automation tips and tricks

As promised I’m back with a summary of the cool stuff that I’ve done with my team in the Build & Release domain to help us deal with day to day problems in an efficient & effective way. As I said, this month was about creating tools/utilities that sound very simple, but their overall impact on the productivity & agility of the build release teams and tech verticals was awesome :).

Automated deployment of Artifacts : If you have ever worked with a set of Maven based projects that are interdependent on each other, one of the major problems you will face in such a setup is keeping the latest dependencies in your local system. Here I’m assuming two things: you would be using a Maven repo to host the artifacts, & the dependencies would be SNAPSHOT dependencies if there is active development going on in the dependencies as well. Now the manual way of making sure that the Maven repo always has the latest SNAPSHOT version is that every time somebody changes the code-base he/she manually deploys that artifact to the Maven repo. What we have done is that for each & every project we have created a Jenkins job that checks whether code has been checked in for a specific component & if so, that component’s SNAPSHOT version gets deployed to the Maven repo. The impact of these utility jobs was huge, as now the developers don’t have to focus on deploying their code to the Maven repo & keeping track of who last committed the code is also not needed.
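In essence each of these Jenkins jobs just polls the SCM for its component & runs something along the following lines (a minimal sketch; it assumes the Maven repo is already configured in the project’s distributionManagement section of the pom.xml):

    #!/bin/bash
    # Hypothetical build step of the per-component Jenkins job; Jenkins'
    # SCM polling triggers it only when new code is checked in.
    set -e

    # Build, test & push the SNAPSHOT artifact to the Maven repo so that
    # everybody else automatically picks up the latest version.
    mvn clean deploy

With this in place, the “latest SNAPSHOT in the repo” guarantee comes from Jenkins rather than from developers remembering to deploy.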

Log Parser Utility : We have made further improvements in our event based log analyzer utility. Now we also have a simple log parser utility through which we can parse the logs of a specific component & segregate the log lines as ERROR/WARN/INFO. Most importantly it is integrated with Jenkins, so you can go to Jenkins and select a component whose log needs to be analyzed. Once the analysis is finished, the logs are segregated as per our configuration (in our case ERROR/WARN/INFO); after that these categories are shown in the left bar with all the various instances of each category, and the user can click on those links to go exactly to the location where that information is present in the logs.
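The segregation step itself is little more than a few greps; a minimal sketch of the idea (the file names, levels & output layout here are illustrative, the real utility is driven by configuration & wired into Jenkins):

    #!/bin/bash
    # Hypothetical core of the log parser: split one component's log
    # into per-level files that the Jenkins report then links to.
    LOG_FILE="$1"                      # e.g. logs/component.log
    OUT_DIR="${2:-parsed-logs}"
    mkdir -p "$OUT_DIR"

    # grep -n keeps the original line numbers, so the report can jump
    # straight to the exact location in the full log.
    for LEVEL in ERROR WARN INFO; do
        grep -n "$LEVEL" "$LOG_FILE" > "$OUT_DIR/$LEVEL.txt" || true
    done

    wc -l "$OUT_DIR"/*.txt    # quick summary of how many lines fall in each category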

Auto Code Merge : As I already mentioned, we have a team of around 100+ developers & a sprint cycle of 10 days, and two sprints overlap each other for 5 days, i.e. the first 5 days are for development, after that code freeze is enforced, and the next 5 days are for bug fixing. This means that at any particular point of time there are 3 parallel branches on which work is in progress: one branch which is currently deployed in production, a second branch on which testing is happening, and a third branch on which active development is happening. You can easily imagine that merging these branches is a task in itself. What we have done is create an automated code merge utility that tries to merge branches in a pre-defined sequence; if the automatic merge is successful the merge proceeds to the next set of branches, otherwise a mail is sent to the respective developers whose files are in conflict.
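A stripped-down sketch of the merge step for one pair of branches (the branch names & the mailing address are placeholders, & the real utility walks a pre-defined sequence of such pairs):

    #!/bin/bash
    # Hypothetical auto-merge of SOURCE into TARGET; on conflict, mail the
    # list of conflicting files & their last authors, then abort the merge.
    SOURCE="sprint-12"
    TARGET="sprint-13"

    git checkout "$TARGET"
    git pull origin "$TARGET"

    if git merge --no-ff "origin/$SOURCE" -m "Auto merge $SOURCE into $TARGET"; then
        git push origin "$TARGET"              # clean merge, move on to the next pair
    else
        CONFLICTS=$(git diff --name-only --diff-filter=U)   # unmerged files
        {
            echo "Automatic merge of $SOURCE into $TARGET failed for:"
            for f in $CONFLICTS; do
                echo "  $f (last modified by $(git log -1 --format='%an' "origin/$TARGET" -- "$f"))"
            done
        } | mail -s "Merge conflict: $SOURCE -> $TARGET" dev-team@example.com
        git merge --abort                      # leave the branch untouched
    fi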

Hope you will get motivated by this set of utilities & come up with new suggestions or points of improvement.

Initial thoughts for a patch framework for a java based web project

Although this blog was not in the pipeline for the month of Feb, I got a requirement to build a patch framework for a Java based web project, so along with building this framework I thought of writing this blog as well so that I can get ideas from other people too.

First of all I will talk about what can be patched using this patch framework; majorly it will be three types of resources that can be patched:

  • Class Files
  • JSPs
  • Static Resources such as images, CSS, JS …

I’m thinking of adding a few other features in this patch framework as well:

  • Sequence of patches should be maintained : Since we have a big team of around 80 developers working on the same code-base, there may be a scenario where two or more patches need to be applied to a target system. There is a fair chance that those patches have to be executed in a sequence, or you could say there could be dependencies among the patches.
  • Validation while applying patches : One of the validations that I can think of is that the resources to be patched will be either new or existing ones, & in the case of existing resources the system should verify that the location at which the resources are to be patched already has those resources.
  • Rollback : The patch framework should have rollback capability

I’m planning to build this patch framework using the following (a rough sketch of how the apply step could work is given after this list):

  • Shell scripting : For the programming aspect
  • Git : As a version control system for storing patches
  • Jenkins : To provide a front-end to the user for applying patches
  • MySQL : Not so sure about it yet, but I have to store some information such as which patches have been applied, the sequence of patches…. I can use the file system as well for storing this information
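To make the idea a bit more concrete, here is a rough sketch of what the apply step could look like in shell (every path, the patch layout & the applied-patches file are assumptions at this stage, not a final design):

    #!/bin/bash
    # Hypothetical patch apply step: back up the existing resources, copy
    # the patched files in, & record the patch for sequencing & rollback.
    set -e

    PATCH_DIR="$1"                                  # e.g. a patch checked out from Git
    WEBAPP_DIR="/opt/tomcat/webapps/myapp"          # target Java web application
    BACKUP_DIR="/opt/patches/backup/$(basename "$PATCH_DIR")"
    APPLIED_LOG="/opt/patches/applied.log"          # could move to MySQL later

    # Back up every resource that already exists at the target location;
    # this is also where the existing-resource validation would hook in.
    while read -r rel_path; do
        if [ -f "$WEBAPP_DIR/$rel_path" ]; then
            mkdir -p "$BACKUP_DIR/$(dirname "$rel_path")"
            cp "$WEBAPP_DIR/$rel_path" "$BACKUP_DIR/$rel_path"   # kept for rollback
        fi
    done < "$PATCH_DIR/files.list"

    # Apply: overlay the class files, JSPs & static resources onto the webapp.
    cp -r "$PATCH_DIR/files/." "$WEBAPP_DIR/"

    # Record the patch so we always know what is applied & in what sequence.
    echo "$(date '+%F %T') $(basename "$PATCH_DIR")" >> "$APPLIED_LOG"

Rollback would then simply be copying the backed-up files from the backup directory over the webapp & removing the entry from the applied-patches record.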

Let me know your thoughts about this framework or any other feature that you can think of.

Automation tips and tricks January 2013

I’m starting a new blog series in which I’ll be talking about the various cool things or automations that I, along with my team, have done in a month and what my plans are for the next month.

Talking about January 2013, I’ve done the following things:

1.) Streamlining of environments : The big step in streamlining the environments was to change the owner of the applications from the root user to the tomcat user & to make the ports of all the applications consistent across environments, i.e. dev, qa, pt & staging. This will help me in my long term goal of introducing a server configuration tool, most preferably Puppet.
2.) Log Analyzer Utility : One of the major challenges that teams face is getting real time notifications of any exceptions that occur in the server logs. To overcome this problem we have written a log analyzer utility that scans a log file backed by a meta file; this meta file has the information about who all should be notified for an exception. The utility is written in shell script and integrated with the Jenkins CI server so that we can schedule its execution as per convenience; currently Jenkins is executing this utility every 15 minutes (a rough sketch of this & the next utility is given after this list).
3.) System monitor : Of late we were facing the challenge of servers running out of disk space, & only when the whole system went down were we able to figure out that the issue was a disk space outage caused by huge log files. To overcome this problem we have built a small shell utility that scans a couple of folders recursively and provides a list of the top 10 files whose size is greater than a specified threshold. In our case we have set this threshold to 1 GB; all these variables can also be provided as input to the utility, such as the folders to scan, the regular expression of files which need to be considered & the threshold value.
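Neither utility is more than a handful of shell lines at its core. A rough sketch of the log analyzer idea (the meta file format, log path & addresses are illustrative assumptions, not the exact ones we use):

    #!/bin/bash
    # Hypothetical log analyzer, run by Jenkins every 15 minutes: scan the
    # recent part of the log for the patterns listed in a meta file of
    # "pattern,email" pairs & notify the owner when a pattern is seen.
    LOG_FILE="/var/log/myapp/server.log"
    META_FILE="notify.meta"        # e.g. "OutOfMemoryError,platform-team@example.com"

    while IFS=, read -r pattern email; do
        MATCHES=$(tail -n 5000 "$LOG_FILE" | grep "$pattern")
        if [ -n "$MATCHES" ]; then
            echo "$MATCHES" | mail -s "Exception '$pattern' found in server.log" "$email"
        fi
    done < "$META_FILE"

And a similar sketch of the system monitor that lists the biggest offenders above the threshold (the folder list is a placeholder; 1 GB is the threshold we use):

    #!/bin/bash
    # Hypothetical disk-space monitor: report the top 10 files larger than
    # a threshold under the given folders.
    FOLDERS="${1:-/var/log /opt/app/logs}"     # folders to scan recursively
    THRESHOLD="${2:-+1G}"                      # find(1) size syntax

    # FOLDERS is intentionally unquoted so multiple folders split into args.
    find $FOLDERS -type f -size "$THRESHOLD" -exec du -h {} + 2>/dev/null \
        | sort -rh \
        | head -n 10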

This is what we achieved in the month of January 2013; although these utilities may seem obvious and simple, the effect they have on the productivity of the team is considerable.

Now for the plans for the month of February 2013. Usually I choose those things which we are doing manually; this month we will be working on the following things:
1.) A utility which can perform an automated merge where possible
2.) A utility that can automatically upload artifacts to a central server (Artifactory in our case)
3.) Integration of common Git operations with Jenkins

Efficiently handling Code merge in Version Control System

One of the painful & mundane tasks that release engineers have to perform is merging changes from one branch into another branch, & in case of code conflicts the release engineer has to co-ordinate with all the developers to resolve those merge conflicts.

In our current setup the problem is more critical as the development of two releases overlaps. We have a sprint cycle of 10 days where we have 5 days of active development, after which code freeze is implemented & the remaining 5 days are only for bug fixes. The next sprint starts just after the code freeze date of the previous release. In an ideal scenario this setup should work well, but the biggest assumption behind successful execution of the process is that there should be minimal code check-ins after code freeze, & usually that doesn’t happen. This results in parallel development on 2 branches & therefore, while merging the two branches, there are a lot of code conflicts.

The real problem starts when we start merging code, as currently there are close to 100 developers working on the same code-base, which means a huge list of files in conflict mode & you have to chase down each & every person to resolve those conflicts. To overcome the above-said problem we are planning to do 2 things.

The first one is that instead of merging after a long duration, we are planning to increase the frequency of merges from once in 5 days to twice a day, which would help us reduce the list of conflicting files.

As I always strive to automate things as much as possible, the second part is to at least create an automated tool that will perform a dummy merge of two branches and list out all the files that would end up in conflict mode, along with the last users who modified those files in the respective branches.
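Since we are on Git, the dummy merge can be done without committing anything; a minimal sketch of the idea (the branch names are placeholders):

    #!/bin/bash
    # Hypothetical dry-run merge: find which files would conflict when
    # merging SOURCE into TARGET, & who last touched them on each branch.
    SOURCE="origin/sprint-13"
    TARGET="origin/sprint-12"

    git fetch origin
    git checkout -B dummy-merge "$TARGET"        # throwaway local branch

    # Attempt the merge but never create the merge commit.
    if git merge --no-commit --no-ff "$SOURCE"; then
        echo "No conflicts between $SOURCE and $TARGET"
    else
        echo "Files that would conflict:"
        for f in $(git diff --name-only --diff-filter=U); do
            src_author=$(git log -1 --format='%an' "$SOURCE" -- "$f")
            tgt_author=$(git log -1 --format='%an' "$TARGET" -- "$f")
            echo "  $f  ($SOURCE: $src_author, $TARGET: $tgt_author)"
        done
    fi

    # Throw away whatever the attempted merge did.
    git merge --abort 2>/dev/null || git reset --hard "$TARGET"

The idea would be to share this output with the conflicting developers up front, so that by the time the real merge happens most of the conflicts are already sorted out.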

We are expecting a 60-70% efficiency improvement in the code merge process; let’s see how things go. Feel free to drop any ideas you have, or reach out in case of any concerns :).

Although I tried to be as generic as possible, just to let you know, we are using Git as our version control system.