How to Manage Amazon Web Services Instances, Part 1

If you want to minimize the amount of money you spend on Amazon Web Services (AWS) infrastructure, then this blog post is for you. In this post I will discuss the rationale behind starting and stopping AWS instances in an automated fashion and, more importantly, doing it the correct way. Obviously you could do it through the AWS web console as well, but that would need your daily involvement. In addition, you would have to take care of starting/stopping the various services running on those instances.

Before jumping into how we achieved instance management in an automated fashion, I would like to state the problem we were facing. Our application testing infrastructure is on AWS: a multi-component (20+ components) application distributed among 8-9 Amazon instances. Usually our testing team starts working at 10 am and continues till 7 pm. Earlier we used to keep our testing infrastructure up for 24 hours, even though we were using it for only 9 hours on weekdays, and not using it at all on weekends. Thus, we were wasting more than 50% of the money that we spent on the AWS infrastructure. The obvious solution to this problem was an intelligent system that would make sure our Amazon infrastructure was up only during the time when we needed it.

Here is the detailed list of requirements, along with the corresponding things that we did:

  1. We should shut down our infrastructure instances when we are not using them.
  2. There should be a way to bring up the infrastructure manually: We created a group of Jenkins jobs, which were scheduled to run at a specific time to start our infrastructure (an example schedule is sketched after this list). A set of people also have execute access on these jobs, so the infrastructure can be started manually if the need arises.
  3. We should bring up our infrastructure instances when we need them.
  4. There should be a way to shut down the infrastructure manually: We created a group of Jenkins jobs that were scheduled to run at a specific time to shut down our infrastructure. The same set of people have execute access on these jobs, so the infrastructure can be shut down manually if the need arises.
  5. Automated application/services start on instance start: We made sure that all the applications and services were up and running when the instance was started.
  6. Automated graceful application/services shut down before instance shut down: We made sure that all the applications and services were gracefully stopped before the instance was shut down, so that there would be no loss of data.
  7. We also had to make sure that all the applications and services were started in a defined, agreed-upon order.
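
For illustration, the scheduled Jenkins jobs can use ordinary cron expressions in their build triggers. The times below are only an example built around our 10 am to 7 pm usage; adjust them to your own working hours:

30 9 * * 1-5     [Build trigger for the start-infrastructure jobs: 9:30 am, Monday to Friday]
30 19 * * 1-5    [Build trigger for the stop-infrastructure jobs: 7:30 pm, Monday to Friday]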

Once we had the requirements ready, implementing them was simple, as Amazon provides a number of APIs to achieve this. We used the AWS CLI, and needed just 2 simple commands that it provides.
The command to start an instance:
aws ec2 start-instances --instance-ids i-XXXXXXXX
The command to stop an instance:
aws ec2 stop-instances --instance-ids i-XXXXXXXX
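
Both commands return as soon as AWS accepts the request, not when the instance has actually finished changing state. If a script needs to block until the instance is really up or down (for example, before starting the services on it), the AWS CLI also provides wait commands:

aws ec2 wait instance-running --instance-ids i-XXXXXXXX   [Blocks until the instance reaches the running state]
aws ec2 wait instance-stopped --instance-ids i-XXXXXXXX   [Blocks until the instance reaches the stopped state]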

Through the above commands you can automate starting and stopping AWS instances, but you might still not be doing it the correct way. If you haven't restricted the AWS CLI to firing only the start-instances and stop-instances commands, it can be used to run any other command as well, and that could turn out to be a problem area. Another important point is to restrict the set of AWS instances on which these commands can be executed, as they could mistakenly be run with the instance id of a production instance as an argument, creating undesirable circumstances 🙂
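
Until such restrictions are in place, even a small wrapper script around the AWS CLI can act as a safety net. This is only a sketch: the instance ids below are placeholders for your test instances, and the idea is that the Jenkins jobs call this wrapper instead of the AWS CLI directly.

#!/bin/bash
# manage-test-instance.sh: allow only start/stop, and only on whitelisted instances
# Usage: manage-test-instance.sh start|stop <instance-id>
ACTION="$1"
INSTANCE="$2"
WHITELIST="i-11111111 i-22222222 i-33333333"   # placeholder test instance ids

case "$ACTION" in
  start|stop) ;;   # only these two actions are permitted
  *) echo "Usage: $0 start|stop <instance-id>" >&2; exit 1 ;;
esac

# Refuse to touch any instance that is not in the whitelist
if ! echo "$WHITELIST" | grep -qw "$INSTANCE"; then
  echo "Instance $INSTANCE is not a whitelisted test instance" >&2
  exit 1
fi

aws ec2 "${ACTION}-instances" --instance-ids "$INSTANCE"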

In the next blog post I will talk about how to start and stop AWS instances the correct way.

Attach a new volume to an EC2 instance

This blog will talk about how to mount a new volume on an existing EC2 instance. Though it is very straightforward and simple, it's good to have a checklist ready so that you can do things in one go instead of searching here and there. The most important thing to note is that you have to do a couple of manual operations apart from attaching the volume through the AWS web UI.

  1. Go to the AWS Volumes screen, and create a new volume if one is not created already.
  2. Select Attach Volume from the Actions button.
  3. Choose the instance to which this volume needs to be attached.
  4. Confirm the volume state changes from available to in-use.
  5. Go to the AWS Instances screen, and select the EC2 instance to which the volume was attached.
  6. Check the Block Devices in the details section; you can see the new volume's details there. Let's say it is attached at /dev/sdf.
  7. Now log in to the EC2 instance machine. You can't see the attached volume yet (it is like an external unformatted HDD connected to a Linux box). Note that a volume attached as /dev/sdf typically shows up inside the instance as /dev/xvdf.
  8. To make it usable, execute the commands below.
sudo su -                           [Switch to superuser]
mkfs -t ext3 /dev/xvdf              [Format the drive, only if it is a new volume]
mkdir /home/mettl/mongo             [Create a directory to use as the mount point]
mount /dev/xvdf /home/mettl/mongo   [Mount the drive on the newly created directory]
Make sure to change permissions according to how you will use it.
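For example, if an application user should own the mount point (the mettl user/group here is just illustrative, matching the path above):
chown -R mettl:mettl /home/mettl/mongo   [Give ownership of the mount point to the app user]
chmod 755 /home/mettl/mongo              [Adjust permissions to taste]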
  9. To mount EBS volumes automatically on startup, add an entry in /etc/fstab:
/dev/xvdf    /home/mettl/mongo    ext3    defaults,nobootwait,comment=cloudconfig    0    0
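You can verify the new entry without rebooting; mount -a mounts everything listed in /etc/fstab, so any mistake in the line shows up immediately:
umount /home/mettl/mongo   [Unmount first, if it is currently mounted]
mount -a                   [Remount everything from /etc/fstab; an error here means the entry is wrong]
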
Hope you will find this blog useful. Rest assured, this is the starting point of a new series; I will be talking about a couple of other best practices, such as why you need this kind of setup, and how to upgrade a volume on a running EC2 instance.

How to create extra swap space using the file system

Sometimes you feel constrained by the RAM limit of your system, especially when you are running heavy-duty software. In this blog I'll talk about how you can overcome this problem by creating extra swap space, giving your system more usable memory.

First of all, you can execute the swapon command to check how much swap space you already have in your system:
$ swapon -s
Filename                 Type         Size      Used     Priority
/dev/sda5                partition    8130556   44732    -1

The above output indicates that you already have swap space on partition /dev/sda5. The numbers under "Size" and "Used" are in kilobytes. Though I have a considerable amount of swap space configured on my system :), let's continue and create a new swap area using the file system. Before starting with the creation of swap space, let's make sure that I have enough disk space available on my system:

$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda3       448G  123G  303G  29% /
udev            1.9G  4.0K  1.9G   1% /dev
tmpfs           767M   40M  727M   6% /run
none            5.0M     0  5.0M   0% /run/lock
none            1.9G  804K  1.9G   1% /run/shm

So I have a powerful system with 303G of disk space still available, which means I have the liberty of creating a swap space of my liking. I'll use the data dump (dd) command to create my supplementary swap file; make sure that you run this command as the root user.
$ dd if=/dev/zero of=/home/sandy/extraswap bs=1M count=512
512+0 records in
512+0 records out
536870912 bytes (537 MB) copied, 2.41354 s, 222 MB/s

Now we have created a file /home/sandy/extraswap of size 512M, which we will use as a swap area. The swap area can be set up on it by issuing the mkswap command:
$ mkswap /home/sandy/extraswap
Setting up swapspace version 1, size = 524284 KiB
no label, UUID=685ac04a-ad31-48a8-83df-9ffa3dbc6982

Finally, we have to run the swapon command on our newly created swap file to bring it into the game:
$ swapon -s
Filename                 Type         Size      Used     Priority
/dev/sda5                partition    8130556   46248    -1
$ swapon /home/sandy/extraswap
$ swapon -s
Filename                 Type         Size      Used     Priority
/dev/sda5                partition    8130556   46248    -1
/home/sandy/extraswap    file         524284    0        -2

As you can notice, when we first executed the swapon -s command, the new swap file was not in the picture; once we executed swapon /home/sandy/extraswap, it showed up as active swap.

One last thing that we have to do is add an entry for this swap file in our /etc/fstab file, because the swap will not be active by default after the next system boot unless it is listed there.
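
A minimal sketch of the entry, using the swap file path from our example (the standard form for a swap file is the path, "none" as the mount point, and type "swap"):

/home/sandy/extraswap    none    swap    sw    0    0

After saving the file, you can run swapon -a, which activates all swap areas listed in /etc/fstab, to confirm the entry works without rebooting.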

How to securely access your private app on the cloud

One of the suggested practices in cloud administration is to always host your applications in a Virtual Private Cloud. You should have a public subnet hosting the public-facing apps, and a private subnet hosting the private apps (like a database or a back-end service/app). To know more about why you need this kind of setup, please read more about VPC.

This blog will talk about a scenario where you have multiple Virtual Private Clouds (hereafter referred to as VPCs), and you need to access a private app hosted in one VPC from another VPC. An example of this scenario: you have one VPC for your staging environment and another VPC for your production environment, and you'd like to sync the staging database with a dump of the production database. This might not be straightforward, as you may not be able to access the production database from outside the production VPC.

One of the solutions for this problem would be to first take a dump of the production database on one of the public-facing machines in the production VPC, then copy that dump to a public-facing machine in the staging VPC, and finally apply the dump to the private database of the staging environment. This approach works, but it is not a perfect solution, as you have to copy the db dump between VPCs.
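
For concreteness, the multi-hop flow looks roughly like this, assuming a MySQL database; the hostnames (prod-db.internal, staging-public, staging-db.internal) are placeholders:

# On a public-facing machine in the production VPC:
mysqldump -h prod-db.internal -u backup -p mydb > /tmp/mydb.sql

# Copy the dump across VPCs to a public-facing machine in staging:
scp /tmp/mydb.sql user@staging-public:/tmp/mydb.sql

# On that staging machine, restore into the private staging database:
mysql -h staging-db.internal -u admin -p mydb < /tmp/mydb.sql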

A much better approach would be to connect directly to the production database from the staging VPC and execute the dump and restore commands from there; for that, you need direct access to the production database from the staging environment. The approach we used to get it is called port forwarding: we configure port forwarding on one of the public-facing machines (the NAT instance is the preferred one) in the production VPC in such a manner that if a request comes to this machine on port x, it is forwarded to port y on a private-facing machine in the production VPC, which is the production database in this case.
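
One common way to set this up on a Linux NAT instance is with iptables DNAT rules. A minimal sketch, run as root on the NAT machine, assuming the production database is MySQL listening on 10.0.1.50:3306 (the addresses and ports are placeholders):

echo 1 > /proc/sys/net/ipv4/ip_forward   [Let the kernel forward packets]
iptables -t nat -A PREROUTING -p tcp --dport 13306 -j DNAT --to-destination 10.0.1.50:3306
iptables -t nat -A POSTROUTING -p tcp -d 10.0.1.50 --dport 3306 -j MASQUERADE

Here port 13306 plays the role of port x and 3306 of port y: from the staging VPC you would point your dump command at the NAT machine's public address on port 13306. Remember to open port 13306 in the NAT instance's security group, ideally only for addresses from the staging VPC.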

In the next blog I will talk about alternative approaches that can be used to solve this problem.

Puppet module to setup nodejs deployment 2

As I said in the previous blog, Puppet module to setup nodejs deployment, the nodejs module provides the basic infrastructure for automated deployment of node apps, and as promised I've released the next module, "nodeapp", which can be used to set up a node app on the target server.

First of all, I'll talk about what this module does to facilitate the automated deployment of a nodejs app. As already discussed, we follow the convention that each node app's code is present at /home/nodejs/<app_name>, which is referred to by the startNodeApp.sh script, so we create the directory of the nodejs app. The deployNodeApp.sh script uses upstart to manage the nodejs app instance, i.e. starting/stopping the nodejs app; the nodeapp module takes care of creating the required upstart configuration at /etc/init/<app_name>.conf. We also use monit to monitor the nodejs apps, so that we can start/stop them through the monit web UI and see various stats of each nodejs app, such as cpu, memory, and load consumption.

This nodeapp module is a user-defined type that takes the name of the node app as an argument, as a result of which you can set up any number of nodejs apps on a system, e.g.
nodeapp{'search-demo': app_name => "search-demo"}
This entry will create the below files:

/etc/init/search-demo.conf : An upstart configuration file, using which the search-demo nodejs app can be managed as a service.

#!upstart
description "node.js search-demo server"
author      "sandy"
start on startup
stop on shutdown

script
export HOME="/home/nodejs"

echo $$ > /var/run/search-demo.pid
exec sudo -u nodejs /home/nodejs/startNodeApp.sh search-demo
end script

pre-start script
# Date format same as (new Date()).toISOString() for consistency
echo "[`date -u +%Y-%m-%dT%T.%3NZ`] (sys) Starting" >> /var/log/search-demo.sys.log
end script


pre-stop script
rm /var/run/search-demo.pid
echo "[`date -u +%Y-%m-%dT%T.%3NZ`] (sys) Stopping" >> /var/log/search-demo.sys.log
end script
/etc/monit/conf.d/search-demo.monit : A monit configuration file, using which the search-demo nodejs app can be monitored and even automatically restarted if it goes down.

check process search-demo with pidfile /var/run/search-demo.pid
stop program = "/sbin/stop search-demo"
start program = "/sbin/start search-demo"
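
Once these files are in place, the app can be driven like any other service. For example, assuming Ubuntu's upstart tools and a running monit daemon:

sudo start search-demo   [Start the app through upstart]
sudo stop search-demo    [Stop the app gracefully]
sudo monit status        [See what monit reports about the app: pid, cpu, memory, uptime]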

So, using these 2 modules, nodejs and nodeapp, you can make any system ready for automated deployment of nodejs apps.