Backup files to Amazon S3


After a few years of silence, it occurred to me today that I should start writing on my blog again. During the last twelve-odd months I have been involved in developing and revamping several of the websites owned by the company I currently work for.

In this article I’m going to discuss the steps I have been using to automate backing up the websites hosted on Amazon EC2 to an Amazon S3 bucket.

The strategy I adopted can be broken down into the following three steps:

  1. Create a backup copy of the database and the website source code (all my websites were developed using PHP) on a daily basis, compressing each (database & source code) into a tar.gz archive with a timestamp appended
  2. Push the backup files to an Amazon S3 bucket
  3. Set up a cron job to execute the process

Step 1: Create a copy of the database and the website source code

To achieve this I created a folder called backups (/home/ubuntu/backups) in the home directory and added the necessary instructions into a shell script as follows.

#!/bin/sh

# (1) set up the required variables
DB_DUMP=<filename>_`date +"%Y_%m_%d"`.sql
SOURCE_CODE=<filename>_`date +"%Y_%m_%d"`.tar.gz
DBSERVER=<hostname>
DATABASE=<database name>
USER=<database user>
PASS=<database password>

# (2) create a dump of the database using mysqldump
cd /home/ubuntu/backups/
mysqldump --opt --user=${USER} --password=${PASS} -h ${DBSERVER} ${DATABASE} > ${DB_DUMP}

# (3) compress the mysql database dump using tar.gz compression
tar -zcf ${DB_DUMP}.tar.gz ${DB_DUMP}

# (4) create a compressed copy of the website source and move it to /home/ubuntu/backups/
cd /var/www/
tar -zcf ${SOURCE_CODE}  <website source code folder>/
mv ${SOURCE_CODE} /home/ubuntu/backups/

# (5) delete backup copies older than 3 days inside /home/ubuntu/backups/
cd /home/ubuntu/backups/
find <filename>_* -mtime +3 -exec rm {} \;

Save the file as backup.sh inside /home/ubuntu/backups
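
Before wiring the script into cron, it is worth running it once by hand to confirm the dump and the compressed archive are actually created. A quick sanity check, assuming the paths used above:

# run the backup script once manually and confirm the output files exist
sh /home/ubuntu/backups/backup.sh
ls -lh /home/ubuntu/backups/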

Step 2: Push the backup files to the Amazon S3 bucket

To achieve this I tried two approaches, and you’ll find that the latter is easier. Initially I used the Amazon AWS SDK to move the backup files to the Amazon S3 bucket. Since I used PHP, this approach hit a limitation when an individual file exceeded 4 GB (after the initial compression my backup was over 12 GB), even on a 64-bit Linux box (I used Ubuntu 16.04). To overcome this I sliced the final compressed output into chunks of roughly 3.6 GB.

tar czf - <website source code folder>/ | split -b 3850MB - ${SOURCE_CODE}.
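
When restoring, the parts can be concatenated back into a single archive before extracting. A minimal sketch using the same placeholder names (the .aa, .ab, … suffixes are what split produces by default):

# join the split parts back together and extract the archive
cat <filename>_<date>.tar.gz.* > <filename>_<date>.tar.gz
tar -xzf <filename>_<date>.tar.gz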

Approach 1: Using Amazon AWS SDK

Download the appropriate Amazon AWS SDK from here. In my case I used the PHP SDK, following the instructions available here, and downloaded the PHP library using the 3rd method (installing via Zip file).
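
If Composer is available on the server, installing the SDK through it is an alternative to the Zip file; in that case the require_once path in the script below would point to Composer's vendor/autoload.php instead of aws-autoloader.php:

# install the AWS SDK for PHP via Composer (alternative to the Zip install)
cd /home/ubuntu/aws
composer require aws/aws-sdk-php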

<?php
require_once('/home/ubuntu/aws/aws-autoloader.php');

use Aws\S3\S3Client;
use Aws\S3\Exception\S3Exception;

$bucket = '<bucket name>';
$pathToFile = '/home/ubuntu/backups/';
$fileNameSourceCode = ['<filename>_'.date('Y_m_d').'.tar.gz']; // name(s) of the website source code archive(s); add each split part here if the archive was split. Must match the SOURCE_CODE variable in /home/ubuntu/backups/backup.sh
$fileNameDBDump = '<filename>_'.date('Y_m_d').'.sql.tar.gz'; // name of the compressed database dump; must match the DB_DUMP variable in /home/ubuntu/backups/backup.sh

$credentials = new Aws\Credentials\Credentials('<access key id>', '<secret access key>');

// Instantiate the client.
$s3 = S3Client::factory([
    'region'      => 'us-east-1',  // I created the buckets in the US East (N. Virginia) region
    'version'     => '2006-03-01', // standard API version for the S3 service
    'credentials' => $credentials
]);

// Push the source code file(s) to the Amazon S3 bucket

if (count($fileNameSourceCode) > 0) {
    foreach ($fileNameSourceCode as $file) {
        if (file_exists($pathToFile.$file)) {
            try {
                // Upload the file.
                $result = $s3->putObject(array(
                    'Bucket'     => $bucket,
                    'Key'        => $file,
                    'SourceFile' => $pathToFile.$file,
                    'ACL'        => 'public-read',
                    'Expires'    => gmdate("D, d M Y H:i:s T", strtotime("+15 days")) // this parameter is not applied here; expiry has to be configured on the bucket itself from the Amazon S3 account
                ));

                // Print the URL to the object.
                echo $result['ObjectURL'] . "\n";
            } catch (S3Exception $e) {
                echo $e->getMessage() . "\n";
            }
        }
    }
}

// Push the database dump file to the Amazon S3 bucket

if (file_exists($pathToFile.$fileNameDBDump)) {
    try {
        // Upload the file.
        $result = $s3->putObject(array(
            'Bucket'     => $bucket,
            'Key'        => $fileNameDBDump,
            'SourceFile' => $pathToFile.$fileNameDBDump,
            'ACL'        => 'public-read',
            'Expires'    => gmdate("D, d M Y H:i:s T", strtotime("+15 days")) // this parameter is not applied here; expiry has to be configured on the bucket itself from the Amazon S3 account
        ));

        // Print the URL to the object.
        echo $result['ObjectURL'] . "\n";
    } catch (S3Exception $e) {
        echo $e->getMessage() . "\n";
    }
}

Save the file as upload_to_s3bucket.php inside /home/ubuntu/backups
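
Before scheduling it, the script can be run once by hand to confirm the credentials and bucket name are correct; on success it should print the URL of each uploaded object:

# run the upload script manually and check the printed object URLs
php /home/ubuntu/backups/upload_to_s3bucket.php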

Approach 2: Using Amazon S3Tools

Amazon S3 Tools is an easy-to-use set of command line utilities that can push very large files to an Amazon S3 bucket with minimal effort. For Linux & Mac we can use s3cmd, while for Windows there is S3Express. I found this article on TecAdmin which explains its usage comprehensively. I followed the steps below to set it up on my server.

  • Setting up s3cmd on the server

Installation

$ sudo apt-get install s3cmd

Configuration

You need to provide the Access Key ID and Secret Access Key of your Amazon AWS account during the configuration, by executing the following command. As a best practice it is recommended to create an IAM user and provide those credentials instead of using the root account details.

# s3cmd --configure
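
Once configured, a quick way to confirm that the credentials work is to list the target bucket (assuming the bucket has already been created):

# list the contents of the target bucket to verify the configuration
s3cmd ls s3://<bucket name>/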

  • Setting up the shell script to push the files to the S3 bucket

To achieve this I added the necessary instructions into another shell script inside the same backups folder (/home/ubuntu/backups), as follows.


#!/bin/bash

cd /home/ubuntu/backups/

_DB_DUMP=<filename>_`date +"%Y_%m_%d"`.sql.tar.gz  # name of the compressed database dump; it should match the DB_DUMP variable in /home/ubuntu/backups/backup.sh
_SOURCE_CODE=<filename>_`date +"%Y_%m_%d"`.tar.gz  # name of the website source code archive; it should match the SOURCE_CODE variable in /home/ubuntu/backups/backup.sh

s3cmd put ${_DB_DUMP} s3://<bucket name>/
s3cmd put ${_SOURCE_CODE} s3://<bucket name>/

Save the file as upload_to_s3bucket.sh inside /home/ubuntu/backups

Step 3: Set up a cron job to execute the process

Now let’s set up cron jobs to execute the two scripts daily, or at any required time interval.

First, let’s make the two shell scripts executable using the following commands:

$ chmod +x /home/ubuntu/backups/backup.sh
$ chmod +x /home/ubuntu/backups/upload_to_s3bucket.sh

Open up the terminal and execute the following command:

sudo crontab -e

Enter the following two lines and save.

30 01 * * * /home/ubuntu/backups/backup.sh # run the backup daily at 1:30 in the morning

# use this line if you used the Amazon AWS SDK approach
00 03 * * * php /home/ubuntu/backups/upload_to_s3bucket.php # run the upload daily at 3:00 in the morning

# use this line if you used the Amazon S3 Tools approach
00 03 * * * /home/ubuntu/backups/upload_to_s3bucket.sh # run the upload daily at 3:00 in the morning


Change management of database scripts


In this post I’m going to discuss the strategy our team uses to manage the changes taking place in the database. Most of the time we as developers focus mainly on implementing features and put less effort into managing database changes. An individual might not see the importance of this straight away, and it may not matter much for projects with smaller teams, but for projects with a larger scope, multiple teams involved, and a high frequency of functional changes, you will feel the importance of coming up with a strategy to manage it.

First, let’s try to identify situations that favour coming up with such a strategy for projects that maintain an RDBMS as the data store:

  • A project having multiple teams carrying out the development.
  • A project with a larger scope, having fortnightly or monthly deliverables.
  • A project with a longer duration.
  • Projects with teams distributed across different geographic locations.

The above list might not be complete, so the reader is always welcome to suggest additions. I’ll be taking these scenarios as the basis for the rest of my discussion. The obvious fact is that change is inevitable. To accommodate this change, almost every development team uses version control to effectively manage the development carried out by each individual in the team. Similarly, it’s best to come up with a strategy early in the development life cycle for managing the changes taking place in the application’s database. These changes fall under the following categories:

  1. A way to keep track of newly introduced table(s).
  2. A way to keep track of insertions, updates and deletions of column(s) in a table.
  3. A way to keep track of insertions, updates and deletions of row(s) in a table.
  4. A way to manage application configurations/fixed data mappings.
  5. A way to manage setting up the database for a fresh installation or an upgrade.

Step 1

Step 1 can be handled by maintaining a separate SQL script (let’s name it db_tables.sql) to keep track of all the new tables introduced into the application. In most situations new tables are added when there is a requirement to implement a new feature, so this gives a clear picture of when each table was introduced into the database and by whom.

Step 2 & 3

Steps 2 & 3 can be handled by maintaining a separate SQL script (let’s name it db_tables_tracker.sql) that records these column- and row-level changes as they are made.

Step 4

For Step 4, maintain individual SQL scripts representing the data mapping of each identified table (let’s name them sys_config.sql, app_modules.sql, etc.). The changes required by each can be introduced directly into the mapping itself or accommodated via db_tables_tracker.sql. For tables with a larger number of records as data mappings, it is easier to alter the existing mappings to reflect a new change rather than accommodating it via db_tables_tracker.sql. This way, an individual can get an idea of the most recent state of the mapping just by looking at the SQL script. However, this approach fails when the application has to maintain foreign key constraints among related tables, because it requires dropping and re-creating the table every time an update takes place on the table’s columns or data.

Step 5

Step 5 can be elaborated by breaking it down into two scenarios.

Fresh installation of the database

To accommodate a fresh installation of the database, all the SQL scripts created in the first 4 steps can be combined into one batch by carefully arranging them as follows.

  • SQL script from step 1 – db_tables.sql
  • SQL script from steps 2 & 3 – db_tables_tracker.sql
  • SQL scripts from step 4 – sys_config.sql, app_modules.sql, etc…

Note: The above sequence can be automated using a shell script (db_init.sh), as sketched below.
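
A minimal sketch of such a db_init.sh, assuming a MySQL database and the script names used above (host, user, password and database name are placeholders):

#!/bin/bash
# db_init.sh - apply the SQL scripts in order for a fresh installation of the database

DB_HOST=<hostname>
DB_USER=<database user>
DB_PASS=<database password>
DB_NAME=<database name>

# run each script in the agreed sequence and stop at the first failure
for script in db_tables.sql db_tables_tracker.sql sys_config.sql app_modules.sql
do
    echo "Applying ${script} ..."
    mysql -h ${DB_HOST} --user=${DB_USER} --password=${DB_PASS} ${DB_NAME} < ${script} || exit 1
done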

Upgrading of an existing database

To prepare the database to accommodate new changes to the existing table(s), we can maintain a new SQL script called db_upgrade_tracker.sql. This script holds only the SQL statements required to bring the previous version of the database to the new state, taken from the db_tables.sql and db_tables_tracker.sql scripts. The next step is to sequence the SQL scripts for the database upgrade as follows:

  • db_upgrade_tracker.sql
  • SQL scripts from step 4 – sys_config.sql, app_modules.sql, etc…

Note: The above sequence can be automated using a shell script (db_upgrade.sh), as sketched below.
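
db_upgrade.sh would follow the same pattern as db_init.sh, only with db_upgrade_tracker.sql applied first; again a sketch under the same assumptions:

#!/bin/bash
# db_upgrade.sh - upgrade an existing database to the new state

DB_HOST=<hostname>
DB_USER=<database user>
DB_PASS=<database password>
DB_NAME=<database name>

# apply the upgrade tracker first, then refresh the data-mapping scripts
for script in db_upgrade_tracker.sql sys_config.sql app_modules.sql
do
    echo "Applying ${script} ..."
    mysql -h ${DB_HOST} --user=${DB_USER} --password=${DB_PASS} ${DB_NAME} < ${script} || exit 1
done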