Amazon Cloud Instance


Starting an Amazon cloud instance

In your AWS console screen, start an instance from the AMI: ami-cc55b2a5

An m1.large instance is large enough for ordinary genome browser work. This AMI is a standard UCSC genome browser configured to work with the Hg18 data volumes, UCSC release version v203.
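
If you prefer the command line, the same instance can be started with the EC2 API tools; this is a sketch, and the keypair name below is a placeholder for your own:

ec2-run-instances ami-cc55b2a5 -t m1.large -z us-east-1a -k your-keypair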

Note the availability zone your instance is running in from your AWS console, something like: us-east-1a

This zone will be used below to specify where the data volumes are created. Also note the identifier given to your running instance (e.g. i-27d3ed4e).
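
Both values can also be read from the command line with the EC2 API tools:

ec2-describe-instances
# the INSTANCE line of the output includes the instance identifier
# and the availability zone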

Back on your UCSC login machine where you have set up the AWS command-line tools, create the volumes from the snapshots of the data this browser needs:

# ucscHg18MySQL
ec2-create-volume --snapshot snap-fafd0f93 -z us-east-1a
# ucscHg18Gbdb
ec2-create-volume --snapshot snap-d3fd0fba -z us-east-1a
# ucscCommon
ec2-create-volume --snapshot snap-98fc0ef1 -z us-east-1a

Those snapshot identifiers refer to the pre-packaged snapshots of the filesystems needed for this system to function.

Those create commands produce output something like:

# ucscHg18MySQL
# VOLUME  vol-165eb77f  256   snap-fafd0f93 us-east-1a   creating  2009-07-16T19:27:46+0000
# ucscHg18Gbdb
# VOLUME  vol-e85eb781  1024  snap-d3fd0fba us-east-1a   creating  2009-07-16T19:27:53+0000
# ucscCommon
# VOLUME  vol-ed5eb784  256   snap-98fc0ef1 us-east-1a   creating  2009-07-16T19:28:06+0000
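
Before attaching, it is worth confirming that each volume has finished creating; the status column should change from creating to available:

ec2-describe-volumes vol-165eb77f vol-e85eb781 vol-ed5eb784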

Attach those volumes to your running instance:

ec2-attach-volume vol-165eb77f -i i-27d3ed4e -d /dev/sdh
ec2-attach-volume vol-e85eb781 -i i-27d3ed4e -d /dev/sdi
ec2-attach-volume vol-ed5eb784 -i i-27d3ed4e -d /dev/sdj

The -i argument is the identifier of your running instance; the volume identifiers come from the create-volume output above.
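
Once attached, the devices should be visible on the instance itself; a quick check (on some instance kernels the devices may instead appear as /dev/xvdh and so on):

cat /proc/partitions
# sdh, sdi and sdj should now be listed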

Now, on your running instance, log in as the root user and mount these volumes:

mkdir -p /mnt/ucscCommon
mkdir -p /mnt/ucscHg18MySQL
mkdir -p /mnt/ucscHg18Gbdb
mount /dev/sdh /mnt/ucscHg18MySQL
mount /dev/sdi /mnt/ucscHg18Gbdb
mount /dev/sdj /mnt/ucscCommon

Note the correspondence that must be maintained from snapshot identifier, to created volume identifier, to device attach point, to named mount point.
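
A quick sanity check that all three filesystems mounted where expected:

df -h /mnt/ucscHg18MySQL /mnt/ucscHg18Gbdb /mnt/ucscCommon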

Use the Elastic IPs section of your AWS management console to allocate a fixed IP address and associate it with this running instance.
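
The same allocation and association can be done from the command line; this is a sketch, with the final argument standing in for whatever address ec2-allocate-address returns:

ec2-allocate-address
# note the ADDRESS printed by the command above, then:
ec2-associate-address -i i-27d3ed4e YOUR.ELASTIC.IP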

NFS Sharing

On your NFS server (the instance with these filesystems mounted), edit /etc/exports and add the following lines:

/mnt/ucscCommon 10.253.6.111(ro,no_root_squash)
/mnt/ucscHg18MySQL 10.253.6.111(ro,no_root_squash)
/mnt/ucscHg18Gbdb 10.253.6.111(ro,no_root_squash)

Export those filesystems with the command:

# exportfs -a
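
You can verify the export list with either of:

# exportfs -v
# showmount -e localhost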

The address specified is the internal cloud address of the machine these NFS mounts will be served to, i.e. your NFS client. If you have more than one client, simply add additional entries, for example:

/mnt/ucscCommon 10.253.6.111(ro,no_root_squash) 10.254.154.143(ro,no_root_squash) ... etc ...

On the NFS client machine that will use these mounts, run the following commands:

# mkdir /mnt/ucscCommon
# mkdir /mnt/ucscHg18MySQL
# mkdir /mnt/ucscHg18Gbdb
# mount  10.252.214.4:/mnt/ucscCommon /mnt/ucscCommon
# mount  10.252.214.4:/mnt/ucscHg18MySQL /mnt/ucscHg18MySQL
# mount  10.252.214.4:/mnt/ucscHg18Gbdb /mnt/ucscHg18Gbdb

In this case, the IP address given is the internal cloud IP address of the NFS server.
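
To have the client re-establish these mounts automatically at boot, entries along these lines can be added to /etc/fstab on the client; a sketch, using the same server address as above (keep in mind that internal EC2 addresses change if the server instance is restarted):

10.252.214.4:/mnt/ucscCommon    /mnt/ucscCommon    nfs  ro  0  0
10.252.214.4:/mnt/ucscHg18MySQL /mnt/ucscHg18MySQL nfs  ro  0  0
10.252.214.4:/mnt/ucscHg18Gbdb  /mnt/ucscHg18Gbdb  nfs  ro  0  0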