Monday, March 28, 2016

Git server setup on Amazon AWS EC2

This post describes how to set up a secure git server on an Amazon EC2 instance, although the same basic approach should work with any cloud provider that lets you easily create an Ubuntu virtual machine (instance).

Before beginning I should note that this setup is best suited to those who need a simple, secure git server for a small number of users. If you need a more complete collaboration environment, or can tolerate a less secure footprint, you might look at GitLab, which has pre-configured AMIs (except for GovCloud), or a private setup on a service like GitHub.

Obviously, you'll need to set up an Amazon AWS account, which is quite easy. Then create an instance using the default AMI for Ubuntu (with HVM virtualization). A t2.nano suffices. Make sure that the SSH port (22) is open to any IPs you might connect from.
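If you prefer the command line to the web console, the launch can also be scripted with the AWS CLI. A rough sketch, where the AMI ID, key pair name, and security group are placeholders to substitute with your own:
aws ec2 run-instances --image-id ami-xxxxxxxx --instance-type t2.nano \
    --key-name aws-key --security-group-ids sg-xxxxxxxx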

I chose to use two volumes: one with the root image (unencrypted, magnetic storage), and a second, encrypted volume to serve as /home and hold the repositories, for added security. You could skip the second volume if you want, but the setup will be less secure.

After the instance has launched, ssh into its public IP using the key from Amazon and the ubuntu username.
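For example, if the downloaded key is saved as aws-key.pem (a placeholder name), with AMA.ZON.IPA.DDR standing in for the instance's public IP:
ssh -i ~/.ssh/aws-key.pem ubuntu@AMA.ZON.IPA.DDR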

First, update the system and install git:
sudo apt-get update
sudo apt-get upgrade
sudo apt-get install git

Now as root (sudo su -), create a filesystem on your second, encrypted volume:
mkfs -t ext4 /dev/xvdb
e2label /dev/xvdb encrypted_home
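To double-check that you formatted the right device and that the label took, list the block devices with their filesystems and labels:
lsblk -f
(On this instance type the second EBS volume typically appears as /dev/xvdb, but device names can vary.)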

Next, mount it and move the home directories onto the encrypted volume. First move the existing home directory out of the way:
mv /home/ubuntu /
Add this line to /etc/fstab:
LABEL=encrypted_home /home ext4 defaults 0 2
Try to mount it as it would be mounted at boot-up:
mount -a
Run mount to check that it's loaded, then move the home directory back:
mv /ubuntu /home
Check with another terminal that you can still log in.
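As a quick check that the fstab entry works, filter mount's output; the second volume should now appear on /home:
mount | grep /home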

Create a user and home directory for git:
sudo useradd -m -d /home/git -U git
sudo su - git

Set up ssh for git:
mkdir ~git/.ssh && chmod 700 ~git/.ssh
touch ~git/.ssh/authorized_keys && chmod 600 ~git/.ssh/authorized_keys

Ideally, for a small number of users, each user should create their own secure key pair on their local machine (note that -N '' generates the key without a passphrase; omit that option to be prompted for one):
ssh-keygen -f ~/.ssh/securegit -C 'Secure Git' -N '' -t rsa -b 4096
chmod 600 ~/.ssh/securegit
If you have many users, you could consider a solution like gitolite.

Users can then add the following entry at the top of their ~/.ssh/config (create the file if it doesn't yet exist):

Host secure-git AMA.ZON.IPA.DDR
User git
Hostname AMA.ZON.IPA.DDR
Ciphers blowfish-cbc
Compression yes
IdentityFile ~/.ssh/securegit

This configures their ssh client to use the key they just generated whenever connecting to the server's IP (substituted for AMA.ZON.IPA.DDR) as the git user. It also creates an alias 'secure-git' for the IP, pins the cipher, and enables compression for the SSH session.
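One caveat: blowfish-cbc is a legacy cipher, and newer OpenSSH releases have dropped it entirely. If the connection fails with a cipher error, list the ciphers your client actually supports and substitute one of those in the Ciphers line:
ssh -Q cipher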

Users can then provide the admin user with their public key. Back on the git server, for each user's public key securegit.pub:
cat securegit.pub >> ~git/.ssh/authorized_keys
Have them test that they can ssh into git@secure-git from another terminal (an interactive login will only work until we restrict git's shell in the steps below).
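Thanks to the config entry, the alias alone is enough (the User line supplies the username):
ssh secure-git
For now this lands in an ordinary shell; after the lockdown below, the same command will print a refusal message and exit.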

Next, repositories need to be added. As the git user (sudo su - git):
cd
mkdir project.git
cd project.git
git init --bare
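From a client machine, a user can now clone the empty repository through the alias (the path is relative to the git user's home directory):
git clone secure-git:project.git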

And now, let's lock down git access (also as the git user):
mkdir ~/git-shell-commands
cat >~/git-shell-commands/no-interactive-login <<\EOF
#!/bin/sh
printf '\n%s\n\n' "You've successfully authenticated, but interactive shell access is disabled."
exit 128
EOF
chmod +x ~/git-shell-commands/no-interactive-login

As root (sudo su):
echo `which git-shell` >> /etc/shells
chsh -s `which git-shell` git
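To verify the lockdown, try an interactive login again from a client:
ssh secure-git
It should print the message from no-interactive-login and exit, while git operations such as clone, push, and pull continue to work.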

Make sure that everything still works after a reboot. You're now set up with a secure git server in the cloud. For long-term use, you can switch to a reserved instance to save on fees. In addition to the EC2 instance, you'll be paying for the storage of the two volumes and for outgoing traffic to the internet.

Recovery

If you mess up at a late stage of setup and lock yourself out of your instance (i.e., if you misconfigure SSH or cause it to fail to boot; at an early stage, just delete the instance and start over), you can mount the root volume on a temporary instance and fix the damage, as follows:
  1. Create a new Ubuntu instance and stop the original Ubuntu instance.
  2. Detach the root volume from the original instance and attach it to the new instance as /dev/sdf. Also detach the secure home volume.
  3. Mount the volume to the new instance: sudo mount /dev/xvdf1 /mnt  (see available volumes with lsblk)
  4. Fix whatever you broke on the old volume under /mnt and then sudo umount /mnt. Stop the instance.
  5. Unfortunately, you cannot re-assign this volume as the original instance's boot device. You need to create a new instance cloning the original. Start by creating a snapshot of the fixed volume (Volumes, right click->create snapshot). These snapshot and image steps can also be scripted; see the CLI sketch after this list.
  6. Also create a snapshot of your secure home volume.
  7. Create an AMI from the fixed root snapshot (Snapshots, right click->create image). Use hardware-assisted virtualization. Add the secure home snapshot as the image's second volume (as /dev/sdb).
  8. Now create a new instance using this new AMI. (Instances, Launch Instance, find it under My AMIs). You should be able to access it again, and can reassign the elastic IP if applicable.
  9. If you wish, you can terminate the old instance, delete any old volumes and the AMI / snapshots you created (although the snapshots are useful backups and the AMI takes no space beyond the snapshot).
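For reference, steps 5-7 can also be done with the AWS CLI. A sketch, with placeholder volume and snapshot IDs to substitute with your own:
aws ec2 create-snapshot --volume-id vol-xxxxxxxx --description 'fixed root'
aws ec2 create-snapshot --volume-id vol-yyyyyyyy --description 'secure home'
aws ec2 register-image --name 'git-server-fixed' --virtualization-type hvm \
    --root-device-name /dev/sda1 \
    --block-device-mappings '[{"DeviceName":"/dev/sda1","Ebs":{"SnapshotId":"snap-xxxxxxxx"}},{"DeviceName":"/dev/sdb","Ebs":{"SnapshotId":"snap-yyyyyyyy"}}]'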