Wednesday, November 30, 2016

Simplified Fonts for ggplot2 in R

I struggled for a while to get fonts to work properly with ggplot2 charts in R under Windows. The solution turned out to be easier than it seemed. The "old" way was to use library("extrafont"), which would then scan your entire fonts directory on each run (slowly). Then, if you got that working and wanted to export a chart to a PDF, say, you'd need to install Ghostscript and embed the fonts after generating it. Nowadays R can do it all internally and, with a bit of setup, doesn't have to scan the fonts at all. That's thanks to the showtext and Cairo libraries.

You just have to find the filename of the font you want from your fonts directory. (In Windows, open Control Panel -> Fonts, then view details; you may need to add a "Font file name" column. If no name appears, it may be a grouping; open the grouping and do the same.) In my case I wanted to use the Perpetua font, which has the file name PER_____.ttf.


install.packages("showtext") # once
install.packages("Cairo" # once
library("showtext")
library("Cairo") # for embedding fonts in PDF; may not need to be loaded here
library("ggplot2")

# add the desired font to the font database (you can add multiple)
font.add("perpetua", "PER_____.ttf")

# the following should only be necessary in windows, and often isn't documented
# for each font you add, do this, mapping the Windows name and type to a font family
# variable (Perpetua in this case) that you will refer to it as.
windowsFonts(Perpetua=windowsFont("TT Perpetua"))

# plot something
# and use perpetua font for text (by default - any text can be customized)
qplot(1:10) +
  theme(text = element_text(family="Perpetua"))

# save to file; using Cairo drivers to embed the fonts as needed
ggsave("mychart.eps", width=6.5, height=5.5, device=cairo_ps)
ggsave("mychart.pdf", width=6.5, height=5.5, device=cairo_pdf)

Note that you shouldn't need any special driver to save as an image file (jpg/png/...). I have encountered a few fonts that don't seem to embed correctly, and I'm not sure why that is at the moment, but most fonts work fine with this method; they are viewable on screen and in PDFs. This procedure should theoretically work cross-platform (the windowsFonts call just won't be needed), which is another advantage of this method, although I have not yet tested it.
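
For instance, a plain raster export needs no special device (a minimal sketch; the filename and size are arbitrary):
ggsave("mychart.png", width=6.5, height=5.5, dpi=300)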

You can also use Google fonts, so you don't even have to find one on your system:
font.add.google("Roboto", "roboto")
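
A minimal sketch of using that font in a plot (this sticks to the older dot-style names used above; newer versions of showtext/sysfonts use font_add_google() and showtext_auto() instead):

library("showtext")
library("ggplot2")

font.add.google("Roboto", "roboto") # download and register the family as "roboto"
showtext.auto()                     # have showtext render text in subsequent plots (showtext_auto() in newer versions)

qplot(1:10) +
  theme(text = element_text(family="roboto"))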

There is more about the showtext library here. Hope this helps you. Be sure to leave comments if you find any improvements to this method.

Thursday, September 22, 2016

Notes on using git and github

Everybody knows about github and what an amazing resource it is. This post is about using git with github.

Fork what you're interested in on github. In the following, user is your username and repository is the repository you forked. First, configure your preferred editor:
git config --global core.editor <your_favorite_editor>

You may want to modify the EOL settings to match your preferences and platform.
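
For example (these are standard git settings; pick the one matching your platform):
git config --global core.autocrlf true    # Windows: convert to CRLF on checkout, back to LF on commit
git config --global core.autocrlf input   # Linux/macOS: leave files alone on checkout, commit LF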

git clone https://github.com/user/repository.git
cd repository
git config user.name "user"
git config user.email "user@users.noreply.github.com"
git remote add upstream https://github.com/originaluser/repository.git

To integrate new changes from the upstream repo into yours:

git fetch upstream
git rebase -i upstream/master

If there are redundant commits, 'squash' them in the interactive todo list that opens in your editor. The -i is important; otherwise you'll get stuck with a bunch of redundant commits.
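
The todo list looks something like this (the hashes are placeholders); change pick to squash on each commit you want folded into the one above it:
pick a1b2c3d Add feature X
squash 9f8e7d6 Fix typo in feature X
squash 5c4b3a2 Clean up feature X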

If there are any conflicts (changes to the same general area of code, even if just adjacent, or even if equivalent), you will be dropped to the command prompt, commit by commit. Edit the conflicting file as it should be, removing any >>>>> or <<<<< markers. When satisfied, git add the file and git rebase --continue. The resolved changes will be applied in a new commit.

Similarly if you want to create a new branch for your own use:

git checkout -b branch

To switch branches at any time (when there are no uncommitted changes), just omit the -b. If you instead want to create the branch at a previous commit, add that commit's hash at the end.
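
For example (the hash is a placeholder):
git checkout -b fix-branch 3de5f1a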

Be sure to push any changes to the master branch to github before changing to other local branches. Then pull from github before rebasing.

Then you can rebase as above against the other branch. But if the other branch is pushed to github, it is dangerous to push if anyone else is using it. If you're sure they're not, you can git push --force while on that branch (assuming git config --global push.default simple). If there were any other users, after pulling they'd have to blow away their unpushed local commits with git reset --hard origin/branch.
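
Concretely, while on the rebased branch (the branch name is illustrative):
git push --force
And what another user of that branch would then have to run to recover:
git fetch origin
git reset --hard origin/mybranch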

Here is more on rebasing and merging: https://www.atlassian.com/git/tutorials/merging-vs-rebasing/workflow-walkthrough

Or you can move just one commit to the current branch by using:
git cherry-pick <hashcode>

If you need to clean things up: http://stackoverflow.com/questions/5916329/cleanup-git-master-branch-and-move-some-commit-to-new-branch

If you need to "undo" a change made on github, first pull to update your local repo. Then:
git reset --soft HEAD^, and 
git push origin +branchName (see caveats). 

About reverting, resetting etc, see: https://www.atlassian.com/git/tutorials/resetting-checking-out-and-reverting

To update an existing remote branch from a local branch (which is currently checked out):
git push origin local_branch_name:remote_branch_name
or if the branch names match, do this and it will work in the future too:
git push --set-upstream origin local_branch_name

All in all, my experience is that git is vastly inferior to mercurial (hg): git is far more finicky, harder to use, and more prone to ugliness, while mercurial has nice GUIs like TortoiseHg. Git feels like an advanced patch manager that has morphed into a version control system, whereas mercurial feels like an advanced version control system. But alas, the linux kernel uses git, and thus we have github, and the rest is history. I still use mercurial whenever I have a choice.

Saturday, June 11, 2016

Detecting CSS position: sticky support

If you're using modernizr for other things, by all means use that. If you just need a simple check to see if the current browser supports position: sticky, insert this javascript code:

var positionStickySupport = function() {
 var el = document.createElement('a'),
     mStyle = el.style;
 mStyle.cssText = "position:sticky;position:-webkit-sticky;position:-ms-sticky;";
 return mStyle.position.indexOf('sticky')!==-1;
}();

positionStickySupport will be true if it's supported. You should use the following CSS:


.myElement {
 position: -webkit-sticky;
 position: -ms-sticky;
 position: sticky;
 top: 0px;
}

Of course you can set top however you want. If you're wondering, the -ms-sticky is for future-proofing. As of this writing, Microsoft plans to support the feature in Edge, but it's not clear whether they'll use a prefix or not. Safari uses -webkit. Opera has it implemented in pre-release form without a prefix, and Firefox already supports it without one.

It would have been possible to use the CSS.supports() functionality, but Safari 6-8 support position: sticky but not CSS.supports().
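
For what it's worth, on browsers that do implement CSS.supports(), an equivalent check would be (a sketch):

var stickySupported = !!(window.CSS && CSS.supports &&
    (CSS.supports('position', 'sticky') || CSS.supports('position', '-webkit-sticky')));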

Here is the one-liner version:

var positionStickySupport = function() {var el=document.createElement('a'),mStyle=el.style;mStyle.cssText="position:sticky;position:-webkit-sticky;position:-ms-sticky;";return mStyle.position.indexOf('sticky')!==-1;}();

Thursday, May 12, 2016

OpenVPN Server Setup Made Simple


For those not familiar with it, OpenVPN is probably the best and most secure VPN protocol out there at this time, with clients available for every mainstream platform. Unlike most other VPN protocols, it uses keys and certificates rather than passwords for security, which does make the initial configuration take slightly longer.

But the real problem used to be that the process of setting up an OpenVPN server and generating appropriate keys was long, tedious, complicated, and error prone.

Those days are gone. There are now scripts that do all the hard work of setting it up and configuring users and keys for you. After answering a few simple questions, you can have it up and running on your Linux server in a couple of minutes, using all the current best practices to boot.

There are two use cases, with a script for each (both based upon the earlier work of Nyr), supporting at least Debian/Ubuntu/CentOS:
  1. A multi-user, secure OpenVPN server, with quasi-anonymous usage.
  2. An OpenVPN server for personal use, supporting 3 simultaneous connections and potentially older clients.

In case one, use Angristan's script. To get it, run:
wget --no-check-certificate http://bit.ly/ovpn-install -O openvpn-install.sh
In case two, use jtbr's script, which is a lightly modified version of Angristan's.
wget --no-check-certificate http://bit.ly/openvpn-install -O openvpn-install.sh
You can click through to the link to see the extra security measures being taken by these scripts.
For either case, now simply run it as root after adding execute permissions:
chmod +x openvpn-install.sh
sudo ./openvpn-install.sh
For case two, if you want to change the number of simultaneous clients allowed, simply change the line max-clients 3 to the appropriate number before running, or remove it altogether for no limit.
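For example (assuming the line appears verbatim in the downloaded script, as described above):
sed -i 's/max-clients 3/max-clients 10/' openvpn-install.sh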
The first run will set it up and add the first client key (an .ovpn file, which is placed in your home directory). Subsequent runs allow adding or removing clients or uninstalling. The generated client keys need to be (securely!) copied onto the clients' devices and imported into their OpenVPN clients. Once the file is on their computer, this is usually drag and drop. For iPad/iPhone, I found the easiest option to be iTunes File Sharing to save it into the OpenVPN Connect app. Then they can connect at will.
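For the secure-copy step, a client could fetch their key over SSH, for example (the username, server address, and filename are illustrative):
scp youruser@your.server.ip:client1.ovpn .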
You're all set!

These scripts enable clients to access the internet and appear as if they are coming from the server's IP (a typical road-warrior setup) and, depending upon the location of your server, to access the server's local network. To allow VPN clients to connect to each other, or to allow access to the clients' network, you'll need some additional routing configuration; more help here.

Especially if you're using a VPN for anonymity, you should also put some effort into ensuring your clients don't have any IP leaks (where your IP is discoverable by sites you access). A good guide to IP leaks and how to fix them is here; for a quick check, try here.

Wednesday, May 11, 2016

Touch-enabled swipeable scroll bars

Looking for an elegant, functional scrollbar solution for the web?

If you google around, you'll find TouchSwipe and various other Javascript-based solutions. These, while impressive, are trying too hard. They all have issues, either with handling links within the scrollable area, or with stutters in the animation, or with unnatural (on iOS anyway) non-bouncy ending of the scroll. I had several issues I could never overcome, not to mention their large code size and complexity.

You want something that works on touch devices and non-touch devices and is flexible and easy. I found the solution to be CSS-based, and this is it. It works on any modern browser and IE 9+.



The key is that you need an enclosing container (scrollWrapper) with overflow hidden, and an internal container (scrollableArea) within it that is allowed to scroll using the native facilities.

Normally, on a desktop, using the native scroll facilities would mean you'd get an ugly scrollbar, so you need to hide that, which can be done in CSS. But you then need to replace that functionality for non-touch users. That means adding buttons on the ends that move the scrollable area (and could also mean enabling the mouse wheel). The buttons call javascript functions that simply update the scroll position and allow CSS transitions to occur. Below I even include a tiny javascript plugin that enables scrolling horizontally with a mousewheel (vertical scrolling would be automatic).

The result is something that's totally natural and totally flexible, and actually much simpler than the javascript solutions.

You can see a demo of the resulting scrollbar here.

The HTML structure is trivial:

<div class="scrollWrapper">
  <a class="scrollBtn prev">&lt;</a>
  <div class="scrollableArea">
    <!-- stuff to scroll here -->  
  </div>
  <a class="scrollBtn next">&gt;</a>
</div>

The key part is this simple CSS code:
div.scrollWrapper {
 width: 96%;
 height: 91px;
 margin-left: 2%;
 position: relative;
 overflow: hidden;
 white-space: nowrap;
}
div.scrollableArea {
 position: relative;
 width: 100%;
 height: 125%; /* crop scrollbar (if applicable) outside scrollWrapper, while maintaining scrollability */
 white-space: nowrap;
 overflow-x: scroll;
 overflow-y: hidden;
 -webkit-overflow-scrolling:touch;
}

Note that the wrapper height is 91px in order to fit 81px-high divs with 5px margins on top and bottom. The scrollableArea height needs to be about 20px taller than the scrollWrapper's to ensure that the desktop scrollbar (if any) will be cropped off.
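
As an aside (not part of the approach above), WebKit-based browsers would also let you hide the native scrollbar directly rather than cropping it:

div.scrollableArea::-webkit-scrollbar {
 display: none;
}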

Sample CSS for the scroll buttons:
.scrollBtn {
  display: inline-block;
  position: absolute;
  margin-top: 5px;
  margin-bottom: 5px;
  top: 0px;
  height: 81px;
  width: 30px;
  line-height: 81px;
  text-align: center;
  vertical-align: middle;
  background-color: #444;
  opacity: 0.5;
  text-decoration: none;
  z-index: 100;
  font-size: 30px;
  font-weight: 700;
  outline: none;
  cursor: pointer;
}
.scrollBtn:hover {
  opacity: 0.7;
  transition: 0.3s;
}
.scrollBtn.prev {
  left: 0px;
}
.scrollBtn.next {
  right: 0px;
}


Finally, here's javascript to handle the buttons and (optionally) enable horizontal scrolling (using jQuery):
$('a.scrollBtn.prev').click(function(e) {
 var scroller = $(".scrollableArea");
 scroller.animate({scrollLeft: scroller.scrollLeft() - (scroller.innerWidth() - 91)});
 e.preventDefault();
});

$('a.scrollBtn.next').click(function(e) {
 var scroller = $(".scrollableArea");
 scroller.animate({scrollLeft: scroller.scrollLeft() + (scroller.innerWidth() - 91)});
 e.preventDefault();
});

/** 
  * Enable scrolling an element horizontally using up/down mousewheel events
  * (when over the element). Amount is the number of pixels to move per
  * wheel turn (default 120) 
  **/
$.fn.hScroll = function (amount) {
 amount = amount || 120;
 $(this).bind("DOMMouseScroll mousewheel", function (event) {
  var origEvent = event.originalEvent,
      direction = origEvent.detail ? origEvent.detail * -amount : origEvent.wheelDelta,
      position = $(this).scrollLeft();
  position += direction > 0 ? -amount : amount;
  $(this).scrollLeft(position);
  event.preventDefault();
 })
};
$('.scrollableArea').hScroll(70);




Monday, March 28, 2016

Git server setup on Amazon AWS EC2

This post describes how to set up a secure git server on an Amazon EC2 instance, although this basic approach should work with any cloud provider that allows you to easily create an ubuntu virtual machine (instance).

Before beginning I should note that this setup is optimal for those who need a simple, secure git server for a small number of users. If you need a more complete collaboration environment or can tolerate a less secure footprint, you might look at gitlab, which has pre-configured AMIs (except for the govcloud), or a private setup on a service like github.

Obviously, you'll need to set up an Amazon AWS account, which is quite easy. Then create an instance using the default AMI for Ubuntu (with HVM virtualization). A t2.nano suffices. Make sure that the port for SSH (22) is open to any IPs you might use.

I chose to use two volumes: one with the root image (unencrypted, magnetic storage), and a second, encrypted volume to serve as /home and contain the repositories, for added security. You could skip this if you want, but it will be less secure.

After the instance has launched, ssh into the public IP using the key from amazon and the ubuntu username.

First update the system and get git:
sudo apt-get update
sudo apt-get upgrade
sudo apt-get install git

Now as root (sudo su -), create a filesystem on your second, encrypted volume:
mkfs -t ext4 /dev/xvdb
e2label /dev/xvdb encrypted_home

Mount it and move the home directories to the encrypted volume. First, move the existing home directory out of the way:
mv /home/ubuntu /
Add this line to /etc/fstab:
LABEL=encrypted_home /home ext4 defaults 0 2
Try to mount it as it would be at boot-up:
mount -a
Run mount to check that it's mounted, then move the home directory back into place:
mv /ubuntu /home
Check with another terminal that you can still log in.

Create a user and home directory for git:
sudo useradd -m -d /home/git -U git
sudo su git

Set up ssh for git:
mkdir ~git/.ssh && chmod 700 ~git/.ssh
touch ~git/.ssh/authorized_keys && chmod 600 ~git/.ssh/authorized_keys

Ideally, for small numbers of users, each user should create their own secure key pair on their local machine:
ssh-keygen -f ~/.ssh/securegit -C 'Secure Git' -N '' -t rsa -b 4096
chmod 600 ~/.ssh/securegit
If you have many users, you could instead consider a solution like gitolite.

Users can then edit their ~/.ssh/config to add the following entry to the top (create the file if it doesn’t yet exist):

Host secure-git AMA.ZON.IPA.DDR
User git
Hostname AMA.ZON.IPA.DDR
Ciphers blowfish-cbc
Compression yes
IdentityFile ~/.ssh/securegit

This will configure their ssh client to use the key they just generated whenever connecting to the server's IP (substituted for AMA.ZON.IPA.DDR) with the git user. It also creates an alias 'secure-git' for the IP and enables a specific cipher and compression for the SSH session. (Note that newer OpenSSH releases have removed the blowfish-cbc cipher; if your client rejects it, drop the Ciphers line.)

Users can then provide the admin user with their public key. Back on the git server, for each user's public key securegit.pub:
cat securegit.pub >> ~git/.ssh/authorized_keys
Have them test that they can ssh into git@secure-git from another terminal (an interactive login will only work until we restrict the git user to git-shell in the steps below).

Next, repositories need to be added. As the git user (sudo su - git):
cd
mkdir project.git
cd project.git
git init --bare


And now, let's lock down git access (also as git):
mkdir ~/git-shell-commands
cat >~/git-shell-commands/no-interactive-login <<\EOF
#!/bin/sh
printf '\n%s\n\n' "You've successfully authenticated, but interactive shell access is disabled."
exit 128
EOF
chmod +x ~/git-shell-commands/no-interactive-login

As root (sudo su):
echo `which git-shell` >> /etc/shells
chsh -s `which git-shell` git
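
At this point each user should be able to clone over SSH using the alias from their ~/.ssh/config, e.g. for the project.git repository created above:
git clone git@secure-git:project.git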

Make sure that when you reboot, everything still works. You're now set up with a secure git server in the cloud. For long term use, you can change this to a reserved instance to save on fees. In addition to the EC2 instance, you'll be paying for the storage of the two volumes and outgoing traffic to the internet. 

Recovery

If you mess up at a late stage in setup and lock yourself out of your instance (e.g., you misconfigure SSH or cause it to fail to boot; at an early stage, just delete the instance and start over), you can mount the root volume on a temporary instance and fix the damage, as follows:
  1. Create a new ubuntu instance and stop the original ubuntu instance.
  2. Detach the root directory volume from the original instance. Attach it to the new instance as /dev/sdf. Also detach the secure home volume.
  3. Mount the volume to the new instance: sudo mount /dev/xvdf1 /mnt  (see available volumes with lsblk)
  4. Fix whatever you broke on the old volume under /mnt and then sudo umount /mnt. Stop the instance.
  5. Unfortunately you cannot re-assign this volume to the original instance's boot device. You need to create a new instance cloning the original. Start by creating a snapshot of this fixed volume. (Volumes, right click->create snapshot)
  6. Also create a snapshot of your secure home volume.
  7. Create an AMI of the fixed root snapshot. (Snapshots, right click->create image). Use hardware-assisted virtualization. Add the secure home volume as the snapshot for the second volume (as /dev/sdb).
  8. Now create a new instance using this new AMI. (Instances, Launch Instance, find it under My AMIs). You should be able to access it again, and can reassign the elastic IP if applicable.
  9. If you wish, you can terminate the old instance, delete any old volumes and the AMI / snapshots you created (although the snapshots are useful backups and the AMI takes no space beyond the snapshot).

Saturday, February 13, 2016

Troubleshooting Verizon Fios Router WiFi Instability

My folks have a router model MI424WR-GEN3I which they rent from Verizon Fios.

Beginning in Fall 2015, after a year of good functioning, they began experiencing instability with the wireless portion of the router. Specifically, after a period of a few hours, days or weeks:
  • New wireless clients would fail to connect to the wireless network, even when the WPA2 password was correct.
  • As their leases expired, existing clients could not renew their leases and thus could no longer access the network either.


What is most bizarre about the situation is that the wired clients continue functioning normally. Ethernet works; wireless stops working. This is not an issue related to the WAN connection or Verizon's servers. Quite a few possible causes have now been ruled out:
  • Radio interference, the issue Verizon's help desk keeps returning to, cannot be the explanation:
    • They live in the exurbs and have little radio interference. The router is in the basement, shielded by concrete and soil.
    • Standing directly next to the router, no matter what device is used, does not affect one's ability to connect; when the router is exhibiting the issue, no device can connect.
    • There is no notable change in the electromagnetic environment of the house, like a new microwave or other appliance, nor any pattern of outages happening when any electrical device is being used.
    • During outages, some devices continue to work for some time (eventually they just can't re-authenticate).
    • Both the router and other devices show almost no other networks in the area, all of which are weak. The signal strength shown by clients for the network in question is good, even when the problem is occurring. It's not that the network can't be seen or is weak; it's that you can't (re-)connect/authenticate.
    • Changing the wireless channel being used has no effect.
  • Issues with the wireless clients:
    • This happens to portable devices (phones, iPads), laptops and different operating systems (android, iOS, windows). Eventually all are affected.
    • It is remotely possible that some incompatibility with some client device is causing the router software to bonk after some time, but the devices being used are all pretty standard, and it doesn't seem to matter which devices are on the network when outages occur.
  • Some kind of virus or something on a client
    • Verizon technicians seem to think this is a possibility, but even if some client did have one (and they almost certainly don't), I can see no way in which it would affect only the wireless portion of the router and then go away after a reset.
  • An issue with the specific Verizon router. This was my first thought: that something had caused the wireless radio to begin failing, or some other hardware problem.
    • Verizon has now replaced the router not once but twice, and both replacement routers began experiencing the exact same problem (and only that problem), after a matter of days. The only special configurations they had in common were:
      • Opening a single (SSH) port for forwarding, and of course some UPnP ports.
      • The existence of a wired client which regularly transfers data.
      • A custom WPA key and network name.
    • I should say that both the replacement routers were "refurbished" units, which may or may not have had any real refurbishment performed after customers returned them. I can only speculate, but the routers all pass a self-test even while exhibiting the problem, and they appear to work perfectly fine when first turned on (the problem takes at least a few hours to begin happening). So it is quite possible that Verizon thinks nothing is wrong with them and is simply passing other people's broken routers on to the next customer, on the assumption that the real problem is incompetent customers returning perfectly good routers (I get the sense that's what the agents think, anyway). 
    • All of these routers were, at the time of the issue, running firmware version 40.21.18.
  • Like I said, the internet connection continues working fine, as can be verified via a wired connection, so that seems to rule out issues with the ONT or other Verizon network infrastructure.
The issue can be "remedied" (temporarily, until the next outage randomly happens) either by turning the wireless radio off from the router configuration software and turning it back on, or by resetting the whole router. ADDENDUM: When the issue happens, under System Monitoring -> Advanced Status -> "Full Status / System wide Monitoring of Connections", the Wireless Access Point "Status" shows "Disconnected", even though under Wireless Settings the wireless network is "On". It remains unclear what causes it to become "disconnected" or why it fails to re-connect.

This problem is very puzzling and remains unresolved. I'll most likely have to "resolve" it by buying another wireless router, plugging it into the Verizon router via ethernet, and disabling the Fios router's internal wireless capability. It's pretty frustrating that there's no option to replace the Verizon router altogether.

Given the lack of great options, I'm asking you, dear internet, if you are seeing similar issues. I don't trust that Verizon will find the problem themselves, and their agents appear to be well off the scent.

The only realistic possibilities I see are that:
  1. This is a common flaw that is either tolerated by customers or not correctly diagnosed, possibly since so few people use ethernet and would notice that only the wireless side is failing.
  2. I've really got bad luck and managed to get three routers which all exhibit the same problem with the wireless radio (possibly because they don't fix it when refurbishing units). A fourth time might be the charm.
  3. There was some automatic update to the firmware last fall and the new firmware (40.21.18) contains this bug. As I noted, it's possible the firmware has some incompatibility with other implementations of the 802.11a/b/g/n protocol(s). But the most likely culprit would be iPhones (the only new device on the network since the problems started happening), so again, you'd think lots of customers would see it.
Any comments or suggestions are most welcome.