Friday, October 2, 2015

DD-WRT on Netgear AC1450

As it comes, Netgear's AC1450 router is quite lousy: buggy firmware, lots of crashes, and limited throughput. However, the internet figured out that the actual hardware is identical to that of the more expensive, better-performing R6300v2 (AC1750) router, so it is possible to simply convert the router and use the R6300v2's firmware.

But we'll do even better and install DD-WRT's firmware, giving this inexpensive router better functionality than one costing several times more. If you want a DD-WRT router, this is an excellent choice. It's a dual-band, dual-network 802.11ac router supporting 450 Mbps (2.4 GHz) + 1300 Mbps (5 GHz) speeds (both with 3x3 transmit/receive antennas), with two USB ports (one of them USB 3.0), 4+1 Gigabit Ethernet ports, a fast dual-core 800 MHz ARMv7 processor, 128MB flash and 256MB RAM. You can find it new or refurbished on Amazon*.

Although hard to find, the steps to open up the capabilities couldn't be easier. Here's the summary:

1) Download the two files from ftp://ftp.dd-wrt.com/betas/2017/*/netgear-ac1450/, where * is a release that seems to work well according to users of the dd-wrt forums (I used 09-28-2015-r27858 and did not see issues -- Update: I would now recommend a more recent build like 09-29-2016-r30709 for security improvements.)

2) If the router is not new, reset it first by holding down the red button on the back for 10 seconds. From the stock firmware web GUI choose manual (advanced) setup, and log in with the admin username/password printed on the bottom of the router. Then choose router update. Select the factory-to-dd-wrt.chk file and flash. (Continue anyway if it says it's installing the same version.) Whenever flashing, be patient and make sure the power remains connected. It should boot back up automatically after some delay.

3) The router will reboot and open the DD-WRT web GUI. Don't set this up (apart from the mandatory username/password). Just go to Administration -> Firmware Upgrade. Select the second file, netgear-ac1450-webflash.bin and flash with reset to default settings.

DONE! After rebooting, you now have a great, fast full-featured DD-WRT router. More information about DD-WRT and how to set it up for your needs is available on the DD-WRT Wiki page.
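One precaution worth taking before any flash: verify the integrity of the downloaded files. A minimal sketch, assuming the two file names from step 1 are in the current directory (the expected MD5 values themselves would come from the dd-wrt forum thread for your build):

```shell
# Print MD5s of the two firmware files to compare by eye against the
# values posted on the forum; flag any file that's missing entirely.
for f in factory-to-dd-wrt.chk netgear-ac1450-webflash.bin; do
  if [ -f "$f" ]; then
    md5sum "$f"
  else
    echo "missing: $f"
  fi
done
```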


* Really appreciate if you purchase via these links as I'll get a small commission at no cost to you.

Wednesday, September 30, 2015

"Improving" Sony Xperia Z1 Compact D5503

It was time for a new phone since my old one had unfortunately kicked the bucket. Luckily I found the Sony Z1 Compact, which is totally badass and an amazing bargain given the specs. It's available on Amazon (also in white, pink, or lime green).* Really happy with my purchase so far.

Mine came carrier-unlocked so any SIM card will work -- that's an absolute requirement for me as I travel a lot. On previous Android phones, I would install a custom ROM (version of Android) to rid myself of the carrier and manufacturer bloat and add functionality. However, Sony's version of Android is streamlined and I like it so far. Plus, unlocking the bootloader (which is necessary to use a custom ROM), although easy to do, has downsides: it may void the warranty and, more importantly, reduce the quality of the camera and break anything requiring DRM. Also, custom ROMs may not have all the useful features Sony has, particularly the power saving modes.

However, it's possible to root the phone without unlocking the bootloader. You keep the stock ROM and experience, on a rooted phone that you can customize extensively. In particular you can install the Xposed framework and then XPrivacy, which like Privacy Guard in CyanogenMod allows control over app permissions. Rooting also lets me run apps like AirAudio that require root. And at a pretty basic level, with root I can back up the phone.

My Z1 Compact was loaded with Android 4.4.4 KitKat.
This guide will root the phone and upgrade it to Sony's stock Android 5.1.1 Lollipop, without wiping any software or settings.

** UPDATE 2/14/2016 with the latest versions of apps and how to root if you're already on Lollipop (5.*) **
** UPDATE 12/7/2015: it is possible to upgrade to the latest firmware version 14.6.A.1.236 using this procedure (in step 7). If you already had upgraded to rooted 5.1+ you can just do step 7+ to get the latest firmware. This update includes the stagefright 2.0 fixes. (If you use Xposed be sure to get the latest version of that as well.) **
  1. Download the latest flashtool software and install it. (You can get a head start by now downloading the firmware needed in step 7.1)
  2. Once installed, go to flashtool's drivers subdirectory and run the driver installation found there. Choose the flashmode and fastboot drivers, along with the drivers for the Z1C. It'll likely take a few minutes.
  3. My phone shipped with build number 14.4.A.0.157 of Kitkat (you can see this under Settings->About Phone). Unfortunately the "one-click" root solution doesn't work for this version or any other above 14.4.A.0.108. Instead we'll downgrade to .108 and root. If you're already at .108, skip this step. 
    1. Depending upon what version of Android your phone came with:
      1. If you have KitKat (Firmware 14.4.* / Android 4.4.*) then you can downgrade the kernel (only). Download the zip file from this guide and unzip the .108.ftf file.
      2. If you have Lollipop (Firmware 14.6.* / Android 5.*.*) then you must install the whole .108 firmware. This can be found here. You may have to use File->Switch Advanced, and select to wipe userdata (You will lose your user data in this case!) for step 4, in order to be able to boot.
    2. Put the .ftf file into the .flashTool/firmwares directory under your home directory (Eg, C:\Users\username\.flashTool\firmwares).
    3. Enable developer mode by tapping 7 times on Build number under Settings -> About Phone. Then under Settings -> Developer options, enable USB debugging. And under Settings -> Security, enable Unknown Sources.
    4. Run Flash tool; click the lightning button, select Flashmode. Select the A.0.108 kernel firmware file and hit Flash. Then follow directions to power down and attach your phone to the computer, while holding the volume down button until the flash process begins.
    5. Once complete, unplug the USB cable, close flashtool, and turn on the phone.
  4. Root the device using the "one click" solution from here (in Windows, the tool included in the zip from the last step should work as well but is older). 
    1. Ensure USB debugging and unknown sources remain enabled, as above.
    2. Run install.bat in windows or install.sh on mac/linux.
    3. Wait until it tells you to connect the device via USB, and then do that.
    4. Now, wait until it tells you "Device Rooted", dismissing any prompts on your phone if they appear.
  5. Now that you can, back up using the Online Nandroid Backup app (install it and the required BusyBox from the Google Play Store). Save all backups to an external SD card and/or computer. This includes the TA partition, which is wiped if you unlock the bootloader. To back up just that critical partition (needed to allow you to restore stock at some point if necessary), there's also this tool.
  6. Install dual recovery using the installer. A recovery is a small boot-time maintenance environment (think of it like a custom BIOS) that enables easier flashing of ROMs and recovery from errors. With this package you get two: you can open ClockworkMod recovery by pressing the volume-up button after the green/purple light flashes during bootup, and TWRP recovery by pressing volume-down. Sometimes one is needed rather than the other.
    1. Download installer Z1C-lockeddualrecovery2.8.25-RELEASE.combined.zip (or newer) from here. This version can be used both to install initially and later to flash.
    2. Unzip it (but keep the .zip for later) and run install.bat/install.sh. Choose (2) rooted with SuperUser. Connect phone when prompted and wait until complete. Reboot phone.
  7. Upgrade to Lollipop using this guide to create a pre-rooted firmware.
    1. Run flashtool to download the firmware version you want (14.6.A.0.368 or 14.6.A.1.236 either customized for your region or generic) by clicking the XF button, finding it and downloading by clicking its name on the far right, selecting to unpack automatically. This takes a while as it is big. Once complete, it will be in your .flashTool/firmwares directory.
    2. Download SuperSU
    3. Download PRFCreator, unzip it and run it.
      1. Choose the FTF file you just downloaded in flashtool.
      2. Choose the SuperSU binary you just downloaded.
      3. Choose the Z1C-lockeddualrecovery2.8.25-RELEASE.combined.zip you downloaded previously for the recovery.
      4. Select all of the checkboxes except "sign zip" (including "legacy mode" which is below sign zip) and click "Create". This will take a while.
      5. It will appear in the PRF directory as flashable-prerooted.zip. Copy this to the phone/SD card.
    4. Install the pre-rooted firmware.
      1. Reboot your phone into TWRP recovery by pressing the down arrow after the green/purple light blinks during bootup.
      2. Go to Wipe, Advanced Wipe; choose Dalvik Cache and Cache, and wipe (maybe unnecessary but doesn't hurt).
      3. Then return to main menu and install. Find the flashable-prerooted.zip file and install it.
      4. Once again wipe dalvik and cache.
      5. Then reboot to system. It will take a while as the new OS is configured and apps are optimized. Don't fret, all this optimizing means the apps will run faster later on. The NFC firmware will also be updated. 
  8. Done! You now have a rooted phone with a locked bootloader running Lollipop 5.1.1!
To optionally install XPrivacy:
  1. Via the browser on your phone, download and install the latest XPosed Installer app from http://bit.ly/1LodTO5 (I used 3.0-alpha4). It won't work yet.
  2. Download the latest Xposed framework zip to your phone from the same link. The Z1Compact requires the 32-bit ARM version. For Android 5.1.* it's SDK22 (if you updated to 5.0.* instead, use SDK21). I used xposed-v80-sdk22-arm.zip. (SDK23 is for 6.* which likely won't be available on the Z1 compact).
  3. Reboot the phone into TWRP recovery by hitting the volume down button once the green/purple LED lights up during boot. Install the xposed SDK file zip you just downloaded and reboot to System. The Xposed Framework is now installed, the next bootup will take a while.
  4. On your phone, now visit http://bit.ly/1QfkTxT and download and install the latest XPrivacy APK. Reboot the phone a second time.
  5. Go into Xposed installer, under modules, enable XPrivacy. Reboot the phone.
  6. XPrivacy can now be configured.
If you're a bit hesitant about all the steps, this can be a useful video to get the idea of how the tools work (though it's outdated and following a different procedure).


Thanks to: SlikToxic, WaleDac, Jamal, zxz0O0, and Muuuly who first found the components of this solution. And to all the linked app developers!

* Really appreciate making your purchase through these links as I'll get a small commission at no cost to you.

Friday, September 18, 2015

Creating Lightroom Web Gallery plugins

The basic reference is the Lightroom SDK Guide, Chapter 6. But the relevant section hasn't been updated since it was first written, and a lot is missing. This forum could potentially be helpful, but there isn't much about web plugins. There are also a LOT of quirks and bugs to work around when creating Lightroom galleries, and amazingly almost none of them are discussed on the net.

This is a very incomplete list of things I've encountered or learned:
  • When debugging, changes to luascript (*.lrweb) only update reliably by restarting Lightroom. Errors therein typically cause the gallery to disappear from the list of galleries. Changes to the HTML template code itself are updated whenever you change any gallery setting that triggers a preview browser refresh (or by switching to Library view and back). Errors in this HTML syntax typically (but not reliably) cause a web page showing an error message to appear. Sometimes things will just inexplicably not work.
    • New or modified broken web galleries and web templates will only appear after LR restart.
  • If you use manifest.lrweb's AddResources to actually move the files (i.e., its purpose: the source and destination differ), then most if not all of the resources will not be available when rendering the preview. One workaround is to explicitly AddResource or AddPage each file that's actually used in the preview. But this has problems too. I wound up using AddResources with the same source and destination directories.
    • It's important that your pages (referenced in e.g. AddPage() or AddGridPages())  NOT be in your source directory for AddResources. They may however both have the same destination directories.
  • In Windows, Lightroom 4 appears to use an Internet Explorer 6 or 7 rendering engine for the internal preview web page, so sadly this needs to be a target of your gallery. Debugging it in IE 7 (or at least a later version running with F12 compatibility mode for IE 7) is helpful to get it working in the preview mode (since I haven't figured out any facilities for debugging the actual preview album within Lightroom). Not sure about other versions of LR or on a Mac.
  • Correction: It turns out Lightroom 4 is actually using a modified build of the WebKit library used by Safari 4.0.3 (released in August 2009) - the user agent string is "Mozilla/5.0 (Windows; U; en-US) AppleWebKit/531.9 (KHTML, like Gecko) AdobeAIR/2.7.1". That's also seriously old, but at least you can download that old Safari version and test with it outside Lightroom, which I highly recommend if you're having trouble.
  • There is a maximum of one level of indirection when referencing variables in galleryinfo.lrweb. E.g., if you want variable A to depend upon the value of another (using the function() syntax) and others to depend upon variable A, it's not possible.
  • Variable names like "nonCSS.text.name" are actually twice-nested Lua tables. Hyphens are not allowed in such names (are you subtracting?). To work around this, you can express it as: nonCSS.text["hyphenated-name"].
  • Typical methods of debugging the lightroom preview webpage are unavailable (alerts, console logs, inspecting the dom, etc). live_update.js provides an AgDebugPrint() function, but I have no idea how or if it's possible to see output from this call. But one thing you can do is make a "debug" div and insert things into the content of that div to be displayed on page. Primitive, but helpful - I wish I'd thought of it sooner.
    • Also, lightroom puts the preview album in the temporary folder named AgWPGPreview-### (eg, C:\Users\username\AppData\Local\Temp\AgWPGPreview-21). Inspecting the files here can tip you off about what may be going wrong when viewing inside LR (note that the images are not written here, or at least not named as you'd expect).
    • Of course, it's useful to export the gallery and use typical browser debugging tools as well.
  • If you want to support live update, you need to set the variable supportsLiveUpdate = true in galleryinfo.lrweb. This is unmentioned in all documentation. For debugging it's best to turn it off, to ensure things update as mentioned above. live_update.js calls methods under the variable 'myCallback', but checking for this variable's existence fails and causes live_update to bonk, even under the Lightroom preview. That means we can't check, so the code will give errors outside the preview (if you're testing; it shouldn't be included in the export anyway).
  • For every metadata item you'd like to use for images (eg, title or gps location), you need to include them in a perImageSetting. These then need to be provided in the properties, perImage field of the GUI to be actually provided (it would also work if you don't provide a properties block, but then these settings will appear in random order on the GUI). Useful ones include com.adobe.X where X can be: caption, title, location (which is actually sublocation), city, state, country, GPS, and many more (all undocumented). If you name it as perImageSetting.title, it will be available under the image's metadata.title field in LuaPages. A longer list of properties is here.
  • Luascript in LuaPages can be really tough to debug. I now do one small change at a time and check for failures by reloading the preview window and checking the generated preview html. One thing to note is that there seems to be only one pass-through by the interpreter and thus functions must be defined before they are used.
  • If in your LuaPages you have long sections of html/javascript/etc with no luascript elements, then you may get the error "chunk has too many syntax levels". You can cure it by placing luascript comments e.g., <% --[[ Comment here ]] %> to break up the offending section.
  • When making Lightroom UI elements with titles, you can use "\n" to break up long lines in the text. But, if you use the LOC function to localize the text and use other languages, there does not appear to be any way to break up lines. Thus other languages must fit on one line.
  • Variables starting with "appearance." have magic powers in that they trigger a request to update the appearance of the webpage in the UI; other variables won't trigger a refresh. Lightroom sends such changes to the web page's document.liveUpdate() function to determine how much of the page needs to be regenerated. I found that the return value "invalidateOldHTML" is almost always what I needed, so I replaced the provided function to always return this (YMMV). Variables starting with "metadata" will be treated as text, with any potential HTML tags removed; they also trigger a refresh, like the "appearance" variables.
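As a concrete example of the temp-folder tip above, here's one way to locate the preview album on disk (a sketch; on Windows the temp root is %LOCALAPPDATA%\Temp rather than $TMPDIR):

```shell
# List Lightroom's temporary preview-album folders (AgWPGPreview-###)
# so their generated HTML can be inspected directly.
TMP_ROOT="${TMPDIR:-/tmp}"
find "$TMP_ROOT" -maxdepth 1 -type d -name 'AgWPGPreview-*' 2>/dev/null
```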
Please share any more that you know about and/or know how to work around. Or if you're stuck and need a hand, perhaps it's something I've encountered. Ask in the comments.

Thursday, September 10, 2015

CMake / Visual Studio "the parameter is incorrect" error

You're running configure on a build project with CMake for the first time on a computer, and it bonks inexplicably while trying to detect the C/C++ compiler features. You have Visual Studio installed correctly after all, and it builds things without problems both within the GUI and from the command line. Inspecting the logs, you see that the errors occur when CMake is trying to compile an internal file such as CMakeCXXCompilerId.cpp using the '@' feature of the compiler to provide command-line parameters via a file. The only reported error is from the compiler, saying that "the parameter is incorrect".

The problem turned out to be caused by BeyondTrust's Trusted Desktop software (which is corporate security software). Running 'elevated' did not help. But removing Trusted Desktop altogether made the problem go away.

In my case, this occurred on Windows 7, with Visual Studio Professional 2010, 2010sp1 and 2013 Express versions, and CMake 3.3.1. But I suspect it's not very sensitive to these versions. Since I reached the end of the internet trying to figure this out, hopefully this will help someone.

(While I'm at it, I also happen to know that Trusted Desktop is incompatible with recent versions of Oracle VirtualBox. Versions since 4.3.12 have hardened security features, verifying that DLLs are what they say they are. But these features fail under Trusted Desktop, making it impossible to run VMs. The workaround is to use version 4.3.12, which predates these features.)

Wednesday, August 26, 2015

Fancy, Semantic tooltips with CSS only

I came across this post describing a way of allowing control over the appearance of tooltips in a semantic way that avoids too much extra markup and doesn't trip up screen readers. Really good idea.

[Image: tooltip shown when hovering over the first "really"]

I made a couple modifications to allow it to work with a href tags as well as non-link spans, eg:

<p>I <a href="#" data-tt="I mean really, really" class="tt">really</a> like pie.</p>

or

<p>I <span data-tt="I mean really, really" class="tt">really</span> like pie.</p>

I also spiced it up some and made it so it doesn't appear immediately, but only after hovering for a bit, and then transitions in gracefully.

Finally, since tooltips may be enclosed in a container with overflow hidden, or near the right edge of the viewport, I added some code to choose the positioning for each tooltip (lower left, lower right, upper left, or the default: upper right). When set appropriately, this ensures they can be seen.

Best of all it seems to be supported by essentially all browsers, including back to at least IE6.

Here's the CSS:

.tt {
  position: relative;
  cursor: pointer;
  outline: 0;
}
.tt:after {
  content: attr(data-tt);
  position: absolute;
  left: 1.7em;
  top: -1.7em;
  white-space: nowrap;
  color: #FFF;
  background: rgba(0, 0, 0, 0.8);
  padding: 3px 7px;
  border-radius: 3px;
  -moz-border-radius: 3px;
  -webkit-border-radius: 3px;
  margin-left: 7px;
  border: 1px solid gray;
  margin-top: -3px;
  transition-delay: 0s;
  visibility: hidden;
  opacity:0.0;
}
.tt:hover:after {
  visibility:visible;
  opacity: 1.0;
  transition-delay: .5s;
  -webkit-transition-delay: .5s;
  transition-duration: .4s;
  -webkit-transition-duration: .4s;
  transition-timing-function: ease-in;
  -webkit-transition-timing-function: ease-in;
}
.tt-lr:after {
  top: 1.7em;  
}
.tt-ll:after {
  top: 1.7em;
  left: auto;
  right: 1.7em;
}
.tt-ul:after {
  left: auto;
  right: 1.7em;
}
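
To pick a position, add one of the modifier classes alongside tt, e.g.:

```html
<!-- default: tooltip to the upper right -->
<p>I <span data-tt="I mean really, really" class="tt">really</span> like pie.</p>

<!-- near the right edge of the viewport: lower left instead -->
<p>I <span data-tt="I mean really, really" class="tt tt-ll">really</span> like pie.</p>
```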


You can see it in action here

Friday, July 31, 2015

Deploying a website to Amazon S3

You've generated a great website that you now want to deploy to Amazon S3 for hosting. I assume you've already set up an S3 bucket and configured it for website hosting. Now you have to transfer the files comprising your site to S3, and keep them up to date. You can do this manually with a GUI tool like CrossFTP or S3 Browser. But perhaps you have a few tweaks you'd like to make before you deploy. Most importantly, you'd like clients to be able to use gzip compression when the site is served, since this can greatly speed up site loading and also reduce your costs. You can also optimize/minify your images for the web. Finally, you'd like to set all the custom S3 settings you need (Cache-Control headers, Reduced Redundancy Storage, etc.) in one step, reducing requests (and thus costs) and saving lots of time and hassle. And ideally, if you make changes to an existing site, you should only need to upload the files that have changed, again saving time and money.

Here, I present a script that does all of this, using the convenient example of a generated Gallerific Web Album. This script requires considerably more technical skill than generating an album, since you'll need to use a command line in the form of a bash shell (which comes with MacOS and Linux, and which you can get with Cygwin on Windows, as I do), but it's worth it!

To use it you'll need several tools:
  • s3cmd which is used for copying to/interacting with your S3 bucket. To install:
    • Linux: normally you can get the package as here.
    • Cygwin: get zip from github, run python setup.py install [which requires python-setuptools and to have run easy_install dateutil]
    • Mac via homebrew: brew update && brew install s3cmd
    • In all cases you must then run s3cmd --configure to provide your S3 access keys.
  • mozjpeg v3.0+ which is now the best tool for compressing jpegs for use on the web (available in binary form here). (Be sure that jpegtran/djpeg/cjpeg on your path is this one, and not the default libjpeg implementation.) Lossless compression rates of 60%+ are achieved on thumbnails, and 8-10% on larger images. (It is now as good or better than common online tools and better than things like jpegrescan and adept.sh by my testing.)
  • gzip, or optionally zopfli, for compressing text files as a traditional web server would do (zopfli produces files that are 4-8% smaller than gzip -9. Note that although 7zip also produces smaller files than gzip, it will not work with this script because the timestamp cannot be excluded and so MD5 signatures for files always change)
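You can see the timestamp issue for yourself: with -n, gzip omits the embedded filename and timestamp, so re-compressing unchanged content yields byte-identical files (and thus unchanged MD5s), which is what allows s3cmd sync to skip them. A small sketch:

```shell
# Re-compress the same content after touching the file's mtime.
# With -n the outputs are byte-identical; without it they could differ,
# because plain gzip embeds the input file's modification time.
printf 'hello web\n' > page.html
gzip -n -9 -c page.html > run1.gz
sleep 1
touch page.html
gzip -n -9 -c page.html > run2.gz
cmp -s run1.gz run2.gz && echo "deterministic: safe for s3cmd sync"
```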

If you're curious, here is the script (deploy_to_s3):

#!/bin/bash
##################################################
# Deploys a gallerific gallery to the web server
##################################################
# Usage: deploy_to_s3 [directorytodeploy]
##############CONFIG##############################
S3_BUCKET=s3://mys3bucket
REDUCED_REDUNDANCY=-rr
ADD_HOME_LINK=1 # set 'pathtohome/index.html' as applicable
#GZIP_CMD='gzip -n -9'
GZIP_CMD=zopfli
JPEG_QUALITY=80  # lossless or number (recommend 65-85)
#S3CMD_DEBUG="-v --dry-run" # can use --dry-run on s3cmd to see the effects but not execute, also can use -v for verbose
##################################################

GALLERY_DIR=$1
GALLERY_DIR=${GALLERY_DIR%/} # remove trailing slash
if [ -z "$GALLERY_DIR" ] || [ ! -d "$GALLERY_DIR" ] ; then 
  echo "Invalid parameter. USAGE: deploy_to_s3 [directory to deploy]"
  exit 1
fi
DEPLOY_DIR=$(mktemp -d)
if [ "$JPEG_QUALITY" = "lossless" ] ; then 
  JPEG_CMD='jpegtran -outfile {}.tmp {}'
  JPEG_CMD2='mv {}.tmp {}'
else
  JPEG_CMD='djpeg -outfile {}.pnm {}' 
  JPEG_CMD2="cjpeg -quality $JPEG_QUALITY -outfile {} {}.pnm"
fi

cp -Rf "$GALLERY_DIR" "$DEPLOY_DIR"

if [ $ADD_HOME_LINK -eq 1 ]; then
  echo ADDING CUSTOM HOME LINK TO INDEX.HTML ...
  sed -i '/<div class="header"[^<]*>/ a <p><a style="color: #548BBF !important;" href="pathtohome/index.html">Home</a></p>' \
      $DEPLOY_DIR/$GALLERY_DIR/index.html 
fi

# compress text files and mark them for direct download by browsers using gzip encoding 
echo COMPRESSING TEXT FILES ...
find $DEPLOY_DIR -regex '.*\.\(html\|js\|css\|xml\|svg\|txt\)' -exec $GZIP_CMD {} \; -exec mv {}.gz {} \;
echo UPLOADING COMPRESSED TEXT FILES ...
# (the --rinclude pattern must cover every extension compressed by the find above,
# or compressed files would be uploaded later without the gzip header)
s3cmd sync $S3CMD_DEBUG --acl-public $REDUCED_REDUNDANCY --add-header="Content-Encoding":"gzip" \
              --guess-mime-type --exclude='*' --rinclude='.*\.(html|js|css|xml|svg|txt)' --signature-v2 $DEPLOY_DIR/$GALLERY_DIR $S3_BUCKET/
echo
echo

# re-compress jpegs using mozjpeg encoder (https://github.com/mozilla/mozjpeg, http://mozjpeg.codelove.de/binaries.html)
echo IMPROVING COMPRESSION ON JPGS ...
# Note some EXIF/comments may be lost, particularly using lossy compression
#pushd/popd necessary because mozjpeg for windows doesn't handle roots (eg /tmp)
pushd $DEPLOY_DIR/$GALLERY_DIR
find im -regex '.*\.jpg' -exec $JPEG_CMD \; -exec $JPEG_CMD2 \;
find im -regex '.*\.pnm' -exec rm -f {} \; # clean up after lossy compression
popd
echo UPLOADING IMAGES AND REMAINING FILES ...
s3cmd sync $S3CMD_DEBUG --acl-public $REDUCED_REDUNDANCY --add-header="Cache-Control":"public,max-age=86400,no-transform" --guess-mime-type \
                --signature-v2 $DEPLOY_DIR/$GALLERY_DIR $S3_BUCKET/
echo
echo

echo REMOVING DELETED FILES ON S3 SIDE CLEANING UP TEMP FILES ...
s3cmd sync $S3CMD_DEBUG --delete-removed --acl-public $DEPLOY_DIR/$GALLERY_DIR/* $S3_BUCKET/$GALLERY_DIR/
rm -rf $DEPLOY_DIR
echo
echo
echo DONE!


To just use it as is, save the file  deploy_to_s3  and make it executable (chmod +x deploy_to_s3). Next you need to configure it:
  1. Set S3_BUCKET to point to your bucket
  2. If you wish to use gzip, uncomment the first GZIP_CMD line and comment the other.
  3. Before you run for real, I'd recommend uncommenting the S3CMD_DEBUG option to see what the effects would be but not execute them.
  4. I like to add a home link at the top of my pages. to do this set ADD_HOME_LINK=1 and change the path to your homepage in the html code (<p>...</p>) in the sed command.
  5. Choose the jpeg compression quality, either "lossless", or an integer typically 65-80.
  6. If you don't want to use reduced redundancy storage, comment out the REDUCED_REDUNDANCY option.
  7. Change what directories get image compression (currently im/ and subdirs)
To run, go to the parent of your gallerific web album's folder. Then simply execute:
deploy_to_s3 [mygallerificwebalbum]

A few notes/features:
  • If you re-generate the album or make tweaks, but only some files have changed, you can re-run deploy_to_s3 and only the files that have changed will be transferred (saving time and money again).
  • Only jpegs in the im directory will be compressed. Other images in gallerific web albums have already been minified. 
    • For use with gallerific, I recommend using lossy compression with this script, combined with very high quality settings when saving JPGs in Lightroom. This will lead to much smaller jpegs than using a lower quality setting in Lightroom combined with lossless compression here. For example, a full-size image that I find to be acceptable quality for the web at 65% quality in Lightroom was 827KB. This can be losslessly compressed to 769KB. However, if the image is saved at 95% quality in Lightroom (effectively lossless) and compressed at 75% using mozjpeg, it occupies only 580KB with approximately equivalent image quality. The only downside is potentially lost EXIF/comment data with lossy compression (and your local copy will be quite large - a perhaps more reasonable compromise, if that is a concern, is LR quality 92, mozjpeg 80). [Note that I've learned that LR jpeg quality for the Web is not the same as that used for normal LR export; it appears to be generally lower.]
  • HTTP headers are added so that browsers will recognize and get the gzip format compressed text files over the internet and use them as normal; you and users save significant time and bandwidth and the page loads faster.
  • The Cache-Control HTTP header is set to allow browsers (and CDNs/proxies) to cache all non-text files (those other than html/css/js/xml) for 24 hours, dramatically reducing HTTP requests and speeding subsequent views. But be aware that if you make changes to these binary files, users with cached copies may not see the changes for up to a day unless you rename the files.
Although it does take a bit of work to set it up, using this tool should save you and users enough time to enjoy a choice beverage, which you now deserve. So do!

Tuesday, July 21, 2015

Gallerific: A Modern HTML5 Lightroom Web Gallery

Latest update May 2016.  
 Also in Español, Français & Deutsch.


This post is to announce the release of a powerful new Web Gallery plugin for Adobe Lightroom.

Like many photographers, I use Lightroom to manage my photos and do most of my adjustments, and it's just great. But when it comes to exporting them in an easily viewable way, I was shocked to realize how poor the options were. The included galleries are simply ancient; most of them use Adobe Flash, which is being phased out on the web, is slow and painful to use and is a security headache to boot. The rest use decade-old HTML code and produce web pages that are just plain painful to use. And searching around, I wasn't able to find any third-party web albums that worked reasonably well either, especially for larger sets of photos.

The free Gallerific HTML5 Web Gallery plugin is here to fix all of that and bring Lightroom web exports into the present.




Features:
  • Allows you to create great albums, with captions (or not), from the program you already use to manage your photos and metadata.
  • The entire web site is generated automatically, with multiple customization options. No special expertise is needed, but there are customization options for those wanting more control.
  • Albums are fast, responsive, and support large numbers of images (up to thousands, anyway).
  • Albums are standards-compliant, using Javascript, HTML5 and CSS3 - and no obsolete Flash! They should work well on any modern web browser or device, and have great backward compatibility too. They even work minimally if your browser doesn't support Javascript.
  • Albums follow modern responsive web design principles, automatically adapting to the size of the web browser or mobile display.
  • Images can be navigated by mouse or touch screen (including by swiping), or by using keyboard shortcuts (Left/Right, Page Up/Down, Home, End, Space).
  • For a hands-free experience, you can play a slideshow to flip through photos automatically.
  • The web page uses no server-side scripting, so it can be served very cheaply from a static web server such as Amazon S3 (or any other web server).
  • Lightroom versions 2.0 to 6 (and CC) should all be supported
  • Runs under Windows or MacOS (generated galleries are viewable under any OS)
  • Available in English, Español, Français, & Deutsch
  • Totally FREE!

Version 1.0 (22 July 2015, in brief):

Smart image pre-fetching for immediate transitions between images, touch support, responsive design adapts to page size. Thumb navigation. Captions may overlay image or lie below it. Various configuration options, color schemes and example templates. A limited representation of the album is viewable as a preview inside Lightroom. Center-crops images to create aesthetically-pleasing square thumbnails (something Lightroom doesn't support).

New in version 1.1 (23 August 2015):
  • Album loading speed is massively improved, using a variety of techniques.
  • Greatly improved mobile / touch device support:
    • Smaller mobile devices, or tablets in portrait orientation, are shown thumbnails on the bottom, with native, re-flow-enabled scrolling. This means that thumbnails scroll totally naturally on touchscreens, without losing any functionality on non-touch devices with small windows (buttons and the mouse wheel still work in this case).
    • Apple iOS bug causing flickering animations has been addressed.
    • Very small devices like phones are now supported without any quirks.
    • Pinching to zoom will open full size images (when full size image links are provided).
  • Adds options to omit full size images, and to disable right-clicking to save images.

New in version 1.2 (11 September 2015):
  • Full gallery previews are now available inside Lightroom! Now almost all functionality and styling can be seen when changing settings.
  • Galleries can now include social media sharing. Users can click to share galleries via Facebook, Google, Twitter, and Pinterest.
  • Maps from Google can now be opened for any image with geo-location data (GPS or city/country).
  • The icon scheme has been standardized. High quality, scalable SVG icons are shown in 95% of browsers, with PNG images used as a backup for the rest.
  • Page loading is even faster now, and more robust, using CDNs where possible. It still works when you're running locally (even if offline), or if CDN services are blocked or down.
  • The album is now compatible with Internet Explorer all the way back to 5.5 and other older browsers as well as Opera Mini.
  • Additional customization options.
  • Other minor improvements and bug fixes.


New in version 1.3 (12 January 2016):
  • Much greater control over where to place identity plate (logo), titles, and contact info
  • Allow customization of all text in generated galleries
  • Internationalization - now includes versions in Spanish, French and German
  • Better appearance on low-performance devices
  • Various minor improvements and bug fixes, and additional documentation
New in version 1.3.1 (9 May 2016):
  • New tool to optionally allow downloading all images in the gallery.
  • You can now choose the maximum number of thumbnail columns that will be shown (from 2-4). Also improved the layout and use of screen real estate to allow slightly larger image area.
  • Various minor bug fixes and improvements.
Update to version 1.3.1 (7 September 2017):
  • Minor bugfixes for viewing GPS position and auto-starting slideshows.


Installation:
  • Unzip the "GallerificAlbum.lrwebengine" folder and copy this to the Lightroom Web Galleries folder. This folder can be opened within Lightroom preferences (Win: Edit->Preferences, Mac: Lightroom->Preferences) by clicking on the "Show Lightroom Presets Folder" button then opening the Web Galleries folder.
    • On a Mac this is located under: /Users/username/Library/Application Support/Adobe/Lightroom/Web Galleries/
    • On Windows Vista/7/8, this is normally under: C:\Users\username\AppData\Roaming\Adobe\Lightroom\Web Galleries\ 
    • On Windows XP this is normally under: C:\Documents and Settings\username\Application Data\Adobe\Lightroom\Web Galleries\
  • Recommended but optional: Install the set of templates ("skins") for albums by copying the "Gallerific-Templates" folder into the Web Templates\ folder under the Lightroom presets folder. These will give you a good idea of output options and possible color schemes.
  • (Re-)Start Lightroom; the Gallerific gallery should appear in the list under the Web workflow

Usage:
  • As with any web album, select the images you wish to export using the Library view in Lightroom. Be sure they have all the metadata you want to include in the album. When you're happy with the images and their order, open the Web view.
  • Select the "Gallerific HTML5 Gallery" from the Layout style at the top right.
  • In the Web view, choose the options you want. Enter titles, descriptions, customize your image captions and use a custom logo, and choose where they appear. Customize your color scheme and how images will appear. Choose the size of the gallery image slide and an expanded full size image to be generated, as well as caption metadata and watermark to include. You can use the Preview in Browser button to check out your album’s appearance. And then simply Export to disk or Upload (to a server via FTP/SFTP) the complete album when you’re ready! The resulting HTML pages and supporting files will be generated in the folder or server location you selected. 
    • You can view the gallery on your local computer by opening the file index.html in a web browser.
    • After exporting to disk, the generated folder can be loaded onto a web server for access via the web or just copied to a CD/DVD or shared folder to share with others.
  • More details about creating web albums in general are available from Adobe.

Click here to see an example album - note that galleries are quite customizable and this is just an example.


Download here and get started!


License: This project is open source and released under the MIT License.


Many thanks:
This project couldn't have been done without the galleriffic and jquery history jQuery plugins; these have been extensively modified and improved for this project and have been forked on github. The project also benefits from the touchSwipe and transit plugins, the grunticon toolkit, the mediamatch polyfill, jszip, filesaver.js, and fallback.js. Tablet image above is thanks to zanilic.com. Loader icons were generated from spiffygif.com and preloaders.net and compressed with kraken.io. Scalable icon designs are by Elusive, Typicons, Font Awesome, and Icomoon. And thanks to Henriette Donis for invaluable help in translating.


I'd love to hear about any websites that you create using this tool. Feel free to email me or leave a message below to let me know! And if you have any problems, suggestions or requests, leave them here too.


If you find the Gallerific Web plugin useful to you or your business, or if it's saving you time or money, I'd really appreciate a contribution. It's taken me countless hours to set this up and make it work well (now I realize why there are so few other options available!). Every bit counts. And many thanks.


Addendum: If you want to deploy your website onto Amazon S3, I provide an automated approach here. (A word of warning: unlike creating an album, it does require somewhat advanced computer skills. An easier, but less efficient, option is to simply drag and drop the album using a GUI tool like S3 Browser or CrossFTP.)

HGGIT errors pushing to GitHub

If you're using the hggit extension of mercurial (hg), and you get the error

pushing to git+https://github.com/username/project.git
http authorization required for https://github.com/username/project.git/info/refs
realm: GitHub
 searching for changes
adding objects
URLError: [Errno 10054] An existing connection was forcibly closed by the remote host

after trying to push to a cloned repository, the problem is probably your URL (even though it seems to connect and authenticate fine). For some reason certain files are inaccessible (in git you'd get: error: The requested URL returned error: 403 Forbidden while accessing https://github.com/username/project.git/info/refs). To fix this you should add your username to the URL:

git+https://username@github.com/username/project.git

This would work under git, but it doesn't work with hggit for some reason. It gives:

URLError: [Errno 11003] getaddrinfo failed

That means our only alternative is to use SSH, and with GitHub, you're required to use public/private keys. These need to be generated, and can then be added in your GitHub account settings here. For Windows/PuTTY, here's good info on generating them. They can then be used by default in Mercurial by adding an entry to the Mercurial config file:

[ui]
ssh = tortoiseplink.exe -ssh -i "C:\Users\UserName\mykey.ppk"
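
With the key configured, the remote itself must also use the SSH form of the URL (hg-git uses a git+ssh:// scheme; username/project is a placeholder as before), for example in the repository's .hg/hgrc:

```
[paths]
default = git+ssh://git@github.com/username/project.git
```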

Hopefully you can now connect using SSH and push changes to the repo. The whole thing falls under the category of "should be easy but isn't."

Tuesday, May 19, 2015

Stably monitoring console programs in Linux

So you want to run a program, while keeping a log file for posterity. Obviously you redirect stdout to a log file (and in this case stderr as well, while running it in the background):

myprogram someparams > myprogram.log 2>&1 &

Now you also want to monitor it while it runs to see how things are going (you could also pipe through tee, but then you're committed to watching the whole run). So use:

tail -f myprogram.log

But what if you've ssh'ed into the shell, so you might get disconnected? You don't want your program to abort just because you lose the connection, or because your shell ends for whatever reason. You could use bash's disown if you've already run the program. But if you're thinking ahead, simply:

nohup myprogram someparams > myprogram.log 2>&1 &

You probably already knew that too. But actually, that raises a problem. Now that stdout is redirected to a file, glibc switches to full (block) buffering, so if you want to monitor output, you'll be waiting a while, and will only get it in chunks (or possibly even at the end of execution). To the rescue comes unbuffer, part of the expect package. (On Ubuntu, install it via apt-get install expect.) So finally:

unbuffer nohup myprogram someparams > myprogram.log 2>&1 &

This works by running the program inside a pseudo-terminal, tricking it into thinking it's writing to the console, which makes glibc switch to line buffering. But there's actually now an even better way, using stdbuf (part of GNU coreutils), assuming you have it:

stdbuf -oL nohup myprogram someparams > myprogram.log 2>&1 &

And now you can tail to monitor continuously without issues!
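Note that stdbuf only works on dynamically linked programs that use glibc's stdio, but that covers most common tools. A quick way to see the effect is with grep, which normally switches to block buffering when its output goes to a pipe or file:

```shell
# With -oL, grep's matches are flushed line by line instead of arriving
# in one big chunk when its buffer fills (or at exit).
printf 'a\nb\na\n' | stdbuf -oL grep a
```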

Friday, April 10, 2015

Accepting self-signed and local IP certificates with HTTPS in Java

By default, the connection will fail any time Java tries to connect via HTTPS to a server whose certificate doesn't have an issuer chain, or whose issuer chain doesn't lead to a recognized issuer in the Java trust store. That means if the server's certificate is self-signed, it will fail. The exception message will be:

javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target

Many people get around this by simply accepting all certificates, or by allowing users to accept the certificates. But this is potentially insecure because there might be a man-in-the-middle attack going on. A better approach in many cases is to allow users to provide a certificate that they trust. This can be done permanently by using keytool, a program that comes with the JRE. But if you want the option for users to trust particular certificates without adding them permanently to the Java default keystore, it can be handled in code.

The following code downloads a file from a given URL. If the URL uses HTTPS, one may optionally add an additional trusted key to the store (presumably either the server's own certificate, or that of a trusted issuer which issued the server's certificate). This allows a secure connection to such servers. (It uses Java 7 for the file transfer.)



import java.io.BufferedInputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.InputStream;
import java.net.MalformedURLException;
import java.net.URL;
import java.net.URLConnection;
import java.nio.file.CopyOption;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.security.KeyStore;
import java.security.cert.Certificate;
import java.security.cert.CertificateException;
import java.security.cert.CertificateFactory;
import java.util.logging.Logger;
import javax.net.ssl.HostnameVerifier;
import javax.net.ssl.HttpsURLConnection;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSession;
import javax.net.ssl.SSLSocketFactory;
import javax.net.ssl.TrustManagerFactory;

// LOGGER is assumed to be defined in the enclosing class, e.g.:
// private static final Logger LOGGER = Logger.getLogger( "Downloader" );

/**
 * Downloads from a standard URL; works with HTTP and HTTPS, and handles self-signed
 * certificates with no issuer chain -- in this case the certificate must
 * be provided for security purposes.
 *
 * The resulting file will be overwritten if it already exists.
 *
 * @param urlString location of file to download
 * @param fileName local filename where file will be saved
 * @param certFilePath path and filename of the certificate to be used if the server
 *                     certificate is self-signed (with no issuer path), or null/empty if not needed
 * @return the size in bytes of the downloaded file
 * @throws IOException in case of any connection error, or if the certificate doesn't exist
 * @throws FileNotFoundException if the output file cannot be created
 */
private long downloadFileFromURL( String urlString, String fileName, String certFilePath )
throws FileNotFoundException, IOException
{          
    try {
        LOGGER.fine( "Fetching URL: " + urlString );
        URL url = new URL(urlString);
        
        URLConnection conn = url.openConnection();
        
        if ( url.getProtocol().equals( "https" ) && certFilePath!=null && !certFilePath.isEmpty() ) {
            // Server uses a self-signed certificate (with no issuer chain)
            // we require the user to provide the public key of the server (as obtained
            // from a trusted source, eg the sys admin) for verification (to 
            // ensure there is no man-in-the-middle attack)
            HttpsURLConnection sconn = (HttpsURLConnection) conn;

            try {
                /* Load the default trustedStore (normally containing root certificates) if one exists,
                 * otherwise create an empty one
                 */
                KeyStore keyStore = KeyStore.getInstance(KeyStore.getDefaultType());
                String jrePath = System.getProperty("java.home");
                String defaultTrustStorePath = jrePath + File.separator + "lib" + File.separator + "security" + File.separator;
                File certfile = new File(defaultTrustStorePath + "jssecacerts"); // first default location
                if (!certfile.isFile()) // use fall-back default location
                    certfile = new File(defaultTrustStorePath + "cacerts");
                if (!certfile.isFile()) // use no default trust store, only provided key.
                    certfile = null;
                char[] javaTruststoreDefaultPassword = "changeit".toCharArray();
                keyStore.load( certfile==null?null:new FileInputStream(certfile) , javaTruststoreDefaultPassword); 

                /* load the provided certificate and add to keystore
                 */
                FileInputStream fis = new FileInputStream(certFilePath);
                BufferedInputStream bis = new BufferedInputStream(fis);
                CertificateFactory cf = CertificateFactory.getInstance("X.509");
                while (bis.available() > 0) {
                    Certificate cert = cf.generateCertificate(bis);
                    keyStore.setCertificateEntry( "selfsignedkey", cert );
                }
                
                // Set to use this keystore when creating ssl socket
                TrustManagerFactory tmf = 
                    TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
                tmf.init(keyStore);
                SSLContext ctx = SSLContext.getInstance("TLS");
                ctx.init(null, tmf.getTrustManagers(), null);
                SSLSocketFactory sslFactory = ctx.getSocketFactory();

                sconn.setSSLSocketFactory(sslFactory);
                
            } catch (FileNotFoundException e) {
                LOGGER.severe( "Unable to find specified certificate file '" + certFilePath + "'!");
                //could proceed to try without having this certificate, but if they provide it better to assume it's needed.
                throw new IOException(e); // Recast as IOException (to categorize it as a connection error)
            } catch (CertificateException e) {
                LOGGER.severe( "Error loading specified certificate; certificate may be corrupt. Details:" );
                e.printStackTrace();
                throw new IOException(e);
            } catch (Exception e) { // KeyStore-related exceptions
                LOGGER.severe( "Unknown exception setting up SSL connection. Details:" );
                e.printStackTrace();
                throw new IOException(e);
            }
        }
        
        // Save the result of the HTTP request as a file
        InputStream is = null;
        File outFile = new File(fileName);
        try {
            is = conn.getInputStream();
            Path dest = outFile.toPath();
            Files.copy( is, dest, new CopyOption[]{StandardCopyOption.REPLACE_EXISTING});
             
        } finally {
            if (is!=null)  is.close();
        }
        
       LOGGER.fine( "Transmission completed, wrote " + fileName);
       return outFile.length();
       
    } catch (MalformedURLException e) {
        LOGGER.severe( "Invalid URL requested: " + urlString);
        throw e; // derives from IOException
    }
         
}

Another certificate problem that may occur is that the certificate makes no reference to the IP, and you want to access the site via the IP (particularly if there is no way to verify the domain, as for example the server is on a local network). You might receive the following certificate error:

javax.net.ssl.SSLHandshakeException: java.security.cert.CertificateException: No subject alternative names present

Ideally, the certificate could add the IP to the SAN (subject alternative name) field and avoid this error, but issuers are moving away from doing this. But if the server is on your local network, i.e., you know the server operator, then this is not a big deal. You can simply override the error. The following code accepts certificates having this problem, so long as the requested IP is on a local (private) network. Simply place it in the static initialization block in the same class.


static {
    HttpsURLConnection.setDefaultHostnameVerifier(
        new  HostnameVerifier()
        {
            // function is called only when certificate name verification fails, to check whether to override
            public boolean verify(String hostname, SSLSession session)
            {
                // Accept local (private) network ipv4 address even if they don't match the certificate name.
                // (Ideally the IP would be put in the SAN:IP field when creating the certificate, then we wouldn't get here)
                if (hostname.startsWith("10.") ||
                    hostname.startsWith("192.168."))
                    return true;
                else if ( hostname.length() >= 7 && hostname.startsWith( "172.") && ".".equals(hostname.substring(6,7)) ) {
                    try {
                        int ipsubnet = Integer.parseInt( hostname.substring(4,6) );
                        if (ipsubnet>=16 && ipsubnet <=31)
                            return true;
                    } catch (NumberFormatException e)
                    {                          
                    }
                }
                return false;
            }
        });
}
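
If you control the server, the cleaner long-term fix is to issue it a certificate whose SAN field lists the IP, so the hostname check passes without any override. With OpenSSL 1.1.1 or later this can be sketched as follows (the IP address and subject below are hypothetical examples):

```shell
# Generate a self-signed certificate with the server's private IP in the
# SAN field. Substitute your own CN and IP address.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -keyout server.key -out server.crt \
    -subj "/CN=myserver" \
    -addext "subjectAltName=IP:192.168.1.10"
```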

I am indebted to some insightful Stack Overflow responses in developing this solution, particularly this one by erickson.