Git Tricks

Git is a great SCM/VCS, but sometimes it can be scary, unless you keep track of the useful commands that can save your day!


Stash

Ever needed to get back to a clean repo to apply a quick fix while you are in the middle of something, with plenty of changes waiting to be committed?

# stash any changes to tracked files
$ git stash

# stash only unstaged changes to tracked files
$ git stash --keep-index

# stash untracked and tracked files
$ git stash --include-untracked

# stash ignored, untracked, and tracked files
$ git stash --all
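
Here's the round trip in a throwaway repo (file names are made up for the demo): stash everything, get a clean tree for your quick fix, then pop your work back.

```shell
# throwaway repo to demo the stash round trip
set -e
cd "$(mktemp -d)"
git init -q
git config user.email demo@example.com
git config user.name demo
echo base > tracked.txt
git add tracked.txt && git commit -qm init
echo wip >> tracked.txt          # work in progress on a tracked file
echo scratch > untracked.txt     # plus a brand new file
git stash --include-untracked    # working tree is clean again
test -z "$(git status --porcelain)" && echo clean
# ...apply your quick fix and commit it here...
git stash pop                    # work in progress restored
```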


Merge strategy

The default fast-forward merge strategy is not always the most advisable with regard to your repo history tree:

[animation: git merge with fast-forward]

Often a non-fast-forward merge strategy generates an easier-to-follow history tree:

[animation: git merge with no fast-forward]

# merge with no fast-forward
$ git merge --no-ff <branch>
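
A quick way to see the effect in a throwaway repo (branch and commit names are made up): even though the merge could fast-forward, --no-ff records an explicit merge commit.

```shell
set -e
cd "$(mktemp -d)"
git init -q
git config user.email demo@example.com
git config user.name demo
git commit -qm root --allow-empty
base=$(git symbolic-ref --short HEAD)   # master or main, depending on your git
git checkout -qb feature
git commit -qm "feature work" --allow-empty
git checkout -q "$base"
git merge --no-ff --no-edit feature     # always records a merge commit
git log --oneline --graph --all         # the merge "bubble" is visible
```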

Condensed status

Git status is quite verbose, but it's easy to condense if you can handle the compressed display format:

$ git status --short --branch

Proper init

Creating the central repo is an administrative task you don't want to leave to the occasional developer: a repo's first commit is hard to rebase later, so start the history with an empty root commit.

$ git init
$ git commit -m "root" --allow-empty
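
A quick sanity check of the result in a scratch repo: the empty root commit sits at the bottom and real history starts on top of it.

```shell
set -e
cd "$(mktemp -d)"
git init -q
git config user.email demo@example.com
git config user.name demo
git commit -qm "root" --allow-empty
# real history starts on top of the untouchable empty root
echo hello > README
git add README && git commit -qm "first real commit"
git log --format='%s'   # first real commit, then root
```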

Fix the last commit

How many times have you discovered you missed something in your last commit? This will fix it without adding a new commit to your history and messing it up.

$ git commit -m 'fixed stylesheet'
# (facepalm)
$ git add css/main.css
$ git commit --amend --no-edit
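
The whole sequence in a scratch repo (using the file names from the example above), showing that history stays at a single commit after the amend.

```shell
set -e
cd "$(mktemp -d)"
git init -q
git config user.email demo@example.com
git config user.name demo
mkdir css && echo "body {}" > css/main.css
git add css && git commit -qm "fixed stylesheet"
echo "h1 { color: red; }" >> css/main.css   # the bit we forgot
git add css/main.css
git commit -q --amend --no-edit             # folded into the previous commit
git rev-list --count HEAD                   # still prints 1: no extra commit
```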

Force, delicately

It’s sometimes inevitable to force a push, but you can try to do it in the most delicate manner possible:

# force your push, but first ensure you are up to date with the remote
$ git push --force-with-lease


Make all the above commands shorter using aliases:

# git `please`
$ git config --global alias.please 'push --force-with-lease'

# git `commend` aka commit and amend
$ git config --global alias.commend 'commit --amend --no-edit'

# git `stsh` / `staash` / `staaash`
$ git config --global alias.stsh 'stash --keep-index'
$ git config --global alias.staash 'stash --include-untracked'
$ git config --global alias.staaash 'stash --all'

# git `st` aka short status
$ git config --global alias.st 'status --short --branch'

# git `grog` aka graphical log
$ git config --global alias.grog 'log --graph --abbrev-commit --decorate --all --format=format:"%C(bold blue)%h%C(reset) - %C(bold cyan)%aD%C(dim white) - %an%C(reset) %C(bold green)(%ar)%C(reset)%C(bold yellow)%d%C(reset)%n %C(white)%s%C(reset)"'
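
To try an alias without touching your real configuration, you can set it with local scope in a scratch repo first; here's `st` in action:

```shell
set -e
cd "$(mktemp -d)"
git init -q
git config user.email demo@example.com
git config user.name demo
git config alias.st 'status --short --branch'   # local scope, not --global
git commit -qm root --allow-empty
echo x > new-file.txt
git st   # prints the "## <branch>" header plus "?? new-file.txt"
```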

Github project website

This is nothing new, but apparently not many Github users know about this feature and I’ve recently learnt an easy way to set it up.

But let’s step back for a second: what are Github pages?

It's a web space granted to each Github user and project where you can publish anything related to the subject: users might decide to publish their curriculum/resume or contact information, while a project might publish nice-looking documentation.

A good set of information is available via a mini site which already guides you through the setup process, so I’m not going to reproduce that here.

Briefly, for a project site, the site contents are going to be hosted within the project git repository (it does make a lot of sense to me), within a dedicated branch, gh-pages.

What I wish to share is the set of git commands you can use to setup an existing project to host a clean Github pages setup.

Within the project folder you want to run:

git checkout --orphan gh-pages
git reset
touch .gitignore
git add .gitignore
git commit -m "gh-pages setup"
git push -u --all

You will end up with an almost empty folder (don’t panic, your project is not gone!) apart from the empty .gitignore file.

The project folder now displays what is published on your project’s Github pages, which is nothing at the start. You can easily switch between the project contents and the project website by using the git checkout command:

  • git checkout master re-populates your project folder with your project contents
  • git checkout gh-pages brings you back to project website editing

The nice part of the above sequence is that the newly created gh-pages branch is not going to share anything with your code branch structure or, if you prefer, master is not a parent of gh-pages!
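
You can verify this in a scratch repo (file names are made up): git merge-base finds no common ancestor between the code branch and the orphan gh-pages branch.

```shell
set -e
cd "$(mktemp -d)"
git init -q
git config user.email demo@example.com
git config user.name demo
echo code > app.js
git add app.js && git commit -qm "project code"
base=$(git symbolic-ref --short HEAD)
git checkout -q --orphan gh-pages
git reset -q                      # unstage the files inherited from the code branch...
rm app.js                         # ...and clear them from the working tree
touch .gitignore
git add .gitignore && git commit -qm "gh-pages setup"
git merge-base "$base" gh-pages || echo "no common ancestor"
```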

You can now populate the website with your content using your HTML5 or Markdown skills, or you can use one of the readily available templates from Github to get a nice looking index.html page to start from.

Also, additional and advanced features are available for multi-page websites via Jekyll, but that’s another story.

Markdown with Yada Wiki and Jetpack

In the office we have recently decided to migrate our team wiki to WordPress, and Yada Wiki has been selected.

The team is also quite comfortable with Markdown, even if not everybody is ready to adopt it as the main editor, so I struggled a bit to find a solution, until I stumbled upon Jetpack.

I installed Jetpack directly from the WordPress admin console (which I just love!), but I had to connect to the server console to force Jetpack into development mode, which is required if your server isn’t going to be publicly accessible.
To do so, open your wp-config.php, search for define('WP_DEBUG', false); and add the following line:

add_filter( 'jetpack_development_mode', '__return_true' );

That brings you one step forward, but after enabling the Markdown feature in the Jetpack installation page you will be able to use Markdown in pages and posts only: your wiki pages will not be affected. That’s because Yada Wiki uses its own custom content type to distinguish wiki pages from other contents, which is a good thing.
So you need to extend the Markdown support to this additional content type, which is easily achievable by adding the following lines at the very end of the functions.php file of your theme of choice:

add_action('init', 'my_custom_init');
function my_custom_init() {
    add_post_type_support( 'yada_wiki', 'wpcom-markdown' );
}

Now your wiki editors can decide to use the WYSIWYG editor or switch to the text editor, start typing their contents in Markdown syntax, and preview their edits by just hitting the Preview button.

Windows 7 disk space hunger

My recently formatted desktop computer just started to show those nice notifications regarding system disk drive exhaustion…

Really? I gave you 60GB of my brand new SSD and I’ve just installed one third of the software I want to run! What are you doing with my precious hard disk, Windows?

So yes, I admit I use Windows along with GNU/Linux. Don’t kill me for that, please.

The weird thing was the disk space ran low so quickly I couldn’t believe my eyes…

I installed a nice piece of free software called WinDirStat to highlight the disk space usage on my SSD and I realized I did forget about a few little optimizations.

My little beast has 16GB of RAM and I hadn’t tuned anything on my Windows 7 installation, which meant I had a 16GB hibernation file and another 12GB of paging file!

C:\> powercfg -h off

This immediately freed up a nice few gigs, and some manual tweaking of the system page file location and size gave me back another slice of storage.

I then refreshed the WinDirStat view just to realize the Windows folder was now leading the league, taking 18GB of space, half of which was allocated under a folder called winsxs: what’s all that space for? I can’t exactly tell what’s under that folder (I’m not a Windows expert by any means!), but I can tell you I was able to regain another gigabyte by just running the system disk clean tool as Administrator and removing what the tool reports as Windows Update Cleanup.

Now a little question: for which mysterious reason should a set of resources identified as Windows Update Cleanup occupy a gigabyte of space, and why does a No Free Disk Space warning pop up instead of that wasted space just being silently freed?

I then realized I could save another large amount of space by moving my iTunes folder off the SSD, but to achieve that I had to create a hard link (or directory junction, in Windows terminology):

mklink /J "%APPDATA%\Apple Computer\MobileSync\Backup" "D:\iTunes\Backup"

This, along with the iTunes Media folder relocation, performed through the application settings, saved me another 12GB of space!

My disk is now breathing again!

Customize Transmission on Fonera 2.0N

I finally managed to nicely setup my Fonera 2.0N torrent client to work as I expect, even if it was not a very simple task.

What I wanted was to use a separate in-progress folder for non completed torrents and a completed folder for… guess what!

I found there are two ways to achieve this goal: having SSH access to the Fonera or having a Linux distro (a live one will do).


In both cases you need to plug a USB 2.0 hard drive into the Fonera. I recommend not using flash drives as they are much slower and will die quite quickly, not to mention their capacity is a lot smaller: I used a 250GB USB 2.0 Maxtor hard drive I had lying around.

You don’t have to format the hard drive if it’s formatted FAT32 or ext2/ext3, but both the Fonera team and I recommend against using an NTFS-formatted hard drive as it will slow down everything. Remember though that FAT32 has a maximum file size limit (biggest file it can store) of 4GB, which can easily be hit if you are used to downloading Blu-ray images or any other big file format: I went for an ext3 file system, which can accommodate all my needs, and I will use Paragon ExtFS in case I wish to plug this hard drive into a Windows computer.

Once plugged in, the hard drive will get assigned a generated name (something like Disk-A1), which I didn’t like as it doesn’t tell much about the functionality and can get confusing if you use multiple hard drives: I went into the USB Disk section and assigned it the name TORRENT (all uppercase) as this is going to be the disk’s only purpose.

Whichever method you use, you need to initialize your hard drive for running the torrent client (Transmission is its name) by setting it up in the Torrent section: please ensure the drive name listed here corresponds to the name you assigned to the drive in the USB Disk section, as we will use it.

After setting up the disk to run the torrent client (yes, the torrent client binaries and configuration are going to be hosted on the external hard drive), start it and wait for the process to complete.

Customize through SSH

I start with this method as I think it’s the easiest one if you have flashed a DEV firmware which enables SSH access.

Get access to your Fonera using root as username and your Fonera WPA key as password (the default one is printed on the side of your modem), then open the Transmission startup script located at /tmp/images/torrent/bin/

In this file you’ll find a very long line containing the startup instruction for the transmission-daemon client; by default it contains two directives that are going to override our next customization. We will get rid of them both by removing the part --download-dir $1/torrent -c $1/torrent.

Save the file and let’s switch to a web browser to configure our Transmission client through the Transmission web UI. Now you will be able to change the Download folder that was previously forced to torrent by the script we just changed: it can now be anything you like, but the folder must exist on the disk, so create it if it’s not already there. I decided to use a folder called completed, as opposed to in-progress, which will store the non-completed torrents. I kept the torrent folder in case I wish to upload .torrent files onto the disk instead of using the web UI (it will probably never be used).

Shut down the Transmission client: this operation will write your configuration to a file.

Now we need to access the Transmission configuration without having the Transmission client running. When you start the torrent client your Fonera will open three files hosted on your hard drive within the FoneraApps folder and mount them as disks. We will do the same, but we’ll mount only one of them, the torrent. file (the numbering might be slightly different and depends on your firmware version).

Switch back to the SSH console and issue the following commands to create a mount point and mount the disk image into it:

mkdir /tmp/torrent.var
mount -o loop /tmp/mounts/TORRENT/FoneraApps/torrent. /tmp/torrent.var

Now you can edit /tmp/torrent.var/settings.json, changing the download-dir, incomplete-dir and incomplete-dir-enabled options to your desired folders; the download-dir one should already look correct as it was set by the Transmission web UI.

Mine look like:

"download-dir": "/tmp/mounts/TORRENT/completed",
"incomplete-dir": "/tmp/mounts/TORRENT/in-progress",
"incomplete-dir-enabled": true,

Now ensure those folders exist or make them yourself:

mkdir /tmp/mounts/TORRENT/completed
mkdir /tmp/mounts/TORRENT/in-progress

Customize on Linux

If you don’t have SSH access to the Fonera you can still modify the Transmission configuration as it is completely stored onto your hard drive: just shutdown the torrent client on the Fonera through the web UI and move your hard disk to your Linux box.

Once mounted you’ll find a couple of new folders that have been created by the Fonera: FoneraApps and torrent. Inside the former you’ll find three files:  torrent. and torrent. (the numbering might be slightly different as it depends on your Fonera firmware version).

Let’s start with the fmg file, which I guess stands for Fonera Image, by mounting it with the following:

cd <your usb disk mount point>
mkdir /tmp/torrent.img
sudo mount -o loop FoneraApps/torrent. /tmp/torrent.img

We will have to modify the content of the /tmp/torrent.img/bin/ file containing the startup instruction for the transmission-daemon client: by default it contains two directives that are going to override our next customization. We will get rid of them both by removing the part --download-dir $1/torrent -c $1/torrent.

Save the file and let’s move to the next step: unmount this disk image and mount the var one with:

sudo umount /tmp/torrent.img
sudo mount -o loop FoneraApps/torrent. /tmp/torrent.img

Now you can edit /tmp/torrent.img/settings.json, changing the download-dir, incomplete-dir and incomplete-dir-enabled options to your desired folders; mine look like:

"download-dir": "/tmp/mounts/TORRENT/completed",
"incomplete-dir": "/tmp/mounts/TORRENT/in-progress",
"incomplete-dir-enabled": true,

Now save the file, unmount the image and ensure those folders exist:

umount /tmp/torrent.img
rmdir /tmp/torrent.img
cd <your usb disk mount point>
mkdir completed
mkdir in-progress

Unmount your USB drive, unplug it from your Linux box, plug it back into the Fonera and restart the Transmission client to enjoy your new custom configuration!

Resistor color decoder

Moving along from my previous Ohm’s law calculator, I decided to add another little feature: a resistor color decoder. I know, there are many out there already, but you know… this is mine!

This was more an exercise on SVG manipulation rather than anything else, but I still believe it’s something I will use in the future for my own Arduino and Spark projects.

Enjoy my Resistor color decoder!


Mobile cross platform development

Among the many mobile app development challenges (user experience being one of the most important), the platform is one of those that has seen many solutions arise recently.

There are a few platforms out there: some are growing (iOS and Android), some are still waiting to see the light of day (Microsoft) and some are slowly fading to the back of the market (Symbian, BB, etc.).

When you develop a mobile app you either learn all the platforms you want to support or you try to leverage the frameworks available out there.

There are a few promising mobile app portability frameworks, each with their own pros and cons and, obviously, none can really stand the comparison against a true native app: you must be ready to sacrifice something at the altar of portability.


HTML5 Wrapping

One type of portable framework relies on the web and its HTML5 ecosystem to provide an “almost native” experience: you develop an HTML5 application, then the framework wraps it in a headless browser exposing some native functions through JavaScript.

One of those frameworks is Apache Cordova (previously known as PhoneGap), which does exactly what I just described and supports many different mobile OSes, though not all at the same level. The JavaScript mapping functions allow an HTML5 app to store data locally, access the phone camera, know the device orientation, access contacts and calendar, etc…

This approach produces 99% reusable code, meaning the only non reusable parts are the little native bits you might need to plug/customize to get some native features: as an example, if you want to show one of those ad banners using the native ad provider, you need to plug in a little native fragment.

Performance-wise, these frameworks have a limitation directly connected to the native web browser component: the faster the native browser, the faster your app.

Please note that these frameworks usually do not provide any widgets themselves: they just present whatever HTML5 content you tell them to render. While this might be a great benefit, it is usually also their greatest weakness.

To achieve a pseudo-native UI style, the designers/developers have to pick a JavaScript framework capable of styling HTML tags appropriately (or write their own very specific CSS/HTML), like jQuery Mobile, Sencha Touch and so forth. When using such libraries, a lot of JavaScript and DOM processing is involved, which results in the UI being a little more sluggish than the native one.

In my experience the difference is not bad enough to make apps unusable, but you can definitely tell it’s not native or not really optimized. Please note: this is not an issue directly related to the portable framework itself, but to the native components these frameworks use and the UI library the designers adopt.

Native Mapping

Another approach to achieve mobile app portability is through a common mapping layer/library: this approach consists of exposing the interface of the native components through a uniform library, as per the well-known Adapter design pattern. This is the same approach Java used for its AWT library. The greatest limit of this solution is that the framework either exposes the Lowest Common Denominator (abbrv LCD) of the native features or delegates the problem to the end users (AWT used the first approach, which proved a failure).

One expression of this approach is Appcelerator Titanium, which remaps native operations/functions/structures into JavaScript ones, including UI components.

The level of reusability this approach provides is definitely lower than the previous one and can vary greatly on a per-app basis: the authors state a generic app usually reuses 80% of the codebase, with the remaining 20% customized per target OS. These figures are still good compared to the 100% non-reusable code you get by going completely native, but what is the performance/capability loss? Well, from a capability point of view, the loss is usually not much if the framework you adopt has taken the approach of exposing the union of all the features available on the OSes it supports. Please consider though that there is a time gap between when a feature is made available on the native OS and when the same feature becomes available within the framework.

Performance-wise it depends on the framework approach: as an example, Appcelerator Titanium went for an interpreted language (JavaScript) which enjoys very good support and performance attention from many OS vendors, mainly because it is used within their native browsers! Obviously a compiled approach would provide better performance at the price of less portability.

There’s no silver bullet here yet: pick your option considering the different pros and cons while the market evolves and new options get explored!

Every keyboard is a programmer’s keyboard

It’s well known among programmers: US/UK keyboards are programmer’s keyboards!

Well, maybe among real software developers only.

The problem with other keyboard layouts is related to some strange characters us real software developers use, like the backtick (`) and the tilde (~): if you don’t know what I’m talking about you should reconsider your programming skills.

Those characters are not available on non-US/UK keyboards and more often than not software developers struggle with weird ALT+KeyNum combinations to obtain them (remember the days of ALT+123, ALT+125?).

While this issue can be easily solved on modern operating systems (read *nix), other legacy OSes (read MS Windows) don’t provide any help to poor developers with non-US/UK keyboards like me. I’ve finally found a great solution that can expand your keyboard combinations on MS Windows, allowing you to bind weird characters to keyboard shortcuts.

A tiny little piece of free software called AutoHotkey is at the core of this solution: when launched, it intercepts key presses, is able to run a script (yes, it provides a small scripting language) and, believe it or not, can output a custom sequence of characters!

With something as simple as the following I now have the backtick mapped to ALTGR+’ and the tilde mapped to ALTGR+\:
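
The original script didn’t survive here; below is a minimal sketch of what it could look like. This is my reconstruction in AutoHotkey v1 syntax, not the original script, so double-check the hotkey notation against the AutoHotkey documentation before relying on it:

```autohotkey
; <^>! is AutoHotkey notation for the AltGr modifier
; `` is an escaped literal backtick (` is AutoHotkey's escape character)
<^>!'::Send ``
<^>!\::Send ~
```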


Just add the AutoHotkey executable to your auto-start programs list and you end up with the perfect keyboard!

Compacting VirtualBox disks

For my development tasks I often use VirtualBox, mostly for testing purposes. Sadly though, installing multiple operating systems consumes quite a lot of disk space, so I need some way to keep the virtual disks small.

Because you can use auto-expanding virtual disks you might think they will automatically shrink if you reduce the amount of content in there, but this is not true: if you have a virtual disk of, say, 40GB with 30GB in use, your VDI file size will be about 30GB, but if you delete 20GB of data from within your virtual disk the VDI file size will remain 30GB.

That space can be reclaimed though by using a command line tool called vboxmanage, which provides a command called modifyhd, which in turn has a --compact option.

In other words you can execute something like

vboxmanage modifyhd your/virtual/hard/disk/file.vdi --compact

and shrink your VDI file to its real content size… if you managed to wipe your virtual disk free space with zeroes!

Do not underestimate the last statement: the vboxmanage tool will eliminate only empty space from your VDI file, but usually when you delete a file the space it was occupying is not emptied, just unlinked!

Luckily for us there are tools around to help us on this task, which has to be executed from within the virtualized machine (aka the guest machine). These tools though depend on the virtual OS you are running.


On Mac OS X, open a terminal window and run the following command, then wait and ignore the warnings you get.

diskutil secureErase freespace 0 /


On Windows, defrag your disk, download SDelete from the Windows Technet web site and run it with the -c option.

sdelete.exe -c C:


On Linux, run this command and wait for its completion. Note though this will expand your virtual disk to its maximum capacity to allow you to shrink it.

cat /dev/zero > /tmp/junk & rm /tmp/junk