Comparison Sony E 18-135mm f3.5-5.6 vs Zeiss 16-70mm F4

The purpose of this article is to revisit the long-standing discussions and controversies around the Zeiss 16-70mm zoom and compare it to the Sony 18-135mm, a far more recent release from Sony that is generally well praised.

  • Zeiss 16-70mm F4 (released on August 27, 2013)
  • Sony E 18-135mm F3.5-5.6 (released in January, 2018)

Sony's APS-C sensor cameras, from the A6000 all the way to the A6500, have limited options when it comes to zoom lenses; the entire range from Sony comes down to four lenses, each with its own strengths and weaknesses.

  • Sony E PZ 18-105mm F4 G OSS
  • Sony E 18-135mm F3.5-5.6 OSS (a more recent release than the other two)
  • Zeiss 16-70mm F4 OSS
  • Sony E 16-55mm F2.8 (the latest release)

I left the Sony 18-105mm G out of the comparison as it has a known issue with high distortion and is a considerably bigger lens for an APS-C mount; it is also a power zoom aimed more at videographers than at people who want to take photos and need a smaller zoom for portability to go with their crop-sensor bodies. I have also seen many comparisons between the 18-105mm and the 18-135mm, so it is not difficult to figure out which of the two is best in terms of image quality and sharpness.

On the other hand, it is too early to say how good the latest 16-55mm F2.8 is, as it was released only a few days ago and I haven't seen any in-depth reviews yet. I also don't find the ~$2000 (AUD) price tag reasonable unless you are a professional and weather sealing and an additional custom button are requirements.

Hence this research to weigh options such as the 18-135mm vs the 16-70mm, as they both cover a versatile zoom range at a similar price (the Zeiss is around $200 more expensive).

Why a comparison?

Reason 1: The Zeiss 16-70mm F4 is a controversial lens, mainly due to the blue “Zeiss” label and the price tag it comes with. Many people who bought it have received decentered copies with QC issues or generally softer edges.

Reason 2: There are many online forum discussions by people who either have the Zeiss or want to buy it, asking for recommendations. Mostly the answer from owners who got bad copies has been to hold off.

Some online forum threads comparing the two

Reason 3: The price gap between the Zeiss and the Sony is getting smaller. The Zeiss lens price has come down to around the $980-$1200 range here in Australia. Therefore I believe both lenses should be considered as great general-purpose zooms.

Reason 4: I purchased both lenses recently, within a month or so, and decided to keep only one, sell the other, and figure out which is the best lens overall IQ-wise.

Test Method

This wasn't a perfectly scientific test, but I tried to keep the parameters between lens tests as fair and accurate as possible to derive unbiased results.

There are a number of ways to test the IQ of a lens, but for this particular exercise I chose to shoot indoors, on a solid tripod, at a fixed target.
The target is from the B&H lens test document.

I could only print the lens test chart at A4 size due to my home printer's limitations, so I printed four copies, joined them with tape into a bigger rectangular chart, and stuck it to the wall in a room that was fairly evenly lit.

Camera settings

Focus mode: Manual
White balance: Fixed to indoor fluorescent
ISO: 1000
Distance from the target: Fixed (~60cm away)

I kept the above settings fixed between lenses; when taking photos I changed only the aperture. For the Zeiss I went through F4, F5.6, and F8, as those are the aperture values I would most commonly shoot with. For the Sony I went through F3.5, F4, F5.6, and F8.

Lenses Used

I purchased both lenses within the last month or so and tested both for decentering issues. I didn't find any obvious issues with the Zeiss; the Sony 18-135mm was very slightly soft on the left side, but I believe this is within an acceptable threshold.

If you see issues with any of the lenses that I didn’t, feel free to leave a comment and point that out.

The Results

The results for the two lenses are shown below at different aperture values. Some tests had to be done at different apertures because the two lenses have different maximum apertures.

NB: Right click, and open URL in a new tab or click on “Download” to get the full 24MP image to view at 100%.

Test 1: at shortest focal lengths Sony 18mm f3.5 vs Zeiss 16mm f4

Test 2: at shortest focal lengths Sony 18mm f5.6 vs Zeiss 16mm f5.6

Test 3: at shortest focal lengths Sony 18mm f8 vs Zeiss 16mm f8

Test 4: at ~24mm Sony f4 vs Zeiss f4

Test 5: at ~24mm Sony f5.6 vs Zeiss f5.6

Test 6: at ~24mm Sony f8 vs Zeiss f8

Test 7: at ~35mm Sony f4.5 vs Zeiss f4

Test 8: at ~35mm Sony f5.6 vs Zeiss f5.6

Test 9: at ~35mm Sony f8 vs Zeiss f8

Test 10: at ~50mm Sony f5.6 vs Zeiss f4

Test 11: at ~50mm Sony f5.6 vs Zeiss f5.6

Test 12: at ~50mm Sony f8 vs Zeiss f8

Test 13: at ~70mm Sony f5.6 vs Zeiss f4

Test 14: at ~70mm Sony f5.6 vs Zeiss f5.6

Test 15: at ~70mm Sony f8 vs Zeiss f8


  1. The obvious takeaway from the above tests is that both lenses are sharp in the centre, even wide open.
  2. Both lenses become outstandingly sharp corner to corner when you stop down. Especially at f8; given this is a common aperture for landscape photos, both lenses can be recommended for landscapes.
  3. The Zeiss performs better in the corners at larger apertures (f4) than the Sony from my observations; the 18-135mm's sharpness falls off quickly away from the centre.
  4. The Zeiss's weakness becomes apparent at 70mm. As far as I can observe this is its weakest focal length; sharpness and contrast are slightly lower there.
  5. Around 35mm both lenses perform equally well.

Following the tests, I decided to keep the Zeiss due to its overall performance below 70mm, its constant f4 aperture, and its smaller size compared to the 18-135mm. In these tests I didn't pay any attention to AF performance or OSS.

Centralised and Scalable log-server implementation on Ubuntu 15.04

The goal of this article is to explain how to implement a very simple log server which can accept application logs coming from one or more servers. For simplicity I will be implementing the log server and the log forwarder (explained in the Components section) on the same computer.

In order to keep the article easy to understand and the implementation as simple as possible, I have excluded securing the logs that are transmitted from the application servers to the log server; this is considered outside the scope of this article. If you implement this exact solution, make sure you consider the security of the log data, as some logs may contain sensitive information about the applications.


Components

  • Log-courier: sends logs from the servers that need to be monitored. In this example the plain-text TCP transport protocol will be used for forwarding logs.
  • Logstash: processes incoming logs from the log forwarder (log-courier in this tutorial) and sends them to Elasticsearch for storage.
  • Elasticsearch: a distributed search engine and document store, used for storing all the logs that are processed by Logstash.
  • Kibana: a user interface for filtering, sorting, discovering and visualising the logs stored in Elasticsearch.

Things to consider

  • In a real-world scenario the log forwarder would run on the servers you need to monitor and send logs back to the centralised log server. For simplicity, in this article the log server and the server being monitored are the same machine.
  • Logs can grow rapidly depending on how much log data you send from the monitored servers and how many servers are being monitored. My personal recommendation is to transmit only what you really need to monitor; for a web application this could be access logs, error logs, application-level logs and/or database logs. I would additionally send the syslog, as it indicates OS-level activity.
  • Transmitting logs means more network bandwidth requirements on the application server side, so if you are running your application on a cloud-computing platform such as EC2 or DigitalOcean, make sure you take network throttling into account when your centralised log monitoring implementation is deployed.

Installation steps

There is no particular order that you need to follow to get the stack working.

Elasticsearch installation

  • Install Java
sudo add-apt-repository -y ppa:webupd8team/java
sudo apt-get update
sudo aptitude -y install oracle-java8-installer
java -version
  • Install Elasticsearch
wget -O /tmp/elasticsearch-1.7.1.deb
sudo dpkg -i /tmp/elasticsearch-1.7.1.deb
sudo update-rc.d elasticsearch defaults 95 10

Then start the service: sudo service elasticsearch start.

Install Logstash

echo 'deb stable main' | sudo tee /etc/apt/sources.list.d/logstash.list
sudo apt-get update
sudo apt-get install logstash

Install log-courier

Installing log-courier can be a bit tricky, as we have to install the Logstash plugins that support the data communication protocol implemented in log-courier. More information can be found here.

sudo add-apt-repository ppa:devel-k/log-courier
sudo apt-get update
sudo apt-get install log-courier

You now need to install the log-courier plugins for Logstash. When I installed Logstash using the above instructions, it was installed in the /opt/logstash folder, so the following commands assume that you also have Logstash installed in /opt/; if not, leave a comment below.

cd /opt/logstash/
sudo bin/plugin install logstash-input-courier
sudo bin/plugin install logstash-output-courier

Install Kibana

wget -O /tmp/kibana-4.1.1-linux-x64.tar.gz
sudo tar xf /tmp/kibana-4.1.1-linux-x64.tar.gz -C /opt/

Install the init script to manage kibana service, and enable the service.

cd /etc/init.d && sudo wget
sudo chmod +x /etc/init.d/kibana4
sudo update-rc.d kibana4 defaults 96 9
sudo service kibana4 start

Configuration steps

Logstash configuration

Logstash needs to be configured to accept the log data transmitted by log-courier, process it, and output it to Elasticsearch. The following simple config does just that.

This configuration sits on the Logstash server, which stores and processes all the incoming logs from the servers that run the log-courier agent.

sudo vim /etc/logstash/conf.d/01-simple.conf

and paste in the following config:

input {
  courier {
    port => 5043
    type => "logs"
    transport => "tcp"
  }
}

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

output {
    elasticsearch { host => localhost }
    stdout { codec => rubydebug }
}
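
The grok pattern in the filter block is essentially a named regular expression. As a rough illustration of what it extracts (an approximation of my own, not exact grok semantics: real patterns like %{SYSLOGHOST} are more permissive), here is an equivalent parse in Python:

```python
import re

# Approximate Python equivalent of the syslog grok pattern above.
SYSLOG_RE = re.compile(
    r"^(?P<syslog_timestamp>\w{3}\s+\d{1,2} \d{2}:\d{2}:\d{2}) "
    r"(?P<syslog_hostname>\S+) "
    r"(?P<syslog_program>[\w./-]+)(?:\[(?P<syslog_pid>\d+)\])?: "
    r"(?P<syslog_message>.*)$"
)

line = "Aug 12 06:25:01 webserver CRON[1234]: (root) CMD (run-parts /etc/cron.hourly)"
fields = SYSLOG_RE.match(line).groupdict()
```

Each named group corresponds to one of the fields Logstash adds to the event.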

Log-Courier configuration

In a real world scenario, this log-courier configuration would sit in the servers that need to be monitored. Following is an example config for this tutorial which will ship syslog activity.

sudo vim /etc/log-courier/log-courier.conf

{
    "network": {
        "servers": [ "localhost:5043" ],
        "transport": "tcp"
    },
    "files": [
        {
            "paths": [ "/var/log/syslog" ],
            "fields": { "type": "syslog" }
        }
    ]
}

Enable services

Now we have Elasticsearch, Logstash, Kibana and log-courier configured to process, store and view logs. To use the setup we need to start all the services using the Ubuntu service command.

sudo service elasticsearch start
sudo service logstash start
sudo service kibana4 start
sudo service log-courier start

The way I normally test whether services are listening on their particular TCP ports is with the lsof command. This will show whether Elasticsearch, Logstash and Kibana are listening on their respective ports.

sudo lsof -i:9200,5043,5601

An example output may look like

log-couri   970          root    9u  IPv4 373056      0t0  TCP localhost:38366->localhost:5043 (ESTABLISHED)
java      15513 elasticsearch  134u  IPv6 372333      0t0  TCP localhost:9200 (LISTEN)
java      15513 elasticsearch  166u  IPv6 373878      0t0  TCP localhost:9200->localhost:41345 (ESTABLISHED)
java      15612      logstash   16u  IPv6 372339      0t0  TCP *:5043 (LISTEN)
java      15612      logstash   18u  IPv6 373057      0t0  TCP localhost:5043->localhost:38366 (ESTABLISHED)
node      15723          root   10u  IPv4 372439      0t0  TCP localhost:41345->localhost:9200 (ESTABLISHED)
node      15723          root   11u  IPv4 373061      0t0  TCP *:5601 (LISTEN)
node      15723          root   13u  IPv4 389495      0t0  TCP> (ESTABLISHED)
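
If you prefer to script this check rather than eyeball lsof output, a small sketch like the following can probe whether each service accepts connections (the port numbers are the defaults assumed throughout this setup):

```python
import socket

def is_listening(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

# Elasticsearch REST API, Logstash courier input, Kibana web UI.
for name, port in [("elasticsearch", 9200), ("logstash", 5043), ("kibana", 5601)]:
    print(name, "up" if is_listening("localhost", port) else "down")
```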

Testing the setup

One way to test our configuration is to log something to syslog, which can be done using the logger command. Once the Kibana indices are set up (refer to the Kibana documentation), incoming log messages will appear there in real time.

Example command to log a test message in syslog

logger test-log-message

Now go to the Kibana web interface to discover the message you pushed to syslog. In this setup Kibana is listening on port 5601 on the log server. That's it!
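
You can also query Elasticsearch directly over its REST API on port 9200 instead of going through Kibana. As a sketch (the logstash-* index pattern is the Logstash default; the helper below is my own illustration), this builds the search request you would send:

```python
import json

def build_search(message, index_pattern="logstash-*"):
    """Build the URL path and JSON body for an Elasticsearch match query."""
    path = "/{}/_search".format(index_pattern)
    body = json.dumps({"query": {"match": {"message": message}}})
    return path, body

path, body = build_search("test-log-message")
# e.g. curl "localhost:9200/logstash-*/_search" -d "$body"
```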


Now you have set up a very simple centralised logging system for your application with Kibana, Logstash and Elasticsearch; log-courier is the agent sending log messages from the application servers to the centralised Logstash server. This setup can be scaled by running a cluster of Elasticsearch and Logstash instances.

This setup can accept any type of logs coming from log-courier agents, and you can discover/filter them using Kibana. Logstash can be configured to support additional log types using grok patterns.

Export NFS shares from OS X to Ubuntu / Debian and mount using Autofs

I will discuss the few steps involved in getting an NFS directory shared from OS X and mounted on an Ubuntu VM.

Step 1


The gist of it is that OS X knows how to handle (export) NFS shares as long as you write the instructions correctly in the /etc/exports file. So assume you need to export the directory /Volumes/Disk 2/Projects.

Edit the /etc/exports file in your favourite editor as root (e.g. sudo vim /etc/exports), paste in the line below, and change it to fit your requirements. The parameters are explained below.

/Volumes/Disk\ 2/Projects -network -mask -mapall=purinda

/Volumes/Disk\ 2/Projects
is the directory we export as an NFS share.
-network
specifies the network (subnet) that is allowed to access the share.
-mask
is the subnet mask, which together with -network defines the range of IPs that will be allowed.
-mapall=purinda
indicates the user that read/write operations will be performed as.
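
For reference, here is what a complete line looks like with hypothetical placeholder values for the subnet and mask (substitute your own network):

```
/Volumes/Disk\ 2/Projects -network -mask -mapall=purinda
```

This would allow any host in to mount the share, with all file operations performed as the user purinda.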

Step 2

Run the following command in your OS X terminal, which will restart the nfsd daemon and pick up the export above (this step may not be required).

sudo nfsd restart

Step 3

SSH into your Ubuntu server and install autofs for mounting the above export. Remember that the network defined in the export above must include the Ubuntu server's IP for it to have access.

Once SSH'd in, run the following command to validate the export.

showmount -e <ip of the OSX>

The above command should show the exported directory; if not, try running the same command on the OS X machine to validate that NFS has started sharing the directory.

Install autofs

sudo apt-get install autofs

Step 4

If all of the above steps have worked for you, it is now time to set up a config file for autofs to mount the above NFS share on demand.

In order to do this, we need an autofs map file in the /etc/ folder; you can give this file any easily recognisable name. In my case I called it auto.projects because I shared the projects that I worked on with the Ubuntu VM.

I ran the below command (using Vim) to create the autofs config file

sudo vim /etc/auto.projects

Then paste the following line and change according to your setup and save the file.

/home/purinda/Projects -fstype=nfs,rw,nosuid,proto=tcp,resvport <ip of the OSX>:/Volumes/Disk\ 2/Projects/

/home/purinda/Projects
is the directory that the NFS export will be mounted on.
-fstype=nfs,rw,nosuid,proto=tcp,resvport
are the options passed to the NFS mount command.
<ip of the OSX>:/Volumes/Disk\ 2/Projects/
is the OS X host IP and the absolute path to the NFS share on the OS X machine.

Step 5

You need to let autofs know that you have added your NFS share configuration. In order to do that edit the master autofs config.

sudo vim /etc/auto.master

Paste the following line, referencing the file we created at the end of Step 4, at the end of the /etc/auto.master file.

/- auto.projects

Then restart autofs on Ubuntu VM using the command below

sudo service autofs restart

And… that's it. You should have the NFS share mounted in your Ubuntu VM under the specified path.

Symfony2 assetic with Ruby 2.1 (under Ubuntu 15.04)

I recently updated my dev machine to Ubuntu 15.04 and assetic:dump on Symfony2 apps stopped working, showing the following error message.

An error occurred while running:
'/usr/bin/ruby' '/usr/local/bin/sass' '--load-path' '/home/purinda/Projects/phoenix/apps/frontend/src/Nlm/XYZBundle/Resources/public/css' '--scss' '--cache-location' '/home/purinda/Projects/phoenix/apps/frontend/app/cache/dev' '/tmp/assetic_sassbUrFhU'
Error Output:
/usr/local/bin/sass:23:in `load': cannot load such file -- /usr/share/rubygems-integration/all/gems/sass-3.2.19/bin/sass (LoadError)
from /usr/local/bin/sass:23:in `<main>'



It appears that Ubuntu 15.04 installs Ruby 2.1 but doesn't install a recent version of Sass, so in order to get an up-to-date Sass compiler you need to install the pre-release version of Compass.

gem install compass --pre

Try running assetic:dump again.

Compiled binaries of Qt 5.3 for the Raspberry Pi (ARMv6) and Pi 2 (ARMv7-A)

At the time of writing, a Qt 5.4 release existed, but I found it has a number of bugs that affect the multimedia capabilities of the Raspberry Pi.

Download the following binary release and install it using the instructions below. The instructions assume you still have the pi user on your Raspberry Pi system and the make command installed (sudo apt-get install make). For the wget command, use the address found in the “Download” link below the instructions.

ssh pi@<ip-of-the-rpi>

mkdir -p Qt5.3.2/qt-everywhere-opensource-src-5.3.2

cd Qt5.3.2


tar xf qt-everywhere-opensource-src-5.3.2_compiled_armv7l.tar.gz -C qt-everywhere-opensource-src-5.3.2

cd qt-everywhere-opensource-src-5.3.2

sudo make install

After running “make install”, the binaries will be copied to /usr/local/qt5, so you can remove the /home/pi/Qt5.3.2 directory.

The Raspberry Pi still doesn't know how to use the Qt 5.3.2 libraries, as we haven't told it where the library files are. So you need to edit your .bashrc and add the following lines, which put the Qt libraries and binaries on the relevant paths.

export LD_LIBRARY_PATH=/usr/local/qt5/lib/
export PATH=/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/qt5/bin

Precompiled binaries: Download

The problem

If you are like me, running Raspbian on your Raspberry Pi or Raspberry Pi 2, you will find that the following package repository that Raspbian points to doesn't contain the Qt libraries.

deb wheezy main contrib non-free rpi

I then tried searching for a solution, and almost all of them require you to cross-compile the Qt source, which I am not a big fan of, as you also need to cross-compile the dependencies of the Qt libraries to get Qt to compile without choking on some incorrectly cross-compiled dependency.

So I thought of compiling the Qt source package directly on the shiny new Raspberry Pi 2, which has a quad-core CPU and 1GB of RAM, so compilation should be reasonably fast on it. I ran ./configure and make -j3 (-j3 tells make to use 3 cores to compile) with the following settings enabled.

Build options:
Configuration ………. accessibility accessibility-atspi-bridge alsa audio-backend c++11 clock-gettime clock-monotonic concurrent cups dbus evdev eventfd fontconfig full-config getaddrinfo getifaddrs glib iconv icu inotify ipv6ifname large-config largefile libudev linuxfb medium-config minimal-config mremap nis no-harfbuzz openssl pcre png posix_fallocate precompile_header pulseaudio qpa qpa reduce_exports release rpath shared small-config system-freetype system-jpeg system-png system-zlib xcb xcb-glx xcb-plugin xcb-render xcb-xlib xinput2 xkbcommon-qt xlib xrender
Build parts ………… libs
Mode ………………. release
Using C++11 ………… yes
Using PCH ………….. yes
Target compiler supports:
iWMMXt/Neon ………. no/auto

Qt modules and options:
Qt D-Bus …………… yes (loading dbus-1 at runtime)
Qt Concurrent ………. yes
Qt GUI …………….. yes
Qt Widgets …………. yes
Large File …………. yes
QML debugging ………. yes
Use system proxies ….. no

Support enabled for:
Accessibility ………. yes
ALSA ………………. yes
CUPS ………………. yes
Evdev ……………… yes
FontConfig …………. yes
FreeType …………… yes (system library)
Glib ………………. yes
GTK theme ………….. no
HarfBuzz …………… no
Iconv ……………… yes
ICU ……………….. yes
Image formats:
GIF ……………… yes (plugin, using bundled copy)
JPEG …………….. yes (plugin, using system library)
PNG ……………… yes (in QtGui, using system library)
journald …………… no
mtdev ……………… no
getaddrinfo ………. yes
getifaddrs ……….. yes
IPv6 ifname ………. yes
OpenSSL ………….. yes (loading libraries at run-time)
NIS ……………….. yes
OpenGL / OpenVG:
EGL ……………… no
OpenGL …………… no
OpenVG …………… no
PCRE ………………. yes (bundled copy)
pkg-config …………. yes
PulseAudio …………. yes
QPA backends:
DirectFB …………. no
EGLFS ……………. no
KMS ……………… no
LinuxFB ………….. yes
XCB ……………… yes (system library)
EGL on X ……….. no
GLX ……………. yes
MIT-SHM ………… yes
Xcb-Xlib ……….. yes
Xcursor ………… yes (loaded at runtime)
Xfixes …………. yes (loaded at runtime)
Xi …………….. no
Xi2 ……………. yes
Xinerama ……….. yes (loaded at runtime)
Xrandr …………. yes (loaded at runtime)
Xrender ………… yes
XKB ……………. no
XShape …………. yes
XSync ………….. yes
XVideo …………. yes
Session management ….. yes
SQL drivers:
DB2 ……………… no
InterBase ………… no
MySQL ……………. yes (plugin)
OCI ……………… no
ODBC …………….. yes (plugin)
PostgreSQL ……….. yes (plugin)
SQLite 2 …………. yes (plugin)
SQLite …………… yes (plugin, using bundled copy)
TDS ……………… yes (plugin)
udev ………………. yes
xkbcommon ………….. yes (bundled copy, XKB config root: /usr/share/X11/xkb)
zlib ………………. yes (system library)
NOTE: libxkbcommon and libxkbcommon-x11 0.4.1 or higher not found on the system, will use
the bundled version from 3rd party directory.
NOTE: Qt is using double for qreal on this system. This is binary incompatible against Qt 5.1.
Configure with ‘-qreal float’ to create a build that is binary compatible with 5.1.
Info: creating super cache file /home/pi/Qt5.3.2/qt-everywhere-opensource-src-5.3.2/.qmake.super

Qt is now configured for building. Just run ‘make’.
Once everything is built, you must run ‘make install’.
Qt will be installed into /usr/local/qt5

Prior to reconfiguration, make sure you remove any leftovers from
the previous build.

After a good 4-5 hours, make finished and I ended up with the binaries.

Some information regarding the build environment

uname -a

Linux thor 3.18.5-v7+ #225 SMP PREEMPT Fri Jan 30 18:53:55 GMT 2015 armv7l GNU/Linux

cat /proc/version

Linux version 3.18.5-v7+ (dc4@dc4-XPS13-9333) (gcc version 4.8.3 20140303 (prerelease) (crosstool-NG linaro-1.13.1+bzr2650 – Linaro GCC 2014.03) ) #225 SMP PREEMPT Fri Jan 30 18:53:55 GMT 2015

libc6 info

Package: libc6
State: installed
Automatically installed: no
Multi-Arch: same
Version: 2.13-38+rpi2+deb7u7
Priority: required
Section: libs
Maintainer: GNU Libc Maintainers <>
Architecture: armhf
Uncompressed Size: 8,914 k
Depends: libc-bin (= 2.13-38+rpi2+deb7u7), libgcc1
Suggests: glibc-doc, debconf | debconf-2.0, locales
Conflicts: prelink (<= 0.0.20090311-1), tzdata (< 2007k-1), tzdata-etch
Breaks: locales (< 2.13), locales-all (< 2.13), nscd (< 2.13)
Provides: glibc-2.13-1
Description: Embedded GNU C Library: Shared libraries
Contains the standard libraries that are used by nearly all programs on the system. This package includes shared versions of the standard C library and the standard math library, as well as many others.

build directory


install directory


Raspberry Pi WiFi configuration in Raspbian

I recently bought a Raspberry Pi 2 and installed Raspbian (version 2015-01-31). It comes with WiFi drivers preinstalled for the Realtek module, which I used as my Raspberry Pi WiFi dongle.

So I thought things would go smoothly once I had the /etc/wpa_supplicant/wpa_supplicant.conf file configured. What I did was set it up with my wireless network configuration, which looks like the snippet below (I use WPA2-PSK with CCMP authentication; if yours is different you may need to get this configuration right first. I used the wpa_gui tool in Raspbian for the following config).

ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
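
For a WPA2-PSK network with CCMP like mine, a typical wpa_supplicant.conf looks something like the following sketch; the SSID and passphrase are placeholders, and wpa_gui generates an equivalent network block for you:

```
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1

network={
    ssid="YourNetworkName"
    psk="YourPassphrase"
    proto=RSN
    key_mgmt=WPA-PSK
    pairwise=CCMP
}
```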


However, after setting that up and restarting, the network didn't come up with a DHCP lease, and my dongle didn't even light up. After an hour of monkeying with the /etc/network/interfaces file I got it sorted out: after initialising the wlan0 interface, the system wasn't actually calling wpa_supplicant to make the WiFi connection. So here is what I modified in the /etc/network/interfaces file. Pay special attention to the post-up commands, which get run after the wlan0 interface is brought up.

auto lo

iface lo inet loopback
iface eth0 inet dhcp

allow-hotplug wlan0
iface wlan0 inet dhcp
wpa-roam /etc/wpa_supplicant/wpa_supplicant.conf
post-up /sbin/wpa_supplicant -Dwext -iwlan0 -c/etc/wpa_supplicant/wpa_supplicant.conf -B
post-up /sbin/dhclient wlan0

iface default inet dhcp

The second “post-up” command (dhclient) is there to ask the router for an IP lease.

Be patient once you reboot the rPi with the above configuration; it may take 2-3 minutes after booting for things to kick in and configure themselves.

I/O Ports Programming (Parallel port) Reading / Writing using VB.NET

This is a re-blog of my first ever article published online, on The Code Project, which I wrote on 18 Sep 2007. Lately I have not kept myself up to date with the Windows and .NET platforms, so any feedback would be great regarding how well or badly this project works with more recent platform changes.


The parallel port, or printer port, is one of the easiest pieces of hardware available to users for communicating with external circuitry or homemade hardware devices. Most people think that interfacing with an external circuit is hard and never come up with a solution. However, in this article, I am trying to convince you that porting with another external device is very easy and it has only a few tricks to be remembered!

I will also be discussing a sample surveillance application that can be developed with few requirements. It is very simple and anyone who reads this article can use it at home. I will then be giving an introduction to Serial Port Programming, which will be my next article on The Code Project. In that section, I will illustrate some basics of Serial Data Transmission and how to use a more advanced serial port instead of a parallel port.

The application that I have given is developed in Visual Basic .NET and it uses inpout32.dll for direct port access. In Windows XP and NT versions, direct port accessing is prohibited, so we need to have a driver in order to communicate with ports directly. Therefore we are using a freeware driver, inpout32.dll, which can be found here.


Parallel ports that are built into your machine can be damaged very easily, so use all of my sample code at your own risk. All of these sample programs and circuits are thoroughly tested and should not damage your hardware or your computer; I have never experienced such an incident. However, use these programs with great care…

Communicating with the Parallel Port

The parallel port usually comes as a 25-pin female port and is commonly used to connect printers to a computer. Many geeks also use it to connect their own devices to their PCs. There are a few more things to remember when using a PC's parallel port: it can drive only about 2.5mA at ~2.5 volts, so it is better to use opto-couplers or a ULN2803 driver when interfacing with an external device.

If you're just thinking of lighting up an LED through the printer port, connect a 330 ohm resistor in series. There is some more useful information to consider when programming the printer port: the usual printer port is a 25-pin female D-shaped port and it has several registers. A register is a place where you can write values and read them back. There are three types of registers: the Data register, the Control register and the Status register.

Data Register

This is the register that lets the user write values out of the port. In simple words, writing a value to the data register sets the voltages on the corresponding pins, which is why these are called output pins. There are altogether 8 output pins, ranging from D0 to D7.

Status Register (Pins)

These are called input pins, or status pins, and they hold the value that the outside world presents to the parallel port. This part of the port acts as a reader and has 5 input pins, S3 to S7.

Control Register (Pins)

This register can be used in both directions: it enables a user to write values to the outside world, as well as read values from it. However, you need to remember that most of the control pins work in an inverted manner; you can see them drawn with a dash over the pin name. The pin range is C0 to C3.

Ground pins are used as neutral, like the (-) terminal of a battery. If you're connecting a device to the parallel port in order to read or write, you have to use one or more ground pins together with a read/write pin. For example, if you're trying to light up an LED, connect the (-) of the LED to a ground pin and the (+) of the LED to an output pin. For reading purposes, use the same mechanism.
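
The inversion of the control pins can be modelled in a few lines. This helper is my own illustration (not from the original article); the inversion mask 0x0B covers C0, C1 and C3, which are the hardware-inverted pins on a standard PC parallel port, while C2 is not inverted:

```python
CONTROL_INVERT_MASK = 0x0B  # C0, C1 and C3 are hardware-inverted; C2 is not

def logical_to_wire(value):
    """Convert a logical control-nibble value to the byte you actually write."""
    return (value ^ CONTROL_INVERT_MASK) & 0x0F
```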

  • 8 output pins [D0 to D7]
  • 5 status pins [S3 to S7]
  • 4 control pins [C0 to C3]
  • 8 ground pins [pins 18 to 25]
Parallel Port Illustration

Addressing the Parallel Port and Registers

Port addressing controls a great deal in port programming, as it is the door that enables programs to connect to the external circuit or device. Therefore I would like to explain all the available port addresses.

In normal PCs, a parallel port address can be one of the addresses given below, but depending on the BIOS settings and some other issues, the parallel port address can vary. However, it always lies in one of these address ranges.

  • LPT1: 03BC (hex), 956 (decimal)
  • LPT2: 0378 (hex), 888 (decimal)
  • LPT3: 0278 (hex), 632 (decimal)

As I mentioned above, those are the commonly used address ranges for parallel ports. Now you might be wondering how we communicate with each of them. It's very simple! Each of the above base addresses has its own address for the Data, Status and Control registers. If your port's base address is 888, then the Data register is also at 888. To access the Status register, add 1 (888 + 1 = 889), and to access the Control register add 2 (888 + 2 = 890). If your parallel port address is different, work out the corresponding registers in the same way.
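
The offset arithmetic above can be summarised in a few lines (a sketch of my own, using the common 0x378 base address):

```python
# Register addresses are derived from the parallel port base address:
# data = base, status = base + 1, control = base + 2.
def lpt_registers(base):
    return {"data": base, "status": base + 1, "control": base + 2}

regs = lpt_registers(0x378)  # 0x378 == 888 decimal
```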

There is another simple way to check the parallel port address of your computer: the device manager. First, open the device manager (Start -> Settings -> Control Panel -> System -> Hardware -> Device Manager). Then select the parallel port you are interested in from the Ports (COM & LPT) section, right-click it, and choose Properties. You will see a dialog similar to the one pictured below; the first item in the listbox carries the value 0378 - 037F, which is the address range of the printer port. The data register address is therefore 0378 in hex, or 888 in decimal.

Windows Printer Port Device Config Properties

Small Description of Inpout32.dll

Here in this article, we are using a third-party tool called inpout32.dll to connect to the ports available in your PC. We use it because on Windows NT based versions, all low-level hardware access is prohibited and a separate I/O driver is needed. Whenever we read the status of a port, we do it through that third-party library.

In inpout32.dll, all of the system drivers and small tools needed to activate and run that system driver are included. So, we don’t have to consider much or think about how this happens; it will do everything for us…

Since we are using a DLL file to communicate with the hardware, we have to know the entry points that this particular file provides for the user. Here, I will describe the entry points one by one.

DLL Entry used to read

Public Declare Function Inp Lib "inpout32.dll" Alias "Inp32" _
     (ByVal  PortAddress As Integer)  As Integer

DLL Entry used to output values

Public Declare Sub Out Lib "inpout32.dll" Alias "Out32" _
    (ByVal  PortAddress As Integer,  ByVal Value As Integer)

The Inp32 function can be used to read values from any address given through its PortAddress parameter, and the Out32 function is used to write values to ports. A longer description of both functions can be found below.

Reading a Pin (Status Register) or External Value

Inp32 is a general-purpose function defined inside inpout32.dll to communicate with various hardware addresses in your computer. This function is included in a DLL, so this file can be called within any programming language that supports DLL import functions.

Here, for this sample application, I have used Visual Basic .NET because it can be followed by anyone who knows a little about the .NET Framework. Reading the status of a parallel port is very simple. I will explain everything in steps, but keep in mind that in the sample code I assume the address of the status register is 889 (0x379 in hex).

  1. Open Visual Studio .NET (I made this in the .NET 2005 version).
  2. Make a new Visual Basic .NET Windows application.
  3. Go to the Project menu and Click on “Add new Item.” Then select the Module item in the listbox.
  4. Type any suitable name for the new module that you're going to create, for example, porting.vb.
  5. In the Module file, there should be some text like this:
    Module porting
        Public Declare Function Inp Lib "inpout32.dll" _
        Alias "Inp32" (ByVal PortAddress As Integer) As Integer
    End Module
  6. Go to your form and add a new text box. Then add a Command button and, in the click event of the command button, paste this code:
    'Declare variables to store read information and Address values
    Dim intAddress As Integer, intReadVal As Integer
    'Get the address and assign
    intAddress = 889
    'Read the corresponding value and store
    intReadVal = porting.Inp(intAddress)
    txtStatus.Text = intReadVal.ToString()

    This will work well only if the name of the text box you just made is txtStatus.

  7. Run your application and click the button; the text box should show 120 or a similar value. That is the status of your parallel port.
  8. If you want to check whether the application runs properly, take a small wire (screen wire or circuit wire is fine) and short a Status pin with a Ground pin. For example, the Status pins are 10, 11, 12, 13 and 15; if you short the 10th pin and the 25th (Ground) pin, the value in the text box will be 104. If you short other pins, like the 11th and 25th, the value will be 88 or similar.

Now you can clearly see that all the power is in your hands. If you’re a creative person with a sound knowledge of electronics, then the possibilities are endless…


Before you compile and run this application, download inpout32.dll from this page and copy it to the Bin folder of your project directory (the place where the executable file is made), so that the DLL is in the same directory as the executable. Alternatively, you can copy the DLL file into your Windows System directory, and then you don't need to have it in your application path. You only need to do one of the two.

Writing to a Pin (Data Register)

When writing data to the Data Register, you follow the same rules; there is no difference in the basics! Do the same as in the section above, with a few differences that I will explain. Keep in mind that in the sample code I assume the address of the data register is 888 (0x378 in hex).

  1. Go to the Project menu and Click on “Add New Item.” Then select the Module item in the list box, the same as above.
  2. Type any suitable name for the new module that you're going to create, for example, porting.vb.
  3. In the Module file, there should be some text like this:
    Module porting
       Public Declare Sub Out Lib "inpout32.dll" _
       Alias "Out32" (ByVal PortAddress As Integer, ByVal Value As Integer)
    End Module
  4. Go to your form and add a Command button. In the Click event of the Command button, paste this code:
    'Declare a variable to hold the value to write
    Dim intVal2Write As Integer
    'Read the value to write from the text box
    intVal2Write = Convert.ToInt32(txtWriteVal.Text)
    'porting is the name of the module
    porting.Out(888, intVal2Write)

    This will work properly only if the name of the text box you just made is txtWriteVal.

  5. Run your application and click the button; the parallel port will output voltages on its pins according to the value you entered in the text box.

How to Light up LEDs

Now let's test the value that you output. Assume you wrote the value 2 to the data register and want to know which pin outputs +5V. It's really simple to check. The data register runs from pin 2 to pin 9, so there are 8 output pins; in other words, it's an 8-bit register. Take a scientific calculator and convert decimal 2 to binary: 2 (DEC) = 10 (BIN), which padded to 8 bits is 0 0 0 0 0 0 1 0. So when you write 2 to the data register, bit D1 is set and +5V appears on the 3rd pin.

Example 2

If you write value 9 to a data register, then the binary of 9 is 0 0 0 0 1 0 0 1. So, +5V can be taken from the 2nd pin and the 5th pin.

Binary Value 0 0 0 0 1 0 0 1
Data Register D7 D6 D5 D4 D3 D2 D1 D0
Actual Pin number 9 8 7 6 5 4 3 2

Example 3

Assume you’re writing 127 to a data register. Then the binary of 127 is 0 1 1 1 1 1 1 1 and therefore all the pins will be +5V, excluding the 9th pin.

Binary Value 0 1 1 1 1 1 1 1
Data Register D7 D6 D5 D4 D3 D2 D1 D0
Actual Pin number 9 8 7 6 5 4 3 2
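Rather than working the pin mapping out by hand each time, the examples above can be computed. A small shell sketch, using the D0 = pin 2 ... D7 = pin 9 mapping from the tables:

```shell
# Given a value written to the data register, list the pins (2-9)
# that output +5V. Bit D0 is pin 2, D1 is pin 3, ..., D7 is pin 9.
value=9
pins=""
for bit in 0 1 2 3 4 5 6 7; do
    if [ $(( (value >> bit) & 1 )) -eq 1 ]; then
        pins="$pins $((bit + 2))"
    fi
done
echo "value $value -> +5V on pins:$pins"
```

For value 9 this prints pins 2 and 5, agreeing with Example 2; change `value` to 127 to reproduce Example 3.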

When you want to light up an LED, there is a specific way to connect it to the data register, and you need a few tools. As a tutorial, I will explain how to connect just one LED.

How the LED should be connected to Printer Port

Connect the negative leg of the LED to any pin from the 18th to the 25th; they are all ground pins, so there is no difference. The positive leg of the LED should be connected to a pin of the data register. When you write a value that enables that particular data register pin, the LED will light up.


Never connect devices that draw more current than LEDs, as this could burn out your parallel port or the entire motherboard. Use this at your own risk! If you want to go further, refer to the method written below. If you want to switch electric appliances on and off, you will obviously have to use relays or high-voltage triacs; I recommend using relays and connecting them through the method I have illustrated below.

More Advanced Way to Connect Devices to Parallel Port
(Compact 8 Channel Output Driver)

If you’re not satisfied with lighting up LEDs and you’re hungry to go further, then use this method to connect all your advanced or more complex devices to the Parallel Port. I would recommend that you consider using ULN2803 IC. This is an 8-channel Darlington Driver and it is possible to power up devices with 500mA and +9V with it.

In this diagram, you can see how to connect a set of LEDs to this IC. Remember that you can replace these LEDs with devices such as relays or solenoids that draw up to 500mA, and they will run smoothly.

How the LED should be connected to Printer Port

How Do the Surveillance System and Sample Application Run?

The surveillance system that I have written is very small and only suitable for home or personal use, but you can make it more advanced if you know how more complex security systems work. The application has a timer and continuously checks for status register changes. If something changes, the application's status changes and a sound is played accordingly.

This can be used to check whether your room's door is closed. Place a push-to-on switch on the edge of the door, positioned so that it activates when the door is opened. If you're not using a push-to-on switch, take two thin, bendable layers of metal and place them on the door so that they make a short circuit when the door is closed. One metal layer should be soldered to a screen wire (circuit wire) connected to a ground pin; the other should be connected to a status pin of the parallel port. The status pins are 10, 11, 12, 13 and 15.

Sample application source code: Download

Make your Mac (OSX) serve as a NTP time server

Some background to the problem

At the time of writing I work for NewsCorp, and they have a corporate firewall proxy inside the office LAN which blocks almost all ports below 1000 to external services (e.g. GitHub, Bitbucket, NTP servers) unless you specifically request firewall rules to be added.

I sometimes use my Mac to run Linux applications in virtual machines. When the host is suspended (closing the laptop lid) or shut down while the VM state is saved, the VM cannot resync its system clock on resume: NTP uses port 123 (UDP), and those packets cannot get out or come back in through the corporate proxy / firewall rules. So AWS and other web services started reporting that my requests were in the past or not in sync with the current epoch.

So I needed a way to sync the VM system clock automatically. Since accessing NTP servers outside the LAN was not possible, this is what I did to make my MacBook Pro (the VM host) serve time.

Step 1

Open a terminal and edit the following file; I used the following command

sudo vim /etc/ntp-restrict.conf

Step 2

There should be only 10-15 lines in the file. Locate the last few lines, which include some .conf files using the "includefile" directive, and just above them add the following line

restrict <network address> mask <netmask>

The network address and mask together allow all the IPs in that range; use the range your VMs are on. In my case that is the default range Virtual Box (my VM management software; VMware and others should have similar virtual network adapters) uses when a VM adapter is configured for NAT or a host-only adapter.

If you need a different range (such as whole LAN in the house, etc) use that instead.

So after the modification my ntp-restrict.conf file looks like this,

# Access restrictions documented in ntp.conf(5) and
# Limit network machines to time queries only

restrict default kod nomodify notrap noquery
restrict -6 default kod nomodify notrap noquery

# localhost is unrestricted
restrict -6 ::1

restrict <network address> mask <netmask>

includefile /private/etc/ntp.conf
includefile /private/etc/ntp_opendirectory.conf

Step 3

Restart the ntp daemon on your OSX by running the following command (it kills ntpd, which then gets auto-restarted)

sudo killall ntpd

Step 4

That's it! Try running the following command from the VM to sync the time. Set up a cron job to resync every 5 minutes or so if needed.

sudo ntpdate <mac ip addr>
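The cron resync mentioned above could look like this on the VM (a sketch; `<mac ip addr>` is the same placeholder as in the command above, and cron may need the full path to ntpdate on your distribution):

```shell
# root crontab entry on the VM (crontab -e): resync every 5 minutes
*/5 * * * * ntpdate <mac ip addr> >/dev/null 2>&1
```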

Linux brightness control

This is more of a note to myself than a blog post. Whenever I install a Linux distribution on one of my laptops, the first thing to fix is the brightness controls, as it helps conserve battery life significantly.

The changes required are trivial. Start off by figuring out which drivers are in use on your system

ls -ld  /sys/class/backlight/*

This should list one or more entries, but usually not all three possible drivers; on mine I get an intel entry and an acpi entry.
So assume you have an intel_backlight entry in the sys filesystem, then try following command to figure out the MAX supported brightness level.

cat /sys/class/backlight/intel_backlight/max_brightness

I get 975, so that is the maximum brightness value my device can produce; setting a value between 0 and that maximum controls the level of brightness. So try something like this and see if it works; on mine I use "intel_backlight" as I have an Intel chipset, and run the following command

sudo sh -c "echo 488 > /sys/class/backlight/intel_backlight/brightness"

That sets half the brightness level my laptop supports. So now you should have an idea of how to control brightness levels from a terminal; if you bind that command to a key combination, you can set this up under any desktop environment you use.
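If you want your key binding to work in percentages rather than raw values, the arithmetic is simple. A sketch, assuming the maximum of 975 reported above (read yours from max_brightness):

```shell
# Convert a desired brightness percentage into the raw value to write
# to /sys/class/backlight/<device>/brightness (as root).
max=975        # from max_brightness; substitute your own device's value
percent=50
level=$(( max * percent / 100 ))
echo "$level"
```

With integer division this yields 487 for 50% of 975; write that value to the brightness file as shown above.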

Now that you understand the underlying brightness control system, here is a better trick which may save you from setting up key combinations in your desktop.

You can also let the system know at boot time which virtual device should be used for controlling brightness levels. To do that, edit the following file

sudo vim /etc/default/grub

Locate the GRUB_CMDLINE_LINUX_DEFAULT line and set it to look like the line below

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash acpi_backlight=vendor"

The acpi_backlight parameter tells the ACPI subsystem which device should be used for controlling brightness; here vendor refers to intel (in my case). Run "sudo update-grub" afterwards and reboot for the change to take effect.

Hope this helps.

QtMultimedia: FATAL: cannot locate cpu MHz in /proc/cpuinfo

This became a problem on my Raspberry Pi (armv6h) running Archlinux (with a relatively new kernel). Some users of my Grooveshark server reported buffering problems, where it stopped streaming songs and reported "FATAL: cannot locate cpu MHz in /proc/cpuinfo".

Since the Grooveshark server is based on Qt5, I first thought the new Qt5.3 release had multimedia library changes that caused the issue, but it turned out the problem is /proc/cpuinfo not reporting a CPU frequency on ARM-based CPUs. But what program determines CPU frequency from /proc/cpuinfo, instead of getting the value from the sys filesystem exposed by the kernel, or using a monotonic clock as newer kernels provide? It turns out quite a few programs depend on the value in the proc filesystem's "cpuinfo" file, including the "jack" audio server, which is a dependency of the QtMultimedia framework and a number of other applications.

After hours of searching I came up with the commit that broke this, and the mailing list thread explains why BogoMips (the bogus MIPS rating of the CPU) was removed from /proc/cpuinfo; it gets set during the kernel boot process for the purpose of calibrating the kernel's internal busy-loop.

How to fix QtMultimedia on ARM

Since the qtmultimedia libraries depend on the jack audio server for sinking audio out, and jack is broken by the removal of the BogoMips value, I looked for alternatives and found that jack2 (a drop-in replacement for JACK, as far as I understand) seems to work fine on ARM-based computers. So here is the set of steps to get JACK2 working on your Raspberry Pi,

Log in as root (if you are running Archlinux)

cd ~
git clone git://
cd jack2
./waf configure --alsa
./waf build # this takes a while
./waf install
cd ..
rm -r jack2


That's it! You should no longer get the "FATAL: cannot locate cpu MHz in /proc/cpuinfo" error when using the QtMultimedia framework on ARM boards (Raspberry Pi, etc.).

If you are interested in kernel-level timers and clocks, go on to

My experience with GI Stasis in Rabbits

Every rabbit owner should read all the articles on this page

It is a must; if you don't, and you have a bunny with you, I am sure you will regret it when you lose them one day.

Okay, I am not going to go into much detail on GI stasis (in short, and as far as I understand it, it is a partial or full shutdown of a rabbit's gastrointestinal system), as you can find hundreds of articles about the illness with a Google search. What I will explain here is what we went through when our pet rabbit was struck by the illness and what we learnt from it. 'We' here refers to me, Maddie and Sloppy the minilop.

Day 1

Every morning before I leave for work I give Sloppy a pile (the size of his body) of fresh oaten hay and a bowl of fresh water. When we get back we give him veggies, fruit (occasionally) and Oxbow adult pellets (1/5 of a cup). Monday the 28th of July 2014 was such a day; we left home and came back. That evening Maddie went out to give him his share of veggies, and asked me to come down as she found that he hadn't drunk any water from his bowl or even got into his hay pile; it was exactly the way I had left it.

We normally keep a good eye on his behaviour, as almost two years earlier he'd had a stomach upset and partially stopped eating for two days. Since then I would normally run my palm and fingers over his *stomach* (the cecum area to be specific) and feel how hard it is; I do this every other day. So I did the same, and it felt a bit stiff over an area of at least 1-1.5 inches. I felt the left side of his stomach as well, but it was normal (you will understand what normal is when you feel your rabbit's tummy every day for a few weeks). Maddie asked me to bring him inside the house because when she gave him greens he ate a bit and went out to the backyard to play with her; normally he would eat at least half and beg for pellets as well. We knew that rabbits should eat and produce droppings all the time, so we brought him inside the house, which he liked very much as well.

His room was upstairs, where he had his hay container and water bowl, so we brought him in and offered some parsley and English spinach. He had a parsley leaf or two and came to my room, where he would lie down and relax next to my desk where he could see me. Since we were both a bit worried, I started moving from room to room so he would follow me; we knew the best thing for a rabbit with a slow-moving gut is exercise, as well as lots of fluids.

An hour or two went past and he didn't really eat anything, but he was as bright as any other day. Around 8:30 (after two hours) or so we read online that we should force-feed water or some enzymes such as papaya or pineapple juice. So Maddie asked me to go and buy some of that and a syringe before the shops closed. I could only find papaya, and two syringes (3ml and 5ml). I came back home around 9:30, Maddie made him a papaya juice with a bit of water to make it watery, and I gave him around 2-3ml of it plus 2ml of water.

His gut was hard, like I said, and I also noticed it was a bit swollen. So I thought we shouldn't feed him any more food (which was not the right call; read on). Maddie suggested we should give him more papaya juice and water every two hours or so, but I rejected the proposal. We just fed him another 2ml of water around 12:30am and went to bed, having decided that Maddie should take him to the vet the next day, as we were *sort of* worried (6/10 I would say) and knew stomach upsets are bad for rabbits. After almost 6 hours he hadn't produced a single dropping.

Day 2

When I was getting ready for work he was up already and bright as usual, following me from room to room. I gave him another parsley leaf but he didn't eat it, and he rejected a pellet too. I asked Maddie not to wait long and to take him to Dr. Alex Rosenwax as early as possible. Dr. Alex had been his regular vet since the first day we took Sloppy in; he is a very experienced bird and exotics vet in Sydney.

Maddie went to the clinic around 10am, and the doctor wanted Sloppy admitted immediately; he might not be able to come home for two days, as it was very necessary to make sure he was eating and pooping. They also suggested X-rays to confirm there was no obstruction before treating him. The X-rays were clean, with no foreign object in his body, so Maddie left him there, then called me and explained that they would give him probiotics and laxatives to make him poop. We were relieved.

Maddie called again around the end of the day to see how he was doing. The doctor confirmed that he hadn't pooped but had peed, and also said 95% of such rabbits produce droppings once they receive those laxatives. So they would treat this as a critical case and force-feed him water and baby food, as well as antibiotics, to make sure his gut and liver could handle the bacteria in the gut.

Day 3

We both went to work today, and also took the pet carrier, as we wanted to bring him back as soon as he recovered and treat him at home.

We called the hospital around midday, but they said he hadn't really produced any droppings yet, so we were worried now. Maddie suggested we go to the hospital and see him, so we did. I saw him for the first time in two days and his stomach was swollen on both sides and hard as a brick. I was shocked to see him like that. I gave him my hand, as he likes to lick my fingers, but he didn't. I also cleaned his left ear with a finger, as he couldn't do it himself because of his splay leg; he would normally lick my finger after I cleaned his ear, but today he didn't. That was enough indication that he wasn't even at his 50%. He occasionally ground his teeth as well. We were both upset. Dr Andrew was there that day and explained that Sloppy was not really doing well, but that they were force-feeding him lots of water and baby food, with painkillers to help him.

We discussed what options Sloppy had at that point, and the doctor said we would have to give him some time, as some rabbits take longer than others to recover from the condition. Surgery would be the last option, as the survival rate of gastrointestinal surgery for a rabbit is extremely low, less than 50%.

We came home with tears in our eyes. We couldn't really sleep that night, thinking about what sort of pain he was going through, but we hoped he would produce droppings that night or the next morning. We spent pretty much the whole night looking for anyone with a similar experience and for any information on what we could do to ease his pain and help his condition, and I found a video of a rabbit tummy massage. I thought of performing that massage on Sloppy, hoping it would make a difference.

Day 4

We both went to work today, and made a call early in the morning to find out how Sloppy had done overnight. He was the same. Maddie said we should go in the evening, and we thought we should bring some red towels, as red was his favourite colour and he likes to mark red things with his pee and droppings. We thought the red towels would stimulate him. So we went to the hospital, and the staff asked us to go into one of the two rooms they had and came in with Sloppy.

Maddie laid the towels on the floor and we put him down. Maddie gave him lots of cuddles and kisses while I massaged both sides of his tummy with my fingers, with enough pressure not to hurt him; it was gentle, but I did it for at least 10-15 mins. I also picked him up (belly up, with his head between my elbow and body as shown in the video mentioned above) and massaged his tummy for 30 secs to a minute, then went back to massaging both sides of his tummy for another 5 mins or so. I also put my ear to his stomach and listened for any gut movements; I heard a sort of stomach movement for a second, but nothing much. The doctors came into the room and gave us a heads-up that they needed to give him his medication before they left work, so we had only another few minutes with Sloppy; that was around 5:40pm. Around 5:50pm Dr Alex came in and took Sloppy away. He also asked us whether we wanted to go ahead with GI surgery tomorrow and explained that the survival rate is extremely low, but we said we would still do it to save him if nothing else worked by Friday morning. He agreed, and told us we could come and see Sloppy tomorrow morning before the surgery if we needed to; we said probably not, as we were weak at the time. We went home with heavy hearts.

Day 5

We couldn't really sleep. We woke up very early, got ready and went to work, thinking about what we should do if he hadn't pooped by today. We knew surgery could be extremely dangerous, and difficult in the sense that intensive medical care is necessary afterwards. We also knew that many rabbits don't make it through the anaesthetic process either.

I got to work and told Maddie that I would talk to the doctors today, as she didn't want to make the decisions. Just after she dropped me off at work I received a call from her; she was crying on the other end. She said, "Sloppy pooped, but he has also passed away." It was a shock, and we both drove back to the hospital, approx 10 mins from the city where we work. Dr Alex was there and said that Sloppy had pooped last night around 6:20pm, exactly 30 mins after we left; he also admitted that Sloppy probably pooped because we had massaged his tummy and done some exercises with him. But he really didn't know why he died even after producing droppings. He said that, if we didn't mind, he would like to proceed with a necropsy for his and the staff's benefit, to better understand Sloppy's illness. I agreed, as I wanted to save another rabbit from the same disease in the future. He also said the only regret he had was that no one was with Sloppy last night.

Both Maddie and I have the same regret as of today, two days after his death. I still see him running around the backyard, following me from room to room, sleeping next to my feet while I work. I wrote this post not to say how depressed we are now, but to give someone else a better understanding of GI stasis. So now I will explain what we could have done differently to possibly save him.

Dr. Alex called me after the post-mortem and said Sloppy didn't really have any dried hay or food in his tummy, and his organs were good as far as he could see, but there were a few lumps in the stomach area. They still don't really have a conclusive answer.

What could we have done differently?

That question has echoed in my head since the moment I heard the upsetting news. I am writing this in point form, ordered by priority as far as I understand this illness.

1. I could have checked his tummy more often (at least every day, if not twice a day) to see if it was hard or not.

2. Once we discovered on the first day that his tummy was hard and he was not producing any droppings, I could have done what Maddie told me: force-feed him water and enzymes (papaya or pineapple juice) in small amounts more frequently, every two hours or so, the whole night, and lots of water from the first day.

3. We could have massaged his tummy and performed the exercise shown in the video from the first day. Massaging is the most beneficial thing of all for GI stasis, on par with keeping him hydrated; I cannot emphasise enough how effective this can be if you do it right. So watch the video a number of times, back off if your bunny doesn't like it, or try a different posture or pressure and try again.

4. We could have taken him back home with medication instead of leaving him in the hospital with other animals and no one to take care of him, and taken him back to the hospital the following day. I would still do this if we lived close by, because stressing out a sick rabbit is not good in any way. In hindsight I could have booked a hotel room nearby, but it is too late now.

A little about Sloppy

Sloppy was a three-and-a-half-year-old male minilop who had E. cuniculi and a splay leg (we think it was a birth defect). He had to go through his first surgery when he was around 8-9 months old, when his left eye developed an abscess due to E. cuniculi; he recovered from it pretty well.

He was very playful and always ran after us. He liked red towels, blankets, pretty much any soft cloth that was red. Since his early days he was on 1/4 cup of Oxbow pellets every other day or so, lots of fresh hay and greens, not many veggies, but the occasional fruit such as strawberries or tiny bits of pear, apple and orange. He was a happy bunny who got lots of exercise and roamed freely in the backyard when there was enough shade, and also at night until we went to bed. A few days a week he spent time with us indoors; that's when he peed on all the red cloths he could find :)


If your bunny has a hard tummy, or if she/he doesn't eat or poop, or produces small droppings, consult your exotic vet immediately; in the meantime get a syringe and feed them lots of water. Again, I can't emphasise enough how effective a tummy massage can be; look at that 7-year-old video, which I found when it was too late to save him. Once you have the required medication, and if you can put in 8-10 hours a day during GI stasis, don't leave them in the hospital at night; bring him/her back home, do those massages gently and feed them lots of water. Once your bunny has recovered from a first episode of GI stasis, make sure you check their tummy every day and keep a good eye on their droppings: shape, size, frequency, etc. That's what good bunny parents do.


The very first picture I took of him, the day we brought him home: 16-04-2011
That's where he stays when he is inside the house
Him and the herb garden we planted for him


Installation Guide for Linux Mint 17 / Ubuntu 14.04 on Mac mini (late 2012)

Installing an Ubuntu 14.04 based distribution on a Mac (or any EFI based computer) has become very straightforward. If you read my previous guide on getting Ubuntu installed on a Mac mini and thought it was long, with too many things to configure, relax: you only have to run a few commands with the latest Ubuntu 14.04 / Linux Mint 17 distributions.

Firstly, remember that you can either split your Mac partitions using Disk Utility and have a separate partition for Linux, or wipe the whole disk and have just Linux on your Mac. If you choose the first, make a partition of at least 10GB for Linux, and remember how much disk space you allocated to it, as in the Ubuntu/Mint setup you will have to point to this specific partition.

Make a bootable USB of Ubuntu 14.04 or Mint 17 (Qiana); you can easily do this on your Mac by following these instructions.

1. Download the ISO from either Ubuntu or Linux Mint website (make sure you get the 64bit version).

2. Make sure the ISO file is copied into the home directory of your Mac with the name linux.iso

3. Then run “hdiutil convert -format UDRW -o ~/linux.img ~/linux.iso” without quotes. This will take a minute or two.

4. If the output file is called “linux.img.dmg” then rename it to “linux.img”

5. Plug in a USB stick of at least 2GB. Some old USB sticks won't boot, so if this guide doesn't work for you, try a different USB stick.

6. Run "diskutil list" without quotes and check the output; it should look like the example below. You should be able to distinguish your USB device from your hard drive by its size and type name (it says usb_disk).


7. Now we have to unmount the USB disk using this command before writing the image file we created in Step 3. Run "diskutil unmountDisk /dev/diskN", where N is the USB device number from the above list. As per the example it would be 2, so the command would be "diskutil unmountDisk /dev/disk2". Make sure you get yours right!

8. Making the bootable USB is easy: run “sudo dd if=~/linux.img of=/dev/rdiskN bs=1m”, where again N is the USB device number as per Step 6.

This step will take a while, have a coffee and come back.

9. Then you gotta eject the USB, don’t just pull it out. Run “diskutil eject /dev/diskN”, where again N is the USB device number you used in the above steps. Wait 5 seconds and then pull the USB out.

10. Reboot your Mac. Keep the “Option / Alt” key pressed when you hear the boot sound.

11. You should end up with a password screen where you need to enter your Admin password.

12. You will end up in the bootloader screen of the Mac, now plug in your bootable USB. And press Enter.

13. Continue as per the normal Ubuntu / Linux Mint installation except for the partitions section, where you should have a partition for the EFI boot partition. If you choose to have both Mac and Linux, you can split the partition you created in the first steps of this guide into two: for example, one “swap” partition (of 1 to 8GB, or whatever suits you) and one “/” (root) partition. Leave the bootloader installation device as it is (the bottom dropdown in the partition management section).

If you choose to wipe the whole Mac, I recommend you choose “Erase disk and install Linux Mint” / “Ubuntu” in the partition management section and leave LVM un-ticked.

14. After finishing the installation process it will ask you to reboot, but DON’T! Open a terminal and run the following commands to make sure that GRUB is the preferred bootloader on your Mac (this overrides your Mac’s original bootloader sequence, but you can revert it whenever you want, so don’t panic).

You should also make sure the USB is still plugged into your Mac.

Run “sudo apt-get install efibootmgr”.

Then run “sudo efibootmgr”; this will list something like the example output below.

Mint is at address “Boot0000”

Ubuntu is at address “Boot0010”

Mac OS is at address “Boot0080”

Above are the EFI identifiers of my Mac; yours will look similar. What you need to do now is alter the boot sequence.

I ran the following command, which changed my Mac boot sequence to boot Mint first, then Ubuntu (which is just a phantom EFI identifier, I assume), and lastly the Mac’s original.

sudo efibootmgr -o 0,10,80

Then, just to confirm that the above command altered the sequence, run “sudo efibootmgr” again and confirm the new order.

15. Reboot using the menu in Mint/Ubuntu. Unplug the USB.

16. You should come back to Mint/Ubuntu; then you gotta fix your wireless networking if required. Make sure the USB is plugged back in, then press the “Super” key (Command key or Windows key), type “Driver Manager” and press Enter. You should end up in a screen like below.

Choose the Airport Extreme driver and apply.





Low colour depth in the second monitor

If you have a dual monitor setup, then you will notice that sometimes one of the monitors displays fewer colours than the other. I read somewhere that this issue happens on EFI based machines with Intel HD xxxx video cards, but I am not exactly sure of the reason behind it.

The permanent fix for this is to alter a configuration register of the Intel HD video card. My preferred way to do this is through the /etc/profile file.

sudo gedit /etc/profile

Add the following line

intel_reg_write 0x70008 0xC4002000

Save the file and exit. On the next boot, colours will be normal on both monitors.



Installing LPCXpresso on Ubuntu 14.04 based distribution

I had to spend a lifetime trying to install the LPCXpresso IDE built by NXP on Ubuntu 14.04; it always came up with the following error message.

[14:52:45] purinda@purinda-ws[~/Downloads]+ ./Installer_LPCXpresso_7.1.1_125_Linux-x86
invalid command name "bind"
 while executing
"::unknown bind Text <Tab>"
 ("uplevel" body line 1)
 invoked from within
"uplevel 1 $next $args"
 (procedure "::obj::Unknown" line 3)
 invoked from within
"bind Text <Tab>"
 (procedure "::InstallJammer::InitializeGui" line 19)
 invoked from within
"::InstallJammer::InitializeGui "
 (procedure "::InstallJammer::InitInstall" line 68)
 invoked from within
 (file "/installkitvfs/main.tcl" line 38437)


The issue was that with the multiarch binaries shipped with Ubuntu, 32bit libraries can no longer be installed via the old ia32-libs package which I used in the past. As of Ubuntu 14.04 the 32bit packages can be installed by typing the same old “apt-get install <package>”, but with an extra suffix “:i386”.

So to get LPCXpresso (a Java based IDE / installer) installed, you need the 32bit variants of the following libraries.

sudo apt-get install libgtk2.0-0:i386 libpangox-1.0-0:i386 libpangoxft-1.0-0:i386 libidn11:i386 libglu1-mesa:i386 libxtst6:i386 libncurses5:i386

Weekend Project:

One of my friends spends most of her time at work integrating data from Xero, MYOB and other sources with her company’s product, pretty much every day.

Normally these data exports come as CSV and Excel spreadsheets. Last week she asked me whether we could build something to help her with this repetitive, boring task, which needs a lot of attention to get right.

So I asked her what she does, and she explained that the application the company has developed only accepts data in a certain format, so she normally opens the spreadsheets coming from multiple sources and transforms them by hand, using a spreadsheet software, to fit the template that the developers have provided her to get the data into the system. This is what she does:

  1. Remove unnecessary fields from the import file.
  2. Merge multiple fields to a single field (ex: First name and Surname becomes Name, etc)
  3. Trim certain characters from Address fields and Phone number fields so the system doesn’t blow up trying to import them into their database.

As she claimed, this normally takes around an hour for a file with 100 rows.
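The three steps above are simple enough to sketch in code. Here is a minimal Python illustration using the standard csv module; the column names (“First name”, “Surname”, “Phone”, “Fax”) and the digits-only phone rule are made-up examples, not her employer’s actual template.

```python
import csv
import io

def transform(rows):
    """Apply the three clean-up steps: drop unused fields,
    merge the name fields, and trim phone numbers."""
    out = []
    for row in rows:
        out.append({
            # Step 2: merge First name + Surname into a single Name field
            "Name": f'{row["First name"]} {row["Surname"]}'.strip(),
            # Step 3: keep digits only so "(02) 9999-0000" imports cleanly
            "Phone": "".join(c for c in row["Phone"] if c.isdigit()),
            # Step 1: any other column (e.g. Fax) is simply dropped
        })
    return out

source = io.StringIO("First name,Surname,Phone,Fax\n"
                     "Jane,Doe,(02) 9999-0000,n/a\n")
print(transform(csv.DictReader(source)))
# → [{'Name': 'Jane Doe', 'Phone': '0299990000'}]
```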

I thought there might be more than one person in the world who might need help with manipulating data in spreadsheets. So was born last weekend.

What it does is pretty much what I explained above in the three points: it can trim/remove characters or words from the columns that get imported, merge and map multiple columns into a single column, convert file types while doing all of the above, and produce the output as a clean, compatible Excel file. Oh, it also supports previewing data before taking an export, so you can proof-read before you really export stuff.

I also implemented user accounts in the application, which lets users save their mappings as templates in the cloud. Once they have built whatever templates they want to use against multiple different types of files, all they gotta do is upload the file and apply the template, and the rest gets done automatically, producing a clean, precise spreadsheet which fits the template the user wants.

Front Page


Supports Excel AutoFilter like searching.


Adding custom fields to the export and mapping source columns from the imported spreadsheet.
Previewing data before exporting.
Look here for the application
Like everything else I do, find the code here  :)

Enabling XDebug in AMPPS 2.2

Once you have installed AMPPS 2.2 on MacOSX, navigate to the PHP section in the AMPPS UI.

Click on the “Php Extension” button and you will see a list of extensions, but XDebug is missing. In order to get XDebug to appear in that list you will have to go to /Applications/AMPPS/php-5.<version using>/lib/extensions/

and copy the XDebug extension file in the /Applications/AMPPS/php-5.<version using>/lib/extensions/ directory to the /Applications/AMPPS/php-5.<version using>/lib/extensions/ext directory.

Then the xdebug extension will show up in the extensions list.

In brief:

cd /Applications/AMPPS/php-5.<version using>/lib/extensions/
cp <xdebug extension file> ext/

Weekend Project: Pi Music Box – Raspberry Pi as a Music streaming device

Get Grooveshark to run on a Raspberry Pi, let your friends connect and vote for their favourite songs, and let the majority rule what you hear at home or at the office.

We used to play songs regularly at work. Most of the time I play Dubstep while the rest of the team wants to listen to Trance or a different genre; this is where the clash of genres occurs, and where this system would be useful. If you get it running on a network, you can ask people to search for their favourite songs and queue them in the system. Based on the votes they and others make for a song, it may float up or down, making it either famous or not-so-famous. The result is a playlist of the highest voted, most popular songs.
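The vote-and-float behaviour described above boils down to keeping the queue sorted by vote count. A toy Python sketch (the Song class and vote() helper here are illustrative only, not the actual project code):

```python
class Song:
    def __init__(self, title):
        self.title = title
        self.votes = 0

def vote(queue, title, delta=1):
    """Add delta votes to the named song, then re-sort the queue
    so the most popular songs float to the top."""
    for song in queue:
        if song.title == title:
            song.votes += delta
    queue.sort(key=lambda s: s.votes, reverse=True)

queue = [Song("Trance Anthem"), Song("Dubstep Drop")]
vote(queue, "Dubstep Drop")
print([s.title for s in queue])
# → ['Dubstep Drop', 'Trance Anthem']
```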

The basic idea of the system is that you connect the computer that streams music to a speaker/stereo. The system consists of two applications: one streams the music and controls the output of songs, while the other takes user input via a Web browser.

Media Streaming Application

The media streaming application is a console based service that runs multiple threads: one handles the player, while a TCP server accepts user connections and passes user commands to the player to control the music output.
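The command-dispatch part of such a server can be sketched in a few lines. This Python sketch only illustrates the idea; the command names (PLAY, PAUSE, QUEUE) are guesses for illustration, not the project’s real protocol, and the actual server is a Qt/C++ application.

```python
def handle_command(player, line):
    """Dispatch one text command received over the TCP connection."""
    parts = line.strip().split(" ", 1)
    cmd = parts[0].upper()
    arg = parts[1] if len(parts) > 1 else ""
    if cmd == "PLAY":
        player["state"] = "playing"
    elif cmd == "PAUSE":
        player["state"] = "paused"
    elif cmd == "QUEUE" and arg:
        player["queue"].append(arg)  # add the named song to the playlist
    else:
        return "ERR unknown command"
    return "OK"

player = {"state": "stopped", "queue": []}
print(handle_command(player, "QUEUE Some Song"))  # → OK
print(handle_command(player, "PLAY"))             # → OK
```

A real server would run handle_command inside a per-connection thread (for example with Python’s socketserver.ThreadingTCPServer), mirroring the player-thread plus TCP-server split described above.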

The preferred framework for most of my cross-platform development is Qt, so I went with it for this project as well, using the latest release, Qt 5.1. I have specifically programmed this application to run on single board computers that run embedded Linux with a TCP/IP stack and GStreamer based audio playback.

If you look at the “grooveshark-server” project hosted on Github you should be able to understand how to get it up and running on a Raspberry Pi, a Beaglebone or any other type of single board embedded computer. I still prefer you give it a try on your desktop computer first and make sure it compiles correctly on your x86 or x86_64 box before running it in an ARM based environment.

In a nutshell, all you need to do is get Qt 5.1 running with the Qt Core, Multimedia and Networking modules, and then run GStreamer with the HTTP(S) streaming plugin and MPEG-4 decoder (found under the “gstreamer ugly” package section of your distribution).

User Interface

The purpose of this application is to show a pretty interface to the user and let them search for their favourite songs and queue them in the media streaming server we discussed above. Since that application only accepts 5 or so commands, this UI provides an eye-candy interface that lets users control the playlists and player attributes.

I cheated here: I could have implemented this inside the media streaming server as an HTTP server (probably by embedding something like mongoose), implemented my own templating engine in C and made it serve dynamic content. But since I wanted to make this project short-and-sweet (preferably finished within a weekend or so — basics :D ) I chose the LAMP stack to build the Web based user interface that controls the media server. This gives the added bonus of letting it run either on the embedded computer (that runs the media streaming server) or on another server sitting on the same network (this is achieved via the TCP service that the media server implements).

The UI is built with the LAMP stack, as I mentioned before; jQuery and Bootstrap 3 are used for front-end content manipulation.

Look below for source code and demo of both applications working together to build a complete media streaming system.




Why I chose Linux over MacOSX after using a Mac for a year

First month with a Mac

I had never used MacOSX till January this year, when I got a new job and didn’t get an opportunity to let the company know my preferred setup for work. So I ended up with a Mac mini late 2012 with an i7 CPU and lots of RAM, and it was very clear to me at the time that I would be stuck with it for a long time. Since I had never used MacOSX I wasn’t aware of how the mouse or keyboard worked, and it took me almost 2-3 weeks to get familiar with most of the shortcuts and basic functionality. So I thought of replacing the workstation I had at home with a Mac too, so that I could spend time using MacOSX and somehow minimise all sorts of embarrassing situations I faced at work, especially with the basic use of the keyboard, mouse and software installations :D

Somehow, after buying a Mac for home use, I was very happy with it. It just worked, unlike Linux, where I had to mess around with a fresh installation for days until I got it to work the way I wanted. Macs just had very few things to configure, and I had to get used to the way they let me do things, but I was happy because it worked. Since I am not that conservative I loved the change, the UI and the entire user experience on Macs. I just had to install iTerm2 and Homebrew to get some software up and running, but that wasn’t of any concern.

I thought I should also write a few words about Mac hardware: their aluminium uni-body design and craftsmanship are so excellent that I always thought it was a piece of well engineered hardware.

In brief, I was very impressed with both the software and the hardware.

After few months…

After the first few months of using the Mac mini with MacOSX I came across lots of annoying issues, some of which are listed below.

Least concern first

Linux, or at least the distributions that I have used, supports moving the cursor between words by pressing <Ctrl> + <Left | Right Arrow>. So whenever I wanted to make a change to the code or document I was working on, all I had to do was press Control + the arrow key in the direction of the word I wanted modified, and the cursor kept jumping word by word, based on the way I had configured the keyboard settings in Linux (or press the arrow key n times and it skips n words in that direction). In OSX that combination moves the cursor to the beginning or the end of the line, which was very frustrating while I was getting used to Macs. The HOME and END keys on OSX also worked in a way that was strange compared to all other operating systems; the END key was pretty much useless, or at least I couldn’t / didn’t know how to use it properly…

Dual monitors (this issue has been solved in the Latest Mavericks release)

The very first complaint would be the full-screen mode of applications making the second monitor useless. Some software had built-in full-screen modes that left the second monitor free for the user, but the system default full-screen made the second display useless. I heard that the new version of OSX would come with better screen management for multi-monitor setups, but so far it has been painful.

Making a window always on top and moving windows across monitors 

You cannot do this as of OSX 10.8.4 without hacking their window management system a lot. I found some hacky ways to get it done, but didn’t want to install all sorts of legacy (XQuartz) software on my Mac to get something so simple to work. Again, some software like mplayer had its own ‘always on top’ setting when playing movies, etc., but it wasn’t a generic setting for all applications provided by the OS. This made me sad, as I tend to use a terminal or some other software in a small window and leave it in the corner of my editor so I can quickly compile software, etc. (just one example).

Moving a maximised window from one monitor to the other is so difficult: you have to leave it precisely on the correct edge of the second monitor, because the window manager of OSX doesn’t take care of it for you. In Gnome, KDE, etc., all I did was grab the title bar, move the cursor to the second monitor and release the mouse, so the window sticks to the second monitor stretched edge to edge in its default state (maximised in this example); I didn’t have to resize or precisely move the window. This was a freaking pain.

Network shares

At this point I started using my home Mac to work on externally hosted projects, which I connected to through OpenVPN, mounting the source code directory on my workstation so that I could use my editors (Sublime and MacVim) easily. To get this done, I had to either mount the directory through Finder (using network shares) or use the terminal to mount it on the directory I wanted. Since I had a few projects running, I had a few servers connected at any given time, and all of them had their source directories in one specific path (assume it was /var/www/htdocs for now). So when I wanted to mount that path on my workstation, the first server was mounted on /Volumes/htdocs and the second on /Volumes/htdocs-2. This wasn’t very convenient, as I wasn’t sure which directory was which server when I had more than 3 mounted at the same time. If I used the terminal, however, I could mount with the custom directory names I wanted, but I could still see those production servers in the Places section of Finder, and if I ever clicked on one of them accidentally then the clicked server share got mounted again. So OSX wasn’t aware that a particular network share was already mounted on whatever directory I wanted. This was a bit annoying to me.

Getting a cross-compiler to work

I play with all sorts of embedded computers, so I was hoping to get a cross-compiler installed on my Mac so that I could write apps and compile them on my workstation, rather than trying to install a toolchain on the embedded system and waiting ages to compile something with its low-powered processor.

This was again a painful experience. I read somewhere that MacPorts (a Homebrew alternative) supports installing GCC for ARM, but Homebrew never did; and since compiling GCC needs a case-sensitive filesystem, which HFS by default is not, I had to create another partition, but that didn’t work perfectly either, so I had to run a VM to compile all multilib C/C++ apps.

HFS Filesystem

I don’t even know where to begin. I can have either HFS, FAT (exFAT included) or NTFS on a Mac, which stopped me from using my 2TB backup hard disk with other computers at home. If I used FAT on it, it wouldn’t support symlinks or case-sensitive file structures. So I had to write my own backup script, which makes an archive before backing up and then copies the files to the HD.

I will explain one issue I had with HFS. Once, when I wanted to build a toolchain and install GCC like I mentioned above, since I had never had any issues with other software I thought of reformatting my Mac and making the primary partition case-sensitive. So I did. I was happily working with the case-sensitive filesystem until I tried to install Adobe Photoshop on it. Then it complained that it can’t install Photoshop on a case-sensitive filesystem, so I was pretty much screwed xD In the end I reformatted back to case-insensitive, installed Photoshop, then installed the cross-compiler toolchain on a separate case-sensitive partition. But this is just one issue, which wasted a whole weekend.

Last but not least, memory management in OSX :)

You probably already know where I am trying to get to, and I found a nice explanation of why it sucks on Mac:

OS X has a feature called inactive memory. This is memory that was recently used by an app you closed and can be quickly made to active memory if you resume to use that app. A nice concept, that fails miserably. OS X’s documentation says, that this memory may be freed at any moment. However in practice, it just keeps on accumulating until you run out of free memory. In this case a sane option for the OS would be freeing the inactive memory. Instead the OS X decides to swap the inactive memory on the disk. So when running out of free memory and having a 1,5 gigabytes of inactive memory left, your OS starts paging the unused inactive memory to disk instead of freeing it for applications to use. Not only this causes your computer to slow down, it also is counter-intuitive in the terms of the original idea of inactive memory: when it’s on disk, it definitely is not made active quickly.
I managed to find out that this memory can be freed with combination of XCode’s purge-command and repairing disk permissions. First usually freed around 200MB of memory while latter freed almost every bit of inactive memory. Eventually this became a daily routine. When arriving to work the first thing was to hit repair disk permissions button and do something else than actually use the computer for the next five to ten minutes. Sigh.

Why Linux based distro (I use debian derivatives)?

In a few words, it frees you from all the restrictions that OSX puts on you.

I run Mint on my workstation (Mac mini) and ElementaryOS on my Acer V5 Celeron, with Cinnamon and Pantheon as window managers. I never had any of the problems I mentioned above on a Linux box. I did have hardware issues, like getting the Mac mini wireless driver (Broadcom BCM4331) to work, but I somehow managed to fix it and wrote an article about it. So in brief, if you can initially spend a few weeks configuring and figuring out the best configuration for you in a Linux based distro, then you will be safe for the foreseeable future. That’s what I understood after using my Mac mini with OSX for more than 10 months: now I prefer to invest some time initially on the configuration I need over trying so hard to adapt myself to OSX (which is clearly not meant for me, at least for now).

Apt-get software management and Linux memory management are so well designed that getting most software to run on a Linux system is fun and easy to do.

I should give some credit to the developers who designed the MacOSX UI; that clean, beautiful interface is the only thing I miss so far, along with the Magic Mouse, which I miss every day!

PLEASE NOTE: this is completely my personal opinion and you may find MacOSX suits you best; most of my colleagues love it too :D So if you have ways to fix the issues I had with OSX, or any other constructive criticism, please leave a comment.

How to check if a Port is open or not in C (Unix/Linux)

Thought of posting this snippet as I found it useful to embed in socket based programs to figure out whether a port is open on a certain host. Tested on a Linux box and it appears to work fine.

Localhost and port 22 are hard coded into the snippet.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <string.h>
#include <strings.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netdb.h>

int main(int argc, char *argv[])
{
    int portno     = 22;
    char *hostname = "localhost";

    int sockfd;
    struct sockaddr_in serv_addr;
    struct hostent *server;

    /* Create a TCP socket */
    sockfd = socket(AF_INET, SOCK_STREAM, 0);
    if (sockfd < 0) {
        perror("ERROR opening socket");
        exit(1);
    }

    /* Resolve the hostname */
    server = gethostbyname(hostname);
    if (server == NULL) {
        fprintf(stderr, "ERROR, no such host\n");
        exit(1);
    }

    /* Build the server address structure */
    bzero((char *) &serv_addr, sizeof(serv_addr));
    serv_addr.sin_family = AF_INET;
    bcopy((char *) server->h_addr,
          (char *) &serv_addr.sin_addr.s_addr,
          server->h_length);
    serv_addr.sin_port = htons(portno);

    /* A successful connect means the port is open */
    if (connect(sockfd, (struct sockaddr *) &serv_addr, sizeof(serv_addr)) < 0) {
        printf("Port is closed\n");
    } else {
        printf("Port is active\n");
    }

    close(sockfd);
    return 0;
}

Multi-hop SSH Port Forwarding

The following code block can initiate a tunnel between two SSH servers when the second SSH server doesn’t/can’t accept direct connections from the first; it uses an interim SSH server to forward the port from the source (where you would run this code block) to the destination.

If you don’t have public/private keys established between servers, ssh will prompt you to enter two passwords, first is for the interim-server and second is for the destination.

PORT=1900; USER=purinda; SECOND_HOP="ssh -N -R destination-server:${PORT}:localhost:${PORT} ${USER}@destination-server"; ssh -t -t -R interim-server:$PORT:localhost:80 ${USER}@interim-server ${SECOND_HOP}

A few assumptions:

1. The ports you need to hop between servers are available (not in use) on both the interim and destination servers.

2. There is a common user account which you can use for hopping, my first name in this case (if not, you will need to modify the command to use different usernames).

3. You should use IP addresses for the interim and destination servers (at least in the section where you forward the port, ‘IP-destination-server:${PORT}’).

4. Port 80 is being forwarded in this example; change it to whatever port you want to forward.

Installation Guide for Linux Mint / Ubuntu on Apple Mac mini (late 2012)

UPDATE (29-07-2014): If you want to install Ubuntu 14.04 or Linux Mint 17 or a newer version, please refer to

UPDATE (11-06-2014): As of Ubuntu version 14.04, the Ubuntu installer supports Macs without going through the following guide. Download the compatible Mac ISO image from the Ubuntu site before proceeding.

UPDATE (11-01-2014): I removed the Mint and Ubuntu versions used in the title of the article, assuming that all Ubuntu based distros will be almost identical in terms of installation and initial configuration steps. I personally upgraded Mint 15 to 16 following the same steps and it worked.


Getting Linux to run on Apple hardware has always been tricky, so if you get lost somewhere in the middle of the process of configuring or installing Linux, please drop a comment and I would reply as soon as possible, probably within hours.

Installing Ubuntu/Linux Mint on a Mac mini is easy, but getting it to work the way you normally want a PC to run is a bit difficult (meaning, run smoothly with all peripherals working) :) I am trying to cover all the issues I had with the Linux installation process and getting Apple hardware to run happily with Linux.

Few things before we start,

  • I assume that you have a reasonable knowledge of *nix based OSes and commands.
  • You need a stable power supply ;)
  • A Mac mini late 2012 with OSX in a perfect state (if not you can reinstall OSX to start with, I did that)
  • A keyboard and mouse
  • A monitor or two so you can see whats going on :)

NOTE: I reinstalled MacOSX before installing Linux and recreated the partition table with two partitions (using Apple Disk Utility). Since I had a 1TB drive, I split it into two equal partitions and reinstalled Mac on the first (512GB was used). I left the second partition untouched, meaning unformatted. I don’t think this is a must, but I’m just saying what my configuration was…

The Mac mini comes with a UEFI bootloader, and the easiest way to get multiple OSes to run on the machine is through rEFInd, which you can download from the link below. As far as I can see, rEFInd is a software layer for UEFI based computers that lets you easily run multiple OSes with a nice menu driven interface so users can switch between them easily. Download the “Binary zip file” from the link below.

After downloading, extract the zip, ‘cd’ into the folder and run “sudo ./”; it will show some messages and complete the installation.

Reboot; if rEFInd is installed correctly you will see the menu at boot time…

(ver 0.7.4 as of 05-Sep-2013, Binary zip archive)

Installation process
The installation should be straightforward until you get to the point where you choose the partition setup. You need to choose “Something else” in the hard disk setup/partition section, then in the partition editor choose the 2nd partition you left blank in the Mac disk partitioning utility (check the note above), and use the blank partition to set up your favourite partition table.

A video clip explaining manual disk partitioning in Mint.

In the “Device for bootloader installation” dropdown please select the root partition or the /boot/ partition you created in the partition table. Never use any of the block devices used by Mac OSX; in my case those were the device itself, /dev/sda, and /dev/sda1, /dev/sda2.

Once the partitions are confirmed and the bootloader installation device is set, proceed with the installation.

If you need a more detailed partitioning and installation guide, please refer to

NOTE: I noticed this when I was upgrading my Mac mini from Mint 15 to 16: the package grub2-amd64-signed fails if the computer doesn’t have access to the Internet (I assume this has something to do with GPG package signing).

Low colour depth in the second monitor

If you have a dual monitor setup, then you will notice that sometimes one of the monitors displays fewer colours than the other. I read somewhere that this issue happens on EFI based machines with Intel HD xxxx video cards, but I am not exactly sure of the reason behind it.

The permanent fix for this would be to alter a configuration register for the Intel HD video card. The way I prefer for this is through the /etc/profile file.

sudo gedit /etc/profile

Add the following line 

intel_reg_write 0x70008 0xC4002000

Save the file and exit. On the next boot, colours will be normal on both monitors.

Broadcom wireless BCM4331 chipset

Broadcom chips are not well supported on Linux; if you need more information about why, the Arch Linux documentation provides a good insight.

Let’s get the wifi driver installed.

You need to make sure that the repository sources are set so you can download and compile drivers from the source package itself. To enable source code repositories in Linux Mint/Ubuntu, the “Software Sources” application should have an option to enable “source code”.

After enabling source packages, open up a terminal and run the following commands. In order to compile a package from source, you need the kernel headers and the Debian package developer tools.

sudo apt-get update
sudo apt-get install linux-headers-generic linux-headers-`uname -r`
sudo apt-get install dpkg-dev debhelper dh-modaliases
sudo apt-get install --reinstall bcmwl-kernel-source

Also make sure the open source drivers are disabled. In order to disable them, edit the kernel modules config file and make the required changes as follows:

sudo gedit /etc/modprobe.d/blacklist.conf

add following lines to the bottom of the file

blacklist bcm43xx
blacklist b43
blacklist bcma

Reboot the machine and wireless should be up and running..! ;)

Suspend to ram (S3)

If you really don’t care about suspending your mac mini reliably so that you can save trees ;) and got wireless working perfectly, you can skip this section. If you need to get both suspend and wifi working, continue.

In the latest Ubuntu release, the 3.8.xx kernel is not able to reliably put the Mac mini to sleep. I couldn’t figure out the exact reason, but after a lot of trial and error I found that kernel 3.11.xx can reliably suspend the Mac mini.

But the drawback of installing the latest kernel is that the wifi driver stops compiling, due to the kernel header changes introduced in kernel 3.10.0.

If you follow the guide to the letter, you should be able to apply the patch I have submitted to the bcmwl-kernel-source package and get it to compile and install correctly.

Step 1: Create a temp directory in which to download and run the kernel updates.

mkdir /tmp/kernel-3.11.0
cd /tmp/kernel-3.11.0

Step 2: Get the kernel packages for your processor architecture from the link below. You should download THREE packages: linux-headers-generic, linux-headers and linux-image. Since all the latest Mac minis run Core iX processors, you can safely download the 64bit packages. Then install them:

sudo dpkg -i *.deb

Step 3: Error messages and the fix

At the time of writing, bcmwl-kernel-source didn’t compile with the 3.11.0 kernel headers and gave the error mentioned in the paragraph below, so I have submitted a patch to the Ubuntu Launchpad. You can check the status of the package via the URL at the end of the document; if it says the package has been updated, you don’t have to worry about running any of the commands below.

After running the dpkg command you may see an error which says to check make.log for debug information; that means the wireless driver failed to compile and install correctly. Don’t worry, just restart your computer so that it boots up with the latest kernel we just installed. Please note that the wireless will not work after the reboot (this is due to the wl.ko kernel module error we got), so make sure you have another device with this tutorial open so you can continue reading.

After restarting, download the patch I have attached to the Launchpad bug report, copy it to the bcmwl-kernel-source directory as follows, and recompile the code.

cd /usr/src/bcmwl-
sudo gedit dkms.conf

When you open dkms.conf you will see 10-15 lines similar to the following; add the indicated line to the file and save.

CLEAN="rm -f *.*o"
MAKE[0]="make -C $kernel_source_dir M=$dkms_tree/$PACKAGE_NAME/$PACKAGE_VERSION/build"
PATCH[7]="0008-add-support-to-linux-3.9.0-3.11.0.patch" <-- Line we need to add
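If you prefer doing this non-interactively, here is a small sketch that appends the PATCH entry only if it is not already present (assuming the patch file name matches the one attached to the bug report):

```shell
# Append the PATCH line to dkms.conf, but only once, so the step is safe
# to re-run. Run this from the bcmwl source directory under /usr/src.
grep -q '0008-add-support-to-linux-3.9.0-3.11.0.patch' dkms.conf 2>/dev/null || \
  echo 'PATCH[7]="0008-add-support-to-linux-3.9.0-3.11.0.patch"' >> dkms.conf
```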

Step 4: Reconfigure bcmwl-kernel-source

If you have done all the above steps correctly, you should be able to run the following command and recompile the bcmwl driver without compile time errors.

sudo dpkg-reconfigure bcmwl-kernel-source

Step 5: Check if the driver is installed

cd /lib/modules/3.11.0-031100-generic/kernel/drivers/net/wireless
ls -l

You should see a file called wl.ko, and that’s it!

Reboot your machine and wireless should come up. You can also try loading the kernel module by running “sudo modprobe wl” in the terminal; it should load without any module-not-found errors.

If anyone is interested, this is the bug that we just fixed:

These are all the problems I experienced while using Linux on my Mac mini in the first two weeks. I hope this document helps one of you brave Linux users run your favourite distro on some of the best hardware Apple makes :)

Drop a comment if you found any other issues with Mac minis and Linux, I am happy to help.

How to Setup an Amazon AWS EC2 NFS Share

This article will be short and to the point, so anyone who wants to mount a remote NFS share on their local machine should be able to get it up and running in 5 minutes or less. Here we go!

Step – 1 Setup AWS security groups

In your EC2 instance, set up the following security group rules. It is a good idea to create a separate security group called “NFS Services” or similar, to keep these rules separate from the rest of your security groups.

Port (Service) Source
32770 – 32800  

I have left the source open for those ports, but I restrict who can access those services via /etc/hosts.deny in the next step. Alternatively, set the source to the external IP address of the client machine when you add those port rules.
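To restrict access via tcp-wrappers as mentioned, a typical pair of entries would look like the following (the daemon list and the client address are illustrative; adjust them to your setup):

```
# /etc/hosts.deny  -- deny the NFS-related daemons to everyone by default
rpcbind mountd nfsd statd lockd rquotad : ALL

# /etc/hosts.allow -- then allow only your client's external IP
rpcbind mountd nfsd statd lockd rquotad : <client-external-ip>
```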

Step – 2 Install the NFS server

You need a running NFS service on the remote server so that the client can access shared directories. Install the NFS server by typing the following line in the terminal of your AWS instance.

sudo apt-get update && sudo apt-get install nfs-kernel-server

Step – 3 Decide what you want to share

Whatever you want to share should go in the /etc/exports file. Edit it using nano, vi, or whatever terminal text editor you prefer; I use nano here

sudo nano /etc/exports

and add entries of the directories you want to share

/home/purinda *(rw,async,insecure,all_squash,no_subtree_check,anonuid=1001,anongid=1001)
/opt *(rw,no_subtree_check,sync,insecure)
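If you script your server setup, export entries like the ones above can be generated rather than typed by hand. A tiny sketch (the make_export helper is hypothetical, not part of any NFS tooling):

```shell
# Build an /etc/exports line from a directory, a client spec, and the
# anonymous uid/gid to squash to.
make_export() {
  printf '%s %s(rw,async,insecure,all_squash,no_subtree_check,anonuid=%s,anongid=%s)\n' \
    "$1" "$2" "$3" "$4"
}

# Prints: /home/purinda *(rw,async,insecure,all_squash,no_subtree_check,anonuid=1001,anongid=1001)
make_export /home/purinda '*' 1001 1001
```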

If you read a good article on how exports work, you will see that the asterisk specifies the client IP (which can be given as a CIDR address, or * to allow any client).
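For example, the same export restricted to a single client or to a /24 subnet would look like this (the addresses are purely illustrative):

```
/home/purinda,async,insecure,all_squash,no_subtree_check,anonuid=1001,anongid=1001)
/home/purinda,async,insecure,all_squash,no_subtree_check,anonuid=1001,anongid=1001)
```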

I have a different configuration for /home/purinda because I connect from a Mac OS X 10.8.2 Mountain Lion client, which uses an NFSv2 client and requires some security tweaks like the ones shown. See my other article on this subject.

Step – 4 Reload the NFS service 


sudo service nfs-kernel-server reload

to reload the NFS service on your ec2 instance.

and you may or may not need to re-export the shares:

sudo exportfs -av

Step – 5 Connect!

On your local desktop, open a terminal and create a directory to be used as the mount point for the remote directory. For example, if you want the remote /home/purinda mounted at /Volumes/purinda on Mac OS X:


mkdir /Volumes/purinda
mount -t nfs -o nfsvers=2 <elastic-ip-of-ec2>:/home/purinda /Volumes/purinda/

on a Linux desktop/client you may be able to just do

sudo mkdir -p /mnt/purinda
sudo mount -t nfs <elastic-ip-of-ec2>:/home/purinda /mnt/purinda/


NFS Shares between Ubuntu 12.04 and Mac OSX 10.8

Mounting NFS volumes between Ubuntu and Mac OS X 10.8 was a problem for weeks; I resorted to SMB shares as I couldn’t get the two systems to live in harmony! But today, after fiddling with a zillion different configurations on both sides, I got Mac OS X to accept my NFS exports from Ubuntu 12.04. This is how I got it working….

First you need to export directories from the Ubuntu end, so you need the nfs-kernel-server package installed on the Ubuntu system.

sudo apt-get install nfs-kernel-server

Then edit /etc/exports file as root. Normally, you would

sudo nano /etc/exports

Then add whatever directory you want to share with the Mac OS X machine (or any NFS-compatible computer). Here, I have shared my Projects directory under /home/purinda (my home directory).

/home/purinda/Projects *(rw,async,insecure,all_squash,no_subtree_check,anonuid=1001,anongid=1001)

To exit nano after saving your changes, use Control+O to save the file and Control+X to exit.

Let’s go through each setting to understand what it does,

“/home/purinda/Projects” is what you are sharing with others.

* means you are sharing with any client machine. For security reasons you may want to put something like “” without quotes to share only with a single client, or “” to share with all clients in your Class C network. I use * (any) because the machine I am working on is a virtual machine, exposed only to the host computer and other VMs in the virtual network.

The manual page for ‘exports’ (`man exports`) gives a good explanation.

Once you have edited the exports file correctly, you can restart nfsd (the NFS server daemon) using

sudo service nfs-kernel-server reload

You can also use the following command to export newly shared directories whenever you make changes to your exports file,

sudo exportfs -ra

To mount it, open a terminal on the Mac and type (change the host and directory path to suit your requirements)

mount -t nfs -o nfsvers=2 /Volumes/Projects/

The ‘nfsvers=2’ option is important for Mac OS X and Ubuntu to be compatible with each other.

UPDATE: 19/09/2014 

I tried to follow my own instructions and failed, not because they were wrong, but because of the way Ubuntu’s upstart starts services.

When I tried to mount the exported directory on the client Mac I got “Cannot retrieve info from host: RPC: Program not registered”.

The fix is to restart the rpcbind service on the NFS server before starting nfs-kernel-server.


sudo service rpcbind restart


sudo service nfs-kernel-server restart

Then try remounting.

Multiple/Second SPI Buses on BeagleBone

How to: Get SPI to work on Beaglebone

If you want just one SPI port running, most up-to-date images for the BeagleBone have it enabled by default.
Location to get pre-compiled images with SPI1 enabled:

Enabling SPI0

Step 1

  • In order to enable userland SPI we need to be able to rebuild the kernel, so grab the Linux source:
    git clone git://
  • Install the required tools and libs by running:
    sudo apt-get install gcc-4.4-arm-linux-gnueabi git ccache libncurses5-dev u-boot-tools lzma

I tend to compile the kernel using hardfloat, so arm-linux-gnueabihf compiler would be the way to go.

  • As per Robert C. Nelson’s repo, get the patches required to enable spidev
    git clone git://
    cd linux-dev
  • switch to the am33x-v3.2 branch with:
    git checkout origin/am33x-v3.2 -b am33x-v3.2
  • Now copy to and update with the settings for your setup:
  • For my setup I had to make the following changes to
    uncomment line 14, change from:
    at line 60 update the path to your linux source, change from:
    uncomment line 70 to set the kernel entry point, change from:
    uncomment line 80 to set the BUILD_UIMAGE flag, change from:
    and finally at line 89 uncomment and set the path to the SD card, change from:
  • Build kernel command

Step 2

Note: This step discusses how to enable SPI0 (SPI1 is enabled by default with the latest kernels, or with the precompiled images downloadable from the URL above).

  • Go to the “KERNEL/arch/arm/mach-omap2” directory in the kernel source tree you cloned in the steps above.
  • Make the following modification after you reach the kernel configuration menu.
    • Note: if you make the change before reaching the config menu, the git update process will overwrite it with the files from the repository.
      *** /home/purinda/Software/Linux/linux-dev/KERNEL/arch/arm/mach-omap2/board-am335xevm.c 2012-08-10 00:12:53.937992451 +1000
      --- board-am335xevm.c 2012-08-09 23:36:24.774040301 +1000
      *** 2275,2280 ****
      --- 2275,2289 ----
      static struct spi_board_info bone_spidev2_info[] = {
      + /* [PG] I added the following struct call in order to get SPI0 working
      + */
      + {
      + .modalias = "spidev",
      + .irq = -1,
      + .max_speed_hz = 12000000,
      + .bus_num = 1,
      + .chip_select = 0,
      + },
       .modalias = "spidev",
       .irq = -1,
      *** 3189,3195 ****
       if(beaglebone_spi1_free == 1) {
       beaglebone_spi1_free = 0;
       pr_info("BeagleBone cape: exporting SPI pins as spidev\n");
      ! setup_pin_mux(spi1_pin_mux);
       spi_register_board_info(bone_spidev2_info, ARRAY_SIZE(bone_spidev2_info));
       if(beaglebone_w1gpio_free == 1) {
      --- 3198,3207 ----
       if(beaglebone_spi1_free == 1) {
       beaglebone_spi1_free = 0;
       pr_info("BeagleBone cape: exporting SPI pins as spidev\n");
      ! /* [PG] I made the following pin_mux call in order to get SPI0 working
      ! */
      ! setup_pin_mux(spi0_pin_mux);
      ! setup_pin_mux(spi1_pin_mux);
       spi_register_board_info(bone_spidev2_info, ARRAY_SIZE(bone_spidev2_info));
       if(beaglebone_w1gpio_free == 1) {

Step 3

  • Call ./ (inside linux-dev directory).

Step 4

  • Call ./tools/
    • you may need to modify file according to the changes displayed above.
    • if you have set the correct memory card device in it will load the compiled driver correctly into the boot partition. In my case the mmc dev was /dev/sdb

How to test

  • Once you boot the new kernel you should see two devices in the /dev/ directory when you run the following command
    ls /dev/spidev* -lah
    Output should be
    crw------- 1 root root 153, 0 Aug 9 23:58 /dev/spidev1.0
    crw------- 1 root root 153, 1 Aug 9 23:58 /dev/spidev2.0
  • spidev1.0 is SPI0 on the BeagleBone board and spidev2.0 is SPI1 (the SPI master active by default on precompiled kernel images).
  • Once you can see these two devices you are ready to test.
  • connect pin 18 and 21 in header P9 with a cable, compile and run spidev_test.c in documentation/spi folder in Kernel source (refer to 1st site in references section) to test “spidev1.0”.
  • connect pin 28 and 30 in header P9 with a cable, compile and run spidev_test.c in documentation/spi folder in Kernel source (refer to 1st site in references section) to test “spidev2.0”.
  • When you do so, make sure you change the following line (in spidev_test.c) to spidev1.0 or spidev2.0, depending on which you need to test.
    static const char *device = "/dev/spidev1.1";

Example SPI driver with modifications

Modified am335_omap_spi.c file from kernel source

Beaglebone RF24(nRF24L01) Library


I started working on a project which requires communication with multiple radios. I looked at the nRF24L01 chip and found it to be a very useful, cheap little device with many features built in. Initially I got two AVRs to talk to each other through the simple library found here ( But I found multiple issues with the library as the number of nodes in the network increases:

  • No support for retries
  • No dynamic packet sizes
  • No channel scanning, etc.

It was a simple library to start with and to understand how the radio actually works in different configurations. I didn’t want to spend a lot of time implementing all the features I wanted, so I started looking around for complete libraries for the nRF24L01, and I found RF24.

RF24 is one of the most complete libraries available for the nRF24L01[+], so I implemented my project using it. It is an excellent library for AVRs (Arduino), but there was no port for devices running Linux, and one of the nodes in my network was a Linux board. So I spent a weekend hacking the RF24 library, writing a very thin layer below it to support all of its features on embedded Linux boards. I used a BeagleBone for testing.

This is a Beaglebone port of the popular RF24(nrf24l01) library posted on
If you find the library is out of sync with the Arduino library please drop an email to

Contains portions of the GPIO library posted here,

Complete wiki can be found here


  • Refer to RF24 documentation.


How to connect the pins

  • I used one of these boards to test the library.
  • Connect PIN 1 (GND) of NRF24L01 board to Expansion header P9 PIN 1(GND) of Beagleboard
  • Connect PIN 2 (VCC) of NRF24L01 board to Expansion header P9 PIN 3(3v3) of Beagleboard
  • Connect PIN 3 (CE) of NRF24L01 board to Expansion header P9 PIN 27(GPIO3_19) of Beagleboard
  • Connect PIN 4 (CSN) of NRF24L01 board to Expansion header P9 PIN 25(GPIO3_21) of Beagleboard
  • Connect PIN 5 (SCK) of NRF24L01 board to Expansion header P9 PIN 31(SPI1_SCLK) of Beagleboard
  • Connect PIN 6 (MOSI) of NRF24L01 board to Expansion header P9 PIN 30(SPI1_D1) of Beagleboard
  • Connect PIN 7 (MISO) of NRF24L01 board to Expansion header P9 PIN 29(SPI1_D0) of Beagleboard
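Before running the demo it is worth checking that the SPI device node the library talks to actually exists on the BeagleBone; a small sanity-check sketch:

```shell
# List the spidev devices exported by the kernel. With the wiring above the
# library uses SPI1, which appears as /dev/spidev2.0 on the stock images.
ls -l /dev/spidev* 2>/dev/null || echo "no spidev devices found"
```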


  • Note that the library uses a GPIO pin on the BeagleBone as the CS pin, which is a workaround for an issue in the code. I plan to fix it whenever I have time.
  • The project is written using NetBeans and I have committed some files used by the NetBeans IDE.
  • main.c contains a program which prints out the information of the connected NRF24L01 device and ping demo.
  • you may need to upload the Arduino sketch in the ‘pingpair_dyn_arduino_pongback’ directory to an Arduino board in order for both the BeagleBone and Arduino to run interactively.

Download the source here