Comparison: Sony E 18-135mm F3.5-5.6 vs Zeiss 16-70mm F4

The purpose of this article is to address the long-standing discussions and controversies around the Zeiss 16-70mm zoom and compare it to the Sony 18-135mm, a far more recent release from Sony that is generally well praised.

  • Zeiss 16-70mm F4 (released on August 27, 2013)
  • Sony E 18-135mm F3.5-5.6 (released in January 2018)

Sony APS-C sensor cameras, from the A6000 all the way to the A6500, have limited options when it comes to zoom lenses; the entire range from Sony comes down to four lenses, each with its own strengths and weaknesses.

  • Sony E PZ 18-105mm F4 G OSS
  • Sony E 18-135mm F3.5-5.6 OSS (a more recent release than the other two)
  • Zeiss 16-70mm F4 OSS
  • Sony E 16-55mm F2.8 (latest release)

I excluded the Sony 18-105mm G from the comparison as it has a known issue with high distortion and is a considerably bigger lens for the APS-C mount; it is also a power zoom aimed more at videographers than at people who want to take photos and need a smaller zoom for portability to go with their crop-sensor bodies. I have also seen many comparisons between the 18-105 and the 18-135, so it is not difficult to figure out which of the two is best image quality and sharpness wise.

On the other hand, it is too early to say how good the latest 16-55mm F2.8 is, as it was released only a few days ago and I haven't seen any in-depth reviews yet. I also don't find the ~$2000 (AUD) price tag reasonable unless you are a professional and weather sealing and an additional custom button are requirements.

Hence this research to weigh options such as the 18-135 vs the 16-70mm, as they both cover a versatile zoom range at a similar price point (the Zeiss is around $200 more expensive).

Why a comparison?

Reason 1: The Zeiss 16-70mm F4 is a controversial lens, mainly due to the blue "Zeiss" label and the price tag that comes with it. Many people who bought it have received decentered copies with QC issues or generally softer edges.

Reason 2: There are many online forum discussions by people who either own the Zeiss or want to buy it and are asking for recommendations. Mostly the answer from owners who got bad copies has been to hold off.

Some online forum threads comparing the two:
https://www.dpreview.com/forums/thread/4381291
https://www.dpreview.com/forums/thread/4271837
https://www.dpreview.com/forums/thread/4375304
https://forum.lowyat.net/topic/4648948/all

Reason 3: The price gap between the Zeiss and the Sony is getting smaller. The Zeiss has come down to the $980-$1200 range here in Australia. Therefore I believe both lenses should be considered as great general-purpose zooms.

Reason 4: I purchased both lenses within a month or so of each other, and decided to keep only one, sell the other, and figure out which is the better lens overall IQ-wise.

Test Method

This wasn't a perfectly scientific test, but I tried to keep the parameters between lens tests as fair and accurate as possible to derive unbiased results.

There are a number of ways to test the IQ of a lens, but for this particular exercise I chose to shoot indoors on a solid tripod at a fixed target.
The target is from the B&H lens test document: http://static.bhphotovideo.com/explora/explora/sites/default/files/bandh-test-target.jpg

I could only print the lens test chart in A4 size due to home printer limitations, so I printed four copies, joined them with tape into a bigger rectangular chart, and stuck it to the wall in a somewhat evenly lit room.

Camera settings

Focus mode: Manual
White balance: Fixed to indoor fluorescent
ISO: 1000
Distance from the target: Fixed (~60cm away)

I kept the above settings fixed between lenses; when taking test shots I changed only the aperture. I went through F4, F5.6 and F8, as those are the most common aperture values I would shoot with the Zeiss. For the Sony I went through F3.5, F4, F5.6 and F8.

Lenses Used

I purchased both lenses within a month or so of each other and tested for decentering issues. I didn't find any obvious issues with the Zeiss; the Sony 18-135 was very slightly soft on the left side, but I believe this is within the acceptable threshold.

If you see issues with any of the lenses that I didn’t, feel free to leave a comment and point that out.

The Results

The results for the two lenses are shown below at different aperture values. Some tests had to be done at different aperture values because the maximum aperture differs between the lenses.

NB: Right-click and open the URL in a new tab, or click on "Download", to get the full 24MP image to view at 100%.

Test 1: at shortest focal lengths Sony 18mm f3.5 vs Zeiss 16mm f4

Test 2: at shortest focal lengths Sony 18mm f5.6 vs Zeiss 16mm f5.6

Test 3: at shortest focal lengths Sony 18mm f8 vs Zeiss 16mm f8

Test 4: at ~24mm Sony f4 vs Zeiss f4

Test 5: at ~24mm Sony f5.6 vs Zeiss f5.6

Test 6: at ~24mm Sony f8 vs Zeiss f8

Test 7: at ~35mm Sony f4.5 vs Zeiss f4

Test 8: at ~35mm Sony f5.6 vs Zeiss f5.6

Test 9: at ~35mm Sony f8 vs Zeiss f8

Test 10: at ~50mm Sony f5.6 vs Zeiss f4

Test 11: at ~50mm Sony f5.6 vs Zeiss f5.6

Test 12: at ~50mm Sony f8 vs Zeiss f8

Test 13: at ~70mm Sony f5.6 vs Zeiss f4

Test 14: at ~70mm Sony f5.6 vs Zeiss f5.6

Test 15: at ~70mm Sony f8 vs Zeiss f8

Conclusion

  1. The obvious point from the above tests is that both lenses are sharp in the centre, even wide open.
  2. Both lenses become outstandingly sharp corner to corner when stopped down, especially at f8; given that this is a common aperture for landscape photos, both lenses can be recommended for landscapes.
  3. From my observations, the Zeiss performs better in the corners at larger apertures (f4) than the Sony; the 18-135's sharpness falls apart quickly as you move away from the centre.
  4. The Zeiss's weakness becomes apparent at 70mm; as far as I can observe, this is the weakest focal length of the Zeiss lens, where sharpness and contrast are slightly lower.
  5. At around 35mm both lenses perform equally well.

Following the tests, I decided to keep the Zeiss due to its overall performance below 70mm and its constant f4 aperture, as well as its smaller size compared to the 18-135. In these tests I didn't pay any attention to AF performance or OSS.


Centralised and Scalable log-server implementation on Ubuntu 15.04

The goal of this article is to explain how to implement a very simple log server which can accept application logs coming from one or more servers. For simplicity, I will be implementing the log server and the log forwarder (explained in the Components section) on the same computer.

In order to keep the article easy to understand and the implementation as simple as possible, I have excluded securing the logs transmitted from application servers to the log server and consider it outside the scope of this article. Therefore, if you implement this exact solution, make sure you consider the security of log data, as some logs may contain sensitive information about the applications.

Components

  • Log-courier: sends logs from the servers that need to be monitored. In this example, the plain-text TCP transport protocol will be used for forwarding logs.
  • Logstash: processes incoming logs from the log forwarder application (log-courier in this tutorial) and sends them to Elasticsearch for storage.
  • Elasticsearch: distributed search engine & document store used for storing all the logs that are processed by Logstash.
  • Kibana: user interface for filtering, sorting, discovering and visualising logs that are stored in Elasticsearch.

Things to consider

  • In a real-world scenario you would run the log forwarder on the servers you need to monitor and have it send logs back to the centralised log server. For simplicity, in this article both the log server and the server being monitored are the same machine.
  • Logs can grow rapidly depending on how much log data you send from the monitored servers and how many servers are being monitored. Therefore my personal recommendation would be to transmit only what you really need to monitor; for a web application this could be access logs, error logs, application-level logs and/or database logs (see the example config after this list). I would additionally send the syslog, as it indicates OS-level activity.
  • Transmitting logs means more network bandwidth requirements on the application server side; therefore, if you are running your application on a cloud-computing platform such as EC2 or DigitalOcean, make sure you take network throttling into account when your centralised log monitoring implementation is deployed.
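For example, a hypothetical "files" section of a log-courier config (the full format is shown later in this article) shipping only a web server's access/error logs plus syslog could look like this; the nginx paths are assumptions for a typical Ubuntu install:

"files": [
    {
        "paths": [ "/var/log/nginx/access.log", "/var/log/nginx/error.log" ],
        "fields": { "type": "nginx" }
    },
    {
        "paths": [ "/var/log/syslog" ],
        "fields": { "type": "syslog" }
    }
]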

Installation steps

There is no particular order that you need to follow to get the stack working.

Elasticsearch installation

  • Install Java
sudo add-apt-repository -y ppa:webupd8team/java
sudo apt-get update
sudo aptitude -y install oracle-java8-installer
java -version
  • Install Elasticsearch
wget https://download.elastic.co/elasticsearch/elasticsearch/elasticsearch-1.7.1.deb -O /tmp/elasticsearch-1.7.1.deb
sudo dpkg -i /tmp/elasticsearch-1.7.1.deb
sudo update-rc.d elasticsearch defaults 95 10

Then start the service: sudo service elasticsearch start.
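To verify that Elasticsearch is up, you can query its HTTP API on the default port 9200; the exact cluster and version details in the JSON response will vary per install:

curl http://localhost:9200/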

Install Logstash

echo 'deb http://packages.elasticsearch.org/logstash/1.5/debian stable main' | sudo tee /etc/apt/sources.list.d/logstash.list
sudo apt-get update
sudo apt-get install logstash

Install log-courier

Installing log-courier can be a bit tricky, as we have to install the Logstash plugins that support the data communication protocol implemented in log-courier. More information can be found here: https://github.com/driskell/log-courier/blob/master/docs/LogstashIntegration.md

sudo add-apt-repository ppa:devel-k/log-courier
sudo apt-get update
sudo apt-get install log-courier

You now need to install the log-courier plugins for Logstash. When I installed Logstash using the above instructions, it was installed in the /opt/logstash folder, so the following commands assume that your Logstash was also automatically installed to /opt/; if not, leave a comment below.

cd /opt/logstash/
sudo bin/plugin install logstash-input-courier
sudo bin/plugin install logstash-output-courier

Install Kibana

wget https://download.elastic.co/kibana/kibana/kibana-4.1.1-linux-x64.tar.gz -O /tmp/kibana-4.1.1-linux-x64.tar.gz
sudo tar xf /tmp/kibana-4.1.1-linux-x64.tar.gz -C /opt/

Install the init script to manage kibana service, and enable the service.

cd /etc/init.d && sudo wget https://gist.githubusercontent.com/thisismitch/8b15ac909aed214ad04a/raw/bce61d85643c2dcdfbc2728c55a41dab444dca20/kibana4
sudo chmod +x /etc/init.d/kibana4
sudo update-rc.d kibana4 defaults 96 9
sudo service kibana4 start

Configuration steps

Logstash configuration

Logstash needs to be configured to accept the log data transmitted by log-courier, process it, and output it to Elasticsearch. The simple config below adds a courier input, a grok filter for syslog messages, and an Elasticsearch output.

This Logstash configuration sits on the log server, which stores and processes all the incoming logs from the other servers running the log-courier agent.

sudo vim /etc/logstash/conf.d/01-simple.conf

Then paste the following config:

input {
  courier {
    port => 5043
    type => "logs"
    transport => "tcp"
  }
}

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

output {
    elasticsearch { host => localhost }
    stdout { codec => rubydebug }
}

Log-Courier configuration

In a real-world scenario, this log-courier configuration would sit on the servers that need to be monitored. The following is an example config for this tutorial which will ship syslog activity.

sudo vim /etc/log-courier/log-courier.conf
{
    "network": {
        "servers": [ "localhost:5043" ],
        "transport": "tcp"
    },
    "files": [
        {
            "paths": [ "/var/log/syslog" ],
            "fields": { "type": "syslog" }
        }
    ]
}

Enable services

Now we have Elasticsearch, Logstash, Kibana and log-courier configured to process, store and view logs. In order to use the setup, we need to start all the services using the Ubuntu service command.

sudo service elasticsearch start
sudo service logstash start
sudo service kibana4 start
sudo service log-courier start

The way I normally test whether services are listening on their particular TCP ports is with the lsof command. The following will show whether Elasticsearch, Logstash and Kibana are listening on their respective ports.

sudo lsof -i:9200,5043,5601

An example output may look like this:

COMMAND     PID          USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
log-couri   970          root    9u  IPv4 373056      0t0  TCP localhost:38366->localhost:5043 (ESTABLISHED)
java      15513 elasticsearch  134u  IPv6 372333      0t0  TCP localhost:9200 (LISTEN)
java      15513 elasticsearch  166u  IPv6 373878      0t0  TCP localhost:9200->localhost:41345 (ESTABLISHED)
java      15612      logstash   16u  IPv6 372339      0t0  TCP *:5043 (LISTEN)
java      15612      logstash   18u  IPv6 373057      0t0  TCP localhost:5043->localhost:38366 (ESTABLISHED)
node      15723          root   10u  IPv4 372439      0t0  TCP localhost:41345->localhost:9200 (ESTABLISHED)
node      15723          root   11u  IPv4 373061      0t0  TCP *:5601 (LISTEN)
node      15723          root   13u  IPv4 389495      0t0  TCP 10.11.0.4:5601->10.11.0.2:64084 (ESTABLISHED)

Testing the setup

One way to test our configuration is to log something to syslog, which can be done using the "logger" command. Once the Kibana indices are set up (refer to the Kibana documentation), incoming log messages will appear there in real time.

An example command to log a test message to syslog:

logger test-log-message

Now go to the Kibana web interface to discover the message you pushed to syslog. In this setup Kibana is listening on port 5601 on the log server (e.g. http://10.11.0.4:5601/). That's it!

Conclusion

You have now set up a very simple centralised logging system for your application with Kibana, Logstash and Elasticsearch, with log-courier as the agent sending log messages from the application servers to the centralised Logstash server. This setup can be scaled by running a cluster of Elasticsearch and Logstash instances; a good starting point can be found at https://www.elastic.co/guide/en/elasticsearch/guide/current/scale.html

This setup can accept any type of logs coming from log-courier agents, and you can discover/filter them using Kibana. Logstash can be configured to support additional log types using grok patterns, which you can experiment with at https://grokdebug.herokuapp.com/
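For example, if the log-courier agents were shipping web server access logs tagged with a hypothetical type of "nginx", a Logstash filter using the stock COMBINEDAPACHELOG grok pattern might look like this sketch:

filter {
  if [type] == "nginx" {
    grok {
      match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
  }
}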

Export NFS shares from OS X to Ubuntu / Debian and mount using Autofs

I will discuss the few steps involved in getting an NFS directory shared from OS X and mounting it on an Ubuntu VM.

Step 1

Read https://support.apple.com/en-us/HT202243

The gist of it is that OS X knows how to handle (export) NFS shares as long as you write the instructions correctly in the /etc/exports file. So assume you need to export the directory /Volumes/Disk 2/Projects.

Edit the /etc/exports file in your favourite editor as root (e.g. sudo vim /etc/exports), paste the line below and change it to fit your requirements. The parameters are explained below.

/Volumes/Disk\ 2/Projects -network 10.11.1.0 -mask 255.255.255.0 -mapall=purinda

/Volumes/Disk\ 2/Projects
is the directory we export as an NFS share.
-network 10.11.1.0
is the network subnet that will be allowed to access the share.
-mask 255.255.255.0
is the subnet mask, which indicates that all IPs from 10.11.1.1 to 10.11.1.255 will be allowed.
-mapall=purinda
indicates that read/write operations will be done as this user.

Step 2

Run the following command in your OS X terminal, which will restart the nfsd daemon and trigger the above export (this may not be required).

sudo nfsd restart

Step 3

SSH into your Ubuntu server and install autofs for mounting the above export. Remember that the network defined in the export above must include the Ubuntu server's IP for it to access the share.

Once SSH'ed in, run the following command to validate the export.

showmount -e <ip of the OSX>

The above command should show the exported directory; if not, try running the same command on the OS X machine to validate that NFS has started sharing the directory.
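Assuming the OS X machine is at 10.11.1.2 (as in the mount example later in this article), a successful showmount run would print something along these lines:

Export list for 10.11.1.2:
/Volumes/Disk 2/Projects 10.11.1.0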

Install autofs

sudo apt-get install autofs

Step 4

If all of the above steps have worked for you, it is now time to set up a config file for autofs to mount the NFS share on demand.

In order to do this, we need an autofs mount file in the /etc/ folder; you can give this file any easily recognisable name. In my case I called it auto.projects, because I was sharing projects that I worked on with the Ubuntu VM.

I ran the command below (using Vim) to create the autofs config file:

sudo vim /etc/auto.projects

Then paste the following line, change it according to your setup, and save the file.

/home/purinda/Projects -fstype=nfs,rw,nosuid,proto=tcp,resvport 10.11.1.2:/Volumes/Disk\ 2/Projects/

/home/purinda/Projects
This is the directory that the NFS export will be mounted on.
-fstype=nfs,rw,nosuid,proto=tcp,resvport
These are the options passed to the NFS mount command.
10.11.1.2:/Volumes/Disk\ 2/Projects/
This is the OS X host IP and the absolute path to the NFS share on the OS X machine.

Step 5

You need to let autofs know that you have added your NFS share configuration. In order to do that, edit the master autofs config.

sudo vim /etc/auto.master

Paste the following line, referencing the file we created in Step 4, at the end of the /etc/auto.master file:

/- auto.projects

Then restart autofs on the Ubuntu VM using the command below:

sudo service autofs restart

And… that's it. You should now have the NFS share mounted in your Ubuntu VM under the specified path.
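Since autofs mounts the share on demand, it only appears once the path is accessed; a quick way to trigger and verify the mount (using the paths configured above):

ls /home/purinda/Projects
mount | grep Projects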

Symfony2 assetic with Ruby 2.1 (under Ubuntu 15.04)

I recently updated my dev machine to Ubuntu 15.04 and assetic:dump on my Symfony2 apps stopped working, showing the following error message.

[Assetic\Exception\FilterException]
An error occurred while running:
'/usr/bin/ruby' '/usr/local/bin/sass' '--load-path' '/home/purinda/Projects/phoenix/apps/frontend/src/Nlm/XYZBundle/Resources/public/css' '--scss' '--cache-location' '/home/purinda/Projects/phoenix/apps/frontend/app/cache/dev' '/tmp/assetic_sassbUrFhU'
Error Output:
/usr/local/bin/sass:23:in `load': cannot load such file -- /usr/share/rubygems-integration/all/gems/sass-3.2.19/bin/sass (LoadError)
from /usr/local/bin/sass:23:in `<main>'

Input:

….

It appears that Ubuntu 15.04 installs Ruby 2.1 but doesn't install a more recent version of Sass, so in order to get an up-to-date version of the Sass compiler, you need to install the Compass pre-release version.

gem install compass --pre

Try running assetic:dump again.
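In a standard Symfony2 project layout the command would be the following (console path assumed; adjust to your app):

php app/console assetic:dump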

Compiled binaries of Qt 5.3 for the Raspberry Pi (ARMv6) and Pi 2 (ARMv7-A)

At the time of writing, a Qt 5.4 release existed, but I found it has a number of bugs that affect the multimedia capabilities of the Raspberry Pi.

Download the following binary release and install it using the instructions below. The instructions assume you still have the user pi on your Raspberry Pi system and the make command installed (sudo apt-get install make). For the wget command, use the address found in the "Download" link below the instructions.

ssh pi@<ip-of-the-rpi>

mkdir -p Qt5.3.2/qt-everywhere-opensource-src-5.3.2

cd Qt5.3.2

wget https://s3-ap-southeast-2.amazonaws.com/purinda.com/raspberrypi/qt-everywhere-opensource-src-5.3.2_compiled_armv7l.tar.gz

tar xf qt-everywhere-opensource-src-5.3.2_compiled_armv7l.tar.gz -C qt-everywhere-opensource-src-5.3.2

cd qt-everywhere-opensource-src-5.3.2

sudo make install

After running "make install", the binaries will be copied to /usr/local/qt5, so you can remove the /home/pi/Qt5.3.2 directory.

The Raspberry Pi still doesn't know how to use the Qt 5.3.2 libraries, as we haven't told it where the library files are. So you need to edit your .bashrc and add the following lines, which point to the Qt libraries.

export LD_LIBRARY_PATH=/usr/local/qt5/lib/
export PATH=/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/qt5/bin
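After re-sourcing .bashrc (or logging in again), you can confirm the new toolchain is picked up; it should report Qt 5.3.2 in /usr/local/qt5/lib:

qmake -version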

Precompiled binaries: Download

The problem

If, like me, you are running Raspbian on your "Raspberry Pi" or "Raspberry Pi 2", you will find that Raspbian pointing to the following package repository doesn't contain the Qt libraries.

deb http://mirrordirector.raspbian.org/raspbian/ wheezy main contrib non-free rpi

I then tried searching for a solution, and almost all of them require you to cross-compile the Qt source, which I am not a big fan of, as you need to cross-compile the dependencies of the Qt libraries to get Qt to compile without choking on some incorrectly cross-compiled dependency.

So I thought of compiling the Qt source package directly on the shiny new Raspberry Pi 2, which has a quad-core CPU and 1GB of RAM and should therefore compile reasonably fast. I ran ./configure and make -j3 (-j3 tells make to use 3 cores) with the following settings enabled.

Build options:
Configuration ………. accessibility accessibility-atspi-bridge alsa audio-backend c++11 clock-gettime clock-monotonic concurrent cups dbus evdev eventfd fontconfig full-config getaddrinfo getifaddrs glib iconv icu inotify ipv6ifname large-config largefile libudev linuxfb medium-config minimal-config mremap nis no-harfbuzz openssl pcre png posix_fallocate precompile_header pulseaudio qpa qpa reduce_exports release rpath shared small-config system-freetype system-jpeg system-png system-zlib xcb xcb-glx xcb-plugin xcb-render xcb-xlib xinput2 xkbcommon-qt xlib xrender
Build parts ………… libs
Mode ………………. release
Using C++11 ………… yes
Using PCH ………….. yes
Target compiler supports:
iWMMXt/Neon ………. no/auto

Qt modules and options:
Qt D-Bus …………… yes (loading dbus-1 at runtime)
Qt Concurrent ………. yes
Qt GUI …………….. yes
Qt Widgets …………. yes
Large File …………. yes
QML debugging ………. yes
Use system proxies ….. no

Support enabled for:
Accessibility ………. yes
ALSA ………………. yes
CUPS ………………. yes
Evdev ……………… yes
FontConfig …………. yes
FreeType …………… yes (system library)
Glib ………………. yes
GTK theme ………….. no
HarfBuzz …………… no
Iconv ……………… yes
ICU ……………….. yes
Image formats:
GIF ……………… yes (plugin, using bundled copy)
JPEG …………….. yes (plugin, using system library)
PNG ……………… yes (in QtGui, using system library)
journald …………… no
mtdev ……………… no
Networking:
getaddrinfo ………. yes
getifaddrs ……….. yes
IPv6 ifname ………. yes
OpenSSL ………….. yes (loading libraries at run-time)
NIS ……………….. yes
OpenGL / OpenVG:
EGL ……………… no
OpenGL …………… no
OpenVG …………… no
PCRE ………………. yes (bundled copy)
pkg-config …………. yes
PulseAudio …………. yes
QPA backends:
DirectFB …………. no
EGLFS ……………. no
KMS ……………… no
LinuxFB ………….. yes
XCB ……………… yes (system library)
EGL on X ……….. no
GLX ……………. yes
MIT-SHM ………… yes
Xcb-Xlib ……….. yes
Xcursor ………… yes (loaded at runtime)
Xfixes …………. yes (loaded at runtime)
Xi …………….. no
Xi2 ……………. yes
Xinerama ……….. yes (loaded at runtime)
Xrandr …………. yes (loaded at runtime)
Xrender ………… yes
XKB ……………. no
XShape …………. yes
XSync ………….. yes
XVideo …………. yes
Session management ….. yes
SQL drivers:
DB2 ……………… no
InterBase ………… no
MySQL ……………. yes (plugin)
OCI ……………… no
ODBC …………….. yes (plugin)
PostgreSQL ……….. yes (plugin)
SQLite 2 …………. yes (plugin)
SQLite …………… yes (plugin, using bundled copy)
TDS ……………… yes (plugin)
udev ………………. yes
xkbcommon ………….. yes (bundled copy, XKB config root: /usr/share/X11/xkb)
zlib ………………. yes (system library)
NOTE: libxkbcommon and libxkbcommon-x11 0.4.1 or higher not found on the system, will use
the bundled version from 3rd party directory.
NOTE: Qt is using double for qreal on this system. This is binary incompatible against Qt 5.1.
Configure with '-qreal float' to create a build that is binary compatible with 5.1.
Info: creating super cache file /home/pi/Qt5.3.2/qt-everywhere-opensource-src-5.3.2/.qmake.super

Qt is now configured for building. Just run 'make'.
Once everything is built, you must run 'make install'.
Qt will be installed into /usr/local/qt5

Prior to reconfiguration, make sure you remove any leftovers from
the previous build.
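The exact configure flags are not shown above, but given the install prefix in the output, the invocation would have been along these lines (a sketch, not the verbatim command):

./configure -prefix /usr/local/qt5 -release -opensource -confirm-license
make -j3
sudo make install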

After a good 4-5 hours, make finished and I ended up with the binaries.

Some information regarding the build environment

uname -a

Linux thor 3.18.5-v7+ #225 SMP PREEMPT Fri Jan 30 18:53:55 GMT 2015 armv7l GNU/Linux

cat /proc/version

Linux version 3.18.5-v7+ (dc4@dc4-XPS13-9333) (gcc version 4.8.3 20140303 (prerelease) (crosstool-NG linaro-1.13.1+bzr2650 – Linaro GCC 2014.03) ) #225 SMP PREEMPT Fri Jan 30 18:53:55 GMT 2015

libc6 info

Package: libc6
State: installed
Automatically installed: no
Multi-Arch: same
Version: 2.13-38+rpi2+deb7u7
Priority: required
Section: libs
Maintainer: GNU Libc Maintainers <debian-glibc@lists.debian.org>
Architecture: armhf
Uncompressed Size: 8,914 k
Depends: libc-bin (= 2.13-38+rpi2+deb7u7), libgcc1
Suggests: glibc-doc, debconf | debconf-2.0, locales
Conflicts: prelink (<= 0.0.20090311-1), tzdata (< 2007k-1), tzdata-etch
Breaks: locales (< 2.13), locales-all (< 2.13), nscd (< 2.13)
Provides: glibc-2.13-1
Description: Embedded GNU C Library: Shared libraries
Contains the standard libraries that are used by nearly all programs on the system. This package includes shared versions of the standard C library and the standard math library, as well as many others.
Homepage: http://www.eglibc.org

build directory

/home/pi/Qt5.3.2/qt-everywhere-opensource-src-5.3.2/

install directory

/usr/local/qt5

Raspberry Pi WiFi configuration in Raspbian

I recently bought a Raspberry Pi 2 and installed Raspbian (version 2015-01-31). It comes with WiFi drivers preinstalled for the Realtek module, which I used as my Raspberry Pi WiFi dongle.

So I thought things would go smoothly once I had the /etc/wpa_supplicant/wpa_supplicant.conf file configured. So what I did was set it up with my wireless network configuration, which looks like the config below (I use WPA2-PSK with CCMP authentication; if yours is different you may need to get this part right first. I used the wpa_gui command in Raspbian to generate the following config).

ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1

network={
ssid="Raspberry"
psk="PASSWORDINPLAINTEXT"
proto=RSN
key_mgmt=WPA-PSK
pairwise=CCMP
auth_alg=OPEN
}
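If you prefer not to keep the passphrase in plain text, the wpa_passphrase tool that ships with wpa_supplicant can generate a hashed psk line to paste into the network block instead:

wpa_passphrase "Raspberry" "PASSWORDINPLAINTEXT"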

However, after setting that up and restarting, the interface didn't come up with a DHCP lease, nor did my dongle light up. After an hour of monkeying with the /etc/network/interfaces file I got it sorted out. The reason was that after initialising the wlan0 interface, nothing was actually calling wpa_supplicant to make the WiFi connection. So here is what I modified in the /etc/network/interfaces file; pay special attention to the post-up commands, which get run after bringing the wlan0 interface up.

auto lo

iface lo inet loopback
iface eth0 inet dhcp

allow-hotplug wlan0
iface wlan0 inet dhcp
wpa-roam /etc/wpa_supplicant/wpa_supplicant.conf
post-up /sbin/wpa_supplicant -Dwext -iwlan0 -c/etc/wpa_supplicant/wpa_supplicant.conf -B
post-up /sbin/dhclient wlan0

iface default inet dhcp

The second "post-up" command is there to ask the router for an IP lease.

Be patient once you reboot the rPi with the above configuration; it may take 2-3 minutes after booting for things to kick in and configure themselves.
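If the interface still doesn't come up, two standard commands worth running are the following; the first bounces the interface without a reboot and the second shows whether wlan0 has associated with the access point:

sudo ifdown wlan0 && sudo ifup wlan0
iwconfig wlan0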

Make your Mac (OSX) serve as an NTP time server

Some background to the problem

At the time of writing I work for NewsCorp, and they have a corporate firewall proxy inside the office LAN which blocks almost all ports below 1000 to external services (e.g. GitHub, Bitbucket, NTP servers) unless you specifically request firewall rules to allow them.

I sometimes use my Mac to run Linux applications in virtual machines. When the host is suspended (closing the laptop lid) or shut down while the VM state is saved, the VM cannot resync its system clock on resume: NTP uses port 123 (UDP), so the packets cannot get out or come back in through the corporate proxy/firewall rules. As a result, AWS and other web services started reporting that my requests were in the past or not in sync with the current epoch.

So I needed a way to sync the VM system clock automatically. Since accessing NTP servers outside the LAN was not possible, this is what I did to make my MacBook Pro (the VM host) serve time.

Step 1

Open a terminal and edit the following file; I used the following command:

sudo vim /etc/ntp-restrict.conf

Step 2

There should be only 10-15 lines in the file. Locate the last few lines, which include some .conf files using the "includefile" directive, and just above that add the following line:

restrict 192.168.56.0 mask 255.255.255.0

The IP "192.168.56.0" with mask "255.255.255.0" allows all IPs in the range 192.168.56.0 to 192.168.56.255. This is the range I am using in VirtualBox (my VM management software; VMware and others have similar virtual network adapters), and it is the default range VirtualBox uses when a VM adapter is configured to use NAT or a host-only adapter.

If you need a different range (such as the whole LAN in the house), use that instead.
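For example, to allow a typical home LAN on 192.168.1.x, the line would instead be (adjust to your own subnet):

restrict 192.168.1.0 mask 255.255.255.0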

So after the modification, my ntp-restrict.conf file looks like this:

# Access restrictions documented in ntp.conf(5) and
# http://support.ntp.org/bin/view/Support/AccessRestrictions
# Limit network machines to time queries only

restrict default kod nomodify notrap noquery
restrict -6 default kod nomodify notrap noquery

# localhost is unrestricted
restrict 127.0.0.1
restrict -6 ::1

restrict 192.168.56.0 mask 255.255.255.0

includefile /private/etc/ntp.conf
includefile /private/etc/ntp_opendirectory.conf

Step 3

Restart the ntp daemon in OS X by running the following command (it kills ntpd, which then gets auto-restarted):

sudo killall ntpd

Step 4

That's it! Try running the following command from the VM to sync time. Set up a cron job to resync every 5 minutes or so if needed (see the example after the command).

sudo ntpdate <mac ip addr>
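A hypothetical crontab entry on the VM for a 5-minute resync (run via sudo crontab -e; the ntpdate path may differ between distros):

*/5 * * * * /usr/sbin/ntpdate <mac ip addr>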