Centralised and Scalable log-server implementation on Ubuntu 15.04

The goal of this article is to explain how to implement a very simple log server which can accept application logs coming from one or more servers. For simplicity, I will be implementing the log server and the log forwarder (explained in the Components section) on the same computer.

To keep the article easy to understand and the implementation as simple as possible, I have excluded securing the logs that are transmitted from the application servers to the log server; that is outside the scope of this article. If you implement this exact solution, make sure you consider the security of the log data, as some logs may contain sensitive information about your applications.

Components

  • Log-courier: sends logs from the servers that need to be monitored. In this example the plain-text TCP transport will be used for forwarding logs.
  • Logstash: processes incoming logs from the log forwarder (log-courier in this tutorial) and sends them to Elasticsearch for storage.
  • Elasticsearch: distributed search engine and document store used for storing all the logs processed by Logstash.
  • Kibana: user interface for filtering, sorting, discovering and visualising the logs stored in Elasticsearch.

Things to consider

  • In a real-world scenario the log forwarder runs on the servers you need to monitor and sends logs back to the centralised log server. For simplicity, in this article the log server and the server being monitored are the same machine.
  • Logs can grow rapidly depending on how much log data you send from the monitored servers and how many servers are being monitored. My personal recommendation is to transmit only what you really need to monitor; for a web application this could be access logs, error logs, application-level logs and/or database logs. I would additionally send the syslog, as it indicates OS-level activity.
  • Transmitting logs means more network bandwidth usage on the application server side, so if you are running your application on a cloud-computing platform such as EC2 or DigitalOcean, make sure you take network throttling into account when your centralised log monitoring implementation is deployed.

Installation steps

There is no particular order that you need to follow to get the stack working.

Elasticsearch installation

  • Install Java
sudo add-apt-repository -y ppa:webupd8team/java
sudo apt-get update
sudo aptitude -y install oracle-java8-installer
java -version
  • Install Elasticsearch
wget https://download.elastic.co/elasticsearch/elasticsearch/elasticsearch-1.7.1.deb -O /tmp/elasticsearch-1.7.1.deb
sudo dpkg -i /tmp/elasticsearch-1.7.1.deb
sudo update-rc.d elasticsearch defaults 95 10

Then start the service: sudo service elasticsearch start.
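To check that Elasticsearch came up, query its HTTP API on the default port 9200; it should return a small JSON document including the version number.

curl http://localhost:9200/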

Install Logstash

echo 'deb http://packages.elasticsearch.org/logstash/1.5/debian stable main' | sudo tee /etc/apt/sources.list.d/logstash.list
sudo apt-get update
sudo apt-get install logstash

Install log-courier

Installing log-courier can be a bit tricky, as we also have to install the Logstash plugins that support the data communication protocol implemented in log-courier. More information can be found here: https://github.com/driskell/log-courier/blob/master/docs/LogstashIntegration.md

sudo add-apt-repository ppa:devel-k/log-courier
sudo apt-get update
sudo apt-get install log-courier

You now need to install the log-courier plugins for Logstash. When I installed Logstash using the instructions above it was placed in the /opt/logstash folder, so the following commands assume that your Logstash also lives in /opt/; if not, leave a comment below.

cd /opt/logstash/
sudo bin/plugin install logstash-input-courier
sudo bin/plugin install logstash-output-courier
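To confirm that the plugins were installed, you can list them (again assuming the /opt/logstash path):

cd /opt/logstash/
bin/plugin list | grep courier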

Install Kibana

wget https://download.elastic.co/kibana/kibana/kibana-4.1.1-linux-x64.tar.gz -O /tmp/kibana-4.1.1-linux-x64.tar.gz
sudo tar xf /tmp/kibana-4.1.1-linux-x64.tar.gz -C /opt/

Install the init script to manage kibana service, and enable the service.

cd /etc/init.d && sudo wget https://gist.githubusercontent.com/thisismitch/8b15ac909aed214ad04a/raw/bce61d85643c2dcdfbc2728c55a41dab444dca20/kibana4
sudo chmod +x /etc/init.d/kibana4
sudo update-rc.d kibana4 defaults 96 9
sudo service kibana4 start
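A quick way to check that Kibana is responding is to request its front page on port 5601; the command below should print 200 once the service is up.

curl -s -o /dev/null -w "%{http_code}\n" http://localhost:5601/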

Configuration steps

Logstash configuration

Logstash needs to be configured to accept the log data transmitted by log-courier, process it and output it to Elasticsearch. The following simple config adds a courier input on TCP port 5043, a grok filter for syslog events and an Elasticsearch output.

The Logstash configuration sits on the log server, which stores and processes all the incoming logs from the other servers running the log-courier agent.

sudo vim /etc/logstash/conf.d/01-simple.conf

And paste in the following config

input {
  courier {
    port => 5043
    type => "logs"
    transport => "tcp"
  }
}

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

output {
    elasticsearch { host => localhost }
    stdout { codec => rubydebug }
}
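Before restarting Logstash it is worth validating the config file. Logstash 1.5 ships a --configtest flag for this (the path assumes the /opt/logstash install used above):

sudo /opt/logstash/bin/logstash --configtest -f /etc/logstash/conf.d/01-simple.conf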

Log-Courier configuration

In a real world scenario, this log-courier configuration would sit in the servers that need to be monitored. Following is an example config for this tutorial which will ship syslog activity.

sudo vim /etc/log-courier/log-courier.conf
{
    "network": {
        "servers": [ "localhost:5043" ],
        "transport": "tcp"
    },
    "files": [
        {
            "paths": [ "/var/log/syslog" ],
            "fields": { "type": "syslog" }
        }
    ]
}
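Since this file is plain JSON, a quick syntax check before starting the service can save a restart loop; any JSON validator will do, for example Python's json.tool (Python is installed by default on Ubuntu):

python -m json.tool /etc/log-courier/log-courier.conf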

Enable services

Now we have Elasticsearch, Logstash, Kibana and log-courier configured to process, store and view the logs. To use the setup we need to start all the services using the Ubuntu service command.

sudo service elasticsearch start
sudo service logstash start
sudo service kibana4 start
sudo service log-courier start

The way I normally test whether services are listening on their particular TCP ports is with the lsof command. This will show whether Elasticsearch, Logstash and Kibana are listening on their respective ports.

sudo lsof -i:9200,5043,5601

An example output may look like

COMMAND     PID          USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
log-couri   970          root    9u  IPv4 373056      0t0  TCP localhost:38366->localhost:5043 (ESTABLISHED)
java      15513 elasticsearch  134u  IPv6 372333      0t0  TCP localhost:9200 (LISTEN)
java      15513 elasticsearch  166u  IPv6 373878      0t0  TCP localhost:9200->localhost:41345 (ESTABLISHED)
java      15612      logstash   16u  IPv6 372339      0t0  TCP *:5043 (LISTEN)
java      15612      logstash   18u  IPv6 373057      0t0  TCP localhost:5043->localhost:38366 (ESTABLISHED)
node      15723          root   10u  IPv4 372439      0t0  TCP localhost:41345->localhost:9200 (ESTABLISHED)
node      15723          root   11u  IPv4 373061      0t0  TCP *:5601 (LISTEN)
node      15723          root   13u  IPv4 389495      0t0  TCP 10.11.0.4:5601->10.11.0.2:64084 (ESTABLISHED)

Testing the setup

One way to test our configuration is to log something to syslog, which can be done using the logger command. Once the Kibana indices are set up (refer to the Kibana documentation) the incoming log messages will appear there in real time.

Example command to log a test message in syslog

logger test-log-message

Now go to the Kibana web interface to discover the message you pushed to syslog. In this setup Kibana is listening on port 5601 on the log server (ex: http://10.11.0.4:5601/). That's it!
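If the message doesn't show up, a quick way to confirm that Logstash is actually writing into Elasticsearch is to list the indices via the _cat API (available in Elasticsearch 1.7); you should see one or more logstash-YYYY.MM.DD indices.

curl 'http://localhost:9200/_cat/indices?v'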

Conclusion

Now you have set up a very simple centralised logging system for your application with Kibana, Logstash and Elasticsearch. Log-courier is the agent sending log messages from the application servers to the centralised Logstash server. This setup can be scaled out with a cluster of Elasticsearch and Logstash instances; a good starting point can be found at https://www.elastic.co/guide/en/elasticsearch/guide/current/scale.html

This setup can accept any type of log that comes from log-courier agents, and the logs can be discovered and filtered using Kibana. Logstash can be configured to support additional log types using grok patterns; a handy pattern debugger can be found at https://grokdebug.herokuapp.com/

Export NFS shares from OS X 10.10.4 to Ubuntu / Debian and mount using Autofs

I will discuss the few steps involved in getting an NFS directory shared from OS X and mounting it on an Ubuntu VM.

Step 1

Read https://support.apple.com/en-us/HT202243

The gist of it is that OS X knows how to handle (export) NFS shares as long as you write the instructions correctly in the /etc/exports file. So assume you need to export the directory /Volumes/Disk 2/Projects.

Edit the /etc/exports file in your favourite editor as root (ex: sudo vim /etc/exports), paste in the line below and change it to fit your requirements. The parameters are explained below.

/Volumes/Disk\ 2/Projects -network 10.11.1.0 -mask 255.255.255.0 -mapall=purinda

/Volumes/Disk\ 2/Projects
is the directory we export as an NFS share.
-network 10.11.1.0
is the network subnet that will be allowed to access the share.
-mask 255.255.255.0
is the subnet mask, which means all IPs in the 10.11.1.0/24 range will be allowed.
-mapall=purinda
indicates that all read/write operations will be performed as this user.
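Before restarting the daemon you can ask nfsd to validate the exports file; on OS X it has a built-in check that reports any problems with /etc/exports.

sudo nfsd checkexports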

Step 2

Run the following command in your OS X terminal, which will restart the nfsd daemon and pick up the export above (this may not be required).

sudo nfsd restart

Step 3

SSH into your Ubuntu server and install autofs for mounting the above export. Remember that the network defined in the export above must include the Ubuntu server's IP address, otherwise it cannot access the share.

Once SSH'ed in, run the following command to validate the export.

showmount -e <ip of the OSX>

The above command should show the exported directory; if not, try running the same command on the OS X machine to check that NFS has actually started sharing the directory.

Install autofs

sudo apt-get install autofs

Step 4

If all of the above steps have worked for you, it is now time to set up a config file for autofs to mount the above NFS share on demand.

To do this we need an autofs map file in the /etc/ folder; you can give this file an easy-to-recognise name. In my case I called it auto.projects, because I shared the projects I work on with the Ubuntu VM.

I ran the below command (using Vim) to create the autofs config file

sudo vim /etc/auto.projects

Then paste in the following line, change it according to your setup and save the file.

/home/purinda/Projects -fstype=nfs,rw,nosuid,proto=tcp,resvport 10.11.1.2:/Volumes/Disk\ 2/Projects/

/home/purinda/Projects
This is the directory that the NFS export will be mounted on.
-fstype=nfs,rw,nosuid,proto=tcp,resvport
These are the options passed to the NFS mount command.
10.11.1.2:/Volumes/Disk\ 2/Projects/
This is the OS X host IP and the absolute path to the NFS share on the OS X machine.
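If autofs misbehaves later, it can help to rule out NFS itself with a one-off manual mount first. This is just a sketch using the same example IP, path and options as above; /mnt/projects-test is a temporary mount point for the test.

sudo mkdir -p /mnt/projects-test
sudo mount -t nfs -o rw,nosuid,proto=tcp,resvport "10.11.1.2:/Volumes/Disk 2/Projects" /mnt/projects-test
ls /mnt/projects-test
sudo umount /mnt/projects-test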

Step 5

You need to let autofs know that you have added your NFS share configuration. To do that, edit the master autofs config.

sudo vim /etc/auto.master

At the end of the /etc/auto.master file, paste the following line referencing the map file we created in Step 4

/- auto.projects

Then restart autofs on the Ubuntu VM using the command below

sudo service autofs restart

And… that's it. You should have the NFS share mounted in your Ubuntu VM under the specified path.
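Because autofs mounts on demand, simply listing the directory should trigger the mount, and mount will confirm it.

ls /home/purinda/Projects
mount | grep Projects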

Symfony2 assetic with Ruby 2.1 (under Ubuntu 15.04)

I recently updated my dev machine to Ubuntu 15.04 and assetic:dump on my Symfony2 apps stopped working, showing the following error message.

[Assetic\Exception\FilterException]
An error occurred while running:
'/usr/bin/ruby' '/usr/local/bin/sass' '--load-path' '/home/purinda/Projects/phoenix/apps/frontend/src/Nlm/XYZBundle/Resources/public/css' '--scss' '--cache-location' '/home/purinda/Projects/phoenix/apps/frontend/app/cache/dev' '/tmp/assetic_sassbUrFhU'
Error Output:
/usr/local/bin/sass:23:in `load': cannot load such file -- /usr/share/rubygems-integration/all/gems/sass-3.2.19/bin/sass (LoadError)
from /usr/local/bin/sass:23:in `<main>'

Input:

….

It appears that Ubuntu 15.04 installs Ruby 2.1 but doesn't install a more recent version of Sass, so in order to get an up-to-date Sass compiler you need to install the Compass pre-release version.

gem install compass --pre

Try running assetic:dump again.
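For reference, the dump command is the standard Symfony2 console task, run from the project root:

php app/console assetic:dump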

Compiled binaries of Qt 5.3 for the Raspberry Pi (ARMv6) and Pi 2 (ARMv7-A)

At the time of writing a Qt 5.4 release existed, but I found it has a number of bugs that affect the multimedia capabilities of the Raspberry Pi.

Download the following binary release and install it using the instructions below. The instructions assume you still have the user pi on your Raspberry Pi and the make command installed (sudo apt-get install make). For the wget command, use the address found in the "Download" link below the instructions.

ssh pi@<ip-of-the-rpi>

mkdir -p Qt5.3.2/qt-everywhere-opensource-src-5.3.2

cd Qt5.3.2

wget https://s3-ap-southeast-2.amazonaws.com/purinda.com/raspberrypi/qt-everywhere-opensource-src-5.3.2_compiled_armv7l.tar.gz

tar xf qt-everywhere-opensource-src-5.3.2_compiled_armv7l.tar.gz -C qt-everywhere-opensource-src-5.3.2

cd qt-everywhere-opensource-src-5.3.2

sudo make install

After running "make install", the binaries will be copied to /usr/local/qt5, so you can remove the /home/pi/Qt5.3.2 directory.

The Raspberry Pi still doesn't know how to use the Qt 5.3.2 libraries, as we haven't told it where the library files are. Edit your .bashrc and add the following lines, which point the shell at the Qt libraries and binaries.

export LD_LIBRARY_PATH=/usr/local/qt5/lib/
export PATH=/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/qt5/bin
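After reloading your shell, qmake should report Qt 5.3.2 under the /usr/local/qt5 prefix:

source ~/.bashrc
qmake -v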

Precompiled binaries: Download

The problem

If, like me, you are running Raspbian on your Raspberry Pi or Raspberry Pi 2, you will find that the following package repository, which Raspbian points to, doesn't contain the Qt libraries.

deb http://mirrordirector.raspbian.org/raspbian/ wheezy main contrib non-free rpi

I then tried searching for a solution, and almost all of them require you to cross-compile the Qt source. I am not a big fan of that, as you also need to cross-compile the dependencies of the Qt libraries to get Qt to compile without choking on some incorrectly cross-compiled dependency.

So I decided to compile the Qt source package directly on the shiny new Raspberry Pi 2, which has four cores and 1GB of RAM, so compilation would be reasonably fast, and ran ./configure and make -j3 (-j3 tells make to use 3 cores) with the following settings enabled.

Build options:
Configuration ………. accessibility accessibility-atspi-bridge alsa audio-backend c++11 clock-gettime clock-monotonic concurrent cups dbus evdev eventfd fontconfig full-config getaddrinfo getifaddrs glib iconv icu inotify ipv6ifname large-config largefile libudev linuxfb medium-config minimal-config mremap nis no-harfbuzz openssl pcre png posix_fallocate precompile_header pulseaudio qpa qpa reduce_exports release rpath shared small-config system-freetype system-jpeg system-png system-zlib xcb xcb-glx xcb-plugin xcb-render xcb-xlib xinput2 xkbcommon-qt xlib xrender
Build parts ………… libs
Mode ………………. release
Using C++11 ………… yes
Using PCH ………….. yes
Target compiler supports:
iWMMXt/Neon ………. no/auto

Qt modules and options:
Qt D-Bus …………… yes (loading dbus-1 at runtime)
Qt Concurrent ………. yes
Qt GUI …………….. yes
Qt Widgets …………. yes
Large File …………. yes
QML debugging ………. yes
Use system proxies ….. no

Support enabled for:
Accessibility ………. yes
ALSA ………………. yes
CUPS ………………. yes
Evdev ……………… yes
FontConfig …………. yes
FreeType …………… yes (system library)
Glib ………………. yes
GTK theme ………….. no
HarfBuzz …………… no
Iconv ……………… yes
ICU ……………….. yes
Image formats:
GIF ……………… yes (plugin, using bundled copy)
JPEG …………….. yes (plugin, using system library)
PNG ……………… yes (in QtGui, using system library)
journald …………… no
mtdev ……………… no
Networking:
getaddrinfo ………. yes
getifaddrs ……….. yes
IPv6 ifname ………. yes
OpenSSL ………….. yes (loading libraries at run-time)
NIS ……………….. yes
OpenGL / OpenVG:
EGL ……………… no
OpenGL …………… no
OpenVG …………… no
PCRE ………………. yes (bundled copy)
pkg-config …………. yes
PulseAudio …………. yes
QPA backends:
DirectFB …………. no
EGLFS ……………. no
KMS ……………… no
LinuxFB ………….. yes
XCB ……………… yes (system library)
EGL on X ……….. no
GLX ……………. yes
MIT-SHM ………… yes
Xcb-Xlib ……….. yes
Xcursor ………… yes (loaded at runtime)
Xfixes …………. yes (loaded at runtime)
Xi …………….. no
Xi2 ……………. yes
Xinerama ……….. yes (loaded at runtime)
Xrandr …………. yes (loaded at runtime)
Xrender ………… yes
XKB ……………. no
XShape …………. yes
XSync ………….. yes
XVideo …………. yes
Session management ….. yes
SQL drivers:
DB2 ……………… no
InterBase ………… no
MySQL ……………. yes (plugin)
OCI ……………… no
ODBC …………….. yes (plugin)
PostgreSQL ……….. yes (plugin)
SQLite 2 …………. yes (plugin)
SQLite …………… yes (plugin, using bundled copy)
TDS ……………… yes (plugin)
udev ………………. yes
xkbcommon ………….. yes (bundled copy, XKB config root: /usr/share/X11/xkb)
zlib ………………. yes (system library)
NOTE: libxkbcommon and libxkbcommon-x11 0.4.1 or higher not found on the system, will use
the bundled version from 3rd party directory.
NOTE: Qt is using double for qreal on this system. This is binary incompatible against Qt 5.1.
Configure with ‘-qreal float’ to create a build that is binary compatible with 5.1.
Info: creating super cache file /home/pi/Qt5.3.2/qt-everywhere-opensource-src-5.3.2/.qmake.super

Qt is now configured for building. Just run ‘make’.
Once everything is built, you must run ‘make install’.
Qt will be installed into /usr/local/qt5

Prior to reconfiguration, make sure you remove any leftovers from
the previous build.

After a good 4-5 hours the build finished and produced the binaries.
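For reference, the rough sequence of commands was as follows. The exact configure flags I used aren't reproduced here, so treat this as a sketch; -prefix matches the /usr/local/qt5 install directory listed below.

cd /home/pi/Qt5.3.2/qt-everywhere-opensource-src-5.3.2
./configure -opensource -confirm-license -release -prefix /usr/local/qt5
make -j3
sudo make install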

Some information regarding the build environment

uname -a

Linux thor 3.18.5-v7+ #225 SMP PREEMPT Fri Jan 30 18:53:55 GMT 2015 armv7l GNU/Linux

cat /proc/version

Linux version 3.18.5-v7+ (dc4@dc4-XPS13-9333) (gcc version 4.8.3 20140303 (prerelease) (crosstool-NG linaro-1.13.1+bzr2650 – Linaro GCC 2014.03) ) #225 SMP PREEMPT Fri Jan 30 18:53:55 GMT 2015

libc6 info

Package: libc6
State: installed
Automatically installed: no
Multi-Arch: same
Version: 2.13-38+rpi2+deb7u7
Priority: required
Section: libs
Maintainer: GNU Libc Maintainers <debian-glibc@lists.debian.org>
Architecture: armhf
Uncompressed Size: 8,914 k
Depends: libc-bin (= 2.13-38+rpi2+deb7u7), libgcc1
Suggests: glibc-doc, debconf | debconf-2.0, locales
Conflicts: prelink (<= 0.0.20090311-1), tzdata (< 2007k-1), tzdata-etch
Breaks: locales (< 2.13), locales-all (< 2.13), nscd (< 2.13)
Provides: glibc-2.13-1
Description: Embedded GNU C Library: Shared libraries
Contains the standard libraries that are used by nearly all programs on the system. This package includes shared versions of the standard C library and the standard math library, as well as many others.
Homepage: http://www.eglibc.org

build directory

/home/pi/Qt5.3.2/qt-everywhere-opensource-src-5.3.2/

install directory

/usr/local/qt5

Raspberry Pi WiFi configuration in Raspbian

I recently bought a Raspberry Pi 2 and installed Raspbian (version 2015-01-31). It comes with WiFi drivers preinstalled for the Realtek module, which I used as my Raspberry Pi WiFi dongle.

So I thought things would go smoothly once I had the /etc/wpa_supplicant/wpa_supplicant.conf file configured. What I did was set it up with my wireless network configuration, which looks like the below (I use WPA2-PSK with CCMP authentication; if yours is different you may need to get this configuration right first. I used the wpa_gui tool in Raspbian to generate the following config).

ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1

network={
ssid="Raspberry"
psk="PASSWORDINPLAINTEXT"
proto=RSN
key_mgmt=WPA-PSK
pairwise=CCMP
auth_alg=OPEN
}

However, after setting that up and rebooting, the network didn't come up with a DHCP lease, nor did my dongle light up. After an hour of monkeying with the /etc/network/interfaces file I got it sorted out: after initialising the wlan0 interface, wpa_supplicant was never being called to make the WiFi connection. Here is what I changed in the /etc/network/interfaces file; pay special attention to the post-up commands, which get run after the wlan0 interface is brought up.

auto lo

iface lo inet loopback
iface eth0 inet dhcp

allow-hotplug wlan0
iface wlan0 inet dhcp
wpa-roam /etc/wpa_supplicant/wpa_supplicant.conf
post-up /sbin/wpa_supplicant -Dwext -iwlan0 -c/etc/wpa_supplicant/wpa_supplicant.conf -B
post-up /sbin/dhclient wlan0

iface default inet dhcp

The second "post-up" command is there to ask the router for an IP lease.

Be patient once you reboot the rPi with the above configuration; it may take 2-3 minutes after booting the system for everything to kick in and configure itself.
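Once the interface is up you can confirm the wireless association and the DHCP lease with the standard tools:

iwconfig wlan0
ifconfig wlan0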

I/O Ports Programming (Parallel port) Reading / Writing using VB.NET

This is a re-blog of my first ever article published online, on Codeproject.com, which I wrote on 18 Sep 2007. Lately I've not kept myself up to date with the Windows and .NET platforms, so any feedback would be great regarding how well (or badly) this project works with more recent platform changes.

Introduction

The parallel port, or printer port, is one of the easiest pieces of hardware available to users for communicating with external circuitry or homemade hardware devices. Most people think that interfacing with an external circuit is hard and never come up with a solution. However, in this article I am trying to convince you that talking to an external device is very easy, and there are only a few tricks to remember!

I will also be discussing a sample surveillance application that can be developed with few requirements. It is very simple and anyone who reads this article can use it at home. I will then be giving an introduction to Serial Port Programming, which will be my next article on The Code Project. In that section, I will illustrate some basics of Serial Data Transmission and how to use a more advanced serial port instead of a parallel port.

The application that I have provided is developed in Visual Basic .NET and it uses inpout32.dll for direct port access. In Windows XP and the NT-based versions, direct port access is prohibited, so we need a driver in order to communicate with the ports directly. Therefore we are using a freeware driver, inpout32.dll, which can be found here.

WARNING

Parallel Ports that are built into your machine can be damaged very easily! So, use all of my sample code at your own risk. All of these sample programs and circuits are thoroughly tested and they will never lead you to damage your hardware or your computer. I have never experienced such a bad incident! However, use these programs with great care…

Communicating with the Parallel Port

The parallel port usually comes as a 25-pin female port and is commonly used to connect printers to a computer. Many geeks also use it to connect their own devices to their PCs. There are a few more things to remember when using a PC's parallel port: it can only supply about 2.5mA at ~2.5 volts, so it is better to use opto-couplers or a ULN2803 when interfacing with an external device.

If you're just thinking of lighting up an LED through the printer port, connect a 330 ohm resistor in series. There is some more useful information to consider when programming a printer port: the usual printer port is a 25-pin female D-shaped port and it has several registers. A register is a place where you can write values and retrieve them. There are different types of registers, called the Data Register, Control Register and Status Register.

Data Register

This is the register that allows the user to write values out through the port. In simple words, these pins output whatever value you write to the data register, so you can change the voltage on specific pins. These are called output pins. There are altogether 8 output pins, ranging from D0 to D7.

Status Register (Pins)

These are called input pins, or the status register, and they hold the values that the outside world presents to the parallel port. This register acts like a reader and has 5 input pins, S3 to S7.

Control Register (Pins)

This register can be used both ways: it enables a user to write values to the outside world as well as read values from it. However, you need to remember that most of the control pins work in an inverted manner; you can see them with a dash sign on top of the pin name. The pin range is C0 to C3. Ground pins are used as neutral; these pins act like the (-) terminal of a battery. If you're connecting a device to the parallel port in order to read or write, you have to use one or more ground pins plus a read/write pin. For example, if you're trying to light up an LED, you have to connect the (-) of the LED to a ground pin and the (+) of the LED to an output pin. For reading purposes, use the same mechanism.

  • 8 output pins [D0 to D7]
  • 5 status pins [S4 to S7 and S3]
  • 4 control pins [C0 to C3]
  • 8 ground pins [18 to 25]

Parallel Port Illustration

Addressing the Parallel Port and Registers

Port addressing controls a great deal in port programming, as it is the door that enables programs to connect to the external circuit or device. Therefore I would like to explain all the available port addresses.

In normal PCs, a parallel port address can be one of the addresses given below, but depending on the BIOS settings and some other issues, the parallel port address can vary. However, it always lies in one of these address ranges.

  • LPT1: 03BC (hex) / 956 (decimal)
  • LPT2: 0378 (hex) / 888 (decimal)
  • LPT3: 0278 (hex) / 632 (decimal)

As I mentioned above, those are the commonly used address ranges for parallel ports. Now you might be wondering how we communicate with each of them and how they work. It’s very simple! Each of those above-mentioned address ranges has its own address for Data Register, Control Register and Status Register. So, if you want to access a Data register of port 888, then the Data register is also 888. However, if you want to access a status register, then add +1 to the 888 = 889 and this is the status register. To access a control register, what you have to do is add another +1 so that it is 890 and that is the Control register of the 888-based parallel port. If your parallel port address is different, please do this accordingly to find its corresponding registers.
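For the common 888 (0x378) base address, that works out to:

Data register:    888 (0x378)
Status register:  889 (0x379)
Control register: 890 (0x37A)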

There is another simple way to check the parallel port address of your computer: you can find the parallel port I/O addresses through Device Manager. First, open Device Manager (Start -> Settings -> Control Panel -> System -> Hardware -> Device Manager). Then select the parallel port you are interested in from the Ports (COM & LPT) section and choose Properties from the right-click menu. Going through the steps you will see a dialog similar to the screenshot below. Locate the first item in the list box; it carries the value 0378 - 037F, so this is the address range of the printer port. The data register address is 0378 in hex, or 888 in decimal.

Windows Printer Port Device Config Properties

Small Description of Inpout32.dll

Here in this article we are using a third-party tool called inpout32.dll to connect to the ports available in your PC. We use it because in Windows NT-based versions all low-level hardware access is prohibited and a separate I/O driver is needed. When we read the status of the port, we have to rely on that third-party driver.

inpout32.dll includes the system driver and the small tools needed to activate and run it, so we don't have to think much about how this happens; it will do everything for us…

Since we are using a DLL file to communicate with the hardware, we have to know the links that this particular file provides for the user, or its entry points. Here I will describe the entry points one by one.

DLL Entry used to read

Public Declare Function Inp Lib "inpout32.dll" Alias "Inp32" _
     (ByVal  PortAddress As Integer)  As Integer

DLL Entry used to output values

Public Declare Sub Out Lib "inpout32.dll" Alias "Out32" _
    (ByVal  PortAddress As Integer,  ByVal Value As Integer)

The Inp32 function can be used to read values from any address that is given through its PortAddress parameter. The Out32 function is used to write values to data ports. A longer description of both functions can be found below.

Reading a Pin (Status Register) or External Value

Inp32 is a general-purpose function defined inside inpout32.dll to communicate with various hardware addresses in your computer. The function is exported by the DLL, so it can be called from any programming language that supports importing functions from DLLs.

Here, for this sample application, I have used Visual Basic .NET because it can be used by anyone who knows a little about the .NET Framework. What you have to do to read the status of a parallel port is very simple. I will explain everything in steps, but keep in mind that in the sample code I assume the address of the status register is 889, or 0x379 in hex.

  1. Open Visual Studio .NET (I made this in the .NET 2005 version).
  2. Make a new Visual Basic .NET Windows application.
  3. Go to the Project menu and Click on “Add new Item.” Then select the Module item in the listbox.
  4. Type any suitable name for the new module that you're going to create, for example, porting.vb.
  5. In the Module file, there should be some text like this:
    Module porting
        Public Declare Function Inp Lib "inpout32.dll" _
        Alias "Inp32" (ByVal PortAddress As Integer) As Integer
    End Module
  6. Go to your form and add a new text box. Then add a Command button and, in the click event of the command button, paste this code:
    'Declare variables to store read information and Address values
    Dim intAddress As Integer, intReadVal As Integer
    
    'Get the address and assign
    intAddress = 889
    
    'Read the corresponding value and store
    intReadVal = porting.Inp(intAddress)
    
    txtStatus.Text = intReadVal.ToString()

    This will work well only if the name of the text box you just made is txtStatus.

  7. Run your application and click on the button so that the text should be 120 or some similar value. Then that is the status of your parallel port.
  8. If you want to check whether this application runs properly, you just have to take a small wire (screen wire or circuit wire is ok) and short a Status Pin and a Ground Pin. For example, Status pins are 10,11,12,13,15 and if you short the 10th pin and 25th (Ground) pin, then the value in the textbox will be 104. If you short some other pins, like the 11th and 25th, then the value of the text box will be 88 or a similar value.

Now you can clearly see that all the power is in your hands. If you’re a creative person with a sound knowledge of electronics, then the possibilities are endless…

Note:

Before you run and compile this application, download inpout32.dll from this page and copy it to the Bin folder of your project directory (the place where the executable file is created), so that the DLL sits in the same directory as the executable. Alternatively, you can copy the DLL file to your Windows System directory, and then you don't need to keep it in your application path. You only need to do one or the other.

Writing to a Pin (Data Register)

When writing data to the Data Register, you have to follow the same rules. There is no difference in the basics! As you learned above, do the same thing here with a few differences. I will explain what they are, but keep in mind that in the sample code I assume the address of the data register is 888, or 0x378 in hex.

  1. Go to the Project menu and Click on “Add New Item.” Then select the Module item in the list box, the same as above.
  2. Type any suitable name for the new module that you're going to create, for example, porting.vb.
  3. In the Module file, there should be some text like this:
    Module porting
       Public Declare Sub Out Lib "inpout32.dll" _
       Alias "Out32" (ByVal PortAddress As Integer, ByVal Value As Integer)
    End Module
  4. Go to your form and add a Command button. In the Click event of the Command button, paste this code:
    Dim intVal2Write As Integer

    'Read the value from the text box and convert it to an integer
    intVal2Write = Convert.ToInt32(txtWriteVal.Text)

    'porting is the name of the module
    porting.Out(888, intVal2Write)

    This will work properly only if the name of the text box you just made is txtWriteVal.

  5. Run your application and click on the Button so that the parallel port will output voltages in pins according to the value you entered in the text box.

How to Light up LEDs

Now to test the value that you output. Assume that you wrote the value 2 to the data register and you want to know which pin outputs +5V. It's really simple to check. The data register starts at pin number 2 and ends at pin number 9, so there are 8 pins for output; in other words, it's an 8-bit register. So when you write 2 to the data register, +5V will appear on the 3rd pin. Take a scientific calculator and convert decimal 2 to binary: 2 (DEC) = 1 0 (BIN), so 1 0 means 0 0 0 0 0 0 1 0 is the state of the data register.

Example 2

If you write value 9 to a data register, then the binary of 9 is 0 0 0 0 1 0 0 1. So, +5V can be taken from the 2nd pin and the 5th pin.

Binary Value 0 0 0 0 1 0 0 1
Data Register D7 D6 D5 D4 D3 D2 D1 D0
Actual Pin number 9 8 7 6 5 4 3 2

Example 3

Assume you’re writing 127 to a data register. Then the binary of 127 is 0 1 1 1 1 1 1 1 and therefore all the pins will be +5V, excluding the 9th pin.

Binary Value 0 1 1 1 1 1 1 1
Data Register D7 D6 D5 D4 D3 D2 D1 D0
Actual Pin number 9 8 7 6 5 4 3 2

When you want to light up an LED, there is a special way to connect them to a data register. So, you need some tools to connect. As a tutorial, I will explain how to connect only 1 LED.

How the LED should be connected to Printer Port

Here you have to connect the (-) of the LED to any pin from the 18th to the 25th; all of them are ground pins, so there is no difference. The (+) of the LED should be connected to an output pin in the data register. When you write a value which enables that particular data register pin, the LED will light up.

Note!

Never connect devices that draw more power than LEDs, as this could burn your parallel port or the entire motherboard. So, use them at your own risk…! I recommend the method described below if you're going for more advanced setups. If you want to switch electric appliances on and off, you will obviously have to use relays or high-voltage triacs; I recommend using relays and connecting them through the method I have illustrated below.

More Advanced Way to Connect Devices to Parallel Port
(Compact 8 Channel Output Driver)

If you're not satisfied with lighting up LEDs and you're hungry to go further, use this method to connect your more advanced or complex devices to the parallel port. I recommend using the ULN2803 IC: it is an 8-channel Darlington driver and it makes it possible to power devices up to 500mA at +9V.

In this diagram you can see how to connect a set of LEDs to this IC. Remember that you can replace these LEDs with devices such as relays or solenoids that draw up to 500mA, and they will run smoothly.

How the LED should be connected to Printer Port

How Do the Surveillance System and Sample Application Run?

The surveillance system that I have written is very small and is only intended for home or personal use. However, you can make it more advanced if you know how more complex security systems work. The application has a timer and continuously checks for status register changes; if something changes, the application's status is updated and a sound is played accordingly.

This can be used to check whether your room's door is closed. Place a push-to-on switch on the edge of the door, positioned so that it activates when the door is opened. If you're not using a push-to-on switch, just place two thin strips of metal that can be bent on the door, arranged so that they short-circuit when the door is closed. One strip should be soldered to a screen wire (circuit wire) connected to a ground pin, and the other strip should be connected to a status pin of the parallel port. Status pins are 10, 11, 12, 13 and 15.

Sample application source code: Download

Make your Mac (OSX) serve as a NTP time server

Some background to the problem

At the time of writing I work for NewsCorp, and they have a corporate firewall/proxy inside the office LAN which blocks almost all ports below 1000 to external services (ex: GitHub, Bitbucket, NTP servers, etc.) unless you specifically request firewall rules to allow them.

I sometimes use my Mac to run Linux applications in virtual machines. When the host is suspended (closing the laptop lid) or shut down while the VM state is saved, the VM cannot resync its system clock on resume: NTP uses port 123 (UDP), so the packets cannot get out or come back in through the corporate proxy/firewall rules. As a result, AWS and other web services started reporting that my requests were in the past or out of sync with the current epoch.

So I needed a way to sync the VM system clock automatically. Since accessing NTP servers outside the LAN was not possible, this is what I did to make my MacBook Pro (the VM host) serve time.

Step 1

Open a terminal and edit the following file; I used the command below

sudo vim /etc/ntp-restrict.conf

Step 2

There should be only 10-15 lines in the file. Locate the last few lines, which include some .conf files using the "includefile" directive, and just above them add the following line

restrict 192.168.56.0 mask 255.255.255.0

The IP "192.168.56.0" with mask "255.255.255.0" allows all IPs in the range 192.168.56.0 to 192.168.56.255; this is the range I am using in VirtualBox (my VM management software; VMware and others have similar virtual network adapters). That's the default range VirtualBox uses when a VM adapter is configured as NAT or a host-only adapter.

If you need a different range (such as the whole LAN in the house, etc.), use that instead.

After the modification my ntp-restrict.conf file looks like this:

# Access restrictions documented in ntp.conf(5) and
# http://support.ntp.org/bin/view/Support/AccessRestrictions
# Limit network machines to time queries only

restrict default kod nomodify notrap noquery
restrict -6 default kod nomodify notrap noquery

# localhost is unrestricted
restrict 127.0.0.1
restrict -6 ::1

restrict 192.168.56.0 mask 255.255.255.0

includefile /private/etc/ntp.conf
includefile /private/etc/ntp_opendirectory.conf

Step 3

Restart the ntp daemon on OS X by running the following command (it kills ntpd, which then gets auto-restarted)

sudo killall ntpd

Step 4

That's it! Try running the following command from the VM to sync time. Set up a cron job to resync every 5 minutes or so if needed.

sudo ntpdate <mac ip addr>
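For the cron-based resync mentioned above, a root crontab entry (added via sudo crontab -e on the VM) along these lines works. The address 192.168.56.1 is an assumption here, being the usual VirtualBox host address on the host-only network; substitute your Mac's IP.

*/5 * * * * /usr/sbin/ntpdate 192.168.56.1 >/dev/null 2>&1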