Saturday, May 02, 2015

Fix broken VMware Player on Kubuntu 15.04

  • Go to the VMware source folder /usr/lib/vmware/modules/source
  • Copy vmnet.tar to some other location and keep a backup of the original tar file
  • Untar the vmnet.tar file
  • Make changes according to 
  • Re-tar and copy the modified vmnet.tar file back into the vmware/modules/source folder
  • Rebuild the modules: /usr/lib/vmware/modules/source$ sudo vmware-modconfig --console --install-all
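The steps above, as a single shell session (a sketch: the backup filename is my choice, and the actual edit inside vmnet-only/ depends on which kernel you are building against):

```shell
cd /usr/lib/vmware/modules/source
sudo cp vmnet.tar vmnet.tar.orig          # keep a backup of the original
sudo tar xf vmnet.tar                     # unpacks into vmnet-only/
# ... edit the sources under vmnet-only/ as required ...
sudo tar cf vmnet.tar vmnet-only          # repack the modified sources
sudo vmware-modconfig --console --install-all
```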

Sunday, April 19, 2015

Mac OS X like screen capture with ksnapshot in KDE

ksnapshot is a nifty tool for capturing windows and screens. However, I don't want to click the menu, bring up ksnapshot and then select options in a menu to do a screen grab. I really liked the way it is done on Mac OS X: press Command+Shift+4 and you have a rectangular screen grab.

So I tuned my ksnapshot to do the same on the KDE desktop. Command+4 should start a rectangular screen grab and Command+P should take a full screen shot.

ksnapshot can be called with --region or --fullscreen,
so all we need to do is map the above key combos to ksnapshot [ --region | --fullscreen ].
Here is how to do it:

KDE desktop | Settings | Shortcuts and Gestures
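Under Custom Shortcuts you create one new action per key combo (on KDE 4 this is a Global Shortcut of the Command/URL kind, run via khotkeys); the commands the shortcuts trigger are just the two invocations:

```shell
ksnapshot --region        # Command+4 - rubber-band a rectangular region
ksnapshot --fullscreen    # Command+P - grab the entire screen
```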

Saturday, April 26, 2014

How to keep in sync with git repository of another person on github or bitbucket

Suppose rjha94 has a repo called dl (rjha94/dl - copy #1). Now you also want a copy of this repository on your machine. The first thing you have to do is to fork this repo using your account.

 #1. Fork this repository first (fork rjha94/dl as srj0408/dl)
 #2. To get this code on your local machine, just clone your fork of the repo.

 $git clone srj0408/dl (get the actual clone URL from the bitbucket interface)

When you forked, you made a copy at the hosting server (copy #2). When you cloned, you made a copy of your server repo on your local machine (copy #3). From a git point of view, all these copies are equally valid (there is no central or one true copy). So you have rjha94/dl that you forked into srj0408/dl on the server. Then you cloned the same onto your local machine, creating a third copy.

All these copies can be independent of each other. You can make changes on your local machine that no one knows about. In the same way, the original repo (copy #1) can change. Now suppose some new changes have come to rjha94/dl. How can you get them into the repo on your local machine (copy #1 -> copy #3) and push them to your own server repo srj0408/dl (copy #3 -> copy #2)?

 #3 To get rjha94/dl repo changes into the repo on your local machine
$git checkout master
$git remote add rjha94/dl ssh://

Check out your local master branch and then add a new remote called rjha94/dl that points to the original server repo you forked (copy #1). Then, to merge the new changes from the rjha94/dl repo (copy #1):

$git fetch rjha94/dl
$git merge remotes/rjha94/dl/master

This would pull changes from the rjha94/dl repo and merge them into your local copy (copy #1 -> copy #3). Note that a clean merge commits itself; the explicit commit below is only needed if you had to resolve conflicts. To get these changes into your server repo copy:

$git commit -m "merged changes from upstream on 25-apr-14"
$git push origin

Doing this pushes the changes you just merged into your local copy up to your server repo (copy #3 -> copy #2), where origin is a shortcut (alias) for your own server copy.
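The whole three-copy dance above can be rehearsed end to end on your own machine, with local directories standing in for the hosting server (all paths and names here are made up for the simulation; I use "upstream" as the extra remote name):

```shell
set -e
cd "$(mktemp -d)"

# copy #1 - the original repo (stands in for rjha94/dl on the server)
git init -q -b master upstream
(cd upstream && git config user.email you@example.com && git config user.name you \
    && echo v1 > file.txt && git add file.txt && git commit -qm "initial")

# copy #2 - your fork on the server (a bare repo stands in for srj0408/dl)
git clone -q --bare upstream fork.git

# copy #3 - your clone of the fork; "origin" points at the fork
git clone -q fork.git local
cd local
git config user.email you@example.com && git config user.name you
git remote add upstream ../upstream      # the extra remote pointing at copy #1

# meanwhile the original repo moves ahead of you
(cd ../upstream && echo v2 >> file.txt && git commit -qam "new change")

# the sync: copy #1 -> copy #3 -> copy #2
git checkout -q master
git fetch -q upstream
git merge -q upstream/master             # commits itself unless there are conflicts
git push -q origin master
```

After this, the fork (copy #2) contains the upstream change without rjha94 ever knowing about your setup.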

Sunday, April 13, 2014

Time series database survey for IoT and m2m devices

This is a survey of time series databases available for use, both cloud offerings as well as "install on your own machines" solutions. The requirements we have are:

  • store high velocity time series data (frequent data arriving from one node)
  • store data from a lot of nodes
  • compute aggregates (sum over a day's worth of data)
  • grouping functions (average, STDEV)
  • analyze the data for patterns etc.

No one is paying me to write this, so I will steer clear of jargon like slice-and-dice, cubes and all that b.s. In plain simple terms, we are receiving data from a lot of devices very frequently, so the first problem is simply storing a lot of data. MySQL and other RDBMSs are not optimized for storing such time series data. That is problem #1.

Another problem is that it may not be prudent to fetch all the raw data points for certain queries later on. Let's say that you want to watch the trend over a month; fetching all the raw datapoints may be overkill. What you would instead like to do is fetch just 30 data points, each an average over a day's worth of datapoints. Creating such buckets (rollups) on demand would be an expensive operation, so we need to push data into such buckets as and when it arrives. That is problem #2: good, solid support for whatever rollup I would like to create. For data arriving at millisecond intervals, that bucket can be just one minute!

There is actually a rollup hierarchy. Say data is arriving at 5 minute intervals and you make a rollup of an hour (average over 12 datapoints). Further, you would like to make a rollup of a day (averaged over 24 datapoints of the previous bucket), and so on.
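As a toy illustration of the rollup idea (not any particular database's feature), bucketing raw samples into hourly averages is just integer division on the timestamp; a sketch with awk over made-up `epoch_seconds,value` rows:

```shell
# input: epoch_seconds,value  (one raw sample per line - made-up data)
# output: hour_bucket,average (one rolled-up row per hour)
printf '%s\n' \
    '0,10' '300,20' '600,30' \
    '3600,40' '3900,60' |
awk -F, '{
    bucket = int($1 / 3600)              # hour-sized buckets
    sum[bucket] += $2; count[bucket]++
}
END {
    for (b in sum) printf "%d,%g\n", b, sum[b] / count[b]
}' | sort -t, -k1,1n
# prints: 0,20 and 1,50
```

A real store has to do this incrementally as samples arrive, at every level of the hierarchy, which is exactly what makes on-demand rollups expensive.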

Then we also need aggregates. We would like to sum over datapoints for a particular interval for reporting (say, rainfall over a day).

For IoT/m2m kind of use cases, you also need to detect patterns in real time (apart from the threshold alerts). Then we would like to analyze the data and perform statistical operations on it.


Nice circular buffer
Expects data at required intervals
Language bindings available
Good fit for a small number of metrics


Forked from openTSDB
Stores metrics in HBase/Cassandra
Good storage facility, allows tagging of data
However, the data model is very limited
Aggregates are calculated at query time and can be a performance drag
No support for automatic rollups


Looks very married to the graphs
Good for computer-metrics cases
Does not look like a good fit for the device case
(where the data dictionary is device dependent)


Cloud offerings
Xively a.k.a. Pachube a.k.a. whatever-it-was

Good PR buzz
Good ecosystem
support is a black hole if you are in Asia
Rollup supported (in their own way)
Good provisioning and device activation support
Device side things are unnecessarily complicated
support for average function only (haven't found others yet)


Digi m2m cloud


I think all cloud based offerings would run into limitations for serious applications. Also, there is no way others can do your analytics for you. For the moment, my strategy is to prototype on Xively and then switch to InfluxDB (or maybe another on-my-machine solution). For realtime analytics, look at Amazon Kinesis or NumPy with HDF5. The debate is far from settled.

Saturday, November 02, 2013

OS X Mavericks and VMware Fusion 4.1

My copy of VMware Fusion 4.1 works with OS X Mavericks.

It is quite understandable why VMware would spread FUD to sell more copies of newer versions of Fusion. However, we had already burnt the VMware support bridge when we updated to 3.x kernels. Long live open-vm-tools!

Tcpflow Network traffic capture in three simple steps

Three simple steps to capture network traffic on your Linux box

  • On the Debian Box, just install tcpflow using apt-get
  • $ifconfig to figure out the network interface (mine was eth0)
  • $sudo tcpflow -c -e -i eth0
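Once the full firehose gets noisy, tcpflow can also take a standard pcap filter expression after the options to narrow the capture (a sketch; the interface and port here are examples, pick yours from ifconfig):

```shell
# capture only HTTP traffic on eth0, printing flows to the console
sudo tcpflow -c -i eth0 port 80
```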

Friday, August 16, 2013

HTTP Traffic monitor and capture tools

Here is a quick list of tools that can capture and monitor HTTP traffic between your browser and the server. Yeah, I know Wireshark exists; it is just that I still do not know how to use it!

  1. tcpflow - works at the card level. Much better if you can script things on your own box.
  2. Fiddler - Nice tool, but Windows only (who is going to download Mono to install an HTTP capture tool on a Linux box?)
  3. httpry - Not tried yet, on the list though!
  4. TCPCatcher - Would need Java. Downloadable as a standalone jar.
  5. Charles Proxy - commercial ($50), but people have a lot of praise for the tool.
  6. WebScarab - Our security team was using it. Ugly as hell, but does the job.

Tuesday, May 14, 2013

Eight queens problem in Java

Here is a solution to the eight queens puzzle in Java. You should note the following:

  • This is a brute force solution for an NxN board
  • The way we solve it is: first solve it for eight rooks - i.e. first take care of horizontal and vertical lines only (the way a rook moves) - and then omit the solutions having collisions on the diagonals
  • To solve the N-rook problem we generate all possible permutations of N (corresponding to the fact that one queen occupies one column) - so this solution is not going to scale at all
  • The only way to learn anything in life is to do it yourself - even though this is a simple brute force solution, doing it gives me more pleasure than reading others' elegant solutions.

The next step is probably to port this to JavaScript and generate the boards using an HTML5 canvas.

/**
 * 8-queen problem using brute force searching
 * This solution uses the following strategies:
 * 1 - fix one queen per column and generate all
 * non-conflicting permutations - N! in total -
 * this is akin to solving the N-rook problem
 * 2 - eliminate the permutations that do not
 * pass the additional diagonal test
 * @author Rajeev Jha
 * @version 1.0
 */
public class queen8 {

    private int[] columns;
    private char[] colNames;
    private int size;
    private int solutions;

    public queen8(int N) {
        this.columns = new int[N];
        for (int i = 0; i < N; i++) this.columns[i] = i + 1;

        this.colNames = new char[N];
        char a = 'A';
        for (int i = 0; i < N; i++) this.colNames[i] = (char) (a + i);

        this.size = N;
        this.solutions = 0;
    }

    private void solve() {
        generate(this.size);
    }

    /* permutation generator using a backtracking algo */
    private void generate(int N) {
        /* 0! case - only one arrangement left, test it */
        if (N == 0) test_diagonal();
        // algorithm adjusted for zero-based indexes
        for (int c = 0; c < N; c++) {
            swap(c, N - 1);
            generate(N - 1);
            swap(c, N - 1);
        }
    }

    /* swap for permutation */
    private void swap(int x, int y) {
        int tmp = this.columns[x];
        this.columns[x] = this.columns[y];
        this.columns[y] = tmp;
    }

    /* for the position of a queen in a column (a permutation),
     * diagonal positions are given by moving one row up or
     * down per column of distance from it */
    private void test_diagonal() {
        int x;
        for (int i = 0; i < this.columns.length; i++) {
            x = this.columns[i];
            for (int j = i + 1, k = 1; j < this.columns.length; j++, k++) {
                if ((x + k) == this.columns[j]) return;
                if ((x - k) == this.columns[j]) return;
            }
        }
        // diagonal test passed
        this.solutions++;
        print_board();
    }

    private void print_board() {
        for (int i = 0; i < this.columns.length; i++) {
            System.out.print("" + this.colNames[i] + this.columns[i] + " ");
        }
        System.out.println();
    }

    private int getNumSolutions() {
        return this.solutions;
    }

    public static void main(String[] args) throws Exception {
        queen8 board = new queen8(8);
        board.solve();
        System.out.println(" \n Total " + board.getNumSolutions() + " solutions ");
    }
}

And here are the solutions

rjha@mint13 ~/code/fun $ javac queen8.java
rjha@mint13 ~/code/fun $ java -classpath . queen8 
A4 B2 C7 D3 E6 F8 G5 H1 
A5 B2 C4 D7 E3 F8 G6 H1 
A3 B5 C2 D8 E6 F4 G7 H1 
A3 B6 C4 D2 E8 F5 G7 H1 
A4 B7 C5 D3 E1 F6 G8 H2 
A5 B7 C1 D3 E8 F6 G4 H2 
A4 B6 C8 D3 E1 F7 G5 H2 
A3 B6 C8 D1 E4 F7 G5 H2 
A5 B3 C8 D4 E7 F1 G6 H2 
A5 B7 C4 D1 E3 F8 G6 H2 
A4 B1 C5 D8 E6 F3 G7 H2 
A3 B6 C4 D1 E8 F5 G7 H2 
A6 B4 C2 D8 E5 F7 G1 H3 
A5 B2 C6 D1 E7 F4 G8 H3 
A6 B4 C7 D1 E8 F2 G5 H3 
A1 B7 C4 D6 E8 F2 G5 H3 
A6 B2 C7 D1 E4 F8 G5 H3 
A6 B8 C2 D4 E1 F7 G5 H3 
A5 B8 C4 D1 E7 F2 G6 H3 
A4 B8 C1 D5 E7 F2 G6 H3 
A4 B7 C1 D8 E5 F2 G6 H3 
A4 B2 C7 D5 E1 F8 G6 H3 
A2 B5 C7 D4 E1 F8 G6 H3 
A5 B7 C1 D4 E2 F8 G6 H3 
A2 B7 C5 D8 E1 F4 G6 H3 
A1 B7 C5 D8 E2 F4 G6 H3 
A5 B1 C4 D6 E8 F2 G7 H3 
A6 B4 C1 D5 E8 F2 G7 H3 
A6 B3 C7 D2 E8 F5 G1 H4 
A2 B7 C3 D6 E8 F5 G1 H4 
A5 B1 C8 D6 E3 F7 G2 H4 
A1 B5 C8 D6 E3 F7 G2 H4 
A3 B6 C8 D1 E5 F7 G2 H4 
A7 B5 C3 D1 E6 F8 G2 H4 
A6 B3 C1 D7 E5 F8 G2 H4 
A7 B3 C1 D6 E8 F5 G2 H4 
A5 B7 C2 D6 E3 F1 G8 H4 
A3 B6 C2 D7 E5 F1 G8 H4 
A6 B2 C7 D1 E3 F5 G8 H4 
A7 B3 C8 D2 E5 F1 G6 H4 
A5 B3 C1 D7 E2 F8 G6 H4 
A2 B5 C7 D1 E3 F8 G6 H4 
A3 B6 C2 D5 E8 F1 G7 H4 
A6 B1 C5 D2 E8 F3 G7 H4 
A8 B3 C1 D6 E2 F5 G7 H4 
A2 B8 C6 D1 E3 F5 G7 H4 
A3 B7 C2 D8 E6 F4 G1 H5 
A6 B3 C7 D2 E4 F8 G1 H5 
A4 B2 C7 D3 E6 F8 G1 H5 
A1 B6 C8 D3 E7 F4 G2 H5 
A7 B1 C3 D8 E6 F4 G2 H5 
A6 B3 C7 D4 E1 F8 G2 H5 
A3 B8 C4 D7 E1 F6 G2 H5 
A7 B4 C2 D8 E6 F1 G3 H5 
A4 B6 C8 D2 E7 F1 G3 H5 
A2 B6 C1 D7 E4 F8 G3 H5 
A3 B6 C2 D7 E1 F4 G8 H5 
A7 B2 C6 D3 E1 F4 G8 H5 
A2 B4 C6 D8 E3 F1 G7 H5 
A3 B6 C8 D2 E4 F1 G7 H5 
A8 B4 C1 D3 E6 F2 G7 H5 
A4 B8 C1 D3 E6 F2 G7 H5 
A6 B3 C1 D8 E4 F2 G7 H5 
A2 B6 C8 D3 E1 F4 G7 H5 
A4 B7 C3 D8 E2 F5 G1 H6 
A4 B8 C5 D3 E1 F7 G2 H6 
A3 B5 C8 D4 E1 F7 G2 H6 
A7 B4 C2 D5 E8 F1 G3 H6 
A5 B7 C2 D4 E8 F1 G3 H6 
A4 B2 C8 D5 E7 F1 G3 H6 
A4 B1 C5 D8 E2 F7 G3 H6 
A5 B1 C8 D4 E2 F7 G3 H6 
A5 B2 C8 D1 E4 F7 G3 H6 
A8 B2 C4 D1 E7 F5 G3 H6 
A7 B2 C4 D1 E8 F5 G3 H6 
A3 B7 C2 D8 E5 F1 G4 H6 
A3 B1 C7 D5 E8 F2 G4 H6 
A8 B2 C5 D3 E1 F7 G4 H6 
A3 B5 C2 D8 E1 F7 G4 H6 
A3 B5 C7 D1 E4 F2 G8 H6 
A5 B2 C4 D6 E8 F3 G1 H7 
A6 B3 C5 D8 E1 F4 G2 H7 
A5 B8 C4 D1 E3 F6 G2 H7 
A4 B2 C5 D8 E6 F1 G3 H7 
A4 B6 C1 D5 E2 F8 G3 H7 
A5 B3 C1 D6 E8 F2 G4 H7 
A6 B3 C1 D8 E5 F2 G4 H7 
A4 B2 C8 D6 E1 F3 G5 H7 
A6 B3 C5 D7 E1 F4 G2 H8 
A6 B4 C7 D1 E3 F5 G2 H8 
A4 B7 C5 D2 E6 F1 G3 H8 
A5 B7 C2 D6 E3 F1 G4 H8 
 Total 92 solutions 

Using WordPress export data with PHP SimpleXML

I had a site running on WordPress. This was a 256 MB slice, and WP 3.2+, I must say (in a relative sense of course), is not light on resources. So I decided to move this site to my own code. That also meant moving the WordPress data to my own schema. So I took an XML dump using the WordPress export tool and imported it back using my own scripts that use PHP and SimpleXML.

The XML from the WordPress export tool has namespaces and multiple elements of the same name, so I reckoned my skeleton script could be of use to someone. Here we try to grab the title, publication date, link (permalink), categories, tags and content from the original WordPress post.

The code follows


    <?php

    function process_post($title, $category, $tags, $content, $createdOn) {
        if (empty($content)) { return; }
        // process post
    }

    // start:script
    // wp.xml contains dump of wordpress posts

    if (file_exists('wp.xml')) {
        $doc = simplexml_load_file('wp.xml');

        if ($doc === false) {
            echo "Failed loading XML\n";
            foreach (libxml_get_errors() as $error) {
                echo "\t", $error->message;
            }
            exit;
        }
    } else {
        echo('Failed to open wp.xml.');
        exit;
    }

    foreach ($doc->channel->item as $item) {

        $title = $item->title;
        // content and other elements can be wrapped inside
        // a separate namespace. To deal with such elements we
        // use item->children on the namespace given in wp.xml

        // use the wxr namespace URI declared at the top of your wp.xml
        // (the version suffix varies with the WP release)
        $ns_wp = $item->children("http://wordpress.org/export/1.2/");
        $attachment = $ns_wp->attachment_url;

        if (empty($attachment)) {
            $ns_content = $item->children("http://purl.org/rss/1.0/modules/content/");
            $content = (string) $ns_content->encoded;
            $link = $item->link;

            $pubDate = $item->pubDate;
            $createdOn = date("Y-m-d", strtotime($pubDate));

            $tags = "";
            $category = "";

            // tags and category
            // we can have multiple category elements inside an item

            foreach ($item->category as $elemCategory) {

                if (strcmp($elemCategory["domain"], "category") == 0) {
                    $category = $category . " " . $elemCategory["nicename"];
                }

                if (strcmp($elemCategory["domain"], "post_tag") == 0) {
                    $tags = $tags . " " . $elemCategory["nicename"];
                }
            }

            printf("title = %s, category = %s, tags = %s, pub_date = %s \n", $title, $category, $tags, $createdOn);
        }
    }

© Life of a third world developer
Maira Gall