Prab's Code Blog

Because talk is cheap, show me the code!

Make Your if Statements Concise

Sometimes you may encounter a situation where you need to do one thing if two conditions are both satisfied or both unsatisfied, and do something else if exactly one of them is satisfied. An example would be:

if ((resultFromServer1 == null && resultFromServer2 == null) || (resultFromServer1 != null && resultFromServer2 != null)) {
    log.warn("Both servers are aware or unaware of the operation");
    doSomething();
} else if ((resultFromServer1 == null && resultFromServer2 != null) || (resultFromServer1 != null && resultFromServer2 == null)) {
    log.error("Only one server is aware of the operation");
    doSomethingElse();
}

These conditions look messy. We can fix them using XOR! XOR returns true if the two conditions differ and false if they are the same. So we can write the above more concisely as:

if ((resultFromServer1 == null) ^ (resultFromServer2 == null)) {
    log.error("Only one server is aware of the operation");
    doSomethingElse();
} else {
    log.warn("Both servers are aware or unaware of the operation");
    doSomething();
}

The complement of XOR is XNOR. The interesting thing about it is that, for booleans, XNOR is equivalent to the two operands being equal, which is why it is also called the equality gate. Hence, we can write:

if ((resultFromServer1 == null) == (resultFromServer2 == null)) {
    log.warn("Both servers are aware or unaware of the operation");
    doSomething();
} else {
    log.error("Only one server is aware of the operation");
    doSomethingElse();
}

Here's a Fluent Builder in C#

A fluent builder makes creating objects more readable than constructors and more concise than setting properties through setters on an uninitialized object. Java devs can use Lombok's @Builder annotation to get a fluent builder wired into their classes. Unfortunately, in the C# world no such thing exists, so here's how you can roll your own.

public class Animal
{
  public string Name { get; set; }

  public string Type { get; set; }


  #region Fluent builder
  public static Animal Builder()
  {
    return new Animal();
  }

  public Animal WithName(string name)
  {
    this.Name = name;
    return this;
  }

  public Animal WithType(string type)
  {
    this.Type = type;
    return this;
  }

  public Animal Build()
  {
    return this;
  }
  #endregion
}

And, to construct an object, you can simply use:

Animal a = Animal.Builder().WithName("Tommy").WithType("Dog").Build();

Boot Linux From Grub Rescue Prompt

If you have a dual-boot setup of a Linux-based OS and Windows 10, and you have set up grub to choose which OS to boot, you might have experienced the grub rescue prompt that comes up after a Windows 10 update messes up the boot files.

Panic not, for it's easy to get back to your beloved Linux distro and fix grub. Here are the steps; a consolidated command sequence follows the list.

  • Find out the Linux partition (skip, if you already know)
    • Run ls and it should show you a list of partitions like (hd0,msdos1) (hd0,msdos2) ... or in the form (hd0,gpt1), (hd0,gpt2) ...
    • Run ls with the name of the partition followed by a / to see the files on the partition, like so: ls (hd0,gpt1)/. Remember, that forward slash is important. You have to do this for each partition until you find your Linux partition - i.e. you see the list of directories named dev, proc, usr, etc, mnt and so on.
  • Set the grub modules prefix
    • Run set prefix=(hd0,msdos3)/boot/grub. This assumes your Linux partition from the previous step was (hd0,msdos3).
  • Set the root partition
    • You can set the linux partition as your root partition by running set root=(hd0,msdos3)
  • Load the needed modules
    • We need to load the linux module to be able to boot Linux. Run insmod linux to do that.
  • Find where your kernel and initramfs are located
    • You can run ls /boot/ and it should show a file named vmlinuz-linux. That's your Kernel. At least that's how it is named in Arch Linux and related distros like Antergos and Manjaro. The Kernel and Ramdisk have different names on different distros; CentOS, Fedora and RHEL have names similar to vmlinuz-3.10.0-693.11.1.el7.x86_64 for the Kernel. I assume you are smart enough to figure the naming out.
    • You can similarly find your initramfs image. Usually it is named something like initramfs-linux.img.
    • Caveats to look out for when finding the Kernel and initramfs image files - number one, if the file names are versioned, choose the latest ones. Number two, choose the same version for the Kernel and the initramfs. And number three, don't choose names that contain rescue or memtest; they are not the ones we are interested in.
  • Boot Linux!
    • Having found the Kernel and Ramdisk images, let's load them up. Run linux /boot/vmlinuz-linux root=/dev/sda3 fb rw quiet. sda3 may be hda3 if you have an older machine, and the digit at the end should be the same as the root partition number we found in the first step, i.e. msdos3 in my case. Now load the Ramdisk by running initrd /boot/initramfs-linux.img.
    • Having loaded ‘em both, let’s boot. Run boot and bingo, your system should now boot normally!
  • Fix grub so that you don’t have to go through this ordeal next time
    • Running sudo grub-install /dev/sda (or in some cases hda instead of sda) should reinstall grub correctly and fix your booting problems.
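
For quick reference, here is the whole sequence in one place, assuming (hd0,msdos3) is your Linux partition, /dev/sda is your disk, and your Kernel and Ramdisk are named vmlinuz-linux and initramfs-linux.img - adjust all of these to whatever you found in the steps above.

ls
ls (hd0,msdos3)/
set prefix=(hd0,msdos3)/boot/grub
set root=(hd0,msdos3)
insmod linux
ls /boot/
linux /boot/vmlinuz-linux root=/dev/sda3 fb rw quiet
initrd /boot/initramfs-linux.img
boot

And once you are back in your distro, run sudo grub-install /dev/sda to fix grub for good.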

Ship a REST API With Node, ES6 and MongoDB - Part 1

Introduction

NodeJS has been gaining a lot of traction recently, especially for web applications. Lots of new web applications are now being built with tools from the Node ecosystem. A possible reason is the umpteen number of JavaScript developers out in the wild; another is the blazing speed of the V8 engine. Heck, even shiny desktop apps are now being built with Node, e.g. Popcorn Time.

In this post I will walk through everything I did to ship an API app. Ship as in - ready to serve production traffic.

About the stack

We will be using Express as our web app framework. Express is minimalistic, fast, and quietly gets out of the way when you want to detour down the dirt road. It is unlike the convention-over-configuration style of Rails and the Play framework, where you are fine as long as you follow the conventions the framework recommends; the going gets tough if you start deviating. There are also frameworks built specifically to facilitate API building, such as Restify. The reason for not choosing such a framework for this exercise is to understand the intricacies of building an API; Restify would shield us from a lot of that.

Also, we will be writing our JavaScript in ES6, which is cleaner and hands down a better language than the JavaScript specced in ES5. The latest version of node still doesn't fully support ES6, so we will use a transpiler named Babel, which transpiles the code we write into ES5-conforming code that node fully supports. In the future, when node catches up, we can turn off transpilation and run our ES6 code directly with node.
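
To make the idea concrete, here is a rough before/after sketch of what transpilation does. The ES5 half is only an approximation of what Babel's latest preset emits (the real output adds interop helpers and the like), and the greet function is just a made-up example.

// What we write (ES6):
import express from 'express';
const greet = name => `Hello, ${name}!`;

// Roughly what node ends up running after transpilation (illustrative, not exact Babel output):
'use strict';
var express = require('express');
var greet = function (name) {
  return 'Hello, ' + name + '!';
};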

MongoDB will be our database. It is a document-oriented NoSQL database. What that means is that it is unlike RDBMSes such as MySQL or Oracle, which let you model the world by defining a schema in the form of tables recording entities' properties and then linking them with primary and foreign keys. With MongoDB we model our worldly entities in the form of documents, which are similar to JSON objects. The biggest advantage of NoSQL databases like MongoDB is that they are designed to be horizontally scalable: whenever we need more performance, we just add a few more machines to the DB cluster and the database engine takes care of redistributing data and query traffic across all available machines. RDBMSes are theoretically limited in terms of horizontal scalability. A discussion of this is beyond the scope of this article, but if you are interested in knowing more about it, you can browse the interwebs for this nuisance called the CAP theorem.
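
To make "documents similar to JSON objects" concrete, here is a hypothetical document (the field names are purely illustrative); unlike a flat row in an RDBMS table, it can hold arrays and nested objects directly.

{
  "name": "John Doe",
  "country": "United States",
  "languages": ["javascript", "c++"],
  "profile": { "joined": ISODate("2016-12-04"), "upvotes": 42 }
}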

About the project

We are going to build an API which can power a web application and/or a mobile app. The app is a social code-snippet-sharing system where users can share frequently encountered code snippets from a developer's day-to-day life - snippets that solve mundane problems people are too lazy to code themselves. Usually, as developers, we google our problem, find a link to a Stack Overflow question, copy the snippet from there and use it. In our case we let the users share the snippet, enter a description, select the language, optionally add some tags and publish it. A user can also thumb up a snippet or mark it as junk. We also allow some experienced users to edit the snippets and/or descriptions for clarity. We define experience with a formula derived from the number of upvotes their snippets receive from other users, their age on the site and their time spent on the site. Also, we do away with logins through passwords; we provide OAuth-based logins through Google, Facebook, Twitter etc. Apart from these functional features, we will also do some non-functional stuff like generating an SEO-friendly URL for each snippet so that Googlebot can find our snippets and list them in search results. Let's wear the product manager hat for a moment and write down the high-level product requirements for our application.

As a guest user I should be able to

  • search for snippets through description and/or code matches
  • list snippets by language, and sort them by
    • most voted
    • most viewed
  • login using Google, Facebook or Twitter auth

As a logged-in user I should be able to

  • perform snippet operations
    • post a snippet
    • edit my posted snippets
    • view a snippet and see all previous versions of a snippet, if they have been edited
  • perform profile operations
    • view my profile
    • refresh upstream information in the profile
    • view others' profiles
    • view activity of a user, like snippets posted, snippets upvoted, snippets edited etc.
  • perform friendship operations
    • send friend requests to people
    • accept/reject others' friend requests
    • ignore/block a user
  • perform feed operations
    • view a feed which shows my friends' activity on the website
    • like and comment on activities in the feed
  • perform leaderboard operations
    • view leaderboard of snippets
    • view leaderboard of users
    • filter leaderboard by language, user location etc.
  • perform messaging operations
    • send private messages to my friends
    • send private messages to non-friends
    • view my inbox, categorized with messages from friends/non-friends

Why API?

Let's take a step back and understand why we should write an API at all. Why not a simple MVC web application? APIs are great for scaling, technically as well as logistically. With an API, we define a set of operations which cover all the interactions of users and other systems with our application. Then we write client apps which talk to the API and get the job done. So, next time you swipe left in Gmail to archive an email, you should know that the Gmail app on your phone made an API call to the Gmail servers, along with some parameters, and that call did the actual archiving of the email.

What is logistical scalability? OK, I just made up this term! With an API as the contract in place, the client apps can evolve independently of the actual functional implementation. We can have different people (or teams) working on the different client apps; as long as they have the API spec they can work independently. Also, the same API can power a native iOS app, an Android app, a mobile website and a web application. These client apps make API calls to fetch and display relevant data and perform operations upon user interactions.

In terms of technical scalability, the people working on the backend can independently scale the most used operations in the API. For instance, if we are getting millions of calls a day for our compute-intensive findBugs operation, we can intelligently increase the resources it gets - one way would be to dedicate an independent, high-spec machine just to this operation. And thus we leverage the power of distributed systems and horizontal scalability! We will get into the details of this when we talk about shipping, once we are code complete.

Dev Env setup

Before we design our data model, let’s get our hands dirty a bit, write a hello world app with Express and get MongoDB setup on our workstation.

I am an Arch Linux devotee. The instructions here should work perfectly on Arch. The onus of changing the commands to suit your package manager (apt, yum, emerge, brew) is on you.

Install node, npm and mongodb

The following commands will install node, npm and mongodb for us. node is the JavaScript runtime executable which is actually going to run our code. npm is the package manager which takes care of resolving dependencies on third-party libraries, downloading them and setting up their paths correctly, so that we have a lot of pre-written code in the form of modules at our disposal.

MongoDB will also be started and enabled for autostart. If you don't want it to be auto-started every time you boot, don't execute the last command.

sudo pacman -Syu node npm mongodb

sudo systemctl start mongodb

sudo systemctl enable mongodb

As of this writing, I got node 7.2, npm 4.0 and mongodb 3.2. The JavaScript ecosystem is a very fast-moving one; some of the code mentioned from here on might become obsolete in a couple of years, but the general idea remains the same.

Setup the initial package.json

In the Node world, package.json is the file which contains the details of our dependencies and run configurations. In the Rails world, it is analogous to a Gemfile, but on steroids. In the Java world it is roughly analogous to a build.gradle or a pom.xml.

mkdir -p ~/Documents/Projects/js-snipcode
cd !$
npm init

The second command might seem unfamiliar. !$ represents the last parameter of the previous command in bash and zsh, so it will just take you to the directory created in the previous step.

npm init is interactive: it will first ask you for a project name, then the version and description. Next it asks for the entry point of the app; you can leave it as index.js. Keep pressing Enter to accept the defaults for the other fields, or change them if you like. For now we ignore the test command. A package.json file will be created at the end.

Install Express

Express is the web framework library that we are going to use. Now let’s add a dependency on Express, add babel transpilation of ES6 and get from zero to “Hello World!”.

npm install express --save
npm install --save-dev babel-cli babel-preset-latest

Running the above commands will download Express, babel-cli and Babel's latest preset. The --save flag makes an entry in the dependencies section of package.json. Guess what the --save-dev flag does? You got it right! It makes an entry in devDependencies in package.json. Dev dependencies are different from dependencies in that they are not required for running the app; they are only used during development, for instance for transpilation during the build. Let's also add two target scripts named build and start in the package.json. Running npm run build will tell Babel to transpile everything in the src directory recursively and output the transpiled files to the dist directory. Running npm run start in the root directory of our project will start our web server by running the command node dist/index.js. Also, don't forget mkdir src; mv index.js src. Here's the updated package.json.

package.json
{
  "name": "js-snipcode",
  "version": "1.0.0",
  "description": "API in NodeJS for the Snipcode project.",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1",
    "build": "babel src -d dist --presets latest",
    "start": "node dist/index.js"
  },
  "author": "Prabhakar Kumar",
  "license": "ISC",
  "dependencies": {
    "express": "^4.14.0"
  },
  "devDependencies": {
    "babel-cli": "^6.18.0",
    "babel-preset-latest": "^6.18.0"
  }
}

Zero to Hello World!

Let’s edit our src/index.js and write relevant code to make a Hello World app.

index.js
import express from 'express';

const app = express();

app.get('/', (req, res) =>
  res.send('Hello World!')
);

app.listen(3000, () =>
  console.log('Server is up on port 3000!')
);

Now let’s build and run.

npm run build
npm run start

If everything went well as we discussed, you will see a message on the console that says the server is up on port 3000. Now if you fire up Firefox or Chromium and go to localhost:3000, you will see the evergreen 'Hello World!' message staring at you on a white screen.
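
You can also check it from the terminal; assuming you have curl installed, hitting the root route should return the greeting:

$ curl http://localhost:3000/
Hello World!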

Before we start writing more code, I'll explain the meaning of each line in our Hello World web app. But before that, let's pivot and focus on learning our database and designing a basic data model.

Play with mongo console

Let’s play with mongo console a bit to gain some familiarity with it.

$ mongo
MongoDB shell version: 3.2.10
connecting to: test
Welcome to the MongoDB shell.
For interactive help, type "help".
For more comprehensive documentation, see
        http://docs.mongodb.org/
Questions? Try the support group
        http://groups.google.com/group/mongodb-user
Server has startup warnings:
2016-12-04T07:40:24.545-0500 I CONTROL  [initandlisten]
2016-12-04T07:40:24.545-0500 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2016-12-04T07:40:24.545-0500 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2016-12-04T07:40:24.545-0500 I CONTROL  [initandlisten]
> show databases
local  0.000GB
> use snipcode
switched to db snipcode
> db.users.insert({name: "John Doe", country: "United States", "sex": 'M', dob: ISODate('1970-01-01')})
WriteResult({ "nInserted" : 1 })
> db.users.find()
{ "_id" : ObjectId("58441a27d01af6e5b1ae6f74"), "name" : "John Doe", "country" : "United States", "sex" : "M", "dob" : ISODate("1970-01-01T00:00:00Z") ers.insert({name: "Jane Doe", country: "United States", "sex": 'F', dob: ISODate('1970-01-01')})
WriteResult({ "nInserted" : 1 })
> db.users.find()
{ "_id" : ObjectId("58441a27d01af6e5b1ae6f74"), "name" : "John Doe", "country" : "United States", "sex" : "M", "dob" : ISODate("1970-01-01T00:00:00Z") }
{ "_id" : ObjectId("58441a5bd01af6e5b1ae6f75"), "name" : "Jane Doe", "country" : "United States", "sex" : "F", "dob" : ISODate("1970-01-01T00:00:00Z") }
> db.users.find({sex: 'F'})
{ "_id" : ObjectId("58441a5bd01af6e5b1ae6f75"), "name" : "Jane Doe", "country" : "United States", "sex" : "F", "dob" : ISODate("1970-01-01T00:00:00Z") }

Similar to MySQL and friends (MariaDB, Aurora), MongoDB organizes its data into databases. Unlike MySQL, we don't need to explicitly create a database; it gets created as soon as we try to write something to it. Inside a database, data is organized into collections. A collection is analogous to a table in an RDBMS, but more flexible: a collection doesn't have a fixed schema, so one document (a record/row in RDBMS terms) can have fields different from others in the same collection. In the above example, we switched to a database named snipcode. Then we inserted a document describing a user named 'John Doe'. Upon doing a find() (which is analogous to SQL's SELECT) on that collection we got the record back, along with a unique record ID called an ObjectId. Then we inserted another record for Jane Doe. Finally we ran a query which returns all female users; for this, we provided the query condition sex: 'F', which is similar to WHERE sex = 'F' in SQL.
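
If you are coming from SQL, this rough mapping of the commands we just ran may help build intuition; the SQL in the comments is only an approximation of what each mongo shell call does.

// Roughly: INSERT INTO users (name, sex) VALUES ('John Doe', 'M');
db.users.insert({name: "John Doe", sex: "M"})

// Roughly: SELECT * FROM users;
db.users.find()

// Roughly: SELECT * FROM users WHERE sex = 'F';
db.users.find({sex: "F"})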

As of now, this much familiarity is enough for us to get started. We will dive deeper as and when we need.

Next part

In this part we got bootstrapped and got our hands dirty. In the next part we will start with data modeling and continue further. Stay tuned for it.

Elegant Solutions to Common Interview Problems - Part 1 - Linked Lists

Define a linked list node for integers

LinkedListNode.cpp
class Node {
    public:
    int data;
    Node* next;
};

Reverse a singly linked list in-place iteratively

ReverseLinkedListIterative.cpp
Node* reverseIterative(Node* head) {
    // Initialize two pointers, current(ptr) and previous
    Node* ptr = head;
    Node* prev = NULL;
    // At each node, we point its next to the previous node
    while (ptr) {
        Node* next = ptr->next;
        ptr->next = prev;
        // Advance both pointers
        prev = ptr;
        ptr = next;
    }
    // Point head to last node
    head = prev;
    return head;
}

Reverse a singly linked list in-place recursively

ReverseLinkedListRecursive.cpp
Node* reverseRecursive(Node* head) {
    if (!head || !head->next) {
        return head;
    }
    Node* rest = reverseRecursive(head->next);
    // Point head's next node towards head
    head->next->next = head;
    head->next = NULL;
    return rest;
}

Traverse a linked list recursively

traverseRecursive.cpp
void traverse(Node* head) {
    if (!head) {
        return;
    }
    cout<<head->data<<endl;
    traverse(head->next);
}

Traverse a linked list recursively in reverse order

traverseRecursiveReverse.cpp
void traverseReverse(Node* head) {
    if (!head) {
        return;
    }
    // Recurse first, print after: the tail gets printed before the head
    traverseReverse(head->next);
    cout<<head->data<<endl;
}

Print nth element of linked list (counted from 1)

printNthElement.cpp
// Walk down the list, decrementing n; when n hits 1 we are at the nth node
void printNthElement(Node* head, int n) {
    if (!head) {
        // The list has fewer than n nodes; nothing to print
        return;
    }
    if (n == 1) {
        cout<<head->data;
        return;
    }
    printNthElement(head->next, n-1);
}

Print nth element from last in linked list

printNthElementFromLast.cpp
// n is passed by pointer so the countdown is shared across the recursive calls:
// we recurse to the end first and decrement while unwinding, so the node at
// which *n reaches 0 is the nth node from the last.
void printNthLastElement(Node* head, int* n) {
    if (!head) {
        return;
    }
    printNthLastElement(head->next, n);
    *n = *n - 1;
    if (*n == 0) {
        cout<<head->data;
        return;
    }
}
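
To tie these together, here is a small, hypothetical driver showing how the functions above might be called; note in particular that printNthLastElement takes n by pointer, so you pass the address of a local counter.

#include <iostream>
using namespace std;

// Assumes the Node class and the functions defined above are in scope.

int main() {
    // Build the list 1 -> 2 -> 3 by hand
    Node* c = new Node; c->data = 3; c->next = NULL;
    Node* b = new Node; b->data = 2; b->next = c;
    Node* a = new Node; a->data = 1; a->next = b;

    traverse(a);                 // prints 1, 2, 3 on separate lines

    int k = 2;
    printNthLastElement(a, &k);  // prints 2 (the 2nd element from the last)
    cout << endl;

    a = reverseIterative(a);     // list is now 3 -> 2 -> 1
    traverse(a);                 // prints 3, 2, 1 on separate lines

    return 0;
}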