16. January 2016

structuring react components

I have tried to share some of the learning I had while designing React components: rather than having one huge component doing everything, have smaller, more specialized ones, each doing one thing at a time.

Part 1 (The Problem Statement)

So say I want to list out all the repositories of a GitHub user (e.g. sindresorhus).

So here I am using the fetch method to make a request to GitHub's API, getting the response, parsing it to JSON and setting it onto the state. The fetching is done as soon as the component is about to mount. The render() method is called automatically as soon as the state is updated.
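A rough sketch of what this could look like (the class-based component, state shape and API URL are my own assumptions, not necessarily the post's original code):

class Repositories extends React.Component {
  constructor (props) {
    super(props)
    this.state = {repos: null}
  }
  componentWillMount () {
    // fetch the repositories just before the component mounts
    fetch('https://api.github.com/users/sindresorhus/repos')
      .then(response => response.json())
      .then(repos => this.setState({repos}))
  }
  render () {
    return (
      <ul>
        {this.state.repos.map(repo => <li key={repo.id}>{repo.name}</li>)}
      </ul>
    )
  }
}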

There is a problem though — state, which is initially set to null, will throw an exception when I try to access state.repos. So I need to add another condition to the render function —
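Something along these lines (again a sketch, showing only the render method of the component above):

render () {
  // don't render anything until the repositories have been fetched
  if (!this.state.repos) {
    return null
  }
  return (
    <ul>
      {this.state.repos.map(repo => <li key={repo.id}>{repo.name}</li>)}
    </ul>
  )
}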

So far so good. I want to add another feature now: being able to do a real time search on the repository names.

So I added an input box and attached an event handler for the onKeyUp event. I also keep two lists, repos and fRepos, where fRepos represents the filtered list of repositories.
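A sketch of the updated component (the handler name and the filtering logic are my own assumptions):

class Repositories extends React.Component {
  constructor (props) {
    super(props)
    this.state = {repos: null, fRepos: null}
  }
  componentWillMount () {
    fetch('https://api.github.com/users/sindresorhus/repos')
      .then(response => response.json())
      .then(repos => this.setState({repos, fRepos: repos}))
  }
  onSearch (event) {
    // keep repos untouched and derive fRepos from the search query
    const query = event.target.value.toLowerCase()
    this.setState({
      fRepos: this.state.repos.filter(repo => repo.name.toLowerCase().indexOf(query) !== -1)
    })
  }
  render () {
    if (!this.state.fRepos) {
      return null
    }
    return (
      <div>
        <input type="text" onKeyUp={e => this.onSearch(e)} />
        <ul>
          {this.state.fRepos.map(repo => <li key={repo.id}>{repo.name}</li>)}
        </ul>
      </div>
    )
  }
}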

I want to add one more feature: I want to show the text no repositories found when the repositories don't match the search input.

So I have added a simple if condition that renders the list if fRepos.length > 0 and otherwise just shows no repositories found.
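Only the render method changes; a sketch of one way it could look:

render () {
  if (!this.state.fRepos) {
    return null
  }
  let content
  if (this.state.fRepos.length > 0) {
    content = <ul>{this.state.fRepos.map(repo => <li key={repo.id}>{repo.name}</li>)}</ul>
  } else {
    content = <div>no repositories found</div>
  }
  return (
    <div>
      <input type="text" onKeyUp={e => this.onSearch(e)} />
      {content}
    </div>
  )
}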

Okay, this is good and does the job, but if you go to the results page, you will see that it also shows the message initially, when the repositories are yet to be loaded from the API. Ideally I should show a loading... message until the fetch request completes.

Part 2 (Breaking Components)

If you observe the render() function, it is pretty messy right now. It tries to render different things on different occasions. This logic will only get more complicated unless I decompose the render function.

For example — if I have a component A that renders child components P, Q, R in different combinations, then the logic of rendering them individually can actually live inside the individual components P, Q, R, instead of their parent A. This helps us achieve the single responsibility principle, where the render function of A only determines which components it will mount. Whether they actually render or not is their (P, Q, R) own responsibility.

In our case we can remove all the if conditions from the main Repositories component and create smaller specialized ones that encapsulate when they should render.

To start with the component decomposition, we can create a component called NoRepositories which shows the ‘No Repositories found’ message when the filtered results are empty. Similarly we can create a component UnorderedList which renders only when a list of items is provided to it.
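Sketches of what these two components could look like (the prop names are my own assumptions):

class NoRepositories extends React.Component {
  render () {
    // render the message only when the filtered results are empty
    if (this.props.repos && this.props.repos.length > 0) {
      return null
    }
    return <div>No Repositories found</div>
  }
}

class UnorderedList extends React.Component {
  render () {
    // render only when a list of items is provided
    if (!this.props.items || this.props.items.length === 0) {
      return null
    }
    return <ul>{this.props.items.map(item => <li key={item.id}>{item.name}</li>)}</ul>
  }
}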

Effectively we got rid of one condition from the render method. We can apply the same concept to the loading message as well, by creating a Loading component. The subtle difference here is that I want to hide the input box and the user name at the time of loading. This can be done by making the content a child of the Loading component —
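A sketch of that idea (the loading prop is my own assumption):

class Loading extends React.Component {
  render () {
    // while loading, show the message and hide whatever was passed as children
    if (this.props.loading) {
      return <div>loading...</div>
    }
    return <div>{this.props.children}</div>
  }
}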

So we have concluded the first part of the refactoring, where each component decides by itself when it should be shown and how it should be shown.

Part 3 (Control rendering declaratively)

We removed the if conditions that control which child component needs to be rendered from the parent's render function, and moved them into the render functions of the individual child components. Here we will remove all forms of conditional rendering from all the render functions. To do this I will create a decorator renderIf.

const toArray = x => Array.prototype.slice.call(x)

const renderIf = function () {
  const predicates = toArray(arguments)
  return component => {
    var prototype = component.prototype
    const render = prototype.render
    prototype.render = function () {
      // render only if every predicate passes for this instance
      return predicates.every(i => i(this)) ? render.call(this) : null
    }
    return component
  }
}

The decorator takes in a list of predicate functions and evaluates them, passing the current instance of the component as the first param. If all the predicates return true, then the component is rendered.
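As a usage sketch, UnorderedList could now drop its own if condition and be written like this (assuming a Babel setup with decorator support; the predicate shown is my own):

@renderIf(component => component.props.items && component.props.items.length > 0)
class UnorderedList extends React.Component {
  render () {
    return <ul>{this.props.items.map(item => <li key={item.id}>{item.name}</li>)}</ul>
  }
}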

The declarative approach makes it much easier for me to understand the render function’s main responsibility.

I have removed all the if conditions from the code except for the one in the Loading component. To remove it, I will again have to split the component into two, viz. LoadingMessage and LoadingContent, and then apply the renderIf decorator.

Part 4 (Make declaratives reusable)

We can write helper functions such as isEmpty, to check if a list is empty, and have, to check if a property exists on the component. Using lodash, it is much easier to write these helpers. Apart from the original two helpers, I have also added their negations — isntEmpty & notHave.
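A sketch of how these helpers could look with lodash (the exact signatures are my assumption, not the post's original code):

import _ from 'lodash'

// each helper returns a predicate that renderIf evaluates against the component instance
const isEmpty = path => component => _.isEmpty(_.get(component, path))
const isntEmpty = path => component => !_.isEmpty(_.get(component, path))
const have = path => component => _.has(component, path)
const notHave = path => component => !_.has(component, path)

// usage: @renderIf(isntEmpty('props.items')) on a component class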


14. December 2015

node.js concurrency evaluation

I have been using node.js for a decent amount of time now and I had this hypothesis which I needed to validate —

For a node.js server running on a multi core system, if I flood the server with n concurrent requests to compute something expensive, it would handle the traffic better if the computation can be chunked in such a way that the server can perform those n computations concurrently.

This is timeslicing, which should be equivalent (at least in theory) to creating threads in Java. In fact, that's what node.js uses for IO operations, so technically my server should have a much higher throughput via this approach.

There are different ways to implement timeslicing, viz. setTimeout, process.nextTick and setImmediate. There are subtle differences between the three functions, but the bottom line is this — passing a callback to any of them defers its execution by some CPU cycles. This helps in letting the CPU breathe and perform other tasks in the meantime, such as rendering (on the frontend) or making HTTP requests.
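As a trivial illustration (not from the original post), all three defer the callback past the code that is currently executing:

const work = () => console.log('deferred work')

process.nextTick(work) // runs right after the current operation, before the event loop continues
setImmediate(work)     // runs in the check phase of the event loop
setTimeout(work, 0)    // runs in the timers phase of a later loop iteration

console.log('this logs first')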

For starters, I want to compare the performance of a fibonacci algorithm within a single process. So consider the following two fibonacci implementations —


var fibonacci = n => {
  if (n < 2) {
    return 1
  }
  return fibonacci(n - 1) + fibonacci(n - 2)
}

var fibonacciAsync = (n, cb, slicer) => {
  slicer(() => {
    if (n < 2) {
      return process.nextTick(() => cb(1))
    }
    var out = []
    const add = x => {
      // collect the two sub-results and call back with their sum
      out.push(x)
      if (out.length === 2) {
        cb(out[0] + out[1])
      }
    }
    fibonacciAsync(n - 1, add, slicer)
    fibonacciAsync(n - 2, add, slicer)
  })
}

The first one is the most natural way of implementing the fibonacci series using recursion; the second one takes slicer as a param, which could be any of the time slicing functions discussed above.

Though one could optimize the algorithm as a whole by memoizing the results, I needed something that takes a toll on node.js's single threaded architecture so that I could get some basic metrics out of it.

Test Suite

var Benchmark = require('benchmark')

var suite = new Benchmark.Suite
var size = 25 // assumed fibonacci input; the post doesn't show the original value

// add tests
suite
  .add('SYNC', () => {
    fibonacci(size)
  })
  .add('ASYNC:process.nextTick', d => {
    fibonacciAsync(size, d.resolve.bind(d), process.nextTick)
  }, {defer: true})
  .add('ASYNC:setTimeout', d => {
    fibonacciAsync(size, d.resolve.bind(d), setTimeout)
  }, {defer: true})
  .add('ASYNC:setImmediate', d => {
    fibonacciAsync(size, d.resolve.bind(d), setImmediate)
  }, {defer: true})
  .on('cycle', event => console.log(String(event.target)))
  .on('complete', function () { console.log('Fastest is ' + this.filter('fastest').pluck('name')) })
  .run({ 'async': true })


SYNC x 14,611,668 ops/sec ±1.01% (90 runs sampled)
ASYNC:process.nextTick x 770 ops/sec ±1.16% (42 runs sampled)
ASYNC:setTimeout x 116 ops/sec ±0.79% (80 runs sampled)
ASYNC:setImmediate x 739 ops/sec ±0.93% (78 runs sampled)
Fastest is SYNC

NOTE: setTimeout is the worst performer.

Yes, that's a no brainer; SYNC has to be the fastest. But wait, it's almost 19,000 times faster than the fastest async! That changes quite a lot of things!

On the front end, when you are computing something expensive, it's often suggested to chunk the computation so that the browser can do other tasks such as rendering in between. This gives the impression of a snappy, fast UI. This is perceived performance, and yes, I understand that in totality the task will take a lot more time to complete with this approach.

On the server side, I got seduced into taking the same approach, so that the server could handle requests concurrently. Looking at the performance difference, it seems that even if it does, the cost is unbelievably high: it would take up a lot more memory, deferring computation every time, and eventually exhaust all of its resources.

The CPU is taking me for a spin if I give him a chance to relax, how dare he!

Alright, things are beginning to get clearer in my head now, but the real test of my hypothesis will be on a multi core architecture. So I hosted the same code on a simple node HTTP server and forked the process 4x.
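A minimal sketch of that setup (not the post's actual server; the port and fibonacci input are placeholders of my own):

const cluster = require('cluster')
const http = require('http')

const fibonacci = n => n < 2 ? 1 : fibonacci(n - 1) + fibonacci(n - 2)

if (cluster.isMaster) {
  // fork one worker per core (4x, as in the experiment)
  for (let i = 0; i < 4; i++) {
    cluster.fork()
  }
} else {
  http.createServer((req, res) => {
    // sync variant; the async run swaps this for fibonacciAsync with a slicer
    res.end(String(fibonacci(30)))
  }).listen(8000)
}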

Load Test using nperf

With concurrency set to 50, averaging the response times and response rate over 1000 requests, here are the results.

        avg       rate      memory
async   751.467   64.80     1GB
sync    23.38     2028.40   240MB

In this case, sync is still close to 30x faster than async. The results are pretty much the same for different concurrency settings and get worse for async as the computation gets more expensive.

The lesson to be learnt here is twofold. First, my hypothesis was absurd. Second, application level time slicing using node's event loop will NOT give us the same, or even near the same, performance as the async behaviour of native node modules. Nor will it be as fast as thread based systems like Java. This doesn't mean that node.js or JavaScript is slower than Java or any other threaded system.

26. April 2015

Chaining async tasks

Method chaining is a pretty common pattern in object oriented programming. jQuery does an amazing job at it, but it's all about synchronous tasks. Then there are promises; they have the then method, which helps in chaining async requests, but it's not customizable and also quite verbose. So I created my own project ~ Chaining Tatum (no pun intended with the versatile actor Channing Tatum, big fan!).

19. April 2015

Setting up MOSH on Koding.io

Using a remote server for development is so much cooler for obvious reasons. Using MOSH on top of it naturally makes the experience much better.

Koding.com + MOSH = Ecstasy

Here is how you can get mosh to work on koding.com

  1. SSH into the koding server from your client machine. If you haven't added SSH keys yet, check out their tutorial.

     ssh <username>@<username>.koding.io
  2. Set up uncomplicated firewall (ufw) on the remote machine.

     sudo apt-get install ufw
     sudo ufw status verbose
     sudo ufw enable
  3. Open up critical ports first viz. HTTP, SSH and 56789 for koding.

     sudo ufw allow ssh
     sudo ufw allow http
     sudo ufw allow 56789/tcp
  4. Open up the port (60001 is used by mosh in most cases) on the remote machine so that the client machine can access it via UDP.

     sudo ufw allow 60001/udp
  5. Connect to the remote machine by running the mosh command from the client machine. This will automatically SSH into the remote server and start the mosh-server.

     mosh <username>@<username>.koding.io

That’s it, you are done.

02. April 2015

Random Ramblings

Astrology vs Machine Learning ~ 2 Apr 2015

Is astrology an age old machine learning (ML) algorithm that is predicting the future, just like ad networks that predict the click-through rate of an ad for the current user? Have I been ignorant all this while, failing to find the logic of causation in astrology, without realizing that:

Correlation does not imply causation


28. May 2014

Schemaless scheduling for repeated events

Consider a case where one wants to schedule repeated events in a calendar, for example a yearly birthday or a fortnightly appointment with the dentist. Though these cases are quite simple, and most current solutions such as Google Calendar handle them quite well, the problem arises when you want to integrate a system like this with your own application.

This stackoverflow question — Calendar Recurring/Repeating Events - Best Storage Method — gives a good picture of the complexity of the problem.

One thing that I observed is that you could draw parallels between this problem and the problem of selecting elements in a DOM tree. The latter has already been solved using CSS selectors. So I started developing a language inspired by CSS selectors but tailored for selecting dates.

This language ultimately gave me a lot of flexibility in terms of writing repetition logic and applying filters. Moreover, this setup didn't require any complex schema, and thus no schema migrations with every new feature. It's just one rule, defining whatever convoluted logic you may like, while still taking up only a single row in the table.

The rules for the language (I named it SHEQL) and a parser prototype have been open sourced and are also available via npm. Though the prototype has been written in JavaScript, one could easily write a version in Python or some other language.

21. May 2014

Date a boy who codes

The author of the following content didn't have a blog of her own; I loved it so much that I thought I'd post it here.


05. December 2013

Using Passbook at Indian Airports

Apple's Passbook is something that I was not expecting to work in India anytime soon. I was travelling to Bangalore and decided to test, this time, whether the app had any credibility in the eyes of the security at Jaipur International Airport, my boarding point.


03. December 2013

AngularJS Seed project

Personally, I have always preferred building an offline web app over a native desktop app: firstly because it's lighter, secondly because it's easy to build, platform independent and, most importantly, easy to update. I wanted to practice the basics of AngularJS on a project and learn more about the best practices that are set in the industry today. That is why I created this project.


02. December 2013

The Universal Theory Righteousness

This is a really old piece of text from when I was in my third year of college, back when I was 19. I had written it with the help of a really close friend, whom I used to love talking to. We talked about philosophy, science and hypothetical situations. She turned out to be the person that I eventually ended up marrying, more on her later; for the time being, check out what we came up with first :-)