The Reactive Manifesto


The Reactive Manifesto is a great approach towards cleaner architectures. It emphasises the most critical qualities a system should exhibit so that it can cope with varying load and still accomplish its goal.

Be responsive.

Be resilient.

Show elasticity.

Embrace message-based communication.

Using git checkout-index to init a project from a boilerplate repository


As developers, we are always looking for inspiration in other people’s code. Sometimes we learn something totally unfamiliar, sometimes we are just in it for a trick, but what makes us better developers is the social aspect of coding.

There are a lot of boilerplate repositories that put together several libraries into a starter package for new projects. So how do we use them as efficiently as possible?

Find your favorite boilerplate repository on GitHub. For this example, I am picking the ultimate boilerplate repo only because it has “ultimate” in its name.

So the first step would be to clone it. Business as usual.

$ git clone

Old school folks would recursively remove .git folders at this point. But not us. Let’s see what git awesomeness we have up our sleeve:

$ git checkout-index -f -a --prefix=/work/webdev/kickstart/

This will create the /work/webdev/kickstart/ folder and copy all the files there, without the .git directory.

Now we can safely navigate to our destination folder

$ cd /work/webdev/kickstart/

and continue with

$ git init

and then add everything to the repo

$ git add .

then just

$ git commit -m "initial setup"

We’re ready to go, and not starting from scratch!
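The steps above can be rehearsed end to end as one script. This is a minimal sketch using throwaway /tmp paths and a locally created stand-in repo instead of a real boilerplate clone (all the names here are placeholders):

```shell
set -e
SRC=/tmp/boilerplate-demo
DEST=/tmp/kickstart-demo/
rm -rf "$SRC" "$DEST"

# stand-in for the cloned boilerplate repository
git init -q "$SRC"
cd "$SRC"
echo "hello" > index.html
git add .
git -c user.email=demo@example.com -c user.name=demo commit -qm "boilerplate"

# export the work tree without the .git directory
# (the --prefix value must end with a slash)
git checkout-index -f -a --prefix="$DEST"

# start fresh history in the new folder
cd "$DEST"
git init -q
git add .
git -c user.email=demo@example.com -c user.name=demo commit -qm "initial setup"
```

The nice part is that git checkout-index creates the destination directories for us, so there is no mkdir step.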

Use Google Sheet as a free online database for prototyping


I really like prototyping, since I can quickly show a clickable version of an idea that’s floating around my head. But the pain of setting up an environment for that is sometimes too expensive and less attractive. Recently I found a cheap way of using a database that I wanted to share.

Google Drive is a great way of keeping a virtual collection of documents. I especially appreciate small touches like the dashboard showing the online status of each service, which makes it easy to see that it’s quite a reliable solution for most needs. Another (less popular) use of Google documents is as an online database that feeds data to other applications. I was very happy to discover that Sheets has quite a bit of support for this, with an exposed API.

An added cool factor is that we can access the data as JSON by adding alt=json to the URL query params. And it gets even better: we can also pass a callback by adding alt=json-in-script&callback=handcraftedCallback to have the data wrapped in a call to handcraftedCallback.
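The two URL shapes can be sketched like this. YOUR_SHEET_KEY and the default worksheet id “od6” are placeholders from the old Sheets feeds API, to be replaced with your own sheet’s values:

```shell
# Build the public feed URL for a published sheet (placeholders, not real values)
KEY=YOUR_SHEET_KEY
BASE="https://spreadsheets.google.com/feeds/list/$KEY/od6/public/values"

# plain JSON response
echo "$BASE?alt=json"
# JSONP response, wrapped in a call to handcraftedCallback
echo "$BASE?alt=json-in-script&callback=handcraftedCallback"

# curl "$BASE?alt=json"   # works once the sheet is published to the web
```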

One basic thing we should note in the setup is permissions.
For this example I’ve set up a harmless spreadsheet to toy around with.
On the first attempt to access it via this URL the response will be an error saying “We’re sorry. This document is not published.”

This means we need to publish the document to the web.

After we access the dialog screen,

then we click “Start publishing”,

then we confirm our choice in the dialog.

Backup code with git archive

I am currently working on a Facebook application using nodejs and other frontend goodies. While that’s all sweet and interesting, localhost can be used to play around after applying certain tricks that I described a while back. Some particular features (more specifically the share button) cannot be tested in a local environment, so I need to place the current code version on a test server and execute it from there.

While I’m coding, the last thing I want to do is to worry about git commits while my functionality might be breaking. I need to quickly see the results of my code changes. So I searched for a way to quickly pack my local code, and came across the beautiful git archive command. As its name says, it allows me to generate an archive of my current code, but it can also just export the files without packing them (it’s called git checkout-index).

I first exported the code as an archive,

g@local $ git archive --format=zip HEAD > ~/Desktop/

Then I uploaded the archive on the testing machine with

g@local $ scp -v ~/Desktop/ deployer@123.x.x.x:.

Next step was to unpack it there,

deployer@remote $ unzip

I then installed its dependencies

deployer@remote $ npm install

Starting the process was easy as pie,

deployer@remote $ npm run dev

I got myself a solid process consisting of just 4 steps, which can easily be the starting point of the production deployment when I get to that point.
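The pack-and-unpack part of those steps can be rehearsed locally. This is a sketch with throwaway /tmp paths and a stand-in repo; in real use the tar stream would travel over scp or ssh instead:

```shell
set -e
# stand-in project repository (paths are placeholders)
rm -rf /tmp/archive-demo /tmp/deploy-demo
git init -q /tmp/archive-demo
cd /tmp/archive-demo
echo '{"name":"demo"}' > package.json
git add .
git -c user.email=demo@example.com -c user.name=demo commit -qm "init"

# pack the current HEAD and unpack it at the "remote" path in one pipe
mkdir -p /tmp/deploy-demo
git archive --format=tar HEAD | tar -x -C /tmp/deploy-demo
```

Note that the unpacked tree contains only the tracked files, no .git directory, which is exactly what we want on the test server.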

For different use-cases where the versioning information is also needed, I find git bundle to be more appropriate.

Best way to learn is to look at the best


Most people I’ve worked with don’t like to learn in an ordered manner from a textbook. They enjoy experimentation more than anything, because it allows them to see how things are moving when change is applied.

And because I’ve been working with React lately, I thought it would be a good idea to see an isomorphic application at its best, so I started to dig into Facebook’s desktop implementation.

After overcoming the disappointment that they didn’t implement it universally, I did discover some gems in their frontend code.


To be continued.

List all globally installed npm modules

Today I needed to have a look at all the npm modules that were installed globally on my machine. I found that npm has a list command that does just that. But, in order to only see the modules, and not all their dependencies, I needed to specify a depth value of 0:

npm list -g --depth=0

├── ampersand@3.0.5
├── bower@1.4.1
├── brunch@1.8.3
├── grunt-cli@0.1.13
├── gulp@3.9.0
├── http-server@0.8.0
├── javascripting@2.0.3
├── mocha@2.3.0
├── n@2.0.1
├── node-gyp@2.0.2
├── npm@2.11.3
├── phantomas@1.11.0
├── react-native-cli@0.1.4
├── react-tools@0.13.3
├── supervisor@0.7.1

The -g switch is the same one that I used when globally installing each module.

If the name of some package doesn’t immediately ring a bell, replacing list with ll can help:

npm ll -g --depth=0

│ /usr/local/lib
├── ampersand@3.0.5
│   CLI tool for generating single page apps a. la.
│   git+
├── bower@1.4.1
│   The browser package manager
│   git+
├── brunch@1.8.3
│   A lightweight approach to building HTML5 applications with emphasis on elegance and simplicity
│   git+
├── grunt-cli@0.1.13
│   The grunt command line interface.
│   git://
├── gulp@3.9.0
│   The streaming build system
│   git+
├── http-server@0.8.0
│   A simple zero-configuration command-line http server
│   git://
├── javascripting@2.0.3
├── phantomas@1.11.0
│   PhantomJS-based web performance metrics collector
│   git://
├── react-native-cli@0.1.4
│   The ReactNative cli tools
├── react-tools@0.13.3
│   A set of complementary tools to React, including the JSX transformer.
│   git+
├── supervisor@0.7.1
│   A supervisor program for running nodejs programs
│   git://
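For scripting on top of this list, npm also has a --parseable flag that prints one absolute path per global package, which is easier to process than the tree view (this assumes npm is on the PATH, and basename is a rough trick that won’t handle scoped package names):

```shell
# one absolute path per globally installed package
npm list -g --depth=0 --parseable

# package names only: drop the first line (the lib dir itself)
npm list -g --depth=0 --parseable | tail -n +2 | xargs -n1 basename
```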

Reattach to a screen session

Over the weekend I was working on a small pet project, and I decided to play with a DigitalOcean droplet as I had some spare credit.

As usual, I was using screen to get everything set up. A sysadmin friend first showed me how to do server screen sharing with it, and I’ve never looked back since. I usually refer to the one-page manual if I forget the keyboard combination.

At some point the internet connection dropped for a few minutes. When I got back online, I was unable to reattach and continue my work.

$ screen -r
There is a screen on:
    28033.pts-0.konf-api    (08/07/15 06:58:36) (Attached)
There is no screen to be resumed.

Somewhere in the man screen instructions I found the magical -D option.

   -d|-D []
        does not start screen, but detaches the elsewhere running screen session. It has the same effect as typing "C-a d" from screen's controlling
        terminal. -D is the equivalent to the power detach key. If no session can be detached, this option is ignored. In combination with
        the -r/-R option more powerful effects can be achieved.

Basically it detaches everything else and allows me to reattach right away. Lovely.

So, I went on to execute with that option enabled, and of course magic happened.

$ screen -D -r '28033.pts-0.konf-api'
[detached from 28033.pts-0.konf-api]

Install node.js on Ubuntu servers


Debian-based distros are not particularly friendly with node.js applications. I found this out the hard way, as I tried to

$ apt-get install node

and noticed that there is zero output when trying to check the node.js version with

$ node -v

Nothing. Nada.

I went on to see what the node command was actually executing. Here’s what it said:

$ which node

$ ll /usr/sbin/node
lrwxrwxrwx 1 root root 9 Oct 29  2012 /usr/sbin/node -> ax25-node*

That can’t be good, I wanted to get node.js, not some creepy ax25.

So I started digging around and found out that README.Debian does provide the proper information if one has time to look around. Also, DigitalOcean has a great writeup for getting node.js properly installed in an Ubuntu environment.

One quick way to do it would be to simply symlink current nodejs into /usr/bin, like this:

$ sudo ln -s /usr/bin/nodejs /usr/bin/node
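To see exactly what that symlink does without touching /usr/bin, it can be reproduced in a sandbox: /tmp/fakebin stands in for /usr/bin here, and a stub script stands in for the real nodejs binary.

```shell
set -e
BIN=/tmp/fakebin
rm -rf "$BIN"
mkdir -p "$BIN"

# stub that answers like a nodejs binary would
printf '#!/bin/sh\necho v0.10.25\n' > "$BIN/nodejs"
chmod +x "$BIN/nodejs"

# same shape as: sudo ln -s /usr/bin/nodejs /usr/bin/node
ln -s "$BIN/nodejs" "$BIN/node"

"$BIN/node" -v   # prints v0.10.25
```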

I didn’t want to take this route, though; I’d rather go back to square one and start over clean.

# Ubuntu names the package nodejs instead of node
$ apt-get install nodejs

# verify version
$ nodejs -v

# install the legacy bridge
$ apt-get install nodejs-legacy

# verify node version
$ node -v

Now that everything was properly linked, I was able to use the pm2 process manager the way the DigitalOcean guide suggested.

Developing a Facebook application: set up local environment


Once I had defined my application, I wanted to set up a development-friendly environment I could use to interact with it, without touching the production version.

Fortunately, Facebook has made this easy by letting developers define test versions, so I added one whose label ends in “localhost” (step 1). It is highly probable that I will need a staging environment as well, so I will just postfix with “staging” when the time comes.

Next, I went to my test version of the application, and opened the “Settings” screen.

There are two important changes to be made here:

  • I had to define the application domain to be “localhost” (step 2)
  • I pointed the site URL to my local web server and port values (step 3). The simple steps needed for this are described separately.

After saving changes, I was able to use Facebook login in my localhost application.

Serve static files locally with the http-server nodejs module

Since I left full-time LAMP development behind, the Apache and MySQL servers are turned off by default on my Mac. I only start them when they are needed. And I didn’t want to deal with Apache just for quick prototyping or local development on static files.

So I went shopping for a very small web server, and found a nodejs module that fits the bill perfectly. I liked http-server, and it took me literally 5 minutes to start using it.

First, I performed a global install, so I can use it in multiple projects.

$ npm install http-server -g

Then I navigated to the folder that I wanted to play with, and started it on a non-standard port:

$ http-server --cors -p 9879

Loading up http://localhost:9879/ in a browser proved it was working perfectly.

The other available options can be seen by executing $ http-server --help, and they are very handy. For example, one can enable HTTPS if needed, or use a certain certificate file.