Reattach to a screen session

Over the weekend I was working on a small pet project, and I decided to play with a DigitalOcean droplet as I had some spare credit.

As usual, I was using screen to get everything set up. A sysadmin friend first showed me how to share a server screen session with it, and I never looked back ever since. I usually refer to the one-page manual if I forget a keyboard combination.

At some point the internet connection dropped for a few minutes. When I got back online, I was unable to reattach and continue my work.

$ screen -r
There is a screen on:
    28033.pts-0.konf-api    (08/07/15 06:58:36) (Attached)
There is no screen to be resumed.

Somewhere in the man screen instructions I found the magical -D option.

   -d|-D [pid.tty.host]
        does not start screen, but detaches the elsewhere running screen session. It has the same effect as typing "C-a d" from screen's
        controlling terminal. -D is the equivalent to the power detach key. If no session can be detached, this option is ignored. In
        combination with the -r/-R option more powerful effects can be achieved.

Basically, -D detaches the session from wherever else it is attached, allowing me to reattach right away. Lovely.

So I ran the command with that option enabled, and of course magic happened.

$ screen -D -r '28033.pts-0.konf-api'
[detached from 28033.pts-0.konf-api]
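
As a side note, screen also offers combined shorthands. A hedged one-liner, assuming a single session exists: -dRR reattaches, detaching the session elsewhere if necessary, and even creates a new one if none exists.

$ screen -dRR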

Install node.js on Ubuntu servers

Debian-based distros are not particularly friendly with node.js applications. I found this out the hard way, as I tried to

$ apt-get install node

and noticed that there was zero output when trying to check the node.js version with

$ node -v
$

Nothing. Nada.

I went on to see what the node command was actually executing. Here’s what it said:

$ which node
/usr/sbin/node

$ ll /usr/sbin/node
lrwxrwxrwx 1 root root 9 Oct 29  2012 /usr/sbin/node -> ax25-node*

That can’t be good: I wanted node.js, not some creepy ax25.

So I started digging around and found out that README.Debian does provide the proper information, if one has time to look around. Also, DigitalOcean has a great writeup on getting node.js properly installed in an Ubuntu environment.

One quick way to do it would be to simply symlink current nodejs into /usr/bin, like this:

$ sudo ln -s /usr/bin/nodejs /usr/bin/node

I didn’t want to take this route, though; I’d rather go back to square one and start over clean.

# Ubuntu names the package nodejs instead of node
$ apt-get install nodejs

# verify version
$ nodejs -v
v0.10.25

# install the legacy bridge
$ apt-get install nodejs-legacy

# verify node version
$ node -v
v0.10.25

Now that everything was properly linked, I was able to use the pm2 process manager in the way the DigitalOcean guide suggested.
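
For reference, a minimal pm2 bootstrap looks something like the sketch below, assuming the application entry point is app.js (a placeholder name):

# install the process manager globally
$ sudo npm install -g pm2

# start the application under pm2 supervision
$ pm2 start app.js

# list what is currently managed
$ pm2 list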

Developing a Facebook application: set up local environment

Once I had defined my application, I wanted to set up a development-friendly environment that I could use to interact with it, without touching the production version.

Fortunately, Facebook has made this easy by letting developers define test versions, so I added one with “localhost” appended to its label (step 1). It is highly probable that I will need a staging environment as well, so I will just use a “staging” postfix when the time comes.

Next, I went to my test version of the application, and opened the “Settings” screen.

There are two important changes to be made here:

  • I had to define the application domain to be “localhost” (step 2)
  • I pointed the site URL to my local web server and port values (step 3). The simple steps needed for this are described separately.

After saving changes, I was able to use Facebook login in my localhost application.

Serve static files locally with the http-server nodejs module

Since I left full-time LAMP development behind me, the Apache and MySQL servers are turned off by default on my Mac; I only start them when they are needed. I also didn’t want to deal with Apache just for quick prototyping or local development on static files.

So I went shopping for a very small web server, and found a nodejs module that fits the bill perfectly: http-server. It took me literally 5 minutes to start using it.

First, I performed a global install, so I could use it in multiple projects.

$ npm install http-server -g

Then I navigated to the folder that I wanted to play with, and started it on a non-standard port:

$ http-server --cors -p 9879

Loading up http://localhost:9879/ in a browser proved it was working perfectly.

The other available options can be seen by executing $ http-server --help, and they are very handy. For example, one can enable https if needed, or point it at a specific certificate file.
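
For instance, something along these lines should serve the same folder over https, assuming a certificate and key pair already exist as cert.pem and key.pem (which also happen to be the module’s default file names):

$ http-server -S -C cert.pem -K key.pem -p 9879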

Easy upgrade the node.js version using n

I hadn’t used node.js for about half a year, and with a new project lined up, I decided to update it today. At the beginning of the process, my version was 0.12.0, as checked with:

$ node -v
v0.12.0

My working environment is OS X, where I had previously attempted to use nvm and had trouble with it. This time I found an excellent node version manager with a clean and friendly UI: n.

The update steps were simple and straightforward; I detail each one below.

Clean the npm cache

This step is optional, but as a developer I really enjoy having full control over what is going on. With this, I made sure everything was clean before I began.

$ sudo npm cache clean -f
Password:
npm WARN using --force I sure hope you know what you are doing.

Install n

I performed a global install of n, the node version manager I found earlier.

$ sudo npm install -g n
/usr/local/bin/n -> /usr/local/lib/node_modules/n/bin/n
n@2.0.1 /usr/local/lib/node_modules/n

Install the desired node version and switch to it

I then asked n to install the latest stable node version.

$ sudo n stable

  install : node-v0.12.7
    mkdir : /usr/local/n/versions/node/0.12.7
    fetch : https://nodejs.org/dist/v0.12.7/node-v0.12.7-darwin-x64.tar.gz
installed : v0.12.7

This also made the switch to the installed version.

Verification

Checking the currently running version shows I’m indeed on the latest one:

$ node -v
v0.12.7

Other usage scenarios

n can also be used to switch to a specific node version. Let’s say I wanted 0.12.5; all I would need to do is specify it as an input parameter:

$ sudo n 0.12.5
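
Another handy trick: running n with no arguments at all brings up an interactive list of the locally installed versions, from which one can be activated with a keystroke.

$ sudo n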

Discovering Brackets, or how I improved my development flow in a few hours

Plain text editor versus full-fledged IDE

At the beginning of my journey as a software developer, someone told me to try a plain text editor to focus on the basics. It later turned out to be great advice, as it came with two immediate benefits. First of all, I learned to type faster because there was no IDE autocomplete to help me. Secondly, after looking up each PHP function in the online manual, I soon knew most of them by heart. In the long run, both brought a great boost to my day-to-day productivity.

After a few years, I gave up Gedit in favor of Zend Studio, which was not Eclipse-based back then, and worked much faster. It taught me to appreciate the debugger when dealing with obscure application issues. Later, PhpStorm convinced me with the price tag, and I used it for a couple of years. Unfortunately, its latest releases started to eat system resources for breakfast.

The quest for a simple, modern editor

Then I came across Brackets and decided to give it a try. I downloaded it from the official website, and was happy to see it worked on the El Capitan 10.11 beta. I immediately liked the minimal interface at my disposal, and was even happier when noticing that there can be only one project open at a given time. Focus, baby!

When I discovered the Live Preview feature I was hooked. No more making edits in the clumsy Chrome Web Inspector and copying them over to the editor! Plus, code completion was there, nothing to install. See for yourself in Lisa’s video.

Useful Brackets extensions

For developers, using version control is a no-brainer. So Brackets Git was the first extension I added. Easy to use even for git beginners, it offers immediate feedback on the status.

As I found I was increasingly using Markdown in my daily tasks, Markdown Preview was a quick win.

Soon after, I came across the excellent Brackets Icons, which makes it easy to spot file types in the navigation pane. It has saved me a few wrong clicks already.

For those who need to edit files on a remote server, there’s a handy SFTP extension. Please do yourself a favor and start using a proper deployment flow instead!

Next steps

I plan on trying the responsive mode, although my current projects do not need it yet.

While writing down these notes, I came across the Todo extension that I want to try. It allows grouping of all project todo items into one handy pane.

Brackets also seems to come with built-in node.js awesomeness, and I will definitely give that a go very soon. Watching the videos made by the Brackets team might teach me a few more tricks as well.

The Software Craftsmanship Pyramid

Oftentimes I have been called a web developer, because I entered the software world through PHP. I had another burden as well: as a girl, I was taken less seriously than my colleagues, and had to work much harder. But I liked what I was doing, and I wanted to do it better, smarter, more efficiently. Many colleagues have helped me become better and showed me a lot of helpful tricks. I learned first-hand about the hurdles that make a software developer’s journey to craftsmanship tedious, and about the strength and perseverance needed to continue. Although there are many paths that can be followed to produce high-quality, easy-to-maintain software, none of them can actually be viewed as standard.

Some people, myself included, learn better when following a structured approach, like a book, and I tried hard to find one that would be less about the tricks of the trade and more about the various aspects of what it means to be a software developer, and what that role means in the greater scheme of things.

As today’s world calls for more programmers than are available, it is my intention to summarize the steps I have followed successfully, parts of which I have recommended to colleagues we have onboarded, and which I believe can inspire others searching for their way.

Everyone can enter the development world, and it’s great to work in a profession so rewarding. Most people get hooked very soon and want to continue, but find it difficult to pick their next step. They need help, and I believe structuring the scattered available information will help them tremendously.

In the past, I have found it easy to present things using a pyramid model, so today I am simply introducing it; a more detailed explanation of each stage will follow separately, to allow for a better understanding.

Notes on PSR-7’s Message Interfaces

The PHP world is currently in the middle of a revolution. As someone who has worked with PHP for more than a decade, I could feel the earthquake that Composer brought to the community. And these days there is an even bigger one happening: PSR-7. HTTP abstractions have traditionally been implemented by each framework, popular or not, in an attempt to simplify the life of the developers using it. The main reason frameworks became so popular was that they allowed people to focus on writing the actual behaviour of their application, the magic bit that generates a response based on the data in the incoming request. While this is great for developers in the short term, it does lock them into using that framework in the long run.

PSR-7: Message Interfaces

The PHP Framework Interoperability Group saw this problem, so they decided to work on a higher-level abstraction of the items detailed in the HTTP “Message Syntax and Routing” and “Semantics and Content” RFCs, as well as the URI syntax one, and so PSR-7 was born. It focuses on supplying descriptive interfaces for the Request and Response parts, with an emphasis on using streams for handling message bodies, in order to address critical performance issues from the very beginning. I consider this approach very powerful and highly beneficial in the long term, as it handles problematic scenarios very elegantly, at the specification level. It definitely accomplishes the goal of focusing on “practical applications and usability”.
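
To give a feel for the interfaces, here is a minimal sketch of manipulating a PSR-7 response; the withGreeting function is my own illustration, and the concrete Response implementation is whichever one you pick, but the methods shown belong to the published interfaces.

use Psr\Http\Message\ResponseInterface;

// PSR-7 messages are immutable: each with*() call returns a new
// instance. Message bodies are streams, which keeps even large
// payloads cheap to handle.
function withGreeting(ResponseInterface $response)
{
    $response->getBody()->write('Hello, PSR-7!');

    return $response
        ->withStatus(200)
        ->withHeader('Content-Type', 'text/plain');
}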

Middleware

The term middleware has been part of our geek lingo for a long time; in fact, it has been in use since 1968, and with the advance of software development techniques its meaning has slightly shifted. In the context of PSR-7, it is used to designate “everything that happens between request and response”. It is the application-specific behaviour that interprets the request, possibly enhances it, and acts upon it. If you are not familiar with some of the community efforts, I strongly encourage you to have a look at them.

Let’s go into a bit more detail with two middleware examples that you are already familiar with. When a user makes a request, it is normal that you:
* check that their credentials are valid (authentication)
* check that they are allowed to perform the request (authorization)

Following the MVC pattern, you already have a controller in place, and a method inside it that responds to the request. A common approach is to implement both authentication and authorization inside the controller action, after it has been routed, like this:

namespace Sample;

class DemoController
{
    public function action(Request $request)
    {
        // authentication check: $authenticated is assumed to have
        // been determined earlier (e.g. from the session)
        if (! $authenticated) {
            // 401 is the status for missing or failed authentication
            return new Response('Please authenticate first', 401);
        }

        // authorization step: $authorized assumed determined earlier
        if (! $authorized) {
            // 403 tells an authenticated user the action is forbidden
            return new Response('Not authorized', 403);
        }

        // application logic
        $results = 'results';

        return new Response($results, 200);
    }
}

Let’s think for a bit about why this is not such a good idea:

  • while not impossible, it is difficult to reuse the authentication and authorization code in a different part of the application
  • testing the authentication and authorization becomes part of testing the action, when in fact it would better be extracted to a dedicated place
  • it leads to the fat controllers antipattern
  • the code is not easy to read, as it draws the reader’s attention away from the actual application behaviour

How would this be done better? By having a dedicated authentication middleware that performs the related tasks and is piped properly into the application execution chain. The solution is not completely new, but it is now more eloquent and easier to grasp by newcomers to the codebase.
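
To make this concrete, below is a minimal sketch of what such a dedicated authentication middleware could look like, written against the double-pass (request, response, next) signature popularized by implementations such as Stratigility. The class name and the isAuthenticated() helper are illustrative, not taken from any specific library.

namespace Sample;

class AuthenticationMiddleware
{
    public function __invoke(Request $request, Response $response, callable $next)
    {
        // short-circuit the pipe when credentials are missing or invalid
        if (! $this->isAuthenticated($request)) {
            return new Response('Please authenticate first', 401);
        }

        // credentials are fine: hand control to the next middleware
        return $next($request, $response);
    }

    private function isAuthenticated(Request $request)
    {
        // placeholder for the real credential check, e.g. inspecting
        // a session cookie or an Authorization header
        return false;
    }
}

An authorization middleware would look much the same, returning a 403 instead, and the two would be piped in order before the controller action, leaving the action with only the application logic.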

Currently available middleware implementations are:
* Matthew’s Stratigility
* Paul M Jones’s Pipeline
* Stack

I encourage you to check out each of them for concrete code samples of how the middlewares can be piped, as there is a lot to learn from those codebases.

Some conclusions

By using the abstractions of the HTTP messages, we are making one crucial step towards being framework agnostic. We can be even more granular in our implementations, and use the micro-services approach with more confidence than ever before, because we will be able to reuse them when preparing a scaled approach to our application. The benefits of PSR-7 are great, and the best proof is the traction it has already gained. Symfony has made the process of using it today a breeze. Zend Framework 3 has had it on the roadmap for a few months already, while Matthew not only blogs about it, but also supplies code snippets to make it as easy as possible for people to understand the mechanisms. After all, there will be a lot of ways in which various bits of code can be put together, and I suspect there will be a frenzy of new frameworks in the short term.

Software branching strategies: A git overview | Part 1: gitworkflows

We will start by presenting a working example, which will be used as a basis to illustrate the various git branching models from a practical point of view. In this first part of the series, we’ll detail the rules and practices of the gitworkflows branching model. Towards the end of the series we will outline several common approaches that have emerged as best practices in recent years.

An important note: we will approach the branching strategies from several distinct perspectives. The most frequently used is the developer’s, and sometimes it is the only one considered. But we also need to place ourselves in the bug hunter’s shoes, to account for the different mindset needed to identify when a certain change was introduced. Moving beyond the people creating and changing the software, we will also account for the packagist role, responsible for assembling the software into the package that will be deployed in a specific environment.

Working example: AcmeGreen

Let’s assume we are working on a new greenfield project, named AcmeGreen. Our internal project management tool is Jira, and the project codename is AG.

When we first created the repository, we started with the master branch. Each of our initial features received its own branch. To find them easily, our team reused a naming convention they had used before, <project>-<4DigitsTaskNumber>-<details>, naming branches after the Jira tasks they are attached to. So in our system there are currently some extremely recent branches, all originating from master:

  • AG-0001-initial-setup prepares the project codebase
  • AG-0002-use-cases, on which the BA and product owner are working to define the core functionality in the form of use cases

Strategy: gitworkflows

The manual page for gitworkflows[1] presents a set of rules and tries to motivate each of them. Let’s see what they are, and use our working example as illustration.

  • preparation of the upcoming maintenance release is done on maint
  • master holds the work in progress that is preparing the future release
  • next is a stability branch for testing items that will be promoted to master
  • pu is a throw-away integration branch; the name stands for “proposed updates”

Rule: Topic Branches

Make a side branch for every topic (feature, bugfix, …).
Fork it off at the oldest integration branch that you will eventually want to merge it into.

To prepare the ground for our team, we will open an AG-0001-initial-setup branch off master. This is the place to add initial gitignore rules, define the basic coding standards, etc.

$ git checkout master # start from the master branch
$ git checkout -b AG-0001-initial-setup # create and switch to the topic branch

# [...] relevant commits in between

$ git push origin AG-0001-initial-setup # publish our work for the teammates benefit

Rule: Merge Upwards

Always commit your fixes to the oldest supported branch that requires them.
Then (periodically) merge the integration branches upwards into each other.

The AG-0001-initial-setup branch is now a few commits ahead. You can have a quick look at them by executing

$ git log --pretty=oneline

9cf571c1c1225a5fecf61c43981048fb16193860 setup gitignore rules
3c1fef41a9ca5d1b24f767404f9bfd52affab90c naming convention for short-lived branches
0c1fe345edbebde03f217e7c67d5f67626f2ca7b explain long-term branches

Assuming this is all we want to do on the initial setup branch, it is time to merge it upwards. What is our destination branch in this case? Since there is no release yet, we are targeting the next release, so the master branch.

Let’s put on our packagist hat for a second. How will we know what went into a release and what didn’t, if we grab all those 3 commits above individually? Fortunately, there’s an easy way of assembling them together as one single entity on the integration branch, known as squashing[2]; it is an extremely useful method to maintain a clean commit history.

$ git checkout master # the squash merge happens on the destination branch
$ git merge --squash AG-0001-initial-setup
$ git commit -v
$ git push origin master

The result can be viewed online; note that I have left the squash commit message untouched, specifically so the reader can get a feel for the defaults.

At this point, let’s remove the feature branch, as we are done with it.

$ git push origin --delete AG-0001-initial-setup

The maintainers of other topic branches are now able to (and should) grab all changes from the integration branches early and often, in order to avoid solving complex conflicts later.
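
For example, wearing the AG-0002-use-cases maintainer hat, such a periodic sync is just a merge from the integration branch (a sketch, assuming no conflicting edits):

$ git checkout AG-0002-use-cases
$ git merge master # pick up the freshly squashed setup work early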

Prepare our first release

We have reached an important milestone: our initial setup is in fact valid, so as a team we decide that our work so far should be referred to as “release 0.0.1”.

Check that master is a superset of maint by executing

$ git log master..maint

and validating that it lists no commits, meaning maint contains nothing that is missing from master.

We can now tag the release:

$ git tag -m "initial setup 0.0.1" 0.0.1 master
$ git push --tags

Once the release is done from a coding perspective, we can hand it over to the packagist. The next step is to tidy up the maintenance branches to reflect the new state of affairs.

Since maint reflects the previous release, this would be a good time to spin off a new maint-FORMER-RELEASE branch, to be able to supply quick fixes there. In our case this is not needed, as we don’t have a previous release to refer to.

What is needed, though, is to bring all the new code from master into maint, and we do that with:

$ git checkout maint
$ git merge --ff-only master

Pull requests

While we were busy setting the 0.0.1 ground and making it happen, our colleagues were really busy carving out the use cases, and their branch advanced quite nicely. This time, they need their work to be validated by teammates, so they decide to create a pull request to showcase it.

Our example below will use the GitHub UI for the visual part. Prepare the pull request by clicking the New pull request button.

Supply a relevant title and description.

Teamwork

When we are happy with the current state, pressing the green Merge pull request button brings the work into the main branch, in our case master.

A more complex release

For our first release, we did not have anything to keep maintaining, so the steps we needed to execute were fewer and simpler. This time it will take a bit more attention from our end.

# always make sure we have the latest and greatest
$ git pull origin master

# check master is superset of maint
$ git log master..maint

# tag and publish
$ git tag -m "use cases described" 0.0.2 master
$ git push --tags

What is different this time? We need to ensure we can easily fix issues in 0.0.1 that get reported to us after we have created 0.0.2. The way to achieve this is to spin off a maintenance branch for 0.0.1, and use that branch for fixes and tags.

$ git branch maint-0.0.1 maint
$ git push origin maint-0.0.1

Now it is safe to update maint to contain the new release.

$ git checkout maint
$ git merge --ff-only master
$ git push origin maint

Conclusion

In this first part of the series, we started with a working example, then detailed a few rules of the gitworkflows branching strategy and illustrated them with real commands. Next we showed a few simple and effective rules for working with branches, then prepared a couple of releases and accounted for the specifics of each.

Part two of the series will showcase git flow, another popular branching strategy.


  [1] https://www.kernel.org/pub/software/scm/git/docs/gitworkflows.html
  [2] http://www.git-scm.com/docs/git-merge