All opinions expressed are those of the authors and not necessarily those of our sponsors or our affiliates.

published by Eugenia on 2015-11-24 05:41:01 in the "General" category
Eugenia Loli-Queru

These are the principles that I have built my diet around. The only changes I’ve made are that I do eat beans (as per my Mediterranean ancestry), and I don’t go too low on carbs (due to thyroid issues). Lots of veggies instead, and seafood. I’m working on getting closer to the lifestyle points too.


published by Eugenia on 2015-11-13 20:27:10 in the "General" category
Eugenia Loli-Queru

Something very interesting is happening right now on the Paleo turf. The Paleo poster boy, Robb Wolf, got into an online shouting match with Dr Jack Kruse, the “quantum epigenetics” poster boy. Robb calls Jack a quasi-mystical fraud, while Jack simply asks Robb to look at the evidence and research before he opens his mouth.

Robb is the big guy here, followed by many thousands, and the author of the Paleo “bible”. Often, the 4-5 well-known Paleo gurus would go on an all-out attack against the medical establishment, arguing how closed-minded that establishment is for not agreeing with their points of view (e.g. that grains & pseudograins are all very bad for you, vegetable seed oils are bad, legumes are bad, dairy is bad, etc.). They basically call out the establishment for not looking hard enough at the evidence that long-term health “starts with food”.

However, as with any system, after a while it gets cemented. Same with the Paleo system. While it has somewhat evolved in the last 3 years, becoming less hard-core against fermented dairy or white rice, it still holds its basic truths cemented, and no one seems to want to research further. The various gurus have a reputation to protect, and products to sell now, so they need to stay true to what they originally preached.

So, when someone like Dr Jack Kruse comes along to shake their castle by claiming that “it starts with light”, they themselves become the same as the closed-minded medical establishment they hate. They react extremely violently against Kruse, without bothering to read his evidence or even trying to understand his logic. They try to stop the carpet from being pulled out from under them (which is probably inevitable as science moves on).

It’s funny, really. They fell into the same trap as the medical establishment did.

As for Dr Kruse, he shares some blame for the situation too: the guy can’t write properly. The reason the Paleo gurus are “gurus” is that they know how to communicate. They can write in a very understandable, friendly way, so people fall in behind them easily. Jack, on the other hand, feels like he has a super-computer brain that is connected to the outside world via a 56k modem. It also doesn’t help that he’s arrogant, and just not very likable as a person.

But that doesn’t mean that what he argues is wrong. It is my feeling that he’s the one who’s on the right path towards a deeper truth, but he has this extreme difficulty getting the information out properly.

Basically, what Dr Kruse is claiming is that we’re quantum machines. For that machine to work, we need a lot of natural UVB light (in the AM) and no blue light at night. Basically, he’s arguing that proper circadian rhythms, and spending a lot of time outdoors, can have a bigger effect on long-term health than “simply cutting down grains”. In terms of food, he argues that the biggest change one should make is to add more seafood to their diet, because the iodine/DHA help with transporting energy in the mitochondria.

This could explain why Okinawans used to live to be over 120 years old, even though they ate a few grains and lots of soy (both anathema to the Paleo doctrine). It’s because they would also eat ungodly amounts of seafood (especially seaweed), and they would work outdoors in their gardens all the time.

Another thing he argues is that your diet should vary depending on location and time of year. For example, most people in the Western world should eat enough carbs in the summer, but be near-ketogenic in the winter. That’s how we evolved anyway. Also, people who live at the equator can eat as many carbs as they like (e.g. exotic fruits) and not get fat, because they expose themselves to a lot of UVB, and that balances things out in the “machine”. People in the far North (or far South), though, need to practice cold thermogenesis, cut down on the carbs, and eat more seafood in order to be healthy in those harsh environments (locations we migrated to out of Africa; we are not fully evolved to live there, so some food and lifestyle changes are required to be healthy in the North).

I can see how what he says can sound like mumbo-jumbo; however, what he’s arguing makes sense to me, and he does have a basis in facts. Just not Western facts. A lot of the research he cites on his blog is from Russian research papers. Some of that research has been done by UK and US scientists, but not all of it. He has gone to great lengths to get access to these papers and to have them translated.

So, he’s definitely controversial. But I really think he’s on to something.

published by (Josh Williams) on 2015-11-13 14:44:00 in the "OpenSSL" category
A client recently came to me with an ongoing mystery: A remote Postgres replica needed to be replaced, but pg_basebackup repeatedly failed. It would stop partway through every time, reporting something along the lines of:

pg_basebackup: could not read COPY data: SSL error: decryption failed or bad record mac

The first hunch we had was to turn off SSL renegotiation, as that isn't supported in some OpenSSL versions. By default it renegotiates keys after 512MB of traffic, and setting ssl_renegotiation_limit to 0 in postgresql.conf disables it. That helped pg_basebackup get much further along, but they were still seeing the process bail out before completion.
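
If you want to try the same workaround, a minimal sketch follows; the data directory path is an assumption for a stock RHEL 6 / Postgres 9.2 install, so adjust it for your environment.

## Disable SSL renegotiation, then reload the configuration
$ echo "ssl_renegotiation_limit = 0" | sudo tee -a /var/lib/pgsql/9.2/data/postgresql.conf
$ sudo -u postgres psql -c "SELECT pg_reload_conf();"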

The client's Chef has a strange habit of removing my ssh key from the database master, so while that was being fixed I connected in and took a look at the replica. Two pg_basebackup runs later, a pattern started to emerge:
$ du -s 9.2/data.test*
67097452        9.2/data.test
67097428        9.2/data.test2
While nearly identical in size, those numbers are also suspiciously close to 64GB. I like round numbers: when a problem happens close to one, that's often a pretty good tell of some boundary or limit. On a hunch that it wasn't a coincidence, I checked around for any similar references and found a recent openssl package bug report.

RHEL 6, check. SSL connection, check. Failure at 64 GiB, check. And lastly, a connection with psql confirmed AES-GCM:
SSL connection (cipher: DHE-RSA-AES256-GCM-SHA384, bits: 256)

Once the Postgres service could be restarted to load in the updated OpenSSL library, the base backup process completed without issue.
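
For reference, the remediation boiled down to a package update plus a restart. The package and service names below are assumptions for a RHEL 6 box running Postgres 9.2; substitute your own.

$ sudo yum update openssl
$ sudo service postgresql-9.2 restart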

Remember, keep those packages updated!

published by Eugenia on 2015-11-10 07:00:55 in the "General" category
Eugenia Loli-Queru

I’m full on in my mostly-vegetarian (“Pegan”) diet now. I believe that all popular diets have something to teach; otherwise, their followers wouldn’t swear by their efficacy. It’s just that no single diet has all its facts right. So, after years of researching the matter, I have finally found what works best for me. This is how I have dissected each diet and what I take from each:

– Paleo agreement: no grains (except a bit of white rice), no sugar, no seed oils, no processed foods.
– Where Paleo falters: not allowing beans or dairy, and putting too much emphasis on meat. I now allow beans (except soy), fermented dairy, and mostly fish rather than meat (I eat wild seafood 3 times a week, and land meat only every Sunday — just as my own Greek ancestors did).

– Veg*n agreement: Veggies are good for you. Out of my 21 meals in the week, 17 are vegetarian and/or vegan.
– Where Veg*n falters: The right fish/meat can also be good for you, as long as you don’t over-indulge in it.

– Raw vegan agreement: Raw foods are really good for you.
– Where raw vegan falters: Raw foods ALL the time is not that good for you. We owe our big brains to cooked food, in part. I’d say 50% raw is a good balance.

Please note that my choices have nothing to do with animal ethics. For me, the choice of diet is only about MY health. I don’t see this as selfish, because I’ve been too sick over the years to be able to give priority to others (humans, or animals). Having said that, I do choose pastured/wild animals only, and I mostly try to consume the parts of the animal that are highly nutritious and are NOT the parts the animals were killed for (e.g. bones, liver, heart — the parts that Americans throw away).

I find all of this to be a good compromise.

published by (Greg Sabino Mullane) on 2015-11-10 04:23:00 in the "mediawiki" category

I was recently tasked with resurrecting an ancient wiki. In this case, a wiki last updated in 2005, running MediaWiki version 1.5.2, which needed to be transformed into something more modern (in this case, version 1.25.3). The old settings and extensions were not important, but we did want to preserve any content that had been created.

The items available to me were a tarball of the mediawiki directory (including the LocalSettings.php file), and a MySQL dump of the wiki database. To import the items to the new wiki (which already had been created and was gathering content), an XML dump needed to be generated. MediaWiki has two simple command-line scripts to export and import your wiki, named dumpBackup.php and importDump.php. So it was simply a matter of getting the wiki up and running enough to run dumpBackup.php.

My first thought was to simply bring the wiki up as it was - all the files were in place, after all, and specifically designed to read the old version of the schema. (Because the database schema changes over time, newer MediaWikis cannot run against older database dumps.) So I unpacked the MediaWiki directory, and prepared to resurrect the database.

Rather than MySQL, the distro I was using defaulted to using the freer and arguably better MariaDB, which installed painlessly.

## Create a quick dummy database:
$ echo 'create database footest' | sudo mysql

## Install the 1.5.2 MediaWiki database into it:
$ cat mysql-acme-wiki.sql | sudo mysql footest

## Sanity test as the output of the above commands is very minimal:
$ echo 'select count(*) from revision' | sudo mysql footest

Success! The MariaDB instance was easily able to parse and load the old MySQL file. The next step was to unpack the old 1.5.2 mediawiki directory into Apache's docroot, adjust the LocalSettings.php file to point to the newly created database, and try and access the wiki. Once all that was done, however, both the browser and the command-line scripts spat out the same error:

Parse error: syntax error, unexpected 'Namespace' (T_NAMESPACE), 
  expecting identifier (T_STRING) in 
  /var/www/html/wiki/includes/Namespace.php on line 52

What is this about? Turns out that some years ago, someone added a class to MediaWiki with the terrible name of "Namespace". Years later, PHP finally caved to user demands and added some non-optimal support for namespaces, which means that (surprise) "namespace" is now a reserved word. In short, older versions of MediaWiki cannot run with modern (5.3.0 or greater) versions of PHP. Amusingly, a web search for this error on DuckDuckGo revealed not only many people asking about this error and/or offering solutions, but also many results that were actual wikis which are currently not working! Thus, their wiki was working fine one moment, and then PHP was (probably automatically) upgraded, and now the wiki is dead. But DuckDuckGo is happy to show you the wiki and its now-single page of output, the error above. :)

There are three groups to blame for this sad situation, as well as three obvious solutions to the problem. The first group to share the blame, and the most culpable, is the MediaWiki developers who chose the word "Namespace" as a class name. As PHP has historically had poor to non-existent support for packages, namespaces, and scoping, it is vital that all your PHP variables, class names, etc. are as unique as possible. To that end, the name of the class was changed at some point to "MWNamespace" - but the damage had been done. The second group to share the blame is the PHP developers, both for not having namespace support for so long, and for making it a reserved word knowing full well that one of the poster children for "mature" PHP apps, MediaWiki, was using "Namespace". Still, we cannot blame them too much for picking what is a pretty obvious word choice. The third group to blame is the owners of all those wikis out there that are suffering that syntax error. They ought to be repairing their wikis. The fixes are pretty simple, which leads us to the three solutions to the problem.

MediaWiki's cool install image

The quickest (and arguably worst) solution is to downgrade PHP to something older than 5.3. At that point, the wiki will probably work again. Unless it's a museum (static) wiki, and you do not intend to upgrade anything on the server ever again, this solution will not work long term. The second solution is to upgrade your MediaWiki! The upgrade process is actually very robust and works well even for very old versions of MediaWiki (as we shall see below). The third solution is to make some quick edits to the code to replace all uses of "Namespace" with "MWNamespace". Not a good solution, but ideal when you just need to get the wiki up and running. Thus, it's the solution I tried for the original problem.
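
For that third route, a rough sketch of the rename is below. The paths are assumptions, and since "Namespace" also appears in strings and comments that must not be touched, review every change by hand before reloading the wiki.

$ cd /var/www/html/wiki
## See what you are up against first:
$ grep -rl 'Namespace' includes/ | less
## Then rename the static calls and the class itself, keeping .bak copies:
$ sed -i.bak 's/Namespace::/MWNamespace::/g' includes/*.php
$ sed -i.bak 's/class Namespace/class MWNamespace/' includes/Namespace.php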

However, once I solved the Namespace problem by renaming to MWNamespace, some other problems popped up. I will not run through them here - although they were small and quickly solved, it began to feel like a neverending whack-a-mole game, and I decided to cut the Gordian knot with a completely different approach.

As mentioned, MediaWiki has an upgrade process, which means that you can install the software and it will, in theory, transform your database schema and data to the new version. However, version 1.5 of MediaWiki was released in October 2005, almost exactly 10 years ago from the current release (1.25.3 as of this writing). Ten years is a really, really long time on the Internet. Could MediaWiki really convert something that old? (spoilers: yes!). Only one way to find out. First, I prepared the old database for the upgrade. Note that all of this was done on a private local machine where security was not an issue.

## As before, install mariadb and import into the 'footest' database
$ echo 'create database footest' | sudo mysql test
$ cat mysql-acme-wiki.sql | sudo mysql footest
$ echo "set password for 'root'@'localhost' = password('foobar')" | sudo mysql test

Next, I grabbed the latest version of MediaWiki, verified it, put it in place, and started up the webserver:

$ wget
$ wget

$ gpg --verify mediawiki-1.25.3.tar.gz.sig 
gpg: assuming signed data in `mediawiki-1.25.3.tar.gz'
gpg: Signature made Fri 16 Oct 2015 01:09:35 PM EDT using RSA key ID 23107F8A
gpg: Good signature from "Chad Horohoe "
gpg:                 aka " "
gpg:                 aka "Chad Horohoe (Personal e-mail) "
gpg:                 aka "Chad Horohoe (Alias for existing email) "
## Chad's cool. Ignore the below.
gpg: WARNING: This key is not certified with a trusted signature!
gpg:          There is no indication that the signature belongs to the owner.
Primary key fingerprint: 41B2 ABE8 17AD D3E5 2BDA  946F 72BC 1C5D 2310 7F8A

$ tar xvfz mediawiki-1.25.3.tar.gz
$ mv mediawiki-1.25.3 /var/www/html/
$ cd /var/www/html/mediawiki-1.25.3
## Because "composer" is a really terrible idea:
$ git clone 
$ sudo service httpd start

Now, we can call up the web page to install MediaWiki.

  • Visit http://localhost/mediawiki-1.25.3, see the familiar yellow flower
  • Click "set up the wiki"
  • Click next until you find "Database name", and set to "footest"
  • Set the "Database password:" to "foobar"
  • Aha! Look what shows up: "Upgrade existing installation" and "There are MediaWiki tables in this database. To upgrade them to MediaWiki 1.25.3, click Continue"

It worked! Next messages are: "Upgrade complete. You can now start using your wiki. If you want to regenerate your LocalSettings.php file, click the button below. This is not recommended unless you are having problems with your wiki." That message is a little misleading. You almost certainly *do* want to generate a new LocalSettings.php file when doing an upgrade like this. So say yes, leave the database choices as they are, and name your wiki something easily greppable like "ABCD". Create an admin account, save the generated LocalSettings.php file, and move it to your mediawiki directory.

Voila! At this point, we can do what we came here for: generate an XML dump of the wiki content in the database, so we can import it somewhere else. We only wanted the actual content, and did not want to worry about the history of the pages, so the command was:

$ php maintenance/dumpBackup.php --current >

It ran without a hitch. However, close examination showed that it had an amazing amount of unwanted stuff from the "MediaWiki:" namespace. While there are probably some clever solutions that could be devised to cut them out of the XML file (either on export, import, or in between), sometimes quick beats clever, and I simply opened the file in an editor and removed all the "page" sections with a title beginning with "MediaWiki:". Finally, the file was shipped to the production wiki running 1.25.3, and the old content was added in a snap:
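
If hand-editing isn't your thing, one scripted alternative is sketched below. It assumes the dump keeps each page's <page>, <title>, and </page> tags on their own lines, and the file names here are placeholders:

$ awk '/<page>/ { buf=""; inpage=1 }
       inpage   { buf = buf $0 "\n"; if (/<\/page>/) { if (buf !~ /<title>MediaWiki:/) printf "%s", buf; inpage=0 }; next }
                { print }' wiki-dump.xml > wiki-dump-trimmed.xml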

$ php maintenance/importDump.php

The script will recommend rebuilding the "Recent changes" page by running rebuildrecentchanges.php (can we get consistentCaps please MW devs?). However, this data is at least 10 years old, and Recent changes only goes back 90 days by default in version 1.25.3 (and even shorter in previous versions). So, one final step:

## 20 years should be sufficient
$ echo '$wgRCMaxAge = 20 * 365 * 24 * 3600;' >> LocalSettings.php
$ php maintenance/rebuildrecentchanges.php

Voila! All of the data from this ancient wiki is now in place on a modern wiki!

published by (Dave Jenkins) on 2015-11-09 18:01:00 in the "Liquid Galaxy" category
The National Congress of Industrial Heritage of Japan (NCoIH) recently deployed a Liquid Galaxy at UNESCO Headquarters in Paris, France. The display showed several locations throughout southern Japan that were key to her rapid industrialization in the late 19th and early 20th century. Over the span of 30 years, Japan went from an agrarian society dominated by Samurai still wearing swords in public to an industrial powerhouse, forging steel and building ships that would eventually form a world-class navy and an industrial base that still leads many global industries.

End Point assisted by supplying the servers, frame, and display hardware for this temporary installation. The NCoIH supplied panoramic photos, historical records, and location information. Together using our Roscoe Content Management Application, we built out presentations that guided the viewer through several storylines for each location: viewers could see the early periods of Trial & Error and then later industrial mastery, or could view the locations by technology: coal mining, shipbuilding, and steel making. The touchscreen interface was custom-designed to allow a self-exploration among these storylines, and also showed thumbnail images of each scene in the presentations that, when touched, brought the viewer directly to that location and showed a short explanatory text, historical photos, as well as transitioning directly into Google Street View to show the preserved site.

From a technical point of view, End Point debuted several new features with this deployment:

  • New scene control and editing functionalities in the Roscoe Content Management System
  • A new touchscreen interface that shows presentations and scenes within a presentation in a compact, clean layout
  • A new Street View interface that allows the "pinch and zoom" map navigation that we all expect from our smart phones and tablets
  • Debut of the new ROS-based operating system, including new ROS-nodes that can control Google Earth, Street View, panoramic content viewers, browser windows, and other interfaces
  • Deployment of some very nice NEC professional-grade displays
Overall, the exhibit was a great success. Several diplomats from European, African, Asian, and American countries came to the display, explored the sites, and expressed their wonderment at the platform's ability to bring a given location and history into such vivid detail. Japan recently won recognition for these sites from the overall UNESCO governing body, and this exhibit was a chance to show those locations back to the UNESCO delegates.

From here, the Liquid Galaxy will be shipped to Japan where it will be installed permanently at a regional museum, hopefully to be joined by a whole chain of Liquid Galaxy platforms throughout Japan showing her rich history and heritage to museum visitors.

published by (Patrick Lewis) on 2015-11-06 18:28:00 in the "email" category
Organizing and dealing with incoming email can be tedious, but with IMAPFilter's simple configuration syntax you can automate any action that you might want to perform on an email and focus your attention on the messages that are most important to you.

Most desktop and mobile email clients include support for rules or filters to deal with incoming mail messages but I was interested in finding a client-agnostic solution that could run in the background, processing incoming messages before they ever reached my phone, tablet or laptop. Configuring a set of rules in a desktop email client isn't as useful when you might also be checking your mail from a web interface or mobile client; either you need to leave your desktop client running 24/7 or end up with an unfiltered mailbox on your other devices.

I've configured IMAPFilter to run on my home Linux server and it's doing a great job of processing my incoming mail, automatically sorting things like newsletters and automated Git commit messages into separate mailboxes and reserving my inbox for higher priority incoming mail.

IMAPFilter is available in most package managers and easily configured with a single ~/.imapfilter/config.lua file. A helpful example config.lua is available in IMAPFilter's GitHub repository and is what I used as the basis for my personal configuration.
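
To keep it running unattended, one simple approach (shown only as a sketch; the paths are placeholders) is a cron entry that re-runs the filters every few minutes:

## crontab entry (add with: crontab -e)
*/5 * * * * /usr/bin/imapfilter -c /home/user/.imapfilter/config.lua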

A few of my favorite IMAPFilter rules (where 'endpoint' is configured as my work IMAP account):

-- Mark daily timesheet reports as read, move them into a Timesheets archive mailbox
timesheets = endpoint['INBOX']:contain_from('')
timesheets:mark_seen()
timesheets:move_messages(endpoint['Timesheets'])  -- archive mailbox name is an assumption

-- Sort newsletters into newsletter-specific mailboxes
jsweekly = endpoint['INBOX']:contain_from('')
jsweekly:move_messages(endpoint['Newsletters/JavaScript Weekly'])

hn = endpoint['INBOX']:contain_from('')
hn:move_messages(endpoint['Newsletters/Hacker Newsletter'])

Note that IMAPFilter will create missing mailboxes when running 'move_messages', so you don't need to set those up ahead of time. These are basic examples but the sample config.lua is a good source of other filter ideas, including combining messages matching multiple criteria into a single result set.

In addition to these basic rules, IMAPFilter also supports more advanced configurations including the ability to perform actions on messages based on the results of passing their content through an external command. This opens up possibilities like performing your own local spam filtering by sending each message through SpamAssassin and moving messages into spam mailboxes based on the exit codes returned by spamc. As of this writing I'm still in the process of training SpamAssassin to reliably recognize spam vs. ham but hope to integrate its spam detection into my own IMAPFilter configuration soon.

published by (Brian Zenone) on 2015-11-05 15:00:00 in the "Biennale" category

If there is anyone who doesn't know about the incredible collections of art that the Google Cultural Institute has put together, I would urge them to visit and be overwhelmed by their indoor and outdoor Street View tours of some of the world's greatest museums. Along these same lines, the Cultural Institute recently finished doing a Street View capture of the interior of 70 pavilions representing 80 countries of the Biennale Arte 2015, in Venice, Italy. We, at End Point, were lucky enough to be asked to come along for the ride: Google decided that not only would this Street View version of the Biennale be added to the Cultural Institute's collection, but that they would install a Liquid Galaxy at the Biennale headquarters, at Ca' Giustinian on the Grand Canal, where visitors can actually use the Liquid Galaxy to navigate through the installations. Since the pavilions close in November 2015, and the Galaxy is slated to remain open until the end of January 2016, this will permit art lovers who missed the Biennale to experience it in a way that is astoundingly firsthand.

End Point basically faced two challenges during the Liquid Galaxy Installations for the Cultural Institute. The first challenge was to develop a custom touch screen that would allow users to easily navigate/choose among the many pavilions. Additionally, wanting to mirror the way the Google Cultural Institute presents content, both online, as well as on the wall at their Paris office, we decided to add a swipe-able thumbnail runway to the touch screen map which would appear once a given pavilion was chosen.

As we took on this project, it became evident to our R&D team that ordinary Street View wasn't really the ideal platform for indoor pavilion navigation because of the sheer size and scope of the pavilions. For this reason, our team decided that a ROS-based spherical Street View would provide a much smoother navigating experience. The new Street View viewer draws Street View tiles inside a WebGL sphere. This is a dramatic performance and visual enhancement over the old Maps API based viewer, and can now support spherical projection, hardware acceleration, and seamless panning. For a user in the multi-screen Liquid Galaxy setting, this means, for the first time, being able to roll the view vertically as well as horizontally, and zoom in and out, with dramatically improved frame rates. The result was such a success that we will be rolling out this new Street View to our entire fleet.

The event itself consisted of two parts: at noon, Luisella Mazza, Google's Head of Country Operations at the Cultural Institute, gave a presentation to the international press; as a result, we have already seen coverage emerge in ANSA, L'Arena, and more. This was followed by a 6PM closed-door presentation to the Aspen Institute.

Using the Liquid Galaxy and other supports from the exhibition, Luisella spoke at length about the role of culture in what Google refers to as the "digital transformation".

The Aspen Institute is very engaged with these questions of "whitherto", and Luisella's presentation was followed by a long, and lively, round table discussion on the subject.

We were challenged to do something cool here and we came through in a big way: our touchscreen design and functionality are the stuff of real creative agency work, and meeting the technical challenge of making Street View perform in a new and enhanced way not only made for one very happy client, but is the kind of technical breakthrough that we all dream of. And how great that we got to do it all in Venice and be at the center of the action!

published by (Ramkumar Kuppuchamy) on 2015-11-04 20:25:00 in the "bash" category
Here are some of the unix command line tools which we feel make our hands faster and lives easier. Let's go through them in this post and make sure to leave a comment with your favourite!

1. Find the command that you are unaware of

In many situations we need to perform a command line operation but we might not know the right utility to run. The apropos command searches for the given keyword in the short descriptions of the unix manual pages and returns a list of commands that we may use to accomplish our need.

If you cannot find the right utility, then Google is our friend :)

$ apropos "list dir"
$ man -k "find files"

2. Fix typos in our commands

It's normal to make typographical errors when we type fast. Consider a situation where we need to run a command with a long list of arguments, and on execution it returns "command not found" because of a typo in the command name.
Now, we really do not want to retype the long list of arguments; instead, we can use the following to simply correct the typo and execute the command again:
$ ^typo_cmd^correct_cmd
 $ dc /tmp
 $ ^dc^cd
The above will navigate to the /tmp directory.

3. Bang and its Magic

Bang is quite useful when we want to play with bash history commands. Bang helps by letting you easily execute commands from history when you need them:
  • !! --> Execute the last executed command in the bash history
  • !* --> Execute the command with all the arguments passed to the previous command
  • !^ --> Get the first argument of the last executed command in the bash history
  • !$ --> Get the last argument of the last executed command in the bash history
  • !N --> Execute the command at position N in the bash history
  • !?keyword? --> Execute a command from bash history for the first pattern match of the specified keyword
  • !-N --> Execute the command that was Nth position from the last in bash history
$ ~/bin/lg-backup
 $ sudo !!
In the last part of the above example we didn't realize that the lg-backup command had to be run with "sudo". Now, instead of typing the whole command again with sudo, we can just use "sudo !!", which re-runs the last executed command in bash history as sudo and saves us a lot of time.

4. Working with Incron

This incron configuration is almost like a crontab setup, but the main difference is that incron monitors a directory for specific changes and triggers the specified actions.
Syntax: $directory $file_change_mask $command_or_action

/var/www/html/contents/ IN_CLOSE_WRITE,IN_CREATE,IN_DELETE /usr/bin/rsync --exclude '*.tmp' -a /home/ram/contents/ user@another_host:/home/ram/contents/
 /tmp IN_ALL_EVENTS logger "/tmp action for #file"
The above example triggers an rsync whenever there is a change in the "/var/www/html/contents" directory. For immediate backup implementations this is really helpful. Find more about incron here.
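
To try it out yourself, installation and adding an entry are each a one-liner; the package name below assumes a Debian/Ubuntu-style system:

$ sudo apt-get install incron
$ incrontab -e   ## add entries using the syntax shown above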

5. Double dash

There are situations where we end up creating or deleting directories whose names start with a symbol. Such directories cannot be removed just by using "rm -rf" or "rmdir", so we need to use the "double dash" (--) to delete them:
$ rm -rf -- $symbol_dir
There are also situations where you may want to create a directory whose name starts with a symbol. You can create such directories by using the double dash (--) before the directory name:
$ mkdir -- $symbol_dir

6. Comma and Braces Operators

We can do a lot with the comma and braces operators to make our lives easier when performing some operations. Let's see a few usages:
  • Rename and backup operations with comma & braces operator
  • Pattern matching with comma & braces operator
  • Rename and backup (prefixing name) operations on long file names
To back up httpd.conf to httpd.conf.bak
$ cp httpd.conf{,.bak}
To revert the file from httpd.conf.bak to httpd.conf
$ mv http.conf{.bak,}
To copy the file to a new name prefixed with 'old'
$ cp exampleFile old-!#^

7. Read only vim

As we all know, vim is a powerful command line editor. We can also use vim to view files in read-only mode if we want to stick to vim:
$ vim -R filename
We can also use the "view" tool, which is nothing but read-only vim:
$ view filename 

8. Push and Pop Directories

Sometimes when we are working in various directories, looking at logs and executing scripts, we find that a lot of our time is spent navigating the directory structure. If your directory navigation resembles a stack structure, then the pushd and popd utilities will save you lots of time (see the short session after this list):
  • Push the directory using pushd
  • List the stack directories using the command "dirs"
  • Pop the directories using popd
  • This is mainly used in navigating between directories
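
A quick hypothetical session (the directory names are just examples):

$ pushd /var/log   ## save the current directory on the stack and cd to /var/log
$ pushd /etc       ## stack is now: /etc /var/log ~
$ dirs -v          ## list the directory stack with index numbers
$ popd             ## drop /etc and jump back to /var/log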

9. Copy text from Linux terminal(stdin) to the system clipboard

Install xclip and create the aliases below:
$ alias pbcopy='xclip -selection clipboard'
$ alias pbpaste='xclip -selection clipboard -o'
We need to have the X window system running for this to work. On Mac OS X, the pbcopy and pbpaste commands are readily available to you.
To Copy:
$ ls | pbcopy
To Paste:
$ pbpaste > lstxt.txt 

10. TimeMachine like Incremental Backups in Linux using rsync --link-dest

With --link-dest, rsync will not recopy all of the files every single time a backup is performed. Instead, only the files that have been newly created or modified since the last backup are copied. Unchanged files are hard-linked from the previous backup into the destination directory.
$ rsync -a --link-dest=prevbackup src dst
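
A slightly fuller sketch of a dated-snapshot scheme; all paths and the layout are assumptions, shown only to illustrate the idea:

$ SRC=/home/ram/contents/
$ DEST=/backups/$(date +%F)
$ PREV=$(ls -1d /backups/*/ 2>/dev/null | tail -1)   ## most recent snapshot, if any
$ rsync -a ${PREV:+--link-dest="$PREV"} "$SRC" "$DEST"   ## --link-dest is skipped automatically on the first run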

11. To display the ASCII art of the Process tree

Showing your processes in a tree structure is very useful for confirming the relationship between every process running on your system. Here is an option which is available by default on most of the Linux systems.
$ ps aux --forest
--forest is an argument to the ps command which displays an ASCII art view of the process tree

There are many commands available like 'pstree', 'htop' to achieve the same thing.

12. Tree view of git commits

If you want to see git commits in a repo as tree view to understand the commit history better, the below option will be super helpful. This is available with the git installation and you do not need any additional packages.
$ git log --graph --oneline

13. Tee

The tee command is used to store and view (at the same time) the output of any other command.
That is, it writes to STDOUT and to a file at the same time. It helps when you want to view a command's output and simultaneously write it to a file, or copy it with pbcopy.
$ crontab -l | tee crontab.backup.txt
The tee command is named after plumbing terminology for a T-shaped pipe splitter. This Unix command splits the output of a command, sending it to a file and to the terminal output. Thanks Jon for sharing this.

14. ncurses disk usage analyzer

Analysing disk usage with an ncurses interface is fast and simple to use.
$ sudo apt-get install ncdu

15. hollywood

You have all seen the hacking scenes in Hollywood movies. Yes, there is a package which will create that for you.
$ sudo apt-add-repository ppa:hollywood/ppa 
$ sudo apt-get update
$ sudo apt-get install hollywood
$ hollywood

published by (Ben Witten) on 2015-11-04 15:00:00 in the "Liquid Galaxy" category

The Liquid Galaxy is an open source project founded by Google and further developed by End Point along with contributions from others. It allows for "viewsyncing" multiple instances of Google Earth and Google Maps (including Street View) and other applications that are configured with geometric offsets that allow multiple screens to be set up surrounding users of the system. It has evolved to become an ideal data visualization tool for operations, marketing, and research. It immerses users in an environment with rich satellite imagery, elevation data, oceanic data, and panoramic images.

End Point has had the opportunity to make incredible custom presentations for dozens of clients. I had a chance to connect with members of the End Point Liquid Galaxy team, and learn about which presentations they enjoyed making the most.

Rick Peltzman, CEO

One of the most exciting presentations we made was for my son's 4th grade history class. They were learning about the American Revolution. So, I came up with the storyboard, and TJ in our NYC office created the presentation. He gathered documents, maps of the time, content (that the kids each took turns reading), drawings and paintings, and put them in an historical context and overlaid them on current topographical presentations. Then the "tour" went from forts to battlefields to historical locations to important cities. The teachers were able to discuss issues and gather the kids' excited responses to the platform and what it was presenting to them that day. The experience was a big hit! It proved representative of the tremendous educational opportunities that Liquid Galaxy can provide.

Ben Witten, Project Specialist

My favorite presentation was one that I created, for fun, in preparation for the 2015 Major League Baseball Postseason. This was the very first presentation I made on the Liquid Galaxy. I appreciated the opportunity to combine creating a presentation revolving around my favorite sport, while at the same time teaching myself how to make exciting presentations in the process. I was able to combine images and overlays of the teams and players with videos of the matchup, all while creating orbits around the different postseason stadiums using the Liquid Galaxy's Google Earth capabilities.

Ben Goldstein, President

My favorite experience on the Liquid Galaxy (or at least the one I think is most important) is seeing the XL Catlin Seaview Survey, which is creating a complete panoramic survey of the ocean's coral reefs. It's an amazing scientific endeavor and it's a wonder of the world that they are documenting for humanity's appreciation and for scientific purposes. Unfortunately, as the survey is documenting, we're witnessing the destruction of the coral reefs of the world. What XL Catlin is doing is providing an invaluable visual data set for scientific analysis. The panoramic image data sets that the XL Catlin Seaview Survey has collected, and that Google presents in Street View, show how breathtakingly beautiful the ocean's coral reefs are when they are in good health. It is now also documenting the destruction of the coral over time because the panoramic images of the coral reefs are geospatially tagged and timestamped so the change to the coral is apparent and quantifiable.

Kiel Christofferson, Liquid Galaxy Lead Architect

The tour of all of the End Point employees still stands out in my mind, just because it's data that represents End Point. It was created for our company's 20th anniversary, to celebrate our staff that works all across the globe. That presentation kind of hit close to home, because it was something we made for ourselves.

Dave Jenkins, VP Business Development

The complex presentations that mix video, GIS data, and unique flight paths are really something to see. We created a sort of "treasure hunt" at SXSW last year for the XPrize, where viewers entered a code on the touchscreen based on other exhibits that they had viewed. If they got the code right, the Liquid Galaxy shot them into space, but if they entered the wrong code... just a splash into the ocean!

published by (Jeff Boes) on 2015-11-04 14:00:00 in the "perl funny" category

And now for something completely different ...

Programmers in general, and Perl programmers in particular, seem to have excellent, if warped, senses of humor. As a result, the CPAN library is replete with modules that have oddball names, or strange and wonderful purposes, or in some delightful cases -- both!

Let's take a look.

  1. Bone::Easy
    I'm going to take the coward's way out on this one right away. Go see for yourself, or don't.
  2. Acme::EyeDrops
    Really, anything in the Acme::* (meaning "perfect") namespace is just programmer-comedy gold, depending on what you find amusing and what is just plain forehead-smacking stupid to you. This one allows you to transform your Perl programs (small ones work better) from this:
    print "hello worldn";
    to this:
    Oh, that's not just a picture of a camel. That's actual Perl code; you can run that, and it executes in the exact same way as the original one-liner. So much more stylish. Plus, you can impress your boss/cow-orker/heroic scientist boyfriend.
  3. common::sense
    This one makes the list because (a) it is just so satisfying to see
      use common::sense;
    atop a Perl program, and (b) a citation of this on our company IRC chat is what planted the seed for this article.

    Another is sanity, as in "use sanity;". Seems like a good approach.
  4. Silly::Werder
    Not a terribly interesting name, but it produces some head-scratching output. For instance,
    Broringers isess ailerwreakers paciouspiris dests bursonsinvading buggers companislandet despa ascen?
    I suppose you might use this to generate some Lorem ipsum-type text, or maybe temporary passwords? Dialog for your science fiction novel?
  5. Any module with the word "Moose" in it. "Moose" is a funny word.
  6. D::oh
    The humor here is a bit obscure: you have to have been around for Perl4-style namespace addressing, when you would have had to load this via:
    use D'oh;
  7. your
    As in:
    use your qw($wits %head @tools);
    Here the name is the funny bit; the module itself is all business.

Well, that seems like enough to get you started. If you find others, post them here in the comments!

published by (Steph Skardal) on 2015-11-03 13:26:00 in the "ruby" category

Hi! Steph here, former long-time End Point employee now blogging from afar as a software developer for Pinhole Press. While I?m no longer an employee of End Point, I?m happy to blog and share here.

A while back, I was in the middle of upgrading Piggybak, an open source Ruby on Rails platform developed and supported by End Point, and I came across a quick error that I thought I'd share.

I develop locally and I use rbenv on Ubuntu. I need to jump from Ruby 1.9.3 to Ruby 2.1.1 in this upgrade. When I attempt to run rbenv install 2.1.1, I see errors reporting ruby-build: definition not found: 2.1.1, meaning that rbenv and ruby-build (a plugin used with rbenv to ease installation) do not include version 2.1.1 in the available versions. My version of rbenv is out of date, so this isn't surprising. But how do I fix it?

I found many directions for updating rbenv and ruby-build with Homebrew via Google, but that doesn't apply here. Most of the instructions point to running a git pull on rbenv (probably located in ~/.rbenv), but give no references to upgrading ruby-build.

cd ~/.rbenv
git pull

I did a bit of experimenting and simply tried pulling to update the ruby-build plugin (also a git repo):

cd ~/.rbenv/plugins/ruby-build/
git pull

And tada - that was all that was needed. rbenv install -l now includes ruby 2.1.1, and I can install it with rbenv install 2.1.1.

published by (Phin Jensen) on 2015-11-03 03:05:00 in the "company" category

Friday, October 2nd, was the second and final day of our company meeting. (See the earlier report on day 1 of our meeting if you missed it.) Another busy day of talks, this day was kicked off by Ben Goldstein, who gave us a more detailed rundown of End Point's roots.

The History of End Point

Ben and Rick met in the second or third grade (a point of friendly dispute), and from the early days of their friendship were both heavily influenced by each other's parents. Their first business enterprise together was painting houses in the summer to earn money for college.

After attending college, Ben worked with Unix and dabbled with the World Wide Web when it was brand new. Rick worked on Wall Street for a while, then decided he had had enough of that and worked briefly in real estate, then left to pursue more creative interests.

Ben showed Rick some simple websites he had been working on and Rick said that is what they should do: they should start a business building websites together. Soon they made the big decision and End Point was officially incorporated on August 8, 1995. Their earliest clients were all found by word of mouth, with the first website being made for one of Ben's cousins.

At first they made only static websites. But Ben had worked with Oracle databases and knew some scripting languages, so the possibility of making dynamic data-driven web applications on the server seemed within reach. They met someone who had been scanning wine labels and putting the data into a Mini SQL (msql) database. Ben wrote some Perl scripts and soon had created End Point's first dynamic website.

Rick met an employee of Michael C. Fina, a company that did wedding registries and wanted to move to the web. Ben got started working on that in 1998. Around the same time, he found the open source MiniVend web application framework, exactly what he needed for a project like that which would be much more than a few CGI scripts.

Once End Point's early dynamic websites went into production, Ben wanted to grow more solid hosting and support services. After working with a few independent consultants who were a little too fly-by-night, he went to Akopia for help. Akopia had just acquired MiniVend and renamed it to Interchange. They brought Mike Heins, the creator of MiniVend, on board, and were building out a support and hosting business around Interchange.

Before long, Akopia was acquired by Red Hat, and Ben met Jon Jensen there while getting his help with Interchange and Linux questions. Later when Red Hat was phasing out its Interchange consulting group, Ben offered Jon a job, and Jon introduced Ben and Rick to his co-worker Mark Johnson who was expert at all things database, Perl, and Interchange. Rick and Ben hired both Jon and Mark in 2002, and End Point continued to grow with new clients and soon more employees as well.

The story continues with End Point moving into PostgreSQL support, Ruby on Rails development, AFS support, the creation of Spree Commerce, programming with Python & Django, Java, PHP, Node.js, AngularJS, Puppet and Chef and Ansible, and a major move into the Liquid Galaxy world. By then things are documented a little better thanks to wikis and blogs, so Ben was able to keep to the highlights.

A lot happens in a business in 20 years!

Using Trello

Next Josh Ausborne talked about how we make our lives easier by tracking tasks with Trello, a popular software as a service offering. At End Point we use Trello as one way to keep track of what we're working on in a project, along with other systems for certain projects or preferred by our various clients.

Most work tracking systems store data about progress and status, but Trello's strength is that it provides a nice way to look at things as a whole and to streamline collaboration. Trello is simple and easy to use, comes with just enough features to be helpful but not to overwhelm, has great apps for Android and iOS, and costs nothing to use for almost all functionality.

Using Trello is simple. It's made up of "boards", each of which contain lists of "cards". Each card can be used to represent a task or small project. People can be assigned to a card, watch it for notifications, comment, create checklists, upload images, share links, and more.

Cards are organized into lists, where they can be organized by status, priority, person, or any way else you choose. A popular arrangement is a "Kanban"-style board with one list each for "Ideas", "To do", "Blocked", "Doing", and "Done/Review". Nearly everything can be organized or moved with simple drag-and-drop gestures.

Automated Hosting

Lele Calò and Richard Templet talked about automated versus manual infrastructure management. In the beginning of the web era, system administrators did everything by hand. They soon moved on to a "shell for-loop" style of system administration, but many things were still done by hand and were often incompatible between systems. That's where automation comes in. With tools like Puppet, Chef, Salt, and Ansible, it becomes easy to automate much of the configuration across many servers, even of different operating system distributions and versions.

So what should automation be used for? Mainly repetitive tasks that don't require human touch. A lot of things in server setup and update deployment are easily done once, but become tedious very quickly.

What does End Point use automation for? We use it in our web hosting environment for initial operating system setup on new servers, managing changes to SSH public key lists and iptables firewall rules, and deploying monitoring configurations. For certain applications, we automate building, deploying, and updating entire systems with consistent configuration across many hundreds of nodes. We use Puppet, Chef, and Ansible for various internal and customer projects.

For those who are looking to get started, Lele and Richard recommended starting with automation on new servers. It's very simple and safe to experiment there, as there isn't anything yet to lose. Later once you're confident in what you're doing you can start to carefully spread your automation to existing servers.

Command Line Tools

Kannan Ponnusamy and Ram Kuppuchamy showed us some of their favorite Unix command-line tools. Here are some of the cool things I liked.

You can use ^ (caret) to correct typos in the previous command, like so:

user@host $ cd Donloads                                                                                                                                                              
cd: no such file or directory: Donloads                                                                                                                                                    
user@host $ ^on^own                                                                                                                                                                       
cd Downloads                                                                                                                                                                               
user@host:~/Downloads $

Use ! ("bang") commands to access commands and arguments in the history:

  • !! - entire previous command
  • !* - all arguments of previous command
  • !^ - first argument of previous command
  • !$ - last argument of previous command
  • !N - command at position N in history
  • !?keyword? - most recent command with pattern match of keyword
  • !-N - command at Nth position from last in history

Using Ctrl-R will do a reverse search of your command history, letting you see and edit old commands. If you press Ctrl-O on a historic command, it will execute it and put the following command from the history into the prompt. Additional presses of Ctrl-O will continue down the history.

The ps --forest option creates a visual ASCII art tree of the process hierarchy. Likewise, Git has git log --graph, which shows a visual representation of the repository history. Try using git log --oneline in addition to --graph to make it a little more concise.

tee $filename lets you pipe to STDOUT and a file at the same time. For example, crontab -l | tee crontab_backup.txt will print the crontab and put it in a text file.

ls -d */ will list all directories in the current directory.

These are just a few of the neat things they showed us. See their blog post about these and other Unix commands.

ROS in the Liquid Galaxy

Wojciech Ziniewicz and Matt Vollrath gave us a preview of their talk "ROS-driven user applications in idempotent environments" to be presented at ROSCon 2015 in Hamburg, Germany a few days later. The Liquid Galaxy project recently transitioned away from ad-hoc services and protocols to ROS (Robot Operating System) and their presentation slides give a good idea of how much was involved in that process.

State of the Company

Next, Rick gave a talk on the current state of the company, which he summarized with one word: Transition. End Point is a company that has been changing since its inception in 1995, and now is no exception. A major transition over the last year or so has been growing to a head-count of 50 people. While we are in many ways similar to when we were, say, 30 people, more people requires different approaches for management and coordination.

A larger End Point presents us with both opportunities and challenges. However, the core values of our company have remained the same and are part of what make us what we are.

Personal Tech Security

Marco Matarazzo and Lele Calò next spoke to us on personal tech security. Why should you secure your personal or work devices? One obvious reason is to prevent disclosure of sensitive data. But just as important is not losing important data or becoming a conduit for attacks on other systems and networks.

So how should you approach security? It's important to think of usability vs. security. A door with 100 locks on it may be more secure than one with two, but getting in and out of it, even with the proper keys, would be far too difficult. So security should be adapted for the scenario. Securing a personal laptop with pictures, music, and games should be approached differently from a work device with passwords and SSH or GnuPG keys.

For members of our hosting team and employees who work with clients that require it, we have certain more stringent security policies they must follow. Some things are considered common sense, such as shredding or burning business-related papers and being careful with access to work environments.

In public places, make sure shared networks have proper encryption. Do not use untrusted computers, such as public computers at libraries or internet cafes, for work or any personal sites you need to log into. Be careful to not leave any work data behind, whether on an old backup disk or computer you get rid of, or on scraps of paper or notepads.

Keep all of your devices safe physically and in software! Apply operating system and other software updates promptly, and reboot at least a few times a week to let everything get fully updated. That includes laptops, desktops, phones, tablets, etc. And don't forget external drives! Keep automatic password-protected screen locks on your devices, encrypt your data and swap partitions, as well as phones and removable devices.

Back up your data to a safe place, and remember to share your passwords with someone trusted who may need them in case of an emergency.

Make sure your private SSH keys are password-protected, and ensure you're asked for confirmation when using them. Avoid common and unsafe passwords, like '12345' and 'password', although 'pizza1' is perfectly fine :). Use PGP to encrypt private messages and confidential data at rest.

"Brain bowl" challenge

We finished our meetings with a little friendly competition led by Jon Jensen. We were divided into ad-hoc teams by Ron Phipps, and were presented with trivia questions to see which team could answer correctly first. Some of the questions included:

  • Who created the World Wide Web? In what year?
  • What is now wrong with the term "SSL certificate"?
  • What do HIPAA and PCI-DSS stand for?
  • The Agile Manifesto says its authors have come to value what things over what other things?
  • Where does the word "pixel" come from?
  • Where did the Unix command "tee" that Kannan mentioned get its name?
  • What does the name UTF-8 stand for?
  • How many bytes are in a terabyte? In a tebibyte?

Then we had some questions about programming languages we work with, such as which of Python's built-in types are immutable, or what values are boolean false in Ruby, Perl, and JavaScript.

We ended with a programming problem that required HTML parsing and number-crunching. The task was the same for all teams, but each team used a different toolset: Node.js, Ruby, Perl, Python, or bash + classic Unix text tools sed, awk, sort, cut, etc. The Perl, Python, and bash/Unix teams came up with working and impressive solutions at about the same time.

Company party

We ended the day with a party nearby at Spin where we played ping-pong and had dinner and socialized and met significant others who were also visiting New York City.

It was great to get everyone together in person!

published by (Josh Lavin) on 2015-10-30 17:00:00 in the "AngularJS" category

At the Perl Dancer Conference 2015, I gave a talk on AngularJS & Dancer for Modern Web Development. This is a write-up of the talk in blog post form.

Legacy Apps

It's a fact of life as a software developer that a lot of us have to work with legacy software. There are many older platforms out there, still being actively used today, and still supporting valid businesses. Thus, legacy apps are unavoidable for many developers. Eventually, older apps are migrated to new platforms. Or they die a slow death. Or else the last developer maintaining the app dies.

Oh, to migrate

It would be wonderful if I could migrate every legacy app I work on to something like Perl Dancer. This isn't always practical, but a developer can dream, right?

Of course, every circumstance is different. At the very least, it is helpful to consider ways that old apps can be migrated. Using new technologies can speed development, give you new features, and breathe new life into a project, often attracting new developers.

As I considered how to prepare my app for migration, here are a few things I came up with:

  • Break out of the Legacy App Paradigm
    • Consider that there are better ways to do things than the way they've always been done
  • Use Modern Perl
  • Organize business logic
    • Try to avoid placing logic in front-end code

You are in a legacy codebase

I explored how to start using testing, but I soon realized that this requires methods or subroutines. This was the sad realization that up till now, my life as a Perl programmer had been spent doing scripting. My code wasn't testable, and looked like a relic with business logic strewn about.


I set out to change my ways. I started exploring object-oriented Perl using Moo, since Dancer2 uses Moo. I started trying to write unit tests, and started to use classes and methods in my code.

Essentially, I began breaking down problems into smaller problems. This, after all, is how the best methods are written: short and simple, doing just one thing. I found that writing code this way was fun.
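
To make that concrete, here is the kind of small, single-purpose Moo class I mean (a sketch; the class name, attribute, and method are made up for illustration):

package MyApp::PriceRule;
use Moo;

# one small class that does exactly one thing: apply a percentage discount
has rate => ( is => 'ro', default => 0.10 );

sub discounted {
    my ( $self, $price ) = @_;
    die 'Need a price' unless defined $price;
    return sprintf '%.2f', $price * ( 1 - $self->rate );
}

1;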


I quickly realized that I wasn't able to run tests in my Legacy App, as it couldn't be called from the command line (at least not out of the box, and not without weird hacks). Thus, if my modules depended on Legacy App code, I wouldn't be able to call them from tests, because I couldn't run these tests from the shell.

This led me to a further refinement: abstract away all Legacy App-specific code from my modules. Or, at least all the modules I could (I would still need a few modules to rely on the Legacy App, or else I wouldn't be using it at all). This was a good idea, it turned out, as it follows the principle of Separation of Concerns, and the idea of Web App + App, which was mentioned frequently at the conference.

Now I was able to run tests on my modules!
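
For example, a unit test for the hypothetical MyApp::PriceRule class sketched above needs nothing from the Legacy App and can be run with prove:

# t/price_rule.t
use strict;
use warnings;
use Test::More;

use MyApp::PriceRule;    # the hypothetical class from the sketch above

my $rule = MyApp::PriceRule->new( rate => 0.25 );
is $rule->discounted(100), '75.00', 'a 25% discount is applied';
ok !eval { $rule->discounted(undef); 1 }, 'dies without a price';

done_testing;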

Move already

This whole process of "getting ready to migrate" soon began to look like yak shaving. I realized that I should have moved to Dancer earlier, instead of trying to do weird hacks to get the Legacy App doing things as Dancer would do them.

However, it was a start, a step in the right direction. Lesson for me, tip for you.

And, the result was that my back-end code was all the more ready for working with Dancer. I would just need to change a few things, and presto! (More on this below.)


With the back-end looking tidier, I now turned to focus on the front-end. There was a lot of business logic in my front-end code that needed to be cleaned up.

Here is an example of my Legacy App front-end code:

@_TOP_@
<h1>[scratch page_title]</h1>
[perl]
   my $has_course;
   for (grep {$_->{mv_ib} eq 'course'} @$Items) {
      $has_course = 1;
   }
   return $has_course ? '<p>You have a course!</p>' : '';
[/perl]
<button>Buy [if cgi items]more[else]now[/else][/if]</button>
@_BOTTOM_@

As you can see, the Legacy App allowed the embedding of all sorts of code into the HTML page. I had Legacy App tags (in the brackets), plus something called "embedded perl", plus regular HTML. Add all this together and you get Tag Soup.

This kind of structure won't look nice if you attempt to view it on your own machine in a web browser, absent from the Legacy App interpreting it. But let's face it, this code doesn't look nice anywhere.

Separation of Concerns

I thought about how to apply the principle of Separation of Concerns to my front-end code. One thing I landed on, which isn't a new idea by any means, is the use of "HTML + placeholders," whereby I would use some placeholders in my HTML, to be later replaced and filled in with data. Here is my first attempt at that:

    page_title="[scratch page_title]"
    has_course="[perl] ... [/perl]"
    buy_phrase="Buy [if cgi items]more[else]now[/else][/if]"

    <h1>{PAGE_TITLE}</h1>
    {HAS_COURSE?}<p>You have a course!</p>{/HAS_COURSE?}
    <button>{BUY_PHRASE}</button>


What I have here uses the Legacy App's built-in placeholder system. It attempts to set up all the code in the initial "my-tag-attr-list", then the HTML uses placeholders (in braces) which get replaced upon the page being rendered. (The question-mark in the one placeholder is a conditional.)

This worked OK. However, the logic was still baked into the HTML page. I wondered how I could be more ready for Dancer. (Again, I should have just gone ahead and migrated.) I considered using Template::Toolkit, since it is used in Dancer, but it would have been hard to add to my Legacy App.

Enter AngularJS (or your favorite JavaScript framework)

AngularJS is a JavaScript framework for front-end code. It displays data on your page, which it receives from your back-end via JSON feeds. This effectively allows you to separate your front-end from your back-end. It's almost as if your front-end is consuming an API. (Novel idea!)

After implementing AngularJS, my Legacy App page looked like this (not showing JavaScript):

<h1 ng-bind="page.title"></h1>
<p ng-if="items.course">You have a course!</p>
<button ng-show="items">Buy more</button>
<button ng-hide="items">Buy now</button>

Now all my Legacy App is doing for the front-end is basically "includes" to get the header/footer (the TOP and BOTTOM tags). The rest is HTML code with ng- attributes. These are what AngularJS uses to "do" things.

This is much cleaner than before. I am still using the Legacy App back-end, but all it has to do is "routing" to call the right module and deliver JSON (and do authentication).

Here's a quick example of how the JavaScript might look:

<html ng-app="MyApp">
<script src="angular.min.js"></script>
<script>
  // schematic only: a module and controller that pull a JSON feed into $scope.items
  angular.module('MyApp', []).controller('ItemsCtrl', function ($scope, $http) {
    $http.get('/feedback/list/some-code').then(function (res) { $scope.items = res.data; });
  });
</script>

This is very simplified, but via its modules/factories/controllers, the AngularJS code handles how the JSON feeds are displayed in the page. It pulls in the JSON and can massage it for use by the ng- attributes, etc.

I don't have to use AngularJS to do this — I could use a Template::Toolkit template delivered by Dancer, or any number of other templating systems. However, I like this method, because it doesn't require a Perl developer to use. Rather, any competent JavaScript developer can take this and run with it.
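
For comparison, the Template::Toolkit alternative would look roughly like this in Dancer2 (a sketch; the route, template name, and keys are made up):

use Dancer2;
set template => 'template_toolkit';    # use Template::Toolkit for views

get '/course' => sub {
    # views/course.tt would use [% page_title %] and [% has_course %] placeholders
    template 'course' => {
        page_title => 'Your Courses',
        has_course => 1,
    };
};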


Now the migration of my entire app to Dancer is much easier. I gave it a whirl with a handful of routes and modules, to test the waters. It went great.

For my modules that were the "App" (not the "Web App" and dependent on the Legacy App), very few changes were necessary. Here is an example of my original module:

package MyApp::Feedback;
use MyApp;

my $app = MyApp->new( ... );

sub list {
    my $self = shift;
    my $code = shift
        or return $app->die('Need code');
    my $rows = $app->dbh($feedback_table)->...;
    return $rows;
}
You'll see that I am using a class called MyApp. I did this to get a custom die and a database handle. This isn't really the proper way to do this (I'm learning), but it worked at the time.

Now, after converting that module for use with Dancer:

package MyApp::Feedback;
use Moo;
with 'MyApp::HasDatabase';

sub list {
    my $self = shift;
    my $code = shift
        or die 'Need code';
    my $rows = $self->dbh->...;
    return $rows;
}
My custom die has been replaced with a Perl die. Also, I am now using a Moo::Role for my database handle. And that's all I changed!
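
The MyApp::HasDatabase role itself isn't shown here; a minimal version might look something like this (a sketch, assuming DBI and placeholder connection details):

package MyApp::HasDatabase;
use Moo::Role;
use DBI;

# lazily build one database handle for any class that consumes this role
has dbh => (
    is      => 'ro',
    lazy    => 1,
    default => sub {
        DBI->connect( 'dbi:Pg:dbname=myapp', 'myuser', 'mypass',
            { RaiseError => 1, AutoCommit => 1 } );
    },
);

1;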


The biggest improvements were in things that I "stole" from Dancer. (Naturally, Dancer would do things better than I.) This is my Legacy App's route for displaying and accepting feedback entries. It does not show any authentication checks. It handles feeding back an array of entries for an item ("list"), a single entry (GET), and saving an entry (POST):

# (excerpt: when/default rely on Perl's 'switch' feature; to_json/from_json come from a JSON module)
sub _route_feedback {
    my $self = shift;
    my (undef, $sub_action, $code) = split '/', $self->route;
    $code ||= $sub_action;
    $self->_set_status('400 Bad Request');   # start with 400
    my $feedback = MyApp::Feedback->new;
    for ($sub_action) {
        when ("list") {
            my $feedbacks = $feedback->list($code);
            $self->_set_tmp( to_json($feedbacks) );
            $self->_set_content_type('application/json; charset=UTF-8');
            $self->_set_status('200 OK') if $feedbacks;
        }
        default {
            for ($self->method) {
                when ('GET') {
                    my $row = $feedback->get($code)
                        or return $self->_route_error;
                    $self->_set_tmp( to_json($row) );
                    $self->_set_content_type('application/json; charset=UTF-8');
                    $self->_set_status('200 OK') if $row;
                }
                when ('POST') {
                    my $params = $self->body_parameters
                        or return $self->_route_error;
                    $params = from_json($params);
                    my $result = $feedback->save($params);
                    $self->_set_status('200 OK') if $result;
                    $self->_set_content_type('application/json; charset=UTF-8');
                }
            }
        }
    }
}

Here are those same routes in Dancer:

prefix '/feedback' => sub {
    my $feedback = MyApp::Feedback->new;

    get '/list/:id' => sub {
        return $feedback->list( param 'id' );
    };
    get '/:code' => sub {
        return $feedback->get( param 'code' );
    };
    post '' => sub {
        return $feedback->save( scalar params );
    };
};
Dancer gives me a lot for free. It is a lot simpler. There's still no authentication shown here, but everything else is done. (And I can use an authentication plugin to make even that easy.)
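
One detail worth noting: for these routes to return JSON rather than stringified Perl data, the app needs a serializer enabled, along these lines:

use Dancer2;

# serialize every route's return value to JSON automatically
set serializer => 'JSON';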


For the front-end, we have options on how to use Dancer. We could have Dancer deliver the HTML files that contain AngularJS. Or, we could have the web server deliver them, as there is nothing special about them that says Dancer must deliver them. In fact, this is especially easy if our AngularJS code is a Single Page App, which is a single static HTML file with AngularJS "routes". If we did this, and needed to handle authentication, we could look at using JSON Web Tokens.
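
As a rough sketch of the first option, Dancer can hand the single static page to the browser and leave the rest to AngularJS (the file name is an assumption):

use Dancer2;

# serve the Single Page App shell; AngularJS takes over routing in the browser
get '/' => sub {
    send_file 'index.html';    # looked up in the app's public/ directory
};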

Now starring Dancer

In hindsight, I probably should have moved to Dancer right away. The Legacy App was a pain to work with, as I built my own Routing module for it, and I also built my own Auth checking module. Dancer makes all this simpler.

In the process, though, I learned something...

Dancer is better?

I learned you can use tools improperly. You can do Dancer "wrong". You can write tag soup in anything, even the best modern tools.

You can stuff all your business logic into Template::Toolkit tags. You can stuff logic into Dancer routes. You can do AngularJS "wrong" (I probably do).

Dancer is better when (thanks to Matt S Trout for these):

  • Routes contain code specific to the Web.
  • Routes call non-Dancer modules (where business logic lives; again, Web App + App).
  • The route returns the data in the appropriate format.

These make it easy to test. You are effectively talking to your back-end code as if it's an API. Because it is.
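
As an illustration of that testability, a test can drive the whole app through PSGI without a running web server (a sketch; MyApp::Web stands in for whatever the Dancer app package is actually called):

use strict;
use warnings;
use Test::More;
use Plack::Test;
use HTTP::Request::Common;

use MyApp::Web;    # hypothetical Dancer2 app package

my $test = Plack::Test->create( MyApp::Web->to_app );

my $res = $test->request( GET '/feedback/list/some-code' );
ok $res->is_success, 'feedback list route responds';
like $res->content_type, qr{application/json}, 'and returns JSON';

done_testing;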

The point is: start improving somewhere. Maybe you cannot write tests in everything, but you can try to write smart code.

Lessons learned

  • Separate concerns
  • Keep it testable
  • Just start somewhere

The end. Or maybe the beginning...

published by (Josh Lavin) on 2015-10-30 11:30:00 in the "Conference" category

In my last post, I shared about the Training Days from the Perl Dancer 2015 conference, in Vienna, Austria. This post will cover the two days of the conference itself.

While there were several wonderful talks, Gert van der Spoel did a great job of writing recaps of all of them (Day 1, Day 2), so here I'll cover the ones that stood out most to me.

Day One

Dancer Conference, by Alexis Sukrieh (used with permission)

Sawyer X spoke on the State of Dancer. One thing mentioned, which came up again later in the conference, was: Make the effort, move to Dancer 2! Dancer 1 is frozen. There have been some recent changes to Dancer:

  • Middlewares for static files, so these are handled outside of Dancer
  • New Hash::MultiValue parameter keywords (route_parameters, query_parameters, body_parameters; covered in my earlier post)
  • Delayed responses (asynchronous) with the delayed keyword (see the sketch after this list):
    • Runs on the server after the request has finished.
    • Streaming is also asynchronous, feeding the user chunks of data at a time.
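
A delayed route looks roughly like this (a minimal sketch based on Dancer2's asynchronous keywords; the route path and content are made up):

use Dancer2;

get '/slow' => sub {
    delayed {
        # this block runs after the normal request cycle, on a streaming-capable server
        flush;                    # send the headers right away
        content 'first chunk';    # stream pieces of the body
        content 'second chunk';
        done;                     # close the response
    };
};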

Items coming soon to Dancer may include: Web Sockets (supported in Plack), per-route serialization (currently enabling a serializer such as JSON affects the entire app — later on, Russell released a module for this, which may make it back into the core), Dancer2::XS, and critic/linter policies.

Thomas Klausner shared about OAuth & Microservices. Microservices are a good tool to manage complexity, but you might want to aim for "monolith first", according to Martin Fowler, and only later break up your app into microservices. In the old days, we had "fat" back-ends, which did everything and delivered the results to a browser. Now, we have "fat" front-ends, which take info from a back-end and massage it for display. One advantage of the microservice way of thinking is that mobile devices (or even third parties) can access the same APIs as your front-end website.

OAuth allows a user to login at your site, using their credentials from another site (such as Facebook or Google), so they don't need a password for your site itself. This typically happens via JavaScript and cookies. However, to make your back-end "stateless", you could use JSON Web Tokens (JWT). Thomas showed some examples of all this in action, using the OX Perl module.
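
For reference, issuing and checking a JWT in Perl can be quite small. Here is a sketch using Crypt::JWT (one option, not necessarily the module Thomas used):

use Crypt::JWT qw(encode_jwt decode_jwt);

my $secret = 'change-me';    # shared signing key (placeholder for this sketch)

# the back-end signs a token after login...
my $token = encode_jwt(
    payload => { user_id => 42, exp => time + 3600 },
    alg     => 'HS256',
    key     => $secret,
);

# ...and later verifies it on each stateless API request
my $claims = decode_jwt( token => $token, key => $secret );
print "Hello, user $claims->{user_id}\n";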

One thing Thomas mentioned that I found interesting: Plack middleware is the correct place to implement most of the generic parts of a web app; the framework is the wrong place. I think this mindset goes along with Sawyer's comments about Web App + App in the Training Days.

Mickey Nasriachi shared his development on PONAPI, which implements the JSON API specification in Perl. The JSON API spec is a standard for creating APIs. It essentially absolves you from having to make decisions about how you should structure your API.

Panorama from the south tower of St. Stephen's cathedral, by this author

Gert presented on Social Logins & eCommerce. This built on the earlier OAuth talk by Thomas. Here are some of the pros/cons to social login which Gert presented:

  • Pros - customer:
    • Alleviates "password fatigue"
    • Convenience
    • Brand familiarity (with the social login provider)
  • Pros - eCommerce website:
    • Expected customer retention
    • Expected increase in sales
    • Better target customers
    • "Plug & Play" (if you pay) — some services exist to make it simple to integrate social logins, where you just integrate with them, and then you are effectively integrated with whatever social login providers they support. These include Janrain and LoginRadius
  • Cons - customer:
    • Privacy concerns (sharing their social identity with your site)
    • Security concerns (if their social account is hacked, so are all their accounts where they have used their social login)
    • Confusion (especially on how to leave a site)
    • Usefulness (no address details are provided by the social provider in the standard scope, so the customer still has to enter extra details on your site)
    • Social account hostages (if you've used your social account to login elsewhere, you are reluctant to shut down your social account)
  • Cons - eCommerce website:
    • Legal implications
    • Implementation hurdles
    • Usefulness
    • Provider problem is your problem (e.g., if the social login provider goes down, all your customers who use it to login are unable to login to your site)
    • Brand association (maybe you don't want your site associated with certain social sites)
  • Cons - social provider:
    • ???

Šimun Kodžoman spoke on Dancer + Meteor = mobile app. Meteor is a JavaScript framework for both server-side and client-side. It seems one of the most interesting aspects is you can use Meteor with the Android or iOS SDK to auto-generate a true mobile app, which has many more advantages than a simple HTML "app" created with PhoneGap. Šimun is using Dancer as a back-end for Meteor, because the server-side Meteor aspect is still new and unstable, and is also dependent on MongoDB, which cannot be used for everything.

End Point's own Sam Batschelet shared his work on Space Camp, a new container-based setup for development environments. This pulls together several pieces, including CoreOS, systemd-nspawn, and etcd to provide a futuristic version of DevCamps.

Day Two

Conference goers, by Sam (used with permission)

Andrew Baerg spoke on Taming the 1000-lb Gorilla that is Interchange 5. He shared how they have endeavored to manage their Interchange development in more modern ways, such as using unit tests and DBIC. One item I found especially interesting was the use of DBIx::Class::Fixtures to allow saving bits of information from a database to keep with a test. This is helpful when you have a bug from some database entry which you want to fix and ensure stays fixed, as databases can change over time, and without a "fixture" your test would not be able to run.

Russell Jenkins shared HowTo Contribute to Dancer 2. He went over the use of Git, including such helpful commands and tips as:

  • git status --short --branch
  • Write good commit messages: one line summary, less than 50 characters; longer description, wrapped to 72 characters; refer to and/or close issues
  • Work in a branch (you shall not commit to master)
  • "But I committed to master" --> branch and reset
  • git log --oneline --since=2.weeks
  • git add --fixup <SHA1 hash>
  • The use of branches named with "feature/whatever" or "bugfix/whatever" can be helpful (this is Russell's convention)

There are several Dancer 2 issues tagged "beginner suitable", so it is easy for nearly anyone to contribute. The Dancer website is also on GitHub. You can even make simple edits directly in GitHub!

It was great to have the author of Dancer, Alexis Sukrieh, in attendance. He shared his original vision for Dancer, which filled a gap in the Perl ecosystem back in 2009. The goal for Dancer was to create a DSL (Domain-specific language) to provide a very simple way to develop web applications. The DSL provides "keywords" for use in the Dancer app, which are specific to Dancer (basically extra functionality for Perl). One of the core aspects of keeping it simple was to avoid the use of $self (a standby of object-oriented Perl, one of the things that you just "have to do", typically).
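
For anyone who hasn't seen the DSL, a route in Dancer looks like this (a minimal sketch, using Dancer2 syntax):

use Dancer2;

# the DSL at work: keywords like 'get' and 'param', and no $self anywhere
get '/hello/:name' => sub {
    return 'Hello, ' . param('name') . '!';
};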

Alexis mentioned that Dancer 1 is frozen — Dancer 2 full-speed ahead! He also shared some of his learnings along the way:

  • Fill a gap (define clearly the problem, present your solution)
  • Stick to your vision
  • Code is not enough (opensource needs attention; marketing matters)
  • Meet in person (collaboration is hard; online collaboration is very hard)
  • Kill the ego — you are not your code

While at the conference, Alexis even wrote a Dancer2 plugin, Dancer2::Plugin::ProbabilityRoute, which allows you to do A/B Testing in your Dancer app. (Another similar plugin is Dancer2::Plugin::Sixpack.)

Also check out Alexis' recap.

Finally, I was privileged to speak as well, on AngularJS & Dancer for Modern Web Development. Since this post is already pretty long, I'll save the details for another post.


In summary, the Perl Dancer conference was a great time of learning and building community. If I had to wrap it all up in one insight, it would be: Web App + App — that is, your application should be a combination of Plack middleware, a Web App (Dancer), and an App (Perl classes and methods).