
published by (Kirk Harr) on 2015-08-21 11:30:00 in the "collectd" category

Graphing System Statistics (the old fashioned way)

Since the mid-2000s, system administrators who wanted a visual representation of their system statistics have had access to Graphite. This tool builds graphs from values collected periodically, providing a view of the data over time. Coupling it with collectd, which among its many built-in metrics offers the ability to send system statistics to a central location running Graphite, makes it easy to create a single portal for viewing statistics across your entire infrastructure. While this remains a nice setup, the graphical visualization capabilities of Graphite and rrdtool left some room for growth.

Enter Grafana

Grafana is a front-end for Graphite installations that offers a very robust graphing/plotting library (provided by Flot), along with templates for creating similar displays for multiple datasets. Here you can see a comparison of the same dataset in both graphing libraries:

Graphite (rrdtool)

Grafana (Flot)

Data Analysis

Once you have set up collectd and Graphite to gather simple metrics from the servers you wish to monitor, you will have some basic instrumentation for monitoring system resources, which is very helpful for keeping an eye on the performance and health of your infrastructure. However, the more data you harvest with collectd's many plugins, the greater the need for aggregation and analysis to better understand what the data could be communicating. In this example there are two graphs: the top is a measure of the network traffic crossing the external interface of a network firewall, and the bottom is the same total traffic transformed with a base-10 logarithm.

Within the logarithmic graph it's easier to perceive the magnitude of a value, as a change of 1.0 in either direction on that graph reflects a tenfold change in the underlying data. This approach gives an operator a view of the magnitude of a change, making it easy to track spikes in the data. Luckily, Graphite offers a huge number of functions for transforming the data before displaying it in a graph, all of them clearly documented.
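To see why the log graph reads this way, here is a quick illustrative check (the traffic numbers are made up, not from the graphs above):

```python
import math

# A change of 1.0 on a log10 graph corresponds to a tenfold change in the
# underlying value, so spikes show their order of magnitude directly.
def log10_delta(old, new):
    """How far the plotted log10 value moves when a metric goes old -> new."""
    return math.log10(new) - math.log10(old)

# Hypothetical traffic jump from 5 Mbps to 50 Mbps: the log graph moves by 1.0
print(log10_delta(5_000_000, 50_000_000))
```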


Going further, you may want different contextual groups that aggregate systems by host or by application. With Grafana you can create a customizable dashboard view populated with all the needed data. Here is an example of a dashboard for a system we have already seen:

Mousing over any of the data within the charts will allow for detailed examination of the data values measured at that time period and provides the legend for each color in the chart. Changing the dataset timeframe is as simple as adjusting the dropdown near the top of the page, or clicking and dragging a duration onto one of the graphs. The graphing library Flot provides a huge number of features and a very modern visual style which improved on what Graphite had to offer with rrdtool.


Graphite and collectd offered (and still offer) really robust data collection and analysis tools for measuring groups of computers. However, this data seemed trapped in a display front-end that just could not meet the needs of administrators who wanted to dig deeper into the collected data. Grafana provides a vastly improved graphing engine (thanks in part to Flot) and combines it with the tools needed, like templates and dashboards, to really empower users and system administrators to make the most of the collected data. We won't be the first to say it, but we can confirm that combining the great gathering and analysis capabilities of Graphite and collectd with a robust front-end like Grafana creates a very powerful tuning and monitoring stack.

published by Eugenia on 2015-08-21 02:13:25 in the "General" category
Eugenia Loli-Queru

Let’s address something: paper or digital collages? Here are my thoughts about it:

1. Paper collages are more beautiful in person than prints. The real scissor cuts add to the surrealness.

2. The crafting part of paper collages is more pleasurable, as is everything that is being realized with our own hands. You do get some street cred for it too. Digital collages on the other hand are much faster to work with.

3. Prints on the other hand, look exactly the same, no matter if they’re digital or on paper (they only look different if you use soft cuts or if you boost the colors on digital collages).

4. Paper collages usually go for anywhere between $100 to $500 on gallery shows. Digital collages go for $20 to $100. Digital special edition prints though can also go for $500, as long as they’re resized up! At the end, it depends on the quantity sold.

5. Galleries rarely want to work with digital artists. This means that if you are a digital artist, you must do all the marketing and promotion yourself. It does take time.

6. Paper pop collages are usually up to 12″ size (usually smaller, and customers often complain about that). Digital collages can be resized and printed up to 36″ without much loss of quality. My most usual digital size is 18″ though.

7. Commissions for big publications or big clients are usually required to be done digitally, because such clients are very demanding and ask for changes all the time. Most of these changes (e.g. enlarging or flipping a single element) can only be realized digitally. About a third of my income comes from commissions.

8. Digital collages allow modifying elements when exporting for products (e.g. iPhone cases, pillows, t-shirts, etc.). Because these exports have specific sizes (too tall, too wide, etc.), visual changes must be made to fit a collage to that product’s ratios. This can’t properly be done with an already-glued paper collage.

9. Digital collaborations are easier. Nothing to mail out or wait weeks for.

10. Digital workflows liberate the artist. You don’t have to deal anymore with limitations of sizes and decisions made in the 1950s by some editorial guy who put together a magazine back then. The decision on the size, direction, flip, colors etc are now yours. I understand that some people like the limitations. I witnessed a similar thing with Linux: people would install and use it exactly because they wanted to beat its limits as a desktop operating system. I personally am over that phase in my life. I don’t have the need to beat anything anymore, or fight with it. I just create as uninhibitedly as possible.

published by (Peter Hankiewicz) on 2015-08-17 23:30:00 in the "code" category


Before you can start writing production code in any programming language you should know a few things: syntax, purpose, coding standards, good practices. There is another thing that is really important too and can help developers who work with multiple projects and languages: coding style guides. A style guide is just a set of rules that developers should follow when writing code, and they are not so different across programming languages. In this article, I will sum up and list coding style guides for the most popular languages.

It's not a crime to use different styling, but standardizing this helps in the process of creating software. When your team is using the same standard it's much easier to read each other's code.
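As a small, made-up illustration (not taken from any particular style guide), compare the same logic written in a readable, standardized style and crammed into one line:

```python
def order_total(prices, tax_rate):
    """Sum item prices and apply a flat tax rate."""
    subtotal = sum(prices)
    return subtotal * (1 + tax_rate)

# The same thing, technically valid but painful to read and review:
def ot(p, t): s = sum(p); return s * (1 + t)
```

Both behave identically; a shared standard ensures only the first style shows up in code review.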

There is one universal resource when it comes to programming style guides. Many of you will already have read it: the "Clean Code" book by Robert C. Martin.

Something else that should be mentioned here:

  1. When you don't know how to write a structure, read guides.
  2. When you know how to write a structure but it's different than standardized, ask other developers about it.
  3. When your structure is different than all the other structures made by your co-workers, consider changing it.
  4. When someone forces you to change your coding style, ask them why, even if they are your manager or boss.
  5. When there is no good coding style guide for your language, create one.
  6. Try to have the same habits of coding in your team and company -- it's a time saver.
  7. Even if you've adapted to some structures be open for change.
  8. Respect experience. If senior developers tell you that your code is weird, there is probably something wrong with it.

The order of the guides is not random: the most important and trusted come first. Not all the guides cover all the topics, so it would be wise to read all of them when you plan to write code in a given language.

Let's begin.













Maybe styling is not the most important part of programming, but it is something you need to think about.

If we try to compare these rules to human language, English for example, see "Romeo and Juliet", Act III:

Come, come, thou art as hot a Jack in thy mood as
any in Italy, and as soon moved to be moody, and as
soon moody to be moved.

Now try to remove writing standards:

MERCUTIO Come, come, thou art as hot a Jack in thy mood as any in Italy, and as soon moved to be moody, and as soon moody to be moved.

It's not easy to read this one-liner. Our code is the same: be smart, listen to more experienced people, read style guides, and you will make life easier for yourself and other people too.

published by (Jon Jensen) on 2015-08-17 17:09:00 in the "perl" category

This is just a short note to celebrate the fact that the Comprehensive Perl Archive Network (CPAN) turned 20 years old yesterday!

CPAN began as a way to collect, mirror, and distribute open-source Perl packages. Over time it led to development of better packaging and module naming conventions; formed a community for feedback, testing, and contributions; and became one of the great strengths of the Perl world.

It is rare to find some needed functionality that is not available on CPAN. These days a more common problem is finding too much choice there, and needing to choose between several modules based on which are better maintained and supported on current versions of Perl, or kept current against external dependencies.

Perl does not get as much press these days as it once did, but it continues to thrive and improve. On that topic, our former co-worker Steph Skardal sent me an article called We're still catching up to Perl by Avdi Grimm of Ruby fame. It is not an in-depth language comparison, just a brief observation to his fellow Rubyists that there is plenty to be learned from Perl. (Of course Perl has plenty to learn from Ruby and other languages too.)

published by (Steph Skardal) on 2015-08-13 20:36:00 in the "book review" category

Hi! Steph here, former long-time End Point employee now blogging from afar as a software developer for Pinhole Press. While I'm no longer an employee of End Point, I'm happy to blog and share here.

I'm only about a quarter of the way into Don't Make Me Think (Revised) by Steve Krug, but I can already tell it's a winner. It's a great (and quick) book about web usability, with both high-level concepts and nitty-gritty examples. I highly recommend it! Even if you aren't interested in web usability but are a web developer, it's still a quick read and would be invaluable to whomever you are coding for these days.

A Bookmarklet

The book inspired me to write a quick bookmarklet to demonstrate some high-level concepts related to usability. Here's the bookmarklet:

javascript:(function() {
  var possible = " ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789";
  $('*:not(iframe)').contents().filter(function() {
    return this.nodeType == Node.TEXT_NODE && this.nodeValue.trim() != '';
  }).each(function() {
    var new_content = '';
    for (var i = 0; i < this.nodeValue.trim().length; i++) {
      new_content += possible.charAt(Math.floor(Math.random() * possible.length));
    }
    this.nodeValue = new_content;
  });
})();

To add the bookmarklet to your browser, simply copy the code in as the location for a new bookmark (and name it anything you want). Note that this particular bookmarklet assumes jQuery is loaded on the page, so it may not work on all websites. Gist available here.

What does it do?

In short, the bookmarklet converts readable text on the page to gibberish (random characters of the same length). Pictures are worth a thousand words here. Here are some example pages with the bookmarklet in action:

End Point home page.

End Point client list page.

Stance product listing page, with an item-in-cart popup.

CityPASS home page.
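For the curious, the same transformation can be sketched in Python (this is not the bookmarklet itself, just the idea it implements):

```python
import random
import string

# Each text run is replaced by random alphanumerics of the same length, so
# the page keeps its layout and visual hierarchy while the words vanish.
POSSIBLE = " " + string.ascii_uppercase + string.ascii_lowercase + string.digits

def gibberish(text):
    trimmed = text.strip()
    return "".join(random.choice(POSSIBLE) for _ in trimmed)

print(gibberish("Don't Make Me Think"))
```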

Why does this matter?

The bookmarklet provokes thought related to high level usability concepts, such as:

  • Is it clear which buttons are clickable?
  • Is the visual hierarchy clear?
  • What conventions does the user interface follow?
  • Users' browsing behavior is often to hyperfocus and click on what they are looking for while ignoring other content entirely. Does the user interface aid or hinder that behavior?
  • How and what do images communicate on the page?

All of these ideas are great things to talk through when implementing user interface changes or rolling out an entirely new website. And if you are interested in learning more, visit Steve Krug's website.

published by (Greg Sabino Mullane) on 2015-08-12 15:15:00 in the "database" category

While Bucardo is known for doing "multi-master" Postgres replication, it can do a lot more than simple "master to master" replication (better known as "source to source" replication). As people have been asking for simple Bucardo 5 recipes based on pgbench, I decided to present a few here. Since Bucardo allows any number of sources and targets, I will demonstrate a source-source-source-target replication. Targets do not have to be Postgres, so let's also show that we can do source - MariaDB - SQLite replication. Because my own boxes are so customized, I find it easier and more honest when writing demos to start with a fresh system, which also allows you to follow along at home. For this example, I decided to fire up Amazon Web Services (AWS) again.

After logging in, I visited the AWS Management Console, selected "EC2", clicked on "Launch Instance", and picked the Amazon Linux AMI (in this case, "Amazon Linux AMI 2015.03 (HVM), SSD Volume Type - ami-1ecae776"). Demos like this require very few resources, so choosing the smallest instance type (t2.micro) is more than sufficient. After waiting a couple of minutes for it to start up, I was able to SSH in and begin. The first order of business is always updating the box and installing some standard tools. After that I make sure we can install the most recent version of Postgres. I'll skip the initial steps and jump to the Major Problem I encountered:

$ sudo yum install postgresql94-plperl
Error: Package: postgresql94-plperl-9.4.4-1PGDG.rhel6.x86_64 (pgdg94)
           Requires: perl(:MODULE_COMPAT_5.10.1)
 You could try using --skip-broken to work around the problem
 You could try running: rpm -Va --nofiles --nodigest

Well, that's not good (and the "You could try" suggestions are useless in this case). Although all the other Postgres packages installed without a problem (postgresql94, postgresql94-server, and postgresql94-libs), there is a major incompatibility preventing Pl/Perl from working. Basically, the rpm was compiled against Perl version 5.10, but Amazon Linux is using 5.16! There are many solutions to this problem, from using perlbrew, to downgrading the system Perl, to compiling Postgres manually. However, this is the Age of the Cloud, so a simpler solution is to ditch this AMI and pick a different one. I decided to try a RHEL (Red Hat Enterprise Linux) AMI. Again, I used a t2.micro instance and launched RHEL-7.1 (AMI ID RHEL-7.1_HVM_GA-20150225-x86_64-1-Hourly2-GP2). As always when starting up an instance, the first order of business when logging in is to update the box. Then I installed some important tools, and set about getting the latest and greatest version of Postgres up and running:

$ sudo yum update
$ sudo yum install emacs-nox mlocate git perl-DBD-Pg

Checking the available Postgres version reveals, as expected, that it is way too old:

$ sudo yum list postgresql*-server
Loaded plugins: amazon-id, rhui-lb
Available Packages
postgresql-server.x86_64        9.2.13-1.el7_1        rhui-REGION-rhel-server-release

Luckily, there is excellent support for Postgres packaging on most distros. The first step is to find an rpm to use to get the "pgdg" yum repository in place. Visit the PGDG repository site and choose the latest version (as of this writing, 9.4). Then find your distro, and copy the link to the rpm. Going back to the AWS box, add it in like this:

$ sudo yum localinstall

This installs a new entry in the /etc/yum.repos.d/ directory named pgdg-94-redhat.repo. However, we want to make sure that we never touch the old, stale versions of Postgres provided by the native yum repos. Keeping them from appearing is as simple as finding out which repo they are in and adding an exclusion to that repository section by writing exclude=postgres*. Finally, we verify that all yum searches for Postgres return only the 9.4 items:

## We saw above that repo was "rhui-REGION-rhel-server-release"
## Thus, we know which file to edit
$ sudo emacs /etc/yum.repos.d/redhat-rhui.repo
## At the end of the [rhui-REGION-rhel-server-releases] section, add this line:
exclude=postgres*

## Now we can retry the exact same command as above
$ sudo yum list postgresql*-server
Loaded plugins: amazon-id, rhui-lb
Installed Packages
postgresql94-server.x86_64        9.4.4-1PGDG.rhel7        pgdg94

Now it is time to install Postgres 9.4. Bucardo currently needs to use Pl/Perl, so we will install that package (which will also pull in the core Postgres packages for us). As we are going to need the pgbench utility, we also need to install the postgresql94-contrib package.

$ sudo yum install postgresql94-plperl postgresql94-contrib

This time it went fine - and Perl is at 5.16.3. The next step is to start Postgres up. Red Hat has gotten on the systemd bandwagon, for better or for worse, so gone is the familiar /etc/init.d/postgresql script. Instead, we need to use systemctl. We will find the exact service name, enable it, then try to start it up:

$ systemctl list-unit-files | grep postgres
## postgresql-9.4.service                      disabled

$ sudo systemctl enable postgresql-9.4
ln -s '/usr/lib/systemd/system/postgresql-9.4.service' '/etc/systemd/system/'
$ sudo systemctl start postgresql-9.4
Job for postgresql-9.4.service failed. See 'systemctl status postgresql-9.4.service' and 'journalctl -xn' for details.

As in the pre-systemd days, we need to run initdb before we can start Postgres. However, the simplicity of the init.d script is gone (e.g. "service postgresql initdb"). Poking in the systemd logs reveals the solution:

$ sudo systemctl -l status postgresql-9.4.service
postgresql-9.4.service - PostgreSQL 9.4 database server
   Loaded: loaded (/usr/lib/systemd/system/postgresql-9.4.service; enabled)
   Active: failed (Result: exit-code) since Wed 2015-08-03 10:20:25 EDT; 1min 21s ago
  Process: 11916 ExecStartPre=/usr/pgsql-9.4/bin/postgresql94-check-db-dir ${PGDATA} (code=exited, status=1/FAILURE)

Aug 03 10:20:25 ip- systemd[1]: Starting PostgreSQL 9.4 database server...
Aug 03 10:20:25 ip- postgresql94-check-db-dir[11916]: "/var/lib/pgsql/9.4/data/" is missing or empty.
Aug 03 10:20:25 ip- postgresql94-check-db-dir[11916]: Use "/usr/pgsql-9.4/bin/postgresql94-setup initdb" to initialize the database cluster.
Aug 03 10:20:25 ip- postgresql94-check-db-dir[11916]: See %{_pkgdocdir}/README.rpm-dist for more information.
Aug 03 10:20:25 ip- systemd[1]: postgresql-9.4.service: control process exited, code=exited status=1
Aug 03 10:20:25 ip- systemd[1]: Failed to start PostgreSQL 9.4 database server.
Aug 03 10:20:25 ip- systemd[1]: Unit postgresql-9.4.service entered failed state.

That's ugly output, but what can you do? Let's run initdb, start things up, and create a test database. As I really like to use Postgres with checksums, we can set an environment variable to pass that flag to initdb. After that completes, we can start up Postgres.

$ sudo PGSETUP_INITDB_OPTIONS=--data-checksums /usr/pgsql-9.4/bin/postgresql94-setup initdb
Initializing database ... OK

$ sudo systemctl start postgresql-9.4

Now that Postgres is up and running, it is time to create some test databases and populate them via the pgbench utility. First, a few things to make life easier. Because pgbench installs into /usr/pgsql-9.4/bin, which is certainly not in anyone's PATH, we will put it in a better location. We also want to loosen the Postgres login restrictions, and reload Postgres so it takes effect:

$ sudo ln -s /usr/pgsql-9.4/bin/pgbench /usr/local/bin/
$ sudo sh -c 'echo "local all all trust" > /var/lib/pgsql/9.4/data/pg_hba.conf'
$ sudo systemctl reload postgresql-9.4

Now we can create a test database, put the pgbench schema into it, and then give the pgbench_history table a primary key, which Bucardo needs in order to replicate it:

$ export PGUSER=postgres
$ createdb test1
$ pgbench -i --foreign-keys test1
NOTICE:  table "pgbench_history" does not exist, skipping
NOTICE:  table "pgbench_tellers" does not exist, skipping
NOTICE:  table "pgbench_accounts" does not exist, skipping
NOTICE:  table "pgbench_branches" does not exist, skipping
creating tables...
100000 of 100000 tuples (100%) done (elapsed 0.10 s, remaining 0.00 s).
set primary keys...
set foreign keys...
$ psql test1 -c 'alter table pgbench_history add column hid serial primary key'

We want to create three copies of the database we just created, but without the data:

$ createdb test2
$ createdb test3
$ createdb test4
$ pg_dump --schema-only test1 | psql -q test2
$ pg_dump --schema-only test1 | psql -q test3
$ pg_dump --schema-only test1 | psql -q test4

Next up is installing Bucardo itself. We shall grab version 5.4.0 from the git repository, after cryptographically verifying the tag:

$ git clone
Cloning into 'bucardo'...
$ cd bucardo
$ gpg --keyserver --recv-keys 2529DF6AB8F79407E94445B4BC9B906714964AC8

$ git tag -v 5.4.0
object f1f8b0f6ed0be66252fa203c20a3f03a9382cd98
type commit
tag 5.4.0
tagger Greg Sabino Mullane  1438906359 -0400

Version 5.4.0, released August 6, 2015
gpg: Signature made Thu 06 Aug 2015 08:12:39 PM EDT using DSA key ID 14964AC8
gpg: please do a --check-trustdb
gpg: Good signature from "Greg Sabino Mullane "
gpg:                 aka "Greg Sabino Mullane (End Point Corporation) "
gpg: WARNING: This key is not certified with a trusted signature!
gpg:          There is no indication that the signature belongs to the owner.
Primary key fingerprint: 2529 DF6A B8F7 9407 E944  45B4 BC9B 9067 1496 4AC8

$ git checkout 5.4.0

Before Bucardo can be fully installed, some dependencies must be installed. What you need will depend on what your particular OS already has. For RHEL 7.1, this means a few things via yum, as well as some things via the cpan program:

$ sudo yum install perl-Pod-Parser perl-Sys-Syslog perl-Test-Simple perl-ExtUtils-MakeMaker cpan
$ echo y | cpan
$ (echo o conf make_install_make_command "'sudo make'"; echo o conf commit) | cpan
$ cpan boolean DBIx::Safe

## Now we can install the Bucardo program:
$ perl Makefile.PL
$ make
$ sudo make install

## Setup some directories we will need
$ sudo mkdir /var/run/bucardo /var/log/bucardo
$ sudo chown $USER /var/run/bucardo /var/log/bucardo

## Install the Bucardo database:
$ bucardo install ## hit "P" twice

Now that Bucardo is ready to go, let's teach it about our databases and tables, then set up a three-source, one-target database sync (aka multimaster, or master-master-master-slave):

$ bucardo add db A,B,C,D dbname=test1,test2,test3,test4
Added databases "A","B","C","D"

$ bucardo add all tables relgroup=bench
Creating relgroup: bench
Added table public.pgbench_branches to relgroup bench
Added table public.pgbench_tellers to relgroup bench
Added table public.pgbench_accounts to relgroup bench
Added table public.pgbench_history to relgroup bench
New tables added: 4

$ bucardo add all sequences relgroup=bench
Added sequence public.pgbench_history_hid_seq to relgroup bench
New sequences added: 1

$ bucardo add sync btest relgroup=bench dbs=A:source,B:source,C:source,D:target
Added sync "btest"
Created a new dbgroup named "btest"

$ bucardo start
Checking for existing processes
Starting Bucardo

Time to test that it works. The initial database, "test1", should have many rows in the pgbench_accounts table, while the other databases should have none. Once we update some of the rows in the test1 database, it should replicate to all the others. Changes in test2 and test3 should go everywhere as well, because they are source databases. Changes made to the database test4 should stay in test4, as it is only a target.

$ psql test1 -xtc 'select count(*) from pgbench_accounts'
count | 100000

$ for i in {2,3,4}; do psql test$i -xtc 'select count(*) from pgbench_accounts'; done
count | 0
count | 0
count | 0

## We want to "touch" these four rows to make sure they replicate out:
$ psql test1 -c 'UPDATE pgbench_accounts set aid=aid where aid <= 4'

$ for i in {2,3,4}; do psql test$i -xtc 'select count(*) from pgbench_accounts'; done
count | 4
count | 4
count | 4

$ for i in {1,2,3,4}; do psql test$i -xtc "update pgbench_accounts set abalance=$i*100 where aid=$i"; done
$ psql test1 -tc 'select aid, abalance from pgbench_accounts where aid <= 4 order by aid'
   1 |      100
   2 |      200
   3 |      300
   4 |        0
$ psql test2 -tc 'select aid, abalance from pgbench_accounts where aid <= 4 order by aid'
   1 |      100
   2 |      200
   3 |      300
   4 |        0
$ psql test3 -tc 'select aid, abalance from pgbench_accounts where aid <= 4 order by aid'
   1 |      100
   2 |      200
   3 |      300
   4 |        0
$ psql test4 -tc 'select aid, abalance from pgbench_accounts where aid <= 4 order by aid'
   1 |      100
   2 |      200
   3 |      300
   4 |      400

What happens if we change aid '4' on one of the sources? The local changes to test4 will get overwritten:

$ psql test1 -c 'update pgbench_accounts set abalance=9999 where aid = 4'
$ psql test4 -tc 'select aid, abalance from pgbench_accounts where aid <= 4 order by aid'
   1 |      100
   2 |      200
   3 |      300
   4 |     9999

Let's create one more sync - this time, we want to replicate our Postgres data to a MariaDB and a SQLite database. (Bucardo can also do systems like Oracle, but getting it up and running is NOT an easy task for a quick demo like this!). The first step is to get both systems up and running, and provide them with a copy of the pgbench schema:

## The program 'sqlite3' is already installed, but we still need the Perl module:
$ sudo yum install perl-DBD-SQLite

## MariaDB takes a little more effort
$ sudo yum install mariadb-server ## this also (surprisingly!) installs DBD::MySQL!

$ systemctl list-unit-files | grep maria
mariadb.service                             disabled
$ sudo systemctl enable mariadb
ln -s '/usr/lib/systemd/system/mariadb.service' '/etc/systemd/system/'
$ sudo systemctl start mariadb

$ sudo mysql
mysql> create user 'ec2-user'@'localhost' identified by 'sixofone';
mysql> grant all on *.* TO 'ec2-user'@'localhost';
mysql> quit

## Store the MariaDB / MySQL password so we don't have to keep entering it:
$ cat > ~/.my.cnf <<EOT
[client]
password = sixofone
EOT

Now we can create the necessary tables for both. Note that SQLite does not allow you to add a primary key to a table once it has been created, so we cannot use the MySQL/Postgres trick of adding the primary keys via ALTER TABLE after the fact. Knowing this, we can put everything into the CREATE TABLE statements. This schema will work on all of our systems:

CREATE TABLE pgbench_accounts (
    aid      integer NOT NULL PRIMARY KEY,
    bid      integer,
    abalance integer,
    filler   character(84)
);
CREATE TABLE pgbench_branches (
    bid      integer NOT NULL PRIMARY KEY,
    bbalance integer,
    filler   character(88)
);
CREATE TABLE pgbench_history (
    hid    integer NOT NULL PRIMARY KEY,
    tid    integer,
    bid    integer,
    aid    integer,
    delta  integer,
    mtime  datetime,
    filler character(22)
);
CREATE TABLE pgbench_tellers (
    tid      integer NOT NULL PRIMARY KEY,
    bid      integer,
    tbalance integer,
    filler   character(84)
);

$ mysql
mysql> create database pgb;
mysql> use pgb
pgb> ## add the tables here
pgb> quit

$ sqlite3 pgb.sqlite
sqlite> ## add the tables here
sqlite> .q
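If you want to verify that SQLite restriction for yourself, a quick sketch using Python's standard sqlite3 module shows it (the table and column names here are just examples):

```python
import sqlite3

def can_add_pk_after_create():
    """Try the Postgres/MySQL trick of adding a primary key column later."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE pgbench_history (tid integer, bid integer)")
    try:
        con.execute("ALTER TABLE pgbench_history "
                    "ADD COLUMN hid integer PRIMARY KEY")
        return True
    except sqlite3.OperationalError:
        # SQLite refuses to add a PRIMARY KEY column to an existing table
        return False

print(can_add_pk_after_create())  # False
```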

Teach Bucardo about these new databases, then add them to a new sync. As we do not want changes to get immediately replicated, we set this sync to "autokick off". This will ensure that the sync will only run when it is manually started via the "bucardo kick" command. Since database C is also part of another Bucardo sync and may get rows written to it that way, we need to set it as a "makedelta" database, which ensures that the replicated rows from the other sync are replicated onwards in our new sync.

## Teach Bucardo about the MariaDB database
$ bucardo add db M dbname=pgb type=mariadb user=ec2-user dbpass=sixofone
Added database "M"

## Teach Bucardo about the SQLite database
$ bucardo add db S dbname=pgb.sqlite type=sqlite
Added database "S"

## Create the new sync, replicating from C to our two non-Postgres databases:
$ bucardo add sync abc relgroup=bench dbs=C:source,M:target,S:target autokick=off
Added sync "abc"
Created a new dbgroup named "abc"

## Make sure any time we replicate to C, we create delta rows for the other syncs
$ bucardo update db C makedelta=on
Changed bucardo.db makedelta from off to on

$ bucardo restart
Creating /var/run/bucardo/fullstopbucardo ... Done
Checking for existing processes
Removing file "/var/run/bucardo/fullstopbucardo"
Starting Bucardo

For the final test, all changes to A, B, or C should end up on M and S!

$ for i in {1,2,3,4}; do psql test$i -xtc "update pgbench_accounts set abalance=$i*2222 where aid=$i"; done

$ psql test4 -tc 'select aid, abalance from pgbench_accounts where aid <= 4 order by aid'
   1 |     2222
   2 |     4444
   3 |     6666
   4 |     8888

$ sqlite3 pgb.sqlite 'select count(*) from pgbench_accounts'
0

$ mysql pgb -e 'select count(*) from pgbench_accounts'
| count(*) |
|        0 |
$ bucardo kick abc 0
Kick abc: [1 s] DONE!

$ sqlite3 pgb.sqlite 'select count(*) from pgbench_accounts'
3

$ mysql pgb -e 'select count(*) from pgbench_accounts'
| count(*) |
|        3 |

$ sqlite3 pgb.sqlite 'select aid,abalance from pgbench_accounts where aid <=4 order by aid'
1|2222
2|4444
3|6666

$ mysql pgb -e 'select aid,abalance from pgbench_accounts where aid <=4 order by aid'
| aid | abalance |
|   1 |     2222 |
|   2 |     4444 |
|   3 |     6666 |

Excellent. Everything is working as expected. Note that the changes from the test4 database were not replicated onwards, as test4 is not a source database. Feel free to ask any questions in the comments below, or better still, on the Bucardo mailing list.

published by (Josh Williams) on 2015-08-10 22:56:00 in the "monitoring" category
Clocks at Great Northern in Manchester, UK
I almost let this one sneak past! Guess I need to do some lag monitoring on myself. About a month or so ago, a new version of check_postgres was released, and that includes a bit of code I wrote. While the patch has been available in the git checkout for a while, now that it's in the official release and will start appearing in repos (if it hasn't already) it's probably worth writing up a quick note describing its reasoning and usage.

What's the feature? Time-based replication monitoring in the hot_standby_delay action. This was something that had been a long-standing item on my personal TODO list, and happened to scratch the itch of a couple of clients at the time.

Previously it would only take an integer representing how many bytes of WAL data the master could be ahead of a replica before the threshold is crossed:

check_hot_standby_delay --dbhost=master,replica1 --critical=16777594

This is certainly useful for, say, keeping an eye on whether you're getting close to running over your wal_keep_segments value. Of course it can also be used to indicate whether the replica is still processing WAL, or has become stuck for some reason. But for the (arguably more common) problem of determining whether a replica is falling too far behind chronologically, it isn't easy to figure out what byte thresholds to use, beyond simply guessing.

Postgres 9.1 introduced a handy function to help solve this problem: pg_last_xact_replay_timestamp(). It measures a slightly different thing than the pg_last_xlog_* functions the action previously used. And it's for that reason that the action now has a more complex format for its thresholds:

check_hot_standby_delay --dbhost=master,replica1 --critical="16777594 and 5 min"

For backward compatibility, of course, it'll still take an integer and work the same as it did before. Or alternatively if you only want to watch the chronological lag, you could even give it just a time interval, '5 min', and the threshold only takes the transaction replay timestamp into account. But if you specify both, as above, then both conditions must be met before the threshold activates.

Why? Well, that gets into the bit about the measurement of slightly different things. As its name implies, pg_last_xact_replay_timestamp() returns the timestamp of the last transaction it received and replayed. That's fine if you have a database cluster that's constantly active 24 hours a day. But not all of them are. Some have fluctuating periods of activity, perhaps busy during the business day and nearly idle during the night. In other words, if the master isn't processing any transactions, that last transaction timestamp doesn't change.

Then there's the other end of the scale. With SSDs and high-speed disk arrays, a master server may in a short interval process more transaction data than it can send over the network wire. For example, we have a system that runs an ETL process between two local databases on a master server, and generates a ton of transaction log data in a short amount of time. However, even if it has many megabytes of WAL data to transmit, the replicas never get more than a handful of seconds behind and soon catch up.

Both conditions on their own are fine. It's when both conditions are simultaneously met, when the replica is behind in both transaction log and it hasn't seen a chronologically recent transaction, that's when you know something is going wrong with your replication connection.
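The decision rule itself is a simple conjunction. Sketched in Python (an illustration of the logic only, not check_postgres's actual implementation):

```python
def hot_standby_alert(byte_lag, seconds_since_replay,
                      byte_threshold=16777594, time_threshold=300):
    """Trip the alert only when BOTH the WAL byte lag and the
    chronological lag exceed their thresholds."""
    return byte_lag >= byte_threshold and seconds_since_replay >= time_threshold

# Busy master: replica briefly behind in bytes, but chronologically current.
assert not hot_standby_alert(50_000_000, 3)
# Idle master: stale last-replay timestamp, but no WAL backlog.
assert not hot_standby_alert(0, 7200)
# Behind on both counts: the replication connection is likely in trouble.
assert hot_standby_alert(50_000_000, 900)
```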

Naturally, this updated check also includes the chronological lag metric, so you can feed that into Graphite, or some other system of your choice. Just make sure your system handles the new metric; our Nagios system seemed to ignore it until the RRD data for the check was cleared and recreated.

Oh, and make sure your clocks are in sync. The timestamp check executes only on the replica, so any difference between its clock and the master's can show up as skew here. ntpd is an easy way to keep everything mostly synchronized, but if you really want to be sure, check_postgres also has a timesync action.

published by Eugenia on 2015-08-08 18:59:57 in the "General" category
Eugenia Loli-Queru

I was watching the Director’s Cut of “Troy” last night, so I soon got interested in reading about the Late Bronze Age.

Right about 1100 BC, all hell broke loose in the Mediterranean: there was massive depopulation & famine, ALL cities were destroyed and burned (not one was left unscathed, and some were burned up to 7 times!), and civilization almost disappeared (we have only small villages with very simple geometric art, while people forgot how to write). So basically, we’re talking about Greece, Asia Minor and Hittites, Israel area and Egypt, all but destroyed. That era is called the “Greek Dark Ages” or “First Dark Ages”, and archaeologists consider these 300 years as much more “dark” than the Dark Ages that followed the fall of the Roman Empire 1500 years later.

Historians give a number of reasons why this happened: raids from the north and from the “sea peoples” (people of different origins got together to pirate), drought and other natural disasters.

Honestly, I think historians got the causes wrong here. Yes, these things happened, but they were not the root of the problem. I believe what happened becomes rather obvious after a bit of digging through geological information instead: the mines in the Mediterranean ran out of tin!

Tin is a rather rare metal, and without it, they couldn't forge bronze. Without being able to create bronze, in the Bronze Age, well, you have no Bronze Age anymore. You see, the whole high civilization starting in 3000 BC in the greater area was based on bronze. When that went bust, their trade and economy collapsed. When the economy collapsed, massive famine arrived. The ones who survived were trying to kill everybody else to get their hands on the little bit of tin that some might have had left.

I base this opinion on the following:

1. There is absolutely no reason to completely burn all cities and kill so many people when you’re simply trying to conquer them. You only burn the cities if you don’t care about the cities, and you only care about what these people had control over that was of little availability: tin.

2. People from completely different nations coming together to pirate (“sea peoples”), only happens when the economy has collapsed. Humans of different origins don’t band together and choose violence, unless there’s no other way. Humanity 101.

And the most damning argument:

3. Iron was known as a metal that could be used by 3200 BC already (pretty much the same time that Bronze was becoming popular). But because it required a special furnace and smelting technique, iron was used very little by blacksmiths. The Bronze Age happened before the Iron Age simply because Bronze was simpler to deal with, not because they didn’t know what iron was.

So, there was no reason for people to switch to iron (especially because we would have to wait many more centuries afterwards to invent steel). And yet, we see a gradual turn from bronze to iron during the Late Bronze Age, despite the practical problems iron had. This to me makes it clear that the people simply ran out of tin, and they were FORCED to *slowly* turn to iron. In the meantime, until they got iron right, the Dark Ages were upon them!

Now, there’s a reason why I’m writing such a post here today.

Think about it for a moment: we had major civilizations that based their success on a single metal. When that metal went bust, so did those civilizations. The few who survived resorted to extreme violence.

Always use History to decode the present and to get a good glimpse of the future.

So, does the above situation remind you of anything? Could this be what happens to us in as few as 50-75 years from now, when our fossil fuels go bust?

We’re in a similar boat, you know: our fossil fuels are going away rapidly, and our solar panel technology is not nearly as effective (the best ones only have 25% efficiency compared to fossil fuels, just like iron was difficult to forge compared to bronze).

Unless Lockheed Martin comes through big time with their announced fusion reactor, we should expect nothing but a similar result: the collapse of our economy, wars over the little bit of oil (and water) that’s left, and a rather Mad Max-like world.

So, I hope I’m gone by that time, and not be re-incarnated for quite a while. 😛

published by (Peter Hankiewicz) on 2015-08-07 23:30:00 in the "contenteditable" category

There are multiple things in a front-end developer's work that can be easily handled by native browser functions. Based on experience, the JavaScript learning path of many front-end developers is not perfect. Therefore, I will give you some examples: for some people a reminder, for others a new thing to think about. It is always good to learn, right?


W3C definition

It lets you call built-in browser functions from JavaScript: simple functions to do simple things. execCommand is very useful when you work with text ranges selected by a user. It is also commonly used together with the contentEditable HTML element attribute.

Practical examples

  • You want an undo button in your application, so that when the user clicks it, the latest changes are discarded.
    // after running just this, the latest user changes will be discarded (text input changes)
    document.execCommand('undo', false, null);
  • You want to remove text after the cursor position.
    // after running this script one character after the cursor
    // position in the current element will be removed
    document.execCommand('forwardDelete', false, null);

There are more than 30 functions (look here) that can be used with a limited amount of code. You need to be careful though, and keep testing: browsers have different implementations today, though these may become more unified in the future.


W3C definition

I want you to make a small experiment. On the current page please open developer console (Firebug or DevTools) and try to run this code:

document.designMode = 'on';

Have you noticed anything? The whole page is now editable: you can edit any paragraph of this page, any header, any link. It's less flexible than contentEditable in that you can't run this command on an individual element, only on the Document object, but it's cool to know that stuff like this exists. Every HTML element now behaves as if its contentEditable attribute were set to true.


W3C definition

"contentEditable" attribute can be set to any HTML document element. What it does is it makes element editable or not. By default, every HTML element is not editable, you can easily change it by setting contentEditable attribute value to true like this:

document.getElementById('word-open').contentEditable = 'true';

If you run this script on the current page, you will be able to edit the big slogan on the left side of this text (well, only the "OPEN" word). This can be extremely useful when you need to build an online rich text editor.


W3C definition

The Navigator object is not yet standardized by the W3C, but it's supported by around 90% of browsers globally (the known and popular browsers not supporting it are IE8 and Opera Mini). It contains various pieces of information about the client browser. For example:

navigator.plugins;
// will return a list of supported/installed browser plugins; you can
// check if the user has the Flash software installed, or whether the user uses an ad-block plugin

navigator.geolocation;
// will return a geolocation object; you can track the client's geo position

navigator.product;
// will return a browser engine name, useful when you need to check it instead
// of a browser name


I recommend that everyone interested in front-end development go through the W3C standardization documents and read them chapter by chapter. They are resources that are not widely popular but are very helpful.

published by (Marina Lohova) on 2015-08-04 14:00:00 in the "BigVideo.js" category

One pretty common request for every web developer is to "please, make our Stone age website look sleek and modern". Well, no more head scratching about the meaning of "sleek" or "modern" (or "the Stone age" for some?). In times of the crisp and stunning visuals there's no better way to make an impression than to use a big beautiful background video on the home page.

Paired with some impressive infinite scroll which I already covered here and a nice full-screen image gallery (which I will cover later), it will definitely help to bring your website up to date.

Lucky for us, there is a very popular library called BigVideo.js which is based on another well known library videojs, which is a wrapper around HTML5 <video> tag.

Converting the video.

To ensure the cross browser support, it's best to supply the desired video in several formats. The most common formats to use are mp4, ogg and webm. Here is the browser support chart to give you a better idea of why it's so important to use more than one format on your page.

There are a lot of ways to convert the file, but since I'm not particularly good with compression settings or codecs, I'm using the following easy workflow:

  • Upload the video to Vimeo or YouTube.
    There is even a handy 'Share' setting straight from Final Cut for us Apple users.
  • Go to Vimeo/YouTube and download it from there.
    This way I'm leveraging these web services' optimized and perfected compressing algorithms for web and also getting the smallest file size possible without too much quality loss. The target file should ideally be less than 3MB, otherwise it will slow down your browser, especially Firefox with Firebug installed.
  • The last step is to generate webm and ogg with Firefogg

You may find another process that works best for you, but this is what works for me.

Using BigVideo.js to display video

We will need to include the libraries:

<script src="//"></script>
<script src=""></script>
<script src=""></script>

And the following javascript:

var BV = new $.BigVideo({
  controls: false,
  forceAutoplay: true,
  container: $('#video')
BV.init();[
  { type: "video/webm", src: "" },
  { type: "video/mp4", src: "" }
], {ambient: true});

Please, note the "ambient:true" setting. This setting does the trick of playing the video in the background.

Mobile and Tablet Support

The sad truth is that video backgrounds are not supported on touch devices, because mobile browsers do not allow video autoplay. Instead there will be a "play" button underneath your content and the user will need to tap it to activate the ambient video. Not so ambient anymore, right? The best option for now is to use a full-screen image instead of the video, as described here.

Hope you enjoyed the blog post. Let me know your thoughts!

published by (Josh Lavin) on 2015-08-03 11:30:00 in the "perl" category

For larger client projects, I find it helpful to maintain a list of tasks, with the ability to re-order, categorize, and mark tasks as completed. Add in the ability to share this list with coworkers or project owners, and you have a recipe for a better record and task-list for development.

I had been using Pivotal Tracker for this purpose, but I found a lot of its features were too complicated for a small team. On the simpler side, Trello offers project "boards" that meet many needs for project management. Plus, you can accomplish a lot with the free level of Trello.

No import

After being convinced that switching from Pivotal to Trello was the right move for my current project, I was dismayed to find that Trello offers no Import functionality, at least none that I could find for front-end users. (They do have an API, but I didn't relish taking time to learn it.) I could easily export my Pivotal project, but how to get those tasks into Trello cards?

One idea

In my search of Trello for an import feature, I found a feature called Email-to-board. Trello provides a custom email address for each Board you create, allowing you to send emails to this address, containing information for a new Trello card. (This email address is unique for each board, containing random letters and numbers, so only the Board owner can use it.)

What if I wrote a quick script that processed a Pivotal CSV export file, and sent an email to Trello for each row (task) in the file? The script might send out quite a few emails, but would it work? Time to try it.

Perl to the rescue

I started cooking up a simple Perl script to test the idea. With the help of some CPAN modules to easily process the CSV file and send the emails, I landed on something that worked. After running the script, each row in the CSV export became an email to my Trello board, containing the item's title, description, estimate (difficulty level), and list of tasks required to complete it.
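For the curious, the idea boils down to one email per CSV row. Here is a rough Python sketch of the concept only; the actual script is written in Perl, and the column names and board address below are made-up placeholders, not Pivotal's exact export headers or a real Trello address:

```python
import csv
import io
from email.message import EmailMessage

def rows_to_emails(csv_text, board_address):
    """Build one email message per exported row (without sending).
    Column names ('Title', 'Description') are illustrative assumptions."""
    emails = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        msg = EmailMessage()
        msg["To"] = board_address
        msg["Subject"] = row.get("Title", "(untitled)")
        msg.set_content(row.get("Description", ""))
        emails.append(msg)
    return emails

export = "Title,Description\nFix login,Users cannot log in\nAdd search,Basic keyword search"
emails = rows_to_emails(export, "")
assert len(emails) == 2
assert emails[0]["Subject"] == "Fix login"
```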

The script should work for most exports from Pivotal Tracker, and I have published it to my GitHub account, in case it is helpful for others who decide to move to Trello.

Find the pivotal2trello script on GitHub.

If you happen to try it, let me know if you experience any problems, or have any suggestions!

published by (Jon Jensen) on 2015-07-31 16:50:00 in the "company" category

We are excited to announce an expansion of End Point's ecommerce clientele and our developer ranks! The ecommerce consulting company Perusion has joined End Point. Perusion was founded in 2002 by Mike Heins and Greg Hanson. It quickly became a small powerhouse in the open source ecommerce space, focusing on Interchange, Perl, and MySQL on Linux. We were pleased to welcome Perusion in a merger with End Point at the beginning of July.

Mike Heins is the original creator of MiniVend in 1996. In 2000, Mike's consultancy and MiniVend were acquired by the ecommerce startup Akopia. With numerous improvements, including the addition of a new full-featured administrative back-office, the new open source ecommerce platform Interchange was created. Akopia was acquired by Red Hat in 2001, and in 2002 the Interchange project became independent, led by its creators and a group of other open source developers who maintain it to this day.

Greg Hanson is a serial entrepreneur and business manager and has worked extensively with data systems of many types. In the mid-1990s he started a computer products ecommerce company, Valuemedia, and oversaw every aspect of its evolution. Greg joined Mike to launch Perusion in 2002, and is now a client consultant and developer. He has shepherded several businesses from small mom & pop operations into strong companies providing goods and services around the world.

Josh Lavin began creating websites professionally in 1998 at his consultancy Kingdom Design. He grew his work into the ecommerce space, helping many companies sell their products online for the first time. Josh joined Perusion in 2007, bringing skills in marketing and user experience that are just as important as his development abilities. In recent years he has enjoyed moving client sites over to responsive front-end designs that work well on desktop, tablet, and mobile phone.

Perusion's development and hosting clients have joined End Point as well. Some of the noteworthy sites that bear mentioning here are American Welding Society, Bluestone Perennials, Vervanté, Bulk Herb Store, Northern Sun, Penn Herb Company, Air Delights, and Solar Pathfinder.

At End Point we got our start in the industry in 1995 by creating dynamic database-backed websites for our clients. Our tools of choice in those early days were Linux, Apache, msql and MySQL, and Perl. In the late 1990s we began focusing more on ecommerce websites specifically, and we added MiniVend and Interchange to the mix.

Later we branched out into PostgreSQL, Ruby on Rails and Spree, Python and Django, Perl Dancer, NodeJS, and other platforms, while we continued to support and enhance Interchange and the sites running on it. Today we still host and develop many successful ecommerce sites running Interchange, taking many thousands of orders worth millions of dollars every day. So we are delighted to have Perusion join us as we continue to grow the Interchange-based part of our business.

We have already seen constructive collaboration in this merger, with longtime End Point employees able to add more helping hands to Perusion projects and further breadth of capabilities and depth of support, Perusion developers bringing their expertise to bear, and Perusion contacts leading to new projects and new business.

Perusion has always believed in going above and beyond the call of duty and has made their clients' success their own goal. Likewise, we at End Point feel this is more than a business, and we value the personal relationships we have developed with each other and our clients over the years. We look forward to the new possibilities that are now available through this change, and are glad to welcome Perusion on board at End Point!

published by (Kent K.) on 2015-07-31 13:00:00 in the "defaults" category

Recently, I needed to add some functionality to an older section of code. This code initialized and passed around a reasonably sized set of various hashes, all with similar keys. As those hashes were accessed and manipulated, there were quite a few lines of code devoted to addressing boundary conditions within those hashes. For example, an if/else statement setting a default value for a given key if it didn't already exist. With all the added safety checks, the main method dragged on for several screens' worth of code. While puttering around amidst this section, I figured I'd be a good little Boy Scout and leave my campsite better than when I found it.

I figured a fairly easy way to do that would be to eliminate the need for all the extra if/else clauses lying around. All they were really doing was ensuring we ended up with a minimum set of hash keys. I decided to turn all the various hashes into instances of a very simple class inheriting from Ruby on Rails' HashWithIndifferentAccess along with some basic key management functionality.

My first draft came out looking something like:

class MyHash < HashWithIndifferentAccess
  def initialize(constructor = {})
    super
    self[:important_key_1] ||= "default 1"
    self[:important_key_2] ||= "default 2"
  end
end
This seemed to work fine. I didn't need to worry about ensuring "important keys" were present anymore. And it was perfectly viable to pass in one of the important keys as part of the initialization.

I soon discovered in my test suite that my code did not do exactly what I intended it to do. In the tests, I wanted to ensure several of my hash keys came out with the right values. I made use of MyHash#slice to ensure I ended up with the right subset of values for my given test. However, no matter what I did, I could not weed out the important keys:

1.9.3 :003 >{foo: 'bar', bar: 'lemon'}).slice(:bar)
=> {"important_key_1"=>"default 1", "important_key_2"=>"default 2", "bar"=>"lemon"}
I admit I was quite perplexed by this. I tried several re-writes of the initialize method looking for some version that didn't exhibit this strange slice behavior. Finally, I took to the Ruby on Rails and Ruby docs.

Looking at the source for slice, I found the problem:

The method slice calls "new" (which includes the default values) to create another object to avoid clobbering the one the method is called on. Since I didn't feel like writing my own custom slice method, or trying to monkey patch Rails, I realized I was beaten. I needed to find a new solution.
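The gotcha is easy to reproduce outside Rails. Here is the same trap in a plain Python dict subclass (an illustrative analogue I wrote for this discussion, not the HashWithIndifferentAccess code):

```python
class MyDict(dict):
    """A dict that guarantees a couple of default keys, mirroring
    the MyHash idea from the post."""
    def __init__(self, data=None):
        super().__init__(data or {})
        self.setdefault("important_key_1", "default 1")
        self.setdefault("important_key_2", "default 2")

    def slice(self, *keys):
        # Like the Rails slice: it builds the result via the subclass
        # constructor, which re-applies the defaults.
        return type(self)({k: self[k] for k in keys if k in self})

d = MyDict({"foo": "bar", "bar": "lemon"})
s = d.slice("bar")
# The default keys sneak back into the sliced copy:
assert s == {"important_key_1": "default 1",
             "important_key_2": "default 2",
             "bar": "lemon"}
```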

After a little thought, I came up with this:

class MyHash < HashWithIndifferentAccess
  def = {})
    h = new(constructor)
    h[:important_key_1] ||= "default 1"
    h[:important_key_2] ||= "default 2"
    h
  end
end
This does not fix the problem, but manages to sidestep it. Most Ruby on Rails veterans will be familiar with methods called "build", so it shouldn't be too clumsy to work with. I replaced all the entries in my code that called with and went on my merry way.

published by (Emanuele 'Lele' Calo') on 2015-07-30 13:54:00 in the "image" category

After a fairly good experience with dnote installed on our own servers as an encrypted notes sharing service, my team decided that it would be nice to have a similar service for images.

We found a nice project called that is based on NodeJS, Python, Redis and a lot of client-side JavaScript.

The system is divided into two components: the HTML/JS frontend and a Python FastCGI API.

Unfortunately the documentation is still in a very early stage, lacking a meaningful structure and a lot of needed information.

Here's an overview of the steps we followed to set it up on our own server behind nginx.

First of all, we decided that we wanted to have as much as possible running as, and confined to, a regular user, which is always a good idea with such young and potentially vulnerable tools. We chose to use the imgbi user.

Then, since we wanted to keep the root user environment (and system status) as clean as possible, we also decided to use pyenv. To be conservative we chose the latest stable Python 2.7 release, 2.7.10.

git clone ~/.pyenv
echo 'export PYENV_ROOT="$HOME/.pyenv"' >> ~/.bash_profile
echo 'export PATH="$PYENV_ROOT/bin:$PATH"' >> ~/.bash_profile
echo 'eval "$(pyenv init -)"' >> ~/.bash_profile
exec $SHELL -l
pyenv install -l  | grep 2\.7
pyenv install 2.7.10
pyenv global 2.7.10
pyenv version
which python
python --version

In order to use, we also needed NodeJS and following the same approach we chose to use nvm and install the latest NodeJS stable version:

curl -o- | bash
nvm install stable
nvm list
nvm use stable
nvm alias default stable
node --version

As a short note on the bad practice of blindly running:

curl -o- https://some_obscure_link_or_not | bash

We want to add that we do not endorse this practice, as it's dangerous and exposes your system to many security risks. On the other hand, it's true that cloning the source via Git and compiling/installing it blindly is not much safer, so it always comes down to how much you trust the peer review of the project you're about to use. And at least with an https URL you should be talking to the destination you want, whereas an http URL is much more dangerous.

Furthermore, going through the entire Python and NodeJS installation as a regular user was far beyond the scope of this post, and the steps proposed here assume that you're doing everything as the regular user, except where specifically stated otherwise.

Anyway after that we updated pip and then installed all the needed Python modules:

pip install --upgrade pip
pip install redis m2crypto bcrypt pysha3 zbase62 pyutil flup

Then it's time to clone the actual code from the GitHub repo, install a few missing dependencies and then use the bower and npm .json files to add the desired packages:

git clone
npm install -g bower grunt grunt-cli grunt-multiresize
npm install -g grunt-webfont --save-dev
npm install
bower install

We also faced an issue which made Grunt fail to start correctly: Grunt was complaining about an "undefined property" called "prototype". If you happen to have the same problem, just type:

cd node_modules/grunt-connect-proxy/node_modules/http-proxy
npm install eventemitter3@0.1.6
cd -

That'll install the eventemitter3 NodeJS package module locally to the grunt-connect-proxy module, so as to overcome the compatibility issue which in turn causes the error mentioned above.

You should use your favourite editor to change the file config.json, which basically contains all your local needed configuration. In particular our host is not exposed on the I2P or Tor network, so we "visually" disabled those options.

# lines with "+" needs to be replace the ones starting with a "-"
-  "name": "",
+  "name": " - End Point image sharing service",

-  "maxSize": "3145728",
+  "maxSize": "32145728",

-  "clearnet": "",
+  "clearnet": "https://imgbi.example",

-  "i2p": "http://imgbi.i2p",
+  "i2p": "http://NOTAVAILABLE.i2p",

-  "tor": "http://imgbifwwqoixh7te.onion",
+  "tor": "http://NOTAVAILABLE.onion",

Save and close the file. At this point you should be able to run "grunt" to build the project, but if it fails on the multiresize task, just run

grunt --force

to ignore the warnings.

That's about everything you need for the frontend part, so it's now time to take care of the API.

git clone
cd /home/imgbi/

You now need to edit the two Python files which are the core of the API.

# edit
-upload_dir = '/home/'
+upload_dir = '/home/imgbi/'

Verify that you're not getting any Python import errors, due to missing modules or the like, by running the Python file directly.


If that's working okay, just create a symlink in the build directory in order to have the API-created files available to the frontend:

ln -s /home/imgbi/ /home/imgbi/

And then it's time to spawn the actual Python daemon:

spawn-fcgi -f /home/imgbi/ -a -p 1234

The file is used by a cronjob which periodically checks if there's any image/content that should be removed because its time has expired. First of all let's call the script directly and if there's no error, let's create the crontab:

python /home/imgbi/

crontab -e

@reboot spawn-fcgi -f /home/imgbi/ -a -p 1234
30 4 * * * python /home/imgbi/

It's now time to install nginx and Redis (if you haven't already done so), and then configure them. For Redis you can just follow the usual simple, basic installation and that'll be just fine. The same is true for nginx, but we'll add our configuration/vhost file content here as an example /etc/nginx/sites-enabled/imgbi.example.conf for everyone who may need it:

upstream imgbi-fastcgi {

server {
  listen 80;
  listen [::]:80;
  server_name imgbi.example;
  access_log /var/log/nginx/sites/imgbi.example/access.log;
  error_log /var/log/nginx/sites/imgbi.example/error.log;
  rewrite ^ https://imgbi.example/ permanent;

server {
  listen 443 ssl spdy;
  listen [::]:443 ssl spdy;
  server_name  imgbi.example;
  access_log /var/log/nginx/sites/imgbi.example/access.log;
  error_log /var/log/nginx/sites/imgbi.example/error.log;

  client_max_body_size 4G;

  include include/;

  add_header Strict-Transport-Security max-age=31536000;
  add_header X-Frame-Options SAMEORIGIN;
  add_header X-Content-Type-Options nosniff;
  add_header X-XSS-Protection "1; mode=block";

  location / {
    root /home/imgbi/;
  }

  location /api {
    fastcgi_param QUERY_STRING $query_string;
    fastcgi_param REQUEST_METHOD $request_method;
    fastcgi_param CONTENT_TYPE $content_type;
    fastcgi_param CONTENT_LENGTH $content_length;

    fastcgi_param SCRIPT_NAME "";
    fastcgi_param PATH_INFO $uri;
    fastcgi_param REQUEST_URI $request_uri;
    fastcgi_param DOCUMENT_URI $document_uri;
    fastcgi_param DOCUMENT_ROOT $document_root;
    fastcgi_param SERVER_PROTOCOL $server_protocol;

    fastcgi_param GATEWAY_INTERFACE CGI/1.1;
    fastcgi_param SERVER_SOFTWARE nginx/$nginx_version;

    fastcgi_param REMOTE_ADDR $remote_addr;
    fastcgi_param REMOTE_PORT $remote_port;
    fastcgi_param SERVER_ADDR $server_addr;
    fastcgi_param SERVER_PORT $server_port;
    fastcgi_param SERVER_NAME $server_name;
    fastcgi_param HTTPS on;

    fastcgi_pass imgbi-fastcgi;
    fastcgi_keep_conn on;

Well, that should be enough to get you started and at least have all the components in place. Enjoy your secure image sharing now.

published by (Jon Jensen) on 2015-07-29 00:35:00 in the "philosophy" category

A brief thought:

You may have heard the saying that nothing is more permanent than a temporary fix. Or that prototypes are things we just haven't yet recognized will be permanent. Or some variation on the theme.

As an illustration of this, I recently came across the initial commit to the source code repository of our website when we ported it to Ruby on Rails back in April 2007. Our then co-worker PJ's comment is a perfect example of how long-lasting some of our planned temporary work can be:

commit 2ee55da6ed953c049b3ef6f9f132ed3c1e0d4de9
Author: PJ Cabreras <>
Date:   Wed Apr 18 13:07:46 2007 +0000

    Initial test setup of repository for mkcamp testing -- will probably throw away later
    git-svn-id: file:///home/camp/endpoint/svnrepo/trunk@1 7e1941c4-622e-0410-b359-a11864f70de7

It's wise to avoid big architecture up front for experimental things we don't know the needed shape and size of. But we should plan on iterating and being agile (in the real basic sense of the word), because we may never have the chance to start over from scratch. And starting over from scratch is often ill-advised in any case.