
published by (Selvakumar Arumugam) on 2014-11-19 16:02:00 in the "big-data" category
The 11th edition of Open Source India was held in 2014 at Bengaluru, India. The two-day conference was filled with three parallel tracks of tech talks and workshops spread across various open source technologies.


In-depth look at Architecting and Building solutions using MongoDB

Aveekshith Bushan & Ranga Sarvabhouman from MongoDB started off the session with a comparison of storage hardware costs in earlier days and today. Earlier, storage hardware was very expensive, so the approach was to filter the data down before storing it in the database. We could only generate results from the filtered data, and we had no option to process the original source data. Now that storage has become cheap, we can store the raw data first, do all our filtering and processing, and then distribute it.

        Earlier:  Filter -> Store -> Distribute
        Now:      Store -> Filter -> Distribute

Since we are storing huge amounts of data, we need a processing system to handle and analyse the data in an efficient manner. Data today is growing at a tremendous rate, and the 3 Vs characterize the growth of (Big)Data: we need to handle a huge Volume of a Variety of data arriving with Velocity. MongoDB is built to satisfy these requirements.

MongoDB simply stores data as documents without any data type constraints, which helps it store huge amounts of data quickly. It leaves the constraint checks to the application level to increase storage speed on the database end, though it does recognise data types after the data is stored as a document. In simple words, the philosophy is: why check the same things (data types or other constraints) in two places, the application and the database?
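To make that philosophy concrete, here is a minimal sketch in plain Ruby (no MongoDB driver involved; `DB`, `store`, and `valid_user?` are made-up names for illustration): the storage layer accepts any document as-is, and the constraint checks live in the application.

```ruby
# The "database": accepts any document shape, no schema, no type checks.
DB = []

def store(document)
  DB << document   # storage layer does no validation at all
end

# Constraint checking happens once, at the application level.
def valid_user?(document)
  document[:name].is_a?(String) && document[:age].is_a?(Integer)
end

user = { name: "Asha", age: 30 }
store(user) if valid_user?(user)
```

The storage call stays fast because it does nothing but append; whether `age` must be an integer is decided in exactly one place.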

MongoDB stores all related data in a single document and fetches it in a single disk seek; avoiding multiple disk seeks results in faster retrieval of data. In a relational database, by contrast, the relations are stored in different tables, which leads to multiple disk seeks to retrieve the complete data of an entity. MongoDB doesn't support joins, but it has a Reference option to refer to another collection (table) without imposing foreign key constraints.
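The two modelling styles can be illustrated with plain Ruby hashes standing in for BSON documents (all of the names and values here are invented for the example):

```ruby
# Embedded style: the order, its customer, and its line items live in one
# document, retrievable in a single disk seek.
order_embedded = {
  _id: 1,
  customer: { name: "Asha", city: "Bengaluru" },
  items: [
    { sku: "A100", qty: 2 },
    { sku: "B200", qty: 1 }
  ]
}

# Reference style: the order stores only the customer's _id, much like a
# foreign key, but the database enforces no constraint and performs no join.
customers = { 7 => { _id: 7, name: "Asha", city: "Bengaluru" } }
order_referenced = { _id: 2, customer_id: 7, items: [{ sku: "A100", qty: 2 }] }

# The "join" is the application's job:
customer = customers[order_referenced[:customer_id]]
```

Embedding duplicates data across orders but keeps reads to one seek; references keep data normalized at the cost of an extra lookup.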

As per the DB-Engines rankings, MongoDB sits at the top of the NoSQL database world. The speakers also covered certain key features, which I remember from the session:
    • Sub-documents duplicate data, but the duplication helps performance (and since storage is cheap, it doesn't cost much)
    • Auto-sharding (Scalability)
    • Sharding helps parallel access to the system
    • Range Based Sharding 
    • Replica Sets (High availability)
    • Secondary indexes available
    • Indexes are the most tunable part of the MongoDB system
    • Partition across systems 
    • Rolling upgrades
    • Schema free
    • Rich document based queries
    • Read from secondary
When do you need MongoDB?
    • The data grows beyond the capacity of your relational database
    • You need high performance for online requests
Finally, the speakers emphasized understanding your use case clearly and choosing the right MongoDB features to get effective performance.

OpenStack Mini Conf

A special half-day OpenStack mini conference was organised in the second half of the first day. The talks ranged from the basics to an in-depth look at the OpenStack project. I have summarised all the talks here to give an idea of the OpenStack software platform.

OpenStack is an open source cloud computing platform that provisions Infrastructure as a Service (IaaS). The wonderful DevStack project makes it easy and fast to set up OpenStack in a development environment. The OpenStack project's well-written documentation clearly explains everything. In addition, anyone can contribute to OpenStack with the help of the How to Contribute guide; the project uses the Gerrit review system and the Launchpad bug tracking system.
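As a sketch of how little configuration DevStack needs, a minimal local.conf looks something like the following (the passwords are placeholder values, not recommendations):

```ini
[[local|localrc]]
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
```

Dropping this file into the devstack checkout and running ./ builds a single-machine OpenStack development environment.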

OpenStack has multiple components providing the various features of Infrastructure as a Service. Here is the list of OpenStack components and the purpose of each one.

Nova (Compute) - manages the pool of compute resources
Cinder (Block Storage) - provides the storage volume to machines
Neutron (Network) - manages the networks and IP addresses
Swift (Object Storage) - provides a distributed, highly available (replicated) storage system
Glance (Image) - provides a repository to store disk and server images
KeyStone (Identity) - enables the common authentication system across all components
Horizon (Dashboard) - provides GUI for users to interact with OpenStack components
Ceilometer (Telemetry) - provides the services usage and billing reports
Ironic (Bare Metal) - provisions bare metal instead of virtual machines
Sahara (Map Reduce) - provisions Hadoop clusters for big data processing

OpenStack services are often mapped to AWS services to better understand the purpose of each component, for example:

Horizon (Dashboard) - AWS Console
Sahara (Map Reduce) - Elastic MapReduce

Along with the overview of the OpenStack architecture, there were a couple of in-depth talks, which are listed below with slides.
That was a wonderful Day One of OSI 2014, and it helped me gain a better understanding of MongoDB and OpenStack.

published by (Marco Manchego) on 2014-11-18 01:16:00 in the "company" category

End Point Corporation is pleased to announce the official launch of its new website in Portuguese! The site officially signals the arrival of End Point's Liquid Galaxy in Brazil, and aims to provide service to all current and future customers in one of the largest and most dynamic markets in South America.

With a population of more than 200 million, Brazil is also a quick adopter of new technologies, with a considerable number of industry leaders that can benefit directly from deploying the Liquid Galaxy. This includes a solid commodities sector, a growing real estate market, tourism, and a vibrant media market, all strong candidates for the new technology.

Brazil is also the entry point to the South American market in general. We are confident that as we increase penetration in the Brazilian market, other opportunities in the region will follow. Dave Jenkins, our VP of Sales and Marketing, offers the following: "We're excited to see this expansion into Brazil. I always see great things coming out of São Paulo and Rio; I always attend the tech conferences there, which are always overbooked."

If you would like to know more about this technology, please contact us:

published by (Marco Manchego) on 2014-11-18 01:16:00 in the "company" category

End Point Corporation is pleased to announce the official launch of its new Brazilian Portuguese Liquid Galaxy website! The site officially signals the arrival of End Point's Liquid Galaxy to Brazil, and aims to provide service to all current and future customers in what is South America's largest and most dynamic market.

With a population over 200 million, Brazil is also a quick adopter of new technologies with sizeable industry sectors that can benefit directly from the implementation of a Liquid Galaxy. This includes a massive commodities sector, booming real estate, tourism and a vibrant media market, all of which are strong candidates for the technology.

Brazil is also a logical entry point into the larger South American market in general. We're confident that as we increase market penetration in Brazil, other opportunities in the region will soon follow. Dave Jenkins, our VP of Sales and Marketing, offers the following: "We're excited to see this expansion into Brazil. I always see great things coming out of São Paulo and Rio whenever I go there for tech conferences, which are always booked to overflowing levels."

If you have international business in South America, or are based in Brazil and would like to know more about this great technology, please contact us at

published by (Bianca Rodrigues) on 2014-11-14 17:39:00 in the "e-commerce" category


I recently started working with Spree and wanted to learn how to implement some basic features. I focused on one of the most common needs of any e-commerce business: adding sale functionality to products. To get a basic understanding of what was involved, I headed straight to the Spree Developer Guides. As I went through the directions, I realized they were written for the older Spree version 2.1, which led to me running into a few issues while using Spree's latest version, 2.3.4. I want to share what I learned, along with some tips to help you avoid the mistakes I made.


I'll assume you have the prerequisites it lists including Rails, Bundler, ImageMagick and the Spree gem. These are the versions I'm running on my Mac OS X:
  • Ruby: 2.1.2p95
  • Rails: 4.1.4
  • Bundler: 1.5.3
  • ImageMagick: 6.8.9-1
  • Spree: 2.3.4

What is Bundler? Bundler provides a consistent environment for Ruby projects by tracking and installing the exact gems and versions that are needed. You can read more about the benefits of using Bundler on their website. If you're new to Ruby on Rails and/or Spree, you'll quickly realize how useful Bundler is when updating your gems.
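For example, a Gemfile pinning the versions listed above might contain the following (a sketch; the version numbers are simply the ones from my setup):

```ruby
source ''

gem 'rails', '4.1.4'
gem 'spree', '2.3.4'
```

Running bundle install resolves and installs exactly these versions, and Gemfile.lock records them so every other machine gets the same set.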

After you've successfully installed the necessary tools for your project, it's time to create our first Rails app, which will then be used as a foundation for our simple Spree project, called mystore.

Let's create our app

Run the following commands:

$ rails new mystore
$ cd mystore
$ gem install spree_cmd

*Note: you may get a warning that you need to run bundle install before trying to start your application, since spree_gateway.git isn't checked out yet. Go ahead and follow those directions; I'll wait.

Spree-ify our app

We can add the e-commerce platform to our Rails app by running the following command:

spree install --auto-accept

If all goes well, you should get a message that says, "Spree has been installed successfully. You're all ready to go! Enjoy!". Now the fun part - let's go ahead and start our server to see what our demo app actually looks like. Run rails s to start the server and open up a new browser page pointing to the URL localhost:3000.
*Note - when you navigate to localhost:3000, watch your terminal - you'll see a lot of processes running in the background as the page loads simultaneously in your browser window. It can be pretty overwhelming, but as long as you get a "Completed 200 OK" message in your terminal, you should be good to go! See it below:

Our demo app actually comes with an admin interface ready to use. Head to your browser window and navigate to http://localhost:3000/admin. Log in with the default admin username Spree instructs you to use and the password spree123.

Once you login to the admin screen, this is what you should see:

Once you begin to use Spree, you'll soon find that the most heavily used areas of the admin panel include Orders, Products, Configuration and Promotions. We'll be going into some of these soon.

Extensions in 3.5 steps

The next part of the Spree documentation suggests adding the spree_fancy extension to our store to update the look and feel of the website, so let's go ahead and follow the next few steps:

Step 1: Update the Gemfile

We can find our Gemfile by going back to the terminal, and within the mystore directory, type ls to see a list of all the files and subdirectories within the Spree app. You will see the Gemfile there - open it using your favorite text editor. Add the following line to the last line of your Gemfile, and save it:
gem 'spree_fancy', :git => 'git://', :branch => '2-1-stable'

Notice the branch it mentions is 2-1-stable. Since you just installed Spree, you are most likely using the latest version, 2-3-stable. I changed my branch in the above gem to '2-3-stable' to reflect the Spree version I'm currently using. After completing this step, run bundle install to install the gem using Bundler.

Now we need to copy over the migrations and assets from the spree_fancy extension by running this command in your terminal within your mystore application:

$ bundle exec rails g spree_fancy:install

Step 1.5: We've hit an error!

At this point, you've probably hit a LoadError, and we can no longer see our beautiful Spree demo app, instead getting an error page which says "Sprockets::Rails::Helper::AssetFilteredError in Spree::Home#index" at the top. How do we fix this?

Within your mystore application directory, open the config/initializers/assets.rb file and edit the last line of code by uncommenting it and typing:

Rails.application.config.assets.precompile += %w( bx_loader.gif )

Now restart your server and you will see your new theme!

Step 2: Create a sales extension

Now let's see how to create an extension instead of using an existing one. According to the Spree tutorial, we first need to generate an extension - remember to run this command from a directory outside of your Spree application:

$ spree extension simple_sales
Once you do that, cd into your spree_simple_sales directory. Next, run bundle install to update your Spree extension.

Now you can create a migration that adds a sale_price column to variants using the following command:

bundle exec rails g migration add_sale_price_to_spree_variants sale_price:decimal

Once your migration is complete, navigate in your terminal to db/migrate/XXXXXXXXXXXX_add_sale_price_to_spree_variants.rb and add in the changes as shown in the Spree tutorial:

class AddSalePriceToSpreeVariants < ActiveRecord::Migration
  def change
    add_column :spree_variants, :sale_price, :decimal, :precision => 8, :scale => 2
  end
end
Now let's switch back to our mystore application so that we can add our extension before continuing any development. Within mystore, add the following to your Gemfile:
gem 'spree_simple_sales', :path => '../spree_simple_sales'
You will have to adjust the path ('../spree_simple_sales') depending on where you created your sales extension.

Now it's time to bundle install again, so go ahead and run that. Now we need to copy our migration by running this command in our terminal:

$ rails g spree_simple_sales:install

Step 3: Adding a controller Action to HomeController

Once the migration has been copied, we need to extend the functionality of Spree::HomeController and add an action that selects "on sale" products. Before doing that, we need to change our .gemspec file within the spree_simple_sales directory (remember: this is outside of our application directory). Open the spree_simple_sales.gemspec file in your text editor and add the following line to the list of dependencies:

s.add_dependency 'spree_frontend'

Run bundle.

Run $ mkdir -p app/controllers/spree to create the directory structure for our controller decorator. This is where we will create a new file called home_controller_decorator.rb and add the following content to it:

module Spree
  HomeController.class_eval do
    def sale
      @products = Product.joins(:variants_including_master).where('spree_variants.sale_price is not null').uniq
    end
  end
end

As Spree explains it, this script will select just the products that have a variant with a sale_price set.

Next step - add a route to this sales action in our config/routes.rb file. Make sure your routes.rb file looks like this:

Spree::Core::Engine.routes.draw do
  get "/sale" => "home#sale"
end

Let's set a sale price for the variant

Normally, to change a variant attribute, we could do it through the admin interface, but we haven't created this functionality yet. This means we need to open up our Rails console:
*Note - you should be in the mystore directory

Run $ rails console

The next steps are taken directly from the Spree documentation:

"Now, follow the steps I take in selecting a product and updating its master variant to have a sale price. Note, you may not be editing the exact same product as I am, but this is not important. We just need one 'on sale' product to display on the sales page."

> product = Spree::Product.first
=> #<Spree::Product id: 107377505, name: "Spree Bag", description: "Lorem ipsum dolor sit amet, consectetuer adipiscing...", available_on: "2013-02-13 18:30:16", deleted_at: nil, permalink: "spree-bag", meta_description: nil, meta_keywords: nil, tax_category_id: 25484906, shipping_category_id: nil, count_on_hand: 10, created_at: "2013-02-13 18:30:16", updated_at: "2013-02-13 18:30:16", on_demand: false>

> variant = product.master
=> #<Spree::Variant id: 833839126, sku: "SPR-00012", weight: nil, height: nil, width: nil, depth: nil, deleted_at: nil, is_master: true, product_id: 107377505, count_on_hand: 10, cost_price: #<BigDecimal:7f8dda5eebf0,'0.21E2',9(36)>, position: nil, lock_version: 0, on_demand: false, cost_currency: nil, sale_price: nil>

> variant.sale_price = 8.00
=> 8.0

> !
=> true

Hit Ctrl-D to exit the console.

Now we need to create the page that renders the product that is on sale. Let's create a view to display these "on sale" products.

Create the required views directory by running: $ mkdir -p app/views/spree/home

Create a file in your new directory called sale.html.erb and add the following to it:

<%= render 'spree/shared/products', :products => @products %>

Now start your rails server again and navigate to localhost:3000/sale to see the product you listed on sale earlier! Exciting stuff, isn't it? The next step is to actually reflect the sale price instead of the original price by fixing our sales price extension using Spree Decorator.

Decorate your variant

Create the required directory for your new decorator within your mystore application: $ mkdir -p app/models/spree

Within your new directory, create a file called variant_decorator.rb and add:

module Spree
  Variant.class_eval do
    alias_method :orig_price_in, :price_in
    def price_in(currency)
      return orig_price_in(currency) unless sale_price.present?
      Spree::Price.new(:variant_id =>, :amount => self.sale_price, :currency => currency)
    end
  end
end

The original price_in method now has the alias orig_price_in. The new price_in returns the original price unless a sale_price is present, in which case the sale price on the product's master variant is returned.

In order to ensure that our modification to the core Spree functionality works, we need to write a couple of unit tests for variant_decorator.rb. We need a full Rails application present to test it against, so we can create a barebones test_app to run our tests against.

Run the following command from the root directory of your EXTENSION: $ bundle exec rake test_app

It will begin the process by saying "Generating dummy Rails application..." - great! You're on the right path.

Once you finish creating your dummy Rails app, run the rspec command and you should see the following output: No examples found.

Finished in 0.00005 seconds
0 examples, 0 failures

Now it's time to start adding some tests by replicating your extension's directory structure in the spec directory: $ mkdir -p spec/models/spree

In your new directory, create a file called variant_decorator_spec.rb and add this test:

require 'spec_helper'

describe Spree::Variant do
  describe "#price_in" do
    it "returns the sale price if it is present" do
      variant = create(:variant, :sale_price => 8.00)
      expected = Spree::Price.new(:variant_id =>, :currency => "USD", :amount => variant.sale_price)

      result = variant.price_in("USD")

      result.variant_id.should == expected.variant_id
      result.amount.to_f.should == expected.amount.to_f
      result.currency.should == expected.currency
    end

    it "returns the normal price if it is not on sale" do
      variant = create(:variant, :price => 15.00)
      expected = Spree::Price.new(:variant_id =>, :currency => "USD", :amount => variant.price)

      result = variant.price_in("USD")

      result.variant_id.should == expected.variant_id
      result.amount.to_f.should == expected.amount.to_f
      result.currency.should == expected.currency
    end
  end
end
Deface overrides

Next we need to add a field to our product admin page, so we don't always have to go through the Rails console to update a product's sale_price. If we directly override the view that Spree provides, then whenever Spree updates the view in a new release our customization will be lost, and we'd have to add it back in to stay up to date.

A better way to override views is to use Deface, a Rails library that lets you customize a view without directly editing the underlying view file. All view customizations live in ONE location, app/overrides, which makes sure your app is always using the latest implementation of the view provided by Spree.

  1. Go to mystore/app/views/spree and create an admin/products directory and create the file _form.html.erb.
  2. Copy the full file NOT from Spree's GitHub but from your Spree backend. You can think of your Spree backend as the area to edit your admin (among other things) - the spree_backend gem contains the most up-to-date _form.html.erb - if you use the one listed in the documentation, you will get some MethodErrors on your product page.

In order to find the _form.html.erb file in your spree_backend gem, navigate to your app, and within that, run the command: bundle show spree_backend

The result is the location of your spree_backend gem. Now cd into that location and navigate to app/views/spree/admin/products - this is where you will find the correct _form.html.erb. Copy the contents of this file into the newly created _form.html.erb within the directory structure you just created in your application: mystore/app/views/spree/admin/products.

Now we want to add a field container for the sale price after the price field container, so we need to create another override. Make a new file in your application's app/overrides directory called add_sale_price_to_product_edit.rb and add the following content:

Deface::Override.new(:virtual_path => 'spree/admin/products/_form',
  :name => 'add_sale_price_to_product_edit',
  :insert_after => "erb[loud]:contains('text_field :price')",
  :text => "
    <%= f.field_container :sale_price do %>
      <%= f.label :sale_price, raw(Spree.t(:sale_price) + content_tag(:span, ' *')) %>
      <%= f.text_field :sale_price, :value =>
        number_to_currency(@product.sale_price, :unit => '') %>
      <%= f.error_message_on :sale_price %>
    <% end %>
  ")
The last step is to update our model in order to get an updated product edit form. Create a new file in your application's app/models/spree directory called product_decorator.rb and add the following content:

module Spree
  Product.class_eval do
    delegate_belongs_to :master, :sale_price
  end
end

Now you can check whether it worked by heading to http://localhost:3000/admin/products and editing one of the products. Once you're on the product edit page, you should see a new field container called SALE PRICE. Add a sale price in the empty field and click Update. Once completed, navigate to http://localhost:3000/sale to find an updated list of products on sale.


Congratulations, you've created the sales functionality! If you're using Spree 2.3 to create a sales functionality for your application, I would love to know what your experience was like. Good luck!

published by (Josh Williams) on 2014-11-13 15:25:00 in the "ssl" category

The encryption times, they are a-changin'.

Every once in a while I'll take a look at the state of SNI, in the hopes that we're finally ready for putting it to wide-scale use.
It started a few years back when IPv6 got a lot of attention, though in reality very few end-user ISPs had IPv6 connectivity at that time.  (And very few still do!  But that's another angry blog post.)  So, essentially, IPv4 was still the only option, and thus SNI was still important.

Then earlier this year Microsoft dropped [public] support for Windows XP.  Normally this is one of those things that would be pretty far off my radar, but Internet Explorer on XP is one of the few clients* that doesn't support SNI.  So at that time, with hope in my heart, I ran a search through the logs on a few of our more active servers, only to find that roughly 5% of the hits were MSIE on Windows XP.  So much for that.

(* Android < 3.0 has the same problem, incidentally.  But in contrast it constituted only 0.2% of the hits, so I'm not as worried about the lack of support in that case.)

Now in fairly quick succession a couple other things have happened: SSLv3 is out, and SSL certificates with SHA-1 signatures are out.  This has me excited.  I'll tell you why in a moment.

First, now that I've written "SNI" four times at this point I should probably tell you that it stands for Server Name Indication, and basically means the client sends the intended server name very early in the connection process.  That gives the server the opportunity to select the correct certificate for the given name, and present it to the client.

If at this point you're yelling "of course!" at the screen, press Ctrl-F and search for "SSLv3" below.

For the rest, pull up a chair, it's time for a history lesson.

Back in the day, when a web browser wanted to connect to a site it performed a series of steps: it looked up the given name in DNS to get an IP address, connected to that IP address, and then requested the path in the form of "GET /index.html".  Quite elegant, fairly straightforward.  And notice the name only matters to the client, which uses it for the DNS look-up.  To the server it matters not at all.  The server accepts a connection on an IP address and responds to the request for a specific path.

A need arose for secure communication.  Secure Socket Layer (SSL) establishes an encrypted channel over which private information can be shared.  In order to fight man-in-the-middle attacks, a certificate exchange takes place. When the connection is made the server offers up the certificate and the client (among other things I'm glossing over) confirms the name on the certificate matches the name it thinks it tried to connect to.  Note that the situation is much the same as above, in that the client cares about the name, but the server just serves up what it has associated with the IP address.

Around the same time, the Host header appears.   Finally the browser has a way to send the name of the site it's trying to access over to the server; it just makes it part of the request.  What a simple thing it is, and what a world that opens up on the server side.  Instead of associating a single site per IP address, a single web server listening on one address can examine the Host header and serve up a virtually unlimited number of completely distinct sites.

However there's a problem.  At the time, both were great advances.  But, unfortunately, were mutually exclusive.  SSL is established first, immediately upon connection, and after which the HTTP communication happens over the secure channel.  But the Host header is part of the HTTP request.  Spot the problem yet?  The server has to serve up a certificate before the client has an opportunity to tell the server what site it wants.  If the server has multiple and it serves up the wrong one, the name doesn't match what the client expects, and at best it displays a big, scary warning to the end user, or at worst refuses to continue with communication.

There are a few work-arounds, but this is already getting long and boring.  If you're really curious, search for "Subject Alternative Name" and try to imagine why it's an inordinately expensive solution when the list of sites a server needs to support changes.

So for a long time, that was the state of things.  And by a long time, I mean almost 20 years, from when these standards were released.  In computing, that's a long time.  Thus I'd hoped that by now, SNI would be an option as the real solution.

Fast forward to the last few months.  We've had the news that SSLv3 isn't to be trusted, and the news that SHA-1 signatures on SSL certificates are pretty un-cool.  SHA-256 is in, and major CAs are now using that signature by default.  Why does this have me excited?  In the case of the former, Windows XP pre-SP3 only supports up to SSLv3, so any site that's mitigated the POODLE vulnerability is already excluding those clients.  Similarly, in the case of the latter, pre-IE8 clients are excluded by sites that have implemented SHA-2 certificates.

Strictly speaking we're not 100% there, as a fully up-to-date Internet Explorer on Windows XP is still compatible with these recent SSL ecosystem changes.  But the sun is setting on this platform, and maybe soon we'll be able to start putting this IPv4-saving technology into use.


published by (Joshua Tolley) on 2014-11-12 23:33:00 in the "postgres" category

From Flickr user Jitze Couperus

When debugging a problem, it's always frustrating to get sidetracked hunting down the relevant logs. PostgreSQL users can select any of several different ways to handle database logs, or even choose a combination. But especially for new users, or those getting used to an unfamiliar system, just finding the logs can be difficult. To ease that pain, here's a key to help dig up the correct logs.

Where are log entries sent?

First, connect to PostgreSQL with psql, pgadmin, or some other client that lets you run SQL queries, and run this:
foo=# show log_destination ;
(1 row)
The log_destination setting tells PostgreSQL where log entries should go. In most cases it will be one of four values, though it can also be a comma-separated list of any of those four values. We'll discuss each in turn.


Syslog is a complex beast, and if your logs are going here, you'll want more than this blog post to help you. Different systems have different syslog daemons, those daemons have different capabilities and require different configurations, and we simply can't cover them all here. Your syslog may be configured to send PostgreSQL logs anywhere on the system, or even to an external server. For your purposes, though, you'll need to know what "ident" and "facility" you're using. These values tag each syslog message coming from PostgreSQL, and allow the syslog daemon to sort out where the message should go. You can find them like this:
foo=# show syslog_facility ;
(1 row)

foo=# show syslog_ident ;
(1 row)
Syslog is often useful, in that it allows administrators to collect logs from many applications into one place, to relieve the database server of logging I/O overhead (which may or may not actually help anything), or any number of other interesting rearrangements of log data.


For PostgreSQL systems running on Windows, you can send log entries to the Windows event log. You'll want to tell Windows to expect the log values, and what "event source" they'll come from. You can find instructions for this operation in the PostgreSQL documentation discussing server setup.


This is probably the most common log destination (it's the default, after all) and can get fairly complicated in itself. Selecting "stderr" instructs PostgreSQL to send log data to the "stderr" (short for "standard error") output pipe most operating systems give every new process by default. The difficulty is that PostgreSQL or the applications that launch it can then redirect this pipe to all kinds of different places. If you start PostgreSQL manually with no particular redirection in place, log entries will be written to your terminal:
[josh@eddie ~]$ pg_ctl -D $PGDATA start
server starting
[josh@eddie ~]$ LOG:  database system was shut down at 2014-11-05 12:48:40 MST
LOG:  database system is ready to accept connections
LOG:  autovacuum launcher started
LOG:  statement: select syntax error;
ERROR:  column "syntax" does not exist at character 8
STATEMENT:  select syntax error;
In these logs you'll see the logs from me starting the database, connecting to it from some other terminal, and issuing the obviously erroneous command "select syntax error". But there are several ways to redirect this elsewhere. The easiest is with pg_ctl's -l option, which essentially redirects stderr to a file, in which case the startup looks like this:
[josh@eddie ~]$ pg_ctl -l logfile -D $PGDATA start
server starting
Finally, you can also tell PostgreSQL to redirect its stderr output internally, with the logging_collector option (which older versions of PostgreSQL named "redirect_stderr"). This can be on or off, and when on, collects stderr output into a configured log directory.
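For reference, the relevant postgresql.conf settings look like this. This is only a sketch using the stock defaults; your values may differ:

```
# postgresql.conf -- collect stderr output into log files
logging_collector = on
log_directory = 'pg_log'                          # relative to the data directory
log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'   # strftime-style pattern
```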

So if you see log_destination set to "stderr", a good next step is to check logging_collector:
foo=# show logging_collector ;
 logging_collector
-------------------
 on
(1 row)
In this system, logging_collector is turned on, which means we have to find out where it's collecting logs. First, check log_directory. In my case, below, it's an absolute path, but by default it's the relative path "pg_log". This is relative to the PostgreSQL data directory. Log files are named according to a pattern in log_filename. Each of these settings is shown below:
foo=# show log_directory ;
(1 row)

foo=# show data_directory ;
(1 row)

foo=# show log_filename ;
(1 row)
Documentation for each of these options, along with the settings governing log rotation, is available in the PostgreSQL documentation.

If logging_collector is turned off, you can still find the logs using the /proc filesystem, on operating systems equipped with one. First you'll need to find the process ID of a PostgreSQL process, which is simple enough:
foo=# select pg_backend_pid() ;
 pg_backend_pid
----------------
          31113
(1 row)
Then, check /proc/YOUR_PID_HERE/fd/2, which is a symlink to the log destination:
[josh@eddie ~]$ ll /proc/31113/fd/2
lrwx------ 1 josh josh 64 Nov  5 12:52 /proc/31113/fd/2 -> /var/log/postgresql/postgresql-9.2-local.log
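The same fd/2 symlink exists for every process on a Linux system, so you can check any backend the same way. As a quick demonstration (using the current shell's PID rather than a PostgreSQL backend):

```shell
# Resolve where a process's stderr (file descriptor 2) points.
# $$ is the current shell; substitute any PostgreSQL backend PID.
readlink "/proc/$$/fd/2"
```

This assumes a /proc filesystem, so it works on Linux but not on, say, macOS.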


csvlog

The "csvlog" mode creates logs in CSV format, designed to be easily machine-readable. In fact, the PostgreSQL documentation even provides a handy table definition if you want to slurp the logs into your database. CSV logs are produced in a fixed format the administrator cannot change, but they include fields for everything available in the other log formats. For these to work, you need to have logging_collector turned on; without logging_collector, the logs simply won't show up anywhere. But when configured correctly, PostgreSQL will create CSV format logs in the log_directory, with file names mostly following the log_filename pattern. Here's my example database, with log_destination set to "stderr, csvlog" and logging_collector turned on, just after I start the database and issue one query:
[josh@eddie ~/devel/pg_log]$ ll
total 8
-rw------- 1 josh josh 611 Nov 12 16:30 postgresql-2014-11-12_162821.csv
-rw------- 1 josh josh 192 Nov 12 16:30 postgresql-2014-11-12_162821.log
The CSV log output looks like this:
[josh@eddie ~/devel/pg_log]$ cat postgresql-2014-11-12_162821.csv 
2014-11-12 16:28:21.700 MST,,,2993,,5463ed15.bb1,1,,2014-11-12 16:28:21 MST,,0,LOG,00000,"database system was shut down at 2014-11-12 16:28:16 MST",,,,,,,,,""
2014-11-12 16:28:21.758 MST,,,2991,,5463ed15.baf,1,,2014-11-12 16:28:21 MST,,0,LOG,00000,"database system is ready to accept connections",,,,,,,,,""
2014-11-12 16:28:21.759 MST,,,2997,,5463ed15.bb5,1,,2014-11-12 16:28:21 MST,,0,LOG,00000,"autovacuum launcher started",,,,,,,,,""
2014-11-12 16:30:46.591 MST,"josh","josh",3065,"[local]",5463eda6.bf9,1,"idle",2014-11-12 16:30:46 MST,2/10,0,LOG,00000,"statement: select 'hello, world!';",,,,,,,,,"psql"

published by (Greg Sabino Mullane) on 2014-11-10 22:07:00 in the "git" category

When using git, being able to track down a particular version of a file is an important debugging skill. The common use case for this is when someone is reporting a bug in your project, but they do not know the exact version they are using. While normal software versioning resolves this, bug reports often come in from people using the HEAD of a project, and thus the software version number does not help. Finding the exact set of files the user has is key to being able to duplicate the bug, understand it, and then fix it.

How you get to the correct set of files (which means finding the proper git commit) depends on what information you can tease out of the user. There are three classes of clues I have come across, each of which is solved a different way. You may be given clues about:

  1. Date: The date they downloaded the files (e.g. last time they ran a git pull)
  2. File: A specific file's size, checksum, or even contents.
  3. Error: An error message that helps guide to the right version (especially by giving a line number)

Finding a git commit by date

This is the easiest one to solve. If all you need is to see how the repository looked around a certain point in time, you can use git checkout with git-rev-parse to get it. I covered this in detail in an earlier post, but the best answer is below. For all of these examples, I am using the public Bucardo repository at git clone git://

$ DATE='Sep 3 2014'
$ git checkout `git rev-list -1 --before="$DATE" master`
Note: checking out '79ad22cfb7d1ea950f4ffa2860f63bd4d0f31692'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:

  git checkout -b new_branch_name

HEAD is now at 79ad22c... Need to update validate_sync with new columns

Or if you prefer xargs over backticks:

$ DATE='Sep 3 2014'
$ git rev-list -1 --before="$DATE" master | xargs -Iz git checkout z

What about the case in which there were multiple important commits on the given day? If the user doesn't know the exact time, you will have to make some educated guesses. You might add the -p flag to git log to examine what changes were made and how likely they are to interact with the bug in question. If it is still not clear, you may just want to have the user mail you a copy or a checksum of one of the key files, and use the method below.

Once you have found the commit you want, it's a good idea to tag it right away. This applies to any of the three classes of clues in this article. I usually add a lightweight git tag immediately after doing the checkout. Then you can easily come back to this commit simply by using the name of the tag. Give it something memorable and easy, such as the bug number being reported. For example:

$ git checkout `git rev-list -1 --before="$DATE" master`
## Give a lightweight tag to the current commit
$ git tag bug_23142
## We need to get back to our main work now
$ git checkout master
## Later on, we want to revisit that bug
$ git checkout bug_23142
## Of course, you may also want to simply create a branch

Finding a git commit by checksum, size, or exact file

Sometimes you can find the commit you need by looking for a specific version of an important file. One of the "main" files in the repository that changes often is your best bet for this. You can ask the user for the size, or just a checksum of the file, and then see which repository commits have a matching entry.

Finding a git commit when given a checksum

As an example, a user in the Bucardo project has encountered a problem when running HEAD, but all they know is that they checked it out sometime in the last four months. They also run "md5sum" on the file and report that its MD5 is 767571a828199b6720f6be7ac543036e. Here's the easiest way to find what version of the repository they are using:

$ SUM=767571a828199b6720f6be7ac543036e
$ git log --format=%H \
  | xargs -Iz sh -c \
    'echo -n "z "; git show | md5sum' \
  | grep -m1 $SUM \
  | cut -d " " -f 1 \
  | xargs -Iz git log z -1
xargs: sh: terminated by signal 13
commit b462c256e62e7438878d5dc62155f2504353be7f
Author: Greg Sabino Mullane 
Date:   Fri Feb 24 08:34:50 2012 -0500

    Fix typo regarding piddir

I'm using variables in these examples both to make copy and paste easier, and because it's always a good idea to save away constant but hard-to-remember bits of information. The first part of the pipeline grabs a list of all commit IDs: git log --format=%H.

We then use xargs to feed the list of commit IDs one by one to a shell. The shell grabs a copy of the file as it existed at the time of that commit and generates an MD5 checksum of it. We echo the commit ID on the same line, as we will need it later on. The output, then, is each commit hash paired with the MD5 of the file as of that commit.

Next, we pipe this list to grep so we only match the MD5 we are looking for. We use -m1 to stop processing once the first match is found (this is important, as the extraction and checksumming of files is fairly expensive, so we want to short-circuit it as soon as possible). Once we have a match, we use the cut utility to extract just the commit ID, and pipe that back into git log. Voila! Now we know the very last time the file existed with that MD5, and can checkout the given commit. (The "terminated by signal 13" is normal and expected)

You may wonder if a sha1sum would be better, as git uses those internally. Sadly, the process remains the same, as the algorithm git uses to generate its internal SHA1 checksums is sha1("blob " . length(file) . "\0" . contents(file)), and you can't expect a random user to compute that and send it to you! :)
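If you're curious, you can reproduce git's blob hashing by hand with standard tools. This is just a demonstration on a throwaway file, not part of the commit hunt itself:

```shell
# Create a throwaway file to hash.
printf 'hello\n' > /tmp/sample.txt

# git prepends a "blob <size>\0" header before hashing the contents.
size=$(wc -c < /tmp/sample.txt)
{ printf 'blob %d\0' "$size"; cat /tmp/sample.txt; } | sha1sum | cut -d' ' -f1

# Compare with git's own answer.
git hash-object /tmp/sample.txt
# Both print ce013625030ba8dba906f756967f9e9ca394464a
```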

Finding a git commit when given a file size

Another piece of information the user can give you very easily is the size of a file. For example, they may tell you that their copy of the file weighs in at 167092 bytes. As this file changes often, it can be a unique-enough marker to help you determine when they checked out the repository. Finding the matching size is a matter of walking backwards through each commit and checking the file size as it existed at each one:

$ SIZE=167092
$ git rev-list --all \
  | while read commit
    do if git ls-tree -l -r $commit \
      | grep -q -w $SIZE
    then echo $commit
      break
    fi
  done
The git ls-tree command generates a list of all blobs (files) for a given commit. The -l option tells it to also print the file size, and the -r option asks it to recurse. So we use git rev-list to generate a list of all the commits (by default, these are output from newest to oldest). Then we pass each commit to the ls-tree command, and use grep to see if that number appears anywhere in the output. If it does, grep returns truth, making the if statement fire the echo, which shows us the commit. The break ensures we stop after the first match. We now have the (probable) commit that the user checked the file out of. As we are not matching by filename, it's probably a good idea to double-check by running git ls-tree -l -r on the given commit.

Finding a git commit when given a copy of the file itself

This is very similar to the size method above, except that we are given the file itself, not the size, so we need to generate some metadata about it. You could run a checksum or a filesize and use one of the recipes above, or you could do it the git way and find the SHA1 checksum that git uses for this file (aka a blob) by using git hash-object. Once you find that, you can use git ls-tree as before, as the blob hash is listed next to the filename. Thus:

$ HASH=`git hash-object ./bucardo.clue`
$ echo $HASH
$ git rev-list --all \
  | while read commit
    do if git ls-tree -r $commit \
      | grep -F -q $HASH
    then echo $commit
      break
    fi
  done

Finding a git commit by error message

Sometimes the only clue you are given is an error message, or some other snippet that you can trace back to one or more commits. For example, someone once mailed the list to ask about this error that they received:

DBI connect('dbname=bucardo;host=localhost;port=5432',
  'bucardo',...) failed: fe_sendauth: no password supplied at 
  /usr/local/bin/bucardo line 8627.

A quick glance at line 8627 of the file "bucardo" in HEAD showed only a closing brace, so it must be an earlier version of the file. What was needed was to walk backwards in time and check that line for every commit until we find one that could have triggered the error. Here is one way to do that:

$ git log --format=%h \
  | xargs -n 1 -I{} sh -c \
  "echo -n {}; git show {}:bucardo | head -8627 | tail -1" \
  | less
## About 35 lines down:
379c9006     $dbh = DBI->connect($BDSN, 'bucardo'...

Therefore, we can do a "git checkout 379c9006" and see if we can solve the user's problem.

These are some of the techniques I use to hunt down specific commits in a git repository. Are there other clues you have run up against? Better recipes for hunting down commits? Let me know in the comments below.

published by (Spencer Christensen) on 2014-11-07 18:18:00 in the "CentOS" category

When installing PostgreSQL 9.3 onto a CentOS 6 system, you may notice that some postgres commands appear to be missing (like pg_ctl, initdb, and pg_config). However, they actually are on your system but just not in your path. You should be able to find them in /usr/pgsql-9.3/bin/. This can be frustrating if you don't know that they are there.

To solve the problem, you could just use full paths to these commands, like /usr/pgsql-9.3/bin/initdb, but that may get ugly quick depending on how you are calling these commands. Instead, we can add them to the path.

You could just copy them to /usr/bin/ or create symlinks for them, but both of these methods are hack-ish and could have unintended consequences. Another option is to add /usr/pgsql-9.3/bin/ to your path manually, or to the path for all users by adding it to /etc/bashrc. But again, that seems hack-ish and when you upgrade postgres to 9.4 down the road things will break again.

So instead, let's look at how Postgres' other commands get installed when you install the rpm. When you run "yum install postgresql93" the rpm contains not only all the files for that package but it also includes some scripts to run (in this case one at install time and another at uninstall time). To view everything that is getting installed and to see the scripts that are run use this command: "rpm -qilv --scripts postgresql93". There will be a lot of output, but in there you will see:

postinstall scriptlet (using /bin/sh):
/usr/sbin/update-alternatives --install /usr/bin/psql pgsql-psql /usr/pgsql-9.3/bin/psql 930
/usr/sbin/update-alternatives --install /usr/bin/clusterdb  pgsql-clusterdb  /usr/pgsql-9.3/bin/clusterdb 930
/usr/sbin/update-alternatives --install /usr/bin/createdb   pgsql-createdb   /usr/pgsql-9.3/bin/createdb 930

That line "postinstall scriptlet (using /bin/sh):" marks the beginning of the list of commands that are run at install time. Ah ha! It runs update-alternatives! If you're not familiar with alternatives, the short description is that it keeps track of different versions of things installed on your system and automatically manages symlinks to the version you want to run.

Now, not all the commands we're interested in are installed by the "postgresql93" package. You can search through the output and see that pg_config gets installed but is not set up in alternatives. The commands initdb and pg_ctl are part of the package "postgresql93-server". If we run the same command to view its files and scripts we'll see something interesting: it doesn't set up any of its commands using alternatives! Grrr. :-(

In the postgresql93-server package the preinstall and postinstall scripts only set up the postgres user on the system, set up /var/log/pgsql, add postgres to the init scripts, and set up the postgres user's .bash_profile. That's it. But, now that we know what commands are run for getting psql, clusterdb, and createdb into the path, we can manually run the same commands for the postgres commands that we need. Like this:

/usr/sbin/update-alternatives --install /usr/bin/initdb pgsql-initdb /usr/pgsql-9.3/bin/initdb 930
/usr/sbin/update-alternatives --install /usr/bin/pg_ctl pgsql-pg_ctl /usr/pgsql-9.3/bin/pg_ctl 930
/usr/sbin/update-alternatives --install /usr/bin/pg_config pgsql-pg_config /usr/pgsql-9.3/bin/pg_config 930

These commands should be available in your path now and are set up the same as all your other postgres commands. Now, the question about why these commands are not added to alternatives like all the others is a good one. I don't know. If you have an idea, please leave it in the comments. But at least now you have a decent work-around.

published by (Kamil Ciemniewski) on 2014-11-06 13:41:00 in the "Android" category

My high school math teacher used to say that mathematicians are the laziest people on Earth. Why? Because they always look for clever ways to simplify their work.

If you stop and think about it, all that technology is, is just simplification. It's taking the infinitely complex world and turning it into something sterile and simple. It's all about producing simple models with a limited number of elements and processes.

Today I'd like to walk you through the creation of a mobile app that could be used on iOS, Android, or Windows Phone. We'll use a very cool set of technologies that allows us to switch from using multiple languages and frameworks (Objective-C for iOS, Java for Android, and C# for Windows Phone) to just using HTML, CSS, and JavaScript.

Let's start turning complex into simple!

PhoneGap and Ionic Framework

Creating the project

In order to be able to start playing along, you need to get yourself a set of toys. Assuming that you've got Node.js and npm installed already, all you have to do is:

$ npm install -g cordova ionic

Now you should be able to create the project's scaffold. We'll be creating a simple app that will list all the latest cartoons from the xkcd blog. Let's call it Cartoonic.

Ionic comes with a handy tool called 'ionic'. It allows you to create a new project as well as perform some automated project-structure-management tasks. The project creation task accepts a 'skeleton' name that drives an initial layout of the app. Possible options are: 'blank', 'tabs' and 'sidemenu'.

We'll be creating an app from scratch so:

$ ionic start Cartoonic blank

The framework gives you an option of whether you want to use Sass or just plain old CSS. To turn Sass on for the project run:

$ cd Cartoonic && ionic setup sass

All went well, but now let's make sure it runs well too. For this, Ionic gives you the ability to test your app in the browser, as if it were the screen of your mobile device. To run the app in the browser now:

$ ionic serve

Working with the app layout

We need to let all users know what a cool name we've chosen for our app. The default one provided by the scaffold wouldn't work well. Also we'd like the color of the header to be blue instead of the default white.

In order to do so you can take a look at the CSS documentation for different aspects of the UI:

--- a/www/index.html
+++ b/www/index.html
@@ -21,8 +21,8 @@
-      <ion-header-bar class="bar-stable">
-        <h1 class="title">Ionic Blank Starter</h1>
+      <ion-header-bar class="bar-positive">
+        <h1 class="title">Cartoonic</h1>

So far so good, now let's play with the list of cartoons:

--- a/scss/
+++ b/scss/
+.cartoon {
+  text-align: center; 
+  box-shadow: 1px 1px 2px rgba(0, 0, 0, 0.2);
+  width: 96%;
+  margin-left: 2%;
+  margin-top: 2%;
+  margin-bottom: 2%;
+  img {
+    width: 90%;
+  }
+}

--- a/www/index.html
+++ b/www/index.html
+        <ion-list>
+          <ion-item class="item-divider">Where Do Birds Go</ion-item>
+          <ion-item class="cartoon">
+            <img src="" alt="">
+          </ion-item>
+          <ion-item class="item-divider">Lightsaber</ion-item>
+          <ion-item class="cartoon">
+            <img src="" alt="">
+          </ion-item>
+        </ion-list>

Alright, we've got a list that's looking quite nice. The data behind it is static, but for now we just wanted to make sure the look & feel is good.

Using AngularJS to manage the app

Ionic is built around the fantastic AngularJS framework. That's our means of developing the logic behind the app. We need to make the list of cartoons use real data from the xkcd blog RSS feed. We also need to enable tapping on images to see the picture in the browser (so it can be zoomed in).

Let's start with making the UI use dynamically bound data that we can operate on with JavaScript. In order to do so, we need to add a controller for our view. We also need to specify the data binding between the markup we've created previously and the variable in the controller that we intend to use as our data store.

--- a/www/index.html
+++ b/www/index.html
     <script src="js/app.js"></script>
+    <script src="js/controllers.js"></script>
-  <body ng-app="starter">
+  <body ng-app="starter" ng-controller="CartoonsCtrl">

-          <ion-item class="item-divider">Where Do Birds Go</ion-item>
-          <ion-item class="cartoon">
-            <img src="" alt="">
-          </ion-item>
-          <ion-item class="item-divider">Lightsaber</ion-item>
-          <ion-item class="cartoon">
-            <img src="" alt="">
+          <ion-item class="item-divider" ng-repeat-start="cartoon in cartoons">{{ cartoon.title }}</ion-item>
+          <ion-item class="cartoon" ng-repeat-end>
+            <img ng-src="{{ cartoon.href }}" alt="">

--- a/www/js/app.js
+++ b/www/js/app.js
-angular.module('starter', ['ionic'])
+angular.module('starter', ['ionic', 'starter.controllers'])

--- /dev/null
+++ b/www/js/controllers.js
+angular.module('starter.controllers', [])
+.controller('CartoonsCtrl', function($scope) {
+  $scope.cartoons = [
+    { 
+      href: "", 
+      id: 1434,
+      title: "Where Do Birds Go"
+    },
+    { 
+      href: "",
+      id: 1433,
+      title: "Lightsaber"
+    }
+  ];

You may notice that the "ng-controller" directive has been added to the body element. It points at the newly created controller, which we're loading with a script tag and making available to the rest of the app by including its module (starter.controllers) in the 'starter' module's dependency list.

Let's implement opening the picture upon the tap:

--- a/www/index.html
+++ b/www/index.html
           <ion-item class="item-divider" ng-repeat-start="cartoon in cartoons">{{ cartoon.title }}</ion-item>
-          <ion-item class="cartoon" ng-repeat-end>
+          <ion-item class="cartoon" ng-repeat-end ng-click="openCartoon(cartoon)">
             <img ng-src="{{ cartoon.href }}" alt="">

--- a/www/js/controllers.js
+++ b/www/js/controllers.js
@@ -13,4 +13,8 @@ angular.module('starter.controllers', [])
       title: "Lightsaber"
+  $scope.openCartoon = function(cartoon) {
+    window.open(cartoon.href, '_blank', 'location=no');
+  };

That was simple, wasn't it? We've just added the ng-click directive, making the click/tap event bound to the openCartoon function from the scope. This function in turn calls window.open, passing '_blank' as the target. Et voilà!

Now, let's implement loading images from the real feed:

--- a/www/index.html
+++ b/www/index.html
     <script src="cordova.js"></script>
+    <script type="text/javascript" src=""></script>
+    <script type="text/javascript">
+      google.load("feeds", "1");
+    </script>
     <!-- your app's js -->
     <script src="js/app.js"></script>
     <script src="js/controllers.js"></script>
+    <script src="js/services.js"></script>
   <body ng-app="starter" ng-controller="CartoonsCtrl">

--- a/www/js/app.js
+++ b/www/js/app.js
-angular.module('starter', ['ionic', 'starter.controllers'])
+angular.module('starter', ['ionic', 'starter.controllers', ''])

--- a/www/js/controllers.js
+++ b/www/js/controllers.js
-angular.module('starter.controllers', [])
+angular.module('starter.controllers', [''])
-.controller('CartoonsCtrl', function($scope) {
-  $scope.cartoons = [
-    { 
-      href: "", 
-      id: 1434,
-      title: "Where Do Birds Go"
-    },
-    { 
-      href: "",
-      id: 1433,
-      title: "Lightsaber"
-    }
-  ];
+.controller('CartoonsCtrl', function($scope, cartoons) {
+  $scope.cartoons = [];
   $scope.openCartoon = function(cartoon) {
     window.open(cartoon.href, '_blank', 'location=no');
+  $scope.$watch(function() {
+    return cartoons.list;
+  }, function(list) {
+    $scope.cartoons = list;
+  });

--- /dev/null
+++ b/www/js/services.js
@@ -0,0 +1,26 @@
+angular.module('', [])
+.factory('cartoons', function($rootScope) {
+  var self = {
+    list: [],
+    url: "",
+    fetch: function() {
+      var feed = new google.feeds.Feed(self.url);
+      feed.load(function(result) {
+        $rootScope.$apply(function() {
+          if(result.status.code == 200) {
+            self.list = result.feed.entries.map(function(entry) {
+              return {
+                href: entry.content.match(/src="[^"]*/g)[0].substring(5, 100),
+                title: entry.title
+              }
+            });
+          }
+        });
+      });
+    }
+  };
+  self.fetch();
+  return self;

Okay, a couple of comments here. You may wonder why we have loaded the Google APIs. That's because if we were to try to load the XML that comes from the blog's feed directly, we would inevitably fail because of the "Same Origin Policy". Basically, the Ajax request would not complete successfully and there's nothing we can do locally about it.

Luckily, Google has created a service we can use as a middleman between our in-browser JavaScript code and blog's web server. Long story made short: when you load the feed with Google's Feed API - the data's there and it's also already parsed.

We're also adding a custom service here. The service fetches the entries upon its initialization. And because the controller's depending on this service - we're guaranteed to get the data as soon as the controller is initialized. The controller is also using the $watch function to make sure it has the most recent copy of the entries list.


published by (Kent K.) on 2014-11-05 14:00:00 in the "css" category

When laying out HTML forms, instead of using a table (re: tables-are-only-for-tabular-data et al), I've had good results making use of the table family of values for the CSS display property. I find it a reliable way to ensure items line up in a wide range of situations, such as a change in the size of labels or a resize of the browser's window.

Something simple like the following has worked well for me on several occasions:

.table {
  display: table;
}
.table>* {
  display: table-row;
}
.table>*>* {
  display: table-cell;
}

Occasionally though, I've wanted to leave a column empty for one reason or another. To accomplish this I found myself including empty HTML tags like:

<div class="table">
  <div>
    <div>A Cell</div>
    <div>A Cell</div>
  </div>
  <div>
    <div></div>
    <div>A Cell</div>
  </div>
</div>

The empty elements function well enough but they feel a little out of place. Recently I came up with a solution I like better. By using the CSS ::after and ::before selectors, you can insert an arbitrary element that can take the place of a missing cell. The following CSS rule can be used to replace the empty div above.

.table>*:nth-child(2)::before {
  content: " ";
  display: table-cell;
}

The nth-child(2) selector can be tailored to your given situation. You could replace it with something like a specific CSS class that you assign to the rows that you want to include empty columns.

Making use of CSS selectors instead of extra HTML elements can help you respect the separation of your document content from your document presentation. If at a later date, you decide you want to switch to a layout that doesn't resemble a table, you can simply update the CSS rules to achieve a different look.

published by Eugenia on 2014-11-04 06:53:22 in the "Politics" category
Eugenia Loli-Queru

It rubs me the wrong way when people say "I'm proud that the XYZ place exists in my country" (e.g. a monument, or a natural place). Why anyone would be "proud" of something they had nothing to do with is beyond me. Why would you be proud that the Parthenon or Santorini is in Greece, for example? You had nothing to do with either the building of the Parthenon or the volcano that created the island of Santorini.

The correct vocabulary would be "I'm happy to live close to such a place". Anything more than that is chauvinism at worst, or stupidity at best.

published by (Emanuele 'Lele' Calo') on 2014-10-30 15:07:00

I find it hard to remember a period in my whole life in which I issued, reissued, renewed and revoked so many certificates.

And while that's usually fun and interesting, there's one thing I often needed and never figured out until a few days ago: how to generate CSRs (Certificate Signing Requests) with Subject Alternative Names (eg: including the www and non-www domains in the same cert) with a one-liner command.

This need is due to the fact that some certificate providers (like GeoTrust) don't cover the parent domain when requesting a new certificate for a subdomain (eg: a CSR for the www subdomain won't cover the parent domain), unless you specifically request so.

Luckily that's not the case with other Certificate products (like RapidSSL) which already offer this feature built-in.

This scenario is becoming more of a problem lately, since we're seeing a growing number of customers supporting sites with HTTPS connections covering both the www and "non-www" hostnames for their site.

Luckily the solution is pretty simple and straightforward; the only requirement is that you type the CSR subject on the command line directly, bypassing the interactive question mechanism.

If you understand how an SSL certificate works this shouldn't be a huge problem; anyway, just as a recap, here's the meaning of the common Subject entries you'll need:

  • C => Country
  • ST => State
  • L => City
  • O => Organization
  • OU => Organization Unit
  • CN => Common Name (eg: the main domain the certificate should cover)
  • emailAddress => main administrative point of contact for the certificate

So by using the common syntax for an OpenSSL subject written via the command line, you need to specify all of the above (the OU is optional) and add another field called subjectAltName.

By adding DNS.n (where n is a sequential number) entries under the "subjectAltName" field you'll be able to add as many additional "alternate names" as you want, even not related to the main domain.

Obviously the first-level parent domain will be covered by most SSL products, unless specified differently.

So here's an example to generate a CSR which will cover both the www and non-www hostnames:

openssl req -new -key -sha256 -nodes
  -subj '/C=US/ST=New York/L=New York/O=End Point/OU=Hosting Team/' >

So here's another example with multiple DNS.n entries:

openssl req -new -key -sha256 -nodes
  -subj '/C=US/ST=New York/L=New York/O=End Point/OU=Hosting Team/,,' >

warning: we had to split the command into multiple lines to make it readable, but you should keep it all on one line, otherwise you may lose some Subject details.

Now with that I'm able to generate proper multi-domain CSRs effectively.
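As an aside, newer OpenSSL releases (1.1.1 and later) can also place the names in a real subjectAltName extension directly from the command line via the -addext flag. This is a sketch using hypothetical example.com values, not the exact domains or filenames from the examples above:

```shell
# Generate a throwaway key plus a CSR carrying a genuine SAN extension
# (requires OpenSSL 1.1.1+ for the -addext flag).
openssl req -new -newkey rsa:2048 -nodes -sha256 \
  -keyout /tmp/example.com.key \
  -subj '/C=US/ST=New York/L=New York/O=End Point/CN=example.com' \
  -addext 'subjectAltName=DNS:example.com,DNS:www.example.com' \
  -out /tmp/example.com.csr

# Confirm the names made it into the request:
openssl req -in /tmp/example.com.csr -noout -text | grep -A1 'Alternative Name'
```

Either way, inspecting the CSR with `openssl req -noout -text` before submitting it is a good sanity check.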

Please note the use of -sha256 to sign the CSR with the SHA256 algorithm; while not required, this is appreciated given the recent concerns surrounding SHA1.

published by (Greg Sabino Mullane) on 2014-10-29 20:45:00 in the "mediawiki" category

Sok Kwu Wan

I recently created a new MediaWiki extension named ControlSpecialVersion whose purpose is to allow some control over what is shown on MediaWiki's "special" page Special:Version. The latest version of this extension can be downloaded from its project page, and you can see it in action on our Special:Version page. The primary purpose of the module is to prevent showing the PHP and database versions to the public.

As with most MediaWiki extensions, installation is easy: download the tarball, unzip it into your extensions directory, and add this line to your LocalSettings.php file:

require_once( "$IP/extensions/ControlSpecialVersion/ControlSpecialVersion.php" );

By default, the extension removes the PHP version information from the page. It also changes the reported PostgreSQL version from its revision to simply the major version, and changes the name from the terrible-but-official "PostgreSQL" to the widely-accepted "Postgres". Here is what the Software section of Special:Version looks like before and after the extension is used:

Note that we are also eliding the git revision information (sha and date). You can also do things such as hide the revision information from the extension list, remove the versions entirely, or even remove an extension from showing up at all. All the configuration parameters can be found on the extension's page.

It should be noted that there are typically two other places where your PHP version may be exposed, both in the HTTP headers. If you are running Apache, it may show the version as part of the Server header. To turn this off, edit your httpd.conf file and change the ServerTokens directive to ProductOnly. The other header is known as X-Powered-By and is added by PHP to any pages it serves (e.g. MediaWiki pages). To disable this header, edit your php.ini file and make sure expose_php is set to Off.
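Concretely, the two settings look like this (the file locations vary by distribution; the paths in the comments are typical defaults, not universal):

```
# httpd.conf (Apache): report only "Apache" in the Server header
ServerTokens ProductOnly

; php.ini: stop PHP from adding the X-Powered-By header
expose_php = Off
```

Remember to restart Apache (and PHP-FPM, if you use it) for the changes to take effect.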

While these methods may or may not make your server safer, there really is no reason to expose certain information to the world. With this extension, you at least now have the choice.

published by (Steph Skardal) on 2014-10-20 13:50:00 in the "tips" category

It's been a while since I shared an End Point tidbits article: bits of information passed around the End Point team that don't necessarily merit individual blog posts, but are worth mentioning and archiving. Here are some notes I've collected since that last post:

  • Skeuocard and creditcard.js are intuitive user interface (JS, CSS) plugins for credit card form inputs (card number, security code, billing name).

    Skeuocard Screenshot
  • UX Stack Exchange is a Stack Exchange site for user experience Q&A.
  • wpgrep is a tool for grepping through WordPress databases.
  • Here is a nifty little tool that analyzes GitHub commits to report on language convention, e.g. space vs. tab indentation & spacing in argument definitions.

    Example comparison of single vs. double quote convention in JavaScript.
  • Ag (The Silver Searcher) is a document searching tool similar to ack, with improved speed. There's also an Ag plugin for vim.
  • GitHub released Atom earlier this year. Atom is a desktop application text editor; features include Node.js support, modular design, and a full feature list to compete with existing text editors.
  • SpeedCurve is a web performance tool built on WebPagetest data. It focuses on providing a beautiful user interface and minimizing data storage.

    Example screenshot from SpeedCurve
  • Here is an interesting article by Smashing Magazine discussing mobile strategy for web design. It covers a wide range of challenges that come up in mobile web development.
  • Reveal.js, deck.js, Impress.js, Shower, and showoff are a few open source tools available for in-browser presentation support.
  • Have you seen Firefox's 3D view? It's a 3D representation of the DOM hierarchy. I'm a little skeptical of its value, but the documentation outlines a few use cases such as identifying broken HTML and finding stray elements.

    Example screenshot of Firefox 3D view
  • Here is an interesting article discussing how to approach sales by presenting a specific solution and alternative solutions to clients, rather than the generic "Let me know how I can help." approach.
  • A coworker inquired about web-based SMS providers for sending text messages to customer cell phones. Recommended services included txtwire, Twilio, Callr, and Clickatell.

published by (Jeff Boes) on 2014-10-16 14:23:00 in the "chrome" category

If you are updating your Firefox installation for Windows and you get a puzzling black screen of doom, here's a handy tip: disable graphics acceleration.

The symptoms here are that after you upgrade Firefox to version 33, the browser will launch into a black screen, possibly with a black dialog box (it's asking if you want to choose Firefox to be your default browser). Close this as you won't be able to do much with it.

Launch Firefox by holding down the SHIFT key and clicking on the Firefox icon. It will ask if you want to reset Firefox (Nope!) or launch in Safe mode (Yes).

Once you get to that point, click the "Open menu" icon (three horizontal bars, probably at the far right of your toolbar). Choose "Preferences", then "Advanced", and uncheck "Use hardware acceleration when available".

Close Firefox, relaunch as normal, and you should be AOK. You can try re-enabling graphics acceleration if and when your graphics driver is updated.

Reference: here.