
published by noreply@blogger.com (Steph Skardal) on 2014-09-17 13:57:00 in the "piggybak" category

Piggybak, an open source Ruby on Rails ecommerce gem, implemented as a mountable solution, has continued to be upgraded and maintained over the last several months to keep up to date with Rails security releases and Ruby releases. Here are some quick notes on recent work:

  • Piggybak (version 0.7.5) is now compatible with Rails 4.1.6, which is the most up to date release of Rails. See the Rails release notes for more details on this recent release. The Piggybak Demo is now running on Rails 4.1.6.
  • Piggybak is compatible with Ruby 2.1.2, and the demo is running on Ruby 2.1.2
  • Recent updates in Piggybak include migration fixes to handle table namespace issues, and updates to remove methods that are no longer present in Rails (that were previously deprecated).
  • Recent updates to the demo include updates to the integration testing suite to allow testing to be compatible with Rails 4.1.6, as well as modifications to how the demo handles exceptions.

Make sure to check out the Piggybak GitHub repository for more details on these recent updates.
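For anyone who wants to reproduce this setup, here is a minimal Gemfile sketch pinning the versions mentioned above (a sketch only; a real application will have additional gems and its own source settings):

# Gemfile (sketch)
source 'https://rubygems.org'

ruby '2.1.2'

gem 'rails', '4.1.6'
gem 'piggybak', '0.7.5'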


published by noreply@blogger.com (Miguel Alatorre) on 2014-09-16 13:00:00 in the "alias" category

Recently I was tasked with creating a plugin to customize End Point's Redmine instance. In working through this I was exposed for the first time to alias_method_chain. What follows is my journey down the rabbit hole as I wrap my head around new (to me) Ruby/Rails magic.

The Rails core method alias_method_chain encapsulates a common pattern of using alias_method twice: first to rename an original method to a method "without" a feature, and second to rename a new method "with" a feature to the original method. Whaaaa? Let's start by taking a look at Ruby core methods alias and alias_method before further discussing alias_method_chain.

alias and alias_method

At first glance, they achieve the same goal with slightly different syntax:

class Person
  def hello
    "Hello"
  end

  alias say_hello hello
end

Person.new.hello
=> "Hello"
Person.new.say_hello
=> "Hello"

class Person
  def hello
    "Hello"
  end

  alias_method :say_hello, :hello
end

Person.new.hello
=> "Hello"
Person.new.say_hello
=> "Hello"

Let's see what happens when we have a class inherit from Person in each of the cases above.

class Person
  def hello
    "Hello"
  end

  # Wrapped in a class function to examine scope
  def self.apply_alias
    alias say_hello hello
  end
  apply_alias
end

class FunnyPerson < Person
  def hello
    "Hello, I'm funny!"
  end
  apply_alias
end

FunnyPerson.new.hello
=> "Hello, I'm funny!"
FunnyPerson.new.say_hello
=> "Hello"

class Person
  def hello
    "Hello"
  end

  # Wrapped in a class function to examine scope
  def self.apply_alias
    alias_method :say_hello, :hello
  end
  apply_alias
end

class FunnyPerson < Person
  def hello
    "Hello, I'm funny!"
  end
  apply_alias
end

FunnyPerson.new.hello
=> "Hello, I'm funny!"
FunnyPerson.new.say_hello
=> "Hello, I'm funny!"

Because alias is a Ruby keyword, it is executed when the source code gets parsed, which in our case is in the scope of the Person class. Hence, say_hello will always be aliased to the hello method defined in Person. Since alias_method is a method, it is executed at runtime, which in our case is in the scope of the FunnyPerson class.

alias_method_chain

Suppose we want a child class to extend the hello method. We could do so with a couple of alias_method calls:

class Person
  def hello
    "Hello"
  end
end

class PolitePerson < Person
  def hello_with_majesty
    "#{hello_without_majesty}, your majesty!"
  end

  alias_method :hello_without_majesty, :hello
  alias_method :hello, :hello_with_majesty
end

PolitePerson.new.hello
=> "Hello, your majesty!"
PolitePerson.new.hello_with_majesty
=> "Hello, your majesty!"
PolitePerson.new.hello_without_majesty
=> "Hello"

What we did above in PolitePerson can be simplified by replacing the two alias_method calls with just one call to alias_method_chain:

class Person
  def hello
    "Hello"
  end
end

class PolitePerson < Person
  def hello_with_majesty
    "#{hello_without_majesty}, your majesty!"
  end

  alias_method_chain :hello, :majesty
end

class OverlyPolitePerson < PolitePerson
  def hello_with_honor
    "#{hello_without_honor} I am honored by your presence!"
  end
  end

  alias_method_chain :hello, :honor
end

PolitePerson.new.hello
=> "Hello, your majesty!"
OverlyPolitePerson.new.hello
=> "Hello, your majesty! I am honored by your presence!"

Neat! How does this play into Redmine plugins, you ask? Before we get into that there is one more thing to go over: a module's included method.

The included callback

When a module is included into another class or module, Ruby invokes the included method if defined. You can think of it as a sort of module initializer:

module Polite
  def self.included(base)
    puts "Polite has been included in class #{base}"
  end
end

class Person
  include Polite

  def hello
    "Hello"
  end
end
Polite has been included in class Person
=> Person

Now, what if you can't modify the Person class directly with the include line? No biggie. Let's just send Person a message to include our module:

class Person
  def hello
    "Hello"
  end
end

module Polite
  def self.included(base)
    puts "Polite has been included in class #{base}"
  end

  def polite_hello
    "Hello, your majesty!"
  end
end

Person.send(:include, Polite)
Polite has been included in class Person
=> Person

What if we now want to extend Person's hello method? Easy peasy:

class Person
  def hello
    "Hello"
  end
end

module Polite
  def self.included(base)
    base.send :include, InstanceMethods

    base.class_eval do
      alias_method_chain :hello, :politeness
    end
  end

  module InstanceMethods
    def hello_with_politeness
      "#{hello_without_politeness}, your majesty!"
    end
  end
end

Person.new.hello
=> "Hello"
Person.send :include, Polite
=> Person
Person.new.hello
=> "Hello, your majesty!"

How polite! Let's talk about what's going on in the Polite module. We defined our hello_with_politeness method inside an InstanceMethods module so as not to clutter the self.included method. In self.included we send an include call to the base class so that InstanceMethods is included. This gives instances of the base class access to any method defined in InstanceMethods. Next, class_eval is used on the base class so that the alias_method_chain method is called within the context of the class.

How this applies to Redmine

If you take a look at the Redmine plugin documentation, specifically Extending the Redmine Core, you'll see the above pattern as the recommended way to override or extend Redmine core functionality. I'll include the RateUsersHelperPatch example from the documentation here so that you can compare it with the code blocks above:

module RateUsersHelperPatch
  def self.included(base) # :nodoc:
    base.send(:include, InstanceMethods)

    base.class_eval do
      unloadable # Send unloadable so it will not be unloaded in development

      alias_method_chain :user_settings_tabs, :rate_tab
    end
  end

  module InstanceMethods
    # Adds a rates tab to the user administration page
    def user_settings_tabs_with_rate_tab
      tabs = user_settings_tabs_without_rate_tab
      tabs << { :name => 'rates', :partial => 'users/rates', :label => :rate_label_rate_history}
      return tabs
    end
  end
end

Sending an include to RateUsersHelper can be done in the plugin's init.rb file:

Rails.configuration.to_prepare do
  require 'rate_users_helper_patch'
  RateUsersHelper.send :include, RateUsersHelperPatch
end

So, the tabs variable is set using user_settings_tabs_without_rate_tab, which is aliased to the Redmine core user_settings_tabs method:

# https://github.com/redmine/redmine/blob/2.5.2/app/helpers/users_helper.rb#L45-L53
def user_settings_tabs
  tabs = [{:name => 'general', :partial => 'users/general', :label => :label_general},
          {:name => 'memberships', :partial => 'users/memberships', :label => :label_project_plural}
          ]
  if Group.all.any?
    tabs.insert 1, {:name => 'groups', :partial => 'users/groups', :label => :label_group_plural}
  end
  tabs
end

Then, a new hash is added to tabs. Because the user_settings_tabs method is now aliased to user_settings_tabs_with_rate_tab, the users/rates partial will be included when the call to render user_settings_tabs is executed:

#https://github.com/redmine/redmine/blob/2.5.2/app/views/users/edit.html.erb#L9
<%= link_to l(:label_profile), user_path(@user), :class => 'icon icon-user' %>
<%= change_status_link(@user) %>
<%= delete_link user_path(@user) if User.current != @user %>
<%= title [l(:label_user_plural), users_path], @user.login %>
<%= render_tabs user_settings_tabs %>

Although alias_method_chain is a pretty cool and very useful method, it's not without its shortcomings. There's a great recent blog article about that here, which also discusses Ruby 2's Module#prepend as a better alternative to alias_method_chain.
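For comparison, here is a minimal sketch of the same kind of extension written with Ruby 2's Module#prepend, which avoids the renamed _with_/_without_ methods entirely (this is my own example, not code from the article):

class Person
  def hello
    "Hello"
  end
end

module Polite
  # Prepending puts Polite ahead of Person in the ancestor chain,
  # so this hello runs first and reaches the original via super.
  def hello
    "#{super}, your majesty!"
  end
end

Person.prepend Polite

Person.new.hello
=> "Hello, your majesty!"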


published by noreply@blogger.com (Selvakumar Arumugam) on 2014-09-12 18:00:00 in the "analyser" category
The "Geo Map" option in Analyzer Reports provides a feature to visualize data with geographic locations. We will learn how to design a Mondrian schema and configure Pentaho to make use of the "Geo Map" feature in the Analyzer Reports. This article will show us how to set this feature up step by step.

Enable Geo Map feature on Geographic fields in Mondrian Schema


The Mondrian schema has two main categories, called Dimensions and Measures. Dimensions are defined as levels in the Mondrian schema. Geographic fields need two additional annotations in order to use the Geo Map. The two annotations are:

1. Data.Role - defines the type of level generally; for this type of node, this must be set to  'Geography'.
2. Geo.Role - defines the geographical classification in a hierarchy. These can be either predefined roles ('country', 'state', 'city', 'postalcode') or custom roles.

Sample Level with Annotation:

        <Level name="Country Name" visible="true" column="country" type="String" uniqueMembers="false" levelType="Regular" hideMemberIf="Never">
          <Annotations>
            <Annotation name="Data.Role"><![CDATA[Geography]]></Annotation>
            <Annotation name="Geo.Role"><![CDATA[country]]></Annotation>
          </Annotations>
        </Level>
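Following the same pattern, a state level gets its own pair of annotations with Geo.Role set to 'state' (a sketch based on the country example above; level and column names will vary with your schema, and some Pentaho versions may expect additional annotations for non-country roles):

        <Level name="State Name" visible="true" column="state" type="String" uniqueMembers="false" levelType="Regular" hideMemberIf="Never">
          <Annotations>
            <Annotation name="Data.Role"><![CDATA[Geography]]></Annotation>
            <Annotation name="Geo.Role"><![CDATA[state]]></Annotation>
          </Annotations>
        </Level>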


Geographic fields and datasets in database 


I have created a sample table with fields containing geographic locations for the dimensions and an aggregated value for the measures. The sample population table uses the Pentaho-defined geographic locations 'country', 'state', and 'city', plus an aggregated population count for those geographic fields.

'Population' table design and datasets: 


Here we create a sample population table with geographic fields and the population count in a PostgreSQL database.
CREATE TABLE population (
   id INT PRIMARY KEY   NOT NULL,
   country      TEXT    NOT NULL,
   state        TEXT    NOT NULL,
   city         TEXT   NOT NULL,
   count        INT    NOT NULL
);

Next we load population data into the table for 4 cities in 2 states in the USA. (Population data for more US geographic locations is available at USA Population.)
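The sample rows shown below can be loaded with a plain INSERT statement, for example (a sketch matching the sample data):

INSERT INTO population (id, country, state, city, count) VALUES
  (1, 'USA', 'California', 'Los Angeles',   3857800),
  (2, 'USA', 'California', 'San Francisco',  825863),
  (3, 'USA', 'New York',   'Hilton',           5974),
  (4, 'USA', 'New York',   'Johnsburg',        2390);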
# SELECT * FROM population;
 id | country |   state    |     city      |  count 
----+---------+------------+---------------+---------
  1 | USA     | California | Los Angeles   | 3857800
  2 | USA     | California | San Francisco |  825863
  3 | USA     | New York   | Hilton        |    5974
  4 | USA     | New York   | Johnsburg     |    2390

Download the sql dump file with table schema and datasets.


Design a Mondrian Schema with Geographic Support


Pentaho provides a tool called "Schema Workbench" to design a Mondrian schema for a specific table's data. We can create a new Mondrian schema for the table by selecting File -> New -> Schema. The picture below depicts the hierarchy of the Mondrian schema elements.



Publish the Mondrian schema to Pentaho 


The publish process requires a JDBC datasource with access to the database. Create a JDBC datasource in the manage datasources wizard with the necessary input values.



Once the JDBC datasource has been created on the Pentaho server, the Mondrian schema can be published from Schema Workbench.



Download the Mondrian schema xml to view the schema, cube, table, dimension, hierarchy, level, annotations, measures elements and corresponding attribute values.

The Mondrian schema xml can be imported directly into Pentaho server to create an analysis datasource.



Create an Analyzer Report with Geo Map


Add the necessary geographic fields under "Rows" and population count under "Measure" to create a basic analyzer report.



Change the report type to "Geo Map" using the options in the top right corner to view the visualized data. Congratulations, you're done!


published by noreply@blogger.com (Jon Jensen) on 2014-09-10 21:01:00 in the "regulation" category

Today End Point is participating in an Internet-wide campaign to raise awareness about net neutrality, the FCC's role in overseeing the Internet in the United States, and the possible effects of lobbying by large consumer Internet providers.

Many companies and individuals are in favor of specific "net neutrality" regulation by the FCC, and make good arguments for it, such as these by Battle for the Net, Etsy and ThoughtWorks and Reddit.

There are also plenty speaking out against certain specific regulatory proposals out there: TechFreedom, Google, Facebook, Microsoft, Yahoo!, Todd Wasserman, and, with a jaunty propagandistic style, NCTA, the cable industry's lobby.

I think we are all sympathetic to free-market arguments and support non-governmental solutions that allow companies and individuals to create things without getting permission, and to arrange services and peering as they see fit. It seems that most people and companies understand the need to pay for more bandwidth, and more data transfer. (Marketers are the ones promising unlimited everything, then hiding limits in the fine print!) Many of us are worried about further entrenching government in private networks, whether ostensibly for national security, "intellectual property" policing, or enforcing net neutrality.

But the market competition is hobbled when there are few competitors in a given geographic area. Many Internet users have few options if their ISP begins to filter or slow traffic by service type. I think we would all be better off with less false advertising of "unlimited downloads" and more realistic discussion of real costs. ISP backroom arm-twisting deals with companies just using the network as customers request can invisibly entrench existing players to the exclusion of new entrants.

Every Internet provider builds on lots of infrastructure that was funded by the public, platform popularity built by other companies and individuals, rights of way granted by local municipalities and others, research done by government-funded institutions, and finally, their own semi-monopoly positions that are sometimes enforced by government at various levels.

In any case there is not really a simple argument on either side either entirely for or against regulation. Some regulation is already there. The question is what form it will take, how it affects different groups now, and how it shapes the possibilities in the future.

End Point does not endorse any specific position or proposal on the table at the FCC, but we want to raise awareness about this Internet regulation discussion and hope that you will do some research and comment to the FCC about how you see things. It's worth letting your Internet provider, mobile phone carrier, and businesses you interact with online know how you feel too! Those outside the United States may find similar debates underway in their countries, perhaps not getting the broad attention they deserve.


published by noreply@blogger.com (Bianca Rodrigues) on 2014-09-09 20:51:00

Labels on Time is an online retailer that delivers top-quality thermal roll and direct thermal labels - and all on time, of course. They came to us last year to upgrade their Spree site, resolve bugs, and develop cutting-edge features, utilizing our expertise with the ecommerce platform. Spree Commerce is an open-source ecommerce solution built on Ruby on Rails, and manages all aspects of the fulfillment process, from checkout to shipping to discounts, and much more.

UPGRADING THE SPREE PLATFORM

There were quite a few challenges associated with the upgrade, since Labels on Time was still running on Spree's version 2.0, which was not yet stable. To keep some stability, we initially worked off a fork of Spree, and selectively brought in changes from 2.0 when we were sure they were stable and reliable enough.

USING SPREE GEMS

To date, some of the Spree gems we have used on the site include:

Active Shipping: This is a Spree plugin that can interface with USPS, UPS and FedEx. Labels on Time's active_shipping gem interacts with the UPS API, which is a big task to tackle since it requires a lot of configuration, especially every time Spree is updated.

Another important gem we use for Labels on Time is Volume Pricing. Volume Pricing is an extension to Spree that uses predefined ranges of quantities to determine the price for a particular product variant. When we first added this gem to the labelsontime.com checkout page, we kept finding that if a user increased the number of items in their cart enough to activate the volume pricing and receive a discount per item, the standard Spree view did not show the new (discounted) price that was in effect (although it was correctly calculating the totals). To resolve this, our developer Matt Galvin created some custom JavaScript and Ruby code. Thanks to Matt's ingenuity, the application can now return every price for every possible size and sort it accordingly.

WHAT WE'RE WORKING ON NEXT

Matt recently upgraded the application to 2.0.10, which was needed for security reasons. You can read more about the security fix here. We are also working on implementing a neat SEO gem called Canonical Rails, which helps search engines understand that any duplicate content URLs it can access all refer to the canonical URL.

Next up, we're going to implement inventory management, where, according to a customer's location, we can suggest the available inventory in the closest warehouse to that location.


published by noreply@blogger.com (Steph Skardal) on 2014-09-09 18:06:00 in the "hosting" category

I recently went through the process of downgrading and downsizing my Linode plan and I wanted to share a few of the [small] hoops that I had to jump through to get there, with the help of the Linode Support team.

Background

I've had a small personal WordPress site running for more than a few years now. I also use this server for personal Ruby on Rails development. When I began work on that site, I tried out a few shared hosting providers such as Bluehost and GoDaddy because of the low cost (Bluehost was ~$6/mo) at the time. However, I quickly encountered common limitations of shared server hosting:

  • Shared hosting providers typically make it very difficult to run Ruby on Rails, especially edge versions of Ruby on Rails. It's possible this has improved over the last few years, but when you are a developer and want to experiment (not locally), shared hosting providers are not going to give you the freedom to do so.
  • Shared hosting providers do not give you control of specific performance settings (e.g. use of mod_gzip, expires headers), so I was suffering from lack of control for my little WordPress site as well as my Rails sites. While this is another limitation that may have improved over the last few years, ultimately you are limited by non-root access as a shared server user.

Enter Linode

I looked to virtual server providers such as Linode, Slicehost, and Rackspace after experiencing these common limitations. At the time of my transition, Linode and Slicehost were comparatively priced, but because End Point had successful experiences with Linode for several clients up to that point, I decided to make the jump to Linode. I can't remember what my initial Linode size was (I think 512MB), but I chose the smallest available option at $20/mo, plus $5/mo for backups. I am not a sysadmin expert like my fellow coworkers (Richard, Jon, the list goes on ...), but I managed to get PHP and Ruby on Rails running on Apache with MySQL, and several of the best practices for speeding up your web site in place.

Fast forward about 2 years, and I've been very happy with Linode. I can only remember one specific instance where my server has gone down, and the support team has always been very responsive. They also occasionally release free upgrades at the $20/mo price point, and the current offering at that price point is the Linode 2GB (see more here). But lately, I've been hearing that Digital Ocean has been gaining momentum with a few cheaper options, and I considered making the jump. But I had missed the recent announcement that Linode introduced a new $10/mo plan back in June (hooray!), so I'm happy to stay with Linode at this lower price point, which is suitable for my small but optimized WordPress site and small Ruby on Rails experiments.

How to Downsize

In a perfect world, it would seem that to quickly downsize your Linode instance, you would first click on the "Resize" tab upon logging in to the Linode dashboard, click on the lower plan that you want, and then click "Resize this Linode now!", as shown in the screenshot below:

The Linode resize options.

Things went a little differently for me. First, I received this message when I tried to resize:
"Pending free upgrades must be performed before you are able to resize. Please visit the dashboard to upgrade."

So I headed to my dashboard and clicked on the free upgrade link on the bottom right in the dashboard. I then encountered this message:
"Linodes with configuration profiles referencing a 32 bit kernel are currently not eligible for this upgrade. For more information please see our switching kernels guide, or redeploy this Linode using a 64 bit distro."

My main Linode Configuration Profile was 64 bit, but my Restore Configuration Profile was running the 32 bit kernel. So, I first had to update that by clicking on the "Edit" link, selecting the right kernel, and saving those changes. That took a few minutes to take effect.


My two configuration profiles needed to be on 64 bit kernel to allow for the Linode upgrade.

Then, I was ready for the free upgrade, which took another few minutes after the server booted down, migrated, and booted back up. Next, I headed back to the "Resize" tab on the dashboard and tried to proceed on the downgrade. I immediately received an error message notifying me that my disk images exceeded the resized option I wanted to switch to (24GB). Upon examining my dashboard, my disk images showed ~28GB allocated to the various disk images:


My disk image space exceeded the 24GB allotted for the Linode 1024 plan.

I was directed by the Linode support team to edit the disk image to get under that 24GB allowed amount. They also explained that I must verify my current app(s) didn't exceed what I was going to downsize to, using "df -h" while logged into my server. I had already verified previously where disk space was going on my server and cleaned out some cached files and old log files, so I knew the space used was well under 24GB. The only additional step here was that I had to shut down my server first from the dashboard before reducing the disk image space. So I went through all that, and the disk image adjustment took another few minutes. After the disk image size was adjusted, I booted up the server again and verified it was still running.


Editing my Disk Image

Finally, after all that, I went to the "Resize" tab again and selected the Linode 1024 plan and proceeded. The new plan was implemented within a few minutes, automagically booting down my server and restarting it after completion. My billing information was also updated almost immediately, showing that I will now pay $12.50/mo for the Linode 1024 plan with backups.

Conclusion

In list form, here are the steps I went through to reach my final destination:

  • Updated kernels to 64 bit for all configuration profiles.
  • Applied pending, free upgrades.
  • Manually shut down server.
  • Applied change to reduce disk image space.
  • Rebooted server (not necessary, but I verified at this point things were still running).
  • Resized to Linode 1024 plan.

While this process wasn't as trivial as I had hoped, the support folks were super responsive, often responding within a minute or two when I had questions. I'm happy to stay with Linode at this offering and it allows them to remain competitive with both virtual private hosting providers and as an appealing alternative to shared hosting providers. The Linode 1024 plan is also a great starting point for a proof-of-concept or staging server that may be scaled up later as applications move to production and increase in traffic. Linode has plans ranging from the $10/mo plan I have (1GB of RAM, 24GB SSD storage, etc.) all the way up to a 96GB RAM, 1920 GB SSD storage plan at $960/mo.


published by noreply@blogger.com (Emanuele 'Lele' Calo') on 2014-09-05 00:09:00 in the "administration" category

Do you need something more powerful than the usual, clunky selector-based Rsyslog filtering rules, but still don't see the benefit of going full throttle and using RainerScript?

Perhaps you weren't aware, but there is an additional kind of filtering rule, a great alternative to the classic selector-based one, called property-based filtering.

This kind of filtering lets you create rules like:

:msg, contains, "firewall: IN=" -/var/log/firewall

There are a few more properties that you can use, like hostname, fromhost, and fromhost-ip, and the number (and variety) is growing over time.

Instead of just verifying that a specific string is contained in the selected property, you can also use operators like isempty and isequal, or the powerful regex and ereregex, which compare the property's content against the regexes we all love so much.
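For instance, an isequal match on the programname property might look like this (the property, value, and file name here are just illustrative):

:programname, isequal, "sshd" -/var/log/sshd.log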

:fromhost, regex, ".*app-fed{2}" -/data/myapp/frontend_servers.log
:fromhost, regex, ".*app-dbd{2}" -/data/myapp/DB_servers.log

Also remember that you can always use ! to negate the condition, and the discard operator to stop Rsyslog from processing further rules for that specific message:

:msg, !contains, "firewall: IN=" -/data/myapp/all_logs_but_firewall_related.log
:fromhost, regex, ".*appfed{2}" ~ -/data/myapp/frontend_servers.log
:fromhost, regex, ".*appdbd{2}" ~ -/data/myapp/DB_servers.log
*.* /data/myapp/all_logs_but_firewall_related_and_not_from_appfe_and_appdb_servers.log

In case you don't know what the - (dash) sign stands for, that's used to put the log writing process in async mode, so that Rsyslog can proceed with other filtering and won't wait for disk I/O to confirm a successful write before proceeding to something else.

Now go back to your logging system and let us know what nice set up you came up with!



published by noreply@blogger.com (Spencer Christensen) on 2014-08-25 14:54:00 in the "Camps" category

For most web developers, you have practices and tools that you are used to using to do your work. And for most web developers this means setting up your workstation with all the things you need to do your editing, compiling, testing, and pushing code to some place for sharing or deployment. This is a very common practice even though it is fraught with problems, like getting a database set up properly, configuring a web server and any other services (memcached, redis, mongodb, etc.), and many more issues.

Hopefully at some point you realize the pain that is involved in doing everything on your workstation directly and start looking for a better way to do web development. In this post I will be looking at some ways to do this better: using a virtual machine (VM), Vagrant, and DevCamps.

Using a VM for development

One way to improve things is to use a local virtual machine for your development (for example, using VirtualBox, or VMware Fusion). You can edit your code normally on your workstation, but then execute and test it in the VM. This also makes your workstation "clean", moving all those dependencies (like a database, web server, etc.) off your workstation and into the VM. It also gets your dev environment closer to production, if not identical. Sounds nice, but let's break down the pros and cons.

Benefits of using a VM
  • Dev environment closely matches production.
  • Execute and test code in a dedicated machine (not your workstation directly).
  • Allows for multiple projects to be worked on concurrently (one VM per project).
  • Exposes the developer to the Operations (systems administration) side of the web application (always a good thing).
  • Developer can edit files using their favorite text editor locally on the workstation (but will need to copy files to the VM as needed).
Problems with using a VM
  • Need to create and configure the VM. This could be very time consuming and error prone.
  • Still need to install and configure all services and packages. This could also be time consuming and error prone.
  • Backups of your work/configuration/everything are your own responsibility (extremely unlikely to happen).
  • Access to your dev environment is extremely limited, thus probably only you can access it and test things on it. No way for a QA engineer or business owner to test/demo your work.
  • Inexperienced developers can break things, or change them to no longer match production (install arbitrary packages, different versions than what is in production, screw up the db, screw up Apache configuration, etc.).
  • If working with an established database, then downloading a dump, installing, and getting the database usable is time consuming and error prone. ("I just broke my dev database!" can be a complete blocker for development.)
  • The developer needs to set up networking for the VM in order to ssh to it, copy files back and forth, and point a web browser to it. This may include manually setting up DNS, or /etc/hosts entries, or port forwarding, or more complex setups.
  • If using SSL with the web application, then the developer also needs to generate and install the SSL cert and configure the web server correctly.

Vagrant

What is Vagrant? It is a set of tools to make it easier to use a virtual machine for your web development. It attempts to lessen many of the problems listed above through the use of automation. By design it also makes some assumptions about how you are using the VM. For example, it assumes that you have the source code for your project in a directory somewhere directly on your workstation and would prefer to use your favorite text editor on those files. Instead of expecting you to continually push updated files to your VM, it sets up a corresponding directory on the VM and keeps the two in sync for you (using either shared folders, NFS, Samba, or rsync). It also sets up the networking for accessing the VM, usually with port forwarding, so you don't have to worry about that.
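To make that concrete, here is a minimal Vagrantfile sketch showing the pieces described above (the box name, ports, and provisioning script are placeholders, not a recommendation):

# Vagrantfile (sketch)
Vagrant.configure("2") do |config|
  # Base image for the VM
  config.vm.box = "ubuntu/trusty64"

  # Forward the VM's web server port to the workstation
  config.vm.network "forwarded_port", guest: 80, host: 8080

  # Keep the project directory on the workstation in sync with the VM
  config.vm.synced_folder ".", "/vagrant"

  # "Provision" the VM with a configuration management tool or script
  config.vm.provision "shell", path: "provision.sh"
end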

Benefits of Vagrant
  • Same as those listed above for using a VM, plus...
  • Flexible configuration (Vagrantfile) for creating and configuring the VM.
  • Automated networking for the VM with port forwarding. Abstracted ssh access (don't need to set up a hostname for the VM, simply type `vagrant ssh` to connect). Port forwarded browser access to the VM (usually http://localhost:8080, but configurable).
  • Sync'd directory between your workstation and the VM for source code. Allows for developers to use their favorite text editor locally on their workstation without needing to manually copy files to the VM.
  • Expects the use of a configuration management system (like puppet, chef, salt, or bash scripts) to "provision" the VM (which could help with proper and consistent setup).
  • Through the use of Vagrant Cloud you can get a generated url for others to access your VM (makes it publicly available through a tunnel created with the command `vagrant share`).
  • Configuration (Vagrantfile and puppet/chef/salt/etc.) files can be maintained/reviewed by Operations engineers for consistency with production.
Problems with Vagrant
  • Still need to install and configure all services and packages. This is lessened with the use of a configuration management tool like puppet, but you still need to create/debug/maintain the puppet configuration and setup.
  • Backups of your work/configuration/everything are your own responsibility (extremely unlikely to happen). This may be lessened for VM configuration files, assuming they are included in your project's VCS repo along with your source code.
  • Inexperienced developers can still break things, or change them to no longer match production (install arbitrary packages, different versions than what is in production, screw up the db, screw up Apache configuration, etc.).
  • If working with an established database, then downloading a dump, installing, and getting the database usable is time consuming and error prone. ("I just broke my dev database!" can be a complete blocker for development.)
  • If using SSL with the web application, then the developer also needs to generate and install the SSL cert and configure the web server correctly. This might be lessened if puppet (or whatever) is configured to manage this for you (but then you need to configure puppet to do that).

DevCamps

The DevCamps system takes a different approach. Instead of using VMs for development, it utilizes a shared server for all development. Each developer has their own account on the camps server and can create/update/delete "camps" (which are self-contained environments with all the parts needed). There is an initial setup for using camps which needs thorough understanding of the web application and all of its dependencies (OS, packages, services, etc.). For each camp, the system will create a directory for the user with everything related to that camp in it, including the web application source code, their own web server configuration, their own database with its own configuration, and any other resources. Each camp is assigned a camp number, and all services for that camp run on different ports (based on the camp number). For example, camp 12 may have Apache running on ports 9012 (HTTP) and 9112 (HTTPS) and MySQL running on port 8912. The developer doesn't need to know these ports, as tools allow for easier access to the needed services (commands like `mkcamp`, `re` for restarting services, `mysql_camp` for access to the database, etc.).
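To illustrate the numbering convention in that example, the per-camp ports can be derived from the camp number (this base-plus-camp-number scheme is simply what the example above implies; actual installations configure their own bases):

# Sketch: camp 12 -> Apache 9012/9112, MySQL 8912
def camp_ports(camp_number)
  {
    http:  9000 + camp_number,
    https: 9100 + camp_number,
    mysql: 8900 + camp_number,
  }
end

camp_ports(12)
=> {:http=>9012, :https=>9112, :mysql=>8912}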

DevCamps has been designed to address some of the pain usually associated with development environments. Developers usually do not need to install anything, since all dependencies should already be installed on the camps server (which should be maintained by an Operations engineer who can keep the packages, versions, etc. consistent with production). Having all development on a server allows Operations engineers to back up all dev work fairly easily. Databases do not need to be downloaded or manually set up; they are set up initially with the camps system, and then running `mkcamp` clones the database and sets it up for you. Running `refresh-camp --db` allows a developer to delete their camp's database and get a fresh clone, ready to use.
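In day-to-day use that might look something like this, using only the commands named above (exact options vary by installation):

mkcamp            # create a new camp: clones the database, generates service config
re                # restart the services for your camp
mysql_camp        # open a client connected to your camp's database
refresh-camp --db # drop your camp's database and get a fresh clone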

Benefits of DevCamps
  • Each developer can create/delete camps as needed, allowing for multiple camps at once and multiple projects at once.
  • Operations engineers can manage/maintain all dependencies for development, ensuring everything is consistent with production.
  • Backups of all dev work is easy (Operations engineer just needs to backup the camps server).
  • Developer does not need to configure services (camp templating system auto-generates needed configuration for proper port numbers), such as Apache, nginx, unicorn, MySQL, Postgres, etc.
  • SSL certificates can be easily shared/generated/installed/etc. automatically with the `mkcamp` script. Dev environments can easily have HTTPS without the developer doing anything.
  • Developers should not have permissions to install/change system packages or services. Thus inexperienced developers should not be able to break the server, other developer's environments, install arbitrary software. Screwing up their database or web server config can be fixed by either creating a new camp, refreshing their existing one, or an Operations engineer can easily fix it for them (since it is on a central server they would already have access to, and not need to worry about how to access some VM who knows where).
Problems with DevCamps
  • Since all camps live on a shared server running on different ports, this will not closely match production in that way. However, this may not be significant if nearly everything else does closely match production.
  • Adding a new dependency (for example, adding mongodb, or upgrading the version of Apache) may require quite a bit of effort and will affect all camps on the server: an Operations engineer will need to install the needed packages and add or change the needed configuration in the camps system and templates.
  • Using your favorite text editor locally on your workstation doesn't really work since all code lives on the server. It is possible to SFTP files back and forth, but this can be tedious and error prone.
  • Many aspects of the Operations (systems administration) side of the web application are hidden from the developer (this might also be considered a benefit).
  • All development is on a single server, which may be a single point of failure (if the camps server is down, then all development is blocked for all developers).
  • One camp can use up more CPU/RAM/disk/etc. than others and drive up the server's load, affecting the performance of all other camps.

Concluding Thoughts

It seems that Vagrant and DevCamps certainly have some good things going for them. I think it might be worth some thought and effort to try to meld the two together somehow, to take the benefits of both and reduce the problems as much as possible. Such a system might look like this:

  • Utilize vagrant commands and configuration, but have all VMs live on a central VM server. Thus allowing for central backups and access.
  • Source code and configuration lives on the server/VM but a sync'd directory is set up (sshfs mount point?) to allow for local editing of text files on the workstation.
  • VMs created should have restricted access, preventing developers from installing arbitrary packages, versions, screwing up the db, etc.
  • Configuration for services (database, web server, etc.) should be generated/managed by Operations engineers for consistency (utilizing puppet/chef/salt/etc.).
  • Databases should be cloned from a local copy on the VM server, thus avoiding the need to download anything and reducing setup effort.
  • SSL certs should be copied/generated locally on the VM server and installed as appropriate.
  • Sharing access to a VM should not depend on Vagrant Cloud, but instead should use some sort of internal service on the VM server to automate VM hostname/DNS for browser and ssh access to the VM.

I'm sure there are more pros and cons that I've missed. Add your thoughts to the comments below. Thanks.


published by noreply@blogger.com (Dave Jenkins) on 2014-08-22 14:21:00 in the "Education" category

This past week, End Point had the distinct pleasure of sending a Liquid Galaxy Express (the highly portable version of the platform) to the Daniel Island School in Charleston, South Carolina. Once it arrived, we provided remote support to their staff setting up the system. Through the generous donations of Mason Holland, Benefitfocus, and other donors, this PK-8 grade school is now the first school in the country below the university level with a Liquid Galaxy on campus.

From Claire Silanowicz, who coordinated the installation:

Mason Holland was introduced to the Liquid Galaxy system while visiting the Google Headquarters in San Francisco several months ago. After deciding to donate it to the Daniel Island School here in Charleston, SC, he brought me on to help with the project. I didn't know much about the Liquid Galaxy at first, but quickly realized how cool of a project this was going to be. With some help, I assembled a team of about 8 Benefitfocus employees to help with installation and long-term implementation. Benefitfocus is full of employees who are so passionate about innovative technology, and Mason's involvement with Benefitfocus was a perfect way to connect the company to the community. We had one meeting before the installation date to go over the basics and a few days later 5 of us were at the school unpacking boxes and assembling the 7-screen display. Once it was completed and turned on, we were all in awe. We went from the Golden Gate Bridge to the Duomo in Florence, Italy in a matter of seconds. We traveled to our homes and went to see our office building on street view. After going back to the school in the days to follow, I realized we only touched the tip of the iceberg. The faculty at the school had discovered the museums and underwater views that Google has managed to capture.

The Liquid Galaxy isn't known for revolutionizing the way children learn, but I firmly believe it is going to do just that at the Daniel Island School. The teachers and faculty are so excited to incorporate this new technology into their curricula. They have a unique opportunity to take this technology and make it an integral part of their teaching. I hope that in the future, other elementary and middle schools can have the Liquid Galaxy system so that teachers all over the country can collaborate and take advantage of everything it has to offer!

STEM education is becoming ever-more important in the fast economy of the 21st century. With a Liquid Galaxy these young students are exposed at a very early age to the wonders of geography, geology, urban development, oceanography, and demographics, not to mention the technological wonderment the platform itself evokes in young minds: with seven 55" screens mounted on an arced frame, a touchscreen podium and a rack of computers nearby, the Liquid Galaxy is a visually impressive piece of technology regardless of what is being shown on the screens.

This installation is another in a string of academic and educational deployments for the platform. End Point provides 24-hour monitoring and remote support for Liquid Galaxies at Westfield University in Massachusetts, the University of Kansas, the National Air & Space Museum in Washington DC, the Oceanographic Museum in Monaco, and a host of other educational institutions. We also work closely with researchers at the Lleida campus in Spain and the University of Western Sydney in Australia. We know of other Liquid Galaxies on campuses in Colorado, Georgia, Oklahoma, and Israel.

We expect great things from these students, and hope that some may eventually join us at End Point as developers!


published by noreply@blogger.com (Matt Galvin) on 2014-08-20 22:09:00 in the "active shipping gem" category

Hello again all. I was working on another Spree Commerce Site, a Ruby on Rails based e-commerce platform. As many of you know, Spree Commerce comes with Promotions. According to Spree Commerce documentation, Spree Commerce Promotions are:

"... used to provide discounts to orders, as well as to add potential additional items at no extra cost. Promotions are one of the most complex areas within Spree, as there are a large number of moving parts to consider."

The promotions feature can be used to offer discounts like free shipping, buy one get one free, etc. The client on this particular project had asked for the ability to provide a coupon for free shipping. Presumably this would be a quick and easy addition, since these types of promotions are included in Spree.

The site in question makes use of Spree's Active Shipping Gem, and plugs in the UPS Shipping API to return accurate and timely shipping prices with the UPS carrier.

The client offers a variety of shipping methods including Flat Rate Ground, Second Day Air, 3 Day Select, and Next Day Air. Often, Next Day Air shipping costs several times more than Ground. E.g.: If something costs $20 to ship Ground, it could easily cost around $130 to ship Next Day Air.

When creating a free shipping promotion in Spree it's important to understand that by default it will be applied to all shipping methods. In this case, the customer could place a small order, apply the coupon, and receive free Next Day Air shipping! To take care of this you need to use Promotion Rules. Spree comes with several built-in rules:

  • First Order: The user's order is their first.
  • ItemTotal: The order's total is greater than (or equal to) a given value.
  • Product: An order contains a specific product.
  • User: The order is by a specific user.
  • UserLoggedIn: The user is logged in.

As you can see, there is no built-in Promotion Rule to limit free shipping to certain shipping methods. But fear not, it's possible to create a custom rule.

module Spree
  class Promotion
    module Rules
      class RestrictFreeShipping < PromotionRule
        MATCH_POLICIES = %w(all)

        def eligible?(order, options = {})
          order.shipment.shipping_method.admin_name == "UPS Flat Rate Ground"
        end
      end
    end
  end
end

Note that you have to create a partial for the rule, as per the documentation.
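Depending on your Spree version, the new rule class may also need to be registered so it shows up in the admin promotions UI. A sketch, assuming Spree 2.x conventions (the initializer location is my assumption, not from the original setup):

# config/initializers/spree.rb (sketch)
Rails.application.config.spree.promotions.rules << Spree::Promotion::Rules::RestrictFreeShipping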

Then, in config/locales/en.yml I added a name and description for the rule.

en:
  spree:
    promotion_rule_types:
      restrict_free_shipping:
        name: Restrict Free Shipping To Ground
        description: If somebody uses a free shipping coupon it should only apply to ground shipping

The last step was to restart the app and configure the promotion in the Spree Admin interface.


published by noreply@blogger.com (Jon Jensen) on 2014-08-19 21:45:00 in the "email" category

On a Debian GNU/Linux 7 ("wheezy") system with both IPv6 and IPv4 networking set up, running Postfix 2.9.6 as an SMTP server, we ran into a mildly perplexing situation. The mail logs showed that for outgoing mail to MX servers we know have IPv6 addresses, the IPv6 address was only used occasionally, while the IPv4 address was used often. We expected it to always use IPv6 unless there was some problem, and that's been our experience on other mail servers.

At first we suspected some kind of flaky IPv6 setup on this host, but that turned out not to be the case. The MX servers themselves are perfectly reachable over IPv6 alone. In the end, it turned out to be a Postfix configuration option called smtp_address_preference:

smtp_address_preference (default: any)

The address type ("ipv6", "ipv4" or "any") that the Postfix SMTP client will try first, when a destination has IPv6 and IPv4 addresses with equal MX preference. This feature has no effect unless the inet_protocols setting enables both IPv4 and IPv6. With Postfix 2.8 the default is "ipv6".

Notes for mail delivery between sites that have both IPv4 and IPv6 connectivity:

The setting "smtp_address_preference = ipv6" is unsafe. It can fail to deliver mail when there is an outage that affects IPv6, while the destination is still reachable over IPv4.

The setting "smtp_address_preference = any" is safe. With this, mail will eventually be delivered even if there is an outage that affects IPv6 or IPv4, as long as it does not affect both.

This feature is available in Postfix 2.8 and later.

That documentation made it sound as if the default had changed to "ipv6" in Postfix 2.8, but at least on Debian 7 with Postfix 2.9, it was still defaulting to "any", thus effectively randomly choosing between IPv4 and IPv6 on outbound SMTP connections where the MX record pointed to both.

Changing the option to "ipv6" made Postfix behave as expected.
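For reference, this is the kind of change we made (a sketch assuming a stock Debian Postfix layout; adjust paths for your system):

# in /etc/postfix/main.cf
smtp_address_preference = ipv6

# or equivalently, from the command line
postconf -e 'smtp_address_preference = ipv6'
postfix reload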


published by noreply@blogger.com (Greg Sabino Mullane) on 2014-08-16 18:11:00 in the "laptop" category

I recently upgraded my main laptop to Ubuntu 14.04, and had to solve a few issues along the way. Ubuntu is probably the most popular Linux distribution. Although it is never my first choice (that would be FreeBSD or Red Hat), Ubuntu is superb at working "out of the box", so I often end up using it, as the other distributions all have issues.

Ubuntu 14.04.1 is a major "LTS" version, where LTS is "long term support". The download page states that 14.04 (aka "Trusty Tahr") comes with "five years of security and maintenance updates, guaranteed." Alas, the page fails to mention the release date, which was July 24, 2014. When a new version of Ubuntu comes out, the OS will keep nagging you until you upgrade. I finally found a block of time in which I could survive without my laptop, and started the upgrade process. It took a little longer than I thought it would, but went smoothly except for one issue:

Issue 1: xscreensaver

During the install, the following warning appeared:

"
"One or more running instances of xscreensaver or xlockmore have been detected on this system. Because of incompatible library changes, the upgrade of the GNU libc library will leave you unable to authenticate to these programs. You should arrange for these programs to be restarted or stopped before continuing this upgrade, to avoid locking your users out of their current sessions."
"

First, this is a terrible message. I'm sure it has caused lots of confusion, as most users probably do not know what xscreensaver and xlockmore are. Is it so hard for the installer to tell which one is in use? Why in the world can the installer not simply stop these programs itself?! The solution was simple enough: in a terminal, I ran:

pgrep -l screensaver
pkill screensaver
pgrep -l screensaver

The first command was to see if I had any programs running with "screensaver" in their name (I did: xscreensaver). As it was the only program that matched, it was safe to run the second command, which stopped xscreensaver. Finally, I re-ran the pgrep to make sure it was stopped and gone. Then I did the same thing with the string "lockmore" (which found no matches, as I expected). Once xscreensaver was turned off, I told the upgrade to continue, and had no more problems until after Ubuntu 14.04 was installed and running. The first post-install problem appeared after I suspended the computer and brought it back to life - no wireless network!

Issue 2: no wireless after suspend

Once suspended and revived, the wireless would simply not work. Everything looked normal: networking was enabled, wifi hotspots were detected, but a connection could simply not be made. After going through bug reports online and verifying the sanity of the output of commands such as "nmcli nm" and "lshw -C network", I found a solution. This was the hardest issue to solve, as it had no intuitive solution, nothing definitive online, and was full of red herrings. What worked for me was to *remove* the suspension of the iwlwifi module. I commented out the line from /etc/pm/config.d/modules, in case I ever need it again, so the file now looks like this:

# SUSPEND_MODULES="iwlwifi"

Once that was commented out, everything worked fine. I tested by doing sudo pm-suspend from the command-line, and then bringing the computer back up and watching it automatically reconnect to my local wifi.

Issue 3: color diffs in git

I use the command-line a lot, and a day never goes by without heavy use of git as well. On running a "git diff" in the new Ubuntu version, I was surprised to see a bunch of escape codes instead of the usual pretty colors I was used to:

ESC[1mdiff --git a/t/03dbmethod.t b/t/03dbmethod.tESC[m
ESC[1mindex 108e0c5..ffcab48 100644ESC[m
ESC[1m--- a/t/03dbmethod.tESC[m
ESC[1m+++ b/t/03dbmethod.tESC[m
ESC[36m@@ -26,7 +26,7 @@ESC[m ESC[mmy $dbh = connect_database();ESC[m
 if (! $dbh) {ESC[m
    plan skip_all => 'Connection to database failed, cannot continue testing';ESC[m
 }ESC[m
ESC[31m-plan tests => 543;ESC[m
ESC[32m+ESC[mESC[32mplan tests => 545;ESC[m

After poking around with terminal settings and the like, a coworker suggested I simply tell git to use an intelligent pager with the command git config --global core.pager "less -r". The output immediately improved:

diff --git a/t/03dbmethod.t b/t/03dbmethod.t
index 108e0c5..ffcab48 100644
--- a/t/03dbmethod.t
+++ b/t/03dbmethod.t
@@ -26,7 +26,7 @@ my $dbh = connect_database();
 if (! $dbh) {
    plan skip_all => 'Connection to database failed, cannot continue testing';
 }
-plan tests => 543;
+plan tests => 545;

Thanks Josh Williams! The above fix worked perfectly. I'm a little unsure of this solution as I think the terminal and not git is really to blame, but it works for me and I've seen no other terminal issues yet.

Issue 4: cannot select text in emacs

The top three programs I use every day are ssh, git, and emacs. While trying (post-upgrade) to reply to an email inside mutt, I found that I could not select text in emacs using ctrl-space. This is a critical problem, as this is an extremely important feature to lose in emacs. This problem was pretty easy to track down. The program "ibus" was intercepting all ctrl-space calls for its own purpose. I have no idea why ctrl-space was chosen, being used by emacs since before Ubuntu was even born (the technical term for this is "crappy default"). Fixing it requires visiting the ibus-setup program. You can reach it via the system menu by going to Settings Manager, then scroll down to the "Other" section and find "Keyboard Input Methods". Or you can simply run ibus-setup from your terminal (no sudo needed).


The ibus-setup window

However you get there, you will see a section labelled "Keyboard Shortcuts". There you will see a "Next input method:" text box, with the inside of it containing <Control>space. Aha! Click on the three-dot button to the right of it, and change it to something more sensible. I decided to simply add an "Alt", such that going to the next input method will require Ctrl-Alt-Space rather than Ctrl-Space. To make that change, just select the "Alt" checkbox, click "Apply", click "Ok", and verify that the text box now says <Control><Alt>space.

So far, those are the only issues I have encountered using Ubuntu 14.04. Hopefully this post is useful to someone running into the same problems. Perhaps I will need to refer back to it in a few years(?) when I upgrade Ubuntu again! :)


published by Eugenia on 2014-08-05 02:47:41 in the "Recipes" category
Eugenia Loli-Queru

Oopsies are the Americanized version of the French souffle. My French husband loved them. They can be baked in ramekins for a more authentic souffle taste (in this case omit the almond flour), or as bread buns. They’re extremely low carb, and Paleo/Primal.

Ingredients (makes 6 buns)
* 4 eggs, yolks and whites separated in two bowls
* 3/4 cup of creamy goat cheese, or shaved emmental cheese
* 2 tablespoons of almond or coconut flour
* 1/2 teaspoon of cream of tartar (or baking soda)

Method
1. Preheat oven to 350 F (175 C). On the bowl with the whites, add the cream of tartar.
2. Beat the whites in high speed until very-very stiff, about 4-5 minutes.
3. Add the cheese and flour to the yolk bowl, and beat until smooth, about 1-2 minutes.
4. Fold the yolk mixture slowly into the whites, and mix carefully with a spatula for a few seconds.
5. Spoon the mixture in 6 pieces, on a baking sheet with a parchment paper. Bake for 15-20 minutes until golden brown. Serve immediately.

Per Serving (3 buns): 430 calories, 3 gr of net carbs, 36 gr of fat, 25% protein, 83% Lysine. 45% B12, 72% Riboflavin, 63% choline, 55% A, 23% calcium, 59% phosphorus, 31% selenium, 33% copper.


published by noreply@blogger.com (Josh Ausborne) on 2014-08-01 17:08:00

For our Liquid Galaxy installations, we use a master computer known as a "head node" and a set of slave computers known as "display nodes." The slave computers all PXE-boot from the head node, which directs them to boot from a specific ISO disk image.

In general, this system works great. We connect to the head node and from there can communicate with the display nodes. We can boot them, change their ISO, and do all sorts of other maintenance tasks.

There are two main settings that we change in the BIOS to make things run smoothly. First is that we set the machine to power on when AC power is restored. Second, we set the machine's boot priority to use the network.

Occasionally, though, the CMOS battery has an issue, and the BIOS settings get lost.  How do we get in and boot the machine up? This is where ipmitool has really become quite handy.

Today we had a problem with one display node at one of our sites. It seems that all of the machines in the Liquid Galaxy were rebooted, or otherwise powered off and then back on. One of them just didn't come up, and it was causing me much grief. We have used ipmitool in the past to help us administer the machines.

IPMI stands for Intelligent Platform Management Interface, and it gives the administrator some non-operating-system-level access to the machine. Most vendors have some sort of management interface (HP's iLO, Dell's DRAC), including our Asus motherboards. The open source ipmitool is the tool we use on our Linux systems to interface with the IPMI module on the motherboard.

I connected to the head node and ran the following command and got the following output:

    admin@headnode:~ ipmitool -H 10.42.41.33 -I lanplus -P 'xxxxxx' chassis status
    System Power         : off
    Power Overload       : false
    Power Interlock      : inactive
    Main Power Fault     : false
    Power Control Fault  : false
    Power Restore Policy : always-off
    Last Power Event     : ac-failed
    Chassis Intrusion    : inactive
    Front-Panel Lockout  : inactive
    Drive Fault          : false
    Cooling/Fan Fault    : true

While Asus's Linux support is pretty lacking, and most of the options we find here don't work with the open source ipmitool, we did find "System Power : off" in the output, which is a pretty good indicator of our problem. This tells me that the BIOS settings have been lost for some reason, as we had previously set the system to power on when AC power was restored. I ran the following to tell it to boot into the BIOS, then powered on the machine:

    admin@headnode:~ ipmitool -H 10.42.41.33 -I lanplus -P 'xxxxxx' chassis bootdev bios
    admin@headnode:~ ipmitool -H 10.42.41.33 -I lanplus -P 'xxxxxx' chassis power on

At this point, the machine is ready for me to be able to access the BIOS through a terminal window. I opened a new terminal window and typed the following:

    admin@headnode:~ ipmitool -H ipmi-lg2-3 -U admin -I lanplus sol activate
    Password:

After typing in the password, I get the ever-helpful dialog below:

    [SOL Session operational.  Use ~? for help]

I didn't bother with the ~? because I knew that the BIOS would eventually just show up in my terminal. There are, however, other commands that pressing ~? would show.

See, look at this terminal version of the BIOS that we all know and love!



Now that the BIOS was up, it's as if I was really right in front of the computer typing on a keyboard attached to it. I was able to get in and change the settings for the APM, so that the system will power on upon restoration of AC power. I also verified that the machine is set to boot from the network port before saving changes and exiting. The next thing I knew, the system was booting up PXE, which then pointed it to the proper ISO, and then it was all the way up and running.

And this, my friends, is why systems should have IPMI. I state the obvious here when I say that life as a system administrator is so much easier when one can get into the BIOS on a remote system.


published by noreply@blogger.com (Josh Williams) on 2014-08-01 03:01:00 in the "Conference" category
Just got back from PyOhio a couple of days ago. Columbus used to be my old stomping grounds so it's often nice to get back there. And PyOhio had been on my TODO for a number of years now, but every time it seemed like something else just got in the way. This year I figured it was finally time, and I'm quite glad it worked out.

While of course everything centered around usage with Python, many of the talks covered other tools or projects. I return with a much better view of good technologies like Redis, Ansible, Docker, ØMQ, Kafka, Celery, asyncio in Python 3.4, Graphite, and much more that isn't coming to mind at the moment. I have lots to dig into now.

It also pleased me to see so much Postgres love! I mean, clearly, once you start to use it you'll fall in love, that's without question. But the hall track was full of conversations about how various people were using Postgres, what it tied into in their applications, and various tips and tricks they'd discovered in its functionality. Just goes to prove that Postgres == ♥.

Naturally PostgreSQL is what I spoke on; PL/Python, specifically. It actually directly followed a talk on PostgreSQL's LISTEN/NOTIFY feature. I was a touch worried about overlap considering some of the things I'd planned, but it turns out the two talks more or less dovetailed from one to the other. It was unintentional, but it worked out very well.

Anyway, the slides are available, but the talk wasn't quite structured in the normal format of having those slides displayed on a projector. Instead, in a bit of an experiment, the attendees could hit a web page and bring up the slides on their laptops or such. That slide deck opened a long-polling socket back to the server, and the web app could control the slide movement on those remote screens. That let the projector keep to a console session that was used to illustrate PL/Python and PostgreSQL usage. As you might expect, the demo included a run through the PL/Python and related code that drove that socket. Hopefully the video, when it's available, caught some of it.

The sessions were recorded on video, but one thing I hadn't expected was how that influenced which talks I tried to attend. Knowing that the more software-oriented presentations would be available for viewing later, where available I opted for more hardware-oriented topics, or other talks where being present seemed like it would have much more impact. I also didn't feel rushed between sessions on the occasions where I got caught in a hall track conversation or checked out something in the open spaces area (in one sense, a dedicated hall track room).

Overall, it was a fantastic conference and a great thank you goes out to everyone that helped make it happen!