
published by noreply@blogger.com (Ben Witten) on 2016-07-21 16:53:00 in the "Liquid Galaxy" category
The Liquid Galaxy was recently featured on the front page of Reef Builders, the original and longest running saltwater fish blog and a leading source of aquarium news. Reef Builders writer Nicole Helgason wrote the story, which can be found here.

The Liquid Galaxy is an amazing tool for aquariums (not to mention other kinds of museums, offices, educational institutions, and more) around the world. It is particularly effective for aquariums thanks to the underwater imagery shown on the system, as well as End Point's Content Management System, which lets users tell underwater "stories" with supporting images, videos, and content. We have deployed to aquariums and science museums throughout the US, Europe, and East Asia.

The Liquid Galaxy lets visitors explore coral reefs and underwater environments the exact same way they navigate Street View (it's the same app and data set), but across a full set of screens for a totally immersive experience. While viewing the dazzling display, the user can use the Liquid Galaxy touchpad and 3D joystick to look around and navigate.

A video demonstrating how the Liquid Galaxy is utilized in aquariums can be found below. If you're interested in learning more about Liquid Galaxy for your aquarium, please contact us here for more information.


published by noreply@blogger.com (Elizabeth Garrett) on 2016-07-21 13:15:00 in the "clients" category

Responsive design has been a hot topic in the e-commerce world for several years now. End Point has worked on many sites over the last few years to transition clients to a responsive design website model. While many large retailers have already transitioned to a responsive design, there are many smaller e-commerce sites still on an older design model, and I would like to show that the return on investment for those stores is still noteworthy.

The lead of our Interchange team, Greg Hanson, led a responsive design project and I'd like to summarize that work on our blog. For confidentiality, I am leaving out the client's name in this case.

Why Go Responsive?

There are two main reasons every e-commerce website, even a small one, needs to become responsive:

  • Your customers
  • Google

The march toward mobile sales at this point is undeniable and unstoppable. As more and more people become comfortable using their phones and tablets to purchase things, this market share will only grow. Also, Google has begun de-prioritizing websites that do not cater to mobile users. If you are waiting to go responsive because of budget, you might be surprised to learn how dramatically mobile revenue increased for the customer in this case.

Goals

This client is a small e-commerce business with a heavy "on" season and an "off" season during which the business owners could focus on this project. They wanted:

  • To accommodate the increasing mobile user base; they knew it was there from looking at their Google Analytics.
  • To increase revenue from mobile users. They could see that they had a 10% mobile user base, but they were converting only a small percentage.

End Point's strategy

Our strategy with this small client, to minimize cost and impact to the business, was to:

Use Bootstrap

Bootstrap is one of many front-end frameworks that let you create a design template and roll it out to all the pages within a website without having to re-code each and every page. This dramatically speeds up the work and decreases cost.

Break up the work into segments

In this case, we had a three phase approach:

  • Phase I, redesign the site using Bootstrap, but still at fixed width
  • Phase II, site checkout sequence changed to responsive
  • Phase III, entire site responsive

The Project

Converting the site to Bootstrap was the biggest chunk of the time and money spent on this project. Since we knew this would be the foundation for the changes to come, we took our time getting it right, keeping the client involved every step of the way. We didn't want to get in the way of their busy season either, so we completed that project and waited half a year to begin the next piece.

The second step was updating the checkout sequence to be responsive, since checkout was arguably the hardest part of using the non-responsive site on a mobile device. The client considered this piece top priority. Plus, since it was only a few pages, it allowed us all to better understand the scope of coding the rest of the site and give the client an accurate picture of the budget.

Last, once we had a responsive checkout sequence, we changed the rest of the internal pages to be responsive and rolled out the entire site as responsive and mobile friendly.

Results

The first time we looked at the analytics following the responsive conversion, we were SHOCKED! Comparing a small sample period against the same period a year prior, we saw a 280% increase in mobile purchases.


The timeframe for these comparison numbers was August 2014, before the responsive transition, and August 2015, after the transition.

To sanity-check those numbers, we re-ran some of the analytics recently: in May 2016, revenue from mobile users was still up over 90%, and the client's total revenue year-over-year was up 12%.


We also ran some numbers on the growth of mobile revenue as a percentage of overall revenue through this process. As you can see, two years ago mobile revenue was less than 1% and is now near 12%.


In this case, I didn't go into all the ways to slice and dice the numbers in Google Analytics. However, I'm happy to speak with you if you want to know more about mobile users, this client's data, or how responsive design might help grow your e-commerce business.

The Lesson

The lesson here is that any store will benefit from going responsive. Having an experienced partner like End Point that can work with your budget and timeline makes this a doable project for a store of any size.


published by noreply@blogger.com (Greg Sabino Mullane) on 2016-07-14 03:46:00 in the "postgres" category

Constraints in Postgres are very powerful and versatile: not only are foreign keys, primary keys, and column uniqueness done internally via constraints, but you may create your own quite easily (at both the column and table level). Most of the time, constraints are simply set and forgotten, but there is one situation in which constraints may become a problem: copying the database using the pg_dump program.
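
As a quick aside, here is a throwaway example (not one of the tables used below) showing what a column-level versus a table-level CHECK constraint looks like:

CREATE TABLE sale (
  price    NUMERIC CHECK (price > 0),  -- column-level constraint
  discount NUMERIC,
  CONSTRAINT sane_discount CHECK (discount >= 0 AND discount <= price)  -- table-level constraint
);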

The issue is that constraints are usually added *before* the data is copied to the new table via the COPY command. This means the constraint fires for each added row, to make sure that the row passes the conditions of the constraint. If the data is not valid, however, the COPY will fail, and you will not be able to load the output of your pg_dump into a new database. Further, there may be a non-trivial performance hit doing all that validation. Preventing the constraint from firing may provide a significant speed boost, especially for very large tables with non-trivial constraints.

Let's explore one way to work around the problem of pg_dump failing to work because some of the data is not valid according to the logic of the constraints. While it would be quicker to make some of these changes on the production system itself, corporate inertia, red tape, and the usual DBA paranoia means a better way is to modify a copy of the database instead.

For this example, we will first create a sample "production" database and give it a simple constraint. This constraint is based on a function, to both emulate a specific real-world example we came across for a client recently, and to allow us to easily create a database in which the data is invalid with regards to the constraint:

dropdb test_prod; createdb test_prod
pgbench test_prod -i -n
creating tables...
100000 of 100000 tuples (100%) done (elapsed 0.82 s, remaining 0.00 s)
set primary keys...
done.
psql test_prod -c 'create function valid_account(int) returns bool language sql immutable as $$ SELECT $1 > 0$$;'
CREATE FUNCTION
psql test_prod -c 'alter table pgbench_accounts add constraint good_aid check ( valid_account(aid) )'
ALTER TABLE

Note that the constraint was added without any problem, as all of the values in the aid column satisfy the function, as each one is greater than zero. Let's tweak the function, such that it no longer represents a valid, up to date constraint on the table in question:

## Verify that the constraint is working - we should get an error:
psql test_prod -c 'update pgbench_accounts set aid = -1 where aid = 1'
ERROR:  new row for relation "pgbench_accounts" violates check constraint "good_aid"
DETAIL:  Failing row contains (-1, 1, 0,                                         ...).

## Modify the function to disallow account ids under 100. No error is produced!
psql test_prod -c 'create or replace function valid_account(int) returns bool language sql volatile as $$ SELECT $1 > 99$$'
CREATE FUNCTION

## The error is tripped only when we violate it afresh:
psql test_prod -c 'update pgbench_accounts SET aid=125 WHERE aid=125'
UPDATE 1
psql test_prod -c 'update pgbench_accounts SET aid=88 WHERE aid=88'
ERROR:  new row for relation "pgbench_accounts" violates check constraint "good_aid"
DETAIL:  Failing row contains (88, 1, 0,                                         ...).

The volatility was changed from IMMUTABLE to VOLATILE simply to demonstrate that a function called by a constraint is not bound to any particular volatility, although it *should* always be IMMUTABLE. In this example, it is a moot point, as our function can be immutable and still be "invalid" for some rows in the table. Owing to our function changing its logic, we now have a situation in which a regular pg_dump cannot be done:

dropdb test_upgraded; createdb test_upgraded
pg_dump test_prod | psql test_upgraded -q
ERROR:  new row for relation "pgbench_accounts" violates check constraint "good_aid"
DETAIL:  Failing row contains (1, 1, 0,                                          ...).
CONTEXT:  COPY pgbench_accounts, line 1: "1             1   0          "
## Ruh roh!

Time for a workaround. When a constraint is created, it may be declared as NOT VALID, which simply means that it makes no promises about the *existing* data in the table, but will start constraining any data changed from that point forward. Of particular importance is the fact that pg_dump can dump things into three sections, "pre-data", "data", and "post-data". When a normal constraint is dumped, it will go into the pre-data section, and cause the problems seen above when the data is loaded. However, a constraint that has been declared NOT VALID will appear in the post-data section, which will allow the data to load, as it will not be declared until after the "data" section has been loaded in. Thus, our workaround will be to move constraints from the pre-data to the post-data section. First, let's confirm the state of things by making some dumps from the production database:

pg_dump test_prod --section=pre-data -x -f test_prod.pre.sql
pg_dump test_prod --section=post-data -x -f test_prod.post.sql
## Confirm that the constraint is in the "pre" section:
grep good_aid test*sql
test_prod.pre.sql:    CONSTRAINT good_aid CHECK (valid_account(aid))

There are a few ways around this constraint issue, but here is one that I like as it makes no changes at all to production, and produces valid SQL files that may be used over and over.

dropdb test_upgraded; createdb test_upgraded
## Note that --schema-only is basically the combination of pre-data and post-data
pg_dump test_prod --schema-only | psql test_upgraded -q
## Save a copy so we can restore these to the way we found them later:
psql test_upgraded -c "select format('update pg_constraint set convalidated=true where conname=%L and connamespace::regnamespace::text=%L;', 
  conname, nspname) from pg_constraint c join pg_namespace n on (n.oid=c.connamespace) 
  where contype ='c' and convalidated" -t -o restore_constraints.sql
## Yes, we are updating the system catalogs. Don't Panic!
psql test_upgraded -c "update pg_constraint set convalidated=false where contype='c' and convalidated"
UPDATE 3
## Why 3? The information_schema "schema" has two harmless constraints
pg_dump test_upgraded --section=pre-data -x -o test_upgraded.pre.sql
pg_dump test_upgraded --section=post-data -x -o test_upgraded.post.sql
## Verify that the constraint has been moved to the "post" section:
grep good test*sql
test_prod.pre.sql:    CONSTRAINT good_aid CHECK (valid_account(aid))
test_upgraded.post.sql:-- Name: good_aid; Type: CHECK CONSTRAINT; Schema: public; Owner: greg
test_upgraded.post.sql:    ADD CONSTRAINT good_aid CHECK (valid_account(aid)) NOT VALID;
## Two diffs to show the inline (pre) versus ALTER TABLE (post) constraint creations:
$ diff -u1 test_prod.pre.sql test_upgraded.pre.sql 
--- test_prod.pre.sql        2016-07-04 00:10:06.676766984 -0400
+++ test_upgraded.pre.sql    2016-07-04 00:11:07.978728477 -0400
@@ -54,4 +54,3 @@
     abalance integer,
-    filler character(84),
-    CONSTRAINT good_aid CHECK (valid_account(aid))
+    filler character(84)
 )

$ diff -u1 test_prod.post.sql test_upgraded.post.sql 
--- test_prod.post.sql        2016-07-04 00:11:48.683838577 -0400
+++ test_upgraded.post.sql    2016-07-04 00:11:57.265797869 -0400
@@ -17,2 +17,10 @@
 
+--
+-- Name: good_aid; Type: CHECK CONSTRAINT; Schema: public; Owner: greg
+--
+
+ALTER TABLE pgbench_accounts
+    ADD CONSTRAINT good_aid CHECK (valid_account(aid)) NOT VALID;
+
+
 SET default_tablespace = '';

Now we can simply sandwich our data load between the new pre and post files, and avoid having the constraints interfere with the data load portion at all:

dropdb test_upgraded; createdb test_upgraded
psql test_upgraded -q -f test_upgraded.pre.sql
pg_dump test_prod --section=data | psql test_upgraded -q
psql test_upgraded -q -f test_upgraded.post.sql
## As the final touch, restore all the constraints we changed to exactly how they were before:
psql test_upgraded -f restore_constraints.sql

A final sanity check is always a good idea, to make sure the two databases are identical, despite our system catalog tweaking:

diff -s <(pg_dump test_prod) <(pg_dump test_upgraded)
Files /dev/fd/63 and /dev/fd/62 are identical

Although we declared a goal of having the upgraded database match production as closely as possible, you can always not apply that final restore_constraints.sql file and leave the constraints as NOT VALID, which is a better reflection of the reality of things. It also means you will not have to go through this rigmarole again, as those constraints shall forevermore be put into the post-data section when doing a pg_dump (unless someone runs the ALTER TABLE ... VALIDATE CONSTRAINT ... command!).
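
For reference, validating such a constraint later looks like this; note that VALIDATE CONSTRAINT scans the entire table and will raise an error if any existing rows (as in our example) violate the check:

psql test_upgraded -c 'ALTER TABLE pgbench_accounts VALIDATE CONSTRAINT good_aid'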

While there is no direct way to disable constraints when loading data, using this pre-data to post-data trick can not only boost data load times, but get you out of a potential jam when your data is invalid!


published by noreply@blogger.com (Phin Jensen) on 2016-07-14 02:17:00 in the "book review" category

Two Scoops of Django: Best Practices for Django 1.8 is, shockingly, a book about best practices. It's not a Django library reference, a book about Django fundamentals, or a collection of tips and tricks. It's a book designed to help web developers, novices and experts alike, avoid common pitfalls in every stage of the web development process, specifically the process of developing with Django.

The book can be used both as a reference and as a cover-to-cover guide to best practices. I've done both and found it to be enjoyable, accessible, and educational when read cover-to-cover, and a valuable reference when setting up a new Django project or doing general Django development. It covers a huge range of material, answering questions like:

  • Where should I store secret keys?
  • Should I use virtualenv?
  • How should I structure my Django project?
  • When should I use "blank" and "null" in model fields?
  • Should I use Class-Based Views or Function-Based Views?
  • How should I structure my URLConfs?
  • When should I use Forms?
  • Where should templates be stored?
  • Should I write custom template tags and filters?
  • What package should I use to create a REST API?
  • What core components should I replace?
  • How can I modify or replace the User and authentication system?
  • How should I test my app?
  • How can I improve performance?
  • How can I keep my app secure?
  • How do I properly use the logging library?
  • How do I deploy to a Platform as a Service or my own server(s)?
  • What can I do to improve the debugging process?
  • What are some good third-party packages?
  • Where can I find good documentation and tutorials?
  • Where should I go to ask more questions?

The question, then, is whether this book delivers that information well. For the most part, it does. It's important to recognize that the book doesn't cover any of these subjects in great detail, but it does do a great job explaining the "why" behind some of the simple rules it establishes and referencing online resources that go much more in-depth on each subject. It also does a great job showing clearly marked bad examples, making it very easy to see whether you are (or were planning on) doing something badly. The writing style is very accessible and straightforward; I read large portions of the book during breakfast or lunch.

Two sections stood out to me as being very helpful for my own projects. First is Chapter 4, Fundamentals of Django App Design, which explained better than any resource I've found yet exactly what a Django "app" (as in ./manage.py startapp polls) should be used for. It explains what an app should or shouldn't have in it, how much an app should do, when you should break functionality into separate apps, and more.

The next section that really helped me was Chapter 6, Model Best Practices, which explained things like what code should and shouldn't be in a model, how to use migrations and managers, and which ModelFields should be avoided. Perhaps the most useful part of that chapter is a table in the section "When to Use Null and Blank," which makes for a quick and easy reference on which fields go well with the null and blank parameters and when you should use both or neither.

The only real problem I had with Two Scoops of Django was that the illustrations rarely felt necessary or helpful. The majority of them are ice cream-themed jokes that aren't particularly funny. Overall, I really enjoyed this book and I definitely recommend it to anybody who is doing, or is interested in doing, serious Django development.


published by noreply@blogger.com (Dave Jenkins) on 2016-07-13 20:54:00 in the "cesium" category

Data visualization continues to evolve, with ever-more complex data sets openly available and a correspondingly rapid pace of development in visualization tools. For mapping GIS data, the Cesium app is gaining quite a bit of traction. As we continue to branch out with new functionality and visualization apps for the Liquid Galaxy, we wanted to try the Cesium app as well.


Cesium is written entirely in JavaScript on top of WebGL and offers some nice advantages over other engines: it's open source, it's flexible, and it's quick. It can accept an array of points, shapefiles, 3D models, and even KML. The JavaScript then chews these up and delivers a nice, consistent 3D environment that we can fly through with the SpaceNav controller, set scenes in a presentation to tell a story, or mix with video or graphic popups for a fully immersive multimedia experience. Being open source, Cesium provides a great deal of flexibility and accessibility for building different kinds of data visualizations and interactions. There are a lot of new startups exploiting this platform, and we welcome the opportunity to work with them.

As we've written previously, the main advantage of the Liquid Galaxy platform is the ability to adjust the viewing angle on each screen to match the physical offset, avoiding (as much as possible) artificial distortions, fisheye views, or image stretching. The trickiest bit of this project was setting up the distributed camera driver, which takes input from the SpaceNav controller and correctly aligns the view position for each of the geometrically offset displays. Once the math is worked out, it's relatively quick work to put the settings into a rosbridge WebSockets driver. Once again, we're really enjoying the flexibility that the ROS architecture grants this system.

Looking forward, we anticipate this can open up many more visualizations for the Liquid Galaxy. As we continue to roll out in corporate, educational, and archival environments such as real estate brokerages, hospitality service providers, universities, and museums, the Cesium platform will offer yet another way for our customers to visualize and interact with their data.


published by noreply@blogger.com (Ben Witten) on 2016-07-12 16:11:00 in the "Liquid Galaxy" category
PBS recently aired a segment about the Liquid Galaxy! Just before we presented at New York Tech Meetup, we were interviewed about the Liquid Galaxy for SciTech Now, a PBS publication. The interview took place in NYU's Skirball Center For The Performing Arts, which is where New York Tech Meetup takes place every month.



The Liquid Galaxy segment, which can be viewed above, features Ben Goldstein and me talking with complementary visuals playing at the same time.

Ben Goldstein opens the segment by talking about how the Liquid Galaxy is a panoramic system that engages your peripheral vision, and is immersive in that way.

I go on to add that the system consists of large paneled screens set up in an arch around the viewer. The Liquid Galaxy includes a touchscreen and 3D joystick that allow users to fly around the world. From there, with End Point's Content Management System, users can add images, video, KML, and other overlays to add interactivity and build custom presentations on the system. Thus far, the Liquid Galaxy has been particularly popular in real estate, museums, aquariums, research libraries, hospitality, and travel.

Ben concludes the segment by talking about how the system kind of plays "follow the leader". Navigation occurs on the central display, while the other displays are configured at appropriate geometric offsets. The other displays pull down their appropriate section of the world so that the viewer can see the world in an immersive panoramic view all at once.

We hope you enjoy our segment!



published by Eugenia on 2016-07-11 00:31:37 in the "General" category
Eugenia Loli-Queru

I’m Greek. I’ve lived both in rural, mountainous places of mainland Greece (I grew up near the supposed entrance to Hades Underworld no less!), and near-sea towns.

Now that I live in the US, what kind of grinds my gears is when I read about how great the Mediterranean Diet (MD) is. Don't get me wrong. The Mediterranean Diet is better than most other regular Western diets out there. But the health benefits researchers saw in these regions prior to 1970 are only partly due to the diet. The rest is lifestyle. It's a logical fallacy to separate the lifestyle of these people from their diet. Western researchers like to pick and choose elements so they can easily make their case, but either they represent the whole lifestyle of Mediterranean people, or they need to shut their holes about the MD.

And what was that lifestyle? Well, take a peek:

Prior to the 1970s, before the Westernized diet took hold even in rural Greek places, here is a typical day for my grandparents and my parents (my mother lived 15 years in that lifestyle, right until electricity finally came to these villages in 1971, and my father 22 years):

– Morning
Wake up at the crack of dawn. Open the chickens' coop door. Eat some sour milk (similar to kefir) or yogurt (not strained yogurt like FAGE, that's not truly traditional Greek), or boiled eggs, or cheese & olives and home-made (well-fermented, made with an OLD variety of wheat) bread for breakfast, along with Turkish-style (unfiltered) coffee. In the winter, they'd eat "trahanas", which looks similar to porridge, but is made out of lacto-fermented wheat.

Kids get ready to go to school, father (or older son) will go up to the mountains with the goats/sheep (usually 150-250 animals) and the dogs (usually 2-5), while the mother (& older children) will go down to the valley to work in the fields, or the trees, or in the house’s vegetable garden (each house had one). Going up the mountain is steep, and it takes about an hour to reach the top (they could climb up real fast! — my grandfather was impossibly fast up to the age of 80). The village itself is usually situated in the middle of the mountain, so it takes the same time to either go down to the valley and back up again, or up to the top and down again.

– Midday
Father would go from pasture to pasture with the animals, and at around midday, the goats/sheep will find some shade and sit around at the hottest time of the day. He would eat largely the same thing he ate for breakfast: eggs, feta cheese, olives, bread. After he ate, he'd sleep for 1 sleep cycle (1.5 hours) under a tree. The dogs would take care of any potential wolf problems.

The mother in the fields will do the same: eat and sleep, and then restart work. Here's a picture from National Geographic of the 1940s Thessaloniki wheat fields, with workers eating lunch:

The younger kids would finish school by 1:30 PM and come back home. They'd eat a light lunch at school (it used to be free up until the early '80s), then come back home, eat some more, and start working around the house: do the laundry by hand, prepare dinner, make some yogurt from scratch, bake bread if required (usually they'd bake bread twice a week), do some homework if they have time, etc.

– Afternoon, after 6 PM in the summer, 3 PM in the winter
Father and mother would start to come back home. If it's the season and there are young goats/sheep, one of the kids will have to take these out of the stable (usually up to 50 younglings) and go with them to a nearby pasture so they can eat. Young animals can't make the trip yet with their parents all the way to the top of the mountain, so they get limited pasture time, nearby only. While at the pasture, the kids will also gather wild vegetables, including dandelion greens, purslane, mustard greens, chicory in the winter, asparagus in April, and amaranth greens in August (in Greece, we never eat the quinoa-like amaranth seeds, we only eat the greens, and ONLY before the plant has flowered/seeded!). In even older days, they would also search for wild parsnips and other types of veggies from the wild (e.g. centaurea, goosefoot *greens* — which are nothing but a European version of quinoa — nettles, etc.), but from the 1950s on, when cans/pasta/flour became more available, these stopped getting picked.

Upon coming home, some food is given to the chickens, and then they are locked up in their little house.

– Evening
The animals are now in the stable. It’s time for milking (in the near-dark, no less). Mother & father will go through the female goats/sheep one by one, while leaving some milk for their babies too. The kids will help by allowing the animals to pass through one by one, so they can be milked.

Then, it’s dinner time. The biggest meal of the day.

It's usually greens year-round with bread. There are garden veggies & potatoes in the hot months, and (pre-soaked) beans in the winter. Fruits when in season only. Honey a few times a year. In general, all grains & dairy products that were consumed were well-fermented. Some would drink raw milk directly from the animals, but this stopped after the 1960s, because that's when their animals would get mysteriously sick (even though antibiotic shots weren't required by law before ~1975 — maybe pollution was catching up from the rest of the world in the '60s in these rural places?).

There would be fish, crawfish, or eels twice a week, either from the nearby river or from salesmen from the nearby sea towns who came with their donkeys once a week (salted fish and shellfish in that case). Sea-town people would eat fresh fish from the sea 3-4 times a week instead.

There might be chicken (from their own chickens) only once or twice a month or so. BTW, look below at how a TRUE pastured chicken from my grandmother looks — it looks like duck meat!


When you cook it, the bones are so incredibly white because they have so much calcium! And the meat looks, and tastes, like red meat!


There would be red meat, but only once a month. That would be mostly goat, sheep a bit less, pork less often than sheep, and beef very rarely. Because they didn't have fridges, when someone in the village would slaughter an animal, they would share it with other families so it didn't go bad. When the other families would slaughter one of their own animals, they would share back. In village/religious festivities there would be a bit more meat going around too (usually boiled goat, or lamb on a spit for Easter day). The whole animal was eaten, head to toe. Most of you are aware of liver, heart, kidneys, brains and tongue, but that's nothing compared to how we eat these animals: we'd also eat the stomach & intestines (in an incredibly good soup called patsas), the spleen (which tastes like something between liver and boiled oysters), thyroid glands, eyes, testicles, and lungs (I'd say "mushy" lungs are the least yummy part of the offal, with spleen being the yummiest for me). Occasionally, in the winter when they had time away from the fields, they might catch a hare or a small bird with traps. Greece used to have deer, wild boar, pheasants, and many more hares, but these are now mostly gone (over-hunted).

Nuts & seeds were eaten periodically, but not religiously.

Everything was cooked with olive oil, or butter (which was white btw, not yellow).

After dinner, they'd throw scraps to the dogs (and some to the outdoor cats), and then everyone would go to sleep. Dogs sleep in the stable with the animals (goats/sheep, often donkeys too), chickens in the coop, and cats, only god knows where. And the day starts anew the next day, even on Sundays (only people who had older children to take care of business had time to go to church). The animals need to eat every day, you see. There was no such thing as a "day off". If you had to leave for a few days (e.g. to visit a doctor in a town), you'd have to ask others in the village to take care of your animals, water the vegetable garden, feed the kids, etc.

But fear not, they did have fun, daily. It’s called gossip.

Now, here’s the twist!

Greeks are/were religious. The Greek Orthodox fasting rules were observed by all. Fasting in Greek Orthodoxy (and in the old Catholic church) does not mean "intermittent fasting" (IF). It means: no animal products (except shellfish, which were allowed because they contain no blood — although most people would not eat them anyway). Every Monday, Wednesday and Friday, the most religious would be vegan (mostly women; men would remain vegetarian).

However, the whole family would fast without exception before the three biggest religious celebrations of the year: Christmas, Easter, and the Assumption of Mary (August 15th). This means that people would go vegan for 40 days before Christmas, 40 days before Easter, and 15 days before the Assumption of Mary. In other words, for 26% of the year, everyone was vegan. For the women who were also fasting weekly or periodically, that goes to over 40% of the time. And let's not forget that when they were NOT fasting, they were mostly vegetarian anyway. This means that these people were vegan ~30% of the time (as an average), vegetarian 55% of the time, vegetarian/pescetarian 10%, and land-meat eaters only about 5% of the time.

The only Greek people eating a bit more meat (mostly in the form of fish instead of land meat) were the more affluent people in cities.

Most interestingly, in the final week before Easter, olive oil & butter were not consumed either (which means no bread either, since it requires some oil in the recipe). They’d basically just eat veggies (often raw, with raw garlic), fruits, and soaked beans cooked in plain water & sea salt. I guess that part of the fasting is the closest they ever got to raw veganism (minus the cooked beans).

Q: But why was this lifestyle healthier in the Mediterranean than other parts of the world?

A: The lifestyle mentioned above was NOT unique to Mediterranean people. But it proved to be healthier there for various reasons: lots and lots of D3 due to being a sunny place, with both seafood and land food in good balance. Civilization thrived there from ancient times simply because the geography, food, and climate all helped out. Not to mention that because of the closed sea and proximity to both Asia and Africa, merchants could bring over fruits or foods not available directly to their region (something not as easily done in, let's say, Northern Europe — the fruits would spoil before they could reach these countries). The only other place where people lived a similar lifestyle with plenty of field work, D3 and circadian rhythms, and ate similarly balanced foods, was Okinawa. And we already know how well these people did before the Western diet caught up with them too.

Conclusion:
If you want to get the Mediterranean Diet effect, then you need to change everything about your life. It requires huge changes to how you sleep and work, being out in nature all day long working on your own garden and animals, and staying away from pollution/cellphones, etc. It's not just the diet.

I'd go out on a limb here and say that if you can't do everything as well as they did, but you want to come close to their results, you might get some extra push if you ditch grains (except some rice), particularly modern wheat. Their (low-gluten, old-variety, and very fermented) wheat had nothing to do with modern wheat and flours.


published by noreply@blogger.com (Peter Hankiewicz) on 2016-07-07 23:30:00 in the "crawler" category

Introduction

There is a lot of data flowing everywhere: unstructured, not particularly useful pieces of data moving here and there. Getting this data, then structuring and processing it, can be really expensive. There are companies making billions of dollars just (huh?) for scraping web content and presenting it in a nice form.

Another reason for doing this can be, for example, the lack of an API on a source website. In that case, scraping is the only way to get the data that you need to process.

Today I will show you how to get web data using PHP and that it can be as easy as pie.

Just do it

There are multiple scraping scripts ready to use. I can recommend one of them: PHP Simple HTML DOM Parser. It's extremely easy to start with, the initial cost is almost nothing, and it's open source too.

First, download the library from its official site: http://sourceforge.net/project/showfiles.php?group_id=218559. You can use a Composer version too; it's here: https://github.com/sunra/php-simple-html-dom-parser.

Let's say that you have downloaded this file already. It's just one PHP file called simple_html_dom.php. Create a new PHP file called scraper.php and include the library like this:

<?php

require('simple_html_dom.php');

In our example, we will scrape the top 10 trending YouTube videos and create a nice array of links and names out of it. We will use this link: https://www.youtube.com/feed/trending?gl=GB.

We need to grab this page first. Using PHP, it's just one additional line in our script:

<?php

require('simple_html_dom.php');

// Create DOM from URL or file
$html = file_get_html('https://www.youtube.com/feed/trending?gl=GB');

A PHP object was just created with the YouTube page structure.

Look at the YouTube page structure to find a repeating structure for the list of videos. It's best to use Chrome developer tools and its HTML inspector. At the time of writing this post (it can change in the future, of course) it's:

<ul class="expanded-shelf-content-list has-multiple-items">
 <li class="expanded-shelf-content-item-wrapper">...</li>
 <li class="expanded-shelf-content-item-wrapper">...</li>
 <li class="expanded-shelf-content-item-wrapper">...</li>
 ...
</ul>

Thanks, Google! This time it will be easy. Sometimes the structure of a page lacks classes and ids, and it's more difficult to select exactly what we need.

Now, for each expanded-shelf-content-item-wrapper item we need to find its title and URL. Using developer tools again, it's easy to achieve:

<a 
 class="yt-uix-sessionlink yt-uix-tile-link yt-ui-ellipsis yt-ui-ellipsis-2 spf-link " 
 dir="ltr" 
 aria-describedby="description-id-284683" 
 title="KeemStar Swatted My Friend." 
 href="/watch?v=oChvoP8zEBw">
 KeemStar Swatted My Friend
</a>

Jackpot! We have both things that we need in the same HTML tag. Now, let's grab this data:

<?php

require('simple_html_dom.php');

// Create DOM from URL or file
$html = file_get_html('https://www.youtube.com/feed/trending');

// creating an array of elements
$videos = [];

// Find top ten videos
$i = 1;
foreach ($html->find('li.expanded-shelf-content-item-wrapper') as $video) {
        if ($i > 10) {
                break;
        }

        // Find item link element 
        $videoDetails = $video->find('a.yt-uix-tile-link', 0);

        // get title attribute
        $videoTitle = $videoDetails->title;

        // get href attribute
        $videoUrl = 'https://youtube.com' . $videoDetails->href;

        // push to a list of videos
        $videos[] = [
                'title' => $videoTitle,
                'url' => $videoUrl
        ];

        $i++;
}

var_dump($videos);

Look, it's as simple as using CSS. What did we just do? First, we extracted all the videos and started looping through them here:

foreach ($html->find('li.expanded-shelf-content-item-wrapper') as $video) {

Then we extracted a title and URL for each video item here:

// Find item link element 
$videoDetails = $video->find('a.yt-uix-tile-link', 0);

// get title attribute
$videoTitle = $videoDetails->title;

At the end, we push an array with the scraped data onto the list of videos and dump it. The result looks like this:

array(10) {
  [0]=>
  array(2) {
    ["title"]=>
    string(90) "Enzo Amore & Big Cass help John Cena even the odds against The Club: Raw, July 4, 2016"
    ["url"]=>
    string(39) "https://youtube.com/watch?v=940-maoRY3c"
  }
  [1]=>
  array(2) {
    ["title"]=>
    string(77) "Loose Women Reveal Sex Toys Confessions In Hilarious Discussion | Loose Women"
    ["url"]=>
    string(39) "https://youtube.com/watch?v=Xxzy_bZwNcI"
  }
  [2]=>
  array(2) {
    ["title"]=>
    string(51) "Tinie Tempah - Mamacita ft. Wizkid (Official Video)"
    ["url"]=>
    string(39) "https://youtube.com/watch?v=J4GQxzUdZNo"
  }
  [3]=>
  array(2) {
    ["title"]=>
    string(54) "Michael Gove's Shows you What's Under his Kilt"
    ["url"]=>
    string(39) "https://youtube.com/watch?v=GIpVBLDky30"
  }
  [4]=>
  array(2) {
    ["title"]=>
    string(25) "Deception, Lies, and CSGO"
    ["url"]=>
    string(39) "https://youtube.com/watch?v=_8fU2QG-lV0"
  }
  [5]=>
  array(2) {
    ["title"]=>
    string(68) "Last Week Tonight with John Oliver: Independence Day (Web Exclusive)"
    ["url"]=>
    string(39) "https://youtube.com/watch?v=IQwMCQFgQgo"
  }
  [6]=>
  array(2) {
    ["title"]=>
    string(21) "Last Week I Ate A Pug"
    ["url"]=>
    string(39) "https://youtube.com/watch?v=TTk5uQL2oO8"
  }
  [7]=>
  array(2) {
    ["title"]=>
    string(59) "PEP GUARDIOLA VS NOEL GALLAGHER | Exclusive First Interview"
    ["url"]=>
    string(39) "https://youtube.com/watch?v=ZWE8qkmhGmc"
  }
  [8]=>
  array(2) {
    ["title"]=>
    string(78) "Skins, lies and videotape - Enough of these dishonest hacks. [strong language]"
    ["url"]=>
    string(39) "https://youtube.com/watch?v=8z_VY8KZpMU"
  }
  [9]=>
  array(2) {
    ["title"]=>
    string(62) "We Are America ft. John Cena | Love Has No Labels | Ad Council"
    ["url"]=>
    string(39) "https://youtube.com/watch?v=0MdK8hBkR3s"
  }
}

Isn't it easy?

The end

I have some advice if you want this kind of script to process the same page repeatedly (a sketch of the first two points follows the list):

  • set the user agent header to simulate a real web browser request,
  • make calls with a random delay to avoid blacklisting from a web server,
  • use PHP 7,
  • try to optimize the script as much as possible.
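
Here is one possible sketch of the first two points. It uses plain file_get_contents() with a stream context plus the library's str_get_html() helper; the user agent string and the delay range are only examples:

<?php

require('simple_html_dom.php');

// Pretend to be a regular desktop browser (example user agent string only)
$context = stream_context_create([
        'http' => [
                'header' => "User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64)\r\n",
        ],
]);

// Wait a random number of seconds between requests to avoid blacklisting
sleep(rand(2, 10));

// Fetch the raw HTML with the custom header, then parse it as before
$rawHtml = file_get_contents('https://www.youtube.com/feed/trending?gl=GB', false, $context);
$html = str_get_html($rawHtml);
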
You can use this script in production code but, to be honest, it's not the most optimal approach. If you are not satisfied, code it yourself :-).

Nice documentation is located here: http://simplehtmldom.sourceforge.net/


published by noreply@blogger.com (Josh Williams) on 2016-07-01 22:08:00 in the "database" category
I originally titled this: Inferring Record Timestamps by Analyzing PITR Streams for Transaction Commits and Cross-Referencing Tuple xmin Values. But that seemed a little long, though it does sum up the technique.

In other words, it's a way to approximate an updated_at timestamp column for your tables when you didn't have one in the first place.

PostgreSQL stores the timestamp of a transaction's commit into the transaction log. If you have a hot standby server, you can see the value for the most-recently-applied transaction as the output of the pg_last_xact_replay_timestamp() function. That's useful for estimating replication lag. But I hadn't seen any other uses for it, at least until I came up with the hypothesis that all the available values could be extracted wholesale, and matched with the transaction IDs stored along with every record.
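
For example, on a hot standby you can estimate that lag with something like:

postgres=# SELECT now() - pg_last_xact_replay_timestamp() AS replication_lag;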

If you're on 9.5, there's track_commit_timestamps in postgresql.conf, which, combined with the pg_xact_commit_timestamp(xid) function, has a similar result. But it can't be turned on retroactively.

This can -- sort of. So long as you have those transaction logs, at least. If you're doing Point-In-Time Recovery you're likely to at least have some of them, especially more recent ones.

I tested this technique on a pgbench database on stock PostgreSQL 9.4, apart from the following postgresql.conf settings that (sort of) turn on WAL archival -- or at least make sure the WAL segments are kept around:

wal_level = archive
archive_mode = on
archive_command = '/bin/false'

We'll be using the pg_xlogdump binary to parse those WAL segments, available from 9.3 on. If you're on an earlier version, the older xlogdump code will work.

Once pgbench has generated some traffic, it's time to see what's contained in the WAL segments we have available. Since I have them all, I went all the way back to the beginning.

$ pg_xlogdump -p pg_xlog/ --start 0/01000000 --rmgr=Transaction
rmgr: Transaction len (rec/tot):     12/    44, tx:          3, lsn: 0/01006A58, prev 0/01005CA8, bkp: 0000, desc: commit: 2016-05-15 22:32:32.593404 EDT
rmgr: Transaction len (rec/tot):     12/    44, tx:          4, lsn: 0/01008BC8, prev 0/01008A60, bkp: 0000, desc: commit: 2016-05-15 22:32:32.664374 EDT
rmgr: Transaction len (rec/tot):     12/    44, tx:          5, lsn: 0/01012EA8, prev 0/01012E58, bkp: 0000, desc: commit: 2016-05-15 22:32:32.668813 EDT
(snip)
rmgr: Transaction len (rec/tot):     12/    44, tx:       1746, lsn: 0/099502D0, prev 0/099501F0, bkp: 0000, desc: commit: 2016-05-15 22:55:12.711794 EDT
rmgr: Transaction len (rec/tot):     12/    44, tx:       1747, lsn: 0/09951530, prev 0/09951478, bkp: 0000, desc: commit: 2016-05-15 22:55:12.729122 EDT
rmgr: Transaction len (rec/tot):     12/    44, tx:       1748, lsn: 0/099518D0, prev 0/099517F0, bkp: 0000, desc: commit: 2016-05-15 22:55:12.740823 EDT
pg_xlogdump: FATAL:  error in WAL record at 0/99518D0: record with zero length at 0/9951900

The last line just indicates that we've hit the end of the transaction log records, and it's written to stderr, so it can be ignored. Otherwise, that output contains everything we need, we just need to shift around the components so we can read it back into Postgres. Something like this did the trick for me, and let me import it directly:

$ pg_xlogdump -p pg_xlog/ --start 0/01000000 --rmgr=Transaction | awk -v Q=\' '{sub(/;/, ""); print $8, Q$17, $18, $19Q}' > xids

postgres=# CREATE TABLE xids (xid xid, commit timestamptz);
CREATE TABLE
postgres=# \copy xids from xids csv
COPY 1746

At which point it's a simple join to pull in the commit timestamp records:

postgres=# select xmin, aid, commit from pgbench_accounts inner join xids on pgbench_accounts.xmin = xids.xid;
 xmin |  aid   |            commit             
------+--------+-------------------------------
  981 | 252710 | 2016-05-15 22:54:34.03147-04
 1719 | 273905 | 2016-05-15 22:54:35.622406-04
 1183 | 286611 | 2016-05-15 22:54:34.438701-04
 1227 | 322132 | 2016-05-15 22:54:34.529027-04
 1094 | 331525 | 2016-05-15 22:54:34.26477-04
 1615 | 383361 | 2016-05-15 22:54:35.423995-04
 1293 | 565018 | 2016-05-15 22:54:34.688494-04
 1166 | 615272 | 2016-05-15 22:54:34.40506-04
 1503 | 627740 | 2016-05-15 22:54:35.199251-04
 1205 | 663690 | 2016-05-15 22:54:34.481523-04
 1585 | 755566 | 2016-05-15 22:54:35.368891-04
 1131 | 766042 | 2016-05-15 22:54:34.33737-04
 1412 | 777969 | 2016-05-15 22:54:34.953517-04
 1292 | 818934 | 2016-05-15 22:54:34.686746-04
 1309 | 940951 | 2016-05-15 22:54:34.72493-04
 1561 | 949802 | 2016-05-15 22:54:35.320229-04
 1522 | 968516 | 2016-05-15 22:54:35.246654-04

published by noreply@blogger.com (Jeff Boes) on 2016-06-28 13:00:00 in the "human vacation balance" category

Recently I returned from the longest (8 workdays) vacation I have ever taken from this job (almost 11 years). I made an interesting discovery which I'm happy to share with you:

Life goes on without you.

I spent most of the time aboard a small cruise ship touring Alaska's Inside Passage and Glacier Bay. During almost all of that, I was out of cell phone range (and, unlike a lot of my colleagues, I don't carry a smart phone but just a dumb $5 flip phone). With no wifi on the ship, I was completely cut off from the Internet for the longest stretch in at least the last 15 years – maybe the longest since I first got Internet at my house back in the mid-90s.

Life (on the Internet) goes on without you.

Facebook posts get posted, liked, commented on. Tweets happen, get re-tweeted. Emails are sent (for those that still use it, anyway). And life goes on.

I can't say I came back from vacation recharged in some stupendous way, but I think I'm better off than if I'd taken a shorter vacation in an Internet-connected location, checking up on the virtual world before breakfast and bedtime every day.

So take vacations, and take meaningful ones – disconnect from work. Don't worry about what's piling up, or what's happening in the online communities in which you participate. If you're going away for a week, really go away and leave work behind. If a crisis arises, make sure someone else is equipped to at least try to handle it, but don't go into your vacation planning for it to be interrupted.

If you can't do that, you should start preparing for it anyway. Train someone to be able to jump in and do your job (inefficiently, sure, but that's far better than "not at all"). Because quite frankly, if you can't cut the ties that bind you to that mission-critical production system on a voluntary, scheduled basis, then you are far too vulnerable to the random interruptions of life (car accident, death in the family, lengthy power failure for us telecommuters) that will come (oh, they will come).


published by Eugenia on 2016-06-18 00:17:34 in the "Hardware" category
Eugenia Loli-Queru

As you may already know, I have the most interesting dreams, hehe…

Apart from seeing weird alien entities during my nap time, I was also shown how the CubeSat idea can be properly commercialized. Beat that, MIT (or DMT).

So basically, I was shown a bunch of 3U CubeSats (around 10 or 12 of them), held together by some sort of string, forming a web. At the edges of the web, there were semi-large solar panels and antennas, while in the middle of the web, there was a propulsion engine, not larger than a 3U CubeSat itself.

Right now, all CubeSats are released into the wild on their own, with no propulsion (sometimes they end up facing the wrong way), terrible power abilities, and even more terrible communication (FM among others!!!). These satellites usually die within 3-5 months, quickly burning up in the atmosphere. On top of that, they usually get released as secondary payload in LEO, while CubeSats would benefit from a higher SSO orbit.

Here’s the business idea behind of what I saw:

– You let customers buy one of the CubeSats and customize it from an array of the most popular components (third-party components that pass evaluation can be accepted — that costs extra).

– The CubeSats run Android, so writing drivers for them, updating them over the air, or even completely wiping them back to their default state is all possible. Each of the 12 CubeSats runs a slightly different version of the OS, and has different hardware — depending on the customer's needs.

– The customer can access their CubeSat via a secure backend on the manufacturer's web site. Even firmware updates can be performed, not just plain software updates or data downlinks.

– Because of the shared propulsion, the constellation web can be in SSO for up to 5 years.

– 1 year of backend support is included in the overall price, but after that time, owners can continue using it for an additional fee, or lease or sell the rights to their CubeSat to another commercial entity, getting back some of that invested value.

– Even if 1 CubeSat goes bad, the others continue to work, since they're independent of each other. A triple-redundancy system guards against shorting. To avoid over-usage of power due to faulty hardware or software (which could run down the whole system), a pre-agreed amount is allocated to each CubeSat daily.

– Eventually, a more complex system could be developed, under agreement with all the responsible parties, to have CubeSats share information with their neighbor CubeSats (either over an internal wired network or Bluetooth — whatever proves more secure and fast). For example, if one CubeSat in the web has a hardware ability that the others don't, and one of the other CubeSats needs it, it could ask for that service — for the right price.

– Instead of dispensing the CubeSats one by one, the web is a single machine, about 2/3s the size of a dishwasher. The CubeSat specification allows very specific weights, so overall, while the volume is medium-sized, the total weight doesn't have to be more than 100 kg. That easily fits in the payload of small, inexpensive rockets, like the upcoming RocketLab Electron, which costs just $4.9 million per launch. Falcon 9 would become cheaper only if it could launch 13 of these webs at once. While it can very easily lift their weight, it might not have the volume required (the Falcon 9 fairing is rather small at 3.2m).

– This comes to about $600,000 per CubeSat overall (with a rather normal configuration).

The current 3U CubeSats cost anywhere between $20k and $50k to make, plus another $200k or so to launch. Overall, sure, $600k is more than the current going price, but with the web idea you get enough power, communication that doesn’t suck, propulsion, and an extended life — plus the prospect of actually making money out of them by leasing them or selling them. A lot of the revenue will come after the launch, as a service/marketplace business.

In a sense, this business idea is the equivalent of a shared hosting service, which has revolutionized the way servers work and has democratized people's ability to run code or servers online. PlanetLabs is doing something similar by leasing "time" on their CubeSats, but by releasing them one by one, they run into the shortcomings stated above.

For all of this to become true, the CubeSats themselves would need an overhaul of how customizable their modularity is, and easy access to the latest mobile hardware. Overall, we're probably 2-3 years away from such an idea even getting started, and possibly 5 years away from it becoming reality. I haven't seen anyone else suggest it, so here I am. Thank my weird dreams.


published by noreply@blogger.com (Greg Sabino Mullane) on 2016-06-13 20:47:00 in the "postgres" category

(A Unicode rabbit face 🐰 will never be as cute
as this real bunny. Photo by Wade Simmons)

One of our clients recently reached out to us for help in upgrading their Postgres database. The use of the pg_upgrade program was not an option, primarily because the client was also taking the opportunity to change from their SQL_ASCII encoding to UTF-8. (If any of your databases, gentle reader, are still SQL_ASCII, please do the same!). Naturally, we also took advantage of the lack of pg_upgrade to enable the use of data checksums, another action we highly recommend. Although there were plenty of wrinkles, and stories to be told about this migration/upgrade, I wanted to focus on one particular problem we had: how to detect if a table has changed.

We needed to know if any applications were modifying certain tables because the speed of the migration was very important. If we could assert that no changes were made, there were some shortcuts available that would greatly speed things up. Initial testing showed that the migration was taking over eight hours, a time unacceptable to the client (no worries, we eventually reduced the time to under an hour!).

Looking closer, we found that over half that time was spent converting a single small (50MB) table from SQL_ASCII to UTF-8. How this conversion was performed is a story for another day, but suffice to say the table had some really, really messy bytes inside of it; the conversion program had to struggle mightily. When you are converting a database to a new encoding, it is imperative to examine every byte and make sure it gets changed to a format that Postgres will accept as valid UTF-8, or the entire table import will fail with an error similar to this:

ERROR:  invalid byte sequence for encoding "UTF8": 0xf4 0xa5 0xa3 0xa5

Looking closer at the data in the table showed that it might - just might! - be a historical table. In other words, it no longer receives updates, just selects. We really wanted this to be true, for it meant we could dump the whole table, convert it, and simply load the converted table into the new database (which took only a few seconds!). First, however, we had to confirm that the table was not changing.

Detecting changes may be done in several ways. For all of them, you can never prove that the table will not change at some point in the future, but you can prove that it has not changed over a certain period of time. How you go about doing that depends on what kind of access you have. If you do not have super-user access, you could add a simple trigger to the table that updates another table when an update, insert, or delete is performed. Then, checking in on the second table will indicate if any changes have been made.
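
A minimal sketch of that trigger approach might look like this (the table and object names here are made up for illustration):

CREATE TABLE mytable_changes (changed_at timestamptz DEFAULT now());

CREATE FUNCTION note_mytable_change() RETURNS trigger LANGUAGE plpgsql AS $$
BEGIN
  -- We only care that *something* changed, not what
  INSERT INTO mytable_changes DEFAULT VALUES;
  RETURN NULL;
END;
$$;

CREATE TRIGGER mytable_changed
  AFTER INSERT OR UPDATE OR DELETE ON mytable
  FOR EACH STATEMENT EXECUTE PROCEDURE note_mytable_change();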

A better solution is to simply look at the underlying file that makes up the table. To do this, you need to be a Postgres superuser or have access to the underlying operating system. Basically, we will trust the operating system's information on when the table was last changed to determine if the table itself has changed. Although not foolproof, it is an excellent solution. Let's illustrate it here. First: create a test table and add some rows:

$ psql
greg=# CREATE TABLE catbox AS SELECT 8675309::INT AS id FROM generate_series(1,1000);
SELECT 1000

Now we can use the pg_stat_file() function, which returns some basic information about a file on disk. With the help of the pg_relation_filepath() function, we can see when the table was last modified:

greg=# select * from pg_stat_file( pg_relation_filepath('catbox') ) \x\g
Expanded display is on.
-[ RECORD 1 ]+-----------------------
size         | 40960
access       | 2015-11-08 22:36:00-04
modification | 2015-11-08 22:36:00-04
change       | 2015-11-08 22:36:00-04
creation     | 
isdir        | f

Next we will revisit the table after some time (e.g. 24 hours) and see if the "modification" timestamp is the same. If it is, then the table itself has not been modified. Unfortunately, a false positive is possible due to VACUUM, which may change things on disk but does NOT change the data itself. (A regular VACUUM *may* modify the file, and a VACUUM FULL *always* modifies it).

greg=# select * from pg_stat_file( pg_relation_filepath('catbox') ) \x\g

-[ RECORD 1 ]+-----------------------
size         | 40960
access       | 2015-11-08 22:36:00-04
modification | 2015-11-08 22:36:00-04
change       | 2015-11-08 22:36:00-04
creation     | 
isdir        | f


greg=# vacuum catbox;
VACUUM

greg=# select * from pg_stat_file( pg_relation_filepath('catbox') );

-[ RECORD 1 ]+-----------------------
size         | 40960
access       | 2015-11-08 22:36:00-04
modification | 2015-11-08 22:40:14-04
change       | 2015-11-08 22:40:14-04
creation     | 
isdir        | f

A second (and more reliable) method is to generate a checksum of the entire table. This is a fairly straightforward approach; just pipe the output of pg_dump to a checksum program:

$ pg_dump -t catbox --data-only | sha1sum
6f724565656f455072736e44646c207472536e61  -

The advantage here is that even a VACUUM FULL will not change the checksum. However, because pg_dump does not use an ORDER BY when dumping the table, it is possible for the rows to be returned in a different order. To work around that, issue a VACUUM FULL yourself before taking the checksum. As before, come back later (e.g. 24 hours) and re-run the command. If the checksums match, then the table has not changed (and is probably no longer updated by the application). Using this method, we were able to verify that the large, SQL_ASCII byte-soup table was indeed not being updated, and thus we took it out of the direct migration.
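
Putting those two steps together (using the test table from above), the whole check is only a couple of commands:

## Make the on-disk row order deterministic, then checksum.
## Repeat both commands after e.g. 24 hours and compare the checksums:
$ psql -c "VACUUM FULL catbox"
$ pg_dump -t catbox --data-only | sha1sum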

Of course, that table still needed to be part of the new database, but we simply dumped the table, ran the conversion program on it, and (four hours later) had a complete dump of the table that loaded extremely fast into the new database.

That solved only one of the problems, however; another table was also slowing down the migration. Although it did not have the SQL_ASCII conversion issue, it was big, and accounted for a large percentage of the remaining migration time. A quick look at this table showed it had a "creation_time" column as well as a SERIAL primary key, and was obviously being updated quite often. Closer examination showed that it might be an append-only table, such that older rows were never updated. This called for a similar approach: could we prove that a large chunk of the table was not changing? If we could, we could pre-populate the new database and copy over only the most recent rows during the migration, saving a good bit of time.

The previous tricks would not work for this situation, because the underlying file would change constantly as seen by pg_stat_file(), and a pg_dump checksum would change on every insert. We needed to analyze a slice of the table - in this particular case, we wanted to checksum all rows except those created in the last week. As a primary key lookup is very fast, we used the "creation_time" column to find an approximate primary key cutoff. Then it was simply a matter of feeding all the rows below that cutoff into the sha1sum program:

greg=# CREATE TABLE catbox2 (id SERIAL PRIMARY KEY, creation_time TIMESTAMPTZ);
CREATE TABLE
greg=# INSERT INTO catbox2(creation_time) select now() - '1 year'::interval + (x* '1 hour'::interval) from generate_series(1,24*365) x;
INSERT 0 8760

greg=# select * from catbox2 where creation_time > now()-'1 week'::interval order by 1 limit 1;
  id  |         creation_time         
------+-------------------------------
 8617 | 2016-06-11 10:51:00.101971-08

$ psql -Atc "select * from catbox2 where id < 8617 order by 1" | sha1sum
456272656d65486e6f203139353120506173733f  -

## Add some rows to emulate the append-only nature of this table:
greg=# insert into catbox2(creation_time) select now() from generate_series(1,1000);
INSERT 0 1000

## Checksums should still be identical:
$ psql -Atc "select * from catbox2 where id < 8617 order by 1" | sha1sum
456272656d65486e6f203139353120506173733f  -

Despite the large size of this table (around 10 GB), this command did not take that long to run. A week later, we ran the same commands, and got the same checksum! Thus, we were able to prove that the table was mostly append-only - or at least enough for our use case. We copied over the "old" rows, then copied over the rest of the rows during the critical production migration window.
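
The copy itself can be sketched as a pair of COPY commands split at the same cutoff id. This is only an outline of the idea; the database names are placeholders, the target table is assumed to already exist (empty) in the new database, and details such as sequence values are omitted:

## Ahead of time: pre-copy the stable rows (cutoff id 8617 found above):
$ psql olddb -c "COPY (SELECT * FROM catbox2 WHERE id < 8617 ORDER BY id) TO STDOUT" \
    | psql newdb -c "COPY catbox2 FROM STDIN"

## During the final migration window: copy only the recent rows:
$ psql olddb -c "COPY (SELECT * FROM catbox2 WHERE id >= 8617 ORDER BY id) TO STDOUT" \
    | psql newdb -c "COPY catbox2 FROM STDIN"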

In the future, this client will be able to take advantage of pg_upgrade, but getting to UTF-8 and data checksums was absolutely worth the high one-time cost. There were several other tricks used to speed up the final migration, but being able to remove the UTF-8 conversion of the first table, and being able to pre-copy 99% of the second table, accounted for the lion's share of the final speed improvements.


published by noreply@blogger.com (Dave Jenkins) on 2016-06-10 20:54:00 in the "Korea" category
image courtesy of retaildesignblog
End Point and AZero, a South Korean system integrator, have partnered to deploy a 10-screen Liquid Galaxy to the newly-opened Hyundai Card Travel Library in Seoul, South Korea. This project presented a number of unique challenges for our teams, but we have launched successfully, to the great satisfaction of AZero's client.

The Hyundai Card Travel Library is an incredible space: wall-to-wall maple bookshelves hold travel brochures, photo books, and other travel-related material. The Liquid Galaxy display itself sits in a small alcove off the main library space. Because the alcove is fully enclosed, viewers can control the lighting and get a fully immersive experience through the two rows of five 47" screens arrayed in a wall-mounted semi-circular arc. The viewer can control the screens via the podium-mounted touchscreen and SpaceNav mouse controller.

We solved several technical challenges for this deployment: the extremely tight space made cabling and display configuration tricky, and this isn't a "standard" 7-screen, single-row deployment, but rather two rows of 5 screens each. Working with AZero, End Point reconfigured the Liquid Galaxy display configurations to account for this unique layout. NOTE: the Liquid Galaxy scales quite easily, and can be arrayed in any number of configurations. Other recent deployments include a 40-screen control room, with 10 columns of 4 screens each!

Intended as a travel planning platform, Hyundai provided a number of set tours to showcase on the Liquid Galaxy, such as "The Orient Express", "Rio!", or "European Capitals". Each tour shows an overview map as a graphic overlay, while the Liquid Galaxy hops from each destination on the route to the next within Google Earth. At each location, the potential traveler can drop into Google Street View and see the fully panoramic images of street scenes in Paris or the top of Sugarloaf Mountain in Brazil. This should allow potential travelers to virtually experience the tour locations, and make more informed decisions about which trip might suit their tastes. Beyond that, it's a pretty cool way to view the planet inside a travel library.

published by noreply@blogger.com (Richard Peltzman) on 2016-06-10 18:26:00 in the "Business" category
Wall Street. Back where it all started for me some 35 years ago. Only instead of being an employee of Oppenheimer and Company, I had the experience and honor of representing End Point as one of 7 companies chosen to "ring the bell" at the New York Stock Exchange, chiming in Small Business Week for JPMorgan Chase. (They work with 4,000,000 small companies.)

The morning started early by going through security checks rivaling the airport, except I didn't have to take my shoes off. After getting my nifty credential, we went back to the street, where the President of the NYSE gave a welcoming speech, pointing out the buildings still standing from when Hamilton, Monroe, and Aaron Burr all started their own banks, as well as where George Washington was sworn in.

All this while also getting free coffee from the main small business honoree, Gregory's Coffee, and picture-taking from the business paparazzi!

We then went inside the storied NYSE building and made our way to the trading floor. The surroundings were immensely impressive, as the Stock Exchange still inhabits a huge floor with gorgeous 40-foot ceilings high above, holding up all sorts of 21st-century technology and equipment. In the center of it all is CNBC's stage set, with the show Squawk Box airing live and the infamous Mad Money man himself, Jim Cramer, sitting in, talking about the impact the newly minted presumptive Republican candidate (to remain nameless) might have on the markets.

At about 9:15, the 7 small business owners and the head of marketing and small market banking for Chase made our way to the real stage, a perch overlooking the entire floor where the actual apparatus for the bell ringing takes place. In front of four live cameras, we waited for 9:30 to hit, and then the bell was rung... rather, the button was pressed to start the day's trading. (Sorry, they use a button now, not an actual clapper, to ring the bell.) Aired live, we shook hands, smiled for a national audience, passed around compliments, and enjoyed the moment.

A few moments later we went back down to the floor, where we were able to float around and I could watch the operation of what is still the heart of the financial world come to life around me.

The Exchange has changed in one important way in all the years since my days there: instead of thousands of crazed floor traders frantically buying and selling millions of shares using ticker tape and hand signals, there were maybe a couple of hundred standing by their booths, watching millions of shares being exchanged electronically. Looking almost bored, they calmly talked with each other while sipping more of Gregory's coffee.

I'll dispense with the rest of the details of the day and the ensuing reception. Rather, I shall now explain the importance of the event as it relates to End Point, to our clients, and to my own history.

The first day I was on the job on Wall Street, the chairman of the company said something to me that guides how I run End Point to this day, and that I have repeated countless times to whoever will listen. He said, "Rick, to have a good restaurant, you have to have a great kitchen!" So, he had me learn all aspects of the company, from the back office, to accounting, to the fledgling world of computers, to the front office, and to how to work effectively with employees and customers alike.

End Point may be one of the smaller of the "small companies" chosen by Chase this day. But we were selected because we are a company that personifies good management, great engineering, unrelenting commitment to our customers, and great potential. Why? Because we believe in having a great kitchen! Our engineering staff is exemplary, our clients fully appreciate our partnerships with them, and we are doing all we can to be a model business.

While I may not have rung the actual bell this year, our banker and Chase have every confidence, as do I, that one day we will.



published by noreply@blogger.com (Matt Galvin) on 2016-06-02 18:00:00 in the "authorize.net" category

Authorize.net has disabled the RC4 cipher suite on their test server, and their production server update will follow soon. So, to ensure that your site (or your client's) does not experience any interruption in payment processing, it is wise to place a test order in the Authorize.net test environment.

The projects I was testing were all on the Spree Gem (2.1.x). The Spree Gem uses the ActiveMerchant Gem (in Spree 2.1.x it's ActiveMerchant version 1.34.x). Spree allows you to sign into the admin and select which server your Authorize.net payment method will hit: production or test. There is another option for selecting a "Test Mode" transaction. The difference between a test server transaction and a test mode transaction is explained quite succinctly in the Authorize.net documentation. To summarize: test server transactions are never sent to financial institutions for processing but are stored in Authorize.net (so you can see their details), while transactions in test mode are not stored and return a transaction ID of zero.

I wanted to use my Authorize.net test account to ensure my clients were ready for the RC4 cipher suite disablement. I ran across a few strange things. First, for three sites, no matter what I did, I kept getting errors saying my Authorize.net account was either inactive or I was providing the wrong credentials. I signed in to Authorize.net and verified my account was active. I triple-checked the credentials; they were right. So, I re-read the Spree docs, thinking that perhaps I needed to use a special word or format to actually use the test server ("test" versus "Test" or something like that).

Below is a screenshot of the test payment method I had created and was trying to use.

Since I kept getting errors I looked through the Spree code, then the ActiveMerchant Gem that Spree is using.

Below, you can see that ActiveMerchant decides which URL to use (test or live) based on the value of test?:

active_merchant/lib/active_merchant/billing/gateways/authorize_net.rb

require 'nokogiri'

module ActiveMerchant #:nodoc:
  module Billing #:nodoc:
    class AuthorizeNetGateway < Gateway
      include Empty

      self.test_url = 'https://apitest.authorize.net/xml/v1/request.api'
      self.live_url = 'https://api2.authorize.net/xml/v1/request.api'

.
.
.
      def url
        test? ? test_url : live_url
      end

How and where is this set? Spree passes the ActiveMerchant Gem some data which the ActiveMerchant Gem uses to create Response objects. Below is the code where ActiveMerchant handles this data.
active_merchant/lib/active_merchant/billing/response.rb


module ActiveMerchant #:nodoc:
  module Billing #:nodoc:
    class Error < ActiveMerchantError #:nodoc:
    end

    class Response
      attr_reader :params, :message, :test, :authorization, :avs_result, :cvv_result, :error_code, :emv_authorization
.
.
.
      def test?
        @test
      end
.
.
.
      def initialize(success, message, params = {}, options = {})
        @success, @message, @params = success, message, params.stringify_keys
        @test = options[:test] || false
        @authorization = options[:authorization]
        @fraud_review = options[:fraud_review]
        @error_code = options[:error_code]
        @emv_authorization = options[:emv_authorization]

        @avs_result = if options[:avs_result].kind_of?(AVSResult)
          options[:avs_result].to_hash
        else
          AVSResult.new(options[:avs_result]).to_hash
        end

        @cvv_result = if options[:cvv_result].kind_of?(CVVResult)
          options[:cvv_result].to_hash
        else
          CVVResult.new(options[:cvv_result]).to_hash
        end
      end
    end
active_merchant/lib/active_merchant/billing/gateway.rb
      # Are we running in test mode?
      def test?
        (@options.has_key?(:test) ? @options[:test] : Base.test?)
      end

Now that I was more familiar with ActiveMerchant, I wanted to verify that Spree was passing the data as intended.

I could see in spree/core/app/models/spree/gateway.rb that Spree was setting ActiveMerchant::Billing::Base.gateway_mode equal to the server param as a symbol. I verified it with some logging.

    def provider
      gateway_options = options
      gateway_options.delete :login if gateway_options.has_key?(:login) and gateway_options[:login].nil?
      if gateway_options[:server]
        ActiveMerchant::Billing::Base.gateway_mode = gateway_options[:server].to_sym
      end 
      @provider ||= provider_class.new(gateway_options)
    end 

At this point I was satisfied that Spree was sending a server param. I also knew Spree was setting ActiveMerchant's Base.gateway_mode as intended. I then reviewed active_merchant/lib/active_merchant/billing/gateway.rb once more:

      # Are we running in test mode?
      def test?
        (@options.has_key?(:test) ? @options[:test] : Base.test?)
      end
and active_merchant/lib/active_merchant/billing/base.rb
      def self.test?
        self.gateway_mode == :test
      end

So, that's it! We know from the exceptions I raised that Spree is sending a test key and a test_mode key. They seem to hold the same value under different keys (I'm guessing that's a mistake), and they both just indicate whether the test mode checkbox was checked in the Spree admin. However, Base.test? reflects the server selection and comes from whatever is entered in the server input box in the Spree admin. So, we just need to update the ternary operator to check whether either @options[:test] (test mode) or Base.test? (test server) is true.

Since this is Spree, I created a decorator to override the test? method.

app/models/gateway_decorator.rb

ActiveMerchant::Billing::Gateway.class_eval do
  def test?
    @options.has_key?(:test) && @options[:test] || ActiveMerchant::Billing::Base.test?
  end 
end

Lastly, I placed some test orders and it all worked as intended.

Summary

Authorize.net is disabling the RC4 cipher suite. If your site(s) uses it, your payment processing may be interrupted. Since the test environment has already been updated by Authorize.net, you can check whether your site(s) is compliant by posting test transactions to the test environment. If they work, then your site(s) should be compliant and ready when Authorize.net applies the changes to the production server.

Spree 2.1.x (and perhaps all other Spree versions) always sends the test key, so the ActiveMerchant Gem will always just use the boolean value of that key instead of ever checking what the server was set to. Further, this fix makes things a bit more robust, in my opinion, by checking whether test mode OR the test server was specified, rather than only checking which server (gateway_mode) was specified when the test key was absent.

Alternatively, you could probably make Spree pass the test key only when its value is true. Either way, if you are trying to send test orders to the test environment from a Spree site running one of the affected versions and have not implemented one of these changes, you will be unable to do so until you add a fix similar to the one described here. If you need any further assistance, please reach out to us at ask@endpoint.com.