
published by noreply@blogger.com (Peter Hankiewicz) on 2017-05-26 22:00:00 in the "backend" category

At End Point, we have had the pleasure of being part of multiple Drupal 6, 7, and 8 projects. Most of our clients wanted to use the latest Drupal version, to have a stable platform with long-term support.

A few years ago, I already had extensive experience with PHP itself and with various other PHP-based platforms like WordPress, Joomla, and TYPO3. I was happy to use all of them, but then one of our clients asked us for a simple Drupal 6 task. That's how I started my Drupal journey, which continues to this day.

To be honest, I had a difficult start: it was different, new, and pretty inscrutable to me. After a few days of reading documentation and playing with the system I was ready to do some simple work. Here, I want to share my thoughts about Drupal and tell you why I LOVE it.

Low learning curve

It took, of course, a few months until I was ready to build something more complex, but it really takes only a few days to be ready for simple development. It's not only about Drupal but also PHP: it makes a project much cheaper to maintain and extend. Maybe that's not so important with smaller projects, but it definitely is for massive code bases. Programmers can jump in and start being productive really quickly.

Great documentation

Drupal documentation is well structured and constantly developed; usually you can find what you need within a few minutes. Good documentation is critical, a must-have for any framework, and unfortunately not so common.

Big community

The Drupal community is one of the biggest IT communities I have ever encountered. They extend, fix, and document the Drupal core regularly. Most of them have other day jobs and work on this project just for fun and out of passion.

It's free

It's an open source project, and that's one of the biggest pros here. You can get it for free, you can get support for free, and you can join the community for free too. :)

Modules

On the official Drupal website you can find tons of free plugins/modules. It's a time and money saver: you don't need to reinvent the wheel for every new widget on your website, and you can focus on the fireworks instead.

Usually you can just go there and find a suitable component. E-commerce shop? Slideshow? Online classifieds website? No problem! It's all there.

PHP7 support

I often hear from other developers that PHP is slow. Well, it's not the Road Runner, but come on: unless you are Facebook (and I think that they, correct me if I'm wrong, still use PHP :)), it's just fine to use PHP.

Drupal fully supports PHP7.

With PHP7 it's much faster, better, and safer. To learn more: https://pages.zend.com/rs/zendtechnologies/images/PHP7-Performance%20Infographic.pdf.

The infographic shows that PHP7 is much faster than Ruby, Perl, and Python when rendering a Mandelbrot fractal. In general, you definitely can't say that PHP is slow, and the same goes for Drupal.

REST API support

Drupal has a built-in, ready-to-use API system. In a few moments you can spawn a new API endpoint for your application. You don't need to implement a whole API by yourself; I have done that a few times in multiple languages and, believe me, it's problematic.
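
For example, with Drupal 8's core RESTful Web Services module enabled and a GET resource configured for nodes (the site URL and node ID below are just hypothetical), you can grab a piece of content as JSON with a single request:

    # Hypothetical site; assumes the core REST module exposes nodes in the json format.
    curl -H 'Accept: application/json' 'https://example.com/node/1?_format=json'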

Perfect for a backend system

Drupal is a perfect candidate for a backend system. Let's imagine that you want to build a beautiful mobile application. You want to let editors and other people edit the content, and you want to grab that content through the API. It's easy as pie with Drupal.

Drupal's web interface is stable and easy to use.

Power of taxonomies

Taxonomies are, basically, just dictionaries. The best thing about taxonomies is that you don't need to touch code to work with them.

Let's say that you want to put a list of US states on your website. With most frameworks you need to ask a developer or other technical person to do so. With taxonomies you just need a few clicks and that's it: you can put it on your website. That's sweet, not only for non-technical people but for us developers as well. Again, you can focus on actually making the website attractive rather than spending time on things that can be automated.

Summary

Of course, Drupal is not perfect, but it's undeniably a great tool. Mobile application, single-page application, corporate website - there are no limits for this content management system. In my opinion it is the best tool to manage your content, and that does not mean you need to use Drupal to present it: you can create a mobile, React, Angular, or Vue.js application and combine it with Drupal easily.

I hope you've enjoyed the read and I'd love to hear back from you! Thanks.


published by noreply@blogger.com (Muhammad Najmi Ahmad Zabidi) on 2017-05-25 12:59:00 in the "Malaysia" category
The three-day Malaysia Open Source Conference (MOSC) ended last week. MOSC is an open source conference held annually, and this year it reached its 10-year anniversary. I attended the conference with a selective focus on system administration, computer security, and web application development presentations.

The First Day

The first day's talks were occupied by keynotes from the conference sponsors and major IT brands. After the opening speech and a lightning talk from the community, Mr Julian Gordon delivered a speech on the Hyperledger project, a blockchain-based ledger technology. Later Mr Sanjay spoke on open source adoption in the Malaysian financial sector. Before the lunch break we listened to Mr Jay Swaminathan from Microsoft, who presented Azure-based services for blockchain technology.




For the afternoon part of the first day I attended a talk by Mr Shak Hassan on Electron-based application development. You can read his slides here. I have personally used an Electron-based application for Zulip, so as a non-web developer I already had a mental picture of what Electron is prior to the talk, but the session taught me more about what happens in the background of such an application. Finally, before I went back for the day, I attended a slot delivered by Intel Corp on the Yocto Project, with which you can automate the process of creating a bootable Linux image for any platform, whether Intel x86/x86_64 or ARM based.



The Second Day

The second day of the conference started with a talk from Malaysia Digital Hub. The speaker, Diana, presented the state of Malaysian startups currently being shaped and assisted by Malaysia Digital Hub, as well as those that have already matured and are able to stand on their own. Later, a presenter from Google, Mr Dambo Ren, gave a talk on Google cloud projects.



He also pointed out several major services available on the cloud, for example TensorFlow. After that I chose the Scilab software slot. Dr Khatim, an academician, shared his experience using Scilab, an open source package similar to Matlab, in his research and with his students. Later I attended a talk titled "Electronic Document Management System with Open Source Tools".


Here two speakers from Cyber Security Malaysia (an agency within Malaysia's Ministry of Science and Technology) presented their studies of two open source document management systems, OpenDocMan and LogicalDoc. The evaluation criteria were ease of access, cost, centralized repository, disaster recovery, and security features. From their observations, LogicalDoc scored higher than OpenDocMan.

After that I attended a talk by Mr Kamarul on his experience using the R language and RStudio for medical research at his university. After the lunch break it was my turn to deliver a workshop. My talk targeted entry-level system administration: I shared my experiences using tmux/screen, git, AIDE to monitor file changes on our machines, and Ansible to automate common tasks as much as possible within the system administration context. I demonstrated the use of Ansible across multiple Linux distros - CentOS and Debian/Ubuntu - to show how Ansible handles a heterogeneous set of distributions when executing commands. Most of the material was presented live during the workshop, but I also created slides to help the audience and the public get the basic ideas of the tools I presented. You can read them here [PDF].
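
To give a flavor of the kind of automation I demonstrated (the inventory file name and target hosts here are hypothetical examples), a couple of Ansible ad-hoc commands already go a long way on a mixed CentOS/Debian fleet:

    # Check connectivity to every host in the (hypothetical) inventory.
    ansible all -i hosts.ini -m ping

    # Install AIDE everywhere; the generic "package" module picks yum or apt per distro.
    ansible all -i hosts.ini -m package -a "name=aide state=present" --become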


The Third Day (Finale)

On the third day I went to a workshop delivered by a speaker under the pseudonym Wak Arianto (not his real name). He explained Suricata, a tool whose pattern-matching syntax is almost identical to that of the well-known Snort IDS. Mr Wak covered OS fingerprinting concepts, flowbits, and how to create rules with Suricata. It was an interesting talk, as I could see how to quarantine suspicious files captured from the network (say, possible malware) to a sandbox for further analysis. As far as I understood from the demo and from my extra reading, flowbits is a rule option used to track the state of a session, and Suricata uses it primarily with TCP for detection. You can read an article about flowbits here. It is called flowbits because it sets and checks flags on TCP flows, so you can, for example, match on the state of a TCP session (such as whether it is established), based on the writings here.

I had a chance to listen to the FreeBSD developers' slot too. We were lucky to have Mr Martin Wilke, who lives in Malaysia and actively advocates FreeBSD to the local community. Together with Mr Muhammad Moinur Rahman, another FreeBSD developer, he presented the FreeBSD development ecosystem and the current state of the operating system.



Possibly the best was saved for last: I attended a Wi-Fi security workshop presented by Mr Matnet and Mr Jep (both pseudonyms). The workshop began with the theoretical foundations of wireless technology and then moved on to the development of encryption around it.



The outline of the talk can be found here. The speakers introduced the frame types of the 802.11 protocols: Control Frames, Data Frames, and Management Frames. Management Frames are unencrypted, so attack tools concentrate on this part.



Management Frames are susceptible to the following attacks:
  • Deauthentication Attacks
  • Beacon Injection Attacks
  • Karma/MANA Wifi Attacks
  • EvilTwin AP Attacks

    Matnet and Jep also showed a social engineering tool called "Wifiphisher", which, according to the developer's page on GitHub, is a "security tool that mounts automated victim-customized phishing attacks against WiFi clients in order to obtain credentials or infect the victims with malwares". It works together with the Evil Twin AP attack: after achieving a man-in-the-middle position, Wifiphisher redirects all HTTP requests to an attacker-controlled phishing page. Matnet told us the safest way to work in a WiFi environment is to use an 802.11w-capable device (which is yet to be widely found, at least in Malaysia). I found some information on 802.11w that could possibly help in understanding this protocol a bit here.

    Conclusion

    For me this is the most anticipated annual event, where I can meet professionals from different backgrounds and keep my knowledge up to date with the latest developments in open source tools in the industry. The organizers surely did a good job putting this event together, and I hope to attend again next year! Thank you for giving me the opportunity to speak at this conference (and for the nice swag too!)

    Apart from MOSC, I also plan to attend the annual Python conference (PyCon), which is going to be special this year as it will be organized at the Asia Pacific (APAC) level. You can read more about PyCon APAC 2017 here, in case you would like to attend.


    published by noreply@blogger.com (Ben Witten) on 2017-05-22 19:11:00 in the "360" category
    End Point Liquid Galaxy will be coming to San Antonio to participate in the GEOINT 2017 Symposium. We are excited to demonstrate our geospatial capabilities on an immersive, panoramic 7-screen Liquid Galaxy system. We will be exhibiting at booth #1012 from June 4-7.

    On the Liquid Galaxy, complex data sets can be explored and analyzed in a 3D immersive fly-through environment. Presentations can highlight specific data layers combined with video, 3D models, and browsers for maximum communications efficiency. The end result is a rich, highly immersive, and engaging way to experience your data.

    Liquid Galaxy's extensive capabilities include ArcGIS, Cesium, Google Maps, Google Earth, LIDAR point clouds, realtime data integration, 360 panoramic video, and more. The system always draws huge crowds at conferences; people line up to try out the system for themselves.

    End Point has deployed Liquid Galaxy systems around the world. This includes many high profile clients, such as Google, NOAA, CBRE, National Air & Space Museum, Hyundai, and Barclays. Our clients utilize our content management system to create immersive and interactive presentations that tell engaging stories to their users.

    GEOINT is hosted and produced by the United States Geospatial Intelligence Foundation (USGIF). It is the nation's largest gathering of industry, academia, and government, including the Defense, Intelligence, and Homeland Security communities as well as commercial, Fed/Civil, State, and Local geospatial intelligence stakeholders.

    We look forward to meeting you at booth #1012 at GEOINT. In the meantime, if you have any questions please visit our website or email ask@endpoint.com.


    published by noreply@blogger.com (Kiel) on 2017-05-22 15:59:00 in the "bash" category

    Do you want your script to run a command only if the elapsed time for a given process is greater than X?

    Well, bash does not inherently understand a time comparison like:

    if [ 01:23:45 -gt 00:05:00 ]; then
        foo
    fi
    

    However, bash can compare timestamps of files using -ot and -nt for "older than" and "newer than", respectively. If the launch of our process includes creation of a PID file, then we are in luck! At the beginning of our loop, we can create a file with a specific age and use that for quick and simple comparison.

    For example, if we only want to take action when the process we care about was launched longer than 24 hours ago, try:

    touch -t $(date --date=yesterday +%Y%m%d%H%M.%S) $STAMPFILE
    

    Then, within your script loop, compare the PID file with the $STAMPFILE, like this:

    if [ $PIDFILE -ot $STAMPFILE ]; then
        foo
    fi
    

    And of course if you want to be sure you're working with the PID file of a process which is actually responding, you can try to send it signal 0 to check:

    if kill -0 `cat $PIDFILE`; then
        foo
    fi
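
    Putting the pieces together, here's a minimal sketch of the whole check; the file paths and the foo command are placeholders for whatever your script actually manages:

    #!/bin/bash
    # Hypothetical locations; adjust to your environment.
    PIDFILE=/var/run/myapp.pid
    STAMPFILE=/tmp/myapp.stamp

    # Mark a point in time 24 hours ago.
    touch -t $(date --date=yesterday +%Y%m%d%H%M.%S) $STAMPFILE

    # Act only if the process is still alive and was started before the stamp.
    if kill -0 $(cat $PIDFILE) 2>/dev/null && [ $PIDFILE -ot $STAMPFILE ]; then
        foo
    fi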
    

    published by noreply@blogger.com (Jon Jensen) on 2017-05-10 04:57:00 in the "ecommerce" category

    We do a lot of ecommerce development at End Point. You know the usual flow as a customer: select products, add to the shopping cart, then check out. Checkout asks questions about the buyer, payment, and delivery, at least. Some online sales are for "soft goods", downloadable items that don't require a delivery address, but much of online selling is still of physical goods to be delivered to an address. For that, a postal code or ZIP code is usually required.

    No postal code?

    I say usually because there are some countries that do not use postal codes at all. An ecommerce site that expects to ship products to buyers in one of those countries needs to allow for an empty postal code at checkout time. Otherwise, customers may leave thinking they aren't welcome there. The more creative among them will make up something to put in there, such as "00000" or "99999" or "NONE".

    Someone has helpfully assembled and maintains a machine-readable (in Ruby, easily convertible to JSON or other formats) list of the countries that don't require a postal code. You may be surprised to see on the list such countries as Hong Kong, Ireland, Panama, Saudi Arabia, and South Africa. Some countries on the list actually do have postal codes but do not require them or commonly use them.

    Do you really need the customer's address?

    When selling both downloadable and shipped products, it would be nice not to bother asking the customer for an address at all. Unfortunately, even when there is no shipping address because there's nothing to ship, the billing address is still needed if payment is made by credit card through a normal credit card payment gateway - as opposed to PayPal, Amazon Pay, Venmo, Bitcoin, or other alternative payment methods.

    The credit card Address Verification System (AVS) allows merchants to ask a credit card issuing bank whether the mailing address provided matches the address on file for that credit card. Normally only two parts are checked: (1) the numeric part of the street address, for example "123" if "123 Main St." was provided; (2) the ZIP or postal code, normally only the first 5 digits for US ZIP codes; AVS often doesn't work at all with non-US banks and postal codes.

    Before sending the address to AVS, validating the format of postal codes is simple for many countries: 5 digits in the US (allowing an optional -nnnn for ZIP+4), and 4 or 5 digits in most other countries - see the Wikipedia List of postal codes in various countries for a high-level view. Canada is slightly more complicated: 6 characters total, alternating a letter followed by a number, formally with a space in the middle, like K1A 0B1, as explained in Wikipedia's components of a Canadian postal code.

    So most countries' postal codes can be validated in software with simple regular expressions, to catch typos such as transpositions and missing or extra characters.
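
    As a rough sketch (patterns simplified and not authoritative), shell checks along these lines cover the US and Canadian formats just described:

    # US ZIP: 5 digits, with an optional ZIP+4 suffix.
    us_re='^[0-9]{5}(-[0-9]{4})?$'
    [[ "12345-6789" =~ $us_re ]] && echo "US ZIP looks plausible"

    # Canada: alternating letter/digit with an optional space (simplified; some letters are actually excluded).
    ca_re='^[A-Za-z][0-9][A-Za-z] ?[0-9][A-Za-z][0-9]$'
    [[ "K1A 0B1" =~ $ca_re ]] && echo "Canadian postal code looks plausible"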

    UK postcodes

    The most complicated postal codes I have worked with are the United Kingdom's, because they can be from 5 to 7 characters, with an unpredictable mix of letters and numbers, normally formatted with a space in the middle. The benefit they bring is that they encode a lot of detail about the address, and it's possible to catch transposed-character errors that would be missed in a purely numeric postal code. The Wikipedia article Postcodes in the United Kingdom has the gory details.

    It is common to use a regular expression to validate UK postcodes in software, and many of these regexes are to some degree wrong. Most let through many invalid postcodes, and some disallow valid codes.

    We recently had a client get a customer report of a valid UK postcode being rejected during checkout on their ecommerce site. The validation code was using a regex that is widely copied in software in the wild:

    [A-PR-UWYZ0-9][A-HK-Y0-9][AEHMNPRTVXY0-9]?[ABEHMNPRVWXY0-9]?[0-9][ABD-HJLN-UW-Z]{2}

    (This example removes support for the odd exception GIR 0AA for simplicity's sake.)

    The customer's valid postcode that doesn't pass that test was W1F 0DP, in London, which the Royal Mail website confirms is valid. The problem is that the regex above doesn't allow for F in the third position, as that was not valid at the time the regex was written.
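
    To reproduce the failure, here's a quick test of that widely copied regex with grep, with spaces stripped for simplicity and the GIR 0AA exception ignored as above:

    uk_re='^[A-PR-UWYZ0-9][A-HK-Y0-9][AEHMNPRTVXY0-9]?[ABEHMNPRVWXY0-9]?[0-9][ABD-HJLN-UW-Z]{2}$'

    echo 'SW1A1AA' | grep -E "$uk_re"   # matches: a well-known valid postcode
    echo 'W1F0DP'  | grep -E "$uk_re"   # no match: F in the third position was not anticipated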

    This is one problem with being too strict in validations of this sort: the rules change over time, usually to allow things that once were not allowed. Reusable, maintained software libraries that specialize in UK postal codes can keep up, but there is always lag time between when updates are released and when they're incorporated into production software. And copied or customized regexes will likely stay the way they are until someone runs into a problem.

    The ecommerce site in question is running on the Interchange ecommerce platform, which is based on Perl, so the most natural place to look for an updated validation routine is on CPAN, the Perl network of open source library code. There we find the nice module Geo::UK::Postcode which has a more current validation routine and a nice interface. It also has a function to format a UK postcode in the canonical way, capitalized (easy) and with the space in the correct place (less easy).

    It also presents us with a new decision: should we use the basic "valid" test, or the "strict" one? This is where it gets a little trickier. The "valid" check uses a regex validation approach and will still let through some invalid postcodes, because it doesn't know what all the current valid delivery destinations are. The "strict" check uses a comprehensive list of all the "outcode" data, which, as you can see if you look at that source code, is extensive.

    The bulkiness of that list, and its short shelf life - the likelihood that it will become outdated and reject a future valid postcode - make strict validation checks like this of questionable value for basic ecommerce needs. Often it is better to let a few invalid postcodes through now so that future valid ones will also be allowed.

    The ecommerce site I mentioned also does in-browser validation via JavaScript before ever submitting the order to the server. Loading a huge list of valid outcodes would waste a lot of bandwidth and slow down checkout loading, especially on mobile devices. So a more lax regex check there is a good choice.

    When Christmas comes

    There's no Christmas gift of a single UK postal code validation solution for all needs, but there are some fun trivia notes in the Wikipedia page covering Non-geographic postal codes:

    "

    A fictional address is used by UK Royal Mail for letters to Santa Claus:

    Santa's Grotto
    Reindeerland XM4 5HQ

    Previously, the postcode SAN TA1 was used.

    In Finland the special postal code 99999 is for Korvatunturi, the place where Santa Claus (Joulupukki in Finnish) is said to live, although mail is delivered to the Santa Claus Village in Rovaniemi.

    In Canada the amount of mail sent to Santa Claus increased every Christmas, up to the point that Canada Post decided to start an official Santa Claus letter-response program in 1983. Approximately one million letters come in to Santa Claus each Christmas, including from outside of Canada, and they are answered in the same languages in which they are written. Canada Post introduced a special address for mail to Santa Claus, complete with its own postal code:

    SANTA CLAUS
    NORTH POLE H0H 0H0

    In Belgium bpost sends a small present to children who have written a letter to Sinterklaas. They can use the non-geographic postal code 0612, which refers to the date Sinterklaas is celebrated (6 December), although a fictional town, street and house number are also used. In Dutch, the address is:

    Sinterklaas
    Spanjestraat 1
    0612 Hemel

    This translates as "1 Spain Street, 0612 Heaven". In French, the street is called "Paradise Street":

    Saint-Nicolas
    Rue du Paradis 1
    0612 Ciel

    "

    That UK postcode for Santa doesn't validate in some of the regexes, but the simpler Finnish, Canadian, and Belgian ones do, so if you want to order something online for Santa, you may want to choose one of those countries for delivery. :)


    published by noreply@blogger.com (Matt Galvin) on 2017-05-04 13:00:00 in the "training" category

    This blog post is for people like me who are interested in improving their knowledge about computers, software and technology in general but are inundated with an abundance of resources and no clear path to follow. Many of the courses online tend to not have any real structure. While it's great that this knowledge is available to anyone with access to the internet, it often feels overwhelming and confusing. I always enjoy a little more structure to study, much like in a traditional college setting. So, to that end I began to look at MIT's OpenCourseWare and compare it to their actual curriculum.

    I'd like to begin by acknowledging that some time ago Scott Young completed the MIT Challenge, where he "attempted to learn MIT's 4-year computer science curriculum without taking classes". My friend Najmi here at End Point also shared a great website with me to "Teach Yourself Computer Science". So, this is not the first post to try to make sense of all the free resources available to you; it's just one which tries to help organize a coherent plan of study.

    Methodology

    I wanted to mimic MIT's real CS curriculum. I also wanted to limit my studies to Computer Science only, while stripping out anything not strictly related. It's not that I am not interested in things like speech classes or more advanced mathematics and physics, but I wanted to be pragmatic about the amount of time I have each week to put in to study outside of my normal (very busy) work week. I imagine anyone reading this would understand and very likely agree.

    I examined MIT's course catalog. They have 4 undergraduate programs in the Department of Electrical Engineering and Computer Science:

    • 6-1 program: Leads to the Bachelor of Science in Electrical Science and Engineering. (Electrical Science and Engineering)
    • 6-2 program: Leads to the Bachelor of Science in Electrical Engineering and Computer Science and is for those whose interests cross this traditional boundary.
    • 6-3 program: Leads to the Bachelor of Science in Computer Science and Engineering. (Computer Science and Engineering)
    • 6-7 program: Is for students specializing in computer science and molecular biology.
    Because I wanted to stick to what I believed would be most practical for my work at End Point, I selected the 6-3 program. With my intended program selected, I also decided that the full course load for a bachelor's degree was not really what I was interested in. Instead, I just wanted to focus on the computer science related courses (with maybe some math and physics only if needed to understand any of the computer courses).

    So, looking at the requirements, I began to determine which classes I'd require. Once I had this, I could then begin to search the MIT OpenCourseWare site to ensure the classes are offered, or find suitable alternatives on Coursera or Udemy. As is typical, there are General Requirements and Departmental Requirements. So, beginning with the General Institute Requirements, let's start designing a computer science program with all the fat (non-computer science) cut out.


    General Requirements:



    I removed that which was not computer science related. As I mentioned, I was aware I may need to add some math/science. So, for the time being this left me with:


    Notice that it says "

    one subject can be satisfied by 6.004 and 6.042[J] (if taken under joint number 18.062[J]) in the Department Program
    "
    It was unclear to me what "if taken under joint number 18.062[J]" meant (nor could I find clarification), but as will be shown later, 6.004 and 6.042[J] are in the departmental requirements, so let's commit to taking those two, which leaves the requirement of one more REST course. After some Googling I found the list of REST courses here. So, if you're reading this to design your own program, please remember that later we will commit to 6.004 and 6.042[J], and go here to select a course.

    So, now on to the General Institute Requirements Laboratory Requirement. We only need to choose one of three:

    • - 6.01: Introduction to EECS via Robot Sensing, Software and Control
    • - 6.02: Introduction to EECS via Communications Networks
    • - 6.03: Introduction to EECS via Medical Technology


    So, to summarize the general requirements we will take 4 courses:

    Major (Computer Science) Requirements:


    In keeping with the idea that we want to remove non-essential and non-CS courses, let's remove the speech class. So here we have a nice summary of what we discovered above in the General Requirements, along with details of the computer science major requirements:


    As stated, let's look at the list of Advanced Undergraduate Subjects and Independent Inquiry Subjects so that we may select one from each of them:



    Lastly, it's stated that we must "

    Select one subject from the departmental list of EECS subjects
    "
    A link is provided to do so; however, it brings you here, and I cannot find a list of courses. I believe that this link no longer takes you to the intended location. A Google search brought up a similar page, but with a list of courses, as can be seen here. So, I will pick one from that page.

    The next step was to find the associated courses on MIT OpenCourseWare.

    Sample List of Classes

    So, now you will be able to follow the links I provided above to select your classes. I was not always able to find courses that matched by exact name and/or course number. Sometimes I had to read the description and look through several courses which seemed similar. I will provide my own list in case you'd just like to use mine:

    Conclusion

    So there you have it, please feel free to comment with any of your favorite resources.


    published by noreply@blogger.com (Dave Jenkins) on 2017-04-21 17:21:00 in the "browsers" category


    As many of you may have seen, earlier this week Google released a major upgrade to the Google Earth app. Overall, it's much improved, sharper, and a deeper experience for viewers. We will be upgrading/incorporating our managed fleet of Liquid Galaxies over the next two months after we've had a chance to fully test its functionality and polish the integration points, but here are some observations for how we see this updated app impacting the overall Liquid Galaxy experience.

    • Hooray! The new Earth is here! The New Earth is here! Certainly, this is exciting for us. The Google Earth app plays a key central role in the Liquid Galaxy viewing experience, so a major upgrade like this is a most welcome development. So far, the reception has been positive. We anticipate it will continue to get even better as people really sink their hands into the capabilities and mashup opportunities this browser-based Earth presents.

    • We tested some pre-release versions of this application, and successfully integrated them with the Liquid Galaxy and are very happy with how we are able to view-synchronize unique instances of the new Google Earth across displays with appropriate geometrically configured offsets.

    • What to look for in this new application:
      • Stability: The new Google Earth runs as a NaCl application in a Chrome browser. This is an enormous advance for Google Earth. As an application in Chrome it is instantly accessible to billions of new users with their established expectations. Because the new Google Earth uses Chrome the Google Earth developers will no longer need to engage in the minutiae of having to support multiple desktop operating systems, but now can instead concentrate on the core-functionality of Google Earth and leverage the enormous amount of work that the Chrome browser developers do to make Chrome a cross-platform application.
      • Smoother 3D: The (older) Google Earth sometimes has a sort of "melted ice cream" look to the 3D buildings in many situations. Often, buildings fail to fully load from certain viewpoints. From what we're seeing so far, the 3D renderings in the New Earth appear to be a lot sharper and cleaner.
      • Browser-based possibilities: As focus turns more and more to browser-based apps, and as JavaScript libraries continue to mature, the opportunities and possibilities for how to display various data types, data visualizations, and interactions really start to multiply. We can already see this with the sort of deeper stories and knowledge cards that Google is including in the Google Earth interface. We hope to take the ball and run with it, as the Liquid Galaxy can already handle a host of different media types. We might exploit layers, smart use controls, realtime content integration from other available databases, and... okay, I'm getting ahead of myself.

    • The New Google Earth makes a major point of featuring stories and deeper contextual information, rather than just ogling at the terrain: as pretty as the Grand Canyon is to look at, knowing a little about the explorers, trails, and history makes it such a nicer experience to view. We've gone through the same evolution with the Liquid Galaxy: it used to be just a big Google Earth viewer, but we quickly realized the need for more context and usable information for a richer interaction with the viewers by combining Earth with street view, panoramic video, 3D objects, etc. It's why we built a content management system to create presentations with scenes. We anticipate that the knowledge cards and deeper information that Google is integrating here will only strengthen that interaction.
    We are looking to roll out the new Google Earth to the fleet in the next couple of months. We need to do a lot of testing and then update the Liquid Galaxies with minimal (or no) disturbance to our clients, many of whom rely on the platform as a daily sales and showcasing tool for their businesses. As always, if you have any questions, please reach us directly via email or call.

    published by noreply@blogger.com (Jon Jensen) on 2017-04-20 23:50:00 in the "company" category

    We are looking for another talented software developer to consult with our clients and develop web applications for them in Ruby on Rails, Django, AngularJS, Java, .NET, Node.js, and other technologies. If you like to solve business problems and can take responsibility for getting a job done well without intensive oversight, please read on!

    End Point is a 20-year-old web consulting company based in New York City, with 45 full-time employees working mostly remotely from home offices. We are experts in web development, databases, and DevOps, collaborating using SSH, Screen/tmux, chat, Hangouts, Skype, and good old phones.

    We serve over 200 clients ranging from small family businesses to large corporations. We use open source frameworks in a variety of languages including JavaScript, Ruby, Java, Scala, Kotlin, C#, Python, Perl, and PHP, tracked by Git, running mostly on Linux and sometimes on Windows.

    What is in it for you?

    • Flexible full-time work hours
    • Paid holidays and vacation
    • For U.S. employees: health insurance subsidy and 401(k) retirement savings plan
    • Annual bonus opportunity
    • Ability to move without being tied to your job location

    What you will be doing:

    • Work from your home office, or from our offices in New York City and the Tennessee Tri-Cities area
    • Consult with clients to determine their web application needs
    • Build, test, release, and maintain web applications for our clients
    • Work with open source tools and contribute back as opportunity arises
    • Use your desktop platform of choice: Linux, macOS, Windows
    • Learn and put to use new technologies
    • Direct much of your own work

    What you will need:

    • Professional experience building reliable server-side apps
    • Good front-end web skills with responsive design using HTML, CSS, and JavaScript, including jQuery, Angular, Backbone.js, Ember.js, etc.
    • Experience with databases such as PostgreSQL, MySQL, SQL Server, MongoDB, CouchDB, Redis, Elasticsearch, etc.
    • A focus on needs of our clients and their users
    • Strong verbal and written communication skills

    We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of gender, race, religion, color, national origin, sexual orientation, age, marital status, veteran status, or disability status.

    Please email us an introduction to jobs@endpoint.com to apply. Include a resume, your GitHub or LinkedIn URLs, or whatever else that would help us get to know you. We look forward to hearing from you! Full-time employment seekers only, please -- this role is not for agencies or subcontractors.


    published by Eugenia on 2017-04-20 00:37:37 in the "Metaphysics" category
    Eugenia Loli-Queru

    During my lucid dream today, semi-jokingly my spirit guide Esther says that the only way to avoid an abduction, is to induce coma for at least 1 yr, until they lose interest. Hardly a great way to get out of that ain’t it? But she says there’s no other way, you just need to make your body and spirit unavailable to them.

    That reply really got me surprised, because it was definitely not my own subconscious generating it. I never thought about getting into a… coma as a solution to the reported phenomena. In fact, it surprised me so much during my dream, that I kept repeating it throughout until I woke up (“remember it, remember it…”). I have long discussions with my spirit guide often about stuff, but I rarely remember them, but this time I needed to remember it.

    I also asked her then why do they abduct people, and she told me “you know why” (and I do, a post for another time).

    I then asked her if the Greys are good or bad, and she said that they’re not necessarily great, they are just doing what they were “hired” to do. Then I asked who ordered the project, and she says: “I can’t tell you that” (she wouldn’t reveal it).

    So, yeah, that happened today.


    published by Eugenia on 2017-04-17 00:53:44 in the "General" category
    Eugenia Loli-Queru

    After many years of research on the subject, I found that these are the six most important points for one’s health. In no particular order, but sunlight is probably the most important of them all.

    – Exposure to Sunlight

    Two hours of early AM sunlight, as minimum. Without sunlight, our mitochondria don’t work.

    – Exposure to Clean Air

    Extra oxygenation via walking, breathing exercises, yoga, tai chi, and meditation. Vigorous exercise is not needed, and especially if you're already sick, it must not be pursued. Sitting too much or not knowing how to breathe deeply creates lactic acidosis in the body, which is the beginning of the end for health. This is what the Chinese also call “Qi liver stagnation”.

    – Exposure to Clean Water

    Spring water, non-fluoridated, alkaline if possible. And LOTS of it! The water, along with some salt and DHA, will act as the electricity in your body, to carry out the needed functions of what some people call “detoxification” (although that’s not the right word for what’s going on).

    – Exposure to the Right Diet

    Plant-based Paleo, also known as Pegan (some offal, some wild fish and eggs, but mostly plants/fruits). Removing grains and sugars from the diet, we assure that the liver will have enough B vitamins to do its job: releasing away or converting the lactic acid. Otherwise, you end up with a non-alcoholic fatty liver, and everything starts breaking down in the body. More explanation of the Pegan diet here.

    – Exposure to the Right Sleep

    No sleep, no bueno. Circadian rhythm is our clock, and without that clock, things fall apart. Sleep when the sun goes down, or at the very least use blue-blocker glasses at night.

    – Exposure to the Right Frequencies

    This might be seen as quackery, but it’s not. Non-native EMF signals are detrimental to our health. Avoid WiFi and cellphones as much as you can, and anything of the like. Walk barefoot on the bare Earth to get the right frequency to heal your body.


    published by noreply@blogger.com (Greg Sabino Mullane) on 2017-04-13 21:11:00 in the "cryptography" category

    SSH (Secure Shell) is one of the programs I use every single day at work, primarily to connect to our client's servers. Usually it is a rock-solid program that simply works as expected, but recently I discovered it behaving quite strangely - a server I had visited many times before was now refusing my attempts to login. The underlying problem turned out to be a misguided decision by the developers of OpenSSH to deprecate DSA keys. How I discovered this problem is described below (as well as two solutions).

    The use of the ssh program is not simply limited to logging in and connecting to remote servers. It also supports many powerful features, one of the most important being the ability to chain multiple connections with the ProxyCommand option. By using this, you can "login" to servers that you cannot reach directly, by linking together two or more servers behind the scenes.

    As an example, let's consider a client named "Acme Anvils" that strictly controls access to its production servers. They make all SSH traffic come in through a single server, named dmz.acme-anvils.com, and only on port 2222. They also only allow certain public IPs to connect to this server, via whitelisting. On our side, End Point has a server, named portal.endpoint.com, that I can use as a jumping-off point, which has a fixed IP that we can give to our clients to whitelist. Rather than logging in to "portal", getting a prompt, and then logging in to "dmz", I can simply add an entry in my ~/.ssh/config file to automatically create a tunnel between the servers - at which point I can reach the client's server by typing "ssh acmedmz":

    ##
    ## Client: ACME ANVILS
    ##
    
    ## Acme Anvil's DMZ server (dmz.acme-anvils.com)
    Host acmedmz
    User endpoint
    HostName 555.123.45.67
    Port 2222
    ProxyCommand ssh -q greg@portal.endpoint.com nc -w 180s %h %p
    

    Notice that the "Host" name may be set to anything you want. The connection to the client's server uses a non-standard port, and the username changes from "greg" to "endpoint", but all of that is hidden away from me as now the login is simply:

    [greg@localhost]$ ssh acmedmz
    [endpoint@dmz]$
    

    It's unusual that I'll actually need to do any work on the dmz server, of course, so the tunnel gets extended another hop to the db1.acme-anvils.com server:

    ##
    ## Client: ACME ANVILS
    ##
    
    ## Acme Anvil's DMZ server (dmz.acme-anvils.com)
    Host acmedmz
    User endpoint
    HostName 555.123.45.67
    Port 2222
    ProxyCommand ssh -q greg@portal.endpoint.com nc -w 180s %h %p
    
    ## Acme Anvil's main database (db1.acme-anvils.com)
    Host acmedb1
    User postgres
    HostName db1
    ProxyCommand ssh -q acmedmz nc -w 180s %h %p
    
    

    Notice how the second ProxyCommand references the "Host" of the section above it. Neat stuff. When I type "ssh acmedb1", I'm actually connecting to the portal.endpoint.com server, then immediately running the netcat (nc) command in the background, then going through netcat to dmz.acme-anvils.com and running a second netcat command on *that* server, and finally going through both netcats to login to the db1.acme-anvils.com server. It sounds a little complicated, but quickly becomes part of your standard tool set once you wrap your head around it. After you update your .ssh/config file, you soon forget about all the tunneling and feel as though you are connecting directly to all your servers. That is, until something breaks, as it did recently for me.

    The actual client this happened with was not "Acme Anvils", of course, and it was a connection that went through four servers and three ProxyCommands, but for demonstration purposes let's pretend it happened on a simple connection to the dmz.acme-anvils.com server. I had not connected to the server in question for a long time, but I needed to make some adjustments to a tail_n_mail configuration file. The first login attempt failed completely:

    [greg@localhost]$ ssh acmedmz
    endpoint@dmz.acme-anvils.com's password: 
    

    Although the connection to portal.endpoint.com worked fine, the connection to the client server failed. This is not an unusual problem: it usually signifies that either ssh-agent is not running, or that I forgot to feed it the correct key via the ssh-add program. However, I quickly discovered that ssh-agent was working and contained all my usual keys. Moreover, I was able to connect to other sites with no problem! On a hunch, I tried breaking down the connections into manual steps. First, I tried logging in to the "portal" server. It logged me in with no problem. Then I tried to login from there to dmz.acme-anvils.com - which also logged me in with no problem! But trying to get there via ProxyCommand still failed. What was going on?

    When in doubt, crank up the debugging. For the ssh program, using the -v option turns on some minimal debugging. Running the original command from my computer with this option enabled quickly revealed the problem:

    [greg@localhost]$ ssh -v acmedmz
    OpenSSH_7.4p1, OpenSSL 1.0.2k-fips  26 Jan 2017
    debug1: Reading configuration data /home/greg/.ssh/config
    debug1: /home/greg/.ssh/config line 1227: Applying options for acmedmz
    debug1: Reading configuration data /etc/ssh/ssh_config
    ...
    debug1: Executing proxy command: exec ssh -q greg@portal.endpoint.com nc -w 180s 555.123.45.67 2222
    ...
    debug1: Authenticating to dmz.acme-anvils.com:2222 as 'endpoint'
    ...
    debug1: Host 'dmz.acme-anvils.com' is known and matches the ECDSA host key.
    ...
    debug1: Skipping ssh-dss key /home/greg/.ssh/greg2048dsa.key - not in PubkeyAcceptedKeyTypes
    debug1: SSH2_MSG_SERVICE_ACCEPT received
    debug1: Authentications that can continue: publickey,password
    debug1: Next authentication method: publickey
    debug1: Offering RSA public key: /home/greg/.ssh/greg4096rsa.key
    debug1: Next authentication method: password
    endpoint@dmz.acme-anvils.com's password: 
    

    As highlighted above, the problem is that my DSA key (the "ssh-dss key") was rejected by my ssh program. As we will see below, DSA keys are rejected by default in recent versions of the OpenSSH program. But why was I still able to login when not hopping through the middle server? The solution lies in the fact that when I use the ProxyCommand, *my* ssh program is negotiating with the final server, and is refusing to use my DSA key. However, when I ssh to the portal.endpoint.com server, and then on to the next one, the second server has no problem using my (forwarded) DSA key! Using the -v option on the connection from portal.endpoint.com to dmz.acme-anvils.com reveals another clue:

    [greg@portal]$ ssh -v endpoint@dmz.acme-anvils.com:2222
    ...
    debug1: Connecting to dmz [1234:5678:90ab:cd::e] port 2222.
    ...
    debug1: Next authentication method: publickey
    debug1: Offering RSA public key: /home/greg/.ssh/endpoint2.ssh
    debug1: Authentications that can continue: publickey,password
    debug1: Offering DSA public key: /home/greg/.ssh/endpoint.ssh
    debug1: Server accepts key: pkalg ssh-dss blen 819
    debug1: Authentication succeeded (publickey).
    Authenticated to dmz ([1234:5678:90ab:cd::e]:2222).
    ...
    debug1: Entering interactive session.
    [endpoint@dmz]$
    

    If you look closely at the above, you will see that we first offered an RSA key, which was rejected, and then we successfully offered a DSA key. This means that the endpoint@dmz account has a DSA, but not an RSA, public key inside of its ~/.ssh/authorized_keys file. Since I was able to connect to portal.endpoint.com, its ~/.ssh/authorized_keys file must have my RSA key.

    For the failing connection, ssh was able to use my RSA key to connect to portal.endpoint.com, run the netcat command, and then continue on to the dmz.acme-anvils.com server. However, this connection failed as the only key my local ssh program would provide was the RSA one, which the dmz server did not have.

    For the working connection, ssh was able to connect to portal.endpoint.com as before, and then into an interactive prompt. However, when I then connected via ssh to dmz.acme-anvils.com, it was the ssh program on portal, not my local computer, which negotiated with the dmz server. It had no problem using a DSA key, so I was able to login. Note that both keys were happily forwarded to portal.endpoint.com, even though my ssh program refused to use them!

    The quick solution to the problem, of course, was to upload my RSA key to the dmz.acme-anvils.com server. Once this was done, my local ssh program was more than happy to login by sending the RSA key along the tunnel.

    Another solution to this problem is to instruct your SSH programs to recognize DSA keys again. To do this, add this line to your local SSH config file ($HOME/.ssh/config), or to the global SSH config file (/etc/ssh/config):

    PubkeyAcceptedKeyTypes +ssh-dss
    

    As mentioned earlier, this whole mess was caused by the OpenSSH program deciding to deprecate DSA keys. Their rationale for targeting all DSA keys seems a little weak at best: certainly I don't feel that my 2048-bit DSA key is in any way a weak link. But the writing is on the wall now for DSA, so you may as well replace your DSA keys with RSA ones (and an ed25519 key as well, in anticipation of when ssh-agent is able to support them!). More information about the decision to force out DSA keys can be found in this great analysis of the OpenSSH source code.
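
    If you decide to do that, generating replacements takes only a moment. Here's a minimal sketch - the key file names and comment strings are just examples - followed by copying the new public key to the remote account using the Host alias from the earlier config:

    # Generate a 4096-bit RSA key and an ed25519 key to replace an old DSA key.
    ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa_new -C "replacement RSA key"
    ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519_new -C "replacement ed25519 key"

    # Install the new public key on the remote server.
    ssh-copy-id -i ~/.ssh/id_rsa_new.pub acmedmz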


    published by noreply@blogger.com (Emanuele 'Lele' Calò) on 2017-04-10 17:40:00
    Not long ago, one of our customers had their website compromised because of a badly maintained, outdated WordPress installation. At End Point we love WordPress, but it really needs to be configured and hardened the right way; otherwise it's easy to end up in a real nightmare.

    This situation is made even worse if there's no additional security enforcement system to protect the environment the compromised site lives on. One of the basic ways to protect your Linux server, especially RHEL/CentOS based ones, is to use SELinux.

    Sadly, most of the interaction people have with SELinux happens while disabling it, first on the running system (where the closest you can get is switching it to permissive mode):

    setenforce Permissive
    # or, equivalently (0 = permissive, 1 = enforcing)
    setenforce 0
    

    and then permanently by manually editing the file /etc/sysconfig/selinux to change the variable SELINUX=enforcing to SELINUX=disabled.
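
    As a side note, you can always check which mode a system is currently running in with the standard tools:

    getenforce   # prints Enforcing, Permissive, or Disabled
    sestatus     # more detail: current mode, the mode set in the config file, and the loaded policy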

    Is that actually a good idea, though? While SELinux can be a bit of a headache to tune appropriately and can easily be misconfigured, here's something that could really convince you to think twice before disabling SELinux once and for all.

    Back to our customer's compromised site. While going through the customer's system for some post-crisis cleaning, I found this hilarious piece of bash_history:

    ls
    cp /tmp/wacky.php .
    ls -lFa
    vim wacky.php
    set
    ls -lFa
    php wacky.php 2>&1 | less
    vim wacky.php
    php wacky.php 2>&1 | less
    vim wacky.php
    php wacky.php 2>&1 | less
    vim wacky.php
    php wacky.php 2>&1 | less
    vim wacky.php
    php wacky.php 2>&1 | less
    fg
    ls -lFa
    vim wacky.php
    php wacky.php 2>&1 | less
    vim wacky.php
    php wacky.php 2>&1 | less
    vim wacky.php
    php wacky.php 2>&1 | less
    vim wacky.php
    php wacky.php 2>&1 | less
    vim wacky.php
    php wacky.php 2>&1 | less
    vim wacky.php
    php wacky.php 2>&1 | less
    vim wacky.php
    php wacky.php 2>&1 | less
    php wacky.php > THE-EVIL 2>&1
    vim THE-EVIL
    ls -lFA
    less wacky.php
    ls
    less THE-EVIL
    less wacky.php
    cat /selinux/enforce
    ls
    less THE-EVIL
    exit
    

    As you can see, the attacker managed to get a shell connection as the customer's user and started using a PHP file injected into /tmp as a possible further attack vector.

    Sadly for the attacker, at least, SELinux was set up in enforcing mode with some strict rules and prevented any kind of execution of that specific script, so after a few frantic attempts the attacker surrendered.

    Looking into the /var/log/audit/audit.log file I found all the type=AVC denial entries that SELinux was shouting while forbidding the attacker from pursuing his nefarious plan.
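
    If you want to dig into denials like these yourself, a couple of standard commands help; the log path assumes a stock auditd setup, and audit2why comes with the policycoreutils tools:

    # List recent AVC denials recorded by auditd.
    ausearch -m avc -ts recent

    # Pipe raw denials through audit2why for a human-readable explanation.
    grep 'type=AVC' /var/log/audit/audit.log | audit2why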

    Hilarious and good props to SELinux for saving the day.

    less THE-EVIL, more SELinux!

    published by noreply@blogger.com (Marco Matarazzo) on 2017-04-07 18:06:00 in the "CentOS" category

    During a recent CentOS 7 update, among other packages, we updated our Percona 5.7 installation to version 5.7.17-13.

    Quickly after that, we discovered that mysqldump had stopped working, breaking our local MySQL backup script (which complained loudly).

    What happened?


    The error we received was:

    mysqldump: Couldn't execute 'SELECT COUNT(*) FROM INFORMATION_SCHEMA.SESSION_VARIABLES WHERE VARIABLE_NAME LIKE 'rocksdb_skip_fill_cache'': The 'INFORMATION_SCHEMA.SESSION_VARIABLES' feature is disabled; see the documentation for 'show_compatibility_56' (3167)

    After a bit of investigation, we discovered this was caused by this regression bug, apparently already fixed but not yet available on CentOS:

    Everything revolves around the INFORMATION_SCHEMA variable tables being deprecated in version 5.7.6, when Performance Schema tables were added as a replacement.

    Basically, a regression caused mysqldump to try and use deprecated INFORMATION_SCHEMA tables instead of the new Performance Schema.

    How to fix it?


    The immediate workaround is to add this line to /etc/my.cnf or (more likely) /etc/percona-server.conf.d/mysqld.cnf, depending on how your configuration files are organized:

    show_compatibility_56=1
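
    Since show_compatibility_56 is a dynamic variable, you should also be able to flip it at runtime without restarting the server - a quick sketch, assuming a client with sufficient privileges:

    mysql -e "SET GLOBAL show_compatibility_56 = ON;"
    mysql -e "SHOW GLOBAL VARIABLES LIKE 'show_compatibility_56';"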

    This flag was both introduced and deprecated in 5.7.6. It will be there for some time to help with the transition.

    It seems safe and probably good to keep if you have anything still actively using INFORMATION_SCHEMA tables, which would obviously break if not updated to use the Performance Schema introduced in 5.7.6.

    With this flag, it is possible to preserve the old behavior and keep your old code in a working state, while you upgrade it. Also, according to the documentation, it should not impact or turn off the new behavior with Performance Schema.

    More information on how to migrate to the new Performance Schema can be found here.


    published by noreply@blogger.com (Marco Matarazzo) on 2017-04-04 13:45:00 in the "CentOS" category

    At End Point, we use different hosting providers based on the needs of each specific task. One provider we use extensively, with good results, is Linode.

    During a routine CentOS 7 system update, we noticed a very strange behavior where our IPv6 assigned server address was wrong after restarting the server.

    IPv6 on Linode and SLAAC


    Linode offers IPv6 on all their VPSes, and dynamic IPv6 addresses are assigned to servers using SLAAC.

    In the provided CentOS 7 server image, this is managed by NetworkManager by default. After some troubleshooting, we noticed that during the update the NetworkManager package was upgraded from 1.0.6 to 1.4.0.

    This was a major update, and it turned out that the problem was a change in the configuration defaults between the two versions.

    Privacy stable addressing


    Since version 1.2, NetworkManager has had the Stable Privacy Addressing feature. This allows for some form of tracking prevention: the IPv6 address is stable within a network but changes when entering another network, while still remaining unique.

    This interesting new feature has apparently become the default after the update, with the ipv6.addr-gen-mode property set to "stable-privacy". Setting it to "eui64" maintains the old default behavior.

    Privacy Extension


    Another feature apparently also caused some problems on our VPS: the Privacy Extension. This is a simple mechanism that somewhat randomizes the network hardware (MAC) address, to add another layer of privacy. Alas, this is used in address generation, and that randomization seemed to be part of the problem we were seeing.

    This too has become the default, with the ipv6.ip6-privacy property set to 1. Setting it to 0 turns off the feature.

    To sum it up


    In the end, after the update, we could restore the old behavior and resolve our issues by running, in a root shell:

    nmcli connection modify "Wired connection 1" ipv6.ip6-privacy 0
    nmcli connection modify "Wired connection 1" ipv6.addr-gen-mode eui64

    After a reboot, the IPv6 address finally matched the one actually assigned by Linode, and everything was working ok again.
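
    To double-check the result, something like the following (the connection and interface names are assumptions based on our setup) shows the active settings and the addresses actually in use:

    nmcli -f ipv6.addr-gen-mode,ipv6.ip6-privacy connection show "Wired connection 1"
    ip -6 addr show dev eth0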

    If you want to know more about Privacy Extensions and Privacy Stable Addressing, this great blog post by Lubomir Rintel helped us a lot in understanding what was going on.


    published by noreply@blogger.com (Josh Williams) on 2017-04-01 23:24:00 in the "database" category

    In the spirit of April 1st, resurrecting this old classic post:


    Maybe you work at one of those large corporations that has a dedicated DBA staff, separate from the development team. Or maybe you're lucky and just get to read about it on thedailywtf.com. But you've probably seen battles between database folk and the developers that "just want a table with "ID " VARCHAR(255), name VARCHAR(255), price VARCHAR(255), post_date VARCHAR(255). Is that so much to ask?!"

    Well if you ever feel the need to get back at them, here's a few things you can try. Quoted identifiers let you name your objects anything you want, even if they don't look like a normal object name...

    CREATE TABLE "; rollback; drop database postgres;--" ("'';
    delete from table order_detail;commit;" INT PRIMARY KEY,
    ";commit;do $$`rm -rf *`$$ language plperlu;" TEXT NOT NULL);
    
    COMMENT ON TABLE "; rollback; drop database postgres;--"
    IS 'DON''T FORGET TO QUOTE THESE';

    Good advice, that comment. Of course, assuming they learn, they'll be quoting everything you give them. So, drop a quote right in the middle of it:

    CREATE TABLE "messages"";rollback;update products set price=0;commit;--"
    ("am i doing this right" text);
    
    [local]:5432|production=# \dt *messages*
     List of relations
     Schema |                           Name                           | Type  |   Owner   
    --------+----------------------------------------------------------+-------+-----------
     public | messages";rollback;update products set price=0;commit;-- | table | jwilliams
    (1 row)
    A copy & paste later...
    [local]:5432|production=# SELECT "am i doing this right" FROM "messages";rollback;update products set price=0;commit;--";
    ERROR:  relation "messages" does not exist
    LINE 1: select "am i doing this right" from "messages";
                                                ^
    NOTICE:  there is no transaction in progress
    ROLLBACK
    UPDATE 100
    WARNING:  there is no transaction in progress
    COMMIT

    Then again, if this is your database, that'll eventually cause you a lot of headache. Restores aren't fun. But UTF-8 can be...

    CREATE TABLE suo????su??? (?nu???p?o SERIAL PRIMARY KEY,
    ???u??sn text REFERENCES s??sn, ???o????p?o NUMERIC(5,2));
    
    CREATE TABLE ?????_????? (?????_????_?? SERIAL PRIMARY KEY, ... );