“It’s becoming increasingly clear that the most important use of virtualization is not to consolidate hardware boxes but to protect applications from the vagaries of the operating environments they run on. It’s all about ‘containerization’, to employ a really ugly but useful word.”
Looks like he was in the right place at the right time. Now we just have to see if this really takes off.
He talks like he invented the package manager. The only thing new about his idea is that the package manager can output certain kinds of disk images in addition to directory trees. It’s not like there aren’t existing tools for creating a disk image from a directory. You don’t even need that functionality for use with KVM and other full virtualization solutions, which is where the hardware industry is heading.
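Here's the sort of thing I mean, as a rough sketch of my own (nothing rPath-specific; the paths and size are just placeholders): turn an existing directory tree into a mountable ext3 image with nothing but stock tools driven from a little Python.

#!/usr/bin/env python
# Rough illustration, not anyone's product code: build an ext3 disk image
# from an existing directory tree using stock tools (truncate, mkfs, mount).
# Needs root for the loop mount; size and paths are made up.
import subprocess

def run(cmd):
    print(" ".join(cmd))
    subprocess.check_call(cmd)

def dir_to_image(rootdir, image, size_mb=512):
    run(["truncate", "-s", "%dM" % size_mb, image])     # sparse file of the requested size
    run(["mkfs.ext3", "-F", image])                     # put a filesystem on it
    run(["mkdir", "-p", "/mnt/img"])
    run(["mount", "-o", "loop", image, "/mnt/img"])     # loop-mount the image
    try:
        run(["cp", "-a", rootdir + "/.", "/mnt/img/"])  # copy the tree in, preserving metadata
    finally:
        run(["umount", "/mnt/img"])

if __name__ == "__main__":
    dir_to_image("/srv/appliance-root", "appliance.img")

From there it's a short hop to handing the image to KVM or any other full virtualization solution.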
You can do the same thing with Fedora’s new build tools. Gentoo and Puppy have similar capabilities. He’s offering yet another way of building a Linux distro. The only thing new is the suggestion that you might want to build a Linux distro so that people can run your application in a virtual machine. The result is a bad compromise where you use heavy hardware virtualization as if it were lightweight OS virtualization.
If ISVs are going to build ready-to-run virtual machines, then they might as well base them on one of the more popular systems. This way more users would have the option of simply installing the required packages on their system instead of firing up an entire virtual machine. They could even distribute OpenVZ images for popular Linux systems.
I don’t want to participate in this deluded “one OS instance per application” nonsense. Virtualization is about creating commonality on various levels, not only at the hardware level. Applications should use a higher-level abstraction. Pretty soon I’m going to get baby pictures by email that have a self-extracting Linux kernel and image viewer. They might as well ship me a computer that boots up and displays the image.
Always seemed to me like “containerization” and software appliances were the natural first steps for virtualization. I thought hardware was cheap enough that consolidation could be put on the back burner; still important, of course, but containerization and appliances struck me as the main Big Thing about virtualization.
But what do I know?
I have checked out rPath and some of its children; it’s not a bad system by any means. I don’t think he acts like he invented the package manager, though they do have their own unique package manager called Conary. If you have done much work in enterprise/production land, the idea of a slimline OS specifically geared to run a specific application has some serious perks. IMHO, rPath is flying under the radar at the moment, simply because they don’t have a lot of name recognition, with distros like those from Canonical, Red Hat, and Novell taking up the vast majority of the spotlight. I sincerely hope that their business model is sturdy and that they can sweat it out long-term, because this is the direction I see many small-to-medium-size 3rd party software makers moving in.
Linux distros have gotten rather chunky as of late, and even the slim and trim Gentoo can get chubby at times. The idea behind rBuilder is the polar opposite of tools like those from Fedora, Gentoo, and Ubuntu: rBuilder does the packaging and prepping of the images for you, and for all of the different virt methods at the same time. It would be close to futile for a traditional distro release team to try to build and keep images up to date for Microsoft, VMware, Xen, QEMU, live CD, and install CD/DVD.
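To give a feel for what “all of the different virt methods at the same time” boils down to at the image-format level, here’s a rough sketch of my own (not rBuilder’s actual pipeline; the names and format list are just illustrative) that takes one raw image and converts it for a few hypervisors with qemu-img:

#!/usr/bin/env python
# Illustration only, not rBuilder's code: given one raw appliance image,
# produce per-hypervisor copies with qemu-img (raw -> qcow2/vmdk/vpc).
import subprocess

# qemu-img output format -> file extension the target platform expects
TARGETS = {
    "qcow2": "qcow2",   # QEMU / KVM
    "vmdk":  "vmdk",    # VMware
    "vpc":   "vhd",     # Microsoft Virtual PC / Virtual Server
}

def convert_all(raw_image):
    base = raw_image.rsplit(".", 1)[0]
    for fmt, ext in TARGETS.items():
        out = "%s.%s" % (base, ext)
        subprocess.check_call(
            ["qemu-img", "convert", "-f", "raw", "-O", fmt, raw_image, out])
        print("built %s" % out)

if __name__ == "__main__":
    convert_all("appliance.img")

And that’s just the format conversion; the harder part, as I said, is keeping each of those images up to date over time.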