Linked by snydeq on Wed 16th Dec 2009 20:13 UTC
InfoWorld's Randall Kennedy takes an in-depth look at VMware Workstation 7, VirtualBox 3.1, and Parallels Desktop 4, three technologies at the heart of 'the biggest shake-up for desktop virtualization in years.' The shake-up, which sees Microsoft's once promising Virtual PC off in the Windows 7 XP Mode weeds, has put VirtualBox -- among the best free open source software available for Windows -- out front as a general-purpose VM, filling the void left by VMware's move to make Workstation more appealing to developers and admins. Meanwhile, Parallels finally offers a Desktop for Windows on par with its Mac product, as well as Workstation 4 Extreme, which delivers near native performance for graphics, disk, and network I/O.
RE: I've given up on Parallels
by chekr on Thu 17th Dec 2009 01:22 UTC in reply to "I've given up on Parallels"
chekr Member since:
2005-11-05

Every kernel update would break it, and eventually they stopped fixing it. This was even though Canonical sold it in their store!!


Another example of why Linux's total disregard for stable interfaces is bad for users and vendors.

Reply Parent Score: 8

3rdalbum Member since:
2008-05-26

"Every kernel update would break it, and eventually they stopped breaking it. THis was even though Canonical sold it in their store!!


Another example of why Linux' total disregard for stable interfaces is bad for users and vendors
"

DKMS. You don't need a stable interface.
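
For anyone who hasn't used it: DKMS just keeps the module's source around and rebuilds it against each new kernel automatically. A vendor only has to ship a small dkms.conf, something like the sketch below (the package and module names here are made up for illustration, not Parallels' actual packaging):

    # /usr/src/example-vm/1.0/dkms.conf -- hypothetical example
    PACKAGE_NAME="example-vm"
    PACKAGE_VERSION="1.0"
    BUILT_MODULE_NAME[0]="example_vm"
    DEST_MODULE_LOCATION[0]="/updates/dkms"
    # Rebuild automatically whenever a new kernel is installed
    AUTOINSTALL="yes"

After a one-time "dkms add" and "dkms install" of that source tree, every kernel update triggers a rebuild without the user having to do anything.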

The article kept going on and on about how VirtualBox is free - but the open-source edition doesn't have all the bells and whistles, and the proprietary edition is NOT free if you're using it in an enterprise. The article was about enterprise use.

Reply Parent Score: 3

boldingd Member since:
2009-02-19

I've never gotten DKMS to actually work -- or to save me any effort over just re-building all the closed-source drivers I use by hand. Actually, looking back at it, it was probably more effort to use DKMS with ATI's binary driver on Debian 4 when I tried that a few years ago than it is to just re-run the installer every time the kernel gets updated on the RHEL4 machine I'm using at work now. Which is sad.

Edit: and, if I recall correctly, the Open Source edition isn't missing anything that you'd care deeply about. The main thing I can think of off the top of my head is that the Open Source edition doesn't let the guest use the host's USB subsystem. That's probably not a big deal in an enterprise environment; I use VirtualBox at work, and I've only ever used that feature once, to try to mount my iPhone on my Windows guest: it didn't work, and I haven't bothered with it since.

Edited 2009-12-17 19:08 UTC

Reply Parent Score: 3

bannor99 Member since:
2005-09-15

I've heard arguments both for and against a stable ABI, and I don't think the current model helps end-users.

I wonder if it would be possible to stabilize it periodically, say twice a year.

Reply Parent Score: 1

boldingd Member since:
2009-02-19

Not that it's relevant to anything, but it's entirely possible. I believe the reasons that the Linux kernel has no stable driver API are entirely political.

Edit: Er, re-reading your post, I guess you already knew that, didn't you?

Edited 2009-12-17 19:02 UTC

Reply Parent Score: 2

SamAskani Member since:
2006-01-03

VMware Player has come a long way in handling kernel changes, and it's now quite pleasant. In earlier versions, you needed to manually "reinstall" the player (basically recompile the virtual device modules and plug them into the running kernel).
Now, in the latest version, when you start the player it detects that the kernel has changed and recompiles/plugs in the modules on the fly. It just takes a few seconds longer than usual, and it only happens when a new kernel is installed.

I think VMware nailed it nicely and made the extra effort to give the end user a consistent and polished solution.

Reply Parent Score: 3

Bending Unit Member since:
2005-07-06

No. Compiling should never be necessary. That is for developers, before the system is released, not for users.

What if Windows users had to compile something every time they upgraded VMware (or some other software)? How many would like that? How popular would that software become?
Why should Linux users deserve any less?

Reply Parent Score: 2

gilboa Member since:
2005-07-06

I could point out that the Linux kernel was never designed to support out-of-tree modules - let alone proprietary modules.
I could also point out that a large number of proprietary kernel driver developers have learned to live with this by-design limitation: by designing their modules with a distinct kernel-interfacing layer (as opposed to calling the kernel API from 10,000 different places), they have managed to reduce the changes required after each new upstream release. *
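
To make the "kernel-interfacing layer" idea concrete, here is a minimal sketch of the pattern (my own illustration, not anyone's actual driver code - the function names and the exact version cutoff are invented for the example). The rest of the driver only ever calls the wrapper, so when an upstream release changes the underlying API, only this one header has to change:

    /* compat.h - hypothetical kernel-interfacing layer for an out-of-tree module */
    #ifndef MYDRV_COMPAT_H
    #define MYDRV_COMPAT_H

    #include <linux/version.h>
    #include <linux/proc_fs.h>

    /* Every call site in the driver uses mydrv_proc_create(); the #if below
     * is the only place that knows which procfs API this kernel provides.
     * The version cutoff is illustrative only. */
    #if LINUX_VERSION_CODE >= KERNEL_VERSION(2, 6, 24)
    /* Newer kernels: proc_create() takes the file_operations directly. */
    static inline struct proc_dir_entry *
    mydrv_proc_create(const char *name, const struct file_operations *fops)
    {
            return proc_create(name, 0444, NULL, fops);
    }
    #else
    /* Older kernels: create the entry first, then attach the file_operations. */
    static inline struct proc_dir_entry *
    mydrv_proc_create(const char *name, const struct file_operations *fops)
    {
            struct proc_dir_entry *entry = create_proc_entry(name, 0444, NULL);
            if (entry)
                    entry->proc_fops = fops;
            return entry;
    }
    #endif

    #endif /* MYDRV_COMPAT_H */

When the next release moves the API again, the diff stays confined to this header instead of being scattered across the whole driver.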

... But given the fact that your short comment had more-or-less nothing to do with the subject at hand (the problem might have had nothing to do with upstream kernel API changes and everything to do with a sloppy package maintainer on the Ubuntu side or a problematic driver-build script on Parallels' side - I have no idea [but neither do you...]), I can only assume that you were simply trolling. Oh well...

- Gilboa
* Personal experience.

Edited 2009-12-18 00:58 UTC

Reply Parent Score: 2

boldingd Member since:
2009-02-19

I could point out that the Linux kernel was never designed to support out-of-tree modules - let alone proprietary modules.


This does not mean that it would be either impossible to add or unreasonable to request.

The lack of an external-driver-loading interface is still one of the greater (completely unnecessary) hassles that face Linux users on a regular basis. Whatever the reasons for the situation are, and there may be good ones, it's still a significant annoyance that doesn't have to be there. And it's also a risk; I've had it happen where re-installing the same driver dozens of times eventually left the system in an unusable state.

I could also point out that a large number of proprietary kernel driver developers have learned to live with this by-design limitation: by designing their modules with a distinct kernel-interfacing layer (as opposed to calling the kernel API from 10,000 different places), they have managed to reduce the changes required after each new upstream release. *


That solution is not perfect, and it still causes a lot of unnecessary hassle for many users. It was something of a revelation to the literal rocket scientists where I work that they were going to have to re-install their graphics drivers every time they let the Red Hat update agent install a new kernel version. After an update followed by a re-install of the ATI binary driver left X unusable on my system, some decided that, as a rule, they should never install updates at all, because it was both too much of a disruption and too risky! Think about that for a minute: there is very obviously a problem there.

... But given the fact that your short comment had more-or-less nothing to do with the subject at hand (the problem might have had nothing to do with upstream kernel API changes and everything to do with a sloppy package maintainer on the Ubuntu side or a problematic driver-build script on Parallels' side - I have no idea [but neither do you...]), I can only assume that you were simply trolling. Oh well...

- Gilboa
* Personal experience.


This much is true: whinging about Linux is becoming a popular pastime around here. I do notice that I'm doing it too.

Reply Parent Score: 2

Laurence Member since:
2007-03-26


Another example of why Linux's total disregard for stable interfaces is bad for users and vendors.


Parallels isn't a Linux product, and you can't blame Linux if a kernel update breaks Parallels when it's supposed to be transparent to the OS (just like every other VM product is).

Reply Parent Score: 2

bannor99 Member since:
2005-09-15

"
Another example of why Linux's total disregard for stable interfaces is bad for users and vendors.


Parallels isn't a Linux product, and you can't blame Linux if a kernel update breaks Parallels when it's supposed to be transparent to the OS (just like every other VM product is).
"

It's been a while since I've used VMware on Linux, but my experience then was hardly what one would call "transparent" - the installer built a kernel module for the running kernel.
If you change kernels or run multiple, you need a kernel module for each. Not all the "user-friendly" distros have the required dev tools installed for you to accomplish this, and in some cases, like recent Ubuntu releases, it's not a straightforward process to obtain them.

Reply Parent Score: 1