“Lines that once seemed clear are being smudged. Perhaps we delude ourselves to think that we once knew the difference between a ‘big’ operating system and a ‘little’ one, but today the biggest operating system ever written runs on desktop personal computers, not mainframes, and desktop operating systems are migrating to telephones and other consumer devices, while there is a trend for the “little” operating systems developed specifically for those devices to take on many of the capabilities of desktop operating systems as those devices themselves become more like computers. And, as further evidence that the apocalypse is upon us, you can, with Apple’s blessing, run Windows Vista natively on your Macintosh. What are operating systems coming to?”
Operating systems are “advancing” to a point where they are more than bootloaders for applications. Microsoft’s problem is that they are trying too hard. Look at Apple and Linux:
1. Apple – two flavors: desktop & server
2. Linux – 1 flavor (pick and choose what you want to install)
Additionally, Microsoft has a ton of legacy baggage, on top of trying to stay compatible with many different manufacturers (not to mention hobbyists). They need to expand their Zune/Xbox strategy of making their own hardware. This will certainly not be the last version of Windows, but it will be problematic if they don’t rein in their support base.
“linux – 1 flavor (pick and choose what you want to install)”
Only 1 flavor? They could save a lot of resources at Distrowatch.com if only they knew this fact.
Please 🙂
Linux is Linux is Linux – I can create an RH-based distro tomorrow and call it Mini-Me Linux – that doesn’t mean it’s a separate thing the way Windows != Mac OS X.
Well, if you think about it, Windows only really comes in one flavor. Take Vista, for example: the underlying code is basically the same. The only difference is Microsoft decides what you get and how much you pay for it.
However, if I were to really guess where the OS is heading, I would say toward big bad bloat land.
You’re so very wrong IMHO. It’s not about hardware/software integration, and operating systems are not bootloaders for applications. It’s about software/software integration, and operating systems are what provide these communication services.
Let’s consider the state of the free software desktop, for example. One of the many remaining problems is that users have three application environments on their desktop: KDE/GNOME, Mozilla, and OpenOffice. Each has its own idea of how to provide interface widgets, fonts, interprocess communication, and more. The situation is considerably worse on Windows in theory, but in practice even Firefox looks more at home on Windows than it does on Linux. Although operating systems should allow applications to reinvent the wheel, they should also provide very high-level abstractions that allow applications to present themselves and communicate with each other in a coherent way. This is why I think KDE will be a very compelling environment for Windows, which has never had an application environment that provides so much integration.
Let’s choose another good example, OLPC. On this system, each application executes in its own protection domain, or sandbox. Since each process effectively runs as its own user with the minimum privileges, this provides excellent security, but it comes with the limitation that glue code expecting another application to be running as the same user will break. This exposes glue as the nasty hack it is. We already have a mechanism that allows user code to execute on behalf of a given process: shared libraries. In a world where no application is trusted, where applications ideally shouldn’t run as a user with login privileges, applications must use shared libraries provided by the operating system to communicate with each other.
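That model can be sketched in miniature. The snippet below is purely illustrative – `os_ipc_channel`, `photo_app`, and `uploader_app` are hypothetical names, and a multiprocessing pipe stands in for a real OS-provided communication library – but it shows two isolated processes talking only through a channel the platform hands them, never through shared files or a shared user account:

```python
# Sketch: two untrusted "applications" that never touch each other's files
# and never assume they run as the same user; all communication goes through
# a channel supplied by the platform. (Illustrative names only.)
import multiprocessing

def os_ipc_channel():
    """Stand-in for an OS-provided IPC primitive."""
    return multiprocessing.Pipe()

def photo_app(conn):
    # The photo manager exports a document without knowing who will read it.
    conn.send({"type": "image/png", "title": "holiday.png"})
    conn.close()

def uploader_app(conn):
    # The uploader receives the document through the shared channel,
    # never by poking around in the photo manager's home directory.
    doc = conn.recv()
    return f"uploading {doc['title']} ({doc['type']})"

if __name__ == "__main__":
    parent, child = os_ipc_channel()
    p = multiprocessing.Process(target=photo_app, args=(child,))
    p.start()
    print(uploader_app(parent))
    p.join()
```

Nothing about this requires the two processes to share a user ID, which is exactly why glue code that assumes a common user breaks under per-application sandboxing while library-mediated communication survives it.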
Given the fact that applications must be integrated from the perspective of functionality and isolated with regard to security, we arrive at the conclusion that the OS is increasingly relevant, rather than the other way around. It also means that applications need to be written to understand their execution environment, which comprises the libraries provided for communication (both with the user and with other applications on the system) and the privileges it is assigned. This is a great, big, earthshaking quagmire for the Windows ecosystem; a formidable challenge for the Mac; and a boon for the free software stack (although it isn’t there yet, it’s at least a surmountable goal, as we’ve seen with OLPC).
EDIT: Yes, this means that it is much more correct to call your system KDE/Linux or GNOME/Linux than GNU/Linux. Just had to throw that in there…
I very much agree with butters’ ideas. The days when operating systems were bootloaders for applications are pretty much long gone, and there are a few main “ways” in which they’re gone.
# The operating system has integrated a lot of other features (some of them *ahem* choosing to integrate a graphics subsystem so closely you just can’t get rid of it)
# The operating system integrates (necessary/convenience) features. A lot of features could probably be provided in userspace (like network connectivity), but right now we pretty much have drivers for them.
# The operating system has ”pushed it”: either talking about hybrid language-OS (the likes of Forth-based operating systems, Java-based OS-es) or various research ideas like exokernels.
If we were to reduce the operating system to a bootloader that just launches a shell, we would get some sort of CP/M. And really, even with a GUI, 3D acceleration and serious multimedia support, even if someone would use such a system, I can hardly believe anyone would be crazy enough to *develop* for it today, in any way other than as a hobby. [No hard feelings. I liked CP/M in its time.]
KDE is a great example of integration, indeed, but I do believe it suffers from a notable lack of applications. Some of them – like Amarok or Kile – are very good, and there are some others worth mentioning, like KDevelop or Kopete. But coming to Windows, it would need the same degree of integration with native Windows applications. I know little about how this is (or is going to be) handled, though.
As for Linux being the same regardless of the distribution: well, yes, in the sense that the kernel stays the same, but this is hardly the point. The software context can be radically different between distributions. There were at least three or four occasions when an application would run perfectly on most distributions but misbehave on another due to some strange decision by the distribution’s developers. If it were indeed all the same thing, I can hardly see the point of having more than 10 distributions.
Thanks. You’d think on OSNews that a story entitled “Where are Operating Systems Headed” would receive a lot of comments, so I tried to really put some thought into that post. I guess there’s just not a lot of interest if there’s nothing superficial to argue about.
As for KDE, and KDE on Windows in particular: KDE has a whole boatload of applications, covering about 99% of standard desktop functionality plus nifty hobby apps like the one for astronomy buffs. No set of software is 100% complete for everybody, but KDE comes pretty close. On Windows, KDE will sit atop the Explorer shell, so it will integrate on a basic level with non-KDE apps. You might not be able to drag and drop rich content like RTF or ODF, but I’m not sure you can do that on Windows anyway. Note that I am a GNOME user at the moment, but that doesn’t mean I can’t be impressed with KDE’s recent progress.
You’re so very wrong IMHO. It’s not about hardware/software integration, and operating systems are not bootloaders for applications. It’s about software/software integration, and operating systems are what provide these communication services.
Sure, for people who do integrated work with their computers, as I mentioned. BUT – most people don’t know how to drag-and-drop info from application to application. Many do only the following: download, edit a single file, print, email, etc. One item, one app. That doesn’t require application-to-application communication. If you do provide it, the unwashed masses will misuse it, and they do – often.
Let’s consider the state of the free software desktop, for example. One of the many remaining problems is that users have three application environments on their desktop: KDE/GNOME, Mozilla, and OpenOffice. Each has their own idea of how to provide interface widgets, fonts, interprocess communication, and more.
Exactly my point. A simpler interface and more rigid methods would work far better for the masses. Again, there will always be a market for GP computing, but the majority of the market just complains about how complicated things are. Even TVs are ridiculous. Useless features are useless features, no matter what the device. To the mass market, 90% of their computer is a wasted mystery.
Let’s choose another good example, OLPC. On this system, each application executes in its own protection domain, or sandbox.
Sandboxing can be completely offloaded to hardware and backend OS support with little overhead.
Given the fact that applications must be integrated from the perspective of functionality and isolated with regard to security, we arrive at the conclusion that the OS is increasingly relevant, rather than the other way around.
Applications do NOT have to be integrated. Again, I stress, I’m talking mass market. Security is easier if they aren’t integrated. Isolation can completely remove security issues when a simple OS works with better hardware. A large OS is absolutely necessary for some types of work, but the mass market can do the job better without direct integration and improve stability and security.
People don’t use inter-application features like drag-and-drop because they don’t work consistently on Windows. If they did, like they do on Mac OS X and KDE/GNOME, people would use them. People do change their usage habits over time, albeit slowly, but the technology has to be there when they’re ready.
I don’t understand your second point. Consistency makes things easier to understand. Learning new paradigms is uncomfortable, and that is where most of the confusion comes from. If applications all followed the same basic HIG (human interface guidelines), then users would intuitively pick up any new application.
Sandboxing can be completely offloaded to hardware and backend OS support with little overhead.
I beg to differ. As far as I know, there is only one hardware-enforced user permissions scheme in existence. It’s very high-end and would not be possible on commodity hardware. I can’t tell you any more than that, because it’s confidential to my employer.
I guess we’re going to have to agree to disagree here. I think that while expert users could possibly live in a world of disparate applications made to communicate through pipes and other mechanisms, the mass market needs integration. They need their workflows to appear seamless even if they involve numerous applications. The mass market wants to import photos from their camera, resize them, and upload them to Flickr as if they were using one application. Novell has implemented this particular workflow by integrating existing applications with their new F-Spot application.
I’m sorry, but the era of file->open, file->save, file->exit is over. People need to do more with their computers, and they need it to be easy.
Terrible article:
1. Massive adverts to content ratio for each page.
2. Nothing really about the “future of operating systems”, just vague statements of what various companies say they’d like to do.
3. The iPhone has been announced as using OS X for about a month now. Why then does the author keep going on about a “rumored consumer electronics version of OS X” ?
Recently I configured VMWare Workstation to use my physical Windows partition on my laptop, so that I could access that partition directly from Linux (and found out the performance was better than running from a virtual install in the process).
With technologies such as KVM (or Parallels on the Mac), this will become even easier and more efficient. If distros such as Ubuntu made this automatic upon a dual-boot install, many arguments against switching to Linux (or the Mac) would simply evaporate – and when you add 3D acceleration into the mix, you get the best of all worlds. This is definitely going to change the way we look at OSes.
“Recently I configured VMWare Workstation to use my physical Windows partition on my laptop, so that I could access that partition directly from Linux (and found out the performance was better than running from a virtual install in the process).”
I am a heavy tester of VMware software. I made myself believe it’s the solution for my needs, but I have finally discovered that their software is not stable enough to depend on.
The products I have tested were the Workstation version, which was of quite acceptable quality for desktop purposes (2 hours of heavy work), and VMware Server 1.0.1, which crashed unacceptably often on both Windows Server 2003 and RHEL 4.4 Linux – it couldn’t get through a 24-hour job without crashing.
And finally there is VMware ESX 3.0.1, with its limited, Linux-like hardware support: it will not work with 99.8% of the hardware on the market, and it is not worth the price you will have to spend on hardware and software, which easily reaches $10,000.
So, virtualization is not always a good solution for platform incompatibility, while hardware redundancy is.
By the way, the Linux versions of VMware tend to crash more frequently for me than their Windows counterparts, and those crashes are nasty because they froze Linux along with them, to the point where I could not recover without a restart (no console access or gdm shutdown was possible).
Well, I probably don’t use it as intensively as you do, and only do so for Desktop uses, but I’ve never had VMWare crashing on me.
I should have qualified my post to specify this was for desktop use, not heavy server use (which seems to be the case for you). I think it’s safe to say that YMMV depending on the type of stress you put it under.
I’m interested in what KVM will bring, but I don’t have the required hardware to try it on.
P.S. For recovering from crashes, did you have the “Magic SysRq” key activated in your kernel? I find that this often works when everything else fails.
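For anyone who hasn’t used it: Magic SysRq has to be compiled into the kernel (CONFIG_MAGIC_SYSRQ) and enabled at runtime. A rough sketch for a typical 2.6-era Linux setup; exact key handling can vary by distribution and keyboard:

```
# Enable all SysRq functions (or persist it with "kernel.sysrq = 1"
# in /etc/sysctl.conf):
echo 1 > /proc/sys/kernel/sysrq

# When the machine is otherwise frozen, hold Alt+SysRq and press, in order:
#   s   sync all mounted filesystems
#   u   remount filesystems read-only
#   b   reboot immediately
```

The s-u-b sequence gets your data to disk and the filesystems into a clean state before the forced reboot, which is exactly the recovery path that helps when VMware takes the whole host down with it.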
I haven’t had a problem running VMware Workstation on Slackware 10.1 and now 11.0 for years. I do most of my compiles in virtual machines and they are completely stable.
And VMware Workstation runs in a VNC server.
This system previously dual-booted Windows XP, which was too unstable, and Redhat 9, which was becoming unsupported.
I think most of your stability problems have to do with the heavy patching that most commercial distributions tend to do with respect to the kernel. I compile my own kernels from kernel.org mirrors.
Neither have I found Red Hat to be all that stable, especially when you patch it with updates. I have experience with Red Hat 8, 9 and Fedora Core 1. I think of Red Hat as something that’s made for (former) Windows administrators and not really for developers or UNIX power users.
The instability of the latter in particular prompted me to move to Slackware, which has been a breath of fresh air. A client of mine is finding it’s even more stable than SUSE in a business setting so he asked me to gradually get rid of SUSE.
I would consider a migration to Solaris if only VMware Workstation or equivalent software were available for it (with equal or better performance).
At the moment I have VMware Server 1.0.1 running on a minimal SUSE 10.1 at the client’s site and it’s been running for 3 months now without any downtime for both the host and the guest (Windows 2003).
However I am afraid to patch it for fear that something might break in kernel or user land. I can only say that not all Linux distributions are equal in stability, reliability and flexibility. So it ranges from the very worst to the very best.
…all of us who read this site will assume that the majority of computer users want or need an elaborate OS to do what they want to do with the computer.
There is little need for the average mass-market computer user for excess features or complicated file management. These are things that complicate users’ lives.
The average user uses maybe ten application suites. If a basic OS and a clear way to access it were set in ROM (on a cartridge or a CD), then the system could be integrated smoothly. Printer manufacturers could all implement PostScript and avoid the driver problems. Other devices could act similarly. There is no compelling need for consumer devices to have a million protocols.
It used to be that a phone was a simple device. Now the poor consumer design practices that plague the computer industry are trickling down to cell phone users. PATHETIC!
I’m not saying general purpose computers are useless. Many of us would still find uses for them, and Linux would still meet our needs. But those needs are not the mass market. Not Linux, not OS X, and not Windows.
“Now the poor consumer design practices that plague the computer industry are trickling down to cell phone users. PATHETIC!”
Cell phones? Hell, they have infested my television (at least in Germany)!
All television remotes have four colored buttons that allow you to access menus and submenus; the problem is that while this looks like there is some sort of system to it, there isn’t. What menus you can access differs for each TV, as does the menu design; sometimes there is one button more or less, and some buttons don’t access menus but do something entirely different, or just don’t function at all. This is just one dumb interface decision among many.
I can still cope with this crap by trial and error and maybe looking at the manual, but there are a ton of people out there who have suddenly become too “stupid” to watch TV. I can just hope those people don’t mess around with my car (any more than they have already).
You’re quite right of course. Living in Germany, you must be aware of cars made by BMW and others that are adding such complexity that even car thieves need a tutorial.
Remotes for TVs and other equipment are ridiculously complicated. Let’s see, for a TV I need:
Power on/off
Mute
Channel Up/Channel Down
Video select
(maybe) a config button (then again, how often do you reconfig the TV?)
Volume Up/Volume Down
Now, if Power were RED and Mute were YELLOW, and the channel and volume buttons were larger – how hard could this be?
A universal remote (another oxymoron) should have no more than 50 buttons and more likely no more than 25.
Thanks for your comments. You’re right about TVs.
“The average user uses maybe ten application suites. If a basic OS and a clear access to were set in ROM (on a cart or a CD), then the system could be integrated smoothly. Printer manufacturers could all implement postscript and avoid the driver problems. Other devices could act similarly. There is no compelling need for consumer devices to have a million protocols.”
That’s right. I’d like to see such a solution, but the growing diversification does not seem to be heading in this direction. The standards exist (and have for years), but why use them? It’s easier to tie users to constantly new protocols, because you can make more money that way…
“It used to be that a phone was a simple device. Now the poor consumer design practices that plague the computer industry are trickling down to cell phone users. PATHETIC!”
Cell phones are mostly designed for children. Games, squeaking sounds (reminds me of 8-bit 11 kHz Sound Blaster 1.0 sound), tiny keys. Watching movies on a cell phone… on a small and blurry display… and sending some “happy slapping” video clips to the gang next door. That’s what cell phones are usually used for (at least here in Germany). Devices that are easy to handle and are designed with phone calls as the main use are not easy to find. The ability to use written or spoken language suffers from this development, because SMS and MMS and UMTS are “cooler” than the mastery of one’s own native language.
Kiddies who need a new phone twice a year are a better market than adults who are interested in using a phone for some years.
“There is no compelling need for consumer devices to have a million protocols.”
That’s right. I’d like to see such a solution, but the growing diversification does not seem to be heading in this direction. The standards exist (and have for years), but why use them? It’s easier to tie users to constantly new protocols, because you can make more money that way…
Oh that’s what they’d like people to believe. What they really mean is that they’d have to innovate or learn to make money through volume – one or the other. If there’s a standard, then the more people that use it, the cheaper it gets. But there is a point where you can make your product better than others.
Let’s take printers, for example. We are currently often forced to install a specific printer driver, right? But does this keep us from installing a printer of another brand? No. So if they all used PostScript and there was one driver, not only would it be simpler for competitors, but I could hook up three printers from the same manufacturer without three separate drivers. Think about it – what’s going on is that making a printer PS-compliant makes the printer cost more. So there’d be no $50 printer that’s cheaper to replace than to buy new ink for. Printer manufacturers would have to build good printers.
I’ve worked in industry for 20 years (as many of you have) and I’ve seen one good instance of a standard (of sorts) – IDE. Sure it’s been expanded, but it has come a long way and has simplified installation of many mass storage devices.
Many cameras, mp3 players, etc. are MSC compliant. Wow, what a concept. Thumb drives are too (over the USB bus, just as USB HDDs and CD/DVD recorders are).
Standards work for everybody. The money argument is a manufacturer’s cop-out.
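To make the point concrete: PostScript is just a plain-text page description that any program can generate, so a sketch like this one (illustrative only, not production code) would print identically on any PS-capable device, or through a software interpreter like Ghostscript, with no per-model driver:

```python
# Build a minimal single-page PostScript document as a string.
# Any PostScript-capable printer or interpreter can render the result;
# the coordinates are in points (72 per inch, origin at bottom-left).
def make_postscript(text, x=72, y=720, font="Helvetica", size=12):
    return "\n".join([
        "%!PS-Adobe-3.0",                              # DSC header
        "%%Pages: 1",
        "%%Page: 1 1",
        f"/{font} findfont {size} scalefont setfont",  # select a font
        f"{x} {y} moveto",                             # position the cursor
        f"({text}) show",                              # draw the text
        "showpage",                                    # emit the page
        "%%EOF",
    ])

if __name__ == "__main__":
    doc = make_postscript("Hello from any vendor's printer")
    print(doc.splitlines()[0])
```

That device-independence is the whole argument: the document stays the same and the printer does the interpretation, instead of every host needing a driver that speaks each printer’s private raster protocol.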
Cell phones are mostly designed for children. *snip* The ability to use written or spoken language suffers from this development, because SMS and MMS and UMTS are “cooler” than the mastery of one’s own native language.
Can’t argue that.
“Let’s take printers for example. We currently are often forced to install a specific printer driver, right? But does this keep us from installing a printer of another brand? No. So if they all used postscript and there was one driver, not only would it help make it simpler for competitors, but I could hook three of the same manufacturer up without three separate drivers. Think about it – what’s going on is that making a printer PS compliant makes the printer cost more. “
At work, we actually have an “all in one” printer that does not comply with any standard. It’s the only reason why our migration to UNIX isn’t complete. The printer is living its last days on earth at the moment; it will stop working soon. If it had PS or PCL support, I would get a new one similar to it, but because it hasn’t, I’ll switch to another system, maybe HP LaserJet or Lexmark Optra (puke).
At home I use an HP LaserJet 4 with PCL. The printer has been in use since 1994 (more than 12 years), and I got it used, so I don’t know how long it has really been in service. Still works fine. I would be glad to have such a reliable printer at work, but no, “the big chief” knows better and buys cheap crap for expensive money. 🙂
“So there’d be no $50 printer that is cheaper to replace the printer rather than to buy new ink for. Printer manufacturers would have to build good printers.”
Market: The manufacturers produce the kind of hardware the users are interested in. Users are not interested in intersystem-portable, standard-compliant or even long-lived hardware. A printer lives one year, then it’s broken. That’s completely normal; people want it that way. (Laser printers live longer and are cheaper than inkjet printers, but users prefer inkjet printers.) They don’t care about standards compatibility because they rely on the manufacturer to provide the proper drivers for their OS. If the OS is outdated, there are no drivers available. And if the manufacturer has another great product, support for the older product on current OSes is not available either.
Consume now. Consume more. Consume… and… be happy. 🙂
“I’ve worked in industry for 20 years (as many of you have) and I’ve seen one good instance of a standard (of sorts) – IDE. Sure it’s been expanded, but it has come a long way and has simplified installation of many mass storage devices.”
Do you mean ATA when you talk about IDE in concern of storage devices?
Personally, I like the concept of attached hardware. The manufacturer of the base device publishes the specifications for how to attach a device (e. g. a CD recorder, a scanner, a camera or a printer) to the specific ports, and the accessory manufacturers have to follow these specifications in order to get the hardware attached and working. All devices use a generic driver which supports all functions. With SCSI, you have something similar.
Example: I got a SCSI scanner as a present. “It does not work, here, you can have it.” Plugged it in, scanner0 was recognized by the OS, started xscanimage to test… voila! Working. “Oh, it works! Can I have it back?” – “No, it doesn’t work with your PC because you don’t have a SCSI controller.” 🙂
Same with SCSI driven harddisks, CD recorders and tape drives.
“Many cameras, mp3 players, etc. are MSC compliant. Wow, what a concept. Thumb drives are too (over the USB bus, just as USB HDDs and CD/DVD recorders are).”
My neighbor even owns a mobile phone which I can connect via USB and have a DASD to use – without any additional driver.
“Standards work for everybody. The money argument is a manufacturer’s cop-out.”
I won’t say anything else. It’s used to cheat consumers. And finally, they don’t know any better.
The manufacturers produce the kind of hardware the users are interested in. Users are not interested in intersystem portable, standard compliant or even long live hardware. A printer lives one year, then it’s broken. That’s completely normal, people want it that way.
If that’s the way you feel, I feel really sorry for you. I can guarantee you that I get “why don’t printers last?” or “why is this so hard?” about things all the time. Consumers DO NOT WANT THIS! Manufacturers want this. As they say in sales, the consumer wants what we tell them to want.
Do you mean ATA when you talk about IDE in concern of storage devices?
Yes, I goofed there. Thanks for correcting that…
Personally, I like the concept of attached hardware.
I agree, but I hate all the cables. Bluetooth was supposed to help that. Wireless USB is chomping at the bit. I like the idea from a mass market point of view. If the CD/DVD fails, unplug it and plug in a new one. That’s easy, the way it should be.
Example: I got a SCSI scanner as a present. “It does not work, here, you can have it.” Plugged it in, scanner0 was recognized by the OS, started xscanimage to test… voila! Working. “Oh, it works! Can I have it back?” – “No, it doesn’t work with your PC because you don’t have a SCSI controller.” 🙂
LOL. I had a SCSI scanner once. Awesome machine. It never did die, but I did have trouble getting decent SCSI cards for computers. That sucked. I also hated the cable. I like firewire, and tolerate USB (which has improved a bit). But yeah, you have the same idea I have.
“If that’s the way you feel, I feel really sorry for you. I can guarantee you that I get “why don’t printers last?” or “why is this so hard?” about things all the time. Consumers DO NOT WANT THIS! Manufacturers want this. As they say in sales, the consumer wants what we tell them to want.”
Sadly, you’re right. But on the other hand, the usual average user is not willing to think first and then buy. “Will my computer support this printer?” or “Can I use my iPod with this PC?” are not questions – they are assumptions of the user. If you want to “be compatible”, you don’t necessarily have to pay much money; e. g. there are cheap sound cards with fantastic support on all major OSes.
And I’ve heard it. “Why can’t I use my printer with ‘Windows XP’? It worked under ‘Windows ME’ perfectly.” or “Do I need to replace half-empty ink cartridges every two months?” or “The paper jams, but the printer is new!” Go buy a color laser printer for EUR 250. “But it’s so expensive! I can buy an inkjet printer for EUR 20.” Well, the cartridges cost EUR 15 and you need four of them… 🙂
I’d like to see customers boycotting nonstandard crapware, but they won’t, because the manufacturers make it cheap. Cheaper than standardized components.
“I agree, but I hate all the cables.”
In some cases, cables mean security. It’s hard to capture keystrokes from a cable-attached keyboard, but it’s easy – e. g. for me as a radio amateur 🙂 – to get them from a wireless keyboard working in the 27 MHz or 430 MHz band.
“I like the idea from a mass market point of view. If the CD/DVD fails, unplug it and plug in a new one. That’s easy, the way it should be.”
I’ve done this on my SGI system yesterday, the CD recorder died. 🙂
“I had a SCSI scanner once. Awesome machine. It never did die, but I did have trouble getting decent SCSI cards for computers. That sucked.”
At the moment I use an old Adaptec AH2940 which has enough power for the PD drive, the JAZ drive, the backup unit (external HDs) and the old scanner.
“I also hated the cable.”
SCSI has several different cable types. That’s what I hated.
“I like firewire, and tolerate USB (which has improved a bit). But yeah, you have the same idea I have.”
Mostly, I consider USB a port for input devices (keyboard, mouse) and slow-transfer-rate stuff (cameras, external HDs), where speed does not matter. For bigger amounts of data I prefer FW, but it’s not built into many cameras and DV cams. (That may be the reason I still stick with the Canon EOS-50 without the D.) Maybe USB will get really nice and stop having this “toy character”, but external SATA seems to be a good choice, too. But finally, it’s up to the manufacturers to make their USB devices (especially digital cameras) compatible with the standards used by clients like digikam or gphoto2 – or simply to allow direct access (DASD).
Let’s take printers for example. We currently are often forced to install a specific printer driver, right? But does this keep us from installing a printer of another brand? No. So if they all used postscript and there was one driver, not only would it help make it simpler for competitors, but I could hook three of the same manufacturer up without three separate drivers. Think about it – what’s going on is that making a printer PS compliant makes the printer cost more. So there’d be no $50 printer that is cheaper to replace the printer rather than to buy new ink for. Printer manufacturers would have to build good printers.
Adding postscript support to a printer adds non-trivial cost to the printer. Not only do you need to pay for the Postscript license from Adobe, you need to add a beefier processor and more RAM so that they can RIP the Postscript file that gets sent to them. Furthermore, Postscript isn’t a very efficient format for documents that are purely raster data and photo printing has become a popular use of inkjets.
I’m sure if there was a decent raster format standard for printers that could do a reasonable job of handling all the output options inkjets typically have and it offered inexpensive licensing, printer manufacturers would be all over it.
Adding postscript support to a printer adds non-trivial cost to the printer. Not only do you need to pay for the Postscript license from Adobe, you need to add a beefier processor and more RAM so that they can RIP the Postscript file that gets sent to them. Furthermore, Postscript isn’t a very efficient format for documents that are purely raster data and photo printing has become a popular use of inkjets.
You’re right on both counts, but I think you see where I was heading with this. A single, open standard for printing could be devised that adds better support of raster printing while supporting vector printing. Yes, the cost of building such a printer would still be non-trivial. But they’d all work when you plugged them in. How can we be so stupid as consumers as to not demand that this driver idiocy end?
None of the ideas in this article is news, or has been for decades.
Little OSes start to resemble big OSes when they grow? If you pick up an introductory OS book, you will see that the features developed for mainframe OSes crept into minicomputer OSes as the hardware advanced; the same happened with micro/home computers, and the trend has long been expected to continue with embedded and cell phone OSes.
Hypervisors have been around for decades; in fact, IBM’s z/VM is a direct descendant of IBM’s work in that field dating back to the ’60s, and their similarity to OSes has also long been known.
What about VMs converging with OSes and running directly on bare hardware? As far as I know, that’s what Xerox PARC did with Smalltalk.
The only OS identity crisis is going on at Microsoft, and that’s because they didn’t consolidate their bases in a sane manner. They made DOS (a CP/M clone), Xenix (a Unix), worked on OS/2 with IBM, created Windows (16-bit), and then moved on to Windows 9x (basically 32-bit Windows on DOS) and NT (which was supposed to support all of that in one OS).
For some reason, knowing how OSes evolve, they still decided to base CE on a separate kernel from NT. After finally getting rid of DOS-based kernels, you would have thought they would try for a unified kernel as soon as CE’s kernel needed to provide a significant proportion of the features that the NT kernel provides.
Top that mess off with .NET and you can see why they might have problems right now.
“you can see why they might have problems right now”
95% market share is a good problem to have. Money and hardware advances can get them out of any future technical problems.
MS is one of the richest companies in the world, and can afford top developers.
Faster hardware makes virtualization and isolation feasible for backward compatibility on a new base.
Off-topic, but it needs to be said: don’t you think it’s a little pathetic to imitate another user’s nickname? It almost feels as if you want to tarnish his reputation by posting inanities in a deliberate attempt to get modded down…
lighten up
I believe the OS that is able to provide end users with a “remote desktop” capability to any other OS will win the future OS market.
virtualization = 1 user able to use/run more than 1 OS.
remote desktop = 1 user able to use/run more than 1 OS too… just about the same goal.
An article which summarizes the OS world as Windows, OS X & Linux most likely can’t be very worthwhile reading, as none of the three are traditional mainframe or embedded <insert category> operating systems – they are desktop & server OSes – and there is a lot more out there.
There is a nice long list of OSes on Wikipedia – which won’t be complete, as new OS projects are started around the world at research institutions and at home “all” the time.
I expected a summary of OS categories at the start – not reducing the whole OS world down to 3 desktop/server OSes (yes, the three can also live in embedded and other categories, but that is not where they originated). There have been, and are, OSes more specialized to these exact needs, some a lot older than the three (e.g. the above-mentioned IBM Z mainframe) – they just don’t have the OSS nature, the developer resources, or the media attention, I guess.
Just because Mac OS X, Linux & Windows are the best known to the desktop/server user doesn’t mean they are the best or most advanced in the OS world.
<Mini-rant annoyance over>