When I joined OSNews in 2001, I did it with great excitement because of my love for… messing around with many operating systems in order to explore new ways of doing things. Back in the ’80s and the ’90s there were a lot of OS projects that would draw the attention of the computer users of the time. But in this decade, it seems that other than Windows, OSX, Linux and a very few other much smaller OSes, the scene is sterile. And it’s only getting worse. Personally, I just can’t stand the endless debates between the Linux, Windows and Mac users anymore. It’s getting old, it’s getting boring. These OSes are already mature, and they follow an evolutionary path with only a few revolutionary steps every now and then (mostly by OSX, and lately with Longhorn — Linux technologies seem to be more conservative in their nature). But the thing is, these are just 3 operating systems and there is nothing exciting about them anymore (except the occasional “wow” factor Steve Jobs might bring with his keynote shows).
It’s a lot like liking Judas Priest’s music in the ’70s, and then they suddenly became so successful and commercial in the ’80s that each album seemed as commercial, boring and identical as the previous one. It’s the same with these OSes: they are based on old technologies and they are afraid of making big steps. In fact, they are more concerned with meeting deadlines (e.g. the cut of WinFS from Longhorn).
If one needs to find some fresh ideas, it’s the small guys he needs to look at. Not because the small guys are “more intelligent” than the big guys, but because the small guys don’t concern themselves with legacy support or deadlines. They can break anything they want on their OS and only 10-20 people will notice. The big guys can’t afford to do that.
In the ’80s we had at least 6 operating systems that each had a good hold on market share (e.g. AmigaOS, Mac, DOS, GEM, GeOS, Unix flavors). In the ’90s we had Windows and Windows NT, Mac, DOS, OS/2, Linux, AmigaOS, BSD, other Unices and even BeOS, all with some considerable usage share (before Windows 9x got to its 94% of market share and got declared monopolistic). Along with those, you had a gazillion other small, embedded, academic or hobby OSes. We are talking about a few hundred of them.
Today, it’s the game of the three, plus about 10 more OSes that draw only minor attention from the media: BSDs, QNX, Symbian, SkyOS, Zeta/BeOS, Solaris, Windows Mobile, PalmOS and some even smaller ones, like VxWorks, Syllable, MenuetOS etc. Overall, I wouldn’t say that there are more than 40-50 active or noteworthy OS projects/products out there today. That’s a far cry from the hundreds that existed in the ’80s and ’90s.
Let’s look at the reasons why this shrinking of OS projects happened:
1. Windows, Windows, Windows… Microsoft even developed an embedded version of its OS and now it’s preparing an HPC one.
2. Linux & BSD are Free, and so it’s easier/faster/cheaper to modify them to do a very specific job rather than to write something from scratch.
3. Hardware complexity. Back in the ’90s, having a “network stack” was a big deal and not all OSes needed to have one. Today, you can’t even consider an OS without one. Moreover, today, everyone wants their USB stack or their 3D acceleration. Writing an OS has become a FAR more complex procedure than it used to be.
4. Embedded OSes have managed to get good features over time, and so it makes more sense to license them rather than write your own.
To me, as an editor of the OSNews.com site, it’s getting boring. It’s the same old, same old, every darn day. We have Unix with Linux/BSD/Solaris/*nix on one side backstabbing each other for years, a bastardized Unix with OSX in the middle, and the Windows dysfunctional family on the other side. All the interesting (non-Unix) projects like BeOS or OS/2 or QNX are pretty much dead, or simply much smaller than they used to be a few years ago. The big three have destroyed their smaller non-Unix competitors commercially. What’s sadder is that no big, new, really usable OSes have been created since their demise to try to fill the void. After the death of AtheOS (which was moving faster than Syllable is today), only SkyOS seems to be doing some interesting things, but it’s still very small and exceptionally buggy (lacking the proper stress-testing procedures that a company or bigger project would put the OS through). Zeta is nowhere near as big either (it feels like a big patch over the BeOS 6 beta rather than the evolution it should have had since 2000 – the last BeOS 5 release). AmigaOS & MorphOS require special hardware, and that’s prohibitive for most people; plus their companies are in questionable financial shape, with their user communities killing each other any way they can, making things even worse.
Blah. From where I am standing, it’s all sad and boring.
Wishful thinking: hopefully we will see a new, big, well-done OS soon, that’s not yet another Unix or Linux (although with some POSIX compliance in place for easy app porting, like BeOS & QNX have). We need something fresh. Heck, something new and fresh indeed. Something INTRIGUING. I wanna feel again that same feeling I had when I tried BeOS 4.0 for the first time in 1999 (excitement to the max) or the Mac OS X Jaguar update in 2002 or NeXTSTEP in 1996 (better late than sorry). Boy, didn’t that feel good?
Don’t get me wrong, Windows 2003 Server has been the most stable operating system I have ever run, and it’s blazingly fast too. But it’s not as exciting as the above OSes, because while it’s a good evolutionary step for Microsoft, it’s far from being revolutionary and fresh. It doesn’t come with “Feel Good”(TM) drivers.
Wishful thinking: hopefully we will see a new, big, well-done car soon, that’s not yet another fossil fuel engine or Jaguar (although with some real low end torque in place for easy cargo porting, like McLaren & F1 cars). We need something fresh. Heck, something new and fresh indeed. Something INTRIGUING. I wanna feel again that same feeling I had when I tried a Pontiac with the top down for the first time in 1999 (excitement to the max) or the Jaguar X-Type in 2002 or Dodge Viper in 1996 (better late than sorry). Boy, didn’t that feel good?
All the interesting (non-fossil fuel engine) projects like Toyota Prius or Honda Insight or GM’s hybrid technology are pretty much dead, or simply, much smaller than they used to be a few years ago
I’m afraid it will get even worse than this. The world oil reserve isn’t infinite, although throughout the ’70s, ’80s and ’90s they predicted the big dry-out. China is lobbying in Canada to get some extra oil from the US pipeline; they have already beaten Japan as the world’s 2nd largest oil consumer, and they are the largest steel consumer in the world.
It’s wise to do even more research into alternative fuels like hydrogen, very soon.
I don’t agree with you that there isn’t enough thrilling stuff going on in the “car world”. There’s still plenty of technology being developed. However, there’s less room for expression today, with all those environmental, ethical and budget restrictions. Also important is the increasing lack of good designers, like Porsche, Bertone, Ferrari, Pininfarina, etc.
Today, in a world where the Mini, Beetle, DS21 and 2CV have been diminished to your average DAEWOO (oops, pardon, CHEVROLET) 12-in-a-box, there’s very little room left for individual expression without deep pockets. A car has become more or less the equivalent of a coffee machine; they all practically look alike. Unless you belong to the happy few who can afford a Porsche 911 or Aston Martin.
But I would rather see more talented people go to school than me driving a tin can on wheels.
Chris continues: And what is a “new advanced object based system”? Sounds like a bunch of buzz words thrown together to me.
Nice attempt to bait.
There are many “new advanced object based systems” that would fit the bill. For example:
http://www.opencroquet.org
http://www.zoku.com
http://www.smalltalk.org/versions
http://www.tunes.org
http://slate.tunes.org
Nope, no attempt to bait. And it looks like I was right. Smalltalk? It seems like we’re hardly talking about “new” or “advanced”; more like plain old OOP, which has been available for what, 20 years now? Entire research OSes have been written in Smalltalk already.
Yes, millions of threads. Most of the threads would be tiny, occupying 12k to a few hundred kilobytes or so, with a minimum of three 4k pages per thread. Many would be larger, and a few the size of most threads today. Obviously there are limits to the number of threads simply due to the available virtual memory in a system. That’s another reason for core support in the operating system and application systems for cluster and distributed purposes. Local or distributed clusters of PCs or CPUs (at any rate) are the future. This alone is a powerfully innovative idea when implemented.
In many ways, advanced object based systems running tiny threads are following Unix’s notion of tiny programs cooperating. In this regard, Object Pipes are an important evolutionary and innovative development that needs to be supported. An object pipe enables independent object processes to be connected and have a flow of objects, not simply primitive bytes, transmitted between them.
First, “advanced object based systems” have nothing to do with threads. Really, they don’t. “Object processes”? Come on, this is seriously buzzword bingo now. First, all of this is available today. Second, “object pipes”? Last time I looked, just about every OO language has some method of sending serialized objects around. How long has CORBA been around? Java RMI? Or are they not “advanced” enough? Besides that, you haven’t at all addressed how all of this is going to fit into RAM. Threads take up a certain amount of RAM simply to exist. Unless each thread is only running a single block of code, you’re going to need stack space. 512k of stack space per thread (which is very conservative) still blows most of your RAM before you even touch the text segment of the program. And why should the OS know or care about whether it’s a primitive stream of bytes flowing between programs or “advanced objects”? Besides distributed computing support (which you haven’t even begun to sell me on being anything more than a niche), nothing you’ve mentioned requires any modifications to a standard Unix style kernel. Considering that even distributed computing is well developed and available today, I wouldn’t even call that an “innovation”.
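The stack-space arithmetic is easy to check. Using the numbers from the posts above (512 KiB per thread as the conservative figure, or the claimed 12 KiB minimum) for a million threads:

```python
# Back-of-envelope: memory consumed by thread stacks alone, for the
# "millions of threads" scenario discussed above.
KIB = 1024
threads = 1_000_000

conservative = threads * 512 * KIB   # 512 KiB of stack each
tiny         = threads * 12 * KIB    # the claimed 12 KiB minimum

print(conservative // (1024 ** 3), "GiB")   # hundreds of GiB
print(tiny // (1024 ** 3), "GiB")           # still double-digit GiB
```

Either way the total dwarfs the RAM of a 2005-era desktop, which is the point being made.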
Yes, applications need to be built with application development systems that enable automatic parallelization whenever possible by fracturing the application components into smaller process spaces. This lays the groundwork for automatic process migration when it makes sense to balance the workload. Obviously it will take innovation to build such systems.
The reason that we need applications and OS software to adapt to massive numbers of processes is that the increase in uni-processor performance has slowed down. We are now seeing processors with multiple cores becoming commonplace. Soon it might be hard to buy a PC without multiple CPUs.
Last time I checked, Linux, Solaris, AIX, etc. already support all of that just fine. Process load balancing across CPUs … done. Automatic parallelizing compilers … done. But without explicit hardware support, massive numbers of small “object processes” are going to kill your performance in context switching overhead. Large amounts of parallelization work great for supercomputer/distributed systems, but it’s ridiculous overkill for anything that Dell is likely to sell in the foreseeable future. I strongly urge you to go read some of the docs that John Carmack wrote showing how adding threads to Quake 3 slowed everything down.
I would hardly call multi-core CPUs commonplace. Can you even point to a single shipping multi-core system available from a general OEM? Dual CPU systems are a long stretch from the hundreds and thousands of CPUs you’d need to make anything you’re talking about practical.
I think if you re-read the earlier posts and take some time to consider and research what they are referring to you’ll find many excellent ideas that are innovative.
Point one out to me. Most of the people here are either talking about technologies which have been around forever or are applications that will happily work on top of any kernel. I would say that your posts are a nice mix of the two.
What you are talking about has been researched for quite a long time. Look at Plan 9. It’s not an issue of innovation, it’s a matter of practicality. And I’m really not seeing how any of what you’re saying is really “innovation” and not a rebranding of a bunch of stuff which is available today, works today, and runs on any common Unix kernel, but doesn’t have “advanced” and “new” in front of it.
The barrier to entry for a new OS is very high, even an old-new OS. Some of these are expectations on programming APIs: take your desire for a POSIX interface. That assumes 1) C, and 2) a UNIX-like model for the OS. POSIX, after all, was just a way to get some common ground between SYSV and BSD. Exactly how innovative can you get when you are so tied to the 1970’s?
But there are many other barriers before most people find an OS usable. Everyone expects to have a 32-bit color GUI on whatever graphics card they own and TrueType font rendering. They want a standards compliant web browser and will bitch if their bank’s web site doesn’t work. SSL. IMAP. SAMBA. XML. Printing. Postscript, and on and on. Things you “can’t live without” when really they have nothing to do with the OS proper, and some of which are seriously hard to do at all, much less do well and without security problems.
Take Oberon and its latest incarnation: Bluebottle. There is still a lot to learn from Oberon (why, exactly, do we need header files? Oberon doesn’t). Bluebottle is an SMP version of Oberon and there is an update to the Oberon language to go along with it. They even released an “XMAS” version and have environments for it to run on Windows. Regular Oberon can run as a program on a number of different UNIXes as well.
So entertain yourself for a while. Learn a new language and programming style, a new UI (I like the text version, not the GUI added on later). Take a look at a 15 year old OS that is still extremely intriguing, and whose technology has been the basis for a number of things on your computer today (e.g. the JIT compiler in your JVM).
<http://www.oberon.ethz.ch/>
<http://bluebottle.ethz.ch/dlxmas04.html>
If you miss the ‘wow BeOS’ effect you can (mostly) blame Microsoft..
Lots of comments talking about operating systems and kernels, and obviously most of the commenters don’t get the difference between the two terms, do they?
Look (me tries to be patient): the kernel is what takes care of hardware interrupts, the scheduling and management of processes, threads and task control blocks, scheduling of IO, managing virtual memory and the virtual address space, taking care of IPC (piping, semaphores, messaging) and so forth. Well – in a microkernel, the majority of the mentioned parts is put into services which run as user space processes, while IPC, scheduling, TCB management and memory management (physical and virtual) remain with the kernel.
The operating system is what uses the features of the kernel and offers an interface to the user and to the ‘not-so-godlike’ user applications, like your average text processor – unlike a tool like ps or kill, which talk to kernel services more intensively than just by open, close, read, write, yield, fork …
Please be so kind and don’t underestimate the importance of well-engineered virtual memory management (this includes a decent swapping algorithm) for an exciting user experience. How else could you open a dozen hefty applications – without having the OS do so much as shrug a shoulder? COW pages is the buzzword here: share read-only code pages and only order stack and heap memory from the memory manager. That’s nifty stuff.
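The copy-on-write effect is easy to observe from user space on any Unix-like system: after a fork(), parent and child share pages until one of them writes, at which point the writer gets a private copy. A minimal sketch (POSIX-only; Python is used here just for brevity):

```python
# Demonstrate copy-on-write semantics after fork(): the child's write to
# `data` lands on its own private copy of the page; the parent's copy is
# untouched.
import os

data = [1, 2, 3]

pid = os.fork()
if pid == 0:          # child process
    data[0] = 99      # write triggers a private copy of the shared page
    os._exit(0)       # exit without running any further parent-side code
else:                 # parent process
    os.waitpid(pid, 0)
    print(data[0])    # parent's copy is still 1
```

Until that write happens, both processes are reading the very same physical pages, which is why forking a dozen hefty applications costs so little.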
Hm. I could rant on and on. I’m a hobby OS developer myself, and I am developing an OS on top of a microkernel developed from scratch (by me). It is fun to do stuff like this. I just don’t have a hankering to beat the crap out of Windows or Linux. Both of them do their job pretty well. The Windows folks have people sitting all day long thinking about how to make something nifty in the kernel or so. A hobby developer can only dream of that much time. Work, wife – and life in general – are demanding attention too. *shrugs* That’s why it seems to stagnate. We do much work behind the curtain ere something worth showing off visually turns up.
Just my two euro-cent
I’ve forgotten to mention the art of sharing libraries.
Share them in memory and fill in call pointers in a call table inside the process’ address space at runtime (either early or late binding).
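That call-table-at-runtime mechanism is what dlopen()/dlsym() expose on Unix systems. A quick late-binding sketch via Python’s ctypes, assuming a findable C library (e.g. glibc on Linux); the specific library path is an assumption of the example:

```python
# Late-binding sketch: resolve a symbol in an already-shared library at
# runtime, getting back a callable pointer, instead of linking at build time.
import ctypes
import ctypes.util

libc_path = ctypes.util.find_library("c")    # e.g. "libc.so.6" on Linux
libc = ctypes.CDLL(libc_path or "libc.so.6") # dlopen(): map the shared object
labs = libc.labs                             # dlsym(): fetch a call pointer
labs.restype = ctypes.c_long
labs.argtypes = [ctypes.c_long]

print(labs(-42))                             # call through the pointer: 42
```

Every process using libc shares the same read-only code pages; only the per-process call table (the GOT/PLT in ELF terms) is private, which is the whole point of the exercise.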
Currently I’m drafting and developing shared library support for my hobby OS – and refactoring and reworking the GUI service. *gg*
I would say that the main reason for this is OS size. Writing a new OS is just not as practical anymore, at least a general purpose one. Think about how many drivers you would have to support, how many different stacks, how much UI code and all that. The Windows source base is 40 million lines, Solaris is 10+ million lines… it takes time to develop these things, and the way OSes are moving, I don’t think it’s easy for a new OS to catch up.
Virtualization brings some hope: a new OS can write drivers only for virtualized devices and get all the hardware support it needs.
I think the next innovation is in the field of virtualization, like Xen and VMWare – things like live migration of an OS onto different hardware, and so on.
AtheOS looked the most promising until a bunch of idiots nagged at poor Kurt enough for him to crack and pack his bags.
Kurt stopped work on AtheOS long before I forked Syllable. Still, please don’t let small things like the constant forward movement of time or facts get in the way of your little rant there.
And the bickering started long before Kurt stopped working on AtheOS.
“For those who haven’t been following the EROS project, it has now migrated to the Coyotos project. EROS, the Extremely Reliable Operating System, was a project to create an operating system whose security relied on capabilities rather than the traditional Unix model of root or non-root.”
” Coyotos is a secure, microkernel-based operating system that builds on the ideas and experiences of the EROS project. Much of the code developed for EROS will migrate directly to Coyotos.”
http://coyotos.org/
http://www.osnews.com/story.php?news_id=9513
http://slashdot.org/article.pl?sid=05/01/25/1738206&tid=190&tid=1
So you’re bored? You weren’t around in the late ’70s.
I remember dozens of OSes. Nothing was compatible with anything. If you started with a system, you stayed with it.
Until the company went under. A pretty common occurrence those days.
sure, it was exciting. Frustrating too.
I remember sitting in one office watching a download come in. Someone marveled at the new machine. “Wow, it’s coming down almost too fast to read the words as they come in!” Wow indeed.
While I’m sorry that the Ataris and Amigas didn’t make it, most people are not.
It’s tough enough with the “big three” these days.
This stuff does get old.
I don’t need an ingeniously innovative OS. I need an OS that works, that respects my privacy and that I can put on every computer I have without selling my soul to the devil. I can fully use my computer now using only free software, in a modern desktop that puts the Windows desktop to shame. I couldn’t do that 10 years ago. That’s the true innovation: the ability to use my computer the way _I_ want to, no product activation required.
Rob Pike already realized this 4 years ago.
Consider reading the talk at
http://www.cs.bell-labs.com/who/rob/utah2000.pdf
You are free to poke around with Plan 9 as well. It might give you an eye-opener, once you get it, on how much brighter the future could have been.
I got the wow factor on seeing distributed Amoeba in my engineering computing laboratory; reading the technical documentation really made me proud to be an engineer: a fantastically highly distributed operating system. 10 years later, I’ve been practicing in the real world, and nothing like the innovation of a fully distributed OS has come to fruition. Don’t talk to me about ‘clusters’ like Beowulf either; they’re not the same thing.
There’s a lot of _potential_ for innovation, but yes, I agree, that the actual playing field is devoid of it.
It feels to me that we live in interesting times regarding operating systems. Just thinking back to my first 386 with plain DOS, and how it compares with my dual-boot laptop and all the extra stuff it can do. All the open source projects out there that didn’t exist, say, 15 years ago, all the new devices that didn’t exist even 5 years ago.
The lively debates (flamefests) between the different camps make it all the more interesting as well.
I agree with you.
I came to OSNews, I think, about three years ago. Even back then, it already was mostly Linux, Windows and OSX. BeOS hit the news every now and then; but those three dominated the news and they still do now.
I can still remember the excitement I felt when first trying out Linux (MDK), also about three years (or was it four?) ago. The whole experience was just, indeed, “wow”. KDE was just amazing – it blew the crap out of Explorer (at least, from my perspective back then).
Other than the BeOS “wow” I had (which lasts up until this day, really) and the Mac OS X wow last summer (also still lasts), that’s about it. No other exciting projects are there to provide me with more “wows”. No, not even SkyOS.
This leaves me in an awkward situation when maintaining http://www.expert-zone.com . It’s supposed to be an operating systems news site – but due to the lack of news other than Windows/Linux/OSX, I have to fill the void with hardware news and security news. Simply less interesting and exciting to me.
I have to add a note about SkyOS and Syllable though, Eugenia. You said it yourself: “(…) only SkyOS seems to be the one that does some interesting things, but it’s still very small and exceptionally buggy (lacking proper stress testing procedures that a company or bigger project would put the OS through)”. Let me remind you that no one is stopping you from joining the SkyOS beta team and/or the Syllable team.
The OS world left me with four computers all running different operating systems: one x86 running Novell Linux Desktop & Ubuntu Hoary, one iBook running OSX, one UltraSPARC running Solaris 9, and another x86 dedicated to the wonderful BeOS. There should be more things to choose from, definitely.
Maybe I’m too conservative, but to me computers are just tools (the most fun tools to play with, to be true). So, what I expect from them:
1 – To be rock solid (I don’t want to start my work again because it broke in the middle);
2 – Be a bit intuitive, as simple to operate as possible and definitely logical (in the sense that if you know what you are doing, you can guess the results);
3 – That the upgrades and extra options don’t change the first 2 rules (i.e. don’t get in the way) and don’t send me to a training course once again if not REALLY needed.
All of this points to an evolutionary path. Yep, sometimes new things come into play, and, very rarely, new ways of looking at problems pay off; but looking at what we have, it is a bit “ingenuous” to expect big improvements to appear every day. This always happens in situations where two things start to show:
1 – The technology gets well developed (mature), whether because of physical constraints or because the alternatives are well known (like in sorting problems), so even small improvements involve gigantic investments;
2 – The ‘solution’ becomes a kind of standard and there are virtually no reasons to change it (it’s the well known “good enough”).
I think that we are seeing these 2 things at work.
I don’t think there are that few OSs about. I hear quite a bit about Voodoo concoctions with weird behavior and even stranger drivers.
The thing with that is: you’re not going to see the next best thing coming out of that because in today’s world, what will a spanking new OS bring to the world that we don’t have today? If you want something better than what’s out there now, you need a vast team of dedicated geeks all looking in the same direction. And geeks almost by default refuse to do just that.
So you want to have something exciting and new, but who is going to build something that’s on the level of Mac OS X or Windows, but with something totally new, a completely different way of thinking?
Things like that can only come about if they’re government sponsored. Companies today only want to make money [which I have not too many problems with], they can’t spend a gazillion dollars designing some cockamamy contraption that people may not want to use.
We’ve grown up.
What you need today is a MMORPG experience which lets you interact with a world, not necessarily a game, where you can do your business, develop ideas. But online, in a virtual environment. MMORPG style environments will grow bigger and better, and people will be consumed by them. Building that kind of environment is the way to go.
The early ’90s had a lot of innovation; then came the internet bubble, and people realized that the technology wasn’t mature enough yet. I say we are where we are by Darwinism: only the strong survive. Innovation has died down, but it has also become a more difficult market to enter.
There is no need for gloom, really. The scene today is not boring because ideas have run out, although that is an ever-present danger (just look at games). It is boring because the portion of computer enthusiasts, academic users, and hobby programmers has diminished considerably, while 10 or 15 years ago it was considerable and therefore somehow influenced development.
Now this influence is gone. An average user is a corporate employee or somebody surfing the net at home. They usually dislike change and are not very enthusiastic about learning anything new, although some are actually quite fascinated with the technology they use. They prefer things dumbed down, stating a lack of time or some strange universal human right for things to “just-work”, no matter how complex they are, like that was some sort of a natural law (“man, I haven’t got the time for this geek stuff, I just want things to work!!”).
The sad thing is that most of those users are really not that busy at all! In reality, many are just afraid to look stupid – I notice it at work all the time, and it is a completely natural instinct. People just want to be experts without any effort and often find it embarrassing to be taught (“yeah, yeah, I know, just tell me how to do it”).
This is very much different from the early-day hobbyists, who understood that high technology is inherently arcane and that “not getting it” is part of the fun.
Also, many people today are in fact forced to use computers one way or another (“I’ve *got* to see if there are any good deals on e-Bay”), and they are only too delighted if things stay as they are.
In other words, most people are not revolutionaries. That goes for everything, not just computers.
That is the segment Microsoft and the other big corps are aiming at. As computer enthusiasts, there is nothing left for us to do but the one thing we always did: keep playing with things we find interesting and look weird to the outside world.
Not from me nor anyone else within the Syllable community. If you’d care to attempt to back up your obviously wrong statements, I have the entire contents of the AtheOS and Syllable mailing lists for what must be going on five years now, so I could easily prove you don’t know what you’re talking about if I cared enough to do so.
For me, it seems that we have reached a certain point in technical evolution where new and exciting impulses/concepts are missing but extremely necessary. I believe that scientific ignorance and economic lobbies are the root of our currently lame movement. This situation is very complex, and it is not about changing systems (e.g. the free market economy) but about changing thinking.
It’s a sad development that human labor is mostly treated as an expense by the management of a company, and that these managers sometimes earn about 10x more than their employees. Some of them might be worth their money, though…
Many people believe that our current scientific knowledge is something proven and that you should not doubt it in any way. But there was a time when “scientists” thought the earth was flat and we were at the center of the universe. So this knowledge is always just a snapshot of the current time and understanding.
As far as operating systems are concerned, you might also compare them to cars. More than a hundred years ago the Otto motor was invented. Until today, the evolution of driving has been to add more plastic and increase comfort. But the concept hasn’t changed. The same goes for computers and operating systems.
Some might not feel the same demand for a change in the OS landscape and live happily ever after – that’s ok. But some feel a bit bored from time to time 😉
I totally agree with you.
It’s boredom vs. stability. An OS is a very complex piece of software that needs years of development to reach a reasonable state of stability, and when it gets there… it’s boring?!?!
The OS is not the part of your computer that must be exciting; applications are, and they, along with OSes, have come a long way over the last years. Take a look at Firefox, PostgreSQL, OpenOffice… and many others.
I think the problem here is that we’re too fed up with everything to really appreciate what we have… quality and variety of software we didn’t even dream of ten years ago…
Eugenia, when you post stories about blogs that respond to blogs, then things are going to get boring. Kernels are boring because (a) most people don’t have sufficient technical skills to care, (b) TCP/IP stacks were finished a long time ago, when end-users suddenly got connected, and (c) it’s all about the apps anyway.
So now, it’s all Unix and Windows. Unix is boring until composite and xglx is done, and windows is boring until Longhorn gets out.
It’s all boring because the desktop does what most people expect it to do.
Now everybody is connected, so it’s not like the ’70s and ’80s, where you were excited to get your monthly copy of Byte and see what was going on in the computing world from your relative isolation. Nowadays, everything is incremental news.
The OS scene is dead! Long live the OS scene…
Almost no technology is new, as we are standing on the shoulders of giants. Get used to it. Innovation happens in the creative combinations of existing technologies in ways that have not been seen before. An old technology recast anew with a twist, or merged with another technology in ways that provide new or enhanced value, has merit; and if it fulfills a functional purpose, then it has value.
Chris, you seem to dismiss any possibility that there are innovations in the items mentioned in my or others’ posts. That’s fine, as that’s the way you say you see it, but it doesn’t make it so. A definition of innovation that requires the technology to be new and never done before is too limited. If that’s your definition, then good luck with it. Please don’t rain on our parade in the process.
Let’s see what the real world considers innovation to mean:
http://www.google.ca/search?num=100&hl=en&newwindow=1&safe=off&q=de…
Ah, there are many definitions beyond simply being new; although newness is a part of many notions of innovation, it’s obviously not an exclusive requirement. Here are a select few that move towards producing value:
Innovation is the whole process from: invention, development, pilot production, marketing, production. Invention is just invention. Innovation = creative idea + implementation.
The practical translation of ideas into new or improved products, services, processes, systems or social interactions.
Innovation is creating something that others want.
Innovation is an evolutionary process of increasing the capability to apply a technology, applying in new contexts, expanding the capability of a technology or improving the capability of a product.
I see the innovations in the above posts being not only possible technically but desirable. Yes, many of the ideas have been done before. In fact it’s reassuring that so many of them have been done before in various forms, this lends certainty to their application again in new and innovative combinations.
Yes, Smalltalk has been around about as long as C and Unix. However, Zoku isn’t Smalltalk but a derivative that you really don’t know anything about so it amazes me how you could possibly know how innovative it might or might not be.
Yes, many systems support process load balancing across CPUs and processors in a cluster. However, no mainstream OS ships with support for it in a way that ALL applications can automatically use if a user so chooses. I have ten computers here at my disposal, yet it’s next to impossible to spread the load for any of the applications that I use, and that’s because the applications were not designed for it, although they could be. Any OS that ships with this capability would be innovative in the minds of many people, such that they’d pay for it.
As for parallelizing compilers, have they really reached their maximum capabilities yet? Doubtful. Are innovations not required to ship production-quality tools that let ALL suitable applications run as parallel fragments across multiple CPUs or nodes in a cluster? Can you take your favorite application when it’s bogged down and have it run on multiple nodes, if that’s possible? If not, why not? If it can’t, but it could be made to, then a system that could do so would be innovative.
There are obviously many practical limitations when choosing how and when to fragment an application to maximize its parallel potential, which is why as many of these decisions as possible should be automated. The number of threads/processes/tasks (choose your terminology) can slow an application down or speed it up. Electronic Arts avoids the use of threads in their products because they feel it slows down their applications. To be clear, not all applications benefit from parallelization. To be very clear, most applications are designed single-threaded, and a vast majority of them could take advantage of opportunities for parallelization. Ever waited for some application while it was busy? Ever wished it would respond now? These are indicators that the application should be redesigned to take advantage of parallelization.
The Erlang language supports Concurrency Oriented Programming and in particular supports on the order of 100,000 threads. Compared to Java and C#, Erlang wins by an order of magnitude or more in the number of threads possible at once. In a truly advanced object-based system, each object has its own thread in theory. In Zoku, each object has its own thread, or is part of an object aggregate that has its own thread, and possibly the aggregate object has its own protected memory space and full process if required. This is a dynamic runtime decision based upon a number of technical and policy choices, such as security and system resources. This is how an advanced object-based system and threads come together, so they do have something to do with each other after all.
Apple Computer has been shipping dual-CPU systems for a while now. Intel has been shipping Pentiums with Hyper-Threading. AMD and Intel are planning new lower-cost dual-core CPUs this year. The Cell processor, with a PowerPC core plus eight DSP-like “synergistic” co-processor cores, will be shipping soon. Many people have multiple computers on their desk and at home. Companies have lots of computers. I’m sure they’d like to maximize the utilization of their capital expenditure on those hardware resources if possible. The age of dual (or N) CPU systems and of local or distributed cluster/grid systems has begun. Let’s open our eyes and see the light of the advantages of distributed-systems support in our OS.
Object serialization has been around for a long time. That doesn’t mean it’s a technology that has had its day. On the contrary, its role shall increase, for very important reasons. The idea of pipes is not new; Unix has had them seemingly forever. What is new are innovations to the idea of pipes that give them enhanced capabilities and let them exist within the world of objects as full first-class objects. Byte pipes are but one variety of the pipes that are needed. Object pipes are the future, and they are a useful user-interface metaphor, since everyone can easily relate to pipes from everyday experience. If you don’t see that, that’s not my problem but yours.
Chris wrote: why should the OS know or care about whether it’s a primitive stream of bytes flowing between programs or “advanced objects”?
The role of an operating system is to enable people and programs to share the system’s resources and, in the case of today’s systems, to assist in the flow of communications. One of the roles of an operating system is to be the traffic cop: to ensure a smooth and orderly flow of information, and to prevent, if possible, information from leaving the local or clustered system if there is a policy preventing it. Think of the role of the operating system as an information firewall. Why does this need to be handled in the OS? Because of its privileged position mediating all messages in the system, it’s the natural place for it to happen. This is an innovative use and merging of firewall technologies with information systems such as databases and object-based systems.
While it’s useful to play devil’s advocate and apply critical-thinking skills and tools, they only get you so far and can actually prevent an innovative environment from occurring. Imagine being a participant who moves the discussion forward with insightful questions, observations and contributions about how things might work, and why, and who would benefit. Innovation is in part taking the old and renewing it with a new purpose. Along this path, synergistic thoughts and ideas can occur. This is where the future lives.
I apologise for implicating Syllable’s developers, Vanders. I’m going to assume that I’m mistaken, rather than trawl through a 4 year old mailing list.
But that said, it’s still a shame to see Kurt no longer involved in AtheOS (or other OS development), given the sheer amount of great work that man got done in so little time. I’m sure we can agree on that much.
Thank you, your apology is happily accepted.
I do agree with you. Kurt was an excellent designer and a very prolific and talented developer. His presence in the OS development community is very much missed.
All three operating systems, plus many other Unix OSes, just work. Yes, it is boring that not much happens, but there is not much that most users miss from today’s OSes.
More important is the software which runs on an operating system. If you look at the visions people like Alan Kay had in the ’70s, then it is really boring where we are today…
Chère Eugenia,
The next computing revolution will not come from software (i.e. OSes). In fact, the computing industry is obsolete. About 15 years ago I could watch fabulous Japanese geeks create amazing 3D user interfaces. I was able to create a (small) speech-recognition program running on a Sinclair computer.
Since then, OSes have simply become faster and easier to use, that’s all.
The revolution will come from hardware: we need a really new processor (maybe a quantum processor?), new in concept and architecture, if we want to create truly innovative software and reinvent data computing. We need a new root.
Chris.
I also think it’s time for new hardware concepts. We have to rethink approaches to power supply and efficiency, for example, and then move on…
Haiku!
To paraphrase radically an earlier poster, having a bazillion incompatible operating systems draws vacuum. I, for one, think the OS should be about as intriguing as the BIOS.
Lately, I’ve been ironing out the bugs in a dual-boot system of WinXP and Gentoo. The choices involved in getting to a system where the development toolchain and productivity applications all play nicely, irrespective of kernel, are fun but not technically overwhelming.
Someday, I hope to publish an article describing this beast at low detail.
…, isn’t it? On the one hand it’s positive – things get cheaper and more people can afford them (technologies, etc.). On the other hand, the stuff is losing its “taste”, its charm; it’s oriented to the average Joe user.
Everyone on the market knows “the wheel is already invented”; the consumer is more or less happy with what they have. Even the leaders of the market are confident about themselves.
I’m missing the ZX Spectrum times… a 64 KB game was HUGE, and everyone was happy. Please, don’t get me wrong – we need progress, but in quality and innovation, not in quantity. Do you know what the process is called when the cells of an organism increase in quantity very quickly? No, it’s not growth – it’s cancer, and it kills the organism. I don’t want to see that happening with OSes.
“If one needs to find some fresh ideas, it’s the the small guys he needs to look at. Not because the small guys are “more intelligent” than the big guys, but because the small guys don’t concern themselves with legacy support or deadlines. They can break everything they want on their OS and only 10-20 people will notice. The big guys can’t afford to do that.”
Yeah, and if software was protected with ONLY copyrights like music, books, etc, then innovation would always be there in the ‘small guy’, and we can clearly see it is, with SkyOS, RISC OS, AROS, Amiga OS 4, numerous Linux distro hacks, etc.
Innovation has truly been great over the last 30 years of OSs.
However, with the advent of software patents, this innovation is about to be halted by patent holders, who are granted a special monopoly license to stop others from using an idea that the patent holder applied for first.
It’s a pity.
In the future, in a world with software patents, I expect less innovation and less competition, compared to a world in which software patents never happened.
Patents on technical inventions are fine when of limited length, but patents on generic software are like patents on a drum sample in music, or on hitting the drum three times and then missing a hit. Or like patenting an idea in a story, such as an apple falling from a tree, so that no one else can write about apples falling from trees, no matter how they write it. Software patents will hinder future innovation and competition in OSs.
Guess OS’s are becoming commodities. Whilst it sucks that the main players are either badly designed and/or evolved messily from 30+ year old systems (OS-X is closest to avoiding those offences), they all work and get the job done.
Whilst I’d love to see something that was well supported, even semi-popular and gave me that “whoooo!” feeling I used to have with my Amiga, I’m not holding my breath. (Not that there aren’t some neat things out there… but none have any footprint at all…)
Two OSs that interest me but are not well spoken about:
1. Plan 9
http://www.cs.bell-labs.com/plan9dist/
2. TRON (ITRON, JTRON, BTRON, CTRON, eTRON)
(The Real-time Operating system Nucleus)
– The OS is everywhere!!! Really!!!
If you create an operating system today, you have to write drivers for too many different network/graphics/sound cards to make it a feasible alternative to Windows or Linux.
If you are a single person working on it as a hobby, then you will most likely develop drivers for your own hardware only. OK, perhaps you will build in support for widely adopted standards such as VESA for graphics, but there are no such standards for network and sound cards. Your only hope may be that, if the architecture of the system is really good and revolutionary, others will catch on, and with the help of the wider developer community your OS can grow into something big and usable.
I remember the old days with my Commodore 64: I learned its assembly language, studied the sources of its kernel and of the programs I had, and within a few years my knowledge reached a level where I could feel quite comfortable with the hardware and could concentrate on what I actually wanted to do with it.
It’s not like that today: if you are a programmer, you constantly have to learn new APIs, new systems, new ways of doing things, new graphics-hardware programming tricks, all the time; it makes no sense anymore to learn an architecture by heart (what for? it will change anyway in a year). That’s why programming in assembly is a relatively rare phenomenon today.
Imagine a violin player, whose violin would change (be upgraded) every year, so instead of concentrating on WHAT to play on the violin, she must spend half of her time to constantly relearn the technicalities of HOW to do it. This is the state of things in computer industry at the moment, as I see it.
What I would love to see is a hardware platform like the Sony Playstation 2 or Nintendo Gamecube or whatever, which has such processing power that would be enough to accomplish ANYTHING you would dream about, and which at the same time had totally open specifications (downloadable PDFs for free), where the manufacturer would absolutely back the hobby programmers, so that a lively culture could grow around it.
The system should be easily connectible with a PC so programs may be easily transferred from/to the system, and this would be all supported by the company behind it.
Such an architecture would have a chance with me. I wouldn’t feel that the countless hours I spend familiarizing myself with its structure would be lost when a newer version comes out on the market. And as I grew in knowledge, I could reach a level where I could really do whatever I want, and if I uploaded my highly hardware-optimized, superbly down-to-the-core programmed assembly/C/whatever mixed creations to the net, I could be sure that others could just download them to their unit and they would look exactly the same on their hardware as on mine.
If you don’t think LinuxBIOS is innovation, you’re insane like Crazy Eddy.
With the newer breed of PCs (Mini-ITX, Nano-ITX, and mobile CPUs) I think we’re going to see computers used in a lot more areas.
Forget innovation in the OS; the OS is a means to an end. It should manage my memory without leaking and that’s about it… The GUI, the man-machine interface, and applications are where innovation has the most potential to take place.
It’s funny that people only consider Linux, Windows, OSX, etc. to be OSes. Your microwave’s OS has greatly improved since 1990. Your car’s OS surely has, with stricter regulations and the introduction of OBD-II. Your clothes dryer is a hell of a lot more sophisticated while consuming less power than in 1990. YOUR WATCH CAN DOUBLE AS A USB DRIVE.
If you want to see innovation, you don’t have to look far. A car from 1990 still drives you to the same places as one from 2005, and if you always take the same roads, a new car will function and appear just like your car from 1990. TRY USING IT FOR DIFFERENT PURPOSES, like off-roading or racing.
Many cars have GPS systems in them now with directions dictated. Garmin has come a long way.
The future OS will be a network-wide (worldwide ?), distributed system, or will not be.
Just look at what’s done (and planned) with :
http://www.opencroquet.org
http://plan9.escet.urjc.es/ls/planb.html
http://www.dragonflybsd.org
With OpenCroquet, users will be able to create permanent rooms, in an object-oriented design, with their own rules (scripting), available to all other users.
Some people plan to write rooms that allow users to play football, or deathmatches.
Don’t tell us there is no innovation in operating systems. The innovation game is just being played one layer higher than before. There is no room for computing purists, because innovation is pulled by the user, not the programmer. This can be a shame for stability and good design, but once users’ needs have stabilized, programmers will be able to apply good design principles.
Unfortunately, what is visible in an OS, to the common user and the enthusiast alike, is only the tip of the iceberg.
@metic
(I may get flamed for this by all the BeOS fans but) in what way exactly was BeOS so much more innovative than what is done in current operating systems? I suppose it is related to GUI mostly,
I’m not a BeOS fan, but from what I know about this OS, one (and only one) of the revolutionary things it has (had?) is what Micro$oft is trying to do (with some issues, and some years behind schedule) in the new FS of Longhorn…
@happy god
>I am so sick of the god-damned Start button (or Gnome >button, or KDE button etc …). It’s now 10 years old.
Maybe 20… it’s not a (revolutionary) invention by Micro$oft… just an implementation (copy?) with some cosmetics, taken from others…
The problem for innovations/revolutions is that the masses know who is first to advertise them, not who created them…
The majority of people still believe that all the stuff advertised with Win95 was real innovation, and not just taken from other OSes 10 years ahead of it.
JNode.
From my perspective it’s an interesting project.
Well, it’s in an early state, but interesting and promising.
expresses just what I think.
Nobody wants to “invest” into a new technology, despite current systems being really technically shit (well, Windows and Unix; Mach is a bit more modern…).
Windows and KDE have start buttons; GNOME and Mac have menus.
I WAS amazed by the early BeOS versions; it had some spirit lurking inside. It was responsive and offered a bunch of new and fresh concepts, e.g. the filesystem.
These were good times. I bought this OS just for fun.
On the other hand, when checking out the early OS X versions, I was not amazed at all, but annoyed by this show-off OS that lacked fundamentals, speed, clear fonts and new concepts, IMHO.
I agree that the OS scene is a little boring these days, mostly because of money. No money, no OS. Hobby OSes eat up programmers’ time, and many can’t afford to spend much time on hobby coding.
You already mentioned the increasing complexity, that slows down hobby OS development.
Anyway, I’m still hopeful; maybe Haiku will become a nice swan.
Or any other GROUNDBREAKING OS ?!?!
I would be one using it. Plan 9 summarizes everything I like: new ideas, simplicity, or better, complexity built around a simple concept. The idea of every piece of functionality being implemented only once, and only where it must be for everything else to make use of it, is very appealing to me. Still, I don’t know how well a Plan 9 distro would do once people started to ignore the concepts and architecture of the system to make room for compatibility. Would it end up replacing rio, for example, with a version of X, or things like that? These days, make a system widespread and you doom it to be just another Unix clone.
I remember the good old days; I am not that old yet, but at one point in time I had a desktop with BeOS/FreeBSD/NetBSD/FreeDOS/DOS 6.22-Windows 3.11/Windows 98SE/2000/XP/Red Hat 8/Mandrake (best appearance)/Sun Solaris 8/SCO UnixWare (yes, I had the licence), and the best boot loader, XOSL. I miss those days; the so-called modern laptop is unable to work with most distros, so I am stuck with the boring combination of XP/XP_64 (eval)/CentOS4rc1 x64/NLD9 x86 with GRUB. I tried to install Solaris but could not configure GRUB for dual boot.
It was so much fun, and as Eugenia rightly puts it, it’s a boring state of affairs now. Every new distro tries to have the Linux kernel and a stereotypical new feature set.
Wow, 150 messages in a few hours. I see you people wishing to keep talking. I’ve just created a Yahoo! Group specially to keep this discussion. I see people happy with what they have. The group isn’t for them. Just for people wanting things to change. You can go to http://groups.yahoo.com/group/futurecomp/
I hope something good can result from so much talking!
An old saying, “familiarity breeds contempt”, is appropriate in analyzing what you express in your article.
The expectation of a corporate CEO directing a plan for the future used to be a five-year plan. The better-planned corporations exceeded five-year CEO stints, but most had “burn-out” in five years.
The cycle nowadays has shrunk to less than five years.
Our forefathers recognized the five year complex and initiated renewal every four years with elections of new blood. States went further and renewed elected officials even more frequently.
The principle applies to all endeavors of human kind. There is a limit to everything.
Change is required and change is normal in the universe. Some changes are good. Too much of a good thing produces excess pounds and a diet change can fix the problem.
What change would satisfy you?
Commodity
Yes, millions of threads. Most of the threads would be tiny, occupying 12 KB to a few hundred kilobytes or so, with a minimum of three 4 KB pages per thread. Many would be larger, and a few the size of most threads today. Obviously there are limits to the number of threads, simply due to the available virtual memory in a system. That’s another reason for core support in the operating system and application systems for cluster and distributed purposes. Local or distributed clusters of PCs or CPUs (at any rate) are the future. This alone is a powerfully innovative idea when implemented.
For IA-32 processors, the limit is dictated by how the kernel manages the GDT and LDT. So, millions is waaaayyy over the limit, though.
People use software to get their work done, whatever it is. That rarely means giving a shit about operating systems. Imagine if people cared about the software their DVD player was using: it makes no sense.
There’s a small obstacle, though: you’re forgetting that a computer is a device born to be much more versatile than a DVD player or a games console, intrinsically more complex and powerful, able to do anything…
The fact that a computer can nowadays let people do “just their work” is a result of commoditization, and of the marketing strategies of some clever individuals, who pushed the “PC for everybody” paradigm they conceived to attract, and sell PCs to, people who believed them too hard to use or even ignored their existence…
So they’ve succeeded in their intent, and now that people have got used to it, they believe it’s an obvious assumption, and that the PC must be no more difficult to use (nor more versatile) than a domestic appliance… or even less so, if possible.
If you stop and think about it, you may realize that this strategy has over time flattened the differences between intelligent users who use a technological tool to do some serious work (maybe research), “tech-savvy” ones (including those who love technology for the sake of technology), and clueless people who are forced to use a PC but cannot distinguish it from a typewriter… getting them to firmly believe that the one solution the market offers is certainly the right one.
And, subtly, convincing them that choice among different solutions, if any exist, is not even needed. This way some other things become unnecessary, such as the famous “freedoms” of FOSS: the freedom to choose and use the available software, the freedom to review the source and adapt it to my needs, etc… If I were firmly of the belief that I already have the best solution, what would remain that could push me to explore others, or even to become knowledgeable enough to make my own?
I agree one should focus on the work he or she must do with a computing machine, but I think one could also show some active attitude in choosing the best tools for that job: getting informed, achieving a minimum level of knowledge to understand the differences and peculiarities of each solution, instead of buying a generic tool and taking for granted a priori that it’s THE right one.
(If you were to put a nail in the wall, and the guy at the shop tried to sell you a Dremel, you wouldn’t buy it, would you?)
I’ve heard many times that diversity is a strength in the IT world: actually, a passive attitude on the buyer’s side is what kills innovation and leads to a dull, boring world. If you like it that way…
First, sorry for my English.
I think the main problem is the Intel CPU and the compatible processors (AMD, etc.). If the CPU (and auxiliary chips) are the same, there’s no need to build another OS.
If you were as old as me, you could remember some “big iron” OSes like UNIVAC, NCR, Burroughs (it was a beauty, with an integrated high-level language). Even in the IBM kingdom you have the AS series, which started with very fresh ideas.
Now, of these old-timers, we have only the IBM OSes.
To me it seems that all the young people think that Mr. Gates invented computers. It’s a very big lie, and you can count the number of patents (fundamental ideas, not only “get money” ideas) that Microsoft has to see that in this field they are very… poor.
Maybe with new CPU hardware (like the Cell from IBM) we can have a new OS.
Get a new processor based on a new paradigm and you will see new OSes.
Where’s the fun? Operating systems are more a chore than anything. Fiddling with Linux, trying to keep the Windows registry lean, bloated applications, boring applications, boring themes etc. Where’s the fun???
Hey, I’m still having fun. OS/2 runs without fiddling, and doesn’t have the issues Windows users have to deal with on a regular basis. I’m sure other OSes are similar.
Try something other than The Big Two. MacOS X, perhaps?
It is clear the ‘plan’ in the US is to stop innovating and tie down what technology they have, and assure stagnation with government-approved monopolies. Soon Microsoft will have a patent on ‘using a computer to perform basic mathematical functions’ or some other ridiculous shit which covers all of computing, since all a computer is is an overglorified calculator.
OSes being boring and monotonous (and there not being too many of them) is a “good thing™”; anyone who’s been in the industry for more than 10 years knows this. There’s no point reinventing the wheel again and again just for the sake of fun or excitement (get yourself a PS2 or an Xbox and some games).
If you are bored with the actual IT market, I suggest you go and study something, and do some kind of research project; that’s where the innovation is.
And most important, stop being the administrator of a site which talks about how boring IT is.
You could argue that things are more interesting than ever. The days when a small capitalist group could hope to unseat the goliaths by marketing a complete “for pay” OS are over. They won’t be back; the train has left the station. On the other hand, we have more interesting apps and distributions that you can try for the price of a CD than we ever had. Whatever BeOS and others brought to the market was a precursor to the far more interesting environment we see today. I don’t know that all those competing commercial OSes were that interesting, frankly… what is interesting today are the apps.
This is the beginning of a golden era and it’s only going to get better!
Okay.
Why can’t the desktop UI itself tell me which programs are running? OS/2’s Workplace Shell provided a rudimentary start to this sort of feedback when it crosshatched an icon after you started a program from it (and uncrosshatched it when the program stopped running). How about adding performance information to the icon title bar as well? Or integrating a top display into the desktop itself, so I can see what’s running without having to explicitly ask? Color-coding icon labels depending on their CPU/RAM usage?
Why can’t I lump several related programs and documents together in a single container on the desktop and open/run them all (or close them all) at the same time with a single action? Again, the OS/2 WPS could do this via WorkGroup folders, but it seems nobody else has thought of this? It’s nice to be able to think in terms of related tasks, not “files” or “programs”.
I don’t remember what it’s called, but that Mac OS X feature that lets you bring up and arrange all running windows so you can see everything that’s running is a really cool idea. How about an MDI window or equivalent that lets me do that with the pages I’m working on in a document, or the drawings I’m working on in a drawing program? Things like Print Preview are extremely limited when I’m working on a 100-page functional flowchart and simply want to do uniformity checking!
I wish we had more of those, actually. The old Sperry UNIVAC mainframe environment I’m working in now is a multi-CPU, multi-threaded environment, but it doesn’t have memory leaks, doesn’t allow data to be executed, and has “capabilities-based” security, in Linux parlance, making it more advanced in many basic areas than the supposedly more advanced desktop and small-server OSes that have been developed more recently.
It sometimes pays to look at history, even in the context of information technology…
I’m hardly stifling innovation with criticism. If you noticed, I even said that distributed computing is interesting, though I feel that it’s a niche. And as long as it’s a niche, no one is going to include support for it in a mainstream OS; it’s simply wasted code. I have 6 systems here totaling 11 CPUs, and I have no interest in setting up any kind of cluster. Each system does its own thing and does it well; I have no need to share processing resources.
There is no problem trying to stand on the shoulders of giants, but nearly everyone who has listed an “innovation”, including you, is simply regurgitating things that are done and available today. And when pressed for details of how your system is actually different and can work around the real technical issues I raised, you bailed. I can create a real “advanced, new, distributed object” system today using the J2EE platform. Ya know what, for the vast majority of systems, those are slow. Network latency, even over a gigabit LAN, is ridiculously slow. Distributed systems right now get away with it because each node is processing something that takes many times longer to process than the latency of sending it around. I’m thinking your 12 KB threads aren’t going to be spending weeks simulating tectonic plate movement. So you just took something which could have run locally and sent it to another machine, adding at least 2 ms of overhead to the entire process. That’s thousands of times slower than doing it locally.
If you really wanna be innovative, solve the problems I’m talking about. Don’t complain that I’m not cheering you on. I’ve seen it 10 times before and I don’t see how it’s suddenly going to start working now.
(And OSs still don’t need to know anything about the data to be a “data firewall”. They can happily do that by only knowing the two endpoints of the data stream and not sending the data anywhere else. I mean, this is TCP/IP 101. And you don’t see network protocols needing to know the format of their data to ensure that it gets from one process to another, do you? Really, if you want to be taken seriously, you’ve got to do a bit more than complain about being stifled.)
uecdac is right. We don’t need any more new OSes. There’s already a huge number in use (far more than are named by the article). The mainstream OSes might not be doing anything innovative (although I’m sure that’s open to debate), but so what?
There’s a vast vast number of open source projects out there – without a relatively stable OS base, these would have a difficult time (unless, in many cases, those projects are tied to bleeding edge OS developments).
There _is_ a great deal of exciting stuff going on. Anyone who insists otherwise is probably only taking a very shallow view of stuff that’s really going on.
Even if I limit myself to mentioning embedded ARM Linux (in which there’s a vast world of new OS stuff being done) I could go on for hours. And even if I limited myself to the minority OS I mostly use – RISC OS – there’s still plenty of exciting stuff going on. We’ve had one article a week on RISC OS on OSNews recently – not bad going for a minority OS.
To conclude, asking for exciting new OS developments in the major OSes is selfish – it would inhibit real development (which is ultimately in applications) and waste people’s time, not to mention ignoring the stuff that really is going on.
I too have been on the lookout for that “little something” to get the heart jumping with joy. Yes, we had various versions of Windows … yeah, right!
Linux started to raise its head; unfortunately, this head had many more smaller heads just behind it. OK, they all look different; however, they do much the same thing, and act a bit like Windows, e.g. Java Linux. At the same time we had BeOS and the like trying to make a change, but to no avail.
One of the more interesting OSes is Oberon. Alright, it’s a teaching OS; however, it behaves in a very different way from other operating systems and shows some potential.
Many Linux users say that they hate Windows OS. How then is it that Linux looks and feels a bit like this hated OS? Why won’t someone create a totally new user interface? Preferably one which is totally intuitive and obvious to use.
How can the experience be changed? Well, why do we have to have a large X in the top right-hand corner to close the “window”? Why not use the “Esc” key for the active screen? Couldn’t the arrow keys be used to switch between open screens rather than “Alt + Tab”? Alright, these suggestions are daft, but I am sure someone much cleverer than me can provide the answers.
Sure, Linux is great! However, it is more suited to the regular geek (clever people). It needs to be more inclusive, and it will only do this if it is innovative.
Regards
Togora
There is already this nice website that we go to in order to read about what’s happening with alternative OSs. I first came to OSNews to read about what was happening with hobby OSs and looking for discussion about how to write bootloaders, remote debugging, virtual memory implementation, etc. Like many programmers, I sometimes feel like my life wouldn’t be complete if I didn’t write a new OS.
What if OSNews facilitated creating hobby OSs? Maybe give space to developers to discuss and link to their projects, have a voting system on their progress, provide forums for each OS, and provide forums for talking about different facets of an OS, from the really dull stuff all beginners focus on, to stuff like leveraging a microkernel and applying a thoughtful design process to your effort.
Who knows, maybe someone will write an innovative, non-POSIX OS called “Eugenia”.
Look at the OS history tree – look at the Unix history tree – look at DistroWatch for Linux’s multiple faces… put all that in a chart – insanity!
What we need is standardization of formats, cooperation and good work integration (on top of innovation).
I have a hunch that many people feeling that the modern OS scene is boring are actually talking about desktops and GUIs (many feeling BeOS nostalgia too) and not thinking so much about what is happening in the OS field at large (outside of the relatively narrow desktop OS field).
I cannot see why the non-GUI-related OS field wouldn’t be quite interesting now. And besides, those non-GUI-related things often matter for desktops too. There are many advanced interesting embedded and mobile OS projects, RTOS projects, server, super computer and cluster projects, etc. There are also innovative and serious new OS projects like Coyotos (http://coyotos.org/) that would deserve much more public attention. Besides, I agree with the people who have said that the OS should be relatively transparent and it is the applications that really matter.
As to desktop operating systems, it might make sense to integrate the GUI (and some GUI applications) better into the core OS than what is done, especially in the *nix + X world (the classical X technology is far from optimal or transparent). In that sense – from the GUI and OS integration point of view – BeOS was indeed quite innovative, and the GUI performance was good for its time. However, BeOS had some problems too, like poor network security. I mean, I understand the nostalgia that many former BeOS people have; BeOS was one of the greatest GUI operating systems ever, lean and well integrated and designed. But nevertheless it was not the holy grail, it had its shortcomings too, and current operating systems may have other equally innovative though different features. The same with OS/2.
Of the potential new desktop operating systems, personally I especially like Syllable (as far as I know it). What’s wrong with Syllable, or SkyOS?
I also think it is a very good idea for new desktop OS projects to open source the OS (Syllable is GPL-licensed), because it may otherwise be very difficult to get a large enough developer and user community to compete with current desktop OSes. Consider: if BeOS had been GPL’d, and if Linux had followed the BeOS licensing, we would still have BeOS around, actively developed and used, but nobody would know Linux.
Or if you want something very similar to BeOS back, then by all means support the current BeOS derivatives like Haiku – instead of just complaining how far they may still be from that warm feeling you remember having when using the classic BeOS.
I’m using BeOS R5 Max Edition 3.1 with some patches and replaced parts. The heart of BeOS is beating right now. It’s not necessary to wait for Haiku.
Chris wrote: distributed computing is interesting, though I feel that it’s a niche. And as long as it’s a niche, no one is going to include support for it in a mainstream OS. It’s simply wasted code. I have 6 systems here [totaling] 11 CPUs and I have no interest in setting up any kind of cluster. Each system does its own thing and does it well; I have no need to share processing resources.
So we see different needs then. I don’t see it as a niche. I see my ten boxes sitting here and wish to get them all working on what I’m doing at any given moment if possible and with the least amount of my own thinking to do it. As systems get more powerful and as more and more people obtain multiple systems in their lives they will want to maximize their use. They will also wish to link systems with friends or colleagues. It’s really quite a simple idea that takes advantage of the low and seemingly declining costs of computing power.
There is no problems trying to stand on the shoulders of giants, but nearly everyone who has listed an “innovation”, including you, is simply regurgitating things that are done and are available today.
As I pointed out innovation is obviously in the eye of the beholder. For example, if you search the net for research that Microsoft is doing much of it is quite innovative but lots of people wouldn’t agree about that. In addition I showed definitions of innovation that include “regurgitating things”. Obviously the things that are being “regurgitated” are of value to those people. This is an opportunity to see what it is that people want in an OS.
What do you want in an OS, Chris? What would you consider innovation if it’s not any of the items mentioned by people above?
when pressed for details of how your system is actually different and can work around the real technical issues I raised, you bailed.
I hardly call providing more details about your specific concerns bailing. You just didn’t like the answers.
I can create a real “advanced, new, distributed object” system today using the J2EE platform.
Using Java, a static system, as the base for an advanced object system? That’s quite funny. OK, go ahead. Static systems create applications and operating systems that are frozen at compile time. That’s a major reason operating systems and applications are so fragile, inflexible and inaccessible. They are all locked up tight at compile time.
Network latency, even over a gigabit lan, is ridiculously slow. Distributed systems right now get away with that because each node is processing something that takes many times longer to process than the latency of sending it around. I’m thinking your 12k threads aren’t going to be spending weeks simulating tectonic plate movement. So you just took something which could have run locally and sent it to another machine adding a least 2ms of overhead to the entire process. That’s 1000s of times slower than doing it locally.
Yes, the limits that you seem to keep pointing out for some reason are present and obvious. I am very aware of them.
It may not make sense to fracture a particular application and run it on multiple boxes; that depends upon the application and how much inter-fragment communication will need to be done. Obviously, when I said “applications that have the opportunity to run in parallel fragments”, it didn’t communicate to you that the system would need to take into account the various costs in assessing the “opportunity”. Some of these costs and limits are: RAM and disk memory, CPU utilization, process synchronization, process migration across a network, post-fragment-migration network costs, network saturation limits, network bandwidth dollar costs, risks of network link outages, the cost of deciding all of this, etc. All of these factors and more (any that you can think of?) contribute to an assessment and decision process that would determine when the opportunity is ripe for fracturing a program and distributing it across the multiple machines in a user’s available cluster network. Of course, more knowledgeable users might assist or override such automatic decisions for various reasons.
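Grossly simplified, the kind of cost-based decision described above might look like a break-even check. Everything here — the function name, the 10× safety margin, the figures — is a hypothetical sketch for illustration, not part of any existing system:

```python
def should_distribute(compute_s: float, payload_bytes: int,
                      bandwidth_bps: float, rtt_s: float) -> bool:
    """Ship a work fragment to another node only if the compute time
    dwarfs the network overhead. Grossly simplified: ignores RAM,
    CPU contention, link reliability, and migration cost."""
    transfer_s = payload_bytes * 8 / bandwidth_bps  # time to move the data
    remote_overhead_s = rtt_s + transfer_s          # cost of going remote
    # 10x margin: only distribute when it is clearly worth it.
    return compute_s > 10 * remote_overhead_s

# An hour-long simulation fragment: clearly worth shipping.
print(should_distribute(compute_s=3600.0, payload_bytes=10_000_000,
                        bandwidth_bps=1e9, rtt_s=0.002))   # True
# A 1 ms GUI callback: keep it local.
print(should_distribute(compute_s=0.001, payload_bytes=10_000,
                        bandwidth_bps=1e9, rtt_s=0.002))   # False
```

A real assessment would fold in the other factors listed above (memory pressure, link outages, migration cost), but the shape is the same: go remote only when the compute dwarfs the overhead.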
In the course of a given day I use a number of applications that could be offloaded automatically. If their GUI is all that runs on my primary box, then I’m fine. There are many ways to fracture an application. Doing it automatically would certainly qualify as innovative, since no OS offers this today.
Let’s look at an internet based role playing game. To support 100,000 plus users a local cluster with high performance interconnects is needed. The interlink communications within the cluster are obviously much faster than the links into and out of the cluster that traverse the internet. And that’s the point that allows the interlink cost to be acceptable even with multiple internal hops. It all depends on the application and how it’s fractured and how those pieces are distributed, locally on multiple CPUs or across a network link. There are many potential problems, as you’ve pointed out, and as I’ve pointed out these “parallelization factors” must be considered during the process of fracturing a program, whether or not the process is carried out by a human being or a program.
If you really wanna be innovative, solve the problems I’m talking about.
What problems are those? Please be specific about the problems you wish solved.
Don’t complain that I’m not cheering you on. I’ve seen it 10 times before and I don’t see how it’s suddenly going to start working now.
I’m not complaining about you cheering or not cheering. What you do is what you do. I’m more interested in your self-stated inability to see. Keep asking how it might work rather than giving in to bleak assessments. Edison’s team went through 10,000 iterations to make the light bulb work. It’s a matter of perspective and how you approach it. The questions you ask matter; choose carefully.
And OSs still don’t need to know anything about the data to be a “data firewall”.
Obviously that’s your view. One reason we don’t have more advanced systems is that many have this standard view. However, the standard view doesn’t invalidate the possibilities or needs that the few are bringing into existence. Furthermore, you might not have the needs that others have in this regard.
They can happily do that by only knowing the two endpoints of the data stream and not sending the data anywhere else. I mean, this is TCP/IP 101. And you don’t see network protocols needing to know the format of their data to ensure that it gets from one process to another, do you?
Thank you for pointing out a few details of the standard view of how current TCP/IP and firewalls work. However, it misses the point that I was making. Let me clarify. When people collaborate in groups, large or small, using a unified information system they will need to protect their data objects. A system that enables them to specify the security policies for the information (i.e. objects) can be considered an information firewall and it is an integral component within a Collaborative Operating System.
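As a purely hypothetical sketch of what per-object policy enforcement in such an “information firewall” could look like (the class name and policy shape are invented for illustration, not taken from any existing OS):

```python
# Hypothetical sketch: security policy attached to data objects themselves,
# checked before any principal can act on them.
class InformationFirewall:
    def __init__(self):
        # policy: (object_id, action) -> set of principals allowed
        self._policy = {}

    def grant(self, principal: str, object_id: str, action: str) -> None:
        """Record that a principal may perform an action on an object."""
        self._policy.setdefault((object_id, action), set()).add(principal)

    def allowed(self, principal: str, object_id: str, action: str) -> bool:
        """Deny by default; allow only what the policy explicitly grants."""
        return principal in self._policy.get((object_id, action), set())

fw = InformationFirewall()
fw.grant("alice", "design-doc", "read")
print(fw.allowed("alice", "design-doc", "read"))    # True
print(fw.allowed("mallory", "design-doc", "read"))  # False
```

The point being illustrated is that the policy attaches to the objects themselves rather than to network endpoints, which is the distinction being drawn from a TCP/IP-level firewall.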
Really, if you want to be taken seriously, you’ve gotta do a bit more than complain about being stifled.
Actually, I wasn’t “complaining” about being stifled by your views. I was pointing out that there is a choice when it comes to innovation. A powerful choice that impacts what one can and can’t see. One path responds with criticisms; this is the devil’s advocate point of view that so many in our culture actually strive for. Another path participates in creating the future, not by “cheering”, but with active engagement in looking for opportunities, by asking questions that move things forward. Which path to follow is up to each of us.
I agree with Eugenia. OSes have gotten boring. But sooner than what she is alluding to.
Yes, Linux has had major changes in the core of the OS. And yes, the GUI has changed some. The same is a lot more true of Mac OS X. Windows hasn’t changed at all in three years. At least nothing to speak of.
But there really have been no dramatic visual or work-altering changes at the OS level in a long time. Meaning that none of the OS changes have altered the way I work. They may have made it easier or faster, but I still do things the same way.
I’ll give you an example. I’m 44 and have been working with computers since 1979 when I started in mainframe programming in COBOL, FORTRAN, RPG, and BASIC.
There was LOTS of change. LOTS of things to learn. LOTS of ways to do things up until about 1998 (and I’m not talking Windows 98). Things really haven’t changed that much since then.
First it was mainframes, where computing was remote (not in the machine you were touching). Then PCs with text input. Then windowing (Macs, Amigas, OS/2, and eventually Windows). Then true multitasking the way it should be, with OS/2, where I replaced four Win 3.1 machines with one OS/2 machine. Windows ’95 was a step backwards, and Windows XP sort of does what OS/2 did eight years before, but not as well. Other than the internet, which is not an OS thing, nothing huge has happened to OSes since then, other than Linux joining the mix. Linux’s big thing is that it isn’t Mac or Windows.
Yes, there is Windows XP, but that is just Windows NT with a Windows ’95 front end. Mac OS X is just BSD with a NeXT/Mac front end. Database file systems existed before BeOS, but that was for me the first big file system change (FAT to NTFS … yawn).
Most of the changes we see are on the application side. At least the ones that get me excited. Things like GarageBand on Macs. Blogging. Instant Messaging. These are lots bigger than any OS changes in years.
I wrote: Yes, millions of threads.
Simplicus wrote: For IA-32 processors, the limit is dictated by how the kernel manages the GDT and LDT. So, millions is waaaayyy over the limit, though.
Linux uses a trick to avoid such CPU hardware limits. It has been done before and can be done again.
Let’s move past the nitpicking over technical limits, as most of them are obvious and there are ways around many of them, or to work within them.
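For what it’s worth, the general point holds: per-thread kernel limits can also be sidestepped entirely by scheduling tasks in user space. A small sketch using Python’s asyncio — the task count is arbitrary, and this illustrates lightweight user-space tasks in general, not the specific GDT/LDT workaround in Linux’s threading implementation:

```python
import asyncio

async def worker(counter: list) -> None:
    counter[0] += 1          # a trivial unit of work
    await asyncio.sleep(0)   # yield back to the scheduler

async def main(n: int) -> int:
    counter = [0]
    # Tens of thousands of cooperatively scheduled tasks inside one OS
    # thread; no per-thread kernel structures (descriptors, stacks) needed.
    await asyncio.gather(*(worker(counter) for _ in range(n)))
    return counter[0]

print(asyncio.run(main(50_000)))  # 50000
```

Trying the same with 50,000 real OS threads would exhaust kernel resources on most systems; the user-space version runs comfortably.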
In the end, I’m seeing most of the creative talent drained away simply because of the security factor.
If this were wildly possible, a “silver bullet” could be found in an OSS way (maybe like this):
http://www.forescout.com/as-process.html
… we could then see more innovation of the sort we’re used to – not just features tacked on and semi-functional, then ignored, like what’s been happening lately in both Apple’s and MS’s OSes.
It seems they put forth these (server) functions which are almost instantly non-operational because of security issues.
Let’s call this Patch HELL.
Don’t you think THIS is the main stumbling block?
hylas
Maybe for that high-level “Wow” we need low-level hardware “Wow”? When all is said and done, processors still see the world from a low-level standpoint. All the software layers are ways of putting a pretty face on it. Now imagine programming an OOP processor, for example.
I used to think OSNews was news about OS internals, but it’s become news about what one OS software company said about another, or how much Windows sucks or doesn’t suck. And I’m just sick of those Linux distro reviews! Mandrake 10, then 10.1, then 10.2… OK, all distros use a Linux kernel, GNU software, X Windows, maybe some installer. And they all suffer from the same hardware/software problems. They just have a different face, maybe a different package manager… ARGH! Fine, off goes OSNews from Sage.
To me, entertainment is what’s boring. Can anybody sing anymore? Hollywood can’t make any good movies anymore; all they do is rehash better movies. I’ll take computers any day compared to Hollystupid.
This is some of the most self-centered posting I’ve ever seen in an internet forum. A stream of posts saying “I don’t like it” or “It doesn’t impress me”. Would anybody like any cheese to go with that whine? Because it sounds an awful lot like small children who’ve been given too much, who’ve had it too good.
It sounds rather like:
Dudley Dursley: How many are they?
Uncle Vernon: 36, counted them myself.
Dudley Dursley: 36?! But last year – last year I had 37!
Uncle Vernon: Yes, yes, but some of them are quite a bit bigger than last year’s.
Dudley Dursley: I don’t care how big they are!
For OSes to get exciting there is only one way… computers have to change. It will happen, but not now. It will take 10 to 20 years. Can you wait that long? After that, voice command, facial and feature recognition, together with some other new and exciting technologies, will make interfacing with computers a lot more fun.
Computer… lights.
Back in 1991 I got tired of computers. After first using a teletype and timesharing on a mainframe back in the late ’70s as an elementary school student, to watching and using the evolution of the 8088, 286, 386, 486, Pentium, DOS, MS-DOS, WANG-DOS, UNIX, Commodore, Apple, Mac, TRS-80, etc., I got truly tired of what was going on and I stopped using computers.
Even today I can’t put my finger on the exact reason.
I understand what Eugenia is saying, I’m starting to get the same feeling that I had back then, can’t put my finger on it.
The closest way to describe it comes from the Scary Devil Monastery: all operating systems suck, all hardware sucks.
What you need today is a MMORPG experience which lets you interact with a world, not necessarily a game, where you can do your business, develop ideas. But online, in a virtual environment. MMORPG style environments will grow bigger and better, and people will be consumed by them. Building that kind of environment is the way to go.
I really love this! A place where geeks like me can call “home”! But not only a game, an extension of the real world, with banks, schools, works etc… OMFG this is like a dream… ^o^’
I really wonder why you feel that way. For instance, do you really feel there’s value in reimplementing usb stacks, ethernet drivers, basic network protocols, etc? Those things are provided for in Windows, OSX/Darwin and Linux. Even if you had something more revolutionary a la Eros and Plan9, you’d still need these elements to be compatible with the rest of the world.
Now that we have these basic elements across the philosophical spectrum (from the proprietary Windows, to the semi-proprietary OSX and Solaris, to the Free Linux, and the completely do-whatever-you-want BSDs) we can occupy ourselves with some new stuff on top of it. It’s called advancing the state of the art, or bringing the lowest common denominator up.
If you say nothing truly different or interesting is happening on these higher layers, then I feel you’re wrong.
Gnome and KDE are running on Linux, but so are a lot of other things, specialized (mobile phones, home appliances, hifi equipment, Palm will soon be based on Linux too) or not (E17, Qtopia, Rox, XFCE, …)
All these are wildly different, and if one wants to implement a feature but needs the kernel’s help, then that will most likely be implemented (inotify, low-latency, reiserfs, etc etc)
Another example is DragonFlyBSD – they’re doing some interesting things with respect to filesystem namespaces and integration with their packages, and loads of other goodies – but they didn’t have to rewrite the whole OS to do that; they started with FreeBSD.
The exciting stuff is happening at the higher levels, and so it should be. I imagine in the future there will be some userspace things everyone relies on, like we do on filesystems now; Once that happens we’ll get a couple of good competing implementations, and we’ll get on with life.
My point is – it’s called progress. There’s little use in starting from scratch over and over again. Maybe a bit, because hindsight is 20/20 and you’re not forced to live with the mistakes you made years ago, but on the whole there’s likely to be better and more interesting things to work on. (I’ve got plenty of ideas, but little time)
I can see some people here have forgotten the hacker attitude: if you don’t like it, you can make it better. Here are a couple of reminders:
1) The world is full of fascinating problems waiting to be solved.
2) No problem should ever have to be solved twice.
3) Boredom and drudgery are evil.
4) Freedom is good.
Read the rest here:
http://www.catb.org/~esr/faqs/hacker-howto.html#attitude
There’s also a fifth reminder: Attitude is no substitute for competence.
So work on it. And make it happen.
I think that there are many very interesting projects around that deserve much more attention (anyway, one of those may become the next-generation OS).
The problem is that the media only covers Windows and Linux and discards the rest.
Another problem is the unrealistic expectations about features: if an OS doesn’t have all the features Windows has, then it is not interesting (nobody cares if those features are really used or useful).
IMHO, the next generation will have a managed core with garbage collection and support for threads, exceptions, dynamic loading. Think of a Java or .NET environment running on the bare metal.
My two picks for a better world:
* AOS/Bluebottle http://bluebottle.ethz.ch/
* JNode http://jnode.sourceforge.net/
Eugenia, sorry to read crap posts like the above, but would you consider lessening the content on your news site so we only get good-quality articles and news items rather than the plethora of regurgitated crap we’ve seen over the last year? Trying to fill the news page with multiple items on a daily basis can be a hard task, but it is really disappointing to read time and time again articles from two-bit hacks on “why I like Ubuntu Linux” or some other.
There are many interesting things going on in the computing world but you have to scratch the surface. OSNews has become very tiring to read and now it’s a case of not bothering with the plethora of staid opinion pieces. If you want different ideas in computing OS’s then have a look at alternative computing technologies in development like quantum computing and advances in bio computing. Focusing on current (x86) hardware technology is going to limit your OS reporting options.
Eugenia,
What ever happened to that Sequel OS that was being worked on by David Reid and co?
3 choices are natural nowadays:
1) whoever only wants an operating system to run commercial and closed applications, and doesn’t care about freedom, open source or security, chooses Windows
2) whoever wants a stable, free (as in beer), open source, actively developed and free (as in freedom) operating system is well served by Linux.
3) whoever wants a sexy, secure desktop, doesn’t mind paying for overpriced and closed hardware, and doesn’t care about freedom (yes, Darwin is free and open source, but Mac OS X is not) chooses Mac OS X
Who will spend money buying Zeta, OS/2, SkyOS or any other closed operating system if they can choose windows or Mac OS X ?
Who will use another free operating system to run the SAME applications you use in Linux and can install easily with apt/urpmi/yum/etc., but with less driver support and fewer developers?
Creating operating systems means facing a tough dilemma: be creative and new and not have any native apps, or embrace POSIX and be static, old and boring, but have lots of apps that could be ported.
Because basically the other non-POSIX OSes today are Windows and OpenVMS, which share a common root in VMS and David Cutler.
Amiga and BeOS are… niche OSes at best.
Several points that it seems no one has addressed: research funding and commercial influence.
If you follow the money trail, BSD (and BSD’isms) evolved from DARPA research funding. ENIAC/UNIVAC evolved from US military funding too. Linux evolved as a response to MINIX and MINIX’s license (but MINIX was a research OS).
The commercial OS’s are a response to vendors and commercial licensees. z/os, Win 2K, Unix (Solaris, HP-UX, AIX, Tru64, etc), OpenVMS, TRON/KTRON, etc.
What are the major universities working on? Carnegie Mellon contributed to NeXT and Mach, which are now parts of OSX. What about UIUC, MIT, Cambridge University, Princeton, U Penn, etc.?
At the university level there’s:
K42 (along with IBM): http://www.research.ibm.com/K42/
i-acoma (at UIUC): http://iacoma.cs.uiuc.edu/
PUMA (at CMU): http://www.ece.cmu.edu/%7Epuma2/pub.htm
Shrimp (at Princeton) : http://www.cs.princeton.edu/shrimp/html/platforms.html
Coyotos (Johns Hopkins): http://coyotos.org
University of Utah took over some MACH development and is also working on: http://www.cs.utah.edu/research/areas/systems/
Hobbyist (but not hobby in the sense of building a model airplane):
openBEOS
syllable
spoon microkernel
SkyOS
and many more that I’ve probably missed.
The biggest problem that I see for many research and hobbyist operating systems is that they only boot in an emulator. Once they get to the point that they can boot from GRUB or LILO or their own bootloader, that’s when there’s greater acceptance of the OS (look at Syllable as an example of an OS booting from GRUB, and its acceptance).
I personally think that it’s an exciting time in OS research because, if you have ever read any of the older research, such as the microkernel or exokernel research from the late ’80s and early ’90s, the common limiting factor was the speed of hardware. The slowness of hardware I/O, bus speeds, memory caching and access, networking, etc. was a major problem for many OSes. Now that bus speeds, CPU speeds, I/O speeds, networks, etc. are faster and cheaper, many of these concepts can be revisited. Commodity hardware does cause OS bloat, because as an OS is accepted, support for more hardware needs to be added (but that’s a good thing). Open-source licenses have also allowed greater entry onto popular hardware. Look at Linux and NetBSD as examples of hardware acceptance vs. many commercial OSes, OpenVMS for example.
There’s plenty of research being done otherwise there wouldn’t be as many PhD’s awarded internationally in Comp Sci. Exciting times, exciting times indeed.
Here I thought you LIKED all those repetitive stories about small changes in Linux and the Mac O.S. which were changing the way we compute!
I’m not sure what you expected out of computing.
It’s a business, after all, and, not surprisingly, it follows business cycles of hype, investment, bust, and retrenchment.
If you thought someone was going to come and do something special after all these years, then you should have understood the lesson of Beos.
If 100 million dollars 10 years ago couldn’t crack the hold which Microsoft and Apple have on the computer market, what chance do a few small-time hobbyists have now?
You might do worse than read the article at the following U.R.L.: http://writing.borngraphics.com/work1.htm .
Mr. Born overstates his case a bit, but he has the right ideas.
I’m also surprised by your interpretation of computer history.
I recall Atari being quite prominent in computing in the ’80s – certainly more so than GEM.
It also seems a little weird to leave out the granddaddy of P.C. O.S.s, C.P.M.
The fact is that computers are business machines.
Now that they are in most American and European households, a common approach to things – which excludes major innovations not made by the big companies – is to be expected.
Do you really want a car with the brake pedal in a new position?
Probably not.
It might make more sense for the brake pedal to be over on the left where a different foot could access it (with the clutch – if any – in the middle), but it would be non-standard and would lead to problems.
Similarly with computer interfaces.
I agree that today’s computing environment is stifling.
Unfortunately, I don’t see that changing without a radical overthrow of the current system.
Maybe it’s time for some personal changes for you, instead, eh?
GNU HURD
I think writing a new operating system would be an awesome thing to do.
Just because Microsoft and Unix-like operating systems exist, doesn’t mean there isn’t room for other systems.
If Microsoft XP is so awesome, why are there new versions coming out? Apparently, there is a lot of room for improvement and without new ideas (features, changes in technology) Microsoft couldn’t keep charging money.
Features, radical ideas and improving upon existing technology are where the fun is in writing operating systems and/or components for them.
I am looking forward to new operating systems and improvements to existing ones.
I am a Unix Administrator by day, but enjoy hobbyist OSes and writing software for them at night.
“We need something fresh. Heck, something new and fresh indeed. Something INTRIGUING.”
There’s the reincarnation of AmigaOS in MorphOS for PowerPC — http://www.pegasosppc.com/operating_systems.php
I agree with Eugenia. OSes have gotten boring. But sooner than what she is alluding to.
Yes, Linux has had major changes in the core of the OS. And yes, the GUI has changed some. The same is a lot more true of Mac OS X. Windows hasn’t changed at all in three years. At least nothing to speak of.
But there really have been no dramatic visual or work-altering changes at the OS level in a long time. Meaning that none of the OS changes have altered the way I work. They may have made it easier or faster, but I still do things the same way.
People, please. The reason “OSes have gotten boring”, as Eugenia and others are whining, is quite simple.
Those who can (basically the Linux and BSD developers, and people who weren’t all that fond of the whole GUI/multimedia crapola) basically told those best described as those who can’t (basically the “UI” experts and the Trek-fetish fanboys who seem to follow them around) to take a hike.
The Animosity between these two groups has been building for years now.
I mean really who the heck in their right minds wants to use the operating system running the Enterprise, Deep Space Nine or Voyager for anything, let alone running a spacecraft?
Hello?
That particular operating system makes HAL (the computer/AI in the movie 2001) look mentally stable.
And this is basically what the advocates of “non-boring” operating systems seem to want to unload upon the world.
Excuse me when I say I want absolutely nothing to do with this garbage in any fashion whatsoever.
All you need to do is look at the reactions to the “reviews” of various articles posted here to see the widening gulf between the people who actually want to create OSes that reflect *THEIR* wants and desires, and the people who tend to want to sit around and do nothing but bitch about the decisions the previous group has made.
Has anybody changed their XP to Longhorn via Longhorn Transformation Pack? It’s awesome. I like the icons and themes. To me it also seems more stable. Just install in safe mode so your installer won’t interfere with antivirus or such. Also download the Smartbar XP instead of using the sidebar that came with the pack. Get it here: http://fileforum.betanews.com/detail/Longhorn_Transformation_Pack/1…
It seems to me that people are looking not so much for a new ‘from-scratch’ OS, but rather want a fresh approach to the user interface.
At the end of the day, most people are not interested in HOW the various hardware subsystems work, as long as they DO work relatively transparently. Sure, there are very different ways that these systems can be implemented, but the current power of pc hardware renders the differences not all that noticeable for most people.
The bit that makes the most difference is of course the interface. The divide between an intuitive, rapid-use gui and a clunker needing lots of clicks is huge, far greater than the differences in how OS builds handle hardware issues.
This is where people are living: the graphical front-end to the system. Some people love flashy effects, some prefer flat functionality. A small percentage (myself included) spend large amounts of time in the command line, but even we can appreciate an attractive, easy-to-use interface. This is where people seem to want “innovation” and “revolution” to strike.
After all, the WIMP interface invented by Xerox goes a long way back, and yet most of the major OS builds are still basing their GUI around these principles. Remember though that this was once a new, fresh and amazing way of using computers, and signalled the end of the command prompt for the average user (the AVERAGE user).
Now of course this is all old news, and the world is looking for a fresh way of working. Consider this: I work with a multi-monitor system most of the time, so when I’m forced onto a single monitor, I find myself screaming for more screen real estate! Surely here is an area for innovation in the interface. Is there a better way to manage lots of windows on one monitor? This doesn’t need a whole new OS, just a different shell or window manager. Check out SphereXP or Project Looking Glass, or Longhorn for that matter.
These are the sorts of issues that people want addressed. A better, more efficient way of managing the elements we see and interact with. Do we need a ‘start’ button? No, but we do need some convenient way of accessing all the things the start button contains.
One final thought: what about the humble mouse? Extra buttons, cordless, whatever; the principle is the same and has been for many years. We are all getting RSI and frozen shoulders using the damn things, but there are no cheap, commercially viable alternatives that are as convenient and easy to use. Why? Because the interfaces of all the major OSes are based around the mouse. It’s a symbiotic relationship: the mouse gets an extra button, so the OS gets a feature to take advantage of it. The reverse is also true.
If there was a revolutionary input device of whatever sort, it would need a new style of user interface to make full use of it, else it would only fade away quickly to the world of hardcore geeks. The reverse is again true; a truly revolutionary, non-WIMP interface needs a new input device to make it worth using.
Such is the inter-relationship of hardware and OS. ‘Break the mould’ is a great catch-cry, but in the PC environment it takes more than one area jumping on board.
I live in hope!
ArturasB wrote: “I’m missing ZX Spectrum times…. 64KB game was HUGE”
Just to be a pedant, the ZX Spectrum came with either 16K or 48K, and eventually also with 128K. But never 64K, as far as I can recall.
But you are quite right, the Spectrum was a great machine!
The only way to beat MS at their own game is to think the unthinkable; after all, that is what Bill Gates did in the early days. Alright, he did borrow ideas from elsewhere to get started, and he has done a good job since. Is it a great job? Well, who has the market share? After all, it is customer power, whoever you are, that does the talking.
So what is required? It involves creating an OS that is totally secure, is very easy to use through an intuitive GUI, uses RAM before all other storage, and has code that is written as tightly as possible.
By total security I mean tying down the vital elements of the OS as tightly as possible, so that no matter how anyone attempts to access a system, the attempt is checked and controlled by the system. Please don’t scream Unix, Linux and the like at me. None of these OSes are totally secure. However, if you have a double-checking system which looks not only at what is being asked to run, but also at where the commands are being executed from, and at whether it is a reasonable request (i.e. will it damage the system or not), then it will go some way towards clamping down on security.
Now for using RAM above all else. Why not? It is the fastest resource available on your system. Why, then, can you only use small amounts of it? Should you not be able to use as much as you like? OK, this is not part of the OS as such; however, why not get computer systems builders in on the act? After all, year on year RAM gets cheaper, so why can’t you add more to get the most out of your system? I can see it happening that 1TB flash memory cards will be used as drives and replace hard drives on laptops.
Lastly, tightly written code: nearly everyone complains about code bloat, but very few do anything to change this practice. As the customer, should you not be demanding this as a standard? Making the code as tight as possible will speed up all processes, make a system more stable and, above all, save space on your system.
End of rant
Joel Spolsky’s blog (Joel on Software) talks about why commercial companies are investing in FOSS software: it’s to drive down the price of the complements of their software. We’re pretty close to considering the OS platform a commodity these days. The OS, the desktop environment, and the applications are even becoming commodities, and interchangeable.
http://www.joelonsoftware.com/printerFriendly/articles/StrategyLett…
Many FOSS projects, and Gnome is a good example, are so homogeneous with Windows because conversion is a goal, and commoditization of the DE and application stack is key to that. Because what users want is BACKWARDS COMPATIBILITY for legacy applications. No wonder BeOS died. No wonder CrossOver Office and Win4Lin are so promising.
Millions of MS Office users and billions of legacy documents that OOo can’t handle are a pressing reality. This is why I think integrated virtualization technologies are where the most innovation is going to happen: Wine, Bochs, QEMU, UML.
It is time to: Ask not what the OS can do for you, ask what you can do with the OS!
i.e., it is the software on the OS that lets you do things.
I recently tested DevonThink (thanks to a link at <http://www.oreillynet.com/pub/wlg/6528>) as a way to organize the multitude of TIFFs and PDFs I have accumulated. It is the first time I have been this impressed with any software. In seconds I can search through all my PDF files and find what I am looking for. Even if Tiger ships with some of these features, I believe DevonThink will still be useful for some time to come.
Check it out and be prepared to be amazed!
<http://www.devon-technologies.com/products/devonthink/overview.php&…
RAM is wiped on power down; that’s why we’re using hard drives. And please, show me the machine that loads 1TB of data from the HD into RAM in less than a week.