Unfortunately, most of these technologies will have to wait for the “next, next” generation. The current “next generation” will really be about consolidating all the cool innovations of the 1980’s and early 1990’s into mainstream systems. Some comments on the options:
Super-high monitor resolution support (vectorized desktop)
This one is a no-go. It’s farking 2004, and there is still not a mainstream desktop LCD capable of going much above 100dpi. Methinks it’ll be another 5 years at least until we see desktop LCDs that even approach 150dpi (still not high enough to go fully vectorized without good hinting).
Artificial Intelligence
I’d love to see some AI techniques used to manage information clutter in current applications. In particular, managing window clutter when you’re doing heavy multi-tasking, and file clutter (maybe integrated with the search-based filesystem).
Usability.

In the end, computers are tools to help people accomplish tasks. Features do not directly translate to this end.

I think too often people, especially technology enthusiasts (as I assume all readers of this site are), tend to see technology as an end unto itself. We lose sight of the original goal, that is, to help us get things done.

This is what I see as one of the major problems with Microsoft technology: it gets in the way of what you are trying to accomplish by being overloaded with features. This often translates into an overall slower system that makes us wait. Computers are rather fast these days; we should never have to wait on the software to do rather simple tasks. Every time I am forced to use Windows, I find myself frustrated by this very thing.

</end of rant>
i completely agree, usability is the most important and most overlooked aspect of modern technology. good search, like the database filesystem listed above, is a good start toward transparency in tech.

now we just need to help less skilled users understand that just because the data isn’t spreadsheet’d out in front of you doesn’t mean it isn’t there
Web services & full network transparency across different protocols
I find #5 the most important feature. Web services already play a major role in one’s daily interaction with computers, and that role will only grow. An OS without network transparency is a handicapped OS that is all but useless.

Put me down for Rendezvous (OpenTalk): not only does it make things easier, it allows certain applications to even exist. This can fall under the “network transparency” heading.
There has been almost no change in computer user interfaces in the past decade. I remember the first computer I used, the Apple II GS. It had a mouse, a keyboard, a monitor, and speakers. It offered menus, buttons, textboxes, etc. as the software part of the interface. What’s changed?
The only UI option in this poll is “speech recognition.” I don’t see this as an improvement. I’m a slow typist, about 30 WPM, but I know that I can type as fast as I could speak to a computer, and edit text much faster. Also, speech recognition limits computers to pronounceable words. What computers really need is innovative technologies to exchange information between people and computers. Keyboards with keys on all sides of the hands, and that use the feet as well. Lasers that track eye movement (already used for speech synthesis in people with disabilities). Contact lens displays. What about direct neural interfaces? I know this last option scares some people, but it has been years since such systems were proven safe and effective in tests on monkeys, and even on one blind human subject.
Hardware-to-human interfaces are, in my opinion, the most ignored feature of computers. Are companies held back by the need to write new kinds of software to handle such devices? Is manufacturing too expensive? Are the customers too stubborn?
To moderate my rant, I will point out that some progress has been made in creating three-dimensional displays. I was in Cornell University’s CAVE a few years ago, and I was highly impressed. The graphics were nice, the depth was realistic, but the input was lacking. The control was a modified joystick. Not quite suited to the job.
Polish and integration: cramming in every feature and invoking every buzzword isn’t enough. It must all work in harmony. If the OS can barely do ’95-level stuff properly, then what’s the point? (Examples: not being able to automatically detect your monitor properly, poor OS installation setup, complicated partitioning, nonexistent add/remove facilities, an unrecognized wheel mouse, etc.)

Security is the important feature lacking in most OSs today (to a varying degree).
Fully hw accelerated desktop: nice, definitely useful on desktop, and pretty much useless on server. But is it really the most important thing to render in HW?
Palladium: Would need to get over my distrust before I would even want it, much less consider it important.

Vectorized Desktop: What I voted for. Even at non-super-high resolutions, clean scaling of a desktop is nice. Of course, I am a sceptic in the sense that for non-image-related work I don’t see much point in going above 1600×1200.

Web Services: This I would have voted for, but I am not sure (with or without Palladium) to what extent Joe Six-pack is ready for a requirement of an always/mostly connected desktop. Then there is the concern about possible exploits. This is definitely in the works for the future, IMHO, and will be important at some point; I’m just not convinced it is the immediate future.

DB File System: Useful, but also not new. It is a paradigm shift, though, unlike most of the others. It changes the form of use at the filesystem level, which I have problems getting my head around, I suppose.

64-bit and such: Just another ramp up in speed, which has been getting less and less important outside of games.

Speech Recognition: meh. (You can tell I am getting tired of typing.)

Plan 9: meh. (See what I mean?)

AI: meh. I’ll comment more when it becomes more likely to come out in the next 5 years.
“64-bit and such: Just another ramp up in speed, which has been getting less and less important outside of games.”

Shows what you know…

So enlighten me: outside of making your applications and IO faster, what do 64-bit computing and PCI-X give us? What application (outside of games) do you run that cannot be run on a 3GHz P4 or an equivalent Athlon with 1GB of RAM?

Server-side it has a bit more impact, but mostly in improving the performance/cost ratio.

This is a sincere question, by the way, not a flame. As I write this I am typing on a 1.4GHz Pentium M that I crank down to 600MHz except when compiling, now that Eclipse performance has improved on Linux. Occasionally I will kick it up, but 600 is fine for most non-3D programs.
That’s why I voted for capable voice recognition. If they ever get to Star Trek-level voice interfaces, the computer industry, and even our very culture, will be forever changed! It’s still quite a ways away, but when that wondrous revolution comes, I’ll be absolutely ecstatic!
Speech recognition sure seems like an improvement to me. I’d love being able to just kick back, all hands-free style:
“What’s happening computer?”
“Bring up osnews for me would ya?”
“Cool, now let me check out that article on future OS features.”
Granted, this little scenario might fall into the AI category more than anything, but speech recognition definitely plays a part.
And man, just because researchers have had some success with getting monkeys to play video games with just thoughts or whatever, doesn’t mean that sorta technology is ready for prime time.
“So enlighten me: outside of making your applications and IO faster, what do 64-bit computing and PCI-X give us? What application (outside of games) do you run that cannot be run on a 3GHz P4 or an equivalent Athlon with 1GB of RAM?”
64-bit isn’t so much about speed (indeed, some apps will actually be somewhat slower in 64-bit mode due to larger pointer sizes and such) as being able to address > 4GB of RAM, which becomes a concern for instance in high resolution image editing and various other fields.
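To make the address-space point concrete, here is a minimal C sketch (illustrative only; the 6GB figure is arbitrary) of the wall a 32-bit process hits no matter how much RAM is installed:

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        /* pointer width decides how much one process can address */
        printf("pointer size: %zu bytes\n", sizeof(void *));

        unsigned long long want = 6ULL << 30;   /* 6 GB */
        if (want > (size_t)-1) {
            /* 32-bit build: the request cannot even be expressed */
            printf("6 GB does not fit in size_t here\n");
            return 0;
        }
        char *p = malloc((size_t)want);
        printf("6 GB allocation %s\n", p ? "succeeded" : "failed");
        free(p);
        return 0;
    }

Compile the same file with 32-bit and 64-bit toolchains and you get exactly the two behaviours described above.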
Intelligent Applications. I’m talking about applications that morph based on usage patterns and other environment variables in your operating system. I think that’s the future of application development.

I also see a rise in database-like applications on the desktop, not just filesystems. In fact, Intelligent Applications will need to store data about users’ behaviour in database-like structures over time, and then use the information to generate user trends and patterns.

So imagine an application that keeps getting faster and faster to use, because not only is the application morphing based on your usage patterns, its UI is also evolving, and perhaps even its code is being reinterpreted and optimised for your usage scenarios (possible in VM-like systems even today).

Another area that needs a revamp, in my opinion, is user interfaces. Users should be able to interact with applications and their system completely via the keyboard. I can count numerous operating systems in which users are forced to use the mouse to interact with applications or the system. And those that allow object manipulation via the keyboard are still quite clumsy. That’s wrong!

Perhaps we need more key inputs on next-generation keyboards. I really look forward to the evolution of 3D interfaces. Of course they’ll complement 2D interfaces, but 3D interfaces will usher in a new mode of interaction, especially with multimedia applications.
Well, all I know is that none of these will be coming from MS. 🙂 I couldn’t help it, guys.
OS virtualization is a missing choice. Virtualization lets you make a single computer look like many, for example a web hosting company that makes a single machine look like ten different ones to its customers.

Virtualization on the desktop is interesting from the security side. Don’t assume VMware; that is just one of many ways to virtualize.

Systems can be virtualized, but so can networks and storage systems. Other twists on virtualization are Java JITs and Transmeta chips.
“Unfortunately, most of these technologies will have to wait for the “next, next” generation. The current “next generation” will really be about consolidating all the cool innovations of the 1980’s and early 1990’s into mainstream systems.”
At least four out of nine are innovations of the 1980’s and early 1990’s. (And if they appear everywhere or disappear forever, I really don’t seem to care.)
* Super-high resolution support (vectorized desktop)
* Database-like filesystem able to search both filenames and (meta-data about?) content
* 64-bit
* Plan9-like feature: exchange hardware functionality over network
I’m not sure if “Fully hardware-accelerated 2D/3D desktop” or “Artificial Intelligence” could be considered 1980’s-1990’s cutting-edge tech. If you think so, that’s two more.
Artificial Intelligence could really use some work, though, in order to be useful outside of industry. (In my opinion, people playing with little robots slowed progress in AI by five to ten years…)
Now, hardware-based security is a great idea. We really need this. It would have been my vote, but Palladium? No. Never.
Speech recognition was my vote. If it could be made to work, it could be really useful. (I don’t expect it to ever handle navigating a GUI, and it really shouldn’t need to.)
“Web Services & full network transparency across different protocols”? I don’t know if I should care about this. Anyone want to fill me in?
>>64-bit isn’t so much about speed (indeed, some apps will actually be somewhat slower in 64-bit mode due to larger pointer sizes and such) as being able to address > 4GB of RAM, which becomes a concern for instance in high resolution image editing and various other fields. <<
Again, outside of a small, specialized community, how is this important? Things for the 2% matter to the overall evolution of computers, but not as much as other factors. Once bandwidth gets much better, realtime image processing/decoding gains in importance, but even that is better off being pushed to specialized hardware (graphics cards in this case). What app, outside of high-resolution image editing (and to need in excess of 4GB, that is an EXTREMELY high-resolution image), is out there that will improve the overall user experience?
In a way you are proving my point. The need for speed/memory is very important for splinter communities and specialized fields but there is no driving requirement for it at present. Longhorn is likely to change this requirement, but that is quite a way ahead.
I don’t see how you think you’re going to get by without recompiling your code when you’re switching from windows to some non-i386 setup. If the hardware is completely different, the assembly code is completely different.
And why is simply needing a java virtual machine a bad thing?
how about building a computer that is on when you turn it on, and off when you turn it off. in other words, no boot up cycle! also why do they still put floppy controllers on motherboards? these things were supposed to be gone with the last generation of motherboards.
That’s one of those ideas that flourished in old science fiction but has turned out to be impractical. Not because we can’t get it to work, but because it’s about as useful in real life as personal jetpacks or food pills. If you think people talking on cell phones is annoying, try listening to them all talking to their computers as well.
To be sure, speech recognition has its uses. These uses are called “applications”… which are usually distinguished from the OS.
Allowing more than 4GB of RAM enables tons of things:
1) For people doing scientific computing, large memories allow more detailed simulations of larger and more complex systems. For many types of simulations (eg: chemical reactions), there is pretty much no upper bound on how much memory you need.
2) For people doing 3D modeling, or video or image editing, large memories means that they can operate on larger scenes or longer videos.
3) For engineers, large memories mean that engineering programs can do more detailed analysis of larger structures taking into account a larger set of parameters.
4) For developers, huge memories mean that optimizers and other program-analysis tools can operate on more of the program graph at once. It enables the use of slicing tools, whose program graphs can reach ten gigabytes or more.
Aunt Nellie might never need a 64-bit machine, but there is a huge group of people who do.
I’d say all of the items listed on the poll! Those kinds of technologies and interfaces enable the kind of interaction with a computer that have only been dramatised up until now.
I would think speech recognition would become quite the hassle for the surrounding environment. As a college student, having everyone bitching at their comps to do this or that would become even more annoying than the inconsiderate users who leave their AIM sounds on. I see speech recognition as about as useful as those media buttons present on today’s keyboards: good for turning up your volume or launching the occasional application, but aside from that, pointless. I think the more appropriate focus should be on ease of use, and I’m not talking Clippy. User interfaces should be made simpler, with the advanced options hidden except for the more advanced users. The future OS should take UI cues from Apple and improve upon Microsoft’s efforts at keeping users’ antivirus definitions and system software up to date. Also, disabling the firewall should be made almost impossible, with its usability increased. SP2 finally allowed the default firewall to become user-friendly with its intelligent port opening, but this is still quite advanced for the average user.
“What is the “plan 9 -like exchange hardware functionality over network” all about?”
My (very) limited understanding is that computers in a network can specialize in what hardware they offer. One computer can have terabytes of storage but limited everything else. Another can be all processors, etc. The network puts it all together and acts as one big distributed computer with network access points.
But I don’t know just how accurate my description is…
IMO, the most important modern or future OS feature is good, up-to-date, clear, concise, correct documentation. I like lovely html in a /usr/share/doc subdirectory, possibly linked somehow with a nice little GUI configuration tool fully outfitted with help buttons and tooltips.
Lots of the cool new technology coming up might be interesting to try out, but without great docs (possibly along with simple GUI configuration tools), it means spending hours online hunting through mailing list messages and message boards trying to get stuff working.
Most of the choices could be great, but most are not necessary. Also they should be integrated to the point where we do not have to think about them.
Most computers that I deal with, I want to do their job and stay out of my sight, out of my way, and out of my mind. Consider the router, for example. It just works. What a great computer!

The one computer that I stare at all day needs to run multiple telnet sessions, multiple browser windows (and programs; no one browser is best at everything), and needs to allow nice printing (multiple fonts, sizes, blah, blah). What I don’t want it to do is “help” me. I don’t want it to “fight” me. I don’t want it to protect me. I like a nice screen, easy on the eye, that can display 60 or more lines of text. Nice keyboards are nice, too.
I see everything getting better and cheaper all of the time. I remember surfing the web with a fast P90. And 64Meg of RAM. Wow! (and NT3.51, with “Chicago” on the way.)
Speech recognition gets my vote, not because I want to tell my computer to open my web browser or go to specific web pages (those can be done much more easily with a few mouse clicks), but to interact with me and gather metadata.

I would like my VoIP phone conversations or GnomeMeeting video calls automatically transcribed, so that when keywords are spoken in conversation, Dashboard can give me feedback on the subject, including previous conversations on the same subject.

Maybe it could recognise who is speaking, and Dashboard would give me information about the person.

Maybe I could ask the computer which episode of Buffy had the joke about the pterodactyl. It would know, because the last time I played my Buffy DVDs it was automatically transcribing the speech to create metadata and storing it with details about the episode it extracted from the DVD.
There are already some useful applications of speech recognition such as electronic phrase book programs, but I think it’s the area of desktop integration and metadata collection where it will start to become a useful tool rather than a toy.
OK, I can see these reasons far more than the previous ones. However, except compiler optimizations, most could be handled with existing 64-bit platforms though at a higher cost.
Take a program optimized on a system with infinite memory. Compared to the exact same program compiled on a system with 4GB, what would the performance gain be on a computer with 4GB? 8GB? 16GB? I would not expect more than a 5-10% boost, and most of that would not make a significant difference if the host platform is not under load.

Again, I don’t disagree that it would be useful to certain groups, but does that make it critical for the next-gen OS? (Continuing this debate out of sheer stubbornness rather than a firmly held belief.)
1) Certainly, OpenGL widget sets have been available for a very long time.
2) A lot of the infrastructural work that is being done now, to get multiple apps to share a single OpenGL pipe in a high-performance manner, was done by SGI more than a decade ago for IRIX. Indeed, the IRIX solution will still be more advanced than what Longhorn, OS X, or Linux will offer, because it fully virtualizes the graphics hardware, allowing you to throw more GPUs in the system as easily as you throw CPUs into the system.
The Plan 9 feature is actually amazingly interesting.

In essence, you can ‘import’ the CPU of another machine with one simple command. The code then runs on that processor, while everything else, such as screen output, open files, keyboard input, and sound output, comes from the local console. (Note: this is very, very different from a remote X session.)

Building a cluster is pretty much a no-brainer now; just import a handful of processors into your local namespace.

Of course, other sorts of hardware can be imported. Want to have sound output at some gadget (say, your tuner running Plan 9) rather than your local host? Just import its sound device. Need to use a modem connection at another host? Import that device. Use a remote printer (running Plan 9)? Import that device. There is no need for fancy handshake protocols; everything is handled by the OS in a general device-handling way.

And not only devices, of course, but anything that resembles a file, which in Plan 9 really is everything (unlike Unix, where it’s almost everything, except some things are handled specially, e.g. /dev/*, named pipes, etc.).
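For flavour, here is a rough Plan 9 C sketch of grafting a remote machine’s audio device into the local namespace, written from memory (the host name is invented, and in practice the import command does all of this in one line; check dial(2), bind(2) and import(4) before trusting any detail):

    /* A sketch of what import does under the hood; Plan 9 only. */
    #include <u.h>
    #include <libc.h>

    void
    main(void)
    {
        int fd;

        /* dial the remote machine's exportfs service */
        fd = dial(netmkaddr("gadget", "tcp", "exportfs"), 0, 0, 0);
        if(fd < 0)
            sysfatal("dial: %r");

        /* authenticate and mount the remote namespace at /n/gadget */
        if(amount(fd, "/n/gadget", MREPL, "") < 0)
            sysfatal("mount: %r");

        /* bind its audio device over the local one; from now on,
           writes to /dev/audio play on the remote box */
        if(bind("/n/gadget/dev/audio", "/dev/audio", MREPL) < 0)
            sysfatal("bind: %r");
    }

Because everything speaks 9P, the same few lines work for /proc, /net, a printer, or anything else the remote machine exports.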
The thing is that there is an increasing trend to moving to commodity hardware to do this sort of work. Google uses cheap x86 PCs. Lockheed has announced that they’ll move 10,000 Solaris desktops to Linux-based x86 machines. ILM has moved to Linux-based x86 machines with NVIDIA hardware. Pixar’s render farm consists of cheap x86 machines. So there will definitely be a demand for large memories in commodity hardware.
With regards to the optimization example: Stalin, a whole-program compiler for Scheme, in some cases generates code several times as fast as a regular C compiler (which, lacking whole-program knowledge, has to be very conservative about its assumptions). Also, using whole-program optimization allows the compiler to eliminate a lot of abstraction, so straightforward and elegant code can perform as well as well-tuned, low-level code. In addition, slicing is a critical innovation that will make every developer want to go buy 8GB G5s. Being able to see how changing one value in your program can affect other values is an enormous booster of productivity.
…it’s really difficult to fully respond to your question without giving away ideas that can and will lead to tangible products, but trust me, the user has not seen anything yet.

Even office applications still have innovations left in them; it’s just that the incumbent party is not really big on innovation ( http://osnews.com/comment.php?news_id=7848 ).

Do you know where the majority of human brain/mind function takes place? In the subconscious. And by this I am talking about a huge amount, > 70%.
This is what is going to happen to the pc. The number of background processes and metadata that is “assisting” the immediately obvious part of the application will far exceed that/those “visible” parts.
On a 5-terabyte HD, metadata will constitute at least 60% of the data that is stored on your computer. This is because for each unit of “real/concrete” data there will be several meta-elements… and concurrently, some of the metadata will have metadata, and so on…

This will result in the emergence of “multidimensional” computers, i.e. an additional and alternative component of achieving AI.
E.g.:

1.) “How Many Ways” by Toni Braxton
Forget the usual MP3 etc. metadata. How about this simple fact: it is a song. Now this fact can be linked to the assumed system-wide dictionary and thesaurus, so the computer has better knowledge of what it is dealing with (sorry, I can’t add more here, already gave away too much).

The dictionary + thesaurus have become metadata in this particular instance. Taking the definitions of the aggregate synonyms of the word “song” (and the word “song” itself), one can then go onto the web (actually, as a background task) and “create” more “dimensions” from this data that will empower the user in whatever they are doing.
What you have created above is basically a system/OS that functions in the way the W3C hopes the semantic web will function.
So, just wait, the next five years will be exciting and by 2010 you should start to see the emergence of a totally new kind of OS – at least from the user perspective.
When Jobs introduced OS X (yes – with all its warts, and debug code :-], etc.) he said he had one more platform left in him. Here’s to the next “Star Trek” a.k.a OS XI.
BTW: All the stuff listed will have become commodity in OSs before 2015 at the latest.
In fact, implementing an OS that functions like the semantic web (SWb) will outdo many of the above “innovations” in terms of the abilities it will add to the computer.

Some think of SWb as a better web services, but it is more than that. SWb adds “depth” to a given context. Let’s say web services are integration and differentiation; SWb would be limit/(abstract) algebra theory and the “proofs” of what integration and differentiation ARE. This is what I mean by “adding a dimension”.

Sorry to be vague, but I am only a first-semester comp. sci. student and it will be some time before my coding skills catch up to my ideas. Thus I do not want to give away everything, although I might have here. Well, there’s still more; like I said, the fun times are yet to come ::-]…
Make the OS simpler. Follow the lines of Slackware: make things simple and not over-burdened. Because with all of this “emerging and amazing new technology”, it will be quite difficult to track problems down, fix them, and keep a happy and stable operating system.
Make it Unix-esque, but not Unix. Forget about the standard Unix filesystem. How many Windows Grandmas are going to know that /bin is where system programs are kept? Keep system tools small and agile, not bloated. Make each tool good at one thing: basically, use the GNU suite of tools.
Don’t use a registry, and don’t use RPMs. But it would be extremely helpful to have a background process that profiles new programs and keeps a database of what’s installed and what’s commonly used. That way, you know what you have, and you know what you need.
Make updates easier. For many non-Un*x gurus, a kernel compile can be quite a task (unless you’re on Gentoo or FreeBSD; they have scripts to handle that).
Also, do what Apple did. Make a powerful scripting language that can interface to programs, is easy to understand, and is included with the system so that even moderately literate computer users can get powerful stuff done.
seems far-fetched, and though the need hasn’t yet come, in the long term it will. for the first generation of AI, it’ll be best implemented IMO as another layer on top of a (simple) OS platform, possibly as another userspace app. its task will mainly be to help streamline our daily tasks and optimize things to make usage of the OS fast and easy. then this AI, coupled with realtime speech recognition (as opposed to voice recognition) and possibly image recognition as well (via live camera feeds), will be very cool. of course, we need multilevel security to safeguard data/the OS/the hardware; everything the AI does can be manually overridden, of course. this will pretty much open up a whole new set of exciting possibilities for us users.
Because many of the things we need can be done by the computer. And heck, these are things we are sort of doing now. Take GNOME’s Dashboard. While I am working on something else, Dashboard is out there finding stuff out for me. It would be nice if, while I am writing an email about a movie I just saw, the machine were in the background grabbing information about the movie, and while I am talking about the actors, the system would recognize that I am starting to type an actor’s name and automatically suggest the names.

Right now, applications are intelligent within their own context. Look at IDEs that can suggest function names and are constantly updating their internal cache of the objects you have created, in real time. When you are typing, the IDE suggests options and allows you to get on with your work without having to remember the little details.

Now expand this across the entire desktop, and you have a real winner. Most other things, like the 2D/3D-accelerated desktop, the database backend for the filesystem, and even voice recognition, are merely tools that a good AI system could use. An AI system would be able to work with you and really help you get your job done.

Intelligent apps could do other things too. Take, for example, an email client. It would automatically recognize which emails you read the most, which emails you respond to, and the amount of time you usually take to respond, and determine from that and other parameters which emails are important to you. So while you are working, it will notify you of these important emails, the emails the machine knows you want to read and answer.

The computer could be set to read the emails to you, and then ask you if you want to reply. You could merely say “Yes”, using the voice recognition, and then simply dictate your response, or type, as you prefer. Does the email contain an appointment? If so, the computer would file it away in the database filesystem.
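The “which mails matter” part would not even need real AI to get started, just bookkeeping. A toy C sketch of the idea (the fields, weights and numbers are all invented for illustration):

    #include <stdio.h>

    /* Toy model: score a sender by how often you reply and how fast. */
    struct sender_stats {
        int    received;          /* mails received from this sender */
        int    replied;           /* mails you actually answered */
        double avg_reply_hours;   /* how quickly you usually answer */
    };

    static double importance(const struct sender_stats *s)
    {
        if (s->received == 0)
            return 0.0;
        double reply_rate = (double)s->replied / s->received;
        /* fast habitual replies push the score up, slow ones damp it */
        double speed = 1.0 / (1.0 + s->avg_reply_hours / 24.0);
        return reply_rate * speed;
    }

    int main(void)
    {
        struct sender_stats boss = { 120, 110, 1.5 };
        struct sender_stats list = { 500,   5, 72.0 };
        printf("boss: %.2f  mailing list: %.2f\n",
               importance(&boss), importance(&list));
        return 0;   /* a real client would notify above a threshold */
    }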
Of course, all of this takes up resources, which is why faster computers, 64-bit processors, more RAM, and better video cards are important.
You might think this is just a dream, but so was flight for thousands of years, and I don’t think we will have to wait nearly as long for the above to really take shape.
we got games that can emulate both physics and to have 3 dimensional sound, but we still have to control our computers the way we’ve always done.
our input devices are mostly limited to what we are capable to do with our hands, though it do exist readers which can determin where on the screen you’re looking.
the most high-precision device that’s available for the common man is a mouse, and I’ve long wondered why that is so. Everyone can imagine and think up shapes, colours and pictures. very very few can put these on paper, yet even few on a screen using a mouse.
to me, 3d modelers are the easiest way to “paint”, so I really think we should get better input devices.
1) DB File System: Due to the workings of the stupid JavaScript poll, I can’t get the exact wording for this option now that I’ve voted and navigated away from the page. However, Be had this five or six years ago, so it’s not new, and anyone who does it in the next generation will just be playing catch-up.

2) Vectorized desktop, etc.: Yawn. The 2D multiple-workspace metaphor that Be, Mac OS X, and multiple X environments use is pretty much mature. The desktop is about as mature as it’s going to get, and has been for years.

3) 64-bit: Not that important. It will be useful for clusters, but the need to constantly upgrade your desktop has dropped heaps since the P2 generation. Computers since then are fast enough to do anything a typical user will need, which is to run comprehensive browsers, word processors, and email programs.

4) AI: This is completely vague, and I can’t see what the operating system has to do with it.
Web services have the potential to become cool. I hope Mozilla starts embracing and extending the browser experience soon, to the point of introducing new widgets and new means of writing to the browser. If they don’t, Microsoft will, so they should wake up and move now, while they have the advantage.

Nevertheless, I voted for the Plan 9 option. Their approach to componentising hardware through the operating system is entirely cool, and it is relevant from a home user with a couple of peripherals and only one or two computers right up to huge installations like Google, where redundant hardware and grid computing matter most.
I was unable to participate; the poll must be down. It didn’t show up in my browser. I tried IE and Firefox and it didn’t show up, and yes, I do have JavaScript enabled.
We’ve been lucky to live in the interesting time when computers went from multi-transistor hand-wired hand-programmed in machine language devices to the real general communication and computation devices they are today. The progress we saw was explosive, and if you only look at that short span of time, it’s tempting to get caught up in the whole Moore’s Law way of thinking.
The revolution is over, thanks for coming. Computers are already wickedly fast — faster than most folks need them to be (for email, web browsing, document creation, and sending pics and vids around to friends). Saying things like, “in the future, we’ll have terabytes of memory and AI and voice recognition and…” is like saying, “in the future, pickup trucks will have wings, and you’ll fly them to work and …”.
Pickup trucks have stayed pretty much the same over the years because they’re good at what they do, and don’t need to do anything else. Desktop computers are rapidly approaching that sort of balance point (if not already there).
Up ’til now, engineers have cheated nature again and again to get these chips smaller and faster than before. Well, if you follow the literature, you’ll see that you can only make these incremental improvements so long before hitting the law of diminishing returns. For example, do you realize how expensive it is to try and develop new lithography techniques to try and bring feature sizes down any further than they are? Can these companies rationalize these enormous expenditures just so Joe Sixpack’s MS Word auto backup takes a second or two less time?
So, how things are going to get better is primarily convergence on open standards (protocols, file formats, hardware interfaces, user GUIs, etc.), and hopefully the elimination of foolishness like software patents.
Maybe your settings are too “tight” on security, or you are using a proxy (like I do; OS Forums is broken with the proxy). And I see that go2poll is down.
“With regards to the optimization example: Stalin, a whole-program compiler for Scheme, in some cases generates code several times as fast as a regular C compiler (which, lacking whole-program knowledge, has to be very conservative about its assumptions)”
I agree with you, mostly. Two remarks, though:

– Analysing one compilation unit at a time (= one file, on any architecture I know) is not a must with C. For example, the latest MS C++ compiler can do compilation across the whole program. I don’t have any experience with the performance gain, though.

– For number crunching, I wish I could drop C (C++ for that matter, but that’s the same), which is a real pain for what I do. Matlab, which is a high-level ‘language’, is several times faster than C for matrix multiplication and such (it uses old Fortran libraries). But Matlab lacks real programming features like classes and easy extension for GUIs. Python would be a killer, but for now the change between Numeric and numarray is a real pain.
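Worth noting: the tuned Fortran libraries Matlab sits on (the BLAS) are just as callable from C; the slowness is the naive triple loop, not the language itself. A sketch using the standard CBLAS interface (assuming a CBLAS implementation such as ATLAS is installed and linked):

    #include <stdio.h>
    #include <cblas.h>

    int main(void)
    {
        /* C = alpha*A*B + beta*C with 2x2 row-major matrices */
        double A[] = { 1, 2, 3, 4 };
        double B[] = { 5, 6, 7, 8 };
        double C[] = { 0, 0, 0, 0 };

        cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    2, 2, 2,      /* M, N, K */
                    1.0, A, 2,    /* alpha, A, lda */
                    B, 2,         /* B, ldb */
                    0.0, C, 2);   /* beta, C, ldc */

        printf("%g %g\n%g %g\n", C[0], C[1], C[2], C[3]);
        return 0;   /* prints 19 22 / 43 50 */
    }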
In fact, one real revolution that few people in the FOSS community want to recognise, as far as I know, is the .NET principle of running anything from any language you want. No more C, C++, or Java just because it is the only way to use this fu*** library. I am still baffled by this idea. I am trying right now to use some C++ libraries of my own with Gtk#; it would be great to be able to choose a language for its capabilities, and not for its libraries, anymore.
I prefer reliability and stability, because they allow you to get work done faster. I’d also prefer an OS that consumes less RAM. With current apps it seems that in 10 years’ time you’ll need a Hexium 666GHz, a GeForce 1234, and 100GB of RAM to run a notepad application. UI is in second place for me. Midnight Commander is a must-have for every OS. It’s the best for coding.
I doubt many people taking the poll actually know what this is. If they’d take the time to look up what is possible with an OS like Plan9, then this particular option would have been voted for a whole lot more.
Capability-based security. This would fix almost all our current viruses, worms, malware, etc., and make creating new ones very hard, while making secure programs very easy to write.
I don’t like the idea that my computer thinks it’s smarter than me, or that it tracks my usage patterns trying to learn what I like and what I don’t like. I don’t want computers making decisions for me without asking me first.
I like to be in control. Dumb computers, intelligent users: that’s the kind of combination I like. Intelligent computers can become very difficult to use if you suddenly want them to do something unexpected and unusual. People will say: “You can always turn the AI off.” But what will happen when you can no longer turn it off? Who will have the control then?
As far as I’m concerned, I’ll try to keep my computer as dumb as possible for as long as I can. It’s good to have some tasks done automatically for you, but all this AI stuff just gives me the creeps.
Plan 9 is very interesting but a bit arcane to install. However, if you wish to use Plan 9 in VMware, you can skip the installation process: download a VMware machine with Plan 9 pre-installed and log in as glenda.
“I don’t see how you think you’re going to get by without recompiling your code when you’re switching from windows to some non-i386 setup. If the hardware is completely different, the assembly code is completely different.
And why is simply needing a java virtual machine a bad thing?”
That’s right, hardware is a problem; I didn’t think about that. Leave out the “with no recompiling”: I meant to say without changing a single letter in the code, just recompiling on other architectures.

A single build on an x86 system should be sufficient for all OSs on that architecture.

What I’m looking for is no dependencies.

Java is a dependency on almost every OS.

And since I have a fairly old PC (PII 233MHz), large Java programs don’t run as smoothly as native binaries. On fast PCs it might not be noticeable.
Better integration between apps. This is much better on Windows than on Linux/Unix, but it could still be much better.

I want every piece of displayed data to be draggable, so I can drag an e-mail straight from Outlook/Mozilla Mail onto the desktop, and from there into a Word/OpenOffice document. Then I drag the office document into a PDFalizer and from there into a data channel to a friend.
I think user interfaces are not obvious enough to use. People should start experimenting with new usage ideas, so by evolution the easiest way to use an app wins!
Just get me the hell away from that beige box, humming beneath my desk and I’ll be happy.
Plan 9 features are critical to this – I can take a low power system with me, while running high power processes over the network.
Speech recognition is also essential. I’m not talking here about people sitting at their desks, talking to the box in front of them – I’m talking about using your mobile phone to access your data and the internet.
That’s not to say that more inventive interfaces can’t be implemented without speech. You only need one button to implement a menu system and that covers 90% of human-computer interaction.
I have to go walk my dog now, which means leaving the Internet behind (not that I mind, but it’s the principle…).
A high-level extendable object framework for the operating system that encourages the development of plugins or components rather than complete programs.

In a well-designed operating system (with components rather than programs), it should be possible to combine a Photoshop-like program with an automation-sequencer program to make an Adobe After Effects-type utility (video effects).
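On a Unix-like system the everyday approximation of this is a shared library exporting an agreed-upon table of entry points. A minimal C sketch of a host loading an image-effect component (the effect_ops contract, file name, and symbol name are all invented; build the plugin with cc -shared -fPIC and link the host with -ldl):

    /* host.c: load a component and call through its vtable */
    #include <stdio.h>
    #include <dlfcn.h>

    /* the contract every image-effect component agrees to export */
    struct effect_ops {
        const char *name;
        void (*apply)(unsigned char *pixels, int w, int h);
    };

    int main(void)
    {
        void *lib = dlopen("./blur.so", RTLD_NOW);
        if (!lib) {
            fprintf(stderr, "%s\n", dlerror());
            return 1;
        }

        /* by convention, every plugin exports a symbol named "ops" */
        struct effect_ops *ops = dlsym(lib, "ops");
        if (ops) {
            unsigned char image[64 * 64] = { 0 };
            printf("running effect: %s\n", ops->name);
            ops->apply(image, 64, 64);
        }
        dlclose(lib);
        return 0;
    }

A sequencer built the same way could then drive any effect component it finds, which is the After Effects-style combination described above.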
PS: Dave’s brilliant post, which will certainly reside alongside mine in the ‘moderated down’ section (wow, those just happen to be the most informed), is excellent. Editors: clueless. Readers: blissfully ignorant.
John, I am not sure why you included my name in your response, considering my post was primarily about software innovations and the things you say are “done” are primarily hardware and “wire”, i.e. networks and protocols (don’t focus on the tone; I am not offended, just too lazy to be diplomatic ;-]).
Before you even get into what I said in my post, which is more futuristic (as in 5-10 years, not 100), consider the fact that while the rest of the IT and scientific community and industry have advanced a lot over the last 20 years, the software has not.
That’s why Apple is able to release upgrades on a yearly basis that have averaged 140 new features over three years. That’s why Longhorn will be a complete rewrite, i.e. they are polishing old concepts (Object Pascal, CORBA, Java, etc.). I mean, look at the stuff they are getting rid of (Hungarian notation, anyone? Finally MS follows good design ideas advocated by one of their own ( http://images.amazon.com/images/P/0735619670.01.LZZZZZZZ.jpg )) and implementing:
-vector graphics
-compositing engines
-db filesystems
-3d interfaces
-etc.
I mean, look at this list. Nothing new, but because software has largely remained stagnant while almost every other aspect of the industry has moved on, companies are just now playing catch-up and making these features standard OS components.
If certain companies were in very powerful positions, we probably would not see these for some time to come. E.g., after the WWDC, one of the best summaries I read said something to the effect of “CoreImage and CoreVideo put the power of Photoshop in a single programmer’s hands”. I have not seen the demo app, but I heard it was amazing, and it was coded in a week by a single programmer.
Similarly with Longhorn’s Avalon technology, which has the combined abilities of PDF, Quartz, ePaper, eForms, CSS, HTML, SVG, etc. All these previously somewhat disparate technologies are now in a single well-designed API/library.
Now that these technologies are being “commoditized”, i.e. put into well-designed APIs/libraries, companies whose products were largely what is now in these libraries will be forced to truly innovate. E.g., Adobe now has to do better, because any company that was making pro-user apps can now easily scale up the abilities of its applications by taking advantage of stuff Adobe has been doing for years but which can now be found in CoreImage and CoreVideo on OS X.
Another way to show what I am saying is to look at SFX houses. Until a little over 5 years ago, in order to get outsourced SFX work on a major movie, you had to be one of the big boys, e.g. ILM.
Now along comes Apple. They buy FCP (ver. 1, I think) from Macromedia, who had shelved the project because they did not know what to do with it, revamp it, and literally DISRUPT the industry by releasing it at a previously unheard-of price: 999.00 USD. So now, instead of shelling out 15k for a workstation and 50k for software, you can get it cheap.
As a result, 5 years later, you have small outfits doing this stuff for EXPONENTIALLY less (in one of the articles I link, the guy says that before, you had to have a 100k system to do some of the stuff they are doing now on their PowerBooks).
I could go on and on, but the moral of the story is that while hardware may be approaching a wall, software is just getting started. And now that Apple is “fit” again, you can be assured of a continued rapid pace of innovation from all companies, because they tend to push harder than, say, a certain incumbent that has had the throne for the last decade and a half. And we wonder why software lags so much; or is it just a coincidence?
I mean, for …….., the development community is just now moving onto design patterns. Design patterns are to OO what structured programming was to PP, and yet for over a decade and a half there has been a lot of intellectual masturbation over polymorphism, true OO (C++) vs. fake OO (VB, pre-.NET).
We were glorifying the tools and arguing over curly braces, and now we finally realise that things like polymorphism and inheritance are just attributes of the tool that CAN BE (vs. HAVE TO BE) leveraged, and design patterns are just now entering the mainstream.
Like I said before – software is just getting started…
I voted for ‘other’, and here’s my reasoning (didn’t find ‘sandbox’ by search so probably not mentioned yet?)
Transparent sandboxing, or ‘jailing’ as it already exists in *BSD (I haven’t tried it, though). This means you can start running _any_ binary loaded off the net without concerns about privacy, trojans, etc.

While things like OS X are less subject to virus attacks, the chance of someone making trojans for them is there, just as much as for _any_ other system.

For this, we need automatic sandboxing of new applications, with gradually lifted privileges as the applications prove to be well-behaved, useful, or both.
I believe this is part of Microsoft’s LH agenda (but then again, what ain’t!?) but personally, I want to wait until Apple does it the Right Way.
Anyhow, without this any current/future system will be too exposed to system-wide exploits.
> Transparent sandboxing, or ‘jailing’ as it already exists in *BSD
No, no, no! Sandboxing is an ugly bubble-gum fix for defects of a bad security model. If you instead switch from ACLs to object-capabilities you’ll get “sandboxing” for free. (Although then it isn’t called “sandboxing” anymore, for the same reason an object in an OOP language isn’t said to be “sandboxed” just because it doesn’t have access to all pointers in the system.)
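For readers wondering what object-capabilities look like in practice: a Unix file descriptor is already a primitive capability, an unforgeable handle that carries its own authority. A small C illustration of the style (the path is just an example):

    #include <stdio.h>
    #include <unistd.h>
    #include <fcntl.h>

    /* Capability style: this routine can touch ONLY what it is handed.
       It has no way to name other files into existence. */
    static void count_bytes(int fd)
    {
        char buf[4096];
        ssize_t n;
        long total = 0;
        while ((n = read(fd, buf, sizeof buf)) > 0)
            total += n;
        printf("%ld bytes\n", total);
    }

    int main(void)
    {
        /* only the caller exercises the authority to open the file;
           what it passes on is an unforgeable handle */
        int fd = open("/etc/hostname", O_RDONLY);
        if (fd < 0)
            return 1;
        count_bytes(fd);
        close(fd);
        return 0;
    }

An object-capability OS pushes this discipline all the way down: a program starts with no ambient authority at all and can only ever use handles it was explicitly given, which is where the free “sandboxing” comes from.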
Stability and security are the steak. All the rest is sizzle. In those two areas we are still in the dark ages.
Why does there have to be a one-size-fits-all OS for consumers and business/industry/government alike? There should be an operating system just for grandmas. You turn it on and there is a web browser, an email client, a word processor, a personal finance app, some games, and multimedia apps, and that’s it: nothing else. Want to install Dreamweaver, Norton SystemWorks, or Apache? Forget it! There would be nothing to install, nothing to partition, nothing to configure (except your Internet connection and email account) and no registry to screw up. The user would have ZERO access to the file system except for his Documents folder and the subfolders contained therein. Grandma doesn’t need a 64-bit ultra-high-res 3D desktop, and she shouldn’t have to be plugging in network, audio, and graphics cards. Windows is not that OS, and neither is Linux. Think Mac OS X/iLife with everything locked down, running on $400 commodity PC hardware instead of $1400 Mac hardware.
“I want every piece of displayed data to be draggable, so I can drag an e-mail straight from Outlook/Mozilla Mail onto the desktop, and from there into a Word/OpenOffice document. Then I drag the office document into a PDFalizer and from there into a data channel to a friend.”
That is in no way integration. But, done well and done enough, it would make things easy for newbies and hackers alike.
The most important element of an OS is that it works and it works well. By this I mean that it should be available when I start the hardware. It should not be unavailable at any time while I am using it. It should be responsive and transparent. Usability is key. I should be able to trust that the system is in my complete control instead of any third party such as the vendor or an intruder. And most importantly, it must allow me to get my work done.
the only thing that i want in any future os is speed, speed: instant response and awesome speed. i am so tired of the crappy slow speeds of windows and linux; be it because of i/o or anything else, i don’t care, i want speeed!!!!!!!!!!!
Super-high monitor resolution support (vectorized desktop)
Web Services & full network transparency across different protocols
Database-like filesystem able to search both filenames and content
64-bit and support for faster PCI/AGP protocols
Truly capable speech recognition
Plan9-like feature: exchange hardware functionality over network
Artificial Intelligence
… and tell me that, with the exception of Plan 9, this isn’t some kind of Microsoft or Apple survey along the lines of a “What part of our work in progress are you most excited about?” question that gets sent right to marketing.
Now …
Reliability
Non-bloat
Consistency – all apps should use the same widget set
Security
End of DLL/shared library and dependency hell
Open standards
.. this is what I’d really look for in something I’m actually going to use or pay money for. Satisfy this list and you’d have one of the best OS’s out there, regardless of other factors.
Yes, yes. They’re not actually features. If you get real pedantic, they’re products of proper development models and good design decisions. We need love in every bit…
Plan 9… I think Plan 9 and BeOS should inspire people. They risked incompatibility to start over clean. I think one of the best things Be did was the (arguably) simplest: start fresh. And Plan 9 made great use of refactoring, actually growing smaller at times as development went along. Plus, bringing #if, #ifdef, #else, and #elif to near extinction should be praised. And sure, 8½ (now Rio?) might be a bit ugly… but its elegance by recursion is just, well… super-spiffy!
Some re-thinking from Be and Plan 9, reliability from zOS and OpenVMS, code auditing from OpenBSD, security from some current research projects… you might just have the perfect OS.
I say to everyone, take your accelerated desktop and shove it! Give me my dream OS! Gimme!
//nothing more to ramble on about without direction, in a near-incoherent manner, increasingly off topic, with bad spelling and… ACK!! I’ve been sacked!
>> Transparent sandboxing, or ‘jailing’ as it already exists in *BSD
>
> No, no, no! Sandboxing is an ugly bubble-gum fix for defects of a bad security model.
I think we’re talking about the Same Thing. Notice I said ‘transparent’, meaning the user wouldn’t need to explicitly set up the sandbox.
Personally, I believe in gradual developments, and this kind of a system would be able to sandbox old applications (binaries) as well, not only the ones specifically ‘set up’ to be boxed.
Anyhow, we probably agree access control is important. There are too many apps requiring me to give an admin password (OS X), and even the ones that don’t can mess up my home dir if they like. Something needs to be done.
I tried Plan 9 many times as a desktop. It is so different that I must absorb it stepwise. But it’s great! I think that if I get a job building some cluster or similar environment, I’ll try to deploy Plan 9 and learn it.

Its ability to share hardware is based on the 9P protocol; that’s the only thing to worry about. Everything is a file server; that’s the way of sharing hardware, and it’s elegant.
Total integration. Every task/concept/event has EXACTLY one handler in the system. The interface is like a super application.

Everything between you and your data (it is, after all, data that this is all about) should be totally transparent. There should be no concept of applications, and certainly no stupid names for different parts of the system. No, I don’t want to start Mozilla; I want to access a resource. No, I don’t want a web browser; I WANT TO ACCESS A RESOURCE.

I shouldn’t be trapped inside an application when manipulating resource objects. All tools of the system should be available at all times.
“I want every piece of displayed data to be draggable, so I can drag an e-mail straight from Outlook/Mozilla Mail onto the desktop, and from there into a Word/OpenOffice document. Then I drag the office document into a PDFalizer and from there into a data channel to a friend.”
I agree with your sentiments, but you should probably come up with some better examples, because you’ve been able to do what you describe above since at least the mid 90s.
> Every task/concept/event has EXACTLY one handler in the
> system. The interface is like a super application.
> […]
> There should be no concept of applications, and certainly no
> stupid names for different parts of the system. No, I don’t want
> to start Mozilla; I want to access a resource. No, I don’t want
> a web browser; I WANT TO ACCESS A RESOURCE.
Umm… but one “application” (or “module” or whatever you want to call the resource handler) is better at something, another one is better at something else. (E.g., some web pages might only be viewable in Mozilla and some others only in Opera. One image processor has some features and another one has others.) How do you solve this dilemma?
As a visually impaired user, I consider accessibility the most important feature. The OS should provide a way for a user to access everything reasonable using his/her preferred input and output devices; i.e., there should be no specific input or output device required, and any combination should be handled by the OS.
Immediate boot (I mean seeing a Windows or Linux session in a ready state one or two seconds after making the box “live” from the full-off state), or no-reboot (that is, reloading the OS kernel without passing through the shutdown, POST, stage-1 loader, stage-2 loader steps).

AFAIK the latter is feasible under Linux, but it could be extensively generalized and applied to other OSs as well, and would be well complemented by the use of a more advanced prelink or slab allocator feature (I’m thinking of DragonFlyBSD).

The former could be implemented by means of an advance in ACPI standards, some modifications to the BIOS memory, and fast solid-state storage.
Unfortunately, most of these technologies will have to wait for the “next, next” generation. The current “next generation” will really be about consolidating all the cool innovations of the 1980’s and early 1990’s into mainstream systems. Some comments on the options:
Super-high monitor resolution support (vectorized desktop)
This one is a no-go. It’s farking 2004, and there is still not a mainstream desktop LCD capable of going much above 100dpi. Methinks it’ll be another 5 years at least until we see desktop LCDs that even approach 150dpi (still not high enough to go fully vectorized without good hinting).
Artificial Intelligence
I’d love to see some AI techniques used to manage information clutter in current applications. In particuarl, managing window clutter when you’re doing heavy multi-tasking, and file clutter (maybe integrated with the search-based filesystem).
Usability.
In the end, computers are tools to help people accomplish tasks. Features do not directly translate to this end.
I think too often people, especially technology enhusiasts, like I assume all readers of this site are, tend to see technology as an end unto itself. We lose site of the original goal, that is, to help us get things done.
This is what I see as one of the major problems with Microsoft technology, it gets in the way of what you are trying to accomplish by being overloaded with features. This often translates into an over-all slower system which makes us wait. Computers are rather fast these days, we should never have to wait on the software to do rather simple tasks. Every time I am forced to use Windows, I find myself frustrated about this very thing.
</end of rant>
i completely agree, usability is the most important and most overlooked aspect of modern technology. good search, like the database filesystem listed above, are a good start to transparency in tech.
now we just need to help less skilled users understand that just because the data isn’t spreadsheet’d out in front of you, doesn’t mean it isn’t there
Web services & full network transparency across different protocols
I find #5 the most important feature. Web services play a major role in one’s daily interaction with computers which will grow in the future. An OS without network transparency is a handicapped OS that’s as well useless.
Put me down for Rendezvous(OpenTalk), not only does it make things eaiser, it allows certain applicatons to even exist.
This can be under the “network transparency” thing.
There has been almost no change in computer user interfaces in the past decade. I remember the first computer I used, the Apple II GS. It had a mouse, a keyboard, a monitor, and speakers. It offered menus, buttons, textboxes, etc. as the software part of the interface. What’s changed?
The only UI option in this poll is “speech recognition.” I don’t see this as an improvement. I’m a slow typer, about 30 WPM, but I know that I can type as fast as I could speak to a computer, and edit text much faster. Also, speach recognition limits computers to pronouncable words. What computers really need in inovative technolagies to exchange information between people and computers. Keyboards with keys on all sides of the hands, and that use feet as well. Lasers that track eye movement (allready used for speach synthesis in people with disabilities). Contact lens displays. What about direct neural interfaces? I know this last option scares some people, but it has been years since such systems have been proven safe and effective with tests on monkeys, and even one blind human subject.
Hardware-to-human interfaces are, in my opinion, the most ignored feature of computers. The only area Are companies held back by the need to write new kinds of software to handle such devices? Is manufacturing too expensive? Are the customers too stuborn?
To moderate my rant, I will point out that some progress has been made in creating three-dimensional displays. I was in Cornell University’s CAVE a few years ago, and I was highly impressed. The graphics were nice, the depth was realistic, but the input was lacking. The control was a modified joystick. Not quite suited to the job.
Polish and integration: cramming in every feature and invoking every buzzword isn’t enough. It must all work in harmony. If the OS can barely do ’95-level stuff properly then what’s the point? (Examples: not able to automatically detect your monitor properly, poor OS installation setup, complicated partitioning, non-existent add/remove facilities, unrecognized wheel mouse, etc…)
Security is the important feature lacking in most OSs today (to a varying degree).
Fully hardware-accelerated desktop: nice, definitely useful on the desktop, and pretty much useless on a server. But is it really the most important thing to render in HW?
Palladium: would need to get over my distrust before I would even want it, more or less consider it important.
Vectorized Desktop: What I voted for. Even for non-super-high resolutions, clean scaling of a desktop is nice. Of course, I am a sceptic in the sense that for non-image-related work I don’t see much point in going above 1600×1200.
Web Services: This I would have voted for, but I am not sure (with or without Palladium) to what extent Joe Six-pack is ready for a requirement of an always/mostly connected desktop. Then there is the concern with possible exploits. This is definitely in the works for the future IMHO and will be important at some point, just not convinced it is the immediate future.
DB File System: Useful, but also not new. This is a paradigm shift though, unlike most of the others. It changes the form of use at the File System level, which I have problems getting my head around, I suppose.
64-bit and such: Just another ramp up in speed, which has been getting less and less important outside of games.
Speech Recognition: meh (you can tell I am getting tired of typing).
Plan 9: meh (See what I mean?)
AI: meh. I’ll comment more when it becomes more likely to come out in the next 5 years.
64-bit and such: Just another ramp up in speed, which has been getting less and less important outside of games.
Shows what you know…
So enlighten me: outside of making your applications and IO faster, what does 64-bit computing and PCI-X give us? What application (outside of games) do you run that cannot be run on a 3GHz P4 or equivalent Athlon with 1GB RAM?
Server side it has a bit more impact, but mostly in improving the performance/cost ratio.
This is a sincere question, by the way, not a flame. As I type this I am on a 1.4GHz Pentium M that I crank down to 600MHz except when compiling, now that Eclipse performance has improved on Linux. Occasionally I will kick it up, but 600 is fine for most non-3D programs.
That’s why I voted capable voice recognition. If they ever get to Star Trek-level voice interfaces, the computer industry, and even our very culture, will be forever changed! It’s still quite a ways away, but when that wondrous revolution comes, I’ll be absolutely ecstatic!
Speech recognition sure seems like it’s an improvement to me. I’d love being able to just kick back, all hands-free style:
“What’s happening computer?”
“Bring up osnews for me would ya?”
“Cool, now let me check out that article on future OS features.”
Granted, this little scenario might fall into the AI category more than anything, but speech recognition definitely plays a part.
And man, just because researchers have had some success with getting monkeys to play video games with just thoughts or whatever, doesn’t mean that sorta technology is ready for prime time.
Simple – speech recognition – it’ll revolutionise a plethora of areas over the next 50 years… – not, of course, limited to desktop PCs.
AI because the amount of data is exploding at a rate faster than we can make use of it as information.
>>>>
So enlighten me: outside of making your applications and IO faster, what does 64-bit computing and PCI-X give us? What application (outside of games) do you run that cannot be run on a 3GHz P4 or equivalent Athlon with 1GB RAM?
<<<<
64-bit isn’t so much about speed (indeed, some apps will actually be somewhat slower in 64-bit mode due to larger pointer sizes and such) as being able to address > 4GB of RAM, which becomes a concern for instance in high resolution image editing and various other fields.
Intelligent Applications. I’m talking about applications that morph based on usage patterns and other environment variables in your operating system. I think that’s the future of application development.
I also see a rise in database-like applications on the desktop, not just filesystems. In fact, Intelligent Applications will need to store data about users’ behaviour in database-like structures over time, and then use the information to generate user trends and patterns.
So imagine an application that keeps getting faster and faster to use, because not only is the application morphing based on your usage patterns, its UI is also evolving, and perhaps even its code is being reinterpreted and optimised for your usage scenarios (possible in VM-like systems even today).
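As a toy sketch of what that morphing could look like at the UI level (Python, with all names invented; no real toolkit is being modeled here): menu entries float toward the top as you use them, and the counts are exactly the kind of usage profile such an application would persist.

```python
# Toy usage-adaptive menu. Illustrative only: no real toolkit API.
from collections import Counter

class AdaptiveMenu:
    def __init__(self, items):
        self.items = list(items)
        self.uses = Counter()            # the stored "usage pattern"

    def invoke(self, item):
        self.uses[item] += 1             # every use updates the profile

    def render(self):
        # Most-used entries surface first; ties keep original order.
        return sorted(self.items, key=lambda i: -self.uses[i])

menu = AdaptiveMenu(["Open", "Export", "Print"])
for _ in range(3):
    menu.invoke("Export")
print(menu.render())                     # ['Export', 'Open', 'Print']
```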
Another area that needs a revamp, in my opinion, is user interfaces. Users should be able to interact with applications and their system completely via the keyboard. I can count numerous operating systems in which users are forced to use the mouse to interact with applications or the system. And those that allow object manipulation via the keyboard are still quite clumsy. That’s wrong!
Perhaps we need more key inputs in next-generation keyboards. I really look forward to the evolution of 3D interfaces. Of course they’ll be complementing 2D interfaces, but 3D interfaces will usher in a new mode of interacting, especially with multimedia applications.
Well, all I know is that, none of these will be coming from MS. 🙂 I couldn’t help it guys.
It would be great if in the future there would be universal standards and that every OS developer implements them.
Example: If I create a windows program, I want it to run, without recompiling on MacOS, Linux, Unix, …
This can be done with a single, universal API.
Something like that would be awesome in my opinion.
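To make the motivation concrete, here is a hedged Python sketch of the per-OS divergence a universal API would abolish; the launcher commands are the conventional ones on each platform, stated as assumptions rather than guarantees.

```python
# Even a trivial task branches per OS today; a universal API would
# make this boilerplate disappear.
import subprocess
import sys

def open_in_default_browser(url):
    if sys.platform == "win32":
        subprocess.run(["cmd", "/c", "start", "", url])
    elif sys.platform == "darwin":
        subprocess.run(["open", url])        # macOS
    else:
        subprocess.run(["xdg-open", url])    # most Linux/BSD desktops

open_in_default_browser("https://www.osnews.com/")
```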
OS virtualization is a missing choice. Virtualization lets you make a single computer look like many. For example, a web hosting company can make a single machine look like ten different ones to its customers.
Virtualization on the desktop is interesting from the security side. Don’t assume VMware. That is just one of many ways to virtualize.
Systems can be virtualized, but so can networks and storage systems. Another twist on virtualization is Java JITs and Transmeta chips.
Java. =)
“Unfortunately, most of these technologies will have to wait for the “next, next” generation. The current “next generation” will really be about consolidating all the cool innovations of the 1980’s and early 1990’s into mainstream systems.”
At least four out of nine are innovations of the 1980’s and early 1990’s. (And if they appear everywhere or disappear forever, I really don’t seem to care.)
* Super-high resolution support (vectorized desktop)
* Database-like filesystem able to search both filenames and (meta-data about?) content
* 64-bit
* Plan9-like feature: exchange hardware functionality over network
I’m not sure if “Fully hardware-accelerated 2D/3D desktop” or “Artificial Intelligence” could be considered 1980’s-1990’s cutting-edge tech. If you think so, that’s two more.
Artificial Intelligence could really use some work, though, in order to be useful outside of industry. (In my opinion, people playing with little robots slowed progress in AI by five to ten years…)
Now, hardware-based security is a great idea. We really need this. It would have been my vote, but Palladium? No. Never.
Speech recognition was my vote. If it could be made to work, it could be really useful. (I don’t expect it to ever handle navigating a GUI, and it really shouldn’t need to.)
“Web Services & full network transparency across different protocols”? I don’t know if I should care about this. Anyone want to fill me in?
I just wanted to add full hardware independence and OS Freedom to the list 😉
I just wanted to add full hardware independence and OS Freedom to the list 😉
Sounds like someone is in need of NetBSD…
Quote:
“Java. =)”
I should probably also mention: without using a virtual machine, or anything else that compiles the code just in time.
I mean “real” binaries, ones that don’t need to rely on some framework or JIT compiler or virtual machine etc…
… some framework …
I mean a 3rd-party framework, or one that’s not already in the OS.
The universal API I’m talking about is of course a framework too, but one that comes with the OS, and one I hope will be universal.
>>64-bit isn’t so much about speed (indeed, some apps will actually be somewhat slower in 64-bit mode due to larger pointer sizes and such) as being able to address > 4GB of RAM, which becomes a concern for instance in high resolution image editing and various other fields. <<
Again, outside of a small, specialized community, how is this important? Things for the 2% are important to the overall evolution of computers, but not as important as other factors. Now, once bandwidth becomes much better, realtime image processing/decoding gains in importance, but even these are better off being pushed to specialized hardware (graphics cards in this case). Outside of high-resolution image editing (and to need in excess of 4GB, that is an EXTREMELY high-resolution image), what is out there that will improve the overall user experience?
In a way you are proving my point. The need for speed/memory is very important for splinter communities and specialized fields, but there is no driving requirement for it at present. Longhorn is likely to change this requirement, but that is quite a way ahead.
I don’t see how you think you’re going to get by without recompiling your code when you’re switching from windows to some non-i386 setup. If the hardware is completely different, the assembly code is completely different.
And why is simply needing a java virtual machine a bad thing?
How about building a computer that is on when you turn it on, and off when you turn it off? In other words, no boot-up cycle! Also, why do they still put floppy controllers on motherboards? These things were supposed to be gone with the last generation of motherboards.
question…
What is the “plan 9 -like exchange hardware functionality over network” all about?
Speech recognition was the joke option, right?
That’s one of those ideas that flourished in old science fiction but has turned out to be impractical. Not because we can’t get it to work, but because it’s about as useful in real life as personal jetpacks or food pills. If you think people talking on cell phones is annoying, try listening to them all talking to their computers as well.
To be sure, speech recognition has its uses. These uses are called “applications”… which is usually distinguished from the OS.
Allowing more than 4GB of RAM enables tons of things:
1) For people doing scientific computing, large memories allow more detailed simulations of larger and more complex systems. For many types of simulations (eg: chemical reactions), there is pretty much no upper bound on how much memory you need.
2) For people doing 3D modeling, or video or image editing, large memories means that they can operate on larger scenes or longer videos.
3) For engineers, large memories mean that engineering programs can do more detailed analysis of larger structures taking into account a larger set of parameters.
4) For developers, huge memories mean that optimizers and other program-analysis tools can operate on more of the program graph at once. It enables the use of slicing tools, whose program graphs can reach ten gigabytes or more.
Aunt Nellie might never need a 64-bit machine, but there is a huge group of people who do.
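To make the ceiling concrete, a quick back-of-the-envelope check in plain Python (nothing assumed beyond the standard library):

```python
# 2^32 bytes is the hard limit of a 32-bit address space.
import ctypes

bits = ctypes.sizeof(ctypes.c_void_p) * 8   # pointer width, in bits
space_gib = 2 ** bits / 2 ** 30             # addressable GiB
print(f"{bits}-bit pointers address {space_gib:,.0f} GiB")
# 32-bit: 4 GiB total, and only 2-3 GiB usable per process in practice.
# 64-bit: ~17 billion GiB, so a 10 GB program graph fits trivially.
```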
I’d say all of the items listed on the poll! Those kinds of technologies and interfaces enable the kind of interaction with a computer that has only been dramatised up until now.
I would think speech recognition would become quite the hassle for the surrounding environment. As a college student, having everyone bitching at their computers to do this or that would become even more annoying than the inconsiderate users who leave their AIM sounds on. I see speech recognition as being about as useful as those media buttons present on today’s keyboards. It might be good for turning up your volume or launching the occasional application, but aside from that, pointless. I think the more appropriate focus should be on ease of use, and I’m not talking Clippy. User interfaces should be made simpler, with the advanced options hidden away for the more advanced users. The future OS should take UI cues from Apple and improve upon Microsoft’s efforts at keeping users’ antivirus definitions and system software up to date. Also, disabling of firewalls should be made almost impossible, with the usability increased. SP2 finally allowed the default firewall to become user-friendly with its intelligent port opening – but this is still quite advanced for the average user.
“What is the “plan 9 -like exchange hardware functionality over network” all about?”
My (very) limited understanding is that computers in a network can specialize in what hardware they offer. One computer can have terabytes of storage, but limited everything else. Another can be all processors, etc. The network puts it all together and acts as one big distributed computer with network access points.
But I don’t know just how accurate my description is…
IMO, the most important modern or future OS feature is good, up-to-date, clear, concise, correct documentation. I like lovely HTML in a /usr/share/doc subdirectory, possibly linked somehow with a nice little GUI configuration tool fully outfitted with help buttons and tooltips.
Lots of the cool new technology coming up might be interesting to try out, but without great docs (possibly along with simple GUI configuration tools), it means spending hours online hunting through mailing list messages and message boards trying to get stuff working.
Most of the choices could be great, but most are not necessary. Also they should be integrated to the point where we do not have to think about them.
Most computers that I deal with, I want to do their job and stay out of my sight, out of my way and out of my mind. Consider the router, for example. It just works; what a great computer!
The one computer that I stare at all day needs to run multiple telnet sessions, multiple browser windows (and programs; no one browser is best at everything), and needs to allow nice printing (multiple fonts, sizes, blah, blah). What I don’t want it to do is “help” me. I don’t want it to “fight” me. I don’t want it to protect me. I like a nice screen, easy on the eye, that can display 60 or more lines of text. Nice keyboards are nice too.
I see everything getting better and cheaper all of the time. I remember surfing the web with a fast P90. And 64Meg of RAM. Wow! (and NT3.51, with “Chicago” on the way.)
Simple sharing of resources across the network, including printers and displays, as well as raw computing power.
Speech recognition gets my vote, not because I want to tell my computer to open my web browser or go to specific web pages – those can be done much more easily with a few mouse clicks – but to interact with me and gather metadata.
I would like my VoIP phone conversations or GnomeMeeting video calls automatically transcribed, so that when keywords are spoken in conversation Dashboard can give me feedback on the subject, including previous conversations on the same subject.
Maybe it could recognise who is speaking, and Dashboard could give me information about the person.
Maybe I could ask the computer which episode of Buffy had the joke about the pterodactyl. It would know, because the last time I played my Buffy DVDs it was automatically transcribing the speech to create metadata and storing it with details about the episode that it extracted from the DVD.
There are already some useful applications of speech recognition such as electronic phrase book programs, but I think it’s the area of desktop integration and metadata collection where it will start to become a useful tool rather than a toy.
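As a sketch of that metadata idea (the transcript tuples are invented; a real system would get them from a recognizer), once speech is reduced to timestamped text, the Buffy question above becomes a plain index lookup:

```python
# Toy keyword index over a speech transcript. Data is made up.
transcript = [
    (12.5, "buffy", "episode 4"),
    (13.1, "pterodactyl", "episode 4"),
    (40.2, "patrol", "episode 5"),
]

index = {}
for seconds, word, episode in transcript:
    index.setdefault(word, []).append((episode, seconds))

print(index.get("pterodactyl"))   # [('episode 4', 13.1)]
```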
OK, I can see these reasons far more than the previous ones. However, except for compiler optimizations, most could be handled with existing 64-bit platforms, though at a higher cost.
Take a program optimized on a system with infinite memory. Compared to the exact same program compiled on a system with 4GB, what would the performance gain be on a computer with 4GB? 8GB? 16GB? I would not expect more than a 5-10% boost, and most of that would not make a significant difference if the host platform is not under load.
Again, I don’t disagree that it would be useful to certain groups, but does that make it critical for the next-gen OS? (Continuing this debate out of sheer stubbornness rather than a firmly held belief.)
With regard to 3D-accelerated 2D UIs being new:
1) Certainly, OpenGL widget sets have been available for a very long time.
2) A lot of the infrastructural work that is being done now, to get multiple apps to share a single OpenGL pipe in a high-performance manner, was done by SGI more than a decade ago for IRIX. Indeed, the IRIX solution will still be more advanced than what Longhorn, OS X, or Linux will offer, because it fully virtualizes the graphics hardware, allowing you to throw more GPUs in the system as easily as you throw CPUs into the system.
The Plan 9 feature is actually amazingly interesting.
In essence, you can ‘import’ the CPU of another machine with one simple command. The code then runs on that processor; everything else, such as screen output, open files, keyboard input, sound output etc., is from the local console. (Note, this is very, very different from a remote X session.)
Building a cluster is pretty much a no-brainer now: just import a handful of processors into your local namespace.
Of course, other sorts of hardware can be imported. Want to have sound output at some gadget (say your tuner running Plan 9) rather than your local host? Just import its sound device.
Need to use a modem connection at another host? Import that device.
Use a remote printer (running Plan 9)? Import that device. No need for fancy handshake protocols etc.; everything is handled by the OS in a generic device-handling way.
Not only devices, of course, but anything that resembles a file (which in Plan 9 really is everything, unlike UNIX where it’s almost everything, except some things are handled specially, e.g. /dev/*, named pipes etc.).
The thing is that there is an increasing trend to moving to commodity hardware to do this sort of work. Google uses cheap x86 PCs. Lockheed has announced that they’ll move 10,000 Solaris desktops to Linux-based x86 machines. ILM has moved to Linux-based x86 machines with NVIDIA hardware. Pixar’s render farm consists of cheap x86 machines. So there will definitely be a demand for large memories in commodity hardware.
With regards to the optimization example: Stalin, a whole-program compiler for Scheme, in some cases generates code several times as fast as a regular C compiler (which, lacking whole-program knowledge, has to be very conservative about its assumptions). Also, using whole-program optimization allows the compiler to eliminate a lot of abstraction — so straightforward and elegant code can perform as well as well-tuned, low-level code. In addition, slicing is a critical innovation that will make every developer want to go buy 8GB G5s. Being able to see how changing one value in your program can affect other values is an enormous booster of productivity.
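To make “slicing” concrete, here is a toy backward slicer for straight-line assignments; it is only a sketch, since real slicers build whole program-dependence graphs, which is exactly where the multi-gigabyte memory appetite comes from.

```python
# Toy backward slicer (Python 3.9+ for ast.unparse): which earlier
# statements can affect the value of `target`?
import ast

SRC = """
a = 1
b = a + 2
c = 7
d = b * c
e = a - 1
"""

def backward_slice(src, target):
    needed, keep = {target}, []
    for stmt in reversed(ast.parse(src).body):
        if isinstance(stmt, ast.Assign) and stmt.targets[0].id in needed:
            keep.append(stmt)
            needed |= {n.id for n in ast.walk(stmt.value)
                       if isinstance(n, ast.Name)}
    return [ast.unparse(s) for s in reversed(keep)]

print(backward_slice(SRC, "d"))
# ['a = 1', 'b = a + 2', 'c = 7', 'd = b * c']  (e = a - 1 is irrelevant)
```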
…it’s really difficult to fully respond to your question without giving away ideas that can and will lead to tangible products, but trust me, the user has not seen anything yet.
Even office applications still have innovations left in them; it’s just that the incumbent party is not really big on innovation ( http://osnews.com/comment.php?news_id=7848 ).
Do you know where the majority of human brain/mind function takes place? In the subconscious. And by this I am talking about a huge amount, > 70%.
This is what is going to happen to the PC. The number of background processes and the amount of metadata “assisting” the immediately obvious part of the application will far exceed the “visible” parts.
On a 5-terabyte HD, metadata will constitute at least 60% of the data stored on your computer. This is because for each unit of “real/concrete” data there will be several meta-elements (1.5 units of metadata per unit of concrete data already puts metadata at 60% of the total), and concurrently, some of the metadata will have metadata of its own, and so on…
This will result in the emergence of “multidimensional” computers, i.e. an additional and alternative route to achieving AI.
e.g.
====
1.) How Many Ways – by Toni Braxton
Forget the usual mp3 etc. metadata. How about this simple fact: it is a song. Now this fact can be linked to the assumed system-wide dictionary and thesaurus, so the computer has better knowledge of what it is dealing with (sorry, I can’t add more here, already gave away too much).
The dictionary + thesaurus have become metadata in this particular instance. Taking the definitions of the aggregate synonyms of the word song (and the word song itself), one can then go onto the web (actually, as a background task) and “create” more “dimensions” from this data that will empower the user in whatever they are doing.
What you have created above is basically a system/OS that functions in the way the W3C hopes the semantic web will function.
So, just wait, the next five years will be exciting and by 2010 you should start to see the emergence of a totally new kind of OS – at least from the user perspective.
When Jobs introduced OS X (yes – with all its warts, and debug code :-], etc.) he said he had one more platform left in him. Here’s to the next “Star Trek” a.k.a OS XI.
BTW: All the stuff listed will have become commodity features in OSes by 2015 at the latest.
In fact, implementing an OS that functions like the semantic web (SWb) will outdo many of the above “innovations” in terms of the abilities it will add to the computer.
Some think of SWb as a better web services, but it is more than that. SWb adds “depth” to a given context. Let’s say web services are integration and differentiation; SWb would be limit theory/(abstract) algebra and the “proofs” of what integration and differentiation ARE. This is what I mean by “adding a dimension”.
Sorry to be vague, but I am only a 1st-semester comp-sci student and it will be some time before my coding skills catch up to my ideas. Thus I do not want to give away everything – although I might have here. Well, there’s still more – like I said, the fun times are yet to come ::-]…
Make the OS simpler. Follow the lines of Slackware: make things simple and not over-burdened. Because with all of this “emerging and amazing new technology”, it will be quite difficult to track problems down, fix them, and keep a happy and stable operating system.
Make it Unix-esque, but not Unix. Forget about the standard Unix filesystem. How many Windows grandmas are going to know that /bin is where system programs are kept? Keep system tools small and agile, not bloated. Make each tool good at one thing: basically, use the GNU suite of tools.
Don’t use a registry, and don’t use RPMs. But it would be extremely helpful to have a background process that profiles new programs and keeps a database of what’s installed and what’s commonly used. That way, you know what you have, and you know what you need.
Make updates easier. For many non-Un*x gurus, a kernel compile can be quite a task (unless you’re on Gentoo or FreeBSD, they have scripts to handle that).
Also, do what Apple did. Make a powerful scripting language that can interface to programs, is easy to understand, and is included with the system so that even moderately literate computer users can get powerful stuff done.
But those are just my ideas.
It would be very nice to have drivers that can pick up and learn the hardware by themselves in order to support it.
I vote “Other”. Here’s what I’m looking for:
1 – Originality (no more *nixes or clones). I wouldn’t even mind an entirely new hardware architecture, like the Mac but open and accessible.
2 – Original Software (don’t see the point in yet another OS that runs the same old Linux programs)
3 – ‘Smooth’ and ‘easy’ (vague, but I know what I mean)
-Bob
AI seems far-fetched, and though the need hasn’t yet come, in the long term it will. The first generation of AI will be best implemented, IMO, as another layer on top of a (simple) OS platform, possibly as another userspace app. Its task will mainly be to help streamline our daily tasks, optimizing things to make usage of the OS fast and easy. Then this AI, coupled with realtime speech recognition (as opposed to voice recognition), and possibly image recognition too (via live camera feeds), will be very cool. Of course, we need multilevel security to safeguard the data/the OS/the hardware, and everything the AI does can be manually overridden, of course. This will pretty much open up a whole new set of exciting possibilities for us users.
Because many of the things we need can be done by the computer. And heck, these are things we are sort of doing now. Take Gnome’s Dashboard. While I am working on something else, Dashboard is out there finding stuff out for me. It would be nice if, while I am writing an email about a movie I just saw, the machine were in the background grabbing information about the movie, and while I am talking about the actors, the system would recognize I am starting to type an actor’s name and automatically suggest the names.
Right now applications are intelligent within their own context. Look at IDEs that can suggest function names, and are constantly updating their internal cache of the objects that you have created, in real time. When you are typing, the IDE suggests options, and allows you to get on with your work without having to remember the little details.
Now expand this across the entire desktop, and you have a real winner. Most other things, like the 2D/3D-accelerated desktop, the database backend for the filesystem, and even the voice recognition, are merely tools that a good AI system could use. An AI system would be able to work with you, and really help you get your job done.
Other things intelligent apps could do: take, for example, an email client. It would automatically recognize which emails you read the most, which emails you respond to, and the amount of time you usually respond in, and determine from that and other parameters which emails are important to you. So while you are working, it will notify you of these important emails, the emails the machine knows you want to read and answer.
The computer could be set to read the emails to you, and then ask you if you want to reply. You could merely say “Yes” using the voice recognition, and then you could simply dictate your response, or type, as you prefer. Does the email contain an appointment? If so, then the computer would file it away in the database file system.
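A minimal sketch of that scoring idea (the message records and weights are invented; no real mail client exposes exactly this): weight a sender by how often and how quickly you have replied in the past.

```python
# Toy "which mail matters" scorer: reply rate plus reply speed.
from datetime import timedelta

def importance(sender, history):
    msgs = [m for m in history if m["from"] == sender]
    if not msgs:
        return 0.0
    replied = [m for m in msgs if m["reply_delay"] is not None]
    rate = len(replied) / len(msgs)              # how often you answer
    speed = 0.0
    if replied:
        avg = sum((m["reply_delay"] for m in replied), timedelta())
        avg /= len(replied)
        speed = max(0.0, 1.0 - avg / timedelta(days=1))  # faster = higher
    return 0.7 * rate + 0.3 * speed              # weights are arbitrary

history = [
    {"from": "boss", "reply_delay": timedelta(minutes=20)},
    {"from": "boss", "reply_delay": timedelta(hours=2)},
    {"from": "newsletter", "reply_delay": None},
]
print(importance("boss", history) > importance("newsletter", history))  # True
```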
Of course, all of this takes up resources, which is why faster computers, 64-bit processors, more RAM, and better video cards are important.
You might think this is just a dream, but so was flight for thousands of years, and I don’t think we will have to wait nearly as long for the above to really take shape.
It’s pretty sad really.
We’ve got games that can emulate physics and produce three-dimensional sound, but we still have to control our computers the way we’ve always done.
Our input devices are mostly limited to what we are capable of doing with our hands, though there do exist readers which can determine where on the screen you’re looking.
The most high-precision device that’s available for the common man is a mouse, and I’ve long wondered why that is so. Everyone can imagine and think up shapes, colours and pictures. Very, very few can put these on paper, and even fewer on a screen using a mouse.
To me, 3D modelers are the easiest way to “paint”, so I really think we should get better input devices.
Just my point of view.
The greatest feature is that it doesn’t interfere with people’s work.
That means stability.
“Polish and integration:”
What the heck will that do?
I can’t even read Polish!!!
heh, heh…
I wanted to comment on a couple:
1) DB File System: due to the workings of the stupid javascript poll I can’t get the exact wording for this option now that I’ve voted and navigated away from the page. However, Be had this five? six? years ago, so anyone who does it in the next generation will just be playing catch-up. It’s not new.
2) Vectorized desktop, etc.: yawn. The 2D multiple-workspace metaphor that Be and Mac OS X and multiple X environments use is pretty much mature. The desktop is about as mature as it’s going to get, and has been for years.
3) 64-bit: Not that important. It will be useful for clusters, but the need to constantly upgrade your desktop has dropped heaps since the P2 generation. Computers since then are fast enough to do anything a typical user will need, which is to run comprehensive browsers, word processors and email programs.
4) AI: This is completely vague, and I can’t see what the operating system has to do with it.
Web services have the potential to become cool. I hope Mozilla starts embracing and extending the browser experience soon, to the point of introducing new widgets and new means of writing to the browser. If they don’t, Microsoft will, so they should wake up and move now while they have the advantage.
Nevertheless, I voted for the Plan 9 option. Their approach to componentising hardware through the operating system is entirely cool, and is relevant from a home user with a couple of peripherals and only one or two computers, right up to huge installations like Google, where redundant hardware and grid computing matter most.
I was unable to participate; the poll must be down. It didn’t show up in my browser. I tried IE and Firefox and it didn’t show up, and yes, I do have javascript enabled.
We’ve been lucky to live in the interesting time when computers went from multi-transistor hand-wired hand-programmed in machine language devices to the real general communication and computation devices they are today. The progress we saw was explosive, and if you only look at that short span of time, it’s tempting to get caught up in the whole Moore’s Law way of thinking.
The revolution is over, thanks for coming. Computers are already wickedly fast — faster than most folks need them to be (for email, web browsing, document creation, and sending pics and vids around to friends). Saying things like, “in the future, we’ll have terabytes of memory and AI and voice recognition and…” is like saying, “in the future, pickup trucks will have wings, and you’ll fly them to work and …”.
Pickup trucks have stayed pretty much the same over the years because they’re good at what they do, and don’t need to do anything else. Desktop computers are rapidly approaching that sort of balance point (if not already there).
Up ’til now, engineers have cheated nature again and again to get these chips smaller and faster than before. Well, if you follow the literature, you’ll see that you can only make these incremental improvements for so long before hitting the law of diminishing returns. For example, do you realize how expensive it is to try to develop new lithography techniques to bring feature sizes down any further than they are? Can these companies rationalize these enormous expenditures just so Joe Sixpack’s MS Word auto-backup takes a second or two less time?
So, how things are going to get better is primarily convergence on open standards (protocols, file formats, hardware interfaces, user GUIs, etc.), and hopefully the elimination of foolishness like software patents.
Their server is down atm.
Maybe your settings are too “tight” on security, or you are using a proxy (like I do – OS Forums is broken with the proxy). And I see that go2poll is down.
“With regards to the optimization example: Stalin, a whole-program compiler for Scheme, in some cases generates code several times as fast as a regular C compiler (which, lacking whole-program knowledge, has to be very conservative about its assumptions)”
I agree with you, mostly. Two remarks, though:
– Analysing one compilation unit at a time (= one file on any architecture I know of) is not a must with C. For example, the latest MS C++ compiler can do whole-program compilation. I don’t have any experience with the performance gain, though.
– For number crunching, I wish I could drop C (C++ for that matter, but that’s the same), which is a real pain for what I do. Matlab, which is a high-level ‘language’, is several times faster than C for matrix multiplication and such (it uses old Fortran libraries). But Matlab lacks real programming features like classes and easy extension for GUIs. Python would be a killer, but for now, the change between Numeric and numarray is a real pain.
In fact, one real revolution that few people in the FOSS community want to recognise, as far as I know, is the .NET principle of running anything from any language you want. No more C, C++ or Java just because it is the only way to use this fu*** library. I am still baffled by this idea. I am trying right now to use some C++ libraries of my own with gtk#; it would be great to be able to choose a language for its capabilities, and not for its libraries anymore.
I prefer reliability and stability, because they allow you to get work done faster. I’d also prefer an OS that consumes less RAM. With current apps it seems that in 10 years’ time you’ll need a Hexium 666GHz, a GeForce 1234 and 100GB of RAM to run a notepad application. UI is in second place for me. Midnight Commander is a must-have for every OS. It’s the best for coding.
I doubt many people taking the poll actually know what this is. If they’d taken the time to look up what is possible with an OS like Plan 9, this particular option would have been voted for a whole lot more.
Capability-based security. This would fix almost all our current viruses, worms, malware, etc., and make creating new ones very hard, while making writing secure programs very easy.
I don’t like the idea that my computer thinks it’s smarter than me, or that it tracks my usage patterns trying to learn what I like and what I don’t like. I don’t want computers making decisions for me without asking me first.
I like to be in control. Dumb computers, intelligent users — that’s the kind of combination I like. Intelligent computers can become very difficult to use if you suddenly want them to do something unexpected and unusual. People will say: “You can always turn the AI off.” But what will happen when you can no longer turn it off? Who will have the control then?
As far as I’m concerned, I’ll try to keep my computer as dumb as possible for as long as I can. It’s good to have some tasks done automatically for you, but all this AI stuff just gives me the creeps.
Plan 9 is very interesting but a bit arcane to install. However, if you wish to use Plan 9 in VMware, you can skip the installation process: download a VMware machine with Plan 9 pre-installed and log in as glenda.
You can download the VM image from here:
http://www.cs.bell-labs.com/plan9dist/ureg.html
Quote:
“I don’t see how you think you’re going to get by without recompiling your code when you’re switching from windows to some non-i386 setup. If the hardware is completely different, the assembly code is completely different.
And why is simply needing a java virtual machine a bad thing?”
That’s right, hardware is a problem; I didn’t think about that. Leave out the “with no recompiling”. I meant to say: without changing a single letter in the code, recompiling on other architectures.
A single build on an x86 system should be sufficient for all OSes on that architecture.
What I’m looking for is no dependencies.
Java is a dependency on almost every OS.
And, since I have a fairly old PC (PII 233MHz), large Java programs don’t run as smoothly as native binaries. On fast PCs it might not be noticeable.
You want a brand new OS that supports a brand new hardware architecture to run your brand new apps.
You people crack me up.
Yeap, I agree, years of operating system R&D should be thrown away to pursue some mythical goal of perfect computing.
I bet you already have a good head start on this brand new OS. Where’s the code, Bob?
Better integration between apps. This is much better on Windows than on Linux/Unix, but it could still be much better.
I want every piece of displayed data to be draggable, so I can drag an e-mail straight from Outlook/Mozilla Mail onto the desktop, and from there into a Word/OpenOffice document. Then I drag the office document into a PDFalizer and from there into a data channel to a friend.
I think user interfaces are not obvious enough to use. People should start experimenting with new usage ideas, so that by evolution the easiest way to use an app wins!
All of these options outside of 64-bit are not really anything more than evolution of the existing OSes of today. 64-bit is the only valid choice.
11) hardware independent apps (java/.net)
Just get me the hell away from that beige box, humming beneath my desk and I’ll be happy.
Plan 9 features are critical to this – I can take a low power system with me, while running high power processes over the network.
Speech recognition is also essential. I’m not talking here about people sitting at their desks, talking to the box in front of them – I’m talking about using your mobile phone to access your data and the internet.
That’s not to say that more inventive interfaces can’t be implemented without speech. You only need one button to implement a menu system and that covers 90% of human-computer interaction.
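As a sketch of how far one button can go (the `pressed` callback is a hypothetical stand-in for whatever the real switch hardware reports), here is the classic scanning-menu technique:

```python
# One-button "switch" scanning menu: the highlight advances on a
# timer, and the single button selects the highlighted entry.
import itertools
import time

def one_button_menu(items, pressed, dwell=1.0):
    for item in itertools.cycle(items):
        print("highlight:", item)
        time.sleep(dwell)            # highlight rests on each entry
        if pressed():                # the only input the user has
            return item

# Demo: "press" the button on the third tick.
ticks = iter([False, False, True])
print(one_button_menu(["Mail", "Web", "Music"], lambda: next(ticks)))
```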
I have to go walk my dog now, which means leaving the Internet behind (not that I mind, but it’s the principle…).
A high-level extendable object framework for the operating system that encourages the development of plugins or components rather than complete programs.
It should be possible to combine a Photoshop-like program with an automation sequencer program to make an Adobe After Effects-type utility (video effects) in a well-designed operating system (with components rather than programs).
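A sketch of that component model in Python, with all names invented: every component shares one tiny interface, so an “application” is just a composition wired together at run time.

```python
# Components share one interface; "applications" are compositions.
class Component:
    def process(self, frame):
        raise NotImplementedError

class Blur(Component):
    def process(self, frame):                  # stand-in image filter
        return f"blur({frame})"

class Caption(Component):
    def __init__(self, text):
        self.text = text
    def process(self, frame):
        return f"caption({frame}, {self.text!r})"

class Sequencer(Component):                    # the "automation" part
    def __init__(self, *stages):
        self.stages = stages
    def process(self, frame):
        for stage in self.stages:
            frame = stage.process(frame)
        return frame

effects = Sequencer(Blur(), Caption("fin"))    # video effects from parts
print(effects.process("frame0"))               # caption(blur(frame0), 'fin')
```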
PS: dave’s brilliant post is excellent; I will certainly reside alongside it in the ‘moderated down’ section (wow, those posts just happen to be the most informed). Editors: clueless. Readers: blissfully ignorant.
11. Non-bloat (tick)
12. Consistency – all apps should use the same widget set (tick)
13. Security in software (tick)
14. End of DLL/shared library and dependency hell (tick)
15. Open standards (tick)
… conservative thinkers that is.
John, I am not sure why you included my name in your response, considering my post was primarily about software innovations and the things you say are “done” are primarily hardware and “wire”, i.e. networks and protocols (don’t focus on the tone – I am not offended, just too lazy to be diplomatic ;-].
Before you even get into what I said in my post, which is more futuristic (as in 5-10 years, not 100), consider the fact that while the rest of the IT and scientific community and industry have advanced a lot over the last 20 years, the software has not.
That’s why Apple is able to release upgrades on a yearly basis that have averaged 140 new features over three years. That’s why Longhorn will be a complete rewrite, i.e. they are polishing old concepts (Object Pascal, CORBA, Java, etc.). I mean, look at the stuff they are getting rid of (Hungarian notation, anyone? – finally MS follows good design ideas advocated by one of their own ( http://images.amazon.com/images/P/0735619670.01.LZZZZZZZ.jpg ) and implementing:
-vector graphics
-compositing engines
-db filesystems
-3d interfaces
-etc.
I mean, look at this list. Nothing new, but because software has largely remained stagnant while almost every other aspect of the industry has moved on, companies are just now playing catch-up and making these features standard OS components.
If it were up to certain companies in very powerful positions, we probably would not see these for some time to come. E.g., after the WWDC, one of the best summaries I read said something to the effect of “CoreImage and CoreVideo put the power of Photoshop in a single programmer’s hands”. I have not seen the demo app, but I heard it was amazing – and it was coded in a week by a single programmer.
Similarly with Longhorn’s Avalon technology, which has the combined abilities of PDF, Quartz, ePaper, eForms, CSS, HTML, SVG, etc. Now all these things, which were previously somewhat disparate technologies, are in a single well-designed API/library.
Now that these technologies are being “commoditized”, i.e. being put in well-designed APIs/libraries, companies whose products were largely what is now in these libraries will be forced to truly innovate. E.g., Adobe now has to do better, because any company that was making pro-user apps can now easily scale up the abilities of its applications by taking advantage of stuff Adobe has been doing for years but which can now be found in CoreImage and CoreVideo on OS X.
Another way to show what I am saying is to look at SFX houses. Until a little more than five years ago, in order to do outsourced SFX work for a major movie, you had to be one of the big boys, e.g. ILM.
Now along comes Apple. They buy FCP (ver. 1, I think) from Macromedia, who had shelved the project because they did not know what to do with it, revamp it, and literally DISRUPT the industry by releasing it at a price previously unheard of – 999.00 USD. So now, instead of shelling out 15k for a workstation and 50k for software, you can get it cheap.
As a result, 5 years later, you have small outfits doing this stuff for EXPONENTIALLY less (in one of the articles I link, the guy says that before, you had to have a 100k system to do some of the stuff they are doing now on their PowerBooks):
http://www.apple.com/pro/film/torres/
http://www.apple.com/pro/video/stern/
I could go on and on – but the moral of the story is that while hardware may be approaching a wall, software is just getting started. And now that Apple is “fit” again, you can be assured of a continued rapid pace of innovation from all companies, because they tend to push harder than, say, a certain incumbent that has had the throne for the last decade and a half. And we wonder why software lags so much – or is it just a coincidence?
I mean, for …….. – the development community is just now moving on to design patterns. Design patterns are to OO what structured programming was to PP, and yet for over a decade and a half there has been a lot of intellectual masturbation over polymorphism, true OO (C++) vs. fake OO (VB – pre-.Net).
We were glorifying the tools and arguing over curly braces, and now we finally realise that things like polymorphism and inheritance are just attributes of the tool that CAN BE (vs. HAVE TO BE) leveraged, and now design patterns are just entering the mainstream.
Like I said before – software is just getting started…
I voted for ‘other’, and here’s my reasoning (didn’t find ‘sandbox’ by search so probably not mentioned yet?)
Transparent sandboxing, or ‘jailing’ as it already is in *BSD (haven’t tried it, though). This means you can start running _any_ binary loaded off the net, without concerns about privacy, trojans etc.
While things like OS X are less subject to virus attacks, the chance of making trojans for them is there. Just as much as for _any_ other system.
For this we need automatic sandboxing of new applications, with gradually lifted privileges as the applications prove to be well behaved, useful, or both. A rough sketch of such a policy follows at the end of this comment.
I believe this is part of Microsoft’s LH agenda (but then again, what ain’t!?) but personally, I want to wait until Apple does it the Right Way.
Anyhow, without this any current/future system will be too exposed to system-wide exploits.
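Here is the promised rough sketch of that gradual-trust policy in Python; `run_sandboxed` is a stub standing in for a real OS confinement facility, and the privilege ladder is invented for illustration.

```python
# Gradual trust: unknown binaries start fully confined and earn
# privileges one step at a time. All names here are hypothetical.
PRIVILEGE_LADDER = ["no net, no home", "net only", "home read", "full"]

def run_sandboxed(binary, privileges):
    # Stub: a real implementation would execute `binary` confined to
    # `privileges` and report any policy violations it attempted.
    print(f"running {binary} with: {privileges}")
    return True                                # True = behaved well

def run(binary, trust_db):
    level = trust_db.get(binary, 0)            # new binaries: level 0
    behaved = run_sandboxed(binary, PRIVILEGE_LADDER[level])
    if behaved and level < len(PRIVILEGE_LADDER) - 1:
        trust_db[binary] = level + 1           # privileges lift slowly

trust = {}
for _ in range(3):
    run("shiny-new-app", trust)
print(trust)                                   # {'shiny-new-app': 3}
```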
-ak
> Transparent sandboxing or ‘jailing’ as it already is in *BSD
No, no, no! Sandboxing is an ugly bubble-gum fix for defects of a bad security model. If you instead switch from ACLs to object-capabilities you’ll get “sandboxing” for free. (Although then it isn’t called “sandboxing” anymore, for the same reason an object in an OOP language isn’t said to be “sandboxed” just because it doesn’t have access to all pointers in the system.)
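That OOP analogy translates almost literally into code. A minimal sketch in Python, with an ordinary object standing in for a kernel-enforced capability:

```python
import io

# ACL style: ambient authority. The viewer can open *any* path the
# user can, so a trojaned viewer reads your private keys just as easily.
def acl_style_viewer(path):
    with open(path, "rb") as f:
        return f.read()

# Capability style: the caller hands over exactly one readable object.
# The viewer holds no other authority, so there is nothing to escalate;
# the "sandbox" falls out of the design for free.
def capability_style_viewer(readable):
    return readable.read()

photo = io.BytesIO(b"...image bytes...")     # stand-in for an open file
print(capability_style_viewer(photo))        # can touch only this object
```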
1. Stability
2. Security
Stability and security are the steak. All the rest is sizzle. In those two areas we are still in the dark ages.
Why does there have to be a one-size-fits-all OS for both consumers and business/industry/government? There should be an operating system just for grandmas. You turn it on and there is a web browser, an email client, a word processor, a personal finance app, some games and multimedia apps and that’s it — nothing else. Want to install Dreamweaver, Norton SystemWorks or Apache? Forget it! There would be nothing to install, nothing to partition, nothing to configure (except your Internet connection and email account) and no registry to screw up. The user would have ZERO access to the file system except for his Documents folder and the subfolders contained therein. Grandma doesn’t need a 64-bit ultra-high-res 3D desktop and she shouldn’t have to be plugging in network, audio and graphics cards. Windows is not that OS and neither is Linux. Think Mac OS X/iLife with everything locked down and running on $400 commodity PC hardware instead of $1400 Mac hardware.
“I want every piece of displayed data to be draggable, so I can drag an e-mail straight from Outlook/Mozilla Mail onto the desktop, and from there into a Word/OpenOffice document. Then I drag the office document into a PDFalizer and from there into a data channel to a friend.”
That is in no way integration. But, done well and done enough, it would make things easy for newbies and hackers alike.
The most important element of an OS is that it works and it works well. By this I mean that it should be available when I start the hardware. It should not be unavailable at any time while I am using it. It should be responsive and transparent. Usability is key. I should be able to trust that the system is in my complete control instead of any third party such as the vendor or an intruder. And most importantly, it must allow me to get my work done.
The only thing that I want in any future OS is speed, speed: instant response and awesome speed. I am so tired of the crappy slow speeds of Windows and Linux; be it because of I/O or anything else, I don’t care, I want speed!!!
Look at this list:
Fully hardware-accelerated 2D/3D desktop
Hardware-based security ala Palladium
Super-high monitor resolution support (vectorized desktop)
Web Services & full network transparency across different protocols
Database-like filesystem able to search both filenames and content
64-bit and support for faster PCI/AGP protocols
Truly capable speech recognition
Plan9-like feature: exchange hardware functionality over network
Artificial Intelligence
… and tell me that, with the exception of Plan9, this isn’t some kind of Microsoft or Apple survey along the lines of “What part of our work in progress are you most excited about?” question that gets sent right to marketing.
Now …
Reliability
Non-bloat
Consistency – all apps should use the same widget set
Security
End of DLL/shared library and dependency hell
Open standards
.. this is what I’d really look for in something I’m actually going to use or pay money for. Satisfy this list and you’d have one of the best OS’s out there, regardless of other factors.
Yes, yes. They’re not actually features. If you get real pedantic, they’re products of proper development models and good design decisions. We need love in every bit…
Plan 9… I think Plan 9 and BeOS should inspire people. They risked incompatibility to start over clean. I think one of the best things Be did was the (arguably) most simple: start fresh. And Plan 9 made great use of refactoring to actually, sometimes, grow smaller as development went along. Plus, bringing #if, #ifdef, #else and #elif to near extinction should be praised. And sure, 8 1/2 (now Rio?) might be a bit ugly… but its elegance by recursion is just, well… super-spiffy!
Some re-thinking from Be and Plan 9, reliability from zOS and OpenVMS, code auditing from OpenBSD, security from some current research projects… you might just have the perfect OS.
I say to everyone, take your accelerated desktop and shove it! Give me my dream OS! Gimme!
//nothing more to ramble on about without direction, in a near-incoherent manner, increasingly off topic, with bad spelling and… ACK!! I’ve been sacked!
I want every piece of displayed data to be draggable, so I can drag an e-mail straight from Outlook/Mozilla Mail onto the desktop, and from there into a Word/OpenOffice document. Then I drag the office document into a PDFalizer and from there into a data channel to a friend.
You want BeOS.
>> Transparent sandboxing or ‘jailing’ as it already is in *BSD
>
>No, no, no! Sandboxing is an ugly bubble-gum fix for defects of a bad
> security model.
I think we’re talking about the Same Thing. Notice I said ‘transparent’, meaning the user wouldn’t need to explicitly set up the sandbox.
Personally, I believe in gradual developments, and this kind of system would also be able to sandbox old applications (binaries), not only the ones specifically ‘set up’ to be boxed.
Anyhow, we probably agree access control is important. There are too many apps requiring me to give an admin password (OS X), and even the ones that don’t can mess up my home dir if they like. Something needs to be done.
-ak
New features? I couldn’t care less, at least not until the current features all work perfectly.
It’d be interesting if the results of this poll were archived so we can take a look at how people voted in a year or two.
I have tried Plan 9 many times as a desktop. It is so different that I must absorb it stepwise. But it’s great! I think that if I get a job building some cluster or similar environment, I’ll try to deploy Plan 9 and learn it.
Its ability to share hardware is based on the 9P protocol; that’s the only thing to worry about. Everything is a file server; that’s the way it shares hardware, and it’s elegant.
1. reliability
2. reliability
3. reliability
Total integration. Every task/concept/event has EXACTLY one handler in the system. The interface is like a super-application.
Everything between you and your data (it is, after all, data this is all about) should be totally transparent. There should be no concept of applications, and certainly no stupid names for different parts of the system. No, I don’t want to start Mozilla, I want to access a resource. No, I don’t want a web browser, I WANT TO ACCESS A RESOURCE.
I shouldn’t be trapped inside an application when manipulating resource objects. All tools of the system should be available at all times.
I want every piece of displayed data to be draggable, so I can drag an e-mail straight from Outlook/Mozilla Mail onto the desktop, and from there into a Word/OpenOffice document. Then I drag the office document into a PDFalizer and from there into a data channel to a friend.
I agree with your sentiments, but you should probably come up with some better examples, because you’ve been able to do what you describe above since at least the mid 90s.
…subject says it all really!
Ahmad wrote:
>The only thing that I want in any future OS is speed,
>speed: instant response and awesome speed. I am so tired
>of the crappy slow speeds of Windows and Linux; be it because
>of I/O or anything else, I don’t care, I want speed!!!
Have you thought of switching to DOS? It runs rather quickly on new H/W! 😉
Kramii
Plan 9 is the OS of the future!
> Every task/concept/event has EXACTLY one handler in the
> system. The interface is like a super application.
> […]
> There should be no concept of applications, and certainly no
> stupid names for diffrent part of the system. No I don’t want
> to start mozilla, I want to access a resource. No I don’t want
> a webbrowser, I WANT TO ACCESS A RESOURCE.
Umm… but one “application” (or “module” or whatever you want to call the resource handler) is better at some things, another one is better at something else. (E.g., some web pages might only be viewable in Mozilla and some others only in Opera. One image processor has some features and another one has others.) How do you solve this dilemma?
Plan 9 is the OS of the future!
Now if they only got rid of that mutant psycho rabbit! I hate that freak of a mascot!
As a visually impaired user, I consider accessibility the most important feature. The OS should provide a way for a user to access everything reasonable using his/her preferred input and output devices, i.e. there should be no specific input or output device required; any combination should be handled by the OS.
Immediate boot (I mean seeing a Windows or Linux session in a ready state one or two seconds after making the box “live” from the full-off state)
or no-reboot (that is, reloading the OS kernel without passing through the shutdown – POST – stage-1 loader – stage-2 loader steps).
AFAIK, the latter is feasible under Linux, but it could be extensively generalized and applied to other OSes as well, and would be well complemented by the use of a more advanced prelink or slab allocator (I’m thinking of DragonFlyBSD) feature.
The former could be implemented by means of an advance in ACPI standards, some modifications to the BIOS memory and a fast solid state