Home > Intel

Gordon Moore: Software is too complex
Eugenia Loli 2005-04-13 Intel 21 Comments

Moore’s Law has helped keep the IT industry riding along on ever faster and more complex hardware for 40 years, but its author now says the software side of the equation needs some serious work.

2005-04-13 7:55 pm Anonymous
It isn’t a law, it’s just an observation which has up to this point managed to remain true. But I guess most intelligent people already knew that. 😐

2005-04-13 8:08 pm Anonymous
Sounds like the guy has never seen OS X. Keeping complex things simple is the strength of Apple.

2005-04-13 8:13 pm Anonymous
I don’t think that Moore is talking about desktop usage, Matt; he is talking about “software” as in “software engineering”.

2005-04-13 8:46 pm Anonymous
OS X really isn’t a simple system, from the odd kernel design (FreeBSD glued onto Mach) up to the way the Carbon and Cocoa frameworks re-implement the same behaviours and appearances (sometimes incorrectly). OS X has some really nice frameworks (CarbonEvents and Accessibility come to mind, much nicer than the Windows counterparts), but it is a mistake to think of OS X as a simpler system than Windows.

Ralph 2005-04-13 8:56 pm Anonymous
> OS X really isn’t a simple system. From the odd kernel design (FreeBSD glued onto Mach)

The replacement of Mach servers providing VFS and networking with kernel-mode code from FreeBSD decreases the complexity of the kernel, if anything. It’s the same approach taken by Windows NT, which began as a microkernel and was slowly refactored into a monolithic one. It’s certainly not an “odd kernel design” the way I see it; Mach was originally refactored from BSD, and the microkernel facilities remaining in XNU (namely Mach message queues) are still heavily utilized.
That’s not to mention that they have been using microkernel-like services, previously for prebinding and now for Spotlight. A move to these concepts can be seen in other operating systems like DragonflyBSD, which moves to an Amiga/BeOS-like messaging model that utilizes pervasive multithreading. I’d say the spaghetti-like interdependence of most monolithic kernels is far more bizarre than kernels which originally began their life as microkernels.

2005-04-13 8:58 pm Anonymous
…is that the move to managed-code systems, namely Java and .NET, is what is really adding needless complexity. With the advent of modern page-protection features, the CPU will be able to provide buffer-overflow protection in hardware which surpasses that provided by managed-code systems (which, of course, are implemented in native code and are thus themselves subject to buffer overflows without the help of hardware page protection). Less is more, and certain languages (namely Ruby) are moving in that direction.

2005-04-13 9:23 pm Anonymous
I think this is what Dr. Moore was trying to say. Software is sold on the basis of two competing principles: ease of use and functionality. Is it really possible to make a user aware of functionality and still make an interface easy to use?

2005-04-13 9:28 pm Anonymous
Also, can you guarantee that a particular combination of ease of use and feature accessibility will work for most users? I know that I actually almost prefer a cluttered interface. The more visual cues the better; let my brain worry about filtering out what I don’t need.

2005-04-13 9:32 pm Anonymous
Today software development is ruled by huge billion-dollar corporations. The days of the programmer being able to put together code that makes sense are over. Outsourcing code and programmers and thinking you can offer ‘canned solutions’ is the biggest mess one could dream of.
These ‘outsourced’ fiascos are not cheap; they are costly in code re-writes and non-functional disasters. It is all about a corporation’s bottom line and growing the stock and the company 20-25% a year, with no concern for the associate who spends 80+ hours a week working hard. The days of loyalty are over and the investor rules the corporation. The best place is a small, privately held company like http://www.sas.com: they have never lost money in a quarter, nor do they slash jobs like a big company such as IBM, HP, and so on. SAS has excellent software, and it is not troubled by the ‘outsourced’ disasters, the complex and non-functional code, of big corporations. Just my 1 cent.

2005-04-13 9:38 pm Anonymous
Well, I’m pretty glad that a well-respected giant in the industry has the position to say it, and that SW types will listen. Switching from Windows to OS X or Linux is hardly the answer; they are all way too complex, just different flavours of complexity. I am still far fonder of my old Macs than anything else I use now (BeOS excepted), but if only they had been 100x faster, never crashed, and supported open HW standards. Gee, with modern HW we could easily build systems as simple as the old Mac used to be, fast as a rocket, and bulletproof. Now I just wait for somebody to remind me we need multiuser accounts and the endless list of mostly useless features we picked up along the way. If I really want some complexity like a CLI and *nix, that’s fine for workstation use, but not in my home PC that I have no control of due to web attacks. What Moore can’t answer is that the SW industry is compelled by its own unnamed Moore’s law to ratchet up complexity too. Imagine Apple had actually used the awesome power of HW to keep things as simple as they used to be but removed all the speed/crash issues, and then watch MS and Linux whiz straight past in the list of complexity features. As long as HW gets faster and more capable, SW will get bigger and more bloated, period.
Even the small handheld devices are busy chasing after the same feature sets as desktops; shame on them. End of rant.

2005-04-13 9:54 pm Anonymous
> The replacement of Mach servers providing VFS and networking with kernel mode code from FreeBSD decreases the complexity of the kernel, if anything.

From what I understand, the only change, really, between NeXTstep and OS X is that the BSD bits now run in the same address space (“ring 0” in Microsoft parlance). NeXTstep used BSD as a POE: a single process running in user space atop Mach. When a NeXT machine boots, the ROM monitor boots Mach, but you’re really not able to do anything meaningful until the BSD process is loaded (e.g. by typing BSD -s to boot single-user).

> It’s the same approach taken by Windows NT, which began as a microkernel and was slowly factored into a monolithic one.

And everybody is worse off because of it. 🙂 NT 3.x was rock-solid stable and, for the time, the most advanced OS available. NT4 destroyed all that, and it wasn’t until Win2k that MS regained stability.

> the microkernel facilities remaining in XNU (namely Mach message queues) are still heavily utilized.

Because Cocoa is built upon them, and OS X still uses the old-school NeXTstep way of using Mach IPC to do message passing. OPENSTEP removed the dependence upon Mach by building a set of platform-independent libraries to emulate the facilities Mach provides (normally calling the underlying threading models of the OS, whether that’s SysV for HP-UX and Solaris, or Win32 threads on NT).

> A move to these concepts can be seen in other operating systems like DragonflyBSD, which moves to an Amiga/BeOS-like messaging model which utilizes pervasive multithreading.

And, as I point out above, the SysV descendants have had many of those facilities for years. It’s BSD and Linux that have lagged behind in this regard (though they’ve caught up in the last couple of years).
Here’s a good discussion of the problems the NetBSD folks had trying to implement Irix binary emulation: http://www.onlamp.com/pub/a/bsd/2002/12/19/irix.html

> I’d say the spaghetti-like nature of the interdependence of most monolithic kernels is far more bizarre than kernels which originally began their life as microkernels.

I still have quite a bit of hope that the Hurd/L4 folks are going to design something that’s totally awesome. It may not perform quite as well as a traditional monolithic kernel, but the flexibility it’ll provide will be well worth the speed tradeoff (which is similar to the speed penalty you incur for not running single-user, single-application systems).

2005-04-13 10:09 pm Anonymous
> Isn’t a law, it’s just an observation which has up to this point managed to remain true.

But that’s what ‘laws’ are…

2005-04-13 10:12 pm Anonymous
There is another major reason why SW is so complex, but it is under the table. HW and SW guys really don’t get together very often to do things right. It happens once in a blue moon, and the result is better HW, and SW written specifically for that HW which can be enormously more reliable and easier to write. It never happened at Intel, and it sure doesn’t happen at Microsoft; they are just so codependent on each other that they hate it. When Intel does SW or MS does HW, the other guy gets the twitches. To do good HW and good SW, you must do both, with team members on both sides, which is obviously why Apple/Sun can always do better products, but also explains why you get a smaller market share as punishment. I don’t mean HW guys working with SW guys at the driver level; I mean the SW guys having some say in the HW architecture. How many people here can name any OS that had some real influence on the design of the CPU it was meant to run well on? Usually HW gets done far in advance of SW OSes. Certainly OSS is at a huge disadvantage there.
The x86 and most other RISCs do very little to help the SW engineer write reliable SW. They give us maybe 100x more speed and more memory than the earlier micros, but only if you play your cards right, and that means constraints (don’t go off writing random data structures far bigger than the cache, etc). So HW sometimes goes 100x faster, but because of other very slow components, such as DRAM and hard disks, the SW guy has to trade much of that performance gain to cover the sloth of DRAM/HDs etc. End of rant 2.

2005-04-13 11:35 pm Anonymous
Not really, but whatever. I wanna see what Moore said, though, not some ZDNet moron’s review of it… Looks like Moore just thinks interfaces are too complicated. Which I have to totally agree with! I think this has improved, though; at least Firefox is an improvement over Mozilla in this sense.

2005-04-13 11:39 pm Anonymous
One example of hardware/software collaboration I can think of is ARM and RISC OS, which were really designed for each other (and it shows).

2005-04-13 11:51 pm Anonymous
> How many people here can name any OS that had some real influence on the HW cpu design for which it was designed to run well on.

Well, in a roundabout way, Unix did. The x86 architecture was built to run C code, and the C language was invented because of Unix. I know that’s not what you meant, though.

2005-04-14 12:00 am Anonymous
Acorn/ARM could be a good example. It was designed by a mixed HW-SW team that had built a superior 6502-based BBC Micro (with some really good SW on it, considering the miserable CPU in the box), and when looking to the future they didn’t think any of the US RISC designs would be cheap enough for a low-cost PC to follow the very successful Beeb. However, I now find ARM to have a lot of undesirable aspects: the funny Thumb mode, over-zealous patents, expensive IP, etc. I would add Xerox, Inmos, MIPS, even Apple/Sun, but with some caveats, since they started with the off-the-shelf 68K. I’d better scratch my head for a few more.
Burroughs and N. Wirth’s groups also come to mind. The antithesis is Windows 3.1 and everything before it, which had to long suffer the fate of running on 16-bit segments, something Intel never apologized for. I couldn’t use anything from MS until the NT flat 32-bit memory model. Funny thing: I am now leaning back towards segments rather than pages, but these could be any size, 32 bits or more, with segment names at least also in the 32-bit range to give an address space of up to 64 bits, although the word “segment” doesn’t quite fit here.

2005-04-14 4:25 am Anonymous
Linux is over a decade old. FreeBSD too. Solaris is even older. Even if you started from scratch, you are looking at around 2015 to see a stable codebase “done right”. Will PCs still be relevant? Will you end up hacking in support for the things people wanted to do in 2012? Will you stick to the “perfect” design of 2005, or will that be replaced with hacked-up requirements as time passes? Oh, by the way, you will have to use existing tools, which may have some of the breakage he describes. Want to build new tools? Tack another five years onto your project. I think we’re going to have to work with what we have until software can write software and turn out a better system a lot faster than we can.

2005-04-14 4:27 am Anonymous
Agreed, but SAS is a rare bird: it has a leader who actually wants to create an island of sanity where people work hard, are rewarded, and are treated like human beings. In other words, the complete opposite of almost every other corporation you can think of.

2005-04-14 4:30 am Anonymous
The Amiga was also built as an integrated hardware/software system (well, they used an off-the-shelf CPU but had some custom hardware). This seems to be the only system people have fond memories of. Of course, if they had gone on to conquer the world, they would have dropped the ball somewhere and released some crap to please shareholders, or tried to make it selling mp3 players, or blah blah blah. The good die young.
2005-04-14 9:21 pm Anonymous
I didn’t say that OS X was not a complex OS; any multithreaded OS is complex. What I mean is that Apple makes great efforts to let the user manage the computer in an easy way, and besides that, it is a very clean and ordered OS despite its complexity.