Linked by Hadrien Grasland on Sat 19th Feb 2011 10:24 UTC
Hardware, Embedded Systems Okay, the source material is more than 10 years old, so this is not exactly news, but I think it's interesting anyway. This BeOS promotional video is a good reminder of how powerful modern hardware truly is, what hardware should suffice for light computer use, and how laughable modern desktop & mobile OSes are as far as performance is concerned. Here are part 1 and part 2 on YouTube.
Order by: Score:
It's the software and applications
by joshv on Sat 19th Feb 2011 11:05 UTC
joshv
Member since:
2006-03-18

Yeah, and? I remember my W2K box with 128 MB of RAM and a single Pentium being pretty darned fast. Yes, it could play two videos at once and still word process. But that wasn't massively compressed 1080p video.

What's changed are the applications. A single H.264 1080p video stream would crush the BeBox. A simple flash app would crawl (I regularly see flash apps use more RAM than that machine has). Even native apps these days are massive memory pigs, just sort your task manager by memory usage and see.

So it's not just Vista, it's the entire application stack using more compute resources, because they are available.

I for one am happy I don't still live in the days where I can only watch two postage stamp sized videos at once and use half my 3GB hard disk to store them.

Reply Score: 6

Neolander Member since:
2010-03-08

Well, to start with, can you imagine how long an OS with modern power management *and* very low CPU usage could make a computer last on battery?

It's a known fact that computationally expensive apps always result in a battery life hit.

Reply Score: 2

Neolander Member since:
2010-03-08

Of course, the apps would have to follow suit for this to work. But there's plenty of work to be done in the basic set of applications bundled with modern OSs already (see how Windows 7's backup tool can make mere MP3 playback drop frames on modern machines).

Reply Score: 1

OSbunny Member since:
2009-05-23

Yes I agree. Last year I switched from a pentium 4 with 512MB RAM to a core2duo with 4GB RAM. My p4 would slow to a crawl when I had a lot of tabs open in Firefox. Once I switched to the newer PC I understood exactly why. Firefox alone takes up about 512MB of RAM!

Reply Score: 1

Nth_Man Member since:
2010-05-16

Firefox alone takes up about 512MB of RAM!

How do you know that?

Reply Score: 1

phoenix Member since:
2005-07-11

Simply bumping the RAM in the P4 system to 4 GB would have sped things up *a lot*.

It's amazing how many people replace perfectly working systems with new systems, when all they really need is a RAM upgrade. I see this a lot at work. People complain about a slow Windows XP system and put in a PO for a new computer. While working on something else, I notice they only have 512 MB of RAM, so I bump it up to 2 GB for them. They think they have a brand new PC!!

The most important item in a PC is the amount of RAM. Don't skimp on it. Just because the OS lists "512 MB RAM minimum" doesn't mean you can actually use the system for anything with only 512 MB RAM installed.

Reply Score: 4

Neolander Member since:
2010-03-08

Heh, we tried that on the Win2k laptop I used during an internship... And you know what?

In the McAfee vs. RAM upgrade fight, McAfee still wins ^^' It's just impressive: that PC runs perfectly smoothly all the time, even when doing some intensive computation, unless McAfee starts to do something.

Edited 2011-02-21 08:44 UTC

Reply Score: 3

Doc Pain Member since:
2006-10-08

Yeah, and? I remember my W2K box with 128 MB of RAM and a single Pentium being pretty darned fast. Yes, it could play two videos at once and still word process.


It seems to be a fact that hardware resources (what's provided) and software requirements (what's spent) both increase at a similar rate. As a result, the overall average speed in general use stays constant - a simple mathematical conclusion: numerator and denominator increase similarly, so the quotient stays constant.

The old computers traditionally boot and run as fast as the "modern" ones do. Imagine a 386 with an 80 MB disk, 2 MB RAM and GeoWorks Ensemble. It would even boot faster than a modern plentycore machine with tenmelonhundred GHz and endless hard disks. This may be relevant for at least half of today's uses: people still treat their monster-PCs as little more than typewriters. :-)

What's changed are the applications.


Fully agree. Applications tend to go the same way as operating systems: being efficient and fast is not in the scope of development, because resources are present and can be utilized - and over-used, which is what drives people to buy a new PC. This continuous renewal of hardware and software keeps the market machinery running, but also (hopefully) benefits technical development.

A single H.264 1080p video stream would crush the BeBox. A simple flash app would crawl (I regularly see flash apps use more RAM than that machine has).


The question that arises is whether those "advanced" (I'll explain the quotes right away) applications justify their requirements by what they provide. If you need 2 GHz and more to play a postage-stamp-sized video with crappy sound, is this justified? Is it an advanced use to abuse "Flash" for what HTML is intended for? You know, many "advanced" and "modern" web designers treat "Flash" as a replacement for HTML (whole-site navigation and display of static text), as well as a replacement for animated GIFs (for navigation, attention-grabbing and advertising purposes).

Many things that were possible in the past with fewer resources require more resources today, because the operating system and the corresponding applications demand it. And of course upcoming popular services make people want to use them, and in order to participate, certain requirements have to be fulfilled by the end users. Here the circle closes.

So it's not just Vista, it's the entire application stack using more compute resources, because they are available.


Although I feel a bit sad about it, I have to agree again. What you are describing applies to many Linux distributions as well. Of course, everyone wants to benefit from the new abilities modern hardware offers. But this is traditionally done through layers upon layers of abstraction and libraries. Those are often the parts of software that raise the hardware requirements. This is sometimes called bloat, but I've been advised that it is not bloat, it is a requirement of modern software development. Hmmm...

I for one am happy I don't still live in the days where I can only watch two postage stamp sized videos at once and use half my 3GB hard disk to store them.


I may tell a true story: My first UNIX PC was a P1 (yes, a Pentium 1) with 64 MB of PS/2 EDO RAM and an 8 GB hard disk. It was able to play video (mplayer), play MP3s (xmms), compile the system, burn a CD (yes, no DVDs at that time), download an ISO via FTP and still provide a responsive web browser (Opera) - all at the same time. That was many years ago. Today, people running their monster-PCs get skipping MP3 playback when moving a window on the screen or starting another program.

Sounds wrong?

In fact, it does.

Reply Score: 12

Neolander Member since:
2010-03-08

Thanks for showing me that I'm not alone ;)

Reply Score: 2

Doc Pain Member since:
2006-10-08

Thanks for showing me that I'm not alone ;)


You're not alone. I have many friends and customers who do not "advance" the way the industry (content providers, software vendors, hardware manufacturers) wants them to. They keep doing - or at least want to keep doing - the same things for many years. Their focus is continuity, compatibility, and maybe interoperability. Of course, this places them in a niche market, as they do not want to participate in the ongoing renewal of fully functional installations.

As I said before, there are many users who treat their powerful PCs as little more than typewriters. This means those people do not benefit from new computers (with new software): their work won't be easier and won't be done faster. In fact, the opposite seems to be true. In the past, using computers forced people to learn and to think, basically due to the kind of interface the programs (and operating systems) presented to them. A main part of that knowledge was how to handle a keyboard, and of course how to use the programs. Today, no learning is desired, so people keep doing the strangest things, like writing letters in "Excel" or printing a simple photo with "PowerPoint". The web, with all its new possibilities, caters to this attitude: "No learning required, just go there and clickityclick!"

This is one of the reasons why powerful hardware is needed, even if the tasks performed with it are the same as 10, 20 or maybe even 30 years ago - at less speed, with worse results. Programmers also know that if they want their software to be used by as many people as possible, they have to concentrate on how the majority "thinks", and this introduces the need for so many dependencies and abstraction layers. Users then get used to even the strangest concepts and demand them in the future; this is the backward compatibility that makes things even more complicated.

Basically, people don't get what they want; they get what they deserve. :-)

Reply Score: 6

It's not only the OS
by d3vi1 on Sat 19th Feb 2011 16:38 UTC
d3vi1
Member since:
2006-01-28

While BeOS was fast mainly because it was designed for and on modern (at the time) hardware, it's not that impressive. We had Linux on an AMD K6-2 350 MHz with 128 MB of RAM and a consumer 4 GB hard drive that ran a J2EE 1.4 website with 10,000 parallel clients. The bottleneck was the 5200 RPM hard drive, because of the database. The initial software would only allow 500 users; a lot of code, Java VM, OS and DB tuning later, it would cap at the 10k user mark, which was acceptable for us. Right now, it serves half a million users on 4 Intel Mac mini systems, with the DB one using SSD storage and the other ones just a RAM bump to 8 GB.
Want a better comparison? The Apple iPhone 3G (in its iOS 2.x and 3.x era) was pretty fast in multimedia apps while having a 400 MHz CPU and 128 MB of RAM, so about the same config as the BeOS system you have over there. If you test and optimize for a platform, you can do it!
An even better example: an IBM PS/1 2133-W13 running an Intel 80386SX at 25 MHz with an added FPU and 16 MB RAM (the maximum allowed on a 386SX) could do routing and NAT saturating a 10 Mbps Ethernet link on a recent Linux 2.4 kernel.
The obvious question is: how? Simple: if you have enough time (a very precious resource nowadays) on your hands, you can fine-tune stuff to run on the lowest-end hardware. The question is, is it economically viable? For us the K6-2 experiment was an economic necessity: it was the only unused system we had on hand, and we had no budget at the time to do better. The 386 experiment, on the other hand, was one we did just for fun, loving that piece of old hardware and testing whether we could bring it back to life.

I loved BeOS, and I like Haiku-OS, but I don't want to see BeOS on our desktops anymore, since the design is already 15 years old. I want to see the next BeOS, designed to run on 64-core systems with another 1000 GPU cores at hand and a crapload of RAM and SSD/flash. And I want it to run with the best power management out there. This hypothetical OS should have double the battery lifetime of the iPad on the same hardware (except for standby, which is already exceptional and where you can't do much about it).

Reply Score: 3

beos is well done
by _xmv on Sat 19th Feb 2011 17:52 UTC
_xmv
Member since:
2008-12-09

I see a lot of comments that I feel are wrong.
Mainly, it's the desire to think that whatever is the latest technology is necessarily better than older, "obsolete" stuff.

Operating systems did catch up a good bit on BeOS but they're not quite the same.

What BeOS has (and Haiku), none of the current operating systems deliver completely.

For example, everything, EVERYTHING is threaded properly. It means the scaling of the OS with system "cores" is excellent.
As an example, the Linux kernel's "big kernel lock" (a lock that prevents proper threading) has only been removed in the latest development version - but nothing has been optimized around that yet; it will probably take 5 more years minimum to thread things "mostly properly".

Every component has been thought out for maximum efficiency and scaling.

Everything in BeOS is live; that is, every app and component has been thought out for live usage. Linux added such support a few years ago, it's still being worked on, and you *need* to add support for it in your applications.

In BeOS it's just there from the ground up.

People coding Linux are still trying to make a proper scheduler. It's hard due to the code base and the way everything has been programmed, including the applications and the standard libraries.
BeOS just has it right. A dream.

And for the cool stuff, try to disable one cpu live on your current operating system without any hack.

BeOS has poor hardware support, no apps, outdated interface and no 3D accelerated stuff. That makes it only useable for a few geeks, yes. But apart from that, it's an excellent excellent OS from the design point of view. And I only listed a few key points.

Edited 2011-02-19 17:54 UTC

Reply Score: 8

RE: beos is well done
by _xmv on Sat 19th Feb 2011 18:06 UTC in reply to "beos is well done"
_xmv Member since:
2008-12-09

here's some random video with recent hardware and stuff :p

http://www.youtube.com/watch?v=k0WeOXGzP4c

Reply Score: 3

RE: beos is well done
by malxau on Sat 19th Feb 2011 19:43 UTC in reply to "beos is well done"
malxau Member since:
2005-12-04

For example, everything, EVERYTHING is threaded properly. It means the scaling of the OS with system "cores" is excellent.


That's also true on NT, which was designed for multi processor systems. The video mentioned BeOS scaling to 8 processors. NT now scales to 256. There's no "big kernel lock" in NT.

And, as other posters have commented, that hasn't translated to user perceived speed, for various reasons. One of those reasons is that scaling out to multiple processors is not just an OS issue - it also needs application support. Note the careful presentation of this in the video, using a multiproc aware rendering app.

Reply Score: 1

RE[2]: beos is well done
by t3RRa on Sat 19th Feb 2011 20:25 UTC in reply to "RE: beos is well done"
t3RRa Member since:
2005-11-22

You might have missed BeOS's multithreading feature. Every app by default has two threads running: one for the window itself and another for the application logic. The apps in the BeOS demo are not necessarily "multiproc aware rendering apps" in the sense of being hand-tuned for multiple processors/threads; rather, the behaviour is inherited by default. You do not have to hand-tune for that.

Reply Score: 4

RE[2]: beos is well done
by _xmv on Sat 19th Feb 2011 20:58 UTC in reply to "RE: beos is well done"
_xmv Member since:
2008-12-09

With BeOS/Haiku you don't need the application support for most threading. It's really from the ground up.

Unfortunately I do not possess a system with 1000 cpus :p

But booting Haiku with -smp 100 (emulates 100 CPUs) works just fine for example.

And yes every BeOS app has minimum 2 threads. It means apps cannot lock up as one is dedicated to the UI.

But it's not just that. The filesystem is threaded. The GUI framework is also threaded (it makes porting apps kinda difficult also). Etc.
Basically the API is async. Coding ain't as easy. But the result is there.
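
The two-threads-per-app model described here - one thread owning the window, one running the application logic, talking through an asynchronous message protocol - can be sketched in ordinary code. This is not the actual BeOS API (BWindow/BLooper), just a minimal illustration of the pattern using Python queues:

```python
import queue
import threading

def app_logic(inbox, outbox):
    """Application-logic thread: does the work, never touches the 'window'."""
    while True:
        msg = inbox.get()
        if msg is None:                  # shutdown sentinel
            break
        outbox.put(("done", msg * 2))    # pretend this was expensive

# The "window" thread (here: the main thread) just posts messages and
# stays free to handle input while the logic thread computes.
to_logic, from_logic = queue.Queue(), queue.Queue()
logic = threading.Thread(target=app_logic, args=(to_logic, from_logic))
logic.start()

to_logic.put(21)
print(from_logic.get())                  # ('done', 42)
to_logic.put(None)                       # ask the logic thread to exit
logic.join()
```

Because the window side never blocks on the computation, the UI stays responsive even while the logic thread is busy - which is exactly the property being claimed for BeOS apps.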

I would suggest "BeOS: Porting UNIX Applications" by Martin C. Brown, for example.

Reply Score: 3

RE[3]: beos is well done
by ciplogic on Sat 19th Feb 2011 21:17 UTC in reply to "RE[2]: beos is well done"
ciplogic Member since:
2006-12-22

Today the difference in memory consumption lies in the abstraction levels and in the libraries.
BeOS has small libraries; Windows 95 would run like lightning (with the patch so it doesn't start with a division by zero), booting in 5 seconds or so without an SSD.
XFCE + Linux can run on those specs today (you need to pin down the libraries).
But where is the font antialiasing? Where is Compiz or any compositing support? Where is support for huge resolutions, which consumes more memory?
Windows NT 4 would likely deliver a similar outcome on the same specs, except that BeOS's threading support was likely better - and Windows 2000 certainly would.
Two threads per app are nice, but a UI that stays responsive is the single benefit, and few applications are here. Writing your application multithreaded in a framework like Qt gives the same perceived speed today (as Google Chrome does with multiple processes). So don't cry for a dead man that looked amazing in the Windows 95 world but would merely look nice today.

Reply Score: 2

RE[3]: beos is well done
by malxau on Sun 20th Feb 2011 01:00 UTC in reply to "RE[2]: beos is well done"
malxau Member since:
2005-12-04

With BeOS/Haiku you don't need the application support for most threading. It's really from the ground up...

And yes every BeOS app has minimum 2 threads. It means apps cannot lock up as one is dedicated to the UI.


That's a nice feature, but as you say, the benefit here is to prevent locking up the UI. It won't really distribute computation across cores - you're not really spending one core rendering UI, right? If you want to saturate a quad core machine, you'll need to distribute more than just farming out UI. The compute bound task itself needs to be distributed.
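
To make the distinction concrete: distributing the compute-bound task itself means splitting the work into independent chunks and handing each to a worker, which no amount of UI threading gives you for free. A minimal sketch (`chunked_sum` is a hypothetical name; Python threads won't actually speed up CPU-bound work because of the GIL, so it's the structure, not the speedup, being shown):

```python
from concurrent.futures import ThreadPoolExecutor

def chunked_sum(data, workers=4):
    """Split a compute-bound reduction into independent per-worker chunks."""
    step = -(-len(data) // workers)              # ceiling division
    chunks = [data[i:i + step] for i in range(0, len(data), step)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(sum, chunks))        # each chunk summed by a worker

print(chunked_sum(list(range(1000))))            # 499500
```

The application has to decide how to partition its own problem; the OS can only schedule the workers it is given.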

But it's not just that. The filesystem is threaded. The GUI framework is also threaded (it makes porting apps kinda difficult also). Etc.
Basically the API is async. Coding ain't as easy. But the result is there.


And that's true in NT too. The filesystem is quite parallel - NTFS has four different locks per file, for example, and the native API is fully asynchronous. But that only benefits you if the app uses asynchronous operations without immediately waiting for completion, which isn't generally done, since it's much harder for applications. So we end up with an OS that supports SMP really well, but the overall result still frequently falls short.
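
The difference between an async API used synchronously (issue, then immediately wait) and one used for real overlap can be sketched with Python's asyncio; `io_op` is just a stand-in for any asynchronous operation:

```python
import asyncio
import time

async def io_op(delay):
    await asyncio.sleep(delay)       # stands in for any async I/O operation
    return delay

async def issue_then_wait_each():
    # Async API used synchronously: wait for each op before issuing the next.
    return [await io_op(0.05) for _ in range(4)]

async def overlap():
    # Issue all four operations first, wait for the completions afterwards.
    return await asyncio.gather(*(io_op(0.05) for _ in range(4)))

t0 = time.perf_counter(); asyncio.run(issue_then_wait_each())
seq = time.perf_counter() - t0
t0 = time.perf_counter(); asyncio.run(overlap())
par = time.perf_counter() - t0
print(f"one-at-a-time: {seq:.2f}s, overlapped: {par:.2f}s")
```

The overlapped version finishes in roughly one delay instead of four, but only because the caller was written to exploit it - which is the point being made about NT's asynchronous native API.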

Reply Score: 2

RE[3]: beos is well done
by Soulbender on Sun 20th Feb 2011 05:44 UTC in reply to "RE[2]: beos is well done"
Soulbender Member since:
2005-08-18

It means apps cannot lock up as one is dedicated to the UI.

Uhm, no it doesn't. I had plenty of apps lock up back when I used BeOS.

The filesystem is threaded.


Doesn't mean it always performs well, though. A fun experiment you could do back in the day (it might have been solved with Haiku's OpenBFS since) was to have thousands of emails in a folder. Deleting a single email would take, and I am not making this up, minutes and sometimes much more. I remember trying to delete all of them and waiting for hours for the delete operation to complete. So much for built to scale.

Hey, BeOS was really neat (but so was the Amiga before it) but it was by no means the perfect OS.

Basically the API is async.

Just like many other APIs out there.

Reply Score: 2

RE[3]: beos is well done
by phoudoin on Tue 22nd Feb 2011 07:35 UTC in reply to "RE[2]: beos is well done"
phoudoin Member since:
2006-06-09

To be factual...

But booting Haiku with -smp 100 (emulates 100 CPUs) works just fine for example.


... this is false.

Haiku only supports up to B_MAX_CPU_COUNT CPUs, which is currently set to 8 for ABI backward compatibility.
Only the first 8 emulated "CPUs" will be used, not all 100.

Otherwise, I agree fully.

Reply Score: 2

RE[2]: beos is well done
by viton on Sun 20th Feb 2011 14:45 UTC in reply to "RE: beos is well done"
viton Member since:
2005-08-09

There's no "big kernel lock" in NT.

The NT GUI prior to Windows 7 suffered from a "big GDI lock":
http://blogs.msdn.com/b/e7/archive/2009/04/25/engineering-windows-7...

Reply Score: 3

RE[2]: beos is well done
by galvanash on Mon 21st Feb 2011 03:11 UTC in reply to "RE: beos is well done"
galvanash Member since:
2006-01-25

That's also true on NT, which was designed for multi processor systems. The video mentioned BeOS scaling to 8 processors. NT now scales to 256. There's no "big kernel lock" in NT.


You leave out the fact that until Windows 7 / Server 2008 R2, NT did have at least one major "big kernel lock" (the scheduler's dispatch lock), and if you go back to, say, NT4/2000, there were many such coarse-grained locking strategies used all over the place. So what you say may be true for the most part now, but it certainly wasn't designed that way from the beginning.

That isn't meant as a slight against the NT kernel, just trying to be fair. The BeOS kernel is now 20 years old - comparing its past virtues to the NT kernel of today is a bit strange, to say the least.

Regardless, the desire to eliminate coarse-grained locks in NT is primarily an attempt to improve scaling when running on a very large number of cores (i.e. >16). It does offer some benefits on smaller systems, but from what I have read it is at best a few percentage points of improvement.

BeOS was never designed for scalability - it was designed for interactivity. That goes far beyond the kernel and what locking strategies it uses. The major difference with BeOS is how the programming APIs interact with the graphics library and basic user input. It "feels" fast because it is biased towards interactivity (keyboard and mouse input, drawing windows, etc.), sometimes at the expense of overall performance. Everything is biased towards returning control to the user as quickly as possible.

As has been trumpeted over the years many times, BeOS has no hourglass cursor - this is not because there is no possible need for one, you can certainly bog down BeOS if you tried. It is because not giving developers that crutch creates an expectation that they shouldn't waste cycles doing work when there is a user waiting...

The user is paramount. Interactivity is more important than overall performance. It is more of an ethos than technological magic. While there is a good bit of technical effort applied in BeOS/Haiku to facilitate that goal, the thing that makes it special is that it holds that goal above all others. It is first and foremost a single user desktop operating system, it is not something else pretending to be one.

Reply Score: 5

RE: beos is well done
by Nth_Man on Sun 20th Feb 2011 02:13 UTC in reply to "beos is well done"
Nth_Man Member since:
2010-05-16

I see a lot of comments that I feel are wrong.


As an example, Linux's kernel "big system lock" (which is a lock that prevent proper threading) has been removed in the latest development version - but nothing has been optimized around that, it will probably take 5 more years minimum to thread things "mostly properly".

From the latest stable kernel changelog, in http://kernelnewbies.org/Linux_2_6_37:
"No BKL (Big Kernel Lock)
[...] Note that this doesn't have performance impact: all the critical Linux codepaths have been BKL-free for a long time. It still was used in many non-performance critical places -ioctls, drivers, non-mainstream filesystems, etc-, which are the ones that are being cleaned up in this version. But the BKL is being replaced in these places with mutexes, which doesn't improve parallelism (these places are not performance critical anyway)."
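
The cleanup the changelog describes - one global lock replaced by per-subsystem mutexes - looks roughly like this sketch (hypothetical `Counter` class, not kernel code):

```python
import threading

class Counter:
    """Each instance guards only its own state with its own mutex."""
    def __init__(self):
        self._lock = threading.Lock()       # fine-grained, per-object lock
        self.value = 0

    def bump(self):
        with self._lock:                    # contention only between users
            self.value += 1                 # of *this* counter

# Under a single "big lock", these two independent counters would
# serialize against each other for no reason.
a, b = Counter(), Counter()
threads = [threading.Thread(target=c.bump) for c in (a, b) for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(a.value, b.value)                     # 100 100
```

As the changelog notes, swapping the BKL for a mutex in a non-critical path doesn't improve parallelism by itself; the win comes when independent data gets independent locks, as above.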

Edited 2011-02-20 02:19 UTC

Reply Score: 2

RE: beos is well done
by Soulbender on Sun 20th Feb 2011 04:13 UTC in reply to "beos is well done"
Soulbender Member since:
2005-08-18

Every component has been thought out for maximum efficiency and scaling.


I guess you just forgot about the awful performance and scalability of NetServer.

Reply Score: 3

Where are we going?
by Zbigniew on Sat 19th Feb 2011 21:59 UTC
Zbigniew
Member since:
2008-08-28

"Current software is shameful. Giant operating systems linger from the 1970's. Applications are team-produced with built-in obsolescence. User interfaces feature puzzle-solving.

With the huge RAM of modern computers, an operating system is no longer necessary, if it ever was".

"I despair. Technology, and our very civilization, will get more and more complex until it collapses. There is no opposing pressure to limit this growth. No environmental group saying: Count the parts in a hybrid car to judge its efficiency or reliability or maintainability".


Chuck Moore, "father" of the Forth programming language.

I would also recommend reading:
http://patorjk.com/programming/articles/forththoughts.htm

Reply Score: 3

RE: Where are we going?
by Zbigniew on Sat 19th Feb 2011 23:41 UTC in reply to "Where are we going?"
Zbigniew Member since:
2008-08-28

Uh, I forgot: this could be interesting as well:
http://hubpages.com/hub/_86_Mac_Plus_Vs_07_AMD_DualCore_You_Wont_Be...

"Check out the results! For the functions that people use most often, the 1986 vintage Mac Plus beats the 2007 AMD Athlon 64 X2 4800+: 9 tests to 8! Out of the 17 tests, the antique Mac won 53% of the time! Including a jaw-dropping 52 second whipping of the AMD from the time the Power button is pushed to the time the Desktop is up and useable.
[..]
...it can be stated that for the majority of simple office uses, the massive advances in technology in the past two decades have brought zero advance in productivity".

Reply Score: 3

RE[2]: Where are we going?
by static666 on Wed 23rd Feb 2011 14:13 UTC in reply to "RE: Where are we going?"
static666 Member since:
2006-06-09

Oh, that's so true!

I remember playing with a vintage Mac SE FDHD back in 2001-2002. Running System 7, it had Excel 2.0 and Word 4.0 for Mac installed, and I was shocked to discover that those old versions from 1991 had almost all the daily-use features of Office XP, the latest version at that time.

Reply Score: 1

Haiku vs Linux
by ParadoxUncreated on Sun 20th Feb 2011 03:46 UTC
ParadoxUncreated
Member since:
2009-12-05

This article discusses NewOS vs Linux a bit. NewOS was later forked to become the kernel of the Haiku project.

The article is also ten years old.

http://www.drdobbs.com/cpp/184404881;jsessionid=GX0KADKZWKDJZQE1GHP...

Reply Score: 1

What about BeOS sources?
by joahim on Sun 20th Feb 2011 12:05 UTC
joahim
Member since:
2006-08-22

Palm, the owner of BeOS, has just been acquired by HP. So HP is now able to use BeOS or make it open source...

Reply Score: 1

Buzzword enabled.
by ParadoxUncreated on Sun 20th Feb 2011 16:14 UTC
ParadoxUncreated
Member since:
2009-12-05

I think what matters in reality is how good the OS is at handling low-latency streams, say 1 ms,
and how much of the processor you can utilize.

Last time I checked this with linux, the BFS patch seemed to produce the best results.

The mainstream kernel also seems to improve all the time.

I talked to a Mac user, and he said 2 ms audio latency was usable on a typical Mac.

Back in the day, I also remember one of the Be engineers talking about running audio at 1 ms latency.
That should mean it would be an even easier task today.

I also see a page from the Windows engineering blog where they talk about 10 ms latency, which is far too much by today's standards. http://blogs.msdn.com/b/e7/archive/2009/06/17/improving-audio-glitc...

Reply Score: 2

Cool stuff
by Risteard on Mon 21st Feb 2011 09:36 UTC
Risteard
Member since:
2011-02-17

I only think that modern applications could be made more responsive. Yes, we might still need some powerful hardware in order to play or convert heavy video files; however, nowadays most of this power is being eaten by heavy operating systems and bloatware, making simple tasks such as watching movies, burning CDs and surfing the web as slow as they were on older computers, depending on what OS is being used ;) .

Reply Score: 3

RE: Cool stuff
by vodoomoth on Mon 21st Feb 2011 11:36 UTC in reply to "Cool stuff"
vodoomoth Member since:
2010-03-30

however nowadays, most of this power is being exploited by heavy operating systems and bloatware, making any simple process as watching movies, burning cds, surfing the web and other stuff as slow as it was with older computers, depending on what OS is being used ;) .

Even slower. When I click a music file in Windows Media Player, I usually wait around 20-30 seconds before it starts playing, no word of a lie. What it is doing, and why it is thrashing the disk when Sonique, Winamp or Jaangle would start playing the file within a second, I don't know.

Reply Score: 2

Slow scripting code
by rif42 on Tue 22nd Feb 2011 08:56 UTC
rif42
Member since:
2005-11-20

Lots of CPU power is wasted nowadays with processors crawling through scripting language code.

Edited 2011-02-22 08:58 UTC

Reply Score: 1

A little test.
by ParadoxUncreated on Tue 22nd Feb 2011 17:49 UTC
ParadoxUncreated
Member since:
2009-12-05

I tried some quick low latency tests.

Standard threads on Linux seem to run solid at 2 ms latency. A BFS-patched kernel and rt-threads run with a few dropouts at 0.3 ms latency.
Windows DirectSound needs 10 ms latency for solid audio.
Windows ASIO runs at 2 ms latency, with a few more dropouts than Linux has at 0.3 ms.

OS jitter is clearly much lower on Linux than on Windows. And that means better scalability over several processors.
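
For anyone wanting to repeat a rough version of this test: scheduler wakeup jitter can be estimated by sleeping for a fixed period and measuring how late each wakeup is. `measure_jitter` is a hypothetical helper, and a real audio test would of course go through the sound API instead:

```python
import statistics
import time

def measure_jitter(period_ms=2.0, iterations=200):
    """Sleep `period_ms` repeatedly; record how late each wakeup is."""
    period = period_ms / 1000.0
    lateness = []
    for _ in range(iterations):
        t0 = time.perf_counter()
        time.sleep(period)
        # How much later than requested did we actually wake up?
        lateness.append(time.perf_counter() - t0 - period)
    return max(lateness), statistics.mean(lateness)

worst, mean = measure_jitter()
print(f"worst wakeup overshoot: {worst * 1000:.3f} ms, mean: {mean * 1000:.3f} ms")
```

The worst-case overshoot, not the mean, is what determines whether an audio thread with a 2 ms deadline drops out.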

BeOS's main selling point was scaling over several processors, was it not? So eventually Haiku needs to be better than this again to compete with Linux (?). ;)

Reply Score: 2