Running out of ink? The Apple online store carries several varieties, as users of the company’s next-generation operating system may soon find out.
no speed increase from the g5s for a yr
Hmmmm, the G5 was bumped to 2.5 GHz just this June, 3 months ago. Which planet do you come from where a year is less than 3 months?
ipod shortages
Hunh???? when did that happen? the iPod minis are shipping in volume. I saw a bunch of boxes at Fry’s.
raptor spreading bunk again:
“no speed increase from the g5s for a yr
Hmmmm, the G5 was bumped to 2.5 GHz just this June, 3 months ago. Which planet do you come from where a year is less than 3 months?”
It means that from the day the G5 came out, it took a year for them to add a single MHz, not within a year from today. Get a grip.
And announced in June, they shipped in September, and it still takes 3 to 5 weeks to get one if you order today. I think IBM is capable of producing about 50 per week from what you see on the web.
“ipod shortages
Hunh???? when did that happen? the iPod minis are shipping in volume. I saw a bunch of boxes at Fry’s.”
Oh well, I’ll take your word for it that the whole world is in great supply of iPods because you saw a few at a single store.
http://money.cnn.com/2004/08/24/technology/techinvestor/lamonica/
“The biggest issues facing Apple in the short term are various component troubles. The iPod mini is currently on backorder, for example, because of hard drive shortages at supplier Hitachi.”
It’s been all over the net how, over the last few months, finding iPods has been a chore. Spread your wings a little and look around. You will learn some stuff, raptor.
“look at the newest osnews story on the imac g5. 1.8ghz g5 is 50% faster than 1.25ghz g4 in raw clock but it only beats it by 35% in benchmarks. pathetic.”
What percentage would not be pathetic? Do you mean a processor 50% faster should be 50% more efficient? Is that a law?
when it has better video card
faster sata 150 hard drive
faster system bus
hot new superfast panther 10.3.5
50% faster cpu that is supposed to have all these great enhancements over the g4.
etc etc etc
yeah it should do better than a 35% speed increase.
Subjective. No fact. “Should do”??
It’s been all over the net how, over the last few months, finding iPods has been a chore. Spread your wings a little and look around. You will learn some stuff, raptor.
Maybe you should learn more. That news was about a month old and is just that: OLD NEWS. Apple is shipping iPod minis in bulk now.
Faster sata 150 hard drive
faster system bus
hot new superfast panther 10.3.5
50% faster cpu that is supposed to have all these great enhancements over the g4.
etc etc etc
yeah it should do better than a 35% speed increase.
Bullshit. That is exactly how much you know about computer technology.
SATA 150 or ATA/100 makes zero difference in the performance of a single 7200 RPM drive.
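(As a rough sanity check, assuming typical numbers for a 2004-era 7200 RPM desktop drive of roughly 50 to 60 MB/s sustained transfer: ATA/100 offers 100 MB/s of interface bandwidth and SATA 150 offers 150 MB/s, both well above what a single drive can actually deliver, so the interface is not the bottleneck for one drive.)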
Maybe if you had posted better context. Here is the article in context.
There you go, it is twice as fast on CPU-intensive tasks.
“Speedmark, our all-around system performance benchmark, showed that the 1.8 GHz iMac G5s were 35 percent faster than the 1.25GHz iMac G4. Some individual application tests, like rendering a scene using Maxon’s Cinema 4D, showed the new 1.8GHz iMac completing tasks in less than half the time it took the 1.25GHz G4 version.
The results also indicated that the performance gap between Apple’s consumer desktop and professional models is closing. Tests with applications like Cinema 4D and Apple’s Compressor that make the most of two processors showed the dual-1.8GHz Power Mac running twice as fast as the new 1.8GHz iMac G5. But Speedmark also showed that Power Mac was about 19 percent faster than the new 1.8GHz iMac in the suite’s 15 tests; that same system was 61 percent faster than the old 1.25GHz iMac G4.”
Amazon is shipping iPod minis in 1 to 2 days.
http://www.amazon.com/exec/obidos/tg/detail/-/B0001A99ME/002-961192…
SILVER: Availability: Usually ships within 24 hours
GREEN: Availability: Usually ships within 1 to 2 days
In a discussion about what Apple’s sales figures will be like for ALL of 2004, what is in effect today doesn’t carry that much weight.
What carries more weight is what has been the norm for most of the year, considering the year ends in a week’s time.
“Maybe you should learn more. That news was about a month old and is just that: OLD NEWS. Apple is shipping iPod minis in bulk now.” That statement just shows how dense you are capable of being.
One-month-old news is not invalid when discussing what has transpired with Apple since last Oct 1.
You make this too easy, raptor.
“Bullshit. That is exactly how much you know about computer technology.
SATA 150 or ATA/100 makes zero difference in the performance of a single 7200 RPM drive.”
That is the most ignorant thing I’ve read on here in weeks… probably since one of your last doozies.
Simply incredible how you pretend to know something yet write something so profoundly ignorant.
I think Apple will make about $9 billion in gross revenue for 2004… But I am talking about the calendar year, meaning Apple will have tallied that sales figure between Jan 1, 2004 and Dec 31, 2004. Just to clarify.
This is what I wrote, ND. Go back and read it. Over a couple of posts. I was very clear, as you see. Not Apple’s fiscal year… over the calendar year of 2004.
Either way, it really doesn’t matter, does it, Seeker? You can wait the extra two months to gloat, can’t you? What’s two months with the disastrous year that Apple is having.
Hmmm?
You don’t get to make the rules.
Apple measures their sales on a fiscal year, not a calendar year.
The whole world abides by this… don’t go twisting because you know it won’t come true, either now or come January. I’ll let you use both to your heart’s content. Apple will not hit 9 billion in sales for either time frame.
Stretch it out all you like and we can have a chuckle at your absurd musings two times.
Sorry to break your bubble, but NT has been nothing but unstable as a workstation or server OS.
Sorry to burst your bubble, but it’s been quite stable for these purposes.
I have seen more than my fair share of IRQL_NOT_LESS_OR_EQUAL kernel crashes on NT.
That’s almost always a bad driver or bad hardware. I’ve seen more than enough of them on every other OS I’ve ever used.
I have seen unplugging a SCSI scanner from NT crash the kernel and it kept crashing every boot after, had to be rebuilt.
Probably not a hotplug SCSI bus with disks on the same bus – hardly surprising.
I have seen a bad mouse blue screen NT on boot.
That I find exceptionally hard to believe.
*shrug*. I can come up with meaningless stability anecdotes about Linux, OS/2 MacOS, MacOS X, Solaris, BeOS and others if you really want me to.
NT is far from secure in terms of the number of viruses and exploits being released every other day, week, month.
This word, “secure”, it doesn’t mean what you seem to think it means.
MS even announced that they would stop feature development to make Windows secure.
“Microsoft’s record on software security has been heavily criticized in the past, and in January of this year the company announced a new emphasis on trustworthy computing in an effort to clean up its image. This news was soon followed by word that its software developers would stop writing new code while they audited their existing code for security flaws.”
That doesn’t bode well, coming from the horse’s mouth.
But when, say, the OpenBSD team commit to a code review, that’s a *good* thing, right ?
Contrary to what others might think, I have used Windows more than Macs and absolutely detest it after having supported it at a technical level while in college, and even now for friends.
Your opinion is noted. It does not change the simple fact that an appropriately NT machines are stable, robust and secure.
[Contrary to what others might think, I have used Windows more than Macs and absolutely detest it after having supported it at a technical level while in college, and even now for friends.
Your opinion is noted. It does not change the simple fact that an appropriately NT machines are stable, robust and secure.]
Ugh. Should have said:
Contrary to what others might think, I have used Windows more than Macs and absolutely detest it after having supported it at a technical level while in college, and even now for friends.
Your opinion is noted. It does not change the simple fact that appropriately configured and administered NT machines are stable, robust and secure.
What strategy?
Turning the Operating System into a commodity product and selling it separately.
Wanting desktops on everyone’s desk in the world is not a strategy; it’s at best a dream (at least back in those days). Nothing wrong with dreams, just that when you say strategy it implies some kind of plan was had, which is ALMOST the total opposite.
I never said that was a strategy, I said that was what Bill Gates wanted to do. The “strategy” part was what they did to facilitate that goal.
That’s how I view the whole BillG scenario – he saw that economies of scale were surely going to increase (thanks to IBM, not MS) as more people were buying computers so he then “set out” to put one on every desktop.
Actually, Bill’s cleverness was making an OS that didn’t require an IBM computer to run on and then selling it separately. By doing that, Microsoft played a key part in those economies of scale and in driving the cost of computing down.
Now take Apple on the other hand: they pursued DTP and created various products that implemented ideas relegated to academia or not even tried before (HyperCard/HyperTalk, Objective-C, WebObjects, OS-level support for scripting applications), all of which would eventually be adopted by MS when it sees fit over the years (ASP.NET is only now getting WO features that have won it various technical awards; to be fair, so is Java with JSF, but we are discussing MS).
Objective-C came from NeXT, not Apple, and I’m pretty sure WebObjects did as well. HyperCard was quite an innovative product, probably one of the most innovative Apple has ever done. “OS-level scripting” isn’t really anything special.
Microsoft did quite a bit to pursue Bill’s “dream” of a computer on every desk – making hardware and software much more affordable, Visual Basic, application integration (Office).
They’ve both done a lot – as I said, different strategies – Microsoft have tried to make computing cheaper and give customers more bang for their buck, Apple have concentrated on the entire user experience and “being cool”.
I don’t believe NT was ever ported to SPARC. The Alpha port was killed when 2000 shipped, and so was the PPC port.
The SPARC port was done internally and never released. There was also one (internally) done to HP’s PA-RISC (and probably others).
The Alpha port made it through to Windows 2000’s second beta round and was canned after that. I think I’ve still got a CD around here somewhere with it.
The PPC port only made it to about Service Pack 3 of NT.
And let’s not forget NT’s original development was done on Intel’s i860 CPU.
It’s pretty obvious that NT was – and remains – a portable OS. Why people even bother to try and argue otherwise is beyond me.
Windows now only supports x86 and has only supported x86 since NT 5.0/Win 2000. XP is going to add support for 64-bit architectures. Since it hasn’t shipped yet, XP only supports 32-bit x86 today.
Windows also supports Itanium and is 64 bit on it.
Also, NT was never ported to MIPS. Microsoft has never released a SPARC port of NT, or a MIPS one at that. They killed the Alpha and PPC ports a long time ago.
My NT4 CD disagrees with you:
Y:\>dir
Volume in drive Y is NTWKS40A
Volume Serial Number is D7E5-675A
Directory of Y:\
14/10/1996 11:38 AM [DIR] ALPHA
14/10/1996 11:38 AM 176 AUTORUN.INF
14/10/1996 11:38 AM 6 CDROM_W.40
14/10/1996 11:38 AM [DIR] DRVLIB
14/10/1996 11:38 AM [DIR] I386
14/10/1996 11:38 AM [DIR] LANGPACK
14/10/1996 11:38 AM [DIR] MIPS
14/10/1996 11:38 AM [DIR] PPC
14/10/1996 11:38 AM [DIR] SUPPORT
2 File(s) 182 bytes
7 Dir(s) 0 bytes free
Y:\>
The MIPS (and PPC) port may not be present on CDs pressed after they were canned, but if you get an original NT4 CD (like mine), there it is.
Show me where I can buy a PC with Itanium at a reasonable cost that will outperform a Pentium-based machine for home use.
How is price relevant ? Your claim was 2003 and XP are now only available on x86 (presumably to imply that NT isn’t portable “anymore”). You are wrong.
Panther already supports 8 GB of memory and 64 bit apps. I would call that a 64-bit OS. I am thinking you know what 64-bit means????
I am thinking you don’t.
OS X isn’t 64 bit yet. It can address lots of RAM (but, then again, so can 32 bit Windows – up to 64GB) and a few system libraries are 64 bit (enough so Apple can make their traditional sorta-true-but-not-really marketing claims). However, AFAIK applications can’t yet allocate more than a 32 bit address space – which is really the acid test – so OS X isn’t 64 bit in any meaningful way.
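(A minimal sketch of that acid test, purely illustrative and not from either poster: a small C program that tries to obtain more than 4 GB in a single process. In a 32-bit process the request cannot even be represented in size_t; in a genuinely 64-bit process it can succeed, given enough RAM and swap.)

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <stdint.h>

    int main(void) {
        /* The acid test: can one process address more than 4 GB? */
        unsigned long long want = 5ULL * 1024 * 1024 * 1024; /* 5 GB */

        if (want > SIZE_MAX) {
            /* size_t is 32 bits here, so no single allocation can exceed 4 GB. */
            printf("32-bit process: cannot even request %llu bytes\n", want);
            return 1;
        }
        char *p = malloc((size_t)want);
        if (p == NULL) {
            printf("allocation of %llu bytes failed\n", want);
            return 1;
        }
        memset(p, 1, (size_t)want); /* touch it so it is actually backed by real pages */
        printf("allocated and touched %llu bytes: a 64-bit address space is in use\n", want);
        free(p);
        return 0;
    }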
Probably not a hotplug SCSI bus with disks on the same bus – hardly surprising.
SCSI devices always go through an arbitration and selection phase. SCSI buses are robust enough to deal with surprise removal of devices. At best, the OS should retry the selection phase and print “device not responding to selection” or some such message after a timeout, not CRASH permanently.
In case you didn’t notice, I said it died on boot every time afterwards and needed to be reinstalled. That sounds pretty weak to me.
BTW, this was a single-channel Adaptec HBA solely for the scanner.
That I find exceptionally hard to believe.
Of course, you didn’t spend three days trying to figure that one out after yanking every damn card and device out of the system.
This word, “secure”, it doesn’t mean what you seem to think it means.
What does secure mean then? NT isn’t even Common Criteria certified if it is networked.
But when, say, the OpenBSD team commit to a code review, that’s a *good* thing, right ?
The OpenBSD team’s commitment was from the get-go. MS committed to this after billions of $$$ of losses their customers have sustained, and after it became an issue for them publicly. There is a huge difference when someone like MS commits to it after a threat from Linux and after 15 years.
Your opinion is noted. It does not change the simple fact that appropriately configured and administered NT machines are stable, robust and secure.
Don’t twist what I said. I knew more about OSes in college than you do today.
Prime example being this ridiculous statement you just made.
However, AFAIK applications can’t yet allocate more than a 32 bit address space
WTF does “apps allocate an address space” mean? Address spaces are an abstract concept in an OS. A process “has” an address space. Allocations usually come from the heap or stack, which are part of the process’s address space.
How is price relevant ? Your claim was 2003 and XP are now only available on x86 (presumably to imply that NT isn’t portable “anymore”). You are wrong.
I believe this discussion is about Apple’s desktops, and Itaniums are not meant for consumers. So price is relevant.
Probably not a hotplug SCSI bus with disks on the same bus – hardly surprising.
I just realised how ignorant you are. You can unplug any device on a SCSI bus without affecting any other device. You should read the SCSI spec sometime.
There is no such thing as a hotplug SCSI bus; all SCSI devices are required to maintain power, ground, and control signals for 1 ms after removal in one of the cases. That means the device is hot during removal; it is in the SPEC.
Don’t pretend to know better. A little knowledge is dangerous.
SCSI devices always go through an arbitration and selection phase. SCSI buses are robust enough to deal with surprise removal of devices.
If you hotplug a SCSI device that isn’t designed for it, it’s going to leave the bus in an undefined state (although some hardware might be able to recover). Particularly if it affects termination.
At best, the OS should retry the selection phase and print “device not responding to selection” or some such message after a timeout, not CRASH permanently.
Uh huh. Because, like, NT4 is the only OS that’s ever crashed due to a SCSI error. I bet you’ve *never* seen another OS crash because of a SCSI error, right ?
SCSI drivers run in kernel mode – that means if they shit themselves, they’re more than capable of taking the OS down with them. This is, I might add, equally as likely on OS X.
In case you didn’t notice, I said it died on boot every time afterwards and needed to be reinstalled. That sounds pretty weak to me.
Well, if your abuse of the SCSI bus resulted in corrupted files on the hard disk(s) or a physically damaged peripheral, it wouldn’t be in the least surprising to me. I’ve seen crashes on lots of different OSes result in unbootable systems because of file/filesystem corruption, it’s hardly something that has – or will – only affect Windows.
Of course, you didn’t spend three days trying to figure that one out after yanking every damn card and device out of the system.
Given that NT functions perfectly well with millions of PS/2 peripherals all over the world, I’d propose to you that your example was a special case. Drawing a general conclusion from it was, at best, silly.
Also, I’d be fascinated to know what measure you are applying that will find OS X “stable, secure and robust” in 3 years, that doesn’t also say – all else being equal – likewise about NT. I also have to wonder if you’ll consider OS X to be “stable, secure and robust” even if someone quotes a few anecdotes about abnormal OS X problems.
What does secure mean then ?
Well, it certainly doesn’t mean “hasn’t had lots of derivative exploits targeted at it in the last X months”. By that logic, DOS is “secure”.
Personally, I’d call a platform secure if it can be kept free from exploits and malicious use with relatively little effort while remaining useful. Given that doing that on NT requires little more than a competent admin (as with any modern platform), I’d say it qualifies as secure.
The vast bulk of security problems are caused by users doing silly things. Just because a particular platform is so much more common that there are a lot more occurrences of its users doing silly things does not make that platform insecure.
NT isn’t even Common Criteria certified if it is networked.
Ignoring for a second that C2 certification excludes any networking criteria, which alternatives are you thinking of that have even reached that level ? In the context of a discussion about Apple’s desktop computers, I mean…
Hell, why do you even raise Common Criteria as a standard ? It’s pretty pointless given that it only applies to specific hardware and software combinations, is lost as soon as pretty much any modification to the certified configuration is made (like, say, patching a security hole) and is mostly meaningless outside of a tickbox on a form.
The OpenBSD team’s commitment was from the get-go.
But they still do code *reviews*. Are you saying when the OpenBSD team stops new development and does a code review that’s a bad thing ?
MS committed to this after billions of $$$ of losses their customers have sustained, and after it became an issue for them publicly.
Most people would consider a security review in those circumstances to be responsibly addressing customer concerns.
There is a huge difference when someone like MS commits to it after a threat from Linux and after 15 years.
The “threat from Linux” is nothing more than a Linux zealot’s wet dream. Linux and Windows compete directly in stunningly few areas. If you really believe Microsoft started their “security initiative” because of Linux and not because of bad press and actual security problems, then I’ve got a bridge to sell you.
Don’t twist what I said.
I’m not. You made a blanket statement that NT was not secure, stable and robust. You are wrong. Millions of people successfully using NT on workstations and servers for years indicates that your assertion that this is impossible is wrong.
I knew more about OSes in college than you do today.
Possible, but unlikely.
WTF does “apps allocate an address space” mean? Address spaces are an abstract concept in an OS. A process “has” an address space. Allocations usually come from the heap or stack, which are part of the process’s address space.
Processes in a 32 bit OS can’t have greater than a 32 bit address space. Perhaps I should have said “applications can’t yet request an address space larger than 4GB” – would that have avoided your pointless pedantry?
I believe this discussion is about Apple’s desktops, and Itaniums are not meant for consumers. So price is relevant.
When the claim is:
“Windows now only supports x86 and has only supported x86 since NT 5.0/Win 2000. XP is going to add support for 64-bit architectures. Since it hasn’t shipped yet, XP only supports 32-bit x86 today.”
Price is not relevant. Windows currently supports x86 and Itanium, is 64 bit on Itanium and has a freely available 64 bit beta for x86-64. Ergo, the assertion that Windows is only available on x86 and is only 32 bit, is wrong.
As an analogy, consider that if we were talking about Apple’s laptops and the assertion was made that “Apple has no 64 bit hardware”, that claim would still be wrong because Apple *does* have 64 bit hardware, even if said hardware isn’t a laptop.
You don’t get to make the rules.
Apple measures their sales on a fiscal year, not a calendar year.
The whole world abides by this… don’t go twisting because you know it won’t come true, either now or come January. I’ll let you use both to your heart’s content. Apple will not hit 9 billion in sales for either time frame.
Stretch it out all you like and we can have a chuckle at your absurd musings two times.
No, I don’t get to make the rules. But they are my predictions, and I was clear in what I wrote. You don’t get the chance to distort.
So you can gloat twice, but only one will have any relevance to this discussion. And if by some far-fetched chance I’m right, you won’t have the chance to gloat at all…
And for every prediction I made, you offered comments suggesting that I was way out in left field. So your words and arguments are also preserved for posterity, ND. We’ll see how often you’re right, and maybe the Apple users on OS News will be in for some serious gloating.
If you hotplug a SCSI device that isn’t designed for it, it’s going to leave the bus in an undefined state (although some hardware might be able to recover). Particularly if it affects termination.
No you are wrong. The HBA would reset the bus.
Uh huh. Because, like, NT4 is the only OS that’s ever crashed due to a SCSI error. I bet you’ve *never* seen another OS crash because of a SCSI error, right ?
Not as of yet, no. So yes, NT4 is the only OS I have seen that totally crapped itself silly because I pulled a scanner out. That’s fragile.
SCSI drivers run in kernel mode – that means if they shit themselves, they’re more than capable of taking the OS down with them. This is, I might add, equally as likely on OS X.
All drivers in most commercial OSes today run in privileged mode (the correct term, if you knew OS terminology). A robust OS’s SCSI drivers can handle abrupt device removal. After all, it is part of the SCSI spec. Any robust OS should be able to gracefully handle such things, especially ones that are designed for server use.
At best they should crash and recover on reboot, and not crap themselves permanently.
Well, if your abuse of the SCSI bus resulted in corrupted files on the hard disk(s) or a physically damaged peripheral,
OK, first, you are talking as if you were physically there when this happened. No, even unplugging the SCSI card (HBA) resulted in a crash on boot.
it wouldn’t be in the least surprising to me. I’ve seen crashes on lots of different OSes result in unbootable systems because of file/filesystem corruption, it’s hardly something that has – or will – only affect Windows.
So you’re saying NTFS isn’t robust enough to recover from a crash. BTW, the BSOD wasn’t a result of a corrupt filesystem. NTFS is a journalling filesystem and is designed to recover from crashes, especially when the boot device is on an IDE channel.
So that blows a huge hole in your “NT is robust” argument: it can’t handle filesystem recovery after a crash.
Ignoring for a second that C2 certification excludes any networking criteria, which alternatives are you thinking of that have even reached that level ? In the context of a discussion about Apple’s desktop computers, I mean…
C2 is not the same as Common Criteria certification.
Hell, why do you even raise Common Criteria as a standard ? It’s pretty pointless given that it only applies to specific hardware and software combinations, is lost as soon as pretty much any modification to the certified configuration is made (like, say, patching a security hole) and is mostly meaningless outside of a tickbox on a form.
Because it is an independent test of reliability and robustness, and certification that a product is well designed and secure. Your opinion notwithstanding, NT has not passed any level of independent scrutiny of its design being secure.
BTW, Microsoft spent a lot of resources trying to get Win2k certified at EAL 4 with flaw remediation. Apparently it is important to MS. However, Windows 2003 isn’t certified. That just shows a trend in MS’s code development and the importance they give to security. Then suddenly, with exploits every other day and Linux gaining ground, they announce a trusted computing initiative or whatever.
Furthermore, now with MS trying to penetrate data centers, it is very important. Also, the DoD wouldn’t deploy any product in critical areas without a certification.
But they still do code *reviews*. Are you saying when the OpenBSD team stops new development and does a code review that’s a bad thing ?
Every company that develops any quality software product does code reviews as a part of their development cycle, prior to integration into the main source base.
If you were a developer you would know. There was no need for MS to hype a security review that should have been done in the first place, especially after so much media attention.
Most people would consider a security review in those circumstances to be responsibly addressing customer concerns.
Have you heard of the term “too little, too late”? After the Department of Homeland Security sent out warnings, I wouldn’t call that meeting customer concerns. I would call that competition making MS more attentive to the issue that they were selling crap that people were forced to buy because of their monopoly.
The “threat from Linux” is nothing more than a Linux zealot’s wet dream. Linux and Windows compete directly in stunningly few areas. If you really believe Microsoft started their “security initiative” because of Linux and not because of bad press and actual security problems, then I’ve got a bridge to sell you.
Now I know you are an MS fanboy who is so far down its ass you don’t know what the head is doing. MS has an executive who is a Linux strategist and has publicly said that Linux is a threat. Time to come out and smell the fresh air.
You made a blanket statement that NT was not secure, stable and robust. You are wrong. Millions of people successfully using NT on workstations and servers for years indicates that your assertion that this is impossible is wrong.
NT is not secure in any measurable way. MacOS X is not being marketed at governments and mission critical applications, but NT is. And it is important that it be certified and, at the very least, not have more holes than Swiss cheese.
BTW, no OS is secure. So your ranting about NT being secure multiple times and making a blanket statement about its security made me react. NT is relatively insecure compared to its competition.
Processes in a 32 bit OS can’t have greater than a 32 bit address space. Perhaps I should have said “applications can’t yet request an address space larger than 4GB” – would that have avoided your pointless pedantry?
My honest advice is stop trying to dig yourself further into a hole. Don’t pretend to know OSes. Applications requesting an address space is as absurd as allocating one.
Applications/processes are born in a system with an address space; it is their universe, and they don’t know anything outside of that space. That is how an OS guarantees a program/process can’t trample on any other process’s memory.
Well, I think my point is made: you know absolutely nothing about operating system internals.
Price is not relevant. Windows currently supports x86 and Itanium, is 64 bit on Itanium and has a freely available 64 bit beta for x86-64. Ergo, the assertion that Windows is only available on x86 and is only 32 bit, is wrong.
Well, it looks like Windows XP Pro is going away soon. HP just announced their EOL of Itanium workstations. HP co-developed Itanium, and cancelling a volume product like a workstation means the Itanium is dying.
While AR’s statement might have been wrong, it was forward-looking, almost prophetic. So yes, eventually it looks like Windows might only support x86-based architectures.
NTFS is a journalling filesystem and is designed to recover from crashes, especially when the boot device is on an IDE channel.
Should have said:
NTFS is a journalling filesystem and is designed to recover from crashes,
especially when, in this case, the boot device was on an IDE channel completely away from anything SCSI.
It’s not my fault that nobody is running their business using Windows Scripting Host. If you read up on AppleScript you will find that it is not your everyday OS-level scripting, as you put it.
Think WSH+VBA+more…
http://www.apple.com/applescript/stories/macys/
http://www.apple.com/uk/creative/trident/
http://www.apple.com/pro/science/black/
http://www.apple.com/hotnews/articles/2002/10/craighunter/
If it was so easy on Windows, many would do it. Instead they go out and buy VS to get the same functionality from VB.
No you are wrong. The HBA would reset the bus.
The HBA *might* reset the bus.
Not as of yet, no. So yes, NT4 is the only OS I have seen that totally crapped itself silly because I pulled a scanner out. That’s fragile.
Then you’ve led a fairly sheltered life.
All drivers in most commercial OSes today run in privileged mode (the correct term, if you knew OS terminology). A robust OS’s SCSI drivers can handle abrupt device removal. After all, it is part of the SCSI spec. Any robust OS should be able to gracefully handle such things, especially ones that are designed for server use.
If a SCSI driver running in “privileged mode” takes out the OS, there’s not much the OS can do about it.
At best they should crash and recover on reboot, and not crap themselves permanently.
That is entirely dependent on the circumstances.
So you’re saying NTFS isn’t robust enough to recover from a crash. BTW, the BSOD wasn’t a result of a corrupt filesystem. NTFS is a journalling filesystem and is designed to recover from crashes, especially when the boot device is on an IDE channel.
NTFS only journals metadata. That means only a coherent filesystem structure is guaranteed. It’s quite possible to end up with a corrupt file in those circumstances.
That’s also ignoring the possibility of the write cache on the drive itself, which knows nothing of filesystem structures and hence could corrupt even a journalling FS.
If an OS crashes *hard* halfway through disk operations, it doesn’t matter where the disk is located, it’s still quite possible for files on that disk to be corrupted.
C2 is not the same as Common Criteria certification.
Point taken. I read it quickly and assumed you were talking about that, since it’s the usual topic that gets raised talking about NT and security.
Because it is an independent test of reliability and robustness, and certification that a product is well designed and secure. Your opinion notwithstanding, NT has not passed any level of independent scrutiny of its design being secure.
Uh, it was C2 certified back in the NT4 days. I’d say that qualifies as “independent scrutiny of its design”.
Not that you really need that to figure out whether the *design* is secure – just read a book or two describing the design.
BTW, Microsoft spent a lot of resources trying to get Win2k certified at EAL 4 with flaw remediation. Apparently it is important to MS.
Like C2 was, it’s probably a tickbox in some government purchasing decisions.
However, Windows 2003 isn’t certified. That just shows a trend in MS’s code development and the importance they give to security. Then suddenly, with exploits every other day and Linux gaining ground, they announce a trusted computing initiative or whatever.
Ah, so does the fact that neither Linux, OS X or any of the BSDs are certified also show a trend in their code and the importance their developers give to security ?
Every company that develops any quality software product does code reviews as a part of their development cycle, prior to integration into the main source base.
If you were a developer you would know. There was no need for MS to hype a security review that should have been done in the first place, especially after so much media attention.
Microsoft lives and dies by how the market perceives it. You’d better believe they need to tell the whole world they’re doing something to address security.
Have you heard of the term “too little, too late”? After the Department of Homeland Security sent out warnings, I wouldn’t call that meeting customer concerns. I would call that competition making MS more attentive to the issue that they were selling crap that people were forced to buy because of their monopoly.
I seem to recall that warning coming out some time after the “security initiative” started. Not to mention, nothing makes a company address “customer concerns” better than the threat of competition.
Now I know you are an MS fanboy who is so far down its ass you don’t know what the head is doing. MS has an executive who is a Linux strategist and has publicly said that Linux is a threat. Time to come out and smell the fresh air.
Microsoft has probably had a Linux strategist for 5+ years. They don’t stuff around when they perceive _any_ threat, no matter its size.
NT is not secure in any measurable way. MacOS X is not being marketed at governments and mission critical applications, but NT is. And it is important that it be certified and, at the very least, not have more holes than Swiss cheese.
There’s not much the OS can do to stop its users doing stupid things. The number of actual *holes* is no more or less than the alternatives.
BTW, no OS is secure. So your ranting about NT being secure multiple times and making a blanket statement about its security made me react. NT is relatively insecure compared to its competition.
Which competition are you thinking of and what metrics are you using to determine “relatively secure” ? Be specific, explain your reasoning and provide evidence to support your conclusions.
My honest advice is stop trying to dig yourself further into a hole. Don’t pretend to know OSes. Applications requesting an address space is as absurd as allocating one.
Applications/processes are born in a system with an address space; it is their universe, and they don’t know anything outside of that space. That is how an OS guarantees a program/process can’t trample on any other process’s memory.
Well then, what Raptor-approved-terminology [tm] would you prefer I use in future to say that processes on OS X can’t access more than 4GB worth of memory, to avoid setting off your pedantry reaction?
Well, it looks like Windows XP Pro is going away soon. HP just announced their EOL of Itanium workstations. HP co-developed Itanium, and cancelling a volume product like a workstation means the Itanium is dying.
Probably it is, but that’s hardly relevant to whether or not Windows is available on Itanic.
While AR’s statement might have been wrong, it was forward-looking, almost prophetic. So yes, eventually it looks like Windows might only support x86-based architectures.
Which immediately means it isn’t portable, I suppose ?
You didn’t mention anything about 1-1-2004 to 12-31-2004.
Any normal person when discussing financials would assume you mean what everyone else uses… a fiscal year. Apple’s doesn’t run in agreement with the calendar year, sorry to say.
“But they are my predictions, and I was clear in what I wrote. You don’t get the chance to distort.”
So, no, you were not clear until you came along in later posts, twisting things about to Gankaku’s financial statement parameters for Apple.
Twist, twist, twist.
The HBA *might* reset the bus.
It will. Read the spec.
Then you’ve led a fairly sheltered life.
No, I have worked on SCSI drivers and peered over enough analyser traces to know.
If a SCSI driver running in “privileged mode” takes out the OS, there’s not much the OS can do about it.
Are you just being dense? I am fine with it taking the OS down; it shouldn’t if properly written. I would fire the guy who wrote the driver. But the OS should at least reboot and not die every time at boot, so much so that you have to reinstall everything again. Get it?
That’s also ignoring the possibility of the write cache on the drive itself, which knows nothing of filesystem structures and hence could corrupt even a journalling FS.
That’s why robust OSes like Solaris turn off write caches on disks. A little performance loss is acceptable when data integrity is concerned.
If an OS crashes *hard* halfway through disk operations, it doesn’t matter where the disk is located, it’s still quite possible for files on that disk to be corrupted.
I agree. But a kernel crash shouldn’t automatically cause corruption to kernel and driver files, which are almost never open for writing during normal operation. Files required for booting are never open for writing, and shouldn’t be.
Regardless of what is in the write cache, it can’t be kernel or driver executables and config files. Only blocks for files that are being written to would be in the write cache. A robust OS should not randomly write data to the disk and corrupt any arbitrary file.
Uh, it was C2 certified back in the NT4 days. I’d say that qualifies as “independent scrutiny of its design”.
It barely passed C2, without a network connection!!!! Any system is secure if you only have physical local access.
Ah, so does the fact that neither Linux, OS X or any of the BSDs are certified also show a trend in their code and the importance their developers give to security ?
All the OSes you mentioned have a pretty good track record of security. BTW, no one is running any of those OSes for mission critical tasks yet. But NT is targeted at those markets, not the others you mentioned.
Well then, what Raptor-approved-terminology [tm] would you prefer I use in future to say that processes on OS X can’t access more than 4GB worth of memory, to avoid setting off your pedantry reaction?
Not Raptor-approved, just standard terminology. If I am not mistaken, you claimed to know more about OSes than me. Use of standard terminology would be a good place to start.
Which immediately means it isn’t portable, I suppose ?
Nobody said NT wasn’t portable. Just that MS has killed all ports to any architecture over time; expect the Itanic to go the same way.
BTW, almost any modern OS is portable. Solaris, Linux, and Darwin (MacOS X) have all been ported to different architectures with ease.
It will. Read the spec.
Having hot plugged things before (that shouldn’t have been) and crashed machines (both with and without data loss), I’ll trust that the spec defines an escape route and continue asserting that hardware won’t always follow it.
No, I have worked on SCSI drivers and peered over enough analyser traces to know.
I struggle to believe anyone could use SCSI for as long as you appear to have and _not_ seen crashes due to SCSI problems.
Are you just being dense? I am fine with it taking the OS down; it shouldn’t if properly written. I would fire the guy who wrote the driver. But the OS should at least reboot and not die every time at boot, so much so that you have to reinstall everything again. Get it?
Were that an event that happened commonly, repeatably and was clearly attributable to the OS, I’d agree with you.
As it stands, you appear to be making a *massive* generalised extrapolation from a single data point – ie: that every SCSI error in NT will cause irretrievable filesystem corruption.
That’s why robust OSes like Solaris turn off write caches on disks. A little performance loss is acceptable when data integrity is concerned.
And you can turn it off in NT as well if you want.
I agree. But a kernel crash shouldn’t automatically cause corruption to kernel and driver files, which are almost never open for writing during normal operation. Files required for booting are never open for writing, and shouldn’t be.
It doesn’t “automatically cause corruption”. It caused it in your particular example.
It barely passed C2, without a network connection!!!! Any system is secure if you only have physical local access.
a) C2 doesn’t define any networking functionality at all.
b) how do you “barely pass” something that’s either a “pass” or a “fail” ? Is that like being “barely pregnant” ?
All the OSes you mentioned have a pretty good track record of security.
Yes, well, it’d be interesting to see just what their “track record” would be like with 95% market share and a vast majority of end users who happily run anything an email asks them to, but I doubt that’s ever going to happen. It’s not like the number of security problems from actual OS flaws is especially different.
BTW, no one is running any of those OSes for mission critical tasks yet. But NT is targeted at those markets, not the others you mentioned.
I think IBM, Red Hat and SuSE like to market their Linux-based products as being “mission critical”. Not to mention all those Linux advocates who like to say the same thing.
However, this is getting ridiculous. I was only responding to your assertion that:
“I would say OS X became a stable, robust and secure computing platform in a mere 3 years. It’s taken MS what, 9 years, and it’s not even close”
(Which seems to be contradicted by your later statement that “no OS is secure”…)
My reply that NT was similarly “stable, robust and secure” was clearly meant to be taken in the context of a comparison to OS X, but you’ve chosen to blow that out to somehow sound like an absolute, universal truth, something I never meant it to be.
Not Raptor-approved, just standard terminology. If I am not mistaken, you claimed to know more about OSes than me. Use of standard terminology would be a good place to start.
The few people I ran the same phrase (in context) past yesterday knew exactly what I was talking about straight away.
Incidentally, I never claimed to know more about OSes than you, I just doubt you knew more than I do now, when you were in college (particularly if we were to take a sample from, say, your first day there).
Nobody said NT wasn’t portable. Just that MS has killed all ports to any architecture over time; expect the Itanic to go the same way.
Well, the implication from AR (IP: —.sun.com) certainly appeared to me to be that NT wasn’t portable. I can’t think of why else he’d bother trying so hard to pretend the different ports that exist or existed don’t or didn’t.
BTW, almost any modern OS is portable. Solaris, Linux, and Darwin (MacOS X) have all been ported to different architectures with ease.
Indeed. Which is why people continually trying to say NT isn’t portable, using the (incorrect) example that “it’s only available on x86”, mystify me. The underlying implication always seems to be that Microsoft is too incompetent to write (and maintain) a portable OS, despite copious amounts of evidence to the contrary.
You didn’t mention anything about 1-1-2004 to 12-31-2004.
Any normal person when discussing financials would assume you mean what everyone else uses… a fiscal year. Apple’s doesn’t run in agreement with the calendar year, sorry to say. “But they are my predictions, and I was clear in what I wrote. You don’t get the chance to distort.”
So, no, you were not clear until you came along in later posts, twisting things about to Gankaku’s financial statement parameters for Apple.
Twist, twist, twist.
Hey ND:
You’ve gone on record saying that Apple has absolutely no chance of making $9 billion in sales for 2004 at the end of either their fiscal year, or the calendar year.
Can I hold you to that? Are you trying to back out of your predictions? Because it sure seems like it.
I think you’re worried. So I’ll ask you straight out: Do you actually think Apple has a chance of making $9 billion in sales for the calendar year? Is that why you’re trying to stick it to me over this one small item in a whole list of predictions that you described thusly: “I’d say that you are way off on EVERYTHING that you wrote.”
Or is it just the code of honor for the OS News forums that you’re trying to uphold? Is there an unwritten rule that states that no one is allowed to clarify a position? I know that we’ve traded sarcastic barbs on this forum, but I’m not being sarcastic here. I’m honestly curious as to why you won’t accept my clarification.
Because you are right. Twelve hours passed between my original post and the small paragraph added later that clarifies just one small aspect of a whole range of incredible predictions. Truth to tell, I’m not trying to weasel out of anything. Just trying to let everyone know exactly what I’m predicting.
I’ll let you in on a little secret, ND. This isn’t the place for personal commentary or exposition, but here’s the reason why I offered that small addendum or clarification. I’m not writing these words to evoke sympathy, but rather to explain why I won’t budge one centimeter on my clarified prediction.
You see, ND. I have a brain tumor. And I take large doses of methadone everyday to help me cope with the pain. As a result, I’m not as sharp as I used to be, and I occasionally make little mistakes.
Like this one. The original post wasn’t clear, so I added to it. I will stick by the other predictions, without further amendments. What’s your problem?
The ball’s back in your court. Choose to accept my amended prediction, or choose to dismiss it. Just remember that in rebutting my statements, you’ve also made predictions of your own, and I won’t let you weasel out of any of them.
ND… we’re still waiting. Your homework, remember?
I asked you ages ago for proof that Apple is using overclocked parts… I don’t want your opinion, mind you… Just verifiable proof. You’ve repeated it so many times, you must have something to back it up.
Find me a link. You’ve said you always tell the truth, so please back up this one little claim with one little link from a reputable website or news organization.
Don’t get angry at me for asking. Just give us proof that you’re not making it all up.
Having hot plugged things before (that shouldn’t have been) and crashed machines (both with and without data loss), I’ll trust that the spec defines an escape route and continue asserting that hardware won’t always follow it.
SCSI is a packet-based bus and it is hard or almost impossible to put the bus in an undefined state.
I struggle to believe anyone could use SCSI for as long as you appear to have and _not_ seen crashes due to SCSI problems.
I never claimed to have not seen a crash due to SCSI problems. A crash due to a SCSI problem is almost always a driver bug, and never protocol or bus related. I said I have never seen an OS completely kill itself after a SCSI-related crash, or at least not a robust OS.
Were that an event that happened commonly, repeatably and was clearly attributable to the OS, I’d agree with you.
As it stands, you appear to be making a *massive* generalised extrapolation from a single data point – ie: that every SCSI error in NT will cause irretrievable filesystem corruption.
No, I never said that. You assumed I said that. I claimed I have seen NT completely kill itself with the removal of a SCSI scanner (a device non-essential to system operation), hence it is not robust. You challenged it with your half-baked understanding of SCSI and operating system internals.
It doesn’t “automatically cause corruption”. It caused it in your particular example.
Again, you are assuming it was a corrupt filesystem that caused it to fail at boot, when all signs point to the fact that the filesystem shouldn’t have been corrupt in the first place.
1) NT should be more robust in detecting an incorrect shutdown and recovering the filesystem.
2) A crash doesn’t mean a complete and abrupt power-off. IDE disk caches are automatically flushed every few milliseconds. So it is almost impossible that an NT4 BSOD, which doesn’t cause an abrupt power removal from the drive, would cause data from the cache to not be flushed. Moreover, a BSOD or crash on most OSes means a warm reset, and almost any ATX power supply needs its power button to be depressed for at least a few seconds before it shuts off.
3) A BSOD means that the OS ran long enough to print a crash screen and surely dumped some sort of memory image or core file to disk for postmortem analysis. In case it did, it should have flushed the disk caches explicitly with an ATA FLUSH CACHE command. In case it didn’t dump memory into a file, it is not a very robust OS.
All of the above indicate that your “the cache was probably not flushed” theory is highly unlikely.
So the fact that NT died permanently because a SCSI scanner was removed means that the OS is not robust.
a) C2 doesn’t define any networking functionality at all.
b) how do you “barely pass” something that’s either a “pass” or a “fail” ? Is that like being “barely pregnant” ?
My mistake. NT aimed for the non-networked criteria, is what I should have said.
Which is why people continually trying to say NT isn’t portable, using the (incorrect) example that “it’s only available on x86”, mystify me. The underlying implication always seems to be that Microsoft is too incompetent to write (and maintain) a portable OS, despite copious amounts of evidence to the contrary.
Just go back and read AR’s posts again. He states that MS never ported NT to SPARC and killed off the PPC and Alpha ports along the way, and now only the x86 ports remain.
I agree that he made a mistake with the Itanium port, but he never claimed NT wasn’t portable. So stop taking things out of context. You seem to do that often.
http://en.wikipedia.org/wiki/Windows_NT
“Windows NT 3.1 ran on Intel IA-32, DEC Alpha, MIPS R4000, and PowerPC processors; Intergraph Corporation ported Windows NT to its Clipper architecture and later SPARC, but neither version was sold to the public. Windows NT 4.0 was the last major release to support Alpha, MIPS, or PowerPC, though development of Windows 2000 for Alpha continued until 1999, when Compaq stopped support for Windows NT on that architecture. Windows XP 64-Bit, Windows Server 2003 Enterprise, and Windows Server 2003 Datacenter support Intel’s IA-64 processors. As of September 2004, Microsoft had published beta releases of three editions for the AMD64: Windows XP Professional x64, Windows Server 2003 Standard x64, and Windows Server 2003 Enterprise x64.”
and that article is too old to have incorporated the current beta support of Windows XP Pro for Intel’s new 64-bit extended Pentium/Xeon lineup.
portability:
Intel i860
Intel IA-32 x86 (including all x86 makers: intel, amd, via, transmeta, etc)
DEC Alpha
MIPS R4000
PowerPC
Clipper architecture
SPARC
Intel IA-64 Itanium
AMD64 (athlon 64, athlon fx, and opteron)
Intel EM64T (xeon, xeon mp, and pentium 4)
That is portability. Why are you trying to defend a fellow Mac fanatic’s false and mistaken statements?
Intergraph Corporation ported Windows NT to its Clipper architecture and later SPARC, but neither version was sold to the public.
OK, never sold to the public, oh wow. Did you or anyone else ever run NT on SPARC?
If nobody ever ran it, it is as good as if it never existed. What proof did Intergraph show the world that NT was actually working on SPARC and that there was any reasonable application for it?
Anyone can port any OS to any platform. The days of hand-coding machine language are long gone. Most OSes are written to be portable today, with high-level languages and compilers.
“Darwin is an advanced BSD UNIX operating system for PowerPC and x86 compatible architectures, currently owned and developed by Apple Computer, Inc. It forms the core of Apple’s flagship Mac OS X operating system, which has been catapulted from its release in 2001 to being the most widely distributed UNIX-based operating system.
Building on FreeBSD 5 and Mach 3.0, Darwin provides many advanced features, such as pre-emptive cooperative multitasking, symmetric multiprocessing (SMP), real-time threads, 64-bit kernel services, loadable file systems (including HFS+ and UFS), and more. Darwin also has a powerful, object-oriented device driver subsystem called I/O Kit, widely regarded as being one of the best available for any platform.
If you want portability: since MacOS X is based on the NeXT OS and the Mach microkernel, here is the portability list for Mach, which would be applicable here because you are bringing up history, long-dead ports and apparently ports that never made it out in public.
http://www-2.cs.cmu.edu/afs/cs/project/mach/public/FAQ/platforms
i386/i486: Toshiba 5200 laptop, Dell laptops
Intel and Olivetti systems
HP Vectras,
PS/2 microchannel bus machines with tokenrings
Sequent Symmetry
68k: Sun 3
Macintosh II, SE/30, IIcx, etc.
VAX: 8600, 8650, 8800, 6200, Microvax II and III.
Decstation 2100, 3100, 5000/200,120,20
Sun4/110, Sun4/sparcstations
NeXT – software released by NeXT
Omron Luna 88K multi-processor
IBM-RS/6000 – port done by IBM
Elsewhere Mach has been ported to the Sequent Corallary, various HP machines,
various experimental IBM machines, the IBM 370, IBM/RS600, the BBN Butterfly
the NS532, several Amiga models and lots of other boxes.
Also here is the Darwin compatibility list for x86
http://www.opendarwin.org/hardware/
All that still doesn’t make AR’s stupid comments suddenly become true.
All that still doesn’t make AR’s stupid comments suddenly become true.
No, but it sure does make your stupid comments more obvious. You did claim Windows was more portable. Now I have proven you wrong once more. And I will keep doing it till you stop trolling on Mac threads.
“No, but it sure does make your stupid comments more obvious. You did claim Windows was more portable. Now I have proven you wrong once more. And I will keep doing it till you stop trolling on Mac threads.”
windows as it ships today runs on more processors than mac os x.
forget nt 3 and 4. forget mach. forget darwin. forget unix. forget all that bunk you toss about. current versions of windows run on a greater variety of cpus than mac os x. period.
disprove that.
No, I never said that. You assumed I said that. I claimed I have seen NT completely kill itself with the removal of a SCSI scanner (a device non-essential to system operation), hence it is not robust. You challenged it with your half-baked understanding of SCSI and operating system internals.
Actually I’m challenging a sweeping generalisation that NT isn’t robust based on a single data point. If I can find a single incident where an OS X machine self-destructed, will you still call OS X “robust” ? How about other OSes ? Because I’ve seen heaps of Linux machines self-destruct and require significant recovery effort from apparently innocuous system crashes (power outages and the like). Heck, I’ve even seen a Solaris install rendered unbootable when some tech screwed up inside an e10k, and you don’t get much more “enterprise” than that as a normal person.
Again, you are assuming it was a corrupt filesystem that caused it to fail at boot, when all signs point to the fact that the filesystem shouldn’t have been corrupt in the first place.
There aren’t very many things that are going to render a machine unbootable. It’s almost certainly going to be some permanent hardware damage or severe file corruption.
1) NT should be more robust in detecting an incorrect shutdown and recovering the filesystem.
A valid filesystem doesn’t always mean valid file data. It’s perfectly possible to have corrupted files and a coherent filesystem when only metadata is being journalled.
2) A crash […]
3) A BSOD […]
All of the above indicate you the cache was probably not flushed theory highly unlikely.
I didn’t say that’s what happened, I simply said that was a possibility.
So the fact that NT died permanently because a SCSI scanner was removed means that the OS is not robust.
If a single extraordinary event is your measure of “not robust”, I think you’ll have a long, hard search trying to find something that is robust. It certainly makes your claim that OS X is “robust” sound ridiculous.
First, I would like to apologize for my XP Itanium port snafu. MS keeps changing their naming convention.
Actually I’m challenging a sweeping generalisation that NT isn’t robust based on a single data point.
No, I think he gave a few reasons, which you shot down as being bad hardware or drivers. Even the scanner-based example you shot down as a SCSI-related problem until raptor gave extremely valid points against it. Raptor provided a few examples and you wouldn’t hear it.
Actually I’m challenging a sweeping generalisation that NT isn’t robust based on a single data point.
It doesn’t, but as raptor mentioned, the kernel, drivers and files required for boot shouldn’t be open for writing. If a file was never opened for writing, how does it get corrupted?
Does NT just write random garbage anywhere on the disk when it goes belly up?
If a file required for boot is constantly being written to during normal system operation, then that is bad design.
There aren’t very many things that are going to render a machine unbootable. It’s almost certainly going to be some permanent hardware damage or severe file corruption.
Exactly, the fact that with NT it happened by doing something as innocuous as pulling a scanner out is pretty bad. Especially since a reinstall fixed the problem, it couldn’t have been bad hardware. And the case for filesystem corruption has been made.
Heck, I’ve even seen a Solaris install rendered unbootable when some tech screwed up inside an e10k, and you don’t get much more “enterprise” than that as a normal person.
I find that extremely hard to believe. E10K’s have redundancy and multipathing capabilities and no single point of failure can render an E10K system unbootable.
No, I think he gave a few reasons, which you shot down as being bad hardware or drivers. Even the scanner-based example you shot down as a SCSI-related problem until raptor gave extremely valid points against it. Raptor provided a few examples and you wouldn’t hear it.
I said most IRQ_NOT_LESS_OR_EQUAL BSODs are hardware or driver related. That’s perfectly true. I posited that the crash *might* have been because he unplugged a SCSI device that was not designed to be hotplugged, which may have caused the crash and that the crash might have caused file corruption.
Most of Raptor’s “points” are pedantic nitpicking of terminology. He agrees that driver errors can – and do – bring down OSes and that he’d seen SCSI errors crash OSes. Clearly, he knows a lot more about SCSI than I do, although I will stand by my statement that I’ve seen hotplugging SCSI devices that aren’t meant for it cause problems.
I also posited that a possible source of any file corruption may have been the hard disk’s write cache. Based on the information I knew *at the time* (only that Windows had crashed) it was a reasonable hypothesis, but later comments by Raptor would suggest that it was not a likely candidate.
This is the sort of thing that leads to the “hindsight is always 20/20” saying – it’s really easy to see where you’re wrong once you’ve got more information.
It doesn’t, but as raptor mentioned, the kernel, drivers and files required for boot shouldn’t be open for writing. If a file was never opened for writing, how does it get corrupted?
The registry is often open for writing and certain parts are required for a successful boot.
Does NT just write random garbage anywhere on the disk when it goes belly up?
Uh, a privileged mode driver certainly has the *ability* to write random crap anywhere it wants during the process of going belly up.
If a file required for boot is constantly being written to during normal system operation, then that is bad design.
A single example does not mean a file is constantly open for writing; it means that in that particular example it was open for writing.
I’ve seen a lot of NT crashes on various bits of hardware – as I’m sure most here have – and 99 times out of 100 or more the machine simply reboots without a problem, which indicates that a scenario in which an NT box crashes and doesn’t reboot is unusual, not commonplace.
Exactly, the fact that with NT it happened by doing something as innocuous as pulling a scanner out is pretty bad.
Without knowing exactly what was going on at the time that’s rather difficult to say for certain.
Especially since a reinstall fixed the problem, it couldn’t have been bad hardware. And the case for filesystem corruption has been made.
Since I doubt Raptor made any attempts at problem diagnosis and system recovery outside of simply throwing a CD back in and reinstalling (as that seems to be the typical response to Windows problems) it’s practically impossible to say what the problem actually was. It could have been anything from a few corrupted items in the registry, through an important but corrupted driver (for example, the NTFS driver), to a completely corrupted filesystem.
The important point here that I keep trying to make – and which is constantly ignored at every turn for the sake of pedantic nitpicking – is that declaring a product bad for everyone based on *one* example is just stupid. I’ve seen just about every mainstream OS anyone is ever likely to use crash, self-destruct, wipe out data for no reason whatsoever and just plain behave weirdly. However, since such events are usually *rare*, I don’t try to use them as sole truths for defining general behaviour.
If *every* time someone unplugged a scanner or the OS crashed their system was rendered unbootable, then Raptor would have a point. But that doesn’t happen.
I find that extremely hard to believe. E10K’s have redundancy and multipathing capabilities and no single point of failure can render an E10K system unbootable.
*System* – as in hardware – no. *OS* – as in software – most certainly. The cause was file and filesystem corruption that – had it not been for an excellent backup regime and supporting infrastructure (Jumpstart) – would probably have taken over a full day to recover from. As it was we managed to get the service back in a few hours, the majority of which was spent waiting for files to copy. When we looked at the relevant filesystems, the damage was extensive – they wouldn’t even mount initially and many files were missing, zero-length or corrupt once fsck had at least made them mountable.
Of course, something of that magnitude has happened to me extremely rarely, so I don’t go around saying Solaris sucks.
what’s the difference between darwin supporting multiple CPUs and OS X supporting multiple CPUs? You can install a darwin build into your OS X system, replacing all the underlying parts, and it still works just fine? Or at least that’s what I’ve read… I mean… it’s fundamentally the same thing as a different Windows version written for the Itanium with different library versions and only emulated x86 support, right?
…maybe not
Side question: Has anyone tried installing the Darwin 8 beta files into 10.3.x just to see what happens?
I don’t know if that works with darwin 7.x or not, I just read you have to do it to get ports working properly… but that was awhile back… and I was tired…and I was drunk at the time… so I’m probably not a reliable source.
I imagine 8 beta would cause dependency issues with aqua, but as long as it replaces the standard utilities with the new kernel it probably wouldn’t be that big a pain? Unless of course they did something evil and added pthreads like FreeBSD did, killing program compatibility, and making the GUI not boot without strange modifications.
Also, the SCSI comment was a perfect example of how OS X is more robust than NT. NT is monolithic, as I’m sure you know, which means if that “ass deep” scsi layer craps itself, chances are the system is coming down, at least for a reboot.
With OS X (at least in theory), you can take the SCSI server (yes, server, that’s the proper term when talking about microkernels, shut the hell up about it), shoot it, shoot the SCSI card itself, plug something in while it’s on, unplug it, repeat 300-400 times, and outside of a motherboard failure caused by your abuse, the system will not go down. It will just unload the SCSI server program if it starts being stupid about the fact that your SCSI card is in multiple pieces on the floor next to the case. Sure, any program needing SCSI access may crash, but the system itself *should* stay up with no real problems… unless of course the SCSI card was controlling your HDDs. Then you’re up a creek…
But, it *should always* (again, in theory) work this way, as opposed to never reacting the same way twice (due to differing tasks), as it would with most other OSs.
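To make the restart-the-server idea above concrete, here is a rough userland analogy (not actual Mach or OS X code; /usr/libexec/scsi-server is a made-up path): because the driver is just a process, a supervisor can respawn it when it dies instead of the whole machine going down.

/* Userland sketch of the microkernel argument: the "SCSI server" is an
 * ordinary process, so when it crashes you restart it and move on. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    for (;;) {
        pid_t pid = fork();
        if (pid < 0)
            return 1;                       /* can't even fork; give up */
        if (pid == 0) {
            execl("/usr/libexec/scsi-server", "scsi-server", (char *)NULL);
            _exit(127);                     /* exec failed */
        }
        int status;
        waitpid(pid, &status, 0);           /* blocks until the server dies */
        fprintf(stderr, "scsi-server exited (status %d), restarting\n", status);
        sleep(1);                           /* don't spin if it keeps dying */
    }
}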
I really don’t know where they went wrong with NT. They brought in the guy from Digital, who I assume had at least some experience with the OpenVMS code and design ideas, so you’d expect at least somewhere close to the stability, even if just from the design standpoint. OpenVMS stays up for a decade straight, whereas you’re lucky to get the full 49.7 days on the uptime counter out of NT before it either dies or has to be restarted for something stupid (i.e. Security patch).
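For what it’s worth, the 49.7-day figure isn’t arbitrary: it’s presumably the point where a 32-bit millisecond tick counter (what NT’s uptime reporting is widely said to be based on) wraps around. A quick sanity check:

/* 2^32 milliseconds expressed in days. */
#include <stdio.h>

int main(void) {
    double days = 4294967296.0 / (1000.0 * 60.0 * 60.0 * 24.0);
    printf("%.2f days\n", days);    /* prints 49.71 */
    return 0;
}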
Because of its design, OS X has the potential to be the most “robust” operating system out there (with the possible exception of QNX), as long as Apple stays diligent with the microkernel idea, instead of crapping out like Microsoft did after their initial attempts at it in NT 3.1.
Also, as a side note, Apple offered a public beta of OS X 10.0 for 6 months prior to selling it. They also offer darwin 8 beta now, which is basically a 10.4 beta for people who should actually be beta testers.
Windows doesn’t offer normal betas to everyone either, just a select few, so Apple really is better about betas than MS here too. And don’t tell me about the AMD64 and intel clone 64 bit version, that’s different. That’s just there because they are so behind on releasing it, and because it’s the only real incentive for a manufacturer to still make you pay for a normal 32bit license with your Athlon FX CPU even though you want the 64 bit version. You pay twice there too! Once when you buy something (we’ll say, a laptop), and once again when the 64 bit version comes out.
though looking back, it occurs to me that no one really cares. least of all me. I was just bored, and, since I took the time to write it, I might as well submit, eh?
Also, how the hell did all these topics come up as a result of an article on printer spam in the preferences box? there was so much reading, I just plain forgot…
what’s the difference between darwin supporting multiple CPUs and OS X supporting multiple CPUs? You can install a darwin build into your OS X system, replacing all the underlying parts, and it still works just fine? Or at least that’s what I’ve read… I mean… it’s fundamentally the same thing as a different Windows version written for the Itanium with different library versions and only emulated x86 support, right?
Not really. OS X has substantially more functionality than Darwin – in particular, the GUI. To just dismiss its importance with regards to portability with a wave of the hand is foolish.
Having said that, I’ve no doubt whatsoever OS X is quite portable.
Also, the SCSI comment was a perfect example of how OS X is more robust than NT. NT is monolithic, as I’m sure you know, which means if that “ass deep” scsi layer craps itself, chances are the system is coming down, at least for a reboot.
NT is microkernel-ish. About as microkernel-ish as OS X, in fact. It’s certainly not monolithic in the same sense as Linux or FreeBSD.
Back in the NT 3.x days it was more like a “pure” microkernel, but Microsoft had to move away from that for performance reasons, like Apple.
Apparently Longhorn is supposed to be moving back towards a “pure” microkernel – machines are getting fast enough these days, so it might even be doable.
With OS X (at least in theory), you can take the SCSI server (yes, server, that’s the proper term when talking about microkernels, shut the hell up about it), shoot it, shoot the SCSI card itself, plug something in while it’s on, unplug it, repeat 300-400 times, and outside of a motherboard failure caused by your abuse, the system will not go down. It will just unload the SCSI server program if it starts being stupid about the fact that your SCSI card is in multiple pieces on the floor next to the case. Sure, any program needing SCSI access may crash, but the system itself *should* stay up with no real problems… unless of course the SCSI card was controlling your HDDs. Then you’re up a creek…
Were OS X a true microkernel, that would probably be true. However, it isn’t – Apple had to make much the same sacrifices Microsoft did to the concept of a microkernel to get sufficient performance, which included moving a great deal of “stuff” into kernel – sorry, privileged – mode. So, a driver stuffup (SCSI or otherwise) in OS X is just as likely to bring the system crashing to its knees as it is in NT.
I really don’t know where they went wrong with NT. They brought in the guy from Digital, who I assume had at least some experience with the OpenVMS code and design ideas, so you’d expect at least somewhere close to the stability, even if just from the design standpoint.
NT and VMS are _very_ similar in design, as you’d expect. NT’s “purity” just fell victim to the practicalities of having to be a product for the mass market.
OpenVMS stays up for a decade straight, whereas you’re lucky to get the full 49.7 days on the uptime counter out of NT before it either dies or has to be restarted for something stupid (i.e. Security patch).
Bollocks.
Because of its design, OS X has the potential to be the most “robust” operating system out there (with the possible exception of QNX), as long as Apple stays diligent with the microkernel idea, instead of crapping out like Microsoft did after their initial attempts at it in NT 3.1.
Too late.
Also, as a side note, Apple offered a public beta of OS X 10.0 for 6 months prior to selling it.
For $29.95, don’t forget. I imagine that was a nice little money spinner.
They also offer darwin 8 beta now, which is basically a 10.4 beta for people who should actually be beta testers.
No, it’s not. There’s a lot more to OS X than Darwin. Indeed, from a product standpoint, Darwin is probably the least important part of OS X.
Windows doesn’t offer normal betas to everyone either, just a select few, so Apple really is better about betas than MS here too. And don’t tell me about the AMD64 and intel clone 64 bit version, that’s different. That’s just there because they are so behind on releasing it, and because it’s the only real incentive for a manufacturer to still make you pay for a normal 32bit license with your Athlon FX CPU even though you want the 64 bit version.
You mean like the way OS X was so far behind schedule ?
You pay twice there too! Once when you buy something (we’ll say, a laptop), and once again when the 64 bit version comes out.
If you don’t want to pay twice for the OS, don’t buy a machine with it bundled. Simple.
Simple fact is the x86-64 beta is available for free, for anyone to download. So is a 6 month demo version of Windows 2003. Whereas no versions of OS X – beta, demo or otherwise – are available for free download. I don’t expect Microsoft to continue with the public beta theme after Windows for x86-64 has been released (although I wouldn’t be completely surprised), but it’s not like Apple continued with their public beta program either, is it ?
Not to mention anyone can sign up for MSDN and get access to boatloads of Microsoft software – including betas – for relatively little – just like they can to the Apple Developer’s program.
Most of Raptor’s “points” are pedantic nitpicking of terminology. He agrees that driver errors can – and do – bring down OSes and that he’d seen SCSI errors crash OSes. Clearly, he knows a lot more about SCSI than I do, although I will stand by my statement that I’ve seen hotplugging SCSI devices that aren’t meant for it cause problems.
Really….. I only get pedantic about terminology when I detect BS. What part of SCSI being designed for hotplugging don’t you understand?
Unlike PCI, which has a separate hotplug spec, there is no hotplug spec for SCSI; it is hotpluggable by design. If the driver for that device can’t handle errors properly it can crash the system in various ways. But unplugging a device should not cause the OS to crash; if it does, it is a bug.
And if said bug causes the OS to go belly up permanently, it means that the OS is not robust. Robust meaning able to handle errors reasonably.
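Just to illustrate what “handling errors reasonably” means here, a toy sketch (invented names, no real driver framework): the driver treats “device vanished” as an expected error, fails the pending command back to the caller, and nothing else falls over.

/* Toy example: a removal-tolerant command path. */
#include <stdbool.h>
#include <stdio.h>

typedef struct { bool present; } scsi_device;

static int issue_command(scsi_device *dev, const char *cmd) {
    if (!dev->present) {
        fprintf(stderr, "device gone, failing \"%s\" cleanly\n", cmd);
        return -1;                  /* error goes to the caller, not a panic */
    }
    /* ... actually talk to the hardware here ... */
    return 0;
}

int main(void) {
    scsi_device scanner = { .present = true };
    issue_command(&scanner, "TEST UNIT READY");
    scanner.present = false;        /* user yanks the cable mid-session */
    issue_command(&scanner, "READ(10)");   /* fails, but the system stays up */
    return 0;
}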
Uh, a privileged mode driver certainly has the *ability* to write random crap anywhere it wants during the process of going belly up.
No. A privileged-mode driver can only execute privileged-mode instructions on the CPU and handle privileged-mode traps/interrupts. No CPU on the planet has an instruction that writes data to a disk.
A driver would have to message-pass an I/O command or call a function to be able to write blocks to an IDE device using the ATA protocol. No, a privileged-mode driver can’t “just” write random garbage to disks.
A piece of code running in privileged mode could write random garbage to memory but not to a disk.
If, on the off chance, NT allows such a stupid thing to happen, that is bad design. I seriously doubt it does; NT is still designed quite well.
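For the curious, here is roughly what it takes to put one sector on a legacy IDE disk in PIO mode, which is the point being made above: it’s a deliberate command sequence against the controller’s task-file registers, not something a stray instruction does by accident. outb/outw/inb are stand-ins for whatever port-I/O primitives the platform provides; this is a bare-metal sketch, not anything resembling a real driver or NT’s actual storage stack.

/* Sketch: writing one sector via legacy ATA PIO (primary channel, drive 0). */
#include <stdint.h>

void outb(uint16_t port, uint8_t val);     /* supplied by the kernel/platform */
void outw(uint16_t port, uint16_t val);
uint8_t inb(uint16_t port);

void ata_pio_write_sector(uint32_t lba, const uint16_t *buf) {
    while (inb(0x1F7) & 0x80)              /* wait for BSY to clear */
        ;
    outb(0x1F6, 0xE0 | ((lba >> 24) & 0x0F));  /* LBA mode, bits 24-27 */
    outb(0x1F2, 1);                        /* sector count = 1 */
    outb(0x1F3, lba & 0xFF);               /* LBA bits 0-7 */
    outb(0x1F4, (lba >> 8) & 0xFF);        /* LBA bits 8-15 */
    outb(0x1F5, (lba >> 16) & 0xFF);       /* LBA bits 16-23 */
    outb(0x1F7, 0x30);                     /* WRITE SECTORS command */
    while (!(inb(0x1F7) & 0x08))           /* wait for DRQ */
        ;
    for (int i = 0; i < 256; i++)
        outw(0x1F0, buf[i]);               /* 512 data bytes, word at a time */
}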
The important point here that I keep trying to make – and which is constantly ignored at every turn for the sake of pedantic nitpicking – is that declaring a product bad for everyone based on *one* example is just stupid. I’ve seen just about every mainstream OS anyone is ever likely to use crash, self-destruct, wipe out data for no reason whatsoever and just plain behave weirdly. However, since such events are usually *rare*, I don’t try to use them as sole truths for defining general behaviour.
No, I haven’t even begun telling you about NT issues. We had to rebuild most labs every year/semester because the NT machines would slow down or become unusable.
How do you explain people losing their roaming profiles and losing their desktops because the Domain Controller runs out of licenses? It would just go through CALs because it could not account for accesses correctly. A reboot and outage was usually needed every few weeks to get the licenses to reset. We had more than enough licenses.
Anyway, I am done with this discussion. While I understand that any system can crash, and they do crash, I will not accept that NT is more robust than OS X.
Especially when Microsoft has infinitely more resources than any other software company, relatively speaking. They keep losing focus on what matters, add feature after useless feature, and didn’t take security seriously until 2003.