Linked by Eugenia Loli on Fri 13th May 2005 07:20 UTC
Sun Microsystems has delayed the release of two major features the company has trumpeted as reasons to try its latest version of the Solaris operating system. Eric Schrock, a Solaris kernel programmer, said on his blog in April that he's "completely redesigning the ZFS commands from the ground up" after finding some deficiencies.
Robert Escue
by Anonymous on Fri 13th May 2005 19:09 UTC

"And why didn't you ask directly instead of this "cat and mouse" nonsense?"

First off, I wasn't playing "cat and mouse", whatever that is, and I don't believe I was engaging in any nonsense.

Secondly, I'm not sure what I was supposed to "ask directly". I didn't have any question for you - I just didn't like the fact that you were, as I said, being disingenuous.

The topic, as I recall, was benchmarks.

"Further, what hardware would you want these benchmarks run on, and what benchmarks?"

You probably didn't intend to do so, but you're playing straight man to my soapbox here...

I want lots of benchmarks - I want a wide variety of the most popular, well-supported hardware to be tested. I want benchmarks for file serving, for web serving, for database sorts and searches, and I want frame rates (where relevant).
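
To be concrete about the "database sorts" part: I mean something you could sketch yourself in a few lines of Python. This is purely my own illustration - the data sizes and repeat counts are arbitrary numbers I picked, not anything Sun or any benchmarking outfit actually uses:

import random
import time

def bench_sort(n, runs=5):
    # Time sorting n random integers, repeating the run several
    # times so one noisy measurement doesn't skew the result.
    times = []
    for _ in range(runs):
        data = [random.randint(0, n) for _ in range(n)]
        start = time.perf_counter()
        data.sort()
        times.append(time.perf_counter() - start)
    times.sort()
    print("n=%d  best=%.4fs  median=%.4fs" % (n, times[0], times[len(times) // 2]))

for n in (10000, 100000, 1000000):
    bench_sort(n)

Run the same script on each OS under test, on identical hardware, and you have one small, repeatable data point.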

Sun (and Sun-approved/financed organizations) couldn't produce as many benchmarks as I'd like to see, because (as you said):

"while it would be an interesting read, it could take months to complete."

Good, reproducible benchmarks *do* take a lot of time and effort, and Sun (and its approved benchmarkers) wouldn't have time to do a wide variety.


"The vast majority of benchmarks I don't trust, regardless of who did them simply because there is so little material published as to what was done there is no way the test could be repeated and verified by an outside source."

I agree that some benchmarks are described too vaguely to be reproduced. Such benchmarks are of little value.
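
By "described too vaguely" I mean a writeup that doesn't even tell you what box the test ran on. Just dumping the environment next to the numbers would go a long way. Here's a rough Python sketch of the bare minimum I'd want recorded - the exact fields are my own guess at what matters, not any published standard:

import platform
import sys

def describe_environment():
    # Print the basics an outside party would need to repeat the test.
    print("OS:        %s %s" % (platform.system(), platform.release()))
    print("Machine:   %s" % platform.machine())
    print("Processor: %s" % platform.processor())
    print("Python:    %s" % sys.version.split()[0])

describe_environment()

A real writeup would also list the exact disk, RAM, filesystem, and tuning parameters, but even this much would let an outside source start to verify the test.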

I'm not so sure about the "vast majority" bit, though.

I've also seen benchmarks that used custom-tailored drivers unavailable to anybody outside of Microsoft. Those benchmarks are equally useless in the real world.

"Some of the research I have done indicates that the results you get for a particular system is only good for that system."

I'm not sure how to interpret that comment - it is so bizarre I can't believe you're saying what I think you're saying.

I agree that results on, say, an HP PIII uniprocessor with an 8 GB IDE drive won't necessarily scale consistently to a quad Xeon with Ultra160 SCSI drives in RAID 0, but my 2.4 GHz Dell with a Seagate 120 GB hard drive should perform the same as the tester's Dell with identical hardware and an identical software configuration.

Again, that is why I would love to see lots of benchmarks - many different tests on a wide variety of hardware would highlight the strengths and weaknesses of the OSes being tested.

"You might or might not achieve similar results, and while it would be an interesting read, it could take months to complete."

That's why you need to have many credible people doing the benchmarks.