Linked by Thom Holwerda on Fri 18th Sep 2009 13:40 UTC, submitted by Robert Escue
Hardware, Embedded Systems This is an article which discusses the increase in storage capacity while performance and hard error rates have not improved significantly in years, and what this means for protecting data in large storage systems. "The concept of parity-based RAID (levels 3, 5 and 6) is now pretty old in technological terms, and the technology's limitations will become pretty clear in the not-too-distant future," and they are probably obvious to some users already. "In my opinion, RAID-6 is a reliability Band Aid for RAID-5, and going from one parity drive to two is simply delaying the inevitable."
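The article's point about flat hard error rates can be made concrete with a back-of-the-envelope calculation: a RAID-5 rebuild must read every surviving drive end to end, so as drives grow the odds of hitting an unrecoverable read error (URE) mid-rebuild climb toward certainty. The sketch below assumes the commonly quoted consumer SATA spec of one URE per 10^14 bits read; the drive sizes and array width are illustrative, not figures from the article.

```python
# Sketch: probability of hitting at least one unrecoverable read error (URE)
# during a RAID-5 rebuild. The 1-in-1e14 bit error rate is the typical
# consumer SATA datasheet figure (an assumption here, not from the article).

def rebuild_failure_probability(drive_tb: float, n_drives: int,
                                ber: float = 1e-14) -> float:
    """P(>= 1 URE) when all n-1 surviving drives are read in full."""
    bits_read = (n_drives - 1) * drive_tb * 1e12 * 8  # TB -> bits
    return 1 - (1 - ber) ** bits_read

# An 8-drive array of 500 GB disks vs. a hypothetical 2 TB generation:
print(f"8 x 0.5 TB: {rebuild_failure_probability(0.5, 8):.1%}")
print(f"8 x 2.0 TB: {rebuild_failure_probability(2.0, 8):.1%}")
```

With these assumptions the rebuild-failure odds roughly triple when moving from 500 GB to 2 TB drives, which is the sense in which RAID-6's second parity drive only "delays the inevitable."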
Thread beginning with comment 385220
To view parent comment, click here.
To read all comments associated with this story, please click here.
Robert Escue
Member since:
2005-07-08

Budget does come into play. Our history with SATA storage here has not been a good one. We bought storage from a vendor that, after six months, decided to get out of the SATA storage game and dropped support for the devices. We now have over a hundred 500 GB SATA drives that we pulled from the arrays; the rest of the hardware was junked.

We also have SATA solutions from other vendors (NetApp, HP) and the jury is still out. Power, AC, and space do come into consideration, but some of the people I work with also expect a level of performance that SATA might not meet, plus the idea of using SATA instead of SCSI or FC makes some people uncomfortable.

Most of the server rooms I have worked in are near or over capacity for power and AC, but new ideas are usually the hardest sell.

Reply Parent Score: 2

jwwf Member since:
2006-01-19

> Budget does come into play. Our history with SATA storage here has not been a good one. We bought storage from a vendor that after six months decided to get out of the SATA storage game and dropped support for the devices. We now have over a hundred 500 GB SATA drives that we pulled from the arrays and junked the rest.


That's pretty bad. I guess it's kind of in line with what I mean, though: SATA is just one piece of the puzzle. If the rest of the stack is junk, it almost doesn't matter what the drive interface is. From your other posts I think you are saying the same thing. I just wouldn't blame SATA so much as junky arrays.

> We also have SATA solutions from other vendors (NetApp, HP) and the jury is still out. Power, AC, and space do come into consideration but some of the people I work with also expect a level of performance that SATA might not meet, plus factor in the idea of using SATA over SCSI and FC makes some uncomfortable.
>
> Most of the server rooms I have worked in are near or over capacity for power and AC, but new ideas are usually the hardest sell.


True. It's kind of funny that, by that mentality, anybody would accept anything but direct attach storage. I mean, just because the SAN controller has fibre ports on both sides doesn't mean there isn't a very complicated black box in the middle. Thinking of it as "fibre from host to spindle" is sort of meaningless when there is no direct path from host to physical disk.

Reply Parent Score: 2

Robert Escue Member since:
2005-07-08

Our problem is the Government wants to build a SAN, but they want to use existing components that are already in production (a bad idea), and I really don't think it has sunk in that mixing components is not a good idea (we have 1 Gb, 2 Gb and 4 Gb FC arrays and libraries). Our stuff is direct attach at the moment, which works but is not flexible.

Unfortunately this is what happens when you build something piecemeal and buy the key pieces (the FC switches) last.

Reply Parent Score: 2