Linked by David Adams on Tue 20th Mar 2012 17:27 UTC, submitted by sawboss
Hardware, Embedded Systems "In the world of hard drive storage, density is king and Seagate has just achieved a major breakthrough that guarantees major storage gains for the next decade. That breakthrough is managing to squeeze 1 terabit (1 trillion bits) of data into a square inch of space, effectively opening the way for larger hard drives over the coming years beyond the 3TB maximum we currently enjoy. How much of an improvement does this storage milestone offer? Current hard drives max out at around 620 gigabits per square inch, meaning Seagate has improved upon that by over 50%. However, that's just the beginning."
Thread beginning with comment 511425
RE[3]: Faster now !
by Alfman on Wed 21st Mar 2012 14:44 UTC in reply to "RE[2]: Faster now !"

"I wonder... Since LBA has been the norm, are OSs still in control of where their data goes on the HDD, or is it the job of the hard drive's manufacturer to propose an LBA->physical geometry mapping that satisfies rules such as 'the first LBA blocks are accessed fastest'?"

Even picking sectors randomly, most data will end up on the outer half of the disk anyway ('half' by radius is a misnomer here, since it holds well over half the capacity). Assuming all data has the same priority, it should ideally be distributed evenly, and for that random placement turns out to be not bad at all. Keeping file sectors spread apart even helps avoid fragmentation, which is a much worse bottleneck than raw throughput. If the system had a policy of filling the faster sectors first while the disk was empty, then over time newer files would be pushed into the slower parts of the disk. Consequently the file system would appear to slow down as it aged, which I feel is a bad quality. The OS could move files around in the background, but that churn could slow things down further.
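As a back-of-the-envelope check on the "outer half" point: with sectors laid out at roughly constant areal density, the fraction of capacity sitting on the outer half of the radius follows directly from annulus areas. A minimal Python sketch, assuming a hypothetical platter whose inner radius is a quarter of its outer radius (not a real drive spec):

```python
# Hypothetical platter geometry: inner radius = 1/4 of outer radius.
# Sector count is taken as proportional to annulus area.
r_out = 1.0
r_in = 0.25
mid = (r_in + r_out) / 2          # radial midpoint, the "outer half" boundary

def area(a, b):
    return b**2 - a**2            # annulus area with the pi factored out

outer_fraction = area(mid, r_out) / area(r_in, r_out)
print(f"{outer_fraction:.0%}")    # → 65%
```

So "most data ends up on the outer half" holds before any placement policy: about 65% of uniformly placed sectors land there under these assumptions.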

In my opinion, seek performance is far more critical for most real-world applications. Even the slow sectors on cheap consumer drives can sustain better than 50MB/s.

You mentioned RAID 0 in the context of increasing throughput, but mirroring could also help reduce seek latencies. It would be theoretically possible to have two or more disks with identical content synced to spin out of phase with one another, and a smart controller that directs each read request to the disk whose head is closest to the data. This design would boost not only seek times but also raw throughput, and the inherent redundancy could be a selling point as well. One could do the same thing, minus the redundancy, with two independent heads on the same platter.
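The rotational-latency win from deliberately anti-phased mirrors can be estimated with a quick Monte Carlo. A sketch under simplifying assumptions (N identical mirrors offset by exactly 1/N of a revolution, head-seek time ignored, requested angle uniformly random):

```python
import random

random.seed(0)
T = 1.0                  # one revolution, normalized
N = 2                    # mirrored disks, spun T/N out of phase
trials = 100_000

def rotational_wait(target, phase):
    # Time until the `target` angle next passes under the head.
    return (target - phase) % T

total = 0.0
for _ in range(trials):
    target = random.random()                        # random requested angle
    total += min(rotational_wait(target, k * T / N) for k in range(N))
avg = total / trials
print(round(avg, 3))     # ≈ T/(2N) = 0.25, vs T/2 = 0.5 for a single disk
```

With two anti-phased mirrors the worst-case rotational wait drops from a full revolution to half of one, and the average from T/2 to T/4.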

The above arrays would require specialized disks, but one might approximate them with off-the-shelf hardware in software alone. The OS would send two or more identical requests to the disks, and the data from whichever happens to respond first would be used. This forfeits the throughput multiplication, but in the average case seek time could be roughly halved (it would be unfortunate if the disks happened to spin exactly in phase by coincidence). I'm also not sure this software solution could scale unless the read request on the losing disk could be cancelled immediately.
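With off-the-shelf disks the relative phase is random rather than engineered, so the expected gain is a bit smaller than the anti-phased ideal: the minimum of two independent uniform waits averages T/3 rather than T/4, and degenerates to no gain at all in the unlucky in-phase case. A quick simulation under those assumptions:

```python
import random

random.seed(1)
T, trials = 1.0, 100_000

# Two independent mirrors with random relative phase: each rotational wait
# for a shared request is an independent uniform draw on [0, T).
avg = sum(min(random.random(), random.random()) for _ in range(trials)) / trials
print(round(avg, 3))    # ≈ T/3 ≈ 0.333, vs T/2 for a single disk
```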


RE[4]: Faster now !
by Alfman on Wed 21st Mar 2012 15:41 in reply to "RE[3]: Faster now !"

A very clever OS might even be able to programmatically determine each disk's phase and angular velocity by timing its response latencies. It could then maintain a software model of each disk's platter/head position at any point in time, determine which disk in a RAID array is likely to respond sooner, and send the request only to that disk.
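Such a model could be as simple as a (t0, period) estimate per disk, fitted from observed latencies and used to predict which mirror reaches the target angle first. A hypothetical sketch (the names and the model-fitting step are my own assumptions, not an existing API):

```python
def predicted_wait(t_now, target_angle, t0, period):
    """Predicted rotational wait, assuming the platter was at angle 0
    (angles measured as fractions of a revolution) at time t0 and
    spins steadily with the given period."""
    phase = ((t_now - t0) / period) % 1.0        # current head angle, 0..1
    return ((target_angle - phase) % 1.0) * period

def pick_disk(t_now, target_angle, models):
    """models: one (t0, period) estimate per mirrored disk.
    Returns the index of the disk predicted to respond soonest."""
    return min(range(len(models)),
               key=lambda i: predicted_wait(t_now, target_angle, *models[i]))
```

For example, with two mirrors half a revolution apart, `pick_disk(0.0, 0.6, [(0.0, 1.0), (0.5, 1.0)])` selects the second disk, whose predicted wait is 0.1 revolutions instead of 0.6.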

I have to wonder whether any of this micromanagement could offer a performance benefit over a simpler implementation using a dual-disk elevator algorithm. Still, even a 25% increase in seek speed might be worthwhile for a seek-bound database.
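For comparison, the "simpler implementation" here is essentially the classic SCAN scheduler: sweep the head in one direction servicing requests in track order, then reverse. A minimal sketch of the ordering step:

```python
def elevator_order(pending, head, direction_up=True):
    """Order pending track numbers for one SCAN sweep: service everything
    ahead of the head in the current direction, then reverse."""
    up = sorted(t for t in pending if t >= head)                  # ahead, sweeping up
    down = sorted((t for t in pending if t < head), reverse=True) # behind the head
    return up + down if direction_up else down + up

print(elevator_order([10, 95, 30, 60], head=50))   # → [60, 95, 30, 10]
```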
