Linked by David Adams on Tue 20th Mar 2012 17:27 UTC, submitted by sawboss
Hardware, Embedded Systems "In the world of hard drive storage, density is king, and Seagate has just achieved a major breakthrough that guarantees major storage gains for the next decade. That breakthrough is managing to squeeze 1 terabit (1 trillion bits) of data into a square inch of space, effectively opening the way for larger hard drives over the coming years, beyond the 3TB maximum we currently enjoy. How much of an improvement does this storage milestone offer? Current hard drives max out at around 620 gigabits per square inch, meaning Seagate has improved upon that by over 50%. However, that's just the beginning."
Comment by benb320
by benb320 on Tue 20th Mar 2012 17:55 UTC
benb320
Member since:
2010-02-23

I think I will never use that much memory, but who knows

Reply Score: 1

RE: Comment by benb320
by randy7376 on Tue 20th Mar 2012 18:16 UTC in reply to "Comment by benb320"
randy7376 Member since:
2005-08-08

I think I will never use that much memory, but who knows


You're going to need that 60TB with the future release of Windows 9. ;)

Reply Score: 4

RE: Comment by benb320
by zima on Tue 20th Mar 2012 18:17 UTC in reply to "Comment by benb320"
zima Member since:
2005-07-06

I have no doubt we will find a way to use it - collectively, at least.

Sure, present HDDs (or SSDs, almost) might be more than enough, on a local personal level.

But datacenters and such? They're always hungry for more spacious drives. Ultimately, HDDs store our civilisation - it runs on them (kinda like how we do "personal" long-distance travel largely by airplane nowadays, while ships, trains and such are what really still run the place, our global economy).

Reply Score: 5

RE: Comment by benb320
by Doc Pain on Wed 21st Mar 2012 09:13 UTC in reply to "Comment by benb320"
Doc Pain Member since:
2006-10-08

I think I will never use that much memory, but who knows


Have more faith in the possibilities mankind will invent to fill such big disks. Hint: Parkinson's Law applies.

Data expands to fill the space available for storage. Storage requirements will increase to meet storage capacity. The demand upon a resource tends to expand to match the supply of the resource. The reverse is not true.

Maybe also see Jevons paradox.

A technological progress that increases the efficiency with which a resource is used tends to increase (rather than decrease) the rate of consumption of that resource.

As has already been mentioned, big datacenters, primarily governmental installations and "big business", will happily use such disks to store more data. This is good for the customers, good for the people, good for the market. :-)

Let us be thankful we have storage. Store now. Store more. Store more now. Store... and be happy!

Reply Score: 5

Comment by ssokolow
by ssokolow on Tue 20th Mar 2012 19:13 UTC
ssokolow
Member since:
2010-01-21

Nice. Maybe I'll be able to keep my movies and shows readily accessible AND have RAID after all.

I've got something on the order of 7TiB of them and I can't afford a NAS, so it has to be do-able in four or fewer SATA drives.

That means I need 14TiB of drives for a RAID-10 array before you even take room for future growth into account.

(As you may guess, a lot of my disposable income goes into entertainment, so my media center is XBMC with scrounged hardware.)

Edited 2012-03-20 19:16 UTC

Reply Score: 5

RE: Comment by ssokolow
by alexz on Wed 21st Mar 2012 04:19 UTC in reply to "Comment by ssokolow"
RE[2]: Comment by ssokolow
by Zer0C001 on Wed 21st Mar 2012 10:36 UTC in reply to "RE: Comment by ssokolow"
Zer0C001 Member since:
2011-12-22

Maybe you could pirate even more movies, that way you'll be able to save and pay for a NAS!


He did say "a lot of my disposable income goes into entertainment", so the reason he can't afford a NAS is that he's not pirating at all.

Reply Score: 1

RE: Comment by ssokolow
by Laurence on Wed 21st Mar 2012 16:03 UTC in reply to "Comment by ssokolow"
Laurence Member since:
2007-03-26

Why are you limiting yourself to 4 disks?

I have a 6x 1TB ZFS RAID which gives me two-disk redundancy, about 3.8TB of storage, and the ability to grow that storage dynamically if/when I add more HDDs.

Reply Score: 2

RE[2]: Comment by ssokolow
by tanishaj on Wed 21st Mar 2012 17:09 UTC in reply to "RE: Comment by ssokolow"
tanishaj Member since:
2010-12-22

Why are you limiting yourself to 4 disks?


The original poster said three things:

1) He needs 7TB of storage

2) He is using RAID 10 (1 + 0)

3) He is "scrounging" hardware (ie. he is poor)

Four disks is the minimum to achieve RAID 10. What you are suggesting is that he either put up with much less storage or spend much more money on disks. He ruled both those out.

RAID 1 = Two redundant disks

RAID 0 = Two striped (aggregated) disks

RAID 10 = RAID 1 + 0 = A stripe of two RAID 1 volumes

To get 7TB of storage as RAID 10, you need 14TB as he said.
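
(A quick back-of-the-envelope sketch in Python of the capacity arithmetic above. The drive counts and sizes are illustrative assumptions, not anything the posters specified; the RAID-Z2 case corresponds to the ZFS setup mentioned earlier in the thread.)

def usable_tb(layout, drives, size_tb):
    """Approximate usable capacity in TB, ignoring filesystem overhead."""
    if layout == "raid0":             # striping only, no redundancy
        return drives * size_tb
    if layout == "raid1":             # N-way mirror: one drive's worth
        return size_tb
    if layout == "raid10":            # stripe of 2-way mirrors
        return (drives // 2) * size_tb
    if layout == "raidz2":            # ZFS double parity (RAID 6-like)
        return (drives - 2) * size_tb
    raise ValueError("unknown layout: " + layout)

print(usable_tb("raid10", 4, 3.0))    # 4 x 3TB in RAID 10 -> 6.0 TB (short of 7TB)
print(usable_tb("raid10", 4, 4.0))    # 4 x 4TB in RAID 10 -> 8.0 TB
print(usable_tb("raidz2", 6, 1.0))    # 6 x 1TB RAID-Z2    -> ~4 TB before overhead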

Reply Score: 2

RE[3]: Comment by ssokolow
by Laurence on Wed 21st Mar 2012 17:27 UTC in reply to "RE[2]: Comment by ssokolow"
Laurence Member since:
2007-03-26


The original poster said three things:

1) He needs 7TB of storage

Yes, I know that. I wasn't suggesting he clone my setup like for like.


2) He is using RAID 10 (1 + 0)

Actually, no. He said he wasn't running RAID at all; he gave an example of the requirements he'd have if he ran RAID 10.


3) He is "scrounging" hardware (ie. he is poor)

Which is why I suggested something that can be grown cheaply as his storage needs grow, yet still offers great resilience.


Four disks is the minimum to achieve RAID 10. What you are suggesting is that he either put up with much less storage or spend much more money on disks. He ruled both those out.

Which goes back to my question: why limit yourself to 4 disks? With ZFS you can grow your storage as your storage needs grow. There's no limit.


RAID 1 = Two redundant disks

RAID 0 = Two striped (aggregated) disks

RAID 10 = RAID 1 + 0 = A stripe of two RAID 1 volumes

To get 7TB of storage as RAID 10, you need 14TB as he said.

Well done for reiterating a point which nobody was arguing against to begin with <_<

Reply Score: 2

60TB hard drives possible
by sagum on Tue 20th Mar 2012 20:00 UTC
sagum
Member since:
2006-01-23

A 60TB hard drive sounds great, but do you guys have any idea how heavy that would become once you put all your data on it? :o

Reply Score: 3

RE: 60TB hard drives possible
by phoenix on Tue 20th Mar 2012 20:14 UTC in reply to "60TB hard drives possible"
phoenix Member since:
2005-07-11

If you put the drive between two large electromagnets with reversed polarities, you can "float" or "hover" the drive between them and not worry about the weight.

;)

Reply Score: 6

Pain
by CapEnt on Tue 20th Mar 2012 20:35 UTC
CapEnt
Member since:
2005-12-18

I can't measure the pain of losing a single 60TB hard drive. These ever-larger drives are making RAID setups a necessity!

Reply Score: 4

RE: Pain
by nbensa on Tue 20th Mar 2012 22:43 UTC in reply to "Pain"
nbensa Member since:
2005-08-29

I can't measure the pain of losing a single 60TB hard drive. These ever-larger drives are making RAID setups a necessity!


I've had RAID since 500GiB. I'm at 4TB now (3x2TB Linux md-raid5). I remember RAID saved my pr0n once when one of the disks died without any prior warning (even with (notso)SMART enabled... ;) )

Reply Score: 2

RE: Pain
by tanishaj on Wed 21st Mar 2012 17:11 UTC in reply to "Pain"
tanishaj Member since:
2010-12-22

I can't measure the pain of losing a single 60TB hard drive. These ever-larger drives are making RAID setups a necessity!


RAID has burned me more than once. I also live in terror of these new huge disk sizes but RAID does not make me sleep much better.

Reply Score: 1

Faster now !
by Neolander on Tue 20th Mar 2012 21:40 UTC
Neolander
Member since:
2010-03-08

Alright, now if only someone could find a software or hardware method to turn a slow multi-TB hard drive into a superfast ~200GB hard drive... Perhaps it would be possible by using a variant of RAID between drive platters?

I believe I'm not the only one finding the capacity of modern HDDs too big for domestic use, and getting a compromise between SSD speed and "pure" hard drive price and reliability could be pretty cool...

Edited 2012-03-20 21:45 UTC

Reply Score: 2

RE: Faster now !
by aligatro on Tue 20th Mar 2012 22:29 UTC in reply to "Faster now !"
aligatro Member since:
2010-01-28

I was under the impression that it already does that. It reads from multiple platters at the same time to parallelise reads and writes; the only way to make it faster is to add more platters. I suspect those new hard drives are going to be almost as fast as an average SSD: they increased physical density, so much more information can be read/written at the same time.

Reply Score: 2

RE[2]: Faster now !
by Neolander on Wed 21st Mar 2012 07:56 UTC in reply to "RE: Faster now !"
Neolander Member since:
2010-03-08

I wonder... Since LBA became the norm, are OSs still in control of where their data goes on the HDD, or is it the hard drive manufacturer's job to provide an LBA->physical geometry mapping that satisfies rules such as "the first LBA blocks are accessed faster"?

In any case, if the full drive capacity is kept, I guess that constant monitoring of disk usage and moving data around would be necessary in order to keep the most frequently used data in the first sectors. I was thinking of something simpler, more along the lines of putting exactly the same content on all disks (like RAID 1 does with multiple HDDs), which slows down writes a little but should vastly increase read speeds.

Edited 2012-03-21 08:03 UTC

Reply Score: 1

RE[3]: Faster now !
by Alfman on Wed 21st Mar 2012 14:44 UTC in reply to "RE[2]: Faster now !"
Alfman Member since:
2011-01-28

Neolander,

"I wonder... Since LBA has been the norm, are OSs still in control of where their data goes on the HDD, or is it the job of a hard drive's manufacturer to propose a LBA->physical geometry mapping which verifies such laws as 'First LBA blocks are accessed faster' ? "

Even picking sectors randomly, most data will end up on the outer half of the disk anyway (hmm, 'half' is a misnomer here, since the outer tracks hold more than half the capacity). Assuming all data has the same priority, it should ideally be distributed evenly, and for that random placement turns out to be not bad at all. Keeping files far apart even helps avoid fragmentation, which is a much worse bottleneck than raw throughput. If the system had a policy of storing all files in the faster sectors first while the disk is empty, over time newer files would land in the slower parts of the disk. Consequently the file system would appear to slow down over time, which I feel is a bad quality. Maybe the OS could move files around in the background, but that could slow it down further.

In my opinion seek performance is far more critical for most real-world applications. Even slow sectors on cheap consumer drives can beat 50MB/s.

You mentioned RAID-1-style mirroring in the context of increasing throughput, but it could also help reduce seek latencies. It'd be theoretically possible to have two or more disks with identical content, synced to spin out of phase, and a smart controller that directs each read request to the disk whose head is closest to the data. This design would give a performance boost not only for seeking but also for raw throughput. The inherent redundancy might be a selling point as well. One could do the same thing, minus the redundancy, by putting two heads on the same platter.

The above arrays would require specialized disks, but one might approximate them with off-the-shelf hardware using software alone: the OS would send two or more identical requests to the disks, and the data from whichever happens to respond first is used. This would negate the throughput multiplication factor, but in the average case seek time could be halved (it would be unfortunate if the disks happened to end up spinning exactly in phase by coincidence). I'm also not sure this software solution could scale unless we could immediately cancel the read request on the losing disk.
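
(A minimal sketch, in Python, of the software-only approximation described above. The device paths are hypothetical, and, as noted, the request on the slower mirror cannot be cancelled; it simply completes and gets discarded.)

import os
from concurrent.futures import ThreadPoolExecutor, FIRST_COMPLETED, wait

MIRRORS = ["/dev/sdb", "/dev/sdc"]        # hypothetical mirrors with identical content
pool = ThreadPoolExecutor(max_workers=len(MIRRORS))

def read_block(path, offset, length):
    fd = os.open(path, os.O_RDONLY)       # raw device access; needs privileges
    try:
        os.lseek(fd, offset, os.SEEK_SET)
        return os.read(fd, length)
    finally:
        os.close(fd)

def fastest_read(offset, length):
    """Send the same read to every mirror and use whichever finishes first.
    The losing request still runs to completion, so this wins latency at the
    cost of wasted throughput on the other disk."""
    futures = [pool.submit(read_block, m, offset, length) for m in MIRRORS]
    done, _ = wait(futures, return_when=FIRST_COMPLETED)
    return next(iter(done)).result()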

Reply Score: 3

RE[4]: Faster now !
by Alfman on Wed 21st Mar 2012 15:41 UTC in reply to "RE[3]: Faster now !"
Alfman Member since:
2011-01-28

A very clever OS might even be able to pragmatically determine each disk's phase and velocity by timing its response latencies. It could then maintain a software model of the platter/head position at any given point in time, work out which disk of a RAID array is likely to respond faster, and send the request only to that disk.

I have to wonder whether any of this micromanaging would offer a performance benefit over a simpler dual-disk elevator algorithm. Still, even a 25% increase in seek speed might be worthwhile for a seek-bound database.
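
(A toy model of that micromanaging idea, with all numbers invented: it assumes the software somehow knows each platter's current angle and it ignores head seek time entirely. It only shows the arithmetic of why two mirrors spinning half a revolution apart roughly halve the average rotational wait.)

REV_MS = 60000.0 / 7200               # 7200 RPM -> ~8.33 ms per revolution

def rotational_wait(sector_angle, platter_angle):
    """Milliseconds until the wanted sector passes under the head."""
    return ((sector_angle - platter_angle) % 360.0) / 360.0 * REV_MS

def pick_mirror(sector_angle, platter_angles):
    """Index of the mirror whose platter reaches the sector soonest."""
    waits = [rotational_wait(sector_angle, a) for a in platter_angles]
    return waits.index(min(waits))

# Mirrors 180 degrees out of phase: worst-case rotational wait drops from
# ~8.3 ms to ~4.2 ms, and the average from ~4.2 ms to ~2.1 ms.
print(pick_mirror(90.0, [0.0, 180.0]))    # -> 0, only a quarter turn away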

Reply Score: 2

RE: Faster now !
by magick on Wed 21st Mar 2012 01:06 UTC in reply to "Faster now !"
magick Member since:
2005-08-29

Alright, now if only someone could find a software or hardware method to turn a slow multi-TB hard drive into a superfast ~200GB hard drive...

Actually, there is a method, and a fairly simple one: use only the outermost tracks (cylinders). Reducing a multi-TB drive to only several hundred GB would provide quite some 'speediness'.
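
(A rough model of how much that short-stroking buys. The platter radii below are guessed figures for a 3.5" drive, and it assumes constant RPM and roughly constant areal density, so sequential speed scales with track radius; real zone layouts differ.)

from math import sqrt

R_OUT, R_IN = 46.0, 22.0              # guessed platter radii in mm

def innermost_radius(fraction_used):
    """Innermost radius touched when only the outer `fraction_used`
    of the capacity is partitioned (low LBAs sit on the outer tracks)."""
    return sqrt(R_OUT**2 - fraction_used * (R_OUT**2 - R_IN**2))

def worst_case_speed(fraction_used):
    """Slowest sequential speed in that region, relative to the outer edge."""
    return innermost_radius(fraction_used) / R_OUT

print(round(worst_case_speed(0.10), 2))   # ~0.96: a ~200GB slice of a 2TB drive stays near full speed
print(round(worst_case_speed(1.00), 2))   # ~0.48: the full drive ends at about half speed
# Short-stroking also shortens head travel, so seeks get faster as well.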

And another thing... RAID is not a substitute for backup; it is mainly a method for increasing a system's availability. Always back up your (important) data!

Reply Score: 2

RE[2]: Faster now !
by Neolander on Wed 21st Mar 2012 08:00 UTC in reply to "RE: Faster now !"
Neolander Member since:
2010-03-08

Actually, there is a method, and a fairly simple one: use only the outermost tracks (cylinders). Reducing a multi-TB drive to only several hundred GB would provide quite some 'speediness'.

Don't modern filesystems already try to do this by default?

And another thing... RAID is not a substitute for backup; it is mainly a method for increasing a system's availability. Always back up your (important) data!

Yes, I know that disks in a RAID are in the same physical location and thus vulnerable to the same external failure factors. However, I was only considering the ability of RAID to speed up disk reads here; write redundancy is just a means to that end.

Reply Score: 1

RE: Faster now !
by OSbunny on Wed 21st Mar 2012 06:59 UTC in reply to "Faster now !"
OSbunny Member since:
2009-05-23

I believe they already have something like that. It's called the Western Digital Raptor. Another option is the hybrid Momentus series of drives from Seagate.

Reply Score: 2

RE: Faster now !
by tanishaj on Wed 21st Mar 2012 17:14 UTC in reply to "Faster now !"
tanishaj Member since:
2010-12-22

Alright, now if only someone could find a software or hardware method to turn a slow multi-TB hard drive into a superfast ~200GB hard drive


Perhaps we will start to see consumer-level systems using RAID (or RAID-like) multi-platter storage with SSDs (or even RAIDs of them) acting as a kind of cache.
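
(A minimal sketch of that caching idea, with everything invented: a small fast tier sitting in front of a big slow one. Because it is write-through, the slow tier always holds the authoritative copy, so losing the cache device costs speed rather than data.)

from collections import OrderedDict

class ReadCache:
    """Tiny LRU read cache in front of a slow backing store.
    Write-through: the backing store always has the authoritative copy."""

    def __init__(self, backing, capacity_blocks):
        self.backing = backing                 # object with read(block) / write(block, data)
        self.capacity = capacity_blocks
        self.lru = OrderedDict()               # block number -> cached data

    def read(self, block):
        if block in self.lru:
            self.lru.move_to_end(block)        # cache hit: mark as recently used
            return self.lru[block]
        data = self.backing.read(block)        # cache miss: go to the slow disk
        self._insert(block, data)
        return data

    def write(self, block, data):
        self.backing.write(block, data)        # write through to the slow disk first
        self._insert(block, data)

    def _insert(self, block, data):
        self.lru[block] = data
        self.lru.move_to_end(block)
        if len(self.lru) > self.capacity:
            self.lru.popitem(last=False)       # evict the least recently used block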

Reply Score: 2

RE[2]: Faster now !
by Neolander on Wed 21st Mar 2012 18:08 UTC in reply to "RE: Faster now !"
Neolander Member since:
2010-03-08

Perhaps we will start to see consumer-level systems using RAID (or RAID-like) multi-platter storage with SSDs (or even RAIDs of them) acting as a kind of cache.

I have mixed feelings about these hybrid drives with SSD caches (which already exist). On one hand, they offer an interesting compromise between HDD capacity and SSD speed. On the other hand, can the drive fall back nicely to "pure" hard drive operation when the SSD fails?

Reply Score: 1

RE[3]: Faster now !
by Alfman on Thu 22nd Mar 2012 02:03 UTC in reply to "RE[2]: Faster now !"
Alfman Member since:
2011-01-28

I'd be very concerned about the reliability of the SSD cache for critical data. I recently lost data on a 16GB flash drive; it lasted some five months before dying without warning. We might have had a month of media on there that we weren't prepared to lose.

I keep automatic copies of my computer data, but the flash cards weren't part of that backup strategy. We inquired about data recovery; it was $500 +/- 100. Also, though the card was still under warranty, Amazon refused to offer any option of both honoring the warranty and letting us send the defective card in for data retrieval (shame on you, Amazon; your warranty is an utter disgrace).

Anyway, I couldn't justify $500 for a one-time data recovery, so I purchased a NAND flash chip reader instead. I desoldered the flash chips with a heat gun and, lo and behold, I was able to read the raw flash data off the chips. However, I'm still in the process of finding a way to make the raw data usable. I've been researching the issue diligently, and though it's not easy, hopefully I'll find a way to crack it.


As a warning, I discovered that the NAND chips used on the Amazon-brand card were rated by the manufacturer for only 10,000 erase/program cycles, whereas most chips are rated for 100,000 cycles. I have no idea how one would find this out before purchasing the cards, though.

[end tangent]

Reply Score: 2

RE[4]: Faster now !
by Neolander on Thu 22nd Mar 2012 08:09 UTC in reply to "RE[3]: Faster now !"
Neolander Member since:
2010-03-08

That's what I've heard frequently around the web too. Flash SSDs seem to fail completely and without warning after a short and highly random time, even if only a single NAND chip is damaged. After that, good luck with the recovery...

Until this kind of issue is fixed, I would not trust an SSD that is not at least mirrored on another SSD (hello, prohibitive costs!) to hold important data. As a cache, why not, but only if it can be bypassed. Which is why I wish HDD manufacturers focused a bit more on seek times and throughput rather than ever more bits per square inch...

Edited 2012-03-22 08:10 UTC

Reply Score: 1

RE[5]: Faster now !
by Alfman on Thu 22nd Mar 2012 10:27 UTC in reply to "RE[4]: Faster now !"
Alfman Member since:
2011-01-28

Yeah, in my quest to restore my data, I've discovered a lot about flash. NOR flash is much more reliable: it is directly accessible like RAM and tolerates more write cycles. The disadvantage is cost per capacity. NAND flash uses far fewer gates and must be addressed in larger blocks. Permanent physical bit errors are a normal occurrence with NAND even when brand new, therefore many sectors are shipped as bad and ECC is mandatory. Additionally there are fewer erase cycles, so wear-leveling algorithms are mandatory. This introduces the need for a logical/physical mapping between the normal file system and the raw flash sectors, implemented by a proprietary NAND flash controller. All of this is typically unnecessary with NOR flash, and I'm inclined to believe these extra components make NAND less reliable.
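
(To make the logical/physical mapping point concrete, a toy wear-leveling table, nothing like real controller firmware: every write is redirected to the least-worn free block, which is the basic reason NAND needs a translation layer in front of the file system at all.)

class ToyFTL:
    """Toy flash translation layer. Assumes more physical than logical blocks,
    which is roughly what real drives do by over-provisioning."""

    def __init__(self, physical_blocks):
        self.erase_count = [0] * physical_blocks
        self.free = set(range(physical_blocks))
        self.l2p = {}                      # logical block -> physical block
        self.data = {}                     # physical block -> contents

    def write(self, logical, contents):
        # Pick the free physical block with the fewest erases so far.
        target = min(self.free, key=lambda p: self.erase_count[p])
        self.free.remove(target)
        old = self.l2p.get(logical)
        if old is not None:                # the old copy becomes garbage:
            self.erase_count[old] += 1     # "erase" it right away (real FTLs defer this)
            self.data.pop(old, None)
            self.free.add(old)
        self.l2p[logical] = target
        self.data[target] = contents

    def read(self, logical):
        return self.data[self.l2p[logical]]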

Another factor is that older SLC NAND flash stores 1 bit per cell, but the push for greater density has driven the industry to adopt MLC NAND cells with 3 bits each (by storing 8 distinct voltage levels). One might hope they wouldn't do this unless it were safe, but it turns out MLC is far less reliable across both read and write operations, and hence requires significantly more error correction per sector. Another issue with MLC is that (for reasons I don't understand) the cells are shared across unrelated sectors, so botched operations in one sector will trash other sectors as well. The voltage levels are so fragile that the controller has to keep track of how many times a cell is READ before it needs to be re-flashed.

So, reflecting on all this, one must ponder whether SSDs are becoming cheap and practical only because of engineering compromises on data integrity, or whether we'll eventually be able to manufacture cheap flash that doesn't compromise data integrity.


Regarding seek time, I found this to be relevant:

http://www.tomshardware.com/news/seagate-hdd-harddrive,8279.html

"Will manufacturers like Seagate ever bring back hard drives with dual actuator heads? Unlikely, given that the focus is now on increasing capacities and SSDs."

Edit: What about using battery-backed RAM for caching? The TI-8x calculators didn't have any non-volatile storage, after all, and could last a couple of years on a cell battery.

Edited 2012-03-22 10:33 UTC

Reply Score: 3

Thailand floods
by transputer_guy on Wed 21st Mar 2012 04:37 UTC
transputer_guy
Member since:
2005-07-08

I just wish the HD industry would manufacture all the critical parts in several countries that aren't prone to flooding. Perhaps they should use RAID in the manufacture.

Reply Score: 5

RE: Thailand floods
by OSbunny on Wed 21st Mar 2012 07:00 UTC in reply to "Thailand floods"
OSbunny Member since:
2009-05-23

RAID costs too much. Thais work cheap.

Reply Score: 1

Comment by ilovebeer
by ilovebeer on Wed 21st Mar 2012 07:31 UTC
ilovebeer
Member since:
2011-08-08

Can't wait to see what kind of absurd price tag these come with.

Reply Score: 2

the 3TB maximum we currently enjoy
by 0brad0 on Wed 21st Mar 2012 17:14 UTC
0brad0
Member since:
2007-05-05

I don't think you live in the same world I do. We've been beyond 3TB for a while now, and the manufacturers are capable of producing a 5TB drive today if they chose to. They simply choose not to.

Reply Score: 2

Comment by maxsideburn
by maxsideburn on Wed 21st Mar 2012 18:24 UTC
maxsideburn
Member since:
2011-01-04

About time. Hard drive capacity has always grown exponentially... except for the last few years.

I've now got a plethora of 2TB hard drives lying around with stuff on them, and that drives me insane. I like having ONE drive and ONE backup drive. Thank you, Seagate.

Reply Score: 1