Linked by Thom Holwerda on Sun 14th Apr 2013 20:30 UTC
Hardware, Embedded Systems "In the past five years, flash memory has progressed from a promising accelerator, whose place in the data center was still uncertain, to an established enterprise component for storing performance-critical data. Its rise to prominence followed its proliferation in the consumer world and the volume economics that followed. With SSDs, flash arrived in a form optimized for compatibility - just replace a hard drive with an SSD for radically better performance. But the properties of the NAND flash memory used by SSDs differ significantly from those of the magnetic media in the hard drives they often displace. While SSDs have become more pervasive in a variety of uses, the industry has only just started to design storage systems that embrace the nuances of flash memory. As it escapes the confines of compatibility, significant improvements in performance, reliability, and cost are possible."
Thread beginning with comment 558489
RE: Comment by TempleOS
by ssokolow on Mon 15th Apr 2013 01:36 UTC in reply to "Comment by TempleOS"
ssokolow
Member since:
2010-01-21

When the fundamental storage changes from volatile to non-volatile, a new reality exists and everything changes. The old operating systems can obviously treat it conventionally, but the potential for a big improvement will be there until a new operating system is designed.


Why would we go from volatile to non-volatile?

Flash memory can't be used as RAM. It can only be erased a limited number of times before wearing out and there's no quicker way to accidentally wear out flash memory than to put a swap partition on it.

Even if that weren't the case and we could use Flash memory as RAM, we've already got functionality along the lines you're thinking of in the Linux kernel.

(For example, the ext2 filesystem driver has had "execute in place" support for memory-constrained, flash-based mobile devices for years, and there's also mmap() for userspace apps. There's a smooth migration path to be made when we're ready for it.)
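
To make the mmap() point a bit more concrete, here's a minimal sketch (plain POSIX C) of mapping a file and touching its contents in place instead of copying them with read(). The file name "data.bin" is just a placeholder, and nothing here depends on a particular filesystem or storage device:

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    /* "data.bin" is a placeholder file name. */
    int fd = open("data.bin", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    /* Map the file read-only; pages are faulted in on demand rather than
     * copied up front with read(). */
    void *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    /* From here on, the file contents behave like ordinary memory. */
    printf("first byte: %d\n", (int)((unsigned char *)p)[0]);

    munmap(p, st.st_size);
    close(fd);
    return 0;
}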

Edited 2013-04-15 01:40 UTC

Reply Parent Score: 5

RE[2]: Comment by TempleOS
by Neolander on Mon 15th Apr 2013 06:05 in reply to "RE: Comment by TempleOS"
Neolander Member since:
2010-03-08

Also, even if Flash memory is faster than hard drives, it's nowhere near DDR RAM speeds. Swapping on an SSD or SD card remains pretty much as painful as swapping on an HDD, and I can't believe it's all the SSD interface's fault.

There is some cool research going on regarding NVRAM, such as STT MRAM* or memristors, but I don't think Flash memory will be able to go there.


* STT = Spin Transfer Torque. In short, the main issue regarding MRAM today is that we don't know how to flip the magnetization of a small magnet without affecting that of neighboring magnets, which limits storage density. STT research is about using spin-polarized ("magnetized") electrical currents flowing directly into the magnet in order to do that.

Edited 2013-04-15 06:13 UTC

Reply Parent Score: 3

RE[3]: Comment by TempleOS
by gilboa on Mon 15th Apr 2013 06:14 in reply to "RE[2]: Comment by TempleOS"
gilboa Member since:
2005-07-06

... Let alone the implications of having complex firmware that handles garbage collection behind the OS's back.

In 40 years, we have learned just about all there is to know about magnetic drives - especially how they fail.
We have yet to reach the same level of maturity when it comes to SSDs (let alone the possibility of bricking every member of a storage pool, all at once, due to a firmware bug).

SSDs will replace HDDs - there's no doubt about it.
However, I tend to choose caution over innovation when it comes to data storage...

- Gilboa

Edited 2013-04-15 06:15 UTC

Reply Parent Score: 5

RE[3]: Comment by TempleOS
by Flatland_Spider on Mon 15th Apr 2013 19:16 in reply to "RE[2]: Comment by TempleOS"
Flatland_Spider Member since:
2006-09-01

The SATA interface chip introduces lag. SATA wasn't designed for SSDs, so it ends up being a bottleneck when bolted onto one. A more direct way to access the drive would help with speed. The Fusion-io stuff is a good example of the speeds that can be reached when SATA is eliminated.

The consensus is that SSDs using NAND are a stopgap measure until NVRAM is commercially available in bulk. NAND becomes less efficient as it gets smaller, unlike transistors, which become more efficient, and producers are already starting to see the effects of this. More NAND chips on fewer channels means reduced speed, and the smaller NAND cells wear out faster. It's a good first-generation solid-state disk product, but ultimately there will be a better technology that has a longer run than NAND.

Reply Parent Score: 2

RE[2]: Comment by TempleOS
by Lennie on Mon 15th Apr 2013 15:57 in reply to "RE: Comment by TempleOS"
Lennie Member since:
2007-09-22

Why would we go from volatile to non-volatile?


It could happen if it becomes cheap enough; the first mass-market products will ship this year:

http://hardware.slashdot.org/story/13/04/04/016221/non-volatile-dim...

Reply Parent Score: 2

RE[2]: Comment by TempleOS
by Neolander on Mon 15th Apr 2013 16:08 in reply to "RE: Comment by TempleOS"
Neolander Member since:
2010-03-08

Oh, there's something I missed this morning too...

Why would we go from volatile to non-volatile?


I'd say the main reason for doing that would be increased reliability and simplified abstractions.

Reliability would be increased because machines could be smoothly powered off and back on without losing any state, and without a need for hackish "save RAM data to disk periodically" mechanisms. Suspend and hibernate could well cease to exist in less than a decade, replaced by the superior alternative of simply turning hardware on and off with a hardware switch.

Abstractions would be simplified because there wouldn't be a need to maintain two separate mechanisms: one for handling application state and one for data storage and interchange through files. A well-designed filesystem could instead address the use cases of both malloc() and today's filesystem calls, much to the delight of "everything is a file" freaks from the UNIX world ;)

(As an aside, the latter could actually already be done today, by allocating all free RAM into a giant ramdisk, mounting it alongside mass storage, and treating process address spaces as a bunch of memory-mapped files - see the sketch below. It simply doesn't make sense at this point, since the two kinds of memory have such different characteristics...)
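
A minimal sketch of that aside, in C: back a mapping with a file on a RAM-backed filesystem and use it like ordinary heap memory. The path "/dev/shm/heap.bin" is just an example (/dev/shm is the usual tmpfs mount on Linux); a real system would of course need far more machinery than this.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    const size_t size = 1 << 20; /* 1 MiB of "file-backed RAM" */

    /* Create a file on a tmpfs (RAM-backed) filesystem. */
    int fd = open("/dev/shm/heap.bin", O_RDWR | O_CREAT, 0600);
    if (fd < 0) { perror("open"); return 1; }
    if (ftruncate(fd, size) < 0) { perror("ftruncate"); return 1; }

    /* The mapping is simultaneously part of the address space and a file. */
    char *mem = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (mem == MAP_FAILED) { perror("mmap"); return 1; }

    strcpy(mem, "state that is both 'in memory' and 'in a file'");
    printf("%s\n", mem);

    munmap(mem, size);
    close(fd);
    return 0;
}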

Edited 2013-04-15 16:21 UTC

Reply Parent Score: 3

RE[3]: Comment by TempleOS
by Alfman on Mon 15th Apr 2013 18:02 in reply to "RE[2]: Comment by TempleOS"
Alfman Member since:
2011-01-28

I also think non-volatile ram would make a lot of sense, assuming the technology were feasible and not overly compromising like flash is today.

High-throughput database and filesystem processes are obvious candidates; they would benefit tremendously from eliminating the need to use O_DIRECT/fsync constantly to commit transactions.
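
To illustrate the cost in question, here's a minimal sketch of a durable "commit": the write() itself is cheap, but the fsync() round trip to the device is what hurts, and that's the part byte-addressable NVRAM would largely remove. "journal.log" and the record format are placeholders, not any particular database's scheme.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int commit_record(int fd, const char *record)
{
    size_t len = strlen(record);
    if (write(fd, record, len) != (ssize_t)len)
        return -1;
    /* The expensive part: block until the device reports the data durable. */
    return fsync(fd);
}

int main(void)
{
    int fd = open("journal.log", O_WRONLY | O_CREAT | O_APPEND, 0600);
    if (fd < 0) { perror("open"); return 1; }
    if (commit_record(fd, "BEGIN;UPDATE...;COMMIT\n") < 0)
        perror("commit_record");
    close(fd);
    return 0;
}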

By unifying RAM/disk into one concept, we could open up new programming methodologies where programs and/or data can simply exist without having to sync state between disks and RAM. I'd even go further and make stateful objects network-transparent too, so that they "just exist" and never need to be serialized from a programmer's point of view (such things could be handled automatically by the languages/operating systems). Like you say, this could be emulated today, but it'd necessarily be in a lower-performance and/or less reliable fashion than NV-RAM could achieve.

Modern NAND flash is not ideal; the way it works adds latency and has undesirable addressing properties. NOR flash is technically far closer to a RAM substitute, since it's truly random-access and more reliable than NAND, without needing a whole Flash Translation Layer in front of it. If NOR flash could be made cheaper and denser, it would completely replace NAND.

Reply Parent Score: 3

RE[3]: Comment by TempleOS
by saso on Mon 15th Apr 2013 19:21 in reply to "RE[2]: Comment by TempleOS"
saso Member since:
2007-04-18

What you describe is already in place - it's called "suspend to RAM" (aka "sleep") - and it's far from simple. There's a ton of runtime state that isn't in main memory and needs to be saved and restored when a machine changes power states. Just a little food for thought:

* peripherals (graphics cards, displays, mice, scanners, etc.)
* timing circuits (programmable interrupt clocks, watchdogs, etc.)
* environmental dependencies (open network connections, security contexts, etc.)

All of these need to be gracefully taken care of and reinitialized, and if possible made to continue previously interrupted tasks. All of this is already handled by current OSes. And all of this is very, very messy and complicated.

Reply Parent Score: 5

RE[3]: Comment by TempleOS
by Flatland_Spider on Mon 15th Apr 2013 19:25 in reply to "RE[2]: Comment by TempleOS"
Flatland_Spider Member since:
2006-09-01

It's looking more and more like the future belongs to SoCs, with RAM becoming just another cache on the CPU. To really make that work, NVRAM is needed.

Reply Parent Score: 2