Linked by Thom Holwerda on Sun 14th Apr 2013 20:30 UTC
Hardware, Embedded Systems "In the past five years, flash memory has progressed from a promising accelerator, whose place in the data center was still uncertain, to an established enterprise component for storing performance-critical data. Its rise to prominence followed its proliferation in the consumer world and the volume economics that followed. With SSDs, flash arrived in a form optimized for compatibility - just replace a hard drive with an SSD for radically better performance. But the properties of the NAND flash memory used by SSDs differ significantly from those of the magnetic media in the hard drives they often displace. While SSDs have become more pervasive in a variety of uses, the industry has only just started to design storage systems that embrace the nuances of flash memory. As it escapes the confines of compatibility, significant improvements in performance, reliability, and cost are possible."
Thread beginning with comment 558593
RE[3]: Comment by TempleOS
by ssokolow on Tue 16th Apr 2013 00:33 UTC in reply to "RE[2]: Comment by TempleOS"
ssokolow
Member since:
2010-01-21

The Linux devs are playing around with compressing allocated memory to reduce transfer times and use RAM more efficiently.


True, but from what I understand, that's more like swap space, and it can't achieve the same compression ratios as data-aware formats like PNG. (Remember, PNG applies various filter transforms before DEFLATE compression.)
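The effect of PNG-style filtering can be sketched in a few lines of Python. This is an illustration, not PNG itself: it applies a delta transform (similar in spirit to PNG's "Sub" filter) to a smooth gradient before DEFLATE, and compares the result against DEFLATE applied to the raw bytes.

```python
import zlib

# A smooth horizontal gradient, repeated: the kind of data PNG filters
# are designed for. Raw, it has many distinct byte values.
row = bytes(range(256))
raw = row * 64  # 16 KiB of gradient data

# Delta filtering (like PNG's "Sub" filter) replaces each byte with its
# difference from the previous byte mod 256, turning a gradient into a
# long run of identical values.
filtered = bytes((raw[i] - raw[i - 1]) % 256 if i else raw[i]
                 for i in range(len(raw)))

plain = zlib.compress(raw, 9)       # DEFLATE alone
delta = zlib.compress(filtered, 9)  # delta transform, then DEFLATE
print(len(plain), len(delta))       # the filtered stream is much smaller
```

A generic in-memory compressor sees only the raw bytes, so it can't exploit this kind of structure the way a format-aware encoder can.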

With ECC RAM, RAM already has checksums, and checksummed RAM pages, at the OS level, would increase security.


Point. I keep forgetting how much acceleration for checksum-related stuff is available in modern CPU instruction sets.
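As a rough sketch of the idea: modern x86 CPUs accelerate CRC32C via the SSE4.2 CRC32 instruction; Python's `zlib.crc32` below is the plain software CRC-32, used here only to illustrate how an OS might checksum a memory page and later detect corruption.

```python
import zlib

# Checksum a 4 KiB "page" the way an OS-level scheme might.
page = bytes(4096)
checksum = zlib.crc32(page)

# Later, verify the page is unchanged:
assert zlib.crc32(page) == checksum

# A single flipped bit changes the checksum:
corrupted = bytes([page[0] ^ 0x01]) + page[1:]
assert zlib.crc32(corrupted) != checksum
```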

OS X already renders everything in PDF, and it pre-renders icons and things of different sizes then stores them in an on-disk cache.


Point. I suppose it could work as long as there's a strong effort to shame and shun any applications which put their caches somewhere that can't be expired independently of their cooperation or knowledge.

Every conceivable version doesn't have to be pre-rendered. Only the most common sizes need to be pre-rendered, or only the rendered sizes need to be cached. Plus, the rendered versions don't have to be saved with the original file.


I was under the impression "the most common size" for a PDF was "Fit Width".

Also, I'm Canadian. I've got 5Mbit/800Kbit Internet, and every member of my family (my writer/artist mother, my gamer brothers, my non-gaming, programming self, etc.) can never have enough disk space, so wasting space on easily-regenerated caches for rarely-used files is not acceptable from a storage OR a transfer perspective.

"Pre-rendering the most common sizes" is only acceptable for things like icon themes where you can typically count the number installed on one hand and even the biggest ones have a small "per page" size.

Reply Parent Score: 3

RE[4]: Comment by TempleOS
by Flatland_Spider on Tue 16th Apr 2013 13:31 in reply to "RE[3]: Comment by TempleOS"
Flatland_Spider Member since:
2006-09-01

True, but from what I understand, that's more like swap space, and it can't achieve the same compression ratios as data-aware formats like PNG. (Remember, PNG applies various filter transforms before DEFLATE compression.)


Agreed. They're talking about using XZ compression, so it's just bulk compression rather than format-specific compression, as in the case of PNG.
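A quick sketch of what bulk compression looks like, using Python's `lzma` module (the same LZMA algorithm underlying xz). The compressor sees only bytes, with no knowledge of what the data represents:

```python
import lzma

# ~4 KiB of repetitive "memory page" contents.
page = b"hello world " * 340

# Bulk, format-agnostic compression, as a compressed-RAM scheme would do.
packed = lzma.compress(page, preset=6)
print(len(page), "->", len(packed))

# Round-trips losslessly:
assert lzma.decompress(packed) == page
```

This works well on redundant data, but it can't apply domain-specific transforms (like PNG's filters) first, which is why a bulk compressor generally can't match a format-aware one.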

I suppose it could work as long as there's a strong effort to shame and shun any applications which put their caches somewhere that can't be expired independently of their cooperation or knowledge.


It's the OS that manages that, so I don't think applications have any say in the matter, which is how it should be.

I was under the impression "the most common size" for a PDF was "Fit Width"


And full page in window.

Also, I'm Canadian. I've got 5Mbit/800Kbit Internet and every member of my family (my writer/artist mother, my gamer brothers, my non-gaming, programming self, etc.) can never have enough disk space so wasting space on easily-regenerated caches for rarely-used files is not acceptable from a storage OR a transfer perspective.


Right, that's why the render cache shouldn't be stored with the original file, which is what I was pointing out.

There are lots of techniques and algorithms for cache management. The OS should keep the render cache at a reasonable size. We have enough CPU/GPU power now that re-rendering, or rendering everything on the fly, isn't the performance hit it used to be.
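One of the simplest such techniques is a size-capped LRU cache. The sketch below is hypothetical (the `RenderCache` name and API are invented for illustration, not from any real PDF viewer): evicted pages are simply re-rendered on demand, which is the trade-off being described.

```python
from collections import OrderedDict

class RenderCache:
    """Hypothetical size-capped LRU cache for rendered pages."""

    def __init__(self, max_bytes):
        self.max_bytes = max_bytes
        self.used = 0
        self.entries = OrderedDict()  # key -> rendered bytes

    def get(self, key):
        if key not in self.entries:
            return None  # cache miss: caller re-renders the page
        self.entries.move_to_end(key)  # mark as most recently used
        return self.entries[key]

    def put(self, key, rendered):
        if key in self.entries:
            self.used -= len(self.entries.pop(key))
        self.entries[key] = rendered
        self.used += len(rendered)
        # Evict least-recently-used entries to stay under the cap.
        while self.used > self.max_bytes:
            _, evicted = self.entries.popitem(last=False)
            self.used -= len(evicted)

cache = RenderCache(max_bytes=100)
cache.put("page1", b"x" * 60)
cache.put("page2", b"y" * 60)  # pushes total over 100, so page1 is evicted
assert cache.get("page1") is None
assert cache.get("page2") is not None
```

The cap keeps the cache "at a reasonable size" automatically, and with cheap re-rendering, a miss costs little.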

Reply Parent Score: 2

RE[5]: Comment by TempleOS
by ssokolow on Tue 16th Apr 2013 20:41 in reply to "RE[4]: Comment by TempleOS"
ssokolow Member since:
2010-01-21

Right, that's why the render cache shouldn't be stored with the original file, which is what I was pointing out.

There are lots of techniques and algorithms for cache management. The OS should keep the render cache at a reasonable size. We have enough CPU/GPU power now that re-rendering, or rendering everything on the fly, isn't the performance hit it used to be.


My original point is that NVRAM wouldn't change anything here. If Evince or Okular supported a non-volatile render cache, I'd turn it off to save space, since I have many, many PDFs, use each individual one infrequently, and always have less space than optimal. (I collect things like YouTube videos.)

As far as PDFs go, I usually just read a research paper or an electronic component's data sheet, write a library, draw a circuit diagram, or take notes, and then keep the PDF around just in case.

Reply Parent Score: 2