Linked by Thom Holwerda on Thu 27th Sep 2012 19:36 UTC
Apple I bought a brand new iMac on Tuesday. I'm pretty sure this will come as a surprise to some, so I figured I might as well offer some background information about this choice - maybe it'll help other people who are also pondering what to buy as their next computer.
Thread beginning with comment 536796

Well, two things. First, I'm not sure I can believe that "10 years" figure; I've done some reading about SSDs, and it doesn't seem totally believable to me. Not for a system drive with swap and all your data, at least. It just sounds too far-fetched. Maybe in a perfect world and in perfect conditions, but not in this world.

And second, SSDs definitely have the theoretical edge over hard drives when it comes to the speed of reading files that are laid out non-contiguously on the drive. But... at the same time, in all my time running Linux (exclusively for about 6 years), I have not had any major slowdowns due to fragmentation. It just doesn't happen. I was leery about moving entirely to an OS that doesn't even offer a native defragmentation program, but it turns out that defragmentation really is needed much less there.

On the other hand, before I switched I was using Windows XP, and it seemed like I was running PerfectDisk every week or two just to keep the drive running at peak speed (especially the system drive). So... I guess the moral of the story here is that Windows users have more to gain in terms of performance than Linux users by using an SSD?

I recently found out about the free version of PerfectDisk and had my cousin install it on his Windows 7-based machine... as I expected, just one offline/boot defrag and one online defrag made a very noticeable performance improvement. So apparently on Windows a good defragging is still needed. [Note: Previously, I told him to install the free program Defraggler, which he used until then. IMO, PerfectDisk is a must... if I ran Windows today, I would no doubt buy another license for a recent version of the program.]

Reply Parent Score: 1

Luminair Member since:

the parts about ssd speed and lifetime sound right to me. even on fast new systems, hard drives still thrash left and right when doing lots of stuff.

and think of how many old semiconductors there are still working. the 10 year estimate makes sense because most ssds won't get worn out enough to kill the NAND. most of the stuff on an ssd just sits there. and when stuff is written, it is spread out across the nand to reduce aging.
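That spreading can be sketched with a toy wear-leveling model. The block count, cycle rating, and write counts below are made-up illustrative numbers, not any real drive's figures:

```python
# Toy model: an SSD with 1000 erase blocks, each rated for ~3000
# program/erase cycles. A simple wear-leveling controller always
# directs the next write to the least-worn block, so even a workload
# that rewrites the same logical data keeps physical wear even.
# All numbers here are illustrative, not from any real datasheet.

BLOCKS = 1000

erase_counts = [0] * BLOCKS

def write_with_wear_leveling(n_writes):
    """Each write erases/programs the currently least-worn block."""
    for _ in range(n_writes):
        target = erase_counts.index(min(erase_counts))
        erase_counts[target] += 1

# 30,000 block writes spread across the whole drive...
write_with_wear_leveling(30_000)

# ...leave every block at exactly the same wear level.
print(max(erase_counts), min(erase_counts))  # prints: 30 30
```

Without that remapping, the same 30,000 writes aimed at one logical block would burn through a single block's ~3000-cycle budget ten times over.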

Reply Parent Score: 2

UltraZelda64 Member since:

The only time the thrashing gets noticeable on my machines has always been when the system is swapping--especially when the swapping gets out of hand. In that case, having to swap is a bad thing in general, and the fix (although it will come at a cost) is more memory. That swapping-related thrashing doesn't cause a definite and drastic decrease in the hard drive's life, but if it were to happen on a purely SSD-based machine, it would. Hard drives really are built to handle some pretty heavy-duty work; I've put many of mine through hell over the years, especially in terms of swapping, and they never failed to impress me with how long they lasted before breaking.

I usually have a system drive and a secondary drive for /home though, so thrashing for me tends to be minimal unless the system is swapping. Years ago I put the swap partition on the /home drive, thinking that would minimize hard drive usage and therefore thrashing, but I later found that I was wrong and that putting the swap space on the same drive as the system partition gave better results.

Reply Parent Score: 1

WereCatf Member since:

And second, SSDs definitely have the theoretical edge over hard drives when it comes to the speed of reading files that are laid out non-contiguously on the drive. But... at the same time, in all my time running Linux (exclusively for about 6 years), I have not had any major slowdowns due to fragmentation.

Don't try to turn this into an anti-Windows argument. You know perfectly well that both Windows and any average Linux distro consist of thousands of small files. It doesn't matter whether those files are fragmented or not; they're still not laid out on the disk in such an order that the drive can read every single one of them sequentially, and that is exactly why low seek times matter.

Also, as I said, these days SSDs trump HDDs even in sequential speeds. Check out e.g. : the SSD can write ~200MB/s of incompressible data sequentially, something that no consumer-oriented HDD can do.

Reading both and would do you a lot of good.

Reply Parent Score: 3

lfeagan Member since:

An SSD is many times faster than an HDD for my work (programming). My 5-year-old ThinkPad T61p (Core 2 Duo, 8GB) with a slow 128 GB SATA I SSD absolutely stomps my employer-provided ThinkPad W520 (quad-core i7, 16GB) that has a 7200 rpm HDD in tasks I care about, like compilation. Generally my compile jobs and large application launches (enterprise DBMS) are 4x faster. And this is with a nearly 5-year-old SATA I SSD (a whopping 128 GB at that).

Another interesting note: I also have a Samsung 830 512 GB in a newer system, and its performance, while better than the super-old SSD's, is less than 2x better. The return on investment for high- vs. low-performance SSDs is minimal. OTOH, making sure you buy a reliable drive is money well spent.

Reply Parent Score: 2

UltraZelda64 Member since:

This is honestly such a deep, complex subject that everyone could argue the living hell out of it and no one would ever agree on a conclusion. ;) There are just too many factors involved, and whether you want to hear it or not, the OS involved does make a difference.

I did a quick fsck on my / and /home partitions for the hell of it earlier just to get fragmentation data, and quite honestly, I am amazed at what I saw; it completely backed up what I said, 100%. It was shocking, considering the cringe-worthy state of my system's partitioning. It's in desperate need of re-partitioning, and technically I'm doing a lot of things extremely inefficiently and just downright WRONG, yet far fewer fragments were reported than PerfectDisk would typically report in much better circumstances. And the biggest culprits? Exactly what I expected according to filefrag: three VirtualBox disk images.

And BTW, the reason I focused on data fragmentation is because in my experience it tends to have a far more devastating effect than a bunch of little files scattered all over a drive. That is, assuming you're running an OS and file system combination that is prone to fragmentation in the first place... luckily, in that case there are some excellent tools to keep the fragments under control.

Not to mention, with the explosion in hard drive data density over the years, even if rotational speeds do not increase, a newer drive will still probably read the same amount of data faster than an older one.

The bottom line is that hard drives are wicked fast these days. And with partitioning, it's easily possible to separate your boot files (/boot), main system (/), programs (/usr), and personal files (/home) and keep them all together to minimize seeking between files... but honestly, in my experience you don't even have to go that far, because I've found (for example) Windows XP running practically the same on a 15GB partition at the beginning of the disk as it did on a 60GB partition spanning the entire drive. The only condition? That the number of file fragments is kept to a minimum.
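That kind of split might look like this in /etc/fstab. Device names, filesystem types, and mount options here are placeholders for illustration, not a recommendation (and note that a separate /usr can require initramfs support on some distros):

```
# Hypothetical /etc/fstab keeping system areas and personal files apart
/dev/sda1  /boot  ext2  defaults  0  2
/dev/sda2  /      ext4  defaults  0  1
/dev/sda3  /usr   ext4  defaults  0  2
/dev/sda4  none   swap  sw        0  0
/dev/sdb1  /home  ext4  defaults  0  2
```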

I will just say that I still stand by what I said and will leave it at that.

Edited 2012-09-28 12:51 UTC

Reply Parent Score: 1

aliquis Member since:

Don't try to turn this into an anti-Windows argument.
And if he had used something other than Linux (or even cared at all) for the last 6 years (or 10), he would have known that Windows runs NTFS now.

Reply Parent Score: 2

Alfman Member since:


"First, I'm not sure I can believe that "10 years" figure;"

I've read that manufacturers are targeting 3 years of heavy use. Anything less than that they consider unacceptable; anything more they consider an opportunity to trade reliability for capacity. However, all these reliability figures are based on "typical" write patterns as they're handled by the wear-leveling controller. If you are going to regularly rewrite the entire flash, then the wear-leveling algorithm becomes irrelevant and you're back to the NAND chip's underlying program/erase lifespan (spec'd at around 3,000-5,000 cycles).
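The arithmetic behind those figures is easy to sketch: assuming wear-leveling works as intended, a rough endurance estimate is capacity times P/E cycles divided by daily write volume. All numbers below are illustrative assumptions, and write amplification is ignored:

```python
# Back-of-the-envelope endurance estimate: 128 GB of NAND rated at
# ~3000 P/E cycles (the low end of the 3-5K range), with writes spread
# evenly by the wear-leveling controller. Write amplification is
# ignored; all inputs are illustrative, not a manufacturer's spec.

capacity_gb = 128
pe_cycles = 3000
daily_writes_gb = 20                 # a fairly heavy desktop workload

total_write_budget_gb = capacity_gb * pe_cycles      # 384,000 GB
lifetime_days = total_write_budget_gb / daily_writes_gb
print(f"{lifetime_days / 365:.1f} years")            # prints: 52.6 years
```

Even allowing for a write-amplification factor of a few, that leaves decades of wear under typical desktop use.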

So, whether flash is acceptable depends upon the application. If you are a photographer, you can fill up your flash card a few thousand times before going over the manufacturer's specs. This is acceptable to most consumers.

Storing swap could be ok, but only if you don't expect applications to leak into swap very often. If an application enters a period of vicious swapping such that the entire swap area is being rewritten continuously (and assuming the swap space is a large fraction of the SSD capacity), then you're looking at depleting the NAND chip's lifespan very quickly.
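The same arithmetic shows how different that worst case is (again, with made-up illustrative numbers):

```python
# Vicious swapping: an 8 GB swap area on a 128 GB SSD, fully rewritten
# four times an hour. Wear-leveling still spreads the erases across the
# whole drive, so the total write budget is unchanged -- it just drains
# much faster. All numbers are illustrative assumptions.

capacity_gb = 128
pe_cycles = 3000
swap_gb = 8
rewrites_per_hour = 4

budget_gb = capacity_gb * pe_cycles                  # 384,000 GB
hourly_writes_gb = swap_gb * rewrites_per_hour       # 32 GB/hour
days = budget_gb / hourly_writes_gb / 24
print(f"{days:.0f} days")                            # prints: 500 days
```

A drive that would last decades under normal desktop writes is depleted in under a year and a half of continuous swapping.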

Flash is much better suited to scenarios where reading is much more frequent than writing, which is usually the case for operating system files. Just be aware of processes that continuously write to flash.

Reply Parent Score: 2