
solid state hard drives

I've been booting my machine under my TV from a CF card for the last 6 months, and I'll be getting an 8GB solid state drive as soon as I can find one with a SATA interface.

Solid state + ramdisk = damn fast "appliance" computer :)
 
Sunray said:
You're forgetting that those hard disks have 16MB of cache, which is static RAM and can deliver data at very high rates, probably approaching the SATA interface's maximum bandwidth. So what you're talking about is entirely dependent on access patterns.

For large file transfers a hard disk will smash an SSD for performance; for small reads an SSD will beat a disk, but only if the disk hasn't already cached the data.
http://www.anandtech.com/storage/showdoc.aspx?i=3064&p=5

"Smash" is the wrong term. In write benchmarks that SSD gets 85% of the performance of a 150GB Raptor; in real-world tests it only reaches around 5% of the Raptor's scores. SSDs are still slower than hard drives, and more expensive for the performance, but for how long?

Once I can get a 16GB SSD for under £100 I'll be seriously tempted to move over to it.
 
What I find interesting is that you're still using an SSD through an OS that has been tweaked extensively for the inherent deficiencies of mechanical hard drives. I'd love to see what happens when someone puts Linux or similar on an SSD, disables the I/O scheduler's re-ordering, and maybe even uses a flash-friendly filesystem...
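For the scheduler part, recent Linux kernels let you do this at runtime through sysfs. A minimal sketch, assuming the SSD shows up as /dev/sda and that the kernel was built with the noop scheduler available (both are assumptions about your system):

```shell
# Minimal sketch: view and switch the I/O scheduler for an assumed SSD at /dev/sda.
# Re-ordering elevators (cfq, anticipatory) exist to minimise head seeks on
# spinning disks; "noop" just passes requests through, which suits a
# no-seek-penalty SSD.
dev=sda
sched="/sys/block/$dev/queue/scheduler"
if [ -w "$sched" ]; then           # needs root, and the device must exist
    cat "$sched"                   # current choices, e.g. "[cfq] noop deadline"
    echo noop > "$sched"           # select the pass-through scheduler
fi
```

Note this only lasts until reboot; to make it stick you'd put it in a boot script or pass `elevator=noop` on the kernel command line.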
 
stdPikachu said:
What I find interesting is that you're still using an SSD through an OS that has been tweaked extensively for the inherent deficiencies of mechanical hard drives. I'd love to see what happens when someone puts Linux or similar on an SSD, disables the I/O scheduler's re-ordering, and maybe even uses a flash-friendly filesystem...

It's been done a lot, I seem to remember. Most of the "Linux on PDA" projects do something like that, and there's a whole sub-project of embedded engineers from "consumer electronics" companies working in that area. What you tend to end up with is 1-2s boot times.

The main issue is: where do you put the filesystems that get written to a lot? (/tmp, /var, maybe /home)
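One common dodge is to keep the write-heavy bits in RAM instead of on flash, e.g. mounting /tmp as a tmpfs. A sketch of the relevant fstab line (the 256m size cap is an arbitrary example, not a recommendation):

```
# /etc/fstab -- mount /tmp as a RAM-backed tmpfs so frequent writes never hit flash
# (size=256m is an illustrative cap; tune it to your RAM)
tmpfs  /tmp  tmpfs  defaults,noatime,size=256m  0  0
```

/var is harder, since logs and package databases are expected to survive a reboot.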
 
rich! said:
If you're worried about latency, it's because you've already filled your system with RAM. 16MB on each spindle only makes a bit of difference - the main use is to let the HD re-order writes and reads, as well as storing the last tracks read.

What'll be interesting is to see if anyone starts moving large read-mostly database servers to flash storage instead of RAM caches.

I know that at least one VLDB appliance manufacturer is looking at this. It's purely R&D at the moment, as the reliability isn't there yet, but it's being given serious consideration for the future.

The price of VLDB appliances means that the disk price becomes irrelevant.

Mount SSDs in parallel, then add a parallel DBMS architecture, and you have one huge, very, very fast EDW that uses a fraction of the power and space of the previous version...
 
rich! said:
It's been done a lot, I seem to remember.

Aye, but I've not seen any reviews of these "mainstream" flash SSDs with the above things done to them. At the moment I'm running /tmp as a ramdisk, and /var and /home hardly get touched at all on this box. In any case, the built-in wear levelling on the new SSDs means that, as long as you don't divvy the SSD into lots of small partitions, you could write to it continuously for a year before you're likely to lose a load of sectors.
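A rough back-of-envelope supports the year figure. Assuming (my numbers, not the manufacturer's) SLC flash good for 100,000 erase cycles per cell, perfect wear levelling across an 8GB drive, and 30MB/s of sustained writes:

```shell
# Back-of-envelope SSD endurance. All inputs are assumptions:
# 8GB drive, 100,000 erase cycles per cell (SLC-ish), perfect wear
# levelling over the whole drive, 30 MB/s of continuous writes.
capacity_mb=$((8 * 1024))
cycles=100000
write_rate_mb_s=30
total_writable_mb=$((capacity_mb * cycles))     # 819,200,000 MB before wear-out
seconds=$((total_writable_mb / write_rate_mb_s))
days=$((seconds / 86400))
echo "~$days days of continuous writing"        # ~316 days, i.e. about a year
```

Carve the drive into small partitions and each one wears independently over a much smaller pool of cells, which is why the "one big partition" caveat matters.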
 