From wiki:
The vital TRIM function is supported by the Linux OS starting with the
2.6.33 kernel (available early 2010). However, support among the various filesystems is still inconsistent or absent, and proper partition
alignment is not always carried out by installation software.
So, which filesystem works best for an SSD, supports TRIM and partition alignment during install, and is available on Ubuntu?
Answer
Choose ext4, and either mount it with the discard option for TRIM support, or use FITRIM (see below). Also use the noatime option if you fear "SSD wear".
Don't change your default I/O scheduler (CFQ) on multi-application servers, as it provides fairness between processes and has automatic SSD support. However, use Deadline on desktops to get better responsiveness under load.
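For example, a hypothetical /etc/fstab entry for an ext4 root filesystem with both options (the UUID is a placeholder; keep your own, and keep any other options your distribution set):

```
# <file system>                             <mount point>  <type>  <options>                  <dump>  <pass>
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /              ext4    defaults,noatime,discard   0       1
```

A remount (or reboot) is needed for the new options to take effect.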
To easily guarantee proper data alignment, the starting sector of each partition must be a multiple of 2048 (= 1 MiB with 512-byte sectors). You can use fdisk -cu /dev/sdX to create them; recent distributions take care of this automatically.
Think twice before using swap on an SSD. It will probably be much faster than swap on an HDD, but it will also wear the disk faster (which may not be relevant, see below).
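As a small sketch of that alignment rule (the sysfs path and device names are assumptions; adjust them to your system):

```shell
#!/bin/sh
# A partition is 1 MiB-aligned when its start sector is a multiple of 2048
# (assuming the usual 512-byte logical sectors).
is_aligned() {
    [ $(( $1 % 2048 )) -eq 0 ]
}

# The start sector of an existing partition can be read from sysfs, e.g.:
#   start=$(cat /sys/block/sda/sda1/start)
is_aligned 2048 && echo "sector 2048: aligned"    # modern default
is_aligned 63 || echo "sector 63: misaligned"     # old DOS-era default
```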
- Filesystems:
Ext4 is the most common Linux filesystem (and well maintained). It provides good performance with SSDs and supports the TRIM (and FITRIM) feature to keep good SSD performance over time (it clears unused blocks for quick later write access). NILFS is especially designed for flash memory drives, but does not really outperform ext4 in benchmarks. Btrfs is still considered experimental (and does not really perform better either).
- SSD performance & TRIM:
The TRIM feature clears SSD blocks that are no longer used by the filesystem. It optimizes long-term write performance and is recommended on SSDs due to their design, but it requires the filesystem to be able to tell the drive about those blocks. The discard mount option of ext4 will issue such TRIM commands when filesystem blocks are freed. This is online discard.
However, this behavior implies a small performance overhead. Since Linux 2.6.37, you may avoid using discard and choose to do occasional batch discard with FITRIM instead (e.g. from the crontab). The fstrim utility does this (online), as does the -E discard option of fsck.ext4. You will need recent versions of these tools, however.
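As a sketch of such a batch discard (fstrim comes from util-linux; the mount point and schedule below are only examples, and the command needs root and a TRIM-capable drive):

```shell
# One-off batch discard of all free blocks on the root filesystem:
fstrim -v /

# Example root crontab entry doing the same every Sunday at 03:00
# (the /sbin path is an assumption; check with `command -v fstrim`):
# 0 3 * * 0  /sbin/fstrim /
```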
You might want to limit writes on your drive, as SSDs have a limited lifetime in this regard. Don't worry too much, however: today's worst 128 GB SSDs can support at least 20 GB of written data per day for more than 5 years (1000 write cycles per cell). Better (and bigger) ones can last much longer: you will very probably have replaced the drive by then.
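That lifetime claim is easy to sanity-check with shell arithmetic (a rough sketch that ignores write amplification, which is one reason the quoted figure is conservative):

```shell
# 128 GB of cells x 1000 write cycles per cell = 128000 GB of total writes
total_gb=$(( 128 * 1000 ))
days=$(( total_gb / 20 ))     # at 20 GB written per day
years=$(( days / 365 ))
echo "$days days, i.e. about $years years"   # well beyond 5 years
```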
If you want to use swap on an SSD, the kernel will notice the non-rotational disk and randomize swap usage (kernel-level wear levelling): you will then see SS (Solid State) in the kernel message when swap is enabled:
Adding 2097148k swap on /dev/sda1. Priority:-1 extents:1 across:2097148k SS
- I/O Schedulers:
Also, I agree with most of aliasgar's answer (even if most of it has been -illegally?- copied from this website), but I must partly disagree on the scheduler part. By design, the deadline scheduler is optimized for rotational disks, as it implements the elevator algorithm. So, let's clarify this part.
Long answer on schedulers
Starting with kernel 2.6.29, SSDs are automatically detected, and you may verify this with:
cat /sys/block/sda/queue/rotational
You should get 1 for hard disks and 0 for an SSD.
Now, the CFQ scheduler can adapt its behavior based on this information. Since Linux 3.1, the kernel documentation file cfq-iosched.txt says:
CFQ has some optimizations for SSDs and if it detects a non-rotational
media which can support higher queue depth (multiple requests at in
flight at a time), [...].
Also, the Deadline scheduler tries to limit unordered head movements on rotational disks, based on the sector number. Quoting the kernel doc deadline-iosched.txt, fifo_batch option description:
Requests are grouped into ``batches'' of a particular data direction
(read or write) which are serviced in increasing sector order.
However, tuning this parameter to 1 when using a SSD may be interesting:
This parameter tunes the balance between per-request latency and
aggregate throughput. When low latency is the primary concern,
smaller is better (where a value of 1 yields first-come first-served
behaviour). Increasing fifo_batch generally improves throughput, at
the cost of latency variation.
Some benchmarks suggest there is little performance difference between the schedulers; so why not recommend fairness, since CFQ is rarely bad in benchmarks? However, on desktop setups you will usually experience better responsiveness using Deadline under load, due to its design (probably at a lower throughput cost, though).
That said, a better benchmark would try using Deadline with fifo_batch=1.
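A sketch of that tuning, assuming the disk is sda and already uses the deadline scheduler (the device name is an assumption; this needs root and does not persist across reboots):

```shell
# Favour per-request latency over aggregate throughput on an SSD:
echo 1 > /sys/block/sda/queue/iosched/fifo_batch
cat /sys/block/sda/queue/iosched/fifo_batch   # verify the new value
```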
To use Deadline on SSDs by default, you can create a udev rule file, say /etc/udev/rules.d/99-ssd.rules, as follows:
# all non-rotational block devices use 'deadline' scheduler
# mostly useful for SSDs on desktops systems
SUBSYSTEM=="block", ATTR{queue/rotational}=="0", ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/scheduler}="deadline"
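After creating the rule, you can reload udev and check which scheduler is active without rebooting (the device name is an example; the active scheduler is shown in brackets):

```shell
udevadm control --reload
udevadm trigger --subsystem-match=block --action=change
cat /sys/block/sda/queue/scheduler   # e.g. "noop [deadline] cfq"
```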