We have a group of consumer terminals that have Linux, a local web server, and PostgreSQL installed. We are getting field reports of machines with problems and upon investigation it seems as if there was a power outage and now there is something wrong with the disk.
I had assumed the problem would just be with the database getting corrupted, or files with recent changes getting scrambled, but there are other odd reports.
- files with the wrong permissions
- files that have become directories (for example, index.php is now a directory)
- directories that have become files
- files with scrambled data
There are problems with the database getting corrupted, but that's something I could expect. What I'm more surprised about is the more basic filesystem problems - for example, wrong permissions or a file turning into a directory. The problems also affect files that did not recently change (for example, the software code and configuration).
Is this "normal" for SSD corruption? Originally we thought it was happening on some cheap SSDs, but we have this happening on a name-brand (consumer grade.)
FWIW, we are not running an automatic fsck on unclean boot (I don't know why - I'm new here). We have UPSs installed in some locations, but sometimes the installation isn't done properly, and even then people can still power down the terminal uncleanly - so it's not fool-proof. The filesystem is ext4.
The question: is there anything we can do to mitigate the problem at the system level?
I found some articles about turning off the hardware write cache or mounting the drive in sync mode, but I'm not sure whether that would help in this case (metadata corruption and files that had not recently changed). I also read a reference about mounting the filesystem read-only. We can't do that because we need to write, but we could put the code and configuration on a read-only partition if that would help.
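For illustration, a minimal sketch of what that split could look like in /etc/fstab, assuming a hypothetical layout where /dev/sda2 holds the code/configuration and /dev/sda3 holds the writable data (device names and mount points are made up; the commented entry shows the "sync" option mentioned above, which is very slow and wears the flash faster):

# hypothetical: code and configuration on their own read-only partition
/dev/sda2  /opt/app             ext4  ro,noatime        0  2
# hypothetical: writable data (database, logs) on a separate partition
/dev/sda3  /var/lib/postgresql  ext4  defaults,noatime  0  2
# alternative for the data partition: force synchronous writes (much slower)
# /dev/sda3  /var/lib/postgresql  ext4  sync,noatime    0  2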
This is an example of a drive (sudo hdparm -i /dev/sda1):
Model=KINGSTON RBU-SMS151S364GG, FwRev=S9FM02.5, SerialNo=
Config={ Fixed }
RawCHS=16383/16/63, TrkSize=0, SectSize=0, ECCbytes=0
BuffType=unknown, BuffSize=unknown, MaxMultSect=16, MultSect=16
CurCHS=16383/16/63, CurSects=16514064, LBA=yes, LBAsects=125045424
IORDY=on/off, tPIO={min:120,w/IORDY:120}, tDMA={min:120,rec:120}
PIO modes: pio0 pio3 pio4
DMA modes: mdma0 mdma1 mdma2
UDMA modes: udma0 udma1 udma2 udma3 udma4 udma5 *udma6
AdvancedPM=yes: disabled (255) WriteCache=enabled
Drive conforms to: Unspecified: ATA/ATAPI-3,4,5,6,7
When suddenly losing power, MLC/TLC/QLC SSDs have two failure modes:
- they lose the in-flight and in-DRAM-only writes;
- they can corrupt any data-at-rest stored in the lower page of the NAND cell being programmed.
The first failure mode is obvious: without power protection, any data that is not on stable storage (i.e. the NAND itself) but only in the volatile cache (DRAM) will be lost. The same happens with classical mechanical disks, and that alone can wreak havoc on a filesystem that does not properly issue fsyncs.
The second failure mode is specific to MLC+ SSDs: when reprogramming the high page bit to store new data, an unexpected power loss can destroy/alter the lower bit (i.e. previously committed data) as well.
The only true, and most obvious, solution is a power-loss-protected DRAM cache (generally backed by batteries/supercaps), as high-end RAID controllers have done forever; this, however, increases drive cost/price. Consumer drives typically have no power-loss-protected cache; instead, they use an array of more economical mitigations, such as:
- a partially protected write cache (e.g. Crucial M500/M550/M600+);
- a NAND changes journal (e.g. Samsung drives; see the SMART PoR attribute - a quick smartctl check is sketched after this list);
- special SLC/pseudo-SLC NAND regions that absorb new writes without putting previously written data at risk (e.g. SanDisk, Samsung, etc).
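As an aside, if you want to see how often a drive has actually been powered off uncleanly, smartmontools can show the relevant counters. The attribute names vary by vendor, so treat the names below as examples you may need to adapt to your drives:

sudo smartctl -A /dev/sda
# look for attributes such as 174 "Unexpect_Power_Loss_Ct" or 192 "Power-Off_Retract_Count"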
Back to your question: your Kingston drives are ultra-cheap ones, using an unspecified controller and with basically no public specs. It does not surprise me that a sudden power loss corrupted previous data. Unfortunately, even disabling the disk's DRAM cache (with the massive performance loss that entails) will not solve your problem, as previous data (i.e. data-at-rest) can, and will, be corrupted by unexpected power losses. If they are based on the old SandForce controller, even a total drive brick can be expected under the "right" circumstances.
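For reference, this is how the volatile write cache is typically checked and disabled with hdparm; I show it only because the option keeps coming up in those articles, not because it will protect your data-at-rest (and note that on many drives the setting does not survive a power cycle, so it would have to be re-applied at boot):

# show the current write-cache setting
sudo hdparm -W /dev/sda
# disable the volatile write cache (big performance hit)
sudo hdparm -W 0 /dev/sda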
I strongly suggest reviewing your UPS setup and, in the mid-term, replacing these aging drives.
A last note about PostgreSQL and other Linux databases: they will not disable the disk's cache and should not be expected to do that. Rather, they issue periodic/required fsyncs/FUAs to commit key data to stable storage. This is the way things should be done, unless a very compelling reason exists (i.e. a drive which lies about ATA FLUSHes/FUAs).
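If you want to verify that your PostgreSQL instances are not configured to skip those flushes, check these settings in postgresql.conf; the values below are the stock defaults and should be left alone:

fsync = on                    # WAL is flushed to stable storage on commit
full_page_writes = on         # protects against torn pages after a crash
#wal_sync_method = fdatasync  # platform default on Linux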
EDIT: if possible, consider migrating to a checksumming filesystem such as ZFS or Btrfs. At the very least consider XFS, which has journal checksums and, lately, even metadata checksums. If you are forced to stay on ext4, consider enabling auto-fsck at startup (fsck.ext4 is very good at repairing corruption).
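A hedged sketch of the ext4 side of that advice, assuming a reasonably recent e2fsprogs (1.43+) and that you can take the filesystem offline on a test terminal first:

# enable ext4 metadata checksums on an unmounted filesystem, then force a full check
sudo tune2fs -O metadata_csum /dev/sda1
sudo fsck.ext4 -f /dev/sda1
# for an automatic fsck/repair at boot on systemd machines, add to the kernel command line:
#   fsck.mode=force fsck.repair=yes
# and make sure the filesystems have a non-zero "pass" field in /etc/fstab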