Sunday, November 30, 2014

filesystems - delete millions of files within a directory

The other day I ran BleachBit on my system with its "wipe disk space" option enabled. It ran for several hours and filled my disk completely (100 GB or so). After waiting forever, I terminated the program and set out to delete its leftover files manually.



Now the problem: I'm not able to delete the files or the directory. I cannot even run ls inside the directory. I have tried rsync -a --delete, wipe, rm, various combinations of find and rm, and so on.



I followed the instructions in "rm on a directory with millions of files" and noticed the "Directory index full!" error in my logs as well.
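For readers who land here: the rsync variant usually suggested in that thread is to sync an empty directory over the full one, so rsync unlinks entries as it walks the tree instead of building one huge argument list. A minimal sketch using made-up stand-in directory names (not the real directory from this post):

```shell
# Build a small throwaway directory standing in for the real
# multi-million-entry one.
mkdir -p fulldir
for i in $(seq 1 500); do : > "fulldir/file$i"; done

# Sync an empty directory over it: with --delete, rsync removes
# everything in fulldir/ that is not in empty/, i.e. everything.
mkdir -p empty
rsync -a --delete empty/ fulldir/

# Both directories are now empty and can be removed.
rmdir empty fulldir
```

Whether this beats plain rm depends on the filesystem and kernel version; in this case it evidently did not help, but it is the canonical form of the trick.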



I noticed that stat reports an unusually large directory size of more than a GB. Usually a directory is just 4096 bytes, or at most somewhere in the tens of thousands.
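That huge size is expected on ext3/ext4: the directory inode grows as entries are added but is never shrunk when they are removed, so even after a successful cleanup the directory keeps its bloated size until it is recreated. A small demonstration of the growth (exact figures vary by filesystem; on some filesystems the size may shrink again):

```shell
# A directory's reported size grows with its entry count...
mkdir -p demo
for i in $(seq 1 2000); do : > "demo/f$i"; done
grown=$(stat -c %s demo)

# ...and on ext3/ext4 it stays grown after the entries are gone.
rm -f demo/*
emptied=$(stat -c %s demo)
echo "grown=$grown emptied=$emptied"

# Recreating the directory is what reclaims the inode's space.
rm -rf demo
```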




nameh@labs ~ % stat kGcdTIJ1H1
  File: ‘kGcdTIJ1H1’
  Size: 1065287680  Blocks: 2080744    IO Block: 4096   directory
Device: 24h/36d     Inode: 9969665    Links: 2
Access: (0777/drwxrwxrwx)  Uid: ( 1000/ nameh)   Gid: ( 1000/ nameh)
Access: 2014-10-31 07:43:08.848104623 +0530
Modify: 2014-10-31 07:43:19.727719839 +0530
Change: 2014-10-31 07:43:19.727719839 +0530
 Birth: -



The "ONLY" command that so far seems to be able to delete files within this dir is the srm command (secure deletion toolkit by THC). All other commands do not work. srm has been running for 20 hours now and has freed up around 1.1 GB so far. It's running with the least secure mode.



sudo srm -v -rfll kGcdTIJ1H1


Ideas?



Edit: My question is: how do I delete this directory quickly? In a few hours, say, rather than several days. rm -rf does nothing.
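One approach worth trying, which avoids both the shell's argument-list limit and holding the whole listing in memory, is find with -delete: it unlinks each entry as readdir() hands it over, with no sorting and no giant file list. A sketch against a throwaway stand-in directory (not the real one from this post):

```shell
# Stand-in for the real directory full of BleachBit leftovers.
mkdir -p bigdir
for i in $(seq 1 1000); do : > "bigdir/junk$i"; done

# -delete unlinks entries as they are enumerated; -mindepth 1
# restricts deletion to the contents, leaving bigdir itself.
find bigdir -mindepth 1 -delete

# Removing and recreating the directory is also what reclaims
# the bloated directory inode seen in the stat output above.
rmdir bigdir
```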
