Wednesday, October 21, 2015

filesystems - directory with 980MB of metadata, millions of files, how to delete it? (ext3)

Hello,

So I'm stuck with this directory:

drwxrwxrwx 2 dan users 980M 2010-12-22 18:38 sessions2

The directory's contents are small - just millions of tiny files.

I want to wipe it from the filesystem but have been unable to. My first attempts were:

find sessions2 -type f -delete

and

find sessions2 -type f -print0 | xargs -0 rm -f

but I had to stop because both caused escalating memory usage. At one point it was using 65% of the system's memory.
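
For reference, on Linux the resident memory of the find process is easy to watch while it runs:

ps -o rss,comm -C find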

So I thought (no doubt incorrectly) that it had to do with the fact that dir_index was enabled on the filesystem. Perhaps find was trying to read the entire index into memory?

So I did this (foolishly): tune2fs -O^dir_index /dev/xxx
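
For the record, whether dir_index is currently listed among the filesystem's feature flags can be checked with:

tune2fs -l /dev/xxx | grep features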

Alright, so that should have done it. I ran the find command above again and... same thing: crazy memory usage.

I hurriedly ran tune2fs -Odir_index /dev/xxx to reenable dir_index, and ran!

2 questions:

1) How do I get rid of this directory on my live system? I don't care how long it takes, as long as it uses little memory and little CPU. By the way, using nice find ... I was able to reduce CPU usage, so my problem right now is only memory usage.
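
One low-memory approach I've seen suggested (I haven't tried it myself yet) is the rsync empty-directory trick: sync an empty directory over sessions2 with --delete, so that every entry in the destination gets removed. This assumes rsync 3.x, which scans incrementally rather than building the complete file list in memory first:

mkdir /tmp/empty
# --delete removes everything in the destination that isn't in the (empty) source
rsync -a --delete /tmp/empty/ sessions2/
rmdir sessions2 /tmp/empty

nice and ionice can be layered on top of the rsync to keep the CPU and I/O impact down.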

2) I disabled dir_index for about 20 minutes, and no doubt new files were written to the filesystem in the meantime. Now that I have re-enabled dir_index, does that mean the system will not find the files written while it was disabled, since their filenames will be missing from the old indexes? If so, and given that I know these new files aren't important, can I keep the old indexes as they are? If not, how do I rebuild the indexes, and can that be done on a live system?
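
If I'm reading the e2fsck man page correctly, its -D option optimizes (re-indexes) directories, which would rebuild the indexes - but e2fsck should only be run with the filesystem unmounted, so apparently not on a live system:

e2fsck -fD /dev/xxx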

Thanks!
