I have a rather old server with 4 GB of RAM that serves pretty much the same files all day, yet it does so from the hard drive while 3 GB of RAM sit "free".
Anyone who has ever run a RAM drive can attest that it's awesome in terms of speed. Memory usage on this system rarely exceeds 1 GB of the 4 GB, so I want to know whether there is a way to use that extra memory for something good.
- Is it possible to tell the filesystem to always serve certain files out of RAM?
- Are there any other methods I can use to improve file-read performance by using RAM?
More specifically, I am not looking for a 'hack' here. I want filesystem calls to serve the files from RAM without my having to create a RAM drive and copy the files there manually, or at least a script that does this for me.
Possible applications here are:
- Web servers with static files that get read a lot
- Application servers with large libraries
- Desktop computers with too much RAM
Any ideas?
Edit:
- Found this very informative: The Linux Page Cache and pdflush
- As Zan pointed out, the memory isn't actually free. What I mean is that it's not being used by applications, and I want to control what gets cached in memory.
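(To see this for yourself on a Linux box: the kernel reports how much of that "free" RAM is already holding the page cache in /proc/meminfo. A minimal sketch, Linux-only, with `meminfo_kib` being my own helper name:)

```python
# Show how much "free" RAM the Linux kernel is already using as page cache.
# Reads /proc/meminfo, so this only works on Linux.

def meminfo_kib():
    """Parse /proc/meminfo into a dict of {field: value in KiB}."""
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, _, rest = line.partition(":")
            info[key] = int(rest.split()[0])  # values are reported in kB
    return info

if __name__ == "__main__":
    m = meminfo_kib()
    print(f"Total:  {m['MemTotal'] // 1024} MiB")
    print(f"Free:   {m['MemFree'] // 1024} MiB")
    print(f"Cached: {m['Cached'] // 1024} MiB  (page cache, reclaimable)")
```

On the machine described above you would expect `Cached` to grow toward the 3 GB mark as files are read, since the kernel keeps recently read data in RAM until the memory is needed elsewhere.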
Answer
vmtouch seems like a good tool for the job.
Highlights:
- query how much of a directory is cached
- query how much of a file is cached (also which pages, graphical representation)
- load file into cache
- remove file from cache
- lock files in cache
- run as daemon
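(For the curious: the "how much of this file is cached" queries work via the mincore(2) system call, which reports which pages of a mapped file are resident in RAM. A rough Linux-only sketch of that query in Python via ctypes; `resident_pages` is my own name, not part of vmtouch:)

```python
# Count how many pages of a file are resident in the page cache,
# using the same mincore(2) call that cache-query tools rely on.
# Linux-only sketch.
import ctypes
import ctypes.util
import mmap
import os

libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)
PAGE = os.sysconf("SC_PAGE_SIZE")

def resident_pages(path):
    """Return (resident, total) page counts for `path`."""
    size = os.path.getsize(path)
    if size == 0:
        return (0, 0)
    npages = (size + PAGE - 1) // PAGE
    with open(path, "rb") as f:
        # ACCESS_COPY gives a private, file-backed mapping we can inspect.
        mm = mmap.mmap(f.fileno(), size, access=mmap.ACCESS_COPY)
        buf = (ctypes.c_char * size).from_buffer(mm)
        vec = (ctypes.c_ubyte * npages)()  # one status byte per page
        rc = libc.mincore(ctypes.c_void_p(ctypes.addressof(buf)),
                          ctypes.c_size_t(size), vec)
        del buf  # release the buffer export so the mmap can close
        mm.close()
    if rc != 0:
        raise OSError(ctypes.get_errno(), "mincore failed")
    return (sum(b & 1 for b in vec), npages)  # low bit = page is resident
```

A file you have just written or read will typically show all of its pages resident, while a cold file on disk shows zero.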
EDIT:
Usage as asked in the question is shown in example 5 on the vmtouch homepage:
Example 5
Daemonise and lock all files in a directory into physical memory:
vmtouch -dl /var/www/htdocs/critical/
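(The locking part works by mmap()ing each file and calling mlock(2), which pins the pages so the kernel cannot evict them. A rough ctypes sketch of the idea, Linux-only; `lock_file` is my own name, and unprivileged processes are capped by RLIMIT_MEMLOCK, so this can fail with ENOMEM:)

```python
# Pin a file's pages in RAM with mlock(2), roughly what locking a file
# in cache means. Linux-only sketch; limited by RLIMIT_MEMLOCK.
import ctypes
import ctypes.util
import mmap
import os

libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)

def lock_file(path):
    """mmap `path` and mlock it; keep the returned objects alive to hold the lock."""
    size = os.path.getsize(path)
    with open(path, "rb") as f:
        mm = mmap.mmap(f.fileno(), size, access=mmap.ACCESS_COPY)
    buf = (ctypes.c_char * size).from_buffer(mm)
    rc = libc.mlock(ctypes.c_void_p(ctypes.addressof(buf)),
                    ctypes.c_size_t(size))
    if rc != 0:
        err = ctypes.get_errno()
        del buf
        mm.close()
        raise OSError(err, os.strerror(err))
    return mm, buf  # pages stay locked while the mapping exists
```

A daemonized locker is essentially this plus a loop that walks a directory and then sleeps forever, holding the mappings open.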
EDIT2:
As noted in the comments, there is now a git repository available.