So with free output, what we generally care about for application memory usage is the amount of free memory on the -/+ buffers/cache line. What about database applications such as Oracle: is it important to have a good amount of cached and buffer memory available for the database to run well with all of its I/O?
If that makes any sense, how do you figure out just how much?
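(For reference, this is roughly the arithmetic I mean, as a minimal Python sketch that reads the standard Linux /proc/meminfo fields; the script itself is just mine for illustration, not something free provides.)

```python
#!/usr/bin/env python3
"""Minimal sketch of the "-/+ buffers/cache" arithmetic, assuming the
Linux /proc/meminfo layout (values are reported in kB)."""

def meminfo() -> dict:
    values = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, rest = line.split(":", 1)
            values[key] = int(rest.strip().split()[0])  # value in kB
    return values

m = meminfo()
used = m["MemTotal"] - m["MemFree"]
# What free's "-/+ buffers/cache" line counts as actually available to
# applications (newer kernels also expose MemAvailable for this).
free_for_apps = m["MemFree"] + m["Buffers"] + m["Cached"]
print(f"used minus buffers/cache: {(used - m['Buffers'] - m['Cached']) // 1024} MB")
print(f"free plus buffers/cache:  {free_for_apps // 1024} MB")
```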
Answer
Well, it's kinda simple. The short answer is "mess with it and load test until you find peak performance".
More detail:
Most database engines fall into one of two overly-broad categories:
- A database that takes over a raw disk device and does raw I/O
- A database that creates files ("disk files" or some other scheme) in the operating system's FS
Type #1 doesn't care about the OS buffer cache. It wants to glom up all the RAM it can for its own cache, and would prefer that the OS get the hell out of its way (these almost always get run on big, fat dedicated systems).
Off the top of my head, Oracle and Sybase can both be configured this way, but I'm sure others can be too.
Type #2 includes Oracle and Sybase (with different configurations) as well as the two open-source juggernauts (MySQL & Postgres). These systems do care about the OS buffer cache, but how much they care is debatable and depends on the underlying storage engine & the efficiency of the OS buffer cache.
In most cases there are two layers of caching here (the DB engine has a cache & the OS has its buffer cache), and you tune both caches up & down until you find the mix that gives you the best performance.
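To make that concrete, here's a rough sketch of what that tuning loop can look like for Postgres, assuming a local instance managed with pg_ctl and a pgbench database that has already been initialized; the data directory, database name, client count, and candidate sizes are all placeholder values I picked for illustration, not recommendations.

```python
#!/usr/bin/env python3
"""Sketch of the "tune and load test" loop for a type #2 database.

Assumptions (mine, not from the original answer): a local PostgreSQL
instance managed with pg_ctl, a database named "pgbench" already
initialized with pgbench -i, and the PGDATA path below.
"""
import re
import subprocess

PGDATA = "/var/lib/postgresql/data"   # assumed data directory
CANDIDATES = ["256MB", "1GB", "4GB"]  # shared_buffers sizes to try

def run_benchmark(shared_buffers: str) -> float:
    # Restart the server with the candidate DB-side cache size; whatever
    # RAM is left over goes to the OS buffer cache.
    subprocess.run(
        ["pg_ctl", "restart", "-w", "-D", PGDATA,
         "-o", f"-c shared_buffers={shared_buffers}"],
        check=True,
    )
    # Run a fixed-length load test and pull the throughput number out.
    out = subprocess.run(
        ["pgbench", "-c", "8", "-T", "60", "pgbench"],
        check=True, capture_output=True, text=True,
    ).stdout
    match = re.search(r"tps = ([0-9.]+)", out)
    return float(match.group(1)) if match else 0.0

if __name__ == "__main__":
    results = {size: run_benchmark(size) for size in CANDIDATES}
    for size, tps in sorted(results.items(), key=lambda kv: -kv[1]):
        print(f"shared_buffers={size}: {tps:.1f} tps")
```

The same loop works for any type #2 engine; just swap in its own cache knob and your own benchmark.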
There are some more extensive notes on type #2 in the Postgres wiki (look for shared_buffers and effective_cache_size). Those notes are Postgres-specific, but the concepts are generally applicable to other DB engines that use the filesystem to hold their data.
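If you just want to see where a running Postgres instance currently sits on those two knobs, something like this works (assuming psycopg2 is installed and a local connection string like the one below; both are my assumptions, not anything from the wiki page):

```python
#!/usr/bin/env python3
"""Sketch: read the two Postgres settings mentioned above from a live server."""
import psycopg2

conn = psycopg2.connect("dbname=postgres")  # assumed local connection
with conn, conn.cursor() as cur:
    cur.execute(
        "SELECT name, setting, unit FROM pg_settings "
        "WHERE name IN ('shared_buffers', 'effective_cache_size')"
    )
    for name, setting, unit in cur.fetchall():
        # shared_buffers is the DB-side cache; effective_cache_size is only
        # a planner hint about how much OS buffer cache to expect.
        print(f"{name} = {setting} {unit or ''}")
conn.close()
```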
It still always boils down to the short answer I gave at the beginning though.