I've started looking at switching storage charging models. Historically - it's been purely per-gig, in a big shared pool.
But I'm looking at extending the model into tiered storage offerings - mostly because we're considering an investment in SSD, which we've never really been able to justify under the previous charging model. (We have got a little already, as a 'controller cache upgrade'.)
My initial start point is to:
- Take a 'storage chunk'. Probably a RAID group, but maybe 'a shelf' or 'a controller'.
- Sum up the usable gigs.
- Sum up the theoretical IOPS of the spindles (random reads, plus writes adjusted for the RAID write penalty).
- Sum up theoretical MB/sec throughput of the spindles.
Divide IOPS and throughput by usable TB, to set a 'performance allocation' for each TB 'bought'. Then compare a sample SATA configuration, a sample FC/SAS configuration and SSD.
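To make that concrete, here's a rough sketch of the arithmetic I have in mind. Every figure in it (spindle counts, per-spindle IOPS and MB/sec, usable capacity, RAID levels, read/write mix) is an illustrative placeholder rather than real kit:

```python
# Rough sketch of the per-TB 'performance allocation' arithmetic.
# All figures below (spindle counts, per-spindle IOPS/MB/s, usable TB)
# are illustrative placeholders, not real configurations.

def raid_write_penalty(raid_level):
    """Back-end disk operations per host write for common RAID levels."""
    return {"raid10": 2, "raid5": 4, "raid6": 6}[raid_level]

def allocation_per_tb(spindles, iops_per_spindle, mbps_per_spindle,
                      usable_tb, raid_level, read_fraction=0.7):
    """Return (IOPS per TB, MB/s per TB) for a storage chunk.

    Host IOPS are derated by the RAID write penalty on the write
    fraction of the workload; throughput is treated as a simple sum.
    """
    raw_iops = spindles * iops_per_spindle
    penalty = raid_write_penalty(raid_level)
    # Effective host IOPS: reads cost 1 disk op, writes cost 'penalty' ops.
    host_iops = raw_iops / (read_fraction + (1 - read_fraction) * penalty)
    throughput = spindles * mbps_per_spindle
    return host_iops / usable_tb, throughput / usable_tb

# Illustrative chunks: (spindles, IOPS/spindle, MB/s/spindle, usable TB, RAID)
samples = {
    "SATA":   (14,   80,  70, 24.0, "raid6"),
    "FC/SAS": (14,  180, 100,  7.0, "raid5"),
    "SSD":    ( 8, 5000, 250,  1.6, "raid5"),
}

for name, cfg in samples.items():
    iops_tb, mbps_tb = allocation_per_tb(*cfg)
    print(f"{name:7s}: {iops_tb:8.0f} IOPS/TB, {mbps_tb:6.1f} MB/s per TB")
```

The absolute numbers matter less than the ratios between the tiers - that's what would drive the per-TB price differential.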
Now, this is a simplification, I know - and rather a worst-case scenario, in a world of big fat caches and assorted other bottlenecks.
I've got a big pile of 'average usage' performance stats, and know what my real-world cache hit ratios look like - but am now a bit stuck as to how to 'scale' this. As an example - my NetApp Filers are giving me 20-25% read cache miss rates, and do something entirely different with write cache and WAFL that makes that element hard to compare. (But I think assuming a high write cache hit rate is not unreasonable, which allows me to disregard the write penalty and burst write latency.)
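As a first cut at the 'scaling', this is the sort of thing I'm toying with - assuming writes are fully absorbed by write cache, so only read cache misses generate back-end I/O (the function and figures are illustrative):

```python
# One way to scale theoretical spindle IOPS by observed cache behaviour.
# Assumption (as above): writes are absorbed by the write cache / WAFL,
# so only read cache misses turn into back-end disk I/O.

def cache_adjusted_iops(disk_iops, read_miss_rate, read_fraction=1.0):
    """Host-visible IOPS supportable by a chunk, given its back-end
    disk IOPS budget and the observed read cache miss rate.

    Only read_fraction * read_miss_rate of host I/O is assumed to
    reach disk; the rest is served (or absorbed) by cache.
    """
    disk_io_per_host_io = read_fraction * read_miss_rate
    return disk_iops / disk_io_per_host_io

# Example: a chunk with ~1,100 back-end read IOPS and a 25% miss rate
# could in theory support ~4,400 host read IOPS before the spindles
# become the bottleneck (ignoring burst latency, controller limits, etc.).
print(cache_adjusted_iops(1100, 0.25))
```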
So this is my question - what approach would you suggest for putting together, say, three tiers of 'storage offering' (Archive, 'Standard' and 'High Performance')? How would you factor in the expected returns of caching and consolidation benefits?
Regarding automated storage tiering - that's an option, because it's effectively what we're doing already with big controller caches. But there's still a need to differentiate 'cheap' and 'fast' storage, and I need an approach for that too.