Wednesday, August 8, 2018

storage - ZFS memory requirements in a "big files" DAS scenario



I have some old server hardware that I want to build a FreeNAS data server with, but it only has 8 GB of memory and I can't really expand on that.




I plan on putting six 4 TB drives in there with double parity (RAIDZ2), yielding about 14 TB of usable storage. That's almost double the capacity the "1 GB of RAM per 1 TB of storage" rule of thumb would allow for a system with 8 GB of memory.
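For reference, the "about 14" figure falls out of simple arithmetic: with double parity, two of the six drives hold parity, and the remaining raw capacity shrinks once marketing terabytes are converted to the binary TiB that tools report. A rough sketch (ignoring ZFS's own metadata and padding overhead, which reduces this a bit further):

```python
# Rough usable-capacity estimate for a 6-drive RAIDZ2 (double parity) vdev.
# Illustrative arithmetic only; real ZFS overhead reduces this further.
drives = 6
parity = 2
drive_tb = 4                      # marketing terabytes (10^12 bytes)

data_drives = drives - parity     # 4 drives carry data
raw_bytes = data_drives * drive_tb * 10**12
usable_tib = raw_bytes / 2**40    # binary TiB, as most tools report

print(f"{usable_tib:.1f} TiB")    # about 14.6 TiB before ZFS overhead
```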



However, the server will only be accessed by a single system, and the usage pattern will be highly sequential data streams, nothing that can really benefit from extensive caching. No multiple clients, no small random access, nothing running in jails. Just a plain "huge files" server.



Would 8 GB of RAM be able to cut it, or do I need to shell out an extra $1,000 to buy a new system?


Answer



8 GB of RAM is fine.



I'd urge you to consider an alternative to FreeNAS, since it's not the best or most reliable ZFS implementation. But sure, the amount of RAM you have is okay.
Be sure not to enable deduplication. Compression is fine, though.
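Concretely, on a FreeBSD-based system those settings might look like the following. This is a sketch, not a verified FreeNAS recipe; the pool name `tank` is a placeholder, and the ARC cap is optional:

```shell
# Assumed pool name "tank" -- substitute your own.

# Keep deduplication off (it is off by default, but verify):
zfs set dedup=off tank
zfs get dedup tank

# Enable lightweight compression; lz4 is cheap on CPU and
# effectively a no-op for incompressible data:
zfs set compression=lz4 tank

# Optionally cap the ARC so the cache never starves the rest of
# an 8 GB system; add this line to /boot/loader.conf and reboot:
# vfs.zfs.arc_max="6442450944"   # 6 GiB
```

Deduplication is the one feature that genuinely blows past an 8 GB budget, because its dedup table has to live in RAM to perform acceptably; compression has no comparable memory cost.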


