Sunday, August 7, 2016

Understanding NVMe storage and hardware requirements



I'm a bit confused about the recent developments in PCIe-based storage, particularly as they relate to the NVMe specification and its hardware compatibility.



While I've worked extensively with SSDs in disk form factor and some higher-end PCIe devices like Fusion-io, I don't fully understand the basics of NVMe and am seeking clarification on what type of server hardware supports it.



For instance, ad copy like this from Supermicro is confusing.




...high performance CPU PCI-E Gen3 direct connect to NVMe devices.





I'm dealing with a Linux-based software-defined storage solution and wanted to use spare Fusion-io devices, which rely on a proprietary driver (presenting /dev/fioX device names to the OS).



When I asked for help from the vendor, the response was:




The "fioX" device naming is made obsolete by the new NVMe device
interface. It means us purchasing obsolete adapters to add support
that nobody else has asked for.





This seems a bit harsh. I didn't think Fusion-io adapters were obsolete.



The scarce information I find online seems to hint that NVMe is only supported on the absolutely newest generations of server hardware (Intel E5-2600 v3 CPUs and PCIe 3.0 chipsets?). But I can't verify this.



Is this true?



What's the adoption rate? Is this something that engineers are accounting for in their design decisions, or are we talking about a "standard" that's not fully formed?




If NVMe is something that only applies to the newest systems on the market, is it reasonable to suggest (to the vendor) that my install base of older systems can't be NVMe-compatible, so it's worth adding the support I requested?


Answer



I needed to test this for myself...



I purchased four Intel 750 PCIe NVMe SSDs to install in HP ProLiant DL380p Gen8 servers. The servers do not have the current-generation Intel E5-2600 v3 series CPUs, but rather the E5-2600 v2 CPUs.



The takeaway:



NVMe is an interface specification. Under Linux, the devices are enumerated as /dev/nvmeXnY, e.g. /dev/nvme0n1 and /dev/nvme1n1.
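For reference, here is a quick way to confirm that Linux sees the devices. This is just a sketch; the exact device names and PCI descriptions will vary by system:

# Confirm the kernel sees the NVMe controllers at the PCI level
lspci | grep -i "non-volatile memory"

# The in-box nvme driver exposes one block device per namespace
ls -l /dev/nvme*

# NVMe namespaces show up as ordinary block devices
lsblk -o NAME,SIZE,MODEL /dev/nvme0n1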




The devices I used are PCIe 3.0 x4 add-in cards. The Gen8 ProLiant servers have two PCIe 3.0 slots on the default riser cage. These NVMe PCIe cards will also work in slower slots (narrower links or PCIe 2.0), but throughput will be limited by the bus at that point.
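If you're unsure whether a given slot is holding a card back, the negotiated PCIe link can be checked from Linux. A rough sketch (the PCI address below is a placeholder, not from my servers; substitute the address reported by lspci):

# Find the PCI address of the NVMe controller first
lspci | grep -i "non-volatile memory"

# Compare the card's capability with what the slot actually negotiated.
# "0a:00.0" is a placeholder address -- use your own.
sudo lspci -vv -s 0a:00.0 | grep -E "LnkCap|LnkSta"
# LnkCap shows what the card supports (e.g. Speed 8GT/s, Width x4);
# LnkSta shows the speed and width negotiated in that slot.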



So for my use case, NVMe support is largely driven by the OS and driver, and it is definitely compatible with my slightly older server hardware.

