




🚀 Unlock blazing storage speeds and dominate your workflow with ASUS Hyper M.2 X16!
The ASUS Hyper M.2 X16 PCIe 4.0 expansion card lets professionals install up to four NVMe M.2 SSDs on a single card, delivering combined bandwidth of up to 256 Gbps. Featuring a PCIe 4.0 x16 interface and a server-grade PCB, it ensures maximum bandwidth and stability, while its integrated heatsink and blower fan prevent thermal throttling under sustained load. Compatible with AMD 3rd Gen Ryzen Threadripper (sTRX40) and Ryzen (AM4) platforms as well as Intel VROC RAID configurations, this card is a versatile, high-efficiency solution for next-gen storage demands.
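For context, the 256 Gbps headline figure is simply the raw PCIe 4.0 x16 signalling rate. A quick back-of-envelope check (a sketch of the arithmetic, not something from the listing):

```python
# Back-of-envelope check of the "up to 256 Gbps" claim.
# PCIe 4.0 runs at 16 GT/s per lane and uses 128b/130b line encoding.
GT_PER_LANE = 16          # gigatransfers/s per PCIe 4.0 lane
LANES = 16                # a full x16 slot
ENCODING = 128 / 130      # 128b/130b line-code efficiency

raw_gbps = GT_PER_LANE * LANES     # 256 Gbps raw signalling rate
usable_gbps = raw_gbps * ENCODING  # ~252 Gbps after encoding overhead
usable_gBps = usable_gbps / 8      # ~31.5 GB/s shared by four x4 drives

print(raw_gbps, round(usable_gbps, 1), round(usable_gBps, 1))
```

So the marketing number is the raw link rate; after encoding overhead, roughly 31.5 GB/s is shared across the four x4 drive slots.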


| Specification | Value |
| --- | --- |
| ASIN | B084HMHGSP |
| Are Batteries Included | No |
| Best Sellers Rank | 84,438 in Computers & Accessories; 874 in Motherboards |
| Brand | ASUS |
| Colour | BLACK |
| Customer Reviews | 4.2 out of 5 stars (719 ratings) |
| Date First Available | 31 Jan. 2020 |
| Graphics Card Interface | PCI Express |
| Graphics Coprocessor | AMD 3rd Ryzen sTRX40 |
| Graphics RAM Type | VRAM |
| Guaranteed software updates until | unknown |
| Item Weight | 885 g |
| Item model number | HYPER M.2 X16 GEN 4 Card |
| Manufacturer | Asus |
| Operating System | Windows, macOS, Linux |
| Processor Socket | AM4 |
| Product Dimensions | 27 x 12.19 x 1.52 cm; 884.51 g |
| Series | HYPER M.2 X16 GEN 4 CARD |
| Wattage | 14 watts |
C**T
Check your Mobo before use
Not all motherboards will support this, so make sure yours supports PCIe bifurcation (and Gen 4 for full speed). You will also need to reconfigure your PCIe slot in the BIOS to x4/x4/x4/x4; if you leave it at the default x16, this won't work.
D**E
Noisy card, but delivers on most of the manufacturer's promises
Works as designed. In an MSI TRX40 Creator motherboard that already has an MSI PCIe card in one x16 slot and a GPU in the other, I was only able to divide the lanes for this third card into two x4 links, giving me three onboard NVMe drives, four more in the MSI card, and two (out of four slots) in the ASUS expansion card. The ASUS card is noisy, but it runs fine. I used AMD RAIDXpert2 to manage both cards.
S**U
On-time delivery of the product
C**N
Absolutely rock solid integration with the TRX40 chipset on an ASUS motherboard that exposes the PCIe bifurcation option (PCIe RAID mode) in the BIOS. Disassembly and reassembly was a breeze, the correct number of screws was included, each slot has a full-size thermal pad ready to go, and the fan seems to work wonders even with four SN850X drives running in RAID 0.

It's setting up the AMD RAID drivers properly that is the wild card, and it can be a complete buzzkill if you do one step wrong. You have to navigate (for AMD processors) to the support page specific to your CHIPSET, not the processor, and download whatever RAID software package they have there for your AM4 or TRX40 board. You have to enable both SATA and NVMe RAID modes in the BIOS before you can even install the RAID package you just grabbed, and once that is on properly, you will see some unrecognized storage controllers in Device Manager. From there you right-click each unrecognized item, point it toward the folder with the drivers that came with that package (or that you grabbed separately), and manually update each of them one by one. Then go back into the BIOS, delete the legacy arrays housing your single NVMe drives, and create whatever RAID array you want; if all is well, it should show up as one single drive in Windows that you can then format as a Simple Volume and manage through AMD RAIDXpert. At each step you might need a few restarts or a full shutdown/power-off for the changes to fully take.

Once you clear this hurdle, though, it never reverts and is insanely easy to manage as long as you don't flip back to non-RAID mode in the BIOS. I highly recommend sticking with the 256K allocation size recommended in the BIOS when creating the array and the Windows default value when formatting it in Windows; any other tweaks yielded lesser or spottier performance.
As other reviewers have stated, you must make sure you have a free x8 or x16 slot (for two or four SSDs respectively), that you can split the x16 into four siloed x4 lanes in the BIOS, and that whatever slot you're using isn't sharing lanes with another slot or onboard NVMe socket on your mobo. Threadripper platforms are worry-free in this department because of the sheer number of PCIe lanes they have; as long as you're not running two 5090s in parallel while maxing out every RAM and NVMe slot with some SATA drives thrown in, this should not be a concern.

Even I didn't expect quite the eye-watering results I got with four of the fastest 1TB Gen4 drives on the market running in a striped RAID 0 setup. The screenshots speak for themselves: that's basically brushing right up against the theoretical limits of Gen4 NVMe 1.4 drives and what it should look like when those drives are striped and running free of bottlenecks. For $50 (Used - Like New; came in perfect condition) plus less than $100 each for the drives, this is an absolute no-brainer in bang-for-buck terms, and it will give you performance exceeding that of current Gen5 SSDs by another 8 GB/sec. For reference, this is a hair shy of the 2400 MT/s base, non-XMP clock speed of my 3600 MHz RAM. This is madness. If you do video editing, host off your main rig, or are looking to trim any other possible system bottlenecks to max out a current-gen graphics card, what are you doing still reading??
G**E
I almost didn’t buy this product due to negative ratings, and that would have been a mistake. After use and re-reading, it is clear the negative reviews on this product simply don’t understand the technical limitations of the environment, to wit: each NVMe drive requires x4 PCIe lanes, and many motherboards have a single x16 slot (which furthermore requires firmware support for 4x4x4x4 bifurcation). Simply check the support, and know that to use all four slots on this card you will likely need to move your graphics card to an x4/x8 slot and/or update the BIOS and/or make configuration changes. These options might not be called “bifurcation” and may be labelled “PCIe RAID”; these firmware and hardware inconsistencies are not ASUS’ fault. Just because your PCIe slot looks like a full-sized x16 slot does not mean this product will work. No, this is not an active RAID controller.

That said, for those with the technical aptitude to understand the limitations and requirements, this product is fantastic. There are power-filtering capacitors and other passive components populated on the board to protect our expensive PCIe 4-era NVMe drives. There is a huge solid block of machined aluminium and the correct riser rubbers/thermal adhesives/risers/screws to mount four single- or double-sided NVMe drives. The fan isn’t super useful in a system with above-average cooling, and you can simply turn it off. My NVMe drives went from circa 90 degrees (on the motherboard with no heatsinks, in the random-access R/W work I use them for) to barely above ambient. The maximum theoretical throughput of x4 PCIe 4.0 approaches 8 GB/sec, and I see random-access R/W speeds approaching 16 GB/sec across 4 drives. I don’t know if sequential R/W hits full x16 speed, but with the 4x 980 Pros I use I see ever-so-slightly more than double the speed of 2x 980 Pros, so it definitely scales in a linear fashion IF YOU USE x16 IN 4x4x4x4 BIFURCATION!
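The linear-scaling observation above is easy to sanity-check. A minimal sketch (hypothetical helper, assuming each drive gets a dedicated Gen 4 x4 link and ignoring controller and filesystem overhead):

```python
# Theoretical ceiling for striped (RAID 0) NVMe drives on a bifurcated card.
# Assumes dedicated x4 PCIe 4.0 links per drive (4x4x4x4 bifurcation).
LANE_GBPS = 16 * (128 / 130) / 8   # ~1.97 GB/s usable per PCIe 4.0 lane

def raid0_ceiling(drives: int, lanes_per_drive: int = 4) -> float:
    """Theoretical striped throughput in GB/s; real drives land below this."""
    return drives * lanes_per_drive * LANE_GBPS

print(round(raid0_ceiling(2), 1))  # two drives: ~15.8 GB/s ceiling
print(round(raid0_ceiling(4), 1))  # four drives: ~31.5 GB/s ceiling
```

Doubling the drive count doubles the ceiling, which matches the reviewer's "slightly more than double" result going from two to four 980 Pros.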
T**G
The expansion card is well made. I added four more NVMe drives. Now I'm trying to work out how to configure it as RAID 10.
T**F
So far, after a few months, I can say this card works great with my Gigabyte X570 Aorus Pro WiFi motherboard and two PCIe Gen 4 NVMe SSDs. No problems at all so far. That said, I have a couple of complaints.

1. I feel it's a little overpriced, as it's essentially a mostly empty PCB with a fan and heatsink and no active PCIe bridge/switch, so it relies on your motherboard and CPU to properly support PCIe bifurcation (splitting a single PCIe x16 slot into multiple separate smaller links). It's a bit of a crapshoot what any given motherboard will allow. On mine, for instance, IIRC it groups the two main PCIe slots such that bifurcation is limited to 16/0, 8/8, 8/4/4, or 4/4/4/4. That means if you use the second slot in any way, you cut your primary slot down to at most x8 (and potentially x4), so your GPU will only get an x8 link at best. Not the card's fault (it's more of a CPU limitation), but something you need to keep in mind. In practice, to use all four slots on this card I'd have to set my BIOS/UEFI to 4/4/4/4 mode. If this card had an active bridge/switch, I could assign 8 PCIe lanes to it and it could dynamically allocate the bandwidth between all four NVMe slots; as it is, I'm stuck with "just" two usable slots because I want to keep x8 PCIe lanes for my GPU.

2. I'm not super happy about receiving an obvious return unit sold as brand new, but there was nothing physically wrong with it and I didn't want to wait for a return, so I just kept it.
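The "usable slots" arithmetic in this review generalizes: since each of the card's four M.2 slots is hard-wired to four lanes, the number of working slots is just the lanes routed to the card divided by four. A tiny illustrative sketch (hypothetical function, not ASUS tooling):

```python
# How many of the card's four M.2 slots work for a given lane allocation.
# Each slot needs a dedicated x4 link; with no onboard PCIe switch,
# unassigned lanes cannot be shared between slots.
def usable_slots(lanes_to_card: int) -> int:
    """Slots that receive a full x4 link, capped at the four physical slots."""
    return min(lanes_to_card // 4, 4)

print(usable_slots(16))  # 4x4x4x4 bifurcation: all four slots
print(usable_slots(8))   # x8 to the card (GPU keeps x8): two slots
print(usable_slots(4))   # x4 only: one slot
```

This is exactly why the reviewer ends up with two usable slots after reserving x8 for the GPU.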