On December 15, 2025, Microsoft announced that Windows Server 2025 would finally adopt the NVMe standard natively in its storage architecture. However, NVMe storage has been a popular storage format in servers, enterprise workstations, and consumer PCs for many years, with OS support built in since Windows Server 2012 R2 and Windows 8.1. With this in mind, an announcement about “native” NVMe support might not seem important or even newsworthy, but we promise there’s more to this than meets the eye.

What Does “Native NVMe” Even Mean?
In earlier versions of the Windows consumer and server edition storage stacks, commands used to read and write data, regardless of the underlying hardware’s protocol, were always translated into SCSI commands. The Small Computer System Interface (SCSI) standard dates back to the early 1980s and was designed to connect peripherals and storage drives to computers (Storage Networking Industry Association, n.d.). It is the foundation of several modern storage protocols used for various workloads, including network-connected protocols such as iSCSI (Internet Small Computer System Interface) and FCP (Fibre Channel Protocol), and local storage interfaces such as SAS (Serial Attached SCSI) and UASP (USB Attached SCSI Protocol).
The Old Way
By converting different protocols into SCSI commands, Microsoft unified storage commands at higher levels of the operating system. However, it sacrificed many of the scalability and performance improvements of modern storage architectures. The old path for I/O operations went like this:
- Read and write operations occur in the upper storage stack at the filesystem level.
- Commands are passed down to the Disk.sys driver.
- Disk.sys translates generic storage commands into SCSI commands.
- Storport receives the SCSI commands and sends them to the appropriate miniport driver (e.g., StorAHCI.sys for SATA drives).
- The corresponding miniport driver communicates directly with the storage device, translating the commands once more into the appropriate storage command format.
Other SCSI conventions, such as LUNs (Logical Unit Numbers) used to identify partitions of data on a storage device, were carried forward into the Windows storage stack, even though newer concepts like NVMe namespaces have existed for quite some time (Arms, Worley, & Kaur, n.d.).
The New Standard
Microsoft’s newest storage architecture for Windows Server 2025 enables new features in Storport and replaces Disk.sys with NVMeDisk.sys, providing a scalable, future-ready, high-performance framework.
- Read and write operations occur in the upper storage stack at the filesystem level.
- Commands are passed directly from NVMeDisk.sys to the new StorMQ code inside Storport.
- StorMQ generates the appropriate NVMe (or other storage type) commands for each read and write operation and sends them directly to the hardware itself.
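As a loose illustration of the difference between the two paths, here is a toy Python model. The layer names mirror the lists above, but the command strings and translation counters are invented for this sketch and do not reflect actual Windows internals.

```python
# Toy model of the two Windows I/O dispatch paths described above.
# Purely illustrative: the "commands" are strings, and the counter just
# tracks how many translation layers a request passes through.

def legacy_path(request: str) -> tuple[str, int]:
    """Old stack: generic command -> SCSI -> device-native (two translations)."""
    translations = 0
    scsi_cmd = f"SCSI({request})"      # Disk.sys: generic -> SCSI
    translations += 1
    native_cmd = f"NVMe({scsi_cmd})"   # miniport: SCSI -> device-native
    translations += 1
    return native_cmd, translations

def native_path(request: str) -> tuple[str, int]:
    """New stack: NVMeDisk.sys hands off to StorMQ, which emits NVMe
    commands directly into the hardware queues (one translation)."""
    native_cmd = f"NVMe({request})"
    return native_cmd, 1

print(legacy_path("READ blk=4K"))   # two translation steps
print(native_path("READ blk=4K"))   # one translation step
```

The point of the sketch is simply that the legacy path pays for an extra SCSI round of translation on every I/O, which the new path skips.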

(Image from Scott Lee’s SNIA Developer Conference presentation on September 16, 2025)
This new standard for drive operations on Windows Server 2025 eliminates a layer of translation and fully integrates with the storage command queues on NVMe, RAID, and HBA devices. Streamlining the Windows storage stack also offers additional benefits, such as reduced CPU utilization from eliminating unnecessary storage command translation and improved use of logical processors. The new architecture adopts other NVMe specifications, such as NVMe namespaces and plug-and-play support, and it allows vendor- or device-type-specific storage miniport drivers to be created and “plugged in” to Windows for better compatibility and performance with new classes of storage devices (Lee, SNIA SDC 2025 – Storage Multi-Queue on Windows, 2025).
Ready to Test?
In his presentation at the SNIA Developer Conference on September 16, 2025, Scott Lee revealed that Microsoft was already working closely with vendors to develop new drivers for devices such as RAID cards and HBAs. This suggests that the improvements to StorMQ are coming soon, or may already be enabled, on many storage devices. The feature was announced as generally available last December, but the new storage stack is enabled only as an opt-in, requiring the addition of a registry key. You can read the steps for enabling it in Microsoft’s native NVMe announcement article.
Warning: Incorrectly modifying the registry can cause serious problems, so be sure to test this on a non-critical server first. Several users who enabled this feature reported issues with NVMe drives that have deduplication enabled. Although an official fix from Microsoft is coming soon, you proceed at your own risk!
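For readers unfamiliar with scripted registry edits, the general shape of the opt-in looks like the sketch below. The actual key path and value name are documented only in Microsoft’s announcement article, so both appear here as placeholders — do not run this as-is.

```bat
:: HYPOTHETICAL SKETCH ONLY: substitute the key path and value name from
:: Microsoft's native NVMe announcement article before running.
:: Run from an elevated prompt, and only on a non-critical test server.
reg add "HKLM\SYSTEM\CurrentControlSet\<PathFromMicrosoftArticle>" /v <ValueNameFromMicrosoftArticle> /t REG_DWORD /d 1 /f

:: Storage stack changes generally require a reboot to take effect.
shutdown /r /t 0
```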
Testing Native NVMe on Windows Server 2025
Our testing platform for evaluating native NVMe on Windows Server 2025 (OS Build 26100.32370) featured a dual-socket SP5 server equipped with two 128-core AMD EPYC 9754 CPUs. Alongside the multi-core processors was an equally impressive 768 GB of DDR5 memory running at 4800 MT/s.
Note: According to Microsoft’s Yash Shekar, an interim improvement unrelated to native NVMe has already been released for Windows Server 2025, which may have provided additional uplift for the non-native storage stack, reducing the potential delta between results.

To evaluate the potential of the new storage stack, we used sixteen 30.72 TB Solidigm P5316 NVMe SSDs with PCIe 4.0 in a JBOD configuration. An important note is that the Solidigm P5316 has an indirection unit size of 64 kilobytes, which means that write results for smaller sizes (such as 4K tests) are often worse than expected. Considering this larger indirection unit, we ran FIO benchmarks with read and write tests for random 4K, random and sequential 64K, and sequential 128K to compare overall speed across different block sizes. We also monitored CPU utilization during testing to evaluate Microsoft’s claims of higher efficiency.
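For reference, a fio job comparable to our random 4K read test might look like the fragment below. The queue depth, job count, and target drive shown here are illustrative assumptions; the article does not list the exact job files used.

```ini
; Illustrative fio job for a random 4K read test on Windows.
; iodepth, numjobs, and filename are assumptions, not our exact settings.
[global]
; native Windows asynchronous I/O engine
ioengine=windowsaio
; bypass the OS cache so we measure the device, not RAM
direct=1
runtime=60
time_based=1
group_reporting=1

[rand-read-4k]
rw=randread
bs=4k
iodepth=32
numjobs=16
; raw access to a single NVMe SSD
filename=\\.\PhysicalDrive1
```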
Highlights
- Massively increased 4K and 64K random read bandwidth and IOPS
- Lower 4K and 64K random read latency
- Significant decreases in CPU utilization for sequential reads and writes across various block sizes
| Read | Random 4K (Non-Native) | Random 4K (Native) | Random 64K (Non-Native) | Random 64K (Native) | Sequential 64K (Non-Native) | Sequential 64K (Native) | Sequential 128K (Non-Native) | Sequential 128K (Native) |
|---|---|---|---|---|---|---|---|---|
| Bandwidth (GiB/s) | 6.1 | 10.058 | 74.291 | 91.165 | 35.596 | 35.623 | 86.791 | 92.562 |
| IOPS | 1,598,959 | 2,636,516 | 1,217,176 | 1,493,637 | 583,192 | 583,638 | 710,978 | 758,252 |
| Average Latency (ms) | 0.169 | 0.104 | 0.239 | 0.207 | 0.809 | 0.812 | 0.613 | 0.608 |
| Total CPU Utilization (%) | 72.67 | 74.22 | 68.44 | 65.11 | 44.89 | 37.11 | 61.56 | 49.56 |
| Write | Random 4K (Non-Native) | Random 4K (Native) | Random 64K (Non-Native) | Random 64K (Native) | Sequential 64K (Non-Native) | Sequential 64K (Native) | Sequential 128K (Non-Native) | Sequential 128K (Native) |
|---|---|---|---|---|---|---|---|---|
| Bandwidth (GiB/s) | 1.803 | 1.756 | 7.654 | 7.655 | 44.67 | 50.087 | 50.477 | 50.079 |
| IOPS | 472,725 | 460,383 | 125,391 | 125,406 | 731,859 | 820,603 | 413,495 | 410,232 |
| Average Latency (ms) | 0.992 | 1.028 | 3.814 | 3.816 | 0.399 | 0.558 | 1.022 | 1.149 |
| Total CPU Utilization (%) | 26.00 | 20.67 | 12.22 | 9.33 | 70.44 | 57.78 | 58.44 | 47.33 |
Analysis of Results
Beginning with the random 4K and 64K read benchmarks, we observed significantly higher read speeds: an almost 4 GiB/s difference between the native and non-native storage stacks on the random 4K read test, and nearly a 16.9 GiB/s uplift on the random 64K read. We also saw a decent increase in sequential 128K read operations, with our tests showing about a 5.8 GiB/s increase in bandwidth.
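The read-bandwidth deltas quoted above can be recomputed directly from the read table:

```python
# Recomputing the read-bandwidth uplifts from the table (GiB/s).
read_bw = {                      # (non-native, native) pairs from our FIO results
    "random 4K":       (6.1,    10.058),
    "random 64K":      (74.291, 91.165),
    "sequential 128K": (86.791, 92.562),
}
for test, (old, new) in read_bw.items():
    print(f"{test}: +{new - old:.3f} GiB/s")
```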


Interestingly, we did not see any significant increases in our random or sequential write bandwidth tests, with the only notable difference being an approximate 5.4 GiB/s increase in 64K sequential writes. Most of our results were within 100 MiB/s of one another, suggesting that the new storage stack’s performance is at least on par with the old one in cases where it did not improve.

Since throughput is often correlated with latency, we also observed large drops in average random read latency for both the 4K and 64K tests. We saw a 38.46% decrease in average random 4K read latency, from 0.169 milliseconds (non-native) to 0.104 milliseconds (native). Random 64K read tests had a smaller decrease, at around 13.39%. Latency did not change drastically in sequential read operations, but random write and sequential write operations showed increases across the board, despite similar or higher throughput.
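The latency percentages above follow from the average-latency rows of the read table:

```python
# Verifying the random-read latency drops (milliseconds, from the table).
lat = {                          # (non-native, native) average latency pairs
    "random 4K read":  (0.169, 0.104),
    "random 64K read": (0.239, 0.207),
}
for test, (old, new) in lat.items():
    pct = (old - new) / old * 100
    print(f"{test}: {pct:.2f}% lower latency")
```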


In addition to the increase in random read speeds, another interesting trend revealed by our FIO tests was a substantial decrease in total CPU utilization for 64K and 128K sequential read and write operations. Sequential write tests showed the most dramatic differences, with a drop of 12.66 percentage points in CPU utilization for 64K and a nearly matching 11.11-point drop for 128K. The sequential 128K read test also saw a fitting 12-point utilization decrease, but only a 7.78-point decrease for sequential 64K read. One factor to consider is that with a fast enough CPU, there may be scenarios where both stacks can reach a storage device’s full potential; in that case, throughput might not increase, but CPU utilization would still decrease.
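These utilization drops are simple differences between the non-native and native CPU columns of the two tables:

```python
# Recomputing the CPU-utilization drops as percentage-point differences
# between the non-native and native columns of the results tables.
cpu = {                          # (non-native %, native %) pairs
    "sequential 64K write":  (70.44, 57.78),
    "sequential 128K write": (58.44, 47.33),
    "sequential 128K read":  (61.56, 49.56),
    "sequential 64K read":   (44.89, 37.11),
}
for test, (old, new) in cpu.items():
    print(f"{test}: -{old - new:.2f} points")
```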


Takeaways
While many of our results were within run-to-run variance after enabling the new storage stack, we were able to corroborate many of Microsoft’s claims, including higher read bandwidth with lower latency and reduced CPU utilization across the board. Since this is quite a radical change to the decades-old Windows Server storage stack, Microsoft will first enable native NVMe by default in Windows Server vNext. Fortunately, the feature can be enabled on Windows Server 2025 with a quick registry edit or a group policy, allowing brave server administrators to take advantage of the new stack as early as today (after acknowledging the risks of deploying it).
We look forward to seeing native NVMe enabled by default on the Windows Server platform, and we hope to see buy-in from NVMe SSD, RAID card, and HBA manufacturers, who should be able to take Microsoft’s improvements to the next level!
References
Arms, J., Worley, D., & Kaur, L. (n.d.). NVMe Namespaces. Retrieved December 30, 2025, from NVM Express: https://nvmexpress.org/resource/nvme-namespaces/
Lee, S. (2025, September 15). SNIA SDC 2025 – Storage Multi-Queue on Windows. San Tomas, CA, United States of America: Storage Networking Industry Association. Retrieved December 29, 2025, from https://www.youtube.com/watch?v=dR-DWrmCba0&t
Lee, S. (2025, September 16). Storage Multi-Queue on Windows: A New Stack for High Performance Storage Hardware. Retrieved December 29, 2025, from SNIA Developer Conference: https://www.snia.org/sites/default/files/2025-10/SNIA-SDC25-Lee-Storage-Multi-Queue-On-Windows.pdf
Shekar, Y. (2025, December 15). Announcing Native NVMe in Windows Server 2025: Ushering in a New Era of Storage Performance. (Microsoft) Retrieved December 29, 2025, from Windows Server News and Best Practices: https://techcommunity.microsoft.com/blog/windowsservernewsandbestpractices/announcing-native-nvme-in-windows-server-2025-ushering-in-a-new-era-of-storage-p/4477353
Storage Networking Industry Association. (n.d.). What is SCSI? Retrieved December 30, 2025, from Storage Networking Industry Association: https://www.snia.org/education/what-is-scsi
