For the past several years my desktop has also had a disk dedicated to maintaining a Windows install. I’d prefer to use the space in my PC case for disks for Linux. Since I already run a home NAS, and my Windows usage is infrequent, I wondered if I could offload the Windows install to my NAS instead. This led me down the course of netbooting Windows 11 and writing up these notes on how to do a simplified “modern” version.
↫ Terin Stock
The setup Terin Stock ended up with is rather ingenious, to be honest. They had to create not just an environment in which netbooting over iPXE and iSCSI was possible, but also a customised Windows PE ISO that included the drivers needed to install Windows onto an iSCSI-connected remote drive in the first place, because those drivers aren’t included in the Windows installation ISO. This isn’t exactly a standard setup, of course, so there were a few roadblocks to clear before getting there.
They now have Windows 11 booting from a drive in their NAS, and it seems this doesn’t affect gaming – the reason they did this in the first place is an online game that hard-requires Windows – at all. Installing the game through Steam took a bit longer, sure, but regular gameplay seems unaffected, with no saturation on the network or the disk. You’d think this would be far too slow for gaming, but apparently at least some games handle it just fine. My uneducated guess is that more demanding games that rely on a ton of disk activity to stream in textures and other assets will have a much harder time.
In any event, this intrigues me, and I’m kind of curious to try and set this up myself, if only for the memes. It looks like fun.
If he’s running iSCSI he’s probably running a 10Gbps network, so as long as he has fast SSDs on the other end it shouldn’t bottleneck.
1 Gbps network, switched across VLANs, to a ZFS cluster set up for durability, not performance, on WD Red 5400 rpm drives. For the type of game this setup was built for, that seems to work fine. I imagine immersive open-world games would struggle, or suffer from a lot of asset pop-in.
In 2025, iPXE is completely unnecessary for netbooting a setup like this. You can just point a UEFI device path at the iSCSI LUN and boot it directly: the BootXXXX variables can contain an iSCSI device path as long as the NIC’s DXE driver is linked against the UEFI iSCSI boot library (they usually are), and the firmware also needs to provide the iSCSI Initiator Name protocol. In the UEFI setup menu you configure the NIC’s IP addressing (DHCP or static, plus a VLAN ID if needed), and then configure the iSCSI initiator. Once the OS installer boots, most modern operating systems will read the initiator IQN and the iSCSI target information from the firmware and use their own iSCSI initiators to connect. I’ve done this countless times for virtualization clusters that boot from SAN where Fibre Channel wasn’t available.
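To make that last point concrete – the installed OS picking up the iSCSI parameters the firmware booted with – here is a minimal sketch, in Python, of how a Linux system can read them back. The firmware publishes the initiator IQN and target details in the iSCSI Boot Firmware Table (iBFT), which the kernel exposes under /sys/firmware/ibft; the attribute names below follow the kernel’s documented iBFT sysfs layout, but verify them against your own firmware, and note that Windows obtains the same information through its own iSCSI initiator rather than this sysfs path.

```python
#!/usr/bin/env python3
"""Sketch: read firmware-provided iSCSI boot parameters from the iBFT.

Assumes a Linux system whose firmware performed an iSCSI boot and whose
kernel exposes the iSCSI Boot Firmware Table under /sys/firmware/ibft.
"""
from pathlib import Path

IBFT = Path("/sys/firmware/ibft")


def read(entry: Path) -> str:
    """Return a sysfs attribute as a stripped string, or '' if absent."""
    try:
        return entry.read_text().strip()
    except (FileNotFoundError, PermissionError):
        return ""


def main() -> None:
    if not IBFT.is_dir():
        print("No iBFT found - the firmware did not perform an iSCSI boot.")
        return

    # The initiator name (IQN) the firmware logged in with.
    print("initiator:", read(IBFT / "initiator" / "initiator-name"))

    # One directory per configured target (target0, target1, ...).
    for target in sorted(IBFT.glob("target*")):
        print(
            f"{target.name}: "
            f"iqn={read(target / 'target-name')} "
            f"portal={read(target / 'ip-addr')}:{read(target / 'port')} "
            f"lun={read(target / 'lun')}"
        )


if __name__ == "__main__":
    main()
```

On a machine netbooted this way, the script should print the same IQN and portal that were entered in the UEFI setup menu, which is essentially the lookup an installer performs before bringing up its own initiator.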
iPXE on UEFI hardware is simply unnecessary for such a scenario.
I’ve built a K8s cluster with Raspberry Pis and the UEFI firmware for the RPi, just like this.
I’ve also made them boot VMware ESXi for ARM just like this.