I've got a problem.
My homelab is running out of PCIe slots.
It's my own fault, really...
I made the mistake of purchasing used Sandy Bridge-, Ivy Bridge-, and Haswell-era Intel Xeons to make up a hyperconverged cluster.
These chips (the E3s, i3s, i5s, and i7s) don't have a lot of PCIe slots and lanes, and the ones they do have are locked up in... inconvenient form factors. At least for me.
It seems the common configuration for these mATX motherboards is an x8 (in a physical x16 slot) of PCIe 3.0, an x8 (in an x8 slot) of PCIe 3.0, and a chipset-provided x4 (in various slot sizes) of PCIe 2.0.
When you shove a 40Gb NIC (really 56Gb, thanks Mellanox), a SAS controller, and an NVMe drive into a micro-ATX motherboard, you run into a problem.
There are 4 PCIe 3.0 lanes that go unused.
So what do you do? Buy a PLX chip like the rest of humanity, or upgrade your perfectly functional hardware?
I chose to dive into the Intel processor datasheets, mod the BIOS, and grab my trusty multimeter.
In subsequent parts, I'll go deeper into the individual platforms I have on hand.