Using the Mellanox MT26448 10Gig card in a gaming PC setup.

_
_MrTamir_
Junior Member
46
05-29-2019, 02:29 AM
#1
Hello everyone! I just bought two Mellanox MT26448 cards from eBay and am trying to connect them through a MikroTik CRS305-1G-4S+IN switch, linking two computers: an Unraid server and my main gaming PC. The Unraid setup worked perfectly: it recognized the card and it functions properly. The problem is the gaming PC, which has a Ryzen 7 3700X, an RTX 2080 Super, 8 GB x 4 HyperX RAM, and an MSI X570 Gaming Plus motherboard. I suspect the PCIe lane configuration isn't giving the card what it needs. Right now the GPU is in slot 1 and the Mellanox card is in slot 3 (slot 2 is blocked by the GPU). I also have an NVMe drive in the first M.2 slot. I tried adjusting the lane assignments but couldn't find any BIOS setting beyond an x8 + x8 split for the first PCIe slot. Is this even possible, or should I consider upgrading my motherboard? Thanks!
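Since Unraid is Linux, one way to double-check what link the card actually negotiated there, and to compare against the gaming PC later, is to read the PCIe attributes that sysfs exposes. A minimal sketch, assuming a placeholder device address:

```python
# Minimal sketch: read the negotiated PCIe link for a device on a Linux
# host such as the Unraid server. The address below is a placeholder;
# find the real one with `lspci | grep -i mellanox`.
from pathlib import Path

dev = Path("/sys/bus/pci/devices/0000:04:00.0")  # hypothetical address

for attr in ("current_link_speed", "current_link_width",
             "max_link_speed", "max_link_width"):
    node = dev / attr
    if node.exists():
        print(f"{attr}: {node.read_text().strip()}")
```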

S
siyo1999
Junior Member
9
05-29-2019, 04:02 AM
#2
Because of the M.2 drive, you should try running the NIC off the chipset; the GPU and the M.2 have already consumed the CPU's available PCIe lanes. A 10Gb link doesn't need the card's full x8 Gen2 bandwidth; an x4 would suffice, but on your motherboard that leaves only one x1 slot, which isn't ideal. If both x16 slots were assigned to the CPU, the first should drop to x8, though that doesn't seem to be happening here. Try swapping the two cards to check whether the one in your desktop is faulty. I've tested a lot of MNPA19-XTR cards and they can be hit or miss these days. Also consider the Mellanox ConnectX-3 CX311A: it's newer, handles heat better, offers improved features, and is more compact.
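To put numbers on "an x4 would suffice", here is a rough sketch using the commonly quoted per-lane figures (250/500/985 MB/s for Gen1/2/3 after encoding overhead); exact rates vary slightly by platform:

```python
# Rough check: which PCIe links clear 10 Gbit/s? Per-lane usable rates
# after encoding overhead (approximate, in MB/s).
MB_PER_LANE = {"gen1": 250, "gen2": 500, "gen3": 985}

def link_gbits(gen: str, lanes: int) -> float:
    """Approximate usable link bandwidth in Gbit/s."""
    return MB_PER_LANE[gen] * lanes * 8 / 1000

for gen, lanes in [("gen2", 8), ("gen2", 4), ("gen2", 1), ("gen3", 1)]:
    gbits = link_gbits(gen, lanes)
    verdict = "enough for 10GbE" if gbits >= 10 else "too slow"
    print(f"PCIe {gen} x{lanes}: ~{gbits:.1f} Gbit/s ({verdict})")
```

PCIe 2.0 x4 works out to about 16 Gbit/s, comfortably above 10GbE line rate, while a single Gen2 lane at about 4 Gbit/s is not.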

K
KidzBeEz
Member
242
05-30-2019, 04:57 PM
#3

C
CozyTea
Member
106
06-05-2019, 03:50 AM
#4
The device might only support PCIe 2.0, in which case it needs a minimum of four lanes to achieve 10 Gbps: a PCIe 2.0 lane carries up to 500 MB/s, whereas a PCIe 3.0 lane reaches around 985 MB/s. If the card supported PCIe 3.0, you could connect it through a PCIe 3.0 x1 slot using a riser cable and still achieve roughly 900-950 MB/s. You could also fit a small M.2-to-PCIe x4 adapter board into the second M.2 port, then use a PCIe x4 riser cable to the network card. Here are some references, with a quick sanity check on the numbers after the links:
- M.2 to PCIe x4 adapter board: https://www.ebay.com/itm/275011561139
- PCIe x4 riser cable: https://www.ebay.com/itm/143436591417
- M.2 board with cable and PCIe x4 slot: https://www.ebay.com/itm/255069063716
- Riser cable for orientation flexibility: https://www.ebay.com/itm/114987331304
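As that sanity check on the 900-950 MB/s estimate, a back-of-envelope sketch; the 24-byte TLP header cost and the 256-byte maximum payload size are assumed typical values, not measured ones:

```python
# Back-of-envelope: usable throughput of a PCIe 3.0 x1 link. The TLP
# header/framing cost (24 bytes) and max payload (256 bytes) are assumed
# typical values, not measurements.
GT_PER_S = 8e9            # Gen3 raw line rate per lane, transfers/s
ENCODING = 128 / 130      # 128b/130b encoding efficiency
raw_mb_s = GT_PER_S * ENCODING / 8 / 1e6   # ~985 MB/s per lane

payload, overhead = 256, 24
efficiency = payload / (payload + overhead)  # ~0.91

print(f"raw per lane      : {raw_mb_s:.0f} MB/s")
print(f"after TLP overhead: {raw_mb_s * efficiency:.0f} MB/s")  # ~900 MB/s
```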

L
LpLuks
Member
141
06-06-2019, 03:08 PM
#5
That worked with the ASUS card on one of my PCs with an ITX board, but it isn't a sure thing for every setup. Some x8 cards will only come up at x8 even if x4 should technically be enough for them. The adapters tend to be fiddly and expensive, and losing an M.2 slot is a big downside in a gaming rig. An Aquantia-based card like the ASUS one might not cost much more.

G
gamerbros4ever
57
06-08-2019, 06:30 AM
#6
Thanks for the updates! At first I thought the card wasn't seated firmly, since no bracket was included. Now the computer recognizes the card, but I still get a blue screen when the OS starts. It seems to be tied to the Mellanox card, which is at least a sign of progress. The PC works fine without the card installed.

L
Loroi
Member
137
06-08-2019, 10:21 AM
#7
The card appears to work properly from an Ubuntu live USB, which suggests the issue lies with Windows. I had installed some drivers earlier and thought they were the cause, but even after uninstalling everything, the blue screen still occurs every time Windows loads. Searching around, all I found was advice about removing drivers and disabling "e cores," which doesn't really cover my case. Any advice would be appreciated.

3
3Edge
Senior Member
718
06-09-2019, 03:44 PM
#8
I can't relate, unfortunately. From the time I started playing with server adapters, every desktop I owned supported 40+ PCIe lanes, and I'm talking 2012-2013, so I never dealt with this restriction. But I know what you mean: Intel crippled the PCIe lane count on mainstream platforms for years, and AMD looks to have reduced theirs as well with AM4. As far as I know, Windows ships a basic driver for the MNPA19-XTR, so you shouldn't have to install anything to use it as a 10Gig NIC. If you can see it working under Ubuntu, I don't believe this is a PCIe lane issue. When the PC first starts, can you tell us what firmware version the card is running? It's shown on the 10Gig NIC's splash screen. If it isn't the latest, that could be part of the problem given the card's age and the system you've put it in. I have some MNPA19-XTR's with full-height brackets that were DOA; I was refunded and allowed to keep the dead cards, so they're only good for parts. Assuming you have the same NIC, I'll send you a full-height bracket if you'll cover shipping.
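If catching the splash screen at boot is awkward, the firmware version can also be read from a running Linux system (a live USB or the Unraid box) with the open-source mstflint tool; a minimal sketch, assuming mstflint is installed and using a placeholder PCI address:

```python
# Minimal sketch: query a ConnectX-2's firmware version with the
# open-source mstflint tool (assumes it is installed, e.g. via
# `apt install mstflint`, and run as root). PCI address is a placeholder.
import subprocess

out = subprocess.run(
    ["mstflint", "-d", "04:00.0", "query"],  # hypothetical address
    capture_output=True, text=True, check=True,
).stdout

for line in out.splitlines():
    if "FW Version" in line:
        print(line.strip())
```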

B
BLACKPANTHER34
Junior Member
9
06-09-2019, 08:51 PM
#9
PCIe connectivity used to be handled by a dedicated northbridge chip on the motherboard, so the lane count wasn't tied to the CPU. Boosting bandwidth without hurting latency requires extra components like a PCIe mux chip; that works, but it adds cost and some latency that most users don't need. Only a few boards include extra PCIe lanes this way, so it usually makes more sense to buy a board with 10Gbps networking built in. It bothers me that many boards still offer only 2.5Gb instead of at least 5Gb. An onboard SFP+ port would be ideal, but for now the choice is mostly between older server hardware on PCIe x8 and modern consumer cards that only need x4. The market is moving toward PCIe 5, yet many boards still ship PCIe 3 with just one or two slots. The push for more features keeps producing compromises, like shoving M.2 slots onto awkward expansion cards; people would avoid those compromises if they disliked them the way they dislike the wrong RGB lighting.

B
Barney_420
Member
72
06-10-2019, 04:08 AM
#10
Enthusiasm for SFP+ on motherboards remains limited; it usually shows up only in custom form factors from companies like Dell, HPE, and IBM/Lenovo. Most products lean on 10G RJ45 instead, which I strongly dislike: the controllers run hot, the heatsinks are often inadequate, they commonly get mounted where airflow is poor, and the switches cost more per port. Overall it's a frustrating situation. I'd rather use a PCIe slot than compromise on speed just to have onboard RJ-45.
