Find the optimal route between devices?

Pages (2): Previous 1 2
WreckCD
Member
190
02-28-2017, 10:29 PM
#11
So what's the issue? It sounds like the newer generation is causing the problems. I own a system with 2x E5-2680 v2 and 256GB RAM running Proxmox VE (free, open source, Debian-based). It's quite stable and handles FreeNAS, databases, media servers, and Home Assistant. I definitely recommend it.
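For anyone who wants to try it, spinning up a guest from the Proxmox host shell is just a few `qm` commands. A minimal sketch (the VM ID, storage name, and ISO path below are placeholders, not from this post):

```shell
# Create VM 100: 4 GiB RAM, 4 cores, virtio NIC on the default bridge
qm create 100 --name test-vm --memory 4096 --cores 4 \
    --net0 virtio,bridge=vmbr0 --ostype l26

# Attach a 32 GiB disk on the local-lvm storage and an installer ISO
qm set 100 --scsi0 local-lvm:32
qm set 100 --cdrom local:iso/installer.iso

# Start the guest
qm start 100
```

The web UI does the same thing; the CLI is just handy for scripting.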
HenriqueOL
Junior Member
33
03-20-2017, 02:46 AM
#12
At first, one of the 10Gbit ports appeared to be dead. To swap the board I had to pay for "Advanced Shipping" up front, and I ended up paying it twice; once they received the old board back, I got a refund. RAM turned out to be the real problem: mixing modules, even ones with matching stickers, caused compatibility issues, so every component had to match. The integrated X540 NIC also overheats, and there's no simple way to cool it once multiple cards are installed. Frustrating, but it's functional once it works. I won't buy Supermicro or NEMIX again. My current setup is two E5-2698 v3 servers with 0.5TB RAM running Windows Server, alongside a Proxmox VE VM server. I really enjoy it.
StephanKruger
Member
226
04-04-2017, 10:11 AM
#13
That's quite frustrating. Pushing 10Gbps is demanding, especially over fiber, and it sounds like the fiber side isn't coping well; with only 1GbE NICs you're really limited. Nice setup, though! Which GPU are you using? Are you asking about NVIDIA fixes for Code 43?
psyducky
Junior Member
33
04-04-2017, 03:56 PM
#14
Other servers and desktops here use fiber NICs and stay much cooler; those are AMD systems. On the GPU question: there is a workaround, but it's not guaranteed. NVIDIA's driver can detect virtual environments and throws Code 43 when it thinks it's running inside one; they'd rather push you to their Quadro or Tesla lines. Hiding the hypervisor from the Windows guest sometimes tricks the driver into thinking it's not in a VM, but results aren't consistent.
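The usual form of that trick is hiding KVM from the guest. With libvirt, the widely shared settings look like the fragment below; this is a sketch of that community workaround, not a guaranteed fix, and the vendor_id value is an arbitrary string:

```xml
<features>
  <hyperv>
    <!-- Report a non-KVM vendor ID via the Hyper-V enlightenments -->
    <vendor_id state='on' value='whatever'/>
  </hyperv>
  <kvm>
    <!-- Hide the KVM hypervisor signature from the guest -->
    <hidden state='on'/>
  </kvm>
</features>
```

On plain QEMU the rough equivalent is `-cpu host,kvm=off,hv_vendor_id=whatever`.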
Diabolo09
Junior Member
11
04-11-2017, 10:49 PM
#15
I tried a Quadro 600 card, but it didn't function properly. According to NVIDIA, you need a paid subscription to enable that feature. The AMD card worked immediately, while the NVIDIA GTX 750 Ti only worked inside a Linux virtual machine.
RaSiMkA
Junior Member
46
04-12-2017, 07:40 AM
#16
I ran an RTX 2080 inside QEMU without problems, though I expected NVIDIA to try to detect and block it.
wave3156
Junior Member
37
04-14-2017, 12:15 AM
#17
AMD seems indifferent, but NVIDIA doesn't want to undercut their server/workstation lineup. Why let people get by with a $1000 card when you can force them to upgrade to a $5000 model with just a few extras? I do wonder why Linus gets 1080 Ti and 2080 Ti cards running on UNRAID. It probably comes down to how the NVIDIA driver or hardware recognizes virtual environments: settings not all hypervisors support, so it sometimes works. Personally, I enjoy QEMU. I plan to write a guide for GPU passthrough with QEMU on Debian soon.
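Until that guide materializes, the rough shape of VFIO passthrough on a Debian host is well documented. A sketch, assuming an Intel board; the PCI address and the 10de:xxxx device IDs below are examples you'd replace with your own values from lspci:

```shell
# 1. Enable the IOMMU: add "intel_iommu=on iommu=pt" to
#    GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, then:
sudo update-grub

# 2. Find the GPU (and its HDMI audio function), note the [vendor:device] IDs
lspci -nn | grep -i nvidia

# 3. Bind those IDs to vfio-pci at boot instead of nouveau/nvidia
echo "options vfio-pci ids=10de:1b06,10de:10ef" | sudo tee /etc/modprobe.d/vfio.conf
echo "vfio-pci" | sudo tee -a /etc/modules
sudo update-initramfs -u

# 4. After a reboot, hand both functions of the card to the guest
qemu-system-x86_64 -enable-kvm -machine q35 -cpu host,kvm=off \
    -m 8G -device vfio-pci,host=01:00.0 -device vfio-pci,host=01:00.1
```

For an NVIDIA card, the kvm=off part is what ties back to the Code 43 discussion earlier in the thread.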