SuperMicro X9 fails to connect with Adaptec RAID 71605 device
I have two servers with identical configurations, and both exhibit the same behavior. Each unit includes:
- SuperMicro X9DRi-LN4F+ board
- Two Intel Xeon E5-2697 v2 processors
- 512 GB RAM (16x 32 GB Nemix LRDIMM @ 1600 MT/s)
- One Adaptec RAID 71605 connected via backplane
- One Samsung Evo 870 2.5" 1TB SSD
- Four Seagate Barracuda Compute 3.5" 4TB HDDs (7200RPM)
POST is naturally slow with this much memory, which is acceptable. I can reach the boot priority menu and enter the BIOS (version 3.4, the latest available). The BIOS confirms the CPUs and memory match the spec above, and the RAID card sits in a PCIe Gen3 x16 slot. Boot priority is set to favor UEFI on that device.
However, when I let the system boot fully, it hangs at a black screen indefinitely. The Adaptec Ctrl + A prompt never appears, even after waiting up to an hour before powering down.
If I insert a bootable USB with the Ubuntu 20.04 LTS installer, the servers recognize it and install the OS. The installer sees all connected drives, letting me format the SSD as EXT4 and set up mounts at /mnt/data0 through /mnt/data3. It downloads updates and finishes; I then remove the media and reboot.
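For reference, the mount layout the installer set up could be recreated by hand along these lines. This is only a sketch: the partition names below are assumptions, so check yours with `lsblk -f` first, and prefer `UUID=` entries from `blkid` in a real /etc/fstab:

```shell
# Generate /etc/fstab entries for the four data mounts the installer created
# (/mnt/data0 .. /mnt/data3). Device names are hypothetical placeholders --
# verify against `lsblk -f` before using anything like this.
i=0
for dev in /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1; do
    printf '%s /mnt/data%d ext4 defaults,nofail 0 2\n' "$dev" "$i"
    i=$((i + 1))
done
```

The `nofail` option keeps the server bootable even if one of the backplane drives goes missing, which seems prudent given the boot trouble here.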
After that, I proceed past the SuperMicro logo into a black screen with no activity.
At this stage, I'm open to using the RAID cards as HBA (pass-through) devices, since the installer already treated them that way. Ideally I'd still have access to the Adaptec configuration tool via Ctrl + A, but I can live without it.
How can I make these servers actually load the operating system? Any help would be appreciated! Thank you in advance.
Edited October 10, 2023 by doctorchg added context
Thanks for the replies. I tried the first suggestion, but it didn't work: I reseated the RAID card in each PCIe slot, checking the required lane count and generation against the six slots available on the motherboard, and confirmed in the BIOS that each slot's generation and lane count matched the manual. All slots except one theoretically satisfy the RAID card's requirements (Gen3 x8). No change in behavior. Updated October 10, 2023 by doctorchg
Removing the SSD from the backplane and connecting it directly to the motherboard with a SATA cable lets the server boot from the SSD while still using the HDDs on the backplane. It seems the BIOS or the RAID cards are not presenting these devices correctly as bootable options.
I wrote the maxView Storage Manager image to a USB drive using balenaEtcher, and downloaded the newest firmware for the Adaptec RAID 71605 onto a second USB stick. I booted the server from the maxView USB and ran the software. It reported the RAID controller as functional, though its firmware was about two years out of date. After flashing the latest version, I booted from the maxView stick again to confirm the update took. When I then booted without any USB drives, POST still didn't show a Ctrl + A prompt. More updates coming... Edited October 11, 2023 by doctorchg
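For what it's worth, once a Linux install is up, the running firmware level can also be checked with Adaptec's arcconf CLI instead of booting maxView. The controller number (`1`) and the sample output line below are assumptions for illustration, not output from my machines:

```shell
# On a live system the query would be something like:
#   arcconf GETCONFIG 1 AD | grep -i firmware
# Extracting the version from a line shaped like the (made-up) sample below:
sample='Firmware                                 : 7.5-0 (32118)'
version=$(printf '%s\n' "$sample" | sed 's/.*: *//')
echo "$version"   # prints 7.5-0 (32118)
```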
I booted the maxView Storage Manager again and used it to create two logical devices: one for the 1TB SSD and one RAID 10 array across four HDDs. Oddly, maxView categorizes three of the HDDs as SSDs, while the remaining four are labeled as HDDs. The only pattern I can find is that the three "SSD" HDDs share one model number (ST4000DM004-2CV1) and the four HDDs share another (ST4000DM004-2U91). After creating these volumes, I reinstalled Ubuntu 20.04 Server. Before finishing, I went back into the main system BIOS to adjust the boot sequence, and now the RAID controller's Ctrl + A configuration prompt appears! Progress made. More updates coming...
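As a quick sanity check on the new array: RAID 10 mirrors drive pairs and stripes across the mirrors, so usable capacity is half the raw total, i.e. four 4 TB disks should yield roughly 8 TB:

```shell
# RAID 10 capacity: N drives of size S give (N * S) / 2 usable.
drives=4
size_tb=4
raw=$((drives * size_tb))
usable=$((raw / 2))
echo "raw=${raw}TB usable=${usable}TB"   # prints raw=16TB usable=8TB
```

(Real usable space will be a bit less once the controller's metadata and the TB/TiB difference are accounted for.)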