How to choose a computer with AI abilities?
I’m starting to grasp the basics of programming AI. I’m interested in running local large language models. As a beginner, I’m unsure what to choose. The choices are:
1) Purchase an AMD CPU and graphics card and use AMD's software stack (ROCm) — like the recent news about DeepSeek running well on a Radeon 7900 XTX. Is this a viable path? Are there better AMD options available now? What about RAM and other considerations? There are also projects (such as ZLUDA) that let CUDA code run on AMD graphics cards.
2) Wait until March and try to get an NVIDIA "Project Digits" desktop with a GPU. I'll need to brush up on Ubuntu Linux to use it, but I'm working on that now. The expected price of around $3,000 seems reasonable for what would essentially be a high-performance AI desktop.
3) Are there any Intel-based PCs or laptops that can handle large language models? Intel appears to have had some performance and reliability issues, but they're releasing new hardware at CES. They also seem more focused on a Microsoft Copilot-style AI experience. Should I consider Intel?
I’m drawn to mentions like: “Microsoft releases DeepSeek’s AI model on Azure.” I’m not sure if I’m asking the right questions, but I want an AI-ready machine that stays relevant for at least a year.
Should I even be considering DeepSeek's models, since they might be less stable and harder to manage than an NVIDIA/CUDA setup? NVIDIA seems more user-friendly, though it needs better hardware.
Please help me understand. I’m not sure where to begin.
From what I understand:
1) More VRAM on the GPU is preferable — at least 16GB is recommended. CUDA with NVIDIA appears to be the leading option, although I haven't compared NVIDIA, AMD, and Intel GPUs in depth.
2) You'll want at least 64GB of system RAM (two 32GB modules), ideally with free slots to expand to 128GB.
3) As AI models continue to grow, a large PCIe Gen 4 SSD of at least 2TB will likely improve load times for bigger models.
4) A laptop doesn't seem like the best choice here.
5) The current issue with NVIDIA GPUs is limited stock of the new 5000 series and a shortage of the older 4000 series. I suspect Project Digits will face similar supply constraints, with scalpers likely to take advantage.
Whether you're training/developing LLMs or just using them as an end user, processing speed is worth paying for: when a job takes hours or even days on underpowered hardware, giving up is inevitable.
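As a rough sanity check on those VRAM numbers: a model's weight footprint is approximately parameter count × bytes per weight, plus overhead for the KV cache and activations. Here's a minimal back-of-the-envelope sketch — the 20% overhead factor is my own assumption, and real usage varies with context length and runtime:

```python
# Rough VRAM estimate for running a quantized LLM locally.
# Assumption: ~20% overhead for KV cache and activations; actual
# usage depends heavily on context length and the inference runtime.

def estimate_vram_gb(params_billions: float, bits_per_weight: float,
                     overhead: float = 1.2) -> float:
    """Approximate memory (GB) needed to load and run a model."""
    weight_gb = params_billions * bits_per_weight / 8  # bytes per parameter
    return weight_gb * overhead

# A 7B model at 4-bit quantization fits easily in a 16GB GPU:
print(round(estimate_vram_gb(7, 4), 1))    # ~4.2 GB
# A 70B model at 4-bit needs ~42 GB -- beyond any single consumer card:
print(round(estimate_vram_gb(70, 4), 1))
# The same 70B model unquantized (16-bit) needs ~168 GB:
print(round(estimate_vram_gb(70, 16), 1))
```

This is why the quantization level matters as much as the parameter count when deciding how much VRAM to buy.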
Considering the cost of Nvidia cards, what about the performance of a Mac Studio, or even a MacBook Pro, with 192GB of unified memory? If you reserve 64GB for the CPU and dedicate the rest to the GPU as VRAM, it won't match the speed of an Nvidia card, but it can still handle large models effectively.
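For what it's worth, macOS by default caps how much of the unified memory the GPU is allowed to wire. On Apple Silicon that cap can reportedly be raised with a sysctl — the key name `iogpu.wired_limit_mb` and its behavior are assumptions based on common local-LLM setup guides, and the setting does not persist across reboots:

```shell
# Assumption: recent macOS on Apple Silicon exposes iogpu.wired_limit_mb.
# Allow the GPU to wire up to 128 GB of a 192 GB machine (value is in MB),
# leaving ~64 GB for the CPU side. Resets to the default on reboot.
sudo sysctl iogpu.wired_limit_mb=131072
```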