Nvidia DGX Spark & DGX Station, redefining the future of AI Workstations? Part 1/3
Making sense of Nvidia's latest Workstation announcements at GTC 2025 and exploring their impact on the workstation landscape
Part 1: The background
A few weeks ago, at Nvidia GTC 2025, the company announced and showcased two new and innovative Workstation platforms: DGX Spark (formerly known as Project DIGITS) and DGX Station. This is my take on what these products mean and the potential impact they can have. So many of you asked me about this launch! Will they capture significant market segment share (MSS) in the Workstation Market? Will they cannibalize existing Workstations? And if they do, whose lunch would Nvidia eat? Let’s unpack all this!
What has been Nvidia’s role in Workstation until now?
From Pro-Viz Graphics to GPGPU to XPU
Before we talk about the new products, it’s important to establish a baseline of the role that Nvidia has historically played in the Workstation Market. I don’t want to go too far into the past, but let’s just say that over the last couple of decades, Nvidia has patiently and persistently built a presence in the Workstation market that should be a case study in business schools. Today, approximately 4 of every 5 Workstations have a discrete GPU in them (in many cases more than one), and the vast majority (as in 9 out of 10) are Nvidia-based.
Now, what role do these GPUs play in the system? They started out as graphics accelerators for 3D and video applications; then, around 2010, they became increasingly programmable, which made them great choices for parallel compute workloads beyond graphics, a trend known as General-Purpose GPU computing, or GPGPU.
By the 2010s, Nvidia’s software enablement efforts (and hardware evolution, of course) were mature enough for the GPU to start complementing the CPU in complex compute workloads like 3D rendering (think 3D animation, not real-time like video games). Independent Software Vendors (ISVs) rightfully saw the opportunity to put that powerful piece of silicon known as the GPU to work on actual compute (think export operations), beyond the traditional interactive graphics tasks in their applications. This effort is known as XPU, as a way of saying that you can compute on both CPU and GPU. The transition is still ongoing, but every year, every generation, more applications and more (parallel) features are capable of producing exactly the same result (or with negligible differences) on the CPU (the gold quality standard) and the GPU.
Applications like SideFX Houdini (sidefx.com), users’ favorite VFX tool for parametric 3D modeling and particle effects (smoke, water, fire, etc.), are a great example of the XPU transition over the last handful of versions. For those paying attention, if there is such a thing as “share of compute”, once a workload (a specific function within an application) is XPU, even when both CPU and GPU are collaborating and giving it their all, the majority of the compute happens on the GPU. It is not uncommon when rendering in SideFX Houdini (Karma XPU renderer) to see System Resource Utilization like the image below, where Optix refers to the Nvidia GPU and EmbreeCPU to the x86 CPU. Sure, that share of compute can be shifted with higher-end CPUs (more and newer cores), but the same argument can be made for higher-end GPUs and/or more GPUs in the system (in the case of Desktop Workstations).
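To make the XPU idea concrete, here is a minimal, hypothetical Python sketch (this is not Houdini or Karma code) that runs the same numeric kernel on the CPU with NumPy and on the GPU with CuPy, then checks that the two outputs agree within floating-point tolerance, which is the “same result or negligible differences” criterion described above. CuPy and a CUDA-capable GPU are assumed.

```python
# Sketch of the "XPU" principle: run the same computation on CPU and GPU
# and confirm the results match within a small tolerance. Illustrative only;
# assumes CuPy is installed and a CUDA GPU is present.
import numpy as np

def cpu_kernel(positions):
    # Toy numeric kernel: embarrassingly parallel, runs on the CPU via NumPy.
    return np.sqrt(np.sum(positions ** 2, axis=1)) * 0.5

def gpu_kernel(positions):
    import cupy as cp
    pos_gpu = cp.asarray(positions)           # copy data to GPU memory (VRAM)
    result = cp.sqrt(cp.sum(pos_gpu ** 2, axis=1)) * 0.5
    return cp.asnumpy(result)                 # copy result back to system RAM

if __name__ == "__main__":
    pts = np.random.rand(1_000_000, 3).astype(np.float32)
    cpu_out = cpu_kernel(pts)
    try:
        gpu_out = gpu_kernel(pts)
        # "Same result or negligible differences": compare within float32 tolerance.
        print("Match within tolerance:", np.allclose(cpu_out, gpu_out, atol=1e-5))
    except ImportError:
        print("CuPy not available; CPU-only path used.")
```

In a real XPU renderer the split is far more sophisticated, but the principle is the same: the parallel portion of the work lands on the GPU, and the results must remain interchangeable with the CPU path.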
That is all great news for Media & Entertainment workflows, but over the last 5-7 years Data Science and A.I. development have also needed very powerful systems. So in 2018 Nvidia launched the first DGX Station, a tower form factor designed to sit at your desk, not in a rack; more on this later.
Around 2020, Data Science and, later on, A.I. Development kept growing aggressively. By the time the world was blown away by OpenAI’s ChatGPT, developers had already been building A.I. for years. To satisfy their need for compute, the Workstation industry (CPU/GPU vendors and Workstation OEMs) built on the concept of the 2018 DGX Station and launched Workstations with the specific purpose of facilitating A.I. Development: meet the “Data Science Workstation”, later called the “A.I. Workstation”. This kind of system was (and still is) a recommended configuration of CPU, GPU and memory, plus a strong layer of Software, from the O.S. to optimized distributions of Python, libraries and frameworks that could deliver orders of magnitude better performance vs. “vanilla” (non-optimized) versions.
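To make that last point tangible, here is a small, hypothetical sanity check of the kind of thing those optimized stacks tune. The specific packages (NumPy’s BLAS backend, PyTorch as a stand-in GPU framework) are my own illustrative choices, not the contents of any particular vendor’s image.

```python
# Quick check of what an "optimized" Python stack typically looks like
# versus a vanilla one. Illustrative only; a vendor's A.I. Workstation
# image will differ in the details.
import numpy as np

# 1) Which BLAS/LAPACK is NumPy linked against? Optimized builds usually
#    report MKL or OpenBLAS rather than a generic reference BLAS.
np.show_config()

# 2) Is a GPU-enabled framework present, and can it actually see a GPU?
#    (PyTorch used here purely as an example.)
try:
    import torch
    print("CUDA available:", torch.cuda.is_available())
    if torch.cuda.is_available():
        print("GPU:", torch.cuda.get_device_name(0))
except ImportError:
    print("PyTorch not installed in this environment.")
```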
At the end of 2020, Nvidia updated the DGX Station to support A100 GPUs, and the Intel Xeons were replaced by an AMD EPYC CPU. With this refresh and the birth of the D.S./A.I. Workstation, this new class of system was ramping fast!
Everyone is a winner with this kind of solution. CPU vendors, especially Intel, added value with higher core-count CPUs and modern instruction sets like AVX-512 and AMX. AMD got a CPU win in the DGX Station A100. And Nvidia was the real winner, with a proven solution, regardless of the system vendor, to add not just a single high-end GPU but up to 4 of them in a single system. And of course, the end user, the data scientist / A.I. Developer, wins with a super powerful, local, low-latency system: a high upfront cost, but no cloud credits required for transmission, storage or compute, and probably a lower Total Cost of Ownership (TCO) in the long term.
Tons of system memory is necessary to clean and manipulate big Data Sets (RAM = 2-3X the data set size), and tons of GPU memory (VRAM) to then train the models. With one of these “A.I. Workstations”, the user can raise the ceiling on the level of complexity that can be achieved locally, especially if they have some funds for the project, because these systems can get pricey: a typical starting configuration runs >US$10K, and with multiple GPUs it can easily reach US$30K. Each top-end GPU goes for ~US$7K-8K; multiply by the number of GPUs and you can estimate how quickly the system price builds up. The now “gen minus one” Nvidia RTX Ada generation packed up to 48GB per card, so scaling GPU memory (VRAM) tops out at 192GB when using 4 cards (~US$25K just in GPUs on such a system). A worked example of this arithmetic follows below.
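Here is a small, hypothetical Python sketch that turns those rules of thumb into numbers. The ratios and prices come straight from the text and are rough estimates, not quotes.

```python
# Back-of-the-envelope sizing for an "A.I. Workstation", using the rough
# rules of thumb above: RAM ~= 2-3x dataset size, ~US$7K-8K per top-end GPU,
# 48GB of VRAM per RTX Ada-generation card. All numbers are illustrative.

def size_workstation(dataset_gb, num_gpus, vram_per_gpu_gb=48, gpu_price_usd=7000):
    ram_low, ram_high = 2 * dataset_gb, 3 * dataset_gb   # RAM = 2-3x dataset size
    total_vram = num_gpus * vram_per_gpu_gb              # VRAM scales per card
    gpu_cost = num_gpus * gpu_price_usd                  # GPUs dominate the bill
    return ram_low, ram_high, total_vram, gpu_cost

if __name__ == "__main__":
    ram_lo, ram_hi, vram, gpu_usd = size_workstation(dataset_gb=256, num_gpus=4)
    print(f"System RAM:  {ram_lo}-{ram_hi} GB")
    print(f"Total VRAM:  {vram} GB")        # 4 x 48GB = 192GB, as noted above
    print(f"GPU cost:    ~US${gpu_usd:,}")  # in the ~US$25K-30K range from the text
```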
There was also a remote (non-local) compute cousin from Nvidia, the DGX-1, introduced in April 2016 as a rack-mountable solution. Calling this system a Workstation would be a stretch; given its rack form factor and features, it would be more appropriate to call it a GPU-accelerated server. Regardless, its usage is the key point: at launch it was positioned as a GPU-accelerated rendering node; then A.I. ramped up and the DGX-1 pivoted to A.I. Development. For A.I. to flourish a few years later, I’d consider this kind of solution one of the seeds. Fun fact: “A.I.” was not mentioned once in the original DGX-1 data sheet.
Its specs were impressive even by today’s standards, and although there was not a drop of co-marketing, it was based on Intel Xeon processors in combination with the latest Nvidia GPUs.
Then the 2018 update was all about A.I., with the integration of the Nvidia Volta architecture. Great for remote A.I. development, but there are advantages to local solutions, and when you are Nvidia, why not split your hand and have products for both remote and local development? The Nvidia DGX-1 and DGX Station were the answer, until now…
Meet the 2025 Nvidia DGX <Workstation> products
For local development, as mentioned above, the top Workstation OEMs worked with Intel and Nvidia to create custom Data Science Workstations (later called “A.I. Workstations”), and such systems have been updated with newer generations of CPUs, GPUs and Software tools to make Developers more productive.
At Nvidia GTC, Jensen Huang announced…
Let’s continue the conversation
We can help you bring workstations to market (4P/GTM).
Would you like an in-depth review of your latest Workstations?
Do you want an independent voice on your next salesforce training or webinar?
I’m one click away…
References:
Nvidia DGX Station (2018): White Paper: https://images.nvidia.com/content/newsletters/email/pdf/DGX-Station-WP.pdf
Nvidia DGX Station (2018): Data Sheet: https://www.nvidia.com/content/dam/en-zz/Solutions/Data-Center/dgx-station/dgx-station-print-explorer-datasheet-letter-final-web.pdf
Nvidia DGX Station A100 (2020): Data Sheet: https://www.pny.com/en-eu/File%20Library/Professional/DATASHEET/DGX/DGX_Station_A100_Datasheet_PNY-WEB.pdf
Nvidia DGX Spark: https://www.nvidia.com/en-us/products/workstations/dgx-spark/
Dell Blog - Nvidia GTC coverage: https://www.dell.com/en-us/blog/pushing-boundaries-driving-ai-innovation-at-every-scale-with-dell-pro-max/