NVIDIA Announces PCIe A100 Accelerator: 250 Watt Ampere In A Standard Form Factor

With the launch of their Ampere architecture and new A100 accelerator barely a month behind them, NVIDIA this morning is announcing the PCIe version of their accelerator as part of the start of the now-virtual ISC Digital conference for high performance computing. The more straight-laced counterpart to NVIDIA’s flagship SXM4 version of the A100 accelerator, the PCIe version of the A100 is designed to offer A100 in a more traditional form factor for customers who need something that they can plug into standardized servers. Overall the PCIe A100 offers the same peak performance as the SXM4 A100; however, with a lower 250 Watt TDP, real-world performance won’t be quite as high.

The obligatory counterpart to NVIDIA’s SXM form factor accelerators, NVIDIA’s PCIe accelerators serve to flesh out the other side of NVIDIA’s accelerator lineup. While NVIDIA would gladly sell everyone SXM-based accelerators – which would include the pricey NVIDIA HGX carrier board – there are still numerous customers who need to be able to use GPU accelerators in standard, PCIe-based rackmount servers. And for smaller workloads, customers simply don’t need the kind of 4-way and higher scalability offered by SXM form factor accelerators. So with their PCIe cards, NVIDIA can serve the rest of the accelerator market that their SXM products can’t reach.

The PCIe A100, in turn, is a full-fledged A100, just in a different form factor and with a more appropriate TDP. In terms of peak performance, the PCIe A100 is just as fast as its SXM4 counterpart; NVIDIA this time isn’t shipping this as a cut-down configuration with lower clockspeeds or fewer functional blocks than the flagship SXM4 version. As a result the PCIe card brings everything A100 offers to the table, with the same heavy focus on tensor operations, including the new higher precision TF32 and FP64 formats, as well as even faster integer inference.
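
To give a sense of what tapping the new TF32 mode looks like in practice, here is a minimal sketch – not taken from NVIDIA’s materials, and assuming the CUDA 11 toolkit with cuBLAS 11 – in which an ordinary FP32 matrix multiply is routed through the tensor cores simply by opting the cuBLAS handle into TF32 math:

```cpp
// tf32_gemm.cu – minimal TF32 tensor core GEMM via cuBLAS (hypothetical example; CUDA 11+, Ampere GPU assumed)
// Build: nvcc tf32_gemm.cu -lcublas -o tf32_gemm
#include <cublas_v2.h>
#include <cuda_runtime.h>
#include <vector>
#include <cstdio>

int main() {
    const int N = 1024;
    std::vector<float> hA(N * N, 1.0f), hB(N * N, 1.0f), hC(N * N, 0.0f);

    float *dA, *dB, *dC;
    cudaMalloc(&dA, N * N * sizeof(float));
    cudaMalloc(&dB, N * N * sizeof(float));
    cudaMalloc(&dC, N * N * sizeof(float));
    cudaMemcpy(dA, hA.data(), N * N * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dB, hB.data(), N * N * sizeof(float), cudaMemcpyHostToDevice);

    cublasHandle_t handle;
    cublasCreate(&handle);
    // Opt this handle into TF32 tensor core math for FP32 GEMMs.
    cublasSetMathMode(handle, CUBLAS_TF32_TENSOR_OP_MATH);

    const float alpha = 1.0f, beta = 0.0f;
    // C = alpha * A * B + beta * C; inputs and outputs stay FP32, while the
    // multiply itself runs on the tensor cores using TF32 internally.
    cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, N, N, N,
                &alpha, dA, N, dB, N, &beta, dC, N);

    cudaMemcpy(hC.data(), dC, N * N * sizeof(float), cudaMemcpyDeviceToHost);
    printf("C[0] = %f\n", hC[0]);  // expect 1024.0 for all-ones inputs

    cublasDestroy(handle);
    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    return 0;
}
```

The inputs and outputs remain standard FP32 arrays; cuBLAS rounds the operands to TF32’s reduced mantissa on the fly, which is where the large throughput gain over classic FP32 FMAs comes from.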

NVIDIA Accelerator Specification Comparison

| | A100 (PCIe) | A100 (SXM4) | V100 (PCIe) | P100 (PCIe) |
|---|---|---|---|---|
| FP32 CUDA Cores | 6912 | 6912 | 5120 | 3584 |
| Boost Clock | 1.41GHz | 1.41GHz | 1.38GHz | 1.3GHz |
| Memory Clock | 2.4Gbps HBM2 | 2.4Gbps HBM2 | 1.75Gbps HBM2 | 1.4Gbps HBM2 |
| Memory Bus Width | 5120-bit | 5120-bit | 4096-bit | 4096-bit |
| Memory Bandwidth | 1.6TB/sec | 1.6TB/sec | 900GB/sec | 720GB/sec |
| VRAM | 40GB | 40GB | 16GB/32GB | 16GB |
| Single Precision | 19.5 TFLOPs | 19.5 TFLOPs | 14.1 TFLOPs | 9.3 TFLOPs |
| Double Precision | 9.7 TFLOPs (1/2 FP32 rate) | 9.7 TFLOPs (1/2 FP32 rate) | 7 TFLOPs (1/2 FP32 rate) | 4.7 TFLOPs (1/2 FP32 rate) |
| INT8 Tensor | 624 TOPs | 624 TOPs | N/A | N/A |
| FP16 Tensor | 312 TFLOPs | 312 TFLOPs | 112 TFLOPs | N/A |
| TF32 Tensor | 156 TFLOPs | 156 TFLOPs | N/A | N/A |
| Relative Performance (SXM Version) | 90% | 100% | N/A | N/A |
| Interconnect | NVLink 3, 6 Links? (300GB/sec?) | NVLink 3, 12 Links (600GB/sec) | NVLink 2, 4 Links (200GB/sec) | NVLink 1, 4 Links (160GB/sec) |
| GPU | GA100 (826mm²) | GA100 (826mm²) | GV100 (815mm²) | GP100 (610mm²) |
| Transistor Count | 54.2B | 54.2B | 21.1B | 15.3B |
| TDP | 250W | 400W | 250W | 300W |
| Manufacturing Process | TSMC 7N | TSMC 7N | TSMC 12nm FFN | TSMC 16nm FinFET |
| Interface | PCIe 4.0 | SXM4 | PCIe 3.0 | SXM |
| Architecture | Ampere | Ampere | Volta | Pascal |

But because the dual-slot add-in card form factor is designed for lower TDP products, offering less room for cooling and typically less access to power as well, the PCIe version of the A100 does have to ratchet down its TDP from 400W to 250W. That’s a sizable 38% reduction in power consumption, and as a result the PCIe A100 isn’t going to be able to match the sustained performance figures of its SXM4 counterpart – that’s the advantage of going with a form factor with higher power and cooling budgets. All told, the PCIe version of the A100 should deliver about 90% of the performance of the SXM4 version on single-GPU workloads, which, for such a big drop in TDP, is not a bad trade-off.
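
As a quick back-of-the-envelope check on that trade-off – assuming the 90% figure holds and both cards are running at their full TDP limits – the PCIe card actually comes out ahead on efficiency:

$$ 0.90 \times \frac{400\,\mathrm{W}}{250\,\mathrm{W}} \approx 1.44 $$

That is, roughly 44% better performance-per-watt than the SXM4 version, in exchange for lower peak throughput per GPU.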

And on this note, I should give NVIDIA credit where credit is due: unlike with the PCIe version of the V100 accelerator, NVIDIA is doing a much better job of documenting these performance differences. This time around the company is explicitly noting the 90% figure in its specification sheets and related marketing materials. So there should be a lot less confusion about how the PCIe version of the accelerator compares to the SXM version.

Other than the form factor and TDP changes, the only other notable deviation for the PCIe A100 from the SXM version is the number of NVLink-connected GPUs supported. For their PCIe card NVIDIA is once again using NVLink bridges connected across the top of A100 cards, allowing for two (and only two) cards to be linked together. NVIDIA’s product sheet doesn’t list the total bandwidth available, but as the PCIe V100 supported up to 100GB/sec in each direction using two links, the PCIe A100 and its 3 NVLink connectors should be able to do 150GB/sec, if not more.

Otherwise the PCIe A100 comes with the usual trimmings of the form factor. The card is entirely passively cooled, designed to be used in servers with powerful chassis fans. And though not pictured in NVIDIA’s official shots, there are sockets for PCIe power connectors. Meanwhile, with the reduced usage of NVLink in this version of the card, A100’s native PCIe 4.0 support will undoubtedly be of increased importance here, underscoring the advantage that an AMD Epyc + NVIDIA A100 pairing has right now, since AMD is the only x86 server vendor with PCIe 4.0 support.
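
For anyone deploying the cards, it’s straightforward to confirm what a given host has actually negotiated. The short host-side sketch below uses NVML, the same library behind nvidia-smi – it isn’t from NVIDIA’s announcement, and assumes a driver install that provides the NVML headers and library – to report the current PCIe generation, link width, and board power limit:

```cpp
// check_a100.cpp – query PCIe link and power limit via NVML (hypothetical example)
// Build: g++ check_a100.cpp -lnvidia-ml -o check_a100
#include <nvml.h>
#include <cstdio>

int main() {
    if (nvmlInit() != NVML_SUCCESS) {
        printf("Failed to initialize NVML\n");
        return 1;
    }

    nvmlDevice_t dev;
    nvmlDeviceGetHandleByIndex(0, &dev);  // first GPU in the system

    unsigned int gen = 0, width = 0, limitMw = 0;
    nvmlDeviceGetCurrPcieLinkGeneration(dev, &gen);    // 4 on a PCIe 4.0 (e.g. Epyc) host, 3 otherwise
    nvmlDeviceGetCurrPcieLinkWidth(dev, &width);       // typically x16
    nvmlDeviceGetPowerManagementLimit(dev, &limitMw);  // reported in milliwatts

    printf("PCIe link: Gen%u x%u\n", gen, width);
    printf("Board power limit: %u W\n", limitMw / 1000);  // ~250 W for the PCIe A100

    nvmlShutdown();
    return 0;
}
```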

Wrapping things up, while NVIDIA isn’t announcing specific pricing or availability information today, the new PCIe A100 cards should be shipping soon. The wider compatibility of the PCIe card has helped NVIDIA to line up over 50 server wins at this point, with 30 of those servers set to ship this summer.
