IBM has launched a storage system, designed with Nvidia, for artificial intelligence workloads and data tools such as TensorFlow, PyTorch and Spark.
The effort, called IBM Spectrum AI with Nvidia DGX, is a converged system that combines a software-defined file system, all-flash storage and Nvidia's DGX-1 GPU system.
IBM Spectrum AI is part of a broader trend among storage vendors to go all-flash and create architectures better suited to AI and machine learning workloads. Pure Storage and Nvidia have also paired up on AI-ready infrastructure, with the former targeting a data hub architecture, and NetApp, which now aims to be more of a data management player, has partnered with Lenovo on AI systems.
Big Blue is hoping to draft off Nvidia's popularity with open source frameworks and developers. Nvidia DGX is often a foundational platform for data science efforts delivered through the cloud or on-premises.
Key parts of the system include:
- DGX-1 Servers
- The NVIDIA DGX software stack, optimized to maximize GPU-accelerated training performance, including the RAPIDS framework.
- IBM Spectrum Scale v5, software-defined file storage architected specifically for AI workloads, with enhanced small-file, metadata and random I/O performance.
- NVMe all-flash storage, with 300TB in each 2U building block and up to 120GB/s of data throughput per rack.
- Integration with IBM Spectrum Discover for extensible data governance and metadata tagging across IBM Spectrum Scale and IBM Cloud Object Storage.
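To give a flavor of the training workloads the stack above is built for, here is a minimal PyTorch sketch that places a small model and a batch of data on a GPU when one is available (a DGX-1 bundles multiple data-center GPUs) and falls back to CPU otherwise. The model and tensor shapes are arbitrary placeholders, not anything specific to IBM's or Nvidia's software.

```python
import torch
import torch.nn as nn

# Use a GPU if one is present (as on a DGX-1); fall back to CPU so the
# sketch runs anywhere. Model and batch sizes here are illustrative only.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Linear(16, 4).to(device)        # toy model moved to the device
x = torch.randn(32, 16, device=device)     # a batch of 32 random samples
y = model(x)                               # forward pass runs on the device

print(y.shape)  # torch.Size([32, 4])
```

The storage layer's job in systems like Spectrum AI is to keep devices like this fed with data fast enough that the GPUs, rather than I/O, remain the bottleneck.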
IBM Spectrum AI with Nvidia DGX is available through IBM and Nvidia resellers and is built on IBM Spectrum Scale, a file system widely used in AI and high performance computing. IBM's storage unit also offers reference architectures for AI and machine learning workloads on flash storage, including combinations with AWS public cloud services as well as its own IBM Cloud Object Storage, Spectrum Discover and Spectrum Scale.