Google to IBM: How big tech giants are embracing Nvidia’s new hardware and software services


Nvidia has gone all in to push the boundaries of computing at the ongoing GTC conference in San Jose.

CEO Jensen Huang, donning a black leather jacket, addressed a packed crowd (the event looked more like a concert than a conference) in his keynote and announced the long-awaited GB200 Grace Blackwell Superchip, promising up to a 30x performance increase for large language model (LLM) inference workloads. He also shared notable developments across automotive, robotics, omniverse and healthcare, flooding the internet with all things Nvidia. 

However, GTC is never complete without industry partnerships. Nvidia shared how it is expanding its work with several industry giants, bringing its newly announced AI computing infrastructure, software and services to their tech stacks. Below is a rundown of the key partnerships.


AWS

Nvidia said AWS will offer its new Blackwell platform, featuring the GB200 NVL72 with 72 Blackwell GPUs and 36 Grace CPUs, on EC2 instances. This will enable customers to build and run real-time inference on multi-trillion-parameter LLMs faster, at massive scale and at a lower cost than with previous-generation Nvidia GPUs. The companies also announced they are bringing 20,736 GB200 superchips to Project Ceiba – an AI supercomputer built exclusively on AWS – and teaming up to integrate Amazon SageMaker with Nvidia NIM inference microservices.
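NIM microservices package models behind an OpenAI-compatible HTTP API served from a container. As a rough illustration of what querying a deployed NIM endpoint could look like, here is a minimal sketch; the endpoint URL and model name are placeholder assumptions for a local deployment, not details from the announcement.

```python
# Sketch: calling a self-hosted NIM inference microservice via its
# OpenAI-compatible chat-completions API. URL and model are assumptions.
import json
import urllib.request

NIM_URL = "http://localhost:8000/v1/chat/completions"  # assumed local deployment


def build_request(prompt: str, model: str = "meta/llama3-8b-instruct") -> dict:
    """Assemble an OpenAI-style chat-completion payload for a NIM endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }


def query_nim(prompt: str) -> str:
    """POST the payload to the NIM container and return the model's reply."""
    payload = json.dumps(build_request(prompt)).encode()
    req = urllib.request.Request(
        NIM_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Because the API mirrors OpenAI's chat-completions schema, existing client code can typically be pointed at a NIM container by swapping the base URL.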


Google Cloud

Like Amazon, Google also announced it is bringing Nvidia’s Grace Blackwell platform and NIM microservices to its cloud infrastructure. The company further said it is adding support for JAX, a Python-native framework for high-performance LLM training, on Nvidia H100 GPUs and making it easier to deploy the Nvidia NeMo framework across its platform via Google Kubernetes Engine (GKE) and Google Cloud HPC toolkit. 
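For readers unfamiliar with JAX, its core idea is composable function transformations such as `jit` and `grad` that compile numerical Python for whatever accelerator is available (GPU, TPU or CPU). A minimal illustrative sketch of a jit-compiled training step follows; the linear model and data are toy stand-ins, not anything from Google's integration.

```python
# Minimal JAX sketch: a jit-compiled gradient-descent step for a toy
# linear model. Illustrative only -- the same pattern scales to LLM training.
import jax
import jax.numpy as jnp


def loss(w, x, y):
    # Mean squared error of a linear prediction x @ w against targets y.
    return jnp.mean((x @ w - y) ** 2)


@jax.jit  # compile the update for the available accelerator
def update(w, x, y, lr=0.1):
    # jax.grad differentiates loss with respect to its first argument, w.
    return w - lr * jax.grad(loss)(w, x, y)


w = jnp.zeros(2)
x = jnp.array([[1.0, 0.0], [0.0, 1.0]])
y = jnp.array([2.0, 3.0])
for _ in range(100):
    w = update(w, x, y)
# w converges toward the targets [2.0, 3.0]
```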

Additionally, Vertex AI will now support Google Cloud A3 VMs powered by Nvidia H100 GPUs and G2 VMs powered by Nvidia L4 Tensor Core GPUs.


Microsoft

Microsoft also confirmed plans to add NIM microservices and Grace Blackwell to Azure. The superchip partnership also includes Nvidia’s new Quantum-X800 InfiniBand networking platform. The Satya Nadella-led company additionally announced the native integration of DGX Cloud with Microsoft Fabric to streamline custom AI model development, as well as the availability of the newly launched Omniverse Cloud APIs on the Azure Power platform. 

In the healthcare domain, Microsoft said Azure will use Nvidia’s Clara suite of microservices and DGX Cloud to help healthcare providers, pharmaceutical and biotechnology companies and medical device developers quickly innovate across clinical research and care delivery.


Oracle

Oracle said it plans to leverage the Grace Blackwell computing platform across OCI Supercluster and OCI Compute instances, with the latter adopting both the Nvidia GB200 superchip and the B200 Tensor Core GPU. The platform will also be available on Nvidia DGX Cloud on OCI. 

Beyond this, Oracle said Nvidia NIM and CUDA-X microservices, including the NeMo Retriever for RAG inference deployments, will also help OCI customers bring more insight and accuracy to their generative AI applications.


SAP

SAP is working with Nvidia to integrate generative AI into its cloud solutions, including the latest version of SAP Datasphere, SAP Business Technology Platform and RISE with SAP. The company also said it plans to build additional generative AI capabilities within SAP BTP using Nvidia’s generative AI foundry service, featuring DGX Cloud AI supercomputing, Nvidia AI Enterprise software and Nvidia AI Foundation models. 


IBM

To help clients solve complex business challenges, IBM Consulting plans to combine its technology and industry expertise with Nvidia’s AI Enterprise software stack, including the new NIM microservices and Omniverse technologies. IBM says this will accelerate customers’ AI workflows, enhance use case-to-model optimization and develop business- and industry-specific AI use cases. The company is already building and delivering digital twin applications for supply chain and manufacturing using Isaac Sim and Omniverse.


Snowflake

Data cloud company Snowflake expanded its previously announced partnership with Nvidia to integrate NeMo Retriever. The generative AI microservice connects custom LLMs to enterprise data, allowing the company’s customers to enhance the performance and scalability of chatbot applications built with Snowflake Cortex. The collaboration also includes Nvidia TensorRT software, which delivers low latency and high throughput for deep learning inference applications.
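To make the retrieval idea concrete: a retriever embeds documents and queries into vectors, then returns the closest document to ground the LLM’s answer in enterprise data. The sketch below is a deliberately toy stand-in (bag-of-words counts instead of neural embeddings) and does not use the NeMo Retriever API; it only illustrates the retrieval step of a RAG pipeline.

```python
# Toy sketch of RAG retrieval: "embed" documents and a query, then pick the
# most similar document. A real pipeline (e.g. NeMo Retriever) would use
# neural embedding models and a vector index instead of this bag-of-words toy.
import math
import re
from collections import Counter


def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding": token counts, lowercased.
    return Counter(re.findall(r"\w+", text.lower()))


def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def retrieve(query: str, docs: list[str]) -> str:
    """Return the document most similar to the query, to ground the LLM."""
    return max(docs, key=lambda d: cosine(embed(query), embed(d)))


docs = [
    "Blackwell GPUs target large language model inference workloads.",
    "Snowflake Cortex hosts chatbot applications over enterprise data.",
]
best = retrieve("Which GPUs handle LLM inference?", docs)
```

The retrieved passage is then prepended to the LLM prompt, which is what lets a chatbot answer from proprietary data it was never trained on.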

Beyond Snowflake, data platform providers Box, Dataloop, Cloudera, Cohesity, DataStax and NetApp also announced plans to use Nvidia microservices, including the new NIM technology, to help customers optimize RAG pipelines and integrate their proprietary data into generative AI applications. 

Nvidia GTC 2024 runs from March 18 to March 21 in San Jose and online.

