Microsoft and NVIDIA forge AI powerhouse

In a groundbreaking announcement, Microsoft Corp and NVIDIA have unveiled a series of integrations set to revolutionise generative AI capabilities for enterprises worldwide. This collaboration, showcased at GTC, introduces cutting-edge advancements across Microsoft Azure, Azure AI services, Microsoft Fabric, and Microsoft 365, leveraging the latest NVIDIA generative AI and Omniverse™ technologies.

Microsoft’s adoption of the NVIDIA GB200 Grace Blackwell Superchip and advanced NVIDIA Quantum-X800 InfiniBand networking marks a significant leap forward in AI infrastructure. These innovations will enable the deployment of trillion-parameter foundation models, enhancing natural language processing, computer vision, and speech recognition capabilities on Azure.

Moreover, Microsoft is rolling out the Azure NC H100 v5 virtual machine series, powered by the NVIDIA H100 NVL platform, offering flexibility and scalability for diverse AI workloads. This development underscores Microsoft’s commitment to providing cutting-edge solutions for AI training and inferencing needs.
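
As an illustration only, and not part of the announcement itself, the sketch below uses the Azure SDK for Python to check which NC H100 v5 sizes are offered in a given region before provisioning. The region name and the "H100_v5" filter string are assumptions made for this example.

```python
# Minimal sketch: list Azure VM sizes in a region and keep those from the
# NC H100 v5 family. Assumes the Azure SDK for Python is installed
# (azure-identity, azure-mgmt-compute) and that AZURE_SUBSCRIPTION_ID is set;
# the region "eastus" is an assumption for illustration.
import os

from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

credential = DefaultAzureCredential()
subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
compute = ComputeManagementClient(credential, subscription_id)

# Enumerate the sizes available in the region and print the H100 v5 ones.
for size in compute.virtual_machine_sizes.list(location="eastus"):
    if "H100_v5" in size.name:
        print(size.name,
              f"{size.number_of_cores} vCPUs",
              f"{size.memory_in_mb / 1024:.0f} GiB RAM")
```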

In healthcare and life sciences, the collaboration extends to integrating cloud, AI, and supercomputing technologies. By leveraging Azure alongside NVIDIA DGX™ Cloud and the NVIDIA Clara™ suite of microservices, organisations in the sector can expect rapid innovation in clinical research and care delivery.

The partnership also brings forth advancements in industrial digitalisation, with NVIDIA Omniverse Cloud APIs set to debut on Microsoft Azure. This integration promises increased data interoperability, collaboration, and physics-based visualisation for software applications, enhancing insights and accelerating production processes.

Furthermore, NVIDIA GPUs and Triton Inference Server™ are set to power AI inference predictions in Microsoft Copilot for Microsoft 365. This integration enhances productivity and creativity by delivering real-time contextualised intelligence to users.

Finally, NVIDIA NIM™ inference microservices are slated to debut on Azure AI, facilitating optimised AI deployments. These microservices, part of the NVIDIA AI Enterprise software platform, streamline the deployment of performance-optimised production AI applications, enabling developers to expedite time-to-market.
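
NIM microservices expose an OpenAI-compatible HTTP API, so a deployed endpoint can typically be called with standard client libraries. The snippet below is a minimal sketch, assuming a hypothetical endpoint URL, a NIM_API_KEY environment variable, and an example model name; the real values depend on the specific Azure AI deployment.

```python
# Minimal sketch: call a NIM inference microservice through its
# OpenAI-compatible chat endpoint. The base URL, NIM_API_KEY variable,
# and model name are placeholders, not real deployment values.
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://example-nim-endpoint.azure.example.com/v1",  # hypothetical endpoint
    api_key=os.environ.get("NIM_API_KEY", "not-set"),
)

response = client.chat.completions.create(
    model="meta/llama3-8b-instruct",  # example model name served by a NIM container
    messages=[{"role": "user", "content": "Summarise the Azure and NVIDIA partnership."}],
    max_tokens=128,
)
print(response.choices[0].message.content)
```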
