
SandboxAQ and Nvidia Collaborate to Revolutionize Quantum and AI with LQMs

12 August 2024 | Zaker Adham

SandboxAQ, a pioneer in quantum and AI technology, began as a discreet project within Google's parent company, Alphabet, in 2016, when combining quantum techniques with AI was still largely theoretical. Eight years later, numerous companies are exploring how the two technologies can be paired to advance computing, sensing, security, and other applications across industries.

One key area of focus is computational chemistry. SandboxAQ, based in Palo Alto, California, became an independent entity in 2022 after spinning off from Alphabet. The company is at the forefront of using quantum and AI to drive innovations in drug discovery, disease treatment, and material science, including battery chemistry.

Recognizing the need for collaboration, SandboxAQ partnered with GPU giant Nvidia in November 2023 to leverage quantum and AI for chemistry simulations. Recently, the partners showcased the first results of this collaboration, combining SandboxAQ's new AI models with Nvidia H100 GPUs and quantum technology to push the boundaries of computational chemistry.

SandboxAQ introduced Large Quantitative Models (LQMs) and paired them with Nvidia's CUDA-accelerated Density Matrix Renormalization Group (DMRG) quantum simulation algorithm. The combination enables highly accurate, quantitative AI simulations of real-world systems, going beyond what Large Language Models (LLMs) and other AI models can do. Integrating the CUDA-accelerated DMRG algorithm with the Nvidia Quantum platform and Nvidia accelerated computing yields faster and more precise calculations, doubling the size of the catalyst and enzyme active sites that can be simulated.
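The companies have not published implementation details, but the core object DMRG works with is a matrix product state (MPS): a compressed, bounded-bond-dimension representation of a quantum state whose dominant cost is dense tensor algebra, exactly the kind of workload GPUs accelerate. As a rough, illustrative sketch (not SandboxAQ's or Nvidia's code, and with all sizes and parameters chosen purely for the demo), the snippet below compresses a small spin-chain state into an MPS with truncated SVDs, the representation that DMRG variationally optimizes:

```python
import numpy as np

def state_to_mps(psi, n_sites, phys_dim=2, max_bond=32):
    """Compress a dense state vector into a matrix product state (MPS)
    by sweeping left to right with truncated SVDs. DMRG optimizes states
    in exactly this bounded-bond-dimension form."""
    tensors = []
    remainder = psi.reshape(1, -1)          # (left bond, everything else)
    for _ in range(n_sites - 1):
        left_bond = remainder.shape[0]
        remainder = remainder.reshape(left_bond * phys_dim, -1)
        u, s, vh = np.linalg.svd(remainder, full_matrices=False)
        keep = min(max_bond, len(s))        # keep only the largest singular values
        u, s, vh = u[:, :keep], s[:keep], vh[:keep, :]
        tensors.append(u.reshape(left_bond, phys_dim, keep))
        remainder = np.diag(s) @ vh         # carry the remaining weight to the right
    tensors.append(remainder.reshape(remainder.shape[0], phys_dim, 1))
    return tensors

def mps_to_state(tensors):
    """Contract an MPS back into a dense state vector (for checking only)."""
    out = tensors[0]
    for t in tensors[1:]:
        out = np.tensordot(out, t, axes=([-1], [0]))
    return out.reshape(-1)

if __name__ == "__main__":
    n = 10                                   # 10 spin-1/2 sites -> 1,024 amplitudes
    psi = np.random.randn(2 ** n)
    psi /= np.linalg.norm(psi)
    mps = state_to_mps(psi, n, max_bond=32)  # bond dimension 32 is exact for 10 sites
    print("reconstruction error:", np.linalg.norm(psi - mps_to_state(mps)))
    # Lowering max_bond introduces truncation error; it stays small for the
    # low-entanglement ground states DMRG targets, which is why the method scales.
```

The SVD truncations and contractions in a sketch like this are the dense linear algebra that a CUDA-accelerated DMRG offloads to the GPU at much larger scale.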

"AI supercomputing is addressing critical challenges in the chemical and pharmaceutical industries," said Tim Costa, Nvidia's director of high-performance and quantum computing. "SandboxAQ's use of the Nvidia Quantum platform is enabling unprecedented scale in simulations, allowing scientists to rethink what's possible in computational chemistry."

Fierce Electronics recently interviewed Nadia Harhen, general manager of AI simulation at SandboxAQ, to discuss the partnership's achievements, the nature of LQMs, and future plans.

Interview Highlights:

Fierce Electronics: How does this achievement relate to the partnership announced last November?

Nadia Harhen: This milestone showcases the progress made since the partnership began. By combining the CUDA-accelerated DMRG algorithm, Nvidia Quantum platform, and Nvidia accelerated computing, we've achieved an 80x speedup in highly accurate chemistry calculations compared to 128-core CPU-based calculations.
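The 80x figure is SandboxAQ's reported result, and the article does not describe the benchmark setup. Purely as an illustration of how such CPU-versus-GPU comparisons are typically timed (using NumPy and CuPy as stand-ins, and a dense matrix multiply rather than DMRG itself), a minimal harness might look like this:

```python
import time
import numpy as np

try:
    import cupy as cp   # GPU array library, used here as an illustrative stand-in
except ImportError:
    cp = None

def time_matmul(xp, n=4096, repeats=3):
    """Average wall time of a dense n x n matrix multiply, the kind of kernel
    that dominates tensor-network methods such as DMRG."""
    a, b = xp.random.rand(n, n), xp.random.rand(n, n)
    xp.matmul(a, b)                          # warm-up (kernel compile, caches)
    if xp is not np:
        cp.cuda.Device().synchronize()
    start = time.perf_counter()
    for _ in range(repeats):
        xp.matmul(a, b)
    if xp is not np:
        cp.cuda.Device().synchronize()       # wait for the GPU before stopping the clock
    return (time.perf_counter() - start) / repeats

cpu_t = time_matmul(np)
print(f"CPU (NumPy): {cpu_t:.3f} s per multiply")
if cp is not None:
    gpu_t = time_matmul(cp)
    print(f"GPU (CuPy):  {gpu_t:.3f} s per multiply  (~{cpu_t / gpu_t:.0f}x)")
```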

FE: What are LQMs, and how do they differ from LLMs?

NH: LQMs, or Large Quantitative Models, are trained on large volumes of proprietary, high-accuracy physics-based data to predict real-world properties of drugs and materials. Unlike LLMs, which focus on textual data, LQMs learn the physics of matter to represent physical and chemical interactions, enabling applications like designing new materials and drugs.
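SandboxAQ has not published its LQM architecture, but the workflow Harhen describes, generating high-accuracy physics-based data and then training a model to predict real-world properties from it, can be sketched in miniature. The example below fits a small regressor on synthetic "simulation" data; every dataset, feature, and parameter here is invented for illustration and does not reflect SandboxAQ's models.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for physics-based training data: numeric descriptors a simulator
# might emit for each candidate molecule or material (all values synthetic).
n_samples, n_features = 5000, 16
X = rng.normal(size=(n_samples, n_features))
# Stand-in target property (e.g. a binding or formation energy) generated from
# a hidden nonlinear rule, mimicking "ground truth" simulation output.
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] * X[:, 2] + 0.1 * rng.normal(size=n_samples)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A small quantitative surrogate model: it learns the property directly from
# numeric descriptors rather than from text, unlike an LLM.
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
model.fit(X_train, y_train)
print("held-out R^2:", round(model.score(X_test, y_test), 3))
```

The point of the toy example is the data flow, not the model class: simulation output stands in for scarce experimental measurements, and the trained model then predicts properties for candidates that were never simulated.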

FE: What implications do LQMs have for the future use of LLMs in computational chemistry?

NH: While LLMs excel at text-focused tasks, LQMs expand the applicability of deep learning models to areas like sustainable materials and drug design. LLMs will continue to be used for extracting chemical data, but LQMs are needed for designing new three-dimensional drug molecules.

FE: What's next for SandboxAQ and Nvidia?

NH: We're just beginning. Our next step is to scale up to a larger GPU cluster and capture more complex chemistry. We aim to expand beyond calculating energetics and generate massive databases of other important chemical properties.

FE: Did the collaboration leverage technology from SandboxAQ's acquisition of Good Chemistry?

NH: The partnership predates the acquisition, but the integration has been successful. Contributors from both teams have achieved a significant milestone in accelerating AI-driven chemistry.

FE: Were these simulations performed on Nvidia H100 GPUs, and what are the future plans?

NH: The work was done on Nvidia's DGX H100 with eight GPUs. We're exploring larger GPU architectures like the GH200 and expect significant benefits from the new Blackwell machines. We're also discussing hardware needs and software acceleration for quantum chemistry and other workflows.