Get Started With NVIDIA NeMo

Tools to take generative AI models from development to production.

The Journey From AI Models to Generative AI Insights

Experience the end-to-end enterprise-ready platform for generative AI.

1. Experience NVIDIA-optimized foundation models.
2. Prototype with NVIDIA NeMo and NVIDIA-hosted APIs.
3. Run in production with NVIDIA AI Enterprise.

1. Prototype AI models.

Experience Optimized, Production-Grade Generative AI Models

Start prototyping with leading NVIDIA-built and open-source generative AI models that have been tuned for high performance and efficiency. LLMs can then be customized with NeMo and deployed using NVIDIA NIM from the NVIDIA API catalog. 

2. Pull NeMo tools and microservices.

Customize With NeMo Tools and Microservices

NeMo Framework

Build Custom Models

Start development of generative AI and speech AI models with automated data processing, model training techniques, and flexible deployment options.

NeMo Retriever

Retrieval-Augmented Generation (RAG)

Connect enterprise data to generative AI models and retrieve information with the lowest latency, highest throughput, and maximum data privacy.

NeMo Guardrails

Safeguard AI Applications

Orchestrate dialog management for LLMs, ensuring accuracy, appropriateness, and security in smart applications.

NeMo Curator

GPU-Accelerated Data Curation

Use this GPU-accelerated data curation tool to prepare large-scale, high-quality datasets for pretraining generative AI models.

NeMo Customizer

Simplified Model Alignment

Simplify fine-tuning and alignment of LLMs for domain-specific use cases with this high-performance, scalable microservice.

NeMo Evaluator

Automatic Model Assessment

Evaluate custom LLMs and RAG pipelines efficiently and reliably across diverse academic and custom benchmarks on any cloud or data center.

3. Run in production.

Deploy in Production With NVIDIA AI Enterprise

NVIDIA AI Enterprise is the end-to-end software platform that brings generative AI into every enterprise, providing the fastest and most efficient runtime for generative AI foundation models. It includes NeMo and NVIDIA NIM to streamline adoption with security, stability, manageability, and support. 

Request a free 90-day evaluation to access generative AI solutions and enterprise support today.

Resources

Documentation

Find a collection of documents, guides, manuals, how-to’s, and other informational resources in the NeMo Documentation Hub.

Sessions

Check out NVIDIA On-Demand, which features free content on NeMo from GTC and other technology conferences from around the world.

Must-Reads

Read how NeMo enables you to build, customize, and deploy large language models.

Training

Learn how to set up end-to-end projects with hands-on learning and get certified on the latest generative AI technologies.

FAQs

NVIDIA NeMo is an end-to-end, cloud-native framework for building, customizing, and deploying generative AI models anywhere. It includes training and inferencing frameworks, a guardrailing toolkit, data curation tools, and pretrained models, offering enterprises an easy, cost-effective, and fast way to adopt generative AI. Developers can choose to access NeMo through open-source code on GitHub, as a packaged container in the NVIDIA NGC™ catalog, or through an NVIDIA AI Enterprise subscription.

NeMo is available as part of NVIDIA AI Enterprise. The full pricing and licensing details can be found here.

NeMo can be used to customize large language models (LLMs), vision language models (VLMs), automatic speech recognition (ASR), and text-to-speech (TTS) models.

Customers can get NVIDIA Business-Standard Support through an NVIDIA AI Enterprise subscription, which includes NeMo. NVIDIA Business-Standard Support offers service-level agreements, access to NVIDIA experts, and long-term support across on-premises and cloud deployments.

NVIDIA AI Enterprise includes NVIDIA Business-Standard Support. For additional available support and services such as NVIDIA Business-Critical Support, a technical account manager, training, and professional services, see the NVIDIA Enterprise Support and Service Guide.

NeMo Curator is a scalable data curation tool that enables developers to sort through trillion-token multilingual datasets for pretraining LLMs. It consists of a set of Python modules expressed as APIs that use Dask, cuDF, cuGraph, and PyTorch to scale data curation tasks such as data download, text extraction, cleaning, filtering, exact/fuzzy deduplication, and text classification to thousands of compute cores.
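At its core, exact deduplication hashes normalized document text and keeps only the first occurrence of each digest. The snippet below is a minimal, single-process sketch of that idea in plain Python; it is not the NeMo Curator API, which distributes the same work across thousands of cores with Dask and cuDF.

```python
import hashlib

def exact_dedup(docs):
    """Drop documents whose normalized text hashes to an already-seen digest.
    A toy, single-process illustration of exact deduplication."""
    seen = set()
    unique = []
    for doc in docs:
        # Normalize lightly so trivially different copies collide.
        digest = hashlib.sha256(doc.strip().lower().encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(doc)
    return unique

corpus = ["Hello world.", "hello world.  ", "A different document."]
print(exact_dedup(corpus))  # the near-identical first two collapse to one
```

Fuzzy deduplication follows the same pattern but replaces the exact hash with locality-sensitive hashes (e.g., MinHash) so near-duplicates also collide.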

NeMo Guardrails, an open-source toolkit, orchestrates dialog management, ensuring accuracy, appropriateness, and security in smart applications with large language models. It safeguards organizations overseeing LLM systems.

NeMo Guardrails lets developers set up three kinds of boundaries:

  • Topical guardrails prevent apps from veering off into undesired areas. For example, they keep customer service assistants from answering questions about the weather.
  • Safety guardrails ensure apps respond with accurate, appropriate information. They can filter out unwanted language and enforce that references are made only to credible sources.
  • Security guardrails ensure apps only connect to external third-party applications known to be safe.
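As a rough illustration of the topical-guardrail idea (not the NeMo Guardrails API, which defines these boundaries as dialog flows in Colang configuration), a rail can be thought of as a check that intercepts off-topic requests before they ever reach the LLM. The patterns below are invented for the customer-service example above:

```python
import re
from typing import Optional

# Hypothetical off-topic patterns for a customer-service assistant.
OFF_TOPIC = [r"\bweather\b", r"\bforecast\b"]
REFUSAL = "I can help with questions about your account and our products."

def topical_rail(user_message: str) -> Optional[str]:
    """Return a canned deflection if the message strays off topic,
    or None to let the request through to the LLM. A toy stand-in
    for the Colang flows NeMo Guardrails actually uses."""
    for pattern in OFF_TOPIC:
        if re.search(pattern, user_message, re.IGNORECASE):
            return REFUSAL
    return None

print(topical_rail("What's the weather like today?"))  # deflected
print(topical_rail("How do I reset my password?"))     # None: forwarded to the LLM
```

Safety and security rails sit at the other end of the pipeline, checking the model's output and any external tool calls in the same intercept-and-filter fashion.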

With NeMo Retriever, a collection of generative AI microservices, enterprises can seamlessly connect custom models to diverse business data to deliver highly accurate responses. NeMo Retriever provides world-class information retrieval with the lowest latency, highest throughput, and maximum data privacy, enabling organizations to use their data better and generate real-time business insights. NeMo Retriever enhances generative AI applications with enterprise-grade retrieval-augmented generation (RAG) capabilities, which can be connected to business data wherever it resides.

NVIDIA NIM, part of NVIDIA AI Enterprise, is an easy-to-use runtime designed to accelerate the deployment of generative AI across enterprises. This versatile microservice supports a broad spectrum of AI models, from open-source community models to NVIDIA AI Foundation models to custom AI models. Built on robust inference engines, it's engineered for seamless AI inferencing at scale, ensuring that AI applications can be deployed across the cloud, data center, and workstation.
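NIM microservices expose an OpenAI-compatible HTTP API. As a sketch, assuming a NIM container deployed locally (the URL and model identifier below are placeholders; substitute the values from your own deployment), a chat request body looks like this:

```python
import json

# Hypothetical endpoint for a locally deployed NIM container.
NIM_URL = "http://localhost:8000/v1/chat/completions"

payload = {
    "model": "meta/llama3-8b-instruct",  # placeholder model identifier
    "messages": [
        {"role": "user", "content": "Summarize NVIDIA NeMo in one sentence."}
    ],
    "max_tokens": 128,
    "temperature": 0.2,
}

# POST json.dumps(payload) to NIM_URL with the HTTP client of your choice;
# the response follows the OpenAI chat-completions schema.
print(json.dumps(payload, indent=2))
```

Because the interface is OpenAI-compatible, existing client libraries and tooling built against that schema can typically point at a NIM endpoint with only a URL change.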

NeMo Evaluator is an automated microservice designed for fast and reliable assessment of custom LLMs and RAG pipelines. It spans diverse benchmarks with predefined metrics, including human evaluations and LLM-as-a-judge techniques. Multiple evaluation jobs can be deployed simultaneously on Kubernetes across preferred cloud platforms or data centers via API calls, and results are aggregated efficiently.

NeMo Customizer is a high-performance, scalable microservice that simplifies fine-tuning and alignment of LLMs for domain-specific use cases.

Retrieval-augmented generation is a technique that lets LLMs create responses from the latest information by connecting them to a company's knowledge base. NeMo works with various third-party and community tools, including Milvus, LlamaIndex, and LangChain, to extract relevant snippets of information from a vector database and feed them to the LLM to generate responses in natural language. Explore the AI Chatbot Using RAG Workflow page to get started building production-quality AI chatbots that can accurately answer questions about your enterprise data.
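The retrieval step can be illustrated with a toy example: score knowledge-base snippets against the query, take the best match, and prepend it to the prompt sent to the LLM. Real deployments replace the bag-of-words similarity below with embedding search in a vector database such as Milvus; everything here is a simplified stand-in, not a NeMo API.

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query, snippets, k=1):
    """Return the k snippets most similar to the query. A toy stand-in
    for embedding search in a vector database."""
    q = Counter(query.lower().split())
    ranked = sorted(snippets,
                    key=lambda s: cosine(q, Counter(s.lower().split())),
                    reverse=True)
    return ranked[:k]

knowledge_base = [
    "Refunds are processed within five business days.",
    "Our headquarters are in Santa Clara, California.",
]
question = "how long do refunds take"
context = retrieve(question, knowledge_base)[0]
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)
```

The LLM then answers from the retrieved context rather than from its frozen training data, which is what keeps RAG responses grounded in current enterprise information.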


NVIDIA offers AI workflows—cloud-native, packaged reference examples that illustrate how NVIDIA AI frameworks can be leveraged to build AI solutions. With pretrained models, training and inference pipelines, Jupyter Notebooks, and Helm charts, AI workflows accelerate the path to delivering AI solutions.

Quickly build your generative AI solutions with these end-to-end workflows.

NVIDIA LaunchPad is a universal proving ground, offering expansive testing of the latest NVIDIA enterprise hardware and software. This dynamic platform expedites short-term trials, facilitates long-term proofs of concept (POCs), and fuels the accelerated development of managed services and standalone solutions.

Users can initiate their AI journey with a prescriptive development environment tailored to their needs. Or, they can explore a vast catalog of hands-on labs designed to offer immersive experiences across a spectrum of use cases, from AI and data science to 3D design and infrastructure optimization. Enterprises gain easy access to the latest accelerated hardware and software stacks deployed on privately hosted infrastructure.

NVIDIA AI Enterprise is an end-to-end, cloud-native software platform that accelerates data science pipelines and streamlines the development and deployment of production-grade AI applications, including generative AI, computer vision, speech AI, and more. It includes best-in-class development tools, frameworks, pretrained models, microservices for AI practitioners, and reliable management capabilities for IT professionals to ensure performance, API stability, and security.

The NVIDIA API catalog provides production-ready generative AI models and a continually optimized inference runtime, packaged as NVIDIA NIM microservices that can be easily deployed with standardized tools on any GPU-accelerated system.


Get the Inside Scoop on Generative AI News and More

Get developer updates, announcements, and more from NVIDIA sent directly to your inbox.