What is Red Hat AI?
Red Hat AI is engineered to help organizations build and run AI solutions, from first experiments to full production. The solution includes three services:
- Red Hat AI Inference Server optimizes model inference across the hybrid cloud for faster, more cost-effective model deployments. Powered by vLLM, it provides access to validated and optimized third-party models on Hugging Face and includes LLM Compressor tools.
- Red Hat Enterprise Linux AI is a platform for inference and training of large language models to power enterprise applications. It includes InstructLab tooling for customizing models, as well as integrated support for hardware accelerators.
- Red Hat OpenShift AI builds on the capabilities of Red Hat OpenShift to provide a platform for managing the lifecycle of generative and predictive AI models at scale. Through integrated MLOps and LLMOps capabilities, it offers complete lifecycle management with distributed training, tuning, inference, and monitoring of AI applications across hybrid cloud environments.
Technical Details
| Feature | Available |
|---|---|
| Mobile Application | No |
