Supported Meta Models
You can import large language models from Hugging Face or from OCI Object Storage buckets into OCI Generative AI, create endpoints for those models, and then use them through the service.
These models are improved versions of the Meta Llama models and use Grouped Query Attention (GQA). For more information, see the Llama 2, Llama 3, Llama 3.1, Llama 3.2, Llama 3.3, and Llama 4 pages in the Hugging Face documentation.
Meta Llama
Important
- Imported models support the native context length specified by the model provider. However, the maximum context length that you can use is also limited by the underlying hardware configuration in OCI Generative AI. You might need to provision additional hardware resources to take full advantage of the model’s native context length.
- You can import a fine-tuned version of a model only if the fine-tuned model uses the same transformers version as the original model and has a parameter count within ±10% of the original.
- If the instance type for the recommended unit shape isn’t available in your region, select a higher-tier instance (for example, select an H100 shape instead of an A100-80G shape).
- For prerequisites and how to import models, see Managing Imported Models (New).
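Before starting an import, you can pre-check the two fine-tuning constraints above. The sketch below is illustrative only: the `transformers_version` field does appear in a Hugging Face model's `config.json`, but the `num_parameters` field and the `is_import_compatible` helper are hypothetical names introduced here, and the actual import validation performed by OCI Generative AI may differ.

```python
def is_import_compatible(base, fine_tuned, tolerance=0.10):
    """Check the two documented constraints for importing a fine-tuned model:
    the same transformers version as the base model, and a parameter count
    within ±10% of the base model's.

    `base` and `fine_tuned` are dicts; the field names used here are an
    assumption for illustration, not the service's schema.
    """
    # Constraint 1: transformers versions must match exactly.
    if base["transformers_version"] != fine_tuned["transformers_version"]:
        return False, "transformers version mismatch"

    # Constraint 2: parameter count within ±10% of the base model.
    lower = base["num_parameters"] * (1 - tolerance)
    upper = base["num_parameters"] * (1 + tolerance)
    if not (lower <= fine_tuned["num_parameters"] <= upper):
        return False, "parameter count outside ±10% of the base model"

    return True, "compatible"


base = {"transformers_version": "4.43.0", "num_parameters": 8_000_000_000}
tuned = {"transformers_version": "4.43.0", "num_parameters": 8_400_000_000}
print(is_import_compatible(base, tuned))  # within 10%, so compatible
```

Running such a check locally, before uploading model artifacts to an Object Storage bucket, avoids a failed import late in the workflow.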