Llama 3.1 405B
by Meta
The largest and most capable Llama model for complex reasoning and generation tasks.
Run queries immediately, pay only for usage
About this model
Llama 3.1 405B is the flagship model in Meta's Llama 3.1 family. With 405 billion parameters, it delivers state-of-the-art performance on complex reasoning tasks, mathematical problems, and code generation. It's ideal for applications requiring the highest quality outputs.
Use Cases
- Research & Analysis
- Complex Problem Solving
- Technical Documentation
- Academic Writing
Model Details
- Provider: Meta
- Model ID: meta-llama/Llama-3.1-405B-Instruct
- Parameters: 405B
- Context Length: 128K tokens
- Category: chat
API Usage
Use the DOS API to integrate Llama 3.1 405B into your applications. Our API is compatible with OpenAI's client libraries for easy migration.
Model ID
meta-llama/Llama-3.1-405B-Instruct

Python
from dos import DOS
client = DOS()
response = client.chat.completions.create(
    model="meta-llama/Llama-3.1-405B-Instruct",
    messages=[
        {"role": "user", "content": "Hello, how are you?"}
    ]
)

print(response.choices[0].message.content)

cURL
curl https://api.dos.ai/v1/chat/completions \
-H "Authorization: Bearer $DOS_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "meta-llama/Llama-3.1-405B-Instruct",
"messages": [
{"role": "user", "content": "Hello, how are you?"}
]
}'Node.js
import DOS from 'dos-ai';
const client = new DOS();
const response = await client.chat.completions.create({
model: "meta-llama/Llama-3.1-405B-Instruct",
messages: [
{ role: "user", content: "Hello, how are you?" }
]
});
console.log(response.choices[0].message.content);Related Models
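Because the API follows the OpenAI chat-completions format, you can also reuse the official OpenAI Python client by pointing it at the DOS endpoint. The sketch below assumes the base URL https://api.dos.ai/v1 (inferred from the cURL example above) and the DOS_API_KEY environment variable; confirm both against your account settings.

Python (OpenAI client)
import os
from openai import OpenAI

# Point the standard OpenAI client at the DOS endpoint (base URL assumed
# from the cURL example above) and authenticate with your DOS API key.
client = OpenAI(
    base_url="https://api.dos.ai/v1",
    api_key=os.environ["DOS_API_KEY"],
)

response = client.chat.completions.create(
    model="meta-llama/Llama-3.1-405B-Instruct",
    messages=[
        {"role": "user", "content": "Hello, how are you?"}
    ],
)

print(response.choices[0].message.content)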
Related Models
Llama 3.3 70B
High-performance multilingual LLM optimized for dialogue and instruction following.
Mistral Large 2
Flagship model with strong multilingual and coding capabilities.
DeepSeek V3
State-of-the-art MoE model with exceptional reasoning capabilities.