Llama 3.3 70B
By Meta
High-performance multilingual LLM optimized for dialogue and instruction following.
Run queries immediately, pay only for usage
About this model
Llama 3.3 70B is Meta's latest instruction-tuned language model, offering exceptional performance across a wide range of tasks. With 70 billion parameters and a 128K context window, it excels at complex reasoning, coding, multilingual tasks, and creative writing. The model has been fine-tuned using RLHF to be helpful, harmless, and honest.
Capabilities
Use Cases
- Customer Support Bots
- Code Assistants
- Content Generation
- Data Analysis
- Research Assistants
Model Details
- Provider: Meta
- Model ID: meta-llama/Llama-3.3-70B-Instruct
- Parameters: 70B
- Context Length: 128K tokens
- Category: chat
API Usage
Use the DOS API to integrate Llama 3.3 70B into your applications. Our API is compatible with OpenAI's client libraries for easy migration.
Model ID
meta-llama/Llama-3.3-70B-Instruct
Python
from dos import DOS
client = DOS()
response = client.chat.completions.create(
    model="meta-llama/Llama-3.3-70B-Instruct",
    messages=[
        {"role": "user", "content": "Hello, how are you?"}
    ]
)
print(response.choices[0].message.content)
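Because the model is tuned for dialogue, a system prompt and earlier turns can be sent in the same messages list on each request. Below is a minimal sketch reusing the DOS client from the example above; the system and assistant roles follow the standard OpenAI-style chat format the API is described as compatible with, and the support-bot prompt is illustrative only.

# Multi-turn sketch: prior turns are replayed in the messages list on every call.
# The system/assistant roles are assumed to follow the standard chat-completions format.
from dos import DOS

client = DOS()
response = client.chat.completions.create(
    model="meta-llama/Llama-3.3-70B-Instruct",
    messages=[
        {"role": "system", "content": "You are a concise customer-support assistant."},
        {"role": "user", "content": "My order hasn't arrived yet."},
        {"role": "assistant", "content": "Sorry to hear that. Could you share your order number?"},
        {"role": "user", "content": "It's 48213."}
    ]
)
print(response.choices[0].message.content)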
cURL
curl https://api.dos.ai/v1/chat/completions \
  -H "Authorization: Bearer $DOS_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "meta-llama/Llama-3.3-70B-Instruct",
    "messages": [
      {"role": "user", "content": "Hello, how are you?"}
    ]
  }'
Node.js
import DOS from 'dos-ai';
const client = new DOS();
const response = await client.chat.completions.create({
  model: "meta-llama/Llama-3.3-70B-Instruct",
  messages: [
    { role: "user", content: "Hello, how are you?" }
  ]
});
console.log(response.choices[0].message.content);
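Since the API is described as compatible with OpenAI's client libraries, existing code can often be migrated by pointing the official openai Python client at the DOS endpoint. A minimal sketch, assuming the base URL and DOS_API_KEY variable from the cURL example above; exact feature parity with the OpenAI API is an assumption.

# Migration sketch: reuse the official openai client against the DOS endpoint.
# Base URL and DOS_API_KEY are taken from the cURL example above.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.dos.ai/v1",
    api_key=os.environ["DOS_API_KEY"],
)

response = client.chat.completions.create(
    model="meta-llama/Llama-3.3-70B-Instruct",
    messages=[{"role": "user", "content": "Hello, how are you?"}],
)
print(response.choices[0].message.content)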
Related Models
Llama 3.1 405B
The largest and most capable Llama model for complex reasoning and generation tasks.
Mistral Large 2
Flagship model with strong multilingual and coding capabilities.
DeepSeek V3
State-of-the-art MoE model with exceptional reasoning capabilities.