Modellake provides chat completion functionality that lets you interact with multiple LLM (Large Language Model) providers, including OpenAI (GPT-4), Google Gemini, DeepSeek, Anthropic Claude, and Llama (via Groq). This makes it straightforward to plug several AI models into the same application for conversational and generative AI use cases.
Setting Up API Keys
To use chat_complete, set the API key for each provider you plan to call as an environment variable.
Setting API Keys in the Terminal (Windows)
set OPENAI_API_KEY=your_openai_key
set GOOGLE_API_KEY=your_gemini_key
set DEEPSEEK_API_KEY=your_deepseek_key
set GROQ_API_KEY=your_llama_key
set ANTHROPIC_API_KEY=your_claude_key
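If you prefer to keep everything in Python, the same keys can be set programmatically through os.environ before Modellake is initialized; the variable names below simply mirror the ones listed above.
import os

# Set provider API keys in the current process instead of the terminal.
# Replace the placeholder values with your real keys.
os.environ["OPENAI_API_KEY"] = "your_openai_key"
os.environ["GOOGLE_API_KEY"] = "your_gemini_key"
os.environ["DEEPSEEK_API_KEY"] = "your_deepseek_key"
os.environ["GROQ_API_KEY"] = "your_llama_key"
os.environ["ANTHROPIC_API_KEY"] = "your_claude_key"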
1. OpenAI (GPT-4) Chat Completion
import os
from groclake.modellake import Modellake

# Initialize the Modellake client (API keys are picked up from the environment variables set above)
modellake = Modellake()
# Build the request: choose the model and pass the conversation as a list of messages
chat_completion_request = {
    "model_name": "gpt-4",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Which model are you using?"}
    ]
}
response = modellake.chat_complete(chat_completion_request)
print("Response:", response)
Response
Response : {'answer': "As an artificial intelligence, I'm based on OpenAI's GPT-3 model.", 'input_tokens': 46, 'output_tokens': 19, 'total_tokens': 65}
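The response is a plain dictionary, so the generated text and token usage can be read directly; the field names below are taken from the output above.
# Pull the generated answer and token counts out of the response dictionary
answer = response["answer"]
print("Answer:", answer)
print("Tokens used:", response["input_tokens"], "in /",
      response["output_tokens"], "out /", response["total_tokens"], "total")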
2. Google Gemini Chat Completion
chat_completion_request = {
    "model_name": "gemini-1.5-flash",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Which model are you using?"}
    ]
}
response = modellake.chat_complete(chat_completion_request)
print("Response:", response)
Response
Response : {'answer': "I'm currently running on the Gemini family of models.", 'input_tokens': 6, 'output_tokens': 13, 'total_tokens': 19}
3. DeepSeek Chat Completion
chat_completion_request = {
    "model_name": "deepseek/deepseek-r1:free",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Which model are you using?"}
    ]
}
response = modellake.chat_complete(chat_completion_request)
print("Response:", response)
Response
Response : {'answer': "Hi! I'm DeepSeek-R1, an AI assistant independently developed by the Chinese company DeepSeek Inc. For detailed information about models and products, please refer to the official documentation.", 'input_tokens': 18, 'output_tokens': 81, 'total_tokens': 99}
4. Claude Chat Completion
chat_completion_request = {
    "model_name": "claude-3-5-sonnet-20241022",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Which model are you using?"}
    ]
}
response = modellake.chat_complete(chat_completion_request)
print("Response:", response)
Response
Response : {'answer': "I am Claude, an AI assistant created by Anthropic. I aim to be direct and honest about what I am. I don't pretend to be human.", 'input_tokens': 13, 'output_tokens': 37, 'total_tokens': 50}
5. Llama Chat Completion (via Groq API)
Getting a Groq API Key
To use Llama models, obtain an API key from Groq:
1. Sign up on the Groq Developer Console.
2. Navigate to API Keys and generate a new key.
3. Securely store the key and load it in your application (see the sketch below).
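As a minimal sketch of the last step, the key can be kept in the GROQ_API_KEY environment variable (matching the setup section above) and checked before any Llama call.
import os

# Fail fast if the Groq key was never exported
groq_api_key = os.environ.get("GROQ_API_KEY")
if not groq_api_key:
    raise RuntimeError("GROQ_API_KEY is not set; add it to your environment before using Llama models.")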
Llama Chat Example
chat_completion_request = {
    "model_name": "llama3-70b-8192",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Which model are you using?"}
    ]
}
response = modellake.chat_complete(chat_completion_request)
print("Response:", response)
Response
Response : {'answer': "I'm an AI assistant, and I'm using a highly advanced language model to understand and respond to your queries. My model is based on a transformer architecture, specifically a variant of the BERT (Bidirectional Encoder Representations from Transformers) model.\n\nMy model has been fine-tuned on a massive dataset of text from various sources, including but not limited to books, articles, research papers, and online conversations. This fine-tuning enables me to understand and respond to a wide range of questions and topics, from simple queries to more complex conversations.\n\nThat being said, my model is constantly learning and improving, so I become more accurate and informative with each interaction. I'm here to help you with any questions or topics you'd like to discuss!", 'input_tokens': 27, 'output_tokens': 150, 'total_tokens': 177}
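Since chat_complete receives the full messages list on every call, a multi-turn conversation can be built by appending each reply before the next request. This is a minimal sketch that reuses the request format above and assumes the standard "assistant" role is accepted for prior model turns.
# Start the conversation with a system prompt and a first user question
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Which model are you using?"}
]
first = modellake.chat_complete({"model_name": "llama3-70b-8192", "messages": messages})

# Append the model's reply (assumed "assistant" role) and a follow-up question, then call chat_complete again
messages.append({"role": "assistant", "content": first["answer"]})
messages.append({"role": "user", "content": "Summarize your previous answer in one sentence."})
follow_up = modellake.chat_complete({"model_name": "llama3-70b-8192", "messages": messages})
print("Follow-up:", follow_up["answer"])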