Model Internet Search
🌐 Real-time Web Search: Break Through LLM Knowledge Cutoffs for More Accurate and Reliable Outputs
We've enhanced OpenAI and Gemini series models with the ability to access the latest information from the web, helping you:
- ✅ Access Latest Information: Get real-time updates on current events, latest research, or live data
- ✅ Eliminate Knowledge Gaps: Overcome the knowledge cutoff of LLM training data by accessing information published after training
- ✅ Reduce Hallucinations: Ground answers in real-time web search results, significantly reducing fabricated content
- ✅ Improve Decision Quality: Make more confident decisions based on analysis and recommendations grounded in current facts
Supported Models
We currently support the OpenAI and Gemini model series through two integration methods:
1. Models with Native Search Capabilities
Gemini Series (grounding with Google Search):
- gemini-2.0-pro-exp-02-05-search
- gemini-2.0-flash-exp-search
- gemini-2.0-flash-search
OpenAI Series (Bing search):
- gpt-4o-search-preview
- gpt-4o-mini-search-preview
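These models have search built in, so no extra parameter is needed; they are called like any other chat model through the OpenAI-compatible endpoint. A minimal sketch (the prompt is only an illustration):

```python
from openai import OpenAI

client = OpenAI(
    api_key="sk-***",                     # your AIhubmix API key
    base_url="https://aihubmix.com/v1"
)

# Native-search model: no web_search_options parameter is required
completion = client.chat.completions.create(
    model="gpt-4o-search-preview",
    messages=[
        {"role": "user", "content": "What are today's top AI news stories? Include source links."}
    ]
)

print(completion.choices[0].message.content)
```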
2. Parameter-Based Support
Simply add the parameter `web_search_options={}` to enable web connectivity for any Gemini or OpenAI model; see the Usage Guide below for complete examples.
The search fee for Gemini models is $3.50 per 1,000 searches.
3. Future Plans
We're planning to enable Web Search through parameter suffixes, making it even easier for developers to implement. Stay tuned!
Usage Guide
OpenAI SDK Support
Before using, run `pip install -U openai` to upgrade the openai package.
Python Example
from openai import OpenAI

client = OpenAI(
    api_key="sk-***",
    base_url="https://aihubmix.com/v1"
)

chat_completion = client.chat.completions.create(
    model="gemini-2.0-flash-exp",
    # 🌐 Enable search
    web_search_options={},
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "Search for information about the AIhubmix LLM API platform, provide a brief introduction, and include relevant links."
                }
            ]
        }
    ]
)

print(chat_completion.choices[0].message.content)
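Depending on the model, search results may also be returned as citation annotations on the message (OpenAI's search-preview models attach url_citation annotations; whether other models do so may vary by provider). A hedged sketch that reuses the chat_completion object from the example above and prints any citations it finds:

```python
# Print any URL citations the model attached to its answer (may be empty or absent)
message = chat_completion.choices[0].message
for annotation in getattr(message, "annotations", None) or []:
    citation = getattr(annotation, "url_citation", None)
    if citation is not None:
        print(f"- {citation.title}: {citation.url}")
```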
TypeScript Example
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: 'sk-***',
  baseURL: 'https://aihubmix.com/v1'
});

async function main() {
  const chatCompletion = await client.chat.completions.create({
    model: 'gemini-2.0-flash-exp',
    // 🌐 Enable search
    web_search_options: {},
    messages: [
      {
        role: 'user',
        content: [
          {
            type: 'text',
            text: 'Search for information about the AIhubmix LLM API platform, provide a brief introduction, and include relevant links.'
          }
        ]
      }
    ]
  });

  console.log(chatCompletion.choices[0].message.content);
}

main().catch(console.error);
CURL Example
curl "https://aihubmix.com/v1/chat/completions" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer xxx" \
-d '{
"model": "gemini-2.0-flash-exp",
"web_search_options": {},
"messages": [
{
"role": "user",
"content": [
{
"type": "text",
"text": "Provide information about Van Gogh on the Google Arts & Culture website, with a brief introduction and relevant links."
}
]
}
],
"stream": false
}'
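If your installed openai package is too old to accept web_search_options as a keyword argument, the Python SDK's generic extra_body escape hatch can carry the parameter instead. A minimal sketch; upgrading the SDK as described above remains the simpler path:

```python
from openai import OpenAI

client = OpenAI(api_key="sk-***", base_url="https://aihubmix.com/v1")

# Older SDK versions: pass web_search_options through extra_body
completion = client.chat.completions.create(
    model="gemini-2.0-flash-exp",
    extra_body={"web_search_options": {}},
    messages=[
        {"role": "user", "content": "Summarize this week's major AI model releases and include links."}
    ]
)

print(completion.choices[0].message.content)
```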