Use openrouter for llama models

This commit is contained in:
jelveh
2025-04-03 17:23:26 -07:00
parent ddb04431cc
commit 80060e863d
@@ -297,6 +297,11 @@ class AI{
requestParams.model = 'openrouter:openai/o1-mini';
}
	// if the model starts with 'meta-llama/', prefix it with 'openrouter:'
if ( requestParams.model.startsWith('meta-llama/') ) {
requestParams.model = 'openrouter:' + requestParams.model;
}
// map model to the appropriate driver
if (!requestParams.model || requestParams.model === 'gpt-4o' || requestParams.model === 'gpt-4o-mini') {
driver = 'openai-completion';
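
The prefix mapping added in this hunk can be sketched in isolation. This is a simplified stand-alone version, assuming the model name is a plain string (the hypothetical `normalizeModel` helper and the hard-coded model names below are illustrative, not part of the actual `AI` class):

```javascript
// Sketch of the model-name normalization this commit adds:
// models published under "meta-llama/" are routed through OpenRouter
// by prepending the "openrouter:" driver prefix; other names pass through.
function normalizeModel(model) {
  if (model && model.startsWith('meta-llama/')) {
    return 'openrouter:' + model;
  }
  return model;
}

console.log(normalizeModel('meta-llama/llama-3.1-70b-instruct'));
// openrouter:meta-llama/llama-3.1-70b-instruct
console.log(normalizeModel('gpt-4o'));
// gpt-4o (unchanged, falls through to driver selection below)
```

The prefix approach keeps the driver-selection switch further down untouched: any `openrouter:`-prefixed name is picked up by the existing OpenRouter routing rather than needing a new branch per Llama model.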