Agents 1.4.0
Edge AI Agents SDK
Interface for language model providers (OpenAI, Anthropic, Google, Ollama).
#include <llm_interface.h>
Public Member Functions

virtual ~LLMInterface() = default
    Destructor.

virtual std::vector<std::string> getAvailableModels() = 0
    Get the models available from this provider.

virtual void setModel(const std::string &model) = 0
    Set the model to use.

virtual std::string getModel() const = 0
    Get the current model.

virtual void setApiKey(const std::string &api_key) = 0
    Set the API key.

virtual void setApiBase(const std::string &api_base) = 0
    Set the API base URL (for self-hosted or proxied endpoints).

virtual void setOptions(const LLMOptions &options) = 0
    Set options for API calls.

virtual LLMOptions getOptions() const = 0
    Get the current options.

virtual LLMResponse chat(const std::string &prompt) = 0
    Generate a completion from a single prompt string.

virtual LLMResponse chat(const std::vector<Message> &messages) = 0
    Generate a completion from a list of messages.

virtual LLMResponse chatWithTools(const std::vector<Message> &messages, const std::vector<std::shared_ptr<Tool>> &tools) = 0
    Generate a completion with the given tools available to the model.

virtual void streamChat(const std::vector<Message> &messages, std::function<void(const std::string &, bool)> callback) = 0
    Stream results incrementally through a callback.

virtual Task<LLMResponse> chatAsync(const std::vector<Message> &messages)
    Asynchronous chat from a list of messages.

virtual Task<LLMResponse> chatWithToolsAsync(const std::vector<Message> &messages, const std::vector<std::shared_ptr<Tool>> &tools)
    Asynchronous chat with tools.

virtual AsyncGenerator<std::string> streamChatAsync(const std::vector<Message> &messages, const std::vector<std::shared_ptr<Tool>> &tools)
    Stream chat results through an AsyncGenerator.

virtual AsyncGenerator<std::pair<std::string, ToolCalls>> streamChatAsyncWithTools(const std::vector<Message> &messages, const std::vector<std::shared_ptr<Tool>> &tools)
    Stream chat with tools through an AsyncGenerator, yielding text chunks paired with tool calls.

virtual std::optional<JsonObject> uploadMediaFile(const std::string &local_path, const std::string &mime, const std::string &binary = "")
    Provider-optional: upload a local media file to the provider's file storage and return a canonical media envelope (e.g., with fileUri). Default: not supported.
Detailed Description

Interface for language model providers (OpenAI, Anthropic, Google, Ollama).
Member Function Documentation

chat() [1/2]

virtual LLMResponse chat(const std::string &prompt) = 0  [pure virtual]

Generate a completion from a single prompt string.

Parameters:
    prompt  The prompt to send to the model

Implemented in agents::llms::AnthropicLLM, agents::llms::GoogleLLM, agents::llms::OllamaLLM, and agents::llms::OpenAILLM.
chat() [2/2]

virtual LLMResponse chat(const std::vector<Message> &messages) = 0  [pure virtual]

Generate a completion from a list of messages.

Parameters:
    messages  The messages to generate the completion from

Implemented in agents::llms::AnthropicLLM, agents::llms::GoogleLLM, agents::llms::OllamaLLM, and agents::llms::OpenAILLM.
chatAsync()

virtual Task<LLMResponse> chatAsync(const std::vector<Message> &messages)  [virtual]

Asynchronous chat from a list of messages.

Parameters:
    messages  The messages to generate the completion from
chatWithTools()

virtual LLMResponse chatWithTools(const std::vector<Message> &messages, const std::vector<std::shared_ptr<Tool>> &tools) = 0  [pure virtual]

Generate a completion with the given tools available to the model.

Parameters:
    messages  The messages to generate the completion from
    tools     The tools the model may call

Implemented in agents::llms::AnthropicLLM, agents::llms::GoogleLLM, agents::llms::OllamaLLM, and agents::llms::OpenAILLM.
chatWithToolsAsync()

virtual Task<LLMResponse> chatWithToolsAsync(const std::vector<Message> &messages, const std::vector<std::shared_ptr<Tool>> &tools)  [virtual]

Asynchronous chat with tools.

Parameters:
    messages  The messages to generate the completion from
    tools     The tools the model may call
getAvailableModels()

virtual std::vector<std::string> getAvailableModels() = 0  [pure virtual]

Get the models available from this provider.

Implemented in agents::llms::AnthropicLLM, agents::llms::GoogleLLM, agents::llms::OllamaLLM, and agents::llms::OpenAILLM.
getModel()

virtual std::string getModel() const = 0  [pure virtual]

Get the current model.

Implemented in agents::llms::AnthropicLLM, agents::llms::GoogleLLM, agents::llms::OllamaLLM, and agents::llms::OpenAILLM.
getOptions()

virtual LLMOptions getOptions() const = 0  [pure virtual]

Get the current options.

Implemented in agents::llms::AnthropicLLM, agents::llms::GoogleLLM, agents::llms::OllamaLLM, and agents::llms::OpenAILLM.
setApiBase()

virtual void setApiBase(const std::string &api_base) = 0  [pure virtual]

Set the API base URL (for self-hosted or proxied endpoints).

Parameters:
    api_base  The API base URL to use

Implemented in agents::llms::AnthropicLLM, agents::llms::GoogleLLM, agents::llms::OllamaLLM, and agents::llms::OpenAILLM.
setApiKey()

virtual void setApiKey(const std::string &api_key) = 0  [pure virtual]

Set the API key.

Parameters:
    api_key  The API key to use

Implemented in agents::llms::AnthropicLLM, agents::llms::GoogleLLM, agents::llms::OllamaLLM, and agents::llms::OpenAILLM.
setModel()

virtual void setModel(const std::string &model) = 0  [pure virtual]

Set the model to use.

Parameters:
    model  The model to use

Implemented in agents::llms::AnthropicLLM, agents::llms::GoogleLLM, agents::llms::OllamaLLM, and agents::llms::OpenAILLM.
setOptions()

virtual void setOptions(const LLMOptions &options) = 0  [pure virtual]

Set options for API calls.

Parameters:
    options  The options to use

Implemented in agents::llms::AnthropicLLM, agents::llms::GoogleLLM, agents::llms::OllamaLLM, and agents::llms::OpenAILLM.
streamChat()

virtual void streamChat(const std::vector<Message> &messages, std::function<void(const std::string &, bool)> callback) = 0  [pure virtual]

Stream results incrementally through a callback.

Parameters:
    messages  The messages to generate the completion from
    callback  Invoked with each streamed text chunk and a completion flag

Implemented in agents::llms::AnthropicLLM, agents::llms::GoogleLLM, agents::llms::OllamaLLM, and agents::llms::OpenAILLM.
streamChatAsync()

virtual AsyncGenerator<std::string> streamChatAsync(const std::vector<Message> &messages, const std::vector<std::shared_ptr<Tool>> &tools)  [virtual]

Stream chat results through an AsyncGenerator.

Parameters:
    messages  The messages to generate the completion from
    tools     The tools the model may call

Reimplemented in agents::llms::AnthropicLLM, agents::llms::GoogleLLM, agents::llms::OllamaLLM, and agents::llms::OpenAILLM.
streamChatAsyncWithTools()

virtual AsyncGenerator<std::pair<std::string, ToolCalls>> streamChatAsyncWithTools(const std::vector<Message> &messages, const std::vector<std::shared_ptr<Tool>> &tools)  [virtual]

Stream chat with tools through an AsyncGenerator, yielding text chunks paired with tool calls.

Parameters:
    messages  The messages to generate the completion from
    tools     The tools the model may call

Reimplemented in agents::llms::GoogleLLM, and agents::llms::OllamaLLM.
uploadMediaFile()

virtual std::optional<JsonObject> uploadMediaFile(const std::string &local_path, const std::string &mime, const std::string &binary = "")  [virtual]

Provider-optional: upload a local media file to the provider's file storage and return a canonical media envelope (e.g., with fileUri). The default implementation does not support uploads.

Parameters:
    local_path  Local filesystem path of the media file
    mime        The MIME type of the media file
    binary      Optional binary content of the media file

Reimplemented in agents::llms::AnthropicLLM, and agents::llms::GoogleLLM.