🧠 Agents-SDK - A High-Performance C++ Framework for AI Agents

Agents-SDK is a portable, high-performance C++ framework for building on-device, agentic AI systems — think LangChain for the edge. This SDK is purpose-built for developers who want to create local-first AI agents that can reason, plan, and act without relying on the cloud.
🚀 Features
- ⚙️ Modular Architecture — Compose agents from interchangeable components.
- 🧩 Multi-LLM Support — Connect to multiple providers seamlessly:
- OpenAI (GPT-4o, GPT-4, GPT-3.5 Turbo)
- Anthropic (Claude 3 family: Opus, Sonnet, Haiku)
- Google (Gemini family: Pro, Flash)
- Ollama/llama-cpp (local models like Llama, Mistral, etc.)
- ⚡ Optimized for Speed and Memory — Built in C++ with focus on performance.
- 🔁 Built-In Workflow Patterns
- Prompt Chaining
- Routing
- Parallelization
- Orchestrator-Workers
- Evaluator-Optimizer
- 🤖 Autonomous Agents — Supports modern reasoning strategies:
- ReAct (Reason + Act)
- CoT (Chain-of-Thought) [In Development]
- Plan and Execute
- Zero-Shot [In Development]
- Reflexion [In Development]
- 🧠 Extensible Tooling System — Plug in your own tools or use built-in ones (Web Search, Wikipedia, Python Executor, etc.).
⚙️ Requirements
- C++20 compatible compiler (GCC 14+, Clang 17+, MSVC 2022+)
- Bazel 8.3.1+
- Dependencies (already provided for convenience)
- python3 (3.11+)
- nlohmann/json
- spdlog
🧭 Quick Start
Installation
- Clone the repository:
git clone https://github.com/RunEdgeAI/agents-sdk.git
- Navigate to the SDK directory:
cd agents-sdk
- Obtain API keys for the providers you plan to use (see Configuration below).
Building
Build everything in the workspace:
bazel build //...
Configuration
You can configure API keys and other settings in three ways:
- Using a .env file:
# Copy the template
cp .env.template .env
# Edit the file with your API keys
vi .env # or use any editor
- Using environment variables:
export OPENAI_API_KEY=your_api_key_here
export ANTHROPIC_API_KEY=your_api_key_here
export GEMINI_API_KEY=your_api_key_here
export WEBSEARCH_API_KEY=your_api_key_here
- Passing API keys as command-line arguments (not recommended for production):
bazel run examples:simple_agent -- your_api_key_here
The framework will check for API keys in the following order:
- .env file
- Environment variables
- Command-line arguments
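Conceptually, the lookup behaves like the sketch below (illustrative only; the SDK's actual implementation may differ):

#include <cstdlib>
#include <fstream>
#include <optional>
#include <string>

// Hypothetical helper illustrating the precedence order above.
std::optional<std::string> findApiKey(const std::string& name,
                                      int argc, char** argv) {
    // 1. .env file in the working directory
    std::ifstream env(".env");
    for (std::string line; std::getline(env, line);) {
        auto eq = line.find('=');
        if (eq != std::string::npos && line.substr(0, eq) == name) {
            return line.substr(eq + 1);
        }
    }
    // 2. Environment variable
    if (const char* value = std::getenv(name.c_str())) {
        return std::string(value);
    }
    // 3. Command-line argument
    if (argc > 1) {
        return std::string(argv[1]);
    }
    return std::nullopt;
}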
Python Tool Setup
To use the Python Code Execution Tool, configure your Python environment so the SDK can locate your Python runtime and standard library:
export PYTHONHOME=$(python3 -c "import sys; print(sys.prefix)")
export PYTHONPATH=$(python3 -c "import sysconfig; print(sysconfig.get_path('stdlib'))")
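With these variables set, the Python tool registers like any other (factory name as used in the configuration examples below):

context->registerTool(tools::createPythonCodeExecutionTool());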
Usage
Here's a simple example of creating and running an autonomous agent:
#include <agents-cpp/context.h>
#include <agents-cpp/agents/autonomous_agent.h>
#include <agents-cpp/llm_interface.h>
#include <agents-cpp/tools/tool_registry.h>

#include <iostream>
#include <memory>

// Add a using-directive for the SDK's namespace, as in examples/simple_agent.cpp.

int main() {
    // Create an LLM backed by Anthropic Claude
    auto llm = createLLM(
        "anthropic",
        "<your_api_key_here>",
        "claude-3-5-sonnet-20240620");

    // Set up the shared execution context
    auto context = std::make_shared<Context>();
    context->setLLM(llm);

    // Create an autonomous agent with the ReAct strategy
    // (constructor form assumed; see examples/ for the exact API)
    AutonomousAgent agent(context);
    agent.setPlanningStrategy(AutonomousAgent::PlanningStrategy::REACT);

    // Run the agent and print its final answer
    JsonObject result = agent.run(
        "Research the latest developments in quantum computing");
    std::cout << result["answer"].get<std::string>() << std::endl;
    return 0;
}
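The example above gives the agent no tools. In practice you will usually register some on the context before calling run(); a minimal sketch using the built-in factories shown later in this README:

context->registerTool(tools::createCalculatorTool());
context->registerTool(tools::createPythonCodeExecutionTool());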
Running Your First Example
The simplest way to start is with the simple_agent example, which creates a basic autonomous agent that can use tools to answer questions:
- Navigate to the release directory.
- From the release directory, run the example:
bazel run examples:simple_agent -- your_api_key_here
Alternatively, set your API key as an environment variable and omit the argument:
export OPENAI_API_KEY=your_api_key_here
bazel run examples:simple_agent
- Once running, you'll be prompted to enter a question or task. For example:
Enter a question or task for the agent (or 'exit' to quit):
> What's the current status of quantum computing research?
- The agent will:
- Break down the task into steps
- Use tools (like web search) to gather information
- Ask for your approval before proceeding with certain steps (if human-in-the-loop is enabled)
- Provide a comprehensive answer
- Example output:
Step: Planning how to approach the question
Status: Completed
Result: {
"plan": "1. Search for recent quantum computing research developments..."
}
--------------------------------------
Step: Searching for information on quantum computing research
Status: Waiting for approval
Context: {"search_query": "current status quantum computing research 2024"}
Approve this step? (y/n): y
...
Configuring the Example
You can modify examples/simple_agent.cpp to explore different configurations:
- Change the LLM provider:
// Use Anthropic Claude:
auto llm =
    createLLM(
        "anthropic", api_key,
        "claude-3-5-sonnet-20240620");

// Or Google Gemini:
auto llm =
    createLLM(
        "google", api_key,
        "gemini-pro");
- Add different tools:
context->registerTool(tools::createCalculatorTool());
context->registerTool(tools::createPythonCodeExecutionTool());
- Change the planning strategy (note that CoT support is still in development; see Features above):
agent.setPlanningStrategy(AutonomousAgent::PlanningStrategy::COT);
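Putting these options together, a fully configured agent might look like the following sketch (constructor form assumed, as in the Usage example above):

auto llm = createLLM("anthropic", api_key, "claude-3-5-sonnet-20240620");
auto context = std::make_shared<Context>();
context->setLLM(llm);
context->registerTool(tools::createCalculatorTool());
context->registerTool(tools::createPythonCodeExecutionTool());

AutonomousAgent agent(context);
agent.setPlanningStrategy(AutonomousAgent::PlanningStrategy::REACT);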
🧪 Included Examples
The repository includes several examples demonstrating different workflow patterns:
| Example | Description |
| --- | --- |
| simple_agent | Basic autonomous agent |
| prompt_chain_example | Prompt chaining workflow |
| routing_example | Multi-agent routing |
| parallel_example | Parallel task execution |
| orchestrator_example | Orchestrator–worker pattern |
| evaluator_optimizer_example | Evaluator–optimizer feedback loop |
| multimodal_example | Support for voice, audio, image, docs |
| autonomous_agent_example | Full-featured autonomous agent |
Run any available example, substituting a name from the table:
bazel run examples:<example_name> -- your_api_key_here
📂 Project Structure
- lib/: Public SDK library
- include/agents-cpp/: Public headers
  - types.h: Common type definitions
  - context.h: Context for agent execution
  - llm_interface.h: Interface for LLM providers
  - tool.h: Tool interface
  - memory.h: Agent memory interface
  - workflow.h: Base workflow interface
  - agent.h: Base agent interface
  - workflows/: Workflow pattern implementations
  - agents/: Agent implementations
  - tools/: Tool implementations
  - llms/: LLM provider implementations
- bin/examples/: Example applications
🛠️ Extending the SDK
Adding Custom Tools
"calculator",
"Evaluates mathematical expressions",
{
{"expression", "The expression to evaluate", "string", true}
},
std::string expr = params["expression"];
double result = evaluate(expr);
true,
"Result: " + std::to_string(result),
{{"result", result}}
};
}
);
context->registerTool(custom_tool);
Creating Custom Workflows
You can create custom workflows by extending the Workflow base class or combining existing workflows:
class CustomWorkflow : public Workflow {
public:
    CustomWorkflow(std::shared_ptr<Context> context)
        : Workflow(context) {}

    JsonObject run(const std::string& input) override {
        // Implement your own orchestration here: chain prompts,
        // call tools, or delegate to other workflows.
        JsonObject result;
        result["answer"] = "Processed: " + input;  // placeholder logic
        return result;
    }
};
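Running a custom workflow then mirrors the built-in ones (shared-pointer usage assumed):

auto workflow = std::make_shared<CustomWorkflow>(context);
JsonObject output = workflow->run("Summarize the latest AI research");
std::cout << output["answer"].get<std::string>() << std::endl;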
🆘 Support
📚 Acknowledgements
This implementation is inspired by Anthropic's article "Building effective agents" and re-engineered in C++ for real-time, low-overhead use on edge devices.
⚖️ License
This project is licensed under a proprietary license; see the LICENSE file for details.
The future of AI is on-device.
Start with our samples and discover how on-device agents can power the next generation of AI applications.