Agents-Cpp: A High Performance C++ Framework for AI Agents
A high-performance, low-memory C++ implementation of an AI agents framework designed to enable developers to build local agentic systems.
Features
- Modular Design: Composable components for building various agent architectures
- Multiple LLM Providers: Support for OpenAI, Anthropic, Google, and Ollama
- High Performance: Optimized for efficiency and low memory usage
- Workflow Patterns: Implementation of recommended workflow patterns:
  - Prompt Chaining
  - Routing
  - Parallelization
  - Orchestrator-Workers
  - Evaluator-Optimizer
- Autonomous Agents: Support for fully autonomous agents with various planning strategies
- Extensible Tools: Flexible tool system with prebuilt examples
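For intuition, the first pattern above (prompt chaining) runs a sequence of prompts where each step's output feeds into the next step's prompt. The sketch below is framework-agnostic: `runChain` and the `LlmFn` stub are illustrative names, not part of the SDK's API.

```cpp
#include <cassert>
#include <functional>
#include <string>
#include <vector>

// Stand-in for a real LLM call; the SDK's own interface is not shown here.
using LlmFn = std::function<std::string(const std::string&)>;

// Prompt chaining: run the prompts in order, feeding each step's output
// into the next step's prompt.
std::string runChain(const LlmFn& llm,
                     const std::vector<std::string>& prompts,
                     std::string input) {
    for (const auto& prompt : prompts) {
        input = llm(prompt + "\n" + input);
    }
    return input;
}
```

With a real LLM backend plugged in as `llm`, a two-step chain such as "summarize, then translate" is just `runChain(llm, {summarizePrompt, translatePrompt}, text)`.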
Configuration
You can configure API keys and other settings in three ways:

- Using a `.env` file:

  ```sh
  # Copy the template
  cp .env.template .env

  # Edit the file with your API keys
  vi .env  # or use any editor
  ```

- Using environment variables:

  ```sh
  export OPENAI_API_KEY=your_api_key_here
  export ANTHROPIC_API_KEY=your_api_key_here
  ```

- Passing API keys as command-line arguments (not recommended for production):

  ```sh
  ./bin/examples/simple_agent your_api_key_here
  ```
The framework checks for API keys in the following order:

1. `.env` file
2. Environment variables
3. Command-line arguments
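The lookup order above can be sketched as a small resolver. This is a simplified stand-in, not the SDK's actual loader: `parseDotEnv` and `resolveApiKey` are hypothetical names, and the `.env` parsing ignores comments and quoting.

```cpp
#include <cassert>
#include <cstdlib>
#include <fstream>
#include <string>

// Simplified .env reader: returns the value for `key`, or "" if absent.
std::string parseDotEnv(const std::string& path, const std::string& key) {
    std::ifstream file(path);
    std::string line;
    while (std::getline(file, line)) {
        auto eq = line.find('=');
        if (eq != std::string::npos && line.substr(0, eq) == key) {
            return line.substr(eq + 1);
        }
    }
    return "";
}

// Resolution order: .env file first, then environment variable, then argv.
std::string resolveApiKey(const std::string& key, int argc, char** argv) {
    if (auto v = parseDotEnv(".env", key); !v.empty()) return v;
    if (const char* env = std::getenv(key.c_str())) return env;
    if (argc > 1) return argv[1];
    return "";
}
```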
Requirements
- C++20 compatible compiler
- libcurl
- libcpr (C++ Requests)
Building
Customers will be provided with a pre-compiled agents library as well as sample binaries.
Usage
Here's a simple example of creating and running an autonomous agent:

```cpp
#include <agents-cpp/agent_context.h>
#include <agents-cpp/agents/autonomous_agent.h>
#include <agents-cpp/llm_interface.h>
#include <agents-cpp/tools/tool_registry.h>

#include <iostream>

int main() {
    // Create the LLM backend
    auto llm = createLLM("anthropic", "your_api_key_here", "claude-3-5-sonnet-20240620");

    // Shared context holding the LLM and any registered tools
    auto context = std::make_shared<AgentContext>();
    context->setLLM(llm);

    // Create the agent and run a task
    AutonomousAgent agent(context);
    JsonObject result = agent.run("Research the latest developments in quantum computing");

    std::cout << result["answer"].get<String>() << std::endl;
    return 0;
}
```
Getting Started
Prerequisites
Before you begin, ensure you have the following installed:
- A C++20 compatible compiler (GCC 10+, Clang 10+, or MSVC 2019 16.11+)
- libcurl with development headers
- libcpr (C++ Requests)
For convenience, the libcurl and libcpr libraries are also included in the release tarball.
Installation

- Procure the agents-sdk release.
- Extract the release:

  ```sh
  tar -xf agents_sdk_vX.X.X.tar.gz
  ```

- Obtain API keys for your chosen LLM provider.
Running Your First Example

The simplest way to start is with the `simple_agent` example, which creates a basic autonomous agent that can use tools to answer questions:

- From the release directory, run the example:

  ```sh
  ./bin/examples/simple_agent your_api_key_here
  ```

  Alternatively, you can set your API key as an environment variable:

  ```sh
  export OPENAI_API_KEY=your_api_key_here
  ./bin/examples/simple_agent
  ```
- Once running, you'll be prompted to enter a question or task. For example:

  ```
  Enter a question or task for the agent (or 'exit' to quit):
  > What's the current status of quantum computing research?
  ```

- The agent will:
  - Break down the task into steps
  - Use tools (like web search) to gather information
  - Ask for your approval before proceeding with certain steps (if human-in-the-loop is enabled)
  - Provide a comprehensive answer

- Example output:

  ```
  Step: Planning how to approach the question
  Status: Completed
  Result: {
    "plan": "1. Search for recent quantum computing research developments..."
  }
  --------------------------------------
  Step: Searching for information on quantum computing research
  Status: Waiting for approval
  Context: {"search_query": "current status quantum computing research 2024"}
  Approve this step? (y/n): y
  ...
  ```
Configuring the Example
You can modify examples/simple_agent.cpp to explore different configurations:

- Change the LLM provider:

  ```cpp
  auto llm = createLLM("anthropic", api_key, "claude-3-5-sonnet-20240620");
  // or
  auto llm = createLLM("google", api_key, "gemini-pro");
  ```

- Add different tools:

  ```cpp
  context->registerTool(tools::createCalculatorTool());
  context->registerTool(tools::createPythonCodeExecutionTool());
  ```

- Change the planning strategy:

  ```cpp
  agent.setPlanningStrategy(AutonomousAgent::PlanningStrategy::COT);
  ```
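Alongside chain-of-thought (COT), the framework supports a ReAct strategy, which alternates reasoning with tool use until the model decides it is done. The toy loop below illustrates the shape of that cycle with stubbed components; `reactLoop`, `policy`, and `tool` are illustrative names, not the SDK's implementation.

```cpp
#include <cassert>
#include <functional>
#include <string>

// Toy ReAct loop: ask the policy (the "model") for the next action,
// execute the tool, and append the observation to the scratchpad,
// repeating until the policy says "finish" or the step budget runs out.
std::string reactLoop(
    const std::function<std::string(const std::string&)>& policy,
    const std::function<std::string(const std::string&)>& tool,
    std::string scratchpad, int maxSteps = 5) {
    for (int i = 0; i < maxSteps; ++i) {
        std::string action = policy(scratchpad);  // "reason" about the next action
        if (action == "finish") break;            // the model decides it is done
        scratchpad += "\n" + tool(action);        // "act" and record the observation
    }
    return scratchpad;
}
```

In the real framework the policy is the LLM and the tools come from the tool registry; the scratchpad corresponds to the accumulated agent context.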
Examples
The repository includes several examples demonstrating different workflow patterns:

- `simple_agent`: Basic autonomous agent example
- `prompt_chain_example`: Demonstrates prompt chaining workflow
- `routing_example`: Shows how to implement routing
- `parallel_example`: Parallel execution of tasks
- `orchestrator_example`: Orchestrator-workers pattern
- `evaluator_optimizer_example`: Demonstrates the evaluator-optimizer workflow
- `autonomous_agent_example`: Full-featured autonomous agent

Run examples from the release directory:

```sh
./bin/examples/simple_agent your_api_key_here
```
Project Structure
- `lib/`: Public library for the SDK
- `include/agents-cpp/`: Public headers
- `bin/examples/`: Example applications
Supported LLM Providers
- Anthropic Claude: Claude 3 family models (Opus, Sonnet, Haiku)
- OpenAI: GPT-4o, GPT-4, GPT-3.5 Turbo
- Google: Gemini family models (Pro, Flash)
- Ollama: Local models like Llama, Mistral, etc.
Extending
Adding Custom Tools
```cpp
// createTool takes a name, a description, a parameter schema, and a
// callback; evaluate() is a placeholder for your own expression evaluator.
auto custom_tool = createTool(
    "calculator",
    "Evaluates mathematical expressions",
    {
        {"expression", "The expression to evaluate", "string", true}
    },
    [](const JsonObject& params) {
        String expr = params["expression"];
        double result = evaluate(expr);
        return {
            true,
            "Result: " + std::to_string(result),
            {{"result", result}}
        };
    }
);

context->registerTool(custom_tool);
```
Creating Custom Workflows
You can create custom workflows by extending the Workflow base class or combining existing workflows:

```cpp
class CustomWorkflow : public Workflow {
public:
    CustomWorkflow(std::shared_ptr<AgentContext> context)
        : Workflow(context) {}

    JsonObject run(const String& input) override {
        // Implement your custom workflow logic here and
        // return the result as a JsonObject.
        return JsonObject{};
    }
};
```
License
This project is licensed under a proprietary license - see the LICENSE file for details.
Acknowledgements
This implementation began from Anthropic's article "Building effective agents" and continues to draw inspiration from their research and recommendations.