Contribution To Covalent Agent Kit
This guide explains how I contributed to Covalent's AI Agent SDK by integrating Ollama, enabling the use of locally running Large Language Models (LLMs). This integration lets developers run AI agents without relying on paid API keys, making AI-driven automation more accessible and cost-effective. Here is the link to my
No API Costs: Since Ollama runs locally, there is no need to pay for external LLM API services.
Increased Privacy: Data remains on the local machine, enhancing security and confidentiality.
Customization & Control: Developers can fine-tune models and experiment without restrictions imposed by third-party services.
Offline Capability: AI agents can function even without an active internet connection.
I introduced a new provider type, OLLAMA, to the existing model configuration:
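A sketch of what such a provider addition can look like. The identifiers here (ModelProvider, ModelConfig) are illustrative, not the SDK's actual names:

```typescript
// Illustrative provider enum extended with an OLLAMA entry.
enum ModelProvider {
  OPENAI = "OPENAI",
  OLLAMA = "OLLAMA", // new: points the agent at a local Ollama server
}

// Minimal model configuration shape (names are assumptions).
interface ModelConfig {
  provider: ModelProvider;
  name: string; // e.g. "llama3.1" for a model pulled locally via `ollama pull`
}

const config: ModelConfig = {
  provider: ModelProvider.OLLAMA,
  name: "llama3.1",
};
```

With a local model selected this way, no API key is required anywhere in the configuration.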
Since Ollama runs locally, API requests are sent to http://localhost:11434/api/chat. Here's how I implemented the request logic:
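A minimal sketch of the request logic, assuming Ollama's standard chat endpoint; the function names are illustrative rather than the SDK's actual helpers:

```typescript
// Message shape accepted by Ollama's /api/chat endpoint.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

// Build the POST options for a non-streaming chat request.
function buildChatRequest(model: string, messages: ChatMessage[]) {
  return {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model, messages, stream: false }),
  };
}

// Send the request to the locally running Ollama server.
async function ollamaChat(model: string, messages: ChatMessage[]) {
  const res = await fetch("http://localhost:11434/api/chat", buildChatRequest(model, messages));
  if (!res.ok) {
    throw new Error(`Ollama request failed with status ${res.status}`);
  }
  return res.json();
}
```

Setting `stream: false` asks Ollama for a single JSON response instead of a stream of partial chunks, which keeps the parsing logic simple.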
I structured the response parsing to extract relevant information:
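As a sketch of such parsing: Ollama's non-streaming chat response nests the generated text under message.content, so extraction can look like this (the function name is an assumption):

```typescript
// Subset of Ollama's non-streaming /api/chat response shape.
interface OllamaChatResponse {
  model: string;
  message: { role: string; content: string };
  done: boolean;
}

// Pull the assistant's reply out of the response, guarding against
// unexpected shapes so failures surface with a clear error.
function extractReply(response: OllamaChatResponse): string {
  if (!response.message || typeof response.message.content !== "string") {
    throw new Error("Unexpected Ollama response shape");
  }
  return response.message.content.trim();
}
```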
Planned improvements include:
Enhancing logging for better debugging and transparency.
Optimizing response parsing for performance improvements.
Implementing function calling support.
By integrating Ollama, developers can now run AI agents at no cost, making AI automation truly accessible to everyone!