Contribution To Covalent Agent Kit

Overview

This guide explains how I contributed to Covalent's AI Agent SDK by integrating Ollama, allowing the use of locally running LLMs (Large Language Models). This integration enables developers to run AI agents without relying on paid API keys, making AI-driven automation more accessible and cost-effective. Here is the link to my PR.

Benefits of Ollama Integration

  • No API Costs: Since Ollama runs locally, there is no need to pay for external LLM API services.

  • Increased Privacy: Data remains on the local machine, enhancing security and confidentiality.

  • Customization & Control: Developers can fine-tune models and experiment without restrictions imposed by third-party services.

  • Offline Capability: AI agents can function even without an active internet connection.

Technical Implementation

1. Defining the Ollama Configuration

I introduced a new provider type, OLLAMA, to the existing model configuration:

type OllamaConfig = {
    provider: "OLLAMA";
    name: OllamaModel;
    toolChoice?: "auto" | "required";
    temperature?: number;
    apiKey?: string;
    baseURL?: string; // Option to override base URL
};
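
For reference, a config object using this type might look like the sketch below. The model name is illustrative; the valid names are whatever the SDK's OllamaModel union defines.

// Minimal sketch of an Ollama-backed model config.
// "llama3.1" is an assumed model name, cast only for illustration.
const ollamaConfig: OllamaConfig = {
    provider: "OLLAMA",
    name: "llama3.1" as OllamaModel,
    temperature: 0.2,
    baseURL: "http://localhost:11434", // default local Ollama endpoint
};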

2. Handling Ollama API Calls

Since Ollama runs locally, API requests are sent to http://localhost:11434/api/chat by default (the endpoint can be overridden through baseURL in the config). Here's how I implemented the request logic:

// Send the chat request to the locally running Ollama server.
const response = await fetch(`${config.baseURL}/api/chat`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
        model: this.model.name,
        messages: messages.map((msg) => ({ role: msg.role, content: msg.content || "" })),
        stream: true, // Ollama streams the reply as newline-delimited JSON
    }),
});
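
Because stream: true is set, Ollama replies with newline-delimited JSON chunks rather than a single body. The sketch below (not the exact SDK code) shows one way to accumulate those chunks into the full message; a production version would also buffer partial lines that span chunk boundaries.

// Read the streamed NDJSON body: each line is a JSON object carrying a
// message.content fragment, with done: true on the final chunk.
const reader = response.body!.getReader();
const decoder = new TextDecoder();
let fullContent = "";

while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    for (const line of decoder.decode(value, { stream: true }).split("\n")) {
        if (!line.trim()) continue;
        const chunk = JSON.parse(line);
        fullContent += chunk.message?.content ?? "";
    }
}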

3. Parsing Responses Efficiently

I structured the response parsing to separate a model's chain of thought (which reasoning models wrap in <think> tags) from its final answer:

// Split the model output into the <think>...</think> reasoning segment
// and the final answer that follows it.
const formatResponse = (content: string): FormattedResponse => {
    const parts = content.split("</think>");
    return {
        thinking: parts[0]?.replace("<think>", "").trim() || "",
        response: parts[1]?.trim() || content.trim(),
    };
};
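
As a quick illustration (assuming FormattedResponse is { thinking: string; response: string }), this is how the helper behaves on output from a reasoning model that emits <think> tags:

// Hypothetical example input from a reasoning model.
const raw = "<think>The user wants a greeting.</think>Hello! How can I help?";
const { thinking, response } = formatResponse(raw);
// thinking  === "The user wants a greeting."
// response  === "Hello! How can I help?"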

Future Enhancements

  • Enhancing logging for better debugging and transparency.

  • Optimizing response parsing for performance improvements.

  • Implementing function (tool) calling.

By integrating Ollama, developers can now run AI agents at no cost, making AI automation truly accessible to everyone!
