Together AI + Klavis AI Integration

In this tutorial, we’ll explore how to build an AI agent that integrates Together AI’s powerful LLMs with Klavis MCP Servers, enabling seamless interaction with external services and APIs.

This integration combines:

  • Together AI: High-performance open-source LLMs with function calling capabilities
  • Klavis AI: MCP servers for connecting to external tools and services

Prerequisites

Before we begin, you’ll need:

  • A Together AI API key: Sign up for Together AI and create an API key
  • A Klavis AI API key: Sign up for Klavis AI and create an API key

Make sure to keep these API keys secure and never commit them to version control!

Installation

First, install the required packages:

pip install together requests
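
To confirm the packages installed correctly, here is a quick sanity check (assuming a standard Python 3.8+ environment):

from importlib.metadata import version

# Print the installed versions of the two required packages
print("together:", version("together"))
print("requests:", version("requests"))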

Setup Environment Variables

import os

# Set environment variables
os.environ["TOGETHER_API_KEY"] = "your-together-api-key-here"  # Replace with your actual Together API key
os.environ["KLAVIS_API_KEY"] = "your-klavis-api-key-here"      # Replace with your actual Klavis API key

Klavis API Client

Set up the Klavis API client to interact with MCP servers:

import requests
import urllib.parse
from typing import Dict, Any, Optional, List
import webbrowser

class KlavisAPI:
    """API Client for Klavis API."""
    
    def __init__(self, api_key: str, base_url: str = "https://api.klavis.ai"):
        self.api_key = api_key
        self.base_url = base_url
        self.headers = {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json"
        }
    
    def _make_request(self, method: str, endpoint: str, **kwargs) -> Dict[str, Any]:
        """Make HTTP request with error handling."""
        url = f"{self.base_url}{endpoint}"
        response = requests.request(method, url, headers=self.headers, **kwargs)
        response.raise_for_status()
        return response.json()
    
    def create_mcp_instance(self, server_name: str, user_id: str, platform_name: str) -> Dict[str, str]:
        """Create MCP server instance."""
        data = {
            "serverName": server_name,
            "userId": user_id,
            "platformName": platform_name,
            "connectionType": "StreamableHttp"
        }
        result = self._make_request("POST", "/mcp-server/instance/create", json=data)
        print(f"✅ Created {server_name} MCP instance")
        return {
            'serverUrl': result['serverUrl'],
            'instanceId': result['instanceId']
        }
    
    def list_tools(self, server_url: str) -> Dict[str, Any]:
        """List all available tools for an MCP server."""
        params = {"connection_type": "StreamableHttp"}
        encoded_url = urllib.parse.quote(server_url, safe='')
        return self._make_request("GET", f"/mcp-server/list-tools/{encoded_url}", params=params)
    
    def _convert_mcp_tools_to_openai_format(self, mcp_tools: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
        """Convert MCP tools format to OpenAI function calling format."""
        openai_tools = []
        
        for tool in mcp_tools:
            openai_tool = {
                "type": "function",
                "function": {
                    "name": tool.get("name", ""),
                    "description": tool.get("description", ""),
                    "parameters": tool.get("inputSchema", {})
                }
            }
            openai_tools.append(openai_tool)
        
        return openai_tools
    
    def call_tool(self, server_url: str, tool_name: str, 
                  tool_args: Optional[Dict[str, Any]] = None) -> Dict[str, Any]:
        """Call tool on MCP server."""
        data = {
            "serverUrl": server_url,
            "toolName": tool_name,
            "toolArgs": tool_args or {},
            "connectionType": "StreamableHttp"
        }
        return self._make_request("POST", "/mcp-server/call-tool", json=data)
    
    def redirect_to_oauth(self, instance_id: str, server_name: str) -> None:
        """Open OAuth authorization URL in browser."""
        oauth_url = f"{self.base_url}/oauth/{server_name.lower()}/authorize?instance_id={instance_id}"
        print(f"🔐 Opening OAuth authorization for {server_name}")
        print(f"If you are not redirected automatically, please open this URL: {oauth_url}")
        webbrowser.open(oauth_url)
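
To make the conversion step concrete, here is what _convert_mcp_tools_to_openai_format produces for a single tool definition. The tool below is hypothetical and for illustration only; real tool names and schemas come from the MCP server's list-tools response:

# Hypothetical MCP tool definition (illustration only)
mcp_tool = {
    "name": "get_video_transcript",
    "description": "Fetch the transcript of a video",
    "inputSchema": {
        "type": "object",
        "properties": {"url": {"type": "string"}},
        "required": ["url"],
    },
}

klavis = KlavisAPI(api_key=os.environ["KLAVIS_API_KEY"])
openai_tools = klavis._convert_mcp_tools_to_openai_format([mcp_tool])
# openai_tools[0] == {
#     "type": "function",
#     "function": {
#         "name": "get_video_transcript",
#         "description": "Fetch the transcript of a video",
#         "parameters": { ...the inputSchema above... },
#     },
# }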

AI Agent with MCP Integration

Now we’ll create an intelligent agent that uses Together AI’s powerful LLMs with Klavis MCP servers. This agent will:

  1. Discover Tools: Automatically find available tools from MCP servers
  2. Function Calling: Use Together AI’s function calling capabilities
  3. Tool Execution: Execute tools through Klavis API
  4. Smart Responses: Generate intelligent responses based on tool results

import json

class Agent:
    def __init__(self, together_client, klavis_api_client, mcp_server_url, model="meta-llama/Llama-3.3-70B-Instruct-Turbo"):
        self.together = together_client
        self.klavis = klavis_api_client
        self.mcp_server_url = mcp_server_url
        self.model = model
        print(f"🤖 Agent initialized with Together AI model: {self.model}")
    
    def process_request(self, user_message):
        """Process a user request using Together AI + Klavis integration."""
        # 1. Get available tools from the MCP server
        mcp_tools = self.klavis.list_tools(self.mcp_server_url)
        tools = self.klavis._convert_mcp_tools_to_openai_format(mcp_tools.get('tools', []))
        
        # 2. Call Together AI LLM with available tools
        messages = [
            {"role": "system", "content": "You are a helpful AI assistant with access to various tools."},
            {"role": "user", "content": user_message}
        ]
        
        response = self.together.chat.completions.create(
            model=self.model,
            messages=messages,
            tools=tools
        )
        
        assistant_message = response.choices[0].message
        messages.append(assistant_message)
        
        # 3. Execute any tool calls requested by the LLM
        if assistant_message.tool_calls:
            
            # Execute each tool call
            for tool_call in assistant_message.tool_calls:
                tool_name = tool_call.function.name
                tool_args = json.loads(tool_call.function.arguments)
                
                print(f"🛠️ Calling tool: {tool_name} with args: {tool_args}")
                
                # Call tool via Klavis API
                tool_result = self.klavis.call_tool(
                    server_url=self.mcp_server_url,
                    tool_name=tool_name,
                    tool_args=tool_args
                )
                
                # Add tool result to conversation
                messages.append({
                    "role": "tool",
                    "tool_call_id": tool_call.id,
                    "content": json.dumps(tool_result)
                })
            
            # 4. Get final response from Together AI
            final_response = self.together.chat.completions.create(
                model=self.model,
                messages=messages
            )
            return final_response.choices[0].message.content
        
        # If no tools were needed, return the assistant's response directly
        return assistant_message.content

Use Case Examples

Example 1: Summarize YouTube Video

  1. Initialize Clients: Set up the Together AI and Klavis API clients
  2. Create MCP Instance: Create a YouTube MCP server instance
  3. Process Request: Use the agent to analyze and summarize a YouTube video

import os
from together import Together

# Example YouTube video URL - replace with any video you'd like to analyze
YOUTUBE_VIDEO_URL = "https://www.youtube.com/watch?v=TG6QOa2JJJQ"

# 1. Initialize Together AI client and Klavis API client
together_client = Together(api_key=os.getenv("TOGETHER_API_KEY"))
klavis_api_client = KlavisAPI(api_key=os.getenv("KLAVIS_API_KEY"))

# 2. Create a YouTube MCP server instance using Klavis API
youtube_mcp_instance = klavis_api_client.create_mcp_instance(
    server_name="YouTube",
    user_id="1234",
    platform_name="Klavis",
)

# 3. Create an agent with YouTube MCP server
agent = Agent(
    together_client=together_client, 
    klavis_api_client=klavis_api_client, 
    mcp_server_url=youtube_mcp_instance["serverUrl"],
    model="meta-llama/Llama-3.3-70B-Instruct-Turbo"
)

# 4. Process the request
response = agent.process_request(
    f"Please analyze this YouTube video and provide a comprehensive summary with timestamps: {YOUTUBE_VIDEO_URL}"
)

print(response)

Example 2: Send Email via Gmail

Gmail integration requires OAuth authentication, so you’ll need to authorize the application in your browser.

# Email configuration
EMAIL_RECIPIENT = "recipient@example.com"  # Replace with the recipient's email
EMAIL_SUBJECT = "Greetings from Together AI + Klavis Integration"
EMAIL_BODY = "This is a test email sent using the Together AI and Klavis AI integration. The email was sent automatically by your AI agent!"

# 1. Create a Gmail MCP server instance using Klavis API
gmail_mcp_instance = klavis_api_client.create_mcp_instance(
    server_name="Gmail",
    user_id="1234",
    platform_name="Klavis",
)

# 2. Redirect to Gmail OAuth page for authorization
klavis_api_client.redirect_to_oauth(gmail_mcp_instance["instanceId"], "Gmail")

# 3. After OAuth authorization is complete, create the Gmail agent
gmail_agent = Agent(
    together_client=together_client,
    klavis_api_client=klavis_api_client,
    mcp_server_url=gmail_mcp_instance["serverUrl"],
    model="Qwen/Qwen2.5-72B-Instruct-Turbo"
)

# 4. Send the email
response = gmail_agent.process_request(
    f"Please send an email to {EMAIL_RECIPIENT} with the subject '{EMAIL_SUBJECT}' and the following body: '{EMAIL_BODY}'"
)

print(response)

Next Steps

  • Explore More MCP Servers: Try other available servers like Slack, Notion, CRM, etc.
  • Experiment with Different Models: Test various Together AI models for different use cases.
  • Build Multi-Server Workflows: Create sophisticated agents that combine multiple services (see the sketch below).
  • Production Deployment: Scale these patterns for production applications.
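
As a starting point for multi-server workflows, here is a minimal sketch of an agent that merges the tools from several Klavis MCP servers and routes each tool call back to the server that provides it. It reuses the KlavisAPI client and Together AI client from above; which servers you combine is up to you, and the commented usage at the end assumes the YouTube and Gmail instances created earlier in this tutorial:

import json

class MultiServerAgent:
    """Sketch: route each tool call to whichever MCP server exposes that tool."""

    def __init__(self, together_client, klavis_api_client, server_urls,
                 model="meta-llama/Llama-3.3-70B-Instruct-Turbo"):
        self.together = together_client
        self.klavis = klavis_api_client
        self.model = model
        self.tools = []           # combined tool list in OpenAI function-calling format
        self.tool_routing = {}    # tool name -> MCP server URL that provides it
        for url in server_urls:
            mcp_tools = self.klavis.list_tools(url).get("tools", [])
            self.tools.extend(self.klavis._convert_mcp_tools_to_openai_format(mcp_tools))
            for tool in mcp_tools:
                self.tool_routing[tool.get("name", "")] = url

    def process_request(self, user_message):
        messages = [
            {"role": "system", "content": "You are a helpful AI assistant with access to various tools."},
            {"role": "user", "content": user_message},
        ]
        response = self.together.chat.completions.create(
            model=self.model, messages=messages, tools=self.tools
        )
        assistant_message = response.choices[0].message
        messages.append(assistant_message)

        if assistant_message.tool_calls:
            for tool_call in assistant_message.tool_calls:
                tool_name = tool_call.function.name
                tool_args = json.loads(tool_call.function.arguments)
                # Route the call to the server that owns this tool
                tool_result = self.klavis.call_tool(
                    server_url=self.tool_routing[tool_name],
                    tool_name=tool_name,
                    tool_args=tool_args,
                )
                messages.append({
                    "role": "tool",
                    "tool_call_id": tool_call.id,
                    "content": json.dumps(tool_result),
                })
            final_response = self.together.chat.completions.create(
                model=self.model, messages=messages
            )
            return final_response.choices[0].message.content

        return assistant_message.content

# Example usage (assumes the YouTube and Gmail instances from the examples above):
# multi_agent = MultiServerAgent(
#     together_client=together_client,
#     klavis_api_client=klavis_api_client,
#     server_urls=[youtube_mcp_instance["serverUrl"], gmail_mcp_instance["serverUrl"]],
# )
# print(multi_agent.process_request(
#     f"Summarize {YOUTUBE_VIDEO_URL} and email the summary to {EMAIL_RECIPIENT}."
# ))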

Happy building with Together AI and Klavis! 🚀