Get Started with Trunk AI

Follow this guide to quickly integrate Trunk AI into your applications and start leveraging our unified AI orchestration platform.

Quick Start Guide

1. Install the SDK

Install the Trunk AI SDK using your preferred package manager.

bash
npm install @trunk/ai

This installs the core SDK, which provides access to all Trunk AI features.
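
If you prefer yarn or pnpm, the equivalent commands look like this (assuming the package is published under the same @trunk/ai name):

bash
yarn add @trunk/ai
# or
pnpm add @trunk/ai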

2. Configure Your API Key

Set up your API key to authenticate with Trunk AI services.

Create a .env file in your project root and add your API key:

bash
TRUNK_API_KEY=your_api_key_here

You can get your API key from the Trunk AI Dashboard.
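
The examples below read the key from process.env. If you are running in plain Node.js, variables from a .env file are not loaded automatically; one common approach (an assumption about your setup, not a Trunk AI requirement) is the dotenv package:

javascript
// npm install dotenv
// Loads variables from .env into process.env
import 'dotenv/config';

console.log(process.env.TRUNK_API_KEY ? 'API key loaded' : 'API key missing');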

3. Initialize the Client

Create and configure your Trunk AI client.

javascript
import { TrunkAI } from '@trunk/ai';

// Initialize the client
const trunk = new TrunkAI({
  apiKey: process.env.TRUNK_API_KEY,
  // Optional configuration
  region: 'us-west',
  maxBatchSize: 10,
});

// Test the connection
async function testConnection() {
  const status = await trunk.getStatus();
  console.log('Connection status:', status);
}

testConnection();

4. Create Your First Request

Send your first AI request through Trunk's orchestration layer.

javascript
import { TrunkAI } from '@trunk/ai';

const trunk = new TrunkAI({
  apiKey: process.env.TRUNK_API_KEY,
});

async function generateResponse() {
  // Create a request
  const response = await trunk.generate({
    prompt: "Explain quantum computing in simple terms",
    model: "gpt-4",
    maxTokens: 150,
    // Optional: Enable verification
    verify: true,
  });
  
  console.log('Response:', response.text);
  
  // If verification is enabled
  if (response.verification) {
    console.log('Verification proof:', response.verification.proofId);
  }
}

generateResponse();

5. Implement Batch Processing

Optimize performance with Trunk AI's intelligent batch processing.

Trunk AI automatically optimizes batch processing for you, but you can also manually create and manage batches for more control:

javascript
import { TrunkAI } from '@trunk/ai';

const trunk = new TrunkAI({
  apiKey: process.env.TRUNK_API_KEY,
});

async function processBatch() {
  // Create a batch
  const batch = trunk.createBatch({
    model: "gpt-4",
    maxTokens: 100,
  });
  
  // Add requests to the batch
  batch.add({ prompt: "Summarize the benefits of AI orchestration" });
  batch.add({ prompt: "Explain the concept of batch processing" });
  batch.add({ prompt: "What are Merkle proofs?" });
  
  // Process the batch
  const results = await batch.process();
  
  // Handle results
  results.forEach((result, index) => {
    console.log(`Result ${index + 1}:`, result.text);
  });
}

processBatch();

Batch processing significantly reduces latency and costs when handling multiple AI requests.
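
For production use you will likely want to handle failures as well. As a minimal sketch, assuming batch.process() rejects with an error when a batch cannot be completed (check the SDK's error-handling behavior to confirm), you can wrap it in a try/catch:

javascript
async function processBatchSafely(batch) {
  try {
    // Assumption: process() rejects on failure
    return await batch.process();
  } catch (error) {
    console.error('Batch processing failed:', error);
    return [];
  }
}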

6. Verify Results On-Chain

Implement verifiable AI results with Solana integration.

Trunk AI provides built-in verification of AI results using Solana's blockchain:

javascript
import { TrunkAI } from '@trunk/ai';

const trunk = new TrunkAI({
  apiKey: process.env.TRUNK_API_KEY,
});

async function verifyResult() {
  // Generate response with verification enabled
  const response = await trunk.generate({
    prompt: "What is the capital of France?",
    model: "gpt-4",
    verify: true,
  });
  
  console.log('Response:', response.text);
  
  // Get the verification proof
  const proofId = response.verification.proofId;
  
  // Verify the result
  const verification = await trunk.verify(proofId);
  
  console.log('Verification status:', verification.status);
  console.log('Solana transaction:', verification.transaction);
  console.log('Merkle proof:', verification.merkleProof);
}

verifyResult();

On-chain verification provides cryptographic proof that AI outputs haven't been tampered with, ensuring data integrity and trust.
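
If you want to inspect the on-chain record yourself, and assuming verification.transaction is a standard base-58 Solana transaction signature (an assumption based on the fields above; adjust the cluster to wherever Trunk AI anchors its proofs), you could look it up with @solana/web3.js:

javascript
import { Connection, clusterApiUrl } from '@solana/web3.js';

// Hypothetical helper: fetch the transaction referenced by a verification result
async function inspectProofTransaction(signature) {
  const connection = new Connection(clusterApiUrl('mainnet-beta'));
  const tx = await connection.getTransaction(signature, {
    maxSupportedTransactionVersion: 0,
  });
  console.log('Slot:', tx?.slot);
  console.log('Block time:', tx?.blockTime);
}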

Use Cases

Voice Assistants

Build voice assistants with contextual awareness and natural language processing, optimized for low latency and high reliability.

javascript
// Initialize voice processing
const voiceAssistant = trunk.createVoiceAssistant({
  contextRetention: true,
  responseLatency: "low",
});

// Process voice input
const response = await voiceAssistant.process({
  audioInput: audioBuffer,
  previousContext: sessionContext,
});

Content Generation

Generate high-quality content at scale with optimized batch processing and cost-efficient resource allocation.

javascript
// Create a content generation batch
const contentBatch = trunk.createBatch({
  model: "gpt-4",
  temperature: 0.7,
});

// Add content requests
topics.forEach(topic => {
  contentBatch.add({
    prompt: `Write a blog post about ${topic}`,
    maxTokens: 500,
  });
});

// Process in parallel
const articles = await contentBatch.process();

Data Analysis

Process and analyze large datasets with AI, optimizing for throughput and accuracy while maintaining verifiable results.

javascript
// Analyze data with verification
const analysis = await trunk.analyze({
  dataset: datasetUrl,
  analysisType: "sentiment",
  granularity: "high",
  verify: true,
});

// Get insights with verification proof
console.log(analysis.results);
console.log(analysis.verification.proofId);

Real-time Chat

Build responsive chat applications with contextual awareness and sub-second response times.

javascript
// Initialize chat context
const chatSession = trunk.createChatSession({
  model: "gpt-4",
  contextWindow: 10,
});

// Process user message
const response = await chatSession.sendMessage({
  content: userMessage,
  role: "user",
});

console.log(response.content);

Document Processing

Extract information, summarize, and analyze documents with high accuracy and verifiable results.

javascript
// Process document
const docAnalysis = await trunk.processDocument({
  document: documentBuffer,
  tasks: ["summarize", "extract_entities", "sentiment"],
  outputFormat: "json",
});

// Access structured results
console.log(docAnalysis.summary);
console.log(docAnalysis.entities);

Multi-modal AI

Process and generate content across text, image, and audio modalities with unified orchestration.

javascript
// Process multi-modal content
const result = await trunk.processMultiModal({
  text: textPrompt,
  image: imageBuffer,
  outputModality: "text",
  model: "multimodal-gpt4",
});

console.log(result.text);

Next Steps

Explore the Documentation

Dive deeper into Trunk AI's capabilities with our comprehensive documentation.

View Documentation

Join the Community

Connect with other developers and get help from the Trunk AI community.

Join Community

Deploy to Production

Learn best practices for deploying Trunk AI in production environments.

Deployment Guide