Adding AI Capabilities to JavaScript Apps
This article is based on my talk at the JavaScript Israel Meetup. Here are the slides and the accompanying playground project.
Artificial intelligence is revolutionizing how we build applications, and as developers, we now have powerful tools at our disposal to easily integrate AI into our projects. In this post, we’ll explore how to add AI capabilities to React apps using the Vercel AI SDK and Ollama for local AI processing.
What is an AI Engineer?
Before diving into the technical details, it’s worth defining what an AI Engineer does. Unlike data scientists who focus on training models, AI Engineers work on the application side of AI, using APIs and SDKs to integrate AI capabilities into software. We’re not training models, but rather using them to solve real-world problems.
Some key skills for AI Engineers include:
- Proficiency with AI APIs and SDKs
- Prompt engineering
- Building AI-powered applications
- Familiarity with AI concepts and limitations
The good news is that you can work effectively with AI as a developer without any model-training background. As we like to say, “You don’t need to be a Data Scientist to be an AI Engineer.”
Local AI with Ollama
While cloud-based AI services are powerful, there are scenarios where you might want to run AI models locally. This is where Ollama comes in.
Ollama is an open-source platform that allows you to run large language models locally on your machine. It’s particularly useful for:
- Offline development
- Privacy-sensitive applications
- Reducing cloud computing costs
- Experimenting with open-source models
Key features of Ollama include:
- Run open-source AI models locally
- Works offline
- Free and open-source
- Provides a CLI interface
- Supports vision models
- Offers integrations with various development tools
- Exposes a REST API compatible with OpenAI’s API structure
Once Ollama is installed on your machine, it acts as a local building block: third-party applications can consume it to provide AI capabilities.
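For example, downloading a model and chatting with it takes two CLI commands (llama3 here is just one of the models available in the Ollama library):
ollama pull llama3
ollama run llama3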
Calling Ollama via REST API
Here’s an example of how to call Ollama’s API:
const response = await fetch("http://localhost:11434/v1/chat/completions", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "llama3",
    messages: [
      { role: "system", content: "You are a helpful assistant." },
      { role: "user", content: "Hello!" },
    ],
  }),
});

const data = await response.json();
// The response body follows OpenAI's shape
console.log(data.choices[0].message.content);
Notice how similar it is to OpenAI’s API structure. This compatibility makes it easier to switch between different AI providers in your applications.
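In fact, pointing the same request at OpenAI’s hosted API only requires changing the URL, the model name, and adding an Authorization header (this sketch assumes an OPENAI_API_KEY environment variable):
const response = await fetch("https://api.openai.com/v1/chat/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
  },
  body: JSON.stringify({
    model: "gpt-4", // an OpenAI-hosted model instead of a local one
    messages: [
      { role: "system", content: "You are a helpful assistant." },
      { role: "user", content: "Hello!" },
    ],
  }),
});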
Using Ollama with OpenAI Interface
Ollama is compatible with the OpenAI interface:
import { OpenAI } from "openai";

const openai = new OpenAI({
  baseURL: "http://localhost:11434/v1",
  apiKey: "ollama", // the SDK requires a key, but Ollama ignores it
});

const response = await openai.chat.completions.create({
  model: "llama3",
  messages: [{ role: "user", content: "Tell me a one-line cool fact about AI" }],
});

console.log(response.choices[0].message.content);
Ollama SDK
For more direct integration, Ollama also provides its own SDK:
import ollama from "ollama";

const response = await ollama.chat({
  model: "llama3",
  messages: [{ role: "user", content: "Why is the sky blue?" }],
  stream: true,
});

// With stream: true, the response is an async iterable of chunks
for await (const part of response) {
  process.stdout.write(part.message.content);
}
Getting Started with the Vercel AI SDK
The Vercel AI SDK provides a high-level abstraction for working with AI models, making it easy to add AI capabilities to your React applications. Let’s start with a practical example using Google’s Gemini AI:
pnpm add ai @ai-sdk/google
Create a .env.local file with your Google API key:
GOOGLE_GENERATIVE_AI_API_KEY="YOUR_KEY"
Then you can use it like this:
import { google } from "@ai-sdk/google";
import { generateText } from "ai";

const result = await generateText({
  model: google("models/gemini-1.5-flash-latest"),
  prompt: "Tell me a joke.",
});

console.log(result.text);
Switching Between AI Providers
One of the AI SDK’s great strengths is how easy it makes switching between AI providers:
import { openai } from "@ai-sdk/openai";
import { generateText } from "ai";

const result = await generateText({
  model: openai("gpt-4"),
  prompt: "Tell me a joke.",
});
You can easily swap between providers like OpenAI, Google, Anthropic, Mistral, or even local models running with Ollama, as sketched below.
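There is no official first-party Ollama provider, but community packages exist. As a sketch, assuming the community ollama-ai-provider package and a locally pulled llama3 model, the swap looks the same:
import { ollama } from "ollama-ai-provider"; // community provider, not part of the official SDK
import { generateText } from "ai";

const result = await generateText({
  model: ollama("llama3"), // any model pulled locally with `ollama pull`
  prompt: "Tell me a joke.",
});

console.log(result.text);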
Working with Images and Structured Data
The SDK also supports working with images and generating structured objects:
import { google } from "@ai-sdk/google";
import { generateObject, generateText } from "ai";
import { z } from "zod";

// Image example
const imageResult = await generateText({
  model: google("models/gemini-1.5-flash-latest"),
  messages: [
    {
      role: "user",
      content: [
        { type: "text", text: "Describe the image in detail." },
        { type: "image", image: "https://example.com/image.jpg" },
      ],
    },
  ],
});

console.log(imageResult.text);

// Structured data example
const schema = z.object({
  recipe: z.object({
    name: z.string(),
    ingredients: z.array(
      z.object({
        name: z.string(),
        amount: z.string(),
      }),
    ),
    steps: z.array(z.string()),
  }),
});

const { object } = await generateObject({
  model: google("models/gemini-1.5-flash-latest"),
  schema,
  prompt: "Generate a lasagna recipe.",
});

// `object` is fully typed according to the Zod schema
console.log(object.recipe.name);
AI-Powered UI Components
The AI SDK provides React hooks and components to easily create interactive AI interfaces:
"use client";
import React from "react";
import { useChat } from "ai/react";
export default function Chat() {
const { messages, input, handleInputChange, handleSubmit } = useChat();
return (
<div>
{messages.map((m) => (
<div key={m.id}>
{m.role === "user" ? "User: " : "AI: "}
{m.content}
</div>
))}
<form onSubmit={handleSubmit}>
<input
value={input}
placeholder="Say something..."
onChange={handleInputChange}
/>
</form>
</div>
);
}
Under the hood, useChat uses SWR and calls the /api/chat endpoint, which we implement as a route handler:
// app/api/chat/route.ts
import { google } from "@ai-sdk/google";
import { streamText } from "ai";

export async function POST(request: Request) {
  const { messages } = await request.json();

  const result = await streamText({
    model: google("models/gemini-1.5-flash-latest"),
    system: "You are a helpful assistant.",
    messages,
  });

  return result.toAIStreamResponse();
}
Generative UI with React Server Components
The AI SDK also supports creating dynamic, AI-generated UIs using React Server Components:
"use server";
export async function streamComponent() {
const result = await streamUI({
model: openai("gpt-4"),
prompt: "Get the weather for San Francisco",
text: ({ content }) => <div>{content}</div>,
tools: {
getWeather: {
description: "Get the weather for a location",
parameters: z.object({
location: z.string(),
}),
generate: async function* ({ location }) {
yield <LoadingComponent />;
const weather = await getWeather(location);
return <WeatherComponent weather={weather} location={location} />;
},
},
},
});
return result.value;
}
This example demonstrates how to use generator functions to create streaming UI components, allowing for dynamic, responsive interfaces that update as the AI generates content.
The Future of AI in the Browser
Exciting developments are on the horizon for AI in web development:
- Chrome 127 is experimenting with a built-in AI provider, allowing developers to access AI capabilities directly through the browser. This could potentially simplify AI integration even further (see the sketch after this list).
- WebGPU is enabling more complex AI models to run directly in the browser, opening up new possibilities for responsive and privacy-preserving AI applications.
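At the time of writing, Chrome’s built-in AI is behind experimental flags and the API surface is still changing between releases, but a rough sketch of the early shape looks like this:
// Experimental window.ai API in Chrome 127 (behind flags; likely to change)
const session = await window.ai.createTextSession();
const answer = await session.prompt("Tell me a joke.");
console.log(answer);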
Conclusion
The Vercel AI SDK, combined with tools like Ollama, makes it easier than ever to add powerful AI capabilities to your React applications. Whether you’re building chatbots, content generation tools, or dynamic UIs, these tools provide a flexible and intuitive way to work with various AI models, both in the cloud and locally.
By incorporating Ollama into your development workflow, you can enjoy the benefits of local AI processing while still leveraging the power and flexibility of the Vercel AI SDK. This hybrid approach allows you to choose the best solution for each specific use case in your application.
As AI continues to evolve, staying up-to-date with these tools and techniques will be crucial for developers looking to build the next generation of intelligent applications. Remember to consider important aspects like security, rate limits, and prompt engineering as you integrate AI into your projects. The field is rapidly evolving, so keep learning and experimenting!
I hope you found this introduction to the Vercel AI SDK and Ollama helpful. For more information and advanced topics, check out the official documentation for the Vercel AI SDK and the Ollama website, and stay tuned for future developments in this exciting field!