Initial commit

St. Nebula
2026-04-23 23:58:59 -05:00
commit 47b9e3c159
257 changed files with 18913 additions and 0 deletions
@@ -0,0 +1,97 @@
---
name: developing-genkit-go
description: Develop AI-powered applications using Genkit in Go. Use when the user asks to build AI features, agents, flows, or tools in Go using Genkit, or when working with Genkit Go code involving generation, prompts, streaming, tool calling, or model providers.
metadata:
genkit-managed: true
---
# Genkit Go
Genkit Go is an AI SDK for Go that provides generation, structured output, streaming, tool calling, prompts, and flows with a unified interface across model providers.
## Hello World
```go
package main

import (
  "context"
  "log"
  "net/http"

  "github.com/genkit-ai/genkit/go/ai"
  "github.com/genkit-ai/genkit/go/genkit"
  "github.com/genkit-ai/genkit/go/plugins/googlegenai"
  "github.com/genkit-ai/genkit/go/plugins/server"
)

func main() {
  ctx := context.Background()
  g := genkit.Init(ctx, genkit.WithPlugins(&googlegenai.GoogleAI{}))

  genkit.DefineFlow(g, "jokeFlow", func(ctx context.Context, topic string) (string, error) {
    return genkit.GenerateText(ctx, g,
      ai.WithModelName("googleai/gemini-flash-latest"),
      ai.WithPrompt("Tell me a joke about %s", topic),
    )
  })

  mux := http.NewServeMux()
  for _, f := range genkit.ListFlows(g) {
    mux.HandleFunc("POST /"+f.Name(), genkit.Handler(f))
  }
  log.Fatal(server.Start(ctx, "127.0.0.1:8080", mux))
}
```
## Core Features
Load the appropriate reference based on what you need:
| Feature | Reference | When to load |
| --- | --- | --- |
| Initialization | [references/getting-started.md](references/getting-started.md) | Setting up `genkit.Init`, plugins, the `*Genkit` instance pattern |
| Generation | [references/generation.md](references/generation.md) | `Generate`, `GenerateText`, `GenerateData`, streaming, output formats |
| Prompts | [references/prompts.md](references/prompts.md) | `DefinePrompt`, `DefineDataPrompt`, `.prompt` files, schemas |
| Tools | [references/tools.md](references/tools.md) | `DefineTool`, tool interrupts, `RestartWith`/`RespondWith` |
| Flows & HTTP | [references/flows-and-http.md](references/flows-and-http.md) | `DefineFlow`, `DefineStreamingFlow`, `genkit.Handler`, HTTP serving |
| Model Providers | [references/providers.md](references/providers.md) | Google AI, Vertex AI, Anthropic, OpenAI-compatible, Ollama setup |
## Genkit CLI
Check if installed: `genkit --version`
**Installation:**
```bash
curl -sL cli.genkit.dev | bash
```
**Key commands:**
```bash
# Start app with Developer UI (tracing, flow testing) at http://localhost:4000
genkit start -- go run .
genkit start -o -- go run . # also opens browser
# Run a flow directly from the CLI
genkit flow:run myFlow '{"data": "input"}'
genkit flow:run myFlow '{"data": "input"}' --stream # with streaming
genkit flow:run myFlow '{"data": "input"}' --wait # wait for completion
# Look up Genkit documentation
genkit docs:search "streaming" go
genkit docs:list go
genkit docs:read go/flows.md
```
See [references/getting-started.md](references/getting-started.md) for full CLI and Developer UI details.
## Key Guidance
- **Pass `g` explicitly.** The `*Genkit` instance returned by `genkit.Init` is the central registry. Pass it to all Genkit functions rather than storing it as a global. This is a core pattern throughout the SDK.
- **Wrap AI logic in flows.** Flows give you tracing, observability, HTTP deployment via `genkit.Handler`, and the ability to test from the Developer UI and CLI. Any generation call worth keeping should live in a flow.
- **Use `jsonschema:"description=..."` struct tags on output types.** The model uses these descriptions to understand what each field should contain. Without them, structured output quality drops significantly.
- **Write good tool descriptions.** The model decides which tools to call based on their description string. Vague descriptions lead to missed or incorrect tool calls.
- **Use `.prompt` files for complex prompts.** They separate prompt content from Go code, support Handlebars templating, and can be iterated on without recompilation. Code-defined prompts are better for simple, single-line cases.
- **Look up the latest model IDs.** Model names change frequently. Check provider documentation for current model IDs rather than relying on hardcoded names. See [references/providers.md](references/providers.md).
@@ -0,0 +1,183 @@
# Flows & HTTP
## DefineFlow
Wrap AI logic in a flow for observability, tracing, and HTTP deployment.
```go
jokeFlow := genkit.DefineFlow(g, "jokeFlow",
  func(ctx context.Context, topic string) (string, error) {
    return genkit.GenerateText(ctx, g,
      ai.WithModelName("googleai/gemini-flash-latest"),
      ai.WithPrompt("Tell me a joke about %s", topic),
    )
  },
)
```
### Running a Flow Directly
```go
result, err := jokeFlow.Run(ctx, "cats")
```
## DefineStreamingFlow
Flows that stream chunks back to the caller. Two common patterns:
### Pattern 1: Passthrough Streaming
Pass the stream callback directly through to `WithStreaming`. The callback type is `ai.ModelStreamCallback` = `func(context.Context, *ai.ModelResponseChunk) error`:
```go
genkit.DefineStreamingFlow(g, "streamingJokeFlow",
  func(ctx context.Context, topic string, sendChunk ai.ModelStreamCallback) (string, error) {
    resp, err := genkit.Generate(ctx, g,
      ai.WithModelName("googleai/gemini-flash-latest"),
      ai.WithPrompt("Tell me a long joke about %s", topic),
      ai.WithStreaming(sendChunk), // passthrough
    )
    if err != nil {
      return "", err
    }
    return resp.Text(), nil
  },
)
```
### Pattern 2: Manual String Streaming
Use `core.StreamCallback[string]` to stream extracted text:
```go
genkit.DefineStreamingFlow(g, "streamingJokeFlow",
  func(ctx context.Context, topic string, sendChunk core.StreamCallback[string]) (string, error) {
    stream := genkit.GenerateStream(ctx, g,
      ai.WithModelName("googleai/gemini-flash-latest"),
      ai.WithPrompt("Tell me a long joke about %s", topic),
    )
    for result, err := range stream {
      if err != nil {
        return "", err
      }
      if result.Done {
        return result.Response.Text(), nil
      }
      if err := sendChunk(ctx, result.Chunk.Text()); err != nil {
        return "", err
      }
    }
    return "", nil
  },
)
```
### Typed Streaming Flows
Use `core.StreamCallback[T]` with `GenerateDataStream` for typed chunks:
```go
genkit.DefineStreamingFlow(g, "structuredStream",
  func(ctx context.Context, input JokeRequest, sendChunk core.StreamCallback[*Joke]) (*Joke, error) {
    stream := genkit.GenerateDataStream[*Joke](ctx, g,
      ai.WithModelName("googleai/gemini-flash-latest"),
      ai.WithPrompt("Tell me a joke about %s", input.Topic),
    )
    for result, err := range stream {
      if err != nil {
        return nil, err
      }
      if result.Done {
        return result.Output, nil
      }
      if err := sendChunk(ctx, result.Chunk); err != nil {
        return nil, err
      }
    }
    return nil, nil
  },
)
```
## Named Sub-Steps
Use `core.Run` inside a flow for traced sub-steps:
```go
genkit.DefineFlow(g, "pipeline",
  func(ctx context.Context, input string) (string, error) {
    subject, err := core.Run(ctx, "extract-subject", func() (string, error) {
      return genkit.GenerateText(ctx, g,
        ai.WithPrompt("Extract the subject from: %s", input),
      )
    })
    if err != nil {
      return "", err
    }
    joke, err := core.Run(ctx, "generate-joke", func() (string, error) {
      return genkit.GenerateText(ctx, g,
        ai.WithPrompt("Tell me a joke about %s", subject),
      )
    })
    return joke, err
  },
)
```
## HTTP Handlers
### genkit.Handler
Convert any flow into an `http.HandlerFunc`:
```go
mux := http.NewServeMux()
for _, f := range genkit.ListFlows(g) {
  mux.HandleFunc("POST /"+f.Name(), genkit.Handler(f))
}
log.Fatal(server.Start(ctx, "127.0.0.1:8080", mux))
```
### Request/Response Format
**Non-streaming request:**
```bash
curl -X POST http://localhost:8080/jokeFlow \
-H "Content-Type: application/json" \
-d '{"data": "bananas"}'
```
Response: `{"result": "Why did the banana go to the doctor?..."}`
**Streaming request:**
```bash
curl -N -X POST http://localhost:8080/streamingJokeFlow \
-H "Content-Type: application/json" \
-d '{"data": "bananas"}'
```
Streaming responses use Server-Sent Events (SSE) format.
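A sketch of consuming such a stream client-side with only the standard library. The sample payload lines here are illustrative assumptions; the actual event JSON is defined by Genkit's handler:

```go
package main

import (
  "bufio"
  "fmt"
  "strings"
)

// readSSE extracts the "data:" payloads from an SSE body.
func readSSE(body string) []string {
  var events []string
  sc := bufio.NewScanner(strings.NewReader(body))
  for sc.Scan() {
    line := sc.Text()
    if strings.HasPrefix(line, "data:") {
      events = append(events, strings.TrimSpace(strings.TrimPrefix(line, "data:")))
    }
  }
  return events
}

func main() {
  // Hypothetical chunk payloads; the real JSON shape comes from Genkit.
  body := "data: {\"chunk\":\"Why did\"}\n\ndata: {\"chunk\":\" the banana...\"}\n\n"
  fmt.Println(len(readSSE(body))) // 2
}
```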
### genkit.HandlerFunc
For frameworks that expect error-returning handlers:
```go
handler := genkit.HandlerFunc(myFlow)
// handler is func(http.ResponseWriter, *http.Request) error
```
### Context Providers
Inject request context (e.g., auth headers) into flow execution:
```go
mux.HandleFunc("POST /myFlow", genkit.Handler(myFlow,
  genkit.WithContextProviders(func(ctx context.Context, rd core.RequestData) (api.ActionContext, error) {
    // rd.Headers contains HTTP headers
    return api.ActionContext{"userId": rd.Headers.Get("X-User-Id")}, nil
  }),
))
```
### ListFlows
Get all registered flows for dynamic route setup:
```go
flows := genkit.ListFlows(g) // []api.Action
for _, f := range flows {
  fmt.Println(f.Name())
}
```
@@ -0,0 +1,176 @@
# Generation
## GenerateText
Simplest form. Returns a string.
```go
text, err := genkit.GenerateText(ctx, g,
  ai.WithModelName("googleai/gemini-flash-latest"),
  ai.WithPrompt("Tell me a joke about %s", topic),
)
```
## Generate
Returns a full `*ModelResponse` with metadata, usage stats, and history.
```go
resp, err := genkit.Generate(ctx, g,
  ai.WithModelName("googleai/gemini-flash-latest"),
  ai.WithSystem("You are a helpful assistant."),
  ai.WithPrompt("Explain %s", topic),
)
fmt.Println(resp.Text())       // concatenated text
fmt.Println(resp.FinishReason) // ai.FinishReasonStop, etc.
fmt.Println(resp.Usage)        // token counts
```
## GenerateData (Structured Output)
Returns a typed Go value parsed from the model's JSON output.
```go
type Joke struct {
  Setup     string `json:"setup" jsonschema:"description=The setup of the joke"`
  Punchline string `json:"punchline" jsonschema:"description=The punchline"`
}

joke, resp, err := genkit.GenerateData[Joke](ctx, g,
  ai.WithModelName("googleai/gemini-flash-latest"),
  ai.WithPrompt("Tell me a joke about %s", topic),
)
```
## Streaming
### GenerateStream
Returns an iterator. Each value has `.Done`, `.Chunk`, and `.Response`.
```go
stream := genkit.GenerateStream(ctx, g,
  ai.WithModelName("googleai/gemini-flash-latest"),
  ai.WithPrompt("Tell me a long story about %s", topic),
)
for result, err := range stream {
  if err != nil {
    return err
  }
  if result.Done {
    finalText := result.Response.Text()
    break
  }
  fmt.Print(result.Chunk.Text()) // incremental text
}
```
### GenerateDataStream (Structured Streaming)
Streams typed partial objects as they arrive.
```go
stream := genkit.GenerateDataStream[Joke](ctx, g,
  ai.WithModelName("googleai/gemini-flash-latest"),
  ai.WithPrompt("Tell me a joke about %s", topic),
)
for result, err := range stream {
  if err != nil {
    return err
  }
  if result.Done {
    finalJoke := result.Output // *Joke
    break
  }
  partialJoke := result.Chunk // *Joke (partial)
}
```
### Callback-Based Streaming
Use `ai.WithStreaming` with `Generate` for callback-style streaming. The callback receives `*ai.ModelResponseChunk`:
```go
resp, err := genkit.Generate(ctx, g,
  ai.WithModelName("googleai/gemini-flash-latest"),
  ai.WithPrompt("Tell me a story"),
  ai.WithStreaming(func(ctx context.Context, chunk *ai.ModelResponseChunk) error {
    fmt.Print(chunk.Text()) // extract text from chunk
    return nil
  }),
)
```
## Common Options
```go
// Model selection
ai.WithModel(googlegenai.ModelRef("googleai/gemini-flash-latest", nil)) // model reference
ai.WithModelName("googleai/gemini-flash-latest") // by name string
// Content
ai.WithPrompt("Tell me about %s", topic) // user message (supports fmt verbs)
ai.WithSystem("You are a pirate.") // system instructions
ai.WithMessages(msg1, msg2) // conversation history
ai.WithDocs(doc1, doc2) // context documents
ai.WithTextDocs("context 1", "context 2") // context as strings
// Model config (provider-specific)
ai.WithConfig(map[string]any{"temperature": 0.7})
```
## Output Formats
Control how the model structures its output.
### By Go Type
```go
// Automatically uses JSON format and instructs model to match the type
ai.WithOutputType(MyStruct{})
```
### By Format String
```go
ai.WithOutputFormat(ai.OutputFormatJSON) // single JSON object
ai.WithOutputFormat(ai.OutputFormatJSONL) // JSON Lines (one object per line)
ai.WithOutputFormat(ai.OutputFormatArray) // JSON array
ai.WithOutputFormat(ai.OutputFormatEnum) // constrained enum value
ai.WithOutputFormat(ai.OutputFormatText) // plain text (default)
```
### Enum Output
```go
type Color string

const (
  Red   Color = "red"
  Green Color = "green"
  Blue  Color = "blue"
)

text, err := genkit.GenerateText(ctx, g,
  ai.WithPrompt("What color is the sky?"),
  ai.WithOutputEnums(Red, Green, Blue),
)
```
### Custom Output Instructions
```go
ai.WithOutputInstructions("Return a JSON object with fields: name (string), age (number)")
```
### Combining Format + Schema
```go
// JSONL with a typed schema (useful for streaming lists)
genkit.DefinePrompt(g, "characters",
  ai.WithPrompt("Generate 5 story characters"),
  ai.WithOutputType([]StoryCharacter{}),
  ai.WithOutputFormat(ai.OutputFormatJSONL),
)
```
@@ -0,0 +1,142 @@
# Getting Started
## Project Setup
```bash
mkdir my-genkit-app && cd my-genkit-app
go mod init my-genkit-app
go get github.com/genkit-ai/genkit/go@latest
```
Add provider plugin(s) for the models you want to use:
```bash
go get github.com/genkit-ai/genkit/go/plugins/googlegenai # Google AI / Vertex AI
go get github.com/genkit-ai/genkit/go/plugins/anthropic # Anthropic Claude
go get github.com/genkit-ai/genkit/go/plugins/compat_oai # OpenAI-compatible
go get github.com/genkit-ai/genkit/go/plugins/ollama # Ollama (local)
```
After writing your code, run `go mod tidy` to resolve all dependencies.
## Initialization
Every Genkit app starts with `genkit.Init`, which returns a `*Genkit` instance:
```go
import (
  "context"

  "github.com/genkit-ai/genkit/go/genkit"
  "github.com/genkit-ai/genkit/go/plugins/googlegenai"
)

ctx := context.Background()
g := genkit.Init(ctx,
  genkit.WithPlugins(&googlegenai.GoogleAI{}),
)
```
### The `*Genkit` Instance
The `*Genkit` value `g` is the central registry. Pass it to every Genkit function:
```go
// Defining resources
genkit.DefineFlow(g, "myFlow", ...)
genkit.DefineTool(g, "myTool", ...)
genkit.DefinePrompt(g, "myPrompt", ...)
// Generating content
genkit.GenerateText(ctx, g, ...)
genkit.Generate(ctx, g, ...)
```
Do not store `g` in a global variable. Pass it explicitly through your call chain.
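One common way to honor this is constructor injection. A minimal sketch with a stand-in `Registry` type so it stays self-contained; in real code the field would hold the `*genkit.Genkit` returned by `genkit.Init`:

```go
package main

import "fmt"

// Registry stands in for *genkit.Genkit in this sketch.
type Registry struct{ name string }

// JokeService receives the instance it needs instead of reaching for a global.
type JokeService struct {
  g *Registry
}

func NewJokeService(g *Registry) *JokeService {
  return &JokeService{g: g}
}

func (s *JokeService) Ready() bool { return s.g != nil }

func main() {
  g := &Registry{name: "genkit"} // analogous to g := genkit.Init(ctx, ...)
  svc := NewJokeService(g)
  fmt.Println(svc.Ready()) // true
}
```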
### Init Options
```go
g := genkit.Init(ctx,
  // Register one or more plugins
  genkit.WithPlugins(&googlegenai.GoogleAI{}, &anthropic.Anthropic{}),

  // Set a default model (used when no model is specified)
  genkit.WithDefaultModel("googleai/gemini-flash-latest"),

  // Set the directory for .prompt files (default: "prompts")
  genkit.WithPromptDir("my-prompts"),

  // Or embed prompts using Go's embed package
  // genkit.WithPromptFS(promptsFS),
)
```
### Embedding Prompts
Use `go:embed` to bundle `.prompt` files into the binary:
```go
//go:embed prompts
var promptsFS embed.FS

g := genkit.Init(ctx,
  genkit.WithPlugins(&googlegenai.GoogleAI{}),
  genkit.WithPromptFS(promptsFS),
)
```
## Genkit CLI
The Genkit CLI provides a local Developer UI for running flows, tracing executions, and inspecting model interactions.
**Install:**
```bash
curl -sL cli.genkit.dev | bash
```
**Verify:**
```bash
genkit --version
```
### Developer UI
Start your app with the Developer UI attached:
```bash
genkit start -- go run .
```
This launches:
- Your app (with tracing enabled)
- The Developer UI at `http://localhost:4000`
- A telemetry API at `http://localhost:4033`
Add `-o` to auto-open the UI in your browser:
```bash
genkit start -o -- go run .
```
The Developer UI lets you:
- Run and test flows interactively
- View traces for each generation call (inputs, outputs, latency, token usage)
- Inspect prompt rendering and tool calls
- Debug multi-step flows with per-step trace data
### Without the CLI
Set `GENKIT_ENV=dev` to enable the reflection API without the CLI:
```bash
GENKIT_ENV=dev go run .
```
## Import Paths
```go
import (
  "github.com/genkit-ai/genkit/go/genkit"         // Core: Init, Generate*, DefineFlow, etc.
  "github.com/genkit-ai/genkit/go/ai"             // Types: WithModel, WithPrompt, Message, Part, etc.
  "github.com/genkit-ai/genkit/go/core"           // Low-level: Run (sub-steps), Flow types
  "github.com/genkit-ai/genkit/go/plugins/server" // server.Start for HTTP
)
```
@@ -0,0 +1,256 @@
# Prompts
## DefinePrompt
Define a reusable prompt in code with a default model and template.
```go
jokePrompt := genkit.DefinePrompt(g, "joke",
  ai.WithModel(googlegenai.ModelRef("googleai/gemini-flash-latest", nil)),
  ai.WithInputType(JokeRequest{Topic: "example"}),
  ai.WithPrompt("Tell me a joke about {{topic}}."),
)
```
### Execute
```go
resp, err := jokePrompt.Execute(ctx,
  ai.WithInput(map[string]any{"topic": "cats"}),
)
fmt.Println(resp.Text())
```
### ExecuteStream
```go
stream := jokePrompt.ExecuteStream(ctx,
  ai.WithInput(map[string]any{"topic": "cats"}),
)
for result, err := range stream {
  if err != nil {
    return err
  }
  if result.Done {
    break
  }
  fmt.Print(result.Chunk.Text())
}
```
### Override Options at Execution
```go
resp, err := jokePrompt.Execute(ctx,
  ai.WithInput(map[string]any{"topic": "cats"}),
  ai.WithModelName("googleai/gemini-pro-latest"), // override model
  ai.WithConfig(map[string]any{"temperature": 0.9}),
  ai.WithTools(myTool),
)
```
## DefineDataPrompt (Typed Input/Output)
Strongly-typed prompts with Go generics.
```go
type JokeRequest struct {
  Topic string `json:"topic"`
}

type Joke struct {
  Setup     string `json:"setup" jsonschema:"description=The setup"`
  Punchline string `json:"punchline" jsonschema:"description=The punchline"`
}

jokePrompt := genkit.DefineDataPrompt[JokeRequest, *Joke](g, "structured-joke",
  ai.WithModel(googlegenai.ModelRef("googleai/gemini-flash-latest", nil)),
  ai.WithPrompt("Tell me a joke about {{topic}}."),
)
```
### Execute (typed)
```go
joke, resp, err := jokePrompt.Execute(ctx, JokeRequest{Topic: "cats"})
// joke is *Joke, resp is *ModelResponse
```
### ExecuteStream (typed)
```go
stream := jokePrompt.ExecuteStream(ctx, JokeRequest{Topic: "cats"})
for result, err := range stream {
  if err != nil {
    return err
  }
  if result.Done {
    finalJoke := result.Output // *Joke
    break
  }
  fmt.Print(result.Chunk) // partial *Joke
}
```
## .prompt Files (Dotprompt)
Define prompts in separate files with YAML frontmatter and Handlebars templates.
### Basic .prompt File
`prompts/joke.prompt`:
```
---
model: googleai/gemini-flash-latest
input:
schema:
topic: string
---
Tell me a joke about {{topic}}.
```
### Load and Use
```go
// LookupPrompt returns Prompt (untyped: map[string]any input, string output)
jokePrompt := genkit.LookupPrompt(g, "joke")

resp, err := jokePrompt.Execute(ctx,
  ai.WithInput(map[string]any{"topic": "cats"}),
)
```
### Typed .prompt File
`prompts/structured-joke.prompt`:
```
---
model: googleai/gemini-flash-latest
config:
thinkingConfig:
thinkingBudget: 0
input:
schema: JokeRequest
output:
format: json
schema: Joke
---
Tell me a joke about {{topic}}.
```
Register Go types so the .prompt file can reference them by name:
```go
genkit.DefineSchemaFor[JokeRequest](g)
genkit.DefineSchemaFor[Joke](g)
jokePrompt := genkit.LookupDataPrompt[JokeRequest, *Joke](g, "structured-joke")
joke, resp, err := jokePrompt.Execute(ctx, JokeRequest{Topic: "cats"})
```
### LoadPrompt (Explicit Path)
```go
prompt := genkit.LoadPrompt(g, "./prompts/countries.prompt", "countries")
resp, err := prompt.Execute(ctx)
```
### .prompt File Features
**Multi-message prompts with roles:**
```
---
model: googleai/gemini-flash-latest
input:
schema:
question: string
---
{{ role "system" }}
You are a helpful assistant.
{{ role "user" }}
{{question}}
```
**Media in prompts:**
```
---
model: googleai/gemini-flash-latest
input:
schema:
videoUrl: string
contentType: string
---
{{ role "user" }}
Summarize this video:
{{media url=videoUrl contentType=contentType}}
```
**Conditionals and loops:**
```
---
input:
schema:
topic: string
dietaryRestrictions?(array): string
---
Write a recipe about {{topic}}.
{{#if dietaryRestrictions}}
Dietary restrictions: {{#each dietaryRestrictions}}{{this}}{{#unless @last}}, {{/unless}}{{/each}}.
{{/if}}
```
**Inline schema in .prompt file:**
```
---
model: googleai/gemini-flash-latest
input:
schema:
topic: string
style?: string
output:
format: json
schema:
title: string
body: string
tags(array): string
---
Write an article about {{topic}}.
{{#if style}}Write in a {{style}} style.{{/if}}
```
## Schemas
### DefineSchemaFor (from Go type)
Registers a Go struct as a named schema for use in `.prompt` files.
```go
genkit.DefineSchemaFor[JokeRequest](g)
genkit.DefineSchemaFor[Joke](g)
```
The schema name matches the Go type name. Use `jsonschema` struct tags for metadata:
```go
type Recipe struct {
  Title       string       `json:"title" jsonschema:"description=The recipe title"`
  Difficulty  string       `json:"difficulty" jsonschema:"enum=easy,enum=medium,enum=hard"`
  Ingredients []Ingredient `json:"ingredients"`
  Steps       []string     `json:"steps"`
}

type Ingredient struct {
  Name   string  `json:"name"`
  Amount float64 `json:"amount"`
  Unit   string  `json:"unit"`
}
```
### DefineSchema (manual JSON Schema)
```go
genkit.DefineSchema(g, "Recipe", map[string]any{
  "type": "object",
  "properties": map[string]any{
    "title": map[string]any{"type": "string"},
    "ingredients": map[string]any{
      "type":  "array",
      "items": map[string]any{"type": "object"},
    },
  },
  "required": []string{"title", "ingredients"},
})
```
@@ -0,0 +1,157 @@
# Model Providers
## Google AI (Gemini)
```go
import "github.com/genkit-ai/genkit/go/plugins/googlegenai"
g := genkit.Init(ctx, genkit.WithPlugins(&googlegenai.GoogleAI{}))
```
**Env var:** `GEMINI_API_KEY` or `GOOGLE_API_KEY`
Model names follow the format `googleai/<model-id>`. Look up the latest model IDs at https://ai.google.dev/gemini-api/docs/models.
```go
// By name string
ai.WithModelName("googleai/gemini-flash-latest")

// Model ref with provider-specific config
ai.WithModel(googlegenai.ModelRef("googleai/gemini-flash-latest", &genai.GenerateContentConfig{
  ThinkingConfig: &genai.ThinkingConfig{
    ThinkingBudget: genai.Ptr[int32](0), // disable thinking
  },
}))

// Look up a model instance
m := googlegenai.GoogleAIModel(g, "gemini-flash-latest")
```
## Vertex AI
```go
import "github.com/genkit-ai/genkit/go/plugins/googlegenai"
g := genkit.Init(ctx, genkit.WithPlugins(&googlegenai.VertexAI{}))
```
**Env vars:** `GOOGLE_CLOUD_PROJECT`, `GOOGLE_CLOUD_LOCATION` (or `GOOGLE_CLOUD_REGION`)
Uses Application Default Credentials (`gcloud auth application-default login`).
Model names follow the format `vertexai/<model-id>`. Same model IDs as Google AI.
```go
ai.WithModelName("vertexai/gemini-flash-latest")
```
## Anthropic (Claude)
```go
import (
  "github.com/anthropics/anthropic-sdk-go"               // Anthropic SDK types
  ant "github.com/genkit-ai/genkit/go/plugins/anthropic" // Genkit plugin
)

g := genkit.Init(ctx, genkit.WithPlugins(&ant.Anthropic{}))
```
**Env var:** `ANTHROPIC_API_KEY`
Model names follow the format `anthropic/<model-id>`. Look up the latest model IDs at https://docs.anthropic.com/en/docs/about-claude/models.
```go
// By name
ai.WithModelName("anthropic/claude-sonnet-4-6")

// With provider-specific config (uses Anthropic SDK types via ai.WithConfig)
ai.WithConfig(&anthropic.MessageNewParams{
  Temperature: anthropic.Float(1.0),
  MaxTokens:   *anthropic.IntPtr(2000),
  Thinking: anthropic.ThinkingConfigParamUnion{
    OfEnabled: &anthropic.ThinkingConfigEnabledParam{
      BudgetTokens: *anthropic.IntPtr(1024),
    },
  },
})
```
## OpenAI-Compatible (compat_oai)
Works with any OpenAI-compatible API: OpenAI, DeepSeek, xAI, etc.
```go
import "github.com/genkit-ai/genkit/go/plugins/compat_oai"
openaiPlugin := &compat_oai.OpenAICompatible{
  Provider: "openai", // unique identifier
  APIKey:   os.Getenv("OPENAI_API_KEY"),
  // BaseURL: "https://custom-endpoint/v1", // for non-OpenAI providers
}

g := genkit.Init(ctx, genkit.WithPlugins(openaiPlugin))
```
Define models explicitly (not auto-discovered):
```go
model := openaiPlugin.DefineModel("openai", "gpt-4o", compat_oai.ModelOptions{})
```
Use with:
```go
ai.WithModel(model)
```
## Ollama (Local Models)
```go
import "github.com/genkit-ai/genkit/go/plugins/ollama"
ollamaPlugin := &ollama.Ollama{
  ServerAddress: "http://localhost:11434",
  Timeout:       60, // seconds
}

g := genkit.Init(ctx, genkit.WithPlugins(ollamaPlugin))
```
Define models explicitly:
```go
model := ollamaPlugin.DefineModel(g,
  ollama.ModelDefinition{
    Name: "llama3.1",
    Type: "chat", // or "generate"
  },
  nil, // optional *ModelOptions
)
```
Use with:
```go
ai.WithModel(model)
```
## Multiple Providers
Register multiple plugins in a single Genkit instance:
```go
g := genkit.Init(ctx,
  genkit.WithPlugins(
    &googlegenai.GoogleAI{},
    &ant.Anthropic{},
  ),
  genkit.WithDefaultModel("googleai/gemini-flash-latest"),
)

// Use different models per call
text1, _ := genkit.GenerateText(ctx, g,
  ai.WithModelName("googleai/gemini-flash-latest"),
  ai.WithPrompt("Hello from Gemini"),
)
text2, _ := genkit.GenerateText(ctx, g,
  ai.WithModelName("anthropic/claude-sonnet-4-6"),
  ai.WithPrompt("Hello from Claude"),
)
```
@@ -0,0 +1,178 @@
# Tools
## DefineTool
Define a tool the model can call during generation.
```go
type WeatherInput struct {
  Location string `json:"location" jsonschema:"description=City name"`
}

type WeatherOutput struct {
  Temperature float64 `json:"temperature"`
  Conditions  string  `json:"conditions"`
}

weatherTool := genkit.DefineTool(g, "getWeather",
  "Gets the current weather for a location.",
  func(ctx *ai.ToolContext, input WeatherInput) (WeatherOutput, error) {
    // Call your weather API
    return WeatherOutput{Temperature: 72, Conditions: "sunny"}, nil
  },
)
```
## Using Tools in Generation
Pass tools to `Generate`, `GenerateText`, or prompts:
```go
resp, err := genkit.Generate(ctx, g,
  ai.WithModelName("googleai/gemini-flash-latest"),
  ai.WithPrompt("What's the weather in San Francisco?"),
  ai.WithTools(weatherTool),
)
// The model calls the tool automatically and incorporates the result
fmt.Println(resp.Text())
```
### Tool Choice
```go
ai.WithToolChoice(ai.ToolChoiceAuto) // model decides (default)
ai.WithToolChoice(ai.ToolChoiceRequired) // model must use a tool
ai.WithToolChoice(ai.ToolChoiceNone) // model cannot use tools
```
### Max Turns
Limit how many tool-call round trips the model can make:
```go
ai.WithMaxTurns(3) // default is 5
```
## DefineMultipartTool
Tools that return both structured output and media content:
```go
screenshotTool := genkit.DefineMultipartTool(g, "screenshot",
  "Takes a screenshot of the current page",
  func(ctx *ai.ToolContext, input any) (*ai.MultipartToolResponse, error) {
    return &ai.MultipartToolResponse{
      Output:  map[string]any{"success": true},
      Content: []*ai.Part{ai.NewMediaPart("image/png", base64Data)},
    }, nil
  },
)
```
## Tool Interrupts
Pause tool execution to request human input before continuing.
### Interrupting
```go
type TransferInput struct {
  ToAccount string  `json:"toAccount"`
  Amount    float64 `json:"amount"`
}

type TransferOutput struct {
  Status  string  `json:"status"`
  Message string  `json:"message"`
  Balance float64 `json:"balance"`
}

type TransferInterrupt struct {
  Reason    string  `json:"reason"`
  ToAccount string  `json:"toAccount"`
  Amount    float64 `json:"amount"`
  Balance   float64 `json:"balance"`
}

transferTool := genkit.DefineTool(g, "transferMoney",
  "Transfers money to another account.",
  func(ctx *ai.ToolContext, input TransferInput) (TransferOutput, error) {
    if input.Amount > accountBalance {
      return TransferOutput{}, ai.InterruptWith(ctx, TransferInterrupt{
        Reason:    "insufficient_balance",
        ToAccount: input.ToAccount,
        Amount:    input.Amount,
        Balance:   accountBalance,
      })
    }
    // Process transfer...
    return TransferOutput{Status: "success", Balance: newBalance}, nil
  },
)
```
### Handling Interrupts
```go
resp, err := genkit.Generate(ctx, g,
  ai.WithModelName("googleai/gemini-flash-latest"),
  ai.WithTools(transferTool),
  ai.WithPrompt(userRequest),
)
for resp.FinishReason == ai.FinishReasonInterrupted {
  var restarts, responses []*ai.Part
  for _, interrupt := range resp.Interrupts() {
    meta, ok := ai.InterruptAs[TransferInterrupt](interrupt)
    if !ok {
      continue
    }
    switch meta.Reason {
    case "insufficient_balance":
      // RestartWith: re-execute the tool with adjusted input
      part, err := transferTool.RestartWith(interrupt,
        ai.WithNewInput(TransferInput{
          ToAccount: meta.ToAccount,
          Amount:    meta.Balance, // transfer what's available
        }),
      )
      if err != nil {
        return err
      }
      restarts = append(restarts, part)
    case "confirm_large":
      // RespondWith: provide a response directly without re-executing
      part, err := transferTool.RespondWith(interrupt,
        TransferOutput{Status: "cancelled", Message: "User declined"},
      )
      if err != nil {
        return err
      }
      responses = append(responses, part)
    }
  }
  // Continue generation with the resolved interrupts
  resp, err = genkit.Generate(ctx, g,
    ai.WithMessages(resp.History()...),
    ai.WithTools(transferTool),
    ai.WithToolRestarts(restarts...),
    ai.WithToolResponses(responses...),
  )
  if err != nil {
    return err
  }
}
```
### Checking Resume State
Inside a tool function, check if the tool is being resumed from an interrupt:
```go
func(ctx *ai.ToolContext, input TransferInput) (TransferOutput, error) {
  if ctx.IsResumed() {
    // This is a resumed call after an interrupt
    original, ok := ai.OriginalInputAs[TransferInput](ctx)
    // original contains the input from the first call
  }
  // ...
}
```