---
name: developing-genkit-dart
description: Generates code and provides documentation for the Genkit Dart SDK. Use when the user asks to build AI agents in Dart, use Genkit flows, or integrate LLMs into Dart/Flutter applications.
metadata:
  genkit-managed: true
---

# Genkit Dart

Genkit Dart is an AI SDK for Dart that provides a unified interface for code generation, structured outputs, tools, flows, and AI agents.

## Core Features and Usage

If you need help with initializing Genkit (`Genkit()`), generation (`ai.generate`), tooling (`ai.defineTool`), flows (`ai.defineFlow`), embeddings (`ai.embedMany`), streaming, or calling remote flow endpoints, load the core framework reference:

[references/genkit.md](references/genkit.md)

## Genkit CLI (recommended)

The Genkit CLI provides a local development UI for running flows, tracing executions, experimenting with models, and evaluating outputs.

Check whether the user has it installed: `genkit --version`

**Installation:**

```bash
curl -sL cli.genkit.dev | bash  # Native CLI
# OR
npm install -g genkit-cli       # Via npm
```

**Usage:**

Wrap your run command with `genkit start` to attach the Genkit developer UI and tracing:

```bash
genkit start -- dart run main.dart
```

## Plugin Ecosystem

Genkit relies on a large suite of plugins to perform generative AI actions, interface with external LLMs, and host web servers.

When asked to use any given plugin, always verify usage by referring to its corresponding reference below. Load the reference when you need to know the specific initialization arguments, tools, models, and usage patterns for the plugin:

| Plugin Name | Reference Link | Description |
| ---- | ---- | ---- |
| `genkit_google_genai` | [references/genkit_google_genai.md](references/genkit_google_genai.md) | Load for Google Gemini plugin interface usage. |
| `genkit_anthropic` | [references/genkit_anthropic.md](references/genkit_anthropic.md) | Load for the Anthropic plugin interface for Claude models. |
| `genkit_openai` | [references/genkit_openai.md](references/genkit_openai.md) | Load for the OpenAI plugin interface for GPT models, Groq, and custom compatible endpoints. |
| `genkit_middleware` | [references/genkit_middleware.md](references/genkit_middleware.md) | Load for tooling for specific agentic behavior: `filesystem`, `skills`, and `toolApproval` interrupts. |
| `genkit_mcp` | [references/genkit_mcp.md](references/genkit_mcp.md) | Load for Model Context Protocol integration (server, host, and client capabilities). |
| `genkit_chrome` | [references/genkit_chrome.md](references/genkit_chrome.md) | Load for running Gemini Nano locally inside the Chrome browser using the Prompt API. |
| `genkit_shelf` | [references/genkit_shelf.md](references/genkit_shelf.md) | Load for integrating Genkit flow actions over HTTP using Dart Shelf. |
| `genkit_firebase_ai` | [references/genkit_firebase_ai.md](references/genkit_firebase_ai.md) | Load for the Firebase AI plugin interface (Gemini API via Vertex AI). |

## External Dependencies

Whenever you define schema mappings inside tools, flows, and prompts, you must use the [schemantic](https://pub.dev/packages/schemantic) library.

To learn how to use schemantic, read [references/schemantic.md](references/schemantic.md) for how to implement type-safe generated Dart code. This is particularly relevant when you encounter symbols like `@Schema()`, `SchemanticType`, or classes with the `$` prefix. Genkit Dart uses schemantic for all of its data models, so it is a CRITICAL skill to understand for using Genkit Dart.

## Best Practices

- Always check that code compiles cleanly using `dart analyze` before generating the final response.
- Always use the Genkit CLI for local development and debugging.
# Genkit Core Framework

Genkit Dart is an AI SDK for Dart that provides a unified interface for text generation, structured output, tool calling, and agentic workflows.

## Initialization

```dart
import 'package:genkit/genkit.dart';
import 'package:genkit_google_genai/genkit_google_genai.dart'; // Or any other plugin

void main() async {
  // Pass the plugins to use into the Genkit constructor
  final ai = Genkit(plugins: [googleAI()]);
}
```
## Generate Text

```dart
final response = await ai.generate(
  model: googleAI.gemini('gemini-2.5-flash'), // Needs a model reference from a plugin
  prompt: 'Explain quantum computing in simple terms.',
);

print(response.text);
```
## Stream Responses

```dart
final stream = ai.generateStream(
  model: googleAI.gemini('gemini-2.5-flash'),
  prompt: 'Write a short story about a robot learning to paint.',
);

await for (final chunk in stream) {
  print(chunk.text);
}
```
## Embed Text

```dart
final embeddings = await ai.embedMany(
  documents: [
    DocumentData(content: [TextPart(text: 'Hello world')]),
  ],
  embedder: googleAI.textEmbedding('text-embedding-004'),
);

print(embeddings.first.embedding);
```
## Define Tools

Models can take actions and access external data via custom-defined tools.
Requires the `schemantic` library for schema definitions.

```dart
import 'package:schemantic/schemantic.dart';

@Schema()
abstract class $WeatherInput {
  String get location;
}

final weatherTool = ai.defineTool(
  name: 'getWeather',
  description: 'Gets the current weather for a location',
  inputSchema: WeatherInput.$schema,
  fn: (input, _) async {
    // Call your weather API here
    return 'Weather in ${input.location}: 72°F and sunny';
  },
);

final response = await ai.generate(
  model: googleAI.gemini('gemini-2.5-flash'),
  prompt: 'What\'s the weather like in San Francisco?',
  toolNames: ['getWeather'], // Use the tools
);
```
## Structured Output

You can ensure the generative model returns a typed JSON object by providing an `outputSchema`.

```dart
@Schema()
abstract class $Person {
  String get name;
  int get age;
}

// ... inside main ...

final response = await ai.generate(
  model: googleAI.gemini('gemini-2.5-flash'),
  prompt: 'Generate a person named John Doe, age 30',
  outputSchema: Person.$schema, // Force the model to return this schema
);

final person = response.output; // Typed Person object
print('Name: ${person.name}, Age: ${person.age}');
```
## Define Flows

Wrap your AI logic in flows for better observability, testing, and deployment:

```dart
final jokeFlow = ai.defineFlow(
  name: 'tellJoke',
  inputSchema: .string(),
  outputSchema: .string(),
  fn: (topic, _) async {
    final response = await ai.generate(
      model: googleAI.gemini('gemini-2.5-flash'),
      prompt: 'Tell me a joke about $topic',
    );
    return response.text; // Value return
  },
);

final joke = await jokeFlow('programming');
print(joke);
```
### Streaming Flows

Stream data from your flows by calling `context.sendChunk(...)` and returning the final value:

```dart
final streamStory = ai.defineFlow(
  name: 'streamStory',
  inputSchema: .string(),
  outputSchema: .string(),
  streamSchema: .string(),
  fn: (topic, context) async {
    final stream = ai.generateStream(
      model: googleAI.gemini('gemini-2.5-flash'),
      prompt: 'Write a story about $topic',
    );

    await for (final chunk in stream) {
      context.sendChunk(chunk.text); // Stream the chunks
    }
    return 'Story complete'; // Value return
  },
);
```
## Calling remote flows from a Dart client

The `genkit` package provides `package:genkit/client.dart` for defining remote Genkit actions that can be invoked or streamed using type-safe definitions.

1. Define a remote action

```dart
import 'package:genkit/client.dart';

final stringAction = defineRemoteAction(
  url: 'http://localhost:3400/my-flow',
  inputSchema: .string(),
  outputSchema: .string(),
);
```

2. Call the remote action (non-streaming)

```dart
final response = await stringAction(input: 'Hello from Dart!');
print('Flow Response: $response');
```

3. Call the remote action (streaming)

Use the `.stream()` method on the action, and await `stream.onResult` for the final return value.

```dart
final streamAction = defineRemoteAction(
  url: 'http://localhost:3400/stream-story',
  inputSchema: .string(),
  outputSchema: .string(),
  streamSchema: .string(),
);

final stream = streamAction.stream(
  input: 'Tell me a short story about a Dart developer.',
);

await for (final chunk in stream) {
  print('Chunk: $chunk');
}

final finalResult = await stream.onResult;
print('\nFinal Response: $finalResult');
```
## Calling remote flows from a JavaScript client

Install the `genkit` npm package:

```bash
npm install genkit
```

1. Call a remote flow (non-streaming)

```ts
import { runFlow } from 'genkit/beta/client';

async function callHelloFlow() {
  try {
    const result = await runFlow({
      url: 'http://127.0.0.1:3400/helloFlow', // Replace with your deployed flow's URL
      input: { name: 'Genkit User' },
    });
    console.log('Non-streaming result:', result.greeting);
  } catch (error) {
    console.error('Error calling helloFlow:', error);
  }
}

callHelloFlow();
```

2. Call a remote flow (streaming)

```ts
import { streamFlow } from 'genkit/beta/client';

async function streamHelloFlow() {
  try {
    const result = streamFlow({
      url: 'http://127.0.0.1:3400/helloFlow', // Replace with your deployed flow's URL
      input: { name: 'Streaming User' },
    });

    // Process the stream chunks as they arrive
    for await (const chunk of result.stream) {
      console.log('Stream chunk:', chunk);
    }

    // Get the final complete response
    const finalOutput = await result.output;
    console.log('Final streaming output:', finalOutput.greeting);
  } catch (error) {
    console.error('Error streaming helloFlow:', error);
  }
}

streamHelloFlow();
```
## Data Models

Genkit uses standard data models for representing prompts (messages and parts) and responses. These classes are implemented using the schemantic library.

```dart
import 'package:genkit/genkit.dart';
import 'package:schemantic/schemantic.dart';

@Schema()
abstract class $MyDataModel {
  // uses Genkit's Message schema (not schemantic's Message)
  List<$Message> get messages;
  List<$Part> get parts;
}

void example() {
  // --- Parts ---
  // A Text part
  final textPart = TextPart(text: 'some text', metadata: {'foo': 'bar'});

  // A Media/Image part
  final mediaPart = MediaPart(
    media: Media(url: 'https://...', contentType: 'image/png'),
    metadata: {'foo': 'bar'},
  );

  // A Tool Request initiated by the model
  final toolRequestPart = ToolRequestPart(
    toolRequest: ToolRequest(
      name: 'get_weather',
      ref: 'abc',
      input: {'location': 'Paris, France'},
    ),
    metadata: {'foo': 'bar'},
  );

  // The resulting data from a Tool execution
  final toolResponsePart = ToolResponsePart(
    toolResponse: ToolResponse(
      name: 'get_weather',
      ref: 'abc',
      output: {'temperature': '20C'},
    ),
    metadata: {'foo': 'bar'},
  );
  // Model reasoning (e.g. for Claude's "thinking" models)
  final reasoningPart = ReasoningPart(
    reasoning: 'thinking...',
    metadata: {'foo': 'bar'},
  );

  // A custom fallback part
  final customPart = CustomPart(
    custom: {'provider': {'specific': 'data'}},
    metadata: {'foo': 'bar'},
  );

  // --- Messages ---
  final systemMessage = Message(
    role: Role.system,
    content: [textPart, mediaPart],
    metadata: {'foo': 'bar'},
  );

  final userMessage = Message(
    role: Role.user,
    content: [textPart, mediaPart], // Can contain media (multimodal)
  );

  final modelMessage = Message(
    role: Role.model,
    // Models can emit text, tool requests, reasoning, or custom parts
    content: [textPart, toolRequestPart, reasoningPart, customPart],
  );

  // --- Ergonomic Data Access (schema_extensions.dart) ---
  // The Genkit SDK provides extensions on `Message` and `Part` to easily
  // access fields without needing to cast them manually.

  // Get concatenated text from all TextParts in a Message
  print(modelMessage.text);

  // Get the first Media object from a Message
  print(modelMessage.media?.url);

  // Iterate over tool requests in a Message
  for (final toolReq in modelMessage.toolRequests) {
    print(toolReq.name);
  }
  // Inspect individual parts
  for (final part in modelMessage.content) {
    if (part.isText) print(part.text);
    if (part.isMedia) print(part.media?.url);
    if (part.isToolRequest) print(part.toolRequest?.name);
    if (part.isToolResponse) print(part.toolResponse?.name);
    if (part.isReasoning) print(part.reasoning);
    if (part.isCustom) print(part.custom);
  }

  // --- Streaming Chunks ---
  // Data emitted by ai.generateStream() calls
  final generateResponseChunk = ModelResponseChunk(
    content: [textPart],
    index: 0, // Index of the message this chunk belongs to
    aggregated: false,
  );

  // Chunks also have text and media accessors
  print(generateResponseChunk.text);

  // --- Advanced: Schemas ---
  // Use Genkit type schemas directly in Schemantic validations
  final messageSchema = Message.$schema;
  final partSchema = Part.$schema;

  final mySchema = SchemanticType.map(
    .string(),
    .list(Message.$schema), // Requires a list of Messages
  );

  // --- Generate Response ---
  // ai.generate() returns a GenerateResponseHelper which provides ergonomic
  // getters over the underlying ModelResponse:
  final response = await ai.generate(...);

  print(response.text); // Concatenated text
  print(response.media?.url); // First media part
  print(response.toolRequests); // All tool requests
  print(response.interrupts); // Tool requests that triggered an interrupt
  print(response.messages); // Full conversation history, including the request and response
  print(response.output); // Structured typed output (if outputSchema was used)
}
```
# Genkit Anthropic Plugin (`genkit_anthropic`)

The Anthropic plugin for Genkit Dart, used for interacting with Claude models.

## Usage

Requires an `ANTHROPIC_API_KEY` to be passed at initialization.

```dart
import 'dart:io';
import 'package:genkit/genkit.dart';
import 'package:genkit_anthropic/genkit_anthropic.dart';

void main() async {
  final ai = Genkit(
    plugins: [anthropic(apiKey: Platform.environment['ANTHROPIC_API_KEY']!)],
  );

  final response = await ai.generate(
    model: anthropic.model('claude-sonnet-4-5'),
    prompt: 'Tell me a joke about a developer.',
  );

  print(response.text);
}
```

## Claude Thinking Configurations

Provides specific configurations for utilizing Claude 3.7+ "thinking" model capabilities.

```dart
final response = await ai.generate(
  model: anthropic.model('claude-sonnet-4-5'),
  prompt: 'Solve this 24 game: 2, 3, 10, 10',
  config: AnthropicOptions(thinking: ThinkingConfig(budgetTokens: 2048)),
);

// The thinking content is available in the message parts
print(response.message?.content);
```
# Genkit Chrome AI Plugin (`genkit_chrome`)

Chrome Built-in AI (Gemini Nano) plugin for Genkit Dart, allowing local offline execution within a Chrome application.

## Usage

```dart
import 'package:genkit/genkit.dart';
import 'package:genkit_chrome/genkit_chrome.dart';

void main() async {
  final ai = Genkit(plugins: [ChromeAIPlugin()]);

  final stream = ai.generateStream(
    model: modelRef('chrome/gemini-nano'),
    prompt: 'Write a story about a robot.',
  );

  await for (final chunk in stream) {
    print(chunk.text);
  }
}
```
# Genkit Firebase AI Plugin (`genkit_firebase_ai`)

The Firebase AI plugin for Genkit Dart, used for interacting with Gemini APIs through Firebase AI Logic.

## Usage

```dart
import 'package:genkit/genkit.dart';
import 'package:genkit_firebase_ai/genkit_firebase_ai.dart';

void main() async {
  // Initialize Genkit with the Firebase AI plugin
  final ai = Genkit(plugins: [firebaseAI()]);

  // Generate text
  final response = await ai.generate(
    model: firebaseAI.gemini('gemini-2.5-flash'),
    prompt: 'Tell me a joke about a developer.',
  );

  print(response.text);
}
```
# Genkit Google GenAI Plugin (`genkit_google_genai`)

The Google AI plugin provides an interface against the official Google AI Gemini API.

## Usage

```dart
import 'package:genkit/genkit.dart';
import 'package:genkit_google_genai/genkit_google_genai.dart';

void main() async {
  // Initialize Genkit with the Google AI plugin
  final ai = Genkit(plugins: [googleAI()]);

  // Generate text
  final response = await ai.generate(
    model: googleAI.gemini('gemini-2.5-flash'),
    prompt: 'Tell me a joke about a developer.',
  );

  print(response.text);
}
```

## Embeddings

```dart
final embeddings = await ai.embedMany(
  embedder: googleAI.textEmbedding('text-embedding-004'),
  documents: [
    DocumentData(content: [TextPart(text: 'Hello world')]),
  ],
);
```
## Image Generation

The plugin also supports image generation models such as `gemini-2.5-flash-image`.

### Example (Nano Banana)

```dart
// Define an image generation flow
ai.defineFlow(
  name: 'imageGenerator',
  inputSchema: .string(defaultValue: 'A banana riding a bike'),
  outputSchema: Media.$schema,
  fn: (input, context) async {
    final response = await ai.generate(
      model: googleAI.gemini('gemini-2.5-flash-image'),
      prompt: input,
    );
    if (response.media == null) {
      throw Exception('No media generated');
    }
    return response.media!;
  },
);
```

The media's `url` field contains a base64-encoded data URI. You can decode it and save it as a file.
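As a sketch of that decoding step using only the Dart standard library (the `saveMediaToFile` helper name and output path are illustrative, not part of the plugin):

```dart
import 'dart:io';

/// Decodes a base64 `data:` URI (as returned in the media's `url` field)
/// and writes the raw bytes to [path]. Illustrative helper, not plugin API.
void saveMediaToFile(String dataUri, String path) {
  final data = Uri.parse(dataUri).data; // Non-null only for data: URIs
  if (data == null) {
    throw FormatException('Not a data URI');
  }
  File(path).writeAsBytesSync(data.contentAsBytes());
}
```

Usage would look like `saveMediaToFile(response.media!.url, 'banana.png');`.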
## Text-to-Speech (TTS)

You can use text-to-speech models to generate audio from text. The generated `Media` object will contain base64-encoded PCM audio in its data URI.

```dart
// Define a TTS flow
ai.defineFlow(
  name: 'textToSpeech',
  inputSchema: .string(defaultValue: 'Genkit is an amazing AI framework!'),
  outputSchema: Media.$schema,
  fn: (prompt, _) async {
    final response = await ai.generate(
      model: googleAI.gemini('gemini-2.5-flash-preview-tts'),
      prompt: prompt,
      config: GeminiTtsOptions(
        responseModalities: ['AUDIO'],
        speechConfig: SpeechConfig(
          voiceConfig: VoiceConfig(
            prebuiltVoiceConfig: PrebuiltVoiceConfig(voiceName: 'Puck'),
          ),
        ),
      ),
    );

    if (response.media != null) {
      return response.media!;
    }
    throw Exception('No audio generated');
  },
);
```

Google AI also supports multi-speaker TTS by configuring a `MultiSpeakerVoiceConfig` inside `SpeechConfig`.
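A hypothetical sketch of that multi-speaker configuration: the `multiSpeakerVoiceConfig`, `speakerVoiceConfigs`, and `SpeakerVoiceConfig` names, plus the speaker labels and voice names, are assumptions that mirror the underlying Gemini API and may differ in this plugin, so verify against the plugin reference before use.

```dart
// ASSUMPTION: field and class names mirror the Gemini API's multi-speaker
// speech config; confirm the exact genkit_google_genai API before relying on this.
final multiSpeakerConfig = GeminiTtsOptions(
  responseModalities: ['AUDIO'],
  speechConfig: SpeechConfig(
    multiSpeakerVoiceConfig: MultiSpeakerVoiceConfig(
      speakerVoiceConfigs: [
        SpeakerVoiceConfig(
          speaker: 'Host', // Speaker label referenced in the prompt text
          voiceConfig: VoiceConfig(
            prebuiltVoiceConfig: PrebuiltVoiceConfig(voiceName: 'Puck'),
          ),
        ),
        SpeakerVoiceConfig(
          speaker: 'Guest',
          voiceConfig: VoiceConfig(
            prebuiltVoiceConfig: PrebuiltVoiceConfig(voiceName: 'Kore'),
          ),
        ),
      ],
    ),
  ),
);
```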
# Genkit MCP (`genkit_mcp`)

MCP (Model Context Protocol) integration for Genkit Dart.

## MCP Host (Recommended)

Connect to one or more MCP servers and aggregate their capabilities into the Genkit registry automatically.

```dart
import 'package:genkit/genkit.dart';
import 'package:genkit_mcp/genkit_mcp.dart';

void main() async {
  final ai = Genkit();

  final host = defineMcpHost(
    ai,
    McpHostOptionsWithCache(
      name: 'my-host',
      mcpServers: {
        'fs': McpServerConfig(
          command: 'npx',
          args: ['-y', '@modelcontextprotocol/server-filesystem', '.'],
        ),
      },
    ),
  );

  // Tools can be discovered and executed dynamically using a wildcard...
  final response = await ai.generate(
    model: 'gemini-2.5-flash',
    prompt: 'Summarize the contents of README.md',
    toolNames: ['my-host:tool/fs/*'],
  );

  // ...or by specifying the exact tool name
  final exactResponse = await ai.generate(
    model: 'gemini-2.5-flash',
    prompt: 'Read README.md',
    toolNames: ['my-host:tool/fs/read_file'],
  );
}
```
## MCP Client (Advanced / Single Server)

Connecting to a single MCP server with a client object is an advanced use case for when you need manual control over the client lifecycle. Standalone clients do not automatically register tools into the registry, so they must be passed into `generate` or `defineDynamicActionProvider` manually.

```dart
import 'package:genkit/genkit.dart';
import 'package:genkit_mcp/genkit_mcp.dart';

void main() async {
  final ai = Genkit();

  final client = createMcpClient(
    McpClientOptions(
      name: 'my-client',
      mcpServer: McpServerConfig(
        command: 'npx',
        args: ['-y', '@modelcontextprotocol/server-filesystem', '.'],
      ),
    ),
  );

  await client.ready();

  // Retrieve the tools from the connected client
  final tools = await client.getActiveTools(ai);

  final response = await ai.generate(
    model: 'gemini-2.5-flash',
    prompt: 'Read the contents of README.md',
    tools: tools,
  );
}
```

## MCP Server

Expose Genkit actions (tools, prompts, resources) over MCP.

```dart
import 'package:genkit/genkit.dart';
import 'package:genkit_mcp/genkit_mcp.dart';

void main() async {
  final ai = Genkit();

  ai.defineTool(
    name: 'add',
    description: 'Add two numbers together',
    inputSchema: .map(.string(), .dynamicSchema()),
    fn: (input, _) async => (input['a'] + input['b']).toString(),
  );

  ai.defineResource(
    name: 'my-resource',
    uri: 'my://resource',
    fn: (_, _) async => ResourceOutput(content: [TextPart(text: 'my resource')]),
  );

  // Stdio transport by default
  final server = createMcpServer(ai, McpServerOptions(name: 'my-server'));
  await server.start();
}
```

### Streamable HTTP Transport

```dart
import 'dart:io';

final transport = await StreamableHttpServerTransport.bind(
  address: InternetAddress.loopbackIPv4,
  port: 3000,
);
await server.start(transport);
```
# Genkit Middleware (`genkit_middleware`)

A collection of useful middleware for Genkit Dart to enhance your agent's capabilities. Register the plugins when initializing Genkit:

```dart
import 'package:genkit/genkit.dart';
import 'package:genkit_middleware/genkit_middleware.dart';

void main() {
  final ai = Genkit(
    plugins: [
      FilesystemPlugin(),
      SkillsPlugin(),
      ToolApprovalPlugin(),
    ],
  );
}
```

## Filesystem Middleware

Allows the agent to list, read, write, and search/replace files within a restricted root directory.

```dart
final response = await ai.generate(
  prompt: 'Check the logs in the current directory.',
  use: [
    filesystem(rootDirectory: '/path/to/secure/workspace'),
  ],
);
```

**Tools Provided:**
- `list_files`, `read_file`, `write_file`, `search_and_replace`
## Skills Middleware

Injects specialized instructions (skills) into the system prompt from `SKILL.md` files located in specified directories.

```dart
final response = await ai.generate(
  prompt: 'Help me debug this issue.',
  use: [
    skills(skillPaths: ['/path/to/skills']),
  ],
);
```

**Tools Provided:**
- `use_skill`: Retrieve the full content of a skill by name.

## Tool Approval Middleware

Intercepts execution of specified tools and requires explicit approval before they run; the interrupted call returns `FinishReason.interrupted`.

```dart
final response = await ai.generate(
  prompt: 'Delete the database.',
  use: [
    // Require approval for all tools EXCEPT those below
    toolApproval(approved: ['read_file', 'list_files']),
  ],
);

if (response.finishReason == FinishReason.interrupted) {
  final interrupt = response.interrupts.first;

  // Ask user for approval
  final isApproved = await askUser();

  if (isApproved) {
    final resumeResponse = await ai.generate(
      messages: response.messages, // Pass history
      toolChoice: ToolChoice.none, // Prevent immediate re-call
      interruptRestart: [
        ToolRequestPart(
          toolRequest: interrupt.toolRequest,
          metadata: {
            ...?interrupt.metadata,
            'tool-approved': true,
          },
        ),
      ],
    );
  }
}
```
@@ -0,0 +1,54 @@
|
|||||||
|
# Genkit OpenAI Plugin (`genkit_openai`)

OpenAI-compatible API plugin for Genkit Dart. Supports OpenAI models and other compatible APIs (xAI, DeepSeek, Together AI, Groq, etc.).

## Basic Usage

```dart
import 'dart:io';

import 'package:genkit/genkit.dart';
import 'package:genkit_openai/genkit_openai.dart';

void main() async {
  final ai = Genkit(plugins: [
    openAI(apiKey: Platform.environment['OPENAI_API_KEY']),
  ]);

  final response = await ai.generate(
    model: openAI.model('gpt-4o'),
    prompt: 'Tell me a joke.',
  );
}
```

## Options

`OpenAIOptions` allows configuring the sampling temperature, nucleus sampling, maximum output tokens, seed, etc.:

`config: OpenAIOptions(temperature: 0.7, maxTokens: 100)`

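As a sketch of passing these options into a call (reusing the `ai` instance from Basic Usage; `topP` and `seed` are assumed field names inferred from the option list above, not verified against the package):

```dart
final response = await ai.generate(
  model: openAI.model('gpt-4o'),
  prompt: 'Write a haiku about Dart.',
  config: OpenAIOptions(
    temperature: 0.7, // sampling temperature
    topP: 0.9,        // nucleus sampling (assumed field name)
    maxTokens: 100,   // cap on generated tokens
    seed: 42,         // reproducibility where supported (assumed field name)
  ),
);
```
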
## Groq API override

Specify a custom `baseUrl` and custom models to integrate with third-party providers.

```dart
final ai = Genkit(plugins: [
  openAI(
    apiKey: Platform.environment['GROQ_API_KEY'],
    baseUrl: 'https://api.groq.com/openai/v1',
    models: [
      CustomModelDefinition(
        name: 'llama-3.3-70b-versatile',
        info: ModelInfo(
          label: 'Llama 3.3 70B',
          supports: {'multiturn': true, 'tools': true, 'systemRole': true},
        ),
      ),
    ],
  ),
]);

final response = await ai.generate(
  model: openAI.model('llama-3.3-70b-versatile'),
  prompt: 'Hello!',
);
```

# Genkit Shelf Plugin (`genkit_shelf`)

Shelf integration for Genkit Dart, used to serve Genkit Flows.

## Standalone Server

Serve Genkit Flows on a standalone HTTP server using `startFlowServer`.

```dart
import 'package:genkit/genkit.dart';
import 'package:genkit_shelf/genkit_shelf.dart';

void main() async {
  final ai = Genkit();

  final flow = ai.defineFlow(
    name: 'myFlow',
    inputSchema: .string(),
    outputSchema: .string(),
    fn: (String input, _) async => 'Hello $input',
  );

  await startFlowServer(
    flows: [flow],
    port: 8080,
  );
}
```

## Existing Shelf Application

Mount Genkit Flow endpoints on an existing Shelf `Router` using `shelfHandler`.

```dart
import 'package:genkit/genkit.dart';
import 'package:genkit_shelf/genkit_shelf.dart';
import 'package:shelf/shelf.dart';
import 'package:shelf/shelf_io.dart' as io;
import 'package:shelf_router/shelf_router.dart';

void main() async {
  final ai = Genkit();

  final flow = ai.defineFlow(
    name: 'myFlow',
    inputSchema: .string(),
    outputSchema: .string(),
    fn: (String input, _) async => 'Hello $input',
  );

  final router = Router();

  // Mount the flow handler at a specific path
  router.post('/myFlow', shelfHandler(flow));

  // Start the server
  await io.serve(router.call, 'localhost', 8080);
}
```

Access deployed flows using the Genkit client libraries (from Dart or JS).

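Clients without a Genkit library can also call the served endpoints with plain HTTP. As a sketch (assuming the server above is running locally), the request body wraps the flow input in a `data` field and the response wraps the output in `result`, matching the flow wire format shown in the Go reference:

```bash
curl -X POST http://localhost:8080/myFlow \
  -H "Content-Type: application/json" \
  -d '{"data": "World"}'
# Responds with the flow output wrapped in a "result" field,
# e.g. {"result": "Hello World"}
```
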
# Schemantic

Schemantic is a general-purpose Dart library for defining strongly typed data classes that automatically bind to reusable runtime JSON schemas. It is the standard schema library for the `genkit-dart` framework but also works independently.

## Core Concepts

Always use `schemantic` when strongly typed JSON parsing or programmatic schema validation is required.

- Annotate your abstract classes with `@Schema()`.
- Use the `$` prefix for abstract schema class names (e.g., `abstract class $User`).
- Always run `dart run build_runner build` to generate the `.g.dart` schema files.

## Installation

Add dependencies:

```bash
dart pub add schemantic
dart pub add dev:build_runner # needed to run the code generator
```

## Basic Usage

1. **Defining a schema:**

```dart
import 'package:schemantic/schemantic.dart';

part 'my_file.g.dart'; // Must match the filename

@Schema()
abstract class $MyObj {
  String get name;
  $MySubObj get subObj;
}

@Schema()
abstract class $MySubObj {
  String get foo;
}
```

2. **Using the Generated Class:**

The builder creates a concrete class `MyObj` (no `$`) with a factory constructor (`MyObj.fromJson`) and a regular constructor.

```dart
// Creating an instance
final obj = MyObj(name: 'test', subObj: MySubObj(foo: 'bar'));

// Serializing to JSON
print(obj.toJson());

// Parsing from JSON
final parsed = MyObj.fromJson({'name': 'test', 'subObj': {'foo': 'bar'}});
```

3. **Accessing Schemas at Runtime:**

The generated data classes have a static `$schema` field (of type `SchemanticType<T>`) which can be used to pass the definition into functions or to extract the raw JSON schema.

```dart
// Access the JSON schema
final schema = MyObj.$schema.jsonSchema;
print(schema.toJson());

// Validate arbitrary JSON at runtime
final validationErrors = await schema.validate({'invalid': 'data'});
```

## Primitive Schemas

When a full data class is not required, Schemantic provides functions to create schemas dynamically.

```dart
final ageSchema = SchemanticType.integer(description: 'Age in years', minimum: 0);
final nameSchema = SchemanticType.string(minLength: 2);
final nothingSchema = SchemanticType.voidSchema();
final anySchema = SchemanticType.dynamicSchema();

final userSchema = SchemanticType.map(.string(), .integer()); // Map<String, int>
final tagsSchema = SchemanticType.list(.string()); // List<String>
```

## Union Types (AnyOf)

To allow a field to accept multiple types, use `@AnyOf`.

```dart
@Schema()
abstract class $Poly {
  @AnyOf([int, String, $MyObj])
  Object? get id;
}
```

Schemantic generates a specific helper class (e.g., `PolyId`) to handle the values:

```dart
final poly1 = Poly(id: PolyId.int(123));
final poly2 = Poly(id: PolyId.string('abc'));
```

## Field Annotations

You can use specialized annotations to set additional validation constraints:

```dart
@Schema()
abstract class $User {
  @IntegerField(
    name: 'years_old', // Change JSON key
    description: 'Age of the user',
    minimum: 0,
    defaultValue: 18,
  )
  int? get age;

  @StringField(
    minLength: 2,
    enumValues: ['user', 'admin'],
  )
  String get role;
}
```

## Recursive Schemas

For recursive structures (like trees), define the class normally and pass `useRefs: true` when generating the JSON schema:

```dart
@Schema()
abstract class $Node {
  String get id;
  List<$Node>? get children;
}
```

*Note*: `Node.$schema.jsonSchema(useRefs: true)` generates schemas with JSON Schema `$ref`.

---
name: developing-genkit-go
description: Develop AI-powered applications using Genkit in Go. Use when the user asks to build AI features, agents, flows, or tools in Go using Genkit, or when working with Genkit Go code involving generation, prompts, streaming, tool calling, or model providers.
metadata:
  genkit-managed: true
---

# Genkit Go

Genkit Go is an AI SDK for Go that provides generation, structured output, streaming, tool calling, prompts, and flows with a unified interface across model providers.

## Hello World

```go
package main

import (
	"context"
	"log"
	"net/http"

	"github.com/genkit-ai/genkit/go/ai"
	"github.com/genkit-ai/genkit/go/genkit"
	"github.com/genkit-ai/genkit/go/plugins/googlegenai"
	"github.com/genkit-ai/genkit/go/plugins/server"
)

func main() {
	ctx := context.Background()
	g := genkit.Init(ctx, genkit.WithPlugins(&googlegenai.GoogleAI{}))

	genkit.DefineFlow(g, "jokeFlow", func(ctx context.Context, topic string) (string, error) {
		return genkit.GenerateText(ctx, g,
			ai.WithModelName("googleai/gemini-flash-latest"),
			ai.WithPrompt("Tell me a joke about %s", topic),
		)
	})

	mux := http.NewServeMux()
	for _, f := range genkit.ListFlows(g) {
		mux.HandleFunc("POST /"+f.Name(), genkit.Handler(f))
	}
	log.Fatal(server.Start(ctx, "127.0.0.1:8080", mux))
}
```

## Core Features

Load the appropriate reference based on what you need:

| Feature | Reference | When to load |
| --- | --- | --- |
| Initialization | [references/getting-started.md](references/getting-started.md) | Setting up `genkit.Init`, plugins, the `*Genkit` instance pattern |
| Generation | [references/generation.md](references/generation.md) | `Generate`, `GenerateText`, `GenerateData`, streaming, output formats |
| Prompts | [references/prompts.md](references/prompts.md) | `DefinePrompt`, `DefineDataPrompt`, `.prompt` files, schemas |
| Tools | [references/tools.md](references/tools.md) | `DefineTool`, tool interrupts, `RestartWith`/`RespondWith` |
| Flows & HTTP | [references/flows-and-http.md](references/flows-and-http.md) | `DefineFlow`, `DefineStreamingFlow`, `genkit.Handler`, HTTP serving |
| Model Providers | [references/providers.md](references/providers.md) | Google AI, Vertex AI, Anthropic, OpenAI-compatible, Ollama setup |

## Genkit CLI

Check if installed: `genkit --version`

**Installation:**
```bash
curl -sL cli.genkit.dev | bash
```

**Key commands:**

```bash
# Start app with Developer UI (tracing, flow testing) at http://localhost:4000
genkit start -- go run .
genkit start -o -- go run . # also opens browser

# Run a flow directly from the CLI
genkit flow:run myFlow '{"data": "input"}'
genkit flow:run myFlow '{"data": "input"}' --stream # with streaming
genkit flow:run myFlow '{"data": "input"}' --wait   # wait for completion

# Look up Genkit documentation
genkit docs:search "streaming" go
genkit docs:list go
genkit docs:read go/flows.md
```

See [references/getting-started.md](references/getting-started.md) for full CLI and Developer UI details.

## Key Guidance

- **Pass `g` explicitly.** The `*Genkit` instance returned by `genkit.Init` is the central registry. Pass it to all Genkit functions rather than storing it as a global. This is a core pattern throughout the SDK.
- **Wrap AI logic in flows.** Flows give you tracing, observability, HTTP deployment via `genkit.Handler`, and the ability to test from the Developer UI and CLI. Any generation call worth keeping should live in a flow.
- **Use `jsonschema:"description=..."` struct tags on output types.** The model uses these descriptions to understand what each field should contain. Without them, structured output quality drops significantly.
- **Write good tool descriptions.** The model decides which tools to call based on their description string. Vague descriptions lead to missed or incorrect tool calls.
- **Use `.prompt` files for complex prompts.** They separate prompt content from Go code, support Handlebars templating, and can be iterated on without recompilation. Code-defined prompts are better for simple, single-line cases.
- **Look up the latest model IDs.** Model names change frequently. Check provider documentation for current model IDs rather than relying on hardcoded names. See [references/providers.md](references/providers.md).

# Flows & HTTP

## DefineFlow

Wrap AI logic in a flow for observability, tracing, and HTTP deployment.

```go
jokeFlow := genkit.DefineFlow(g, "jokeFlow",
	func(ctx context.Context, topic string) (string, error) {
		return genkit.GenerateText(ctx, g,
			ai.WithModelName("googleai/gemini-flash-latest"),
			ai.WithPrompt("Tell me a joke about %s", topic),
		)
	},
)
```

### Running a Flow Directly

```go
result, err := jokeFlow.Run(ctx, "cats")
```

## DefineStreamingFlow

Streaming flows send chunks back to the caller as they are generated. Two common patterns:

### Pattern 1: Passthrough Streaming

Pass the stream callback directly through to `WithStreaming`. The callback type is `ai.ModelStreamCallback` = `func(context.Context, *ai.ModelResponseChunk) error`:

```go
genkit.DefineStreamingFlow(g, "streamingJokeFlow",
	func(ctx context.Context, topic string, sendChunk ai.ModelStreamCallback) (string, error) {
		resp, err := genkit.Generate(ctx, g,
			ai.WithModelName("googleai/gemini-flash-latest"),
			ai.WithPrompt("Tell me a long joke about %s", topic),
			ai.WithStreaming(sendChunk), // passthrough
		)
		if err != nil {
			return "", err
		}
		return resp.Text(), nil
	},
)
```

### Pattern 2: Manual String Streaming

Use `core.StreamCallback[string]` to stream extracted text:

```go
genkit.DefineStreamingFlow(g, "streamingJokeFlow",
	func(ctx context.Context, topic string, sendChunk core.StreamCallback[string]) (string, error) {
		stream := genkit.GenerateStream(ctx, g,
			ai.WithModelName("googleai/gemini-flash-latest"),
			ai.WithPrompt("Tell me a long joke about %s", topic),
		)
		for result, err := range stream {
			if err != nil {
				return "", err
			}
			if result.Done {
				return result.Response.Text(), nil
			}
			sendChunk(ctx, result.Chunk.Text())
		}
		return "", nil
	},
)
```

### Typed Streaming Flows

Use `core.StreamCallback[T]` with `GenerateDataStream` for typed chunks:

```go
genkit.DefineStreamingFlow(g, "structuredStream",
	func(ctx context.Context, input JokeRequest, sendChunk core.StreamCallback[*Joke]) (*Joke, error) {
		stream := genkit.GenerateDataStream[*Joke](ctx, g,
			ai.WithModelName("googleai/gemini-flash-latest"),
			ai.WithPrompt("Tell me a joke about %s", input.Topic),
		)
		for result, err := range stream {
			if err != nil {
				return nil, err
			}
			if result.Done {
				return result.Output, nil
			}
			sendChunk(ctx, result.Chunk)
		}
		return nil, nil
	},
)
```

## Named Sub-Steps

Use `core.Run` inside a flow for traced sub-steps:

```go
genkit.DefineFlow(g, "pipeline",
	func(ctx context.Context, input string) (string, error) {
		subject, err := core.Run(ctx, "extract-subject", func() (string, error) {
			return genkit.GenerateText(ctx, g,
				ai.WithPrompt("Extract the subject from: %s", input),
			)
		})
		if err != nil {
			return "", err
		}

		joke, err := core.Run(ctx, "generate-joke", func() (string, error) {
			return genkit.GenerateText(ctx, g,
				ai.WithPrompt("Tell me a joke about %s", subject),
			)
		})
		return joke, err
	},
)
```

## HTTP Handlers

### genkit.Handler

Convert any flow into an `http.HandlerFunc`:

```go
mux := http.NewServeMux()
for _, f := range genkit.ListFlows(g) {
	mux.HandleFunc("POST /"+f.Name(), genkit.Handler(f))
}
log.Fatal(server.Start(ctx, "127.0.0.1:8080", mux))
```

### Request/Response Format

**Non-streaming request:**
```bash
curl -X POST http://localhost:8080/jokeFlow \
  -H "Content-Type: application/json" \
  -d '{"data": "bananas"}'
```

Response: `{"result": "Why did the banana go to the doctor?..."}`

**Streaming request:**
```bash
curl -N -X POST http://localhost:8080/streamingJokeFlow \
  -H "Content-Type: application/json" \
  -d '{"data": "bananas"}'
```

Streaming responses use Server-Sent Events (SSE) format.

### genkit.HandlerFunc

For frameworks that expect error-returning handlers:

```go
handler := genkit.HandlerFunc(myFlow)
// handler is func(http.ResponseWriter, *http.Request) error
```

### Context Providers

Inject request context (e.g., auth headers) into flow execution:

```go
mux.HandleFunc("POST /myFlow", genkit.Handler(myFlow,
	genkit.WithContextProviders(func(ctx context.Context, rd core.RequestData) (api.ActionContext, error) {
		// rd.Headers contains HTTP headers
		return api.ActionContext{"userId": rd.Headers.Get("X-User-Id")}, nil
	}),
))
```

### ListFlows

Get all registered flows for dynamic route setup:

```go
flows := genkit.ListFlows(g) // []api.Action
for _, f := range flows {
	fmt.Println(f.Name())
}
```

# Generation

## GenerateText

Simplest form. Returns a string.

```go
text, err := genkit.GenerateText(ctx, g,
	ai.WithModelName("googleai/gemini-flash-latest"),
	ai.WithPrompt("Tell me a joke about %s", topic),
)
```

## Generate

Returns a full `*ModelResponse` with metadata, usage stats, and history.

```go
resp, err := genkit.Generate(ctx, g,
	ai.WithModelName("googleai/gemini-flash-latest"),
	ai.WithSystem("You are a helpful assistant."),
	ai.WithPrompt("Explain %s", topic),
)
fmt.Println(resp.Text())       // concatenated text
fmt.Println(resp.FinishReason) // ai.FinishReasonStop, etc.
fmt.Println(resp.Usage)        // token counts
```

## GenerateData (Structured Output)

Returns a typed Go value parsed from the model's JSON output.

```go
type Joke struct {
	Setup     string `json:"setup" jsonschema:"description=The setup of the joke"`
	Punchline string `json:"punchline" jsonschema:"description=The punchline"`
}

joke, resp, err := genkit.GenerateData[Joke](ctx, g,
	ai.WithModelName("googleai/gemini-flash-latest"),
	ai.WithPrompt("Tell me a joke about %s", topic),
)
// joke is *Joke, resp is *ModelResponse
```

## Streaming

### GenerateStream

Returns an iterator. Each value has `.Done`, `.Chunk`, and `.Response`.

```go
stream := genkit.GenerateStream(ctx, g,
	ai.WithModelName("googleai/gemini-flash-latest"),
	ai.WithPrompt("Tell me a long story about %s", topic),
)
for result, err := range stream {
	if err != nil {
		return err
	}
	if result.Done {
		// result.Response.Text() holds the complete final text
		break
	}
	fmt.Print(result.Chunk.Text()) // incremental text
}
```

### GenerateDataStream (Structured Streaming)

Streams typed partial objects as they arrive.

```go
stream := genkit.GenerateDataStream[Joke](ctx, g,
	ai.WithModelName("googleai/gemini-flash-latest"),
	ai.WithPrompt("Tell me a joke about %s", topic),
)
for result, err := range stream {
	if err != nil {
		return err
	}
	if result.Done {
		// result.Output is the final *Joke
		break
	}
	fmt.Printf("%+v\n", result.Chunk) // result.Chunk is a partial *Joke
}
```

### Callback-Based Streaming

Use `ai.WithStreaming` with `Generate` for callback-style streaming. The callback receives `*ai.ModelResponseChunk`:

```go
resp, err := genkit.Generate(ctx, g,
	ai.WithModelName("googleai/gemini-flash-latest"),
	ai.WithPrompt("Tell me a story"),
	ai.WithStreaming(func(ctx context.Context, chunk *ai.ModelResponseChunk) error {
		fmt.Print(chunk.Text()) // extract text from chunk
		return nil
	}),
)
// resp contains the final complete response
```

## Common Options

```go
// Model selection
ai.WithModel(googlegenai.ModelRef("googleai/gemini-flash-latest", nil)) // model reference
ai.WithModelName("googleai/gemini-flash-latest")                        // by name string

// Content
ai.WithPrompt("Tell me about %s", topic)  // user message (supports fmt verbs)
ai.WithSystem("You are a pirate.")        // system instructions
ai.WithMessages(msg1, msg2)               // conversation history
ai.WithDocs(doc1, doc2)                   // context documents
ai.WithTextDocs("context 1", "context 2") // context as strings

// Model config (provider-specific)
ai.WithConfig(map[string]any{"temperature": 0.7})
```

## Output Formats

Control how the model structures its output.

### By Go Type

```go
// Automatically uses JSON format and instructs the model to match the type
ai.WithOutputType(MyStruct{})
```

### By Format String

```go
ai.WithOutputFormat(ai.OutputFormatJSON)  // single JSON object
ai.WithOutputFormat(ai.OutputFormatJSONL) // JSON Lines (one object per line)
ai.WithOutputFormat(ai.OutputFormatArray) // JSON array
ai.WithOutputFormat(ai.OutputFormatEnum)  // constrained enum value
ai.WithOutputFormat(ai.OutputFormatText)  // plain text (default)
```

### Enum Output

```go
type Color string

const (
	Red   Color = "red"
	Green Color = "green"
	Blue  Color = "blue"
)

text, err := genkit.GenerateText(ctx, g,
	ai.WithPrompt("What color is the sky?"),
	ai.WithOutputEnums(Red, Green, Blue),
)
```

### Custom Output Instructions

```go
ai.WithOutputInstructions("Return a JSON object with fields: name (string), age (number)")
```

### Combining Format + Schema

```go
// JSONL with a typed schema (useful for streaming lists)
genkit.DefinePrompt(g, "characters",
	ai.WithPrompt("Generate 5 story characters"),
	ai.WithOutputType([]StoryCharacter{}),
	ai.WithOutputFormat(ai.OutputFormatJSONL),
)
```

# Getting Started

## Project Setup

```bash
mkdir my-genkit-app && cd my-genkit-app
go mod init my-genkit-app
go get github.com/genkit-ai/genkit/go@latest
```

Add provider plugin(s) for the models you want to use:
```bash
go get github.com/genkit-ai/genkit/go/plugins/googlegenai # Google AI / Vertex AI
go get github.com/genkit-ai/genkit/go/plugins/anthropic   # Anthropic Claude
go get github.com/genkit-ai/genkit/go/plugins/compat_oai  # OpenAI-compatible
go get github.com/genkit-ai/genkit/go/plugins/ollama      # Ollama (local)
```

After writing your code, run `go mod tidy` to resolve all dependencies.

## Initialization

Every Genkit app starts with `genkit.Init`, which returns a `*Genkit` instance:

```go
import (
	"context"

	"github.com/genkit-ai/genkit/go/genkit"
	"github.com/genkit-ai/genkit/go/plugins/googlegenai"
)

ctx := context.Background()
g := genkit.Init(ctx,
	genkit.WithPlugins(&googlegenai.GoogleAI{}),
)
```

### The `*Genkit` Instance

The `*Genkit` value `g` is the central registry. Pass it to every Genkit function:

```go
// Defining resources
genkit.DefineFlow(g, "myFlow", ...)
genkit.DefineTool(g, "myTool", ...)
genkit.DefinePrompt(g, "myPrompt", ...)

// Generating content
genkit.GenerateText(ctx, g, ...)
genkit.Generate(ctx, g, ...)
```

Do not store `g` in a global variable. Pass it explicitly through your call chain.

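A minimal sketch of the explicit-passing pattern (the `Server` type and its methods are illustrative, not part of the SDK):

```go
// Server bundles its dependencies, including the Genkit registry,
// so handlers receive g explicitly instead of reading a global.
type Server struct {
	g *genkit.Genkit
}

func NewServer(g *genkit.Genkit) *Server { return &Server{g: g} }

func (s *Server) joke(ctx context.Context, topic string) (string, error) {
	return genkit.GenerateText(ctx, s.g,
		ai.WithPrompt("Tell me a joke about %s", topic),
	)
}
```
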
### Init Options

```go
g := genkit.Init(ctx,
	// Register one or more plugins
	genkit.WithPlugins(&googlegenai.GoogleAI{}, &anthropic.Anthropic{}),

	// Set a default model (used when no model is specified)
	genkit.WithDefaultModel("googleai/gemini-flash-latest"),

	// Set directory for .prompt files (default: "prompts")
	genkit.WithPromptDir("my-prompts"),

	// Or embed prompts using Go's embed package
	// genkit.WithPromptFS(promptsFS),
)
```

### Embedding Prompts

Use `go:embed` to bundle `.prompt` files into the binary:

```go
//go:embed prompts
var promptsFS embed.FS

g := genkit.Init(ctx,
	genkit.WithPlugins(&googlegenai.GoogleAI{}),
	genkit.WithPromptFS(promptsFS),
)
```

## Genkit CLI

The Genkit CLI provides a local Developer UI for running flows, tracing executions, and inspecting model interactions.

**Install:**
```bash
curl -sL cli.genkit.dev | bash
```

**Verify:**
```bash
genkit --version
```

### Developer UI

Start your app with the Developer UI attached:

```bash
genkit start -- go run .
```

This launches:
- Your app (with tracing enabled)
- The Developer UI at `http://localhost:4000`
- A telemetry API at `http://localhost:4033`

Add `-o` to auto-open the UI in your browser:
```bash
genkit start -o -- go run .
```

The Developer UI lets you:
- Run and test flows interactively
- View traces for each generation call (inputs, outputs, latency, token usage)
- Inspect prompt rendering and tool calls
- Debug multi-step flows with per-step trace data

### Without the CLI

Set `GENKIT_ENV=dev` to enable the reflection API without the CLI:

```bash
GENKIT_ENV=dev go run .
```

## Import Paths

```go
import (
	"github.com/genkit-ai/genkit/go/genkit"         // Core: Init, Generate*, DefineFlow, etc.
	"github.com/genkit-ai/genkit/go/ai"             // Types: WithModel, WithPrompt, Message, Part, etc.
	"github.com/genkit-ai/genkit/go/core"           // Low-level: Run (sub-steps), Flow types
	"github.com/genkit-ai/genkit/go/plugins/server" // server.Start for HTTP
)
```

# Prompts

## DefinePrompt

Define a reusable prompt in code with a default model and template.

```go
jokePrompt := genkit.DefinePrompt(g, "joke",
	ai.WithModel(googlegenai.ModelRef("googleai/gemini-flash-latest", nil)),
	ai.WithInputType(JokeRequest{Topic: "example"}),
	ai.WithPrompt("Tell me a joke about {{topic}}."),
)
```

### Execute

```go
resp, err := jokePrompt.Execute(ctx,
	ai.WithInput(map[string]any{"topic": "cats"}),
)
fmt.Println(resp.Text())
```

### ExecuteStream

```go
stream := jokePrompt.ExecuteStream(ctx,
	ai.WithInput(map[string]any{"topic": "cats"}),
)
for result, err := range stream {
	if err != nil { return err }
	if result.Done { break }
	fmt.Print(result.Chunk.Text())
}
```

### Override Options at Execution

```go
resp, err := jokePrompt.Execute(ctx,
	ai.WithInput(map[string]any{"topic": "cats"}),
	ai.WithModelName("googleai/gemini-pro-latest"), // override model
	ai.WithConfig(map[string]any{"temperature": 0.9}),
	ai.WithTools(myTool),
)
```

## DefineDataPrompt (Typed Input/Output)

Strongly-typed prompts with Go generics.

```go
type JokeRequest struct {
	Topic string `json:"topic"`
}

type Joke struct {
	Setup     string `json:"setup" jsonschema:"description=The setup"`
	Punchline string `json:"punchline" jsonschema:"description=The punchline"`
}

jokePrompt := genkit.DefineDataPrompt[JokeRequest, *Joke](g, "structured-joke",
	ai.WithModel(googlegenai.ModelRef("googleai/gemini-flash-latest", nil)),
	ai.WithPrompt("Tell me a joke about {{topic}}."),
)
```

### Execute (typed)

```go
joke, resp, err := jokePrompt.Execute(ctx, JokeRequest{Topic: "cats"})
// joke is *Joke, resp is *ModelResponse
```

### ExecuteStream (typed)

```go
stream := jokePrompt.ExecuteStream(ctx, JokeRequest{Topic: "cats"})
for result, err := range stream {
	if err != nil { return err }
	if result.Done {
		finalJoke := result.Output // *Joke
		break
	}
	fmt.Print(result.Chunk) // partial *Joke
}
```

## .prompt Files (Dotprompt)

Define prompts in separate files with YAML frontmatter and Handlebars templates.

### Basic .prompt File

`prompts/joke.prompt`:
```
---
model: googleai/gemini-flash-latest
input:
  schema:
    topic: string
---
Tell me a joke about {{topic}}.
```

### Load and Use

```go
// LookupPrompt returns Prompt (untyped: map[string]any input, string output)
jokePrompt := genkit.LookupPrompt(g, "joke")
resp, err := jokePrompt.Execute(ctx,
	ai.WithInput(map[string]any{"topic": "cats"}),
)
```

### Typed .prompt File

`prompts/structured-joke.prompt`:
```
---
model: googleai/gemini-flash-latest
config:
  thinkingConfig:
    thinkingBudget: 0
input:
  schema: JokeRequest
output:
  format: json
  schema: Joke
---
Tell me a joke about {{topic}}.
```

Register Go types so the .prompt file can reference them by name:
```go
genkit.DefineSchemaFor[JokeRequest](g)
genkit.DefineSchemaFor[Joke](g)

jokePrompt := genkit.LookupDataPrompt[JokeRequest, *Joke](g, "structured-joke")
joke, resp, err := jokePrompt.Execute(ctx, JokeRequest{Topic: "cats"})
```

### LoadPrompt (Explicit Path)

```go
prompt := genkit.LoadPrompt(g, "./prompts/countries.prompt", "countries")
resp, err := prompt.Execute(ctx)
```

### .prompt File Features

**Multi-message prompts with roles:**
```
---
model: googleai/gemini-flash-latest
input:
  schema:
    question: string
---
{{ role "system" }}
You are a helpful assistant.

{{ role "user" }}
{{question}}
```

**Media in prompts:**
```
---
model: googleai/gemini-flash-latest
input:
  schema:
    videoUrl: string
    contentType: string
---
{{ role "user" }}
Summarize this video:
{{media url=videoUrl contentType=contentType}}
```

**Conditionals and loops:**
```
---
input:
  schema:
    topic: string
    dietaryRestrictions?(array): string
---
Write a recipe about {{topic}}.
{{#if dietaryRestrictions}}
Dietary restrictions: {{#each dietaryRestrictions}}{{this}}{{#unless @last}}, {{/unless}}{{/each}}.
{{/if}}
```

**Inline schema in .prompt file:**
```
---
model: googleai/gemini-flash-latest
input:
  schema:
    topic: string
    style?: string
output:
  format: json
  schema:
    title: string
    body: string
    tags(array): string
---
Write an article about {{topic}}.
{{#if style}}Write in a {{style}} style.{{/if}}
```

## Schemas

### DefineSchemaFor (from Go type)

Registers a Go struct as a named schema for use in `.prompt` files.

```go
genkit.DefineSchemaFor[JokeRequest](g)
genkit.DefineSchemaFor[Joke](g)
```

The schema name matches the Go type name. Use `jsonschema` struct tags for metadata:

```go
type Recipe struct {
	Title       string       `json:"title" jsonschema:"description=The recipe title"`
	Difficulty  string       `json:"difficulty" jsonschema:"enum=easy,enum=medium,enum=hard"`
	Ingredients []Ingredient `json:"ingredients"`
	Steps       []string     `json:"steps"`
}

type Ingredient struct {
	Name   string  `json:"name"`
	Amount float64 `json:"amount"`
	Unit   string  `json:"unit"`
}
```

### DefineSchema (manual JSON Schema)

```go
genkit.DefineSchema(g, "Recipe", map[string]any{
	"type": "object",
	"properties": map[string]any{
		"title": map[string]any{"type": "string"},
		"ingredients": map[string]any{
			"type":  "array",
			"items": map[string]any{"type": "object"},
		},
	},
	"required": []string{"title", "ingredients"},
})
```
# Model Providers

## Google AI (Gemini)

```go
import "github.com/genkit-ai/genkit/go/plugins/googlegenai"

g := genkit.Init(ctx, genkit.WithPlugins(&googlegenai.GoogleAI{}))
```

**Env var:** `GEMINI_API_KEY` or `GOOGLE_API_KEY`

Model names follow the format `googleai/<model-id>`. Look up the latest model IDs at https://ai.google.dev/gemini-api/docs/models.

```go
// By name string
ai.WithModelName("googleai/gemini-flash-latest")

// Model ref with provider-specific config
ai.WithModel(googlegenai.ModelRef("googleai/gemini-flash-latest", &genai.GenerateContentConfig{
	ThinkingConfig: &genai.ThinkingConfig{
		ThinkingBudget: genai.Ptr[int32](0), // disable thinking
	},
}))

// Lookup a model instance
m := googlegenai.GoogleAIModel(g, "gemini-flash-latest")
```

## Vertex AI

```go
import "github.com/genkit-ai/genkit/go/plugins/googlegenai"

g := genkit.Init(ctx, genkit.WithPlugins(&googlegenai.VertexAI{}))
```

**Env vars:** `GOOGLE_CLOUD_PROJECT`, `GOOGLE_CLOUD_LOCATION` (or `GOOGLE_CLOUD_REGION`)

Uses Application Default Credentials (`gcloud auth application-default login`).

Model names follow the format `vertexai/<model-id>`, with the same model IDs as Google AI.

```go
ai.WithModelName("vertexai/gemini-flash-latest")
```

## Anthropic (Claude)

```go
import (
	"github.com/anthropics/anthropic-sdk-go"               // Anthropic SDK types
	ant "github.com/genkit-ai/genkit/go/plugins/anthropic" // Genkit plugin
)

g := genkit.Init(ctx, genkit.WithPlugins(&ant.Anthropic{}))
```

**Env var:** `ANTHROPIC_API_KEY`

Model names follow the format `anthropic/<model-id>`. Look up the latest model IDs at https://docs.anthropic.com/en/docs/about-claude/models.

```go
// By name
ai.WithModelName("anthropic/claude-sonnet-4-6")

// With provider-specific config (uses Anthropic SDK types via ai.WithConfig)
ai.WithConfig(&anthropic.MessageNewParams{
	Temperature: anthropic.Float(1.0),
	MaxTokens:   *anthropic.IntPtr(2000),
	Thinking: anthropic.ThinkingConfigParamUnion{
		OfEnabled: &anthropic.ThinkingConfigEnabledParam{
			BudgetTokens: *anthropic.IntPtr(1024),
		},
	},
})
```

## OpenAI-Compatible (compat_oai)

Works with any OpenAI-compatible API: OpenAI, DeepSeek, xAI, etc.

```go
import "github.com/genkit-ai/genkit/go/plugins/compat_oai"

openaiPlugin := &compat_oai.OpenAICompatible{
	Provider: "openai", // unique identifier
	APIKey:   os.Getenv("OPENAI_API_KEY"),
	// BaseURL: "https://custom-endpoint/v1", // for non-OpenAI providers
}
g := genkit.Init(ctx, genkit.WithPlugins(openaiPlugin))
```

Define models explicitly (they are not auto-discovered):

```go
model := openaiPlugin.DefineModel("openai", "gpt-4o", compat_oai.ModelOptions{})
```

Use with:
```go
ai.WithModel(model)
```

## Ollama (Local Models)

```go
import "github.com/genkit-ai/genkit/go/plugins/ollama"

ollamaPlugin := &ollama.Ollama{
	ServerAddress: "http://localhost:11434",
	Timeout:       60, // seconds
}
g := genkit.Init(ctx, genkit.WithPlugins(ollamaPlugin))
```

Define models explicitly:

```go
model := ollamaPlugin.DefineModel(g,
	ollama.ModelDefinition{
		Name: "llama3.1",
		Type: "chat", // or "generate"
	},
	nil, // optional *ModelOptions
)
```

Use with:
```go
ai.WithModel(model)
```

## Multiple Providers

Register multiple plugins in a single Genkit instance:

```go
g := genkit.Init(ctx,
	genkit.WithPlugins(
		&googlegenai.GoogleAI{},
		&ant.Anthropic{},
	),
	genkit.WithDefaultModel("googleai/gemini-flash-latest"),
)

// Use different models per call
text1, _ := genkit.GenerateText(ctx, g,
	ai.WithModelName("googleai/gemini-flash-latest"),
	ai.WithPrompt("Hello from Gemini"),
)

text2, _ := genkit.GenerateText(ctx, g,
	ai.WithModelName("anthropic/claude-sonnet-4-6"),
	ai.WithPrompt("Hello from Claude"),
)
```
# Tools

## DefineTool

Define a tool the model can call during generation.

```go
type WeatherInput struct {
	Location string `json:"location" jsonschema:"description=City name"`
}

type WeatherOutput struct {
	Temperature float64 `json:"temperature"`
	Conditions  string  `json:"conditions"`
}

weatherTool := genkit.DefineTool(g, "getWeather",
	"Gets the current weather for a location.",
	func(ctx *ai.ToolContext, input WeatherInput) (WeatherOutput, error) {
		// Call your weather API
		return WeatherOutput{Temperature: 72, Conditions: "sunny"}, nil
	},
)
```

## Using Tools in Generation

Pass tools to `Generate`, `GenerateText`, or prompts:

```go
resp, err := genkit.Generate(ctx, g,
	ai.WithModelName("googleai/gemini-flash-latest"),
	ai.WithPrompt("What's the weather in San Francisco?"),
	ai.WithTools(weatherTool),
)
// The model calls the tool automatically and incorporates the result
fmt.Println(resp.Text())
```

### Tool Choice

```go
ai.WithToolChoice(ai.ToolChoiceAuto)     // model decides (default)
ai.WithToolChoice(ai.ToolChoiceRequired) // model must use a tool
ai.WithToolChoice(ai.ToolChoiceNone)     // model cannot use tools
```

### Max Turns

Limit how many tool-call round trips the model can make:

```go
ai.WithMaxTurns(3) // default is 5
```

## DefineMultipartTool

Tools that return both structured output and media content:

```go
screenshotTool := genkit.DefineMultipartTool(g, "screenshot",
	"Takes a screenshot of the current page",
	func(ctx *ai.ToolContext, input any) (*ai.MultipartToolResponse, error) {
		return &ai.MultipartToolResponse{
			Output:  map[string]any{"success": true},
			Content: []*ai.Part{ai.NewMediaPart("image/png", base64Data)},
		}, nil
	},
)
```

## Tool Interrupts

Pause tool execution to request human input before continuing.

### Interrupting

```go
type TransferInput struct {
	ToAccount string  `json:"toAccount"`
	Amount    float64 `json:"amount"`
}

type TransferOutput struct {
	Status  string  `json:"status"`
	Message string  `json:"message"`
	Balance float64 `json:"balance"`
}

type TransferInterrupt struct {
	Reason    string  `json:"reason"`
	ToAccount string  `json:"toAccount"`
	Amount    float64 `json:"amount"`
	Balance   float64 `json:"balance"`
}

transferTool := genkit.DefineTool(g, "transferMoney",
	"Transfers money to another account.",
	func(ctx *ai.ToolContext, input TransferInput) (TransferOutput, error) {
		if input.Amount > accountBalance {
			return TransferOutput{}, ai.InterruptWith(ctx, TransferInterrupt{
				Reason:    "insufficient_balance",
				ToAccount: input.ToAccount,
				Amount:    input.Amount,
				Balance:   accountBalance,
			})
		}
		// Process transfer...
		return TransferOutput{Status: "success", Balance: newBalance}, nil
	},
)
```

### Handling Interrupts

```go
resp, err := genkit.Generate(ctx, g,
	ai.WithModelName("googleai/gemini-flash-latest"),
	ai.WithTools(transferTool),
	ai.WithPrompt(userRequest),
)

for resp.FinishReason == ai.FinishReasonInterrupted {
	var restarts, responses []*ai.Part

	for _, interrupt := range resp.Interrupts() {
		meta, ok := ai.InterruptAs[TransferInterrupt](interrupt)
		if !ok {
			continue
		}

		switch meta.Reason {
		case "insufficient_balance":
			// RestartWith: re-execute the tool with adjusted input
			part, err := transferTool.RestartWith(interrupt,
				ai.WithNewInput(TransferInput{
					ToAccount: meta.ToAccount,
					Amount:    meta.Balance, // transfer what's available
				}),
			)
			if err != nil { return err }
			restarts = append(restarts, part)

		case "confirm_large":
			// RespondWith: provide a response directly without re-executing
			part, err := transferTool.RespondWith(interrupt,
				TransferOutput{Status: "cancelled", Message: "User declined"},
			)
			if err != nil { return err }
			responses = append(responses, part)
		}
	}

	// Continue generation with the resolved interrupts
	resp, err = genkit.Generate(ctx, g,
		ai.WithMessages(resp.History()...),
		ai.WithTools(transferTool),
		ai.WithToolRestarts(restarts...),
		ai.WithToolResponses(responses...),
	)
	if err != nil { return err }
}
```

### Checking Resume State

Inside a tool function, check whether the tool is being resumed after an interrupt:

```go
func(ctx *ai.ToolContext, input TransferInput) (TransferOutput, error) {
	if ctx.IsResumed() {
		// This is a resumed call after an interrupt
		original, ok := ai.OriginalInputAs[TransferInput](ctx)
		// original contains the input from the first call
	}
	// ...
}
```
---
name: developing-genkit-js
description: Develop AI-powered applications using Genkit in Node.js/TypeScript. Use when the user asks about Genkit, AI agents, flows, or tools in JavaScript/TypeScript, or when encountering Genkit errors, validation issues, type errors, or API problems.
metadata:
  genkit-managed: true
---

# Genkit JS

## Prerequisites

Ensure the `genkit` CLI is available.
- Run `genkit --version` to verify. Minimum CLI version needed: **1.29.0**
- If it is not found, or an older version (1.x < 1.29.0) is installed, install or upgrade it: `npm install -g genkit-cli@^1.29.0`.

**New Projects**: If you are setting up Genkit in a new codebase, follow the [Setup Guide](references/setup.md).

## Hello World

```ts
import { z, genkit } from 'genkit';
import { googleAI } from '@genkit-ai/google-genai';

// Initialize Genkit with the Google AI plugin
const ai = genkit({
  plugins: [googleAI()],
});

export const myFlow = ai.defineFlow({
  name: 'myFlow',
  inputSchema: z.string().default('AI'),
  outputSchema: z.string(),
}, async (subject) => {
  const response = await ai.generate({
    model: googleAI.model('gemini-2.5-flash'),
    prompt: `Tell me a joke about ${subject}`,
  });
  return response.text;
});
```

## Critical: Do Not Trust Internal Knowledge

Genkit recently went through a major breaking API change, so your built-in knowledge is likely outdated. You MUST look up the docs. Recommended:

```sh
genkit docs:read js/get-started.md
genkit docs:read js/flows.md
```

See [Common Errors](references/common-errors.md) for a list of deprecated APIs (e.g., `configureGenkit`, `response.text()`, the `defineFlow` import) and their v1.x replacements.

**ALWAYS verify information using the Genkit CLI or the provided references.**

## Error Troubleshooting Protocol

**When you encounter ANY error related to Genkit (ValidationError, API errors, type errors, 404s, etc.):**

1. **MANDATORY FIRST STEP**: Read [Common Errors](references/common-errors.md)
2. Identify if the error matches a known pattern
3. Apply the documented solution
4. Only if not found in common-errors.md, consult other sources (e.g. `genkit docs:search`)

**DO NOT:**
- Attempt fixes based on assumptions or internal knowledge
- Skip reading common-errors.md "because you think you know the fix"
- Rely on patterns from pre-1.0 Genkit

**This protocol is non-negotiable for error handling.**

## Development Workflow

1. **Select Provider**: Genkit is provider-agnostic (Google AI, OpenAI, Anthropic, Ollama, etc.).
   - If the user does not specify a provider, default to **Google AI**.
   - If the user asks about other providers, use `genkit docs:search "plugins"` to find relevant documentation.
2. **Detect Framework**: Check `package.json` to identify the runtime (Next.js, Firebase, Express).
   - Look for `@genkit-ai/next`, `@genkit-ai/firebase`, or `@genkit-ai/google-cloud`.
   - Adapt the implementation to the specific framework's patterns.
3. **Follow Best Practices**:
   - See [Best Practices](references/best-practices.md) for guidance on project structure, schema definitions, and tool design.
   - **Be Minimal**: Only specify options that differ from the defaults. When unsure, check the docs/source.
4. **Ensure Correctness**:
   - Run type checks (e.g., `npx tsc --noEmit`) after making changes.
   - If type checks fail, consult [Common Errors](references/common-errors.md) before searching source code.
5. **Handle Errors**:
   - On ANY error: **the first action is to read [Common Errors](references/common-errors.md)**
   - Match the error to documented patterns
   - Apply documented fixes before attempting alternatives

## Finding Documentation

Use the Genkit CLI to find authoritative documentation:

1. **Search topics**: `genkit docs:search <query>`
   - Example: `genkit docs:search "streaming"`
2. **List all docs**: `genkit docs:list`
3. **Read a guide**: `genkit docs:read <path>`
   - Example: `genkit docs:read js/flows.md`

## CLI Usage

The `genkit` CLI is your primary tool for development and documentation.
- See [CLI Reference](references/docs-and-cli.md) for common tasks, workflows, and command usage.
- Use `genkit --help` for a full list of commands.

## References

- [Best Practices](references/best-practices.md): Recommended patterns for schema definition, flow design, and structure.
- [Docs & CLI Reference](references/docs-and-cli.md): Documentation search, CLI tasks, and workflows.
- [Common Errors](references/common-errors.md): Critical "gotchas", migration guide, and troubleshooting.
- [Setup Guide](references/setup.md): Manual setup instructions for new projects.
- [Examples](references/examples.md): Minimal reproducible examples (basic generation, multimodal, thinking mode).
# Genkit Best Practices

## Project Structure
- **Organized Layout**: Keep flows and tools in separate directories (e.g., `src/flows`, `src/tools`) to maintain a clean codebase.
- **Index Exports**: Use `index.ts` files to export flows and tools, making it easier to import them into your main configuration.

## Model Selection (Google AI)
- **Gemini Models**: If using Google AI, ALWAYS use the latest generation (`gemini-3-*` or `gemini-2.5-*`).
- **NEVER** use the `gemini-2.0-*` or `gemini-1.5-*` series, as they are decommissioned and won't work.
- **Recommended**: `gemini-2.5-flash` or `gemini-3-flash-preview` for general use, `gemini-3.1-pro-preview` for complex tasks.

## Model Selection (Other Providers)
- **Consult Documentation**: For other providers (OpenAI, Anthropic, etc.), refer to the provider's official documentation for the latest recommended model versions.

## Schema Definition
- **Use `z` from `genkit`**: Always import `z` from the `genkit` package to ensure compatibility.
  ```ts
  import { z } from "genkit";
  ```
- **Descriptive Schemas**: Use `.describe()` on Zod fields. LLMs use these descriptions to understand how to populate the fields.

## Flow & Tool Design
- **Modularize**: Keep flows and tools in separate files/modules and import them into your main Genkit configuration.
- **Single Responsibility**: Tools should do one thing well. Complex logic should be broken down.

## Configuration
- **Environment Variables**: Store sensitive keys (like API keys) in environment variables or `.env` files. Do not hardcode them.
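For instance, a minimal `.env` sketch (the key name matches the Google AI provider; adjust for your provider, and keep the file out of version control):

```
# .env — never commit this file; add it to .gitignore
GEMINI_API_KEY=your-api-key-here
```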
## Development
- **Use Dev Mode**: Run your app with `genkit start -- <start cmd>` to enable the Developer UI.
- It is recommended to configure a watcher to auto-reload your app (e.g. `node --watch` or `tsx --watch`).
# Common Errors & Pitfalls

## When Typecheck Fails

**Before searching source code or docs**, check the sections below. Many type errors are caused by deprecated APIs or incorrect imports.

## Genkit v1.x vs Pre-1.0 Migration

Genkit v1.x introduced significant API changes. This section covers the critical syntax updates.

### Package Imports

- **Correct (v1.x)**: Import core functionality (zod, genkit) from the main `genkit` package and plugins from their specific packages.
  ```ts
  import { z, genkit } from 'genkit';
  import { googleAI } from '@genkit-ai/google-genai';
  ```
- **Incorrect (Pre-1.0)**: Importing from `@genkit-ai/ai`, `@genkit-ai/core`, or `@genkit-ai/flow`. These packages are internal/deprecated for direct use.
  ```ts
  import { genkit } from "@genkit-ai/core"; // INCORRECT
  import { defineFlow } from "@genkit-ai/flow"; // INCORRECT
  ```

### Model References

- **Correct**: Use plugin-specific model factories or string identifiers prefixed with the plugin name.
  ```ts
  // Using a model factory (v1.x - Preferred)
  await ai.generate({ model: googleAI.model('gemini-2.5-flash'), ... });

  // Using a string identifier
  await ai.generate({ model: 'googleai/gemini-2.5-flash', ... });
  // Or
  await ai.generate({ model: 'vertexai/gemini-2.5-flash', ... });
  ```
- **Incorrect**: Using imported model objects directly, or string identifiers without the plugin prefix.
  ```ts
  await ai.generate({ model: gemini15Pro, ... }); // INCORRECT (Pre-1.0)
  await ai.generate({ model: 'gemini-2.5-flash', ... }); // INCORRECT (No plugin prefix)
  ```

### Model Selection (Gemini)

- **Preferred**: Use `gemini-2.5-*` models for the best performance and features.
  ```ts
  model: googleAI.model('gemini-2.5-flash') // PREFERRED
  ```
- **DEPRECATED**: `gemini-1.5-*` models are deprecated and will throw errors.
  ```ts
  model: googleAI.model('gemini-1.5-flash') // ERROR (Deprecated)
  ```

### Response Access

- **Correct (v1.x)**: Access properties directly.
  ```ts
  response.text; // CORRECT
  response.output; // CORRECT
  ```
- **Incorrect (Pre-1.0)**: Calling them as methods.
  ```ts
  response.text(); // INCORRECT
  response.output(); // INCORRECT
  ```

### Streaming Generation

- **Correct (v1.x)**: Do NOT await `generateStream`. Iterate over `stream` directly, and await the `response` property for the final result.
  ```ts
  const { stream, response } = ai.generateStream(...); // NO await here
  for await (const chunk of stream) { ... } // Iterate the stream
  const finalResponse = await response; // Await the response property
  ```
- **Incorrect (Pre-1.0)**: Calling the stream as a function or awaiting the generator incorrectly.
  ```ts
  for await (const chunk of stream()) { ... } // INCORRECT
  await response(); // INCORRECT
  ```
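The contract above can be illustrated with a stdlib-only mock of the `{ stream, response }` shape (the mock names are ours, not the Genkit API; only the await placement matters):

```typescript
// Mock of generateStream's synchronous return shape: an async-iterable
// stream of chunks plus a promise that resolves to the final response.
function mockGenerateStream(chunks: string[]) {
  let resolveFinal!: (v: string) => void;
  const response = new Promise<string>((r) => (resolveFinal = r));
  async function* gen() {
    let full = '';
    for (const c of chunks) {
      full += c;
      yield { text: c };
    }
    resolveFinal(full); // final response resolves once the stream is drained
  }
  return { stream: gen(), response }; // returned synchronously: nothing to await yet
}

async function run(): Promise<string> {
  const { stream, response } = mockGenerateStream(['Hel', 'lo']); // NO await here
  let streamed = '';
  for await (const chunk of stream) streamed += chunk.text; // iterate the stream
  return await response; // await the response *property* for the final result
}
```

Awaiting `mockGenerateStream` itself would work only by accident (a non-promise await), which is why the real API's pattern is to destructure first and await `response` last.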
### Initialization
|
||||||
|
|
||||||
|
- **Correct (v1.x)**: Instantiate `genkit`.
|
```ts
const ai = genkit({ plugins: [...] });
```

- **Incorrect (Pre-1.0)**: Global configuration.

```ts
configureGenkit({ plugins: [...] }); // INCORRECT
```

### Flow Definitions

- **Correct (v1.x)**: Define flows on the `ai` instance.

```ts
ai.defineFlow({...}, (input) => {...});
```

- **Incorrect (Pre-1.0)**: Importing `defineFlow` globally.

```ts
import { defineFlow } from "@genkit-ai/flow"; // INCORRECT
```

You should never import the `@genkit-ai/flow`, `@genkit-ai/ai`, or `@genkit-ai/core` packages directly.

## Zod & Schema Errors

- **Import Source**: ALWAYS use `import { z } from "genkit"`.
  - Importing `z` directly from the `zod` package may cause instance mismatches or compatibility issues.
- **Supported Types**: Stick to basic types: scalars (`string`, `number`, `boolean`), `object`, and `array`.
  - Avoid complex Zod features unless strictly necessary and verified.
- **Descriptions**: Always use `.describe('...')` on fields in output schemas to guide the LLM.

## Tool Usage

- **Tool Not Found**: Ensure tools are registered in the `tools` array of `generate` or provided via plugins.
- **MCP Tools**: Use the `ServerName:tool_name` format when referencing MCP tools.

## Multimodal & Image Generation

- **Missing responseModalities**: When using image generation models (like `gemini-2.5-flash-image`), you **MUST** specify the response modalities in the config.

```ts
config: {
  responseModalities: ["TEXT", "IMAGE"]
}
```

Failure to do so will result in errors or an incorrect output format.

## Audio & Speech Generation

- **Raw PCM Data vs MP3**: Some providers (e.g., Google GenAI) return raw PCM data, while others (e.g., OpenAI) return MP3.
  - **DO NOT assume MP3 format.**
  - **DO NOT embed raw PCM in HTML audio tags.**
  - **Action**: Run `genkit docs:search "speech audio"` to find provider-specific conversion steps (e.g., PCM to WAV).
# Genkit Documentation & CLI

This reference lists common tasks and workflows using the `genkit` CLI. For authoritative command details, always run `genkit --help` or `genkit <command> --help`.

## Prerequisites

Ensure the installed `genkit-cli` is version >= 1.29.0. If an older 1.x version is present, update the Genkit CLI. Alternatively, to run commands with a specific version or without a global installation, prefix them with `npx -y genkit-cli@^1.29.0`.

## Documentation

- **Search docs**: `genkit docs:search <query>`
  - Example: `genkit docs:search "streaming"`
  - Example: `genkit docs:search "rag retrieval"`
- **Read doc**: `genkit docs:read <path>`
  - Example: `genkit docs:read js/overview.md`
- **List docs**: `genkit docs:list`

## Development Workflow

- **Start Dev Mode**: `genkit start -- <command>`
  - Runs the provided command in Genkit dev mode, enabling the Developer UI (usually at http://localhost:4000).
  - **Node.js (TypeScript)**:
    ```bash
    genkit start -- npx tsx --watch src/index.ts
    ```
  - **Next.js**:
    ```bash
    genkit start -- npx next dev
    ```

## Flow Execution

- **Run a flow**: `genkit flow:run <flowName> '<inputJSON>'`
  - Executes a flow directly from the CLI. Useful for testing.
  - **Simple Input**:
    ```bash
    genkit flow:run tellJoke '"chicken"'
    ```
  - **Object Input**:
    ```bash
    genkit flow:run generateStory '{"subject": "robot", "genre": "sci-fi"}'
    ```
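The input must be valid JSON, shell-quoted. When scripting flow runs from Node, a small helper (hypothetical, for illustration only) avoids quoting mistakes in the JSON payload:

```javascript
// Build a `genkit flow:run` command line from a JS value, single-quoting the
// JSON payload for POSIX shells (embedded single quotes become '\'').
function flowRunCommand(flowName, input) {
  const json = JSON.stringify(input);
  return `genkit flow:run ${flowName} '${json.replace(/'/g, "'\\''")}'`;
}

// flowRunCommand('tellJoke', 'chicken') produces: genkit flow:run tellJoke '"chicken"'
```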
## Evaluation

- **Evaluate a flow**: `genkit eval:flow <flowName> [data]`
  - Runs a flow and evaluates the output against configured evaluators.
  - **Example (Single Input)**:
    ```bash
    genkit eval:flow answerQuestion '[{"testCaseId": "1", "input": {"question": "What is Genkit?"}}]'
    ```
  - **Example (Batch Input)**:
    ```bash
    genkit eval:flow answerQuestion --input inputs.json
    ```

- **Run Evaluation**: `genkit eval:run <dataset>`
  - Evaluates a dataset against configured evaluators.
  - **Example**:
    ```bash
    genkit eval:run dataset.json --output results.json
    ```
# Genkit Examples

This reference contains minimal, reproducible examples (MREs) for common Genkit patterns.

> **Disclaimer**: These examples use **Google AI** models (`googleAI`, `gemini-*`) for demonstration. The patterns apply to **any provider**. To use a different provider:
> 1. Search the docs for the correct plugin: `genkit docs:search "plugins"`.
> 2. Install and configure the plugin.
> 3. Swap the model reference in the code.
## Basic Text Generation

```ts
import { genkit } from "genkit";
import { googleAI } from "@genkit-ai/google-genai";

const ai = genkit({
  plugins: [googleAI()],
});

const { text } = await ai.generate({
  model: googleAI.model('gemini-2.5-flash'),
  prompt: 'Tell me a story in a pirate accent',
});
```
## Structured Output

```ts
import { z } from 'genkit';

const JokeSchema = z.object({
  setup: z.string().describe('The setup of the joke'),
  punchline: z.string().describe('The punchline'),
});

const response = await ai.generate({
  model: googleAI.model('gemini-2.5-flash'),
  prompt: 'Tell me a joke about developers.',
  output: { schema: JokeSchema },
});

// response.output is strongly typed
const joke = response.output;
if (joke) {
  console.log(`${joke.setup} ... ${joke.punchline}`);
}
```
## Streaming

```ts
const { stream, response } = ai.generateStream({
  model: googleAI.model('gemini-2.5-flash'),
  prompt: 'Tell a long story about a developer using Genkit.',
});

for await (const chunk of stream) {
  console.log(chunk.text);
}

// Await the final response
const finalResponse = await response;
console.log('Complete:', finalResponse.text);
```
## Advanced Configuration

### Thinking Mode (Gemini 3 Only)

Enable the model's "thinking" process for complex reasoning tasks.

```ts
const response = await ai.generate({
  model: googleAI.model('gemini-3.1-pro-preview'),
  prompt: 'what is heavier, one kilo of steel or one kilo of feathers',
  config: {
    thinkingConfig: {
      thinkingLevel: 'HIGH', // or 'LOW'
      includeThoughts: true, // Returns the thought process in the response
    },
  },
});
```
### Google Search Grounding

Enable models to access current information via Google Search.

```ts
const response = await ai.generate({
  model: googleAI.model('gemini-2.5-flash'),
  prompt: 'What are the top tech news stories this week?',
  config: {
    googleSearchRetrieval: true,
  },
});

// Access grounding metadata (sources)
const groundingMetadata = (response.custom as any)?.candidates?.[0]?.groundingMetadata;
if (groundingMetadata) {
  console.log('Sources:', groundingMetadata.groundingChunks);
}
```
## Multimodal Generation

### Image Generation / Editing

**Critical**: You MUST set `responseModalities: ['TEXT', 'IMAGE']` when using image generation models.

```ts
// Generate an image
const { media } = await ai.generate({
  model: googleAI.model('gemini-2.5-flash-image'),
  config: { responseModalities: ['TEXT', 'IMAGE'] },
  prompt: "generate a picture of a unicorn wearing a space suit on the moon",
});
// media.url contains the data URI
```

```ts
// Edit an image
const { media } = await ai.generate({
  model: googleAI.model('gemini-2.5-flash-image'),
  config: { responseModalities: ['TEXT', 'IMAGE'] },
  prompt: [
    { text: "change the person's outfit to a banana costume" },
    { media: { url: "https://example.com/photo.jpg" } },
  ],
});
```
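Since `media.url` is a `data:` URI, extracting the image bytes for saving is plain string handling. A minimal Node sketch with no Genkit dependency, assuming a standard base64 data URI:

```javascript
// Split a base64 data URI (e.g. "data:image/png;base64,....") into its
// MIME type and decoded bytes.
function parseDataUri(dataUri) {
  const match = /^data:([^;,]+);base64,(.+)$/.exec(dataUri);
  if (!match) throw new Error('Not a base64 data URI');
  return { mimeType: match[1], bytes: Buffer.from(match[2], 'base64') };
}

// Example: await writeFile('unicorn.png', parseDataUri(media.url).bytes);
```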
### Speech Generation (TTS)

Generate audio from text.

```ts
import { writeFile } from 'node:fs/promises';

const { media } = await ai.generate({
  model: googleAI.model('gemini-2.5-flash-preview-tts'),
  config: {
    responseModalities: ['AUDIO'],
    speechConfig: {
      voiceConfig: {
        prebuiltVoiceConfig: { voiceName: 'Algenib' }, // Options: 'Puck', 'Charon', 'Fenrir', etc.
      },
    },
  },
  prompt: 'Genkit is an amazing library',
});

// The response contains raw PCM data in media.url (base64 encoded).
// CAUTION: This is NOT an MP3/WAV file. It requires conversion (e.g., PCM to WAV).
// DO NOT GUESS. Run `genkit docs:search "speech audio"` to find the correct
// conversion code for your provider.
```
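For orientation only (this does not replace the docs lookup above): a WAV container is just the raw PCM bytes behind a 44-byte RIFF header. A minimal sketch, assuming 16-bit mono PCM; the 24 kHz sample rate is an assumption to confirm against the provider docs:

```javascript
// Wrap raw 16-bit mono PCM bytes in a minimal 44-byte RIFF/WAV header.
// The default sample rate is an assumption for illustration only.
function pcmToWav(pcm, sampleRate = 24000, channels = 1, bitsPerSample = 16) {
  const byteRate = (sampleRate * channels * bitsPerSample) / 8;
  const blockAlign = (channels * bitsPerSample) / 8;
  const header = Buffer.alloc(44);
  header.write('RIFF', 0);
  header.writeUInt32LE(36 + pcm.length, 4); // chunk size = total size - 8
  header.write('WAVE', 8);
  header.write('fmt ', 12);
  header.writeUInt32LE(16, 16);             // fmt subchunk size (PCM)
  header.writeUInt16LE(1, 20);              // audio format 1 = linear PCM
  header.writeUInt16LE(channels, 22);
  header.writeUInt32LE(sampleRate, 24);
  header.writeUInt32LE(byteRate, 28);
  header.writeUInt16LE(blockAlign, 32);
  header.writeUInt16LE(bitsPerSample, 34);
  header.write('data', 36);
  header.writeUInt32LE(pcm.length, 40);
  return Buffer.concat([header, pcm]);
}
```

`Buffer.from(media.url.split(',')[1], 'base64')` would yield the `pcm` bytes from a base64 data URI.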
# Genkit JS Setup

Follow these instructions to set up Genkit in the current codebase. These instructions are general-purpose and have not been written with specific codebase knowledge, so use your best judgement when following them.

0. Tell the user "I'm going to check out your workspace and set you up to use Genkit for GenAI workflows."
1. If the current workspace is empty or is a starter template, your goal will be to create a simple image generation flow that allows someone to generate an image based on a prompt and a selectable style. If the current workspace is not empty, you will create a simple example flow to help get the user started.
2. Check whether any Genkit provider plugin (such as `@genkit-ai/google-genai` or `@genkit-ai/oai-compat` or others; some may start with `genkitx-*`) is installed.
   - If not, ask the user which provider they want to use.
   - **For non-Google providers**: Use `genkit docs:search "plugins"` to find the correct package and installation instructions.
   - If they have no preference, default to `@genkit-ai/google-genai` for a quick start.
   - If this is a Next.js app, install `@genkit-ai/next` as well.
3. Search the codebase for the exact string `genkit(` (remember to escape regexes properly), which would indicate that the user has already set up Genkit in the codebase. If found, there is no need to set it up again; tell the user "Genkit is already configured in this app." and exit this workflow.
4. Create an `ai` directory in the primary source directory of the project (this may be e.g. `src` but is project-dependent). Adapt this path if your project uses a different structure.
5. Create `{sourceDir}/ai/genkit.ts` and populate it using the example below. DO NOT add a `next` plugin to the file; ONLY add a model provider plugin to the plugins array:

```ts
import { genkit, z } from 'genkit';
// Import your chosen provider plugin here. Example:
import { googleAI } from '@genkit-ai/google-genai';

export const ai = genkit({
  plugins: [
    googleAI(), // Add your provider plugin here
  ],
  model: googleAI.model('gemini-2.5-flash'), // Set your provider's model here
});

export { z };
```

6. Create `{sourceDir}/ai/tools` and `{sourceDir}/ai/flows` directories, but leave them empty for now.
7. Create `{sourceDir}/ai/index.ts` and populate it with the following (change the import to match import aliases in `tsconfig.json` as needed):

```ts
import './genkit.js';
// import each created flow, tool, etc. here for use in the Genkit Dev UI
```

8. Add a `genkit:ui` script to `package.json` that runs `genkit start -- npx tsx --watch {sourceDir}/ai/index.ts` (or via `npx genkit-cli`, `pnpm dlx`, or `yarn dlx` for those package managers, if the CLI is not locally installed). DO NOT try to run the script now.
9. Tell the user "Genkit is now configured and ready for use." as setup is now complete. Also remind them to set the appropriate env variables (e.g. `GEMINI_API_KEY` for Google providers). Wait for the user to prompt further before creating any specific flows.
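As a concrete illustration of step 8, assuming `src` is the source directory and the CLI is installed locally, the `package.json` entry might look like:

```json
{
  "scripts": {
    "genkit:ui": "genkit start -- npx tsx --watch src/ai/index.ts"
  }
}
```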
## Next Steps & Troubleshooting

- **Documentation**: Use the [CLI](docs-and-cli.md) to access documentation (e.g., `genkit docs:search`).
- **Building Flows**: See [examples.md](examples.md) for patterns on creating flows, adding tools, and advanced configuration.
- **Troubleshooting**: If you encounter issues during setup or initialization, check [common-errors.md](common-errors.md) for solutions.
---
name: firebase-ai-logic
description: Official skill for integrating Firebase AI Logic (Gemini API) into web applications. Covers setup, multimodal inference, structured output, and security.
version: 1.0.0
---

# Firebase AI Logic Basics

## Overview

Firebase AI Logic is a Firebase product that lets developers add generative AI to their mobile and web apps using client-side SDKs. You can call Gemini models directly from your app without managing a dedicated backend. Firebase AI Logic, previously known as "Vertex AI for Firebase", represents the evolution of Google's AI integration platform for mobile and web developers.

It supports two Gemini API providers:
- **Gemini Developer API**: Has a free tier ideal for prototyping, and pay-as-you-go pricing for production
- **Vertex AI Gemini API**: Ideal for scale with enterprise-grade production readiness; requires the Blaze plan

Use the Gemini Developer API by default, and the Vertex AI Gemini API only if the application requires it.

## Setup & Initialization

### Prerequisites

- Before starting, ensure you have **Node.js 16+** and npm installed. Install them if they aren't already available.
- Identify the platform the user is interested in building on before starting: Android, iOS, Flutter, or Web.
- If their platform is unsupported, direct the user to the Firebase docs to learn how to set up AI Logic for their application (share this link with the user: https://firebase.google.com/docs/ai-logic/get-started)

### Installation

The library is part of the standard Firebase Web SDK.

`npm install firebase@latest`

If you're in a Firebase directory (with a `firebase.json`), the currently selected project will be marked as "current" in the output of this command:

`npx -y firebase-tools@latest projects:list`

Ensure there's at least one app associated with the current project:

`npx -y firebase-tools@latest apps:list`

Initialize the AI Logic SDK with the init command:

`npx -y firebase-tools@latest init # Choose AI Logic`

This will automatically enable the Gemini Developer API in the Firebase console.

More info in [Firebase AI Logic Getting Started](https://firebase.google.com/docs/ai-logic/get-started.md.txt)

## Core Capabilities

### Text-Only Generation

### Multimodal (Text + Images/Audio/Video/PDF input)

Firebase AI Logic allows Gemini models to analyze image files directly from your app. This enables features like creating captions, answering questions about images, detecting objects, and categorizing images. Beyond images, Gemini can analyze other media types like audio, video, and PDFs by passing them as inline data with their MIME type. For files larger than 20 megabytes (which can cause HTTP 413 errors as inline data), store them in Cloud Storage for Firebase and pass their URLs to the Gemini Developer API.
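For orientation, building an inline-data part from raw bytes is just Base64 encoding plus the MIME type. A minimal Node sketch; the part shape follows the inline-data convention described above, so verify it against the SDK reference before relying on it:

```javascript
// Build an inline-data part from raw bytes (Node). In the browser you would
// typically read the bytes from a File via FileReader instead.
function toInlineDataPart(bytes, mimeType) {
  return {
    inlineData: {
      data: Buffer.from(bytes).toString('base64'), // Base64-encoded payload
      mimeType,                                    // e.g. 'image/png'
    },
  };
}
```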
### Chat Session (Multi-turn)

Maintain history automatically using `startChat`.

### Streaming Responses

To improve the user experience by showing partial results as they arrive (like a typing effect), use `generateContentStream` instead of `generateContent`.

### Generate Images with Nano Banana

- Start with Gemini for most use cases, and choose Imagen for specialized tasks where image quality and specific styles are critical. (Example: `gemini-2.5-flash-image`)
- Requires an upgraded Blaze pay-as-you-go billing plan.

### Search Grounding with the built-in googleSearch tool

## Supported Platforms and Frameworks

Supported platforms and frameworks include Kotlin and Java for Android, Swift for iOS, JavaScript for web apps, Dart for Flutter, and C# for Unity.

## Advanced Features

### Structured Output (JSON)

Enforce a specific JSON schema for the response.

### On-Device AI (Hybrid)

Hybrid on-device inference is available for web apps: the Firebase JavaScript SDK automatically checks for Gemini Nano's availability (after installation) and switches between on-device and cloud-hosted prompt execution. This requires specific steps to enable model usage in the Chrome browser; more info in the [hybrid-on-device-inference documentation](https://firebase.google.com/docs/ai-logic/hybrid-on-device-inference.md.txt).

## Security & Production

### App Check

Recommended: enable Firebase App Check to prevent unauthorized clients from consuming your API quota. See [App Check with reCAPTCHA Enterprise](https://firebase.google.com/docs/app-check/web/recaptcha-enterprise-provider.md.txt).

### Remote Config

You do not need to hardcode model names (e.g., `gemini-flash-lite-latest`). Use Firebase Remote Config to update model versions dynamically without deploying new client code. See [Changing model names remotely](https://firebase.google.com/docs/ai-logic/change-model-name-remotely.md.txt)

## Initialization Code References

| Language, Framework, Platform | Gemini API provider | Context URL |
| :---- | :---- | :---- |
| Web Modular API | Gemini Developer API (Developer API) | firebase://docs/ai-logic/get-started |

**Always use the most recent version of Gemini (gemini-flash-latest) unless another model is requested by the docs or the user. DO NOT USE gemini-1.5-flash**

## References

[Web SDK code examples and usage patterns](references/usage_patterns_web.md)
# Firebase AI Logic Basics

## Initialization Pattern

You must initialize the AI Logic service after the main Firebase app.

```JavaScript
import { initializeApp } from "firebase/app";
import { getAI, getGenerativeModel, GoogleAIBackend } from "firebase/ai";

// If running in Firebase App Hosting, you can skip the Firebase config and instead use:
// const app = initializeApp();

const firebaseConfig = {
  // ... your firebase config
};

const app = initializeApp(firebaseConfig);

// Initialize the AI Logic service (defaults to the Gemini Developer API)
// To set the AI provider, pass the backend as the second parameter
const ai = getAI(app, { backend: new GoogleAIBackend() });

const generationConfig = {
  candidateCount: 1,
  maxOutputTokens: 2048,
  stopSequences: [],
  temperature: 0.7, // Balanced: creative but focused
  topP: 0.95,       // Standard: allows a wide range of probable tokens
  topK: 40,         // Standard: considers the top 40 tokens
};

// Specify the config as part of creating the `GenerativeModel` instance
const model = getGenerativeModel(ai, { model: "gemini-2.5-flash-lite", generationConfig });
```
## Core Capabilities

### Text-Only Generation

```JavaScript
async function generateText(prompt) {
  const result = await model.generateContent(prompt);
  const response = await result.response;
  return response.text();
}
```
## Multimodal (Text + Images/Audio/Video/PDF input)

Firebase AI Logic accepts Base64-encoded data or specific file references.

```JavaScript
// Helper to convert a File into a generic base64 inline-data part
async function fileToGenerativePart(file) {
  const base64EncodedDataPromise = new Promise((resolve) => {
    const reader = new FileReader();
    reader.onloadend = () => resolve(reader.result.split(',')[1]);
    reader.readAsDataURL(file);
  });

  return {
    inlineData: {
      data: await base64EncodedDataPromise,
      mimeType: file.type,
    },
  };
}

async function analyzeImage(prompt, imageFile) {
  const imagePart = await fileToGenerativePart(imageFile);
  const result = await model.generateContent([prompt, imagePart]);
  return result.response.text();
}
```
## Chat Session (Multi-turn)

Maintain history automatically using `startChat`.

```JavaScript
const chat = model.startChat({
  history: [
    {
      role: "user",
      parts: [{ text: "Hello, I am a developer." }],
    },
    {
      role: "model",
      parts: [{ text: "Great to meet you. How can I help with code?" }],
    },
  ],
});

async function sendMessage(msg) {
  const result = await chat.sendMessage(msg);
  return result.response.text();
}
```
## Streaming Responses

For real-time UI updates (like a typing effect).

```JavaScript
async function streamResponse(prompt) {
  const result = await model.generateContentStream(prompt);
  for await (const chunk of result.stream) {
    const chunkText = chunk.text();
    console.log("Stream chunk:", chunkText);
    // Update UI here
  }
}
```
## Generate Images with Nano Banana

```JavaScript
import { initializeApp } from "firebase/app";
import { getAI, getGenerativeModel, GoogleAIBackend, ResponseModality } from "firebase/ai";

// Initialize FirebaseApp
const firebaseApp = initializeApp(firebaseConfig);

// Initialize the Gemini Developer API backend service
const ai = getAI(firebaseApp, { backend: new GoogleAIBackend() });

// Create a `GenerativeModel` instance with a model that supports your use case
const model = getGenerativeModel(ai, {
  model: "gemini-2.5-flash-image",
  // Configure the model to respond with text and images (required)
  generationConfig: {
    responseModalities: [ResponseModality.TEXT, ResponseModality.IMAGE],
  },
});

// Provide a text prompt instructing the model to generate an image
const prompt = 'Generate an image of the Eiffel Tower with fireworks in the background.';

// To generate an image, call `generateContent` with the text input
const result = await model.generateContent(prompt);

// Handle the generated image
try {
  const inlineDataParts = result.response.inlineDataParts();
  if (inlineDataParts?.[0]) {
    const image = inlineDataParts[0].inlineData;
    console.log(image.mimeType, image.data);
  }
} catch (err) {
  console.error('Prompt or candidate was blocked:', err);
}
```
## Advanced Features

### Structured Output (JSON)

Enforce a specific JSON schema for the response.

```JavaScript
import { getGenerativeModel, Schema } from "firebase/ai";

const jsonModel = getGenerativeModel(ai, {
  model: "gemini-2.5-flash-lite",
  generationConfig: {
    responseMimeType: "application/json",
    // Optional: Define a schema
    responseSchema: Schema.object({ ... }),
  },
});

async function getJsonData(prompt) {
  const result = await jsonModel.generateContent(prompt);
  return JSON.parse(result.response.text());
}
```

### On-Device AI (Hybrid)

Automatically switch between local Gemini Nano and cloud models based on device capability.

```JavaScript
import { getGenerativeModel, InferenceMode } from "firebase/ai";

const hybridModel = getGenerativeModel(ai, { mode: InferenceMode.PREFER_ON_DEVICE });
```
---
name: firebase-app-hosting-basics
description: Deploy and manage web apps with Firebase App Hosting. Use this skill when deploying Next.js/Angular apps with backends.
---

# App Hosting Basics

## Description

This skill enables the agent to deploy and manage modern, full-stack web applications (Next.js, Angular, etc.) using Firebase App Hosting.

**Important**: In order to use App Hosting, your Firebase project must be on the Blaze pricing plan. Direct the user to https://console.firebase.google.com/project/_/overview?purchaseBillingPlan=metered to upgrade their plan.

## Hosting vs App Hosting

**Choose Firebase Hosting if:**
- You are deploying a static site (HTML/CSS/JS).
- You are deploying a simple SPA (React, Vue, etc. without SSR).
- You want full control over the build and deploy process via the CLI.

**Choose Firebase App Hosting if:**
- You are using a supported full-stack framework like Next.js or Angular.
- You need Server-Side Rendering (SSR) or ISR.
- You want an automated "git push to deploy" workflow with zero configuration.

## Deploying to App Hosting

### Deploy from Source

This is the recommended flow for most users.

1. Configure `firebase.json` with an `apphosting` block.

```json
{
  "apphosting": {
    "backendId": "my-app-id",
    "rootDir": "/",
    "ignore": [
      "node_modules",
      ".git",
      "firebase-debug.log",
      "firebase-debug.*.log",
      "functions"
    ]
  }
}
```

2. Create or edit `apphosting.yaml`; see [Configuration](references/configuration.md) for more information on how to do so.
3. If the app needs secure access to sensitive keys, use the `npx -y firebase-tools@latest apphosting:secrets` commands to set and grant access to secrets.
4. Run `npx -y firebase-tools@latest deploy` when you are ready to deploy.

### Automated deployment via GitHub (CI/CD)

Alternatively, set up a backend connected to a GitHub repository for automated "git push to deploy" deployments.
This is only recommended for more advanced users, and is not required to use App Hosting.
See [CLI Commands](references/cli_commands.md) for more information on how to set this up using CLI commands.

## Emulation

See [Emulation](references/emulation.md) for more information on how to test your app locally using the Firebase Local Emulator Suite.
# App Hosting CLI Commands

The Firebase CLI provides a comprehensive suite of commands to manage App Hosting resources. These commands are often faster and more scriptable than using the Firebase Console.

## Initialization

### `npx -y firebase-tools@latest init apphosting`

- **Purpose**: Interactive command that sets up App Hosting in your local project.
  Use this command only if you are able to handle interactive CLI inputs well.
  Alternatively, you can manually edit `firebase.json` and `apphosting.yaml`.

- **Effect**:
  - Detects your web framework.
  - Creates/updates `apphosting.yaml`.
  - Can optionally create a backend if one doesn't exist.

## Backend Management

### `npx -y firebase-tools@latest apphosting:backends:list`

- **Purpose**: Lists all backends in the current project.

### `npx -y firebase-tools@latest apphosting:backends:get <backend-id>`

- **Purpose**: Shows details for a specific backend.

### `npx -y firebase-tools@latest apphosting:backends:delete <backend-id>`

- **Purpose**: Deletes a backend and its associated resources.

### `npx -y firebase-tools@latest apphosting:rollouts:list <backend-id>`

- **Purpose**: Lists the history of rollouts for a backend.

## Secrets Management

App Hosting uses Cloud Secret Manager to securely handle sensitive environment variables (like API keys).

### `npx -y firebase-tools@latest apphosting:secrets:set <secret-name>`

- **Purpose**: Creates or updates a secret in Cloud Secret Manager and makes it available to App Hosting.
- **Behavior**: Prompts for the secret value (hidden input).

### `npx -y firebase-tools@latest apphosting:secrets:grantaccess <secret-name>`

- **Purpose**: Grants the App Hosting service account permission to access the secret.
- **Note**: Often handled automatically by `secrets:set`, but useful for debugging permission issues or granting access to existing secrets.

## Automated deployment via GitHub (CI/CD)

**IMPORTANT**: Only use these commands if you are setting up automated deployments via GitHub. If you are managing deployments using `npx -y firebase-tools@latest deploy`, DO NOT use these commands.

### `npx -y firebase-tools@latest apphosting:rollouts:create <backend-id>`

- **Purpose**: Manually triggers a new rollout (deployment).
- **Options**:
  - `--git-branch <branch>`: Deploy the latest commit from a specific branch.
  - `--git-commit <commit-hash>`: Deploy a specific commit.
- **Use Case**: Useful for redeploying without code changes, or rolling back to a specific commit.

### `npx -y firebase-tools@latest apphosting:backends:create`

- **Purpose**: Creates a new App Hosting backend. Use this when setting up automated deployments via GitHub.
- **Options**:
  - `--app <webAppId>`: The ID of an existing Firebase web app to associate with the backend.
  - `--backend <backendId>`: The ID of the new backend.
  - `--primary-region <location>`: The primary region for the backend.
  - `--root-dir <rootDir>`: The root directory for the backend. If omitted, defaults to the root directory of the project.
  - `--service-account <service-account>`: The service account used to run the server. If omitted, defaults to the default service account.
@@ -0,0 +1,51 @@
# App Hosting Configuration (`apphosting.yaml`)

The `apphosting.yaml` file is the source of truth for your backend's configuration. It must be located in the root of your app's directory (or the specified root directory if using a monorepo).

## File Structure

```yaml
# apphosting.yaml

# Cloud Run service configuration
runConfig:
  cpu: 1
  memoryMiB: 512
  minInstances: 0
  maxInstances: 100
  concurrency: 80

# Environment variables
env:
  - variable: STORAGE_BUCKET
    value: mybucket.app
    availability:
      - BUILD
      - RUNTIME
  - variable: API_KEY
    secret: myApiKeySecret
```

## `runConfig`

Controls the resources allocated to the Cloud Run service that serves your app.

- `cpu`: Number of vCPUs. Note: if `cpu < 1`, `concurrency` MUST be set to `1`.
- `memoryMiB`: RAM in MiB (128 to 32768).
- `minInstances`: Minimum containers to keep warm (default 0). Set to `>= 1` to avoid cold starts.
- `maxInstances`: Maximum scaling limit (default 100).
- `concurrency`: Max concurrent requests per instance (default 80).

### Resource Constraints

- **CPU vs. Memory**: Higher memory often requires more CPU.
  - More than 4 GiB RAM requires `>= 2` vCPUs.
  - More than 8 GiB RAM requires `>= 4` vCPUs.
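
These constraints can be sketched as a small local check. Note that `validateRunConfig` is a hypothetical helper for illustration, not part of any Firebase tooling, and the thresholds simply restate the rules above:

```javascript
// Illustrative only: check a runConfig object against the constraints above.
function validateRunConfig({ cpu, memoryMiB, concurrency }) {
  const errors = [];
  if (cpu < 1 && concurrency !== 1) {
    errors.push("cpu < 1 requires concurrency = 1");
  }
  if (memoryMiB > 4096 && cpu < 2) {
    errors.push("more than 4 GiB RAM requires >= 2 vCPUs");
  }
  if (memoryMiB > 8192 && cpu < 4) {
    errors.push("more than 8 GiB RAM requires >= 4 vCPUs");
  }
  return errors;
}

// A config asking for 8 GiB on a single vCPU violates the first memory rule.
console.log(validateRunConfig({ cpu: 1, memoryMiB: 8192, concurrency: 80 }));
```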

## `env` (Environment Variables)

Defines environment variables available during build and/or runtime.

- `variable`: The name of the env var (e.g., `NEXT_PUBLIC_API_URL`).
- `value`: A literal string value.
- `secret`: The name of a secret in Cloud Secret Manager. Use `npx -y firebase-tools@latest apphosting:secrets:set` to create these.
- `availability`: Where the variable is needed.
  - `BUILD`: Available during the `npm run build` process.
  - `RUNTIME`: Available when the app is serving requests.
  - Defaults to both if not specified.
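
At runtime these variables surface as ordinary process environment variables. A minimal sketch of reading one from app code (`STORAGE_BUCKET` matches the example above; the fallback value is an arbitrary assumption for local runs):

```javascript
// Read an env var injected by App Hosting at RUNTIME, with a local fallback.
// "local-dev-bucket" is an illustrative default, not a real resource name.
const bucket = process.env.STORAGE_BUCKET ?? "local-dev-bucket";
console.log(`Using storage bucket: ${bucket}`);
```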
@@ -0,0 +1,47 @@
# App Hosting Emulation

You can test your App Hosting setup locally using the Firebase Local Emulator Suite. This allows you to verify your app's behavior with environment variables and secrets before deploying.

## Configuration: `apphosting.emulator.yaml`

This optional file overrides `apphosting.yaml` settings specifically for the local emulator. Use it to provide local secret values or to override resource configs. If it contains sensitive values such as API keys, do not commit it to source control.

```yaml
# apphosting.emulator.yaml (usually gitignored)
runConfig:
  cpu: 1
  memoryMiB: 512

env:
  - variable: API_KEY
    value: "local-dev-api-key" # Override secret with local value
```

## Running the Emulator

To start the App Hosting emulator:

```bash
npx -y firebase-tools@latest emulators:start --only apphosting
```

Or, if you are also using other emulators (Auth, Firestore, etc.):

```bash
npx -y firebase-tools@latest emulators:start
```

## Capabilities

- **Builds your app**: Runs the build command defined in your `package.json` to generate the serving artifact.
- **Serves locally**: Runs the app on `localhost:5004` (default).

  Configurable by setting `host` and `port` in the `emulators` block of `firebase.json`, like so:

  ```json
  {
    "emulators": {
      "apphosting": {
        "host": "localhost",
        "port": 5004
      }
    }
  }
  ```

- **Env Var Injection**: Injects variables defined in `apphosting.yaml` and `apphosting.emulator.yaml` into the process.
@@ -0,0 +1,86 @@
---
name: firebase-auth-basics
description: Guide for setting up and using Firebase Authentication. Use this skill when the user's app requires user sign-in, user management, or secure data access using auth rules.
compatibility: This skill is best used with the Firebase CLI, but does not require it. The Firebase CLI can be accessed through `npx -y firebase-tools@latest`.
---

## Prerequisites

- **Firebase Project**: Created via `npx -y firebase-tools@latest projects:create` (see `firebase-basics`).
- **Firebase CLI**: Installed and logged in (see `firebase-basics`).

## Core Concepts

Firebase Authentication provides backend services, easy-to-use SDKs, and ready-made UI libraries to authenticate users to your app.

### Users

A user is an entity that can sign in to your app. Each user is identified by a unique ID (`uid`), which is guaranteed to be unique across all providers.

User properties include:
- `uid`: Unique identifier.
- `email`: User's email address (if available).
- `displayName`: User's display name (if available).
- `photoURL`: URL to the user's photo (if available).
- `emailVerified`: Boolean indicating whether the email is verified.

### Identity Providers

Firebase Auth supports multiple ways to sign in:
- **Email/Password**: Basic email and password authentication.
- **Federated Identity Providers**: Google, Facebook, Twitter, GitHub, Microsoft, Apple, etc.
- **Phone Number**: SMS-based authentication.
- **Anonymous**: Temporary guest accounts that can be linked to permanent accounts later.
- **Custom Auth**: Integrate with your existing auth system.

Google Sign-In is recommended as a secure default provider.

### Tokens

When a user signs in, they receive an ID token (JWT). This token identifies the user when making requests to Firebase services (Realtime Database, Cloud Storage, Firestore) or your own backend.

- **ID Token**: Short-lived (1 hour); verifies identity.
- **Refresh Token**: Long-lived; used to get new ID tokens.
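
Since the ID token is a standard JWT, its claims can be inspected by base64url-decoding the middle segment — though signature verification must always happen server-side (e.g., with the Admin SDK). The token below is fabricated purely to show the structure:

```javascript
// Build a fake, unsigned JWT just to illustrate the shape: header.payload.signature.
const b64url = (obj) => Buffer.from(JSON.stringify(obj)).toString("base64url");
const payload = { user_id: "abc123", exp: Math.floor(Date.now() / 1000) + 3600 };
const fakeIdToken = `${b64url({ alg: "none", typ: "JWT" })}.${b64url(payload)}.`;

// Decoding the middle segment recovers the claims (no signature check here!).
const claims = JSON.parse(
  Buffer.from(fakeIdToken.split(".")[1], "base64url").toString("utf8")
);
console.log(claims.user_id); // abc123
```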

## Workflow

### 1. Provisioning

#### Option 1: Enabling Authentication via CLI

Only Google Sign-In, anonymous auth, and email/password auth can be enabled via the CLI. For other providers, use the Firebase Console.

Configure Firebase Authentication in `firebase.json` by adding an `auth` block:

```json
{
  "auth": {
    "providers": {
      "anonymous": true,
      "emailPassword": true,
      "googleSignIn": {
        "oAuthBrandDisplayName": "Your Brand Name",
        "supportEmail": "support@example.com",
        "authorizedRedirectUris": ["https://example.com"]
      }
    }
  }
}
```

#### Option 2: Enabling Authentication in the Console

Enable other providers in the Firebase Console.

1. Go to https://console.firebase.google.com/project/_/authentication/providers
2. Select your project.
3. Enable the desired sign-in providers (e.g., Email/Password, Google).

### 2. Client Setup & Usage

**Web**
See [references/client_sdk_web.md](references/client_sdk_web.md).

### 3. Security Rules

Secure your data using `request.auth` in Firestore/Storage rules.

See [references/security_rules.md](references/security_rules.md).
@@ -0,0 +1,287 @@
# Firebase Authentication Web SDK

## Initialization

First, ensure you have initialized the Firebase App (see the `firebase-basics` skill). Then, initialize the Auth service:

```javascript
import { getAuth } from "firebase/auth";
import { app } from "./firebase"; // Your initialized Firebase App

const auth = getAuth(app);
export { auth };
```

## Connect to Emulator

If you are running the Authentication emulator (usually on port 9099), connect to it immediately after initialization.

```javascript
import { getAuth, connectAuthEmulator } from "firebase/auth";

const auth = getAuth();
// Connect to the emulator if running locally
if (location.hostname === "localhost") {
  connectAuthEmulator(auth, "http://localhost:9099");
}
```

## Sign Up with Email/Password

```javascript
import { getAuth, createUserWithEmailAndPassword } from "firebase/auth";

const auth = getAuth();
createUserWithEmailAndPassword(auth, email, password)
  .then((userCredential) => {
    const user = userCredential.user;
    // ...
  })
  .catch((error) => {
    const errorCode = error.code;
    const errorMessage = error.message;
    // ...
  });
```

## Sign In with Google (Popup)

```javascript
import { getAuth, signInWithPopup, GoogleAuthProvider } from "firebase/auth";

const auth = getAuth();
const provider = new GoogleAuthProvider();

signInWithPopup(auth, provider)
  .then((result) => {
    // This gives you a Google Access Token. You can use it to access the Google API.
    const credential = GoogleAuthProvider.credentialFromResult(result);
    const token = credential.accessToken;
    // The signed-in user info.
    const user = result.user;
    // ...
  })
  .catch((error) => {
    // Handle errors here.
    const errorCode = error.code;
    const errorMessage = error.message;
    // ...
  });
```

## Sign In with Facebook (Popup)

```javascript
import { getAuth, signInWithPopup, FacebookAuthProvider } from "firebase/auth";

const auth = getAuth();
const provider = new FacebookAuthProvider();

signInWithPopup(auth, provider)
  .then((result) => {
    // The signed-in user info.
    const user = result.user;
    // This gives you a Facebook Access Token. You can use it to access the Facebook API.
    const credential = FacebookAuthProvider.credentialFromResult(result);
    const accessToken = credential.accessToken;
  })
  .catch((error) => {
    // Handle errors here.
  });
```

## Sign In with Apple (Popup)

```javascript
import { getAuth, signInWithPopup, OAuthProvider } from "firebase/auth";

const auth = getAuth();
const provider = new OAuthProvider('apple.com');

signInWithPopup(auth, provider)
  .then((result) => {
    const user = result.user;
    // Apple credential
    const credential = OAuthProvider.credentialFromResult(result);
    const accessToken = credential.accessToken;
  })
  .catch((error) => {
    // Handle errors here.
  });
```

## Sign In with Twitter (Popup)

```javascript
import { getAuth, signInWithPopup, TwitterAuthProvider } from "firebase/auth";

const auth = getAuth();
const provider = new TwitterAuthProvider();

signInWithPopup(auth, provider)
  .then((result) => {
    const user = result.user;
    // Twitter credential
    const credential = TwitterAuthProvider.credentialFromResult(result);
    const token = credential.accessToken;
    const secret = credential.secret;
  })
  .catch((error) => {
    // Handle errors here.
  });
```

## Sign In with GitHub (Popup)

```javascript
import { getAuth, signInWithPopup, GithubAuthProvider } from "firebase/auth";

const auth = getAuth();
const provider = new GithubAuthProvider();

signInWithPopup(auth, provider)
  .then((result) => {
    const user = result.user;
    const credential = GithubAuthProvider.credentialFromResult(result);
    const token = credential.accessToken;
  })
  .catch((error) => {
    // Handle errors here.
  });
```

## Sign In with Microsoft (Popup)

```javascript
import { getAuth, signInWithPopup, OAuthProvider } from "firebase/auth";

const auth = getAuth();
const provider = new OAuthProvider('microsoft.com');

signInWithPopup(auth, provider)
  .then((result) => {
    const user = result.user;
    const credential = OAuthProvider.credentialFromResult(result);
    const accessToken = credential.accessToken;
  })
  .catch((error) => {
    // Handle errors here.
  });
```

## Sign In with Yahoo (Popup)

```javascript
import { getAuth, signInWithPopup, OAuthProvider } from "firebase/auth";

const auth = getAuth();
const provider = new OAuthProvider('yahoo.com');

signInWithPopup(auth, provider)
  .then((result) => {
    const user = result.user;
    const credential = OAuthProvider.credentialFromResult(result);
    const accessToken = credential.accessToken;
  })
  .catch((error) => {
    // Handle errors here.
  });
```

## Sign In Anonymously

```javascript
import { getAuth, signInAnonymously } from "firebase/auth";

const auth = getAuth();
signInAnonymously(auth)
  .then(() => {
    // Signed in...
  })
  .catch((error) => {
    const errorCode = error.code;
    const errorMessage = error.message;
  });
```

## Email Link Authentication

**1. Send Auth Link**

```javascript
import { getAuth, sendSignInLinkToEmail } from "firebase/auth";

const auth = getAuth();
const actionCodeSettings = {
  // URL you want to redirect back to. The domain must be in the authorized domains list in the Firebase Console.
  url: 'https://www.example.com/finishSignUp?cartId=1234',
  handleCodeInApp: true,
};

sendSignInLinkToEmail(auth, email, actionCodeSettings)
  .then(() => {
    // Save the email locally so you don't need to ask the user for it again
    window.localStorage.setItem('emailForSignIn', email);
  })
  .catch((error) => {
    // Error
  });
```

**2. Complete Sign In (on landing page)**

```javascript
import { getAuth, isSignInWithEmailLink, signInWithEmailLink } from "firebase/auth";

const auth = getAuth();

if (isSignInWithEmailLink(auth, window.location.href)) {
  let email = window.localStorage.getItem('emailForSignIn');
  if (!email) {
    email = window.prompt('Please provide your email for confirmation');
  }

  signInWithEmailLink(auth, email, window.location.href)
    .then((result) => {
      window.localStorage.removeItem('emailForSignIn');
      // You can check result.user
    })
    .catch((error) => {
      // Error
    });
}
```

## Observe Auth State

This is the recommended way to get the current user. The listener fires whenever the user signs in or out.

```javascript
import { getAuth, onAuthStateChanged } from "firebase/auth";

const auth = getAuth();
onAuthStateChanged(auth, (user) => {
  if (user) {
    // User is signed in; see the docs for a list of available properties:
    // https://firebase.google.com/docs/reference/js/firebase.User
    const uid = user.uid;
    // ...
  } else {
    // User is signed out
    // ...
  }
});
```

## Sign Out

```javascript
import { getAuth, signOut } from "firebase/auth";

const auth = getAuth();
signOut(auth).then(() => {
  // Sign-out successful.
}).catch((error) => {
  // An error happened.
});
```
@@ -0,0 +1,38 @@
# Authentication in Security Rules

Firebase Security Rules work with Firebase Authentication to provide rule-based access control. For better advice on writing safe security rules, enable the `firebase-firestore-basics` or `firebase-storage-basics` skills.

The `request.auth` variable contains authentication information for the user requesting data.

## Basic Checks

### Check if the user is signed in

```
allow read, write: if request.auth != null;
```

### Check if the user owns the data

Allow access only if the document ID matches the user's UID.

```
allow read, write: if request.auth != null && request.auth.uid == userId;
```

(Where `userId` is a path variable, e.g., `match /users/{userId}`.)

### Check if the user owns the document (field-based)

Allow access only if the document has an `owner_uid` field matching the user's UID.

```
allow read, write: if request.auth != null && request.auth.uid == resource.data.owner_uid;
```

## Token Properties

`request.auth.token` contains standard JWT claims and custom claims.

- `request.auth.token.email`: The user's email address.
- `request.auth.token.email_verified`: Whether the email is verified.
- `request.auth.token.name`: The user's display name.

### Example: Email Verification Check

```
allow create: if request.auth.token.email_verified == true;
```
@@ -0,0 +1,52 @@
---
name: firebase-basics
description: The definitive, foundational skill for ANY Firebase task. Make sure to ALWAYS use this skill whenever the user mentions or interacts with Firebase, even if they do not explicitly ask for it. This skill covers everything from the bare minimum INITIAL setup (Node.js setup, Firebase CLI installation, first-time login) to ongoing operations (core principles, workflows, building, service setup, executing Firebase CLI commands, troubleshooting, refreshing, or updating an existing environment).
---
# Prerequisites

Please complete these setup steps before proceeding, and remember your progress to avoid repeating them in future interactions.

1. **Local Environment Setup:** Verify the environment is properly set up so we can use Firebase tools:
   - Run `npx -y firebase-tools@latest --version` to check if the Firebase CLI is installed.
   - Verify whether the Firebase MCP server is installed using your existing tools.
   - If either of these checks fails, review [references/local-env-setup.md](references/local-env-setup.md) to get the environment ready.

2. **Authentication:**
   Ensure you are logged in to Firebase so that commands have the correct permissions. Run `npx -y firebase-tools@latest login`. For environments without a browser (e.g., remote shells), use `npx -y firebase-tools@latest login --no-localhost`.
   - The command should output the current user.
   - If you are not logged in, follow the interactive instructions from this command to authenticate.

3. **Active Project:**
   Most Firebase tasks require an active project context. Check the current project by running `npx -y firebase-tools@latest use`.
   - If the command outputs `Active Project: <project-id>`, you can proceed with your task.
   - If not, ask the user if they have an existing Firebase Project ID.
     - If yes: Set it as the active project and add a default alias by running:
       ```bash
       npx -y firebase-tools@latest use --add <PROJECT_ID>
       ```
     - If no: Follow the project creation instructions in [references/firebase-project-create.md](references/firebase-project-create.md).

# Firebase Usage Principles

Please adhere to these principles when working with Firebase, as they ensure reliability and up-to-date project tools:

1. **Use npx for CLI commands:** To ensure you always use the latest version of the Firebase CLI, always prepend commands with `npx -y firebase-tools@latest` instead of just `firebase` (e.g., use `npx -y firebase-tools@latest --version` instead of `firebase --version`).
2. **Prioritize official knowledge:** For any Firebase-related knowledge, consult the `developerknowledge_search_documents` MCP tool before falling back to Google Search or your internal knowledge base. Including "Firebase" in your search query significantly improves relevance.
3. **Follow Agent Skills for implementation guidance:** Skills provide opinionated workflows (CUJs), security rules, and best practices. Always consult them to understand *how* to implement Firebase features correctly instead of relying on general knowledge.
4. **Use Firebase MCP Server tools instead of direct API calls:** Whenever you need to interact with remote Firebase APIs (such as fetching Crashlytics logs or executing Data Connect queries), use the tools provided by the Firebase MCP Server instead of attempting manual API calls.
5. **Keep Plugin / Agent Skills updated:** Since Firebase best practices evolve quickly, remind the user to regularly check for and install updates to their Firebase plugin or Agent Skills, ensuring you both operate with the latest workflows. Similarly, if you encounter issues with outdated tools or commands, follow the steps below based on your agent environment:
   - **Antigravity**: Follow [references/refresh-antigravity.md](references/refresh-antigravity.md)
   - **Gemini CLI**: Follow [references/refresh-gemini-cli.md](references/refresh-gemini-cli.md)
   - **Claude Code**: Follow [references/refresh-claude.md](references/refresh-claude.md)
   - **Cursor**: Follow [references/refresh-cursor.md](references/refresh-cursor.md)
   - **Others**: Follow [references/refresh-other-agents.md](references/refresh-other-agents.md)

# References

- **Initialize Firebase:** See [references/firebase-service-init.md](references/firebase-service-init.md) when you need to initialize new Firebase services using the CLI.
- **Exploring Commands:** See [references/firebase-cli-guide.md](references/firebase-cli-guide.md) to discover and understand CLI functionality.
- **SDK Setup:** For detailed guides on adding Firebase to a web app, see [references/web_setup.md](references/web_setup.md).

# Common Issues

- **Login Issues:** If the browser fails to open during the login step, use `npx -y firebase-tools@latest login --no-localhost` instead.
@@ -0,0 +1,16 @@
# Exploring Commands

The Firebase CLI documents itself. Use help commands to discover functionality.

- **Global Help**: List all available commands and categories.
  ```bash
  npx -y firebase-tools@latest --help
  ```

- **Command Help**: Get detailed usage for a specific command.
  ```bash
  npx -y firebase-tools@latest [command] --help
  # Examples:
  npx -y firebase-tools@latest deploy --help
  npx -y firebase-tools@latest firestore:indexes --help
  ```
@@ -0,0 +1,11 @@
# Creating a Project

To create a new Firebase project from the CLI:

```bash
npx -y firebase-tools@latest projects:create
```

You will be prompted to:
1. Enter a **Project ID** (6-30 characters; lowercase letters, digits, and hyphens; must be globally unique).
2. Enter a **display name**.
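
As a quick sanity check before running the command, the ID format can be approximated locally. This regex is illustrative only and additionally assumes the ID must start with a lowercase letter; global uniqueness can only be verified by the CLI itself:

```javascript
// Approximate check: 6-30 chars, lowercase letters/digits/hyphens,
// starting with a letter (assumption). Not an official validator.
const isPlausibleProjectId = (id) => /^[a-z][a-z0-9-]{5,29}$/.test(id);

console.log(isPlausibleProjectId("my-cool-project")); // true
console.log(isPlausibleProjectId("Bad_ID"));          // false
```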
@@ -0,0 +1,18 @@
# Initialization

Before initializing, check if you are already in a Firebase project directory by looking for `firebase.json`.

1. **Project Directory:**
   Navigate to the root directory of the codebase.
   *(Only if starting a completely new project from scratch without an existing codebase, create a directory first: `mkdir my-project && cd my-project`)*

2. **Initialize Services:**
   Run the initialization command:
   ```bash
   npx -y firebase-tools@latest init
   ```

The CLI will guide you through:
- Selecting features (Firestore, Functions, Hosting, etc.).
- Associating with an existing project or creating a new one.
- Configuring files (e.g., `firebase.json`, `.firebaserc`).
@@ -0,0 +1,65 @@
# Firebase Local Environment Setup

This skill documents the bare minimum setup required for a full Firebase experience for the agent. Before using any Firebase features, you MUST verify that each of the following steps has been completed.

## 1. Verify Node.js

- **Action**: Run `node --version`.
- **Handling**: Ensure Node.js is installed and the version is `>= 20`. If Node.js is missing or older than v20, install it based on the operating system:

  **Recommended: Use a Node Version Manager**
  This avoids permission issues when installing global packages.

  **For macOS or Linux:**
  1. Guide the user to the [official nvm repository](https://github.com/nvm-sh/nvm#installing-and-updating).
  2. Ask the user to manually install `nvm` and reply when finished. **Stop and wait** for the user's confirmation.
  3. Make `nvm` available in the current terminal session by sourcing the appropriate profile:
     ```bash
     # For Bash
     source ~/.bash_profile
     source ~/.bashrc

     # For Zsh
     source ~/.zprofile
     source ~/.zshrc
     ```
  4. Install Node.js:
     ```bash
     nvm install 24
     nvm use 24
     ```

  **For Windows:**
  1. Guide the user to download and install [nvm-windows](https://github.com/coreybutler/nvm-windows/releases).
  2. Ask the user to manually install `nvm-windows` and Node.js, and reply when finished. **Stop and wait** for the user's confirmation.
  3. After the user confirms, verify Node.js is available:
     ```bash
     node --version
     ```

  **Alternative: Official Installer**
  1. Guide the user to download and install the LTS version from [nodejs.org](https://nodejs.org/en/download).
  2. Ask the user to manually install Node.js and reply when finished. **Stop and wait** for the user's confirmation.

## 2. Verify the Firebase CLI

The Firebase CLI is the primary tool for interacting with Firebase services.
- **Action**: Run `npx -y firebase-tools@latest --version`.
- **Handling**: Ensure this command runs successfully and outputs a version number.

## 3. Verify Firebase Authentication

You must be authenticated to manage Firebase projects.
- **Action**: Run `npx -y firebase-tools@latest login`.
- **Handling**: If the environment is remote or restricted (no browser access), run `npx -y firebase-tools@latest login --no-localhost` instead.

## 4. Install Agent Skills and the MCP Server

To fully manage Firebase, the agent needs specific skills and the Firebase MCP server installed. Identify the agent environment you are currently running in and follow the corresponding setup document strictly.

**Read the setup document for your current agent:**
- **Gemini CLI**: Review [setup-gemini_cli.md](setup-gemini_cli.md)
- **Antigravity**: Review [setup-antigravity.md](setup-antigravity.md)
- **Claude Code**: Review [setup-claude_code.md](setup-claude_code.md)
- **Cursor**: Review [setup-cursor.md](setup-cursor.md)
- **GitHub Copilot**: Review [setup-github_copilot.md](setup-github_copilot.md)
- **Other Agents** (Windsurf, Cline, etc.): Review [setup-other_agents.md](setup-other_agents.md)

---

**CRITICAL AGENT RULE:** Do NOT proceed with any other Firebase tasks until EVERY step above has been successfully verified and completed.
@@ -0,0 +1,46 @@
# Refresh Antigravity Local Environment

Follow these steps to refresh Antigravity's local environment, ensuring that its agent skills and plugins are fully up to date.

Antigravity relies on `npx skills`. Updates should prioritize the global level; if skills are installed at the project level, omit the `--global` flag.

1. **List Available Skills:** Identify all Firebase skills available in the repository:

```bash
npx -y skills add firebase/agent-skills --list
```

2. **Check Currently Installed Skills:** Determine which skills are already installed and linked at both project and global levels:

```bash
# Check project-level skills
npx -y skills list --agent antigravity

# Check global-level skills
npx -y skills list --agent antigravity --global
```

*Note: If a skill shows `Agents: not linked` in the output, it is installed but not currently available to Antigravity.*

3. **Add Missing or Unlinked Skills:** Compare the results from Step 1 and Step 2. For each missing or unlinked skill, follow the appropriate conditional instruction below:

- **IF any Firebase skills were found at the PROJECT level in Step 2:** Add the missing skill to the project level:

```bash
npx -y skills add firebase/agent-skills --agent antigravity --skill "<SKILL_NAME>" --yes
```

- **IF any Firebase skills were found at the GLOBAL level in Step 2:** Add the missing skill to the global level:

```bash
npx -y skills add firebase/agent-skills --agent antigravity --skill "<SKILL_NAME>" --global --yes
```

- **IF NO Firebase skills were found in Step 2:** Add each missing skill to the global level:

```bash
npx -y skills add firebase/agent-skills --agent antigravity --skill "<SKILL_NAME>" --global --yes
```
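The comparison in step 3 is just a set difference between the available and installed lists. A minimal sketch with hypothetical skill names — the real lists come from the commands in steps 1 and 2:

```shell
# Hypothetical skill lists, one name per line, sorted for comm(1).
printf 'firebase-basics\nfirebase-deploy\n' | sort > available.txt
printf 'firebase-basics\n' | sort > installed.txt

# Lines unique to available.txt = skills still missing at this level.
comm -23 available.txt installed.txt  # -> firebase-deploy
```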

4. **Update Existing Skills:** Update all currently installed skills to their latest versions:

```bash
# Update project-level skills
npx -y skills update --agent antigravity --yes

# Update global-level skills
npx -y skills update --agent antigravity --global --yes
```
@@ -0,0 +1,10 @@
# Refresh Claude Code Local Environment

Follow these steps to refresh Claude Code's local environment, ensuring that its agent skills and plugins are fully up to date.

Use Claude Code's native plugin manager instead of `npx`.

1. **Update the Plugin:** Run the specific CLI command to update the Firebase plugin:

```bash
claude plugin update firebase@firebase
```
@@ -0,0 +1,11 @@
# Refresh Gemini CLI Local Environment

Follow these steps to refresh Gemini CLI's local environment, ensuring that its agent skills and plugins are fully up to date.

Use the native Gemini CLI extension manager instead of `npx`.

1. **Update the Extension:** Run the specific CLI command to update:

```bash
gemini extensions update firebase
```

*Note: If the extension is named differently, replace `firebase` with the correct name from `gemini extensions list`.*
@@ -0,0 +1,48 @@
# Refresh Other Local Environment

Follow these steps to refresh the local environment of other agents, ensuring that their agent skills and plugins are fully up to date.

Other agents rely on `npx skills`. Updates should prioritize the global level; if skills are installed at the project level, omit the `--global` flag.

Replace `<AGENT_NAME>` with the actual agent name, which can be found in the [skills repository README](https://github.com/vercel-labs/skills/blob/main/README.md).

1. **List Available Skills:** Identify all Firebase skills available in the repository:

```bash
npx -y skills add firebase/agent-skills --list
```

2. **Check Currently Installed Skills:** Determine which skills are already installed and linked for the agent at both project and global levels:

```bash
# Check project-level skills
npx -y skills list --agent <AGENT_NAME>

# Check global-level skills
npx -y skills list --agent <AGENT_NAME> --global
```

*Note: If a skill shows `Agents: not linked` in the output, it is installed but not currently available to the agent.*

3. **Add Missing or Unlinked Skills:** Compare the results from Step 1 and Step 2. For each missing or unlinked skill, follow the appropriate conditional instruction below:

- **IF any Firebase skills were found at the PROJECT level in Step 2:** Add the missing skill to the project level:

```bash
npx -y skills add firebase/agent-skills --agent <AGENT_NAME> --skill "<SKILL_NAME>" --yes
```

- **IF any Firebase skills were found at the GLOBAL level in Step 2:** Add the missing skill to the global level:

```bash
npx -y skills add firebase/agent-skills --agent <AGENT_NAME> --skill "<SKILL_NAME>" --global --yes
```

- **IF NO Firebase skills were found in Step 2:** Add each missing skill to the global level:

```bash
npx -y skills add firebase/agent-skills --agent <AGENT_NAME> --skill "<SKILL_NAME>" --global --yes
```

4. **Update Existing Skills:** Update all currently installed skills to their latest versions:

```bash
# Update project-level skills
npx -y skills update --agent <AGENT_NAME> --yes

# Update global-level skills
npx -y skills update --agent <AGENT_NAME> --global --yes
```
@@ -0,0 +1,63 @@
# Antigravity Setup

To get the most out of Firebase in Antigravity, follow these steps to install the agent skills and the MCP server.

### 1. Install and Verify Firebase Skills

Check if the skills are already installed before proceeding:

1. **Check Local skills**: Run `ls -d .agent/skills/firebase-basics` or `ls -d .agents/skills/firebase-basics`. If the directory exists, the skills are already installed locally.
2. **Check Global skills**: If not found locally, check the global installation by running:

```bash
npx skills list --global --agent antigravity
```

If the output includes `firebase-basics`, the skills are already installed globally.

3. **Install Skills**: If both checks fail, run the following command to install the Firebase agent skills:

```bash
npx skills add firebase/agent-skills --agent antigravity --skill "*"
```

*Note: Omit `--yes` and `--global` to choose the installation location manually. If prompted interactively in the terminal, ensure you send the appropriate user choices via standard input to complete the installation.*

4. **Verify Installation**: Re-run the checks in steps 1 or 2 to confirm that `firebase-basics` is now available.

### 2. Configure and Verify Firebase MCP Server

The MCP server allows Antigravity to interact directly with Firebase projects.

1. **Locate `mcp_config.json`**: Find the configuration file for your operating system:

- macOS / Linux: `~/.gemini/antigravity/mcp_config.json`
- Windows: `%USERPROFILE%\\.gemini\\antigravity\\mcp_config.json`

*Note: If the `.gemini/antigravity/` directory or `mcp_config.json` file does not exist, create them and initialize the file with `{ "mcpServers": {} }` before proceeding.*

2. **Check Existing Configuration**: Open `mcp_config.json` and check the `mcpServers` section for a `firebase` entry.

- It is already configured if the `command` is `"firebase"` OR if the `command` is `"npx"` with `"firebase-tools"` and `"mcp"` in the `args`.
- **Important**: If a valid `firebase` entry is found, the MCP server is already configured. **Skip step 3** and proceed directly to step 4.

**Example valid configurations**:

```json
"firebase": {
  "command": "npx",
  "args": ["-y", "firebase-tools@latest", "mcp"]
}
```

OR

```json
"firebase": {
  "command": "firebase",
  "args": ["mcp"]
}
```

3. **Add or Update Configuration**: If the `firebase` block is missing or incorrect, add it to the `mcpServers` object:

```json
"firebase": {
  "command": "npx",
  "args": [
    "-y",
    "firebase-tools@latest",
    "mcp"
  ]
}
```
*CRITICAL: Merge this configuration into the existing `mcp_config.json` file. You MUST preserve any other existing servers inside the `mcpServers` object.*
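For reference, a complete `mcp_config.json` after the merge might look like the following. The `some-other-server` entry is hypothetical, shown only to illustrate that pre-existing servers must survive the edit:

```json
{
  "mcpServers": {
    "some-other-server": {
      "command": "npx",
      "args": ["-y", "some-other-mcp"]
    },
    "firebase": {
      "command": "npx",
      "args": ["-y", "firebase-tools@latest", "mcp"]
    }
  }
}
```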

4. **Verify Configuration**: Save the file and confirm the `firebase` block is present and properly formatted JSON.

### 3. Restart and Verify Connection

1. **Restart Antigravity**: Instruct the user to restart the Antigravity application. **Stop and wait** for their confirmation before proceeding.
2. **Confirm Connection**: Check the MCP server list in the Antigravity UI to confirm that the Firebase MCP server is connected.
@@ -0,0 +1,30 @@
# Claude Code Setup

To get the most out of Firebase in Claude Code, follow these steps to install the agent skills and the MCP server.

## Recommended Method: Using Plugins

The recommended method is using the plugin marketplace to install both the agent skills and the MCP functionality.

### 1. Install and Verify Plugins

Check if the plugins are already installed before proceeding:

1. **Check Existing Skills**: Run `npx skills list --agent claude-code` to check for local skills. Run `npx skills list --global --agent claude-code` to check for global skills. Note whether the output includes `firebase-basics`.
2. **Check Existing MCP Configuration**: Run `claude mcp list -s user` and `claude mcp list -s project`. Note whether the output of either command includes `firebase`.
3. **Determine Installation Path**:

- If **both** skills and MCP configuration are found, the plugin is fully installed. **Stop here and skip all remaining setup steps in this document.**
- If **neither** is found, proceed to step 4.
- If **only one** is found (e.g., skills are installed but MCP is missing, or vice versa), **stop and prompt the user**. Explain the mixed state and ask if they want to proceed with installing the Firebase plugin before continuing to step 4.

4. **Add Marketplace**: Run the following command to add the marketplace (this uses the default User scope):

```bash
claude plugin marketplace add firebase/agent-skills
```

5. **Install Plugins**: Run the following command to install the plugin:

```bash
claude plugin install firebase@firebase
```

6. **Verify Installation**: Re-run the checks in steps 1 and 2 to confirm the skills and the MCP server are now available.

### 2. Restart and Verify Connection

1. **Restart Claude Code**: Instruct the user to restart Claude Code. **Stop and wait** for their confirmation before proceeding.
@@ -0,0 +1,63 @@
# Cursor Setup

To get the most out of Firebase in Cursor, follow these steps to install the agent skills and the MCP server.

### 1. Install and Verify Firebase Skills

Check if the skills are already installed before proceeding:

1. **Check Local skills**: Run `npx skills list --agent cursor`. If the output includes `firebase-basics`, the skills are already installed locally.
2. **Check Global skills**: If not found locally, check the global installation by running:

```bash
npx skills list --global --agent cursor
```

If the output includes `firebase-basics`, the skills are already installed globally.

3. **Install Skills**: If both checks fail, run the following command to install the Firebase agent skills:

```bash
npx skills add firebase/agent-skills --agent cursor --skill "*"
```

*Note: Omit `--yes` and `--global` to choose the installation location manually. If prompted interactively in the terminal, ensure you send the appropriate user choices via standard input to complete the installation.*

4. **Verify Installation**: Re-run the checks in steps 1 or 2 to confirm that `firebase-basics` is now available.

### 2. Configure and Verify Firebase MCP Server

The MCP server allows Cursor to interact directly with Firebase projects.

1. **Locate `mcp.json`**: Find the configuration file for your environment:

- Global: `~/.cursor/mcp.json`
- Project: `.cursor/mcp.json`

*Note: If the directory or `mcp.json` file does not exist, create them and initialize the file with `{ "mcpServers": {} }` before proceeding.*

2. **Check Existing Configuration**: Open `mcp.json` and check the `mcpServers` section for a `firebase` entry.

- It is already configured if the `command` is `"firebase"` OR if the `command` is `"npx"` with `"firebase-tools"` and `"mcp"` in the `args`.
- **Important**: If a valid `firebase` entry is found, the MCP server is already configured. **Skip step 3** and proceed directly to step 4.

**Example valid configurations**:

```json
"firebase": {
  "command": "npx",
  "args": ["-y", "firebase-tools@latest", "mcp"]
}
```

OR

```json
"firebase": {
  "command": "firebase",
  "args": ["mcp"]
}
```

3. **Add or Update Configuration**: If the `firebase` block is missing or incorrect, add it to the `mcpServers` object:

```json
"firebase": {
  "command": "npx",
  "args": [
    "-y",
    "firebase-tools@latest",
    "mcp"
  ]
}
```
*CRITICAL: Merge this configuration into the existing `mcp.json` file. You MUST preserve any other existing servers inside the `mcpServers` object.*
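If you prefer to apply the merge programmatically rather than editing the file by hand, the operation is a shallow merge into `mcpServers`. A minimal Node sketch — file I/O is omitted and the `other-server` name is hypothetical:

```javascript
// Merge the firebase server into an existing config without touching other entries.
function mergeFirebaseServer(config) {
  return {
    ...config,
    mcpServers: {
      ...(config.mcpServers || {}),
      firebase: {
        command: "npx",
        args: ["-y", "firebase-tools@latest", "mcp"],
      },
    },
  };
}

// A hypothetical existing config with another server already registered.
const existing = {
  mcpServers: {
    "other-server": { command: "npx", args: ["-y", "other-mcp"] },
  },
};
const merged = mergeFirebaseServer(existing);
console.log(Object.keys(merged.mcpServers)); // both servers are present
```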

4. **Verify Configuration**: Save the file and confirm the `firebase` block is present and properly formatted JSON.

### 3. Restart and Verify Connection

1. **Restart Cursor**: Instruct the user to restart the Cursor application. **Stop and wait** for their confirmation before proceeding.
2. **Confirm Connection**: Check the MCP server list in the Cursor UI to confirm that the Firebase MCP server is connected.
@@ -0,0 +1,39 @@
# Gemini CLI Setup

To get the most out of Firebase in the Gemini CLI, follow these steps to install the agent extension and the MCP server.

## Recommended: Installing Extensions

The best way to get both the agent skills and the MCP server is via the Gemini extension.

### 1. Install and Verify Firebase Extension

Check if the extension is already installed before proceeding:

1. **Check Existing Extensions**: Run `gemini extensions list`. If the output includes `firebase`, the extension is already installed.
2. **Install Extension**: If not found, run the following command to install the Firebase agent skills and MCP server:

```bash
gemini extensions install https://github.com/firebase/agent-skills
```

3. **Verify Installation**: Run the following checks to confirm installation:

- `gemini mcp list` -> Output should include `firebase-tools`.
- `gemini skills list` -> Output should include `firebase-basics`.

### 2. Restart and Verify Connection

1. **Restart Gemini CLI**: Instruct the user to restart the Gemini CLI if any new installation occurred. **Stop and wait** for their confirmation before proceeding.

---

## Alternative: Manual MCP Configuration (Project Scope)

If the user only wants to use the MCP server for the current project:

### 1. Configure and Verify Firebase MCP Server

1. **Check Existing Configuration**: Run `gemini mcp list`. If the output includes `firebase-tools`, the MCP server is already configured.
2. **Add the MCP Server**: If not found, run the following command to configure the Firebase MCP Server:

```bash
gemini mcp add -e IS_GEMINI_CLI_EXTENSION=true firebase npx -y firebase-tools@latest mcp
```

3. **Verify Configuration**: Re-run `gemini mcp list` to confirm `firebase-tools` is connected.

### 2. Restart and Verify Connection

1. **Restart Gemini CLI**: Instruct the user to restart the Gemini CLI. **Stop and wait** for their confirmation before proceeding.
@@ -0,0 +1,70 @@
# GitHub Copilot Setup

To get the most out of Firebase with GitHub Copilot in VS Code, follow these steps to install the agent skills and the MCP server.

## Recommended: Global Setup

The agent skills and MCP server should be installed globally for consistent access across projects.

### 1. Install and Verify Firebase Skills

Check if the skills are already installed before proceeding:

1. **Check Local skills**: Run `npx skills list --agent github-copilot`. If the output includes `firebase-basics`, the skills are already installed locally.
2. **Check Global skills**: If not found locally, check the global installation by running:

```bash
npx skills list --global --agent github-copilot
```

If the output includes `firebase-basics`, the skills are already installed globally.

3. **Install Skills**: If both checks fail, run the following command to install the Firebase agent skills:

```bash
npx skills add firebase/agent-skills --agent github-copilot --skill "*"
```

*Note: Omit `--yes` and `--global` to choose the installation location manually. If prompted interactively in the terminal, ensure you send the appropriate user choices via standard input to complete the installation.*

4. **Verify Installation**: Re-run the checks in steps 1 or 2 to confirm that `firebase-basics` is now available.

### 2. Configure and Verify Firebase MCP Server

The MCP server allows GitHub Copilot to interact directly with Firebase projects.

1. **Locate `mcp.json`**: Find the configuration file for your environment:

- Workspace: `.vscode/mcp.json`
- Global: User Settings `mcp.json` file.

*Note: If the `.vscode/` directory or `mcp.json` file does not exist, create them and initialize the file with `{ "mcp": { "servers": {} } }` before proceeding.*

2. **Check Existing Configuration**: Open the `mcp.json` file and check the `mcp.servers` object for a `firebase` entry.

- It is already configured if the `command` is `"firebase"` OR if the `command` is `"npx"` with `"firebase-tools"` and `"mcp"` in the `args`.
- **Important**: If a valid `firebase` entry is found, the MCP server is already configured. **Skip step 3** and proceed directly to step 4.

**Example valid configurations**:

```json
"firebase": {
  "type": "stdio",
  "command": "npx",
  "args": ["-y", "firebase-tools@latest", "mcp"]
}
```

OR

```json
"firebase": {
  "type": "stdio",
  "command": "firebase",
  "args": ["mcp"]
}
```

3. **Add or Update Configuration**: If the `firebase` block is missing or incorrect, add it to the `mcp.servers` object:

```json
"firebase": {
  "type": "stdio",
  "command": "npx",
  "args": [
    "-y",
    "firebase-tools@latest",
    "mcp"
  ]
}
```
*CRITICAL: Merge this configuration into the existing `mcp.json` file under the `mcp.servers` object. You MUST preserve any other existing servers inside `mcp.servers`.*
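Because VS Code nests servers under `mcp.servers` rather than a top-level `mcpServers` key, a full file after the merge might look like the following. The `some-other-server` entry is hypothetical, included only to show that existing servers must be preserved:

```json
{
  "mcp": {
    "servers": {
      "some-other-server": {
        "type": "stdio",
        "command": "npx",
        "args": ["-y", "some-other-mcp"]
      },
      "firebase": {
        "type": "stdio",
        "command": "npx",
        "args": ["-y", "firebase-tools@latest", "mcp"]
      }
    }
  }
}
```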

4. **Verify Configuration**: Save the file and confirm the `firebase` block is present and properly formatted JSON.

### 3. Restart and Verify Connection

1. **Restart VS Code**: Instruct the user to restart VS Code. **Stop and wait** for their confirmation before proceeding.
2. **Confirm Connection**: Check the MCP server list in the VS Code Copilot UI to confirm that the Firebase MCP server is connected.
@@ -0,0 +1,65 @@
# Other Agents Setup

If you use another agent (like Windsurf, Cline, or Claude Desktop), follow these steps to install the agent skills and the MCP server.

## Recommended: Global Setup

The agent skills and MCP server should be installed globally for consistent access across projects.

### 1. Install and Verify Firebase Skills

Check if the skills are already installed before proceeding:

1. **Check Local skills**: Run `npx skills list --agent <agent-name>`. If the output includes `firebase-basics`, the skills are already installed locally. Replace `<agent-name>` with the actual agent name, which can be found [here](https://github.com/vercel-labs/skills/blob/main/README.md).
2. **Check Global skills**: If not found locally, check the global installation by running:

```bash
npx skills list --global --agent <agent-name>
```

If the output includes `firebase-basics`, the skills are already installed globally.

3. **Install Skills**: If both checks fail, run the following command to install the Firebase agent skills:

```bash
npx skills add firebase/agent-skills --agent <agent-name> --skill "*"
```

*Note: Omit `--yes` and `--global` to choose the installation location manually. If prompted interactively in the terminal, ensure you send the appropriate user choices via standard input to complete the installation.*

4. **Verify Installation**: Re-run the checks in steps 1 or 2 to confirm that `firebase-basics` is now available.

### 2. Configure and Verify Firebase MCP Server

The MCP server allows the agent to interact directly with Firebase projects.

1. **Locate MCP Configuration**: Find the configuration file for your agent (e.g., `~/.codeium/windsurf/mcp_config.json`, `cline_mcp_settings.json`, or `claude_desktop_config.json`).

*Note: If the file or its containing directory does not exist, create them and initialize the file with `{ "mcpServers": {} }` before proceeding.*

2. **Check Existing Configuration**: Open the configuration file and check the `mcpServers` section for a `firebase` entry.

- It is already configured if the `command` is `"firebase"` OR if the `command` is `"npx"` with `"firebase-tools"` and `"mcp"` in the `args`.
- **Important**: If a valid `firebase` entry is found, the MCP server is already configured. **Skip step 3** and proceed directly to step 4.

**Example valid configurations**:

```json
"firebase": {
  "command": "npx",
  "args": ["-y", "firebase-tools@latest", "mcp"]
}
```

OR

```json
"firebase": {
  "command": "firebase",
  "args": ["mcp"]
}
```

3. **Add or Update Configuration**: If the `firebase` block is missing or incorrect, add it to the `mcpServers` object:

```json
"firebase": {
  "command": "npx",
  "args": [
    "-y",
    "firebase-tools@latest",
    "mcp"
  ]
}
```

*CRITICAL: Merge this configuration into the existing file. You MUST preserve any other existing servers inside the `mcpServers` object.*

4. **Verify Configuration**: Save the file and confirm the `firebase` block is present and properly formatted JSON.

### 3. Restart and Verify Connection

1. **Restart Agent**: Instruct the user to restart the agent application. **Stop and wait** for their confirmation before proceeding.
2. **Confirm Connection**: Check the MCP server list in the agent's UI to confirm that the Firebase MCP server is connected.
@@ -0,0 +1,69 @@
# Firebase Web Setup Guide

## 1. Create a Firebase Project and App

If you haven't already created a project:

```bash
npx -y firebase-tools@latest projects:create
```

Register your web app:

```bash
npx -y firebase-tools@latest apps:create web my-web-app
```

(Note the **App ID** returned by this command.)

## 2. Installation

Install the Firebase SDK via npm:

```bash
npm install firebase
```

## 3. Initialization

Create a `firebase.js` (or `firebase.ts`) file. You can fetch your config object using the CLI:

```bash
npx -y firebase-tools@latest apps:sdkconfig <APP_ID>
```

Copy the output config object into your initialization file:

```javascript
import { initializeApp } from "firebase/app";
import { getAuth } from "firebase/auth";

// Your web app's Firebase configuration
const firebaseConfig = {
  apiKey: "API_KEY",
  authDomain: "PROJECT_ID.firebaseapp.com",
  projectId: "PROJECT_ID",
  storageBucket: "PROJECT_ID.firebasestorage.app",
  messagingSenderId: "SENDER_ID",
  appId: "APP_ID",
  measurementId: "G-MEASUREMENT_ID"
};

// Initialize Firebase
const app = initializeApp(firebaseConfig);
const auth = getAuth(app);

export { app, auth };
```
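Hardcoding the config object works (these values are public identifiers, not secrets), but many projects assemble it from build-time environment variables instead. A minimal sketch — the `FIREBASE_*` names are assumptions to adapt to your bundler's conventions:

```javascript
// Build the Firebase config from environment-style values.
// The FIREBASE_* names are hypothetical; align them with your build tool.
function configFromEnv(env) {
  return {
    apiKey: env.FIREBASE_API_KEY,
    authDomain: `${env.FIREBASE_PROJECT_ID}.firebaseapp.com`,
    projectId: env.FIREBASE_PROJECT_ID,
    storageBucket: `${env.FIREBASE_PROJECT_ID}.firebasestorage.app`,
    messagingSenderId: env.FIREBASE_SENDER_ID,
    appId: env.FIREBASE_APP_ID,
  };
}

const config = configFromEnv({
  FIREBASE_API_KEY: "API_KEY",
  FIREBASE_PROJECT_ID: "demo-project",
  FIREBASE_SENDER_ID: "SENDER_ID",
  FIREBASE_APP_ID: "APP_ID",
});
console.log(config.authDomain); // demo-project.firebaseapp.com
```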

## 4. Using Services

Import specific services as needed (Modular API):

```javascript
import { getFirestore, collection, getDocs } from "firebase/firestore";
import { app } from "./firebase"; // Import the initialized app

const db = getFirestore(app);

async function getUsers() {
  const querySnapshot = await getDocs(collection(db, "users"));
  querySnapshot.forEach((doc) => {
    console.log(doc.id, "=>", doc.data());
  });
}
```
@@ -0,0 +1,95 @@
|
|||||||
|
---
|
||||||
|
name: firebase-data-connect
|
||||||
|
description: Build and deploy Firebase Data Connect backends with PostgreSQL. Use for schema design, GraphQL queries/mutations, authorization, and SDK generation for web, Android, iOS, and Flutter apps.
|
||||||
|
---
|
||||||
|
|
||||||
|
# Firebase Data Connect
|
||||||
|
|
||||||
|
Firebase Data Connect is a relational database service using Cloud SQL for PostgreSQL with GraphQL schema, auto-generated queries/mutations, and type-safe SDKs.
|
||||||
|
|
||||||
|
## Project Structure
|
||||||
|
|
||||||
|
```
|
||||||
|
dataconnect/
|
||||||
|
├── dataconnect.yaml # Service configuration
|
||||||
|
├── schema/
|
||||||
|
│ └── schema.gql # Data model (types with @table)
|
||||||
|
└── connector/
|
||||||
|
├── connector.yaml # Connector config + SDK generation
|
||||||
|
├── queries.gql # Queries
|
||||||
|
└── mutations.gql # Mutations
|
||||||
|
```
|
||||||
|
|
||||||
|
## Development Workflow
|
||||||
|
|
||||||
|
Follow this strict workflow to build your application. You **must** read the linked reference files for each step to understand the syntax and available features.
|
||||||
|
|
||||||
|
### 1. Define Data Model (`schema/schema.gql`)
|
||||||
|
Define your GraphQL types, tables, and relationships.
|
||||||
|
> **Read [reference/schema.md](reference/schema.md)** for:
|
||||||
|
> * `@table`, `@col`, `@default`
|
||||||
|
> * Relationships (`@ref`, one-to-many, many-to-many)
|
||||||
|
> * Data types (UUID, Vector, JSON, etc.)
|
||||||
|
|
||||||
|
### 2. Define Operations (`connector/queries.gql`, `connector/mutations.gql`)
|
||||||
|
Write the queries and mutations your client will use. Data Connect generates the underlying SQL.
|
||||||
|
> **Read [reference/operations.md](reference/operations.md)** for:
|
||||||
|
> * **Queries**: Filtering (`where`), Ordering (`orderBy`), Pagination (`limit`/`offset`).
|
||||||
|
> * **Mutations**: Create (`_insert`), Update (`_update`), Delete (`_delete`).
|
||||||
|
> * **Upserts**: Use `_upsert` to "insert or update" records (CRITICAL for user profiles).
|
||||||
|
> * **Transactions**: use `@transaction` for multi-step atomic operations.
|
||||||
|
|
||||||
|
### 3. Secure Your App (`connector/` files)
|
||||||
|
Add authorization logic closely with your operations.
|
||||||
|
> **Read [reference/security.md](reference/security.md)** for:
|
||||||
|
> * `@auth(level: ...)` for PUBLIC, USER, or NO_ACCESS.
|
||||||
|
> * `@check` and `@redact` for row-level security and validation.
|
||||||
|
|
||||||
|
### 4. Generate & Use SDKs
|
||||||
|
Generate type-safe code for your client platform.
|
||||||
|
> **Read [reference/sdks.md](reference/sdks.md)** for:
|
||||||
|
> * Android (Kotlin), iOS (Swift), Web (TypeScript), Flutter (Dart).
|
||||||
|
> * How to initialize and call your queries/mutations.
|
||||||
|
> * **Nested Data**: See how to access related fields (e.g., `movie.reviews`).
|
||||||
|
|
||||||
|
---

## Feature Capability Map

If you need to implement a specific feature, consult the mapped reference file:

| Feature | Reference File | Key Concepts |
| :--- | :--- | :--- |
| **Data Modeling** | [reference/schema.md](reference/schema.md) | `@table`, `@unique`, `@index`, Relations |
| **Vector Search** | [reference/advanced.md](reference/advanced.md) | `Vector`, `@col(dataType: "vector")` |
| **Full-Text Search** | [reference/advanced.md](reference/advanced.md) | `@searchable` |
| **Upserting Data** | [reference/operations.md](reference/operations.md) | `_upsert` mutations |
| **Complex Filters** | [reference/operations.md](reference/operations.md) | `_or`, `_and`, `_not`, `eq`, `contains` |
| **Transactions** | [reference/operations.md](reference/operations.md) | `@transaction`, `response` binding |
| **Environment Config** | [reference/config.md](reference/config.md) | `dataconnect.yaml`, `connector.yaml` |
---

## Deployment & CLI

> **Read [reference/config.md](reference/config.md)** for a deep dive on configuration.

Common commands (run from project root):

```bash
# Initialize Data Connect
npx -y firebase-tools@latest init dataconnect

# Start local emulator
npx -y firebase-tools@latest emulators:start --only dataconnect

# Generate SDK code
npx -y firebase-tools@latest dataconnect:sdk:generate

# Deploy to production
npx -y firebase-tools@latest deploy --only dataconnect
```
## Examples

For complete, working code examples of schemas and operations, see **[examples.md](examples.md)**.
# Examples

Complete, working examples for common Data Connect use cases.

---

## Movie Review App

A complete schema for a movie database with reviews, actors, and user authentication.

### Schema

```graphql
# schema.gql

# Users
type User @table(key: "uid") {
  uid: String! @default(expr: "auth.uid")
  email: String! @unique
  displayName: String
  createdAt: Timestamp! @default(expr: "request.time")
}

# Movies
type Movie @table {
  id: UUID! @default(expr: "uuidV4()")
  title: String!
  releaseYear: Int
  genre: String @index
  rating: Float
  description: String
  posterUrl: String
  createdAt: Timestamp! @default(expr: "request.time")
}

# Movie metadata (one-to-one)
type MovieMetadata @table {
  movie: Movie! @unique
  director: String
  runtime: Int
  budget: Int64
}

# Actors
type Actor @table {
  id: UUID! @default(expr: "uuidV4()")
  name: String!
  birthDate: Date
}

# Movie-Actor relationship (many-to-many)
type MovieActor @table(key: ["movie", "actor"]) {
  movie: Movie!
  actor: Actor!
  role: String! # "lead" or "supporting"
  character: String
}

# Reviews (user-owned)
type Review @table @unique(fields: ["movie", "user"]) {
  id: UUID! @default(expr: "uuidV4()")
  movie: Movie!
  user: User!
  rating: Int!
  text: String
  createdAt: Timestamp! @default(expr: "request.time")
}
```

### Queries

```graphql
# queries.gql

# Public: List movies with filtering
query ListMovies($genre: String, $minRating: Float, $limit: Int)
@auth(level: PUBLIC) {
  movies(
    where: {
      genre: { eq: $genre },
      rating: { ge: $minRating }
    },
    orderBy: [{ rating: DESC }],
    limit: $limit
  ) {
    id title genre rating releaseYear posterUrl
  }
}

# Public: Get movie with full details
query GetMovie($id: UUID!) @auth(level: PUBLIC) {
  movie(id: $id) {
    id title genre rating releaseYear description
    metadata: movieMetadata_on_movie { director runtime }
    actors: actors_via_MovieActor { name }
    reviews: reviews_on_movie(orderBy: [{ createdAt: DESC }], limit: 10) {
      rating text createdAt
      user { displayName }
    }
  }
}

# User: Get my reviews
query MyReviews @auth(level: USER) {
  reviews(where: { user: { uid: { eq_expr: "auth.uid" }}}) {
    id rating text createdAt
    movie { id title posterUrl }
  }
}
```

### Mutations

```graphql
# mutations.gql

# User: Create/update profile on first login
mutation UpsertUser($email: String!, $displayName: String) @auth(level: USER) {
  user_upsert(data: {
    uid_expr: "auth.uid",
    email: $email,
    displayName: $displayName
  })
}

# User: Add review (one per movie per user)
mutation AddReview($movieId: UUID!, $rating: Int!, $text: String)
@auth(level: USER) {
  review_upsert(data: {
    movie: { id: $movieId },
    user: { uid_expr: "auth.uid" },
    rating: $rating,
    text: $text
  })
}

# User: Delete my review
mutation DeleteReview($id: UUID!) @auth(level: USER) {
  review_delete(
    first: { where: {
      id: { eq: $id },
      user: { uid: { eq_expr: "auth.uid" }}
    }}
  )
}
```

---

## E-Commerce Store

Products, orders, and cart management with user authentication.

### Schema

```graphql
# schema.gql

type User @table(key: "uid") {
  uid: String! @default(expr: "auth.uid")
  email: String! @unique
  name: String
  shippingAddress: String
}

type Product @table {
  id: UUID! @default(expr: "uuidV4()")
  name: String! @index
  description: String
  price: Float!
  stock: Int! @default(value: 0)
  category: String @index
  imageUrl: String
}

type CartItem @table(key: ["user", "product"]) {
  user: User!
  product: Product!
  quantity: Int!
}

enum OrderStatus {
  PENDING
  PAID
  SHIPPED
  DELIVERED
  CANCELLED
}

type Order @table {
  id: UUID! @default(expr: "uuidV4()")
  user: User!
  status: OrderStatus! @default(value: PENDING)
  total: Float!
  shippingAddress: String!
  createdAt: Timestamp! @default(expr: "request.time")
}

type OrderItem @table {
  id: UUID! @default(expr: "uuidV4()")
  order: Order!
  product: Product!
  quantity: Int!
  priceAtPurchase: Float!
}
```

### Operations

```graphql
# Public: Browse products
query ListProducts($category: String, $search: String) @auth(level: PUBLIC) {
  products(where: {
    category: { eq: $category },
    name: { contains: $search },
    stock: { gt: 0 }
  }) {
    id name price stock imageUrl
  }
}

# User: View cart
query MyCart @auth(level: USER) {
  cartItems(where: { user: { uid: { eq_expr: "auth.uid" }}}) {
    quantity
    product { id name price imageUrl stock }
  }
}

# User: Add to cart
mutation AddToCart($productId: UUID!, $quantity: Int!) @auth(level: USER) {
  cartItem_upsert(data: {
    user: { uid_expr: "auth.uid" },
    product: { id: $productId },
    quantity: $quantity
  })
}

# User: Checkout (transactional)
mutation Checkout($shippingAddress: String!)
@auth(level: USER)
@transaction {
  # Query cart items
  query @redact {
    cartItems(where: { user: { uid: { eq_expr: "auth.uid" }}})
    @check(expr: "this.size() > 0", message: "Cart is empty") {
      quantity
      product { id price }
    }
  }
  # Create order (in a real app, calculate the total from the cart)
  order_insert(data: {
    user: { uid_expr: "auth.uid" },
    shippingAddress: $shippingAddress,
    total: 0 # Calculate in app logic
  })
}
```
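
The `Checkout` mutation stores `total: 0` and defers the arithmetic to app code. A minimal sketch of that client-side step (the `CartLine` shape mirrors the `MyCart` selection; `orderTotal` is an illustrative helper, not part of any generated SDK):

```typescript
// Compute an order total from the cart items returned by MyCart.
interface CartLine {
  quantity: number;
  product: { id: string; price: number };
}

function orderTotal(items: CartLine[]): number {
  const raw = items.reduce((sum, i) => sum + i.quantity * i.product.price, 0);
  // Round to cents so the stored Float doesn't accumulate FP drift.
  return Math.round(raw * 100) / 100;
}
```

The rounded value would then be passed as a variable to a `Checkout` variant that accepts a `total` argument.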

---

## Blog with Permissions

Multi-author blog with role-based permissions.

### Schema

```graphql
# schema.gql

type User @table(key: "uid") {
  uid: String! @default(expr: "auth.uid")
  email: String! @unique
  name: String!
  bio: String
}

enum UserRole {
  VIEWER
  AUTHOR
  EDITOR
  ADMIN
}

type BlogPermission @table(key: ["user"]) {
  user: User!
  role: UserRole! @default(value: VIEWER)
}

enum PostStatus {
  DRAFT
  PUBLISHED
  ARCHIVED
}

type Post @table {
  id: UUID! @default(expr: "uuidV4()")
  author: User!
  title: String! @searchable
  content: String! @searchable
  status: PostStatus! @default(value: DRAFT)
  publishedAt: Timestamp
  createdAt: Timestamp! @default(expr: "request.time")
  updatedAt: Timestamp! @default(expr: "request.time")
}

type Comment @table {
  id: UUID! @default(expr: "uuidV4()")
  post: Post!
  author: User!
  content: String!
  createdAt: Timestamp! @default(expr: "request.time")
}
```

### Operations with Role Checks

```graphql
# Public: Read published posts
query PublishedPosts @auth(level: PUBLIC) {
  posts(
    where: { status: { eq: PUBLISHED }},
    orderBy: [{ publishedAt: DESC }]
  ) {
    id title content publishedAt
    author { name }
  }
}

# Author+: Create post
mutation CreatePost($title: String!, $content: String!)
@auth(level: USER)
@transaction {
  # Check user is at least AUTHOR
  query @redact {
    blogPermission(key: { user: { uid_expr: "auth.uid" }})
    @check(expr: "this != null", message: "No permission record") {
      role @check(expr: "this in ['AUTHOR', 'EDITOR', 'ADMIN']", message: "Must be author+")
    }
  }
  post_insert(data: {
    author: { uid_expr: "auth.uid" },
    title: $title,
    content: $content
  })
}

# Editor+: Publish any post
mutation PublishPost($id: UUID!)
@auth(level: USER)
@transaction {
  query @redact {
    blogPermission(key: { user: { uid_expr: "auth.uid" }}) {
      role @check(expr: "this in ['EDITOR', 'ADMIN']", message: "Must be editor+")
    }
  }
  post_update(id: $id, data: {
    status: PUBLISHED,
    publishedAt_expr: "request.time"
  })
}

# Admin: Grant role
mutation GrantRole($userUid: String!, $role: UserRole!)
@auth(level: USER)
@transaction {
  query @redact {
    blogPermission(key: { user: { uid_expr: "auth.uid" }}) {
      role @check(expr: "this == 'ADMIN'", message: "Must be admin")
    }
  }
  blogPermission_upsert(data: {
    user: { uid: $userUid },
    role: $role
  })
}
```

# Advanced Features Reference

## Contents
- [Vector Similarity Search](#vector-similarity-search)
- [Full-Text Search](#full-text-search)
- [Cloud Functions Integration](#cloud-functions-integration)
- [Data Seeding & Bulk Operations](#data-seeding--bulk-operations)

---
## Vector Similarity Search

Semantic search using Vertex AI embeddings and PostgreSQL's `pgvector`.

### Schema Setup

```graphql
type Movie @table {
  id: UUID! @default(expr: "uuidV4()")
  title: String!
  description: String
  # Vector field for embeddings - size must match model output (768 for gecko)
  descriptionEmbedding: Vector! @col(size: 768)
}
```

### Generate Embeddings in Mutations

Use the `_embed` server value to auto-generate embeddings via Vertex AI:

```graphql
mutation CreateMovieWithEmbedding($title: String!, $description: String!)
@auth(level: USER) {
  movie_insert(data: {
    title: $title,
    description: $description,
    descriptionEmbedding_embed: {
      model: "textembedding-gecko@003",
      text: $description
    }
  })
}
```

### Similarity Search Query

Data Connect generates `_similarity` fields for Vector columns:

```graphql
query SearchMovies($query: String!) @auth(level: PUBLIC) {
  movies_descriptionEmbedding_similarity(
    compare_embed: { model: "textembedding-gecko@003", text: $query },
    method: L2,  # L2, COSINE, or INNER_PRODUCT
    within: 2.0, # Max distance threshold
    limit: 5
  ) {
    id
    title
    description
    _metadata { distance } # See how close each result is
  }
}
```

### Similarity Parameters

| Parameter | Description |
|-----------|-------------|
| `compare` | Raw Vector to compare against |
| `compare_embed` | Generate embedding from text via Vertex AI |
| `method` | Distance function: `L2`, `COSINE`, `INNER_PRODUCT` |
| `within` | Max distance (results farther away are excluded) |
| `where` | Additional filters |
| `limit` | Max results to return |
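
To build intuition for the `method` choices, the two most common distances can be computed by hand on plain arrays (a standalone sketch; in Data Connect the comparison runs inside PostgreSQL via `pgvector`, not in your client):

```typescript
// L2 (Euclidean) distance: sensitive to vector magnitude.
function l2(a: number[], b: number[]): number {
  return Math.sqrt(a.reduce((s, ai, i) => s + (ai - b[i]) ** 2, 0));
}

// Cosine distance (1 - cosine similarity): ignores magnitude,
// compares direction only.
function cosineDistance(a: number[], b: number[]): number {
  const dot = a.reduce((s, ai, i) => s + ai * b[i], 0);
  const norm = (v: number[]) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return 1 - dot / (norm(a) * norm(b));
}
```

For example, `[1, 0]` and `[2, 0]` point the same way, so their cosine distance is 0 while their L2 distance is 1; if your embedding model does not emit normalized vectors, `COSINE` and `L2` will rank results differently.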

### Custom Embeddings

Pass pre-computed vectors directly:

```graphql
mutation StoreCustomEmbedding($id: UUID!, $embedding: Vector!) @auth(level: USER) {
  movie_update(id: $id, data: { descriptionEmbedding: $embedding })
}

query SearchWithCustomVector($vector: Vector!) @auth(level: PUBLIC) {
  movies_descriptionEmbedding_similarity(
    compare: $vector,
    method: COSINE,
    limit: 10
  ) { id title }
}
```

---

## Full-Text Search

Fast keyword/phrase search using PostgreSQL's full-text capabilities.

### Enable with @searchable

```graphql
type Movie @table {
  title: String! @searchable
  description: String @searchable(language: "english")
  genre: String @searchable
}
```

### Search Query

Data Connect generates `_search` fields:

```graphql
query SearchMovies($query: String!) @auth(level: PUBLIC) {
  movies_search(
    query: $query,
    queryFormat: QUERY, # QUERY, PLAIN, PHRASE, or ADVANCED
    limit: 20
  ) {
    id title description
    _metadata { relevance } # Relevance score
  }
}
```

### Query Formats

| Format | Description |
|--------|-------------|
| `QUERY` | Web-style (default): quotes, AND, OR supported |
| `PLAIN` | Match all words, in any order |
| `PHRASE` | Match exact phrase |
| `ADVANCED` | Full tsquery syntax |

### Tuning Results

```graphql
query SearchWithThreshold($query: String!) @auth(level: PUBLIC) {
  movies_search(
    query: $query,
    relevanceThreshold: 0.05, # Min relevance score
    where: { genre: { eq: "Action" }},
    orderBy: [{ releaseYear: DESC }]
  ) { id title }
}
```

### Supported Languages

`english` (default), `french`, `german`, `spanish`, `italian`, `portuguese`, `dutch`, `danish`, `finnish`, `norwegian`, `swedish`, `russian`, `arabic`, `hindi`, `simple`
---

## Cloud Functions Integration

Trigger Cloud Functions when mutations execute.

### Basic Trigger (Node.js)

```typescript
import { onMutationExecuted } from "firebase-functions/dataconnect";
import { logger } from "firebase-functions";

export const onUserCreate = onMutationExecuted(
  {
    service: "myService",
    connector: "default",
    operation: "CreateUser",
    region: "us-central1" // Must match Data Connect location
  },
  (event) => {
    const variables = event.data.payload.variables;
    const returnedData = event.data.payload.data;

    logger.info("User created:", returnedData);
    // Send welcome email, sync to analytics, etc.
  }
);
```

### Basic Trigger (Python)

```python
from firebase_functions import dataconnect_fn, logger

@dataconnect_fn.on_mutation_executed(
    service="myService",
    connector="default",
    operation="CreateUser"
)
def on_user_create(event: dataconnect_fn.Event):
    variables = event.data.payload.variables
    returned_data = event.data.payload.data
    logger.info("User created:", returned_data)
```

### Event Data

```typescript
// event.authType: "app_user" | "unauthenticated" | "admin"
// event.authId: Firebase Auth UID (for app_user)
// event.data.payload.variables: mutation input variables
// event.data.payload.data: mutation response data
// event.data.payload.errors: any errors that occurred
```

### Filtering with Wildcards

```typescript
// Trigger on all User* mutations
export const onUserMutation = onMutationExecuted(
  { operation: "User*" },
  (event) => { /* ... */ }
);

// Capture the operation name
export const onAnyMutation = onMutationExecuted(
  { service: "myService", operation: "{operationName}" },
  (event) => {
    console.log("Operation:", event.params.operationName);
  }
);
```

### Use Cases

- **Data sync**: Replicate to Firestore, BigQuery, or external APIs
- **Notifications**: Send emails and push notifications on events
- **Async workflows**: Image processing, data aggregation
- **Audit logging**: Track all data changes

> ⚠️ **Avoid infinite loops**: Don't trigger mutations that would fire the same trigger. Use filters to exclude self-triggered events.

---

## Data Seeding & Bulk Operations

### Local Prototyping with _insertMany

```graphql
mutation SeedMovies @transaction {
  movie_insertMany(data: [
    { id: "uuid-1", title: "Movie 1", genre: "Action" },
    { id: "uuid-2", title: "Movie 2", genre: "Drama" },
    { id: "uuid-3", title: "Movie 3", genre: "Comedy" }
  ])
}
```

### Reset Data with _upsertMany

```graphql
mutation ResetData {
  movie_upsertMany(data: [
    { id: "uuid-1", title: "Movie 1", genre: "Action" },
    { id: "uuid-2", title: "Movie 2", genre: "Drama" }
  ])
}
```

### Clear All Data

```graphql
mutation ClearMovies {
  movie_deleteMany(all: true)
}
```

### Production: Admin SDK Bulk Operations

```typescript
import { initializeApp } from 'firebase-admin/app';
import { getDataConnect } from 'firebase-admin/data-connect';

const app = initializeApp();
const dc = getDataConnect({ location: "us-central1", serviceId: "my-service" });

const movies = [
  { id: "uuid-1", title: "Movie 1", genre: "Action" },
  { id: "uuid-2", title: "Movie 2", genre: "Drama" }
];

// Bulk insert
await dc.insertMany("movie", movies);

// Bulk upsert
await dc.upsertMany("movie", movies);

// Single operations
await dc.insert("movie", movies[0]);
await dc.upsert("movie", movies[0]);
```

### Emulator Data Persistence

```bash
# Export emulator data
npx -y firebase-tools@latest emulators:export ./seed-data

# Start with saved data
npx -y firebase-tools@latest emulators:start --only dataconnect --import=./seed-data
```

# Configuration Reference

## Contents
- [Project Structure](#project-structure)
- [dataconnect.yaml](#dataconnectyaml)
- [connector.yaml](#connectoryaml)
- [Firebase CLI Commands](#firebase-cli-commands)
- [Emulator](#emulator)
- [Deployment](#deployment)

---
## Project Structure

```
project-root/
├── firebase.json              # Firebase project config
└── dataconnect/
    ├── dataconnect.yaml       # Service configuration
    ├── schema/
    │   └── schema.gql         # Data model (types, relationships)
    └── connector/
        ├── connector.yaml     # Connector config + SDK generation
        ├── queries.gql        # Query operations
        └── mutations.gql      # Mutation operations (optional separate file)
```

---

## dataconnect.yaml

Main Data Connect service configuration:

```yaml
specVersion: "v1"
serviceId: "my-service"
location: "us-central1"
schemaValidation: "STRICT" # or "COMPATIBLE"
schema:
  source: "./schema"
  datasource:
    postgresql:
      database: "fdcdb"
      cloudSql:
        instanceId: "my-instance"
connectorDirs: ["./connector"]
```

| Field | Description |
|-------|-------------|
| `specVersion` | Always `"v1"` |
| `serviceId` | Unique identifier for the service |
| `location` | GCP region (`us-central1`, `us-east4`, `europe-west1`, etc.) |
| `schemaValidation` | Deployment mode: `"STRICT"` (must match exactly) or `"COMPATIBLE"` (backward compatible) |
| `schema.source` | Path to schema directory |
| `schema.datasource` | PostgreSQL connection config |
| `connectorDirs` | List of connector directories |

### Cloud SQL Configuration

```yaml
schema:
  datasource:
    postgresql:
      database: "my-database"      # Database name
      cloudSql:
        instanceId: "my-instance"  # Cloud SQL instance ID
```

---

## connector.yaml

Connector configuration and SDK generation:

```yaml
connectorId: "default"
generate:
  javascriptSdk:
    outputDir: "../web/src/lib/dataconnect"
    package: "@myapp/dataconnect"
  kotlinSdk:
    outputDir: "../android/app/src/main/kotlin/com/myapp/dataconnect"
    package: "com.myapp.dataconnect"
  swiftSdk:
    outputDir: "../ios/MyApp/DataConnect"
```

### SDK Generation Options

| SDK | Fields |
|-----|--------|
| `javascriptSdk` | `outputDir`, `package` |
| `kotlinSdk` | `outputDir`, `package` |
| `swiftSdk` | `outputDir` |
| `nodeAdminSdk` | `outputDir`, `package` (for the Admin SDK) |

---

## Firebase CLI Commands

### Initialize Data Connect

```bash
# Interactive setup
npx -y firebase-tools@latest init dataconnect

# Set project
npx -y firebase-tools@latest use <project-id>
```

### Local Development

```bash
# Start emulator
npx -y firebase-tools@latest emulators:start --only dataconnect

# Start with database seed data
npx -y firebase-tools@latest emulators:start --only dataconnect --import=./seed-data

# Generate SDKs
npx -y firebase-tools@latest dataconnect:sdk:generate

# Watch for schema changes (auto-regenerate)
npx -y firebase-tools@latest dataconnect:sdk:generate --watch
```

### Schema Management

```bash
# Compare local schema to production
npx -y firebase-tools@latest dataconnect:sql:diff

# Apply migration
npx -y firebase-tools@latest dataconnect:sql:migrate
```

### Deployment

```bash
# Deploy Data Connect service
npx -y firebase-tools@latest deploy --only dataconnect

# Deploy specific connector
npx -y firebase-tools@latest deploy --only dataconnect:connector-id

# Deploy with schema migration
npx -y firebase-tools@latest deploy --only dataconnect --force
```

---

## Emulator

### Start Emulator

```bash
npx -y firebase-tools@latest emulators:start --only dataconnect
```

Default ports:
- Data Connect: `9399`
- PostgreSQL: `9939` (local PostgreSQL instance)

### Emulator Configuration (firebase.json)

```json
{
  "emulators": {
    "dataconnect": {
      "port": 9399
    }
  }
}
```

### Connect from SDK

```typescript
// Web
import { connectDataConnectEmulator } from 'firebase/data-connect';
connectDataConnectEmulator(dc, 'localhost', 9399);

// Android
connector.dataConnect.useEmulator("10.0.2.2", 9399)

// iOS
connector.useEmulator(host: "localhost", port: 9399)
```

### Seed Data

Create seed data files and import them:

```bash
# Export current emulator data
npx -y firebase-tools@latest emulators:export ./seed-data

# Start with seed data
npx -y firebase-tools@latest emulators:start --only dataconnect --import=./seed-data
```

---

## Deployment

### Deploy Workflow

1. **Test locally** with the emulator
2. **Generate SQL diff**: `npx -y firebase-tools@latest dataconnect:sql:diff`
3. **Review migration**: Check for breaking changes
4. **Deploy**: `npx -y firebase-tools@latest deploy --only dataconnect`

### Schema Migrations

Data Connect auto-generates PostgreSQL migrations:

```bash
# Preview migration
npx -y firebase-tools@latest dataconnect:sql:diff

# Apply migration (interactive)
npx -y firebase-tools@latest dataconnect:sql:migrate

# Force migration (non-interactive)
npx -y firebase-tools@latest dataconnect:sql:migrate --force
```

### Breaking Changes

Some schema changes require special handling:
- Removing required fields
- Changing field types
- Removing tables

Use the `--force` flag to acknowledge breaking changes during deploy.

### CI/CD Integration

```yaml
# GitHub Actions example
- name: Deploy Data Connect
  run: |
    npx -y firebase-tools@latest deploy --only dataconnect --token ${{ secrets.FIREBASE_TOKEN }} --force
```

---

## VS Code Extension

Install the "Firebase Data Connect" extension for:
- Schema intellisense and validation
- GraphQL operation testing
- Emulator integration
- SDK generation on save

### Extension Settings

```json
{
  "firebase.dataConnect.autoGenerateSdk": true,
  "firebase.dataConnect.emulator.port": 9399
}
```

# Operations Reference

## Contents
- [Generated Fields](#generated-fields)
- [Queries](#queries)
- [Mutations](#mutations)
- [Key Scalars](#key-scalars)
- [Multi-Step Operations](#multi-step-operations)

---

## Generated Fields

Data Connect auto-generates fields for each `@table` type:

| Generated Field | Purpose | Example |
|-----------------|---------|---------|
| `movie(id: UUID, key: Key, first: Row)` | Get single record | `movie(id: $id)` or `movie(first: {where: ...})` |
| `movies(where: ..., orderBy: ..., limit: ..., offset: ..., distinct: ..., having: ...)` | List/filter records | `movies(where: {...})` |
| `movie_insert(data: ...)` | Create record | Returns key |
| `movie_insertMany(data: [...])` | Bulk create | Returns keys |
| `movie_update(id: ..., data: ...)` | Update by ID | Returns key or null |
| `movie_updateMany(where: ..., data: ...)` | Bulk update | Returns count |
| `movie_upsert(data: ...)` | Insert or update | Returns key |
| `movie_delete(id: ...)` | Delete by ID | Returns key or null |
| `movie_deleteMany(where: ...)` | Bulk delete | Returns count |
|
||||||
|
|
||||||
|
### Relation Fields
|
||||||
|
For a `Post` with `author: User!`:
|
||||||
|
- `post.author` - Navigate to related User
|
||||||
|
- `user.posts_on_author` - Reverse: all Posts by User
|
||||||
|
|
||||||
|
For many-to-many via `MovieActor`:
|
||||||
|
- `movie.actors_via_MovieActor` - Get all actors
|
||||||
|
- `actor.movies_via_MovieActor` - Get all movies
|
||||||
|
|
||||||
|
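The generated relation fields can be combined in one query; a minimal sketch, assuming the `User`/`Post` schema above (field names are illustrative):

```graphql
query UserWithPosts($id: UUID!) @auth(level: PUBLIC) {
  user(id: $id) {
    name
    # Reverse relation generated from Post.author
    posts_on_author { id title }
  }
}
```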
---

## Queries

### Basic Query

```graphql
query GetMovie($id: UUID!) @auth(level: PUBLIC) {
  movie(id: $id) {
    id title genre releaseYear
  }
}
```

### List with Filtering

```graphql
query ListMovies($genre: String, $minRating: Int) @auth(level: PUBLIC) {
  movies(
    where: {
      genre: { eq: $genre },
      rating: { ge: $minRating }
    },
    orderBy: [{ releaseYear: DESC }, { title: ASC }],
    limit: 20,
    offset: 0
  ) {
    id title genre rating
  }
}
```

### Filter Operators

| Operator | Description | Example |
|----------|-------------|---------|
| `eq` | Equals | `{ title: { eq: "Matrix" }}` |
| `ne` | Not equals | `{ status: { ne: "deleted" }}` |
| `gt`, `ge` | Greater than (or equal) | `{ rating: { ge: 4 }}` |
| `lt`, `le` | Less than (or equal) | `{ releaseYear: { lt: 2000 }}` |
| `in` | In list | `{ genre: { in: ["Action", "Drama"] }}` |
| `nin` | Not in list | `{ status: { nin: ["deleted", "hidden"] }}` |
| `isNull` | Is null check | `{ description: { isNull: true }}` |
| `contains` | String contains | `{ title: { contains: "war" }}` |
| `startsWith` | String starts with | `{ title: { startsWith: "The" }}` |
| `endsWith` | String ends with | `{ email: { endsWith: "@gmail.com" }}` |
| `includes` | Array includes | `{ tags: { includes: "sci-fi" }}` |

### Expression Operators (Compare with Server Values)

Use the `_expr` suffix to compare against server-side values:

```graphql
query MyPosts @auth(level: USER) {
  posts(where: { authorUid: { eq_expr: "auth.uid" }}) {
    id title
  }
}

query RecentPosts @auth(level: PUBLIC) {
  posts(where: { publishedAt: { lt_expr: "request.time" }}) {
    id title
  }
}
```

### Logical Operators

```graphql
query ComplexFilter($genre: String, $minRating: Int) @auth(level: PUBLIC) {
  movies(where: {
    _or: [
      { genre: { eq: $genre }},
      { rating: { ge: $minRating }}
    ],
    _and: [
      { releaseYear: { ge: 2000 }},
      { status: { ne: "hidden" }}
    ],
    _not: { genre: { eq: "Horror" }}
  }) { id title }
}
```

### Relational Queries

```graphql
# Navigate relationships
query MovieWithDetails($id: UUID!) @auth(level: PUBLIC) {
  movie(id: $id) {
    title
    # One-to-one
    metadata: movieMetadata_on_movie { director }
    # One-to-many
    reviews: reviews_on_movie { rating user { name }}
    # Many-to-many
    actors: actors_via_MovieActor { name }
  }
}

# Filter by related data
query MoviesByDirector($director: String!) @auth(level: PUBLIC) {
  movies(where: {
    movieMetadata_on_movie: { director: { eq: $director }}
  }) { id title }
}
```

### Aliases

```graphql
query CompareRatings($genre: String!) @auth(level: PUBLIC) {
  highRated: movies(where: { genre: { eq: $genre }, rating: { ge: 8 }}) {
    title rating
  }
  lowRated: movies(where: { genre: { eq: $genre }, rating: { lt: 5 }}) {
    title rating
  }
}
```

---

## Mutations

### Create

```graphql
mutation CreateMovie($title: String!, $genre: String) @auth(level: USER) {
  movie_insert(data: {
    title: $title,
    genre: $genre
  })
}
```

### Create with Server Values

```graphql
mutation CreatePost($title: String!, $content: String!) @auth(level: USER) {
  post_insert(data: {
    authorUid_expr: "auth.uid",      # Current user
    id_expr: "uuidV4()",             # Auto-generate UUID
    createdAt_expr: "request.time",  # Server timestamp
    title: $title,
    content: $content
  })
}
```

### Update

```graphql
mutation UpdateMovie($id: UUID!, $title: String, $genre: String) @auth(level: USER) {
  movie_update(
    id: $id,
    data: {
      title: $title,
      genre: $genre,
      updatedAt_expr: "request.time"
    }
  )
}
```

### Update Operators

```graphql
mutation IncrementViews($id: UUID!) @auth(level: PUBLIC) {
  movie_update(id: $id, data: {
    viewCount_update: { inc: 1 }
  })
}

mutation AddTag($id: UUID!, $tag: String!) @auth(level: USER) {
  movie_update(id: $id, data: {
    tags_update: { add: [$tag] }  # add, remove, append, prepend
  })
}
```

| Operator | Types | Description |
|----------|-------|-------------|
| `inc` | Int, Float, Date, Timestamp | Increment value |
| `dec` | Int, Float, Date, Timestamp | Decrement value |
| `add` | Lists | Add items if not present |
| `remove` | Lists | Remove all matching items |
| `append` | Lists | Append to end |
| `prepend` | Lists | Prepend to start |
### Upsert

```graphql
mutation UpsertUser($email: String!, $name: String!) @auth(level: USER) {
  user_upsert(data: {
    uid_expr: "auth.uid",
    email: $email,
    name: $name
  })
}
```

### Delete

```graphql
mutation DeleteMovie($id: UUID!) @auth(level: USER) {
  movie_delete(id: $id)
}

mutation DeleteOldDrafts @auth(level: USER) {
  post_deleteMany(where: {
    status: { eq: "draft" },
    createdAt: { lt_time: { now: true, sub: { days: 30 }}}
  })
}
```

### Filtered Updates/Deletes (User-Owned)

```graphql
mutation UpdateMyPost($id: UUID!, $content: String!) @auth(level: USER) {
  post_update(
    first: { where: {
      id: { eq: $id },
      authorUid: { eq_expr: "auth.uid" }  # Only own posts
    }},
    data: { content: $content }
  )
}
```

---

## Key Scalars

Key scalars (`Movie_Key`, `User_Key`) are auto-generated types representing primary keys:

```graphql
# Using a key scalar
query GetMovie($key: Movie_Key!) @auth(level: PUBLIC) {
  movie(key: $key) { title }
}

# Variable format
# { "key": { "id": "uuid-here" } }

# Composite key
# { "key": { "movieId": "...", "userId": "..." } }
```

Key scalars are returned by mutations:

```graphql
mutation CreateAndFetch($title: String!) @auth(level: USER) {
  key: movie_insert(data: { title: $title })
  # Returns: { "key": { "id": "generated-uuid" } }
}
```
---

## Multi-Step Operations

### @transaction

Ensures atomicity: all steps succeed, or all roll back.

```graphql
mutation CreateUserWithProfile($name: String!, $bio: String!)
  @auth(level: USER)
  @transaction {
  # Step 1: Create user
  user_insert(data: {
    uid_expr: "auth.uid",
    name: $name
  })
  # Step 2: Create profile (uses response from step 1)
  userProfile_insert(data: {
    userId_expr: "response.user_insert.uid",
    bio: $bio
  })
}
```

### Using the `response` Binding

Access results from previous steps:

```graphql
mutation CreateTodoWithItem($listName: String!, $itemText: String!)
  @auth(level: USER)
  @transaction {
  todoList_insert(data: {
    id_expr: "uuidV4()",
    name: $listName
  })
  todoItem_insert(data: {
    listId_expr: "response.todoList_insert.id",  # From previous step
    text: $itemText
  })
}
```

### Embedded Queries

Run queries within mutations for validation:

```graphql
mutation AddToPublicList($listId: UUID!, $item: String!)
  @auth(level: USER)
  @transaction {
  # Step 1: Verify the list exists and is public
  query @redact {
    todoList(id: $listId) @check(expr: "this != null", message: "List not found") {
      isPublic @check(expr: "this == true", message: "List is not public")
    }
  }
  # Step 2: Add item
  todoItem_insert(data: { listId: $listId, text: $item })
}
```
# Schema Reference

## Contents
- [Defining Types](#defining-types)
- [Core Directives](#core-directives)
- [Relationships](#relationships)
- [Data Types](#data-types)
- [Enumerations](#enumerations)

---

## Defining Types

Types with `@table` map to PostgreSQL tables. Data Connect auto-generates an implicit `id: UUID!` primary key.

```graphql
type Movie @table {
  # id: UUID! is auto-added
  title: String!
  releaseYear: Int
  genre: String
}
```

### Customizing Tables

```graphql
type Movie @table(name: "movies", key: "id", singular: "movie", plural: "movies") {
  id: UUID! @col(name: "movie_id") @default(expr: "uuidV4()")
  title: String!
  releaseYear: Int @col(name: "release_year")
  genre: String @col(dataType: "varchar(20)")
}
```

### User Table with Auth

```graphql
type User @table(key: "uid") {
  uid: String! @default(expr: "auth.uid")
  email: String! @unique
  displayName: String @col(dataType: "varchar(100)")
  createdAt: Timestamp! @default(expr: "request.time")
}
```

---

## Core Directives

### @table
Defines a database table.

| Argument | Description |
|----------|-------------|
| `name` | PostgreSQL table name (snake_case default) |
| `key` | Primary key field(s), default `["id"]` |
| `singular` | Singular name for generated fields |
| `plural` | Plural name for generated fields |

### @col
Customizes column mapping.

| Argument | Description |
|----------|-------------|
| `name` | Column name in PostgreSQL |
| `dataType` | PostgreSQL type: `serial`, `varchar(n)`, `text`, etc. |
| `size` | Required for `Vector` type |

### @default
Sets the default value for inserts.

| Argument | Description |
|----------|-------------|
| `value` | Literal value: `@default(value: "draft")` |
| `expr` | CEL expression: `@default(expr: "uuidV4()")`, `@default(expr: "auth.uid")`, `@default(expr: "request.time")` |
| `sql` | Raw SQL: `@default(sql: "now()")` |

**Common expressions:**
- `uuidV4()` - Generate a UUID
- `auth.uid` - Current user's Firebase Auth UID
- `request.time` - Server timestamp
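All three default styles can coexist on one table; a minimal sketch (the `Ticket` type is illustrative):

```graphql
type Ticket @table {
  id: UUID! @default(expr: "uuidV4()")          # CEL expression
  status: String! @default(value: "draft")      # Literal value
  createdAt: Timestamp! @default(expr: "request.time")
  updatedAt: Timestamp @default(sql: "now()")   # Raw SQL default
}
```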
### @unique
Adds a unique constraint.

```graphql
type User @table {
  email: String! @unique
}

# Composite unique
type Review @table @unique(fields: ["movie", "user"]) {
  movie: Movie!
  user: User!
  rating: Int
}
```

### @index
Creates a database index for query performance.

```graphql
type Movie @table @index(fields: ["genre", "releaseYear"], order: [ASC, DESC]) {
  title: String! @index
  genre: String
  releaseYear: Int
}
```

| Argument | Description |
|----------|-------------|
| `fields` | Fields for composite index (on @table) |
| `order` | `[ASC]` or `[DESC]` for each field |
| `type` | `BTREE` (default), `GIN` (arrays), `HNSW`/`IVFFLAT` (vectors) |
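The `type` argument selects the index kind for non-scalar columns; a hedged sketch, assuming `type` is accepted on a field-level `@index` (the `Post` fields are illustrative):

```graphql
type Post @table {
  title: String!
  tags: [String] @index(type: GIN)  # GIN index speeds up array membership filters like includes
}
```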
### @searchable
Enables full-text search on String fields.

```graphql
type Post @table {
  title: String! @searchable
  body: String! @searchable(language: "english")
}

# Usage
query SearchPosts($q: String!) @auth(level: PUBLIC) {
  posts_search(query: $q) { id title body }
}
```

---

## Relationships

### One-to-Many (Implicit Foreign Key)

```graphql
type Post @table {
  id: UUID! @default(expr: "uuidV4()")
  author: User!  # Creates authorId foreign key
  title: String!
}

type User @table {
  id: UUID! @default(expr: "uuidV4()")
  name: String!
  # Auto-generated: posts_on_author: [Post!]!
}
```

### @ref Directive
Customizes the foreign key reference.

```graphql
type Post @table {
  author: User! @ref(fields: "authorId", references: "id")
  authorId: UUID!  # Explicit FK field
}
```

| Argument | Description |
|----------|-------------|
| `fields` | Local FK field name(s) |
| `references` | Target field(s) in referenced table |
| `constraintName` | PostgreSQL constraint name |

**Cascade behavior:**
- Required reference (`User!`): CASCADE DELETE (post deleted when user deleted)
- Optional reference (`User`): SET NULL (authorId set to null when user deleted)
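The two cascade behaviors follow directly from the reference field's nullability; a minimal sketch (the `Comment` type is illustrative):

```graphql
type Comment @table {
  post: Post!   # Required: deleting the Post also deletes this Comment
  editor: User  # Optional: deleting the User sets editorId to null
  body: String!
}
```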
### One-to-One

Use `@unique` on the reference field:

```graphql
type User @table { id: UUID! name: String! }

type UserProfile @table {
  user: User! @unique  # One profile per user
  bio: String
  avatarUrl: String
}

# Query: user.userProfile_on_user
```

### Many-to-Many

Use a join table with a composite primary key:

```graphql
type Movie @table { id: UUID! title: String! }
type Actor @table { id: UUID! name: String! }

type MovieActor @table(key: ["movie", "actor"]) {
  movie: Movie!
  actor: Actor!
  role: String!  # Extra data on the relationship
}

# Generated fields:
# - movie.actors_via_MovieActor: [Actor!]!
# - actor.movies_via_MovieActor: [Movie!]!
# - movie.movieActors_on_movie: [MovieActor!]!
```

---

## Data Types

| GraphQL Type | PostgreSQL Default | Other PostgreSQL Types |
|--------------|-------------------|----------------------|
| `String` | `text` | `varchar(n)`, `char(n)` |
| `Int` | `int4` | `int2`, `serial` |
| `Int64` | `bigint` | `bigserial`, `numeric` |
| `Float` | `float8` | `float4`, `numeric` |
| `Boolean` | `boolean` | |
| `UUID` | `uuid` | |
| `Date` | `date` | |
| `Timestamp` | `timestamptz` | Stored as UTC |
| `Any` | `jsonb` | |
| `Vector` | `vector` | Requires `@col(size: N)` |
| `[Type]` | Array | e.g., `[String]` → `text[]` |
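A schema sketch touching the less common types from the table above (the `Product` type and field names are illustrative):

```graphql
type Product @table {
  name: String! @col(dataType: "varchar(100)")
  price: Float
  attributes: Any                    # Stored as jsonb
  tags: [String]                     # Stored as text[]
  embedding: Vector @col(size: 768)  # vector column; size is required
}
```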
---

## Enumerations

```graphql
enum Status {
  DRAFT
  PUBLISHED
  ARCHIVED
}

type Post @table {
  status: Status! @default(value: DRAFT)
  allowedStatuses: [Status!]
}
```

**Rules:**
- Enum names: PascalCase, no underscores
- Enum values: UPPER_SNAKE_CASE
- Values are ordered (for comparison operations)
- Changing order or removing values is a breaking change
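Because enum values are ordered, ordered comparison operators should follow declaration order; a hedged sketch using the `Status` enum above (filter support for enums is assumed to mirror scalar filters):

```graphql
# DRAFT < PUBLISHED < ARCHIVED, per declaration order
query NonDraftPosts @auth(level: PUBLIC) {
  posts(where: { status: { gt: DRAFT }}) { id status }
}
```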
---

## Views (Advanced)

Map custom SQL queries to GraphQL types:

```graphql
type MovieStats @view(sql: """
  SELECT
    movie_id,
    COUNT(*) as review_count,
    AVG(rating) as avg_rating
  FROM review
  GROUP BY movie_id
""") {
  movie: Movie @unique
  reviewCount: Int
  avgRating: Float
}

# Query movies with stats
query TopMovies @auth(level: PUBLIC) {
  movies(orderBy: [{ rating: DESC }]) {
    title
    stats: movieStats_on_movie {
      reviewCount avgRating
    }
  }
}
```
# SDK Reference

## Contents
- [SDK Generation](#sdk-generation)
- [Web SDK](#web-sdk)
- [Android SDK](#android-sdk)
- [iOS SDK](#ios-sdk)
- [Admin SDK](#admin-sdk)

---

## SDK Generation

Configure SDK generation in `connector.yaml`:

```yaml
connectorId: my-connector
generate:
  javascriptSdk:
    outputDir: "../web-app/src/lib/dataconnect"
    package: "@movie-app/dataconnect"
  kotlinSdk:
    outputDir: "../android-app/app/src/main/kotlin/com/example/dataconnect"
    package: "com.example.dataconnect"
  swiftSdk:
    outputDir: "../ios-app/DataConnect"
```

Generate SDKs:
```bash
npx -y firebase-tools@latest dataconnect:sdk:generate
```

---

## Web SDK

### Installation

```bash
npm install firebase
```

### Initialization

```typescript
import { initializeApp } from 'firebase/app';
import { getDataConnect, connectDataConnectEmulator } from 'firebase/data-connect';
import { connectorConfig } from '@movie-app/dataconnect';

const app = initializeApp(firebaseConfig);
const dc = getDataConnect(app, connectorConfig);

// For local development
if (import.meta.env.DEV) {
  connectDataConnectEmulator(dc, 'localhost', 9399);
}
```

### Calling Operations

```typescript
// The generated SDK provides typed functions
import { listMovies, createMovie, getMovie } from '@movie-app/dataconnect';

// Query
const result = await listMovies();
console.log(result.data.movies);

// Query with variables
const movie = await getMovie({ id: 'uuid-here' });

// Accessing nested fields: relations are just properties on the result
const director = movie.data.movie.metadata.director;
const firstActor = movie.data.movie.actors[0].name;

// Mutation
const newMovie = await createMovie({
  title: 'New Movie',
  genre: 'Action'
});
console.log(newMovie.data.movie_insert); // Returns key
```

### Subscriptions

```typescript
import { listMoviesRef, subscribe } from '@movie-app/dataconnect';

const unsubscribe = subscribe(listMoviesRef(), {
  onNext: (result) => {
    console.log('Movies updated:', result.data.movies);
  },
  onError: (error) => {
    console.error('Subscription error:', error);
  }
});

// Later: unsubscribe();
```

### With Authentication

```typescript
import { getAuth, signInWithEmailAndPassword } from 'firebase/auth';
import { myReviews } from '@movie-app/dataconnect';

const auth = getAuth(app);
await signInWithEmailAndPassword(auth, email, password);

// The SDK automatically includes the auth token in requests
const reviews = await myReviews(); // @auth(level: USER) query from examples.md
```
## Android SDK

### Dependencies (build.gradle.kts)

```kotlin
dependencies {
    implementation(platform("com.google.firebase:firebase-bom:33.0.0"))
    implementation("com.google.firebase:firebase-dataconnect")
    implementation("org.jetbrains.kotlinx:kotlinx-coroutines-android:1.7.3")
    implementation("org.jetbrains.kotlinx:kotlinx-serialization-core:1.6.0")
}
```

### Initialization

```kotlin
import com.google.firebase.Firebase
import com.google.firebase.dataconnect.dataConnect
import com.example.dataconnect.MyConnector

val connector = MyConnector.instance

// For the emulator
connector.dataConnect.useEmulator("10.0.2.2", 9399)
```

### Calling Operations

```kotlin
// Query
val result = connector.listMovies.execute()
result.data.movies.forEach { movie ->
    println(movie.title)
    // Access nested fields directly
    println(movie.metadata?.director)
    println(movie.actors.firstOrNull()?.name)
}

// Query with variables
val movie = connector.getMovie.execute(id = "uuid-here")

// Mutation
val newMovie = connector.createMovie.execute(
    title = "New Movie",
    genre = "Action"
)
```

### Flow Subscription

```kotlin
connector.listMovies.flow().collect { result ->
    when (result) {
        is DataConnectResult.Success -> updateUI(result.data.movies)
        is DataConnectResult.Error -> showError(result.exception)
    }
}
```

---
## iOS SDK

### Dependencies (Package.swift or SPM)

```swift
dependencies: [
  .package(url: "https://github.com/firebase/firebase-ios-sdk.git", from: "11.0.0")
]
// Add FirebaseDataConnect to target dependencies
```

### Initialization

```swift
import FirebaseCore
import FirebaseDataConnect

FirebaseApp.configure()
let connector = MyConnector.shared

// For the emulator
connector.useEmulator(host: "localhost", port: 9399)
```

### Calling Operations

```swift
// Query
let result = try await connector.listMovies.execute()
for movie in result.data.movies {
    print(movie.title)
    // Access nested fields directly
    print(movie.metadata?.director ?? "Unknown")
    print(movie.actors.first?.name ?? "No actors")
}

// Query with variables
let movie = try await connector.getMovie.execute(id: "uuid-here")

// Mutation
let newMovie = try await connector.createMovie.execute(
    title: "New Movie",
    genre: "Action"
)
```

### Combine Publisher

```swift
connector.listMovies.publisher
    .sink(
        receiveCompletion: { completion in
            if case .failure(let error) = completion {
                print("Error: \(error)")
            }
        },
        receiveValue: { result in
            self.movies = result.data.movies
        }
    )
    .store(in: &cancellables)
```

---
## Admin SDK

Server-side operations with elevated privileges (bypasses `@auth`):

### Node.js

```typescript
import { initializeApp, cert } from 'firebase-admin/app';
import { getDataConnect } from 'firebase-admin/data-connect';

initializeApp({
  credential: cert(serviceAccount)
});

const dc = getDataConnect();

// Execute operations (bypasses @auth)
const result = await dc.executeGraphql({
  query: `query ListAllUsers { users { id email } }`,
  operationName: 'ListAllUsers'
});

// Or use the generated Admin SDK
import { listAllUsers } from './admin-connector';
const users = await listAllUsers();
```

### Generate Admin SDK

In `connector.yaml`:

```yaml
generate:
  nodeAdminSdk:
    outputDir: "./admin-sdk"
    package: "@app/admin-dataconnect"
```

Generate:
```bash
npx -y firebase-tools@latest dataconnect:sdk:generate
```
# Security Reference

## Contents
- [@auth Directive](#auth-directive)
- [Access Levels](#access-levels)
- [CEL Expressions](#cel-expressions)
- [@check and @redact](#check-and-redact)
- [Authorization Patterns](#authorization-patterns)
- [Anti-Patterns](#anti-patterns)

---

## @auth Directive

Every deployable query/mutation must have `@auth`. Without it, operations default to `NO_ACCESS`.

```graphql
query PublicData @auth(level: PUBLIC) { ... }
query UserData @auth(level: USER) { ... }
query AdminOnly @auth(expr: "auth.token.admin == true") { ... }
```

| Argument | Description |
|----------|-------------|
| `level` | Preset access level |
| `expr` | CEL expression (alternative to level) |
| `insecureReason` | Suppress the deploy warning for PUBLIC/unfiltered USER |
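The `insecureReason` argument documents why a broad access rule is intentional; a hedged sketch (the query name and message are illustrative):

```graphql
query BrowseCatalog
  @auth(level: PUBLIC, insecureReason: "Catalog data is intentionally public") {
  movies { id title }
}
```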
---
|
||||||
|
|
||||||
|
## Access Levels
|
||||||
|
|
||||||
|
| Level | Who Can Access | CEL Equivalent |
|
||||||
|
|-------|----------------|----------------|
|
||||||
|
| `PUBLIC` | Anyone, authenticated or not | `true` |
|
||||||
|
| `USER_ANON` | Any authenticated user (including anonymous) | `auth.uid != nil` |
|
||||||
|
| `USER` | Authenticated users (excludes anonymous) | `auth.uid != nil && auth.token.firebase.sign_in_provider != 'anonymous'` |
|
||||||
|
| `USER_EMAIL_VERIFIED` | Users with verified email | `auth.uid != nil && auth.token.email_verified` |
|
||||||
|
| `NO_ACCESS` | Admin SDK only | `false` |
|
||||||
|
|
||||||
|
> **Important:** Levels like `USER` are starting points. Always add filters or expressions to verify the user can access specific data.
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## CEL Expressions
|
||||||
|
|
||||||
|
### Available Bindings
|
||||||
|
|
||||||
|
| Binding | Description |
|
||||||
|
|---------|-------------|
|
||||||
|
| `auth.uid` | Current user's Firebase UID |
|
||||||
|
| `auth.token` | Auth token claims (see below) |
|
||||||
|
| `vars` | Operation variables (e.g., `vars.movieId`) |
|
||||||
|
| `request.time` | Server timestamp |
|
||||||
|
| `request.operationName` | "query" or "mutation" |
|
||||||
|
|
||||||
|
### auth.token Fields
|
||||||
|
|
||||||
|
| Field | Description |
|
||||||
|
|-------|-------------|
|
||||||
|
| `email` | User's email address |
|
||||||
|
| `email_verified` | Boolean: email verified |
|
||||||
|
| `phone_number` | User's phone |
|
||||||
|
| `name` | Display name |
|
||||||
|
| `sub` | Firebase UID (same as auth.uid) |
|
||||||
|
| `firebase.sign_in_provider` | `password`, `google.com`, `anonymous`, etc. |
|
||||||
|
| `<custom_claim>` | Custom claims set via Admin SDK |
|
||||||
|
|
||||||
|
### Expression Examples

```graphql
# Check custom claim
@auth(expr: "auth.token.role == 'admin'")

# Check verified email domain
@auth(expr: "auth.token.email_verified && auth.token.email.endsWith('@company.com')")

# Check multiple conditions
@auth(expr: "auth.uid != nil && (auth.token.role == 'editor' || auth.token.role == 'admin')")

# Check variable
@auth(expr: "has(vars.status) && vars.status in ['draft', 'published']")
```

### Using eq_expr in Filters

Compare database fields with auth values:

```graphql
query MyPosts @auth(level: USER) {
  posts(where: { authorUid: { eq_expr: "auth.uid" }}) {
    id title
  }
}

mutation UpdateMyPost($id: UUID!, $title: String!) @auth(level: USER) {
  post_update(
    first: { where: {
      id: { eq: $id },
      authorUid: { eq_expr: "auth.uid" }
    }},
    data: { title: $title }
  )
}
```

---

## @check and @redact

Use `@check` to validate data and `@redact` to hide results from the client:

### @check

Validates a field value; the operation aborts if the check fails.

```graphql
@check(expr: "this != null", message: "Not found")
@check(expr: "this == 'editor'", message: "Must be editor")
@check(expr: "this.exists(p, p.role == 'admin')", message: "No admin found")
```

| Argument | Description |
|----------|-------------|
| `expr` | CEL expression; `this` = current field value |
| `message` | Error message if the check fails |
| `optional` | If `true`, the check passes when the field is not present |

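As a sketch of the `optional` argument, a check can be made tolerant of an absent field (the field name and length rule here are illustrative, not from the reference above):

```graphql
# Passes when the field is absent; validated only when present
nickname @check(
  expr: "this.size() < 30",
  message: "Nickname too long",
  optional: true
)
```
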
### @redact

Hides a field from the response (it is still evaluated, so `@check` still runs):

```graphql
query @redact { ... } # Query result hidden but @check still runs
```

### Authorization Data Lookup

Check database permissions before allowing a mutation:

```graphql
mutation UpdateMovie($id: UUID!, $title: String!)
@auth(level: USER)
@transaction {
  # Step 1: Check user has permission
  query @redact {
    moviePermission(
      key: { movieId: $id, userId_expr: "auth.uid" }
    ) @check(expr: "this != null", message: "No access to movie") {
      role @check(expr: "this == 'editor'", message: "Must be editor")
    }
  }
  # Step 2: Update if authorized
  movie_update(id: $id, data: { title: $title })
}
```

### Validate Key Exists

```graphql
mutation MustDeleteMovie($id: UUID!) @auth(level: USER) @transaction {
  movie_delete(id: $id)
    @check(expr: "this != null", message: "Movie not found")
}
```

---

## Authorization Patterns

### User-Owned Resources

```graphql
# Create with owner
mutation CreatePost($content: String!) @auth(level: USER) {
  post_insert(data: {
    authorUid_expr: "auth.uid",
    content: $content
  })
}

# Read own data only
query MyPosts @auth(level: USER) {
  posts(where: { authorUid: { eq_expr: "auth.uid" }}) {
    id content
  }
}

# Update own data only
mutation UpdatePost($id: UUID!, $content: String!) @auth(level: USER) {
  post_update(
    first: { where: { id: { eq: $id }, authorUid: { eq_expr: "auth.uid" }}},
    data: { content: $content }
  )
}

# Delete own data only
mutation DeletePost($id: UUID!) @auth(level: USER) {
  post_delete(
    first: { where: { id: { eq: $id }, authorUid: { eq_expr: "auth.uid" }}}
  )
}
```

### Role-Based Access

```graphql
# Admin-only query
query AllUsers @auth(expr: "auth.token.admin == true") {
  users { id email name }
}

# Role from database
mutation AdminAction($id: UUID!) @auth(level: USER) @transaction {
  query @redact {
    user(key: { uid_expr: "auth.uid" }) {
      role @check(expr: "this == 'admin'", message: "Admin required")
    }
  }
  # ... admin action
}
```

### Public Data with Filters

```graphql
query PublicPosts @auth(level: PUBLIC) {
  posts(where: {
    visibility: { eq: "public" },
    publishedAt: { lt_expr: "request.time" }
  }) {
    id title content
  }
}
```

### Tiered Access (Pro Content)

```graphql
query ProContent @auth(expr: "auth.token.plan == 'pro'") {
  posts(where: { visibility: { in: ["public", "pro"] }}) {
    id title content
  }
}
```

---

## Anti-Patterns

### ❌ Don't Pass User ID as a Variable

```graphql
# BAD - any user can pass any userId
query GetUserPosts($userId: String!) @auth(level: USER) {
  posts(where: { authorUid: { eq: $userId }}) { ... }
}

# GOOD - use auth.uid
query GetMyPosts @auth(level: USER) {
  posts(where: { authorUid: { eq_expr: "auth.uid" }}) { ... }
}
```

### ❌ Don't Use USER Without Filters

```graphql
# BAD - any authenticated user sees all documents
query AllDocs @auth(level: USER) {
  documents { id title content }
}

# GOOD - filter to the user's documents
query MyDocs @auth(level: USER) {
  documents(where: { ownerId: { eq_expr: "auth.uid" }}) { ... }
}
```

### ❌ Don't Trust Unverified Email

```graphql
# BAD - email not verified
@auth(expr: "auth.token.email.endsWith('@company.com')")

# GOOD - verify the email first
@auth(expr: "auth.token.email_verified && auth.token.email.endsWith('@company.com')")
```

### ❌ Don't Use PUBLIC/USER for Prototyping

During development, set operations to `NO_ACCESS` until you implement proper authorization. Use the emulator and the VS Code extension for testing.
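For example, an operation that is not ready to ship can stay locked down until its authorization is designed (operation and fields are illustrative):

```graphql
# Locked down while prototyping; only the Admin SDK can call this
query AllOrders @auth(level: NO_ACCESS) {
  orders { id total }
}
```
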
@@ -0,0 +1,269 @@
# Templates

Ready-to-use templates for common Firebase Data Connect patterns.

---

## Basic CRUD Schema

```graphql
# schema.gql
type Item @table {
  id: UUID! @default(expr: "uuidV4()")
  name: String!
  description: String
  createdAt: Timestamp! @default(expr: "request.time")
  updatedAt: Timestamp! @default(expr: "request.time")
}
```

```graphql
# queries.gql
query ListItems @auth(level: PUBLIC) {
  items(orderBy: [{ createdAt: DESC }]) {
    id name description createdAt
  }
}

query GetItem($id: UUID!) @auth(level: PUBLIC) {
  item(id: $id) { id name description createdAt updatedAt }
}
```

```graphql
# mutations.gql
mutation CreateItem($name: String!, $description: String) @auth(level: USER) {
  item_insert(data: { name: $name, description: $description })
}

mutation UpdateItem($id: UUID!, $name: String, $description: String) @auth(level: USER) {
  item_update(id: $id, data: {
    name: $name,
    description: $description,
    updatedAt_expr: "request.time"
  })
}

mutation DeleteItem($id: UUID!) @auth(level: USER) {
  item_delete(id: $id)
}
```

---

## User-Owned Resources

```graphql
# schema.gql
type User @table(key: "uid") {
  uid: String! @default(expr: "auth.uid")
  email: String! @unique
  displayName: String
}

type Note @table {
  id: UUID! @default(expr: "uuidV4()")
  owner: User!
  title: String!
  content: String
  createdAt: Timestamp! @default(expr: "request.time")
}
```

```graphql
# queries.gql
query MyNotes @auth(level: USER) {
  notes(
    where: { owner: { uid: { eq_expr: "auth.uid" }}},
    orderBy: [{ createdAt: DESC }]
  ) { id title content createdAt }
}

query GetMyNote($id: UUID!) @auth(level: USER) {
  note(
    first: { where: {
      id: { eq: $id },
      owner: { uid: { eq_expr: "auth.uid" }}
    }}
  ) { id title content }
}
```

```graphql
# mutations.gql
mutation CreateNote($title: String!, $content: String) @auth(level: USER) {
  note_insert(data: {
    owner: { uid_expr: "auth.uid" },
    title: $title,
    content: $content
  })
}

mutation UpdateNote($id: UUID!, $title: String, $content: String) @auth(level: USER) {
  note_update(
    first: { where: { id: { eq: $id }, owner: { uid: { eq_expr: "auth.uid" }}}},
    data: { title: $title, content: $content }
  )
}

mutation DeleteNote($id: UUID!) @auth(level: USER) {
  note_delete(
    first: { where: { id: { eq: $id }, owner: { uid: { eq_expr: "auth.uid" }}}}
  )
}
```

---

## Many-to-Many Relationship

```graphql
# schema.gql
type Tag @table {
  id: UUID! @default(expr: "uuidV4()")
  name: String! @unique
}

type Article @table {
  id: UUID! @default(expr: "uuidV4()")
  title: String!
  content: String!
}

type ArticleTag @table(key: ["article", "tag"]) {
  article: Article!
  tag: Tag!
}
```

```graphql
# queries.gql
query ArticlesByTag($tagName: String!) @auth(level: PUBLIC) {
  articles(where: {
    articleTags_on_article: { tag: { name: { eq: $tagName }}}
  }) {
    id title
    tags: tags_via_ArticleTag { name }
  }
}

query ArticleWithTags($id: UUID!) @auth(level: PUBLIC) {
  article(id: $id) {
    id title content
    tags: tags_via_ArticleTag { id name }
  }
}
```

```graphql
# mutations.gql
mutation AddTagToArticle($articleId: UUID!, $tagId: UUID!) @auth(level: USER) {
  articleTag_insert(data: {
    article: { id: $articleId },
    tag: { id: $tagId }
  })
}

mutation RemoveTagFromArticle($articleId: UUID!, $tagId: UUID!) @auth(level: USER) {
  articleTag_delete(key: { articleId: $articleId, tagId: $tagId })
}
```

---

## dataconnect.yaml Template

```yaml
specVersion: "v1"
serviceId: "my-service"
location: "us-central1"
schema:
  source: "./schema"
  datasource:
    postgresql:
      database: "fdcdb"
      cloudSql:
        instanceId: "my-instance"
connectorDirs: ["./connector"]
```

---

## connector.yaml Template

```yaml
connectorId: "default"
generate:
  javascriptSdk:
    outputDir: "../web/src/lib/dataconnect"
    package: "@myapp/dataconnect"
  kotlinSdk:
    outputDir: "../android/app/src/main/kotlin/com/myapp/dataconnect"
    package: "com.myapp.dataconnect"
  swiftSdk:
    outputDir: "../ios/MyApp/DataConnect"
  dartSdk:
    outputDir: "../flutter/lib/dataconnect"
    package: myapp_dataconnect
```

---

## Firebase Init Commands

```bash
# Initialize Data Connect in project
npx -y firebase-tools@latest init dataconnect

# Initialize with specific project
npx -y firebase-tools@latest use <project-id>
npx -y firebase-tools@latest init dataconnect

# Start emulator for development
npx -y firebase-tools@latest emulators:start --only dataconnect

# Generate SDKs
npx -y firebase-tools@latest dataconnect:sdk:generate

# Deploy to production
npx -y firebase-tools@latest deploy --only dataconnect
```

---

## SDK Initialization (Web)

```typescript
// lib/firebase.ts
import { initializeApp } from 'firebase/app';
import { getAuth } from 'firebase/auth';
import { getDataConnect, connectDataConnectEmulator } from 'firebase/data-connect';
import { connectorConfig } from '@myapp/dataconnect';

const firebaseConfig = {
  apiKey: "...",
  authDomain: "...",
  projectId: "...",
};

export const app = initializeApp(firebaseConfig);
export const auth = getAuth(app);
export const dataConnect = getDataConnect(app, connectorConfig);

// Connect to emulator in development
if (import.meta.env.DEV) {
  connectDataConnectEmulator(dataConnect, 'localhost', 9399);
}
```

```typescript
// Example usage
import { listItems, createItem } from '@myapp/dataconnect';

// List items
const { data } = await listItems();
console.log(data.items);

// Create item (requires auth)
await createItem({ name: 'New Item', description: 'Description' });
```

@@ -0,0 +1,31 @@
---
name: firebase-firestore-enterprise-native-mode
description: Comprehensive guide for Firestore Enterprise native including provisioning, data model, security rules, and SDK usage. Use this skill when the user needs help setting up Firestore Enterprise with the Native mode, writing security rules, or using the Firestore SDK in their application.
compatibility: This skill is best used with the Firebase CLI, but does not require it. Firebase CLI can be accessed through `npx -y firebase-tools@latest`.
---

# Firestore Enterprise Native Mode

This skill provides a complete guide for getting started with Firestore Enterprise Native Mode, including provisioning, data model, security rules, and SDK usage.

## Provisioning

To set up Firestore Enterprise Native Mode in your Firebase project and local environment, see [provisioning.md](references/provisioning.md).

## Data Model

To learn about the Firestore data model and how to organize your data, see [data_model.md](references/data_model.md).

## Security Rules

For guidance on writing and deploying Firestore Security Rules to protect your data, see [security_rules.md](references/security_rules.md).

## SDK Usage

To learn how to use Firestore Enterprise Native Mode in your application code, see:
- [Web SDK Usage](references/web_sdk_usage.md)
- [Python SDK Usage](references/python_sdk_usage.md)

## Indexes

Indexes help improve query performance and speed up slow queries. For checking index types, query support tables, and best practices, see [indexes.md](references/indexes.md).

@@ -0,0 +1,54 @@
# Firestore Data Model Reference

Firestore is a NoSQL, document-oriented database. Unlike a SQL database, there are no tables or rows. Instead, you store data in **documents**, which are organized into **collections**.

## Document Data Model

Data in Firestore is organized into documents, collections, and subcollections.

### Documents

A **document** is a lightweight record that contains fields, which map to values. Each document is identified by a name. A document can contain complex nested objects in addition to basic data types like strings, numbers, and booleans. Documents are limited to a maximum size of 1 MiB.

Example document (e.g., in a `users` collection):

```json
{
  "first": "Ada",
  "last": "Lovelace",
  "born": 1815
}
```

### Collections

Documents live in **collections**, which are containers for your documents. For example, you could have a `users` collection to contain your various users, each represented by a document.

* Collections can only contain documents. They cannot directly contain raw fields with values, and they cannot contain other collections.
* Documents within a collection can contain different fields.
* You don't need to "create" or "delete" collections explicitly. After you create the first document in a collection, the collection exists. If you delete all of the documents in a collection, the collection no longer exists.

### Subcollections

Documents can contain subcollections natively. A subcollection is a collection associated with a specific document.

For example, a user document in the `users` collection could have a `messages` subcollection containing message documents exclusively for that user. This creates a powerful hierarchical data structure.

Data path example: `users/user1/messages/message1`

## Collection Group Support

A **collection group** consists of all collections with the same ID. By default, queries retrieve results from a single collection in your database. Use a collection group query to retrieve documents from a collection group instead of from a single collection.

### Use Cases

Collection group queries are useful when you want to query across multiple subcollections that share the same organizational structure.

For example, imagine an app with a `landmarks` collection where each landmark has a `reviews` subcollection. Finding all 5-star reviews across *all* landmarks would otherwise involve checking many separate `reviews` subcollections. With a collection group, you can perform a single query against the `reviews` collection group.

### Examples

**Standard Query** (Single Collection):
Find all 5-star reviews for a specific landmark.
```javascript
db.collection('landmarks/golden_gate_bridge/reviews').where('rating', '==', 5)
```

**Collection Group Query**:
Find all 5-star reviews across *all* landmarks.
```javascript
db.collectionGroup('reviews').where('rating', '==', 5)
```

@@ -0,0 +1,111 @@
# Firestore Indexes Reference

Indexes help improve query performance. Firestore Enterprise edition does not create any indexes by default; instead, it performs a full collection scan to find documents that match a query, which can be slow and expensive for large collections. To avoid this, you can create indexes to optimize your queries.

## Index Structure

An index consists of the following:

* a collection ID.
* a list of fields in the given collection.
* an order, either ascending or descending, for each field.

### Index Ordering

The order and sort direction of each field uniquely defines the index. For example, the following are two distinct indexes and are not interchangeable:

* Field `name` (ascending) and `population` (descending)
* Field `name` (descending) and `population` (ascending)

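To see why the two orderings above are distinct, a plain-Python sketch (the sample records are illustrative) sorts the same data both ways and produces different entry sequences:

```python
# Illustrative records; a composite index stores its entries in one fixed order
cities = [
    {"name": "Tokyo", "population": 9_700_000},
    {"name": "Osaka", "population": 2_700_000},
    {"name": "Osaka", "population": 19_000_000},  # hypothetical duplicate name
]

# Index 1: name ascending, population descending
idx_a = sorted(cities, key=lambda c: (c["name"], -c["population"]))

# Index 2: name descending, population ascending (two stable sorts)
idx_b = sorted(cities, key=lambda c: c["population"])
idx_b = sorted(idx_b, key=lambda c: c["name"], reverse=True)

print([c["name"] for c in idx_a])  # ['Osaka', 'Osaka', 'Tokyo']
print([c["name"] for c in idx_b])  # ['Tokyo', 'Osaka', 'Osaka']
```

Within the `Osaka` entries, the first ordering lists the larger population first and the second lists it last, so neither index can serve a query that needs the other's order.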
### Index Density

**Dense indexes:** By default, Firestore indexes store data from all documents in a collection. An index entry is added for a document regardless of whether the document contains any of the fields specified in the index. Non-existent fields are treated as having a NULL value when generating index entries.

**Sparse indexes:** To change this behavior, you can define the index as a sparse index. A sparse index indexes only the documents in the collection that contain a value (including null) for at least one of the indexed fields. A sparse index reduces storage costs and can improve performance.

### Unique Indexes

You can use the unique index option to enforce unique values for the indexed fields. For indexes on multiple fields, each combination of values must be unique across the index. The database rejects any update and insert operations that attempt to create index entries with duplicate values.

## Query Support Examples

| Query Type | Index Required |
| :--- | :--- |
| **Simple Equality**<br>`where("a", "==", 1)` | Single-Field Index on field `a` |
| **Simple Range/Sort**<br>`where("a", ">", 1).orderBy("a")` | Single-Field Index on field `a` |
| **Multiple Equality**<br>`where("a", "==", 1).where("b", "==", 2)` | Single-Field Indexes on fields `a` and `b` |
| **Equality + Range/Sort**<br>`where("a", "==", 1).where("b", ">", 2)` | **Composite Index** on fields `a` and `b` |
| **Multiple Ranges**<br>`where("a", ">", 1).where("b", ">", 2)` | **Composite Index** on fields `a` and `b` |
| **Array Contains + Equality**<br>`where("tags", "array-contains", "news").where("active", "==", true)` | **Composite Index** on fields `tags` and `active` |

If no suitable index is present, Firestore Enterprise performs a full collection scan to find documents that match the query.

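Why equality-plus-range wants a composite index can be sketched in plain Python (the data and "index" are illustrative): with entries sorted by `(a, b)`, all documents matching `a == 1 AND b > 2` sit in one contiguous slice that binary search can locate without scanning everything.

```python
import bisect

# An illustrative composite "index": entries kept sorted by (a, b)
entries = sorted([(1, 1), (1, 3), (1, 5), (2, 2), (2, 9), (3, 4)])

# Query: where a == 1 and b > 2
lo = bisect.bisect_right(entries, (1, 2))   # first entry with a == 1 and b > 2
hi = bisect.bisect_left(entries, (2,))      # first entry with a > 1
matches = entries[lo:hi]
print(matches)  # [(1, 3), (1, 5)]
```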
## Management

### Config files

Your indexes should be defined in `firestore.indexes.json` (pointed to by `firebase.json`).

Define a dense index:

```json
{
  "indexes": [
    {
      "collectionGroup": "cities",
      "queryScope": "COLLECTION",
      "density": "DENSE",
      "fields": [
        { "fieldPath": "country", "order": "ASCENDING" },
        { "fieldPath": "population", "order": "DESCENDING" }
      ]
    }
  ],
  "fieldOverrides": []
}
```

Define a sparse-any index:

```json
{
  "indexes": [
    {
      "collectionGroup": "cities",
      "queryScope": "COLLECTION",
      "density": "SPARSE_ANY",
      "fields": [
        { "fieldPath": "country", "order": "ASCENDING" },
        { "fieldPath": "population", "order": "DESCENDING" }
      ]
    }
  ],
  "fieldOverrides": []
}
```

Define a unique index:

```json
{
  "indexes": [
    {
      "collectionGroup": "cities",
      "queryScope": "COLLECTION",
      "density": "SPARSE_ANY",
      "unique": true,
      "fields": [
        { "fieldPath": "country", "order": "ASCENDING" },
        { "fieldPath": "population", "order": "DESCENDING" }
      ]
    }
  ],
  "fieldOverrides": []
}
```

### CLI Commands

Deploy indexes only:
```bash
npx -y firebase-tools@latest deploy --only firestore:indexes
```

@@ -0,0 +1,101 @@
# Provisioning Firestore Enterprise Native Mode

## Manual Initialization

Initialize the following Firebase configuration files manually. Do not use `npx -y firebase-tools@latest init`, as it expects interactive inputs.

1. **Create a Firestore Enterprise Database**: Create a Firestore Enterprise database using the Firebase CLI.
2. **Create `firebase.json`**: This file contains database configuration for the Firebase CLI.
3. **Create `firestore.rules`**: This file contains your security rules.
4. **Create `firestore.indexes.json`**: This file contains your index definitions.

### 1. Create a Firestore Enterprise Database

Use the following command to create a Firestore Enterprise database:

```bash
firebase firestore:databases:create my-database-id \
  --location="nam5" \
  --edition="enterprise" \
  --firestore-data-access="ENABLED" \
  --mongodb-compatible-data-access="DISABLED"
```

This creates an enterprise database in `nam5` with native mode enabled. A database ID is required to create an enterprise database, and the ID must not be `(default)`. To enable the realtime updates feature, add the `--realtime-updates` flag:

```bash
firebase firestore:databases:create my-database-id \
  --location="nam5" \
  --edition="enterprise" \
  --firestore-data-access="ENABLED" \
  --mongodb-compatible-data-access="DISABLED" \
  --realtime-updates="ENABLED"
```

### 2. Create `firebase.json`

Create a file named `firebase.json` in your project root with the following content. If this file already exists, append to the existing JSON instead:

```json
{
  "firestore": {
    "rules": "firestore.rules",
    "indexes": "firestore.indexes.json",
    "edition": "enterprise",
    "database": "my-database-id",
    "location": "nam5"
  }
}
```

### 3. Create `firestore.rules`

Create a file named `firestore.rules`. A good starting point (locking down the database) is:

```
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    match /{document=**} {
      allow read, write: if false;
    }
  }
}
```

*See [security_rules.md](security_rules.md) for how to write actual rules.*

### 4. Create `firestore.indexes.json`

Create a file named `firestore.indexes.json` with an empty configuration to start:

```json
{
  "indexes": [],
  "fieldOverrides": []
}
```

*See [indexes.md](indexes.md) for how to configure indexes.*

## Deploy Rules and Indexes

```bash
# To deploy all rules and indexes
firebase deploy --only firestore

# To deploy just rules
firebase deploy --only firestore:rules

# To deploy just indexes
firebase deploy --only firestore:indexes
```

## Local Emulation

To run Firestore locally for development and testing:

```bash
firebase emulators:start --only firestore
```

This starts the Firestore emulator, typically on port 8080. You can interact with it using the Emulator UI (usually at http://localhost:4000/firestore).

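To point server-side client libraries at the running emulator instead of production, the standard environment variable can be set before starting your app (the host and port assume the default emulator address noted above):

```shell
# Route Firestore client libraries to the local emulator
export FIRESTORE_EMULATOR_HOST="localhost:8080"

# Any process started from this shell now talks to the emulator
echo "Firestore emulator at: $FIRESTORE_EMULATOR_HOST"
```

Unset the variable (`unset FIRESTORE_EMULATOR_HOST`) to talk to the real database again.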
@@ -0,0 +1,126 @@
# Python SDK Usage

The Python server SDK is used for backend/server environments and uses Google Application Default Credentials in most Google Cloud environments.

### Writing Data

#### Set a Document

Creates a document if it does not exist, or overwrites it if it does. You can also specify a merge option to update only the provided fields.

```python
from google.cloud import firestore

# Client uses Application Default Credentials; pass your database ID
db = firestore.Client(database="my-database-id")

city_ref = db.collection("cities").document("LA")

# Create/Overwrite
city_ref.set({
    "name": "Los Angeles",
    "state": "CA",
    "country": "USA"
})

# Merge
city_ref.set({"population": 3900000}, merge=True)
```

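Conceptually, `merge=True` behaves like a dictionary update of the stored fields rather than a replacement; a plain-Python sketch of the top-level behavior (dictionaries stand in for documents; nested maps are merged by field path in the real SDK):

```python
stored = {"name": "Los Angeles", "state": "CA", "country": "USA"}

# set(...) without merge: the document is replaced wholesale
replaced = {"population": 3900000}

# set(..., merge=True): only the provided fields change
merged = {**stored, "population": 3900000}

print(replaced)  # {'population': 3900000}
print(merged)    # name/state/country preserved, population added
```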
#### Add a Document with Auto-ID

Use when you don't care about the document ID and want Firestore to automatically generate one.

```python
update_time, city_ref = db.collection("cities").add({
    "name": "Tokyo",
    "country": "Japan"
})
print("Document written with ID: ", city_ref.id)
```

#### Update a Document

Update some fields of an existing document without overwriting the entire document. Fails if the document doesn't exist.

```python
city_ref = db.collection("cities").document("LA")
city_ref.update({
    "capital": True
})
```

#### Transactions

Perform an atomic read-modify-write operation.

```python
from google.cloud import firestore

transaction = db.transaction()
city_ref = db.collection("cities").document("SF")

@firestore.transactional
def update_in_transaction(transaction, city_ref):
    snapshot = city_ref.get(transaction=transaction)
    if not snapshot.exists:
        raise Exception("Document does not exist!")

    new_population = snapshot.get("population") + 1
    transaction.update(city_ref, {"population": new_population})

update_in_transaction(transaction, city_ref)
```

### Reading Data
|
||||||
|
|
||||||
|
#### Get a Single Document
|
||||||
|
|
||||||
|
```python
|
||||||
|
doc_ref = db.collection("cities").document("SF")
|
||||||
|
doc = doc_ref.get()
|
||||||
|
|
||||||
|
if doc.exists:
|
||||||
|
print(f"Document data: {doc.to_dict()}")
|
||||||
|
else:
|
||||||
|
print("No such document!")
|
||||||
|
```

#### Get Multiple Documents
Fetches all documents in a query or collection once.

```python
docs = db.collection("cities").stream()

for doc in docs:
    print(f"{doc.id} => {doc.to_dict()}")
```

### Queries

#### Simple and Compound Queries
Use `.where()` to combine filters safely. Stack `.where()` calls for compound queries.

```python
from google.cloud.firestore import FieldFilter

cities_ref = db.collection("cities")

# Simple equality
query_1 = cities_ref.where(filter=FieldFilter("state", "==", "CA"))

# Compound (AND)
query_2 = cities_ref.where(
    filter=FieldFilter("state", "==", "CA")
).where(
    filter=FieldFilter("population", ">", 1000000)
)
```

#### Order and Limit
Sort and limit results cleanly.

```python
query = cities_ref.order_by("name").limit(3)
```

#### Pipeline Queries

You can use pipeline queries to perform complex queries.

```python
pipeline = client.pipeline().collection("users")
for result in pipeline.execute():
    print(f"{result.id} => {result.data()}")
```

## 1. Generate Firestore Rules

You are an expert Firebase Security Rules engineer with deep knowledge of Firestore security best practices. Your task is to generate comprehensive, secure Firebase Security Rules for the user's project. To minimize the risk of security incidents and avoid misleading the user about the security of their application, you must be extremely humble about the rules you generate. Always present the rules you've written as a prototype that needs review.

After generating the rules, you MUST explicitly communicate to the user exactly like this:

"I've set up prototype Security Rules to keep the data in Firestore safe. They are designed to be secure for <explain reasons here>. However, you should review and verify them before broadly sharing your app. If you'd like, I can help you harden these rules."

### Workflow

Follow this structured workflow strictly:

#### Phase-1: Codebase Analysis
1. **Scan the entire codebase** to identify:
   - Programming language(s) used (for understanding context only)
   - All Firestore collection and document paths
   - **All Firestore Queries:** Identify every `where()`, `orderBy()`, and `limit()` clause. The security rules **MUST** allow these specific queries.
   - Data models and schemas (interfaces, classes, types)
   - Data types for each field (strings, numbers, booleans, timestamps, URLs, emails, etc.)
   - Required vs. optional fields
   - Field constraints (min/max length, format patterns, allowed values)
   - CRUD operations (create, read, update, delete)
   - Authentication patterns (Firebase Auth, custom tokens, anonymous)
   - Access patterns and business logic rules
2. **Document your findings** in an untracked file. Refer to this file when generating the security rules.

#### Phase-2: Security Rules Generation

**CRITICAL**: Follow these principles **every time you modify the security rules file**.

Generate Firebase Security Rules following these principles:

- **Default deny:** Start by denying all access, then explicitly allow only what's needed
- **Least privilege:** Grant the minimum permissions required
- **Validate data:** Check data types, allowed fields, and constraints on both creates and updates
- **MANDATORY:** You **MUST** use the **Validator Function Pattern** described in the "Critical Directives" section below. This involves defining a specific validation function (e.g., `isValidUser`) and calling it in **BOTH** `create` and `update` rules.
- **MANDATORY:** For **ALL** creates **AND ALL** updates, ensure that after the operation, the required fields are still present and the data is valid
- **Authentication checks:** Verify user identity before granting access
- **Authorization logic:** Implement role-based or ownership-based access control
- **UID Protection:** Prevent users from changing ownership of data
- **Initially restricted:** Never make any collection or data publicly readable; always require authentication for any access to data unless the user makes an *explicit* request for unauthenticated data

This means the first `firestore.rules` file you generate must never contain an `allow read: if true;` statement.
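
The default-deny posture can be sketched as a minimal top-level skeleton (collection names and validator functions here are placeholders for whatever Phase-1 discovers):

```javascript
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    // Firestore denies any request that no rule matches, so simply
    // omitting a match block for a path locks it down completely.
    // Explicitly allow only specific, validated, authenticated paths:
    match /users/{userId} {
      allow read: if isOwner(userId);
      allow create, update: if isOwner(userId) && isValidUser(request.resource.data);
    }
    // Everything not matched above is denied by default.
  }
}
```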

**Structure Requirements:**

1. **Document assumed data models at the beginning of the rules file:**
```javascript
// ===============================================================
// Assumed Data Model
// ===============================================================
//
// This security rules file assumes the following data structures:
//
// Collection: [name]
//   Document ID: [pattern]
//   Fields:
//     - field1: type (required/optional, constraints) - description
//     - field2: type (required/optional, constraints) - description
//     [List all fields with types, constraints, and whether immutable]
//
// [Repeat for all collections]
//
// ===============================================================
```

2. **Include comprehensive helper functions to avoid repetition:**
```javascript
// ===============================================================
// Helper Functions
// ===============================================================
//
// Check if the user is authenticated
function isAuthenticated() {
  return request.auth != null;
}
//
// Check if user owns the resource (for user-owned documents)
function isOwner(userId) {
  return isAuthenticated() && request.auth.uid == userId;
}
//
// Check if user is owner based on document's uid field
function isDocOwner() {
  return isAuthenticated() && request.auth.uid == resource.data.uid;
}
//
// Verify UID hasn't been tampered with on create
function uidUnchanged() {
  return !('uid' in request.resource.data) ||
         request.resource.data.uid == request.auth.uid;
}
//
// Ensure uid field is not modified on update
function uidNotModified() {
  return !('uid' in request.resource.data) ||
         request.resource.data.uid == resource.data.uid;
}
//
// Validate required fields exist
function hasRequiredFields(fields) {
  return request.resource.data.keys().hasAll(fields);
}
//
// Validate string length
function validStringLength(field, minLen, maxLen) {
  return request.resource.data[field] is string &&
         request.resource.data[field].size() >= minLen &&
         request.resource.data[field].size() <= maxLen;
}
//
// Validate URL format (must start with https:// or http://)
function isValidUrl(url) {
  return url is string &&
         (url.matches("^https://.*") || url.matches("^http://.*"));
}
//
// Validate email format
function isValidEmail(email) {
  return email is string &&
         email.matches("^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\\.[a-zA-Z]{2,}$");
}
//
// Validate ISO 8601 date string format (YYYY-MM-DDTHH:MM:SS)
// CRITICAL: This validates format ONLY, not logical date values (e.g., month 13).
// Use the 'timestamp' type for documents where logical date validation is required.
function isValidDateString(dateStr) {
  return dateStr is string &&
         dateStr.matches("^\\d{4}-\\d{2}-\\d{2}T\\d{2}:\\d{2}:\\d{2}.*Z?$");
}
//
// Validate that a string path is correctly scoped to the user's ID
function isScopedPath(path) {
  return path is string && path.matches("^users/" + request.auth.uid + "/.*");
}
//
// Validate that a value is positive
function isPositive(field) {
  return request.resource.data[field] is number && request.resource.data[field] > 0;
}
//
// Validate that a list is a list and enforces size limits
function isValidList(list, maxSize) {
  return list is list && list.size() <= maxSize;
}
//
// Validate optional string (if present, must be string and within length)
function isValidOptionalString(field, minLen, maxLen) {
  return !(field in request.resource.data) ||
         (request.resource.data[field] is string &&
          request.resource.data[field].size() >= minLen &&
          request.resource.data[field].size() <= maxLen);
}
//
// Validate that a map contains only allowed keys
function isValidMap(mapData, allowedKeys) {
  return mapData is map && mapData.keys().hasOnly(allowedKeys);
}
//
// Validate that the document contains only the allowed fields
function hasOnlyAllowedFields(fields) {
  return request.resource.data.keys().hasOnly(fields);
}
//
// Validate that the document hasn't changed in fields that are not allowed to change
function areImmutableFieldsUnchanged(fields) {
  return !request.resource.data.diff(resource.data).affectedKeys().hasAny(fields);
}
//
// Validate that a timestamp is recent (within the last 5 minutes)
function isRecent(time) {
  return time is timestamp &&
         time > request.time - duration.value(5, 'm') &&
         time <= request.time;
}
//
// [Add more helper functions as needed for data validation, like the example below]
//
// ===============================================================
//
// Domain Validators (CRITICAL: Use these in both create and update)
//
// function isValidUser(data) {
//   // Only allow admin to create admin roles
//   return hasOnlyAllowedFields(['name', 'email', 'age', 'role']) &&
//          data.name is string && data.name.size() > 0 && data.name.size() < 50 &&
//          data.email is string && isValidEmail(data.email) &&
//          data.age is number && data.age >= 18 &&
//          data.role in ['admin', 'user', 'guest'];
// }
```

#### Mandatory: User Data Separation (The "No Mixed Content" Rule)

- Firestore security rules apply to the entire document. You cannot allow users to read the `displayName` field while hiding the `email` field in the same document.
- If a collection (e.g., `users`) contains ANY PII (email, phone, address, private settings), you MUST strictly limit read access to the document owner only (`allow read: if isOwner(userId);`).
- If the application requires public profiles (e.g., showing user names/avatars on posts):
  1. **Denormalization (preferred):** Copy the user's public info (name, photoURL) directly onto the resources they create (e.g., store `authorName` and `authorPhoto` inside the `posts` document).
  2. **Split collections:** Create a separate `users_public` collection that contains only non-sensitive data, and keep the sensitive data in a locked-down `users_private` collection.
- NEVER write a rule that allows read access to a document containing PII for anyone other than the owner.
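
The split-collection option can be sketched like this (collection names follow the pattern described; `isValidPublicProfile` and `isValidPrivateProfile` are hypothetical domain validators you would define for the project's schema):

```javascript
// Public, non-sensitive profile data: readable by any signed-in user
match /users_public/{userId} {
  allow read: if isAuthenticated();
  allow create, update: if isOwner(userId) && isValidPublicProfile(request.resource.data);
}

// Private PII: readable and writable by the owner only
match /users_private/{userId} {
  allow read: if isOwner(userId);
  allow create, update: if isOwner(userId) && isValidPrivateProfile(request.resource.data);
}
```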

#### **CRITICAL** RBAC Guidelines
This is one of the most important sets of instructions to follow. Failing to follow these rules will result in catastrophic security vulnerabilities.

- **NEVER** allow users to create their own privileged roles. No user should be able to create an item in the database with their role set to something like "admin" unless they are already a bootstrapped admin.
- **NEVER** allow users to update their own roles or permissions.
- **NEVER** allow users to grant themselves access to other users' data.
- **NEVER** allow users to bypass the role hierarchy.
- **ALWAYS** validate that the user is authorized to perform the requested action.
- **ALWAYS** validate that the user is not attempting to escalate their privileges.
- **ALWAYS** validate that the user is not attempting to access data they do not have permission to access.
Here's a **bad** example of what **NOT** to do:

```javascript
match /users/{userId} {
  // BAD: Allows users to grant themselves roles. A user can create a new user
  // document with a role of 'admin', and isAdmin() will then return true.
  allow create: if (isOwner(userId) && isValidUser(request.resource.data)) || isAdmin();
  // BAD: Allows users to update their own roles. A user can update their own user
  // document with a role of 'admin', and isAdmin() will then return true.
  allow update: if (isOwner(userId) && isValidUser(request.resource.data)) || isAdmin();
}
```

Here's a **good** example of what **TO** do:

```javascript
match /users/{userId} {
  // GOOD: Users cannot create privileged roles; they can only create their own
  // document with the unprivileged 'client' role, unless they are already an admin.
  allow create: if isAuthenticated() && isValidUser(request.resource.data) &&
                   ((isOwner(userId) && request.resource.data.role == 'client') || isAdmin());
  // GOOD: Users cannot change their own role; only an admin can.
  allow update: if isAuthenticated() && isValidUser(request.resource.data) &&
                   ((isOwner(userId) && request.resource.data.role == resource.data.role) || isAdmin());
}
```

#### Critical Directives for Secure Generation

- **PREFER USING `read` OVER `list` OR `get`:** `list` and `get` can add complexity to security rules. Prefer using `read` over them.
- **Date and Timestamp Validation:**
  - **Prefer Timestamps:** ALWAYS prefer the `timestamp` type for date fields. Firestore automatically ensures they are logically valid dates.
  - **String Date Risks:** If using strings for dates (e.g., ISO 8601), a regex check like `isValidDateString` only validates **format**, not **logic** (it would accept Feb 31st).
  - **Regex Escaping:** When matching digits with a regex, you **MUST** use double backslashes (e.g., `\\d`) in the rules string. Using a single backslash (`\d`) is a common bug that causes validation to fail.
- **Immutable Fields:** Fields like `createdAt`, `authorUID`, or any other field that should not change after creation must be explicitly protected in `update` rules (e.g., `request.resource.data.createdAt == resource.data.createdAt`). **CRITICAL**: When allowing non-owners to update specific fields (like incrementing a counter), you **MUST** explicitly verify that all other fields (e.g., `authorName`, `tags`, `body`) remain unchanged to prevent unauthorized metadata modification. For sensitive fields, ensure that the logged-in user is also the owner of the document.
- **Identity Integrity:** When storing denormalized user identity (e.g., `authorName`, `authorPhoto`), you **MUST** validate this data.
  - **Prefer Auth Token:** If possible, check that `request.resource.data.authorName == request.auth.token.name`.
  - **Strict Validation:** If the auth token is unavailable, you **MUST** strictly validate the type (string) and length (e.g., < 50 chars) to prevent spoofing with massive or malicious payloads.
  - **Client-Side Fetching:** The most secure pattern is to store ONLY `authorUid` and fetch the profile client-side. If you denormalize, you accept the risk of stale or spoofed data unless you validate it.
- **Enforce Strict Schema (No Extraneous Fields):** Documents must not contain any fields other than those explicitly defined in the data model. This prevents users from adding arbitrary data.
- **NEVER ALLOW PII LEAKS:** Never expose PII (Personally Identifiable Information) through the data model. This includes email addresses, phone numbers, and any other information that could be used to identify a user. For example, even a logged-in user must not be able to read another user's information.
- **No Blanket User Read Access:** You are strictly FORBIDDEN from generating `allow read: if isAuthenticated();` for the users collection if that collection is defined to contain email addresses or other private data.
- **CRITICAL: Double-Check Blanket `isAuthenticated()` Rules:** Ensure that paths protected only by `isAuthenticated()` do not need additional checks based on role or any other condition.
- **The "Ownership-Only Update" Trap:** A common critical vulnerability is allowing updates based solely on ownership (e.g., `allow update: if isOwner(resource.data.uid);`). This lets the owner corrupt the data schema, delete required fields, or inject malicious payloads. You **MUST** always combine ownership checks with data validation (e.g., `allow update: if isOwner(...) && isValidEntity(...);`) **AND** validate that self-escalation is not possible.
- **Deep Array Inspection:** It is insufficient to check that a field `is list`. You **MUST** validate the contents of the array (e.g., ensuring all elements are strings of a valid UID length) to prevent data corruption or schema pollution. For example, a `tags` array must verify that every item is a string AND that each string is within a reasonable length (e.g., < 20 chars).
- **Permission-Field Lockdown:** Fields that control access (e.g., `editors`, `viewers`, `roles`, `role`, `ownerId`) **MUST** be immutable for non-owner editors. In `update` rules, use a field-unchanged check (e.g., `areImmutableFieldsUnchanged()`) for these fields unless `request.auth.uid` matches the document's original owner/creator. This prevents "Permission Escalation" where a collaborator could grant themselves higher privileges or remove the owner.
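
The permission-field lockdown can be sketched for a hypothetical shared `projects` collection (`isValidProject` is an illustrative domain validator; field names are assumptions):

```javascript
match /projects/{projectId} {
  // Collaborators may edit content, but only the original owner
  // may touch the fields that control access.
  allow update: if isAuthenticated() && isValidProject(request.resource.data) && (
    request.auth.uid == resource.data.ownerId ||
    (request.auth.uid in resource.data.editors &&
     areImmutableFieldsUnchanged(['ownerId', 'editors', 'viewers', 'roles']))
  );
}
```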
### Advanced Validation for Business Logic

Secure rules must enforce the application's business logic. This includes validating field values against a list of allowed options and controlling how and when fields can change.

#### 1. Enforce Enum Values

If a field should only contain specific values (e.g., a status), validate against a list.

**Example:**
```javascript
// A 'task' document's status can only be one of three values
function isValidStatus() {
  let validStatuses = ['pending', 'in-progress', 'completed'];
  return request.resource.data.status in validStatuses;
}

allow create: if isValidStatus() && ...
```

#### 2. Validate State Transitions

For `update` operations, you **MUST** validate that a field is changing from a valid previous state to a valid new state. This prevents users from bypassing workflows (e.g., marking a task as 'completed' from 'archived').

**Example:**

```javascript
// A task can only be marked 'completed' if it was 'in-progress'
function validStatusTransition() {
  let previousStatus = resource.data.status;
  let newStatus = request.resource.data.status;

  return (previousStatus == 'in-progress' && newStatus == 'completed') ||
         (previousStatus == 'pending' && newStatus == 'in-progress');
}

allow update: if validStatusTransition() && ...
```

#### 3. Strict Path and Relationship Scoping

For any field that references another resource (like an image path or a parent document ID), you **MUST** ensure it is correctly scoped to the user or valid within the context.

**Example:**

```javascript
// Ensure the image path is within the user's own storage folder
allow create: if isScopedPath(request.resource.data.imageBucket) && ...
```

#### 4. Secure Counter Updates

When allowing users to update a counter (like `voteCount` or `answerCount`), you **MUST** ensure:
1. **Atomic Increments:** The field is only changing by exactly +1 or -1.
2. **Isolation:** **NO OTHER FIELDS** are being modified. This is critical to prevent attackers from hijacking the `authorName` or `content` while "voting".
3. **Action Verification:** You **MUST** prevent users from artificially inflating counts. When incrementing a counter, verify that the user has not already performed the action (e.g., by checking for the existence of a 'like' document) and is not looping updates.
   - **CRITICAL:** Relying solely on `!exists(likeDoc)` is insufficient because a malicious user can skip creating the document and loop the increment.
   - **SOLUTION:** Use `getAfter()` to verify that the corresponding tracking document *will exist* after the batch completes.

**Example:**

```javascript
function isValidCounterUpdate(docId) {
  // Allow the update only if 'voteCount' is the ONLY field changing
  return request.resource.data.diff(resource.data).affectedKeys().hasOnly(['voteCount']) &&
    // ...and the change is exactly +1 or -1
    math.abs(request.resource.data.voteCount - resource.data.voteCount) == 1 &&
    // Verify consistency:
    (
      // Increment: the vote must NOT exist before, but MUST exist after
      (request.resource.data.voteCount > resource.data.voteCount &&
       !exists(/databases/$(database)/documents/votes/$(request.auth.uid + '_' + docId)) &&
       getAfter(/databases/$(database)/documents/votes/$(request.auth.uid + '_' + docId)) != null) ||
      // Decrement: the vote MUST exist before, but must NOT exist after
      (request.resource.data.voteCount < resource.data.voteCount &&
       exists(/databases/$(database)/documents/votes/$(request.auth.uid + '_' + docId)) &&
       getAfter(/databases/$(database)/documents/votes/$(request.auth.uid + '_' + docId)) == null)
    );
}

allow update: if isValidCounterUpdate(docId) && ...
```

#### 5. **CRITICAL** Ensure Application Validity

While updating the Firestore rules, also ensure that the application still works after the rules change: the rules must permit every query and write the application actually performs.
3. **For each collection, implement explicit data validation:**

   - Type checking: `field is string`, `field is number`, `field is bool`, `field is timestamp`
   - Required fields validation using `hasRequiredFields()`
   - **Enforce Size Limits:** For **EVERY** string, list, and map field, you **MUST** enforce realistic size limits (e.g., `text.size() < 1000`, `tags.size() < 20`). **Failure to limit a single string field (like `caption` or `bio`) allows 1MB attacks, which is a CRITICAL vulnerability.**
   - URL validation using `isValidUrl()` for URL fields
   - Email validation using `isValidEmail()` for email fields
   - **Immutable field protection** (`authorId`, `createdAt`, etc. should not change on update)
   - **UID protection** using `uidUnchanged()` on creates and `uidNotModified()` on updates, accompanied by `isDocOwner()`
   - **Temporal accuracy** using `isRecent()` for timestamps
   - **Range validation** using `isPositive()` or similar for numbers
   - **Path scoping** using `isScopedPath()` for storage paths

Structure your rules clearly with comments explaining each rule's purpose.
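
Put together, a create rule for a hypothetical `posts` collection might combine these checks like this (field names are illustrative; all helpers are the ones defined earlier in this guide):

```javascript
match /posts/{postId} {
  allow create: if isAuthenticated() &&
    uidUnchanged() &&
    hasOnlyAllowedFields(['uid', 'title', 'body', 'imageBucket', 'createdAt']) &&
    hasRequiredFields(['uid', 'title', 'body', 'createdAt']) &&
    validStringLength('title', 1, 100) &&
    validStringLength('body', 1, 5000) &&
    // imageBucket is optional; if present, it must be scoped to the user
    (!('imageBucket' in request.resource.data) ||
     isScopedPath(request.resource.data.imageBucket)) &&
    isRecent(request.resource.data.createdAt);
}
```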

#### Phase-3: Devil's Advocate Attack

**Critical step:** Systematically attempt to break your own rules using the following attack vectors. You MUST document the outcome of each attempt.

1. **Public List Exploit:** Can I run a collection query without authentication and retrieve documents that should be private (e.g., where `visible == false`)?
2. **Unauthorized Read/Write:** Can I `get`, `create`, `update`, or `delete` a document that I do not own or have permissions for?
3. **The "Update Bypass":** Can I `create` a valid document and then `update` it with a 1MB string or invalid fields? (Tests if validation logic is missing from `update`.)
4. **Ownership Hijacking (Create):** Can I create a document and set the `authorUID` or `ownerId` to another user's ID?
5. **Ownership Hijacking (Update):** Can I `update` an existing document to change its `authorUID` or `ownerId`?
6. **Immutable Field Modification:** Can I change `createdAt` or another immutable timestamp or property on an `update`?
7. **Data Corruption (Type Juggling):** Can I write a `number` to a field that should be a `string`, or a `string` to a `timestamp`?
8. **Validation Bypass (Create vs. Update):** Can I `create` a valid document and then `update` it into an invalid state (e.g., remove a required field, write a string that's too long)?
9. **Resource Exhaustion / DoS:** Can I write an enormous string (e.g., 1MB) to any field that accepts a string, or a massive array to a list field? Every string field (e.g., `bio`, `url`, `name`) MUST have a `.size()` check. If any are missing, it's a "Resource Exhaustion/DoS" risk.
10. **Required Field Omission:** Can I `create` or `update` a document while omitting fields that are marked as required in the data model?
11. **Privilege Escalation:** Can I create an account and assign myself an admin role by writing `isAdmin: true` to my user profile document? (Tests reliance on document data vs. custom claims.)
12. **Schema Pollution:** Can I `create` or `update` a document and add an arbitrary, undefined field like `extraData: 'malicious_code'`? (Tests for strict schema enforcement.)
13. **Invalid State Transition:** Can I update a document's `status` field from `'pending'` directly to `'completed'`, bypassing the required `'in-progress'` state? (Tests business logic enforcement.)
14. **Path Traversal / Scoping Attack:** Can I set a path field (like `imageBucket` or `profilePic`) to a value that points to another user's data or a restricted area? (Tests for regex path scoping.)
15. **Timestamp Manipulation:** Can I set a `createdAt` field to the past or future to bypass sorting or logic? (Tests for `request.time` validation.)
16. **Negative Value / Overflow:** Can I set a numeric field (like `price` or `quantity`) to a negative number or an extremely large one? (Tests for range validation.)
17. **The "Mixed Content" Leak:** Create a second user. Can User B read User A's `users` document? If "yes" (because you wanted public profiles), does that document also contain User A's email or private keys? If both are true, the rules are insecure.
18. **Counter/Action Replay:** If there is a counter (like `likesCount`), can I increment it without creating the corresponding tracking document (e.g., inside `likes/{userId}`)? Can I increment it twice? (Tests for `getAfter()` consistency checks.)
19. **Orphaned Subcollection Access:** Can I read/write a subcollection (e.g., `users/123/posts/456`) if the parent document (`users/123`) does not exist? (Tests for parent existence checks.)
20. **Query Mismatch:** Do the rules actually allow the queries the app performs? (e.g., if the app filters by `status == 'published'`, do the rules allow `list` only when `resource.data.status == 'published'`?)
21. **Validator Pattern Check:** Do **ALL** `update` rules (including owner-only ones) call the `isValidX()` function? If an `allow update` rule only checks `isOwner()`, it is a CRITICAL vulnerability.
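
Many of these attacks can be scripted against the local Firestore emulator using the `@firebase/rules-unit-testing` package. A minimal sketch of attack #4 (ownership hijacking on create), assuming the emulator is running and `firestore.rules` is the file under test:

```javascript
import { initializeTestEnvironment, assertFails } from "@firebase/rules-unit-testing";
import { doc, setDoc } from "firebase/firestore";
import { readFileSync } from "node:fs";

const testEnv = await initializeTestEnvironment({
  projectId: "demo-rules-test",
  firestore: { rules: readFileSync("firestore.rules", "utf8") },
});

// Authenticate as "alice", then try to create a post owned by "bob".
// The rules pass this check only if the write is rejected.
const alice = testEnv.authenticatedContext("alice").firestore();
await assertFails(
  setDoc(doc(alice, "posts", "p1"), { uid: "bob", title: "hijacked" })
);

await testEnv.cleanup();
```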
Document each attack attempt and whether it succeeded. If ANY attack succeeds:

- Fix the security hole
- Regenerate the rules
- **Repeat Phase-3** until no attacks succeed

#### Phase-4: Syntactic Validation

Once devil's advocate testing passes, validate the rules syntax and repeat until the rules pass validation.

**After all phases are complete, create or update the `firestore.rules` file.**

### Critical Constraints

1. **Never skip the devil's advocate phase** - this is your primary security validation
2. **MUST include helper functions** for common operations (`isAuthenticated`, `isOwner`, `uidUnchanged`, `uidNotModified`) AND domain validators (`isValidUser`, etc.)
3. **MUST document assumed data models** at the beginning of the rules file
4. **Always validate the rules syntax** using `firebase deploy --only firestore:rules --dry-run` or a similar tool before outputting the final file
5. **Provide complete, runnable code** - no placeholders or TODOs
6. **Document all assumptions** about data structure or access patterns
7. **Always run the devil's advocate attack** after any modification of the rules
8. **Determine whether the rules need to be updated** after permission-denied errors occur
9. **Do not make overly confident guarantees about the security of rules you have generated.** It is very difficult to exhaustively guarantee that there are no vulnerabilities in a rules set, and it is vital not to mislead users into thinking that their rules are perfect. After an initial rules generation, describe the rules you've written as a solid prototype, and tell users that before they launch their app to a large audience, they should work with you to harden and validate the rules file. Be clear that users should carefully review the rules to ensure security.
|
||||||
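Constraint 4 can be satisfied with the CLI invocation pattern used elsewhere in this skill; a hedged sketch (it requires a configured Firebase project, so run it in the user's environment):

```bash
# Compile and validate firestore.rules without deploying anything
npx -y firebase-tools@latest deploy --only firestore:rules --dry-run
```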
@@ -0,0 +1,201 @@
# Web SDK Usage

This guide focuses on the **Modular Web SDK** (v9+), which is tree-shakeable and efficient.

### Initialization

```javascript
import { initializeApp } from "firebase/app";
import { getFirestore } from "firebase/firestore";

// If running in Firebase App Hosting, you can skip the Firebase config and instead use:
// const app = initializeApp();

const firebaseConfig = {
  // Your config options. Get the values by running 'firebase apps:sdkconfig <platform> <app-id>'
};

const app = initializeApp(firebaseConfig);
const db = getFirestore(app);
```

### Writing Data

#### Set a Document
Creates a document if it doesn't exist, or overwrites it if it does. You can also specify a merge option to update only the provided fields.

```javascript
import { doc, setDoc } from "firebase/firestore";

// Create/overwrite the document with ID "LA"
await setDoc(doc(db, "cities", "LA"), {
  name: "Los Angeles",
  state: "CA",
  country: "USA"
});

// To merge with existing data instead of overwriting:
await setDoc(doc(db, "cities", "LA"), { population: 3900000 }, { merge: true });
```

#### Add a Document with Auto-ID
Use this when you don't care about the document ID and want Firestore to generate one automatically.

```javascript
import { collection, addDoc } from "firebase/firestore";

const docRef = await addDoc(collection(db, "cities"), {
  name: "Tokyo",
  country: "Japan"
});
console.log("Document written with ID: ", docRef.id);
```

#### Update a Document
Updates some fields of an existing document without overwriting the entire document. Fails if the document doesn't exist.

```javascript
import { doc, updateDoc } from "firebase/firestore";

const laRef = doc(db, "cities", "LA");

await updateDoc(laRef, {
  capital: true
});
```

#### Transactions
Perform an atomic read-modify-write operation.

```javascript
import { runTransaction, doc } from "firebase/firestore";

const sfDocRef = doc(db, "cities", "SF");

try {
  await runTransaction(db, async (transaction) => {
    const sfDoc = await transaction.get(sfDocRef);
    if (!sfDoc.exists()) {
      // Throw an Error object so the catch block gets a proper stack trace
      throw new Error("Document does not exist!");
    }

    const newPopulation = sfDoc.data().population + 1;
    transaction.update(sfDocRef, { population: newPopulation });
  });
  console.log("Transaction successfully committed!");
} catch (e) {
  console.log("Transaction failed: ", e);
}
```

### Reading Data

#### Get a Single Document

```javascript
import { doc, getDoc } from "firebase/firestore";

const docRef = doc(db, "cities", "SF");
const docSnap = await getDoc(docRef);

if (docSnap.exists()) {
  console.log("Document data:", docSnap.data());
} else {
  console.log("No such document!");
}
```

#### Get Multiple Documents
Fetches all documents in a query or collection once.

```javascript
import { collection, getDocs } from "firebase/firestore";

const querySnapshot = await getDocs(collection(db, "cities"));
querySnapshot.forEach((doc) => {
  console.log(doc.id, " => ", doc.data());
});
```

### Realtime Updates

#### Listen to a Document or Query

```javascript
import { doc, onSnapshot } from "firebase/firestore";

const unsub = onSnapshot(doc(db, "cities", "SF"), (doc) => {
  console.log("Current data: ", doc.data());
});

// To stop listening:
// unsub();
```

### Handle Changes

```javascript
import { collection, query, where, onSnapshot } from "firebase/firestore";

const q = query(collection(db, "cities"), where("state", "==", "CA"));
const unsubscribe = onSnapshot(q, (snapshot) => {
  snapshot.docChanges().forEach((change) => {
    if (change.type === "added") {
      console.log("New city: ", change.doc.data());
    }
    if (change.type === "modified") {
      console.log("Modified city: ", change.doc.data());
    }
    if (change.type === "removed") {
      console.log("Removed city: ", change.doc.data());
    }
  });
});
```

### Queries

#### Simple and Compound Queries
Use `query()` and `where()` to combine filters safely.

```javascript
import { collection, query, where, getDocs } from "firebase/firestore";

const citiesRef = collection(db, "cities");

// Simple equality
const q1 = query(citiesRef, where("state", "==", "CA"));

// Compound (AND)
// Note: Requires a composite index when filtering on different fields
const q2 = query(citiesRef, where("state", "==", "CA"), where("population", ">", 1000000));
```

#### Order and Limit
Sort and limit results cleanly.

```javascript
import { orderBy, limit } from "firebase/firestore";

const q = query(citiesRef, orderBy("name"), limit(3));
```

#### Pipeline Queries

You can use pipeline queries to express complex, multi-stage queries.

```javascript
// Note: pipelines are a newer Firestore feature; verify the import path for
// `execute` against the SDK version you are using.
const readDataPipeline = db.pipeline()
  .collection("users");

// Execute the pipeline and handle the result
try {
  const querySnapshot = await execute(readDataPipeline);
  querySnapshot.results.forEach((result) => {
    console.log(`${result.id} => ${result.data()}`);
  });
} catch (error) {
  console.error("Error getting documents: ", error);
}
```
@@ -0,0 +1,27 @@
---
name: firebase-firestore-standard
description: Comprehensive guide for Firestore Standard Edition, including provisioning, security rules, and SDK usage. Use this skill when the user needs help setting up Firestore, writing security rules, or using the Firestore SDK in their application.
compatibility: This skill is best used with the Firebase CLI, but does not require it. The Firebase CLI can be accessed through `npx -y firebase-tools@latest`.
---

# Firestore Standard Edition

This skill provides a complete guide for getting started with Cloud Firestore Standard Edition, including provisioning, securing, and integrating it into your application.

## Provisioning

To set up Cloud Firestore in your Firebase project and local environment, see [provisioning.md](references/provisioning.md).

## Security Rules

For guidance on writing and deploying Firestore Security Rules to protect your data, see [security_rules.md](references/security_rules.md).

## SDK Usage

To learn how to use Cloud Firestore in your application code, choose your platform:

* **Web (Modular SDK)**: [web_sdk_usage.md](references/web_sdk_usage.md)

## Indexes

For index types, query support tables, and best practices, see [indexes.md](references/indexes.md).
@@ -0,0 +1,82 @@
# Firestore Indexes Reference

Indexes allow Firestore to ensure that query performance depends on the size of the result set, not the size of the database.

## Index Types

### Single-Field Indexes
In Standard Edition, Firestore **automatically creates** a single-field index for every field in a document (and every subfield in maps).
* **Support**: Simple equality queries (`==`) and single-field range/sort queries (`<`, `<=`, `orderBy`).
* **Behavior**: You generally don't need to manage these unless you want to *exempt* a field.

### Composite Indexes
A composite index stores a sorted mapping of all documents based on an ordered list of fields.
* **Support**: Complex queries that filter or sort by **multiple fields**.
* **Creation**: These are **NOT** automatically created. You must define them manually or via the console/CLI.

## Automatic vs. Manual Management

### What is Automatic?
* Indexes for simple queries.
* Merging of single-field indexes for multiple equality filters (e.g., `where("state", "==", "CA").where("country", "==", "USA")`).

### When Do I Need to Act?
If you attempt a query that requires a composite index, the SDK will throw an error containing a **direct link** to the Firebase Console to create that specific index.

**Example Error:**
> "The query requires an index. You can create it here: https://console.firebase.google.com/project/..."

## Query Support Examples

| Query Type | Index Required |
| :--- | :--- |
| **Simple Equality**<br>`where("a", "==", 1)` | Automatic (Single-Field) |
| **Simple Range/Sort**<br>`where("a", ">", 1).orderBy("a")` | Automatic (Single-Field) |
| **Multiple Equality**<br>`where("a", "==", 1).where("b", "==", 2)` | Automatic (Merged Single-Field) |
| **Equality + Range/Sort**<br>`where("a", "==", 1).where("b", ">", 2)` | **Composite Index** |
| **Multiple Ranges**<br>`where("a", ">", 1).where("b", ">", 2)` | **Composite Index** (and technically limited query support) |
| **Array Contains + Equality**<br>`where("tags", "array-contains", "news").where("active", "==", true)` | **Composite Index** |

## Best Practices & Exemptions

You can **exempt** fields from automatic indexing to save storage or avoid write-rate limits.

### 1. High Write Rates (Sequential Values)
* **Problem**: Indexing fields that increase sequentially (like `timestamp`) limits the write rate to ~500 writes/second per collection.
* **Solution**: If you don't query on this field, **exempt** it from single-field indexing.

### 2. Large String/Map/Array Fields
* **Problem**: Index entry limits (40,000 entries per document). Indexing large blobs wastes storage.
* **Solution**: Exempt large text blobs or huge arrays if they aren't used for filtering.

### 3. TTL Fields
* **Problem**: TTL (Time-To-Live) deletion can cause index churn.
* **Solution**: Exempt the TTL timestamp field from indexing if you don't query it.

## Management

### Config files
Your indexes should be defined in `firestore.indexes.json` (pointed to by `firebase.json`).

```json
{
  "indexes": [
    {
      "collectionGroup": "cities",
      "queryScope": "COLLECTION",
      "fields": [
        { "fieldPath": "country", "order": "ASCENDING" },
        { "fieldPath": "population", "order": "DESCENDING" }
      ]
    }
  ],
  "fieldOverrides": []
}
```
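The exemptions described under Best Practices are declared in this same file via `fieldOverrides`. A hedged sketch (the `events` collection and `timestamp` field are illustrative) that disables all single-field indexes for a high-write sequential field:

```json
{
  "indexes": [],
  "fieldOverrides": [
    {
      "collectionGroup": "events",
      "fieldPath": "timestamp",
      "indexes": []
    }
  ]
}
```

An empty `indexes` array in an override removes the automatic single-field indexes for that path, lifting the sequential-write bottleneck at the cost of no longer being able to filter or sort on the field.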
### CLI Commands

Deploy indexes only:
```bash
npx -y firebase-tools@latest deploy --only firestore:indexes
```
@@ -0,0 +1,87 @@
# Provisioning Cloud Firestore

## Manual Initialization

Initialize the following Firebase configuration files manually. Do not use `npx -y firebase-tools@latest init`, as it expects interactive inputs.

1. **Create `firebase.json`**: This file configures the Firebase CLI.
2. **Create `firestore.rules`**: This file contains your security rules.
3. **Create `firestore.indexes.json`**: This file contains your index definitions.

### 1. Create `firebase.json`

Create a file named `firebase.json` in your project root with the following content. If this file already exists, add the `firestore` block to the existing JSON instead:

```json
{
  "firestore": {
    "rules": "firestore.rules",
    "indexes": "firestore.indexes.json"
  }
}
```

This will use the default database with the Standard edition. To use a different database, specify the database ID and location. You can check the list of available databases using `npx -y firebase-tools@latest firestore:databases:list`. If the database does not exist, it will be created when you deploy:

```json
{
  "firestore": {
    "rules": "firestore.rules",
    "indexes": "firestore.indexes.json",
    "database": "my-database-id",
    "location": "us-central1"
  }
}
```

### 2. Create `firestore.rules`

Create a file named `firestore.rules`. A good starting point (locking down the database) is:

```
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    match /{document=**} {
      allow read, write: if false;
    }
  }
}
```
*See [security_rules.md](security_rules.md) for how to write actual rules.*

### 3. Create `firestore.indexes.json`

Create a file named `firestore.indexes.json` with an empty configuration to start:

```json
{
  "indexes": [],
  "fieldOverrides": []
}
```

*See [indexes.md](indexes.md) for how to configure indexes.*

## Deploy rules and indexes
```bash
# To deploy all rules and indexes
npx -y firebase-tools@latest deploy --only firestore

# To deploy just rules
npx -y firebase-tools@latest deploy --only firestore:rules

# To deploy just indexes
npx -y firebase-tools@latest deploy --only firestore:indexes
```

## Local Emulation

To run Firestore locally for development and testing:

```bash
npx -y firebase-tools@latest emulators:start --only firestore
```

This starts the Firestore emulator, typically on port 8080. You can interact with it using the Emulator UI (usually at http://localhost:4000/firestore).
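To point the Web SDK at the emulator during development, the modular SDK exposes `connectFirestoreEmulator`; a minimal sketch, assuming the emulator is running on the default host and port shown above:

```javascript
import { getFirestore, connectFirestoreEmulator } from "firebase/firestore";

const db = getFirestore(app);
// Must be called before any other Firestore operation on this instance.
connectFirestoreEmulator(db, "localhost", 8080);
```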
@@ -0,0 +1,414 @@
## 1. Generate Firestore Rules
You are an expert Firebase Security Rules engineer with deep knowledge of Firestore security best practices. Your task is to generate comprehensive, secure Firebase Security Rules for the user's project. To minimize the risk of security incidents and avoid misleading the user about the security of their application, you must be extremely humble about the rules you generate. Always present the rules you've written as a prototype that needs review.

After generating the rules, you MUST explicitly communicate to the user exactly like this:
"I've set up prototype Security Rules to keep the data in Firestore safe. They are designed to be secure for <explain reasons here>. However, you should review and verify them before broadly sharing your app. If you'd like, I can help you harden these rules."

### Workflow

Follow this structured workflow strictly:

#### Phase-1: Codebase Analysis

1. **Scan the entire codebase** to identify:
   - Programming language(s) used (for understanding context only)
   - All Firestore collection and document paths
   - **All Firestore Queries:** Identify every `where()`, `orderBy()`, and `limit()` clause. The security rules **MUST** allow these specific queries.
   - Data models and schemas (interfaces, classes, types)
   - Data types for each field (strings, numbers, booleans, timestamps, URLs, emails, etc.)
   - Required vs. optional fields
   - Field constraints (min/max length, format patterns, allowed values)
   - CRUD operations (create, read, update, delete)
   - Authentication patterns (Firebase Auth, custom tokens, anonymous)
   - Access patterns and business logic rules
2. **Document your findings** in an untracked file. Refer to this file when generating the security rules.

#### Phase-2: Security Rules Generation

**CRITICAL**: Follow these principles **every time you modify the security rules file**.

Generate Firebase Security Rules following these principles:

- **Default deny:** Start by denying all access, then explicitly allow only what's needed
- **Least privilege:** Grant the minimum permissions required
- **Validate data:** Check data types, allowed fields, and constraints on both creates and updates.
- **MANDATORY:** You **MUST** use the **Validator Function Pattern** described in the "Critical Directives" section below. This involves defining a specific validation function (e.g., `isValidUser`) and calling it in **BOTH** `create` and `update` rules.
- **MANDATORY:** For **ALL** creates **AND ALL** updates, ensure that after the operation, the required fields are still present and the data is valid.
- **Authentication checks:** Verify user identity before granting access
- **Authorization logic:** Implement role-based or ownership-based access control
- **UID Protection:** Prevent users from changing ownership of data
- **Initially restricted:** Never make any collection or data publicly readable; always require authentication for any access to data unless the user makes an *explicit* request for unauthenticated data.

This means the first `firestore.rules` file you generate must never contain any `allow read: if true` statements.

**Structure Requirements:**

1. **Document assumed data models at the beginning of the rules file:**

```javascript
// ===============================================================
// Assumed Data Model
// ===============================================================
//
// This security rules file assumes the following data structures:
//
// Collection: [name]
//   Document ID: [pattern]
//   Fields:
//     - field1: type (required/optional, constraints) - description
//     - field2: type (required/optional, constraints) - description
//     [List all fields with types, constraints, and whether immutable]
//
// [Repeat for all collections]
//
// ===============================================================
```

2. **Include comprehensive helper functions to avoid repetition:**

```javascript
// ===============================================================
// Helper Functions
// ===============================================================
//
// Check if the user is authenticated
function isAuthenticated() {
  return request.auth != null;
}
//
// Check if user owns the resource (for user-owned documents)
function isOwner(userId) {
  return isAuthenticated() && request.auth.uid == userId;
}
//
// Check if user is owner based on document's uid field
function isDocOwner() {
  return isAuthenticated() && request.auth.uid == resource.data.uid;
}
//
// Verify UID hasn't been tampered with on create
function uidUnchanged() {
  return !('uid' in request.resource.data) ||
         request.resource.data.uid == request.auth.uid;
}
//
// Ensure uid field is not modified on update
function uidNotModified() {
  return !('uid' in request.resource.data) ||
         request.resource.data.uid == resource.data.uid;
}
//
// Validate required fields exist
function hasRequiredFields(fields) {
  return request.resource.data.keys().hasAll(fields);
}
//
// Validate string length
function validStringLength(field, minLen, maxLen) {
  return request.resource.data[field] is string &&
         request.resource.data[field].size() >= minLen &&
         request.resource.data[field].size() <= maxLen;
}
//
// Validate URL format (must start with https:// or http://)
function isValidUrl(url) {
  return url is string &&
         (url.matches("^https://.*") || url.matches("^http://.*"));
}
//
// Validate email format
function isValidEmail(email) {
  return email is string &&
         email.matches("^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\\.[a-zA-Z]{2,}$");
}
//
// Validate ISO 8601 date string format (YYYY-MM-DDTHH:MM:SS)
// CRITICAL: This validates format ONLY, not logical date values (e.g., month 13).
// Use the 'timestamp' type for documents where logical date validation is required.
function isValidDateString(dateStr) {
  return dateStr is string &&
         dateStr.matches("^\\d{4}-\\d{2}-\\d{2}T\\d{2}:\\d{2}:\\d{2}.*Z?$");
}
//
// Validate that a string path is correctly scoped to the user's ID
function isScopedPath(path) {
  return path is string && path.matches("^users/" + request.auth.uid + "/.*");
}
//
// Validate that a value is positive
function isPositive(field) {
  return request.resource.data[field] is number && request.resource.data[field] > 0;
}
//
// Validate that a list is a list and enforce size limits
function isValidList(list, maxSize) {
  return list is list && list.size() <= maxSize;
}
//
// Validate optional string (if present, must be a string within length bounds)
function isValidOptionalString(field, minLen, maxLen) {
  return !(field in request.resource.data) ||
         (request.resource.data[field] is string &&
          request.resource.data[field].size() >= minLen &&
          request.resource.data[field].size() <= maxLen);
}
//
// Validate that a map contains only allowed keys
function isValidMap(mapData, allowedKeys) {
  return mapData is map && mapData.keys().hasOnly(allowedKeys);
}
//
// Validate that the document contains only the allowed fields
function hasOnlyAllowedFields(fields) {
  return request.resource.data.keys().hasOnly(fields);
}
//
// Validate that the document hasn't changed in fields that are not allowed to change
function areImmutableFieldsUnchanged(fields) {
  return !request.resource.data.diff(resource.data).affectedKeys().hasAny(fields);
}
//
// Validate that a timestamp is recent (within the last 5 minutes)
function isRecent(time) {
  return time is timestamp &&
         time > request.time - duration.value(5, 'm') &&
         time <= request.time;
}
//
// [Add more helper functions as needed for data validation, like the example below]
//
// ===============================================================
//
// Domain Validators (CRITICAL: Use these in both create and update)
//
// function isValidUser(data) {
//   // Only allow admins to create admin roles
//   return hasOnlyAllowedFields(['name', 'email', 'age', 'role']) &&
//          data.name is string && data.name.size() > 0 && data.name.size() < 50 &&
//          data.email is string && isValidEmail(data.email) &&
//          data.age is number && data.age >= 18 &&
//          data.role in ['admin', 'user', 'guest'];
// }
```

#### Mandatory: User Data Separation (The "No Mixed Content" Rule)
- Firestore security rules apply to the entire document. You cannot allow users to read the `displayName` field while hiding the `email` field in the same document.
- If a collection (e.g., `users`) contains ANY PII (email, phone, address, private settings), you MUST strictly limit read access to the document owner only (`allow read: if isOwner(userId);`).
- If the application requires public profiles (e.g., showing user names/avatars on posts):
  1. **Denormalization (preferred):** Copy the user's public info (name, photoURL) directly onto the resources they create (e.g., store `authorName` and `authorPhoto` inside the `posts` document).
  2. **Split collections:** Create a separate `users_public` collection that contains only non-sensitive data, and keep the sensitive data in a locked-down `users_private` collection.
- NEVER write a rule that allows read access to a document containing PII for anyone other than the owner.
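The split-collections option can be sketched in rules; an illustrative fragment (collection names follow the bullet above), reusing the `isAuthenticated` and `isOwner` helpers defined earlier:

```javascript
// Public profile data: no PII allowed here. Readable by signed-in users,
// writable only by the profile owner.
match /users_public/{userId} {
  allow read: if isAuthenticated();
  allow write: if isOwner(userId);
}

// Private data (email, phone, settings): owner-only access.
match /users_private/{userId} {
  allow read, write: if isOwner(userId);
}
```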
#### **CRITICAL** RBAC Guidelines
This is one of the most important sets of instructions to follow. Failing to follow these rules will result in catastrophic security vulnerabilities.

- **NEVER** allow users to create their own privileged roles. No user should be able to create an item in the database with their role set to something like "admin" unless they are already a bootstrapped admin.
- **NEVER** allow users to update their own roles or permissions.
- **NEVER** allow users to grant themselves access to other users' data.
- **NEVER** allow users to bypass the role hierarchy.
- **ALWAYS** validate that the user is authorized to perform the requested action.
- **ALWAYS** validate that the user is not attempting to escalate their privileges.
- **ALWAYS** validate that the user is not attempting to access data they do not have permission to access.

Here's a **bad** example of what **NOT** to do:

```javascript
match /users/{userId} {
  // BAD: Allows users to grant themselves roles. A user can create their own
  // user document with a role of 'admin', and isAdmin() will then return true.
  allow create: if (isOwner(userId) && isValidUser(request.resource.data)) || isAdmin();
  // BAD: Allows users to update their own roles. A user can update their own
  // user document to a role of 'admin', and isAdmin() will then return true.
  allow update: if (isOwner(userId) && isValidUser(request.resource.data)) || isAdmin();
}
```

Here's a **good** example of what **TO** do:

```javascript
match /users/{userId} {
  // GOOD: Users may only create their own document with the unprivileged 'client'
  // role; only admins can create documents with other roles.
  allow create: if isAuthenticated() && isValidUser(request.resource.data) && ((isOwner(userId) && request.resource.data.role == 'client') || isAdmin());
  // GOOD: Users cannot change their own role; only admins can.
  allow update: if isAuthenticated() && isValidUser(request.resource.data) && ((isOwner(userId) && request.resource.data.role == resource.data.role) || isAdmin());
}
```

#### Critical Directives for Secure Generation

- **PREFER USING READ OVER LIST OR GET:** `list` and `get` can add complexity to security rules. Prefer using `read` over them.
- **Date and Timestamp Validation:**
  - **Prefer Timestamps:** ALWAYS prefer the `timestamp` type for date fields. Firestore automatically ensures they are logically valid dates.
  - **String Date Risks:** If using strings for dates (e.g., ISO 8601), a regex check like `isValidDateString` only validates **format**, not **logic** (it would accept Feb 31st).
  - **Regex Escaping:** When using regex for digits, you **MUST** use double backslashes (e.g., `\\d`) in the rules string. Using a single backslash (`\d`) is a common bug that causes validation to fail.
- **Immutable Fields:** Fields like `createdAt`, `authorUid`, or any other field that should not change after creation must be explicitly protected in `update` rules (e.g., `request.resource.data.createdAt == resource.data.createdAt`). **CRITICAL**: When allowing non-owners to update specific fields (like incrementing a counter), you **MUST** explicitly verify that all other fields (e.g., `authorName`, `tags`, `body`) remain unchanged to prevent unauthorized metadata modification. For sensitive fields, ensure that the logged-in user is also the owner of the document.
- **Identity Integrity:** When storing denormalized user identity (e.g., `authorName`, `authorPhoto`), you **MUST** validate this data.
  - **Prefer Auth Token:** If possible, check that `request.resource.data.authorName == request.auth.token.name`.
  - **Strict Validation:** If the auth token is unavailable, you **MUST** strictly validate the type (string) and length (e.g., < 50 chars) to prevent spoofing with massive or malicious payloads.
  - **Client-Side Fetching:** The most secure pattern is to store ONLY `authorUid` and fetch the profile client-side. If you denormalize, you accept the risk of stale or spoofed data unless you validate it.
- **Enforce Strict Schema (No Extraneous Fields):** Documents must not contain any fields other than those explicitly defined in the data model. This prevents users from adding arbitrary data.
- **NEVER ALLOW PII EXPOSURE:** Never allow PII (Personally Identifiable Information) to be exposed in the data model. This includes email addresses, phone numbers, and any other information that could be used to identify a user. For example, even a logged-in user should not be able to read another user's information.
- **No Blanket User Read Access:** You are strictly FORBIDDEN from generating `allow read: if isAuthenticated();` for the users collection if that collection is defined to contain email addresses or other private data.
- **CRITICAL: Double-check blanket `isAuthenticated()` rules:** Ensure that paths protected with only `isAuthenticated()` do not need additional checks based on role or any other condition.
- **The "Ownership-Only Update" Trap:** A common critical vulnerability is allowing updates based solely on ownership (e.g., `allow update: if isOwner(resource.data.uid);`). This lets the owner corrupt the data schema, delete required fields, or inject malicious payloads. You **MUST** always combine ownership checks with data validation (e.g., `allow update: if isOwner(...) && isValidEntity(...);`) **AND** validate that self-escalation is not possible.
- **Deep Array Inspection:** It is insufficient to check that a field `is list`. You **MUST** validate the contents of the array (e.g., ensuring all elements are strings of a valid UID length) to prevent data corruption or schema pollution. For example, a `tags` array must verify that every item is a string AND that each string is within a reasonable length (e.g., < 20 chars).
- **Permission-Field Lockdown:** Fields that control access (e.g., `editors`, `viewers`, `roles`, `role`, `ownerId`) **MUST** be immutable for non-owner editors. In `update` rules, use `areImmutableFieldsUnchanged()` for these fields unless `request.auth.uid` matches the document's original owner/creator. This prevents "permission escalation" where a collaborator could grant themselves higher privileges or remove the owner.
- **Permission-Field Lockdown:** Fields that control access (e.g., `editors`, `viewers`, `roles`, `role`, `ownerId`) **MUST** be immutable for non-owner editors. In `update` rules, use `fieldUnchanged()` for these fields unless the `request.auth.uid` matches the document's original owner/creator. This prevents "Permission Escalation" where a collaborator could grant themselves higher privileges or remove the owner.
|
||||||
|
|
||||||
|
|
||||||
|
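The strict-schema, immutable-field, and array checks above can be sketched as rules helpers. This is a hedged sketch only: the field names (`title`, `tags`, `createdAt`, `authorUID`) and the tag allowlist are illustrative assumptions, not part of any data model in this document. Note that rules cannot loop over arbitrary arrays, so element validation is typically done by constraining the list size and restricting elements to a known allowlist.

```javascript
// Sketch only — field names and the tag allowlist are hypothetical.
function isValidPostShape() {
  // Strict schema: reject documents with extraneous fields
  return request.resource.data.keys().hasOnly(['title', 'tags', 'createdAt', 'authorUID']) &&
         request.resource.data.title is string &&
         request.resource.data.title.size() < 100 &&
         // Deep array inspection: rules have no loops, so constrain the
         // list size and restrict elements to a known allowlist
         request.resource.data.tags is list &&
         request.resource.data.tags.size() <= 10 &&
         request.resource.data.tags.hasOnly(['news', 'sports', 'tech']);
}

function immutableFieldsUnchanged() {
  // Immutable fields: must match the stored document on update
  return request.resource.data.createdAt == resource.data.createdAt &&
         request.resource.data.authorUID == resource.data.authorUID;
}

allow update: if immutableFieldsUnchanged() && isValidPostShape();
```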
### Advanced Validation for Business Logic

Secure rules must enforce the application's business logic. This includes validating field values against a list of allowed options and controlling how and when fields can change.

#### 1. Enforce Enum Values

If a field should only contain specific values (e.g., a status), validate against a list.

**Example:**

```javascript
// A 'task' document's status can only be one of three values
function isValidStatus() {
  let validStatuses = ['pending', 'in-progress', 'completed'];
  return request.resource.data.status in validStatuses;
}

allow create: if isValidStatus() && ...
```
#### 2. Validate State Transitions

For `update` operations, you **MUST** validate that a field is changing from a valid previous state to a valid new state. This prevents users from bypassing workflows (e.g., marking a task as 'completed' from 'archived').

**Example:**

```javascript
// A task can only be marked 'completed' if it was 'in-progress'
function validStatusTransition() {
  let previousStatus = resource.data.status;
  let newStatus = request.resource.data.status;

  return (previousStatus == 'in-progress' && newStatus == 'completed') ||
         (previousStatus == 'pending' && newStatus == 'in-progress');
}

allow update: if validStatusTransition() && ...
```
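Client code can mirror the same transition table before issuing a write, so most invalid updates fail fast locally instead of surfacing as permission-denied errors. This is a plain JavaScript sketch; the helper name and transition map are hypothetical, and it does not replace the server-side rule, which remains the source of truth.

```javascript
// Hypothetical client-side mirror of the server-side transition rule.
// The security rule is authoritative; this only improves error UX.
const ALLOWED_TRANSITIONS = {
  "pending": ["in-progress"],
  "in-progress": ["completed"],
};

function canTransition(previousStatus, newStatus) {
  // Unknown previous states (e.g., 'archived') allow no transitions
  const allowed = ALLOWED_TRANSITIONS[previousStatus] || [];
  return allowed.includes(newStatus);
}
```

A client would call `canTransition(current.status, next.status)` before attempting `updateDoc`.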
#### 3. Strict Path and Relationship Scoping

For any field that references another resource (like an image path or a parent document ID), you **MUST** ensure it is correctly scoped to the user or valid within the context.

**Example:**

```javascript
// Ensure the image path is within the user's own storage folder
allow create: if isScopedPath(request.resource.data.imageBucket) && ...
```
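The `isScopedPath()` helper is not defined in this document; one hedged sketch, assuming storage paths of the hypothetical form `users/<uid>/...`:

```javascript
// Sketch only — assumes paths look like 'users/<uid>/images/<file>'.
function isScopedPath(path) {
  return path is string &&
         path.size() < 200 &&
         // The path must start with the caller's own user folder
         path.matches('^users/' + request.auth.uid + '/.*');
}
```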
#### 4. Secure Counter Updates

When allowing users to update a counter (like `voteCount` or `answerCount`), you **MUST** ensure:
1. **Atomic Increments:** The field is only changing by exactly +1 or -1.
2. **Isolation:** **NO OTHER FIELDS** are being modified. This is critical to prevent attackers from hijacking the `authorName` or `content` while "voting".
3. **Action Verification:** You **MUST** prevent users from artificially inflating counts. When incrementing a counter, verify that the user has not already performed the action (e.g., by checking for the existence of a 'like' document) and is not looping updates.
   - **CRITICAL:** Relying solely on `!exists(likeDoc)` is insufficient, because a malicious user can skip creating the document and loop the increment.
   - **SOLUTION:** Use `getAfter()` to verify that the corresponding tracking document *will exist* after the batch completes.

**Example:**

```javascript
function isValidCounterUpdate(docId) {
  // Allow update only if 'voteCount' is the ONLY field changing
  return request.resource.data.diff(resource.data).affectedKeys().hasOnly(['voteCount']) &&
         // And the change is exactly +1 or -1
         math.abs(request.resource.data.voteCount - resource.data.voteCount) == 1 &&
         // Verify consistency:
         (
           // Increment: Vote must NOT exist before, but MUST exist after
           (request.resource.data.voteCount > resource.data.voteCount &&
            !exists(/databases/$(database)/documents/votes/$(request.auth.uid + '_' + docId)) &&
            getAfter(/databases/$(database)/documents/votes/$(request.auth.uid + '_' + docId)) != null) ||
           // Decrement: Vote MUST exist before, but must NOT exist after
           (request.resource.data.voteCount < resource.data.voteCount &&
            exists(/databases/$(database)/documents/votes/$(request.auth.uid + '_' + docId)) &&
            getAfter(/databases/$(database)/documents/votes/$(request.auth.uid + '_' + docId)) == null)
         );
}

allow update: if isValidCounterUpdate(docId) && ...
```
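The ±1-and-nothing-else portion of this check can be mirrored client-side as a pure function (a sketch; the names are hypothetical). It cannot replace the `exists`/`getAfter()` consistency checks, which only the rules engine can perform against the live database.

```javascript
// Hypothetical mirror of the atomic-increment portion of the rule:
// the counter may change by exactly +1 or -1, and no other field may change.
function isValidCounterDelta(before, after, counterField) {
  // Which top-level fields differ between the two snapshots?
  const keysChanged = Object.keys(after).filter((k) => after[k] !== before[k]);
  const onlyCounterChanged =
    keysChanged.length === 1 && keysChanged[0] === counterField;
  const delta = Math.abs(after[counterField] - before[counterField]);
  return onlyCounterChanged && delta === 1;
}
```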
#### 5. **CRITICAL:** Ensure Application Validity

When updating the Firestore rules, also verify that the application still works under the updated rules.

3. **For each collection, implement explicit data validation:**

   - Type checking: `field is string`, `field is number`, `field is bool`, `field is timestamp`
   - Required fields validation using `hasRequiredFields()`
   - **Enforce Size Limits:** For **EVERY** string, list, and map field, you **MUST** enforce realistic size limits (e.g., `text.size() < 1000`, `tags.size() < 20`). **Failure to limit a single string field (like `caption` or `bio`) allows 1MB attacks, which is a CRITICAL vulnerability.**
   - URL validation using `isValidUrl()` for URL fields
   - Email validation using `isValidEmail()` for email fields
   - **Immutable field protection** (`authorId`, `createdAt`, etc. should not change on update)
   - **UID protection** using `uidUnchanged()` on creates and `uidNotModified()` on updates, combined with `isDocOwner()`
   - **Temporal accuracy** using `isRecent()` for timestamps.
   - **Range validation** using `isPositive()` or similar for numbers.
   - **Path scoping** using `isScopedPath()` for storage paths.

Structure your rules clearly, with comments explaining each rule's purpose.
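Several of the validators named above are referenced but never defined in this document. Hedged sketches follow; the regexes and size limits are illustrative assumptions, not canonical definitions, and should be tuned to the actual data model:

```javascript
// Sketches only — patterns and limits are assumptions to adapt per app.
function hasRequiredFields(required) {
  return request.resource.data.keys().hasAll(required);
}

function isValidEmail(email) {
  return email is string &&
         email.size() < 100 &&
         // Note the double backslash required inside rules strings
         email.matches('^[^@]+@[^@]+\\.[^@]+$');
}

function isValidUrl(url) {
  return url is string &&
         url.size() < 500 &&
         url.matches('^https://.*');
}

function isRecent(ts) {
  // Client-supplied timestamps must match the server's request time
  return ts == request.time;
}
```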
#### Phase-3: Devil's Advocate Attack

**Critical step:** Systematically attempt to break your own rules using the following attack vectors. You MUST document the outcome of each attempt.

1. **Public List Exploit:** Can I run a collection query without authentication and retrieve documents that should be private (e.g., where `visible == false`)?
2. **Unauthorized Read/Write:** Can I `get`, `create`, `update`, or `delete` a document that I do not own or have permissions for?
3. **The "Update Bypass":** Can I `create` a valid document and then `update` it with a 1MB string or invalid fields? (Tests if validation logic is missing from `update`.)
4. **Ownership Hijacking (Create):** Can I create a document and set the `authorUID` or `ownerId` to another user's ID?
5. **Ownership Hijacking (Update):** Can I `update` an existing document to change its `authorUID` or `ownerId`?
6. **Immutable Field Modification:** Can I change `createdAt` or another immutable timestamp or property on an `update`?
7. **Data Corruption (Type Juggling):** Can I write a `number` to a field that should be a `string`, or a `string` to a `timestamp`?
8. **Validation Bypass (Create vs. Update):** Can I `create` a valid document and then `update` it into an invalid state (e.g., remove a required field, write a string that's too long)?
9. **Resource Exhaustion / DoS:** Can I write an enormous string (e.g., 1MB) to any field that accepts a string, or a massive array to a list field? Every string field (e.g., `bio`, `url`, `name`) MUST have a `.size()` check. If any are missing, it's a "Resource Exhaustion/DoS" risk.
10. **Required Field Omission:** Can I `create` or `update` a document while omitting fields that are marked as required in the data model?
11. **Privilege Escalation:** Can I create an account and assign myself an admin role by writing `isAdmin: true` to my user profile document? (Tests reliance on document data vs. custom claims.)
12. **Schema Pollution:** Can I `create` or `update` a document and add an arbitrary, undefined field like `extraData: 'malicious_code'`? (Tests for strict schema enforcement.)
13. **Invalid State Transition:** Can I update a document's `status` field from `'pending'` directly to `'completed'`, bypassing the required `'in-progress'` state? (Tests business logic enforcement.)
14. **Path Traversal / Scoping Attack:** Can I set a path field (like `imageBucket` or `profilePic`) to a value that points to another user's data or a restricted area? (Tests for regex path scoping.)
15. **Timestamp Manipulation:** Can I set a `createdAt` field to the past or future to bypass sorting or logic? (Tests for `request.time` validation.)
16. **Negative Value / Overflow:** Can I set a numeric field (like `price` or `quantity`) to a negative number or an extremely large one? (Tests for range validation.)
17. **The "Mixed Content" Leak:** Create a second user. Can User B read User A's `users` document? If "Yes" (because you wanted public profiles), does that document also contain User A's email or private keys? If both are true, the rules are insecure.
18. **Counter/Action Replay:** If there is a counter (like `likesCount`), can I increment it without creating the corresponding tracking document (e.g., inside `likes/{userId}`)? Can I increment it twice? (Tests for `getAfter()` consistency checks.)
19. **Orphaned Subcollection Access:** Can I read/write a subcollection (e.g., `users/123/posts/456`) if the parent document (`users/123`) does not exist? (Tests for parent existence checks.)
20. **Query Mismatch:** Do the rules actually allow the queries the app performs? (e.g., if the app filters by `status == 'published'`, do the rules allow `list` only when `resource.data.status == 'published'`?)
21. **Validator Pattern Check:** Do **ALL** `update` rules (including owner-only ones) call the corresponding `isValidX()` function? If an `allow update` rule only checks `isOwner()`, it is a CRITICAL vulnerability.

Document each attack attempt and whether it succeeded. If ANY attack succeeds:

- Fix the security hole
- Regenerate the rules
- **Repeat Phase-3** until no attacks succeed
#### Phase-4: Syntactic Validation

Once devil's advocate testing passes, validate the rules syntax and repeat until the rules pass validation.

**After all phases are complete, create or update the `firestore.rules` file.**
### Critical Constraints
1. **Never skip the devil's advocate phase** - this is your primary security validation.
2. **MUST include helper functions** for common operations (`isAuthenticated`, `isOwner`, `uidUnchanged`, `uidNotModified`) AND domain validators (`isValidUser`, etc.).
3. **MUST document assumed data models** at the beginning of the rules file.
4. **Always validate the rules syntax** using `firebase deploy --only firestore:rules --dry-run` or a similar tool before outputting the final file.
5. **Provide complete, runnable code** - no placeholders or TODOs.
6. **Document all assumptions** about data structure or access patterns.
7. **Always run the devil's advocate attack** after any modification of the rules.
8. **Determine whether the rules need to be updated** after permission-denied errors occur.
9. **Do not make overly confident guarantees about the security of rules you have generated.** It is very difficult to exhaustively guarantee that a rule set has no vulnerabilities, and it is vital not to mislead users into thinking their rules are perfect. After an initial rules generation, describe the rules you've written as a solid prototype, and tell users that before they launch their app to a large audience, they should work with you to harden and validate the rules file. Be clear that users should carefully review the rules to ensure security.
# Firestore Web SDK Usage Guide

This guide focuses on the **Modular Web SDK** (v9+), which is tree-shakeable and efficient.

## Initialization

```javascript
import { initializeApp } from "firebase/app";
import { getFirestore } from "firebase/firestore";

// If running in Firebase App Hosting, you can skip the Firebase config and instead use:
// const app = initializeApp();

const firebaseConfig = {
  // Your config options. Get the values by running 'npx -y firebase-tools@latest apps:sdkconfig <platform> <app-id>'
};

const app = initializeApp(firebaseConfig);
const db = getFirestore(app);
```

## Writing Data

### Set a Document (`setDoc`)
Creates a document if it doesn't exist, or overwrites it if it does.

```javascript
import { doc, setDoc } from "firebase/firestore";

// Create/overwrite document with ID "LA"
await setDoc(doc(db, "cities", "LA"), {
  name: "Los Angeles",
  state: "CA",
  country: "USA"
});

// To merge with existing data instead of overwriting:
await setDoc(doc(db, "cities", "LA"), { population: 3900000 }, { merge: true });
```

### Add a Document with Auto-ID (`addDoc`)
Use when you don't care about the document ID.

```javascript
import { collection, addDoc } from "firebase/firestore";

const docRef = await addDoc(collection(db, "cities"), {
  name: "Tokyo",
  country: "Japan"
});
console.log("Document written with ID: ", docRef.id);
```

### Update a Document (`updateDoc`)
Updates some fields of an existing document without overwriting the entire document. Fails if the document doesn't exist.

```javascript
import { doc, updateDoc } from "firebase/firestore";

const laRef = doc(db, "cities", "LA");

await updateDoc(laRef, {
  capital: true
});
```

### Transactions
Perform an atomic read-modify-write operation.

```javascript
import { runTransaction, doc } from "firebase/firestore";

const sfDocRef = doc(db, "cities", "SF");

try {
  await runTransaction(db, async (transaction) => {
    const sfDoc = await transaction.get(sfDocRef);
    if (!sfDoc.exists()) {
      throw new Error("Document does not exist!");
    }

    const newPopulation = sfDoc.data().population + 1;
    transaction.update(sfDocRef, { population: newPopulation });
  });
  console.log("Transaction successfully committed!");
} catch (e) {
  console.log("Transaction failed: ", e);
}
```

## Reading Data

### Get a Single Document (`getDoc`)

```javascript
import { doc, getDoc } from "firebase/firestore";

const docRef = doc(db, "cities", "SF");
const docSnap = await getDoc(docRef);

if (docSnap.exists()) {
  console.log("Document data:", docSnap.data());
} else {
  console.log("No such document!");
}
```

### Get Multiple Documents (`getDocs`)
Fetches all documents in a query or collection once.

```javascript
import { collection, getDocs } from "firebase/firestore";

const querySnapshot = await getDocs(collection(db, "cities"));
querySnapshot.forEach((doc) => {
  // doc.data() is never undefined for query doc snapshots
  console.log(doc.id, " => ", doc.data());
});
```

## Realtime Updates

### Listen to a Document/Query (`onSnapshot`)

```javascript
import { doc, onSnapshot } from "firebase/firestore";

const unsub = onSnapshot(doc(db, "cities", "SF"), (doc) => {
  console.log("Current data: ", doc.data());
});

// Stop listening
// unsub();
```

### Handle Changes (Added/Modified/Removed)

```javascript
import { collection, query, where, onSnapshot } from "firebase/firestore";

const q = query(collection(db, "cities"), where("state", "==", "CA"));
const unsubscribe = onSnapshot(q, (snapshot) => {
  snapshot.docChanges().forEach((change) => {
    if (change.type === "added") {
      console.log("New city: ", change.doc.data());
    }
    if (change.type === "modified") {
      console.log("Modified city: ", change.doc.data());
    }
    if (change.type === "removed") {
      console.log("Removed city: ", change.doc.data());
    }
  });
});
```

## Queries

### Simple and Compound Queries
Use `query()` to combine filters.

```javascript
import { collection, query, where, getDocs } from "firebase/firestore";

const citiesRef = collection(db, "cities");

// Simple equality
const q1 = query(citiesRef, where("state", "==", "CA"));

// Compound (AND)
// Note: Requires a composite index if filtering on different fields
const q2 = query(citiesRef, where("state", "==", "CA"), where("population", ">", 1000000));
```

### Order and Limit
Sort and limit results.

```javascript
import { orderBy, limit } from "firebase/firestore";

const q = query(citiesRef, orderBy("name"), limit(3));
```
---
name: firebase-hosting-basics
description: Skill for working with Firebase Hosting (Classic). Use this when you want to deploy static web apps, Single Page Apps (SPAs), or simple microservices. Do NOT use for Firebase App Hosting.
---

# hosting-basics

This skill provides instructions and references for working with Firebase Hosting, a fast and secure hosting service for your web app, static and dynamic content, and microservices.

## Overview

Firebase Hosting provides production-grade web content hosting for developers. With a single command, you can deploy web apps and serve both static and dynamic content to a global CDN (content delivery network).

**Key Features:**
- **Fast Content Delivery:** Files are cached on SSDs at CDN edges around the world.
- **Secure by Default:** Zero-configuration SSL is built in.
- **Preview Channels:** View and test changes on temporary preview URLs before deploying live.
- **GitHub Integration:** Automate previews and deploys with GitHub Actions.
- **Dynamic Content:** Serve dynamic content and microservices using Cloud Functions or Cloud Run.

## Hosting vs App Hosting

**Choose Firebase Hosting if:**
- You are deploying a static site (HTML/CSS/JS).
- You are deploying a simple SPA (React, Vue, etc. without SSR).
- You want full control over the build and deploy process via the CLI.

**Choose Firebase App Hosting if:**
- You are using a supported full-stack framework like Next.js or Angular.
- You need Server-Side Rendering (SSR) or Incremental Static Regeneration (ISR).
- You want an automated "git push to deploy" workflow with zero configuration.

## Instructions

### 1. Configuration (`firebase.json`)
For details on configuring Hosting behavior, including public directories, redirects, rewrites, and headers, see [configuration.md](references/configuration.md).

### 2. Deploying
For instructions on deploying your site, using preview channels, and managing releases, see [deploying.md](references/deploying.md).

### 3. Emulation
To test your app locally:
```bash
npx -y firebase-tools@latest emulators:start --only hosting
```
This serves your app at `http://localhost:5000` by default.
# Hosting Configuration (`firebase.json`)

The `hosting` section of `firebase.json` configures how your site is deployed and served.

## Key Attributes

### `public` (Required)
Specifies the directory to deploy to Firebase Hosting.
```json
"hosting": {
  "public": "public"
}
```

### `ignore` (Optional)
Files to ignore on deploy. Uses glob patterns (like `.gitignore`).
**Default ignores:** `firebase.json`, `**/.*`, `**/node_modules/**`

### `redirects` (Optional)
URL redirects to prevent broken links or shorten URLs.
```json
"redirects": [
  {
    "source": "/foo",
    "destination": "/bar",
    "type": 301
  }
]
```

### `rewrites` (Optional)
Serve the same content for multiple URLs, useful for SPAs or dynamic content.
```json
"rewrites": [
  {
    "source": "**",
    "destination": "/index.html"
  },
  {
    "source": "/api/**",
    "function": "apiFunction"
  },
  {
    "source": "/container/**",
    "run": {
      "serviceId": "helloworld",
      "region": "us-central1"
    }
  }
]
```

### `headers` (Optional)
Custom response headers.
```json
"headers": [
  {
    "source": "**/*.@(eot|otf|ttf|ttc|woff|font.css)",
    "headers": [
      {
        "key": "Access-Control-Allow-Origin",
        "value": "*"
      }
    ]
  }
]
```

### `cleanUrls` (Optional)
If `true`, drops the `.html` extension from URLs.
```json
"cleanUrls": true
```

### `trailingSlash` (Optional)
Controls trailing slashes in static content URLs.
- `true`: Adds a trailing slash.
- `false`: Removes the trailing slash.

## Full Example

```json
{
  "hosting": {
    "public": "dist",
    "ignore": [
      "firebase.json",
      "**/.*",
      "**/node_modules/**"
    ],
    "rewrites": [
      {
        "source": "**",
        "destination": "/index.html"
      }
    ],
    "cleanUrls": true,
    "trailingSlash": false
  }
}
```
# Deploying to Firebase Hosting

## Standard Deployment
To deploy your Hosting content and configuration to your live site:

```bash
npx -y firebase-tools@latest deploy --only hosting
```

This deploys to your default sites (`PROJECT_ID.web.app` and `PROJECT_ID.firebaseapp.com`).

## Preview Channels
Preview channels allow you to test changes on a temporary URL before going live.

### Deploy to a Preview Channel
```bash
npx -y firebase-tools@latest hosting:channel:deploy CHANNEL_ID
```
Replace `CHANNEL_ID` with a name (e.g., `feature-beta`).
This returns a preview URL like `PROJECT_ID--CHANNEL_ID-RANDOM_HASH.web.app`.

### Expiration
Channels expire after 7 days by default. To set a different expiration:
```bash
npx -y firebase-tools@latest hosting:channel:deploy CHANNEL_ID --expires 1d
```

## Cloning to Live
You can promote a version from a preview channel to your live channel without rebuilding.

```bash
npx -y firebase-tools@latest hosting:clone SOURCE_SITE_ID:SOURCE_CHANNEL_ID TARGET_SITE_ID:live
```

**Example:**
Clone the `feature-beta` channel on your default site to live:
```bash
npx -y firebase-tools@latest hosting:clone my-project:feature-beta my-project:live
```
---
|
||||||
|
name: firestore-security-rules-auditor
|
||||||
|
description: A skill to evaluate how secure Firestore security rules are. Use this when Firestore security rules are updated to ensure that the generated rules are extremely secure and robust.
|
||||||
|
---
|
||||||
|
|
||||||
|
# Overview
|
||||||
|
This skill acts as an auditor for Firebase Security Rules, evaluating them against a rigorous set of criteria to ensure they are secure, robust, and correctly implemented.
|
||||||
|
|
||||||
|
# Scoring Criteria
|
||||||
|
<!-- Please paste the exact criteria on how to score the rules below. -->
|
||||||
|
## Assessment: Security Validator (Red Team Edition)
|
||||||
|
You are a Senior Security Auditor and Penetration Tester specializing in Firestore. Your goal is to find "the hole in the wall." Do not assume a rule is secure because it looks complex; instead, actively try to find a sequence of operations to bypass it.
|
||||||
|
|
||||||
|
### Mandatory Audit Checklist:
|
||||||
|
1. **The Update Bypass:** Compare 'create' and 'update' rules. Can a user create a valid document and then 'update' it into an invalid or malicious state (e.g., changing their role, bypassing size limits, or corrupting data types)?
|
||||||
|
2. **Authority Source:** Does the security rely on user-provided data (request.resource.data) for sensitive fields like 'role', 'isAdmin', or 'ownerId'? Carefully consider the source for that authority.
|
||||||
|
3. **Business Logic vs. Rules:** Does the rule set actually support the app's purpose? (e.g., In a collaboration app, can collaborators actually read the data? If not, the rules are "broken" or will force insecure workarounds).
|
||||||
|
4. **Storage Abuse:** Are there string length or array size limits? If not, label it as a "Resource Exhaustion/DoS" risk.
|
||||||
|
5. **Type Safety:** Are fields checked with 'is string', 'is int', or 'is timestamp'?
|
||||||
|
6. **Field-Level vs. Identity-Level Security:** Be careful with rules that use `hasOnly()` or `diff()`. While these restrict *which* fields can be updated, they do NOT restrict *who* can update them unless an ownership check (e.g., `resource.data.uid == request.auth.uid`) is also present. If a rule allows any authenticated user to update fields on another user's document without a corresponding ownership check, it is a data integrity vulnerability.

### Admin Bootstrapping & Privileges

The admin bootstrapping process is limited in this app. If the rules use a single hardcoded admin email (e.g., checking `request.auth.token.email == 'admin@example.com'`), this should NOT count against the score as long as:

- `email_verified` is also checked (`request.auth.token.email_verified == true`).
- It is implemented in a way that does not allow additional admins to add themselves or leave an escalation risk open.

### Scoring Criteria (1-5)

- **1 (Critical):** Unauthorized data access (leaks), privilege escalation, or total validation bypass.
- **2 (Major):** Broken business logic, self-assigned roles, or bypass of controls.
- **3 (Moderate):** PII exposure (e.g., public emails), or inconsistent validation (create vs. update) on critical fields.
- **4 (Minor):** Problems that result in self-data corruption, such as update bypasses that only impact the user's own data, lack of size limits, missing minor type checks, or over-permissive read access on non-sensitive fields.
- **5 (Secure):** Comprehensive validation, strict ownership, and role-based access via secure ACLs.

Return your assessment in JSON format using the following structure:

{
  "score": 1-5,
  "summary": "overall assessment",
  "findings": [
    {
      "check": "checklist item",
      "severity": "critical|major|moderate|minor",
      "issue": "description",
      "recommendation": "fix"
    }
  ]
}
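As a concrete illustration of item 6, here is a minimal sketch of an ownership-gated field update in Firestore Security Rules. The `users` collection and `displayName` field are hypothetical; the point is that the field-level restriction and the identity-level check must appear together:

```
service cloud.firestore {
  match /databases/{database}/documents {
    match /users/{uid} {
      // hasOnly() restricts WHICH fields may change;
      // the uid comparison restricts WHO may change them.
      allow update: if request.auth != null
        && request.auth.uid == uid  // identity-level ownership check
        && request.resource.data.diff(resource.data)
             .affectedKeys().hasOnly(['displayName']);  // field-level restriction
    }
  }
}
```

Dropping the `request.auth.uid == uid` line would leave the field restriction intact while letting any signed-in user edit any other user's `displayName` — exactly the data integrity vulnerability the checklist item describes.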
@@ -0,0 +1,35 @@
version.json,1776641270181,85b44cf9c484465a153bfcbc45e51b5bec0c6b0bbd08ddb748676f1be9148966
manifest.json,1776641271305,5dbc40d112feda18ef93319d9dc901933db018ec87c163ee1792eeef94c6c09e
flutter_service_worker.js,1776643113637,e1f4135191a080af3d1ba569557b5277252b9b9423a6be83c963274d08d39f5f
index.html,1776643085343,e41d5c8da63bc8625a0fb6c6ed8516a6876091a299138e2f34735c479ed68847
favicon.png,1776641271305,18f5974cf0a7a778cc393e3cb31725d0a85970c8df2b52688cacce7e4dc99342
icons/Icon-maskable-192.png,1776641271305,3b3af8b09702ed3f063219466bd9e044dbd2703372b1402f6632920ba9c251de
flutter_bootstrap.js,1776643085338,de59f25cc72fcb37144c8fdbd5d6687f10910d9ce86150d50071a5cea7c723f0
flutter.js,1776643085064,c86e9ec2a28b0fa50e6efcc196aec60be02c96a5b0bec888ea7e1c986d1b8c68
icons/Icon-512.png,1776641271305,7fdb40ad7ef7200d89786b1266b41d441eea7872d4bdccaf8a954429aab62c5f
icons/Icon-192.png,1776641271305,2a485edb60e647cf0496f958c90c1ebdb3b9a82dc44260370b9094842fab3906
assets/FontManifest.json,1776641270262,00798c3c5766cdc753371ca1934749c9fe9b8969de56bc54e9ed1c90b3d669fa
assets/AssetManifest.bin.json,1776641270262,6dab3bc22d8651a5fa292a09e03cfd63b9c06be7d92099be1e7c492a94623f6b
assets/AssetManifest.bin,1776641270262,8b6072cd7e29821eb7524c128faf9c41e69579e2a2fc8dd76b5140802f2e0de6
assets/shaders/stretch_effect.frag,1776641270339,5936632aca690916d8c46a627c9bac73438a20ac1a3d10464c93f7fff94eabec
assets/packages/cupertino_icons/assets/CupertinoIcons.ttf,1776641271299,10aa1f084fa7612decf021fb5a8aefa4a7d2f427d7a02fa778962dd20b814c29
assets/shaders/ink_sparkle.frag,1776641270334,5173811eeecee36d442b414b44ec54993cadd39e44568fb88a24dc7ff3dd1fca
icons/Icon-maskable-512.png,1776641271305,8d3ccbc9307d2a1ede82dddcd23c11d39a00be576296b9358903ee746206cde2
assets/fonts/MaterialIcons-Regular.otf,1776641271303,ca87fff66393b1255b3684a3cb9148345d7815d0777dd7a75fdab9f8ddb33d72
canvaskit/wimp.js,1776643085061,155c03184c816768553bdabac007c209b0b080d5e9c3689afc53a55e08337dcc
canvaskit/skwasm_heavy.js,1776643085036,b6844567b8a37fb14d5662b11016d44581d63cad3a8d81002ccda2df422f6b06
canvaskit/skwasm.js,1776643085061,ca9eea4a35f28735e5db9891a9761e61362eb920e207c9e92b44bf1b219411a4
canvaskit/canvaskit.js,1776643085061,fa4d7a33ccb6339247171eab8a5caddecbb0e5a4434ce9053f35363f2b6b189c
canvaskit/chromium/canvaskit.js,1776643085058,4d89e6a67e87c4b16ce2aeeeebc2d45679bd496a261052ee18267abad2c48a7e
assets/NOTICES,1776641270263,d95a969eb5a115802462eb514403d473d9def4cd76e6bc20a95f59bfa26d1bd2
canvaskit/chromium/canvaskit.js.symbols,1776643085059,9e879f9e28effde984d429c56a54310544f269be49bda13be0c379de9cb71b43
canvaskit/canvaskit.js.symbols,1776643085063,c1799ed8ced2395383e855ba33e33f51e00a545c0b0e092886edb8ac26caf4e7
canvaskit/skwasm.js.symbols,1776643085035,91d396bb2de3a8e36c8a7aeba131dd588da90c3f3d0bc0aeb1ec27386a79ab51
canvaskit/wimp.js.symbols,1776643085061,bfff908f573a0af5c2f9f35f6c62e2bd62b1285be0b6a2be749c9de5bf579968
canvaskit/skwasm_heavy.js.symbols,1776643085036,a264488be276a5f5dfed794072e80fdaab6abd0d932aec0ab42046d0efa646ae
main.dart.js,1776641270179,3698c745ceb8fce3196e3eb7f8bd7e882a9cf220b36c2670cb547abb87ccbd2f
canvaskit/wimp.wasm,1776643085037,1613355926b40c90ff7aef18576a62fdb429e27c98096f3a4b8218117ccabb06
canvaskit/skwasm.wasm,1776643085060,4aa67861d67269e617537b1795832531ddd79e95e8afed6a0cbd565ed78a960b
canvaskit/chromium/canvaskit.wasm,1776643085041,e545908d5764c6a49776fcaeecbbba2935aab38b7e288549141701cbae597d72
canvaskit/skwasm_heavy.wasm,1776643085063,05e0bbd67d9f87e8852a85cc9c7b794564f42b2ce66930579ed4917e447c788f
canvaskit/canvaskit.wasm,1776643085039,bdf154888353ea6c1b503ce59585f72eccf24a7f197019266c94d2b0216b5d5c
@@ -0,0 +1,5 @@
{
  "projects": {
    "default": "onsol-go"
  }
}
@@ -0,0 +1,45 @@
# Miscellaneous
*.class
*.log
*.pyc
*.swp
.DS_Store
.atom/
.build/
.buildlog/
.history
.svn/
.swiftpm/
migrate_working_dir/

# IntelliJ related
*.iml
*.ipr
*.iws
.idea/

# The .vscode folder contains launch configuration and tasks you configure in
# VS Code which you may wish to be included in version control, so this line
# is commented out by default.
#.vscode/

# Flutter/Dart/Pub related
**/doc/api/
**/ios/Flutter/.last_build_id
.dart_tool/
.flutter-plugins-dependencies
.pub-cache/
.pub/
/build/
/coverage/

# Symbolication related
app.*.symbols

# Obfuscation related
app.*.map.json

# Android Studio will place build artifacts here
/android/app/debug
/android/app/profile
/android/app/release
@@ -0,0 +1,45 @@
# This file tracks properties of this Flutter project.
# Used by Flutter tool to assess capabilities and perform upgrades etc.
#
# This file should be version controlled and should not be manually edited.

version:
  revision: "2c9eb20739dfec95e2c74bd3dfa4601b0a8a36aa"
  channel: "stable"

project_type: app

# Tracks metadata for the flutter migrate command
migration:
  platforms:
    - platform: root
      create_revision: 2c9eb20739dfec95e2c74bd3dfa4601b0a8a36aa
      base_revision: 2c9eb20739dfec95e2c74bd3dfa4601b0a8a36aa
    - platform: android
      create_revision: 2c9eb20739dfec95e2c74bd3dfa4601b0a8a36aa
      base_revision: 2c9eb20739dfec95e2c74bd3dfa4601b0a8a36aa
    - platform: ios
      create_revision: 2c9eb20739dfec95e2c74bd3dfa4601b0a8a36aa
      base_revision: 2c9eb20739dfec95e2c74bd3dfa4601b0a8a36aa
    - platform: linux
      create_revision: 2c9eb20739dfec95e2c74bd3dfa4601b0a8a36aa
      base_revision: 2c9eb20739dfec95e2c74bd3dfa4601b0a8a36aa
    - platform: macos
      create_revision: 2c9eb20739dfec95e2c74bd3dfa4601b0a8a36aa
      base_revision: 2c9eb20739dfec95e2c74bd3dfa4601b0a8a36aa
    - platform: web
      create_revision: 2c9eb20739dfec95e2c74bd3dfa4601b0a8a36aa
      base_revision: 2c9eb20739dfec95e2c74bd3dfa4601b0a8a36aa
    - platform: windows
      create_revision: 2c9eb20739dfec95e2c74bd3dfa4601b0a8a36aa
      base_revision: 2c9eb20739dfec95e2c74bd3dfa4601b0a8a36aa

# User provided section

# List of Local paths (relative to this file) that should be
# ignored by the migrate tool.
#
# Files that are not part of the templates will be ignored by default.
unmanaged_files:
  - 'lib/main.dart'
  - 'ios/Runner.xcodeproj/project.pbxproj'
@@ -0,0 +1,17 @@
# onsolgo

A new Flutter project.

## Getting Started

This project is a starting point for a Flutter application.

A few resources to get you started if this is your first Flutter project:

- [Learn Flutter](https://docs.flutter.dev/get-started/learn-flutter)
- [Write your first Flutter app](https://docs.flutter.dev/get-started/codelab)
- [Flutter learning resources](https://docs.flutter.dev/reference/learning-resources)

For help getting started with Flutter development, view the
[online documentation](https://docs.flutter.dev/), which offers tutorials,
samples, guidance on mobile development, and a full API reference.
@@ -0,0 +1,28 @@
# This file configures the analyzer, which statically analyzes Dart code to
# check for errors, warnings, and lints.
#
# The issues identified by the analyzer are surfaced in the UI of Dart-enabled
# IDEs (https://dart.dev/tools#ides-and-editors). The analyzer can also be
# invoked from the command line by running `flutter analyze`.

# The following line activates a set of recommended lints for Flutter apps,
# packages, and plugins designed to encourage good coding practices.
include: package:flutter_lints/flutter.yaml

linter:
  # The lint rules applied to this project can be customized in the
  # section below to disable rules from the `package:flutter_lints/flutter.yaml`
  # included above or to enable additional rules. A list of all available lints
  # and their documentation is published at https://dart.dev/lints.
  #
  # Instead of disabling a lint rule for the entire project in the
  # section below, it can also be suppressed for a single line of code
  # or a specific dart file by using the `// ignore: name_of_lint` and
  # `// ignore_for_file: name_of_lint` syntax on the line or in the file
  # producing the lint.
  rules:
    # avoid_print: false  # Uncomment to disable the `avoid_print` rule
    # prefer_single_quotes: true  # Uncomment to enable the `prefer_single_quotes` rule

# Additional information about this file can be found at
# https://dart.dev/guides/language/analysis-options
@@ -0,0 +1,14 @@
gradle-wrapper.jar
/.gradle
/captures/
/gradlew
/gradlew.bat
/local.properties
GeneratedPluginRegistrant.java
.cxx/

# Remember to never publicly share your keystore.
# See https://flutter.dev/to/reference-keystore
key.properties
**/*.keystore
**/*.jks
@@ -0,0 +1,47 @@
plugins {
    id("com.android.application")
    // START: FlutterFire Configuration
    id("com.google.gms.google-services")
    // END: FlutterFire Configuration
    id("kotlin-android")
    // The Flutter Gradle Plugin must be applied after the Android and Kotlin Gradle plugins.
    id("dev.flutter.flutter-gradle-plugin")
}

android {
    namespace = "com.stnebula.onsolgo.onsolgo"
    compileSdk = flutter.compileSdkVersion
    ndkVersion = flutter.ndkVersion

    compileOptions {
        sourceCompatibility = JavaVersion.VERSION_17
        targetCompatibility = JavaVersion.VERSION_17
    }

    kotlinOptions {
        jvmTarget = JavaVersion.VERSION_17.toString()
    }

    defaultConfig {
        // TODO: Specify your own unique Application ID (https://developer.android.com/studio/build/application-id.html).
        applicationId = "com.stnebula.onsolgo.onsolgo"
        // You can update the following values to match your application needs.
        // For more information, see: https://flutter.dev/to/review-gradle-config.
        minSdk = flutter.minSdkVersion
        targetSdk = flutter.targetSdkVersion
        versionCode = flutter.versionCode
        versionName = flutter.versionName
    }

    buildTypes {
        release {
            // TODO: Add your own signing config for the release build.
            // Signing with the debug keys for now, so `flutter run --release` works.
            signingConfig = signingConfigs.getByName("debug")
        }
    }
}

flutter {
    source = "../.."
}
@@ -0,0 +1,29 @@
{
  "project_info": {
    "project_number": "128426171433",
    "project_id": "onsol-go",
    "storage_bucket": "onsol-go.firebasestorage.app"
  },
  "client": [
    {
      "client_info": {
        "mobilesdk_app_id": "1:128426171433:android:618cd582a847a9a5fe5038",
        "android_client_info": {
          "package_name": "com.stnebula.onsolgo.onsolgo"
        }
      },
      "oauth_client": [],
      "api_key": [
        {
          "current_key": "AIzaSyAoS2uuI3uX4XwA0oPVyX94j-HV1MeLlOw"
        }
      ],
      "services": {
        "appinvite_service": {
          "other_platform_oauth_client": []
        }
      }
    }
  ],
  "configuration_version": "1"
}
@@ -0,0 +1,7 @@
<manifest xmlns:android="http://schemas.android.com/apk/res/android">
    <!-- The INTERNET permission is required for development. Specifically,
         the Flutter tool needs it to communicate with the running application
         to allow setting breakpoints, to provide hot reload, etc.
    -->
    <uses-permission android:name="android.permission.INTERNET"/>
</manifest>
@@ -0,0 +1,47 @@
<manifest xmlns:android="http://schemas.android.com/apk/res/android">
    <uses-permission android:name="android.permission.INTERNET"/>

    <application
        android:label="Onsol-GO!"
        android:name="${applicationName}"
        android:icon="@mipmap/launcher_icon">

        <!-- GOOGLE ADMOB ID (MANDATORY) -->
        <!-- This is a Test ID. Swap with your real ID from the AdMob Dashboard before Play Store launch -->
        <meta-data
            android:name="com.google.android.gms.ads.APPLICATION_ID"
            android:value="ca-app-pub-3940256099942544~3347511713"/>

        <activity
            android:name=".MainActivity"
            android:exported="true"
            android:launchMode="singleTop"
            android:taskAffinity=""
            android:theme="@style/LaunchTheme"
            android:configChanges="orientation|keyboardHidden|keyboard|screenSize|smallestScreenSize|locale|layoutDirection|fontScale|screenLayout|density|uiMode"
            android:hardwareAccelerated="true"
            android:windowSoftInputMode="adjustResize">

            <meta-data
                android:name="io.flutter.embedding.android.NormalTheme"
                android:resource="@style/NormalTheme"
                />

            <intent-filter>
                <action android:name="android.intent.action.MAIN"/>
                <category android:name="android.intent.category.LAUNCHER"/>
            </intent-filter>
        </activity>

        <meta-data
            android:name="flutterEmbedding"
            android:value="2" />
    </application>

    <queries>
        <intent>
            <action android:name="android.intent.action.PROCESS_TEXT"/>
            <data android:mimeType="text/plain"/>
        </intent>
    </queries>
</manifest>
@@ -0,0 +1,5 @@
package com.stnebula.onsolgo.onsolgo

import io.flutter.embedding.android.FlutterActivity

class MainActivity : FlutterActivity()

After Width: | Height: | Size: 23 KiB
After Width: | Height: | Size: 11 KiB
@@ -0,0 +1,12 @@
<?xml version="1.0" encoding="utf-8"?>
<!-- Modify this file to customize your launch splash screen -->
<layer-list xmlns:android="http://schemas.android.com/apk/res/android">
    <item android:drawable="?android:colorBackground" />

    <!-- You can insert your own image assets here -->
    <!-- <item>
        <bitmap
            android:gravity="center"
            android:src="@mipmap/launch_image" />
    </item> -->
</layer-list>

After Width: | Height: | Size: 38 KiB
After Width: | Height: | Size: 81 KiB
After Width: | Height: | Size: 132 KiB
@@ -0,0 +1,12 @@
<?xml version="1.0" encoding="utf-8"?>
<!-- Modify this file to customize your launch splash screen -->
<layer-list xmlns:android="http://schemas.android.com/apk/res/android">
    <item android:drawable="@android:color/white" />

    <!-- You can insert your own image assets here -->
    <!-- <item>
        <bitmap
            android:gravity="center"
            android:src="@mipmap/launch_image" />
    </item> -->
</layer-list>

After Width: | Height: | Size: 544 B
After Width: | Height: | Size: 6.2 KiB
After Width: | Height: | Size: 442 B
After Width: | Height: | Size: 3.1 KiB
After Width: | Height: | Size: 721 B
After Width: | Height: | Size: 10 KiB
After Width: | Height: | Size: 1.0 KiB