Firebase AI Logic (Gemini)

Integrate Google's Gemini AI models into your Blazor application with streaming support, function calling, grounding, and multimodal capabilities.

Overview

Firebase AI Logic provides seamless integration with Google's Gemini AI models directly from your Blazor WebAssembly application. FireBlazor wraps the Firebase AI SDK to give you a native C# experience with full type safety.

Model Availability

The recommended model for most use cases is gemini-2.5-flash. This model provides an excellent balance of speed, quality, and cost. For more complex reasoning tasks, consider gemini-2.5-pro. Check the Firebase AI documentation for the latest available models.

Key features include:

  • Text Generation - Generate text responses from prompts
  • Streaming - Receive responses in real-time as they're generated
  • Multi-turn Chat - Maintain conversation context across multiple messages
  • Multimodal Input - Send text and images together
  • Function Calling - Let the model call your defined functions
  • Grounding - Ground responses with Google Search results
  • Image Generation - Generate images using Imagen models

Text Generation

Generate text responses from prompts using Gemini models. First, get a model instance with optional configuration, then call GenerateContentAsync.

Generation Config

Configure the model behavior using GenerationConfig. You can set system instructions, temperature, output token limits, and more.

var model = Firebase.AI.GetModel("gemini-2.5-flash", new GenerationConfig
{
    SystemInstruction = "You are a helpful assistant.",
    Temperature = 0.7f,
    MaxOutputTokens = 1024
});

var result = await model.GenerateContentAsync("Explain quantum computing in simple terms.");
Console.WriteLine(result.Value.Text);

GenerationConfig Properties:

  • SystemInstruction (string) - Instructions that define the model's behavior and persona
  • Temperature (float) - Controls randomness (0.0 = deterministic, 1.0 = creative)
  • MaxOutputTokens (int) - Maximum number of tokens in the response
  • TopP (float) - Nucleus sampling parameter
  • TopK (int) - Top-k sampling parameter

Streaming Responses

For a better user experience, stream responses as they're generated using GenerateContentStreamAsync. This allows you to display text progressively rather than waiting for the complete response.

await foreach (var chunk in model.GenerateContentStreamAsync("Write a story about a robot."))
{
    if (chunk.IsSuccess && !chunk.Value.IsFinal)
    {
        _response += chunk.Value.Text;
        StateHasChanged();
    }
}

The await foreach pattern provides a natural way to process streaming data in C#. Each chunk contains a portion of the response, and you can update your UI progressively.

Performance Tip

When using streaming in Blazor, consider using StateHasChanged() with debouncing or batching to avoid excessive re-renders. You can also use InvokeAsync(StateHasChanged) when updating from async contexts.
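As a rough sketch, that batching approach could look like the following (the model and streaming calls are the ones from the example above; the 50 ms threshold is an arbitrary choice, not a recommended value):

private string _response = "";
private DateTime _lastRender = DateTime.MinValue;

private async Task StreamWithBatchingAsync(string prompt)
{
    _response = "";
    await foreach (var chunk in model.GenerateContentStreamAsync(prompt))
    {
        if (chunk.IsSuccess && !chunk.Value.IsFinal)
        {
            _response += chunk.Value.Text;

            // Re-render at most every 50 ms instead of once per chunk.
            if ((DateTime.UtcNow - _lastRender).TotalMilliseconds > 50)
            {
                _lastRender = DateTime.UtcNow;
                await InvokeAsync(StateHasChanged);
            }
        }
    }

    // Final render to flush any remaining buffered text.
    await InvokeAsync(StateHasChanged);
}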

Multi-turn Chat

For conversational applications, use the chat interface to maintain context across multiple messages. The chat automatically manages conversation history.

var chat = model.StartChat(new ChatOptions
{
    History = previousMessages
});

// Send a message and get a response
var response = await chat.SendMessageAsync("What's the weather like?");

// Streaming chat response
await foreach (var chunk in chat.SendMessageStreamAsync("Tell me more."))
{
    if (chunk.IsSuccess && !chunk.Value.IsFinal)
    {
        _response += chunk.Value.Text;
        StateHasChanged();
    }
}

ChatOptions Properties:

  • History - Pre-populate the chat with previous messages for context

The chat maintains state between messages, so follow-up questions like "Tell me more" will have full context of the conversation.

Multimodal Input

Gemini models support multimodal input, allowing you to send both text and images in the same request. Use ContentPart to construct multimodal prompts.

var parts = new List<ContentPart>
{
    ContentPart.Text("What's in this image?"),
    ContentPart.Image(imageBytes, "image/png")
};

var result = await model.GenerateContentAsync(parts);

ContentPart Factory Methods:

  • ContentPart.Text(string text) - Create a text content part
  • ContentPart.Image(byte[] bytes, string mimeType) - Create an image content part from bytes

Supported image formats include PNG, JPEG, WebP, and GIF. The image bytes can come from file uploads, canvas captures, or any other source.
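For example, in Blazor the image bytes can come from an InputFile upload (the 10 MB cap below is an arbitrary limit for illustration; ContentPart and GenerateContentAsync are the APIs shown above):

// Razor markup: <InputFile OnChange="OnImageSelected" accept="image/*" />
private async Task OnImageSelected(InputFileChangeEventArgs e)
{
    // Read the uploaded file into a byte array (cap at 10 MB).
    using var stream = e.File.OpenReadStream(maxAllowedSize: 10 * 1024 * 1024);
    using var ms = new MemoryStream();
    await stream.CopyToAsync(ms);
    var imageBytes = ms.ToArray();

    var parts = new List<ContentPart>
    {
        ContentPart.Text("Describe this image."),
        ContentPart.Image(imageBytes, e.File.ContentType)
    };

    var result = await model.GenerateContentAsync(parts);
    _response = result.Value.Text;
}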

Function Calling

Function calling allows the model to request execution of functions you define. This enables the model to interact with external systems, APIs, or perform calculations.

var config = new GenerationConfig
{
    Tools = new[]
    {
        new FunctionDeclaration
        {
            Name = "get_weather",
            Description = "Get weather for a location",
            Parameters = new { location = "string" }
        }
    }
};

var model = Firebase.AI.GetModel("gemini-2.5-flash", config);
var result = await model.GenerateContentAsync("What's the weather in Tokyo?");

if (result.Value.HasFunctionCalls)
{
    foreach (var call in result.Value.FunctionCalls)
    {
        // Handle function call
        // call.Name - The function name (e.g., "get_weather")
        // call.Arguments - The arguments passed by the model
    }
}

FunctionDeclaration Properties:

  • Name (string) - Unique identifier for the function
  • Description (string) - Description of what the function does (helps the model decide when to call it)
  • Parameters (object) - JSON Schema describing the function parameters

After receiving function calls, execute the corresponding logic and send the results back to the model for a complete response.
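A hedged sketch of that round trip, using a chat session to keep the conversation going (ContentPart.FunctionResponse is an assumed helper, not a documented FireBlazor API, and GetWeatherAsync is a hypothetical application function; check the actual API surface before relying on these names):

var chat = model.StartChat(new ChatOptions());
var result = await chat.SendMessageAsync("What's the weather in Tokyo?");

if (result.Value.HasFunctionCalls)
{
    foreach (var call in result.Value.FunctionCalls)
    {
        if (call.Name == "get_weather")
        {
            // Execute your own logic with the model-supplied arguments.
            var weather = await GetWeatherAsync(call.Arguments);

            // Send the result back so the model can produce a final answer.
            // NOTE: ContentPart.FunctionResponse is an assumption; consult
            // the FireBlazor API for the actual way to return results.
            var followUp = await chat.SendMessageAsync(
                ContentPart.FunctionResponse(call.Name, weather));
            Console.WriteLine(followUp.Value.Text);
        }
    }
}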

Grounding with Google Search

Ground model responses with real-time information from Google Search. This is useful for questions about current events, recent data, or factual information that may be beyond the model's training data.

var config = new GenerationConfig
{
    Grounding = GroundingConfig.WithGoogleSearch()
};

var model = Firebase.AI.GetModel("gemini-2.5-flash", config);
var result = await model.GenerateContentAsync("What are the latest news about AI?");

if (result.Value.IsGrounded)
{
    // Response includes web search results
    Console.WriteLine(result.Value.Text);
}

When grounding is enabled, the model will search the web for relevant information and incorporate it into the response. The IsGrounded property indicates whether the response was successfully grounded with search results.

Image Generation

Generate images using Google's Imagen models. Image generation uses a separate model endpoint from text generation.

var imageModel = Firebase.AI.GetImageModel("imagen-4.0-generate-001");
var result = await imageModel.GenerateImagesAsync(
    "A serene mountain landscape at sunset",
    new ImageGenerationConfig { NumberOfImages = 1 });

ImageGenerationConfig Properties:

  • NumberOfImages (int) - Number of images to generate (1-4)
  • AspectRatio (string) - Aspect ratio of generated images (e.g., "16:9", "1:1")
  • NegativePrompt (string) - Elements to exclude from the generated image

Image Model Availability

Image generation models like imagen-4.0-generate-001 may require additional setup in your Firebase project. Check your Firebase console and ensure the appropriate APIs are enabled.