Patrick Loeber for Google AI

Posted on • Originally published at blog.google

Gemini Embedding 2: Our first natively multimodal embedding model

Today we're releasing Gemini Embedding 2, our first fully multimodal embedding model built on the Gemini architecture, in Public Preview via the Gemini API and Vertex AI.

Expanding on our previous text-only foundation, Gemini Embedding 2 maps text, images, videos, audio and documents into a single, unified embedding space, and captures semantic intent across over 100 languages. This simplifies complex pipelines and enhances a wide variety of multimodal downstream tasks—from Retrieval-Augmented Generation (RAG) and semantic search to sentiment analysis and data clustering.

New modalities and flexible output dimensions

The model is based on Gemini and leverages its best-in-class multimodal understanding capabilities to create high-quality embeddings across:

  • Text: supports an expansive context of up to 8192 input tokens
  • Images: capable of processing up to 6 images per request, supporting PNG and JPEG formats
  • Videos: supports up to 120 seconds of video input in MP4 and MOV formats
  • Audio: natively ingests and embeds audio data without needing intermediate text transcriptions
  • Documents: directly embed PDFs up to 6 pages long

Beyond processing one modality at a time, this model natively understands interleaved input so you can pass multiple modalities of input (e.g., image + text) in a single request. This allows the model to capture the complex, nuanced relationships between different media types, unlocking more accurate understanding of complex, real-world data.

*[Image: multimodal input]*

Like our previous embedding models, Gemini Embedding 2 incorporates Matryoshka Representation Learning (MRL), a technique that “nests” information so embeddings can be truncated to smaller dimensions with minimal quality loss. This enables flexible output dimensions, scaling down from the default of 3072, so developers can balance performance and storage costs. We recommend 3072, 1536, or 768 dimensions for the highest quality.
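Because the embeddings are MRL-trained, a common client-side pattern is to keep only the leading components of a full-length vector and re-normalize before computing similarities. A minimal sketch with NumPy, using a random stand-in for a real 3072-dimensional embedding returned by the API:

```python
import numpy as np

def truncate_embedding(vec: np.ndarray, dim: int) -> np.ndarray:
    """Keep the first `dim` components of an MRL embedding and re-normalize
    to unit length so dot-product / cosine similarity still behaves."""
    truncated = np.asarray(vec, dtype=np.float32)[:dim]
    return truncated / np.linalg.norm(truncated)

# Stand-in for a real 3072-dim embedding from the API (random for illustration)
full = np.random.default_rng(0).normal(size=3072)

small = truncate_embedding(full, 768)
print(small.shape)                              # (768,)
print(round(float(np.linalg.norm(small)), 4))   # 1.0
```

Depending on SDK support, you may also be able to request a reduced dimensionality directly in the embed call's config rather than truncating client-side; check the API reference for the exact parameter.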

To see these embeddings in action, try out our lightweight multimodal semantic search demo.

State-of-the-art performance

Gemini Embedding 2 doesn't just improve on legacy models. It establishes a new performance standard for multimodal depth, introducing strong speech capabilities and outperforming leading models on text, image, and video tasks. This measurable improvement and unique multimodal coverage give developers exactly what they need across diverse embedding workloads.

*[Image: benchmarks]*

Unlocking deeper meaning for data

Embeddings are the technology that powers experiences across many Google products. From RAG, where embeddings play a crucial role in context engineering, to large-scale data management and classic search and analysis, some of our early-access partners are already using Gemini Embedding 2 to unlock high-value multimodal applications.

Start building today

Get started with the Gemini Embedding 2 model through the Gemini API or Vertex AI.

from google import genai
from google.genai import types

# For Vertex AI:
# PROJECT_ID='<add_here>'
# client = genai.Client(vertexai=True, project=PROJECT_ID, location='us-central1')

client = genai.Client()

with open("example.png", "rb") as f:
    image_bytes = f.read()

with open("sample.mp3", "rb") as f:
    audio_bytes = f.read()

# Embed text, image, and audio in a single interleaved request
result = client.models.embed_content(
    model="gemini-embedding-2-preview",
    contents=[
        "What is the meaning of life?",
        types.Part.from_bytes(
            data=image_bytes,
            mime_type="image/png",
        ),
        types.Part.from_bytes(
            data=audio_bytes,
            mime_type="audio/mpeg",
        ),
    ],
)

print(result.embeddings)

Learn how to use the model in our interactive Gemini API and Vertex AI Colab notebooks. You can also use it through LangChain, LlamaIndex, Haystack, Weaviate, Qdrant, ChromaDB, and Vector Search.

By bringing semantic meaning to the diverse data around us, Gemini Embedding 2 provides the essential multimodal foundation for the next era of advanced AI experiences. We can't wait to see what you build.

Top comments (1)

**Swift** (@theycallmeswift)
Very cool. I spent a few minutes wrapping my head around the Embedding model and specifically asking myself when I would personally deploy it. Sharing in case it's helpful for anyone else.

Core Value Proposition: The utility isn't just in making text search slightly better; it's about seamlessly linking concepts across entirely different mediums (written, video, audio, etc.) without having to translate them all into text first.

Example use cases:

  • Better RAG for my AI board game coach project. Currently preprocessing PDFs into text for search. This would allow ingestion of the raw PDFs and enable search across embedded photos and text natively.
  • Hackathon project data is split across written submissions, video demos, and code. Using this model would enable us to natively search across all mediums simultaneously
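For the cross-medium search idea above, once every item lives in one embedding space, retrieval is just nearest-neighbor search. A toy sketch with NumPy, using random unit vectors in place of real API embeddings (the filenames are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-ins for unit-normalized embeddings of mixed-media items;
# in practice these would come from the embedding API
corpus = {
    "rulebook.pdf": rng.normal(size=64),
    "demo-video.mp4": rng.normal(size=64),
    "submission.txt": rng.normal(size=64),
}
corpus = {name: v / np.linalg.norm(v) for name, v in corpus.items()}

def search(query_vec: np.ndarray, corpus: dict, top_k: int = 2) -> list:
    """Rank items by cosine similarity (dot product of unit vectors)."""
    q = query_vec / np.linalg.norm(query_vec)
    scores = {name: float(q @ vec) for name, vec in corpus.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

# A query vector close to the rulebook's embedding should rank it first
query = corpus["rulebook.pdf"] + 0.05 * rng.normal(size=64)
print(search(query, corpus))
```

The same dot-product ranking works whether the query embedding came from text, an image, or audio, which is the whole appeal of a shared space.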

Would love to see more concrete use case examples in future posts to really crystallize the power of the models!