Searching Images

This example shows how to use EmbedAnything's EmbeddingModel to perform semantic image search over a directory of images. It uses a CLIP model, which embeds images and text into a shared vector space, so a natural-language query can be matched directly against the image embeddings.

import numpy as np
import embed_anything
from embed_anything import EmbedData
import time

start = time.time()

# Load the model.
model = embed_anything.EmbeddingModel.from_pretrained_hf(
    embed_anything.WhichModel.Clip,
    model_id="openai/clip-vit-base-patch16",
)
# Embed every image in the directory.
data: list[EmbedData] = embed_anything.embed_image_directory(
    "test_files", embeder=model
)

# Collect the image embeddings into a numpy array.
embeddings = np.array([d.embedding for d in data])

# Inspect the first embedded image.
print(data[0])

# Embed a query
query = ["Photo of a monkey"]
query_embedding = np.array(
    embed_anything.embed_query(query, embeder=model)[0].embedding
)

# Calculate the similarities between the query embedding and all the embeddings
similarities = np.dot(embeddings, query_embedding)

# Find the index of the most similar embedding
max_index = np.argmax(similarities)

# Print the most similar image (its file path is stored in the text field)
print(data[max_index].text)
end = time.time()
print("Time taken: ", end - start)

Supported Models

EmbedAnything supports the following models for image search:

  • openai/clip-vit-base-patch32
  • openai/clip-vit-base-patch16
  • openai/clip-vit-large-patch14-336
  • openai/clip-vit-large-patch14
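
Any of these checkpoints can be passed as the model_id when loading the model. For example (a sketch using the same from_pretrained_hf call as above), to try the larger patch14 variant:

# Sketch: load a larger supported CLIP checkpoint instead of the base model.
model = embed_anything.EmbeddingModel.from_pretrained_hf(
    embed_anything.WhichModel.Clip,
    model_id="openai/clip-vit-large-patch14",
)

Larger checkpoints generally trade embedding speed for retrieval quality, so pick the smallest one that works for your data.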