# Mixpeek Quickstart Guide

An interactive version of this guide is embedded in your dashboard on the Notebooks page.
## Getting Started

First, install the Mixpeek library:

```shell
pip install --upgrade mixpeek
```
Then import the necessary libraries and initialize the Mixpeek client:

```python
import pprint

from mixpeek import Mixpeek

mixpeek = Mixpeek(api_key="YOUR_API_KEY")
pp = pprint.PrettyPrinter(indent=4)

# All files for indexing and searching will be sent to this collection
collection_id = "starter"
```
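Hard-coding the key is fine for a quick test, but you may prefer to read it from an environment variable so it stays out of source control. A minimal sketch; the variable name `MIXPEEK_API_KEY` is our own choice, not an official convention:

```python
import os

def get_api_key() -> str:
    """Read the Mixpeek API key from the environment instead of source code."""
    key = os.environ.get("MIXPEEK_API_KEY")
    if not key:
        raise RuntimeError("Set MIXPEEK_API_KEY before running this guide")
    return key

# Then initialize the client with it:
# mixpeek = Mixpeek(api_key=get_api_key())
```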
## Simple API

### Image

#### Adding Images to Your Collection
```python
image_urls = [
    "https://mixpeek-public-demo.s3.us-east-2.amazonaws.com/starter/aussie_jumping.jpg",
    "https://mixpeek-public-demo.s3.us-east-2.amazonaws.com/starter/aussie_running.jpg",
    "https://mixpeek-public-demo.s3.us-east-2.amazonaws.com/starter/aussie_with_ball.jpg",
]

for image_url in image_urls:
    index_image_response = mixpeek.index.url(
        target_url=image_url,
        collection_id=collection_id
    )
    pp.pprint(index_image_response)
```
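Indexing a large batch of URLs over the network can hit transient failures. A small retry wrapper is handy; this is a sketch of our own, not part of the Mixpeek SDK, and it assumes the function you pass in accepts `target_url` and `collection_id` keyword arguments like `mixpeek.index.url` above:

```python
import time

def index_urls_with_retry(index_fn, urls, collection_id, retries=3, delay=1.0):
    """Call index_fn(target_url=..., collection_id=...) for each URL,
    retrying each URL up to `retries` times on any exception."""
    responses = []
    for url in urls:
        for attempt in range(retries):
            try:
                responses.append(index_fn(target_url=url, collection_id=collection_id))
                break
            except Exception:
                if attempt == retries - 1:
                    raise  # give up after the final attempt
                time.sleep(delay)
    return responses
```

You would call it as `index_urls_with_retry(mixpeek.index.url, image_urls, collection_id)`.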
#### Text and Image Queries
```python
# Text query
text_search_response_image = mixpeek.search.text(
    input="dog with a ball",
    filters={"$or": [{"collection_id": collection_id}]},
    modality="image"
)
pp.pprint(text_search_response_image['results'][:3])

# Image query
url_search_response_image = mixpeek.search.url(
    target_url="https://mixpeek-public-demo.s3.us-east-2.amazonaws.com/starter/aussie_jumping_with_ball.jpg",
    filters={"$or": [{"collection_id": collection_id}]},
    modality="image"
)
pp.pprint(url_search_response_image['results'][:3])
```
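The `filters` argument in these calls looks like MongoDB-style query operators. Assuming that reading is correct (we have not verified it against the API reference), a filter spanning several collections could be built like this; the `"archive"` collection name is made up for illustration:

```python
# Hypothetical: build an $or filter matching any of several collections
collection_ids = ["starter", "archive"]  # "archive" is a hypothetical collection
filters = {"$or": [{"collection_id": cid} for cid in collection_ids]}
```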
### Video

#### Adding Videos to Your Collection
```python
video_url = "https://mixpeek-public-demo.s3.us-east-2.amazonaws.com/starter/aussie_agility.mp4"

index_video_response = mixpeek.index.url(
    target_url=video_url,
    collection_id=collection_id
)
pp.pprint(index_video_response)
```
#### Text and Video Queries
```python
# Text query
text_search_response_video = mixpeek.search.text(
    input="dog jumping",
    filters={"$or": [{"collection_id": collection_id}]},
    modality="video"
)
pp.pprint(text_search_response_video['results'][:3])

# Video query
url_search_response_video = mixpeek.search.url(
    target_url="https://mixpeek-public-demo.s3.us-east-2.amazonaws.com/starter/aussie_jumping.mp4",
    filters={"$or": [{"collection_id": collection_id}]},
    modality="video"
)
pp.pprint(url_search_response_video['results'][:3])
```
## Custom Toolkit
For more control over your multimodal embeddings and structured data:
```python
video_url = "https://mixpeek-public-demo.s3.us-east-2.amazonaws.com/starter/aussie_agility.mp4"

# Process the video into chunks
processed_chunks = mixpeek.tools.video.process(
    video_source=video_url,
    chunk_interval=1,
    resolution=[720, 1280]
)

results = []
for index, chunk in enumerate(processed_chunks):
    print(f"Processing video chunk: {index}")

    embedding = mixpeek.embed.video(
        model_id="vuse-generic-v1",
        input=chunk['base64_chunk'],
        input_type="base64"
    )['embedding']

    result = {
        "start_time": chunk["start_time"],
        "end_time": chunk["end_time"],
        "embedding": embedding
    }
    results.append(result)

    print(f"Embedded chunk {index}:")
    print(f"  Start time: {result['start_time']:.2f}s")
    print(f"  End time: {result['end_time']:.2f}s")
    print(f"  Embedding preview: {embedding[:5] + ['...'] + embedding[-5:]}")

print(f"Processed {len(results)} chunks")
```
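Once you hold the per-chunk embeddings yourself, you can rank chunks against any query embedding without a vector database. A minimal cosine-similarity sketch over a `results` list shaped like the one built above; it assumes embeddings are plain lists of floats of equal length:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def top_chunks(query_embedding, chunks, k=3):
    """Return the k chunk dicts (start_time, end_time, embedding)
    most similar to the query embedding."""
    ranked = sorted(
        chunks,
        key=lambda c: cosine_similarity(query_embedding, c["embedding"]),
        reverse=True,
    )
    return ranked[:k]
```

Calling `top_chunks(query_embedding, results)` then returns the best-matching time ranges in the video.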
We can do the same with images:

```python
image_url = "https://mixpeek-public-demo.s3.us-east-2.amazonaws.com/starter/aussie_with_ball.jpg"
base64_img = mixpeek.tools.image.process(image_source=image_url)

embedding = mixpeek.embed.image(
    model_id="openai-clip-vit-base-patch32",
    input=base64_img,
    input_type="base64"
)['embedding']

print(f"Embedding preview: {embedding[:5] + ['...'] + embedding[-5:]}")
```
## Next Steps
- Check out file storage integrations (AWS S3, GCP Cloud Storage, Azure Blob Storage) as your unstructured data source
- Explore database integrations like MongoDB, PostgreSQL, Pinecone, etc. for your structured data
- Dive into exciting use cases like Real-Time Video Alerting, Visual Discovery, and Multimodal Search & Retrieval