Introduction
Mixpeek is a developer platform for building multimodal search applications. It enables natural language queries that understand intent and retrieve results spanning multiple types of media. With Mixpeek, you can:
- Extract meaningful features from images, videos, and text
- Build powerful search experiences across different content types
- Create custom search experiences tailored to your use case
- Deploy production-ready applications with scalable infrastructure
How It Works
- Index Your Content: Upload your media (images, videos, text) to Mixpeek
- Extract Features: Mixpeek automatically processes your content to extract meaningful features
- Search & Analyze: Use our APIs to build search, recommendation, and analytics applications
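The three steps above can be sketched as a minimal in-memory pipeline. This is an illustrative sketch only, not the Mixpeek API: the function names (`index_asset`, `extract_features`, `search`) and the token-based "feature extraction" are hypothetical stand-ins for the real processing.

```python
# Illustrative in-memory sketch of the index -> extract -> search flow.
# All names here are hypothetical; the real Mixpeek APIs differ.

ASSETS: dict[str, dict] = {}   # asset_id -> asset record
FEATURES: list[dict] = []      # extracted feature records

def index_asset(asset_id: str, media_type: str, content: str) -> None:
    """Step 1: register a piece of content for processing."""
    ASSETS[asset_id] = {"type": media_type, "content": content}

def extract_features(asset_id: str) -> None:
    """Step 2: derive searchable features (here: lowercased tokens)."""
    content = ASSETS[asset_id]["content"]
    for token in content.lower().split():
        FEATURES.append({"asset_id": asset_id, "token": token})

def search(query: str) -> list[str]:
    """Step 3: return asset ids whose features match the query terms."""
    terms = set(query.lower().split())
    return sorted({f["asset_id"] for f in FEATURES if f["token"] in terms})

index_asset("vid-1", "video", "sunset over the ocean")
extract_features("vid-1")
print(search("ocean sunset"))  # -> ['vid-1']
```

In the real platform, step 2 runs as a background task and the features land in a vector index rather than a Python list, but the shape of the flow is the same.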
Key Features
Multimodal Search
Build sophisticated search experiences:
- Natural language queries across all media types
- Visual similarity search for images and videos
- Cross-modal search (find images with text, or vice versa)
- Semantic understanding of content
Learn more in our Search documentation.
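Cross-modal search is typically built on a shared embedding space: a multimodal model maps both text and images to vectors, and results are ranked by cosine similarity. The sketch below uses hand-made 3-d vectors in place of real model embeddings; the file names and `text_to_image_search` helper are illustrative assumptions.

```python
import math

# Toy shared embedding space: in practice a multimodal encoder maps text
# and images into the same vector space. These 3-d vectors are hand-made
# for illustration only.
IMAGE_EMBEDDINGS = {
    "beach.jpg":  [0.9, 0.1, 0.0],
    "city.jpg":   [0.1, 0.9, 0.1],
    "forest.jpg": [0.0, 0.2, 0.9],
}

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def text_to_image_search(query_vec: list[float], top_k: int = 1) -> list[str]:
    """Rank images by cosine similarity to a text query's embedding."""
    ranked = sorted(
        IMAGE_EMBEDDINGS,
        key=lambda name: cosine(query_vec, IMAGE_EMBEDDINGS[name]),
        reverse=True,
    )
    return ranked[:top_k]

# A query like "sandy shoreline" would embed close to beach.jpg:
print(text_to_image_search([0.8, 0.2, 0.1]))  # -> ['beach.jpg']
```

The same ranking works in the other direction (image query, text results), which is what makes the search cross-modal.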
Feature Extraction
Extract valuable insights automatically:
- Object and scene detection
- Text extraction from images and videos
- Face detection and recognition
- Custom extraction pipelines
Explore our Features documentation to learn more.
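The "custom extraction pipelines" idea can be pictured as a registry of extractor functions that each run over an asset and contribute a named feature. This is a pattern sketch, not Mixpeek's actual extension API; the decorator, registry, and extractor names are all hypothetical.

```python
# Sketch of a pluggable extraction pipeline: register extractor functions,
# then run each one over an asset. Names are illustrative, not Mixpeek's API.
from typing import Callable

EXTRACTORS: dict[str, Callable[[dict], dict]] = {}

def extractor(name: str):
    """Decorator that registers a feature extractor under a name."""
    def register(fn):
        EXTRACTORS[name] = fn
        return fn
    return register

@extractor("word_count")
def word_count(asset: dict) -> dict:
    return {"words": len(asset["text"].split())}

@extractor("language_hint")
def language_hint(asset: dict) -> dict:
    # Stand-in for a real language detector: everything is tagged "en".
    return {"lang": "en"}

def run_pipeline(asset: dict) -> dict:
    """Apply every registered extractor and merge the results."""
    return {name: fn(asset) for name, fn in EXTRACTORS.items()}

print(run_pipeline({"text": "a frame caption from a video"}))
```

Swapping in heavier extractors (object detection, OCR, face recognition) changes only the registered functions, not the pipeline itself.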
Core Concepts
- Namespaces: Isolated environments for your applications
- Collections: Logical groupings of related content
- Vector Indexes: Efficient similarity search infrastructure
- Assets: Your indexed media content
- Features: Extracted data points from your content
- Tasks: Background processing jobs
Key Concepts
- Content Types: Different types of media content
- Feature Extraction: Automated processing and analysis
- Metadata: Additional information about assets
- Lifecycle Management: Asset creation to deletion
How Everything Relates
The Mixpeek system is organized hierarchically, with Organizations at the top level managing access and resources across the platform. Here's how the different components interact:
Component Relationships
- Organizations & Users
  - Organizations are the top-level entities that contain users and namespaces
  - Users belong to organizations and can have access to multiple namespaces
- Namespaces
  - Act as containers for collections and vector indexes
  - Provide isolation and organization of resources
  - Can be accessed by multiple users
- Collections & Assets
  - Collections organize related assets
  - Assets are stored within collections and can have multiple features
- Features & Vector Indexes
  - Features are extracted from assets and stored in vector indexes
  - Vector indexes enable efficient similarity search and retrieval
  - Each feature is associated with a specific vector index type
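The hierarchy described above can be modeled as nested records: Organization → Namespace → Collection → Asset → Feature. The dataclasses below are an illustrative model of those relationships; the field names are assumptions chosen to mirror the prose, not Mixpeek's actual schema.

```python
# Illustrative data model of the component hierarchy:
# Organization -> Namespace -> Collection -> Asset -> Feature.
# Field names are hypothetical, chosen to mirror the prose above.
from dataclasses import dataclass, field

@dataclass
class Feature:
    kind: str          # e.g. "embedding", "detected_object"
    index_type: str    # the vector index type this feature belongs to

@dataclass
class Asset:
    asset_id: str
    features: list[Feature] = field(default_factory=list)

@dataclass
class Collection:
    name: str
    assets: list[Asset] = field(default_factory=list)

@dataclass
class Namespace:
    name: str
    collections: list[Collection] = field(default_factory=list)

@dataclass
class Organization:
    name: str
    users: list[str] = field(default_factory=list)
    namespaces: list[Namespace] = field(default_factory=list)

org = Organization(
    name="acme",
    users=["dev@acme.com"],
    namespaces=[Namespace("prod", [Collection("videos", [
        Asset("vid-1", [Feature("embedding", "multimodal")])
    ])])],
)
print(org.namespaces[0].collections[0].assets[0].features[0].index_type)  # -> multimodal
```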
Existing Integrations
Connect with your existing stack:
- Databases: MongoDB, PostgreSQL, Supabase
- Vector Stores: Pinecone, Weaviate, Qdrant
- Caching: Redis integration for high performance
View all Integrations.
Common Use Cases
- Video Alerting: Real-time monitoring and detection of objects, events, or anomalies in video streams.
- Visual Discovery: Power visual search engines and recommendation systems based on image similarity and style matching.
- Multimodal Search: Enable users to search across all content types using natural language or visual inputs.
- Content Recommendation: Build personalized recommendation systems using visual and semantic understanding.
- Media Analytics: Gain insights through automated content analysis, object detection, and categorization.
- Multimodal RAG: Create AI applications that can understand and process information across text, images, and videos.
- Content Organization: Automatically organize and tag media libraries using AI-powered content understanding.
Getting Started
- Quickstart Guide: Set up your first Mixpeek application
- Client Libraries: Integrate using our SDKs
- API Reference: Explore our REST API
- Studio: Use our visual interface to manage content
Resources
Ready to build? Create your account or check out our Quickstart Guide.