Product Updates
New updates and improvements
Active Tasks Listing & Enhanced Taxonomy Classifications
We’ve added the ability to list active processing tasks and enhanced our taxonomy classification system with new assignment modes.
Active Tasks
You can now monitor all active processing tasks across your collections, providing better visibility into ongoing operations.
Enhanced Taxonomy Classifications
The taxonomy classification system now supports two assignment modes:
- Threshold Mode: Assigns categories based on a confidence threshold
- Nearest Mode: Assigns to the nearest matching category regardless of confidence
Key Updates
- Monitor active tasks with status, progress, and estimated completion time
- Choose between threshold-based or nearest-neighbor taxonomy assignments
- Control whether new classifications append to or replace existing ones
- Set confidence thresholds for more precise category matching
For detailed implementation guidelines, see our Taxonomy Classification Guide.
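The two assignment modes above can be sketched in a few lines. This is an illustrative model of the behavior described, not the actual API; the function name, category labels, and scores are hypothetical.

```python
def assign_categories(scores, mode="threshold", confidence_threshold=0.8):
    """Assign categories from {category: confidence} scores.

    threshold mode: every category at or above the confidence threshold.
    nearest mode: the single best-scoring category, regardless of confidence.
    """
    if mode == "threshold":
        return [c for c, s in scores.items() if s >= confidence_threshold]
    if mode == "nearest":
        return [max(scores, key=scores.get)]
    raise ValueError(f"unknown mode: {mode}")

# Hypothetical classifier output for one document.
scores = {"sports": 0.91, "news": 0.55, "finance": 0.12}
print(assign_categories(scores, mode="threshold"))  # ['sports']
print(assign_categories(scores, mode="nearest"))    # ['sports']
```

Note that with low-confidence content, threshold mode may assign nothing at all, while nearest mode always assigns exactly one category.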
Breaking Change: Added “/v1” Prefix to All API Endpoints
We’ve standardized our API routing by adding a “/v1” prefix to all API endpoints. This change improves API versioning and follows REST API best practices.
Required Changes
- Update all API endpoint calls to include the “/v1” prefix
- Before: https://api.mixpeek.com/assets
- After: https://api.mixpeek.com/v1/assets
Affected Endpoints
All API endpoints are affected. Common examples include:
- /features/search → /v1/features/search
- /ingest/videos/url → /v1/ingest/videos/url
- /collections → /v1/collections
Migration Guide
- Audit your API calls
- Update endpoint URLs to include the “/v1” prefix
- Test your integration with the new endpoints
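If your integration builds URLs in one place, the migration can be a small, idempotent rewrite. A minimal sketch (the helper name is hypothetical; it uses only the standard library):

```python
from urllib.parse import urlsplit, urlunsplit

API_HOST = "api.mixpeek.com"

def add_v1_prefix(url):
    """Rewrite a Mixpeek API URL to include the /v1 prefix.

    Safe to call on already-migrated URLs: the prefix is added only once.
    """
    parts = urlsplit(url)
    path = parts.path
    if parts.netloc == API_HOST and not path.startswith("/v1/") and path != "/v1":
        path = "/v1" + path
    return urlunsplit((parts.scheme, parts.netloc, path, parts.query, parts.fragment))

print(add_v1_prefix("https://api.mixpeek.com/assets"))
# https://api.mixpeek.com/v1/assets
print(add_v1_prefix("https://api.mixpeek.com/v1/assets"))
# https://api.mixpeek.com/v1/assets
```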
Feature: Averaged Multi-Input Embeddings
Added support for averaging multiple embeddings when the same embedding model is used across different inputs. This allows for creating more representative embeddings by combining multiple perspectives or descriptions of the same content.
Use Cases
- Combine multiple text descriptions to create a more comprehensive embedding
- Mix image and text embeddings in the same vector space
- Preserve all original inputs for reference and debugging
Technical Details
- Embeddings from the same model are averaged element-wise
- All original EmbeddingRequest objects are stored in original_values
- Works with both text and image inputs
- Compatible with all supported embedding providers (Vertex, SageMaker)
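Element-wise averaging itself is straightforward. The sketch below shows the arithmetic described above on two hypothetical same-model embeddings; it is illustrative only and assumes all inputs share one dimensionality, as same-model embeddings do.

```python
def average_embeddings(vectors):
    """Element-wise average of embeddings produced by the same model."""
    dims = {len(v) for v in vectors}
    if len(dims) != 1:
        raise ValueError("all embeddings must share the same dimensionality")
    n = len(vectors)
    # zip(*vectors) groups the i-th component of every vector together.
    return [sum(components) / n for components in zip(*vectors)]

# Two hypothetical 4-dimensional embeddings of the same asset.
a = [1.0, 2.0, 0.0, 4.0]
b = [3.0, 0.0, 0.0, 0.0]
print(average_embeddings([a, b]))  # [2.0, 1.0, 0.0, 2.0]
```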
Enhancement: Read-Only API Keys
You can now create API keys with granular permissions: read-only, write-only, or read-write access. This enables building secure client-side applications by restricting access to sensitive operations.
Permission Levels
- Read-only: Allows access to GET endpoints only (e.g., /api/v1/search, /api/v1/items/{id})
- Write-only: Enables POST, PUT, DELETE operations (e.g., /api/v1/items, /api/v1/batch)
- Read-write: Full access to all endpoints
For a complete list of endpoints and their required permissions, see our API Reference.
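The permission model above maps naturally onto HTTP methods. A minimal sketch of a client-side guard, assuming the method sets shown (the function and constants are hypothetical, not part of the Mixpeek SDK):

```python
READ_METHODS = {"GET", "HEAD"}
WRITE_METHODS = {"POST", "PUT", "PATCH", "DELETE"}

def is_allowed(permission, method):
    """Check whether an API key's permission level permits an HTTP method."""
    method = method.upper()
    if permission == "read-only":
        return method in READ_METHODS
    if permission == "write-only":
        return method in WRITE_METHODS
    if permission == "read-write":
        return method in READ_METHODS | WRITE_METHODS
    raise ValueError(f"unknown permission: {permission}")

print(is_allowed("read-only", "GET"))   # True
print(is_allowed("read-only", "POST"))  # False
```

A guard like this lets a client-side app fail fast with a clear error instead of sending a request the server will reject.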
How to Configure
- Navigate to Settings > API Keys in the studio portal
- Create a new key or edit an existing one
- Select the desired permission level
- Save changes
Breaking Change: vector_index renamed to embedding_models
We’ve renamed the vector_index configuration parameter to embedding_models to better reflect its purpose and align with industry terminology. This change affects all API endpoints and configuration files that previously used vector_index.
What You Need To Do
- Update all references to vector_index in your configuration files to use embedding_models
- Review and update any API calls that included the vector_index parameter
- Check your integration tests and documentation for any mentions of the old parameter name
For a complete list of supported embedding models and migration guidance, please refer to our available models documentation.
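For configurations held as dictionaries or parsed JSON, the rename can be automated. A minimal sketch (the helper name and sample config are hypothetical):

```python
def migrate_config(config):
    """Return a copy of a config dict with vector_index renamed to embedding_models."""
    migrated = dict(config)
    if "vector_index" in migrated and "embedding_models" not in migrated:
        migrated["embedding_models"] = migrated.pop("vector_index")
    return migrated

old = {"collection": "videos", "vector_index": ["multimodal"]}
print(migrate_config(old))
# {'collection': 'videos', 'embedding_models': ['multimodal']}
```

The guard against an existing embedding_models key keeps the migration safe to run more than once.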
Taxonomies and Clustering
Introducing powerful taxonomies to organize and classify your content using multimodal AI. Create hierarchical categories that understand both text and images, enabling intelligent content organization at scale.
Key Features
- Flexible Hierarchy - Build nested categories with independent node matching and confidence scoring
- Multimodal Support - Define taxonomy nodes using text, images, or both
- Test-Driven Development - Validate taxonomies on sample data before production deployment
- Automated Classification - Automatically categorize new content during ingestion
- Manual Controls - Override or supplement AI classifications with manual assignments
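A hierarchical taxonomy like the one described can be pictured as a tree of nodes, each with its own text or image definition. The sketch below is a hypothetical data model for illustration, not the Mixpeek schema; field names and category labels are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class TaxonomyNode:
    """One node in a hypothetical hierarchical taxonomy."""
    name: str
    description: str = ""   # text definition used for matching
    image_url: str = ""     # optional image definition (multimodal)
    children: list = field(default_factory=list)

    def paths(self, prefix=""):
        """Yield the slash-delimited path of every node in the hierarchy."""
        path = f"{prefix}/{self.name}" if prefix else self.name
        yield path
        for child in self.children:
            yield from child.paths(path)

media = TaxonomyNode("media", children=[
    TaxonomyNode("sports", children=[TaxonomyNode("soccer")]),
    TaxonomyNode("news"),
])
print(list(media.paths()))
# ['media', 'media/sports', 'media/sports/soccer', 'media/news']
```

Because each node carries its own definition, matching and confidence scoring can happen per node, independently of the rest of the tree.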
Check out our taxonomy documentation to get started organizing your content library.