Authorizations
Bearer token authentication using your API key. Format: 'Bearer your_api_key'. To get an API key, create an account at mixpeek.com/start and generate a key in your account settings.
Headers
REQUIRED: Bearer token authentication using your API key. Format: 'Bearer sk_xxxxxxxxxxxxx'. You can create API keys in the Mixpeek dashboard under Organization Settings.
"Bearer sk_live_abc123def456"
"Bearer sk_test_xyz789"
REQUIRED: Namespace identifier for scoping this request. All resources (collections, buckets, taxonomies, etc.) are scoped to a namespace. You can provide either the namespace name or the namespace ID. Format: ns_xxxxxxxxxxxxx (ID) or a custom name like 'my-namespace'.
"ns_abc123def456"
"production"
"my-namespace"
Body
Request model for creating a new bucket.
REQUIRED: A bucket_schema must be defined to enable data processing and validation.
The bucket_schema tells the system what fields your objects will have, enabling:
- Collections to map your data fields to feature extractors via input_mappings
- Validation of object structure at upload time
- Type-safe data pipelines from bucket → collection → retrieval
Every bucket must have a schema that defines the structure of the objects it will contain; a minimal request sketch follows the field list below.
Human-readable name for the bucket
Schema definition for objects in this bucket (REQUIRED). Defines the custom fields your objects will have (blob properties, metadata structure, etc.).
Description of the bucket
Additional metadata for the bucket
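
A minimal request sketch in Python using the requests library. The base URL and path, the X-Namespace header name, and the payload field names (bucket_name, bucket_schema, description, metadata) are assumptions inferred from the field descriptions above, not a confirmed client snippet; the schema field types are illustrative only.

import requests

# Assumed base URL, path, and header name; verify against the live API reference.
url = "https://api.mixpeek.com/v1/buckets"
headers = {
    "Authorization": "Bearer sk_live_abc123def456",  # your API key
    "X-Namespace": "my-namespace",                   # namespace name or ns_... ID
    "Content-Type": "application/json",
}

# bucket_schema is REQUIRED: it declares the fields your objects will carry,
# so collections can later map them to feature extractors via input_mappings.
payload = {
    "bucket_name": "support-videos",
    "bucket_schema": {
        "properties": {                              # illustrative field names
            "content": {"type": "video"},
            "transcript": {"type": "text"},
            "thumbnail": {"type": "image"},
        }
    },
    "description": "Raw support-call recordings",
    "metadata": {"team": "support"},
}

resp = requests.post(url, json=payload, headers=headers)
resp.raise_for_status()
print(resp.json())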
Response
Successful Response
Response model for bucket operations.
Human-readable name for the bucket
Number of objects in the bucket
Total size of all objects in the bucket in bytes
Unique identifier for the bucket
Description of the bucket
Schema definition for the objects in this bucket.
IMPORTANT: The bucket schema defines what fields your bucket objects will have. This schema is REQUIRED if you want to:
- Create collections that use input_mappings to process your bucket data
- Validate object structure before ingestion
- Enable type-safe data pipelines
The schema defines the custom fields that will be used in:
- Blob properties (e.g., "content", "thumbnail", "transcript")
- Object metadata structure
- Blob data structures
Example workflow:
- Create bucket WITH schema defining your data structure
- Upload objects that conform to that schema
- Create collections that map schema fields to feature extractors
Without a bucket_schema, collections cannot use input_mappings.
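
To make step 3 concrete, a hedged sketch of creating a collection whose input_mappings reference fields declared in the bucket_schema; the collections endpoint, payload shape, and extractor name are assumptions for illustration, not a confirmed client call.

import requests

headers = {
    "Authorization": "Bearer sk_live_abc123def456",
    "X-Namespace": "my-namespace",  # header name assumed, as in the earlier sketch
    "Content-Type": "application/json",
}

# input_mappings keys (extractor inputs) and values (bucket_schema fields)
# are illustrative; without a bucket_schema there would be nothing to map.
payload = {
    "collection_name": "video-index",
    "source": {"type": "bucket", "bucket_id": "bkt_abc123"},  # hypothetical bucket ID
    "feature_extractors": [{
        "feature_extractor_name": "video_extractor",          # hypothetical extractor
        "input_mappings": {
            "video": "content",       # extractor input <- bucket_schema field
            "text": "transcript",
        },
    }],
}

resp = requests.post("https://api.mixpeek.com/v1/collections", json=payload, headers=headers)
resp.raise_for_status()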
Additional metadata for the bucket
When the bucket was created
Last modification time of bucket metadata
When the last object was uploaded to this bucket
Bucket lifecycle status (typically ACTIVE, ARCHIVED, SUSPENDED, or IN_PROGRESS while deletion is underway). Available options: PENDING, IN_PROGRESS, PROCESSING, COMPLETED, FAILED, CANCELED, UNKNOWN, SKIPPED, DRAFT, ACTIVE, ARCHIVED, SUSPENDED
Whether the bucket is locked (read-only)
Batch statistics for this bucket (calculated asynchronously and stored in the database).
Storage statistics for this bucket (calculated asynchronously and stored in the database).
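
Putting the response fields above together, an illustrative shape as a Python dict; the wire-level field names are assumptions inferred from the descriptions, not the exact schema.

bucket = {
    "bucket_id": "bkt_abc123def456",          # unique identifier
    "bucket_name": "support-videos",          # human-readable name
    "description": "Raw support-call recordings",
    "bucket_schema": {"properties": {"content": {"type": "video"}}},
    "metadata": {"team": "support"},
    "object_count": 0,                        # number of objects
    "total_size_bytes": 0,                    # total size of all objects
    "status": "ACTIVE",                       # one of the lifecycle values above
    "is_locked": False,                       # read-only flag
    "created_at": "2024-01-01T00:00:00Z",
    "updated_at": "2024-01-01T00:00:00Z",
    "last_object_uploaded_at": None,          # set once the first object arrives
    "batch_stats": {},                        # calculated asynchronously
    "storage_stats": {},                      # calculated asynchronously
}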

