GET /v1/collections/{collection_identifier}
Get Collection
curl --request GET \
  --url https://api.mixpeek.com/v1/collections/{collection_identifier} \
  --header 'Authorization: Bearer <token>'
{
  "collection_name": "products",
  "description": "Product catalog",
  "taxonomy_applications": [
    {
      "execution_mode": "on_demand",
      "taxonomy_id": "tax_categories"
    },
    {
      "execution_mode": "materialize",
      "target_collection_id": "col_products_enriched",
      "taxonomy_id": "tax_brands"
    }
  ]
}
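
For example, to fetch the example collection by name and pull out just its taxonomy applications, the request above can be piped through jq (a sketch; the filter assumes the response shape shown here):

curl --request GET \
  --url https://api.mixpeek.com/v1/collections/products \
  --header 'Authorization: Bearer <token>' \
  | jq '.taxonomy_applications'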

Authorizations

Authorization
string
header
required

Bearer token authentication using your API key. Format: 'Bearer your_api_key'. To get an API key, create an account at mixpeek.com/start and generate a key in your account settings.

Headers

Authorization
string | null

Bearer token authentication using your API key. Format: 'Bearer your_api_key'. To get an API key, create an account at mixpeek.com/start and generate a key in your account settings. Example: 'Bearer sk_1234567890abcdef'

X-Namespace
string | null

Optional namespace for data isolation. This can be a namespace name or namespace ID. Example: 'netflix_prod' or 'ns_1234567890'. To create a namespace, use the /namespaces endpoint.
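
For example, the same request scoped to a namespace (using the example namespace name from above):

curl --request GET \
  --url https://api.mixpeek.com/v1/collections/{collection_identifier} \
  --header 'Authorization: Bearer <token>' \
  --header 'X-Namespace: netflix_prod'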

Path Parameters

collection_identifier
string
required

The ID or name of the collection to retrieve
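
Since either form is accepted, both of the following target the same collection (the ID shown is illustrative):

curl --request GET \
  --url https://api.mixpeek.com/v1/collections/products \
  --header 'Authorization: Bearer <token>'

curl --request GET \
  --url https://api.mixpeek.com/v1/collections/col_abc123 \
  --header 'Authorization: Bearer <token>'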

Response

Successful Response

Response model for collection endpoints.

collection_name
string
required

Collection name

source
object
required

Primary source configuration for this collection

collection_id
string

Unique collection identifier

description
string | null

Collection description

schema
object | null

Collection schema

input_schema
object | null

Input schema for the collection. Schema definition for bucket objects.

IMPORTANT: The bucket schema defines what fields your bucket objects will have. This schema is REQUIRED if you want to:

  1. Create collections that use input_mappings to process your bucket data
  2. Validate object structure before ingestion
  3. Enable type-safe data pipelines

The schema defines the custom fields that will be used in:

  • Blob properties (e.g., "content", "thumbnail", "transcript")
  • Object metadata structure
  • Blob data structures

Example workflow:

  1. Create bucket WITH schema defining your data structure
  2. Upload objects that conform to that schema
  3. Create collections that map schema fields to feature extractors

Without a bucket_schema, collections cannot use input_mappings.
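
Purely as an illustration of the idea (this is not the documented bucket schema format; the actual field-definition shape is specified by the bucket creation endpoint), a bucket schema covering the blob properties mentioned above might look roughly like:

{
  "content": "text",
  "transcript": "text",
  "thumbnail": "image"
}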

output_schema
object | null

Output schema after feature extraction. Uses the same bucket-object schema definition as input_schema above; see that field for the schema requirements, the fields it covers, and the example workflow.

feature_extractors
FeatureExtractorConfig · object[]

Feature extractors applied to this collection

source_lineage
SingleLineageEntry · object[]

Lineage chain showing the processing history

vector_indexes
any[]

Vector indexes for this collection

payload_indexes
any[]

Payload indexes for this collection

enabled
boolean
default:true

Whether the collection is enabled

metadata
object | null

Additional metadata for the collection

taxonomy_applications
TaxonomyApplicationConfig · object[] | null

List of taxonomies applied to this collection

document_count
integer | null

Number of documents in the collection
