GET /v1/buckets/{bucket_identifier}/batches/{batch_id}
Get Batch Configuration
curl --request GET \
  --url https://api.mixpeek.com/v1/buckets/{bucket_identifier}/batches/{batch_id} \
  --header 'Authorization: Bearer <api_key>' \
  --header 'X-Namespace: <namespace>'
{
  "bucket_id": "<string>",
  "batch_id": "<string>",
  "status": "DRAFT",
  "object_ids": [
    "<string>"
  ],
  "collection_ids": [
    "col_chunks"
  ],
  "error": "Failed to process batch: Object not found",
  "error_summary": null,
  "type": "BUCKET",
  "manifest_key": "ns_abc/org_123/manifests/tier_0.parquet",
  "task_id": "task_tier0_abc123",
  "loaded_object_ids": [
    "obj_video_001",
    "obj_video_002"
  ],
  "internal_metadata": {
    "include_processing_history": true,
    "last_health_check": {
      "enriched_documents": 98,
      "health_status": "HEALTHY",
      "missing_features": [
        "text_embedding"
      ],
      "processed_documents": 100,
      "recommendations": [],
      "stall_duration_seconds": 0,
      "timestamp": "2025-11-06T10:05:00Z",
      "total_documents": 100,
      "vector_populated_count": 98
    }
  },
  "metadata": {},
  "tier_tasks": [
    {
      "tier_num": 1,
      "source_type": "<string>",
      "task_id": "task_tier0_abc123",
      "status": "PENDING",
      "collection_ids": [
        "<string>"
      ],
      "source_collection_ids": [
        "col_chunks"
      ],
      "parent_task_id": "task_tier0_abc123",
      "started_at": "2025-11-03T10:00:00Z",
      "completed_at": "2025-11-03T10:05:00Z",
      "errors": [
        {
          "error_type": "dependency",
          "message": "<string>",
          "component": "VertexMultimodalService",
          "stage": "gemini_extraction",
          "traceback": "<string>",
          "timestamp": "2023-11-07T05:31:56Z",
          "affected_document_ids": [
            "<string>"
          ],
          "affected_count": 1,
          "recovery_suggestion": "Install google-genai package: pip install google-genai",
          "metadata": {}
        }
      ],
      "error_summary": null,
      "performance": {
        "avg_latency_ms": 234.56,
        "bottlenecks": [
          {
            "avg_time_ms": 113.58,
            "execution_count": 50,
            "max_time_ms": 234.56,
            "stage_name": "gcs_batch_upload_all_segments",
            "total_time_ms": 5678.9
          },
          {
            "avg_time_ms": 69.14,
            "execution_count": 50,
            "max_time_ms": 123.45,
            "stage_name": "pipeline_run",
            "total_time_ms": 3456.78
          }
        ],
        "stage_count": 5,
        "timestamp": "2025-11-06T10:05:00Z",
        "total_time_ms": 12345.67
      },
      "ray_job_id": "raysubmit_9pDAyZbd5MN281TB"
    }
  ],
  "current_tier": 0,
  "total_tiers": 1,
  "dag_tiers": [
    [
      "col_chunks"
    ]
  ],
  "created_at": "2023-11-07T05:31:56Z",
  "updated_at": "2023-11-07T05:31:56Z"
}

Headers

Authorization
string
required

REQUIRED: Bearer token authentication using your API key. Format: 'Bearer sk_xxxxxxxxxxxxx'. You can create API keys in the Mixpeek dashboard under Organization Settings.

X-Namespace
string
required

REQUIRED: Namespace identifier for scoping this request. All resources (collections, buckets, taxonomies, etc.) are scoped to a namespace. You can provide either the namespace name or namespace ID. Format: ns_xxxxxxxxxxxxx (ID) or a custom name like 'my-namespace'

Path Parameters

bucket_identifier
string
required

The unique identifier of the bucket.

batch_id
string
required

The unique identifier of the batch.

Response

Successful Response

Model representing a batch of objects for processing through collections.

A batch groups bucket objects together for processing through one or more collections. Batches support multi-tier processing where collections are processed in dependency order (e.g., bucket → chunks → frames → scenes). Each tier has independent task tracking.

Use Cases:
- Process multiple objects through collections in a single batch
- Track progress of multi-tier decomposition pipelines
- Monitor and retry individual processing tiers
- Query batch status and tier-specific task information

Lifecycle:
1. Created in DRAFT status with object_ids
2. Submitted for processing → status changes to PENDING
3. Each tier processes sequentially (tier 0 → tier 1 → ... → tier N)
4. Batch completes when all tiers finish (status=COMPLETED) or any tier fails (status=FAILED)
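Since each GET returns the current status, a client can poll until the batch settles. A minimal polling sketch; the set of terminal statuses is an assumption inferred from the lifecycle and status enum documented below, so adjust it to match your deployment:

```python
import time

# Assumed terminal statuses (no further processing after these).
TERMINAL_STATUSES = {"COMPLETED", "COMPLETED_WITH_ERRORS", "FAILED", "CANCELED"}

def is_terminal(status):
    return status in TERMINAL_STATUSES

def wait_for_batch(fetch_batch, interval_s=5.0, max_polls=120):
    """Poll fetch_batch() until the batch reaches a terminal status.

    fetch_batch is any zero-argument callable returning the parsed batch
    JSON (e.g. a closure over the GET endpoint documented here).
    """
    for _ in range(max_polls):
        batch = fetch_batch()
        if is_terminal(batch["status"]):
            return batch
        time.sleep(interval_s)
    raise TimeoutError("batch did not reach a terminal status in time")
```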

Multi-Tier Processing:
- Tier 0: Bucket objects → Collections (bucket as source)
- Tier N (N > 0): Collection documents → Collections (upstream collection as source)
- Each tier gets independent task tracking via the tier_tasks array
- Processing proceeds tier-by-tier with automatic chaining

Requirements:
- batch_id: OPTIONAL (auto-generated if not provided)
- bucket_id: REQUIRED
- status: OPTIONAL (defaults to DRAFT)
- object_ids: REQUIRED for processing (must have at least 1 object)
- collection_ids: OPTIONAL (discovered via DAG resolution)
- tier_tasks: OPTIONAL (populated during processing)
- current_tier: OPTIONAL (set during processing)
- total_tiers: OPTIONAL (defaults to 1, set during DAG resolution)
- dag_tiers: OPTIONAL (populated during DAG resolution)

bucket_id
string
required

REQUIRED. Unique identifier of the bucket containing the objects to process. Must be a valid bucket ID that exists in the system. All object_ids must belong to this bucket. Format: Bucket ID as defined when bucket was created.

batch_id
string

OPTIONAL (auto-generated if not provided). Unique identifier for this batch. Format: 'btch_' prefix followed by 12-character secure token. Generated using generate_secure_token() from shared.utilities.helpers. Used to query batch status and track processing across tiers. Immutable after creation.

status
enum<string>
default:DRAFT

OPTIONAL (defaults to DRAFT). Current processing status of the batch. Lifecycle: DRAFT → PENDING → IN_PROGRESS → COMPLETED/FAILED. DRAFT: Batch created but not yet submitted. PENDING: Batch submitted and queued for processing. IN_PROGRESS: Batch currently processing (one or more tiers active). COMPLETED: All tiers successfully completed. FAILED: One or more tiers failed. Aggregated from tier_tasks statuses during multi-tier processing.

Available options:
PENDING,
IN_PROGRESS,
PROCESSING,
COMPLETED,
COMPLETED_WITH_ERRORS,
FAILED,
CANCELED,
UNKNOWN,
SKIPPED,
DRAFT,
ACTIVE,
ARCHIVED,
SUSPENDED

object_ids
string[]

REQUIRED for processing (must have at least 1). List of object IDs to include in this batch. All objects must exist in the specified bucket_id. These objects are the source data for tier 0 processing. Minimum 1 object, no maximum limit. Objects are processed in parallel within each tier.

Minimum array length: 1

collection_ids
string[] | null

OPTIONAL. List of all collection IDs involved in this batch's processing. Automatically populated during DAG resolution from dag_tiers. Includes collections from all tiers (flattened view of dag_tiers). Used for quick lookups without traversing tier structure. Format: List of collection IDs across all tiers.

Example:
["col_chunks"]

error
string | null

OPTIONAL. Legacy error message field for backward compatibility. None if batch succeeded or is still processing. Contains human-readable error description from first failed tier. DEPRECATED: Use tier_tasks[].errors for detailed error information. For multi-tier batches, typically contains the error from the first failed tier. Check tier_tasks array for tier-specific error details and error_summary for aggregation.

Example:

"Failed to process batch: Object not found"

error_summary
Error Summary · object

OPTIONAL. Aggregated summary of errors across ALL tiers in the batch. None if batch succeeded or is still processing. Maps error_type (category) to total count of affected documents across all tiers. Provides quick batch-wide overview of error distribution. Example: {'dependency': 15, 'authentication': 25, 'validation': 5} means across all tiers, 15 documents failed with dependency errors, 25 with auth errors, 5 with validation errors. Automatically aggregated from tier_tasks[].error_summary. Used for batch health dashboard and error trend analysis.

Example:

null
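Because error_summary is documented as an aggregation over tier_tasks[].error_summary, the relationship can be reproduced client-side. An illustrative sketch, assuming each per-tier summary uses the same error_type → affected-count shape:

```python
from collections import Counter

def aggregate_error_summary(batch):
    """Recompute the batch-wide error_summary from per-tier summaries.

    Mirrors the documented aggregation: sum affected-document counts
    per error_type across every entry in tier_tasks.
    """
    totals = Counter()
    for tier in batch.get("tier_tasks") or []:
        for error_type, count in (tier.get("error_summary") or {}).items():
            totals[error_type] += count
    return dict(totals)
```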

type
enum<string>
default:BUCKET

OPTIONAL (defaults to BUCKET). Type of batch. BUCKET: Standard batch processing bucket objects through collections. COLLECTION: Reserved for future collection-only batch processing. Currently only BUCKET type is supported.

Available options:
BUCKET,
COLLECTION
manifest_key
string | null

OPTIONAL. S3 key where the batch manifest is stored. Contains metadata and row data (Parquet) for Engine processing. For tier 0, points to bucket object manifest. For tier N+, points to collection document manifest. Format: S3 path (e.g., 'namespace_id/internal_id/manifests/tier_0.parquet'). Generated during batch submission.

Example:

"ns_abc/org_123/manifests/tier_0.parquet"

task_id
string | null

OPTIONAL. Primary task ID for the batch (typically tier 0 task). Used for backward compatibility with single-tier batch tracking. For multi-tier batches, prefer querying tier_tasks array for granular tracking. Format: Task ID as generated for tier 0.

Example:

"task_tier0_abc123"

loaded_object_ids
string[] | null

OPTIONAL. List of object IDs that were successfully validated and loaded into the batch. Subset of object_ids that passed validation. Used to track which objects are ready for processing. None if batch hasn't been validated yet.

Example:
["obj_video_001", "obj_video_002"]

internal_metadata
Internal Metadata · object

OPTIONAL. Internal engine/job metadata for system use. May contain: job_id (provider-specific), engine_version, processing hints, last_health_check. last_health_check: Most recent health check results with health_status, enriched_documents, vector_populated_count, stall_duration_seconds, recommendations, missing_features. Populated asynchronously via Celery task (non-blocking, best-effort). Used for troubleshooting batch processing issues via API. Format: Free-form dictionary for internal tracking.

Example:
{
  "include_processing_history": true,
  "last_health_check": {
    "enriched_documents": 98,
    "health_status": "HEALTHY",
    "missing_features": ["text_embedding"],
    "processed_documents": 100,
    "recommendations": [],
    "stall_duration_seconds": 0,
    "timestamp": "2025-11-06T10:05:00Z",
    "total_documents": 100,
    "vector_populated_count": 98
  }
}

metadata
Metadata · object

OPTIONAL. User-defined metadata for the batch. Arbitrary key-value pairs for user tracking and organization. Persisted with the batch and returned in API responses. Not used by the system for processing logic. Examples: campaign_id, user_email, processing_notes.

tier_tasks
TierTaskInfo · object[]

OPTIONAL. List of tier task tracking information for multi-tier processing. Each element represents one tier in the processing pipeline. Empty array for simple single-tier batches. Populated during batch submission with tier 0 info, then appended as tiers progress. Each TierTaskInfo contains: tier_num, task_id, status, collection_ids, timestamps. Used for granular monitoring: 'Show me status of tier 2' or 'Retry tier 1'. Array index typically matches tier_num (tier_tasks[0] = tier 0, tier_tasks[1] = tier 1, etc.).
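For granular monitoring, a client can walk tier_tasks and render one status line per tier. An illustrative helper (field names follow the schema above; the output format is arbitrary, not part of the API):

```python
def tier_report(batch):
    """One human-readable line per tier; handy for a CLI-style status view."""
    lines = []
    for tier in batch.get("tier_tasks") or []:
        # Sum affected documents across this tier's error entries.
        affected = sum(e.get("affected_count", 0) for e in tier.get("errors") or [])
        line = f"tier {tier['tier_num']}: {tier['status']} task={tier.get('task_id')}"
        if affected:
            line += f" ({affected} affected docs)"
        lines.append(line)
    return lines
```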

current_tier
integer | null

OPTIONAL. Zero-based index of the currently processing tier. None if batch hasn't started processing (status=DRAFT or PENDING). Updated as batch progresses through tiers. Used to show processing progress: 'Processing tier 2 of 5'. Set to last tier number when batch completes. Example: If processing tier 1 (frames), current_tier=1.

Required range: x >= 0
Example:

0

total_tiers
integer
default:1

OPTIONAL (defaults to 1). Total number of tiers in the collection DAG. Minimum 1 (tier 0 only = bucket → collection). Set during DAG resolution when batch is submitted. Equals len(dag_tiers) if dag_tiers is populated. Used to calculate progress: current_tier / total_tiers. Example: 5-tier pipeline (bucket → chunks → frames → scenes → summaries) has total_tiers=5.

Required range: x >= 1

dag_tiers
string[][] | null

OPTIONAL. Complete DAG tier structure for this batch. List of tiers, where each tier is a list of collection IDs to process at that stage. Tier 0 = bucket-sourced collections. Tier N (N > 0) = collection-sourced collections. Collections within same tier have no dependencies (can run in parallel). Collections in tier N+1 depend on collections in tier N. Populated during DAG resolution at batch submission. Used for tier-by-tier processing orchestration. Example: [['col_chunks'], ['col_frames', 'col_objects'], ['col_scenes']] = 3 tiers where frames and objects run in parallel at tier 1.

Example:
[["col_chunks"]]
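The documented relationships — collection_ids as a flattened view of dag_tiers, and progress as current_tier / total_tiers — can be sketched as small helpers. Assumptions: a missing current_tier means the batch has not started, and total_tiers defaults to 1 as documented:

```python
def flatten_collections(dag_tiers):
    """Flattened view of dag_tiers, matching how collection_ids is derived."""
    return [cid for tier in (dag_tiers or []) for cid in tier]

def progress_fraction(batch):
    """Completed-tier fraction: current_tier / total_tiers (0.0 if not started)."""
    current = batch.get("current_tier")
    total = batch.get("total_tiers") or 1
    if current is None:
        return 0.0
    return current / total
```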

created_at
string<date-time>

OPTIONAL (auto-set on creation). ISO 8601 timestamp when batch was created. Set using current_time() from shared.utilities.helpers. Immutable after creation. Used for batch age tracking and cleanup of old batches.

updated_at
string<date-time>

OPTIONAL (auto-updated). ISO 8601 timestamp when batch was last modified. Updated using current_time() whenever batch status or tier_tasks change. Used to track batch activity and identify stale batches.