Tasks
Monitor and manage the processing tasks that track pipeline execution
Tasks represent processing jobs in Mixpeek. They allow you to track the status, progress, and results of pipeline executions and other asynchronous operations.
Overview
In Mixpeek, tasks represent processing jobs that are executed asynchronously. When you trigger a pipeline to process an object or run any other long-running operation, a task is created to track its execution status, progress, and results.
Asynchronous Processing
Submit processing jobs and check their status later, without waiting for completion
Execution Tracking
Monitor the progress and status of your processing operations in real-time
Task Lifecycle
Creation
A task is created when you trigger a processing operation, such as running a pipeline on an object
Queuing
The task enters a queue and waits to be picked up by a worker
Processing
The task begins execution, processing the requested operation
Completion
The task finishes successfully, producing the desired results
During this lifecycle, a task can also encounter errors or be cancelled:
Task Statuses
| Status | Icon | Description |
|---|---|---|
| created | 🟢 | The task has been created but not yet queued for processing. |
| queued | ⏱️ | The task is in the processing queue, waiting to be picked up by a worker. |
| processing | ⚙️ | The task is currently being executed. |
| completed | ✅ | The task has been successfully completed, and the results are available. |
| failed | ❌ | The task encountered an error during execution and could not be completed. |
| cancelled | 🚫 | The task was manually cancelled before completion. |
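From the table above, completed, failed, and cancelled are terminal: a task in one of these states will not change again, so clients can stop polling. A minimal sketch (the status names come from the table; the helper name is ours):

```python
# Terminal states from the status table above: no further transitions occur.
TERMINAL_STATUSES = {"completed", "failed", "cancelled"}

def is_terminal(status: str) -> bool:
    """Return True when a task has finished and its status will not change."""
    return status.lower() in TERMINAL_STATUSES
```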
Creating Tasks
Tasks are typically created indirectly, as a side effect of other operations. The most common trigger is creating an object in a bucket: for each collection that uses that bucket as a source, a task is created to process the new object.
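Because tasks are created as a side effect, the create-object response is usually where you first see their IDs. The sketch below assumes a `task_ids` field in the response purely for illustration; check the API reference for the actual shape:

```python
def extract_task_ids(create_object_response: dict) -> list[str]:
    """Collect task IDs from a create-object response so they can be
    persisted for later status checks. The 'task_ids' key is an assumed
    field name for illustration only."""
    return list(create_object_response.get("task_ids", []))

# Example with a simulated response:
response = {"object_id": "obj_123", "task_ids": ["task_a", "task_b"]}
pending = extract_task_ids(response)
```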
Common Task Types
object_create
Processing an object with a collection
model_tuning
Tuning a machine learning model with custom data
batch_import
Importing multiple objects in a batch
taxonomy_apply
Applying a taxonomy to a collection
clustering
Running clustering algorithms on a collection
index_rebuild
Rebuilding feature store indexes
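When consuming task records, the type names listed above can drive a simple lookup, for example to render a human-readable label (the descriptions are taken verbatim from this list; the function itself is illustrative):

```python
def describe_task(task_type: str) -> str:
    """Map a task type to its description from the list above.
    Unknown types fall through to a generic label."""
    descriptions = {
        "object_create": "Processing an object with a collection",
        "model_tuning": "Tuning a machine learning model with custom data",
        "batch_import": "Importing multiple objects in a batch",
        "taxonomy_apply": "Applying a taxonomy to a collection",
        "clustering": "Running clustering algorithms on a collection",
        "index_rebuild": "Rebuilding feature store indexes",
    }
    return descriptions.get(task_type, f"Unknown task type: {task_type}")
```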
Best Practices
Use Task IDs
Always store task IDs when triggering operations so you can check their status later
Implement Timeouts
Add reasonable timeouts when waiting for tasks to complete to avoid blocking indefinitely
Handle Failures
Implement proper error handling to address failed tasks and understand the cause
Use Webhooks
Set up webhooks for task notifications instead of polling continuously; this reduces load and improves responsiveness
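When polling is still needed, the timeout advice above can be combined with a generic helper. This sketch takes a `get_status` callable so it is independent of any particular SDK; all names here are illustrative, and the terminal statuses come from the table earlier on this page:

```python
import time

TERMINAL = {"completed", "failed", "cancelled"}

def wait_for_task(get_status, timeout_s=60.0, interval_s=1.0):
    """Poll get_status() until the task reaches a terminal status.

    Raises TimeoutError instead of blocking indefinitely, per the
    'Implement Timeouts' practice above.
    """
    deadline = time.monotonic() + timeout_s
    while True:
        status = get_status()
        if status in TERMINAL:
            return status
        if time.monotonic() >= deadline:
            raise TimeoutError(f"task still '{status}' after {timeout_s}s")
        time.sleep(interval_s)

# Simulated task that completes on the third poll:
statuses = iter(["queued", "processing", "completed"])
result = wait_for_task(lambda: next(statuses), timeout_s=5, interval_s=0)
# result == "completed"
```

For production use, webhooks remain preferable; a helper like this is best reserved for scripts and tests where a blocking wait is acceptable.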
Limitations
- Processing Timeout: Tasks time out after 30 minutes of processing
- Log Retention: Task logs are retained for 7 days
- Record Retention: Completed task records are retained for 30 days
- Concurrency Limits: Maximum of 100 concurrent tasks per namespace
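The concurrency limit above can also be respected client-side, so submissions queue locally instead of being rejected. A sketch using a bounded semaphore (the limit value comes from the list above; everything else is illustrative):

```python
import threading

MAX_CONCURRENT_TASKS = 100  # per-namespace limit from the list above

_submit_slots = threading.BoundedSemaphore(MAX_CONCURRENT_TASKS)

def submit_with_limit(submit_fn):
    """Block until a slot is free, then submit the task.
    Call release_slot() once the task reaches a terminal status."""
    _submit_slots.acquire()
    try:
        return submit_fn()
    except Exception:
        _submit_slots.release()  # submission failed; free the slot
        raise

def release_slot():
    """Free a slot after a submitted task finishes."""
    _submit_slots.release()
```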
API Reference
For complete details on working with tasks, see our Tasks API Reference.