Create Interaction
POST /v1/retrievers/interactions
curl --request POST \
  --url https://api.mixpeek.com/v1/retrievers/interactions \
  --header 'Authorization: Bearer <api-key>' \
  --header 'Content-Type: application/json' \
  --header 'X-Namespace: <x-namespace>' \
  --data '
{
  "feature_id": "<string>",
  "interaction_type": [
    "view"
  ],
  "position": 1,
  "metadata": {
    "device": "mobile",
    "duration_ms": 5000,
    "page": "search_results",
    "viewport_position": 0.75
  },
  "user_id": "user_abc123",
  "session_id": "sess_abc123",
  "execution_id": "exec_abc123xyz",
  "retriever_id": "ret_abc123",
  "query_snapshot": {
    "text": "wireless headphones"
  },
  "document_score": 0.95,
  "result_set_size": 10
}
'

Example response:
{
  "feature_id": "<string>",
  "interaction_type": [
    "view"
  ],
  "position": 1,
  "interaction_id": "<string>",
  "metadata": {
    "device": "mobile",
    "duration_ms": 5000,
    "page": "search_results",
    "viewport_position": 0.75
  },
  "user_id": "user_abc123",
  "session_id": "sess_abc123",
  "execution_id": "exec_abc123xyz",
  "retriever_id": "ret_abc123",
  "query_snapshot": {
    "text": "wireless headphones"
  },
  "document_score": 0.95,
  "result_set_size": 10,
  "timestamp": "2025-01-15T10:30:00Z"
}

Headers

Authorization
string
required

REQUIRED: Bearer token authentication using your API key. Format: 'Bearer sk_xxxxxxxxxxxxx'. You can create API keys in the Mixpeek dashboard under Organization Settings.

X-Namespace
string
required

REQUIRED: Namespace identifier for scoping this request. All resources (collections, buckets, taxonomies, etc.) are scoped to a namespace. You can provide either the namespace name or namespace ID. Format: ns_xxxxxxxxxxxxx (ID) or a custom name like 'my-namespace'

Body

application/json

Records a user interaction with a search result.

This model captures user behavior signals that can be used to improve retrieval quality. Each interaction represents a user action (click, view, feedback) on a specific document returned by a retriever.

Use Cases:
- Track which search results users actually click on
- Collect explicit feedback (thumbs up/down) on result quality
- Monitor engagement metrics (time spent viewing, sharing)
- Identify problematic queries (zero results, immediate refinements)
- Power Learning to Rank models with real user behavior

Requirements:
- feature_id: REQUIRED - The document/feature that was interacted with
- interaction_type: REQUIRED - Type(s) of interaction that occurred
- position: REQUIRED - Where in the results list the interaction occurred (for LTR)
- query_snapshot: HIGHLY RECOMMENDED - Query input for training optimization
- document_score: HIGHLY RECOMMENDED - Initial score for LTR training
- result_set_size: OPTIONAL - Total results shown (for context)
- metadata: OPTIONAL - Additional context about the interaction
- user_id: OPTIONAL - For personalization and user-specific metrics
- session_id: OPTIONAL - For tracking multi-query sessions
- execution_id: OPTIONAL - Link to full execution context
- retriever_id: OPTIONAL - For multi-retriever analytics

Related Concepts:
- Retrievers: Interactions measure retriever performance
- Evaluations: Interactions provide a real-world complement to offline evaluation
- Learning to Rank: Interactions train ranking models
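The requirements above can be sketched as a small payload builder. This is an illustrative helper, not part of any Mixpeek SDK; the `build_interaction` name is an assumption, and only the field names come from this endpoint's schema.

```python
API_URL = "https://api.mixpeek.com/v1/retrievers/interactions"

def build_interaction(feature_id, interaction_types, position, **optional):
    """Assemble a Create Interaction payload.

    feature_id, interaction_type, and position are required by the API;
    the remaining fields (query_snapshot, document_score, ...) are optional.
    """
    if not interaction_types:
        raise ValueError("at least one interaction type is required")
    if position < 0:
        raise ValueError("position is 0-indexed and must be >= 0")
    payload = {
        "feature_id": feature_id,
        "interaction_type": list(interaction_types),
        "position": position,
    }
    # Drop optional fields that were not supplied
    payload.update({k: v for k, v in optional.items() if v is not None})
    return payload

payload = build_interaction(
    "doc_123",
    ["view", "click"],
    position=0,
    query_snapshot={"text": "wireless headphones"},  # highly recommended
    document_score=0.95,                             # highly recommended
)
# POST payload to API_URL with your HTTP client, sending the
# Authorization and X-Namespace headers shown in the curl example.
```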

feature_id
string
required

ID of the document/feature that was interacted with. REQUIRED. This should be the document_id returned in retriever results. Used to track which specific items users engage with.

interaction_type
enum<string>[]
required

List of interaction types that occurred. REQUIRED. Multiple types can be recorded simultaneously (e.g., VIEW + CLICK + LONG_VIEW for a result the user engaged with). Use the InteractionType enum values.

Minimum array length: 1

Types of user interactions with search results.

These interaction types are used to track user behavior and improve retrieval quality through Learning to Rank (LTR), collaborative filtering, and embedding fine-tuning.

Values:
VIEW: User viewed a search result
CLICK: User clicked on a search result
POSITIVE_FEEDBACK: User explicitly marked result as relevant/helpful
NEGATIVE_FEEDBACK: User explicitly marked result as not relevant
PURCHASE: User purchased the item (high-value conversion signal)
ADD_TO_CART: User added item to cart (mid-funnel signal)
WISHLIST: User saved item for later (engagement signal)
LONG_VIEW: User spent significant time viewing (dwell time)
SHARE: User shared the result (strong engagement signal)
BOOKMARK: User bookmarked the result
QUERY_REFINEMENT: User modified their search query
ZERO_RESULTS: Query yielded no results (helps identify gaps)
FILTER_TOGGLE: User modified filters (helps understand intent)
SKIP: User skipped over a result to click something lower (negative signal)
RETURN_TO_RESULTS: User quickly returned to results (negative signal)

Usage in Retrieval Optimization:
- LTR (Learning to Rank): Train models to predict click-through rates
- Collaborative Filtering: Find similar users/items based on interactions
- Embedding Fine-tuning: Adjust embeddings based on what users actually click
- Query Understanding: Analyze refinements and zero-result queries
- Result Quality: Identify poorly-performing results via skip/return patterns

Available options:
view,
click,
positive_feedback,
negative_feedback,
purchase,
add_to_cart,
wishlist,
long_view,
share,
bookmark,
query_refinement,
zero_results,
filter_toggle,
skip,
return_to_results
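Since several types can be recorded on one interaction, client code often derives the list from raw telemetry. A minimal sketch, assuming a 2-second dwell-time cutoff and hypothetical event fields (neither the threshold nor the field names are defined by the API):

```python
LONG_VIEW_MS = 2000  # assumed dwell-time cutoff for "long_view"

def classify_result_event(clicked, dwell_ms, returned_quickly):
    """Map one result's telemetry to a list of interaction types."""
    types = ["view"]  # the result was rendered, so it was viewed
    if clicked:
        types.append("click")
        if dwell_ms >= LONG_VIEW_MS:
            types.append("long_view")
        if returned_quickly:
            types.append("return_to_results")  # negative signal
    return types

print(classify_result_event(clicked=True, dwell_ms=5000, returned_quickly=False))
# -> ['view', 'click', 'long_view']
```

The resulting list goes directly into the interaction_type field of a single interaction record.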
position
integer
required

Position in the search results where the interaction occurred (0-indexed). REQUIRED. Critical for Learning to Rank, where it is used to correct for position bias. E.g., position=0 is the first result and position=9 is the 10th. Engagement at lower-ranked positions (further down the list) is a stronger relevance signal, since those results receive fewer clicks by default.

Required range: x >= 0
metadata
Metadata · object

Additional context about the interaction. NOT REQUIRED. Can include device, duration, viewport info, etc. Use this to enrich interaction data with application-specific context.

Example:
{
  "device": "mobile",
  "duration_ms": 5000,
  "page": "search_results",
  "viewport_position": 0.75
}
user_id
string | null

Customer's authenticated user identifier. NOT REQUIRED. Persists across sessions for long-term tracking. Enables personalization and user-specific metrics. Use your application's user ID format.

Example:

"user_abc123"

session_id
string | null

Temporary identifier for a single search session. NOT REQUIRED. Sessions typically last 30 minutes to 1 hour. Tracks both anonymous and authenticated users within a session. Use it to group related queries and understand search journeys.

Example:

"sess_abc123"

execution_id
string | null

ID of the retriever execution that generated these results. NOT REQUIRED but HIGHLY RECOMMENDED for training and optimization. Links the interaction back to the exact search query, pipeline configuration, and stage execution that produced the results the user saw. Essential for: fine-tuning embeddings, training rerankers, query understanding, and tracing which pipeline configs produce better user engagement. Retrieve from the retriever execution response and pass to interactions.

Example:

"exec_abc123xyz"
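Putting the recommended fields together: a sketch that copies execution_id, retriever_id, score, and position from an execution response into a click interaction. The shape of the `execution` dict below is an illustrative assumption, not the documented execution response schema.

```python
# Hypothetical execution response; only the field names used below are assumed.
execution = {
    "execution_id": "exec_abc123xyz",
    "retriever_id": "ret_abc123",
    "results": [
        {"document_id": "doc_1", "score": 0.95},
        {"document_id": "doc_2", "score": 0.87},
    ],
}

def interaction_for_click(execution, document_id, query_text):
    """Build a click interaction for one document in an execution's results."""
    for position, result in enumerate(execution["results"]):
        if result["document_id"] == document_id:
            return {
                "feature_id": document_id,
                "interaction_type": ["click"],
                "position": position,  # 0-indexed rank as shown to the user
                "execution_id": execution["execution_id"],
                "retriever_id": execution["retriever_id"],
                "query_snapshot": {"text": query_text},
                "document_score": result["score"],
                "result_set_size": len(execution["results"]),
            }
    raise KeyError(f"{document_id} not in execution results")

event = interaction_for_click(execution, "doc_2", "wireless headphones")
```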

retriever_id
string | null

ID of the retriever that was executed. NOT REQUIRED but RECOMMENDED for multi-retriever analytics. Enables comparing performance across different retriever configurations. If execution_id is provided, retriever_id can be inferred from the execution record.

Example:

"ret_abc123"

query_snapshot
Query Snapshot · object

Snapshot of the query input that generated these results. HIGHLY RECOMMENDED for training optimization. Storing the query directly enables 10-100x faster training data extraction by avoiding expensive joins to execution records. Use the same format as retriever query input (e.g., {'text': '...', 'filters': {...}}). Essential for: embedding fine-tuning (query-document pairs), query expansion learning, and analyzing which query patterns lead to better engagement. NOT REQUIRED but strongly recommended for production use cases involving model training.

Example:
{ "text": "wireless headphones" }
document_score
number | null

Initial retrieval score of this document when shown to the user. HIGHLY RECOMMENDED for Learning to Rank (LTR). This is a critical feature for reranker training - helps the model learn how to adjust initial scores based on user engagement. Should match the score from the retriever execution results. NOT REQUIRED but strongly recommended for LTR and reranker training.

Example:

0.95

result_set_size
integer | null

Total number of results shown to the user in this search. NOT REQUIRED but useful for context. Helps understand interaction patterns - clicking position 5 of 10 results is different from position 5 of 100 results. Useful for position bias correction and CTR analysis.

Required range: x >= 1
Example:

10
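As an illustration of the analysis that position and result_set_size enable, a sketch computing click-through rate per position from logged interactions (the log entries mirror this endpoint's fields; the helper itself is hypothetical):

```python
from collections import Counter

def ctr_by_position(interactions):
    """Click-through rate per 0-indexed position across logged interactions."""
    views, clicks = Counter(), Counter()
    for it in interactions:
        pos = it["position"]
        if "view" in it["interaction_type"]:
            views[pos] += 1
        if "click" in it["interaction_type"]:
            clicks[pos] += 1
    return {pos: clicks[pos] / views[pos] for pos in views if views[pos]}

log = [
    {"position": 0, "interaction_type": ["view", "click"]},
    {"position": 0, "interaction_type": ["view"]},
    {"position": 1, "interaction_type": ["view", "click"]},
]
print(ctr_by_position(log))  # -> {0: 0.5, 1: 1.0}
```

Comparing these curves across result_set_size buckets is one way to estimate position bias before training a reranker.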

Response

Successful Response

Response model for a stored interaction.

Extends SearchInteraction with system-assigned fields.

feature_id
string
required

ID of the document/feature that was interacted with. REQUIRED. This should be the document_id returned in retriever results. Used to track which specific items users engage with.

interaction_type
enum<string>[]
required

List of interaction types that occurred. REQUIRED. Multiple types can be recorded simultaneously (e.g., VIEW + CLICK + LONG_VIEW for a result the user engaged with). Use the InteractionType enum values.

Minimum array length: 1

Types of user interactions with search results.

These interaction types are used to track user behavior and improve retrieval quality through Learning to Rank (LTR), collaborative filtering, and embedding fine-tuning.

Values:
VIEW: User viewed a search result
CLICK: User clicked on a search result
POSITIVE_FEEDBACK: User explicitly marked result as relevant/helpful
NEGATIVE_FEEDBACK: User explicitly marked result as not relevant
PURCHASE: User purchased the item (high-value conversion signal)
ADD_TO_CART: User added item to cart (mid-funnel signal)
WISHLIST: User saved item for later (engagement signal)
LONG_VIEW: User spent significant time viewing (dwell time)
SHARE: User shared the result (strong engagement signal)
BOOKMARK: User bookmarked the result
QUERY_REFINEMENT: User modified their search query
ZERO_RESULTS: Query yielded no results (helps identify gaps)
FILTER_TOGGLE: User modified filters (helps understand intent)
SKIP: User skipped over a result to click something lower (negative signal)
RETURN_TO_RESULTS: User quickly returned to results (negative signal)

Usage in Retrieval Optimization:
- LTR (Learning to Rank): Train models to predict click-through rates
- Collaborative Filtering: Find similar users/items based on interactions
- Embedding Fine-tuning: Adjust embeddings based on what users actually click
- Query Understanding: Analyze refinements and zero-result queries
- Result Quality: Identify poorly-performing results via skip/return patterns

Available options:
view,
click,
positive_feedback,
negative_feedback,
purchase,
add_to_cart,
wishlist,
long_view,
share,
bookmark,
query_refinement,
zero_results,
filter_toggle,
skip,
return_to_results
position
integer
required

Position in the search results where the interaction occurred (0-indexed). REQUIRED. Critical for Learning to Rank, where it is used to correct for position bias. E.g., position=0 is the first result and position=9 is the 10th. Engagement at lower-ranked positions (further down the list) is a stronger relevance signal, since those results receive fewer clicks by default.

Required range: x >= 0
interaction_id
string
required

Unique identifier for this interaction record. System-assigned UUID. Use this to reference the interaction in subsequent requests.

metadata
Metadata · object

Additional context about the interaction. NOT REQUIRED. Can include device, duration, viewport info, etc. Use this to enrich interaction data with application-specific context.

Example:
{
  "device": "mobile",
  "duration_ms": 5000,
  "page": "search_results",
  "viewport_position": 0.75
}
user_id
string | null

Customer's authenticated user identifier. NOT REQUIRED. Persists across sessions for long-term tracking. Enables personalization and user-specific metrics. Use your application's user ID format.

Example:

"user_abc123"

session_id
string | null

Temporary identifier for a single search session. NOT REQUIRED. Sessions typically last 30 minutes to 1 hour. Tracks both anonymous and authenticated users within a session. Use it to group related queries and understand search journeys.

Example:

"sess_abc123"

execution_id
string | null

ID of the retriever execution that generated these results. NOT REQUIRED but HIGHLY RECOMMENDED for training and optimization. Links the interaction back to the exact search query, pipeline configuration, and stage execution that produced the results the user saw. Essential for: fine-tuning embeddings, training rerankers, query understanding, and tracing which pipeline configs produce better user engagement. Retrieve from the retriever execution response and pass to interactions.

Example:

"exec_abc123xyz"

retriever_id
string | null

ID of the retriever that was executed. NOT REQUIRED but RECOMMENDED for multi-retriever analytics. Enables comparing performance across different retriever configurations. If execution_id is provided, retriever_id can be inferred from the execution record.

Example:

"ret_abc123"

query_snapshot
Query Snapshot · object

Snapshot of the query input that generated these results. HIGHLY RECOMMENDED for training optimization. Storing the query directly enables 10-100x faster training data extraction by avoiding expensive joins to execution records. Use the same format as retriever query input (e.g., {'text': '...', 'filters': {...}}). Essential for: embedding fine-tuning (query-document pairs), query expansion learning, and analyzing which query patterns lead to better engagement. NOT REQUIRED but strongly recommended for production use cases involving model training.

Example:
{ "text": "wireless headphones" }
document_score
number | null

Initial retrieval score of this document when shown to the user. HIGHLY RECOMMENDED for Learning to Rank (LTR). This is a critical feature for reranker training - helps the model learn how to adjust initial scores based on user engagement. Should match the score from the retriever execution results. NOT REQUIRED but strongly recommended for LTR and reranker training.

Example:

0.95

result_set_size
integer | null

Total number of results shown to the user in this search. NOT REQUIRED but useful for context. Helps understand interaction patterns - clicking position 5 of 10 results is different from position 5 of 100 results. Useful for position bias correction and CTR analysis.

Required range: x >= 1
Example:

10

timestamp
string | null

ISO 8601 timestamp when the interaction was recorded. System-assigned. Used for time-based analysis, training data recency weighting, and temporal trends in user behavior.

Example:

"2025-01-15T10:30:00Z"
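The timestamp can drive the recency weighting of training data mentioned above. A sketch, assuming a 30-day half-life (an illustrative choice, not an API default):

```python
from datetime import datetime, timezone

HALF_LIFE_DAYS = 30.0  # assumed decay half-life for training weights

def recency_weight(timestamp, now=None):
    """Exponential decay weight in (0, 1] based on interaction age."""
    # fromisoformat does not accept a trailing "Z" on older Pythons,
    # so normalize it to an explicit UTC offset first.
    recorded = datetime.fromisoformat(timestamp.replace("Z", "+00:00"))
    now = now or datetime.now(timezone.utc)
    age_days = (now - recorded).total_seconds() / 86400.0
    return 0.5 ** (age_days / HALF_LIFE_DAYS)

now = datetime(2025, 2, 14, 10, 30, tzinfo=timezone.utc)
w = recency_weight("2025-01-15T10:30:00Z", now=now)  # exactly 30 days old -> 0.5
```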