Every search system starts static: you configure stages, deploy a retriever, and hope the ranking is good enough. Relevance engineering turns that static system into a learning loop where every user interaction makes results better.
The relevance feedback loop: Search → Results → Interact → Learn → Measure

The Feedback Loop

The loop has five steps (code sketches of the fusion, interaction capture, and learning steps follow the list):
  1. Search — A user submits a query. Your retriever executes feature searches across one or more embedding indexes, fusing results with a strategy (RRF, weighted, or learned).
  2. Results — The retriever returns ranked documents. Each result has a score, metadata, and position in the list.
  3. Interact — Users click, purchase, skip, or provide feedback. You capture these signals through the Interactions API.
  4. Learn — Mixpeek’s Thompson Sampling algorithm updates Beta distributions for each feature, learning which embedding indexes produce the most engaging results for different user segments.
  5. Measure — You run evaluations against ground truth datasets and monitor analytics to confirm the system is improving.
Then the loop repeats. The next search benefits from everything learned so far.
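To make the fusion in step 1 concrete, here is a minimal sketch of Reciprocal Rank Fusion over two ranked lists. The function name and the constant k = 60 are illustrative defaults, not Mixpeek's implementation.

```python
from collections import defaultdict

def rrf_fuse(ranked_lists, k=60):
    """Reciprocal Rank Fusion: score(d) = sum over lists of 1 / (k + rank(d))."""
    scores = defaultdict(float)
    for ranking in ranked_lists:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    # Highest fused score first
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

# Fuse rankings from a text index and an image index
text_hits = ["doc_a", "doc_b", "doc_c"]
image_hits = ["doc_c", "doc_a", "doc_d"]
print(rrf_fuse([text_hits, image_hits]))
```

Documents that rank well in several indexes rise to the top even when no single index puts them first.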
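Step 3 is structured event capture. The shape below is a hypothetical example of the kind of signal you would record per result; the field names are illustrative, not the Interactions API schema.

```python
# Hypothetical interaction event -- field names are illustrative,
# not the actual Interactions API schema.
interaction_event = {
    "query_id": "q_123",        # ties the event back to the originating search
    "document_id": "doc_a",     # the result the user acted on
    "position": 2,              # rank at which it was shown
    "type": "click",            # e.g. click, purchase, skip
    "user_segment": "mobile",   # optional context for per-segment learning
}
```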
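Step 4 can be pictured as one Beta distribution per embedding feature, updated from those events. The sketch below is a generic Thompson Sampling loop under that framing; the feature names and reward definition are assumptions, not Mixpeek's internals.

```python
import random

# One Beta(alpha, beta) arm per embedding feature (illustrative names)
arms = {"text_embedding": [1, 1], "image_embedding": [1, 1]}

def choose_feature():
    # Sample a plausible engagement rate from each arm; favor the highest draw
    samples = {name: random.betavariate(a, b) for name, (a, b) in arms.items()}
    return max(samples, key=samples.get)

def record_outcome(feature, engaged):
    # A click or purchase counts as success; a skip counts as failure
    if engaged:
        arms[feature][0] += 1  # alpha: successes
    else:
        arms[feature][1] += 1  # beta: failures

feature = choose_feature()
record_outcome(feature, engaged=True)
```

Features that keep producing engaged results earn tighter, higher Beta distributions and therefore more influence on future rankings.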

How It All Connects

Interactions feed two systems simultaneously:
  - Learned fusion uses them to adjust how multiple embedding features are weighted at query time, making real-time improvements without manual tuning.
  - Evaluations use them (via ground truth datasets derived from interaction history) to measure quality offline, giving you confidence before changing production retrievers.
Analytics ties everything together by surfacing slow queries, cache performance, and AI-powered tuning recommendations. Benchmarks let you replay historical sessions against candidate configurations to predict their impact before going live.
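To see what the offline measurement looks like in practice, here is a small sketch that scores a ranked result list against a ground-truth relevant set with precision@k. The data shapes are assumptions for illustration, not the Evaluations API.

```python
def precision_at_k(ranked_ids, relevant_ids, k=10):
    """Fraction of the top-k results that appear in the ground-truth relevant set."""
    top_k = ranked_ids[:k]
    return sum(1 for doc_id in top_k if doc_id in relevant_ids) / k

# Ground truth derived from interaction history (illustrative)
relevant = {"doc_a", "doc_c"}
ranked = ["doc_c", "doc_b", "doc_a", "doc_d"]
print(precision_at_k(ranked, relevant, k=3))  # 2 of the top 3 are relevant
```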
Start simple, graduate to learned. Begin with rrf fusion (the default — no configuration needed). Once you have 100+ interactions, switch to learned fusion to let the system adapt automatically. Use evaluations to verify the improvement.
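The exact retriever configuration depends on your schema; the snippet below is only a hypothetical illustration of that progression, with field names that are assumptions rather than Mixpeek's API.

```python
# Hypothetical retriever configs -- field names are illustrative
rrf_retriever = {
    "features": ["text_embedding", "image_embedding"],
    "fusion": {"strategy": "rrf"},      # default: no tuning required
}

learned_retriever = {
    "features": ["text_embedding", "image_embedding"],
    "fusion": {"strategy": "learned"},  # enable once ~100+ interactions are logged
}
```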