The Feedback Loop
The loop has five steps:

- Search — A user submits a query. Your retriever executes feature searches across one or more embedding indexes, fusing results with a strategy (RRF, weighted, or learned).
- Results — The retriever returns ranked documents. Each result has a score, metadata, and position in the list.
- Interact — Users click, purchase, skip, or provide feedback. You capture these signals through the Interactions API.
- Learn — Mixpeek’s Thompson Sampling algorithm updates Beta distributions for each feature, learning which embedding indexes produce the most engaging results for different user segments.
- Measure — You run evaluations against ground truth datasets and monitor analytics to confirm the system is improving.
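To make the Search step concrete, here is a minimal sketch of Reciprocal Rank Fusion, the simplest of the fusion strategies named above. It is a generic illustration, not Mixpeek's implementation; the index names are hypothetical.

```python
from collections import defaultdict

def rrf_fuse(rankings, k=60):
    """Reciprocal Rank Fusion: merge ranked lists from several indexes.

    rankings: list of ranked doc-id lists (best first).
    k: smoothing constant; 60 is the value from the original RRF paper.
    """
    scores = defaultdict(float)
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            # A document earns 1/(k + rank) from each list it appears in.
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Two indexes disagree; RRF rewards documents ranked well in both.
text_index = ["d1", "d2", "d3"]
image_index = ["d3", "d1", "d4"]
print(rrf_fuse([text_index, image_index]))  # → ['d1', 'd3', 'd2', 'd4']
```

Because RRF uses only ranks, not raw scores, it needs no score normalization across indexes, which is why it is a common default before learned fusion has enough interaction data.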
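The Learn step can be sketched as Thompson Sampling over per-feature Beta distributions. This is a generic illustration of the technique, assuming a simple engaged/not-engaged reward; Mixpeek's actual update rules, priors, and per-segment handling are not shown, and the class and feature names are hypothetical.

```python
import random

class ThompsonWeights:
    """One Beta(successes + 1, failures + 1) distribution per feature index."""

    def __init__(self, features):
        self.alpha = {f: 1.0 for f in features}  # prior + observed successes
        self.beta = {f: 1.0 for f in features}   # prior + observed failures

    def sample_weights(self):
        # Draw one sample per feature, then normalize into fusion weights.
        draws = {f: random.betavariate(self.alpha[f], self.beta[f])
                 for f in self.alpha}
        total = sum(draws.values())
        return {f: d / total for f, d in draws.items()}

    def update(self, feature, engaged):
        # Click/purchase counts as a success; skip counts as a failure.
        if engaged:
            self.alpha[feature] += 1
        else:
            self.beta[feature] += 1

ts = ThompsonWeights(["text_embed", "image_embed"])
for _ in range(200):
    ts.update("text_embed", engaged=True)    # users engage with text results
    ts.update("image_embed", engaged=False)  # and skip image results
w = ts.sample_weights()  # text_embed now almost always draws the larger weight
```

Sampling from the posterior, rather than always using the mean, keeps some exploration alive: a feature with few observations still gets occasional weight, so the system can discover when it starts producing engaging results.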
What to Read Next
- Interaction Signals — Strategy for capturing the right signals and building user preference profiles.
- Fusion Strategies — Deep dive into RRF, DBSF, Weighted, Max, and Learned fusion with formulas.
- Learned Fusion — How Thompson Sampling adapts fusion weights from interaction data.
- Evaluations — Measure retriever quality with NDCG, Precision, Recall, MAP, MRR, and F1.
- Analytics — Monitor latency, stage performance, slow queries, and AI tuning recommendations.
- Benchmarks — Replay historical sessions to compare retriever configurations before deploying.
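Of the evaluation metrics listed above, NDCG is the one that accounts for both graded relevance and rank position. A minimal sketch of NDCG@k, using the common exponential-gain formulation (this is a textbook definition, not necessarily the exact variant the Evaluations API computes):

```python
import math

def ndcg_at_k(relevances, k=10):
    """NDCG@k for one query.

    relevances: graded relevance of each returned document, in ranked order
    (e.g. 2 = highly relevant, 1 = somewhat relevant, 0 = irrelevant).
    """
    def dcg(rels):
        # Gain 2^rel - 1, discounted by log2 of the 1-indexed position + 1.
        return sum((2 ** r - 1) / math.log2(i + 2)
                   for i, r in enumerate(rels[:k]))

    ideal = dcg(sorted(relevances, reverse=True))  # best possible ordering
    return dcg(relevances) / ideal if ideal > 0 else 0.0

print(ndcg_at_k([2, 1, 0]))         # perfect ordering → 1.0
print(round(ndcg_at_k([0, 1, 2]), 3))  # best doc buried at rank 3 → 0.587
```

Averaging this per-query score over a ground truth dataset gives the single number you would track across retriever configurations in the Measure step.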

