Cross-encoder
Cross-encoders process a sentence pair jointly through a single transformer, trading inference speed for higher semantic accuracy.
Cross-encoders outperform bi-encoders in accuracy by feeding both inputs into the model at once, allowing full self-attention across the entire concatenated sequence. While bi-encoders (such as SBERT) map each sentence to an independent vector that can be precomputed and compared cheaply, cross-encoders capture nuanced token-level interactions between the two texts. The trade-off is that every pair requires a fresh forward pass, so scores cannot be precomputed or indexed, which makes cross-encoders impractical for first-stage retrieval over large corpora. Instead, this architecture is the standard choice for re-ranking the top 10 to 100 results returned by a fast retriever. A model such as 'cross-encoder/ms-marco-MiniLM-L-6-v2' provides high-precision relevance scores for tasks where accuracy outweighs inference cost.
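A minimal sketch of this re-ranking step, assuming the sentence-transformers library is installed; the query and candidate passages here are illustrative placeholders:

```python
from sentence_transformers import CrossEncoder

# Load the pretrained re-ranking model named above.
model = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")

query = "How do cross-encoders differ from bi-encoders?"
# Candidate passages, e.g. the top hits returned by a fast retriever.
passages = [
    "Cross-encoders feed both sentences through one transformer jointly.",
    "Bi-encoders embed each sentence independently for fast similarity search.",
    "The weather in Berlin is mild in spring.",
]

# Each (query, passage) pair is scored in a single joint forward pass;
# higher score = more relevant.
scores = model.predict([(query, p) for p in passages])

# Re-rank passages by descending relevance score.
for score, passage in sorted(zip(scores, passages), key=lambda t: t[0], reverse=True):
    print(f"{score:.3f}  {passage}")
```

In practice the pair list comes from the top hits of a bi-encoder or BM25 retriever, keeping the number of expensive cross-encoder forward passes small.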