Search Engines Amsterdam: Efficiency in neural IR

STARTS AT 17:00
LAB42, L3.36

Information drives the planet. Search Engines Amsterdam (SEA) organizes talks on implementations of information retrieval in search engines, recommender systems, and conversational assistants. SEA hosts monthly meetups followed by drinks.

This will be a hybrid event; the in-person part will take place at Lab42, Science Park, room L3.36.

You will be able to view the Zoom link once you 'attend' the meetup on this page.

In this edition of SEA we will discuss efficiency in neural IR.

Speakers

There will be two amazing speakers: Jurek Leonhardt (TU Delft) and Carlos Lassance (Naverlabs).

17.00: Carlos Lassance (Naverlabs), An overview of 3 years of SPLADE

In this talk I will give an overview of SPLADE, a recent technique for Learned Sparse Retrieval (LSR). I will go through our last three years of research on this subject, detailing advances in the training, efficiency, and effectiveness of such models. Finally, I will present what I feel are the next steps, with research directions ranging from multilinguality and multimodality to out-of-domain generalization.
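To make the idea of Learned Sparse Retrieval concrete, here is a minimal sketch of SPLADE's core scoring step: per-token masked-language-model logits are turned into a vocabulary-sized sparse vector via log-saturation, and documents are scored by a sparse dot product. The random logits below are toy stand-ins (not a real model), and max-pooling follows the SPLADE-max variant; this is an assumption-laden illustration, not the speaker's implementation.

```python
import numpy as np

def splade_weights(logits):
    """Turn per-token MLM logits into a vocabulary-sized sparse representation.
    SPLADE applies log-saturation log(1 + relu(logit)) and pools over tokens
    (max-pooling here, as in the SPLADE-max variant)."""
    saturated = np.log1p(np.maximum(logits, 0.0))  # relu then log(1 + x)
    return saturated.max(axis=0)                   # shape: (vocab_size,)

rng = np.random.default_rng(0)
# Toy stand-ins for MLM logits of a query (4 tokens) and a document (6 tokens)
# over a vocabulary of 10 terms; mostly-negative logits yield sparse vectors.
q = splade_weights(rng.normal(-2.0, 1.0, size=(4, 10)))
d = splade_weights(rng.normal(-2.0, 1.0, size=(6, 10)))
score = float(np.dot(q, d))  # retrieval score = sparse dot product
```

Because the resulting vectors live in the vocabulary space and are mostly zero, they can be served from a classic inverted index, which is where LSR gets its efficiency.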

17.30: Jurek Leonhardt (TU Delft), Efficient Neural Ranking using Forward Indexes and Lightweight Encoders

Abstract: Dual-encoder-based dense retrieval models have become the standard in IR. They employ large Transformer-based language models, which are notoriously inefficient in terms of resources and latency. We propose Fast-Forward indexes, vector forward indexes which exploit the semantic matching capabilities of dual-encoder models for efficient and effective re-ranking. Our framework enables re-ranking at very high retrieval depths and combines the merits of both lexical and semantic matching via score interpolation. Furthermore, to mitigate the limitations of dual-encoders, we tackle two main challenges. Firstly, we improve computational efficiency by pre-computing representations, avoiding unnecessary computations altogether, or reducing the complexity of the encoders, which considerably improves ranking efficiency and latency. Secondly, we optimize the memory footprint and maintenance cost of indexes: we propose two complementary techniques to reduce the index size and show that, by dynamically dropping irrelevant document tokens, index maintenance efficiency can be improved substantially. Our evaluation shows that Fast-Forward indexes are both effective and efficient: our method has low latency and achieves competitive results without the need for hardware acceleration, such as GPUs.
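The interpolation step described in the abstract can be sketched in a few lines: document vectors are pre-computed into a forward index (a plain id-to-vector lookup), and at query time each candidate's lexical score is blended with a dense dot-product score. All names, sizes, and the `alpha` parameter below are illustrative assumptions, not the talk's actual code.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical forward index: doc id -> pre-computed dense document vector.
# In practice this is built offline once, so no re-encoding happens at query time.
forward_index = {f"d{i}": rng.normal(size=8) for i in range(3)}

def fast_forward_rerank(query_vec, lexical_scores, alpha=0.5):
    """Re-rank first-stage candidates by interpolating their lexical scores
    with dense scores looked up from the forward index (no GPU needed)."""
    scores = {}
    for doc_id, lex in lexical_scores.items():
        sem = float(np.dot(query_vec, forward_index[doc_id]))  # cheap lookup + dot
        scores[doc_id] = alpha * lex + (1 - alpha) * sem
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Toy first-stage (e.g. BM25) scores for three candidate documents
bm25_scores = {"d0": 12.0, "d1": 7.5, "d2": 3.1}
ranking = fast_forward_rerank(np.ones(8), bm25_scores, alpha=0.7)
```

Because the semantic score is a single vector lookup and dot product per candidate, re-ranking stays cheap even at very high retrieval depths, which is the efficiency argument the abstract makes.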

Just keep counting: SEA talks #259 and #260.