
Thesis: Multimodal Information Retrieval
Date of Defense: July 4, 2018
Supervisors:
Prof. Dr. Jacques Savoy (UNINE)
Prof. Dr. Martin Braschler (ZHAW)
Keywords: Multimodal, Information Retrieval, Probabilistic Information Retrieval Model, BM25 Weighting Scheme

Abstract: Knowledge-intensive business processes, one of the essential drivers of our economy today, often rely on multimodal information retrieval systems that have to deal with increasingly complex document collections and queries. The complexity mainly arises from the large and diverse range of textual and non-textual modalities, such as geographical coordinates, ratings and timestamps, used in the collections. This leads to an explosion of modality combinations, which makes it infeasible to devise new approaches for each individual modality and to obtain suitable training data. Therefore, one of the major goals of this dissertation is to develop unified models for treating modalities in document retrieval.
Further, we aim to develop methods that merge the modalities with little or no training, which is essential for their applicability across a wide range of applications and domains.
We base our approach on our experience with several multimodal information retrieval applications and thus with many different modalities. As a first step, we suggest a coarse categorization of modalities into two types, which we further subdivide by their distributions. This categorization is a first attempt to reduce the number of distinct models: it allows methods to be generalized to entire categories of modalities instead of being tailored to a single modality.
Since the most popular weighting schemes for textual retrieval have generalized well to many retrieval tasks in the past, we propose to use them as the basis of the unified models for these categories of modalities. As a second step, we therefore demonstrate how the three main components of the BM25 weighting scheme (term frequency, document frequency and document length normalization) have to be redefined so that it can be used with several non-textual modalities.
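
For reference, the widely used textual formulation of BM25 makes the three components explicit; it is recalled here only as background, not as the redefined variant developed in the thesis (k_1 and b are the usual free parameters):

    % Standard textual BM25 score of document D for query Q.
    % tf(t,D): term frequency of t in D; df(t): document frequency of t;
    % N: number of documents; |D|: document length; avgdl: average document length.
    \operatorname{score}(D,Q) \;=\; \sum_{t \in Q}
        \log\frac{N - \mathit{df}(t) + 0.5}{\mathit{df}(t) + 0.5}
        \cdot
        \frac{\mathit{tf}(t,D)\,(k_1 + 1)}
             {\mathit{tf}(t,D) + k_1\bigl(1 - b + b\,\frac{|D|}{\mathit{avgdl}}\bigr)}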
As a third step towards establishing clear guidance for integrating many modalities into an information retrieval system, we demonstrate that BM25 is a suitable weighting scheme for merging modalities under the so-called raw-score merging hypothesis. We achieve this with a sampling-based approach, which we use to prove that BM25 satisfies the assumptions of the raw-score merging hypothesis with respect to the average document length and the variance of document lengths.
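
As an illustrative sketch only (not the implementation used in the thesis), merging under the raw-score merging hypothesis amounts to adding up the per-modality BM25 scores without any learned weights; the modality names and score values below are hypothetical:

    # Illustrative sketch: raw-score merging of per-modality BM25 scores.
    # Under the raw-score merging hypothesis the per-modality scores are
    # assumed to be directly comparable, so the multimodal score is simply
    # their unweighted sum.
    from collections import defaultdict

    def merge_raw_scores(per_modality_scores):
        """per_modality_scores: dict modality -> dict doc_id -> BM25 score."""
        merged = defaultdict(float)
        for scores in per_modality_scores.values():
            for doc_id, score in scores.items():
                merged[doc_id] += score  # no training, no per-modality weights
        return sorted(merged.items(), key=lambda item: item[1], reverse=True)

    # Hypothetical example with a textual and two non-textual modalities.
    ranking = merge_raw_scores({
        "text":   {"d1": 7.2, "d2": 3.1},
        "geo":    {"d1": 0.8, "d3": 2.4},
        "rating": {"d2": 1.5, "d3": 0.6},
    })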
Using our redefinition of BM25 for several non-textual modalities together with textual modalities, we finally build multimodal baselines and test them in evaluation campaigns as well as in operational information retrieval systems. We show that our untrained multimodal baselines achieve significantly better retrieval effectiveness than the textual baseline and, in some cases, even perform similarly to a trained linear combination of the modality scores.
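
For contrast with the untrained baseline above, a trained linear combination weights each modality's score before summation; a minimal sketch, assuming the weights have already been fitted on relevance judgments (the weights shown are hypothetical placeholders):

    # Minimal sketch: trained linear combination of modality scores, shown
    # only to contrast with the untrained, unweighted merging above.
    def combine_weighted(per_modality_scores, weights):
        """weights: dict modality -> float, e.g. fitted on held-out relevance judgments."""
        combined = {}
        for modality, scores in per_modality_scores.items():
            w = weights.get(modality, 0.0)
            for doc_id, score in scores.items():
                combined[doc_id] = combined.get(doc_id, 0.0) + w * score
        return sorted(combined.items(), key=lambda item: item[1], reverse=True)

    ranking = combine_weighted(
        {"text": {"d1": 7.2, "d2": 3.1}, "geo": {"d1": 0.8, "d3": 2.4}},
        weights={"text": 0.7, "geo": 0.3},  # hypothetical learned weights
    )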