Abstract
This paper addresses the challenge of efficiently querying related multimodal data in data lakes, large-scale storage and management systems that support heterogeneous data formats, including structured, semi-structured, and unstructured data. Multimodal queries are crucial because they enable seamless retrieval of related data across modalities, such as tables, images, and text, with applications in fields like e-commerce, healthcare, and education. However, existing methods focus primarily on single-modality queries, such as joinable or unionable table discovery, and struggle to handle the heterogeneity and lack of metadata in data lakes while balancing accuracy and efficiency. To tackle these challenges, we propose MQDL, a Multimodal data Query mechanism for Data Lakes, which employs a modality-adaptive indexing mechanism and contrastive learning-based embeddings to unify representations across modalities. We further introduce product quantization to accelerate candidate verification during queries, reducing computational overhead while maintaining precision. We evaluate MQDL on a table-image dataset spanning multiple business scenarios, measuring precision, recall, and F1-score. Results show that MQDL achieves an accuracy of approximately 90% while demonstrating strong scalability and shorter query response times than traditional methods. These findings highlight MQDL's potential to enhance multimodal data retrieval in complex data lake environments.
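To make the product-quantization step concrete, the sketch below shows the standard technique the abstract refers to: embeddings are split into subspaces, each subspace is compressed to a small codebook index, and query-to-candidate distances are approximated via per-subspace lookup tables before exact verification of a shortlist. This is a minimal illustration, not MQDL's implementation; the subspace count, codebook size, and all function names are assumptions.

```python
# Minimal product-quantization (PQ) sketch for fast candidate verification.
# All names and parameters here are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans

M = 4   # number of subspaces (assumed)
K = 16  # centroids per subspace (256 is typical; kept small for the demo)

def train_pq(vectors, m=M, k=K):
    """Split each d-dim vector into m subvectors; learn k centroids per subspace."""
    d = vectors.shape[1]
    assert d % m == 0, "dimension must be divisible by the number of subspaces"
    sub_d = d // m
    codebooks = []
    for i in range(m):
        sub = vectors[:, i * sub_d:(i + 1) * sub_d]
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(sub)
        codebooks.append(km.cluster_centers_)  # shape (k, sub_d)
    return codebooks

def encode(vectors, codebooks):
    """Compress each vector to m one-byte centroid indices."""
    m = len(codebooks)
    sub_d = vectors.shape[1] // m
    codes = np.empty((vectors.shape[0], m), dtype=np.uint8)
    for i, cb in enumerate(codebooks):
        sub = vectors[:, i * sub_d:(i + 1) * sub_d]
        dists = ((sub[:, None, :] - cb[None, :, :]) ** 2).sum(-1)
        codes[:, i] = dists.argmin(1)
    return codes

def asymmetric_distances(query, codes, codebooks):
    """Approximate squared distances from a query to all encoded vectors via
    per-subspace lookup tables -- the step that reduces verification cost."""
    m = len(codebooks)
    sub_d = query.shape[0] // m
    tables = np.stack([
        ((query[i * sub_d:(i + 1) * sub_d][None, :] - cb) ** 2).sum(1)
        for i, cb in enumerate(codebooks)
    ])  # shape (m, k)
    return tables[np.arange(m)[None, :], codes].sum(1)

# Usage: rank candidates by approximate distance, then verify only the
# top few against the original (uncompressed) embeddings.
rng = np.random.default_rng(0)
base = rng.normal(size=(1000, 64)).astype(np.float32)
q = rng.normal(size=64).astype(np.float32)
cbs = train_pq(base)
codes = encode(base, cbs)
approx = asymmetric_distances(q, codes, cbs)
shortlist = np.argsort(approx)[:10]  # candidates passed to exact verification
```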