Multi-Modal Data Fusion for Enhanced Analytics: Techniques for Integrating Structured and Unstructured Big Data


Serena Chen
Md. Murad Hossain

Abstract

With the exponential growth of data from diverse sources, organizations are increasingly looking to integrate and analyze multi-modal data to gain deeper insights. Structured data from databases and sensors can provide quantitative measurements, while unstructured data from text, images, video, and audio can provide contextual, qualitative understanding. Multi-modal data fusion enables a more comprehensive view by combining the breadth of unstructured data with the depth of structured data. This paper provides an overview of multi-modal data fusion techniques for enhancing analytics. It covers methods such as entity matching, linking, and resolution that integrate structured and unstructured data at the entity level. Techniques such as feature extraction and sensor fusion that consolidate data at the feature level are also discussed. The relative strengths and limitations of different techniques are considered in the context of analytics objectives. Challenges such as semantic alignment, data veracity, and cognitive load are examined. The paper concludes with best practices and future directions for multi-modal data fusion.
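To make the entity-level idea concrete, below is a minimal sketch of one simple approach: matching entity mentions extracted from unstructured text against records in a structured table using normalized string similarity from Python's standard-library difflib. The record names, IDs, and the 0.6 similarity threshold are hypothetical illustrations, not the specific methods surveyed in the paper.

```python
# Minimal sketch of entity-level fusion: link entity mentions found in
# unstructured text to records in a structured table via string similarity.
# All records, mentions, and the threshold below are hypothetical examples.
from difflib import SequenceMatcher

# Structured side: records keyed by an internal ID.
structured_records = {
    "C001": {"name": "Acme Corporation", "revenue": 5_200_000},
    "C002": {"name": "Globex Industries", "revenue": 1_750_000},
}

# Unstructured side: entity mentions pulled from text (e.g., by an NER step).
text_mentions = ["ACME Corp.", "Globex Ind.", "Initech LLC"]

def normalize(name: str) -> str:
    """Lowercase and drop punctuation so surface variants compare fairly."""
    return "".join(ch for ch in name.lower() if ch.isalnum() or ch.isspace()).strip()

def best_match(mention: str, records: dict, threshold: float = 0.6):
    """Return (record ID, score) for the most similar record name,
    or (None, score) if no similarity clears the threshold."""
    best_id, best_score = None, 0.0
    for rec_id, rec in records.items():
        score = SequenceMatcher(None, normalize(mention), normalize(rec["name"])).ratio()
        if score > best_score:
            best_id, best_score = rec_id, score
    return (best_id, best_score) if best_score >= threshold else (None, best_score)

for mention in text_mentions:
    rec_id, score = best_match(mention, structured_records)
    print(f"{mention!r} -> {rec_id} (similarity {score:.2f})")
```

Running the sketch resolves "ACME Corp." to C001 and "Globex Ind." to C002, while "Initech LLC" falls below the threshold and is left unmatched; once mentions are resolved, the structured attributes (here, revenue) can be joined to the text-derived context.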

Article Details

How to Cite
Chen, S., & Hossain, M. M. (2022). Multi-Modal Data Fusion for Enhanced Analytics: Techniques for Integrating Structured and Unstructured Big Data. AI, IoT and the Fourth Industrial Revolution Review, 12(12), 32–39. Retrieved from https://scicadence.com/index.php/AI-IoT-REVIEW/article/view/19