Knowledge Bases and Multiple Modalities

Pouya Pezeshkpour, Anand Mishra, Alice Wang, Hugues Bouchard, Aasish Pappu, Partha Talukdar, Sameer Singh

Please visit https://kb-mm-2020.github.io for more details on the workshop schedule.

Schedule

0:00 - Opening Remarks
3:00 - Invited speaker: Axel Ngonga on Structured Machine Learning for Industrial Machinery
52:00 - Invited speaker: Chenyan Xiong on Representation Learning and Reasoning with Semi-structured Free-Text Knowledge Graph
1:35:35 - Break
1:50:50 - Student presentation: Kenneth Marino on Visual Question Answering Benchmark Requiring External Knowledge
2:06:33 - Invited speaker: Mathias Niepert on Towards Multimodal and Explainable Knowledge Graphs
2:52:50 - Break
3:04:12 - Student presentation: Nitisha Jain on Multimodal Knowledge Graphs for Semantic Analysis of Cultural Heritage Data
3:21:15 - Invited speaker: Mike Tung on The Diffbot Knowledge Graph
4:06:22 - Closing remarks

Abstract

Recently, there has been growing interest in combining knowledge bases with multiple modalities such as text, vision, and speech. These combinations have led to improvements in various downstream tasks, including question answering, image classification, object detection, and link prediction. The objective of the KBMM workshop is to bring together researchers interested in (a) combining knowledge bases with other modalities to improve performance on downstream tasks, (b) improving the completion and construction of knowledge bases from multiple modalities, and, in general, in sharing state-of-the-art approaches, best practices, and future directions.