N24News

A New Dataset for Multimodal News Classification

Conference Paper (2022)
Author(s)

Zhen Wang (Student TU Delft)

X. Shan (TU Delft - Water Resources)

Xiangxie Zhang (Student TU Delft)

J. Yang (TU Delft - Web Information Systems)

Research Group
Water Resources
Copyright
© 2022 Zhen Wang, X. Shan, Xiangxie Zhang, J. Yang
Publication Year
2022
Language
English
Pages (from-to)
6768-6775
ISBN (electronic)
9791095546726
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Current news datasets focus almost exclusively on the text of news articles and rarely leverage image features, omitting information that is essential for news classification. In this paper, we propose a new dataset, N24News, generated from The New York Times; it covers 24 categories and contains both text and image information for each news item. We apply a multitask multimodal method, and the experimental results show that multimodal news classification outperforms text-only news classification. Depending on the length of the text, classification accuracy can be increased by up to 8.11%. Our research reveals the relationship between the performance of a multimodal classifier and its sub-classifiers, as well as the possible improvements when applying multimodal methods to news classification. N24News shows great potential to promote multimodal news studies.