A Bilingual, OpenWorld Video Text Dataset and End-to-end Video Text Spotter with Transformer

Part of Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks 1 (NeurIPS Datasets and Benchmarks 2021)


Authors

Weijia Wu, Debing Zhang, Yuanqiang Cai, Sibo Wang, Jiahong Li, Zhuang Li, Yejun Tang, Hong Zhou

Abstract

Most existing video text spotting benchmarks focus on evaluating a single language and scenario with limited data. In this work, we introduce a large-scale bilingual, open-world video text benchmark dataset (BOVText). BOVText has four features. Firstly, we provide 1,850+ videos with more than 1,600,000 frames, 25 times larger than the existing largest dataset with incidental text in videos. Secondly, our dataset covers 30+ open categories with a wide selection of scenarios, e.g., Life Vlog, Driving, and Movie. Thirdly, abundant text-type annotations (i.e., title, caption, or scene text) are provided for the different representational meanings in the video. Fourthly, BOVText provides bilingual text annotations to promote communication across multiple cultures. Besides, we propose an end-to-end video text spotting framework with Transformer, termed TransVTSpotter, which solves multi-oriented text spotting in video with a simple but efficient attention-based query-key mechanism. It applies object features from the previous frame as a tracking query for the current frame and introduces a rotation angle prediction to fit multi-oriented text instances. On ICDAR2015 (video), TransVTSpotter achieves state-of-the-art performance with 44.2% MOTA at 13 fps. The dataset and the code of TransVTSpotter can be found at https://github.com/weijiawu/BOVText-Benchmark and https://github.com/weijiawu/TransVTSpotter, respectively.
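
To make the query-key tracking idea concrete, the sketch below illustrates the mechanism the abstract describes: decoded object features from frame t-1 re-enter the decoder as tracking queries for frame t, and a regression head predicts a rotation angle alongside the axis-aligned box. This is a minimal illustration under stated assumptions, not the authors' implementation; all module names, dimensions, the angle parameterization, and the exact head layout (QueryKeyTracker, RotatedBoxHead, the 256-d feature size) are hypothetical choices for this sketch. See the TransVTSpotter repository for the actual code.

import math

import torch
import torch.nn as nn


class RotatedBoxHead(nn.Module):
    """Predicts a normalized box (cx, cy, w, h) plus a rotation angle per query."""

    def __init__(self, dim: int):
        super().__init__()
        self.box = nn.Linear(dim, 4)    # axis-aligned box parameters
        self.angle = nn.Linear(dim, 1)  # rotation angle prediction for multi-oriented text

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        boxes = self.box(x).sigmoid()                      # cx, cy, w, h in [0, 1]
        theta = torch.tanh(self.angle(x)) * math.pi / 2    # angle in (-pi/2, pi/2); an assumed range
        return torch.cat([boxes, theta], dim=-1)


class QueryKeyTracker(nn.Module):
    """Decodes learned detect queries together with tracking queries from the previous frame."""

    def __init__(self, dim: int = 256, num_detect_queries: int = 100):
        super().__init__()
        self.detect_queries = nn.Parameter(torch.randn(num_detect_queries, dim))
        layer = nn.TransformerDecoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=6)
        self.head = RotatedBoxHead(dim)

    def forward(self, frame_feats: torch.Tensor, track_queries: torch.Tensor = None):
        # frame_feats: (B, HW, dim) flattened encoder features of the current frame.
        b = frame_feats.size(0)
        queries = self.detect_queries.unsqueeze(0).expand(b, -1, -1)
        if track_queries is not None:
            # Object features decoded in the previous frame re-enter as extra queries,
            # so cross-frame association falls out of the attention itself. In practice
            # one would keep only confident detections; here we pass everything through.
            queries = torch.cat([track_queries, queries], dim=1)
        decoded = self.decoder(queries, frame_feats)
        return self.head(decoded), decoded  # decoded features become the next frame's tracking queries


if __name__ == "__main__":
    model = QueryKeyTracker()
    feats_t0 = torch.randn(1, 400, 256)  # stand-in for frame-0 encoder features
    feats_t1 = torch.randn(1, 400, 256)  # stand-in for frame-1 encoder features
    out0, track = model(feats_t0)                   # frame 0: detection only
    out1, _ = model(feats_t1, track_queries=track)  # frame 1: detection + tracking
    print(out0.shape, out1.shape)  # (1, 100, 5) and (1, 200, 5)

The design point the sketch tries to surface is that no separate matching network is needed: because tracking queries attend to the current frame's features just like detect queries do, identity is carried implicitly by query position across frames.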