
MediaMill at TRECVID 2013: Searching Concepts, Objects, Instances and Events in Video [88]

Original Abstract

In this paper we summarize our TRECVID 2013 video retrieval experiments. The MediaMill team participated in four tasks: concept detection, object localization, instance search, and event recognition. For all tasks the starting point is our top-performing bag-of-words system of TRECVID 2008-2012, which uses color SIFT descriptors, average and difference coded into codebooks with spatial pyramids and kernel-based machine learning. New this year are concept detection with deep learning, concept detection without annotations, object localization using selective search, instance search by reranking, and event recognition based on concept vocabularies. Our experiments focus on establishing the video retrieval value of the innovations. The 2013 edition of the TRECVID benchmark has again been a fruitful participation for the MediaMill team, resulting in the best result for concept detection, concept detection without annotation, object localization, concept pair detection, and visual event recognition with few examples.
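The bag-of-words pipeline the abstract refers to (local descriptors quantized against a learned codebook and pooled into a histogram) can be sketched roughly as below. This is an illustrative sketch only: the hard nearest-codeword assignment, the function name `bow_encode`, and the random toy data are assumptions for clarity, not the authors' actual average/difference coding or their SIFT extraction.

```python
import numpy as np

def bow_encode(descriptors, codebook):
    """Hard-assign each local descriptor to its nearest codeword and
    return an L1-normalized histogram over the codebook (a bag of words)."""
    # pairwise squared distances between descriptors (n, d) and codewords (k, d)
    dist = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    assignments = dist.argmin(axis=1)          # nearest codeword per descriptor
    hist = np.bincount(assignments, minlength=len(codebook)).astype(float)
    return hist / max(hist.sum(), 1.0)         # L1 normalization

# toy example: 100 random 128-d "SIFT-like" descriptors, 16-word codebook
rng = np.random.default_rng(0)
desc = rng.standard_normal((100, 128))
book = rng.standard_normal((16, 128))
h = bow_encode(desc, book)                     # one histogram per image/frame
```

A spatial pyramid, as used in the paper, would simply concatenate such histograms computed over sub-regions of the frame before feeding them to a kernel-based classifier.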

Main points


Miquel Perello Nieto 2014-11-28