The dataset introduced in the paper is available.
A dataset, introduced in an arXiv paper, is available.
A dataset with motion boundary annotations, introduced at CVPR'15, is available.
A video summarization dataset, introduced at ECCV'14, is available.
The dataset for evaluating human pose estimation in video sequences, introduced in our CVPR'14 paper, is available.
The EVent VidEo dataset used in the CVPR 2013 paper is available.
This dataset is composed of videos collected from YouTube by querying for the names of 10 object classes. It contains between 9 and 24 videos per class and is available for download.
The dataset used in the ICCV 2011 paper is available.
The datasets used in the CVPR 2011 paper are available.
This extends an earlier data set. It consists of news documents composed of images and captions; we used it for face naming and for learning face recognition systems with weak supervision in our earlier work and in a submitted IJCV paper. It is fully annotated for the association of faces in the image with names in the caption.
A labeled data set collected using an image search engine. It contains 71,478 images and text metadata in XML format retrieved by 353 text queries, with a relevance label for each image. This data set was used in our CVPR'10 paper.
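Per-image relevance labels stored in per-query XML files can be filtered with a few lines of standard-library Python. The tag and attribute names below (`query`, `image`, `file`, `relevant`) are assumptions for illustration; the data set's actual schema may differ.

```python
import xml.etree.ElementTree as ET

# Hypothetical per-query metadata layout; the real schema of the
# data set may use different tag and attribute names.
sample = """
<query text="eiffel tower">
  <image file="0001.jpg" relevant="1"/>
  <image file="0002.jpg" relevant="0"/>
  <image file="0003.jpg" relevant="1"/>
</query>
"""

root = ET.fromstring(sample)
# Keep only the images labeled as relevant to the query.
relevant = [img.get("file") for img in root.findall("image")
            if img.get("relevant") == "1"]
print(relevant)  # ['0001.jpg', '0003.jpg']
```

For the full data set, the same loop would run over one parsed file per query (e.g. via `ET.parse`) rather than an inline string.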
A data set covering 20 object categories and 20 combinations of object categories. It was used in work published at BMVC 2009.
Data for the COREL 5K, IAPR TC-12, ESP GAME, PASCAL VOC 2007, and MIR Flickr data sets, as used in our work on image auto-annotation and keyword-based retrieval and on multimodal semi-supervised learning.
A data set collected by Hervé Jégou et al. to test image search methods.
An extended version of our Hollywood Human Actions dataset, featuring more action classes and samples. The dataset was used in work published at CVPR'09.
A video dataset focusing on realistic human actions. Short video samples were retrieved from various popular movies and annotated both manually and automatically. The dataset was used in our CVPR'08 paper (oral). The covered set of human actions includes answering a phone, getting out of a car, handshaking, hugging, kissing, sitting down, sitting up, and standing up.
A follow-up to the popular natural-scene object category dataset prepared at Graz University of Technology. The original dataset images were re-annotated by a team of human annotators, and the annotations were then used in a CVPR 2007 (oral) study. All car, bike, and person images, annotations, and image lists are made available.
Two data sets collected for the automatic learning of color names, as proposed in work published at CVPR 2007.
Data sets of images from seven soccer teams. They have been used to evaluate various color descriptors in work published at ECCV 2006.
(Navigate to INRIA horses.) A set of horse and non-horse images.
A large set of marked-up images of standing or walking people, used to train our people detector.
A set of car and non-car images taken in a parking lot near INRIA, used in a submitted IJCV journal paper.
A data set collected by Krystian Mikolajczyk for testing scale- and affine-invariant interest point detectors with various types of local image descriptors.
from: http://lear.inrialpes.fr/data/
Reposted from: http://tghef.baihongyu.com/