DVS Benchmark Datasets for Object Tracking, Action Recognition, and Object Recognition
Y. Hu, H. Liu, M. Pfeiffer, T. Delbruck
Inst. of Neuroinformatics, Univ. of Zurich and ETH Zurich.
Source document link: https://docs.google.com/document/d/1m4gAHPkPIzVvhHirtSjzUJrKiD5uU2I76hTqXhk8CxI/edit
YouTube video: https://youtu.be/TDhBZf_-yAg?si=AG5cFAO47oz4_D2K
VOT Challenge 2015 Dataset DVS Recordings
TrackingDataset DVS Recordings
Caltech-256 Dataset DVS Recordings
Bounding Box Usage in SpikeFuel
Technical and implementation details can be found at:
In this project, we targeted and converted 4 datasets:
You can access these DVS datasets in several ways:
This dataset is hosted as part of the INI sensors group databases.
If you do not use selective sync, then you will end up with the files below. (A .bts file extension means the file has not yet downloaded completely.) Your share will be read-only, so changes you make will not affect the source files.
The md5_info.txt and checksums.md5 text files list the MD5 checksums for the archives. You can use, for example, HashCheck on Windows to verify archive integrity.
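If you prefer not to install a GUI tool, the checksums can also be verified with a short Python script. This is a minimal sketch: it assumes the listing uses the common "hash  filename" line format, and the function names are our own, not part of any dataset tooling.

```python
import hashlib
import os


def md5_of_file(path, chunk_size=1 << 20):
    """Compute the MD5 hex digest of a file, reading it in chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()


def verify_md5_list(md5_file):
    """Check every '<hash>  <filename>' line of an .md5 listing.

    Returns a dict mapping filename -> True/False (False also when
    the file is missing). Files are resolved relative to the listing.
    """
    results = {}
    base = os.path.dirname(os.path.abspath(md5_file))
    with open(md5_file) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            expected, name = line.split(None, 1)
            path = os.path.join(base, name)
            results[name] = (os.path.exists(path)
                             and md5_of_file(path) == expected.lower())
    return results
```

Running `verify_md5_list("checksums.md5")` inside the synced folder would then report which archives downloaded intact.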
The download (and sharing) status can be monitored in the BitTorrent Sync control panel:
jAER is available at http://jaerproject.org.
These figures are generated by a Python package that is part of the SpikeFuel project. Similar results should be obtained if the data is loaded and displayed via jAER. A detailed description of the datasets can be found in the report.
(See https://youtu.be/TDhBZf_-yAg for a YouTube video.) The bounding boxes of the VOT Dataset are available separately as HDF5 datasets. You can try this script to produce the VOT bounding boxes and the figures above.
You can check out this script for producing the amplitude spectrum.
(See https://youtu.be/DFMvQ7r0UA8 for a YouTube video.) The bounding boxes of the TrackingDataset are available as an HDF5 dataset. You can try this script to produce the TrackingDataset bounding boxes and the figures above.
You can check out this script for producing the amplitude spectrum.
See https://youtu.be/WCVIFKLOuhI for a YouTube video.
You can check out this script for producing the amplitude spectrum and this script for producing the figures above.
See https://youtu.be/Ir3cSqOgkLE for a YouTube video.
You can check out this script for producing the amplitude spectrum and this script for producing the figures above.
The bounding boxes are generated by a simple yet effective method. First, we calculate the relative position of each point from the original bounding box. Unlike the absolute position, which uses pixel coordinates, the relative position expresses a point as the ratio of its distance from the left edge to the image width, and the ratio of its distance from the top edge to the image height. For a concrete example, see the following figure:
For the top-left point, the horizontal ratio is h1:(h1+h2) and the vertical ratio is v1:(v1+v2). No matter how the image is transformed geometrically, the absolute position can be recovered from these two values. Note that sequences whose aspect ratio is not 4:3 are padded at the borders, and this padding is handled separately, but the essence of the method is the same.
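The ratio computation above can be sketched in a few lines of Python. This is an illustration of the idea, not the SpikeFuel implementation; the function names are hypothetical, and border padding is ignored.

```python
def to_relative(x, y, width, height):
    """Express a pixel coordinate as ratios of image width and height.

    For the top-left corner, x/width corresponds to h1:(h1+h2) and
    y/height to v1:(v1+v2) in the figure above.
    """
    return x / width, y / height


def to_absolute(rx, ry, width, height):
    """Map relative ratios back to pixel coordinates in a (possibly
    rescaled) image of the given size."""
    return rx * width, ry * height


# Example: a point at (160, 90) in a 640x360 frame keeps its relative
# position (0.25, 0.25) when the frame is rescaled to 240x180.
rx, ry = to_relative(160, 90, 640, 360)   # -> (0.25, 0.25)
x2, y2 = to_absolute(rx, ry, 240, 180)    # -> (60.0, 45.0)
```

Because only the ratios are stored, the same box description survives any uniform resizing of the frame.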
For each AEDAT recording, the bounding boxes are written to a text file that starts with a header followed by a sequence of bounding-box lines. Each line is a bounding box with 9 values: the first is the timestamp in microseconds, and the remaining 8 values form four [X, Y] pairs describing the polygon of the bounding box. These coordinates are flipped relative to the jAER XY coordinates because Python uses a different origin and convention for images. These bounding boxes are released with the dataset in groundtruth-for-vot-and-trackingdataset.zip.
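Given the line format described above, a parser might look like the following. This is a sketch under stated assumptions: the header is taken to be a single line, whitespace-separated fields are assumed, and any flipping back to jAER coordinates is left to the caller.

```python
def parse_bbox_file(lines):
    """Parse bounding-box lines of the form:

        timestamp_us x1 y1 x2 y2 x3 y3 x4 y4

    The first (header) line is skipped. Returns a list of
    (timestamp_us, [(x1, y1), ..., (x4, y4)]) tuples.
    """
    boxes = []
    for line in lines[1:]:           # skip the single header line (assumed)
        parts = line.split()
        if len(parts) != 9:          # ignore malformed or empty lines
            continue
        ts = int(float(parts[0]))    # timestamp in microseconds
        vals = [float(v) for v in parts[1:]]
        # Pair up the 8 remaining values into four polygon corners.
        corners = list(zip(vals[0::2], vals[1::2]))
        boxes.append((ts, corners))
    return boxes
```

A call such as `parse_bbox_file(open("groundtruth.txt").read().splitlines())` (hypothetical file name) would yield one timestamped polygon per line.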
You can easily annotate bounding boxes in jAER using the event filter class YuhuangBoundingboxGenerator.
(Before you proceed, make sure you have the latest jAER update; compiling from the IDE is recommended.)
Drawing bounding boxes in SpikeFuel is quite easy: the public API gui.draw_poly_box_sequence draws bounding boxes given a list of sequence frames and the corresponding bounding boxes. A concrete example may be found here.
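To give an idea of what overlaying one polygonal box on one frame involves, here is a dependency-free sketch. It is not the SpikeFuel gui.draw_poly_box_sequence API; the function name and sampling approach are our own illustration.

```python
def draw_poly_outline(frame, corners, value=1):
    """Overlay the outline of a polygon on a 2D frame (list of lists)
    by sampling points along each edge.

    `corners` is a list of (x, y) pairs, e.g. the four box corners
    from the ground-truth files. Out-of-frame points are clipped.
    """
    h, w = len(frame), len(frame[0])
    n = len(corners)
    for i in range(n):
        (x0, y0) = corners[i]
        (x1, y1) = corners[(i + 1) % n]      # wrap around to close the box
        steps = max(abs(x1 - x0), abs(y1 - y0), 1)
        for s in range(int(steps) + 1):
            t = s / steps
            x = int(round(x0 + t * (x1 - x0)))
            y = int(round(y0 + t * (y1 - y0)))
            if 0 <= x < w and 0 <= y < h:    # clip to the frame
                frame[y][x] = value
    return frame
```

Applying this per frame over a sequence, with the per-frame corner lists from the ground truth, reproduces the overlay effect in spirit.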
Camera calibration can be performed using the SingleCameraCalibration filter. The calibration file is available in calibration.zip. You can add and use this filter by following these instructions:
Note that you should always load the bounding boxes after loading the calibration, because the bounding boxes are transformed using the calibration when they are loaded.
These datasets are hosted as part of the INI sensors group databases.
Questions about these datasets should be directed to yuhuang.hu@ini.uzh.ch and tobi@ini.uzh.ch
This page is maintained as the google doc https://docs.google.com/document/d/1m4gAHPkPIzVvhHirtSjzUJrKiD5uU2I76hTqXhk8CxI/edit?usp=sharing