The goal of the SMELLER project is to classify road vehicles into pollution-level classes. This objective is achieved by analysing data about each vehicle, including airborne-pollution measurements, gathered at a set of dedicated gates distributed across the city, each instrumented with a variety of sensor devices.
The project focuses on the research and experimental evaluation of the components of such a gate.
The contributions of the IRALAB group were:
- participation in the data-gathering campaigns, as far as the video-related activities were concerned;
- the development of the software used to perform the data gathering;
- the development of software for processing the video sequences in order to determine the 3D dynamics of each vehicle; this processing is based on tracking the blobs of each vehicle in the image plane (the latter task is performed by the IVL group).
We performed two campaigns: a preliminary one in May 2012, and a more structured second one in December 2012. The second campaign comprised three sessions, i.e., days, of data gathering (December 3rd to December 5th, 2012, inclusive); we recorded a set of videos of vehicles passing through an instrumented gate, which of course also included cameras for determining vehicle pose and speed. These videos will be used to obtain an accurate estimate of each car's pose and dynamics, an important piece of information for properly interpreting the vehicle emission data.
Description of the data gathering system and setup
The setup (click here for details) of the data gathering sessions consisted of:
- Two cameras, a Prosilica GC750 (Mono) and a Prosilica GC1020, both digital GigE cameras, used to record the videos.
- A radar, provided by another research group in the Physics department, used to track the position of the passing vehicle independently of the cameras; we hope to be able to use the radar data as ground truth for the outcome of our video processing.
- A photocell and its reflector, used to trigger the radar acquisition.
- Several devices measuring different parameters of the vehicle emissions (and/or using different technologies).
For each data gathering session, the setup is illustrated in the following picture (sketching the positions of the two cameras, their fields of view, the position of the radar, the line segment representing the gate, the positions of the emission-related instruments, etc.).
Since the data gathering sessions did not all use exactly the same setup, a picture from one of the cameras, such as the one below, is provided for each session, in order to show the position of the gate line, i.e., the photocell-reflector line, w.r.t. our world origin, which is the origin of the planar target used to calibrate the camera projection model.
missing picture (Google SketchUp or another minimum-effort tool, in 3D, showing: the road, the gate line, the "baseline" line used to find the position of the photocell with respect to the camera, the panel serving as world origin, the segments that are measured (each with a name, so that the procedure can be explained later), a generic pole carrying the 2 cameras, the FOVs of the 2 cameras with a partial overlap (but is it visible in the videos?), etc. etc.).
As can be seen, the positions on the road plane of the photocell and of the reflector, expressed w.r.t. the camera calibration target, are annotated in the picture.
Calibration of the camera projection model
In order to determine the pose of the vehicle in the road scene, one needs to calibrate both the intrinsic and the extrinsic parameters of the camera projection model. For an introduction to the topic of camera projection model calibration, you can refer to [computervisiontextbook, paperzhang, Bouguet's documentation (his PhD thesis?)]. For your convenience, we have already calibrated the camera projection model, using the state-of-the-art MATLAB calib_toolbox by Jean-Yves Bouguet.
this part on the intrinsics needs to be expanded in order to:
- explain what was done (very briefly, with a reference to the toolbox);
- link the images used for the calibration, and link the whole video those images were taken from;
- make it credible that the intrinsics remained constant (if that turns out to be true), by stating that between one data gathering session and the next the cameras were stored away without modifying the lens adjustment.
The intrinsic parameters of the cameras are
|Focal Length:||fc = [ 209.34220 209.81627 ] ± [ 0.90886 0.80910 ]|
|Principal point:||cc = [ 362.92449 249.84216 ] ± [ 0.51694 0.92481 ]|
|Skew:||alpha_c = [ -0.00053 ] ± [ 0.00108 ] => angle of pixel axes = 90.03058 ± 0.06206 degrees|
|Distortion:||kc = [ 0.01330 -0.01056 0.00158 0.00057 0.00153 ] ± [ 0.00168 0.00108 0.00043 0.00049 0.00021 ]|
|Pixel error:||err = [ 0.18583 0.16336 ]|
for Prosilica GC750, and
|Focal Length:||fc = [ 260.06910 260.12549 ] ± [ 4.74396 4.70660 ]|
|Principal point:||cc = [ 283.75583 321.61016 ] ± [ 3.99569 3.98443 ]|
|Skew:||alpha_c = [ 0.00000 ] ± [ 0.00000 ] => angle of pixel axes = 90.00000 ± 0.00000 degrees|
|Distortion:||kc = [ -0.00000 0.00000 0.00249 -0.00172 -0.00004 ] ± [ 0.00000 0.00000 0.00445 0.00447 0.00307 ]|
|Pixel error:||err = [ 0.94874 1.11201 ]|
for the Prosilica GC1020. These results are also contained in the calibration folder, as a MATLAB script file or as a .mat data file (to be used with calib_toolbox). That folder also contains the set of pictures used to calibrate the cameras (also available here), taken from <video to be checked>. We performed a separate calibration for each day; however, the differences in the intrinsic parameters between the days are so small that, for convenience, a single set of intrinsics has been provided.
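As a concrete illustration of how these parameters are used, the following minimal sketch (plain Python) projects a 3D point, expressed in the camera frame, onto the image plane with the GC750 intrinsics, using the plumb-bob distortion model employed by calib_toolbox. The example point coordinates are made up for illustration.

```python
# Project a 3D point (camera frame, metres) to pixel coordinates with the
# plumb-bob model used by Bouguet's calib_toolbox: normalize, distort,
# then apply the camera matrix.

# Prosilica GC750 intrinsics from the calibration above
fc = (209.34220, 209.81627)   # focal lengths [px]
cc = (362.92449, 249.84216)   # principal point [px]
alpha_c = -0.00053            # skew coefficient
kc = (0.01330, -0.01056, 0.00158, 0.00057, 0.00153)  # k1 k2 p1 p2 k3

def project(X, Y, Z):
    # Normalized pinhole coordinates
    x, y = X / Z, Y / Z
    r2 = x * x + y * y
    # Radial distortion factor (k1, k2, k3 terms)
    radial = 1 + kc[0] * r2 + kc[1] * r2 ** 2 + kc[4] * r2 ** 3
    # Tangential distortion (p1, p2 terms)
    dx = 2 * kc[2] * x * y + kc[3] * (r2 + 2 * x * x)
    dy = kc[2] * (r2 + 2 * y * y) + 2 * kc[3] * x * y
    xd, yd = x * radial + dx, y * radial + dy
    # Apply the camera matrix (including skew)
    u = fc[0] * (xd + alpha_c * yd) + cc[0]
    v = fc[1] * yd + cc[1]
    return u, v

# Example: a hypothetical point 1 m right of, 0.5 m below, and 10 m in
# front of the camera
u, v = project(1.0, 0.5, 10.0)
```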
The extrinsic parameters of the cameras, namely the pose, relative to the camera, of the calibration panel visible in the setup picture, have been computed too. They can be found as text files in the same folder as the videos.
Structure of the archive containing each data gathering session, in terms of files and folders
The dataset contains four folders: one for each day of data gathering and one for the calibration of the intrinsic parameters (see the previous paragraph). Each day of recording has been divided into multiple parts for practical reasons; for each part, a text file containing the timestamps of the frames is provided. An example file name is 3dic_1020_part1.avi, where 3dic is the date, 1020 means that it was recorded with the Prosilica GC1020 camera, and part1 that it is the first sequence of the day. The corresponding timestamp file is named 3dic_1020_part1_stamp.txt. An example timestamp file is:
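Based on the explanation that follows, a relative-timestamp file contains one timestamp per line, in seconds from the first frame; it presumably looks like this (the exact number formatting is an assumption):

```
0.000000
0.220067
0.440549
...
```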
This means that the first frame of the video is at time 0, the second at 0.220067, the third at 0.440549, and so on. A file containing absolute timestamps is provided too. Its syntax is exactly the same as for the relative timestamps, except that time is expressed in terms of Local Solar Time. An example is:
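Based on the explanation that follows, an absolute-timestamp file presumably looks like this (the exact date/time layout is an assumption):

```
2012-12-03 13:07:49.000000
2012-12-03 13:07:49.220067
...
```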
This means that the first frame was recorded on 3 December 2012 at 13:07:49, the second at 13:07 and 49.220067 seconds, and so on. The procedure used to obtain these timestamps is described in the section below.
Each folder also contains files with the extrinsic parameters of the cameras. When the extrinsics are the same for the whole day (i.e., the camera was not moved), the folder contains a single file named <date>_<camera>_ext.txt. Where multiple parameter sets exist, the file name of the video specifies which set has to be used (for example, 4dic_1020_part1_set1.avi means that the file 4dic_1020_ext_set1.txt contains the corresponding extrinsic parameters).
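The naming convention above can be captured in a small helper; the following sketch (plain Python, hypothetical function name) maps a video file name to the extrinsics file that applies to it:

```python
import re

def extrinsics_file_for(video_name):
    """Return the extrinsics file name for a video file name, following the
    naming convention described above (hypothetical helper, for illustration)."""
    m = re.match(
        r"(?P<date>\w+?)_(?P<camera>\d+)_part\d+(?:_(?P<set>set\d+))?\.avi$",
        video_name,
    )
    if m is None:
        raise ValueError("unrecognized video file name: " + video_name)
    if m.group("set"):
        # Multiple parameter sets exist for that day: the video name says which
        return "{}_{}_ext_{}.txt".format(m.group("date"), m.group("camera"), m.group("set"))
    # Single parameter set for the whole day
    return "{}_{}_ext.txt".format(m.group("date"), m.group("camera"))
```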
At this link a spreadsheet is available, containing, for each transit and for each camera, the related video file and the range of frames. A brief description of the setup, and of the problems faced during each day of data gathering, is available at the links below:
Synchronization of the timestamps
The clocks of the devices used during the experiment had not been synchronized before the recording sessions. For the videos, we only had relative timestamps, not expressed in terms of local solar time. A log book containing the local time of each car passage (and specifying the car model too) was available. For each video, we identified the car models and manually located the frames corresponding to each car passing in front of the sensors. At the end of this procedure we had a list of frames, with their camera timestamps, and a list of local times, each corresponding to the transit of a specific car. The problem is that the two lists are not necessarily the same length: sometimes passages were missing, and sometimes there were more passages than recorded in the logbook.
For both lists we computed the time delta between subsequent passages. Given the difference in length, it was not possible to directly compare one list with the other: we needed to choose a sublist of the longer one. For this reason we generated every possible sublist and compared each with the other list; we then chose the one that minimizes the mean difference between corresponding time deltas (i.e., we minimized the mean error between the time deltas of the two lists). Every match between the two lists gave us a candidate local-time offset to add to the relative timestamps of the camera. In an ideal world these offsets would all be identical, but, due to various kinds of error, in practice they are not. We therefore chose the most frequent offset to correct the whole video sequence.
This algorithm has been applied to every video, thus obtaining timestamps synchronized with those of the other devices used during the experiment.
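As a rough illustration, the matching procedure described above can be sketched as follows (plain Python; all names are ours, and details such as the rounding used to group candidate offsets are assumptions, since the actual implementation is not part of the dataset):

```python
from itertools import combinations
from collections import Counter

def deltas(times):
    """Time differences between consecutive passages."""
    return [b - a for a, b in zip(times, times[1:])]

def best_offset(camera_times, logbook_times):
    """Estimate the offset mapping relative camera timestamps to local solar
    time: match the two passage lists via their inter-passage deltas, then
    pick the most frequent implied offset."""
    short, long_ = sorted([camera_times, logbook_times], key=len)
    target = deltas(short)
    best, best_err = None, float("inf")
    # Exhaustively try every sublist of the longer list with the same length
    # as the shorter one (exponential in general, fine for short passage lists)
    for sub in combinations(long_, len(short)):
        err = sum(abs(a - b) for a, b in zip(deltas(sub), target)) / max(len(target), 1)
        if err < best_err:
            best, best_err = sub, err
    # Each matched pair implies a candidate offset (logbook time - camera time)
    if camera_times is short:
        offsets = [l - c for c, l in zip(short, best)]
    else:
        offsets = [l - c for c, l in zip(best, short)]
    # Group near-identical offsets (0.1 s bins, an assumption) and take the mode
    return Counter(round(o, 1) for o in offsets).most_common(1)[0][0]
```

For example, with camera passages at [0, 10, 25, 40] seconds and logbook passages at [100, 110, 118, 125, 140] (one extra passage not seen by the camera), the recovered offset is 100 seconds.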
The dataset is freely available here as a single large archive. You can also download the data for each day separately, from the links below:
|calibration.tar.gz||Contains the intrinsic parameters of the cameras and two sets of pictures (one for each camera) used during the calibration.|
|3dic.tar.gz||Contains videos, timestamps, setup pictures and extrinsic parameters obtained on 3 December|
|4dic.tar.gz||Contains videos, timestamps, setup pictures and extrinsic parameters obtained on 4 December|
|5dic.tar.gz||Contains videos, timestamps, setup pictures and extrinsic parameters obtained on 5 December|
When using this dataset in your research, please cite us:
@misc{smeller_datasets,
    author = "Simone Fontana, Andrea Galbiati, Andrea Romanoni, Domenico G. Sorrenti",
    title = "SMELLER Datasets",
    month = "December",
    year = "2012",
    url = "http://www.ira.disco.unimib.it"
}