tvaLib
The mainpage documentation

TVALIB REV 590

Licence

Use of this software is governed by the terms and agreements set out in the accompanying LICENSE.TXT file. This software is provided free of charge for non-commercial use. Please use proper attribution wherever appropriate.

Platform

Python 2.7
Developed and tested on:
  Spyder 2.2.5 through Spyder 3.0, Windows 7 through 10, 64-bit
  ipython, Ubuntu 12.04 through 15.10, 64-bit

Dependencies

Python 2.7            http://www.python.org/ (preferably 64-bit, see below)
scipy stack           http://www.scipy.org/stackspec.html
  numpy (included)
  scipy (included)
  matplotlib (included)
Traffic-Intelligence  http://bitbucket.org/Nicolas/trafficintelligence/
openCV                http://opencv.willowgarage.com/wiki/
PIL                   http://www.pythonware.com/products/pil/
SQLAlchemy            http://www.sqlalchemy.org/

Optional:
Mencoder (video concatenation tools, available in Linux)
munkres (MOTA optimisation)    https://pypi.python.org/pypi/munkres/
Urban Tracker Annotation Tool  http://www.jpjodoin.com/urbantracker/

Installation

Clone the repository to a designated location. That's it! Don't forget to install the dependencies.

Be sure to add .../traffic-intelligence/Python/ to your PYTHONPATH. When you first launch main.py, a tva.cfg configuration file will be created. The software will ask you to locate .../traffic-intelligence/ and your video data repository.
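If modifying PYTHONPATH globally is inconvenient, the Traffic-Intelligence modules can also be made importable at runtime from a wrapper script. A minimal sketch; the path below is a placeholder for your own checkout location, not a path tvaLib requires:

```python
import os
import sys

# Placeholder: point this at your own traffic-intelligence checkout.
TRAFINT_PATH = os.path.expanduser('~/src/traffic-intelligence/Python')

# Prepend so these modules take precedence over any same-named packages.
if TRAFINT_PATH not in sys.path:
    sys.path.insert(0, TRAFINT_PATH)
```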

Due to the sheer volume of data, it is recommended to use a 64-bit Python interpreter and libraries. 64-bit Windows Binaries can be found here: http://www.lfd.uci.edu/~gohlke/pythonlibs/

Publications

ST-AUBIN, Paul (2016) Roundabout Driver Behaviour And Road Safety Analysis Using Computer Vision, Doctoral Thesis, Polytechnique Montréal, Montréal, 200 pages

ST-AUBIN, Paul G., SAUNIER, Nicolas, MIRANDA-MORENO, Luis F., (2015) Large- Scale Automated Proactive Road Safety Analysis Using Video Data, Transportation Research Part C: Emerging Technologies, Special Issue: Big Data in Transportation and Traffic Engineering, vol. 58, pp. 363-379

JACKSON, Stewart; MIRANDA-MORENO, Luis Fernando; ST-AUBIN, Paul; SAUNIER, Nicolas (2013) A Flexible, Mobile Video Camera System and Open Source Video Analysis Software for Road Safety and Behavioural Analysis, Transportation Research Record, no. 2365, pp. 90-98

Instructions

This package uses trajectory data supplied by Traffic-Intelligence in the .sqlite format (legacy support: -objects.txt and -features.txt). An accompanying scene.sqlite is expected, which holds scene data (see example below). This program can also load cached object and results data previously saved to .traj and .pva files, respectively. Please note that cached data is not intended for long-term storage, as it is incompatible from one revision to the next. Keep all original .sqlite files!

The general folder structure of a video database should look like this (items marked by an * are generated automatically by the program):

Analysis*
|--1_Analysis*        <-- This is a site-analysis
|  |--result12.pva*
|  |--figure.png*
|  `--results.csv*
`--...
SiteA                 <-- This is a site
|--Cam1               <-- This is a camera view
|  |--12.avi          <-- This is a sequence (all three 12.* files)
|  |--12.sqlite*
|  |--12.traj*
|  |--13.avi
|  |--13.sqlite*
|  |--13.traj*
|  |--...
|  `--tracking.cfg
|--Cam2
|  `--...
`--ortho-cal.png
SiteB
`--Cam1
   `--...
...
scene.sqlite          <-- This is the master database with metadata
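The layout above can also be traversed programmatically. A minimal sketch that lists (site, camera, sequence) triples from such a tree; the function name is illustrative and not part of tvaLib, and it assumes only the naming conventions shown above:

```python
import os

def list_sequences(root):
    """Yield (site, camera, sequence-name) for every .sqlite file under
    root, following the Site/Camera/sequence.sqlite layout shown above."""
    for site in sorted(os.listdir(root)):
        site_dir = os.path.join(root, site)
        # Skip plain files (e.g. scene.sqlite) and the generated Analysis* tree.
        if not os.path.isdir(site_dir) or site.startswith('Analysis'):
            continue
        for cam in sorted(os.listdir(site_dir)):
            cam_dir = os.path.join(site_dir, cam)
            if not os.path.isdir(cam_dir):
                continue
            for fname in sorted(os.listdir(cam_dir)):
                if fname.endswith('.sqlite'):
                    yield site, cam, os.path.splitext(fname)[0]
```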

This package provides a library of tools used for scene description, filtering, traffic analysis, and visualisation. Alternatively, the library can be run as a self-contained script to automate the process of video management, tracking optimisation, feature tracking, scene annotation, data cleanup, traffic analysis, and data visualisation. To run the program, simply execute main.py with a series of commands. Use -h for help and full listing of parameters. Example usage:

main.py -e --homo 6

This will give the user a prompt to search for a camera (option -e) in the database and then proceed to search for aerial imagery and any relevant still frame attached to that camera. The homography calculation dialogue is then launched (--homo 6) using 6 reference points.

main.py -s "SiteA" -c "Cam1" --trafint

This will execute Traffic-Intelligence feature extraction (--trafint) on all video sequences associated with SiteA (-s) and Cam1 (-c), using any tracking configuration files associated either with individual sequences or with the camera view.

main.py -e -r

This example will guide the user to annotate (-r) a designated site and/or camera view (-e).

main.py -e -p

This will playback (-p) the tracked objects of a chosen sequence (-e).

main.py -b 1 -i 0-2,5 -w -a -t 8

This example runs a simple analysis on site-analysis #1 (-b) for all sequences simultaneously, using conflict prediction methods 0 through 2 and 5 (-i), saving figures (-w) and results (-a) to the relevant site-analysis output folder, caching filtered vehicle trajectories and results (-a), and using up to 8 CPU threads (-t).

scripts/batch-run.py --analysis 2 -i 0-2,5 -w -a -t 8

This example performs the same tasks as the example above; however, multiple site-analyses are performed sequentially, as defined by analysis #2 (--analysis).

scene.sqlite file

A scene.sqlite database is a complete listing of all video data and accompanying metadata. The tvaLib specification is currently an expanded implementation of the reference specification from Traffic-Intelligence, and remains backwards compatible with it.

scripts/create-metadata.py can be used to generate a new database. Run main.py with the commands --create-camtype through --create-analysis to populate a database with entries.

The following is an incomplete list of the parameters used throughout the database and program. Fields that are not compliant with the reference specification in Traffic-Intelligence are marked with an *.

[sites] is a list of sites.
  name: The reference name of the site. Should correspond to the folder name containing data for this site.
  description: A description of the site for convenience, or a link to an external database.
  xcoordinate: Unused.
  ycoordinate: Unused.
  expansion_factors*: A one-dimensional, bracketed, comma-delimited list of exactly 24 expansion factors: [0.01,0.024,...]
  satres*: Satellite resolution of any accompanying ortho.png, as a float in metres/pixel (or according to units of measure).
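A field such as expansion_factors is stored as plain text in the database. A minimal sketch of parsing it back into numbers; the function name is illustrative, not part of tvaLib:

```python
def parse_expansion_factors(text):
    """Parse a bracketed, comma-delimited hourly expansion-factor string,
    e.g. '[0.01,0.024,...]', into a list of 24 floats."""
    values = [float(v) for v in text.strip().strip('[]').split(',')]
    if len(values) != 24:
        raise ValueError('expected exactly 24 hourly expansion factors, '
                         'got %d' % len(values))
    return values
```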

[camera_views] is a list of cameras.
  siteId: A reference to the parent [site].
  name: The reference name of the camera. Should correspond to the folder name containing data for this camera.
  framerate: Frames per second resolution of the data.
  homographyFilename: Specific homography file associated with the chunk of data. Defaults to homography.txt if blank.
  cameraCalibrationFilename: Unused.
  homographyDistanceUnit: Not yet implemented. Use 'm'.
  configurationFile: Specific tracking file for video extraction.
  alignments*: List of segmented curves representing the centre paths of each lane. Each segmented curve is itself defined by a list of points representing the nodes. These points are given in [x,y]. The index order is: [alignment][node][coordinate]
  boundingboxes*: List of polygons bounding the analysis area. Each polygon is itself defined by a list of points representing the corners. These points are given in [x,y]. The index order is: [boundingbox][corner][coordinate]
  xy_bounds*: Lower and upper bounds of the x and y axes for maps. Defaults to the data extents. The index order is: [coordinate][max/min]
  cm_bounds*: Lower and upper bounds of the x and y axes for collision maps. Defaults to xy_bounds. The index order is: [coordinate][max/min]
  max_speed*: Maximum speed expected for the scene. Defaults to the global setting.
  hex_grid_x*: Number of bins in the x axis for maps.
  hex_grid_y*: Number of bins in the y axis for maps.
  virtual_loops*: List of points where loop detection is performed. The index order is: [loop][coordinate]
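The [alignment][node][coordinate] index order described above maps directly onto nested Python lists. A small illustration with made-up coordinates:

```python
# Two hypothetical lane centrelines, each a list of [x, y] nodes.
# Index order: [alignment][node][coordinate], as described above.
alignments = [
    [[0.0, 0.0], [10.0, 0.5], [20.0, 1.5]],   # lane 0
    [[0.0, 3.5], [10.0, 4.0], [20.0, 5.0]],   # lane 1
]

first_node_of_lane_1 = alignments[1][0]       # the [x, y] pair [0.0, 3.5]
y_of_last_node_lane_0 = alignments[0][-1][1]  # 1.5
```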

[video_sequences] is a list of video sequences (files).
  siteId: A reference to the parent [site].
  cameraViewId: A reference to the parent [camera_view].
  name: The filename of the video sequence.
  startTime: Encoded date and time of the start of the data. The standard format is strict ISO 8601 without the 'T', e.g.: 2012-01-30 15:00:00
  duration: The duration of the data.
  durationUnit: Not yet implemented. Use 's'.
  configurationFile: Specific tracking file for video extraction. This field overrides the same field in [camera_views].
  dataFilename*: Unique filename containing the chunk of data.
  translationX*: Defines a coordinate translation to be applied to the scene (all feature coordinates) along the X axis. The translation is applied once and before the scene rotation. Transformations are not permanent.
  translationY*: Defines a coordinate translation to be applied to the scene (all feature coordinates) along the Y axis. The translation is applied once and before the scene rotation. Transformations are not permanent.
  rotation*: Defines a coordinate rotation about the origin [0,0] to be applied to the scene (all feature coordinates) in two-dimensional space, in degrees. The rotation is applied once and only AFTER the scene translation. Transformations are not permanent.
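Because scene.sqlite is an ordinary SQLite file, it can also be inspected outside tvaLib with Python's built-in sqlite3 module. A minimal sketch, assuming the table and column names listed above; the primary-key column name used in the join is a guess, and the function name is illustrative:

```python
import sqlite3

def sequences_for_site(db_path, site_name):
    """Return (name, startTime, duration) rows for one site, assuming the
    [sites] and [video_sequences] tables described above."""
    conn = sqlite3.connect(db_path)
    try:
        return conn.execute(
            'SELECT vs.name, vs.startTime, vs.duration '
            'FROM video_sequences vs '
            'JOIN sites s ON vs.siteId = s.idx '  # 'idx' key name is a guess
            'WHERE s.name = ?', (site_name,)).fetchall()
    finally:
        conn.close()
```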

[camera_types]* is a list of video camera specifications.
  name: Name/model of the camera.
  resX: Width resolution of the image.
  resY: Height resolution of the image.
  camera_matrix: Calibrated camera matrix for openCV (can be generated with cameraCalibration()).
  dist_coeffs: Calibrated distortion coefficients for openCV (can be generated with cameraCalibration()).
  FOV: Field of view (currently unused).
  freeScalingParameter: Alpha parameter for undistorted image scaling in openCV.
  imageScalingFactor: Image resize after the undistortion operation.