Article

Spanish

ID: oai:doaj.org/article:56040b0b01064e78805545e7bc984b13

Color and motion-based particle filter target tracking in a network of overlapping cameras with multi-threading and GPGPU (Spanish: Rastreo de objetivos por medio de filtros de partículas basados en color y movimiento en una red de cámaras con multi-hilo y GPGPU)

Abstract

This paper describes an efficient implementation of multiple-target, multiple-view tracking in video surveillance sequences. It takes advantage of the capabilities of multi-core Central Processing Units (CPUs) and of Graphics Processing Units (GPUs) under the Compute Unified Device Architecture (CUDA) framework. The principle of our algorithm is (1) in each video sequence, to perform visual tracking of all persons of interest with independent particle filters, and (2) to fuse the tracking results of all sequences.
Particle filters belong to the category of recursive Bayesian filters. They update a Monte Carlo representation of the posterior distribution over the target position and velocity. For this purpose, they combine a probabilistic motion model, i.e. prior knowledge about how targets move (e.g. constant velocity), with a likelihood model associated with the observations of the targets. At this first level, processing single video sequences, the multi-threading library Threading Building Blocks (TBB) is used to parallelise the processing of the independent per-target particle filters. Then, at the higher level, we rely on General-Purpose computing on Graphics Processing Units (GPGPU) through CUDA in order to fuse the tracking data collected from the multiple video sequences by solving the data association problem. Tracking results are presented on several challenging tracking datasets.
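The per-target tracking loop described in the abstract (predict with a constant-velocity motion model, weight by an observation likelihood, resample) can be sketched as follows. This is a minimal, generic illustration rather than the paper's implementation: it uses a single (x, y) measurement in place of the colour and motion cues, and the function name and noise parameters are illustrative assumptions.

```python
import random
import math

def particle_filter_step(particles, weights, observation, dt=1.0,
                         process_noise=0.5, obs_noise=2.0):
    """One predict/update/resample cycle of a particle filter.

    Each particle is (x, y, vx, vy): position and velocity of one target.
    `observation` is a measured (x, y) position, e.g. the centroid of a
    detected blob (a simplified stand-in for colour/motion likelihoods).
    """
    n = len(particles)

    # Predict: constant-velocity motion model plus Gaussian noise.
    predicted = []
    for (x, y, vx, vy) in particles:
        predicted.append((x + vx * dt + random.gauss(0, process_noise),
                          y + vy * dt + random.gauss(0, process_noise),
                          vx + random.gauss(0, process_noise),
                          vy + random.gauss(0, process_noise)))

    # Update: reweight each particle by the observation likelihood,
    # here a Gaussian in the distance to the measurement.
    ox, oy = observation
    new_weights = []
    for (x, y, _, _), w in zip(predicted, weights):
        d2 = (x - ox) ** 2 + (y - oy) ** 2
        new_weights.append(w * math.exp(-d2 / (2 * obs_noise ** 2)))
    total = sum(new_weights) or 1.0
    new_weights = [w / total for w in new_weights]

    # Resample: draw n particles in proportion to their weights.
    resampled = random.choices(predicted, weights=new_weights, k=n)
    return resampled, [1.0 / n] * n
```

Running one such filter per target is what makes the per-target parallelisation natural: each call touches only its own particle set, so the filters can be dispatched to independent threads (TBB in the paper).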
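The higher-level fusion stage solves a data association problem between tracks observed by different overlapping cameras. The paper performs this on the GPU through CUDA; purely as an illustration of the association step itself, here is a minimal CPU sketch of greedy nearest-neighbour matching in a shared ground-plane frame. The function name, the distance gate, and the assumption of a common coordinate frame are all hypothetical.

```python
import math

def associate_tracks(tracks_a, tracks_b, gate=3.0):
    """Greedily associate two cameras' track estimates.

    tracks_a, tracks_b: dicts mapping track id -> (x, y) ground-plane
    position. Returns (id_a, id_b) pairs whose distance is below `gate`,
    closest pairs matched first, each id used at most once.
    """
    # Enumerate candidate pairs that pass the distance gate.
    candidates = []
    for ia, (xa, ya) in tracks_a.items():
        for ib, (xb, yb) in tracks_b.items():
            d = math.hypot(xa - xb, ya - yb)
            if d < gate:
                candidates.append((d, ia, ib))
    candidates.sort(key=lambda t: t[0])

    # Greedy one-to-one assignment, closest first.
    matches, used_a, used_b = [], set(), set()
    for _, ia, ib in candidates:
        if ia not in used_a and ib not in used_b:
            matches.append((ia, ib))
            used_a.add(ia)
            used_b.add(ib)
    return matches
```

Greedy gating is only the simplest choice here; the all-pairs distance computation it starts from is exactly the kind of embarrassingly parallel workload that maps well onto a GPU kernel.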
