EE Times-Asia > Embedded

VCA: Raising video security to a new level

Posted: 16 Jan 2008

Keywords: VCA, video analytics, DSPs, surveillance applications

There is a push to develop methods that will significantly increase the effectiveness of monitoring security and surveillance video. Video content analysis (VCA), also known as video analytics, electronically recognizes the significant features within a series of frames and allows systems to issue alerts when specific types of events occur, speeding real-time security response. VCA automatically searches captured video for specific content, relieving personnel from tedious hours of reviewing video, and decreasing the number of personnel needed to screen camera video.

VCA techniques are continually being developed to make widespread implementation feasible in the years ahead. One certainty is that VCA will require a great deal of processing to identify objects of interest in the vast stream of video pixel data. Also, VCA systems must be programmable to meet variations in application, recognize different content and adapt to evolving algorithms. Newly available video processors provide an exceptionally high level of performance and programming flexibility for compression, VCA and other digital video requirements. Software platforms and tools that complement the processors simplify development for security and surveillance products. As VCA techniques develop, they can be readily implemented into the enabling technology.

Generic VCA flow
There is no international standard for VCA, but this is the generic flow:

- A longer sequence is separated into individual scenes or shots to be analyzed. Since different scenes have different histograms or color frequency distributions, a frame in which the histogram changes radically from that of the previous frame can be treated as a scene change.

- Changing foreground objects within the scene are detected as separate from the static background. Individual foreground objects are extracted, or segmented, and then tracked from frame to frame. Tracking involves detecting the position and speed of the object.

- If recognition is necessary, the features of the object are extracted so the object can be classified.

- If the event is something of interest, an alert is issued to the management software and/or personnel.
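The scene-change step above can be sketched with a simple histogram comparison. This is a minimal illustration in Python/numpy, not the article's implementation; the bin count and the L1-distance threshold are tunable assumptions.

```python
import numpy as np

def gray_histogram(frame, bins=32):
    """Normalized intensity histogram of a grayscale frame (2-D uint8 array)."""
    hist, _ = np.histogram(frame, bins=bins, range=(0, 256))
    return hist / hist.sum()

def is_scene_change(prev_frame, frame, threshold=0.5):
    """Flag a scene change when the L1 distance between consecutive
    frame histograms exceeds a tunable threshold (assumed value)."""
    d = np.abs(gray_histogram(prev_frame) - gray_histogram(frame)).sum()
    return d > threshold
```

In a real system the comparison would typically run on color histograms and be smoothed over several frames to avoid false triggers from flashes or camera noise.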

Foreground/background detection
VCA is built on the ability to detect activity that changes in the foreground against a generally static and uninteresting background. In the past, foreground/background detection was computationally limited. Today, higher-performance DSPs and video processors make it possible to execute more complex detection algorithms.

In general, there are two methods of foreground/background detection: non-adaptive methods, which use only a few video frames and do not maintain a background model, and adaptive methods, which maintain a background model that evolves over time. In adaptive VCA algorithms, feedback from steps 2 through 4 of the VCA flow is sent to update and maintain the background model, which is then used as input for step 1.

Object tracking/recognition
After foreground/background detection, a binary mask is created. Because of environmental noise, all the parts of a single object may not be connected, so a computationally intensive process of morphological dilation is applied before the parts are connected into a whole object. Dilation involves imposing a grid on the mask, counting foreground pixels in each area of the grid and turning on the remaining pixels in each area where the count indicates that separated fragments belong to one object. After dilation and component connection, a bounding box is derived for each object: the smallest rectangle containing the entire object in a given frame. This completes segmentation.
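A minimal sketch of this step, assuming a simple 3x3 dilation rather than the grid-counting variant the article describes, and a single connected object per mask:

```python
import numpy as np

def dilate(mask):
    """One pass of 3x3 morphological dilation: a pixel becomes foreground
    if any pixel in its 3x3 neighborhood is foreground."""
    h, w = mask.shape
    padded = np.pad(mask, 1, mode="constant")
    out = np.zeros_like(mask, dtype=bool)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= padded[1 + dy : 1 + dy + h, 1 + dx : 1 + dx + w].astype(bool)
    return out

def bounding_box(mask):
    """Smallest rectangle (top, left, bottom, right) containing all
    foreground pixels."""
    ys, xs = np.nonzero(mask)
    return ys.min(), xs.min(), ys.max(), xs.max()
```

In production code the dilation and connected-component labeling would be done with optimized library routines; this version only shows the logic.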

Tracking segmented foreground objects involves three steps: predicting where each object should be located in the current frame, determining which detected object best matches each prediction, and correcting the object trajectories to predict the next frame. The first and third steps are accomplished by means of a recursive Kalman filter. Since only the object's position can be observed in a single frame, its speed and next position must be estimated on the fly using matrix computations.
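The predict/correct cycle can be sketched with a constant-velocity Kalman filter for one coordinate of an object's centroid. The noise covariances `Q` and `R` are assumed values for illustration; the article does not specify the filter's parameters.

```python
import numpy as np

# State x = [position, velocity]; only position is observed each frame.
F = np.array([[1.0, 1.0],    # position += velocity (dt = 1 frame)
              [0.0, 1.0]])
H = np.array([[1.0, 0.0]])   # measurement picks out position only
Q = np.eye(2) * 1e-3         # process noise (assumed)
R = np.array([[1.0]])        # measurement noise (assumed)

def kalman_step(x, P, z):
    """One predict/correct cycle: predict where the object should be,
    then correct the state with the measured position z."""
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Correct
    y = z - H @ x                    # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P
```

Fed a steadily moving object, the filter's velocity estimate converges even though speed is never measured directly, which is exactly the property tracking relies on.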

The complexities of tracking lead to problems associated with classifying objects. For instance, it is easier for the system to issue an alert if an object has crossed a line in front of the camera than if a human being has crossed the line. The dimensions of the object and its speed can provide a vector for rough classification, but more information is required for finer classification. A larger object provides more pixel information, though possibly too much for fast classification. In this case, dimensional reduction techniques are required for real-time response, even though later investigation may use the full pixel information available in the stored frames.
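Rough classification from dimensions and speed, as described above, can be as simple as a few rules. The classes and thresholds here are purely illustrative assumptions; real systems train classifiers on extracted features.

```python
def classify(width, height, speed):
    """Coarse object class from bounding-box size (pixels) and speed
    (pixels/frame). Thresholds are illustrative, not from the article."""
    aspect = height / max(width, 1)
    if aspect > 1.5 and speed < 10:
        return "person"    # tall and slow-moving
    if aspect < 1.0 and speed >= 10:
        return "vehicle"   # wide and fast-moving
    return "unknown"
```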

Effective VCA implementation must overcome a number of challenges aside from object classification. These include changes in light levels resulting from nightfall, water surfaces, clouds, wind in trees, rain, snow and fog; tracking the paths of objects that cross, causing the foreground pixels of each to merge briefly and then separate; and tracking objects from view to view in multiple-camera systems. Solving these problems is still a work in progress in VCA.

System design
Implementing VCA together with video encoding requires a high-performance processor that can serve varied deployments. The emergence of new analytic techniques demands programming flexibility, which can be addressed with processors that combine programmable DSP and RISC microprocessor cores with video hardware co-processors. The right processor also needs to integrate high-speed communication peripherals and video signal chains to reduce system component counts and costs.

Two DaVinci processors can handle high-end VCA, encoding 720 x 1,080 HD video sources at 30fps.

Using this type of solution to integrate VCA within a camera offers a robust, efficient form of network implementation. VCA software can also be integrated within PCs that serve as concentration units for multiple cameras. In addition to the VCA flow itself, there may be a need for preprocessing steps such as de-interlacing ahead of foreground/background detection and the other analytic steps. The application software may add processing steps for object recognition or other purposes. Both one- and two-processor design versions provide headroom for additional software functions.

Adaptive methods of separating foreground objects from the background, then tracking objects and, if necessary, classifying suspicious activities are all aspects of VCA that require a high level of real-time processing computation and adaptability. DSP-based video processors offer the performance needed for VCA and video encoding, along with programming flexibility that can adapt to changes in application requirements and techniques. The net effect is the raising of video security to a new level.

- Cheng Peng
DSP Video Applications Engineer
Texas Instruments Inc.
