Vision-based AI boosts surveillance applications

Posted: 17 Jun 2014

Keywords: automated surveillance, artificial intelligence, embedded vision, DSP, SoCs

To equip systems with scene understanding sufficient to identify vehicles as well as traffic lanes and direction, additional system competencies handle feature extraction, object detection, object classification (car, truck, pedestrian, etc.), and long-term tracking. LPR (licence plate recognition) algorithms and other techniques locate licence plates on vehicles and discern the individual characters on each plate. Some systems also collect metadata about vehicles, such as colour, speed, direction, and size, which can be streamed or archived to enhance subsequent forensic searches.
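
As a rough illustration of such a pipeline, the Python/OpenCV sketch below chains background subtraction, per-vehicle metadata extraction, and plate localisation. The video path is a placeholder, and the pretrained cascade shipped with OpenCV is purely illustrative; a production system would use purpose-trained detectors plus real tracking and OCR stages.

# Minimal sketch of a vehicle/plate analytics pipeline (assumes OpenCV 4.x).
import cv2

# Pretrained cascade bundled with OpenCV, used here only for illustration;
# real deployments train region-specific plate detectors.
plate_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_russian_plate_number.xml")

# Background subtraction separates moving vehicles from the static scene.
bg = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=False)

cap = cv2.VideoCapture("traffic.mp4")  # placeholder video source
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Object detection stage: foreground blobs approximate moving vehicles.
    mask = bg.apply(frame)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) < 2000:      # discard small noise blobs
            continue
        x, y, w, h = cv2.boundingRect(c)
        vehicle = frame[y:y + h, x:x + w]

        # Metadata extraction: size and mean colour of the candidate vehicle.
        meta = {"size_px": (w, h), "mean_bgr": cv2.mean(vehicle)[:3]}

        # LPR stage: locate plate candidates within the vehicle region;
        # character segmentation/recognition (an OCR pass) would follow.
        gray = cv2.cvtColor(vehicle, cv2.COLOR_BGR2GRAY)
        plates = plate_cascade.detectMultiScale(gray, scaleFactor=1.1,
                                                minNeighbors=4)
        print(meta, "plate candidates:", len(plates))
cap.release()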

Algorithm implementation options
Traditionally, analytics systems were based on PC servers, with surveillance algorithms running on x86 CPUs. However, with the introduction of high-end vision processors, all image analysis steps (including those in the previously mentioned traffic systems) can now optionally be performed entirely in dedicated-function equipment.

Embedded systems based on DSPs (digital signal processors), application SoCs (systems-on-chip), GPUs (graphics processing units), FPGAs (field programmable gate arrays) and other processor types are now entering the mainstream, primarily driven by their ability to match the vision processing performance of x86-based systems at lower cost and power consumption.

Figure 2: In distributed intelligence surveillance systems, networked cameras with local vision processing capabilities have direct access to raw video data and can rapidly analyse and respond to events.

Stand-alone cameras, along with analytics DVRs (digital video recorders) and NVRs (networked video recorders), increasingly rely on embedded vision processing. Large remote monitoring systems, on the other hand, are still fundamentally based on one or more cloud servers that aggregate and simultaneously analyse numerous video feeds. However, even these emerging 'cloud' infrastructure systems are beginning to adopt embedded solutions in order to more easily address performance, power consumption, cost, and other requirements. Embedded vision coprocessors can assist in building scalable systems, offering higher net performance, in part by redistributing processing away from the central server core and towards cameras at the edge of the network, as sketched below.
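
A minimal sketch of that edge-centric split follows; the server endpoint and event schema are illustrative assumptions, not any product's API. The camera-side process analyses frames locally and forwards only compact event metadata upstream, rather than streaming raw video for central analysis.

# Edge-side sketch: analyse locally, ship only event metadata upstream.
import json, time, urllib.request
import cv2

EVENT_ENDPOINT = "http://server.example/events"  # hypothetical collector URL

bg = cv2.createBackgroundSubtractorMOG2()
cap = cv2.VideoCapture(0)  # the on-camera process sees raw sensor frames

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = bg.apply(frame)
    motion = cv2.countNonZero(mask)
    if motion > 5000:  # crude activity threshold, tuned per scene
        event = {"ts": time.time(), "motion_px": motion}
        req = urllib.request.Request(
            EVENT_ENDPOINT, data=json.dumps(event).encode(),
            headers={"Content-Type": "application/json"})
        urllib.request.urlopen(req)  # a few hundred bytes, not megabits of video
        time.sleep(1.0)              # crude rate limit on event reporting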

Semiconductor vendors offer numerous devices for different segments of the embedded cloud analytics market. These ICs can be used on vision processing acceleration cards that go into the PCI Express slot of a desktop server, for example, or to build stand-alone embedded products.

Many infrastructure systems receive compressed H.264 video from IP cameras and decompress the image streams before analysing them. Each "lossy" compression and decompression cycle discards information, sometimes enough to reduce the accuracy of certain video analytics algorithms. Networked cameras with local vision processing "intelligence," on the other hand, have direct access to raw video data and can analyse and respond to events with low latency (figure 2).
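
Server-side, that decode-then-analyse flow looks roughly like the sketch below; the RTSP URL is a placeholder, and OpenCV's FFmpeg backend performs the H.264 decode. Every frame handed to the analytics has already been through at least one lossy compression cycle.

# Server-side sketch: pull a compressed H.264 feed from an IP camera,
# decode it, then analyse. The RTSP URL is a placeholder.
import cv2

cap = cv2.VideoCapture("rtsp://camera.example/stream1")  # FFmpeg decodes H.264

while True:
    ok, frame = cap.read()   # 'frame' is a decoded image, post lossy codec
    if not ok:
        break
    # Analytics run here see compression artifacts (blocking, ringing,
    # quantisation noise), unlike on-camera analytics fed raw sensor data.
    edges = cv2.Canny(frame, 100, 200)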

Although the evolution to an architecture based on distributed intelligence is driving the proliferation of increasingly autonomous networked cameras, complex algorithms often still run on infrastructure servers. Networked cameras are commonly powered via Power over Ethernet (PoE) and therefore have a very limited power budget. Further, the lower the power consumption, the smaller and less conspicuous the camera can be. To quantify the capabilities of modern semiconductor devices, consider that an ARM Cortex-A9-based camera consumes only 1.8W in its entirety while compressing H.264 video at 1080p30 (1920x1080 pixels per frame, 30 frames per second).
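
Using only the figures quoted above, the implied processing efficiency works out as follows:

# Pixel throughput implied by 1080p30, and efficiency at the quoted 1.8 W.
pixels_per_frame = 1920 * 1080          # 2,073,600 pixels
pixel_rate = pixels_per_frame * 30      # ~62.2 Mpixels/s
print(pixel_rate / 1.8)                 # ~34.6 Mpixels/s per watt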

It's relatively easy to recompile PC-originated analytics software to run on an ARM processor, for example. However, as the clock frequency of the host CPU increases, camera power consumption rises significantly compared to running some or all of the algorithm on a more efficient DSP, FPGA, or GPU. Harnessing a dedicated vision coprocessor reduces power consumption even further. And further assisting software development, a variety of computer vision software libraries are available.
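
OpenCV is one such library, and its "transparent API" illustrates the offload idea: wrapping an image in a UMat lets the same algorithm code dispatch to an OpenCL-capable accelerator when one is present, falling back to the CPU otherwise. A minimal sketch, assuming OpenCV 3.0 or later and a placeholder input image:

# Sketch of heterogeneous offload via OpenCV's transparent API (T-API).
import cv2

cv2.ocl.setUseOpenCL(True)             # request accelerator dispatch
img = cv2.imread("frame.png")          # placeholder input image

u = cv2.UMat(img)                      # device-resident image buffer
blurred = cv2.GaussianBlur(u, (7, 7), 1.5)
edges = cv2.Canny(blurred, 80, 160)    # runs on the OpenCL device if present

result = edges.get()                   # copy result back to host memory
print("OpenCL active:", cv2.ocl.useOpenCL())

The appeal of this style is that the vision algorithm itself is written once; whether it lands on a CPU, GPU, or other OpenCL device is a deployment decision rather than a code change.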
