This article was originally published in Security Magazine on August 27, 2019.

It’s helpful to reflect on where we are now versus where we are going.

Today, there is still more discussion about what might be possible than actual products on the market. Much of the conversation centers on practical ways to apply deep learning and neural networks, and how these techniques can improve analytics and significantly reduce false positives for important events. When talking to end users about analytics, it becomes clear that many still don't take full advantage of them. In some cases this is due to a history of false positives with previous-generation technology; in others, it is simply a matter of not believing that reliable analytics are achievable for their unique needs. The good news is that with AI, or more accurately, deep learning and neural networks, we are moving to a new level of enhanced analytics and data gathering in two key areas:

Increased accuracy for existing analytics: In the past, developers tried to define what an object is, what motion is, and which motion is interesting enough to track versus noise that should be ignored.

A perfect example: wind blowing the leaves of a tree, or a plastic bag floating by. Something as simple as motion detection has been plagued by wind-generated false positives for far too long. Users could try to reduce the sensitivity for light breezes, but as soon as a big storm came through, motion events were triggered. Using neural networks and deep learning to define objects such as humans, cars, buses and animals means that traditional analytics can now focus on objects. These new algorithms have been taught to recognize a human being by seeing thousands of images of humans. That repetitive learning is exactly how a neural network learns to recognize a human or a car. Once it has learned these examples, it can apply that knowledge to existing analytics. For example, a car crossing a line at an entry gate might be acceptable, but if a person crosses that same line, we might want an alert. A shadow should be ignored, and swaying trees shouldn't trigger an event. A car driving in the wrong direction certainly warrants an alert, while people moving about freely are fine. All of the traditional motion analytics such as appear/disappear, line crossing, object left behind and loitering will be more accurate and capable of additional refinement using AI and the power of object recognition.
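A minimal sketch of how that object-aware filtering could look in practice, assuming a detector that reports a class label and confidence for each tracked object. The Detection structure, the class lists and the 0.6 confidence threshold below are illustrative assumptions, not any vendor's API:

```python
from dataclasses import dataclass

# Hypothetical detection record, as a deep-learning detector might emit per frame.
@dataclass
class Detection:
    label: str         # e.g. "person", "car", "bus", "animal"
    confidence: float  # detector confidence, 0.0 to 1.0
    y: float           # vertical center of the bounding box (pixels)

ALERT_CLASSES = {"person"}   # classes that should raise an alert at this gate
IGNORE_CLASSES = {"shadow"}  # classes that should never trigger events

def crossed_line(prev_y: float, curr_y: float, line_y: float) -> bool:
    """True if the object's center moved from one side of the trip line to the other."""
    return (prev_y - line_y) * (curr_y - line_y) < 0

def should_alert(prev: Detection, curr: Detection, line_y: float) -> bool:
    """Alert only when an object of an interesting class crosses the line."""
    if curr.label in IGNORE_CLASSES or curr.label not in ALERT_CLASSES:
        return False  # a car or a shadow crossing is acceptable here
    if curr.confidence < 0.6:  # assumed threshold to suppress detector noise
        return False
    return crossed_line(prev.y, curr.y, line_y)

# A person walking across the gate line (y=100) triggers; a car would not.
person_before = Detection("person", 0.95, 90.0)
person_after = Detection("person", 0.95, 110.0)
print(should_alert(person_before, person_after, line_y=100.0))  # True
```

Note that the classification happens before the motion rule is evaluated, which is why a storm shaking the trees never reaches the line-crossing check at all.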

Better data mining: Using AI, cameras can tell us more about the objects they detect. For example, it's not just a human; it's a human wearing a green shirt and black pants with sunglasses on, and that car is a small red sedan. This additional information is embedded in the metadata captured along with the video (metadata is really just data about data). Every frame of video has its own header, which includes the additional analytics metadata, so the metadata is correctly timed with the video. If this data is stored at the recording site, you can search the recorded files' metadata for, say, a green shirt. This can reduce a search that would otherwise take multiple hours to a couple of minutes or less.
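To illustrate the kind of search this enables, here is a sketch assuming per-frame metadata stored as simple attribute records alongside the video. The field names and values are hypothetical, not a specific camera's schema:

```python
from dataclasses import dataclass, field

@dataclass
class FrameMetadata:
    timestamp: float  # seconds from the start of the recording
    objects: list = field(default_factory=list)  # attribute dicts, one per object

# What a recording's per-frame headers might carry (illustrative values).
recording = [
    FrameMetadata(0.00, [{"class": "person", "shirt": "green", "pants": "black"}]),
    FrameMetadata(0.04, [{"class": "car", "color": "red", "type": "sedan"}]),
]

def find_frames(metadata, **attributes):
    """Return timestamps of frames containing an object matching all attributes."""
    hits = []
    for frame in metadata:
        if any(all(obj.get(k) == v for k, v in attributes.items())
               for obj in frame.objects):
            hits.append(frame.timestamp)
    return hits

# Jump straight to the moments a green shirt appears instead of scrubbing video.
print(find_frames(recording, shirt="green"))  # -> [0.0]
```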

Edge versus Server

Analytics can run on a dedicated server or on the edge, inside the camera. Server-side AI will be used when more heavy lifting is required, such as the large database comparisons typical of facial recognition, automatic license plate recognition (ALPR) and more. However, even for compute-intensive tasks, gains in both processing speed and bandwidth can be had from a hybrid approach, with edge devices and servers working together. AI-derived metadata from the edge can be sent to a server-side application instead of raw video, which would require the server to decode multiple streams just to run an analysis.
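To make the bandwidth point concrete, here is a sketch of the edge side of that hybrid approach, assuming the camera serializes its metadata as JSON and posts it to a hypothetical server endpoint instead of streaming raw video:

```python
import json
import urllib.request

# Edge-derived metadata: a few hundred bytes per event, versus the megabits
# per second of raw video it describes.
event_metadata = {
    "camera_id": "cam-01",  # hypothetical identifier
    "timestamp": 1566912000.0,
    "objects": [{"class": "person", "shirt": "green", "confidence": 0.92}],
}

def send_metadata(metadata: dict, url: str) -> None:
    """POST metadata to the server-side application; no video decoding required."""
    body = json.dumps(metadata).encode("utf-8")
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:  # server matches against its databases
        resp.read()

# send_metadata(event_metadata, "http://analytics.example/ingest")  # hypothetical endpoint
```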


With these new AI capabilities coming to the edge, security cameras are more powerful than ever, and the accuracy of their analytics is dramatically improved.