Using Artificial Intelligence in Surveillance: Is it Worth It?

By: Brenna Hoffman and Aspen Runkel

The crime rate in a portion of a city has gone up. To improve the safety of her employees and customers, the owner of an electronics store decides to install a “smart” security system equipped with facial recognition. The system has a built-in function that calls the police whenever it recognizes someone with an outstanding warrant in a criminal database. As the business owner installs a decal on the door notifying customers of the new technology and improved safety measures, she greets a couple entering the store. They want to buy a desktop computer for their home office. What they do not know is that as soon as they walked through the doors, the new security system recognized the man as one of the most wanted in the area, instantly sending an alert to the police. Since the man has been known to associate with terrorists in the past, the S.W.A.T. team comes in with guns drawn and tackles him to the floor. It is only after the police cuff him that they realize they’ve got the wrong guy. The “smart” security system had produced a false positive.

Since the invention of the camera, the world has been under surveillance. However, according to James Vincent, the surveillance cameras of the past were more like “portholes: useful only when someone is looking through them.”[i] With the use of artificial intelligence (AI) and machine learning, surveillance cameras are now becoming “smart”: they can analyze video and audio in real time without needing a person to comb through long hours of footage to find a specified activity. This technology has various use cases, each offering its own benefits and concerns. As we explored the use cases of AI-enabled cameras, we began to wonder: is the privacy given up through automated surveillance worth the benefits it offers to society?

Throughout the remainder of this article, we will discuss the advantages and disadvantages that AI surveillance cameras pose to law enforcement, citizens, and businesses and how these relate to privacy concerns (summarized in Table 1).

Table 1: AI Surveillance Use Cases – Benefits and Concerns from Various Perspectives

Police/Law Enforcement
  Benefits: Real-time video viewing assistance; crime detection; avoiding the fatigue and cost of analyzing footage
  Concerns: False positives causing legal issues

Citizens/Customers
  Benefits: Safety; law enforcement alerted proactively
  Concerns: Privacy issues/paranoia; de-anonymized data; algorithmic bias and discrimination; false positives

Business
  Benefits: Real-time crime protection; demographic identification; customer behavior monitoring; targeted marketing and merchandising aid
  Concerns: Compromised customer relationship/loyalty

Police/ Law Enforcement Perspective

The police have used video surveillance cameras for years to aid their work and track criminals. Unfortunately, reviewing surveillance footage is tiresome and prone to human error, and footage is usually examined reactively, after a crime has occurred. New AI-based surveillance cameras can help the police force by identifying crimes in real time through an alert and by helping officers sift through hours of video quickly. For example, IC Realtime’s Ella allows users to search relevant footage by typing in keywords (e.g. red shirt, Jeep Wrangler).[ii] This product can help police find the information they need quickly without searching through hours of video, where they are prone to video blindness (a viewer can potentially miss 95% of screen activity after 22 minutes of viewing).[iii]

Companies like Athena Security have produced systems that can detect objects such as knives and guns, provide facial recognition, and recognize specific behavior that indicates a crime is about to occur.[iv] Athena even boasts 99% accuracy in detecting objects (not including concealed objects, which it is working on).[v] These systems use machine learning and AI to detect anomalies in real time and send an alert to the proper authorities. Authorities can be alerted when a suspect is spotted on a camera and can respond more quickly to incidents, potentially saving lives.

Unfortunately, as the opening example illustrates, there are concerns about using smart cameras to alert the police. They can produce false positives (or even false negatives), which can result in embarrassing and costly situations. However, if systems are in place to verify a threat, or if authorities use providers with a high accuracy rate, the potential for these costly mistakes can be reduced.
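The false-positive risk above stems from base rates as much as from accuracy. The following sketch (all numbers are hypothetical illustrations, not vendor figures) applies Bayes’ rule to show how even a detector that is 99% sensitive and 99% specific produces mostly false alarms when the people it is looking for are rare among those scanned:

```python
# Illustrative only: why a "99% accurate" detector can still produce
# mostly false alarms when the flagged condition is rare (base-rate effect).
# All numbers below are hypothetical, not measurements of any real product.

def false_alarm_fraction(sensitivity, specificity, base_rate):
    """Fraction of all alerts that are false positives, via Bayes' rule."""
    true_alerts = sensitivity * base_rate              # correctly flagged
    false_alerts = (1 - specificity) * (1 - base_rate) # wrongly flagged
    return false_alerts / (true_alerts + false_alerts)

# Suppose 1 in 10,000 store visitors is actually on a watch list,
# and the system is 99% sensitive and 99% specific.
frac = false_alarm_fraction(sensitivity=0.99, specificity=0.99,
                            base_rate=1 / 10_000)
print(f"{frac:.1%} of alerts would be false positives")  # roughly 99%
```

Under these assumed numbers, nearly every alert sent to police would concern an innocent person, which is why independent verification before a response matters so much.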

Citizens or Customers’ Perspective

Citizens or customers may feel more protected knowing that AI-enabled cameras are being used to monitor the safety of their surroundings. While this technology promises to provide law enforcement improved abilities to detect and act on criminal behavior, citizens and customers are concerned about protecting their privacy.

Since the advent of smartphones and the internet, people have felt uneasy about their digital footprint being constantly monitored and tracked, and numerous articles advise consumers on reducing that footprint and keeping it from prying eyes.[vi] The integration of AI into surveillance cameras has only added to this unease: more than ever, people feel that someone is always watching them. American Civil Liberties Union (ACLU) senior policy analyst Jay Stanley says, “The concern is that people will begin to monitor themselves constantly, worrying that everything they do will be misinterpreted and bring down negative consequences on their life.”[vii] As Stanley puts it, being unable to act and behave freely is an invasion of privacy that may lead to mental and emotional consequences.

In addition to privacy concerns, customers and citizens worry about the unforeseen consequences this technology may inflict on their lives. The first is the threat of their data becoming de-anonymized. A study by MIT scientists found that anonymized data can be de-anonymized with surprising ease when multiple datasets are combined.[viii] Without proper regulation, those who control AI surveillance data could pair it with other datasets, figure out who is who, and potentially abuse that information.
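The mechanics of such de-anonymization can be illustrated with a so-called linkage attack: joining an “anonymized” dataset with a public one on shared quasi-identifiers such as ZIP code, birth year, and gender. The sketch below uses entirely made-up records and field names for illustration only:

```python
# Hypothetical sketch of a linkage attack: "anonymized" records are
# re-identified by matching quasi-identifiers (ZIP, birth year, gender)
# against a public dataset. All records and names here are invented.

anonymized_visits = [  # e.g. surveillance-derived data with names removed
    {"zip": "46808", "birth_year": 1984, "gender": "F", "visit": "clinic"},
    {"zip": "46815", "birth_year": 1990, "gender": "M", "visit": "pawn shop"},
]

public_records = [  # e.g. a voter roll or scraped social-media profiles
    {"name": "Alice Example", "zip": "46808", "birth_year": 1984, "gender": "F"},
    {"name": "Bob Example", "zip": "46815", "birth_year": 1990, "gender": "M"},
]

def reidentify(visits, records):
    """Join the two datasets on their shared quasi-identifiers."""
    matches = []
    for v in visits:
        for r in records:
            if (v["zip"], v["birth_year"], v["gender"]) == (
                r["zip"], r["birth_year"], r["gender"]
            ):
                matches.append((r["name"], v["visit"]))
    return matches

print(reidentify(anonymized_visits, public_records))
# [('Alice Example', 'clinic'), ('Bob Example', 'pawn shop')]
```

When the combination of quasi-identifiers is unique, removing names alone offers little protection, which is the core finding privacy researchers keep rediscovering.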

There is also concern about algorithmic bias and discrimination being trained into the software. AI “learns” from the data programmers feed it, so any biases that exist in society are likely to be perpetuated.[ix] A controversial study by researchers at Stanford University used AI to infer a person’s sexual orientation, political views, criminality, and even IQ.[x] This could be dangerous in countries that still penalize homosexuality, in communities that discriminate based on political views, and so on. A store owner or even a government leader could program an AI surveillance system (consciously or unconsciously) to send alerts based on bias or discrimination.

As our electronics store example illustrates, a major concern is the threat of false positives, which could lead to expensive legal cases, wrongful detention, or worse. The ACLU expressed particular concern about using this technology for “anomaly detection,” which can single out an individual for unusual, atypical, or deviant behavior, and for emotion recognition, which promises to discern a person’s mood, though there is little evidence that emotions are universal or can be determined from facial movements alone.[xi] Relying on AI to accurately predict highly nuanced human behavior is risky business.

Business Perspective

Much like law enforcement, businesses are considering how AI-enabled cameras could give them a competitive edge. AXIS Communications offers a product called AXIS Demographic Identifier which determines the gender and approximate age range of a customer by detecting and analyzing the faces of store visitors in real-time. The result is information a business can use to make targeted marketing and merchandising decisions.[xii] IBM recently showed how its video analytics software could be used to count customers and estimate their ages and loyalty status. The software could monitor the length of a line, identify a manager as he walked through a crowd, and flag people loitering outside the store.[xiii]

This technology could greatly improve businesses’ ability to monitor and act on consumer behavior, but at what cost? A major concern for businesses utilizing this technology is the possibility of compromising the relationship they have with their customers. As we discussed in the previous section, people tend to feel uneasy about being constantly monitored. Deploying this technology, especially without transparency to the customer, may push privacy-conscious customers away.

Is it worth it?

The time to ask whether people should use smart technology has passed. Society has already integrated smartphones, smart appliances, smart speakers, and smart cameras into everyday life, despite knowing their potential for compromising privacy. Once a boulder is pushed off a hill, it is extremely hard to stop. The real question is how this smart technology can be implemented responsibly to balance the pros and cons discussed in this article. Safeguards can be put in place to limit the boulder’s destruction (e.g. warning of the dangers, creating regulations to manage its fall, and moving people away from the bottom of the hill). Based on our research, we recommend that “smart” surveillance systems be accompanied by rules and measures that protect customer privacy and address the risks of AI-enabled surveillance cameras (e.g. false positives/negatives, privacy issues, damage to customer relationships).

If the issues discussed in this paper can be reduced or eliminated, then privacy advocates can rest assured that the data being collected through AI surveillance cameras are being protected and used appropriately. The business owner, law enforcement, and customer in our opening example all could benefit from the responsible integration and use of AI surveillance, but improvements and precautions must be made to protect privacy.

[i] Vincent, James. “Artificial Intelligence Is Going to Supercharge Surveillance.” The Verge, 23 Jan. 2018.

[ii] Vincent, James. “Artificial Intelligence Is Going to Supercharge Surveillance.” The Verge, 23 Jan. 2018.

[iii] “Avigilon Appearance Search Video Analytics Technology.” iC2, 31 Jan. 2017.

[iv] Tucker, Patrick. “Here Come AI-Enabled Cameras Meant to Sense Crime Before It Occurs.” Defense One, 24 Apr. 2019.

[v] “Gun Detection.” Athena Security, 11 Oct. 2019.

[vi] Norton. “Help Protect Your Digital Footprint from Prying Eyes.”

[vii] Vincent, James. “Artificial Intelligence Is Going to Supercharge Surveillance.” The Verge, 23 Jan. 2018.

[viii] Campbell-Dollaghan, Kelsey. “Sorry, Your Data Can Still Be Identified Even If It’s Anonymized.” Fast Company, 10 Dec. 2018.

[ix] Vincent, James. “Artificial Intelligence Is Going to Supercharge Surveillance.” The Verge, 23 Jan. 2018.

[x] Vincent, James. “The Invention of AI ‘Gaydar’ Could Be the Start of Something Much Worse.” The Verge, 21 Sept. 2017.

[xi] Chokshi, Niraj. “How Surveillance Cameras Could Be Weaponized With A.I.” The New York Times, 13 June 2019.

[xii] “AXIS Demographic Identifier.” Axis Communications.

[xiii] Chokshi, Niraj. “How Surveillance Cameras Could Be Weaponized With A.I.” The New York Times, 13 June 2019.