
Artificial Intelligence and Crime

Author: Sophia

what's covered
In this lesson, you will learn about how artificial intelligence is used by both criminals and the criminal justice system. Specifically, this lesson will cover the following:

Table of Contents

  1. Criminals and Artificial Intelligence
  2. Artificial Intelligence in the Criminal Justice System

1. Criminals and Artificial Intelligence

Criminals use artificial intelligence (AI) in many ways to help them commit crimes, especially cybercrimes, or criminal activities that either target or use a computer, a computer network, or a networked device. AI is a broad term for computer systems capable of performing complex tasks, such as reasoning, making decisions, or solving real-world problems. This technology provides cybercriminals with new tools and capabilities that can significantly enhance the scale and effectiveness of their attacks. AI can be used to do the following:

  • Automate attacks, such as password cracking or SQL injection attacks that exploit vulnerabilities in databases
  • Generate more convincing phishing emails by analyzing and mimicking the writing style of legitimate senders, making it harder for users to identify them as malicious
  • Develop more sophisticated malware that can adapt its behavior based on the target’s defenses, making it harder to detect and remove (Cohen, 2024)
  • Assist with identity theft in several ways
  • Create realistic audio, video, and images, which can be used to create fake identities or to impersonate someone else in fraudulent activities
  • Analyze patterns in user behavior and find ways to bypass security measures like two-factor authentication

Criminals use AI for social engineering, which involves deceiving people into providing confidential information that can be used for fraudulent purposes (Interpol, n.d.). AI analyzes large amounts of data from social media and other sources to create highly personalized scams.

EXAMPLE

AI may analyze a target’s interests, behaviors, and relationships to craft a phishing email that appears to be from a trusted source or create realistic voice clones of individuals, which can be used in phone scams to impersonate someone known to the victim, such as a family member, friend, or coworker.

Additionally, social engineering scams can involve AI-powered chatbots engaging with potential victims on social media or messaging platforms, gathering information that can be used in a social engineering attack. This information is then used to create fake social media profiles that appear genuine, allowing criminals to establish trust with potential victims before launching an attack. Lastly, AI is used to manipulate social media algorithms to amplify fraudulent messages or to spread disinformation, making it easier to deceive victims (Interpol, n.d.).

IN CONTEXT

One of the largest known social engineering attacks was carried out by a man named Evaldas Rimasauskas (Huddleston, 2019). He and his team created a fake company that claimed to be a legitimate computer manufacturer working with Google and Facebook. Rimasauskas also established bank accounts in the company’s name.

The scammers then targeted specific employees at Google and Facebook with phishing emails, sending invoices for goods and services that the fake company supposedly provided. However, the invoices directed the employees to deposit money into the scammers’ fraudulent accounts. Between 2013 and 2015, Rimasauskas and his team swindled the two tech giants out of more than $100 million (Huddleston, 2019).

To protect yourself from cyberattacks and the potential misuse of AI, it is essential to keep software updated; use strong, unique passwords; and enable two-factor authentication. Additionally, be cautious of phishing attacks and use secure connections for sensitive transactions. Limit data sharing, keep backups of important files, and stay informed about cybersecurity threats. When using AI, review privacy policies and monitor financial statements for unauthorized transactions (Ready.gov, 2022).
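The "strong, unique passwords" advice can be made concrete with a short sketch. The heuristics below (length, mixed case, digits, symbols) and the function name are illustrative assumptions, not an official standard; real password checkers also test candidates against lists of breached passwords.

```python
import re

def password_strength(password: str) -> str:
    """Score a password as 'weak', 'fair', or 'strong' using simple heuristics."""
    score = 0
    if len(password) >= 12:                                       # long enough
        score += 1
    if re.search(r"[a-z]", password) and re.search(r"[A-Z]", password):
        score += 1                                                # mixed case
    if re.search(r"\d", password):                                # has a digit
        score += 1
    if re.search(r"[^A-Za-z0-9]", password):                      # has a symbol
        score += 1
    return {0: "weak", 1: "weak", 2: "fair", 3: "fair", 4: "strong"}[score]

print(password_strength("password"))          # weak
print(password_strength("Tr0ub4dor&3xLong"))  # strong
```

A short dictionary word scores zero on every heuristic, while a long mixed-character passphrase passes all four.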

terms to know
Artificial Intelligence
A computer system that is able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.
Cybercrime
A type of criminal activity carried out using digital devices and/or networks. It involves the use of technology to commit fraud, identity theft, data breaches, computer viruses, and scams.
Social Engineering
The use of deception to manipulate individuals into sharing confidential or personal information that may be used for fraudulent purposes.
Chatbot
A computer program designed to simulate conversation with human users.


2. Artificial Intelligence in the Criminal Justice System

Just as people can use AI to commit crimes, the criminal justice system can also use AI in a variety of ways, including the following:

  • Predictive policing, which uses data analysis and AI technologies to identify potential criminal activity and deploy law enforcement resources more effectively
  • Crime mapping, in which algorithms analyze historical crime data to identify patterns and trends in criminal activity
  • Facial recognition technology, which identifies and verifies the identity of an individual from a digital image or video frame
  • Analysis of evidence
  • Identification of victims
  • Prison management

Police departments across the country have been using AI algorithms to examine data from different sources, such as surveillance cameras, license plate readers, and social media, to identify possible threats and forecast patterns of criminal activity. This information is used to create crime maps that help law enforcement agencies allocate resources to areas with a higher likelihood of crime.

In other words, AI identifies “hot spots” where crime is concentrated, allowing law enforcement to target these areas with increased patrols and other crime prevention measures (SATPALDA, 2024). While this can help reduce response times and improve overall effectiveness, critics argue that AI algorithms may perpetuate existing biases in the criminal justice system, leading to unfair treatment of certain communities (Cogent Infotech, n.d.). Thus, it is essential for law enforcement to ensure that AI is used ethically and responsibly.
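At its simplest, hot-spot identification buckets incident locations into grid cells and flags cells where the count crosses a threshold. The sketch below illustrates only that idea; the coordinates, cell size, and threshold are hypothetical, and real predictive-policing systems use far richer models.

```python
from collections import Counter

def hot_spots(incidents, threshold=3):
    """Bucket incidents into coarse grid cells (2-decimal lat/lon,
    roughly 1 km) and flag cells with at least `threshold` incidents."""
    counts = Counter((round(lat, 2), round(lon, 2)) for lat, lon in incidents)
    return sorted(cell for cell, n in counts.items() if n >= threshold)

# Hypothetical incident coordinates; the first three fall in the same cell.
incidents = [(41.881, -87.623), (41.8812, -87.6231), (41.8816, -87.6228),
             (41.95, -87.70)]
print(hot_spots(incidents))  # [(41.88, -87.62)]
```

The single isolated incident never reaches the threshold, so only the cluster is reported as a hot spot.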

Another way that the criminal justice system has been using AI is through facial recognition technology, as noted earlier. Police departments use facial recognition to compare images of suspects from surveillance cameras or other sources with databases of known criminals to identify and locate those suspects. This same facial recognition process can also be used to help identify victims of crime.

EXAMPLE

AI can help locate missing persons by comparing images of individuals reported missing with databases of images, such as driver’s license photos (U.S. Government Accountability Office, 2023).

Some law enforcement agencies may even use facial recognition to control access to secure facilities, such as police stations and government buildings.
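Under the hood, facial recognition matching reduces each face image to a numeric "embedding" vector and compares vectors by similarity. The sketch below shows only that comparison step, with made-up three-dimensional embeddings and an arbitrary threshold; production systems use learned embeddings with hundreds of dimensions.

```python
import math

def cosine_similarity(a, b):
    """Similarity between two embedding vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def best_match(probe, database, threshold=0.9):
    """Return the database entry most similar to the probe embedding,
    or None if no entry clears the match threshold."""
    name, score = max(
        ((n, cosine_similarity(probe, emb)) for n, emb in database.items()),
        key=lambda item: item[1],
    )
    return name if score >= threshold else None

# Toy 3-dimensional embeddings; real systems use hundreds of dimensions.
db = {"suspect_a": [0.9, 0.1, 0.2], "suspect_b": [0.1, 0.8, 0.5]}
print(best_match([0.88, 0.12, 0.21], db))  # suspect_a
```

The threshold matters: a probe that resembles no database entry closely enough returns no match rather than the nearest wrong person.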

The criminal justice system also uses AI to analyze evidence in various ways, which can enhance the investigation and prosecution of crimes. AI algorithms process and compare large amounts of data quickly and accurately, helping forensic analysts identify matches and generate leads in criminal investigations. These types of evidence include the following:

  • Forensic evidence, such as fingerprints, DNA samples, and ballistic evidence
  • Video footage and images from which valuable information, such as objects, people, or vehicles of interest, can be extracted
  • Audio recordings, from which potentially relevant voices, sounds, or keywords can be identified
  • Text data, such as emails, social media posts, and other electronic communications, from which potentially relevant patterns, sentiments, and relationships can be identified (Lunter, 2023)
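A minimal version of the text-data analysis described in the last bullet is keyword flagging. The sketch below is purely illustrative (the messages and keyword list are invented); real investigative tools use far more sophisticated language models.

```python
def flag_messages(messages, keywords):
    """Return the messages containing any of the keywords (case-insensitive)."""
    lowered = [kw.lower() for kw in keywords]
    return [m for m in messages if any(kw in m.lower() for kw in lowered)]

# Invented messages for illustration only.
messages = [
    "Wire the payment to the usual account tonight",
    "See you at practice tomorrow",
    "Delete the transfer records before the audit",
]
flagged = flag_messages(messages, ["wire", "transfer", "delete"])
print(flagged)
```

Only the two messages mentioning the flagged terms are returned; the innocuous one is filtered out.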

As in the example with facial recognition, AI can assist police in identifying victims of various crimes, including human trafficking, kidnapping, or cybercrimes. It can analyze patterns in data, such as financial transactions or online behavior, to identify potential victims of fraud, exploitation, or abuse. Also, it can analyze social media posts or online chat messages to identify language patterns indicative of distress or victimization. AI can even analyze images and videos for signs of abuse or exploitation, such as bruises or other injuries (Rigano, 2018).

Lastly, AI is used to improve various aspects of prison management, making facilities safer and more efficient.

EXAMPLE

AI-powered surveillance systems can monitor inmate activity and detect unusual behavior, such as fights or unauthorized movements (Bala & Trautman, 2019). This helps prison staff respond quickly to potential security threats.

AI can also analyze existing surveillance footage to detect the presence of contraband, such as drugs or weapons, in prison facilities. This helps prevent the smuggling of contraband into prisons and enhances the overall security of facilities.

terms to know
Predictive Policing
The use of mathematics, predictive analytics, and other analytical techniques in law enforcement to identify potential criminal activity.
Crime Mapping
A method used by analysts in law enforcement agencies to map, visualize, and analyze crime incident patterns.
Facial Recognition Technology
A technology capable of matching a human face from a digital image or a video frame against a database of faces.
Hot Spot
An area of concentrated criminal activity.

summary
In this lesson, you first learned about the relationship between criminals and artificial intelligence. In recent years, AI has become increasingly prevalent, and criminals have been using it to enhance their ability to commit crimes, especially in cyberspace. That said, artificial intelligence is also being used in the criminal justice system to combat crime. So far, AI has been used to help with investigations, analyze evidence, and keep prison facilities safer.

In the next lesson, you will have an opportunity to examine a case study dealing with cybercrime.

REFERENCES

Bala, N., & Trautman, L. (2019, May 3). Will artificial intelligence help improve prisons? Pacific Standard. psmag.com/social-justice/should-prisons-use-artificial-intelligence

Cogent Infotech. (n.d.). Predictive policing using machine learning. www.cogentinfo.com/resources/predictive-policing-using-machine-learning-with-examples

Cohen, L. (2024, January 10). AI advances risk facilitating cyber crime, top US officials say. Reuters. www.reuters.com/technology/cybersecurity/ai-advances-risk-facilitating-cyber-crime-top-us-officials-say-2024-01-09/

Huddleston, T. (2019, March 27). How this scammer used phishing emails to steal over $100 million from Google and Facebook. CNBC. www.cnbc.com/2019/03/27/phishing-email-scam-stole-100-million-from-facebook-and-google.html

Interpol. (n.d.). Social engineering scams. www.interpol.int/en/Crimes/Financial-crime/Social-engineering-scams

Lunter, J. (2023, November 24). Can criminal investigations rely on AI? BiometricUpdate.com. www.biometricupdate.com/202311/can-criminal-investigations-rely-on-ai

Owen, Q. (2023, October 11). How AI can fuel financial scams online, according to industry experts. ABC News. abcnews.go.com/Technology/ai-fuel-financial-scams-online-industry-experts/story?id=103732051

Ready.gov. (2022, September 14). Cybersecurity. www.ready.gov/cybersecurity

Rigano, C. (2018, October 8). Using artificial intelligence to address criminal justice needs. National Institute of Justice. nij.ojp.gov/topics/articles/using-artificial-intelligence-address-criminal-justice-needs

SATPALDA. (2024, February 22). GIS for crime mapping. satpalda.com/blogs/gis-for-crime-mapping/

U.S. Government Accountability Office. (2023, September 12). Facial recognition services: Federal law enforcement agencies should take actions to implement training, and policies for civil liberties. www.gao.gov/products/gao-23-105607
