
Will AI tools help detect telecom fraud?

The article discusses the use of an artificial intelligence-based facial recognition tool called “Artificial Intelligence and Facial Recognition powered Solution for Telecom SIM Subscriber Verification” (ASTR) by the Department of Telecommunications (DoT) in India. (Source: The Hindu, 28.05.2023)

Context:

The article highlights the success stories of using ASTR to uncover fake mobile connections and prevent fraudulent SIM card use. However, it also raises concerns about the lack of a personal data protection regime or specific regulations for artificial intelligence in India.

How significant is the problem of telecom frauds?

Artificial intelligence is being used to detect telecom frauds because they have become a significant problem in India. Incidents like the blocking of 1.8 lakh SIM cards activated using fake identities and the arrest of 66 individuals involved in fraud highlight the scale of the issue. Furthermore, cyber frauds have caused substantial financial losses, as evidenced by Karnataka losing ₹363 crore in 2022.

What are the shortcomings of the current system?

The current text-based analysis is limited in its ability to detect fraud, as it cannot search through photographic data to identify similar faces. Therefore, the use of artificial intelligence, specifically facial recognition, is a more advanced and effective approach to tackle telecom frauds.

How can Artificial Intelligence be a solution for telecom frauds?

India has a massive telecom ecosystem with millions of subscribers, making manual verification of documents a daunting task. To address this, the Department of Telecommunications (DoT) aims to use a facial recognition-based platform called ASTR to analyze the subscriber base of all telecom service providers. This AI system can detect patterns and anomalies in vast amounts of data, including photographs, and identify fake connections that rely on anonymity.

What is ASTR and how does it work?

ASTR is a facial recognition tool that uses artificial intelligence to detect fake SIM connections. Facial recognition technology works through a series of steps:

  • Detection: The technology uses algorithms to identify and locate faces in images or videos.
  • Analysis: It analyzes the facial image by mapping the geometry and features of the face to create a unique “faceprint” similar to a fingerprint. This process involves extracting mathematical representations of distinctive facial features.
  • Recognition: The facial recognition system compares the facial features to a database of pre-existing images called a gallery dataset. It cross-references the person’s facial features with the images in the database.
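The recognition step above can be sketched in a few lines of Python. This is an illustrative toy, not ASTR's actual implementation: the three-number “faceprints”, the gallery names, and the 0.9 similarity threshold are all assumptions made up for the example; real systems compare high-dimensional embeddings produced by a trained neural network.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two faceprint vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(x * x for x in b)))

def recognize(probe, gallery, threshold=0.9):
    """Compare a probe faceprint against a gallery dataset and
    return the IDs whose similarity clears the threshold."""
    return [gid for gid, emb in gallery.items()
            if cosine_similarity(probe, emb) >= threshold]

# Toy 3-number "faceprints"; real systems extract 128- or 512-dimensional embeddings.
gallery = {
    "subscriber_A": [0.9, 0.1, 0.4],
    "subscriber_B": [0.1, 0.8, 0.2],
}
print(recognize([0.88, 0.12, 0.41], gallery))  # -> ['subscriber_A']
```

The probe face matches subscriber_A's stored image closely, so only that ID is returned; the same compare-against-a-gallery loop is what lets a system group visually similar photographs together.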

In the case of ASTR:

  • The Department of Telecommunications (DoT) obtained subscriber images from telecom service providers (TSPs) as part of their database.
  • ASTR utilizes facial recognition technology to group similar-looking images.
  • It then compares the associated textual subscriber details, such as names and KYC information, with the images in the database using fuzzy logic, a string-matching concept.
  • ASTR’s final step is to determine if the same person has acquired multiple SIMs using different names, dates of birth, bank accounts, address proofs, or other KYC documents.
  • Additionally, it identifies whether more than eight SIM connections have been obtained in a single person’s name, which violates DoT rules. ASTR’s facial recognition technology analyzes 68 features of the frontal face and treats two faces as the same person if they match at least 97.5%.
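The fuzzy name-matching and SIM-count steps above can be sketched as follows. This is a minimal illustration, not ASTR's actual code: Python's standard-library `SequenceMatcher` stands in for whatever string-matching ASTR uses, and the names, the 0.8 similarity threshold, and the simple clustering loop are all assumptions for the example. The limit of eight connections follows the rule cited above.

```python
from difflib import SequenceMatcher

SIM_LIMIT = 8  # cap cited above: more than eight connections in one name is a violation

def names_match(a, b, threshold=0.8):
    """Fuzzy string match, tolerant of spelling variants in KYC records."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

def flag_violations(connections):
    """connections: list of (subscriber_name, sim_id) pairs.
    Groups fuzzy-matching names together and flags any group
    holding more than SIM_LIMIT SIM connections."""
    clusters = []  # each entry: [representative_name, sim_count]
    for name, _sim_id in connections:
        for cluster in clusters:
            if names_match(name, cluster[0]):
                cluster[1] += 1
                break
        else:
            clusters.append([name, 1])
    return [rep for rep, count in clusters if count > SIM_LIMIT]

# Hypothetical records: one person holds nine SIMs under two spellings of the same name.
records = ([("Ravi Kumar", f"sim{i}") for i in range(5)]
           + [("Ravi Kumaar", f"sim{i}") for i in range(5, 9)]
           + [("Sunita Devi", "sim99")])
print(flag_violations(records))  # -> ['Ravi Kumar']
```

The fuzzy match lets “Ravi Kumar” and “Ravi Kumaar” count as one subscriber with nine connections, exceeding the eight-SIM limit, while the genuine single-SIM subscriber is left alone.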

What are the major concerns of facial recognition AI?

  • Inaccuracy: Facial recognition technology (FRT) can be prone to technical errors due to factors like occlusion, bad lighting, facial expressions, and aging. These errors can lead to misidentification of individuals.
  • Misidentification and Underrepresentation: Additionally, FRT systems may have higher error rates for certain groups of people due to underrepresentation in the training datasets. Studies in India have shown disparities in error rates based on gender and identification of Indian men and women. Globally, FRT accuracy rates vary based on race, gender, and skin color, resulting in false positives and negatives.
  • Privacy and Consent: There are concerns regarding privacy and consent related to facial data. Individuals may lack awareness or consent for the use of their facial data, with limited control over its processing.
  • Mass surveillance: FRT systems process vast amounts of biometric facial data, often without individuals knowing how it is being used, which makes large-scale surveillance possible. Errors and misuse of such systems have already resulted in wrongful arrests and exclusion from social security schemes.

Concerns with facial recognition AI include technical inaccuracies, misidentification, underrepresentation, privacy, consent, and ethical issues of surveillance and autonomy.

What is the legal framework governing such technology in India?

  • In India, there is currently no specific legal framework governing facial recognition technology (FRT) and no comprehensive data protection law. The government withdrew the Personal Data Protection (PDP) Bill, 2019 last year, and a new draft is currently pending in Parliament.
  • India has over 130 government-authorized FRT projects in various sectors, as tracked by the Internet Freedom Foundation’s Project Panoptic. These projects include authentication for official schemes, airport check-in, and identity authentication for accessing educational documents.
  • NITI Aayog, a policy think tank in India, has published papers outlining the country’s national strategy for AI. It emphasizes the importance of consent, voluntary use, and avoiding mandatory FRT.

Legal framework in other countries:

On the other hand, other jurisdictions like the European Union and Canada have established legal frameworks for FRT. In the EU, FRT tools must comply with strict privacy and security rules in the General Data Protection Regulation.