As computer vision becomes increasingly popular, misuse and misconceptions about it are also becoming more widespread. Many people without deep knowledge of the subject tend to believe what they are told without questioning it. This is a problem, because the general public can be misinformed about what computer vision or AI really is and how it works. There are also growing concerns that the use of computer vision in public surveillance and security systems leads to privacy violations, and sometimes to safety violations as well.
However, there are also many potential uses for computer vision that could benefit society. For example, in public healthcare, computer vision could help save lives by automatically diagnosing disorders and illnesses that manifest in facial features, and by detecting cancers and abnormalities in X-ray scans.
Security officials can use facial recognition to identify individuals in public places in real time. Such systems could be used to identify individuals with criminal convictions or repeat offenders, and to detect and count people or vehicles in an area.
Calls To Ban Facial Recognition In Law Enforcement
People may trust companies that say they will protect their privacy, but some of these companies may not actually respect people's privacy rights and may disregard them entirely. In addition, some companies' preventative measures may not be enough. Governments should protect their citizens' right to privacy by legislating on this issue. For example, one way to protect privacy is to ban facial recognition for certain applications. Another is to create laws and protocols that companies must follow in order to use people's information and keep it safe.
Computer Vision: New Solution, Old Issues
Computer vision and AI technologies do not fundamentally change the way law enforcement works; rather, they are tools that help humans do their jobs more efficiently, effectively, and reliably. While human security staff or police are limited to looking for a handful of suspects at a time, AI can look for hundreds or thousands simultaneously. This can be especially valuable in emergencies where a quick, immediate response is crucial. Facial recognition technologies are typically used to find potential crime suspects or witnesses by scanning photos, video footage, or live streams, and can also be used for surveillance in public places or during large events.
Privacy vs. Safety: Can We Have Both?
Facial recognition technology is increasingly being used to help improve safety, but there are still concerns that need to be addressed. One of the main issues is the high error rate of facial recognition, which can lead to false alarms, wrongful detentions, or even false arrests. There is also the concern of bias in facial recognition, which can lead to inaccurate identifications and even racial profiling and targeting. These issues highlight the need for control and regulation of this technology, as well as the delicate trade-off between safety and privacy.
Privacy-Preserving Machine Learning (PPML)
Privacy-Preserving Machine Learning (PPML) is concerned with adversaries trying to infer private data, even from trained models. Model inversion attacks aim to reconstruct training data from model parameters, for example, to recover sensitive attributes such as the gender or genotype of an individual given the model’s output. Membership inference attacks are used to infer whether an individual was part of the model’s training set. Training data extraction attacks aim to recover individual training examples by querying the model.
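To make the membership inference idea concrete, here is a minimal sketch of a loss-threshold membership test in Python: training examples tend to receive lower loss than unseen examples, so a low loss is taken as evidence of membership. The `predict_proba` callable and the dummy model below are hypothetical placeholders, not part of any specific library.

```python
import numpy as np

def loss_threshold_membership_test(predict_proba, x, y_true, threshold):
    """Guess whether (x, y_true) was in the training set.

    Heuristic: models tend to assign lower loss (higher confidence)
    to examples they were trained on, so a cross-entropy loss below
    `threshold` is treated as evidence of membership.
    """
    probs = predict_proba(x)                   # class probabilities for x
    loss = -np.log(probs[y_true] + 1e-12)      # cross-entropy for the true label
    return loss < threshold                    # True -> "likely a training member"

# Illustrative usage with a dummy model that is overconfident on one point.
def dummy_predict_proba(x):
    return np.array([0.98, 0.01, 0.01]) if x == "seen" else np.array([0.4, 0.3, 0.3])

print(loss_threshold_membership_test(dummy_predict_proba, "seen", 0, threshold=0.5))    # True
print(loss_threshold_membership_test(dummy_predict_proba, "unseen", 0, threshold=0.5))  # False
```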
A general approach that is commonly used to defend against such attacks is Differential Privacy (DP), which offers strong mathematical guarantees on the privacy of the individuals whose data is contained in a database.
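For a flavour of how DP works in practice, the sketch below applies the Laplace mechanism to a simple counting query; the dataset and parameter choices are illustrative only.

```python
import numpy as np

def dp_count(values, predicate, epsilon, rng=np.random.default_rng()):
    """Release a count under epsilon-differential privacy via the Laplace mechanism.

    A counting query has L1 sensitivity 1 (adding or removing one person's
    record changes the count by at most 1), so adding noise drawn from
    Laplace(scale = 1 / epsilon) yields an epsilon-DP answer.
    """
    true_count = sum(1 for v in values if predicate(v))
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Example: how many people in a (hypothetical) dataset are over 40?
ages = [23, 45, 31, 67, 52, 38, 49]
print(dp_count(ages, lambda age: age > 40, epsilon=0.5))
```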
Methods To Prevent Privacy Breaches During Training and Inference of Computer Vision Models:
● Use of a data anonymization technique such as k-Anonymity (a small sketch of a k-anonymity check follows this list).
● Use of a data obfuscation technique such as data masking.
● Use of a data encryption technique such as homomorphic encryption.
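As a rough illustration of the first item, this sketch checks whether a small table satisfies k-anonymity over a chosen set of quasi-identifiers; the records and field names are hypothetical.

```python
from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k):
    """Return True if every combination of quasi-identifier values
    appears in at least k records, i.e. the table is k-anonymous."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return min(groups.values()) >= k

# Hypothetical records: ZIP code and age band act as quasi-identifiers.
records = [
    {"zip": "941**", "age_band": "30-39", "diagnosis": "A"},
    {"zip": "941**", "age_band": "30-39", "diagnosis": "B"},
    {"zip": "103**", "age_band": "40-49", "diagnosis": "A"},
    {"zip": "103**", "age_band": "40-49", "diagnosis": "C"},
]
print(is_k_anonymous(records, ["zip", "age_band"], k=2))  # True
```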
Secure enclaves, homomorphic encryption, and federated learning are all methods that can be used to prevent privacy breaches during training and inference.
Secure enclaves protect data that is currently in use, homomorphic encryption allows mathematical operations to be performed directly on ciphertext rather than on the underlying data, and federated learning allows multiple data owners to train a model collectively without sharing their private data. Secure multi-party computation (MPC) is a subfield of cryptography concerned with designing protocols that allow parties to jointly compute a function over their inputs while keeping those inputs private.
MPC is important for privacy-preserving applications such as private query answering and electronic voting.
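To make the MPC idea concrete, the following sketch shows additive secret sharing, one simple building block used in MPC protocols: three parties compute the sum of their private inputs without revealing them. The field modulus and party setup are purely illustrative.

```python
import secrets

PRIME = 2**61 - 1  # field modulus; all arithmetic on shares is done mod PRIME

def share(secret, n_parties):
    """Split `secret` into n additive shares that sum to it mod PRIME.
    Any n-1 of the shares reveal nothing about the secret."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    return sum(shares) % PRIME

# Three parties jointly compute the sum of their private inputs:
# each party secret-shares its value, the parties add the shares they
# hold locally, and only the final sum is reconstructed.
inputs = [12, 30, 7]
all_shares = [share(x, 3) for x in inputs]
summed_shares = [sum(col) % PRIME for col in zip(*all_shares)]
print(reconstruct(summed_shares))  # 49, without any party revealing its input
```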
Methods for Privacy-Preserving Deep Learning in Visual Data
Edge AI processing can be used for privacy-preserving deep learning on visual data. Sensitive visual data is processed on the device that captures it, so it does not need to be transmitted or stored elsewhere. Such edge vision systems can operate autonomously and process data in real time. Private image processing on distributed edge devices can be combined with additional methods:
● Image obfuscation - the process of sanitizing and anonymizing sensitive visual data. Common methods of obfuscation include blacking out, pixelization (or mosaicing), and blurring; however, these deterministic techniques can be defeated by re-identification with well-trained neural networks. Newer obfuscation methods, based on metric privacy, allow pixelated images to be shared with rigorous privacy guarantees (a minimal pixelization sketch follows this list).
● Removal of moving objects - An alternative to blurring moving objects in images is to use a moving object segmentation algorithm to remove them and inpaint the exposed area. Such an algorithm uses information from other views to produce a realistic output image in which the moving object is no longer visible.
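As a minimal illustration of pixelization, the sketch below mosaics a rectangular image region with NumPy; in a real pipeline the region coordinates would come from a face or object detector, which is not shown here.

```python
import numpy as np

def pixelate_region(image, top, left, height, width, block=16):
    """Replace an image region with the mean colour of each block x block tile,
    producing the familiar mosaic effect."""
    out = image.copy()
    region = out[top:top + height, left:left + width]  # view into the copy
    for y in range(0, region.shape[0], block):
        for x in range(0, region.shape[1], block):
            tile = region[y:y + block, x:x + block]
            tile[...] = tile.mean(axis=(0, 1), keepdims=True)
    return out

# Hypothetical 256x256 RGB frame; the box would normally come from a detector.
frame = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
obfuscated = pixelate_region(frame, top=64, left=64, height=96, width=96, block=16)
```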
Summary
Looking to the future, we can expect increased implementation of facial recognition and other computer vision technologies in various scenarios and industries.
However, this will also bring rising concerns that need to be addressed swiftly and properly as they become more prevalent in society.
We need to develop a unified system or set of rules and regulations to ensure that facial recognition is used effectively and safely while preserving and respecting the privacy of individuals.
This is a big challenge that will require collective consensus and will take time to overcome, but I believe we can get there through deliberate effort, transparency, and dialogue.