The Right to Be Forgotten: Mengdi Huai on Navigating Security Challenges for Ensuring User Privacy with Machine Unlearning

Recent regulations such as the GDPR and the California Consumer Privacy Act give users the right to erase the impact of their sensitive information from trained models to protect their privacy. This "right to be forgotten" is gaining attention in the world of artificial intelligence and machine learning. Ensuring that artificial intelligence models can comply with these regulations is not only a legal requirement but also a necessary step toward building more ethical and user-friendly systems.

However, traditional machine learning models are designed to learn from data, not to erase or forget it. So how can we ensure compliance? Researchers such as Mengdi Huai, Assistant Professor of Computer Science at Iowa State University, are studying machine unlearning to answer that question.

Machine Unlearning

Machine unlearning, also called selective forgetting, aims to erase the influence of specific data from a trained model and produce an "unlearned" model without retraining it from scratch. It empowers individuals to request the removal of their data from AI systems, strengthening their privacy protection.
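To make the idea concrete, here is a minimal sketch, not drawn from Huai's work, of exact unlearning for one simple model class. For linear regression fit via the normal equations, the sufficient statistics can be "downdated" to delete a single training point, giving the same weights a full retrain without that point would, at a fraction of the cost:

```python
import numpy as np

# Toy illustration: for linear regression, A = X^T X and b = X^T y summarize
# the training data. Subtracting one example's contribution from A and b
# "unlearns" that example exactly, without touching the remaining data.

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=100)

A = X.T @ X  # sufficient statistics accumulated during training
b = X.T @ y

def fit(A, b):
    return np.linalg.solve(A, b)

def unlearn_point(A, b, x_i, y_i):
    """Remove one example's contribution in O(d^2), no retraining."""
    return A - np.outer(x_i, x_i), b - y_i * x_i

w_full = fit(A, b)

# Unlearn example 0, then verify it matches retraining from scratch.
A_u, b_u = unlearn_point(A, b, X[0], y[0])
w_unlearned = fit(A_u, b_u)
w_retrained = fit(X[1:].T @ X[1:], X[1:].T @ y[1:])
assert np.allclose(w_unlearned, w_retrained)
```

For deep networks no such closed-form update exists, which is why efficient approximate unlearning is an active research area.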

“In an era where data usage and protection concerns are intensifying, securing the unlearning process is imperative,” said Huai.

Work on machine unlearning is still in its early stages. Existing studies mainly focus on improving unlearning effectiveness and efficiency, while neglecting the security challenges the process itself introduces. Some users who contribute training data may have malicious intentions, submitting misleading unlearning requests to induce harmful behavior in the model. This is where Mengdi Huai’s research is focused: on building models that users can trust with their data.

Exploring Security Vulnerabilities

Mengdi Huai’s current research focuses on trustworthy machine learning. She aims to develop novel techniques for building trustworthy learning systems that are explainable, robust, private, and fair. Her work has appeared at top-tier conferences, including the paper “Static and Sequential Malicious Attacks in the Context of Selective Forgetting,” presented at the 37th Conference on Neural Information Processing Systems (NeurIPS 2023).

“In our NeurIPS'23 paper, we propose a new class of attack methods, based on which an attacker can generate some malicious unlearning requests to significantly degrade the performance of the unlearned model,” explained Huai. “This research shows that machine learning models are highly vulnerable to our proposed attacks, thereby introducing a novel form of security threat. We view our developed attacks as an initial step towards understanding the robustness of machine learning models to malicious data update requests during the unlearning process.”

Exploring possible malicious attacks on the unlearning process is essential. The goal is to identify and reduce the risk of attackers exploiting unlearning methods to disrupt the normal functioning of machine learning models, disruptions that could harm the individuals who rely on those models.
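As a toy illustration of this threat model only, and not the attack method from Huai's paper, the hypothetical linear-regression setup sketched earlier can be abused by a greedy adversary who picks each unlearning request to maximize the model's held-out error:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=200)
X_val, y_val = X[:50], y[:50]    # data the attacker wants the model to fail on
X_tr, y_tr = X[50:], y[50:]

A, b = X_tr.T @ X_tr, X_tr.T @ y_tr  # sufficient statistics, as before

def val_mse(A, b):
    w = np.linalg.solve(A, b)
    return float(np.mean((X_val @ w - y_val) ** 2))

# Greedy attacker: each "unlearning request" deletes whichever remaining
# training point increases validation error the most.
alive = list(range(len(X_tr)))
for step in range(10):
    loss_after_removal = [
        (val_mse(A - np.outer(X_tr[i], X_tr[i]), b - y_tr[i] * X_tr[i]), i)
        for i in alive
    ]
    worst_loss, worst_i = max(loss_after_removal)
    A -= np.outer(X_tr[worst_i], X_tr[worst_i])
    b -= y_tr[worst_i] * X_tr[worst_i]
    alive.remove(worst_i)
    print(f"after request {step + 1}: val MSE = {worst_loss:.4f}")
```

Each request looks like an ordinary, legitimate deletion, which is precisely what makes this class of attack hard to detect and why defenses must reason about sequences of unlearning requests rather than individual ones.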

Securing Digital Services

This research can drive advances in artificial intelligence and machine learning, particularly in strengthening the robustness of unlearning systems against malicious attacks. It deepens understanding of the security vulnerabilities in current machine unlearning techniques and sheds light on their limitations in safeguarding user privacy. In doing so, it paves the way for new standards and improved practices in data management.

Amid escalating concerns about data usage and protection, such research contributes to creating a more secure digital environment. It becomes a catalyst for developing and implementing trustworthy machine learning systems across various applications in healthcare, finance, and public service. The goal is to ensure that personal data is handled with the utmost care and consideration.