AI

AI goes anonymous during training to boost privacy protection


Privacy is vital – even more so in the modern era of AI. But AI trained on personal data can be hacked.

Even if a hacker never accesses the training data, there’s still a risk of leaking sensitive information from the models themselves. For example, it may be possible to reveal whether someone’s data is part of the model’s training set (a membership inference attack), and even to infer sensitive attributes about the person, such as salary.

We’ve tried to address this privacy issue in our latest work.

Our team of researchers from IBM Haifa and Dublin has developed software to help assess the privacy risk of AI models, as well as reduce the amount of personal data used in AI training. This software could be of use in fintech, healthcare, insurance, security – or any other industry relying on sensitive data for training.

Using our software, we created AI models that are privacy-preserving and compliant.

Training with differential privacy

Consider a bank training AI to predict which customers are most likely to default on mortgage payments. The model has to comply with the restrictions and obligations attached to processing personal data, so the bank couldn’t share it with other banks because of privacy concerns.

Differential Privacy (DP) could help. Applied during the training process, DP limits the effect of any one person’s data on the model’s output. It gives robust, mathematical privacy guarantees against potential attacks on a user, while still delivering accurate population statistics. Our Diffprivlib library provides general-purpose tools for data analysis and implementations of machine learning models trained with DP.
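
As a rough illustration, here is a minimal sketch of what DP training can look like with Diffprivlib, using scikit-learn’s Iris data as a stand-in dataset. The epsilon value and feature bounds below are illustrative assumptions, not recommendations.

```python
# Minimal sketch: training a naive Bayes classifier with differential privacy
# using diffprivlib. Dataset, epsilon and bounds are illustrative assumptions.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from diffprivlib.models import GaussianNB

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# epsilon is the privacy budget: smaller values give stronger privacy guarantees,
# usually at some cost in accuracy. bounds clip each feature to a known range so
# no single record can dominate the statistics; in practice these bounds should
# come from domain knowledge rather than the data itself.
clf = GaussianNB(epsilon=1.0, bounds=(X.min(axis=0), X.max(axis=0)))
clf.fit(X_train, y_train)

print("Test accuracy with DP:", clf.score(X_test, y_test))
```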

However, DP excels only when there are just one or a few models to train. That’s because a different method is needed for each specific model type and architecture, making the approach tricky to use in large organizations with many different models.

Accuracy-guided anonymization

That’s where anonymization can be handy – applied to the data before training the model.

Anonymization applies generalizations to the data, making records similar to one another by blurring their specific values so they are no longer unique. For example, instead of listing a person’s age as exactly 34, it can be listed as a range between 30 and 40.
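
As a toy illustration of such a generalization (the column name and bin edges are assumptions made for the example), replacing exact ages with fixed 10-year ranges might look like this:

```python
# Toy example of generalization: replace exact ages with coarse 10-year ranges
# so individual values are no longer unique. Column name and bins are assumed.
import pandas as pd

df = pd.DataFrame({"age": [34, 12, 21, 67, 45]})

# Each exact age is mapped to an interval such as [30, 40).
df["age_generalized"] = pd.cut(df["age"], bins=range(0, 101, 10), right=False)
print(df)
```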

But traditional anonymization algorithms don’t consider the specific analysis the data is being used for. What if a 10-year range of ages is too general for an organization’s needs? After all, a 12-year-old is very different from a 21-year-old when it comes to, say, taking medication. When these anonymization techniques are applied in the context of machine learning, they tend to significantly degrade the model’s accuracy.

Our solution: The Machine Learning Model Anonymization tool.

This technology anonymizes machine learning models while being guided by the model itself. We customize the data generalizations, optimizing them for the model’s specific analysis – resulting in an anonymized model with higher accuracy. The method is agnostic to the specific learning algorithm and can be easily applied to any machine learning model, making it easy to integrate into existing MLOps pipelines.

The process takes as input a trained machine learning model, its training data, the desired k value, and a list of quasi-identifiers. The privacy parameter k determines how many records will be indistinguishable from each other in the dataset. For example, a k value of 100 means that every sample in the training set will look identical to at least 99 others. The quasi-identifiers are features that can be used to re-identify individuals, either on their own or in combination with additional data.

The results of training a machine learning classifier after applying our anonymization algorithm (the blue line, marked AG) compared with a few typical anonymization algorithms (Median Mondrian, Hilbert-curve and R+ tree) on two different datasets. The graphs show the effect of increasing the privacy parameter k on the model’s accuracy.

The software then creates an anonymized version of the training data, later used to retrain the model — resulting in an anonymized version of the model free from any data processing restrictions. This makes the model less prone to inference attacks, as we show in our paper, Anonymizing Machine Learning Models.
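
To make the overall flow concrete, here is a simplified, self-contained sketch of the idea rather than the toolkit’s actual implementation. It assumes numeric quasi-identifiers, uses a decision tree over the original model’s predictions as the grouping heuristic, and takes each group’s mean as the generalized value; the dataset, model choice, k value, and quasi-identifier indices are all illustrative assumptions.

```python
# Simplified sketch of accuracy-guided anonymization (not the toolkit's code):
# group records by their quasi-identifiers, guided by the trained model's own
# predictions, then generalize each group and retrain on the anonymized data.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 1. Start from an already-trained model (a random forest stands in for it here).
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# 2. Choose the privacy parameter k and the quasi-identifier columns (assumed).
k = 100
quasi_identifiers = [0, 1, 2, 3]

# 3. Partition the training records on the quasi-identifiers, guided by the
#    model's predictions: a tree with at least k samples per leaf groups
#    together records that the model treats similarly.
guide = DecisionTreeClassifier(min_samples_leaf=k, random_state=0)
guide.fit(X_train[:, quasi_identifiers], model.predict(X_train))

# 4. Generalize: within each leaf, replace quasi-identifier values with the leaf
#    average, so every record shares its generalized values with at least k-1 others.
X_anon = X_train.copy()
leaves = guide.apply(X_train[:, quasi_identifiers])
for leaf in np.unique(leaves):
    members = leaves == leaf
    X_anon[np.ix_(members, quasi_identifiers)] = (
        X_train[np.ix_(members, quasi_identifiers)].mean(axis=0)
    )

# 5. Retrain on the anonymized data and compare accuracy on untouched test data.
anon_model = RandomForestClassifier(random_state=0).fit(X_anon, y_train)
print("Original model accuracy:  ", model.score(X_test, y_test))
print("Anonymized model accuracy:", anon_model.score(X_test, y_test))
```

In practice the generalizations our tool produces are tailored to the model’s specific analysis, but the overall pipeline is the same: anonymize the training data, then retrain the model on it.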

Having tested our technology on publicly available datasets, we’ve obtained promising results. With relatively high values of k and large sets of quasi-identifiers, we created anonymized machine learning models with very little accuracy loss. (See the figure above.)

Next, we aim to run our models on real-life data and to see if the results hold. And we plan to extend our method from just tabular data to different kinds of data, including images.

To learn more about our tools, please try out our open source toolkit, visit our website, or contact us at abigailt@il.ibm.com.


