
Google Rolls Out New Privacy Testing Module In TensorFlow


Google has announced a new privacy testing library for TensorFlow that developers can use to analyze the privacy properties of their classification models. The module is part of TensorFlow Privacy, the toolkit Google introduced in 2019 to help protect the data used to train AI models.

People’s privacy awareness is higher now than ever before, and it only grows as experts scrutinize how companies collect and process users’ data. These circumstances have pushed governments worldwide to enact privacy protection laws such as GDPR, PDP, and CCPA.

One of the biggest challenges companies face in maintaining confidentiality is preventing information from leaking out of an AI model. To address this, Google introduced differential privacy, which adds noise to hide individual examples in the training dataset. Meanwhile, researchers at Cornell University, experimenting with ways to measure how well an ML model actually preserves privacy, came up with membership inference attacks.
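To make the idea concrete, below is a minimal, self-contained sketch of the loss-threshold form of the attack. This is an illustration of the principle, not Google’s implementation: the per-example losses are synthetic, standing in for losses computed from a real model, and the distributions are chosen only to mimic an overfit model that assigns lower loss to its training examples.

```python
# Minimal sketch of the core membership-inference idea, using synthetic
# per-example losses rather than a real model. Members (training examples)
# tend to have lower loss than non-members, so an attacker can threshold
# on loss to guess membership. All numbers here are illustrative.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical per-example losses: an overfit model assigns lower loss
# to the examples it was trained on.
loss_members = rng.gamma(shape=2.0, scale=0.10, size=1000)      # training set
loss_non_members = rng.gamma(shape=2.0, scale=0.25, size=1000)  # held-out set

losses = np.concatenate([loss_members, loss_non_members])
is_member = np.concatenate([np.ones(1000), np.zeros(1000)])

# Lower loss means "more likely a member", so the attack score is -loss.
attack_auc = roc_auc_score(is_member, -losses)
print(f"Membership-inference AUC: {attack_auc:.2f}")  # 0.5 means no leakage
```

An AUC near 0.5 means the attacker cannot tell members from non-members; the further above 0.5 it climbs, the more the model leaks about its training data.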

Membership Inference Attack With TensorFlow

According to Google’s researchers, a membership inference attack is a cost-effective and reliable method for predicting whether a specific piece of data was used during training. After using membership inference tests internally, Google’s researchers have released the technology as a library within TensorFlow Privacy. A key advantage of the test is that it does not require any re-training, so it fits into developers’ workflows without disruption.
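As a rough guide to usage, here is a hedged sketch of running the released tests on a trained classifier. The module paths and function signatures follow the TensorFlow Privacy codelab from around the time of release and may have changed in newer versions, so check them against your installed library; the dataset and model below are tiny synthetic stand-ins.

```python
# Hedged sketch of running TensorFlow Privacy's membership inference tests
# on an already-trained classifier. API paths follow the release-era codelab
# and may differ in newer library versions.
import numpy as np
import tensorflow as tf

from tensorflow_privacy.privacy.membership_inference_attack import (
    membership_inference_attack as mia)
from tensorflow_privacy.privacy.membership_inference_attack.data_structures import (
    AttackInputData, AttackType, SlicingSpec)

# Tiny synthetic 3-class problem standing in for a real dataset.
rng = np.random.default_rng(0)
x_train, x_test = rng.normal(size=(600, 20)), rng.normal(size=(600, 20))
y_train, y_test = rng.integers(0, 3, 600), rng.integers(0, 3, 600)

# Any already-trained classifier works; the test needs no re-training.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(3),  # logits
])
model.compile(optimizer="adam",
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))
model.fit(x_train, y_train, epochs=5, verbose=0)

# The attack only looks at the model's outputs on training vs. held-out data.
attack_input = AttackInputData(
    logits_train=model.predict(x_train),
    logits_test=model.predict(x_test),
    labels_train=y_train,
    labels_test=y_test)

results = mia.run_attacks(
    attack_input,
    SlicingSpec(entire_dataset=True, by_class=True),
    attack_types=[AttackType.THRESHOLD_ATTACK, AttackType.LOGISTIC_REGRESSION])

print(results.summary(by_slices=True))
```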

Key Benefits of Membership Inference Attack With TensorFlow

By determining whether a model reveals which examples were in its training set, developers can check whether the model preserves confidentiality before deploying it to production. The researchers believe that data scientists using the TensorFlow membership inference tests will investigate better architectural choices for their models and apply regularization techniques such as early stopping, dropout, weight decay, and input augmentation.
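For reference, the mitigations named above are all standard Keras features rather than part of the new privacy module. The following sketch shows one plausible way to wire them together; layer sizes and hyperparameters are illustrative only.

```python
# Standard Keras sketch of the regularization techniques mentioned above,
# which reduce the overfitting that membership inference attacks exploit.
# All hyperparameters here are illustrative, not recommendations.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),             # input augmentation
    tf.keras.layers.Conv2D(
        32, 3, activation="relu",
        kernel_regularizer=tf.keras.regularizers.l2(1e-4)),  # weight decay (L2)
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dropout(0.5),                          # dropout
    tf.keras.layers.Dense(10),
])

early_stop = tf.keras.callbacks.EarlyStopping(             # early stopping
    monitor="val_loss", patience=3, restore_best_weights=True)

# model.fit(x, y, validation_split=0.1, callbacks=[early_stop])
```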

The researchers also hope that membership inference attacks will serve as a starting point that encourages people to develop new architectures that prevent data leaks and maintain confidentiality.

Currently, the membership inference tests are limited to classifiers. Going forward, the researchers plan to extend their functionality so that developers can apply membership inference attacks to other data science techniques as well.

