Advances in artificial intelligence and machine learning have been driven in part by the copious amounts of data used to train AI systems. Much of this data needs protection, so to ensure that health and behaviour data do not fall into the wrong hands during AI and machine learning projects, scientists have developed the concept of differential privacy.
With differential privacy, scientists can guarantee that a published model or result reveals only limited information about each data subject. While previous methods required one party to have unrestricted access to all the data, the new method makes it possible to learn accurate models, for example from data held on users' devices, without revealing private information to any outsider.
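To illustrate the idea, here is a minimal sketch of the Laplace mechanism, one standard way of achieving differential privacy: random noise is added to a query result so that any single individual's data has only a limited effect on the published answer. The function name, the sensitivity of 1, and the epsilon value below are illustrative assumptions, not details of the researchers' own methods.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon):
    """Return a differentially private estimate of true_value.

    sensitivity: the maximum amount one individual's data can change
    the query result. epsilon: the privacy budget (a smaller epsilon
    means stronger privacy but a noisier answer).
    """
    scale = sensitivity / epsilon
    return true_value + np.random.laplace(loc=0.0, scale=scale)

# Example: privately release a count of patients with some condition.
# Adding or removing one patient changes a count by at most 1,
# so the sensitivity is 1.
true_count = 42
private_count = laplace_mechanism(true_count, sensitivity=1, epsilon=0.5)
print(f"Private count: {private_count:.1f}")
```

Because the noise masks any one person's contribution, the published count tells an outsider very little about whether a particular individual is in the dataset.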
The group of researchers at the University of Helsinki and Aalto University, Finland, has applied these privacy-aware methods, for example, to predicting cancer drug efficacy from gene expression data.
“We have developed these methods with funding from the Academy of Finland for a few years, and now things are starting to look promising. Learning from big data is easier, but now we can also get results from smaller data,” says Academy Professor Samuel Kaski of Aalto University.