A conversation about artificial intelligence’s current inability to forget, and what this means for the right to privacy and for user agency. Miguel Luengo-Oroz, AI expert and social entrepreneur, talks to Ashoka’s Hanae Baruchel. (Full bio below.)
Hanae Baruchel: ChatGPT and other artificial intelligence (AI) tools have been making headlines recently because of their ability to hold interactive, human-like conversations. You had one of these exchanges recently.
Miguel Luengo-Oroz: Yes, I asked GPT-3 “Can neural networks forget?” and its response was: “Artificial intelligence systems, like neural networks, do not have the ability to forget in the same way that humans do. The network may adjust its weights to better fit new data, which may cause it to make different predictions for the same input. However, this is not the same as forgetting, in the sense that the network can still access all the information it learned earlier; it just prioritizes the new information.”
Baruchel: This inability to forget gets little attention, and it concerns you. Why?
Luengo-Oroz: One of the core principles of online privacy regulations such as the European General Data Protection Regulation (GDPR) is that the data I generate is mine, and a company may use it only for as long as I allow it. This means I can withdraw my consent at any time and ask for my data back; I can even invoke the right to be forgotten. AI algorithms are trained on user data, yet almost none of the guidelines, frameworks and regulatory proposals emerging from governments and private companies focus explicitly on building AI models that can be untrained. We currently have no way to reverse the changes that a single data point has made to a trained system at the request of the data owner.
Baruchel: So users should have the ability to say: “Stop using an AI model trained on my data”?
Luengo-Oroz: Exactly. Let’s give AIs the ability to forget. Think of it as a Ctrl-Z button for AI. Say my image was used to train an AI model that recognizes people with blue eyes, and I am no longer comfortable with that, or never consented in the first place. I should be able to ask the AI model to behave as if my image had never been part of the training dataset. That way, my data would no longer contribute to tuning the model’s internal parameters. In practice, this probably wouldn’t affect the AI much, because my image alone makes only a tiny contribution. But now imagine that every blue-eyed person asked for their data to stop influencing the algorithm; the model would no longer be able to recognize blue-eyed people at all. Or imagine I’m Vincent van Gogh and I don’t want my art included in the training data. If someone then asks the machine to paint a dog in the style of Vincent van Gogh, it should not be able to complete the task.
Baruchel: How would this work?
Luengo-Oroz: In an artificial neural network, every time a data point is used for training, it slightly changes the way each artificial neuron behaves. One way to remove that contribution is to retrain the AI model from scratch without the data point in question. But this is not a practical solution, because it requires a great deal of computing power and is very resource intensive. Instead, we need technical solutions that subtract the impact of that data point, changing the final AI model without retraining it.
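To make the idea concrete, here is a minimal sketch (my own illustration, not a method described in the interview). For a simple linear least-squares model, one data point’s contribution can be subtracted exactly without retraining; deep neural networks have no such closed form, which is precisely why unlearning them remains an open problem. All variable names here are made up for the example.

```python
# Sketch of "unlearning" one data point from a linear least-squares model.
# The statistics A = X^T X and b = X^T y summarize the whole training set,
# so a single point's contribution can be subtracted exactly.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                       # 100 training points, 3 features
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=100)

A = X.T @ X                                         # accumulated influence of all points
b = X.T @ y
w_full = np.linalg.solve(A, b)                      # model trained on everyone's data

# A user asks for point i to be forgotten: subtract its contribution.
i = 42
A_minus = A - np.outer(X[i], X[i])
b_minus = b - y[i] * X[i]
w_forgot = np.linalg.solve(A_minus, b_minus)

# Sanity check: identical to retraining from scratch without point i.
mask = np.arange(len(y)) != i
w_retrain = np.linalg.lstsq(X[mask], y[mask], rcond=None)[0]
print(np.allclose(w_forgot, w_retrain))             # True
```

The point of the sketch is the contrast: for this toy model the “Ctrl-Z” is a cheap algebraic update, whereas for a large neural network the influence of one example is spread across millions of weights, so researchers look for approximate ways to achieve the same effect.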
Baruchel: Do you see people in the AI community pursuing ideas like this?
Luengo-Oroz: So far, the AI community has done little direct research on untraining neural networks, but I am sure clever solutions will emerge soon. There are some adjacent ideas that draw inspiration from the concept of “catastrophic forgetting,” the tendency of AI models to forget previously learned information as they learn new information. The big picture of what I’m suggesting is that we stop building neural networks that are just sponges, indiscriminately absorbing all the data they are fed like stochastic parrots. We need to build systems that adapt to, and learn only from, the datasets they are authorized to use.
Baruchel: Beyond the right to be forgotten, you suggest that this kind of capability could also set new precedents when it comes to digital property rights.
Luengo-Oroz: If we were able to track how each user’s data contributes to training specific AI models, this could become a way to pay people for their contributions. As I wrote back in 2019, we can imagine a Spotify-like model that pays people royalties every time someone uses an AI trained on their data. In the future, this kind of solution could ease the tension between the creative industries and the generative AI tools, like DALL-E or GPT-3, that learn from their work. It could also lay the groundwork for ideas like forgetful advertising, a type of digital marketing that deliberately avoids storing personal behavioral data. Perhaps the future of AI is not just about learning it all, with ever bigger datasets and ever bigger models, but about building AI systems that can learn and forget, in line with human and societal needs.
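As a purely hypothetical illustration of the “Spotify for data” idea, the sketch below assumes we already know each contributor’s share of a model’s training influence (the hard, unsolved part) and simply splits a per-query fee pro rata. The names and numbers are invented for the example.

```python
# Hypothetical royalty ledger: each use of a model pays its data contributors
# in proportion to their (assumed, precomputed) attribution share.
from collections import defaultdict

# Assumed attribution: fraction of the model's training influence per person.
attribution = {"alice": 0.50, "bob": 0.30, "carol": 0.20}

ledger = defaultdict(float)

def record_use(fee_per_query: float) -> None:
    """Split one query's fee among data contributors by their share."""
    for person, share in attribution.items():
        ledger[person] += fee_per_query * share

for _ in range(1000):              # 1,000 queries against the model
    record_use(fee_per_query=0.01)

print(dict(ledger))                # approx {'alice': 5.0, 'bob': 3.0, 'carol': 2.0}
```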
Dr. Miguel Luengo-Oroz is a scientist and entrepreneur who is passionate about imagining and building technology and innovation for social impact. As the first data scientist at the United Nations, Miguel pioneered the use of artificial intelligence for sustainable development and humanitarian action. He is the founder and CEO of the social enterprise Spotlab, a digital health platform that uses the best of AI and mobile technology for medical research and universal access to diagnostics. Over the past ten years, Miguel has built teams around the world that bring AI to practice and policy in areas including poverty, food security, refugees and migrants, conflict prevention, human rights, economic development, equality, hate speech, privacy and climate change. He is the creator of Malariaspot.org (videogames for the collaborative analysis of malaria images), and he collaborates with the Universidad Politécnica de Madrid. He became an Ashoka Fellow in 2013.
Follow Next Now/Tech & Humanity to learn more about what works and what comes next.