
The law and artificial intelligence (AI) applications need to be better aligned to ensure our personal data and privacy are protected. Ph.D. candidate Andreas Häuselmann sees opportunities in AI, but also dangers if this alignment does not happen.

Imagine you apply for a job and are rejected because you do not want it enough. Later you discover that an AI application that can read emotions has indicated a lack of enthusiasm in your voice. Or you are unable to get a mortgage because AI gives you a low credit score due to when and how often you charge your phone.

Protecting personal data

These are examples of a future that Häuselmann envisions if the law does not respond better to the rapid developments within AI. ChatGPT, personalized recommendations on Netflix and a voice assistant like Siri or Alexa: it is already hard to imagine a world without AI. But how do we ensure that personal data—including data about our health, thoughts and emotions—is effectively protected?

"To put it simply: We have to ensure that legislation is more responsive to developments in AI. Take the 'accuracy principle,' which is enshrined in European legislation. This means that personal data has to be accurate and up to date. If a company misspells your name, it violates that principle and has to change your name when you enforce your right to rectification," says Häuselmann.

"But what if AI makes predictions about your life: What career would suit you? How long will you live? Will you stay healthy, and how much money will you earn in the future? Then it is impossible for individuals to prove that such personal data is inaccurate when invoking their right to rectification, because predictions relate to the future. I suggest we reverse the burden of proof here: not you, but the organization that used your data has to prove that the information generated is correct."

AI companies want clarity

At the same time, says Häuselmann, the EU legislator should also look at another principle: fairness. This involves ensuring that the use of personal data has no adverse, discriminatory or unexpected effects, for example on consumers. This principle is very vague, and companies working with AI would benefit greatly from more clarity. More importantly, a better elaborated fairness principle would protect individuals more effectively from the risks of AI.

"The law should do more here to speak the language of AI, so companies know how to respond." Häuselmann, who works at the international law firm De Brauw, can see how tech companies are looking to future-proof their AI in terms of the law too. "We need to move toward legislation that is clear yet flexible enough to respond to the within AI."

Although the development of AI poses risks to our privacy, the law should not block it, says Häuselmann. Technology can be of great value in health care, for example. "Take Neuralink, the implanted chip that can allow people with paralysis to control a computer. Technology is neither good nor bad in itself. The law should look at its use and the intentions behind it."

Two worlds

Looking back at his research, Häuselmann is particularly proud of how he managed to learn the languages of two largely separate worlds. "I'm a lawyer but I also gave a tutorial at MIT. I had expected the tech experts there to be skeptical about my critical view of AI but the opposite proved true. These two worlds should continue to seek each other out for future research."

Andreas Häuselmann will defend his PhD on 23 April.

More information: Effective Protection of Fundamental Rights in a pluralist world: www.universiteitleiden.nl/en/r … in-a-pluralist-world
