
There are many proposed ways to place limits on artificial intelligence (AI), because alongside its benefits it has the potential to cause harm in society.

For example, the EU's AI Act places greater or lesser restrictions on systems depending on their category: general-purpose and generative AI, or systems considered to pose limited, high or unacceptable risk.

This is a novel and bold approach to mitigating any ill effects. But what if we could adapt some tools that already exist? Software licensing is one well-known model that could be tailored to meet the challenges posed by advanced AI systems.

Open responsible AI licenses (OpenRails) might be part of the answer. AI licensed under an OpenRail works much like open-source software: a developer may release their system publicly under the license, and anyone is then free to use, adapt and re-share what was originally licensed.

The difference with OpenRail is the addition of conditions for using the AI responsibly. These include not breaking the law, not impersonating people without consent and not discriminating against people.

Alongside the mandatory conditions, OpenRails can be adapted to include other conditions that are directly relevant to the specific technology. For example, if an AI was created to categorize apples, the developer may specify it should never be used to categorize oranges, as doing so would be irresponsible.

The reason this model can be helpful is that many AI technologies are so general they could be put to almost any use, which makes it very hard to predict the nefarious ways they might be exploited.

So this model allows developers to help push forward open innovation while reducing the risk that their ideas might be used in irresponsible ways.

Open but responsible

In contrast, proprietary licenses are more restrictive on how software can be used and adapted. They are designed to protect the interests of the creators and investors and have helped tech giants like Microsoft to build vast empires by charging for access to their systems.

Due to its broad reach, AI arguably demands a different, more nuanced approach that could promote the openness that drives progress. Currently, many big firms operate proprietary (closed) AI systems. But this could change, as there are several examples of companies taking an open-source approach.

Meta's generative AI system Llama 2 and the image generator Stable Diffusion are open source. French AI startup Mistral, established in 2023 and now valued at US$2 billion (£1.6 billion), is soon set to openly release its latest model, rumored to have performance comparable to GPT-4 (the model behind ChatGPT).

However, openness needs to be tempered with a sense of responsibility to society, because of the potential risks associated with AI. These include the potential for algorithms to discriminate against people, replace jobs and even pose existential threats to humanity.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

