Navigating algorithmic bias amid rapid AI development in Southeast Asia
Prixa, a partner of Indonesia’s Kata.ai, uses expert system and natural language processing (NLP) technology for the diagnosis engine on its website. Credit: https://kata.ai/

Artificial intelligence (AI) is no longer an emerging technology in Southeast Asia. Countries across the region are aggressively adopting AI systems for everything from smart city surveillance to credit scoring apps that promise greater financial inclusion.

But there are growing rumblings that this headlong rush towards automation is outpacing ethical checks and balances. Looming over glowing promises of precision and objectivity is the specter of algorithmic bias.

AI bias refers to cases where automated systems produce discriminatory results due to technical limitations or issues with the underlying data or development process. This can propagate unfair prejudices against vulnerable demographic groups.

For instance, a facial recognition tool trained predominantly on Caucasian faces may have drastically lower accuracy at identifying Southeast Asian individuals.
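To make that mechanism concrete, here is a minimal sketch of how under-representation in training data can translate into an accuracy gap between groups. It is not drawn from any system mentioned in this article; the group names, feature dimensions, and sample counts are invented for illustration.

```python
# Minimal sketch of how training-data imbalance can produce an accuracy gap
# between demographic groups. All numbers, group names, and distributions
# here are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

def make_group(n, face_mean, other_mean):
    # Synthetic "face" vs "non-face" feature vectors for one group.
    faces = rng.normal(face_mean, 1.0, size=(n, 8))
    others = rng.normal(other_mean, 1.0, size=(n, 8))
    return np.vstack([faces, others]), np.array([1] * n + [0] * n)

# Group A dominates the training set; group B is under-represented and has
# a slightly shifted feature distribution (standing in for differences in
# skin tone, lighting, camera hardware, and so on).
Xa, ya = make_group(500, face_mean=1.0, other_mean=-1.0)
Xb, yb = make_group(10, face_mean=0.2, other_mean=-1.8)
X_train = np.vstack([Xa, Xb])
y_train = np.concatenate([ya, yb])

# Nearest-centroid classifier: the centroids are dominated by group A's data.
c_face = X_train[y_train == 1].mean(axis=0)
c_other = X_train[y_train == 0].mean(axis=0)

def predict(X):
    d_face = np.linalg.norm(X - c_face, axis=1)
    d_other = np.linalg.norm(X - c_other, axis=1)
    return (d_face < d_other).astype(int)

# Balanced held-out test sets reveal the disparity.
for name, (mf, mo) in {"group A": (1.0, -1.0), "group B": (0.2, -1.8)}.items():
    X_test, y_test = make_group(1000, mf, mo)
    acc = float((predict(X_test) == y_test).mean())
    print(f"{name} accuracy: {acc:.3f}")
```

On balanced test sets, the model scores near-perfectly for the well-represented group but markedly worse for the under-represented one, even though nothing in the code refers to group membership at prediction time. The bias lives entirely in the data.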

As Southeast Asia navigates the new terrain of automated decision-making, this article delves into the swelling chorus of dissent questioning whether the region's AI ascent could leave marginalized communities even further behind.

How bias creates discrimination

In Southeast Asia, AI bias is evident in various forms, from flawed speech and image recognition to biased credit risk assessments.

These algorithmic biases often lead to unjust outcomes, disproportionately affecting minority ethnic groups.

A notable example comes from Indonesia, where an AI-based job recommendation system unintentionally excluded women from certain job opportunities, a result of historical biases ingrained in its training data.
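A common first-pass audit for this kind of system is to compare selection rates across groups, for instance with the disparate impact ratio behind the "four-fifths rule" used in US employment law. The sketch below uses invented recommendation outcomes, not data from the Indonesian system.

```python
# Minimal sketch (hypothetical data): auditing a job-recommendation system's
# output with the disparate impact ratio. The outcomes below are invented.
from collections import Counter

# Each record: (group, was the candidate recommended for the job?)
recommendations = [
    ("woman", False), ("woman", False), ("woman", True), ("woman", False),
    ("woman", False), ("man", True), ("man", True), ("man", False),
    ("man", True), ("man", True),
]

recommended = Counter()
total = Counter()
for group, picked in recommendations:
    total[group] += 1
    recommended[group] += picked  # bool counts as 0 or 1

rates = {g: recommended[g] / total[g] for g in total}
print("selection rates:", rates)

# Disparate impact ratio: lowest group rate divided by highest group rate.
ratio = min(rates.values()) / max(rates.values())
print(f"disparate impact ratio = {ratio:.2f}")
if ratio < 0.8:
    print("Fails the four-fifths rule: the gap warrants investigation.")
```

Selection-rate parity is only one lens; a thorough audit would also examine error rates and the historical labels the system was trained on. But even this crude check would flag the kind of skew described above.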

The diversity of the region, with its array of languages, skin tones and cultural nuances, often gets overlooked or inaccurately represented in AI models that rely on Western-centric training data.

Consequently, these AI systems, which are often perceived as neutral and objective, inadvertently perpetuate real-world inequalities rather than eliminating them.

Ethical implications

The breakneck pace at which automation and other advanced technologies are being adopted across Southeast Asia presents significant ethical challenges, because adoption is outpacing the development of ethical guidelines.

Limited local involvement in AI development sidelines critical regional expertise and widens the democracy deficit.

The "democracy deficit" refers to the lack of public participation in AI decision-making—facial recognition rolled out by governments without consulting impacted communities being one case.

For example, Indigenous groups like the Aeta in the Philippines are already marginalized and could face particular threats from unchecked automation. Without data or input from rural Indigenous communities, they could be excluded from AI opportunities.

Meanwhile, biased data sets and algorithms risk exacerbating discrimination. The region's colonial history and the ongoing marginalization of Indigenous communities cast a significant shadow.

Bindez Myanmar Private Beta Version (August 2014 to January 2015). Credit: GSMA Mobile for Development Impact Report 2015

The uncritical implementation of automated decision-making, without addressing underlying historical inequalities and the potential for AI to reinforce discriminatory patterns, presents a profound ethical concern.

Regulatory frameworks lag behind the swift pace of technological implementation, leaving vulnerable ethnic and rural communities to deal with harmful AI errors without recourse.

This article is republished from The Conversation under a Creative Commons license. Read the original article.
