The widespread use of Artificial Intelligence (AI) systems has led policymakers globally to prioritise its regulation. Regulating AI is a complex task. AI systems can process vast amounts of data on a scale never seen before, and they can have adverse impacts at the individual, national and global levels.

Bangladesh, like most countries, wants to harness the benefits of AI. The question is: should AI be contextualised within the Bangladeshi regulatory landscape? Unfortunately, ethical usage, safety, accountability and transparency are not inherent features of any AI technology. A legal framework is therefore crucial to mitigate risks while promoting innovation.

A key question to consider is: how much can AI be regulated?

Revisiting AI

AI is a technology that allows computational machines to mimic human intelligence, learning from experience through algorithmic training. Unlike linear, rule-bound software, it can solve problems, offer solutions, answer questions and make predictions. It is quickly becoming indispensable to the human race.

AI systems combine large datasets with intelligent, iterative processing algorithms to identify patterns. Traditional computational systems follow predefined algorithms, much like an aircraft autopilot following set instructions. AI systems, meanwhile, adapt in real time based on the data they process, learning patterns autonomously. Each processing round enhances the system’s performance, enabling AI to continuously improve at numerous tasks.
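To make this contrast concrete, here is a minimal illustrative sketch (the scenario, names and numbers are invented for exposition, not drawn from any real system). A fixed-rule program behaves identically forever, while a learning system re-estimates its decision threshold from each batch of labelled data it sees:

```python
import numpy as np

# Fixed-rule system: behaviour never changes, no matter what data arrives.
def fixed_rule(temperature: float) -> str:
    return "alert" if temperature > 40.0 else "normal"

# Learning system: a one-parameter model whose alert threshold is
# re-estimated from each new batch of labelled observations.
class LearningThreshold:
    def __init__(self, threshold: float = 40.0):
        self.threshold = threshold

    def update(self, temps: np.ndarray, labels: np.ndarray) -> None:
        # Nudge the threshold toward the midpoint between the coolest
        # observed alert and the warmest observed normal reading.
        alerts, normals = temps[labels == 1], temps[labels == 0]
        if len(alerts) and len(normals):
            boundary = (alerts.min() + normals.max()) / 2
            self.threshold = 0.9 * self.threshold + 0.1 * boundary

    def predict(self, temperature: float) -> str:
        return "alert" if temperature > self.threshold else "normal"
```

Each call to `update` is one ‘processing round’: the more data the system sees, the closer its threshold fits reality, which is behaviour the fixed rule can never exhibit.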

AI’s predictive ability tends to be particularly powerful because it leverages vast amounts of data. For example, geographic information system (GIS) mapping can benefit greatly from AI, which can analyse extensive spatial data to make precise predictions.
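As a toy illustration of the idea (entirely synthetic data and arbitrary coordinates, purely for exposition), a spatial model can learn a pattern from scattered GIS-style observations and then predict values at locations that were never measured:

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# Synthetic GIS-style records: (latitude, longitude) -> observed flood depth (m).
rng = np.random.default_rng(0)
coords = rng.uniform([20.5, 88.0], [26.5, 92.5], size=(500, 2))
depth = np.exp(-((coords[:, 0] - 23.8) ** 2 + (coords[:, 1] - 90.4) ** 2))
depth += rng.normal(0, 0.05, 500)  # measurement noise

# Learn the spatial pattern from past observations...
model = KNeighborsRegressor(n_neighbors=10).fit(coords, depth)

# ...then predict at an unmeasured location.
print(model.predict([[23.8, 90.4]]))
```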

Understanding AI means understanding a multidisciplinary system capable of human-like thinking. AI not only mimics human intelligence but also enhances its own accuracy over time. In short, it is a transformative technological force.

AI and Bangladesh

While the evolution of AI began some time ago, its impact is only now becoming visible in Bangladesh. The country is keeping pace with AI integration: automation and control technologies are widely applied across its industries, and terms like AI, Internet of Things (IoT), big data and blockchain have gained popularity among investors and policymakers.

As in other countries, life-easing AI technologies, from ride-sharing to real-time mapping, are rapidly being integrated into the everyday lives of Bangladeshis. With 34% of its population being tech-savvy youth, Bangladesh is poised to reap immense benefits from AI uptake.


In 2024, Bangladesh introduced its draft AI Policy to address legal and ethical issues related to AI. The policy is a guide for the governance, adoption and development of AI across various government sectors, especially high-impact sectors like education and agriculture.

The policy proposes an institutional framework for implementation, centred on an independent national AI centre of excellence under an AI advisory council. As per the policy, the use of AI technologies in Bangladesh will be guided by principles that align with the country’s core values.

The proposed policy falls short of articulating how AI should be envisaged in Bangladesh. It does not discuss specific risks and challenges unique to the country. Additionally, it leans on human oversight to manage risks, which will be insufficient to address the consequences of AI. A more nuanced approach is needed for effective AI adoption.

Why regulate AI?

Many think that AI regulation is needed to balance innovation and safety. More importantly, though, regulation is about safeguarding rights and social equity. Though AI tools are regarded as ‘intelligent,’ the technology cannot yet replicate the human mind; AI models do not reason like humans.

Research shows AI can replicate human biases, leading to significant errors. Take facial recognition technology, which law enforcement in nearly every US state uses. In Detroit, Michigan, in 2019, Black men were wrongfully detained after being misidentified by facial recognition software, in the first known cases of their kind.

A study titled ‘Gender Shades,’ by computer scientists Joy Buolamwini and Timnit Gebru, published in 2018 by the MIT Media Lab, found that race, skin tone and gender significantly affected facial recognition accuracy. The software worked best on white male faces. For darker-skinned people, the error rate spiked to 19%; for darker-skinned women, it reached 34%.
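Bias of this kind becomes visible only when accuracy is audited per demographic group rather than in aggregate. A minimal sketch of such an audit (with made-up records, not the Gender Shades data):

```python
from collections import defaultdict

# Hypothetical audit log: (demographic group, was the prediction correct?)
records = [
    ("lighter-skinned man", True), ("lighter-skinned man", True),
    ("lighter-skinned man", True), ("darker-skinned woman", True),
    ("darker-skinned woman", False), ("darker-skinned woman", False),
]

totals, errors = defaultdict(int), defaultdict(int)
for group, correct in records:
    totals[group] += 1
    errors[group] += not correct

# A single aggregate error rate would hide the disparity;
# per-group rates expose it.
for group in totals:
    print(f"{group}: {errors[group] / totals[group]:.0%} error rate")
```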

Joy Buolamwini, who has pioneered research on the race-AI-gender intersection, at Wikimania, Cape Town, South Africa, 25 July 2018 | Photo by Niccolò Caranti.

Similarly, a 2010 study by the National Institute of Standards and Technology and the University of Texas found that facial recognition algorithms and sound detection devices worked best on people from the region where they were developed. The cause of this inbuilt discrimination was biased data.

Imagine this in a Bangladeshi context, where data is not created or collected equally. There will be less data on the marginalised, especially minorities, owing to factors like the digital divide. This underrepresentation will skew AI training data and amplify existing discrimination.


Put another way, a decision-maker could misuse biased AI outputs to form biased policies. A malevolent mindset could exploit AI’s complexity to mask bad intentions. Policymakers must be alert to such misuse.

In essence, regulating AI isn’t just about managing technology; it’s about upholding principles of fairness. At the same time, regulation should remain sensitive to innovation. Human intervention has to be at the centre of the regulatory regime. Through smart oversight, the complexities of AI can be managed.

Misleading nature

A complaint regarding AI that is bothering regulators globally relates to copyright infringement. In 2023, the New York Times (NYT) filed a lawsuit against Microsoft and OpenAI, alleging that their AI products had unlawfully used its articles to generate content. The NYT accused OpenAI of copyright infringement, claiming that OpenAI had specifically relied on NYT content to train its generative AI tools, which power products like ChatGPT. In another suit, Getty Images sued Stability AI for processing millions of copyrighted images to build Stable Diffusion, a tool that generates images from text prompts.

Another concern is the potential for AI to distort original sources and propagate misinformation. This undermines the integrity of information, particularly as AI-generated content becomes seamlessly integrated into our daily lives. The learning process of AI is seriously prone to inaccuracies.

Addressing these threats from AI systems is challenging; AI interaction is more personalised than social media. There is therefore a need for clear regulations and ethical guidelines, including on copyright. For example, regulation could mandate the declaration of sources and disclaimers on training data.
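What such a mandate might require in practice is open to design, but one can imagine a machine-readable disclosure filed alongside a model. The sketch below is purely hypothetical; the fields and figures are invented and do not correspond to any existing standard:

```python
# Hypothetical training-data disclosure of the kind a regulator could
# mandate. All names and numbers are illustrative.
training_disclosure = {
    "model_name": "example-bn-model",  # hypothetical model
    "data_sources": [
        {"name": "licensed news archive", "licence": "commercial", "share": 0.40},
        {"name": "public-domain books", "licence": "public domain", "share": 0.35},
        {"name": "web crawl", "licence": "mixed/unknown", "share": 0.25},
    ],
    "contains_copyrighted_material": True,
    "disclaimer": "Outputs may reproduce or distort source material.",
}
```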

Regulatory regimes

Globally, the debate around regulating the use of AI has moved beyond whether it should be regulated at all. Rather, policymakers are split over how. The European Union (EU) and the United Kingdom (UK) have taken different regulatory approaches. The EU has formulated its AI Act, which uses a ‘risk-based’ strategy, imposing compliance obligations that vary with the level of risk.

In contrast, the UK has decided not to introduce new legislation for now. Instead, it relies on existing regulations, which are supported by some AI-specific guidelines. The UK has taken a ‘context-specific’ approach that focuses on the outcomes AI is likely to generate in particular applications. It assesses general outcomes, weighing them against opportunity costs. It does not label any specific technologies or sectors as risky.


Meanwhile, the US has advocated for national AI standards through executive actions, while AI-specific regulations (like privacy laws) have been adopted by individual US states. China, by contrast, has put in place a complex legal framework for cybersecurity and data protection. Both nations address novel problems arising from AI products, with specific rules targeting certain AI applications.

Many governments have also established softer norms to regulate AI. Examples include Singapore’s Model AI Governance Framework of 2019, Australia’s AI Ethics Principles of 2019, China’s AI Governance Principles of 2019 and New Zealand’s Algorithm Charter of 2020. These are not enacted laws, but they can serve as initial policy guides.

At the intergovernmental level, notable initiatives include the G7’s Charlevoix Common Vision for the Future of AI of 2018, the Organisation for Economic Co-operation and Development’s Recommendation of the Council on AI of 2019 and the United Nations Educational, Scientific and Cultural Organization’s Recommendation on the Ethics of AI of 2021. Even the Pope has endorsed a set of principles, through the Rome Call for AI Ethics in 2020.

An on-duty policeman at a centralised control room equipped with an advanced monitoring solution for the city, Dhaka, Bangladesh, 27 June 2016 | Photo by Mahmud Hossain Opu.

How to regulate AI?

Collectively, these efforts reflect a growing consensus on the norms that should govern AI. Most policy documents since 2018 converge on six key themes:

  1. Human control: AI should enhance human potential and remain under human control.
  2. Transparency: AI systems should be understandable, with decisions that can be explained.
  3. Safety: AI systems should perform as intended and be secure from hacking.
  4. Accountability: AI systems should have mechanisms to be held accountable, with remedies available when harm occurs.
  5. Non-discrimination: AI systems should be inclusive, avoiding unjust bias.
  6. Privacy: AI should safeguard personal data.

Bangladeshi policymakers should assess these six themes. Examining how these themes are addressed globally will help Bangladesh develop a smart regulatory regime for AI.

Cultural psychology vs AI

Cultural psychology shapes human understanding of morality and social norms. Tools like the World Values Survey show significant cultural variations in values, indicating the profound influence of societal norms on people. This is crucial in understanding how AI works within different contexts.

For example, OpenAI’s ChatGPT, trained on publicly available internet text, absorbs human attributes from its vast data pool. However, as its training data are mainly US-centric and filtered according to developers’ cultural norms, it essentially reflects US perspectives. A study using the World Values Survey found ChatGPT’s responses aligned closely with those of US respondents, diverging significantly from those of more collectivist societies in Asia and Africa.

Understanding this cultural difference is also crucial for AI regulation. In the rush to adopt AI, there’s a risk of overlooking diverse cultural perspectives. While the focus may be on implementation, it’s crucial to pause and assess how AI adoption could impact local communities. Policymakers need to find native approaches.

In terms of regulation, what works in one jurisdiction may not work in another. For example, the EU’s AI Act requires certain AI models to be ‘explainable,’ meaning they must provide clear, accessible explanations of AI processes. Bangladesh’s draft AI Policy also directs AI systems to be ‘explainable.’ But it fails to address the context that explainability demands.

Developing ‘explainable’ AI models demands significant computing power and resources, driving up costs. Platforms like Facebook, with ample resources, can develop such AI far more easily than smaller firms. This means the EU’s AI law may disproportionately burden smaller companies and startups, favouring big companies with more resources for compliance. Replicating that exact model in Bangladesh would shrink the country’s innovation and entrepreneurship space.
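What counts as ‘explainable’ also varies with the model. A simple model can be explained almost for free, because its decision rule can be read directly off its weights; explaining a large black-box model requires additional tooling and compute. A rough sketch of the cheap end of that spectrum, using synthetic data and invented feature names:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic screening data: two features -> binary decision.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = (X[:, 0] - X[:, 1] + rng.normal(0, 0.5, 200) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# An intrinsically interpretable model: each coefficient states how strongly
# a feature pushes the decision, so the explanation is essentially free.
for name, coef in zip(["income", "existing debt"], model.coef_[0]):
    print(f"{name}: weight {coef:+.2f}")
```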

The EU’s AI law also categorises AI models according to their risks and impacts, with compliance measures applied according to the risk level. If Bangladesh wants to categorise risks, it must assess them based on the context of Bangladeshi society. After all, what matters in Bangladeshi society does not necessarily matter in Europe.
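As a thought experiment, a Bangladesh-specific risk tiering might be sketched as below. The categories, tiers and obligations are invented for illustration; they come from neither the EU Act nor the draft Bangladesh policy:

```python
# Illustrative only: hypothetical risk tiers for AI applications, with
# compliance obligations scaling by tier.
RISK_TIERS = {
    "unacceptable": ["social scoring of citizens"],
    "high": ["agricultural credit scoring", "automated exam grading"],
    "limited": ["customer-service chatbots"],
    "minimal": ["spam filtering"],
}

OBLIGATIONS = {
    "unacceptable": "prohibited",
    "high": "pre-deployment audit, human oversight, local bias testing",
    "limited": "transparency notice to users",
    "minimal": "no additional obligations",
}

def obligations_for(application: str) -> str:
    # Look up which tier an application falls into and return its duties.
    for tier, examples in RISK_TIERS.items():
        if application in examples:
            return OBLIGATIONS[tier]
    return "unclassified: requires assessment"

print(obligations_for("automated exam grading"))
```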


Towards an AI regulatory regime

In 1980, David Collingridge, of Aston University in England, pointed out a tricky problem in managing new technology, now known as the Collingridge dilemma. During the early stage of a technology, its harms are unknown, so policymakers cannot justify putting the brakes on its development. By the time the risks are apparent, it is usually too late to regulate, because the technology has already proliferated.

Thus, instead of viewing AI as an inevitable phenomenon, Bangladesh should carefully consider its implications in light of the country’s unique circumstances. The first step is recognising that AI adoption, like that of any other technology, is uneven across the globe. Policymakers must therefore come up with a strategy that balances technological advancement, economic prosperity, trust and the social progress of the nation.

They also need to think about some pressing questions. How can AI be held accountable? Can lawsuits be filed against AI? Can AI be a legal person or entity or is it merely an agent? This ambiguity raises issues regarding the moral responsibility of both designers and users of AI systems. To address these challenges, Bangladesh should take the following steps:

Map potential legal issues created by AI: Before adopting any legal framework for AI, Bangladesh must conduct a comprehensive study to identify the legal issues AI has raised in different countries. Emphasis should be given to Bangladesh’s own legal framework and its compatibility with AI.

Assess the infrastructure for implementing AI: Bangladesh should determine its capacity to support the widespread implementation of AI. This includes evaluating technological, educational and governance infrastructures, including schools, regulatory bodies, courts and law enforcement agencies.

Collaborate with key stakeholders: Bangladesh should bring together its main government stakeholders on AI, including the Supreme Court, the Ministry of Law, the Judicial Administration Training Institute, the police, the Bangladesh Telecommunication Regulatory Commission and the Bangladesh Computer Council. They should jointly review the infrastructure required for AI implementation. This collaborative approach will ensure a comprehensive understanding of needs and challenges.


Photo © Mahmud Hossain Opu

Moinul Zaber is Professor of Computer Science at the University of Dhaka. He is a data and computational social scientist, Co-Lead of the Data and Design Lab at the University of Dhaka and an Editorial Board member of Telecommunications Policy. He is a Senior Academic Fellow of the United Nations University’s E-Government Unit, Portugal. He was a research fellow at the Economics and Management Department of Chalmers University in Sweden, Instituto Superior Técnico in Portugal and LIRNEasia. He pursued his doctoral studies in Engineering and Public Policy at Carnegie Mellon University, US.
Shahrima Tanjin Arni is a Lecturer in Law at the University of Dhaka. She was an analyst at the Centre for Research and Information, a research associate at A.S. & Associates and Editor-in-Chief of the Dhaka University Law and Politics Review. She was International Affairs Secretary of the Dhaka University Central Students’ Union and a Commonwealth Scholar. She pursued her graduate studies in International Law at the University of Cambridge.