REGULATING THE UNFORESEEN
One year with AI
Image: ©Google DeepMind. This image represents how accountability needs to be core to AI systems. It was created by Champ Panupong Techawongthawon as part of the Visualising AI project launched by Google DeepMind.
In the fast-paced realm of artificial intelligence (AI), one standout performer has taken centre stage over the past year: ChatGPT, spotlighted as a critical trend in the Municipal Forecast 2023.
Launched on 30 November 2022, this Generative Pre-trained Transformer has skyrocketed in popularity, boasting 180 million users and an astonishing 1.6 billion visits in December 2023. Within five days of its debut, ChatGPT reached one million users, outpacing tech giants like Spotify (which took five months to reach its first million users in 2008) and Twitter (which took two years to hit the same milestone in 2006). Only Threads, Meta Platforms’ social media service launched in July 2023, has grown faster, garnering its first million users within an hour.
According to Similarweb, approximately 15 per cent of ChatGPT’s user base hails from the United States, with India following at nearly 8 per cent and the Philippines, Japan, and Canada each contributing around 3 per cent. In terms of audience composition, the user base comprises 55.88 per cent males and 44.12 per cent females, with the largest age group being 25-34 years (33.43 per cent), closely followed by the 18-24 bracket (28.07 per cent). The top topics of interest for ChatGPT visitors include Programming and Developer Software; Computers, Electronics and Technology; and Video Games Consoles.
As AI’s influence permeates our daily lives, a resounding global call for increased regulation has emerged. The AI Index 2023 Annual Report from Stanford University analysed legislative records addressing “artificial intelligence” in 127 countries, revealing a notable surge from just one legislative effort in 2016 to 37 in 2022. The United States, Portugal, Spain, Italy, Russia, Belgium, the United Kingdom, Austria, the Republic of Korea, and the Philippines were the top ten countries by number of AI-related legislative efforts between 2016 and 2022.
On 9 December 2023, a historic milestone was reached when the European Parliament and the European Council provisionally agreed on the EU AI Act, the world’s first comprehensive AI law. The European Commission proposed the draft Act in April 2021, and the European Parliament adopted its negotiating position in June 2023. The Act is expected to become EU law before the European Parliament elections in June 2024, with enforcement anticipated in 2026 following an 18-month transitional period. The primary objective is to ensure that AI systems within the EU are safe, transparent, traceable, non-discriminatory, and environmentally friendly. The EU AI Act categorises AI systems based on risk and imposes obligations on providers and deployers accordingly, banning those considered “unacceptable risk AI systems” and rigorously assessing those classified as “high-risk systems.”
Unacceptable risk AI systems are those deemed a threat to people, including systems used for cognitive behavioural manipulation of individuals or specific vulnerable groups, biometric identification and categorisation of people, or social scoring (classifying people based on behaviour, socio-economic status, or personal characteristics). High-risk systems, which negatively impact safety or fundamental rights, will undergo assessment before entering the market and throughout their lifecycle; examples include AI systems in toys, aviation, cars, medical devices, and lifts.
General-purpose and generative AI, like ChatGPT, must comply with transparency requirements by disclosing that content was AI-generated. These measures, which empower users to make informed decisions, also cover AI systems that generate or manipulate image, audio, or video content, such as deepfakes.
Simultaneously, several European countries have propelled their national efforts. Spain announced the creation of the Spanish Agency for the Supervision of Artificial Intelligence (AESIA), establishing it as the first AI regulatory body in the EU. Through its Federal Ministry of Education and Research, Germany unveiled an extensive AI Action Plan, with an investment exceeding €1.6 billion in AI research and training.
China enacted the Interim Measures for the Management of Generative Artificial Intelligence Services in August 2023. These measures urge administrative authorities and courts at all levels to adopt a cautious and tolerant regulatory stance towards AI, striking a balance between fostering innovation and mitigating AI-related risks. They also explicitly address privacy invasion, violation of intellectual property rights, and the circulation of false information: AI services are prohibited from generating content “advocating terrorism or extremism, promoting ethnic hatred and ethnic discrimination, violence and obscenity, as well as fake and harmful information.”
At their summit in Johannesburg in August 2023, the BRICS countries agreed to launch the AI Study Group of the BRICS Institute of Future Networks to develop AI frameworks and standards with broad-based consensus, making AI technologies “more secure, reliable, controllable and equitable.”
In October 2023, the Biden administration issued the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, emphasising transparency, setting new standards, and establishing the US Artificial Intelligence Safety Institute.
At the end of the same month, on 30 October 2023, G7 members subscribed to the “Hiroshima AI Process,” delineating guiding principles and a voluntary Code of Conduct for artificial intelligence developers. The G7 Leaders’ statement reinforced “the need to manage risks and to protect individuals, society, and our shared principles including the rule of law and democratic values, keeping humankind at the centre.”
The G20 New Delhi Leaders’ Declaration reaffirmed the group’s commitment to its 2019 AI principles: leveraging AI for the public good by solving challenges in a responsible, inclusive and human-centric manner while protecting people’s rights and safety. Although no additional commitments were adopted, the G20 members agreed to pursue “a pro-innovation regulatory/governance approach” to maximise AI’s benefits while taking account of its associated risks.
The AI Safety Summit in the UK, attended by nearly 30 countries in November 2023, concluded with the Bletchley Declaration, urging international and national efforts to “build a shared scientific and evidence-based understanding of AI risks,” as well as promoting “respective risk-based policies across countries to ensure safety.”
The African Union’s summit at the end of November 2023 included the Draft Conceptual Framework of the Continental Strategy on Artificial Intelligence, aiming to shape an ethical and economically fruitful AI strategy for the continent. Noteworthy initiatives include Rwanda’s approval of a National Artificial Intelligence Policy and the Nigerian government’s directive in August 2023, integrating artificial intelligence into the country’s primary education curriculum and formulating its national AI strategy.
At the global level, the United Nations Secretary-General’s AI Advisory Body launched its Interim Report Governing AI for Humanity on 21 December 2023. The report calls for strengthening international governance of AI, “not merely to address the challenges and risks but to ensure we harness its potential in ways that leave no one behind.” Among the functions it proposes for global AI governance are regularly assessing the state of AI; harmonising standards, safety, and risk management frameworks; monitoring risks; and coordinating emergency responses.
Carme Artigas, co-chair of the AI Advisory Body, said, “As AI is inherently an international matter in nature, there is a need for a global approach to a governance framework that respects human rights and hears the voice of all, including the Global South.” The final report is expected in the summer of 2024, ahead of the Summit of the Future, with stakeholders encouraged to provide feedback online by 31 March 2024.
Bridging the Digital Divide: Inclusive AI
As we embark on the journey into 2024, the regulatory landscape for AI is poised for transformation, aimed at propelling its progress while ensuring inclusivity and upholding human rights. This commitment is crucial for marginalised communities, especially in the Global South. Despite strides in technological advancement, the digital divide remains a pressing concern: according to the International Telecommunication Union (ITU), 2.6 billion people remained without Internet access in 2023.
This digital inequality is most pronounced among women in low-income countries, who constitute the majority of the disconnected population. In Africa, only 32.1 per cent of women have Internet access, a stark contrast to the 89.5 per cent of their European counterparts, and a gender gap persists within the continent itself, where 42.2 per cent of men are online. Addressing these disparities is not only a technological imperative but also a fundamental step towards creating an equitable digital future for all.
AI in local governments
Local governments are actively navigating the challenges posed by the AI boom, with local regulatory initiatives shaping the landscape. In October 2023, New York City Mayor Eric Adams released the “New York City Artificial Intelligence Action Plan,” the first of its kind for a major U.S. city. Addressed to city agencies and government employees, the plan aims to build the AI knowledge and skills needed to carefully evaluate AI tools and their associated risks, supporting the seamless integration of these technologies into city government services.
A month earlier, in September 2023, Pennsylvania Governor Josh Shapiro and California Governor Gavin Newsom signed executive orders on the use of Generative Artificial Intelligence (GenAI). The California executive order urges state agencies to foster a safe and responsible innovation ecosystem, harnessing AI systems and tools for the benefit of Californians. This includes issuing guidelines for public-sector procurement, usage protocols, and required training for GenAI use.
The order acknowledges California as home to 35 of the world’s top 50 AI companies, with San Francisco and San José leading the charge, together accounting for a quarter of all global AI patents, conference papers, and companies. Notably, both cities have published their own AI guidelines (the San Jose Generative AI Guidelines and the San Francisco Generative AI Guidelines). At the same time, the Newsom order emphasises the need for careful deployment and regulation of GenAI to mitigate and guard against a new generation of risks.
Beyond legislative actions and executive orders, local governments are increasingly grappling with concerns about government workers using AI, potentially sharing sensitive information and introducing cybersecurity risks. In response to the growing use of AI tools for citizen-centric content creation, the City of Boston published interim guidelines for using generative AI in May 2023 as a resource for its employees. Taking a more stringent approach, the State of Maine issued a directive in June 2023 through the Maine Office of Information Technology, establishing a moratorium of at least six months on GenAI; the directive prohibits using technology like ChatGPT on any device connected to the State of Maine network.
Buenos Aires developed an AI Plan in 2021 as an overarching strategy for generating a positive impact through the use and development of artificial intelligence. A practical guide on the ethical development of AI systems was published as part of the plan.
Across the Atlantic, the Barcelona City Council took a proactive stance by establishing the Advisory Council on Artificial Intelligence, Ethics, and Digital Rights in April 2023. Comprising 15 experts, the Council advises and assists the City Council in using artificial intelligence for the common good and assesses the development of the municipal strategy on algorithms and data for the ethical promotion of artificial intelligence, among other responsibilities.
Barcelona has been a trailblazer, approving an institutional declaration in June 2020 in favour of a reliable and ethical technological model. Months later, in April 2021, it approved a municipal governance measure for the ethical advancement of artificial intelligence, focusing on algorithms and data. Barcelona is also one of the founding cities of the Cities Coalition for Digital Rights, a city-led coalition that promotes and protects digital rights and also works on AI.
The UCLG Community of Practice on Digital Cities, chaired by the City of Bilbao, produces the “Smart Cities Study.” Its 2023 edition focuses on cities as innovation-driven ecosystems: spaces for collaboration, creativity and innovation that drive research, development and innovation (R&D&I) from a public-private perspective and contribute to achieving the United Nations 2030 Agenda.
At the outset of 2023, a collaborative effort among Barcelona, Rotterdam, Eindhoven, Mannheim, Bologna, Brussels, and Sofia resulted in the launch of a transparent and ethical municipal register for algorithms, enhancing public services. This initiative allows citizens to access diverse data and provides transparent information about the algorithmic tools and their intended purposes.
These regulatory initiatives, once on paper, are now set to shape the concrete actions that will define the responsible and ethical use of AI.
Keep an eye on
Cities Coalition for Digital Rights
In 2024, Toronto and New York City are set to host online sessions focusing on generative AI, followed by guidelines to uphold ethical practices and protect vulnerable populations. This falls under the Cities Coalition for Digital Rights (CC4DR) 2024 work plan.
Within the CC4DR framework, the Global Observatory of Urban Artificial Intelligence (GOUAI) launched the first edition of the Atlas of Urban AI in November 2023, mapping AI initiatives from cities worldwide. The Atlas envisions continued growth in 2024.
Amsterdam, Barcelona, and New York City, in partnership with Eurocities, Metropolis, UCLG, and UN-Habitat, are the driving forces behind the Cities Coalition for Digital Rights, established in November 2018, boasting over 50 member cities as of 2023.
The AI for Good Global Summit
The AI for Good Global Summit will take place in Geneva (Switzerland) on 30-31 May 2024, back-to-back with the World Summit on the Information Society (WSIS)+20 Forum High-Level Event, which runs from 27 to 31 May 2024. AI for Good aims to identify practical applications of AI to accelerate progress towards achieving the Sustainable Development Goals (SDGs).
It is the leading action-oriented United Nations platform promoting AI to advance health, climate, gender, inclusive prosperity, sustainable infrastructure, and other global development priorities. The 2024 programme includes interviews with Geoffrey Hinton, advisor to the Learning in Machines and Brains programme and known colloquially as “the Godfather of Deep Learning,” and Sam Altman, CEO of OpenAI. The Summit can be followed online.