Rwanda Pushes for Safe and Responsible AI Development

Rwanda has launched a national policy on artificial intelligence (AI) to guide local companies in developing safe and responsible AI.

The policy outlines the East African nation’s vision for AI, says Victor Muvunyi, a senior official at the Ministry of ICT. For Rwanda, inclusivity and ethical deployment are guiding principles as the country seeks to leverage AI to improve people’s lives. The policy also pushes Rwandan companies to use AI to address the unique challenges its people face.

In addition to policy, the Ministry of ICT has established a national office for artificial intelligence whose mandate includes ensuring that local companies implement the technology “responsibly and effectively.”

“This office will help us lead our AI development journey, addressing challenges and fostering innovation, while keeping our cultural and ethical values at the forefront,” Muvunyi told local newspaper The New Times.

AI adoption has accelerated in African nations in recent years as the region tries to catch up with its peers. Some, like Morocco, are using the technology in their courts to conduct searches and retrieve archived texts. Others, like South Africa and Kenya, are using it to solve specific challenges facing the continent, like climate modeling to help farmers better plan.

However, Africa faces greater obstacles to AI adoption than other regions. Challenges such as insufficient structured data ecosystems, skills shortages, poor infrastructure, and restrictive policies have impeded the development of AI.

Rwanda wants to help AI companies mitigate these challenges, Muvunyi said. In addition to the new AI office, it is relying on the Rwanda Utilities Regulatory Authority to foster AI development. The authority also promotes AI principles intended to keep these companies honest and protect people’s rights.

“The principles include beneficence and non-maleficence, among others, which ensure that AI systems not only benefit society but also protect human dignity and prevent harm,” Muvunyi told the newspaper.

AI safety is a global concern. Governments in the United States, the European Union, the United Kingdom, and Asia have pushed AI developers to commit to prioritizing safety when developing the technology.

Last month, Google (NASDAQ: GOOGL), Meta (NASDAQ: META), and OpenAI were among the industry leaders that made a new pledge in Seoul to prioritize safety in AI development. Last year, these companies made a similar commitment to the Biden administration.

Another challenge the industry faces is data privacy. Companies like Meta and OpenAI have gotten into legal trouble for ignoring data laws when training and deploying their AI models.

In Rwanda, the country’s data protection and privacy law has become crucial to protecting the public amid the rapid development of artificial intelligence. The law, which came into force in 2021, requires companies to obtain citizens’ consent before using their data and to be transparent about how that data is managed and stored.

For AI to function properly within the law and thrive in the face of growing challenges, it must integrate an enterprise blockchain system that ensures the quality and ownership of data input, allowing data to be kept secure while ensuring its immutability. Check out CoinGeek’s coverage of this emerging technology to learn more about why enterprise blockchain will be the backbone of AI.

Watch: Artificial Intelligence Needs Blockchain


New to blockchain? Check out CoinGeek's Blockchain for Beginners, the definitive resource guide for learning more about blockchain technology.
