Herald Editorial

Like fire, AI is a tool that calls for controls


Fire, whose controlled use ignited a different way of preparing food nearly 400,000 years ago, is a tool that very much remains in need of control, whether it's the size of the flames in a fire pit or the temperature setting on a kitchen stove.

That same control is needed when handling the latest, fastest-moving human tool: artificial intelligence. Known as AI, these computer systems can generate text, recognize images and speech, make decisions and translate languages at a speed that outpaces human thought.

But like a motor vehicle that needs brakes to bring it to a safe stop, this emerging powerful force, which is revolutionizing industries, improving efficiency, and transforming the way we live and work, needs guardrails so it does not careen off the metaphorical road.

If the past 20-plus years of the internet and social media have taught us anything, it is that cyberspace is not all fun. Yes, there are games and hilarious memes, and people can connect within minutes during emergencies or rally around civic causes. But there are also abuses such as deepfakes, videos digitally altered to make a person appear to say or do something they never did, which are typically deployed to spread false information.
This calls for caution and standards.

AI holds the potential to shape the future in remarkable ways — from autonomous vehicles to personalized health care. The systems can analyze massive datasets, identify patterns, and make predictions with unprecedented accuracy.

Yet this transformative technology also carries significant risks and ethical concerns. Trained on data created by humans, AI can respond just like humans, for good and for ill. An AI tool can just as easily reproduce stereotypes and generate content that someone might deem racist.
We need responsible oversight.

The Biden administration issued an expansive executive order in October 2023 establishing standards for AI safety and security within the federal government, in an effort to protect privacy, advance equity and civil rights, stand up for consumers and workers, and promote innovation and competition.

It also calls on the federal government to act responsibly and to accelerate the hiring of AI professionals who can train employees across agencies and departments.
The Biden blueprint does not bind the social media giants, but the Mark Zuckerbergs and Elon Musks of the world should take these guidelines to heart so as not to repeat the chaos that has followed every digital advance. Silicon Valley needs to apply the experience and knowledge gained from the past 20 years of social media and technological change to this newest tool.

The global community needs to follow the White House’s lead to collaborate and create comprehensive regulations and standards that address the challenges AI poses. Those standards should encompass privacy protection, employment impact assessments, algorithmic transparency, and the development of ethical guidelines for AI research and development.

And we should focus on AI usage much closer to home, such as in our schools, where we can teach AI skills while developing curricula that incorporate this new tool. Local governments, for their part, should ensure that their hiring and procurement practices do not rely on biased AI systems.

Responsible oversight does not mean stifling innovation. It means ensuring that innovation is aligned with the best interests of society. It means promoting AI technologies that are safe, trustworthy and respectful of individual rights and freedoms.

To achieve this, governments should establish regulatory bodies while encouraging industry self-regulation. Regulators should invest in AI education and awareness campaigns, and foster a culture of ethical AI use. The AI community should also adopt a proactive stance, policing its own research and applications to ensure they adhere to these principles.

Just as we harnessed the power of fire while guarding against being burned by this dangerous tool, responsible AI oversight should include safeguards against potential harms, helping to ensure that AI is a force for good and not a hazard to our overall welfare.