
AI Expansion Comes with Ethical Challenges

As AI becomes more commonplace, ethical issues need to be weighed, according to a Stanford report.

The role of AI has rapidly expanded in recent years alongside industrialization and globalization. That rise has also brought attention to the regulatory and ethical measures that require consideration, according to the Stanford Institute for Human-Centered Artificial Intelligence (HAI).

“2021 was the year that AI went from an emerging technology to a mature technology—we’re no longer dealing with a speculative part of scientific research, but instead something that has real-world impact, both positive and negative,” said Jack Clark, the study’s co-chair.

The 2022 AI Index Report examined technical AI ethics and AI policy and governance, along with technical performance, economy and education, and research and development. The objective of the comprehensive report is to guide decision-makers in developing AI ethically and responsibly, based upon AI data trends.

In 2021, there were several important milestones in the AI world. The number of large funding rounds, those worth more than $500 million, doubled, helping push AI private investment well above 2020 levels.

AI technology performed more efficiently and became more economical. For example, the cost of robotic arms has decreased by over 46% over the last five years. Since 2018, training times have improved by over 94%, while the cost to train an image classification system has dropped by 64%.

China and the U.S. have led the field in cross-border research ventures. The number of AI patents filed has skyrocketed to 30 times more than in 2015. Publications exploring transparency and fairness have soared fivefold in the last four years.

As large language models and multimodal language-vision models exceed performance expectations, ethical challenges such as toxic language generation are also on the rise.

Regulatory policies are becoming more focused on AI. Since 2015, legislatures in 25 countries have considered 18 times more AI-related bills.

The report “provides a starting point to track the performance of state-of-the-art systems along ethical dimensions and provides researchers, practitioners, and policymakers with an initial set of quantifiable metrics to track over time,” said researcher Helen Ngo, who helped write the report.

“As AI systems become increasingly more capable, it becomes critical to measure and understand the ways in which they can perpetuate harm,” she said.

Written by Helen Hwang and republished with permission from AI Business.





