Until now, developments in Artificial Intelligence (AI) have been met with equal measures of enthusiasm and apprehension by the public. Those in the former camp tend to come from an enterprise background and see AI as the key to unlocking data insights that can make millions for their business. The latter, by contrast, tend to be consumers who can’t help but associate the term ‘AI’ with images of Arnold Schwarzenegger blowing stuff up, or with a Chinese-style surveillance society in which AI not only records our every move but ultimately replaces us entirely as a species. Failing that, the apprehensive tend to understand a little more about AI but harbour fears about regulation and compliance.
It’s no surprise that the unregulated tech has been causing a stir, especially as the use of AI-backed facial recognition technology has been hitting headlines all across the world recently, with a particularly strong reaction mounting in Europe. Back in April we saw the European Commission draft its Artificial Intelligence Act – a clear move towards more regulation in the space. However, the proposal has since come under scrutiny for not going far enough.
Individual nations have instead decided to take matters into their own hands. In Germany, a coalition of three key political parties is backing a ban on facial recognition tech in public places. In the UK, the Information Commissioner’s Office (ICO) is currently investigating Clearview AI over its handling of personal data, with a £17m fine on the cards if the company is found to be in breach of data privacy laws.
What the pushback on this tech tells us is that companies and public bodies can’t afford to cut corners on compliance. In fact, it tells us that accepting a reduction in the accuracy of insights – say, by removing faces from the datasets that train our AI – is a good thing for them in the long run.
There are two key reasons for this. Firstly, companies with more data are more tempted to ‘use and abuse’ it; time and time again we have seen those using AI fail to obtain proper consent and permission from individuals for the use of their identifiable data. Not collecting that data in the first place removes the opportunity for any company to misuse it.
Secondly, in an age of relentless cyber-attacks, data leaks, and security breaches, even if collected data is being used in a compliant way, the very fact that identifiable data has been collected means there is still a significant threat to the average consumer. Preparing for the inevitable is going to be a key best practice in the field over the coming years.
This is a hard pill to swallow for many organisations, but we need to see a move towards ethical data practices and self-regulation. Attempts by the likes of Meta to delete over a billion people’s faces from its dataset indicate a growing self-awareness among brands, but things need to move more quickly – and compliance requirements can force the hand of data-collecting corporations.
Creating a ‘glass ceiling’ in the form of AI regulation should be encouraged, and the tech needs to be developed to comply from the ground up. AI is ultimately trained by us, and it is our responsibility to create ethical best practice in the field. The single most effective way for us to reduce bias, privacy breaches, and gross misuses of power by corporations is to be selective about what information we feed the AI. The AI roadmap must include welcome limitations and restrictions, so that the whole industry can start to unlock its power from a compliant starting point. This must be the future of the industry.
Written by Karen Burns, CEO of Fyma, and republished with permission from TechInformed.