
Less than 2% of AI practitioners interested in ethical AI adoption

Tens of thousands of tech pros rushed to learn about AI in 2024, but only a fraction sought to learn how to do it responsibly, according to Pluralsight research.

Nov 19, 2024 • 5 Minute Read


AI is transforming our world and is already affecting almost every aspect of our lives, from global politics and job security to economic growth. Given AI's power to alter the world and humanity's role in it, you would expect an equal measure of concern about responsibility.

However, among the tens of thousands of people learning how to use AI, only 1.8% (one in 54 people) actively searched for how to adopt it responsibly, according to Pluralsight's 2024 platform research.

That's not just a vast divide between those learning to implement AI and those interested in ethical AI; it's a colossal one. Outside our search data, Pluralsight has seen similar trends in our public-facing educational materials: overwhelming interest in any training materials on AI adoption, while similar materials on ethical and responsible AI draw nothing but crickets.

Why ethical AI is critical to reducing risk to businesses

Ethical AI adoption isn't just a matter of being able to sleep at night about your societal impact; it's a way of mitigating the risks and negative consequences that can come with using AI while maximizing positive outcomes. Quite a few of those risks can be the business-ending kind, such as danger to humans, breaches of consumer privacy, illegal AI use, or brand damage.

According to Accenture research, 77% of global consumers believe organizations must be held accountable for AI misuse. Your consumer confidence is gained in drips and lost in buckets, and unethical AI adoption is a gaping hole your customers (and by extension, revenue) can flow right out of.

Unfortunately, 80% of executives and 72% of IT practitioners say their organization often invests in new technology without considering the training employees need to use it, according to Pluralsight's recent AI Skills Report. On top of this, 90% of executives don't completely understand their teams' AI skills and proficiency, and only 12% have significant experience working with AI.

If you are a leader in your organization, don’t assume your internal or external AI practitioners are also trained in ethical and responsible AI adoption. You also need to give them opportunities to continuously refresh their knowledge in this area, as regulations and best practices are not static.

The new EU AI Act makes responsible AI even more important

Until this year, there was no major legislative framework to penalize organizations for not adopting AI ethically. On 1 August 2024, the EU AI Act entered into force, with potential fines of up to EUR 35 million or 7 percent of the offending entity's global annual turnover for the previous financial year, whichever amount is greater.
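To put that "whichever is greater" clause in concrete terms, here is a minimal Python sketch of the headline maximum; the turnover figure in the example is purely illustrative, not a real case.

def max_eu_ai_act_fine(global_annual_turnover_eur: float) -> float:
    # Headline maximum under the EU AI Act: EUR 35 million or 7% of
    # prior-year global annual turnover, whichever is greater.
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

# Illustrative only: an entity with EUR 2 billion in turnover faces a
# ceiling of EUR 140 million, not EUR 35 million.
print(max_eu_ai_act_fine(2_000_000_000))  # 140000000.0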

While this is European legislation, it applies to anyone who provides, deploys, imports, distributes, or manufactures an AI system with a link to the EU market. Notably, even if your organization is not in the EU, you can run afoul of this legislation if the output produced by your AI system is used in the EU. The EU AI Act is being rolled out gradually, with enforcement ramping up over the next few years.

This is just one example of something AI practitioners should be aware of, and why being trained in responsible and ethical AI use is important. If you don’t know these dangers exist, you can’t build your AI solutions to avoid risks of noncompliance. In short, if nobody knows that fire is hot and you spark up a fire, sooner or later, someone’s going to get burned.

The limits of outsourcing AI knowledge for an ongoing business need

In legislation-heavy regions like the EU, a market of AI consultants who specialize in responsible AI adoption is springing up. Engaging them is a good starting point for shoring up your own internal knowledge, as they can come in and tell you what pitfalls to avoid. However, what many companies don't realize is that AI projects are not like other projects: the work is never "done".

Why? First, AI drifts over time, which means that once you've built and released a system, you still have to keep confirming it meets business requirements and behaves responsibly and efficiently. You have to keep it secure against DDoS attacks that can rack up millions of dollars in token-related costs, and against exfiltration attacks that can steal your customers' data. There are also data and infrastructure requirements and dependencies, all of which are hard for an external agency to know as deeply as your internal teams do.
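To illustrate why that monitoring never really ends, here is a minimal Python sketch of one common drift check, the Population Stability Index (PSI), comparing a feature's distribution at release against what the model sees in production. The data, feature, and 0.2 threshold are illustrative assumptions (0.2 is a common rule of thumb), not guidance from the research cited above.

import numpy as np

def population_stability_index(baseline, current, bins=10):
    # Compare two samples of the same feature; a larger PSI means more drift.
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    curr_pct = np.histogram(current, bins=edges)[0] / len(current)
    # Clip to avoid log(0) for empty bins.
    base_pct = np.clip(base_pct, 1e-6, None)
    curr_pct = np.clip(curr_pct, 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

# Hypothetical usage: data the model was built on vs. six months of production traffic.
baseline = np.random.normal(0.0, 1.0, 10_000)
production = np.random.normal(0.5, 1.0, 10_000)
if population_stability_index(baseline, production) > 0.2:
    print("Feature drift detected: re-validate the model against business requirements.")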

While our research shows 91% of executives are at least somewhat likely to replace or outsource talent to successfully deploy AI initiatives—which makes sense given the overall AI talent shortage—this is at best a stopgap solution for what is going to be an ongoing business need. 

Prioritize implementing AI properly and appropriately, not just quickly

If you’re a leader in an organization seeking to adopt AI, make sure your organization gives the right employees the ability to learn about ethical and responsible AI adoption. Remember that AI is not a “one and done” project, but an ongoing endeavor that requires constant upskilling to mitigate risk and negative outcomes, and get those positive business results you’re after. 

Meanwhile, if you're a current or aspiring AI practitioner, make sure you're continuously investing time in learning about ethical and responsible AI adoption, not just the mechanics of building an AI solution. If you're the one in 54 who took the time to learn, you'll be well positioned to help your organization navigate legislative and regulatory pitfalls, avoid embarrassing brand damage, and use AI as a major business boon rather than a future liability.

Adam Ipsen


Adam is a Lead Content Strategist at Pluralsight, with over 13 years of experience writing about technology. An award-winning game developer, Adam has also designed software for controlling airfield lighting at major airports. He has a keen interest in AI and cybersecurity, and is passionate about making technical content and subjects accessible to everyone. In his spare time, Adam enjoys writing science fiction that explores future tech advancements.
