
AI projects are never complete: deployment is just the beginning

As organizations rush to adopt AI in 2025, the majority are unaware of the ongoing talent, cost, and security requirements, and are not planning accordingly.

Jan 6, 2025 • 9 Minute Read

  • AI & Data
  • Business & Leadership

Unlike other IT projects, an AI project is never done. If you felt a shiver go down your spine at the thought of an unending IT project, that's a good thing! It means you're taking AI adoption seriously. Unfortunately, many organizations don't, and, propelled by FOMO, they rush to adopt AI without adequate preparation, joining the ranks of the 80% of AI projects that fail. (Well, at least they're not missing out anymore, right?)

Fundamentally, the root cause of these failures is a lack of AI skills and education. This manifests as companies trying to use AI to solve the wrong problems, launching a project without the required data or infrastructure, or running afoul of other pitfalls. According to Pluralsight’s 2025 Tech Forecast report, three in four organizations are either planning to deploy or have already deployed AI technologies and tools, but only 12% of technologists have significant experience working with AI. I'd call that a skills gap, but it's more of a skills chasm.

So, what does all this have to do with my original statement about AI projects never being done? One of the biggest knowledge gaps people have about AI concerns the preparation and ongoing requirements, which catch many people off guard. When thinking about an AI project, most people focus on the costs and work around training the AI model. Part of that is the media frenzy: there's constant noise about the latest model passing a benchmark test or a major tech company releasing an AI with new capabilities.

To be fair, all of this is more exciting than discussing the ongoing work involved in AI projects; maintenance is less glamorous, and it doesn’t sell AI solutions. However, failing to account for these requirements before you embark on your AI journey is a major contributor to project failure. Let's cover what these ongoing requirements are, and why they're so important.



Nine reasons AI projects are never finished

1. The many different forms of drift

Did you know that your AI and the environment it depends on are not static? They drift over time, and left unchecked, this drift can make your AI slow and buggy and cause it to give inappropriate or misleading output to your users. It’s true. The two main types of model drift are data drift and concept drift.

Data drift

This is when the data your AI was trained on differs from the data it is now actually ingesting and dealing with. Think of changes in user demographics on your website, such as younger users suddenly becoming the majority of your client base. Your AI expects to be dealing with an older audience, and this mismatch degrades its performance.
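To make this concrete, here's a minimal sketch of how a monitoring job might flag data drift on a single feature. The "user age" feature, the sample values, and the z-score threshold are all illustrative assumptions, not a production method:

```python
# Sketch: flag data drift by comparing a live feature's distribution
# against the training-time baseline. Feature, data, and threshold
# are hypothetical examples.
from statistics import mean, stdev

def detect_drift(training_sample, live_sample, z_threshold=2.0):
    """Return True if the live mean has shifted beyond z_threshold
    standard errors of the training distribution."""
    base_mean = mean(training_sample)
    base_std = stdev(training_sample)
    std_err = base_std / (len(live_sample) ** 0.5)
    z = abs(mean(live_sample) - base_mean) / std_err
    return z > z_threshold

# Training data skewed older; live traffic suddenly skews younger
training_ages = [52, 48, 55, 60, 47, 51, 58, 49, 53, 56]
live_ages = [22, 25, 19, 31, 27, 24, 29, 21, 26, 23]
print(detect_drift(training_ages, live_ages))  # True: drift detected
```

Real pipelines use richer distribution tests (e.g. Kolmogorov-Smirnov) across many features, but the principle is the same: compare what the model sees now against what it was trained on.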

Concept drift

Put very simply, the task your model was trained to do has now changed, and the patterns it learned no longer hold. For example, you might have an AI trained to detect spam in people’s emails. But the types of spam people receive change significantly, and the model can no longer accurately detect them because of these new patterns.
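One common way to catch concept drift is to track the model's rolling accuracy against labeled outcomes and alert when it dips. The sketch below assumes a spam classifier with a small monitoring window; the window size and threshold are illustrative:

```python
# Sketch: rolling-accuracy monitor for concept drift. The spam/ham
# labels, window size, and threshold are hypothetical examples.
from collections import deque

def make_accuracy_monitor(window=5, threshold=0.8):
    recent = deque(maxlen=window)
    def record(prediction, actual):
        recent.append(prediction == actual)
        accuracy = sum(recent) / len(recent)
        return accuracy >= threshold  # False -> possible concept drift
    return record

record = make_accuracy_monitor(window=5, threshold=0.8)
# The model handles the spam patterns it learned...
for _ in range(5):
    ok = record("spam", "spam")
# ...then a new spam campaign appears that it misclassifies
for _ in range(3):
    ok = record("ham", "spam")
print(ok)  # False: accuracy dropped, retraining may be needed
```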

Other types of drift

Feature drift, infrastructure drift, regulatory drift, user expectation drift, and objective drift are all ways your model can stop performing as required. All of these have different causes and require different solutions. The most important thing to know is that, at a high level, making sure your AI doesn’t drift off course requires hands at the helm steering it back to its intended destination: a functional, up-to-date, and informed model that meets business and user needs.

2. Cybersecurity requirements

You might also not be aware that AI systems can be subject to cyberattacks that would be an inconvenience on a regular system, but could potentially put you out of business when they hit an AI solution.

Take a DDoS attack—essentially spamming your systems with tons of traffic to overload them and cause a disruption to your service. If they’re targeting your website, it might go down. A problem, but not insurmountable. On your LLM-powered solution, though? You can rack up millions of dollars in inference fees (which we’ll talk about later).
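A cheap first line of defense against runaway inference costs is rate limiting in front of the LLM endpoint. Below is a minimal token-bucket sketch; the rates and capacity are placeholder values, and a real deployment would enforce limits per user or API key at the gateway layer:

```python
# Sketch: token-bucket rate limiter to cap requests hitting a paid
# LLM endpoint. Rate and capacity values are illustrative.
import time

class TokenBucket:
    def __init__(self, rate, capacity):
        self.rate = rate          # tokens refilled per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens for the time elapsed since the last call
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=1, capacity=3)
results = [bucket.allow() for _ in range(5)]  # burst of 5 requests
print(results)  # first 3 allowed, the rest rejected until refill
```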

Of course, money isn’t the only thing these attacks can cost you; there’s also your reputation. For example, someone could red-team your company’s AI into giving offensive or racist outputs, or suggesting other companies’ products instead of yours. In the long term, that damage can be worse than the direct financial cost.

I’d highly recommend having someone in your organization keep up to date with the OWASP Top 10 for LLMs and APIs (since when we’re talking about AI, it’s often built on a foundation of API calls).

3. Guardrails and monitoring

If you deploy an AI solution, you’ve got to make sure there are guardrails in place to stop unwanted output and abuse. Let’s say you deploy an innocent chatbot, but someone repurposes it into a tool to generate blackmail and extort people at scale. This is often done through exploiting vulnerabilities or jailbreaking your AI models. Someone needs to be constantly aware of these and pushing for countermeasures.
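As a concrete illustration, one basic layer of an output guardrail is scanning model responses before they reach the user. The blocklist patterns and refusal message below are purely hypothetical; production systems typically layer dedicated moderation models on top of checks like this:

```python
# Sketch: a simple output guardrail that blocks responses matching
# known-bad patterns. Patterns and refusal text are illustrative.
import re

BLOCKED_PATTERNS = [
    r"\b(?:blackmail|extort)\b",
    r"\bransom\s+note\b",
]

def guard_output(response: str) -> str:
    """Return the response unchanged, or a refusal if it trips a rule."""
    for pattern in BLOCKED_PATTERNS:
        if re.search(pattern, response, re.IGNORECASE):
            return "I can't help with that request."
    return response

print(guard_output("Here's a draft blackmail letter..."))  # refused
print(guard_output("Here's a draft thank-you letter..."))  # passed through
```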

4. MLOps

There’s a saying in AI: “Garbage in, garbage out.” Your AI solution is only as good as the data you’re feeding it, and any shifts in data quality, such as missing values, outliers, or corrupted entries, can impact its accuracy. You’ve also got data drift, automated pipeline management and optimization, feedback loops, and incident response to handle. All of this requires MLOps, which keeps the AI model delivering accurate, actionable results long after it’s deployed.
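The "garbage in" checks are often the simplest part of an MLOps pipeline to start with. Here's a minimal sketch of a pre-retrain data quality gate that counts missing values and flags outliers; the batch values and thresholds are invented for illustration:

```python
# Sketch: data quality check an MLOps pipeline might run before each
# retrain. Uses a median/MAD test so one outlier can't hide itself by
# inflating the spread. Batch values and z-limit are illustrative.
from statistics import median

def data_quality_report(batch, z_limit=3.5):
    missing = sum(1 for v in batch if v is None)
    values = [v for v in batch if v is not None]
    med = median(values)
    mad = median(abs(v - med) for v in values)  # median absolute deviation
    outliers = [v for v in values
                if mad and 0.6745 * abs(v - med) / mad > z_limit]
    return {"missing": missing, "outliers": outliers}

batch = [10.2, 9.8, None, 10.5, 250.0, 10.1, None, 9.9]
print(data_quality_report(batch))  # {'missing': 2, 'outliers': [250.0]}
```

A real pipeline would run checks like this per feature and block the retrain (or page someone) when the report exceeds agreed limits.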

5. KPI tracking

Obviously, you’re deploying an AI solution to achieve some sort of business outcome. That means KPIs need to be tracked so you can justify your continued investment. Many organizations fail to consider how AI maps to business needs in the early stages of adoption, treating AI as an IT project rather than a business project. This should be a consideration across the whole IT asset lifecycle, from planning to disposal.

6. Employee education

There are three aspects to this: leadership education, specialist education, and user education. It’s an ongoing process because of shifts in personnel and changes in the technology in question.

Leadership education

You need leaders to be aware at a high level of what AI can do, what it can’t do, what the ongoing requirements and risks are, and any fiscal considerations. They also need to be informed enough to act as top-down advocates for the AI solution, and to understand why that’s vital for successful adoption.

Specialist education

You need to invest in an ongoing training program for the core specialists involved with AI, so they’re aware of the different types of AI solutions available, know how to implement them using responsible AI frameworks, and keep on top of the latest advancements. Things move fast in AI, and they need to be empowered to learn about any new tools or techniques that help them meet the ongoing requirements of your AI solution.

This also includes the members of your AI governance team, which is something else you should have in place. This team should ensure your AI systems are safe and secure throughout the whole AI lifecycle by continually identifying, assessing, and managing risk. To do that, they’ll need ongoing education and training to match.

User education

This is where a lot of AI implementation projects fail: a lack of change management planning. Part of this planning is educating everyday users on how to use AI, giving them a safe space to voice any fears they have about the technology, and gathering feedback about where it’s helping, where it’s not, and where it might actually be slowing things down.

7. Regulatory requirements and responsible AI

AI regulation is starting to come into force, particularly the EU AI Act. At a high level, this act affects anyone working with AI in the EU or producing AI outputs that affect the EU, with large fines for non-compliance. Depending on the country you’re in, there will be different compliance requirements, but someone in your organization needs to be aware of them. There are also related regulations such as the GDPR, which many companies already comply with and which also apply to AI models.

You also want someone who is familiar with responsible and ethical AI principles, so they can make sure your AI solution meets them regardless of any regulatory pressure. Your organization’s trust is gained in drips and lost in buckets: someone should be checking that your model is trained and retrained the right way, and organizing tests to make sure it meets these standards.

8. Infrastructure 

You’ve likely got a finite IT budget, and while many organizations pump money into AI (and to a lesser extent, cybersecurity), your new AI solution is going to need proper infrastructure to support it. When you adopt AI in your organization, you’re likely to see an uptick in the load on your systems, which means someone needs to make sure they can cope with it on an ongoing basis.

9. Outsourcing management

If you’ve decided to hire external AI specialists to help with your solution, be aware they’re going to be needed on an ongoing basis for all the reasons mentioned above (drift, cybersecurity, legislation, guardrails, etc.), not just for the creation of your AI solution.

Typically, it’s better in the long term to invest internally in AI talent, since an external agency is never going to know your IT infrastructure, business requirements, and data landscape as well as your own staff will. But if you do go with continuous outsourcing, that comes with continuous management internally. 

AI can empower your business, but you need to be prepared for it

So far, we’ve talked a lot about the difficulties and pitfalls around adopting AI solutions. There are a ton of benefits as well, such as:

  • Reduction in human error
  • Faster speed of business operations
  • 24x7 availability
  • Unbiased decision making (if implemented correctly)
  • Productivity gains
  • Cost reduction
  • Improved data acquisition and analysis

And these are great! However, you should look at both sides of the coin when proposing an AI project, and then plan and resource accordingly. Go in with the mindset that this AI project is never going to be finished, and you’ll need to work in ongoing costs around continuous education, security, inference, and other factors.


Need to upskill yourself or others in AI? Here’s where to start

Pluralsight offers professionally-led AI training pathways that can equip your people, or yourself, with the skills needed to make the most of AI tools and tech. It also offers AI sandboxes, where you can give your teams hands-on practice with critical AI tools without risk to your production environment, as well as AI courses for non-tech professionals and business leaders. Find out more.

Adam Ipsen


Adam is a Lead Content Strategist at Pluralsight, with over 13 years of experience writing about technology. An award-winning game developer, Adam has also designed software for controlling airfield lighting at major airports. He has a keen interest in AI and cybersecurity, and is passionate about making technical content and subjects accessible to everyone. In his spare time, Adam enjoys writing science fiction that explores future tech advancements.
