
Many AI projects achieve returns of 25-50% or more within a year, and most cut costs along the way, sometimes by 30% or more. But if cost reduction is all they deliver, you're leaving a lot of value on the table.
From General-Purpose To Vertical AI
Many of the first business AI implementations failed. The technical capabilities simply weren't there. But even when they were, the wrong assumptions drove many projects into the ground. Teams overestimated the predictive power of limited data and underestimated the cost of human oversight. They assumed that quality data was abundant and that employees could fill in the gaps. They overlooked the outsized influence of a handful of outliers. And they believed that validating a model's output would be as easy as spot-checking a spreadsheet.
But they also wielded the blunt hammer of solutionism, letting AI specialists loose on business problems. Specialization has its uses (the biggest investment banks won't be switching to all-purpose models anytime soon), but in most corporate settings AI is still deployed as a general-purpose technology. When companies point general models at specific problems, the results are anything but general: broad training data can produce an overconfident model that extrapolates from insufficient evidence, delivering an error with 99.9% confidence.
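That failure mode is easy to demonstrate. The sketch below is a toy (hypothetical data, a hand-rolled logistic regression, no real business model): a classifier trained only on inputs between 0 and 1 is asked about an input of 50, far outside anything it has seen, and answers with near-total certainty.

```python
import math
import random

random.seed(0)

# Hypothetical training data: clean, separable, and confined to [0, 1].
X = [random.uniform(0, 1) for _ in range(200)]
y = [1 if x > 0.5 else 0 for x in X]

# Minimal logistic regression fit by gradient descent (illustration only).
w, b = 0.0, 0.0
for _ in range(2000):
    gw = gb = 0.0
    for xi, yi in zip(X, y):
        p = 1 / (1 + math.exp(-(w * xi + b)))
        gw += (p - yi) * xi
        gb += (p - yi)
    w -= 0.5 * gw / len(X)
    b -= 0.5 * gb / len(X)

def confidence(x):
    """Model's P(class = 1) for input x."""
    return 1 / (1 + math.exp(-(w * x + b)))

# x = 50 is far outside the training range, yet the model is near-certain:
# extrapolation dressed up as confidence.
print(round(confidence(50.0), 4))  # → 1.0
```

Nothing in the training data justifies any claim about x = 50; the certainty is an artifact of the model's functional form, which is exactly the trap general models fall into on narrow problems.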
The R&D Acceleration Case
The clearest example of AI performing tasks that humans simply cannot comes from research-driven sectors, most notably life sciences. Developing a new drug candidate from lab discovery to FDA approval typically takes more than ten years, and most candidates fail somewhere along the way.
Here AI changes the rules of the game. Instead of testing every prospective compound in sequence, at exorbitant cost, an AI model can simulate billions of molecular interactions in the time a lab needs to run a few bench experiments. Drug discovery stops being a labor-bound process and becomes a compute-bound one, and compute scales. Resources at https://www.sandboxaq.com/learn/ai-drug-discovery describe how AI-driven simulation is compressing screening timelines from months to weeks or days. The integration of AI into pharmaceutical R&D alone could generate between $60 billion and $110 billion annually in economic value by accelerating compound discovery.
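The economics of that shift can be sketched in a few lines. The scoring function below is a random stand-in, not a chemistry model, and the candidate names are invented; the point is the funnel shape: score an enormous pool cheaply in software, then send only the top handful to expensive bench work.

```python
import random

random.seed(42)

# Hypothetical candidate pool; in practice these would be molecular structures.
candidates = [f"compound-{i}" for i in range(100_000)]

def predicted_affinity(compound: str) -> float:
    # Stand-in for a learned binding-affinity score; a real pipeline would
    # run physics-based or ML scoring here.
    return random.random()

# Cheap in-silico pass over the entire pool...
scored = sorted(((predicted_affinity(c), c) for c in candidates), reverse=True)

# ...and only the top 50 candidates go on to wet-lab validation.
shortlist = scored[:50]
print(len(shortlist))  # → 50
```

One sequential bench experiment per candidate would make the same funnel take years; the in-silico pass makes its cost a function of compute, not headcount.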
This is no longer about being more productive. It is about reaching results that were simply out of reach before.
People tend to assess AI tools on performance metrics alone. In many cases, though, the harder question is: can we understand what the model did, and why?
Getting Out Of Pilot Purgatory
Many companies have run AI pilots. Few have managed to scale them. The gap usually comes down to one of two causes: the initial problem was poorly chosen, or the team underestimated how badly existing data silos would degrade the model.
To move ahead pragmatically, begin with a problem that has defined success criteria, a limited scope, and easily accessible data. By "easily accessible" we mean data that is well-labeled enough to train on, though not necessarily perfect. Examples include customer service ticket categorization, predictive maintenance on a specific business line, or document processing for a single category of documents. None of these are particularly exciting, but they can be solved.
Once the solution is live and your colleagues can assess and act on the model's outputs, the true scale of the data preparation effort becomes apparent. A working model is the best argument for convincing departmental decision makers to invest time and resources in proper data handling.
Expanding the solution to a cross-departmental or even an enterprise-wide level is not a question of technology, though. The painful truth is that data scientists won’t face data scientists from other departments and decide to turn knight-errant on behalf of their AI models. Data is carefully guarded in most organizations. It is often power.
What This Means For Competitive Positioning
Treating AI as a cost-cutting line item overlooks its true potential. Companies that see it this way will cut some costs. But those that treat it as an R&D engine will build products, cures, and answers that analog-only competitors won't be able to match for long. The gap between these two approaches will only widen. The real decision isn't whether AI is in your mix; it is whether AI is aimed at problems that will make a difference.







