6 min read

Crossing The AI Deployment Gap

By Alistair Garfoot on Nov 23, 2023 11:07:06 AM

With its unparalleled ability to turn raw data into a valuable resource, AI is more than an opportunity to innovate. Nevertheless, this doesn’t make it the answer to every problem that involves data. Understanding and avoiding that misconception is essential, but there is another equally critical consideration when it comes to realising AI's impact: how to turn models that are theoretically performant pre-deployment into operationalised systems in the real world.

Going From the Lab to the Real World

Corporate investment in AI has skyrocketed in recent years, growing from $12.75 billion in 2015 to $91.9 billion in 2022. Despite this, only 54% of AI projects make it from the pilot stage to deployment. Instead of being operationalised, almost half of all projects remain archived in experimentation limbo. Many organisations mistakenly view AI itself as the goal, and the misplaced priority of “doing more AI” without an accompanying deployment strategy causes an ever-widening chasm between what works theoretically and what can deliver impact in the real world.

We see this manifesting in a number of ways. AI models may be built to solve a theoretical problem, such as identifying a signal and matching it to a catalogue of known examples. In the real world, however, they are expected to solve the adjacent but fundamentally different problem of identifying signals they’ve never seen before. AI might also be expected to detect a signal of interest, but in practice, as we move from a demonstration problem to a real one, data governance challenges mean the required data cannot actually be sent to where the model is hosted. Ultimately, a model that performs adequately in lab testing ends up being totally unfit for purpose when it comes to addressing the problem it was(n’t) built to solve.
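
To make that first gap concrete, here is a minimal sketch contrasting a model that always matches a signal to its catalogue with one that can abstain when it sees something new. The catalogue entries, confidence threshold, and function names are illustrative assumptions, not details of any real deployment.

```python
import numpy as np

CATALOGUE = ["signal_a", "signal_b", "signal_c"]  # known signal types (illustrative)
UNKNOWN_THRESHOLD = 0.70  # assumed cut-off; a real system would calibrate this

def classify_closed_set(confidences: np.ndarray) -> str:
    """Lab framing: always match to the best catalogue entry,
    even when the signal is one the model has never seen."""
    return CATALOGUE[int(np.argmax(confidences))]

def classify_open_set(confidences: np.ndarray) -> str:
    """Deployment framing: abstain on low-confidence inputs
    instead of forcing a match, and route them to an analyst."""
    best = int(np.argmax(confidences))
    if confidences[best] < UNKNOWN_THRESHOLD:
        return "unknown_signal"
    return CATALOGUE[best]

# A genuinely novel signal often produces diffuse confidences:
novel = np.array([0.40, 0.35, 0.25])
print(classify_closed_set(novel))  # "signal_a" -- a confident-looking mistake
print(classify_open_set(novel))    # "unknown_signal"
```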

The Fallacy of Operationalisation as an Afterthought

There are significant practical challenges that come with using AI in complex problem spaces, and if these challenges are not considered at design time, then we end up in a situation where attempts are made to retrofit a highly complex model with makeshift, post-hoc approaches to cover the deployment gaps. Frequent attempts to mitigate LLM hallucination and misuse with increasingly complex guardrails are a prime example of this in today's world. If a system is to be deployed operationally, it should be designed and optimised for those operational requirements. This means that performance is not just a measure of accuracy or precision but also simplicity, explainability, transparency, stability, adaptability, and many other factors that are equally significant in their own right.

AI model performance is a combination of many factors, not just accuracy or precision
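
One way to picture this is as a weighted scorecard rather than a single accuracy number. The criteria, weights, and scores below are hypothetical placeholders; a real assessment would derive them from the operational requirements discussed above.

```python
# Hypothetical scorecard: deployment readiness as a weighted blend of
# criteria rather than accuracy alone. Weights and scores are illustrative.
WEIGHTS = {
    "accuracy": 0.25,
    "explainability": 0.20,
    "stability": 0.20,
    "simplicity": 0.15,
    "adaptability": 0.20,
}

def readiness_score(scores: dict) -> float:
    """Weighted average of per-criterion scores, each in [0, 1]."""
    return sum(WEIGHTS[name] * scores[name] for name in WEIGHTS)

# An accurate but opaque, brittle model still scores poorly overall,
# which is exactly the behaviour an operational assessment should show.
opaque_model = {
    "accuracy": 0.95,
    "explainability": 0.30,
    "stability": 0.40,
    "simplicity": 0.35,
    "adaptability": 0.50,
}
print(f"readiness = {readiness_score(opaque_model):.2f}")  # 0.53
```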

When setting standards for performance in a lab setting, there is often insufficient consideration of how those standards will translate to the system’s use in a real-world situation. Once a system passes its “experimental” phase, it is commonly handed off to a totally separate operations team, which then has to deal with all the baked-in choices made in the course of its scientific design, accurate or otherwise. This introduces significant and unnecessary roadblocks to deployment that could be avoided with more joined-up thinking.

In all high-stakes applications, it’s also paramount that decision-makers retain accountability and can explain how and why they took certain courses of action. This extends to the AI they might use to inform those decisions. If AI is built without consideration of explainability and governance requirements, it can have devastating consequences. It is essential to consider who will use the system. What level of assurance will they need that the AI is functioning correctly? How are they going to sign off on decisions, and what kind of explanation do they need? The bigger the dataset and the more complex the model, the less likely transparency and explainability are to occur by default. These characteristics must therefore be priorities, not afterthoughts.

The Pathway to Operationalising AI 

A car manufacturer designing a luxury saloon would set particular performance standards in testing for aerodynamics, speed, and power, trading some of each in favour of comfort and aesthetics. All considerations are incorporated into the design phase since, for example, the size of the engine will impact the shape of the car and its subsequent performance in the wind tunnel. If building a race car, their priorities would doubtless be different. Either way, the system is designed with the end user and their perception of performance in mind, and the design process is optimised accordingly.

Just because AI is an advanced science doesn’t mean it should be treated differently from any other operational technology. In fact, treating it like a science project is the root of the problem. Instead, the correct approach is to dedicate a single, evolving team that takes a capability from ideation through to operationalisation, avoiding handoffs. To understand the problem, this team needs to understand the data, the technical feasibility, the integration environment, the user context and need, and the governance and assurance process. This does, of course, make for a highly challenging problem to solve, but the alternative is failing to consider critical elements at the earliest possible stage, which leads to misplaced assumptions being baked into the system.

The team working on the solution must have the end-to-end experience required to assure rapid success or failure and subsequent cost-effective iteration. The most effective way to accelerate designed systems to deployment is to consider them in product terms with a user-centric, design-thinking perspective. Product designers and owners will consider deployment from the end user and their ideal experience, working backwards. Scientists will consider deployment from the science and its technical feasibility, working forwards.

By tackling the problems from both ends, you are far more likely to arrive at a deployable solution. From both perspectives, teams need to favour agility over deep and unwavering certainty and remember they are building a solution to help solve problems that are not yet well understood. Designs therefore need to be refined as new information is discovered and complications arise. Preparing for this eventuality from the start enables the delivery of a system that is fit for purpose on completion. 

The Power of a Multi-Organisation Partnership

In challenging problem spaces, the signal-to-output workflow can be extremely complex. Whether it is a sound or image detected by a sensor, processed through a computer, and presented to a human, or data created by a human and presented back to other humans, there are multiple steps in the process. No single hardware manufacturer, end customer, consultancy, data engineer, or, indeed, AI company alone is likely to possess the full range of answers. Look for deep partnerships with a mutual understanding of relative skills and an awareness of strengths and limitations. These diverse perspectives, when combined at design time, produce a whole far greater than the sum of its parts and massively increase the likelihood that the designed system will make it to deployment.

The creation of solutions that maximise value and impact for the organisations they serve must start at the earliest stages of problem and system design. By beginning with the end goal in mind, we can design for what truly matters and avoid the rabbit holes of innovation-led, scientifically over-focused, or technically infeasible ideas. Once deployed, however, the challenge of maintaining a system’s relevance and aptitude for ongoing use is a continuous one. Most machine learning models degrade over time as the world changes and new data is introduced, and this needs to be monitored carefully so that remedial action can be taken at the precise moment it is needed. In the next blog in this series, we’ll start to delve into what this looks like in practice (sign up for alerts here).
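
As a taste of what that monitoring can involve, here is a minimal sketch using the Population Stability Index (PSI), one common measure of input drift. The bin count, thresholds, and synthetic data are assumptions for illustration, not a prescription.

```python
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index: compares the live feature distribution
    against the training-time baseline; larger values mean more drift."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range live values
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)  # avoid log of zero
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)  # feature at training time
live = rng.normal(0.5, 1.2, 10_000)      # the world has moved on

# A common rule of thumb: < 0.1 stable, 0.1-0.25 investigate, > 0.25 act.
print(f"PSI = {psi(baseline, live):.2f}")
```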

The nature of AI and the complex and ever-changing nature of the problem spaces in which it is used mean there is no precise blueprint for success. Approaching AI adoption in the right way, with the end goal in mind and including partners, is a critical first step toward real impact. With more of this operationally focused attitude, the scope of problems that AI can solve will only increase, and the benefits will be felt across society.

Enjoyed this blog? Check out our previous post, Why AI Isn’t the Answer to Every Data Problem.

Written by Alistair Garfoot

Alistair Garfoot is Mind Foundry’s Director of Intelligence Architecture, where he designs applications of AI technology to drive operational impact across verticals. With expertise across AI state-of-the-art and operational problem identification, Alistair accelerates the realisation of AI-driven impact at the enterprise scale.
