Executive summary
We are at the beginning of a major technological era, one dominated by artificial intelligence (AI). Firms that build on their digital and cloud investments with AI will leapfrog those that don’t and will dominate industries and professions.
This Tech Navigator sets out to explore what savvy firms need to do to put those investments to work. While remaining human-centric, as we outlined in last year’s report, they will need to use AI to augment and amplify human potential, to become more innovative, to unlock efficiencies at scale, to grow faster, and to build a connected ecosystem.
Generative AI is the latest wave in AI advancement, following machine learning and predictive analytics, then deep learning, and now transformer architectures and foundation models. Stanford describes foundation models as “models trained on broad data, generally using self-supervision at scale, that can be adapted to a wide range of downstream tasks”; they power the latest consumer products such as ChatGPT, which is built on OpenAI’s GPT-4 large language model (LLM).
By leveraging the power of this general-purpose AI technology, we are seeing enterprises evolve from using AI to manage operations and specific business functions to using it to reimagine how the business delivers customer experiences and services (Figure 1).
Figure 1. Source: Infosys
Throughout the report, we refer to the three waves of AI, categorized into Horizons 1, 2, and 3, or H1, H2, and H3. Horizons are a way to evaluate tech trends. Horizon 1 (H1) technologies are well established and widely used. Horizon 2 (H2) technologies are in active use and account for most ongoing work. Horizon 3 (H3) technologies are emerging: used in pockets or for innovation pilots, they include disruptive ideas across enterprises, and some have the potential to become mainstream. Advances in H3 can also create new risks in compliance, safety, and other areas that must be managed.
AI-first firms will have a strategy in place for deploying these models. They will understand which experiences and processes to amplify through AI and the tooling and automation needed to deploy these models, and they will have the right talent and operating model to bring this AI to life.
This is what we explore in the coming pages — a prescriptive framework composed of four building blocks (Figure 2) that we are using in Infosys’s own transformation from a cloud-native enterprise to an AI-native one. The four building blocks are:
AI-first experiences and processes
To evaluate and identify experiences and processes that will benefit most from AI-first reimagination, including AI assistants.
AI engineering excellence
To build engineering processes, tools and automation that ensure AI products and services can be delivered and scaled.
Responsible AI by design
To create controls and processes to ensure that AI is trustworthy and meets regulatory guidelines and policies.
The AI operating model
To build AI-first talent, processes and a product-centric operating model to design and deliver AI services.
Firms must prioritize the products, processes, and features that are most ripe for transformation through AI, and evaluate them in terms of business impact, ease of implementation, and trustworthiness.
An AI canvas can also be drawn up that covers business problems, expected end-user value, and the necessary guardrails and controls. Each experience must have a human-in-the-loop (HIL) and telemetry to improve AI model performance over time.
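To make this concrete, here is a minimal sketch of one AI canvas entry modeled as a small Python data structure. The field names and the record_feedback helper are illustrative assumptions for this summary, not part of the report’s formal framework; they simply show how human-in-the-loop verdicts and telemetry could be captured to improve a model over time.

```python
from dataclasses import dataclass, field


@dataclass
class AICanvas:
    """Illustrative AI canvas entry for one AI-first experience (hypothetical fields)."""
    business_problem: str           # the problem the AI experience addresses
    end_user_value: str             # expected value for the end user
    guardrails: list[str]           # controls, e.g. content filters, rate limits
    human_in_the_loop: bool = True  # every experience keeps a human reviewer
    telemetry_events: list[dict] = field(default_factory=list)

    def record_feedback(self, prediction: str, reviewer_verdict: str) -> None:
        """Capture HIL feedback so model performance can be improved over time."""
        self.telemetry_events.append(
            {"prediction": prediction, "verdict": reviewer_verdict}
        )


# Example: a customer-support summarization experience
canvas = AICanvas(
    business_problem="Summarize support tickets for faster triage",
    end_user_value="Agents resolve tickets sooner",
    guardrails=["PII redaction", "toxicity filter"],
)
canvas.record_feedback("Customer reports a billing error", "approved")
```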
In order to build good AI, the right engineering foundation and tooling design approaches are needed. A PolyAI approach provides the flexibility to choose the best-fit AI solution for a given problem, drawing on both open and closed AI models depending on the use case (a minimal routing sketch follows the two examples below).
Closed models such as GPT-4, from OpenAI, can be used with generalized use cases to quickly realize value.
For specialized use cases, models, and IP, open models such as BLOOM and CodeGen can be fine-tuned using a narrow transformer approach to create differentiation and competitive advantage.
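The sketch below illustrates that routing decision. The functions closed_model_api and finetuned_open_model are hypothetical stand-ins for a hosted API call and a locally fine-tuned open model; real client code for GPT-4, BLOOM, or CodeGen will differ.

```python
from typing import Callable


def closed_model_api(prompt: str) -> str:
    # Placeholder for a hosted, closed model such as GPT-4 (actual client code differs).
    return f"[closed-model answer to: {prompt}]"


def finetuned_open_model(prompt: str) -> str:
    # Placeholder for a fine-tuned open model such as BLOOM or CodeGen.
    return f"[fine-tuned open-model answer to: {prompt}]"


def route(prompt: str, specialized: bool) -> str:
    """Pick the best-fit model for the use case: open and fine-tuned for
    specialized, IP-sensitive work; closed and hosted for generalized tasks."""
    handler: Callable[[str], str] = (
        finetuned_open_model if specialized else closed_model_api
    )
    return handler(prompt)


print(route("Draft a marketing email", specialized=False))
print(route("Classify proprietary contract clauses", specialized=True))
```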
The entire software engineering and operations life cycle should also be augmented through AI assistants that help improve the productivity of developers, testers, and operations teams.
Third, firms should apply the shift-left principle to responsible AI by design. This isn't easy, however. Although the use of AI is exploding, ethics and governance are scrambling to keep up, with some jurisdictions banning the use of generative models until better regulation is in place.
To ensure failsafe AI systems, we recommend baking ethics into every step of the AI engineering and application life cycle.
Our responsible AI framework comprises five building blocks: setting the objective of building a trustworthy process or product, establishing appropriate governance, monitoring and measuring progress, building capabilities, and ensuring the AI complies with data protection, record keeping, and reporting requirements.
In our Digital Radar 2023, released earlier this year, we found that more important than the introduction of technology is the way the organization is set up to take advantage of it.
Going AI-first means recognizing that some jobs will be displaced and new roles like prompt engineers and model tuners will be created in their place.
We recommend that organizations use a product-centric approach for both AI product development and core engineering.
A product-centric mindset will therefore be vitally important, as will what we term “micro is the new mega”— this is a way of turning change projects into a series of micro-sprints that produce exponential results and business outcomes.
AI is not just another technology, but one that will upend the way that organizations make money in the future and remain competitive. No wonder then that according to Stanford’s AI Index report 2023 and CB Insights “State of Generative AI” report, we are witnessing an explosion in AI development, with more than 37 LLMs available, $2.6 billion in equity funding and 500,000 AI publications released in 2022 alone.
[Chart: revenue, staffing, reputation, profit, and share price, as compared with other report readers]