Building block 4
The AI operating model
AI talent is one of the top four challenges executives face in transforming their enterprises to AI-first (Figure 9). What specific talents and skill sets should they seek?
Prompt engineering is one key skill. Anthropic, the recipient of a recent $300 million investment from Google, is hiring prompt engineers at salaries of up to $335,000 a year, underscoring the value placed on prompt engineering as a crucial skill set.
Prompt engineering is not easy. Orchestrating these systems is a complex, multifaceted task that requires a deep understanding of the interplay between foundation models, external systems, data pipelines, and user workflows.
Workers in this new normal will discover, test, and document best practices for the wide range of tasks customers perform when collaborating with AI.
Prompt engineers will also build libraries of prompts or prompt chains, along with tutorials and interactive tools to pass that knowledge to others (a simple sketch of such a library follows). According to experts at Infosys, firms should seek people with a hacker spirit who love solving puzzles, have excellent communication skills to convey technical concepts to both humans and machines, and are familiar with the architecture and operation of large language models.
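To make this concrete, here is a minimal sketch of what a shared prompt library and a simple two-step prompt chain might look like in Python. The template names, the `run_model` stub, and the chaining pattern are hypothetical illustrations, not a prescribed toolset.

```python
# Illustrative prompt library: named, reusable templates that
# prompt engineers document and share across the organization.
PROMPT_LIBRARY = {
    "summarize": "Summarize the following text in three bullet points:\n{text}",
    "action_items": "List the action items, one per line, in:\n{text}",
}

def run_model(prompt: str) -> str:
    """Stand-in for a call to a foundation model API."""
    return f"<model output for: {prompt[:40]}...>"

def run_chain(steps: list[str], text: str) -> str:
    """A simple prompt chain: each step's output becomes the next input."""
    result = text
    for name in steps:
        result = run_model(PROMPT_LIBRARY[name].format(text=result))
    return result

print(run_chain(["summarize", "action_items"], "Meeting notes: ..."))
```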
In our previous work on AI, Tech Compass, we examined the skills required for the three AI horizons: H1, H2, and H3.
H1 called for data scientists with backgrounds in mathematics and econometrics, while H2 shifted the requirement toward data engineers. Now, with H3 in sight, organizations need programmers who can craft prompts that elicit trustworthy responses from foundation models and interact with third-party systems. For instance, the prompt "Buy me an iPhone and send it to X address" will only be effective if a robust orchestration framework exists, allowing the large language model to integrate seamlessly with an e-commerce giant like Amazon.
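The sketch below illustrates the shape of such an orchestration framework. Everything in it is hypothetical: the `call_llm` stub stands in for a real foundation-model API, `place_order` stands in for an authenticated e-commerce integration, and the JSON contract between them is one of many possible designs.

```python
import json

# Hypothetical registry of tools the model is allowed to invoke.
# In a real system these would wrap authenticated e-commerce APIs.
def place_order(item: str, address: str) -> dict:
    """Stand-in for a call to a retailer's ordering API."""
    return {"status": "ordered", "item": item, "ship_to": address}

TOOLS = {"place_order": place_order}

def call_llm(prompt: str) -> str:
    """Stand-in for a foundation-model call that has been instructed
    to answer with a JSON tool invocation."""
    # A real implementation would call a model provider here.
    return json.dumps({
        "tool": "place_order",
        "args": {"item": "iPhone", "address": "X address"},
    })

def orchestrate(user_prompt: str) -> dict:
    """Route the user's request through the model, validate the
    proposed tool call, and execute it against the external system."""
    decision = json.loads(call_llm(user_prompt))
    tool = TOOLS.get(decision["tool"])
    if tool is None:
        raise ValueError(f"Model requested unknown tool: {decision['tool']}")
    return tool(**decision["args"])

print(orchestrate("Buy me an iPhone and send it to X address"))
```

Without the validation and registry layer, the model's free-text output cannot safely touch a third-party system; that gap is precisely what the orchestration framework closes.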
With much of the AI power vested in a few companies, such as Meta, Google, OpenAI, and Microsoft, the need to learn prompt engineering will ripple out to the whole AI-first organization. It's important to recognize that some existing jobs will be displaced and new roles, such as prompt engineers and model tuners, will be created. Existing roles will have to be enhanced with AI tooling and digital twins to become hyper-productive, alongside softer skills and characteristics, including empathy, creativity, problem solving, and integrity.
For now, it’s beneficial to think about the AI-first skills dimension through three prisms. The first is value: people who can scale AI throughout the organization will be in high demand, as there is little value in using AI only in pockets.
Second, value must be buttressed by trust, as we discuss in our Data + AI Radar. Firms should employ experts who understand the privacy, security, and legal aspects of AI.
Third, for those systems not yet in H3, data engineers with cross-domain knowledge are required, as are AI architects with both depth and breadth of software engineering knowledge. This includes understanding containers and code dependencies, peer code review practices, coding standards, logging techniques, clean code and code optimization, and code modularity. At the same time, existing ETL approaches taken by software engineering must be expanded to support AI trust, risk, and security functions. Organizations should look at areas such as bias, differential privacy, data quality, and capabilities around synthetic data generation.
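As one illustration of what expanding an ETL pipeline for AI trust might involve, the sketch below adds a validation stage that checks data quality and a crude group-representation heuristic before data reaches model training. The column names, thresholds, and checks are hypothetical; production pipelines would rely on dedicated tooling and more rigorous fairness metrics.

```python
import pandas as pd

def validate_for_ai(df: pd.DataFrame, group_col: str) -> list[str]:
    """Illustrative post-ETL checks for AI trust, risk, and security.
    Returns human-readable findings rather than failing the pipeline."""
    findings = []

    # Data quality: flag columns with more than 5% missing values.
    missing = df.isna().mean()
    for col, frac in missing[missing > 0.05].items():
        findings.append(f"column '{col}' has {frac:.1%} missing values")

    # Representation: flag groups below 20% of rows, a crude stand-in
    # for rigorous bias and fairness analysis.
    shares = df[group_col].value_counts(normalize=True)
    for group, share in shares[shares < 0.20].items():
        findings.append(f"group '{group}' underrepresented at {share:.1%}")

    return findings

# Toy extract that would normally arrive from the ETL pipeline.
frame = pd.DataFrame({
    "age": [34, None, 52, 41, 29, 61],
    "region": ["north", "north", "north", "north", "north", "south"],
})
for finding in validate_for_ai(frame, group_col="region"):
    print("WARN:", finding)
```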
In all of this, firms must understand that AI-first is a mindset shift that everyone in the business needs to embrace. AI is even more fundamental than mobile and cloud, and good talent will realize that they have to take advantage of it.
While some worry that AI will take their jobs, it is more likely that a person who is expert at using AI will. AI will supercharge the performance of programmers, designers, artists, marketers, and manufacturing planners.
AI-first organizations must also be increasingly product-centric. They will organize the firm around dedicated customer journeys or value streams, rather than traditional functions, so AI products are delivered at the pace required (Figure 10).
Product-centricity is less about products and more about the value delivered. AI products are owned by product managers, a role already in demand. Our Agile Radar research found that 74% of C-suite and IT executives across the US and Europe invest in product management, underlining it as a key business priority.
There are many benefits to the product-centric approach. AI-first, product-based, data-driven firms take advantage of business opportunities by tracking what works well in real time and feeding the insights back into product design.
AI projects are iterative and based on continuous learning, as demonstrated in this report’s section on engineering excellence. In the product-centric operating model, AI product teams are long-lived and fully invested in growth beyond simply launching the capability.
Data and AI product teams integrate their pipelines, and roles and responsibilities are defined to ensure handoffs are seamless and schedules are aligned. At Peloton, for example, new data captured from AI-based virtual fitness studios feeds back into product development. AI teams thus develop an ongoing understanding of customers and users, which in turn leads to better design research processes. This human-centered approach then spurs more innovation, driving deeper, richer user engagement and business outcomes from AI.
This operating model makes IT a stronger force in the business: it becomes integral to product strategy and vested in the success of AI.
As we discuss in our paper on product-centricity, this also creates new business models and revenue sources, and paves the way for a platform ecosystem to flourish. For example, OpenAI and Google have ensured that their AI chatbots also become platforms for other products that are easily integrated, generating significant additional data and value for their businesses.
Product-centricity, however, doesn’t detail how core AI engineering, tooling, and playbooks are used. For this, the concept of “hub-and-spoke” is useful; it is analogous to the platform engineering methodology introduced earlier in this report.
As a middle ground between centralized and decentralized AI and data usage, a hub-and-spoke approach can provide the agility and consistency that AI product-based teams need.
This approach introduces a central team or center of excellence (the “hub”) that owns the AI and data platform, tooling, and process standards. This is complemented by business teams (the “spokes”) that own the AI products for their domains.
This approach resolves the “anything goes” phenomenon of decentralized AI team topologies while empowering subject matter experts (SMEs), or AI stewards, to independently create AI products that cater to their specific needs.
For this approach to be scalable, a common platform that supports AI model sharing, with conformed dimensions, collaboration, and ownership, is critical, much as in platform engineering.
Common models and dimensions, such as time, product, and customer, are established, while domain experts own and define their business process models. This enables self-service and model reuse, increasing efficiency and innovation by allowing product owners and domain experts to combine their AI models with models from other domains to create new mashups that answer deeper questions.
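A minimal sketch of this idea follows, assuming hypothetical churn and upsell scores published by two different spokes, both keyed on a hub-owned customer dimension; the table names, columns, and thresholds are illustrative.

```python
import pandas as pd

# Conformed dimension owned by the hub: one shared definition of
# "customer" that every spoke's models must key against.
customers = pd.DataFrame({
    "customer_id": [1, 2, 3],
    "segment": ["retail", "retail", "enterprise"],
})

# Spoke-owned outputs: each domain team publishes scores from its
# own model, keyed on the conformed customer dimension.
churn_scores = pd.DataFrame({
    "customer_id": [1, 2, 3],
    "churn_risk": [0.82, 0.10, 0.35],
})
upsell_scores = pd.DataFrame({
    "customer_id": [1, 2, 3],
    "upsell_propensity": [0.15, 0.60, 0.70],
})

# Because both domains conform to the same dimension, a product
# owner can mash the models up without bespoke integration work.
mashup = (customers
          .merge(churn_scores, on="customer_id")
          .merge(upsell_scores, on="customer_id"))

# Deeper question: which customers merit a retention offer because
# they are both churn-prone and upsell-ready?
print(mashup.query("churn_risk > 0.3 and upsell_propensity > 0.5"))
```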
Further questions to consider include: how do data and analytics teams work with software engineering and AI teams on the data pipeline? How do firms accelerate the volume of feedback that they can incorporate from domain experts?
As AI booms, executives might struggle with the change management that's required. From rethinking the operating model to establishing governance; from upskilling teams so that they understand responsible AI practices, to performing core engineering work, the process of assimilating AI into an organization involves many complex and interdependent moving parts. Few executives are able to measure product KPIs from pilot to launch and to guarantee a seamless integration of AI.
Microchange management (or “micro is the new mega”) is a way to overcome this inertia. Instead of making drastic changes all at once in a big-bang program, cross-functional product-based teams deconstruct work into a series of small components. These teams iterate change and achieve adoption through an Agile cadence.
AI-first organizations use microchange for core engineering and applied AI, and to change employee behavior (upskilling, for example) through slight modifications to habits and routines.
This is important where product-based culture needs to catch up with advances in AI-first engineering. AI products are piloted on a small scale, and lessons from the pilot are used to refine and then scale the rollout, whether for a generative AI chatbot or IDP.
According to Infosys research, realizing the full potential of change management demands a detailed, meticulous approach: one that prioritizes high-value use cases, starts with small-scale implementations, and emphasizes the importance of people and processes over the technology.
Here, microchange management provides a low-risk approach to turning complex transformations into manageable, bite-sized changes, thus “minimizing the leap of faith required to reach the other side,” as authors Jeff Kavanaugh, head of Infosys Knowledge Institute, and Rafee Tarafdar, Infosys CTO, write in the Harvard Business Review.
With time, this leads to real AI adoption, the overarching goal of an AI-first organization.