Confident but cautious
Europe’s regulatory regime encourages a slower, more cautious approach, but it also leads to more confidence about data quality and ethics
European companies are still overwhelmingly upbeat about generative AI, despite their slower spending and adoption. On average, 76% of European respondents have positive expectations about the impacts of generative AI across their business.
But it is fair to say that European companies do take a slightly more conservative view. This is reflected not only in the larger share of respondents who are neutral or negative about generative AI compared with North America (Figure 8), but also in their greater concern about ethics and bias challenges and in the more senior governance and sponsorship involved in their generative AI initiatives.
Figure 8. Europeans are positive but less so than North Americans
Source: Infosys Knowledge Institute
Yet there are indications that European companies are more confident and certain about managing generative AI, the data that underpins it, and the risks that could emerge from it.
European and North American companies take markedly different views of the challenges they must overcome to work effectively with generative AI.
Companies in both regions agree that the biggest obstacle to success is data privacy and security, though Europeans rate this as a significantly bigger issue than North Americans do. Beyond this, the views diverge.
North Americans cite data usability and lack of skills, knowledge, or resources as the next two biggest obstacles to their success. Yet these are much smaller concerns for Europeans. Instead, European companies rate concerns about ethics, bias, fairness, and safety as a much more important challenge to overcome (Figure 9).
Figure 9. Security and ethics are biggest concerns
Ethics and bias concerns were identified as the most difficult obstacle to implementing generative AI by 23% of European respondents, compared with just 14% of North American respondents.
These concerns are likely why European companies have a significantly higher level of senior involvement in generative AI initiatives.
For approximately one in three companies in Europe, the board of directors is responsible for setting rules and policies around generative AI.
In comparison, this figure is just over 20% in North America. For European companies, the board of directors is also more likely to be the primary sponsor of generative AI initiatives (19%) compared with North America (10%).
The more stringent regulatory environment in Europe — notably the General Data Protection Regulation (GDPR) and the EU AI Act — is most likely driving Europe’s different approach to generative AI.
On one hand, Europe’s slower adoption and lower spending could be a result of concerns about regulation. On the other, years of data regulation have made European companies more familiar with managing data effectively.
Indeed, European companies are significantly more confident about their ability to manage and control generative AI systems than their North American counterparts. More than 70% of European respondents have a positive view of their generative AI management abilities, compared to less than 60% of North American respondents (Figure 10).
Figure 10. Europeans are more confident about managing AI
This confidence in the ability to manage and mitigate the impact of AI across the organization also extends to talent. European companies were much more likely than their North American counterparts to say they would upskill and retrain existing staff or hire new recruits for their generative AI initiatives, and they plan to be much less reliant on skills from external vendors (Figure 11).
Figure 11. Upskilling and recruitment favored to fill skills gap
The clear preference for upskilling and reskilling their own talent, and the belief in their ability to recruit new talent, indicate that European companies are confident about and committed to building firm generative AI foundations within their businesses.
It is worth noting, though, that despite this confidence, European workforces are not much more ready for generative AI than their North American counterparts: 59% of European respondents felt positive about the readiness of their company’s workforce to adopt and use generative AI technologies, compared with 56% in North America.
The confidence shown by European companies presents a region prepared to move forward with a sure-footed stance. The regulatory environment (with GDPR playing an important role), strong and safe data, leadership buy-in, and board involvement all allow organizations to pursue the potential of generative AI in a safer, more constructive way.
However, European companies do need to accelerate the creation of value through generative AI while continuing to do so in a responsible manner. Infosys’s AI-first operating model and responsible-by-design approach can provide a pathway for European companies to develop faster, more effectively, and more safely.
For future-ready firms in Europe, building an AI-first operating model is key. Firms that want to take advantage of generative AI should take a five-pronged approach — across product, design, data, talent, and engineering — guided by the central tenets of shared digital infrastructure, micro-change management, and use of a partner ecosystem (Figure 12).
Figure 12. A digital operating model for the AI-first enterprise
Product+ means going product-centric for speed and innovation. European firms should organize around generative AI products or value streams and deliver solutions in short, proof-of-concept bursts of energy. This eliminates internal team silos, increases business velocity, and prioritizes customer needs.
Design+ means creating generative AI solutions where designers collaborate with data scientists, AI specialists, and ethicists to create better experiences.
Data+ means making data AI-ready. European firms cite ethics and bias as impediments to generative AI success, whereas data privacy, security, and usability most often curtail adoption in North America. A future-ready operating model requires AI-ready data — with all data assets available, accessible, discoverable, and of high quality.
Firms that are serious about taking advantage of generative AI should create a range of “live” data products, across many data types, for use by the product-led engineering teams (creators) and business units (consumers) who need it.
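As a rough illustration, a “live” data product can be expressed as a shared contract that both creators and consumers code against. The sketch below is an assumption for illustration only; the field names are not a standard schema.

```typescript
// Illustrative sketch of a "live" data product contract: one typed definition
// shared by the product-led engineering teams that create it and the business
// units that consume it. Field names are assumptions, not a standard schema.

interface DataProduct<T> {
  name: string;                 // discoverable identifier, e.g. "customer-interactions"
  owner: string;                // accountable team or domain
  qualityScore: number;         // 0-1, published so consumers can judge fitness for use
  lastRefreshed: Date;          // "live" products are refreshed continuously
  fetch(filter?: Partial<T>): Promise<T[]>;  // uniform, accessible way to consume the data
}

// A consumer (for example, a generative AI feature team) depends only on the contract.
async function sampleForFineTuning(product: DataProduct<{ text: string }>, limit: number) {
  const rows = await product.fetch();
  return rows.slice(0, limit).map((r) => r.text);
}
```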
AI-savvy firms will also need a great engineering shop in place, with AI-first architecture using a MACH (Microservices-based, API-first, Cloud-native, and Headless) approach so that new generative AI systems can be quickly plugged in and old ones removed.
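To make the “plug in and remove” point concrete, the following is a minimal sketch of an API-first abstraction over a generative AI provider. The service names, endpoint, and response shape are assumptions for illustration, not a prescribed implementation.

```typescript
// Minimal sketch: an API-first abstraction over generative AI providers.
// The consuming microservice depends only on the TextGenerator interface,
// so a new model service can be plugged in (or an old one removed)
// without touching business logic. Names and endpoints are illustrative.

interface TextGenerator {
  generate(prompt: string): Promise<string>;
}

// One possible provider, called over HTTP in cloud-native fashion.
class HttpModelProvider implements TextGenerator {
  constructor(private baseUrl: string) {}

  async generate(prompt: string): Promise<string> {
    const res = await fetch(`${this.baseUrl}/generate`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ prompt }),
    });
    if (!res.ok) throw new Error(`Model service error: ${res.status}`);
    const data = (await res.json()) as { text: string };
    return data.text;
  }
}

// A headless product service exposes its own API and stays unchanged
// when the underlying generator is swapped.
class ProductCopyService {
  constructor(private generator: TextGenerator) {}

  describeProduct(name: string): Promise<string> {
    return this.generator.generate(`Write a short product description for ${name}.`);
  }
}

// Swapping providers is a one-line change at composition time.
const service = new ProductCopyService(new HttpModelProvider("https://models.example.internal"));
```

Because the product service depends only on the interface, replacing the model provider is a composition-time change rather than a rewrite.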
Talent is also very important. With most firms in Europe upskilling their workers rather than partnering to get the best talent, AI-led learning paths will become ever more important. From product managers and experience designers to digital specialists and platform engineers, making employees future-ready requires firms to invest significant time in AI literacy, transparency, and ethics.
Generative AI solutions rely on high-quality, diverse data sets. This data needs to be connected, protected, and finally, consumed. This demands appropriate governance, with high levels of data health, authority, and compliance with GDPR and other applicable laws. There should also be secure and responsible usage and monitoring processes in place. This leads to what we term “responsible by design.”
All generative AI systems should have a responsible-by-design layer in their architecture that filters out inappropriate requests, profanity, and unethical use. This layer checks model outputs for the following (a minimal code sketch follows this list):
Trust, including a check for explainability, transparency, safety, and standards
Ethics, including a check for bias, core values, and corporate social responsibility
Security
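As an illustration only, the sketch below shows one way such an output-checking layer could be structured, with trust, ethics, and security gates applied before a response is released. The check logic and names are placeholders, not an Infosys implementation; a real deployment would call dedicated classifiers or policy engines for each check.

```typescript
// Illustrative responsible-by-design output filter (assumed names and logic).
// Every model output must pass all checks (trust, ethics, security)
// before it is released to the caller.

type Check = { name: string; passes: (output: string) => boolean };

// Placeholder deny-list; a real system would use trained classifiers.
const denyList = ["placeholder-biased-phrase"];

const checks: Check[] = [
  // Trust: placeholder heuristic standing in for explainability and safety checks.
  { name: "trust", passes: (o) => o.trim().length > 0 },
  // Ethics: screen for known biased or harmful phrases.
  { name: "ethics", passes: (o) => !denyList.some((t) => o.toLowerCase().includes(t)) },
  // Security: block outputs that appear to leak sensitive numbers (e.g., card-like digits).
  { name: "security", passes: (o) => !/\b\d{16}\b/.test(o) },
];

function reviewOutput(output: string): { approved: boolean; failed: string[] } {
  const failed = checks.filter((c) => !c.passes(output)).map((c) => c.name);
  return { approved: failed.length === 0, failed };
}

// Wrap the model call so nothing reaches the user unchecked.
async function safeGenerate(
  generate: (prompt: string) => Promise<string>,
  prompt: string
): Promise<string> {
  const output = await generate(prompt);
  const review = reviewOutput(output);
  return review.approved
    ? output
    : `Response withheld (failed checks: ${review.failed.join(", ")})`;
}
```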
We recommend creating an AI council to govern and guide design, deployment, and use of generative AI and its outputs.