Section 4 - Overcoming adoption challenges
Businesses are alert to the challenges of data and ethics in their implementations of generative AI
Almost half of all respondents cite data challenges — either privacy and security, or usability — as the biggest obstacles to generative AI implementation in their businesses.
The high-profile sponsorship of generative AI allows companies to avoid a key barrier to implementing new technologies: change management at the top. With other innovative technologies, top executives often either fail to recognize the potential or decline to prioritize adoption. Generative AI does not have that problem: only 4% of respondents cited C-suite buy-in among their most significant challenges.
Figure 8. Data, skills and ethics are primary generative AI challenges
Source: Infosys Knowledge Institute
With executive support and growing budgets, generative AI has a clearer path to success than many other new technology initiatives. But that does not guarantee a smooth road to adoption. Privacy concerns, data quality, and lack of talent all weigh on the minds of business leaders. In fact, these issues contribute to the “pilot purgatory” faced by companies struggling to move from small experimental projects to adoption at scale.
“Hallucinations” and intellectual property (IP) infringement are significant inherent risks of generative AI that relies on public data. For that reason, many businesses look to build organization-specific tools trained on corporate rather than public data.
Yet corporate data is often incomplete and poorly formatted for effective use, and data quality is a particularly difficult problem to solve. One option is synthetic data, in which statistical algorithms fill the gaps in real data sets with artificially generated records.
This approach can be used to model real-world situations and train generative AI models and algorithms. However, it comes with several concerns, including the cost of building synthetic data and keeping it consistent with the original data.
Synthetic data also tends to mimic and replicate the biases inherent in the original data from which it is derived. And while in theory synthetic data contains no personal information, it is nonetheless linked to real data about real people.
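A minimal sketch illustrates why synthetic data replicates bias. The data and group names below are hypothetical, and fitting a normal distribution per group is just one simple synthesis technique; the point is that any generator fitted to skewed source data reproduces the skew.

```python
import random
import statistics

# Hypothetical source data with an embedded bias: one group is
# systematically paid less. (Illustrative values, not survey data.)
real = {"group_a": [70, 72, 75, 71, 74], "group_b": [50, 52, 49, 51, 53]}

def synthesize(values, n, rng):
    """Generate synthetic records by sampling a normal fit to real values."""
    mu = statistics.mean(values)
    sigma = statistics.stdev(values)
    return [rng.gauss(mu, sigma) for _ in range(n)]

rng = random.Random(0)
synthetic = {group: synthesize(vals, 1000, rng) for group, vals in real.items()}

# The synthetic set mimics the original bias: group_b still trails group_a.
gap = statistics.mean(synthetic["group_a"]) - statistics.mean(synthetic["group_b"])
print(f"synthetic pay gap: {gap:.1f}")
```

No amount of extra sampling closes the gap, because the generator only knows the fitted distributions; removing the bias requires intervening on the data or the model, which is why ethics practitioners need to be involved.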
Businesses considering synthetic data to build their own generative AI tools must work closely with data and ethics practitioners to manage these risks. If these biases are not dealt with, any tools built on top of this data can produce inaccurate results.
All data, whether synthetic or real, must reach an acceptable state before it is used to build generative AI models and tools. Firms also need access to data scientists to clean and classify data before using it in generative AI.
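The kind of cleaning pass referred to above can be sketched as follows. This is an illustrative example, not any specific firm's pipeline; the record fields and normalization rules are assumptions for demonstration.

```python
import re

# Hypothetical raw corporate records: duplicated, incomplete, and
# inconsistently formatted text that should not reach a model as-is.
records = [
    {"id": 1, "text": "  Refund POLICY:   30 days  "},
    {"id": 1, "text": "  Refund POLICY:   30 days  "},  # exact duplicate
    {"id": 2, "text": None},                            # incomplete record
    {"id": 3, "text": "Shipping is free over $50"},
]

def clean(rows):
    """Drop incomplete and duplicate records; normalize whitespace and case."""
    seen, out = set(), []
    for row in rows:
        if not row["text"]:
            continue  # discard records missing required fields
        text = re.sub(r"\s+", " ", row["text"]).strip().lower()
        if text in seen:
            continue  # discard duplicates detected after normalization
        seen.add(text)
        out.append({"id": row["id"], "text": text})
    return out

print(clean(records))
```

Real pipelines add classification steps on top of this (tagging records as confidential, personal, or public, for example) so that sensitive material can be excluded before training.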
Synthetic data, and subsequently models fed with generative AI-created information, risks creating a downward spiral of quality that will undermine the utility of, and trust in, public foundation models.
Success depends as much on talent as on data science. As we discuss in Tech Navigator, the Horizon Technology Innovation model maps the stages of an organization’s AI journey across three horizons: H1, H2, and H3 (Figure 9).
The first horizon, H1, is driven by machine learning, where data scientists with math and econometrics skills are in demand. In H2, driven by deep learning, the need shifts to data engineers.
Figure 9. Three horizons companies cross in their AI journey
Generative AI is an H3 technology, and a key talent requirement here is prompt engineers, who straddle the boundary between programming and creative writing and are in high demand and highly compensated. As we noted earlier in this report, many companies seek to upskill and reskill their current employees, though significant numbers also look outside for new skills.
Our research finds that while most respondents believe generative AI will have a positive impact on all business outcomes, a small group expects to see negative effects on their talent, business models, or cost efficiency (Figure 3).
As with many digital innovations, companies need to consider their holistic operating model. Based on our research, internal experience, and client work, an operating model has emerged that harnesses AI’s potential and addresses risk while evolving operations. While generative AI’s initial outlook may be positive, success will require careful planning and design as talent and operating model constraints mount.
While our respondents cited data and talent as their primary concerns, leaders should not overlook ethics, which only 14% named a top concern (Figure 8).
AI ethics, bias, and model transparency are widely discussed beyond the corporate world: consumers and shareholders are aware of these issues, even if they don’t understand the underlying technology. They are concerned that opaque generative AI tools make decisions based on data with embedded biases, perpetuating societal disadvantages such as discrimination against certain ethnicities or genders.
Other enterprise challenges for companies using generative AI include malicious use of AI-generated malware and misinformation, as well as copyright issues. US courts have ruled that purely AI-generated outputs cannot be copyrighted, while lawsuits are pending over the use of original works scraped to train foundation models.
Given these copyright concerns, businesses need active governance to oversee employee use of consumer tools that encourage users to upload corporate IP and commercially sensitive material.
Building and deploying governance and generative AI oversight in the workplace requires a responsible-by-design approach with senior executive oversight, not a reactive, ad hoc exercise.
Generative AI Radar 2023: North America
[Figure 8 response categories: data privacy and security; identifying and filtering the right data; skilled workforce; implementing generative AI ethically]