

Swift Gen AI adoption spotlights compliance

47% of respondents say they are in the initial stages of evaluating risk and risk-mitigation strategies


In March 2023, KPMG launched its 2023 Generative AI Survey to look more closely at generative AI (gen AI) and identify practical ways for enterprises to harness its potential. The prospects generative AI presents for revolutionising content creation, user engagement, software development, and data analysis appear boundless. Nevertheless, as is often the case with emerging technologies, the journey from buzz to tangible business value is neither straightforward nor simple. It is important to acknowledge that generative AI is still in its infancy and evolving rapidly.

The report draws its insights from a survey encompassing 300 executives hailing from diverse industries and geographic locations. It also benefits from the perspectives of KPMG advisors specialising in artificial intelligence, technology enablement, strategy, and risk management. The survey respondents highlighted a varied set of obstacles to AI implementation. Foremost among these challenges are the scarcity of skilled talent for AI development and implementation, financial constraints, a lack of a well-defined business case, ambiguity surrounding implementation strategies, and a dearth of leadership comprehension and strategic direction.

Despite the palpable excitement surrounding the potential of generative AI, the majority of business leaders express hesitancy about adopting the technology on a large scale or fully capitalising on its capabilities. These leaders are still grappling with the comprehensive repercussions of generative AI on their systems, operations, and workforce. A staggering 69% of them anticipate dedicating the next six to 12 months primarily to enhancing their understanding of the objectives and strategies for generative AI adoption, recognising it as a top-priority endeavour.

Key report findings

· Enterprises lack the right skills to implement generative AI. Only a minuscule share of respondents (1%) say they already have the necessary skills in-house. The rest plan to hire or acquire new talent (24%), train existing talent (12%), or do both (63%).

· Companies also often find it difficult to get the value they want from emerging technologies when they take a siloed approach. Yet 68% of respondents have not appointed a central person or team to organise their response to the emergence of generative AI. For the time being, the IT function is leading the charge.

· Integrating generative AI into the business stands out as a potential roadblock on the path to value creation. Views about four integration capabilities — having the right people, appropriate prioritisation by executive leadership, having the right technology and data infrastructure, and having the right governance models and policies — indicate a clear lack of preparedness.

· 47% of respondents say they are in the initial stages of evaluating risk and risk-mitigation strategies around the technology, with cybersecurity and data privacy as the top risk-management focus areas.

More than anything else, implementation decisions are likely to reflect the level of enterprise risk tolerance. As the report explores, this is a very new technology with many risks. To steer industries toward responsible action around AI broadly, governments around the world have introduced regulations such as the US AI Bill of Rights and the EU AI Act, which require businesses to consider the consequences of adopting the technology alongside its opportunities.

Given the rapid adoption of generative AI and the predicted massive impact across business and operational models, attention on AI regulatory guidelines is growing and compliance is becoming increasingly important to reputation and trust.

Non-compliance could have significant monetary impacts. For example, the EU AI Act — which will require organisations to determine AI system risk and monitor high-risk systems post-market — will impose penalties of €30M or 6% of annual income on organisations that use prohibited AI practices or fail to comply with data requirements.

A large majority of executives (72%) believe generative AI can play a critical role in building and maintaining stakeholder trust. Yet almost half (45%) also say the technology can negatively impact their organisation’s trust if the appropriate risk-management tools are not implemented.

Although business leaders recognise generative AI risks, immature organisational structures and processes for controlling them are barriers to seizing generative AI opportunities. Few companies have evaluated and implemented risk and risk-mitigation strategies as part of their generative AI development and deployment strategy.

Forty-seven per cent are still at the initial evaluation stages, and 25% have evaluated risks but are still in the process of implementing risk-mitigation strategies. Further, 50% intend to but have not yet set up a responsible AI governance programme, framework, or practices, and only 5% have one already in place.

Shalini is an Executive Editor with Apeejay Newsroom. With a PG Diploma in Business Management and Industrial Administration and an MA in Mass Communication, she was formerly an Associate Editor with News9live. She has worked on varied topics, from news-based pieces to feature articles.