
Artificial Intelligence

Understanding risks is vital for productive gen AI growth

Organisations are actively identifying and addressing these diverse risks



The latest annual McKinsey Global Survey on the current state of AI confirms the explosive growth of generative AI (gen AI) tools. However, the findings show that these are still early days for managing gen AI-related risks: fewer than half of respondents say their organisations are mitigating even the risk they consider most relevant, inaccuracy.

Where generative AI goes next therefore depends on how well organisations manage the wide range of risks, understand the implications for people and the tech stack, and strike a balance between banking near-term gains and building the long-term foundations needed to scale. These are complex issues, but they are the key to unlocking the significant pools of value at stake.

According to the survey, few companies seem fully prepared for the widespread use of gen AI — or the business risks these tools may bring. Just 21 per cent of respondents reporting AI adoption say their organisations have established policies governing employees’ use of gen AI technologies in their work.

Respondents cite inaccuracy more frequently than both cybersecurity and regulatory compliance, which were the most commonly cited AI risks overall in previous surveys. Yet just 32 per cent say they are mitigating inaccuracy, a smaller share than the 38 per cent who say they mitigate cybersecurity risks. Notably, that 38 per cent is itself well below the 51 per cent of respondents who reported mitigating AI-related cybersecurity risks last year. Overall, as in previous years, most respondents say their organisations are not addressing AI-related risks.

The survey data reveals that when it comes to the adoption of generative AI, the following risks are most frequently cited by organisations as concerns they are actively addressing.

  • Inaccuracy: 32 per cent of respondents are working to mitigate the risk of inaccuracy in generative AI outputs.
  • Cybersecurity: 38 per cent of organisations are concerned about the cybersecurity implications of generative AI and are actively working to mitigate this risk.
  • Intellectual property infringement: 25 per cent of respondents are focusing on safeguarding against intellectual property infringement.
  • Regulatory compliance: 28 per cent of organisations are addressing the need to comply with relevant regulations.
  • Explainability: 18 per cent of respondents are working on ensuring the explainability of generative AI systems.
  • Personal/individual privacy: 20 per cent are concerned about safeguarding personal and individual privacy.
  • Workforce/labour displacement: 13 per cent are considering and mitigating the risk of workforce displacement.
  • Equity and fairness: 16 per cent of organisations are addressing issues related to equity and fairness in generative AI.
  • Organisational reputation: Another 16 per cent of respondents are working to protect their organisational reputation in the context of generative AI.
  • National security: 4 per cent of organisations have concerns about national security implications.
  • Physical safety: 6 per cent are focusing on ensuring physical safety in their generative AI applications.
  • Environmental impact: 5 per cent are considering and addressing the potential environmental impact of generative AI.
  • Political stability: 10 per cent are concerned about the impact on political stability.
  • None of the above: 2 per cent of respondents do not consider any of these risks relevant to their organisation.

While there is broad awareness of the risks associated with generative AI, the prevailing anxiety and fear are making it difficult for leaders to address those risks effectively. The real trap, however, is that companies view the risks too narrowly. There is a significant range of risks, including social, humanitarian, and sustainability risks, that companies need to pay attention to as well.

The unintended consequences of generative AI are more likely to create issues for the world than the doomsday scenarios that some people espouse. Companies that are approaching generative AI most constructively are experimenting with and using it while having a structured process in place to identify and address these broader risks. Being deliberate, structured, and holistic about understanding the nature of the new risks — and opportunities — emerging is crucial to the responsible and productive growth of generative AI.

Organisations are actively addressing a range of risks associated with the adoption of generative AI, with cybersecurity and inaccuracy being the most prominently cited concerns. These efforts aim to ensure responsible and ethical use of generative AI technologies.

Shalini is an Executive Editor with Apeejay Newsroom. She holds a PG Diploma in Business Management and Industrial Administration and an MA in Mass Communication, and was formerly an Associate Editor with News9live. She has worked on a wide range of topics, from news reports to feature articles.