Connect with us

Daily News

How to use Gen AI responsibly; tips for CEOs

Organisations will need to understand what data went into training and how it’s used

Published

on

Generative AI poses a variety of risks. CEOs will want to design their teams and processes to mitigate those risks from the start — not only to meet fast-evolving regulatory requirements but also to protect their business and earn consumers’ digital trust. The excitement around generative AI is palpable, and C-suite executives rightfully want to move ahead with thoughtful and intentional speed.

This article offers business leaders a balanced introduction to the promising world of generative AI.

Fairness: Models may generate algorithmic bias due to imperfect training data or decisions made by the engineers developing the models.

Intellectual property (IP): Training data and model outputs can generate significant IP risks, including infringing on copyrighted, trademarked, patented, or otherwise legally protected materials. Even when using a provider’s generative AI tool, organisations will need to understand what data went into training and how it’s used in tool outputs.

Privacy: Privacy concerns could arise if users input information that later ends up in model outputs in a form that makes individuals identifiable. Generative AI could also be used to create and disseminate malicious content such as disinformation, deepfakes, and hate speech.

Security: Generative AI may be used by bad actors to accelerate the sophistication and speed of cyberattacks. It also can be manipulated to provide malicious outputs. For example, through a technique called prompt injection, a third party gives a model new instructions that trick the model into delivering an output unintended by the model producer and end user.

Explainability: Generative AI relies on neural networks with billions of parameters, challenging our ability to explain how any given answer is produced.

Reliability: Models can produce different answers to the same prompts, impeding the user’s ability to assess the accuracy and reliability of outputs

Organisational impact: Generative AI may significantly affect the workforce, and the impact on specific groups and local communities could be disproportionately negative.

Social and environmental impact: The development and training of foundation models may lead to detrimental social and environmental consequences, including an increase in carbon emissions (for example, training one large language model can emit about 315 tons of carbon dioxide).

Shalini is an Executive Editor with Apeejay Newsroom. With a PG Diploma in Business Management and Industrial Administration and an MA in Mass Communication, she was a former Associate Editor with News9live. She has worked on varied topics - from news-based to feature articles.