LLMs require a curated data approach and IT consulting support to ensure that their use in an organization does more good than harm.
The article below summarizes some of the key questions that need to be addressed:
by Tsedal Neeley
Generative AI tools are poised to change the way every business operates. As your own organization begins strategizing which to use, and how, operational and ethical considerations are inevitable. This article delves into eight of them, including how your organization should prepare to introduce AI responsibly, how you can prevent harmful bias from proliferating in your systems, and how to avoid key privacy risks.
1. How should I prepare to introduce AI at my organization?
2. How can we ensure transparency in how AI makes decisions?
3. How can we erect guardrails around LLMs so that their responses are true and consistent with the brand image we want to project?
4. How can we ensure that the dataset we use to train AI models is representative and doesn’t include harmful biases?
5. What are the potential risks of data privacy violations with AI?
6. How can we encourage employees to use AI for productivity purposes and not simply to take shortcuts?
7. How worried should we be that AI will replace jobs?
8. How can my organization ensure that the AI we develop or use won’t harm individuals or groups or violate human rights?
Billions of people around the world are discovering the promise of AI through their experiments with ChatGPT, Bing, Midjourney, and other new tools. Every company will have to confront questions about how these emerging technologies will apply to them and their industries. For some it will mean a significant pivot in their operating models; for others, an opportunity to scale and broaden their offerings. But all must assess their readiness to deploy AI responsibly without perpetuating harm to their stakeholders and the world at large.
Picture source: https://cobusgreyling.medium.com/