August 21, 2023

How to Manage AI Risk and Use Artificial Intelligence Responsibly 

by Pete Dulin

Artificial intelligence advisor Doug Hohulin shared insights that can help businesses shape practices and policies to manage AI risk.

Using artificial intelligence (AI) products is easier than ever. That’s the point of the hype, the flood of new product releases, and the abundance of free tutorials and tips: try AI now, learn fast, innovate, reduce costs, and be more productive. Eventually, the other shoe will drop: accountability. Businesses are accountable to their employees, customers, suppliers, and partners for their practices and their use of data. Artificial intelligence is no different. 
 
Businesses can make smart choices about how they use AI not only to grow, increase sales, improve service, and boost productivity, but also to protect all parties from undue risk. 
 
How can businesses use AI safely and responsibly?  

Artificial intelligence advisor and NextCoLabs Tech Scout Doug Hohulin shared insights and guidelines that can help businesses shape decisions, practices, and policies to manage AI risk. 

AI Use in the Workplace 

Let’s begin with the low-hanging fruit. Many workers already use generative AI tools to assist with writing and search. To better manage AI risk, Hohulin recommended that U.S. businesses incorporating AI into their workflows start with existing tools on commonplace platforms from Microsoft and Google.  

Of course, OpenAI’s ChatGPT has swiftly become a popular AI chatbot tool to generate content, respond to questions, and more. 

Elsewhere, Google offers its AI-supported Search Generative Experience (SGE). Duet AI in Google Workspace provides AI-enabled support to help write an email in Gmail, write a proposal in Docs, prepare custom visuals in Slides, track projects in Sheets, and more. Duet AI can also assist cloud developers of all skill levels in Google Cloud. 
 
Microsoft Bing already delivers AI-powered chat, search results, and image creation. Bing Chat Enterprise “gives organizations AI-powered chat for work with commercial data protection” that will help minimize risk. 
 
Starting in November 2023, Microsoft will incorporate Copilot, an artificial intelligence assistant, into Windows 11 and Microsoft 365’s suite of applications and services. The assistant will support creativity in Word, analyze data in Excel, design presentations in PowerPoint, triage your Outlook inbox, and summarize meetings in Teams.  

Further, users will be able to work with Copilot in SharePoint to “access their data in Microsoft Graph and apply best practices to create engaging web content” while maintaining data security and privacy. 

Manage AI Risk  

What risks do businesses face when using ChatGPT and other Gen AI tools, especially as these tools become embedded in Microsoft and Google products? The two primary risks are unauthorized access to confidential information and leakage of that information. 

“Data privacy and ethical concerns pose significant challenges for organizations looking to use GenAI. There are risks associated with using large amounts of sensitive data, particularly when it comes to issues such as privacy, bias, and discrimination. Organizations need to carefully consider the ethical implications of their use of GenAI, and take steps to ensure that they are using the technology in a responsible and transparent manner,” said Google Cloud AI data scientist Aishwarya Srinivasan on LinkedIn.

More risks of using ChatGPT and Gen AI

  • Bias: Generative AI is built on large language models (LLMs) trained on massive text datasets, and those datasets can carry bias. As a result, LLMs can generate biased text or reinforce existing biases; ChatGPT, for example, has been shown to produce sexist and racist output. 
  • Misinformation: LLMs are good at generating text that sounds realistic even when it is false, which makes them effective tools for producing fake news articles and social media posts. 
  • Privacy: LLMs can be used to collect and store personal data, which can then be used to track users, target them with advertising, or even commit identity theft. 
  • Security: LLMs can be used to write malware and other malicious code that can harm users or steal their data. 
  • Job displacement: LLMs are increasingly capable of performing tasks once done by humans, such as customer service and writing. Companies may no longer need to hire people for these tasks, which could displace jobs. 

“Be aware of these risks before using ChatGPT and Gen AI. By understanding the risks, you can take steps to mitigate them and use these LLMs safely and responsibly,” Hohulin said.  

Tips for using ChatGPT and Gen AI safely and responsibly

  • Be aware of the biases in the training data. 
  • Use LLMs only for tasks that suit them; do not rely on them to make important decisions or to generate sensitive content. 
  • Monitor LLM output for signs of bias or misinformation. 
  • Do not share personal data with LLMs (see the sketch after this list for one way to scrub prompts). 
  • Keep your software up to date with the latest security patches. 
  • Be aware of the potential for job displacement. 
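
To make the privacy and monitoring tips concrete, here is a minimal sketch of the pattern in Python. It assumes a hypothetical send_to_llm() call standing in for whatever LLM API an organization uses; the regexes and review terms are illustrative only, and a real deployment would rely on a dedicated PII-detection and moderation service. The idea is simply that personal data is masked before a prompt leaves your systems, and output is flagged for human review.

```python
import re

# Hypothetical stand-in for whatever LLM API the organization uses.
def send_to_llm(prompt: str) -> str:
    raise NotImplementedError("wire this up to your LLM provider")

# Patterns for obvious personal data. Real systems should use a
# dedicated PII-detection service rather than regexes alone.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

# Illustrative terms a compliance team might want a human to review.
REVIEW_TERMS = ("confidential", "social security", "password")

def redact(prompt: str) -> str:
    """Mask personal data before the prompt leaves your systems."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label} removed]", prompt)
    return prompt

def needs_review(response: str) -> bool:
    """Flag output containing terms the organization wants checked."""
    lowered = response.lower()
    return any(term in lowered for term in REVIEW_TERMS)

def guarded_query(prompt: str) -> str:
    """Redact the prompt, query the model, and hold risky output."""
    response = send_to_llm(redact(prompt))
    return "[held for human review]" if needs_review(response) else response

if __name__ == "__main__":
    # The email address is masked before the prompt would reach a provider.
    print(redact("Draft a reply to jane.doe@example.com about order 4417."))
```

Running the example masks the sample email address before the prompt would ever reach a provider; the same wrapper pattern extends to whatever detection and moderation tooling a team actually adopts.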

3 Recommendations to Manage AI Risk

Start Small and Simple – Kick the tires with AI products on a project with low complexity that offers high value to customers. Validate the model, learn from mistakes, and minimize risk. 

Know How to Mitigate Risk and Damage – Before ramping up AI use in operations, develop a response and messaging plan in case results go wrong. Preparation trumps panic and helps manage problems before they become a crisis. 

Walk First, Then Run – Rather than race to be first to market or push to dominate the market with a new service or product, develop a clear picture and roadmap of how AI adds value for customers. Establish best practices before scaling up or moving too fast.