Balancing the opportunities and risks of AI

Jason Pyle | May 15, 2024

As AI has become more entrenched in business life, it has opened up both opportunities and risks for companies. Many welcome the efficiencies it gives employees, allowing them to focus on higher-value activities. However, the accessibility of AI means that employees sometimes use tools without the employer’s knowledge or consent. Research from Salesforce found that 49% of people already use generative AI for automating tasks and writing communications.

Employees often don’t realize they need approval to use these tools, or they may not want to wait for an official vetting process. While this unsanctioned use of AI, known as shadow AI, can help employees in their daily work, it also exposes the company to risks, including accidental disclosure of sensitive or confidential data, decisions based on inaccurate information and even a full compromise of the employer’s technology security.

While it’s virtually impossible to control every tool employees use, there are some initial steps an organization can take to help minimize the risks of shadow AI while also giving workers the freedom to explore new technologies. Here are some examples:

Create clear policies and procedures. These will govern employee use of AI, particularly outside, unsanctioned tools. While this step may seem obvious, many organizations either haven’t considered it or don’t view it as a priority. Harvey Nash’s Digital Leadership Report found that just 21% of respondents have an AI policy in place, and more than a third (36%) have no plans to create one at this time.

Without guardrails, the balance can quickly shift from a tool that makes employees more efficient to one that puts the company in jeopardy. Clear policies around AI’s adoption, vetting and usage should be created, ideally by a team whose members each bring a different perspective to the table. Often this is led by a corporate governance group that includes senior leaders from operations, legal, IT and finance. The policies should also fully address the dangers of shadow AI and be clear about how the company plans to monitor and enforce adherence to the rules.

Prioritize communication and education. Policies only work if they’re properly communicated and employees have a clear understanding of their intent. It also helps to understand what employees are trying to achieve through their shadow AI use. Some questions to consider include:

  • Are there limitations with the organization’s technology resources that are leading employees to use outside tools?
  • What unsanctioned tools are employees using, and what challenges do those tools help them overcome?
  • Is the vetting and approval process for new technology too slow or cumbersome?
  • How can the IT department better assess and address each department’s unique technology needs?
  • Are senior leadership or financial considerations a barrier to faster technology adoption?

This information can help organizations improve internal processes so that technology vetting moves faster while restrictions that protect the business remain in place. Ongoing education and training will help employees understand that the rules aren’t meant to stifle them but to protect them. It will also help the business detect and address future risks as they emerge.

Target the entire workforce. In today’s hybrid and geographically dispersed workforce, the use of contractors, temporary workers and freelancers can also open the door to organizational risk. When creating policies as well as training and support programs, employers should consider their entire workforce, including those who are off-site or not full-time employees.

By raising awareness of the risks of AI, business leaders can help their entire workforce assess and determine acceptable AI use. This goes beyond the technology itself to understanding that sensitive data and information are also in the mix. Doing so will allow employees to protect the company and themselves while still benefiting from the rewards AI brings to the table.