AI in the Workplace: What to Know
By JOHN R. DIETRICK | Attorney at Law | HR Partners
Artificial Intelligence (AI) is generally defined as causing computing devices to perform human-like thinking.¹ Generative Artificial Intelligence (GAI) means an artificial system that is trained on data; interacts with a person using text, audio or visual communications; and generates non-scripted outputs similar to those created by a human, in a fraction of the time and with limited or no human oversight.² AI and GAI are rapidly evolving technologies with significant potential to transform individuals, workplaces, communities and society, posing both risks and rewards.
As Warren Buffett, chairman and chief executive of Berkshire Hathaway, opined at the company’s annual meeting in early May 2024, “We let the genie out of the bottle when we developed nuclear weapons. AI is somewhat similar — it’s part way out of the bottle.” He added that both could have devastating consequences on society if used improperly, and summarized his thoughts on AI by saying, “It has enormous potential for good and enormous potential for harm. And I just don’t know how that plays out.”
To help you navigate the potential “harm” and “good” of AI, the purpose of this article — written by humans, not AI — is to provide business owners and nonprofit leaders with a high-level overview of some of the do’s and don’ts of AI, as well as best practices for utilizing AI in the workplace.
AI, and especially GAI, offers companies and organizations the opportunity to make their operations more efficient and productive. But AI and GAI also carry significant risks. As a result, some companies and municipalities have established policies for the use of AI/GAI in the workplace. Some of those rules are listed below.
DO BE WARY OF BIAS.
This is particularly important in terms of recruitment, hiring, retention, promotion, transfer, performance monitoring, demotion, dismissal and referral. Title VII, as well as the Americans with Disabilities Act (ADA) and the Age Discrimination in Employment Act (ADEA), applies to all employment practices of covered employers and prohibits intentional discrimination or disparate treatment based on race, color, religion, sex and national origin, as well as age and disability. Demographic biases in an AI tool can affect employment practices in a discriminatory manner, exposing an employer to potential liability. To avoid that liability, employers should use only approved AI tools; make certain all employees utilizing AI are trained in the associated benefits and risks; and be certain employees verify the accuracy and reliability of information generated by AI tools.
DON’T SHARE SENSITIVE BUSINESS INFORMATION WITH PUBLIC VERSIONS OF AI PROGRAMS.
Examples of information that could be exposed this way include computer code, customer information, transcripts of company meetings or email exchanges, and company financial or sales data.
To protect your business from a breach, use only approved AI tools; avoid public versions of AI tools; and make certain your AI tools and data are secured according to your IT policies to protect against unauthorized access and cyber threats.
DON’T USE AI-GENERATED CONTENT WITHOUT DISCLOSURE.
As stated at the outset, this article was written and prepared by humans. However, had portions been generated by AI, disclosing that fact would have been crucial to maintaining transparency and credibility with you, the reader. Your clients deserve the same duty of care and should know up front that the materials they are receiving and/or using were partially (or fully) generated by AI. A best business practice, for sure.
DO BE PICKY ABOUT WHICH AI PROGRAM YOU USE.
As a best practice, companies should avoid public AI programs, as there are safer and more trustworthy alternatives, such as “enterprise-grade” models, which are typically paid subscriptions that offer more security for business data.
A January 2023 report from the National Institute of Standards and Technology (NIST), entitled “Artificial Intelligence Risk Management Framework (AI RMF 1.0),” identifies the characteristics of trustworthy AI systems as: valid and reliable; safe; secure and resilient; accountable and transparent; explainable and interpretable; privacy-enhanced; and fair with harmful bias managed.
Finally, when using GAI programs, companies should fully understand how data entered into the system will be stored and who will have access to it.
DON’T FULLY TRUST AI RESULTS TO BE ACCURATE.
Be wary of GAI “hallucinations,” as they are called: responses generated by AI that include false or inaccurate information. These can mislead anyone who relies on them, including, for example, judges who receive legal briefs with false or inaccurate legal citations. And remember, hallucinations can often look legitimate enough to go undetected.
As a best practice, verify the source of any data before using it; doing so can reduce inaccuracies. Further, companies can negotiate their own contracts with AI/GAI vendors to train the AI on a database provided by the company so that no outside, potentially inaccurate information is included.
In closing, AI/GAI is rewriting norms and changing the way we interact in the workplace and with the world. And while Congress has not yet enacted specific AI laws or regulations in the U.S., various federal agencies, such as the EEOC, the U.S. Department of Labor and the Federal Trade Commission, as well as several states and many municipalities, are promulgating rules and guidance to assist employers with the revolutionary impact of AI on consumer goods and retail, manufacturing, commerce, media and entertainment, health care and financial services, among other industries and businesses.
Be aware of these rules, regulations and possible legislation, and utilize that information to more safely navigate the changing world of AI/GAI in your workplace.