As technology advances, workplaces are constantly searching for ways to improve efficiency and consistency. Enter generative artificial intelligence (AI) programs, such as ChatGPT and GPT-4, which create content based on the user’s text prompts.
While language-based programs garner most of the attention, other applications, such as DALL-E 2, Midjourney, and DreamStudio, can create images based on text descriptions entered by the user. These programs are having a significant impact in the workplace, and rightfully so. Employees have begun using these programs to screen applicants, draft employee evaluations, and research and draft content. Generative AI has helped to streamline productivity and, in some cases, replace the need for human interaction altogether. However, these tools carry significant risks that employers must recognize and address before considering widespread adoption.
If employers allow the use of generative AI in the workplace, they must remind employees to double-check their work for inaccuracies. There have been multiple reports of generative AI programs producing inaccurate information and, when pressed, creating fake or erroneous sources. Users have developed the term “hallucinations” to describe circumstances where generative AI programs provide responses unsupported by the training data.
For example, in a 2023 court case out of the Southern District of New York, a plaintiff’s lawyers admitted to using ChatGPT to research the legal authority used to support their case. The court determined that the lawyers’ affidavit included six “bogus judicial decisions with bogus quotes and bogus internal citations.” When questioned, the lawyers provided screenshots of chats with ChatGPT in which they asked the program whether the cited cases were real. ChatGPT responded “Yes,” provided case citations, and stated incorrectly that the cases could be found on Westlaw and LexisNexis.
Although ChatGPT and other similar programs can provide fluent responses that seem legitimate, those responses may nonetheless be inaccurate or completely fabricated. Employers also need to be aware that the data used by these programs may not be entirely up to date. It has been widely reported that ChatGPT, for example, was trained on a dataset that cut off in September 2021, so it will not provide information after that point.
Given that these AI programs create outputs based on a limited dataset, employers should be mindful of potential biases that may creep into produced information. Generative AI programs learn to make decisions and produce results based on the set of training data to which they are exposed. Even if sensitive variables like gender, race, and sexual orientation are removed, the training data may still include biased human decisions or reflect historical inequalities. Biases may also arise from flawed data sampling, in which certain groups or characteristics are over- or under-represented in the training data.
In 2021, the U.S. Equal Employment Opportunity Commission (EEOC) launched an agency-wide initiative to ensure that software, including generative AI, machine learning, and other emerging technologies used in hiring and other employment decisions, complies with federal civil rights laws. The initiative’s goal is to guide all involved parties (employers, employees, applicants, vendors, etc.) to ensure that these technologies are used fairly and consistently in employment practices.
The EEOC has also recently released a document titled “Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964,” as well as guidance on “The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees.” Employers should take the time to thoroughly review the EEOC’s current findings and guidance.
Employers must exercise caution before potentially exposing confidential or trade secret information to any generative AI program. Many of these programs rely on machine learning to improve the systems, which means that any information input into the system may become part of the information dataset the applications rely on to create future responses. Given the allure of these programs, employees may unwittingly disclose confidential, protected, or personal information to facilitate the AI functions. These potential disclosures may expose employers to liability under privacy laws, including the California Consumer Privacy Act (CCPA) and Health Insurance Portability and Accountability Act (HIPAA), among others.
The use of generative AI programs also increases the risk of infringement on third-party intellectual property rights. Without knowing where the AI programs are gathering data, the information banks may include information that is subject to copyright, trademark, or other legal protection that limits or prohibits the use of that content.
Given the above risks (and others yet to emerge), and before allowing employees to use these new AI tools, employers should develop and implement policies governing whether generative AI programs may be used in the workplace and, if so, to what extent. If an employer chooses to allow the use of generative AI, its policy should clearly define the permitted uses, the prohibited uses, and the safeguards employees must follow.
The appeal of these programs is real: a paper published in April 2023 found that a generative AI tool increased the productivity of customer service agents by 14%. Given the risk of creating content with inaccurate information, the possibility of inadvertent discriminatory employment practices, and the potential for violating privacy or intellectual property laws, employers should strongly consider establishing limitations on the use of these programs and clearly communicating to employees whether, and to what extent, use of these applications is permissible. When deciding whether to allow the use of generative AI programs, employers should strategize internally and with competent legal counsel about the tasks these programs might be asked to handle, as well as the strengths and limitations of the tools for those purposes.
For questions regarding the use of generative AI programs in the workplace, or assistance with other employment law issues, contact the attorneys at LightGabler.
Copyright © LightGabler LLP