What Impact Will President Biden’s AI Executive Order Have in the Workplace?
By Fiona Ong - Shawe Rosenthal LLP
October 31, 2023
Well, I think we all recognize that Artificial Intelligence (AI) has created some seismic shifts in the way things can be done, including in the workplace (and I previously covered many of the risks and concerns of generative AI for employers). Governments at all levels are taking action to try to put guardrails on the use of AI. And now, President Biden has signed an Executive Order on “Safe, Secure and Trustworthy Artificial Intelligence,” as summarized in a Fact Sheet. This is a wide-ranging EO, but one of the areas it specifically addresses is the impact on workers.
Now, while the President can control agencies and government contractors, he does not have the authority to dictate directly to private employers. Only Congress has the power to pass laws that directly impact private employers – and then federal agencies can interpret and enforce those laws. They can also issue guidance to private employers – but such employers are not necessarily required to comply with such guidance (although failure to do so could result in enforcement action by the agency – which can result in a prolonged litigation battle over the extent of the agency’s authority).
As for federal contractors, the EO states that guidance will be provided to prevent AI from exacerbating discrimination. Contractors should note, however, that the Office of Federal Contract Compliance Programs is already focused on the use of AI – as we noted, OFCCP audits of contractors will now include new documentation of policies and procedures relating to employment recruiting, screening, and hiring, including the use of artificial intelligence (AI) and other technology-based selection processes.
As for private employers, the EO warns against the dangers of “increased workplace surveillance, bias, and job displacement.” In keeping with this Administration’s pro-union bent, the EO emphasizes that the following actions are being directed in order “to mitigate these risks, support workers’ ability to bargain collectively, and invest in workforce training and development that is accessible to all”:
• Develop principles and best practices to mitigate the harms and maximize the benefits of AI for workers by addressing job displacement; labor standards; workplace equity, health, and safety; and data collection. These principles and best practices will benefit workers by providing guidance to prevent employers from undercompensating workers, evaluating job applications unfairly, or impinging on workers’ ability to organize.
• Produce a report on AI’s potential labor-market impacts, and study and identify options for strengthening federal support for workers facing labor disruptions, including from AI.
Hm – principles and best practices, and a report? Well, that doesn’t seem too bad, right? But employers should keep in mind that the federal workforce agencies – including non-traditional agencies like the Department of Justice, the Consumer Financial Protection Bureau and the Federal Trade Commission – have already targeted AI as a hot-button issue. Within the past 12 months, their activities included the following:
• On September 11, 2023, the Equal Employment Opportunity Commission issued a press release regarding settlement of a suit it brought against a tutoring services group, in which it alleged that the group had programmed its tutor application software to reject applicants on the basis of age.
• Also in September 2023, the EEOC released its Strategic Enforcement Plan for FY 2024-2028 (which we discussed here), and its first listed priority is the use of recruiting and screening technology with a discriminatory impact, channeling or steering individuals into certain jobs, and limiting access to training or advancement opportunities.
• In its August 2023 update of its Guidance on Visual Disabilities in the Workplace (which we discussed here), the EEOC specifically notes that employers must provide accommodations in connection with the use of software that uses algorithms or AI as decision-making tools. Employers should take steps to ensure that these tools do not screen out or disadvantage those with disabilities.
• In April 2023, the heads of the EEOC, the FTC, the CFPB, and the DOJ’s Civil Rights Division joined together to issue a statement on their enforcement efforts against discrimination and bias in the use of automated systems or artificial intelligence (AI) in the workplace. The statement (which we discussed here) identifies the roles each agency plays, as well as the specific concerns raised by the workplace use of AI.
• In March 2023, the National Labor Relations Board’s General Counsel issued General Counsel Memorandum 23-04 (which we discussed here), providing an update concerning her prosecutorial priorities, and she reiterated that issues concerning electronic surveillance and AI-related management of employees should be submitted to the Division of Advice.
• In November 2022, the NLRB GC announced that she will crack down on employers’ increasing use of automated technologies and electronic management systems. In General Counsel Memorandum 23-02 (which we discussed here), the GC stated her belief that employers’ use of these technologies can violate the National Labor Relations Act.
So it’s pretty clear that the federal workforce agencies will continue to target the use of AI in the workplace, and we can expect such efforts to be rather aggressive.