Artificial intelligence (AI) isn’t just creating buzz. It’s also sparking both fear and enthusiasm, with some employers worried about the downsides and others eager to dive in and capitalize on the potential. No matter how it’s viewed, though, employers must focus on how this brave new world intersects with legal compliance.
As AI is incorporated into the automated systems employers use, the U.S. Department of Labor (DOL) saw the need to issue guidance to its staff across the country on how the new technology affects compliance with federal laws on wages and other employment matters.
In a memo issued to staff in April, Jessica Looman, administrator of the DOL’s Wage and Hour Division, noted that AI can enhance efficiency and accountability, but “without responsible human oversight, the use of such technologies may pose potential compliance challenges with respect to federal labor standards.”
One of the laws implicated is the Fair Labor Standards Act (FLSA), which requires that covered employees be paid at least the federal minimum wage for all hours worked and at least one and one-half times their regular rate of pay for each hour worked in excess of 40 in a workweek.
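To make the arithmetic concrete, the Python sketch below computes that pay floor for a single workweek. It assumes one hourly rate for the entire week; real "regular rate" calculations can fold in bonuses and other compensation, so treat this as an illustration, not a payroll implementation.

```python
# Illustrative sketch of the FLSA weekly pay floor described above.
# Assumes a single hourly rate all week; actual "regular rate" math
# can be more involved (bonuses, multiple rates, etc.).

FEDERAL_MINIMUM_WAGE = 7.25  # current federal floor, USD per hour
OVERTIME_THRESHOLD = 40.0    # hours per workweek before overtime applies
OVERTIME_MULTIPLIER = 1.5    # "time and a half"

def minimum_weekly_pay(hours_worked: float, hourly_rate: float) -> float:
    """Smallest FLSA-compliant pay for one workweek at a single rate."""
    rate = max(hourly_rate, FEDERAL_MINIMUM_WAGE)
    straight_hours = min(hours_worked, OVERTIME_THRESHOLD)
    overtime_hours = max(hours_worked - OVERTIME_THRESHOLD, 0.0)
    return straight_hours * rate + overtime_hours * rate * OVERTIME_MULTIPLIER

# 45 hours at $10/hour: 40 x $10 + 5 x $15 = $475
print(minimum_weekly_pay(45, 10.00))
```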
The memo says that some AI and employee-monitoring tools track worker activity and determine when an employee is active or idle. Relying on such systems without proper human oversight may result in time being categorized as noncompensable even when it is compensable work time under the FLSA.
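The design pattern the memo's warning suggests is to route tool-labeled "idle" time to a human reviewer rather than letting the system silently drop it from pay. Here's a hypothetical sketch of that triage step (nothing here comes from a DOL-endorsed system):

```python
from dataclasses import dataclass

@dataclass
class TimeBlock:
    minutes: float
    auto_status: str  # "active" or "idle", as labeled by a monitoring tool

def triage_time(blocks: list[TimeBlock]) -> dict[str, float]:
    """Pay tool-labeled active time; queue 'idle' time for human review
    instead of automatically treating it as noncompensable."""
    paid = sum(b.minutes for b in blocks if b.auto_status == "active")
    review = sum(b.minutes for b in blocks if b.auto_status == "idle")
    return {"paid_minutes": paid, "human_review_minutes": review}

# Hypothetical day: 7 hours marked active, 45 minutes marked idle
# (which might actually be compensable waiting time).
day = [TimeBlock(420, "active"), TimeBlock(45, "idle")]
print(triage_time(day))  # {'paid_minutes': 420, 'human_review_minutes': 45}
```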
Besides the FLSA, the Family and Medical Leave Act (FMLA) must be considered when using AI and automated systems, the memo says. The FMLA provides job-protected leave for eligible employees of covered employers for qualifying family and medical reasons. It also requires continuation of group health benefits during FMLA leave and restoration of employees to the same or a nearly identical position when the leave ends.
“Without responsible human oversight, relying on automated systems to process leave requests, including determining eligibility, calculating available leave entitlements, or evaluating whether leave is for a qualifying reason, can create potential compliance challenges,” the memo says.
AI and automated systems also can complicate compliance with the Providing Urgent Maternal Protections for Nursing Mothers (PUMP) Act, which requires employers to provide nursing employees reasonable break time each time the employee has a need to pump breast milk at work.
The employer and employee may agree to a certain schedule based on the nursing employee's need to pump, the memo notes, but an employer can't require adherence to a fixed schedule that fails to meet the employee's need for break time each time the need arises. An automated scheduling or timekeeping system that limits the length, frequency, or timing of a nursing employee's pump breaks violates the PUMP Act.
The Employee Polygraph Protection Act (EPPA) also must be considered, the memo notes. The EPPA generally prohibits private employers from using lie detector tests on employees for pre-employment screening, but such tests are allowed in certain industries and under prescribed conditions.
Some AI technologies use eye measurements, voice analysis, micro-expressions, or other body movements to try to detect deception. An employer's use of any lie detector test, including devices that rely on AI technology, is therefore prohibited by the EPPA unless it falls within one of the exceptions included in the law.
The memo also reminds employers to take care to avoid using technology to retaliate against workers in violation of labor standards.
The Equal Employment Opportunity Commission (EEOC) issued guidance to employers on the use of AI in May 2023. The document, titled “Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures under Title VII of the Civil Rights Act of 1964,” focuses on preventing discrimination against jobseekers and workers.
Title VII prohibits employment discrimination based on race, color, religion, sex (including pregnancy, sexual orientation, and gender identity), or national origin. The law generally prohibits intentional discrimination, referred to as "disparate treatment."
Title VII also prohibits "disparate impact" or "adverse impact" discrimination, which occurs when an employer uses facially neutral tests or selection procedures that have the effect of disproportionately excluding persons based on race, color, religion, sex, or national origin, if those tests or procedures are not "job related for the position in question and consistent with business necessity," the EEOC document explains.
The EEOC document includes a set of questions and answers on determining whether an AI tool presents a Title VII problem. If such a tool has an adverse impact on individuals of a particular race, color, religion, sex, or national origin, its use will violate Title VII unless the employer can show that the use is "job related and consistent with business necessity."
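The EEOC document's Q&A discusses the agency's longstanding "four-fifths rule" as a general rule of thumb for spotting possible adverse impact: a group's selection rate below 80% of the highest group's rate suggests a disparity worth scrutiny, though the document cautions it isn't conclusive either way. Below is a hypothetical sketch of that screen; the group labels and counts are invented, and a flagged result is an indicator for further analysis, not proof of a violation.

```python
# Hypothetical adverse-impact screen using the "four-fifths rule" of
# thumb discussed in the EEOC guidance. Group names and counts are
# invented for illustration.

def selection_rates(applicants: dict[str, int],
                    selected: dict[str, int]) -> dict[str, float]:
    """Selection rate per group: selected / applicants."""
    return {g: selected[g] / applicants[g] for g in applicants}

def four_fifths_flags(rates: dict[str, float]) -> dict[str, bool]:
    """Flag groups whose rate falls below 4/5 of the highest group's rate."""
    top = max(rates.values())
    return {g: (r / top) < 0.8 for g, r in rates.items()}

# Invented example: an AI screening tool's outcomes for two groups.
applicants = {"group_a": 200, "group_b": 150}
selected = {"group_a": 100, "group_b": 45}
rates = selection_rates(applicants, selected)  # a: 0.50, b: 0.30
print(four_fifths_flags(rates))  # group_b: 0.30/0.50 = 0.6 < 0.8 -> flagged
```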
Tammy Binford is a Contributing Editor.