Practical Strategies to Manage AI Hazards in the Workplace

What to know

Top takeaway: Two recently published articles provide strategies for keeping workers safe while using artificial intelligence in the workplace.
By: John P. Sadowski, Ph.D. and John Howard, M.D.

Summary

The National Institute for Occupational Safety and Health (NIOSH) has been at the forefront of efforts to understand the implications of artificial intelligence (AI) for workplace safety and health, and to teach workers and occupational and environmental health and safety (OEHS) professionals how to use it safely. Keeping workers and workplaces safe can help ensure that AI technologies do not introduce novel risks that could outweigh their benefits. The following provides summaries of two recent articles that present practical strategies to ensure the safe use of AI in the workplace.

Understanding workplace AI hazards

This summer, The Synergist's June/July 2025 issue featured NIOSH-funded research in the article "You vs. the Robot Factory: Some Principles for Understanding AI Hazards in the Workplace".1 This article, written by John P. Sadowski, proposes that although AI may at first seem radically different from other potentially hazardous aspects of the workplace, it can indeed be understood using established OEHS principles.

The article shows how broadly established hazard identification and exposure assessment methodologies can be adapted to cover AI's novel characteristics, using the example of an algorithmically controlled chemical factory. It suggests a need for a rigorous science of industrial hygiene for algorithms, or "algorithmic hygiene," that explicitly links the characteristics of algorithmic systems to health and safety outcomes. It also notes that OEHS professionals' presence throughout all industrial sectors gives them both a responsibility and an opportunity to have a significant impact through their existing education and assessment activities.

A framework

Figure 1. A framework for algorithmic hygiene. Several categories of AI system characteristics can each contribute to any of the standard occupational hazard types: they can mediate, but not directly create, tangible hazards, while they can directly create psychosocial hazards. Hazard controls can be work-design controls carried out within the end-user's organization, or software-design controls that must be applied by software developers, who may work for a vendor. Figure copyright 2025 by John P. Sadowski.

The article provides a framework (Figure 1) that identifies several core characteristics of algorithmic systems and links them to existing, well-known categories of occupational hazards and controls. The framework was created through an informal literature review and conversations with multiple OEHS researchers and practitioners. It is a starting point to guide field work on hazard identification and control, and to formulate research questions on the OEHS impacts of algorithms. The goal is to provide a scientific basis for actionable guidance for individual end-users, developers, and policymakers. The following five principles are proposed as a basis for conceptualizing what a science of algorithmic hygiene might be:

  • The term "trained algorithm," meaning the use of a training data set to create an algorithm that determines a system's output or behavior, is a better conceptualization than "artificial intelligence": it avoids the latter term's vagueness and practically describes a process.
  • Algorithms must be distinguished from the physical platforms they may be embedded in, but platforms can help identify trained algorithms being used in a workplace.
  • Algorithms are software with no physical substance. While they cannot directly create any new "tangible" (physical, chemical, or biological) hazards, they do alter the risk profile of physical platforms or substances they control or interact with. Algorithms can, however, directly cause psychosocial hazards by changing work organization and skills.
  • OEHS professionals can still use familiar exposure assessment tools and methods for each hazard type, which can discern the impact of introducing an algorithmic system even if its characteristics are not known. Taking those characteristics into account may be more novel and challenging from an OEHS perspective but will enhance the assessment.
  • There is a distinction between "prevention through work design" hazard controls that can be carried out within the end-user's organization, and "prevention through software design" controls that must be applied by the software developers.

Challenges

The challenge for OEHS practitioners and researchers is to determine, in a structured way, how each system characteristic interacts with each hazard and control type in a workplace. The article concludes with a call for OEHS researchers and practitioners to focus on advancing the science of algorithmic hygiene into a solid body of evidence that can drive practical guidance and standards that support safe and healthy use of AI in workplaces throughout all sectors of the economy.
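
As a concrete illustration of that structured approach, here is a minimal sketch, in Python, of how a characteristic-hazard-control finding might be recorded during a worksite assessment. This is a hypothetical rendering, not something the article provides: the characteristic labels, hazard categories, platform names, and control descriptions are all assumptions loosely based on the Figure 1 framework.

```python
from dataclasses import dataclass, field

# Standard OEHS hazard categories; per the framework, algorithms mediate
# "tangible" hazards through physical platforms but can create
# psychosocial hazards directly.
TANGIBLE_HAZARDS = {"physical", "chemical", "biological"}

@dataclass
class HazardFinding:
    """One cell of the characteristic-by-hazard matrix for a worksite."""
    characteristic: str                    # AI system characteristic (assumed label)
    hazard_type: str                       # e.g., "chemical" or "psychosocial"
    mediating_platform: str | None = None  # physical platform involved, if any
    controls: list[str] = field(default_factory=list)

    def is_consistent(self) -> bool:
        # Tangible hazards require a mediating physical platform, since
        # algorithms alone have no physical substance.
        if self.hazard_type in TANGIBLE_HAZARDS:
            return self.mediating_platform is not None
        return True

# Illustrative entry for an algorithmically controlled chemical factory
# (all labels are hypothetical):
finding = HazardFinding(
    characteristic="autonomous process control",
    hazard_type="chemical",
    mediating_platform="reactor feed valves",
    controls=[
        "work design: human review of setpoint changes",
        "software design: hard limits on actuator commands",
    ],
)
assert finding.is_consistent()
```

The consistency check encodes the framework's core claim: algorithms mediate physical, chemical, or biological hazards only through a physical platform, while psychosocial hazards can arise from the algorithm directly.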

Managing workplace AI risks

In September 2024, a commentary in the American Journal of Industrial Medicine, "Managing workplace AI risks and the future of work",2 discussed challenges with the adoption of AI in the workplace. It presented five risk management options to promote the use of trustworthy and ethical AI in workplace devices, machinery, and processes.

  • Effective AI risk management may require reskilling or upskilling to acquire a set of computer science competencies, which would be challenging for employers, workers, and safety and health practitioners alike. These new skills would give safety and health practitioners greater ability to manage AI systems.
  • AI developers and safety and health practitioners could conduct collaborative AI system evaluations assessing the safety, capabilities, and alignment of AI systems. An alignment evaluation focuses on ensuring that the operational outcomes of an AI system match those intended by the developer's design parameters (a minimal sketch of such a check appears after this list).
  • An independent audit could be used to assess the risks of AI system capabilities through tools like algorithmic transparency.
  • AI system certification is a way to incentivize AI developers to adopt trustworthy AI principles in the design and development phase and to enable downstream users to validate the inclusion of trustworthy AI in a deployed system.
  • To help safety practitioners develop the detailed evidence that a workplace system is safe to operate, and to understand what that evidence base looks like, two approaches bear consideration: the "safety system approach" and the "safety case approach," both methodologies for identifying, analyzing, and evaluating the risks of high-risk AI systems.
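
As a simplified picture of the alignment evaluation mentioned in the list above, the sketch below compares an AI system's observed operational outcomes against the developer's intended design parameters. Neither article prescribes code; the parameter names, limits, and telemetry readings here are hypothetical assumptions.

```python
# Hypothetical alignment check: does observed operation match design intent?
# All parameter names, limits, and readings below are illustrative.

INTENDED_PARAMETERS = {
    "max_conveyor_speed_m_s": 1.5,   # design limit set during development
    "min_worker_clearance_m": 0.5,   # safety envelope around machinery
}

def alignment_evaluation(observed: dict[str, float],
                         intended: dict[str, float]) -> list[str]:
    """Return misalignments between observed operation and design intent."""
    issues = []
    for name, limit in intended.items():
        value = observed.get(name)
        if value is None:
            issues.append(f"{name}: no telemetry available")
        elif name.startswith("max_") and value > limit:
            issues.append(f"{name}: observed {value} exceeds limit {limit}")
        elif name.startswith("min_") and value < limit:
            issues.append(f"{name}: observed {value} below floor {limit}")
    return issues

# One out-of-bounds outcome becomes a finding for joint review:
report = alignment_evaluation(
    {"max_conveyor_speed_m_s": 1.8, "min_worker_clearance_m": 0.6},
    INTENDED_PARAMETERS,
)
print(report)  # ['max_conveyor_speed_m_s: observed 1.8 exceeds limit 1.5']
```

In a collaborative evaluation, the intended parameters would come from the developer's design documentation and the observed values from workplace telemetry; any mismatch becomes evidence that operational outcomes have drifted from design intent.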

The article is further summarized in the blog post Exploring Approaches to Keep an AI-Enabled Workplace Safe for Workers.

Conclusion

NIOSH is excited about the many advantages AI can provide in the workplace and is committed to supporting its adoption by ensuring these technologies are safe for the workers who use them. These articles further those goals by providing practical strategies that can help OEHS professionals, workers, and employers safely navigate the use of AI in the workplace.

Author information

John P. Sadowski, Ph.D., was recently a technical analyst (contractor) in the NIOSH Office of the Director specializing in the workplace health and safety impacts of emerging technologies.

John Howard, M.D., is the NIOSH Director.

  1. Sadowski JP [2025]. Some principles for understanding AI hazards in the workplace. Synergist June/July 2025:23-28. Link for AIHA members: https://synergist.aiha.org/202506/understanding-ai-hazards
  2. Howard J, Schulte P [2024]. Managing workplace AI risks and the future of work. Am J Ind Med 67(11):959-1054. https://onlinelibrary.wiley.com/doi/10.1002/ajim.23653