AI can be used to automate complex technical processes, high-stress activities associated with a high accident risk, and complex decision-making processes. It is used in a wide variety of systems across almost all sectors. Applications include collaborative robots (cobots), wearable technologies, smart exoskeletons [25], smart personal protective equipment (PPE), self-driving vehicles, chatbots and AI-based programs for legal case management or HR management [26].
AI is suitable for repetitive, standardized and comparatively simple tasks [27]. Generative AI, however, also enables knowledge workers to perform creative tasks significantly better and faster; workers with below-average performance benefit particularly strongly from it [28]. AI is able to streamline and speed up processes, reduce employees' workload, alleviate staff shortages [29], and make work easier for older people and people with impairments, thus enabling them to return to employment [26]. AI-based further training systems have the potential to support employees individually and efficiently during the learning process, and to close skills gaps in organizations [30].
AI can help to recognize patterns in accidents (including near-accidents) and make predictions based upon them. It is able to evaluate large volumes of real-time data and warn workers in good time, for example of sudden exposure to hazardous substances [31]. It can also be used to analyse specific work situations against data pools on typical hazardous situations, thereby enabling the risk assessment to be partially automated [27; 32]. Technical data sheets and product recall databases can be processed automatically with machine learning methods (natural language processing). AI can then identify hazards, highlight correlations and propose suitable risk mitigation measures [30]. In addition, data from company audits and accident records can be used to identify companies with a stronger need for consulting support [33].
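As a purely illustrative sketch of the principle, not a description of the systems cited above, the following Python fragment shows how GHS hazard statements (H-phrases) could be pulled from the free text of a safety data sheet and mapped to mitigation hints. The regular expression, the example text and the hint table are all hypothetical assumptions; a production system would use trained language models rather than simple pattern matching.

```python
import re

# Hypothetical sketch: GHS hazard-statement codes are of the form
# H2xx (physical), H3xx (health); this pattern matches those two groups.
H_STATEMENT = re.compile(r"\bH([23]\d{2})\b")

# Hypothetical excerpt from a safety data sheet
sds_text = (
    "Section 2: Hazards identification. H225 Highly flammable liquid "
    "and vapour. H319 Causes serious eye irritation."
)

# Hypothetical lookup table mapping codes to risk mitigation hints
mitigation_hints = {
    "H225": "store away from ignition sources, ensure ventilation",
    "H319": "provide eye protection and an eye-wash station",
}

for match in H_STATEMENT.finditer(sds_text):
    code = "H" + match.group(1)
    hint = mitigation_hints.get(code, "consult the full safety data sheet")
    print(f"{code}: suggested measure -> {hint}")
```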
AI systems can also be used directly to improve employee safety, above and beyond conventional safety functions, for example in the machinery sector. For instance, a camera-based assistance system can use AI to detect persons or parts of the body entering danger zones and place the machine in the safe state swiftly enough to prevent accidents.
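The following Python sketch illustrates the underlying logic only, under stated assumptions; it is not a description of any actual product. The coordinates, the Box type and the trigger_safe_state placeholder are hypothetical, the person detections would in practice come from an AI vision model, and a real implementation would run on certified safety hardware with a verified response time.

```python
from dataclasses import dataclass

@dataclass
class Box:
    x1: float
    y1: float
    x2: float
    y2: float

    def overlaps(self, other: "Box") -> bool:
        # Two axis-aligned rectangles overlap iff they are not
        # separated along either axis.
        return not (self.x2 < other.x1 or other.x2 < self.x1
                    or self.y2 < other.y1 or other.y2 < self.y1)

# Hypothetical danger zone in camera/world coordinates (metres)
DANGER_ZONE = Box(2.0, 0.0, 4.0, 2.0)

def trigger_safe_state() -> None:
    # Placeholder: a real system would command the machine's safety
    # controller within its specified response time.
    print("Person detected in danger zone -> machine placed in safe state")

def on_frame(person_boxes: list[Box]) -> None:
    # Called for every camera frame with the person detections
    # delivered by the (hypothetical) AI vision model.
    if any(box.overlaps(DANGER_ZONE) for box in person_boxes):
        trigger_safe_state()

on_frame([Box(3.5, 1.0, 3.9, 1.8)])  # example detection inside the zone
```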
At the same time, AI poses a range of physical and psychological risks to employees. For example, the integration of AI components into machines and systems can affect their overall safety. Safety deficits in AI can lead to system failures, physical risks to employees, accidents or material losses for the company. Operator errors, misinterpretation and external manipulation due to inadequate cybersecurity are also possible [27].
From 20 January 2027, the new EU Machinery Regulation will apply, explicitly taking current developments in AI into account for the first time. Machinery products exhibiting fully or partially self-evolving behaviour through the use of machine learning are subject to a special conformity assessment procedure that includes certification by an accredited and notified test body. AI in safety-critical industrial applications thus demands considerable effort from manufacturers and test bodies alike. As yet, the conformity assessment process is not backed by specific, recognized technical rules or standards that define benchmarks for manufacturers and test bodies. Standardization work in this area is already in progress at European and international level, but is protracted. Benchmarks for the accreditation and notification of bodies tasked with testing such AI-based safety-critical solutions have also yet to be defined. The new AI Act will at least provide guidance. Overall, the currently inadequate regulatory situation may give rise to new safety risks, or prevent such risks from being adequately averted.
Robots equipped with AI are becoming increasingly mobile, collaborative and fully automated, making their actions less predictable. Inadequate monitoring or faulty algorithms can lead to unexpected actions and an increased risk of collision accidents. In addition, problems with the reliability of AI systems increase the risk of failures and malfunctions [35]. Automation of tasks can lead to more sedentary work and less task rotation, with employees performing more repetitive work and becoming more physically inactive [26].
The growing use of highly automated AI systems also encroaches on the organization of work. Many AI models produce opaque results that humans cannot readily comprehend, limiting workers' ability and freedom to act in human-machine interaction. The more all-encompassing the support provided to employees by AI systems, the greater the risk that human skills will fade [27]. Excessive dependence on technology can thus lead to de-skilling and harbour safety risks. Impacts on cooperation and mutual support between employees are also conceivable [26]. Humanoid design elements in AI systems can lead to excessive trust being placed in the technology and to its capabilities being misjudged [30]. Conversely, AI systems may trigger fears among employees of being replaced by machines, potentially leading to stress and loss of performance. The willingness to learn and high degree of mental flexibility required may also overtax employees [35].
AI can be used for the management and surveillance of employees: AI-based personnel management systems collect data, often in real time, on the workplace, the employees, their work and the digital tools they use to perform it. Such systems can significantly restrict employees' independence in performing their work, and cause stress. The resulting performance pressure can cause health problems such as musculoskeletal disorders or fatigue, increase the risk of accidents, and fuel fears for job security [26; 36].
Generally speaking, the use of AI in management presents both opportunities and risks, especially for complex tasks. AI can expand and facilitate managers' access to data, speed up communication and promote networking. Relief from routine tasks gives managers greater scope for employee-oriented leadership. At the same time, loss of control, excessive demands, loss of trust or neglect of qualitative aspects are conceivable [37]. In the education sector, generative tools can improve, personalize and at least partially automate learning and teaching. At the same time, however, problems may arise in the areas of data privacy and security, dependence on technology, social competence and responsibility [38; 39].
Large language models and other AI systems have already acquired the ability to deceive humans by manipulation and cheating in security tests [40]. Proactively addressing the issue of deception by AI is therefore important, particularly in view of the debate over whether AI could actually develop consciousness. At present, this does not seem plausible; however, since new chips are already being developed for AI that are no longer computer processors in the conventional sense, artificial intelligence with consciousness cannot be ruled out in the future [41].