Artificial intelligence (AI)

AI-generated picture of an autonomous mobile robot navigating between shelves in an industrial warehouse.

Source: Na-No Photos - stock.adobe.com

The Organisation for Economic Co-operation and Development (OECD) defines artificial intelligence (AI) as follows: an AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments. AI systems differ in their degree of autonomy and adaptability [1]. A distinction is drawn between "weak" and "strong" AI. Weak AI is limited to trained pattern recognition or to the comparison and searching of large volumes of data. Strong AI refers to systems capable of reasoning logically on their own; such systems are not yet within reach [2].

The majority of current AI systems, including the most powerful, are based on artificial neural networks (ANNs). Their neurons, loosely modelled on the human brain, are arranged in layers and used in machine learning. An AI system learns by recognizing patterns in available sample data, developing models from these patterns and applying what it has learned to new situations. Efficient learning requires large volumes of meaningful data. The "deeper", i.e. more complex, an ANN, the greater its possible degree of abstraction. "Deep learning" can be used to process challenging tasks such as image and speech recognition [3]. Generative AI refers to deep learning models that are able to generate high-quality text, images and other content from vast quantities of data. Large language models, such as those underlying ChatGPT, are one example [4].
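
To make this learning principle concrete, the following minimal Python sketch (purely illustrative, not drawn from the cited sources) trains a tiny two-layer neural network to reproduce the XOR pattern from four data samples. The layered neurons and the iterative adjustment of connection weights are the same mechanism that, at vastly larger scale, underlies deep learning:

    import numpy as np

    rng = np.random.default_rng(0)

    # Sample data: four inputs and the pattern (XOR) to be learned.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    # Connection weights of a hidden layer (8 neurons) and one output neuron.
    W1 = rng.normal(size=(2, 8))
    W2 = rng.normal(size=(8, 1))

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for step in range(10000):
        # Forward pass: propagate the inputs through both layers.
        hidden = sigmoid(X @ W1)
        out = sigmoid(hidden @ W2)
        # Backward pass: nudge the weights to reduce the prediction error.
        err_out = (out - y) * out * (1 - out)
        err_hid = (err_out @ W2.T) * hidden * (1 - hidden)
        W2 -= 0.5 * hidden.T @ err_out
        W1 -= 0.5 * X.T @ err_hid

    print(out.round(2))  # after training: typically close to [[0], [1], [1], [0]]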

In May 2024, the EU Member States passed the Artificial Intelligence Act, the world’s first comprehensive law regulating AI. The AI Act defines several risk categories: applications that pose an unacceptable risk are prohibited; those posing a high risk are subject to particular legal requirements; low-risk systems and applications must satisfy only limited transparency and information obligations [5]. The aim of the AI Act is to enable the risks of AI systems to be controlled throughout the systems’ entire life cycle.


  • What is accelerating the trend, and what is slowing it down?

    AI is considered a key technology with the capacity to drive the digital transformation forward. Conversely, the fundamental driver of AI is the dynamic spread of digital technology throughout all areas of life. The performance of IT systems continues to increase at a dramatic pace: ever more powerful computers (supercomputers) are being developed, and even the emergence of quantum computers is foreseeable in the near future. At the same time, vast quantities of data (big data) are becoming available, often in real time. The digital transformation extends to all areas of value creation: processes such as consulting, sales, payment, training, workshops, meetings and job interviews increasingly take place online, and the use of smartphones and other smart devices for leisure activities is also on the increase. The transfer of physical objects into data ("dematerialization") makes them available for use within AI processes [6].

    Digital networking is also becoming increasingly dynamic. Objects, data, processes (for example for purchase, payment, delivery and production) and living beings are being interconnected; consequently, experts are already talking about an "Internet of Everything" that extends the boundaries of the Internet of Things (IoT). New technologies are expected to provide further impetus for AI. These include the further development of self-driving vehicles, which depend upon AI. Another example is the 6G wireless communications standard: AI and the 6G network are expected to be mutually dependent [6; 7].

    Cybercrime and cybersecurity are becoming increasingly relevant to companies and institutions. Criminals are using AI to refine their attack methods to the point where conventional security tools are barely capable of detecting them. Conversely, AI also offers potential for risk detection, enabling malicious attacks to be recognized, analysed and responded to more quickly [8]. In the "arms race" between attackers and defenders, experts believe that companies have no option but to make use of self-learning AI, which uses intelligent algorithms to detect the patterns and anomalies that precede a cyberattack [9].
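
    As a rough illustration of how such self-learning detection can look (the traffic features and values below are hypothetical assumptions, not the workings of any specific security product), an anomaly detector can be trained on records of normal behaviour and then flag deviations:

        import numpy as np
        from sklearn.ensemble import IsolationForest

        rng = np.random.default_rng(1)

        # Hypothetical features per connection: bytes sent, duration (s),
        # failed logins. Normal traffic is used to learn a baseline.
        normal_traffic = rng.normal(loc=[500.0, 2.0, 0.1],
                                    scale=[100.0, 0.5, 0.3],
                                    size=(1000, 3))

        detector = IsolationForest(contamination=0.01, random_state=0)
        detector.fit(normal_traffic)

        # Score new observations; -1 marks an anomaly worth investigating.
        new_events = np.array([[520.0, 2.1, 0.0],      # ordinary connection
                               [50000.0, 0.2, 12.0]])  # data burst + failed logins
        print(detector.predict(new_events))            # e.g. [ 1 -1]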

    The growing pressure to tackle climate change is a further potential driver of AI. Machine learning can support strategies for climate protection and adaptation to climate change: in power generation and distribution, in the production of goods, in agriculture and forestry, and in disaster prevention. A specific example is the recognition of patterns in historical weather data, which can be exploited to develop systems for the early detection of climate change. AI applications also enable climate models to be simulated in a way that requires less computing power and thus consumes less energy [10].
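
    A toy example of such pattern recognition in historical weather data (with synthetic values, purely for illustration) is a simple regression that surfaces the long-term trend behind year-to-year variation:

        import numpy as np

        rng = np.random.default_rng(2)
        years = np.arange(1980, 2024)
        # Synthetic record: a slow warming trend plus natural variation.
        temps = 9.0 + 0.03 * (years - 1980) + rng.normal(0.0, 0.4, years.size)

        slope, intercept = np.polyfit(years, temps, deg=1)
        print(f"estimated trend: {slope * 10:.2f} °C per decade")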

    Implementing AI technologies calls for relevant expertise and infrastructure, changes to operational processes [11; 12] and substantial financial investment [13]. This outlay is often beyond the means of companies, particularly small companies and start-ups, and may be an obstacle to the wider adoption of AI. The complexity of the technology, with often opaque decision-making methodologies and barely verifiable results in "black box" applications, may also limit the acceptance of AI in companies and among decision-makers [12]. Concerns regarding data privacy and ethics are a further potential source of resistance to AI [14].

    The requirements of the EU's Artificial Intelligence Act are complex and place high demands on regulated organizations. Many AI applications give rise to high risks and therefore entail considerable additional overhead [15]. Over-regulation (whether actual or perceived) and a plethora of prohibitions and sanctions can fuel fears and inhibit the development of new AI technologies. Bureaucratic hurdles can limit the growth of AI, particularly if the AI Act is not implemented uniformly across the EU or fails to address new technological developments [16].

    The relationship between AI and the shortage of skilled and other personnel is not yet clear, not least because the technology is still in its infancy. Examples of AI reducing the workload of employees can be found in almost all sectors. However, AI also creates new, demanding tasks, for example in programming and monitoring [14; 17], particularly in the initial phase of its implementation. The further progress of AI will also be influenced by the ability of employees to interact effectively with it (including generative AI) and to acquire the necessary knowledge of its possibilities and limitations [18].

  • Who is affected?

    According to a business survey conducted by the ifo Institute, AI was particularly widespread in industry (including the automotive, mechanical engineering and pharmaceutical sectors) in 2023, but was also increasingly being used in commerce and the service sector [19]. Large companies use AI significantly more frequently than small and medium-sized enterprises [20]. In 2024, the non-profit sector led the way in AI job advertisements, followed by the aerospace and defence sector and the media and communications sector. The IT sector ranked only eleventh; there, AI expertise appears to be only one of many desired skills [21].

    With regard to generative AI, the management consultancies McKinsey (2023) [22] and PwC (2024) [23] see the greatest near-term potential in complex, highly skilled areas of work and in sectors where large volumes of data are processed. These include financial companies, pharmaceutical companies, the life sciences, the teaching professions, the tech and software sector, and the media and entertainment industry.

    The relevance of AI is set to increase enormously in all sectors. A survey conducted by the IFBG (2023) predicts that AI will gain in importance more strongly over the coming years than any other trend [24].

  • What do these developments mean for workers' safety and health?

    AI can be used to automate complex technical processes, high-stress activities associated with a high accident risk, and complex decision-making processes. It is used in a wide variety of systems across almost all sectors. Applications include collaborative robots (cobots), wearable technologies, smart exoskeletons [25], smart personal protective equipment (PPE), self-driving vehicles, chatbots and AI-based programs for legal case management or HR management [26].

    AI is suitable for repetitive, standardized and comparatively simple tasks [27]. Generative AI, however, also enables knowledge workers to perform creative tasks significantly better and faster; workers with below-average performance benefit particularly strongly from it [28]. AI can streamline and speed up processes, reduce employees' workload, alleviate staff shortages [29], and make work easier for older people and people with impairments, thus enabling them to return to employment [26]. AI-based further training systems have the potential to support employees individually and efficiently during the learning process, and to close skills gaps in organizations [30].

    AI can help to recognize patterns in accidents (including near-accidents) and make predictions based upon them. It can evaluate large volumes of real-time data and warn workers in good time, for example of sudden exposure to hazardous substances [31]. It can also be used to analyse specific work situations against data pools of typical hazardous situations, thereby enabling the risk assessment to be partially automated [27; 32]. Technical data sheets and product recall databases can be processed automatically by means of machine learning methods (natural language processing); AI can then identify hazards, highlight correlations and propose suitable risk mitigation measures [30]. In addition, data from company audits and accident records can be used to identify companies with a greater need for consulting support [33].
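
    As a hedged sketch of such natural language processing (the hazard texts and the simple similarity matching below are illustrative assumptions, not the methods used in the cited sources), a new incident report can be matched against a pool of documented hazardous situations:

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.metrics.pairwise import cosine_similarity

        # Hypothetical pool of documented hazardous situations.
        known_hazards = [
            "worker struck by reversing forklift in loading zone",
            "exposure to solvent vapours during tank cleaning",
            "fall from ladder while servicing overhead conveyor",
        ]
        new_report = "strong smell of solvent vapours noticed while cleaning a storage tank"

        # Vectorize all texts and find the most similar documented hazard.
        vec = TfidfVectorizer().fit(known_hazards + [new_report])
        scores = cosine_similarity(vec.transform([new_report]),
                                   vec.transform(known_hazards))[0]
        best = int(scores.argmax())
        print(known_hazards[best], round(float(scores[best]), 2))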

    AI systems can also be used directly to improve the safety of employees, for example in the machinery sector, above and beyond conventional safety functions. For instance, a camera-based assistance system can use AI to detect persons or parts of the body entering danger zones and use this information to bring the machine into a safe state quickly enough to avoid accidents.
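
    The underlying decision logic can be outlined as follows; this is a deliberately simplified sketch in which the AI vision model is stubbed out, and all names, coordinates and interfaces are hypothetical:

        from dataclasses import dataclass

        @dataclass
        class Box:
            """Axis-aligned bounding box in workspace coordinates (metres)."""
            x1: float
            y1: float
            x2: float
            y2: float

            def overlaps(self, other: "Box") -> bool:
                return not (self.x2 < other.x1 or other.x2 < self.x1 or
                            self.y2 < other.y1 or other.y2 < self.y1)

        DANGER_ZONE = Box(0.0, 0.0, 2.0, 1.5)  # hypothetical zone around the machine

        def detect_persons(frame) -> list[Box]:
            # Stub for the AI vision model returning person bounding boxes.
            return [Box(1.5, 1.0, 1.9, 1.4)]  # dummy detection for illustration

        def monitor(frame, stop_machine) -> None:
            # Request the safe state as soon as anyone enters the danger zone.
            if any(p.overlaps(DANGER_ZONE) for p in detect_persons(frame)):
                stop_machine()

        monitor(frame=None, stop_machine=lambda: print("safe state requested"))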

    At the same time, AI poses a range of physical and psychological risks to employees. For example, the integration of AI components into machines and systems can influence their overall safety. Safety deficits in AI can lead to system failures, physical risks to employees, accidents or material loss for the company. Operator error, misinterpretation and outside manipulation exploiting poor cybersecurity are also possible [27].

    From 20 January 2027, the new EU Machinery Regulation will apply; for the first time, it explicitly takes current developments in AI into account. Machine products exhibiting fully or partially self-evolving behaviour through the use of machine learning are subject to a special conformity assessment procedure that includes certification by an accredited and notified test body. AI in safety-critical industrial applications necessitates considerable effort on the part of both manufacturers and test bodies. As yet, the conformity assessment process is not supported by specific, recognized technical rules or standards that define benchmarks for manufacturers and test bodies. Standardization processes in this area are already in progress at European and international level, but are protracted. Benchmarks for the accreditation and notification of bodies tasked with testing such AI-based safety-critical solutions have also yet to be defined. The new AI Act will at least provide guidance here. Overall, the currently inadequate regulatory situation may give rise to new safety risks, or prevent them from being adequately averted.

    Robots equipped with AI are becoming increasingly mobile, collaborative and fully automated, making their actions less predictable. Inadequate monitoring or faulty algorithms can lead to unexpected actions and an increased risk of collision accidents. In addition, problems with the reliability of AI systems increase the risk of failures and malfunctions [35]. Automation of tasks can lead to more sedentary work and less task rotation, with the result that employees perform more repetitive work [26] and become more physically inactive.

    The growing use of highly automated AI systems also affects the organization of work. Many AI models produce opaque results that are not comprehensible to human logic and that limit workers' ability and freedom to act in human-machine interaction. The more all-encompassing the support provided by AI systems to employees, the greater the risk that human skills will fade [27]. Excessive dependence on technology can thus lead to de-skilling and harbour safety risks. Impacts on cooperation and mutual support between employees are also conceivable [26]. Humanoid design elements in AI systems can lead to excessive confidence being placed in the technology, and to its capabilities being misjudged [30]. Conversely, AI systems may trigger fears among employees of being replaced by machines, potentially leading to stress and loss of performance. The required willingness to learn and the high degree of mental flexibility demanded may also overtax employees [35].

    AI can also be used for the management and surveillance of employees: AI-based personnel management systems collect data, often in real time, on the workplace, the employees, their work and the digital tools they use to perform it. Such systems can significantly restrict employees' independence in performing their work, and cause stress. The resulting performance pressure can cause health problems such as musculoskeletal disorders or fatigue, increase the risk of accidents, and stoke fears regarding job security [26; 36].

    Generally speaking, the use of AI presents both opportunities and risks, especially for complex tasks. AI can expand and facilitate managers' access to data, speed up communication and promote networking. Relief from routine tasks offers greater scope for employee-oriented management. At the same time, loss of control, excessive demands, loss of trust or neglect of qualitative aspects are conceivable [37]. In the education sector, generative tools can also improve, personalize and at least partially automate the learning and teaching process. At the same time, however, problems may arise in the areas of data privacy and security, dependency upon technology, social competence and responsibility [38; 39].

    Large language models and other AI systems have already acquired the ability to deceive humans, for example by manipulation or by cheating in safety tests [40]. Proactively addressing the issue of deception by AI is therefore important, particularly in view of the debate over whether AI could actually develop consciousness. At present this does not seem plausible; however, since chips are already being developed for AI that are no longer computer processors in the conventional sense, artificial intelligence with consciousness cannot be ruled out in the future [41].

  • What observations have been made for occupational safety and health, and what is the outlook?
    • AI can change work at an organizational and individual level and make activities, processes and individual work steps fundamentally easier and more inclusive. It can do so for example through cobots, self-learning systems, intelligent PPE or assistance systems providing physical support. At the same time, however, AI also harbours risks, for example in terms of cybersecurity, data privacy and ethics. The occupational safety and health community must continue to observe and assess both the emerging potential of the technology and the risks presented by it, in order to provide new or adapted prevention services in good time.
    • A confident and critical approach to AI in all areas of life is a key competence for future generations. Promoting acquisition of this competence as early as possible is a task for society as a whole, and one which also affects the German Social Accident Insurance.
    • As the direct point of contact for their member companies, the German Social Accident Insurance Institutions must be familiar with the use of AI under the Machinery Regulation and the AI Act. This also includes the development of expertise in the test bodies in DGUV Test, and involvement in standardization activity.
    • Involving the workers themselves in the shaping of work processes is an important factor for the acceptance of AI in such processes. This should also be reflected in the available OSH-related consulting and information [42].
    • AI is bringing about structural changes in the world of work: new vocations are emerging, and competencies of a different nature - including skills relating to safety and health at work - will be in demand. Suitably adapted training provision is also required in the area of occupational safety and health in order to secure and maintain workers’ employability. Finally, it is important that the labour inspectors of the German Social Accident Insurance are trained in this area and thereby equipped to provide expert advice on the complex topic of AI.
    • A considerable need for research still exists, for example into issues relating to the robustness and accuracy of systems employing AI, and also into further development of the concept of trustworthy AI and its transfer into practice. In view of AI’s cross-sector relevance, occupational safety and health must both strengthen the expertise and in-house research of the German Social Accident Insurance in this area, and expand networking with universities and other research institutions.
    • The Competence Centre for Artificial Intelligence and Big Data at the Institute for Occupational Safety and Health of the DGUV (IFA) supports the individual German Social Accident Insurance Institutions in planning and implementing specific AI projects. It is also a point of contact for policymakers, research and wider society.

Contact

Dipl.-Psych. Angelika Hauke

Interdisciplinary Services

Tel: +49 30 13001-3633


Dipl.-Übers. Ina Neitzner

Interdisciplinary Services

Tel: +49 30 13001-3630
Fax: +49 30 13001-38001