Original article published by AIHS
While AI can add significant value to organisations and the work environment, the OHS impact of such emerging technology, and its potential harm to employees, are less understood and acknowledged, according to the NSW Government’s Centre for Work Health and Safety.
Although there is growing emphasis on the general impact and ethical implications of AI solutions, very little attention is paid to its impact in the workplace and on the health and safety of employees, said Sazzad Hussain, senior research and data science officer for the Centre for Work Health and Safety.
“In fact, there is not only a gap in the understanding of potential OHS implications in using AI but also a lack of resources and tools for assessing and mitigating potential hazards and risks,” said Hussain, who was speaking ahead of the AIHS National Health and Safety Conference which will be held from 25-26 May 2022.
For example, the use of AI might affect the physical health and safety of employees if it intensifies workflows, according to Hussain, who said this could push workers to accelerate their pace of work and create new OHS hazards.
“But even so, AI is perceived to likely have more impact on employees psychologically than physically,” he said.
“For instance, the lack of transparency and explainability of predictions or recommendations made by AI systems, such as in monitoring and tracking individuals, may cause anxiety and stress to employees.
“As a result of task automation, AI could be seen as competence-enhancing for employees but may instead prove competence-destroying. Moreover, the loss of task autonomy could undermine employees’ sense of self-accomplishment.”
Hussain explained the rapid ascent of any novel technology poses emerging OHS challenges in terms of risk assessment models and how they are managed.
“What is essential to consider is how the technology, such as AI, may transform work roles and functions, as the creation of new workflows through AI solutions may entail important elements of work redesign for employees,” he said.
Like any technological innovation in operating environments, Hussain said AI may elevate levels of uncertainty and pose unknown risks and hazards to employees that only become apparent after the innovation is implemented.
“In making predictions and recommendations, AI systems may shift the nature of work by minimising human involvement and oversight of traditional operational processes, such as scheduling or allocating employee workloads,” he said.
“As the AI system evolves over time, learning and adapting to new information, its predictions and recommendations will continuously reshape as well, making such systems less transparent and explainable than traditional systems.
“Moreover, besides being used to optimise workflows and identify the most diligent employees, AI can even be misused to discipline employees.”
Hussain observed there is little evidence of organisations taking strategic approaches to anticipate the impacts of AI risks on OHS beyond the intended operational process or workflow change.
A survey by McKinsey Digital detailed a range of AI risks and found that only a minority of organisations recognised the risks of AI use. Although 40 to 60 per cent of organisations expressed concern about the cybersecurity, regulatory compliance, and personal/individual privacy risks of AI, Hussain said risks that directly affect the workforce, such as displacement (31 per cent) and physical safety (19 per cent), appeared to be much less of a concern.
“Moreover, even fewer organisations were working to minimise the risks,” said Hussain. He explained that current OHS management schemes appear best suited to situations where there is a straightforward link between the cause of an OHS hazard and its resolution, and that they generally tend to focus on physical hazards.
“They are less suited for scenarios where the hazard is ambiguous, and its resolution is complicated and multi-faceted, as is the case with using AI in the workplace,” he said.
Research led by the Centre for Work Health and Safety has found a broad consensus on the importance of understanding the OHS impacts of AI and managing its risks.
“The organisational implications of AI use are potentially resulting in new data-sharing arrangements, new job descriptions and the creation of new positions,” said Hussain.
“However, OHS implications of AI were more typically late considerations, commonly raised during the use of AI rather than in its design stage.”
Hussain observed there is a lack of resources with a specific focus on OHS and the use of AI; however, guidelines for ethical AI have been developed through government and industry-led initiatives.
For example, Hussain said the OECD Principles on AI were adopted by member countries and subsequently by G20 in 2019 to promote innovative and trustworthy AI that respects human rights and democratic values.
Another example is Australia’s Artificial Intelligence Ethics Framework, which aims to guide businesses and governments to responsibly design, develop and implement AI by setting out eight ethical principles designed to ensure its safe, secure and reliable use.
“The Centre for Work Health and Safety (the Centre) and its future world of work program have done just that, bringing the OHS angle to the ethical use of AI,” said Hussain.
“Through this program, the Centre is leading a variety of innovative projects in workplace harm prevention, creating new knowledge and tools to support emerging work types, technologies and environments, and helping businesses as well as WHS/OHS regulators.”
Hussain said the Centre, in partnership with the South Australian Centre for Economic Studies, the Australian Institute for Machine Learning (University of Adelaide), and the Australian Industrial Transformation Institute (Flinders University of South Australia), developed an AI risk assessment tool that incorporates the AI Ethics Principles with the Characteristics of Work from Safe Work Australia’s Principles of Good Work Design.
“This was an evidence-based approach, conducting research through a series of consultations with AI experts, WHS professionals, regulators and policymakers, representatives from organisations adopting or having adopted AI, and others with knowledge in the field,” he said.