How will artificial intelligence (AI) shape the future of work? What impact will it have on our workplaces, our jobs and our careers? Geneva Macro Labs recently raised these questions in a conference on the opportunities and risks of algorithms for employment, alternative forms of management, skills development and career prospects. Together with representatives from businesses, trade unions and academia, the conference addressed the limitations of current AI applications in human resources and in the workplace.
One of the central concerns is the use of AI to measure the performance of an individual or a team. As an example, we discussed how large companies use machine learning to monitor their freelance workforce in particular, often with the intention of automating management decisions, including sensitive ones such as terminating contracts. Using AI for such surveillance purposes, without developing its potential to support workers, paves the way for a Big Brother version of the future of work. It sidesteps underlying ethical issues, such as bias in decision making and the lack of adequate accountability and litigation mechanisms to challenge automated management decisions.
A second important challenge lies in the impact of AI on decision-making processes. A key motivation for introducing AI into management in the first place is to reduce human error and noise. With sufficient data and resources, AI can support and inform complex decision-making processes and assist with scheduling, logistics or planning. At the same time, this efficiency is also AI's most important drawback, since it can harm not only individuals but entire teams or firms: machine learning is not a team player. It tends to ignore less quantifiable qualities such as team spirit and to sidestep underlying management principles such as resilience and good governance. Furthermore, it lacks the common sense to take into account specific circumstances that might explain delays in delivery or production. Without a careful balance between human and machine, AI might destabilize teams and disempower both supervisors and employees.
Another point that was regularly raised is that an algorithm is only as good as the underlying data. When the data are biased – for instance due to historical discrimination – or drawn from a limited sample of the population, the resulting automated decisions will reproduce those biases rather than help make society more inclusive. The challenge therefore lies in carefully selecting and interpreting the data, as well as wisely choosing where to apply the results and where to set them aside. This requires not only appropriate regulation but also the involvement of a broad array of stakeholders and social partners to ensure that workers' rights are protected.
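How biased data leads to biased automated decisions can be shown with a minimal sketch. All names and numbers below are invented for illustration: a naive "model" that simply replays historical hiring rates per group inherits the discrimination baked into those records, which a basic disparity check (the well-known four-fifths rule of thumb) immediately flags.

```python
# Minimal illustration (hypothetical data): a model trained on biased
# historical hiring decisions reproduces that bias in its predictions.

# Historical records: (group, hired) - group "B" was hired far less often.
history = ([("A", True)] * 80 + [("A", False)] * 20
           + [("B", True)] * 30 + [("B", False)] * 70)

def selection_rate(records, group):
    """Share of applicants from `group` who received a positive decision."""
    decisions = [hired for g, hired in records if g == group]
    return sum(decisions) / len(decisions)

# A naive "model" that just replays the historical base rate per group.
def biased_model(group):
    return selection_rate(history, group) >= 0.5

print(selection_rate(history, "A"))          # 0.8
print(selection_rate(history, "B"))          # 0.3
print(biased_model("A"), biased_model("B"))  # True False

# Four-fifths rule of thumb: flag disparate impact if one group's
# selection rate falls below 80% of another's.
ratio = selection_rate(history, "B") / selection_rate(history, "A")
print(ratio < 0.8)  # True -> disparate impact flagged
```

The point of the sketch is that no explicit rule against group "B" was written anywhere; the disadvantage lives entirely in the training data, which is why careful data selection and auditing matter.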
What else needs to be done in a collective effort to create a secure and responsible future of work?
To date, AI has been subject only to limited regulations around data privacy and product safety, which were not specifically designed to address the challenges of machine learning. An international framework similar to the European General Data Protection Regulation might prevent a Wild West of individual approaches driven by market-dominant companies, each prioritizing its own interests. Such a framework agreement would ideally include an open data policy and regular algorithmic stress tests to ensure that algorithms are secure, inclusive and transparent. At the national level, governments could implement mission-driven innovation programs that promote the development and deployment of algorithms that meet safety standards and create a human-machine interaction for the benefit of a sustainable future of work. Overall, the limitations of AI for the future of work lie not in the technology itself, but in how we use it.
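The conference did not specify what an "algorithmic stress test" would look like in practice. As one hypothetical sketch, a simple audit could check that a decision function is invariant to a protected attribute: flip only that attribute and see whether the outcome changes. The decision rule, attribute names and data below are all invented for illustration.

```python
# Hypothetical sketch of one simple "algorithmic stress test":
# verify a decision function is invariant to a protected attribute.
def audit_attribute_invariance(decide, applicants, attribute, values):
    """Return the applicants whose decision flips when only `attribute` changes."""
    failures = []
    for person in applicants:
        outcomes = set()
        for value in values:
            variant = dict(person, **{attribute: value})
            outcomes.add(decide(variant))
        if len(outcomes) > 1:  # outcome depended on the protected attribute
            failures.append(person)
    return failures

# A deliberately flawed rule that uses the protected attribute directly.
def flawed_decision(applicant):
    return applicant["experience"] >= 3 and applicant["group"] != "B"

applicants = [
    {"group": "A", "experience": 5},
    {"group": "A", "experience": 1},
]

flagged = audit_attribute_invariance(
    flawed_decision, applicants, "group", ["A", "B"]
)
print(len(flagged))  # 1 -> the experienced applicant's outcome flips by group
```

Real stress tests would need to go much further (proxy variables, group-level outcome statistics, robustness to perturbed inputs), but even this simple invariance check illustrates that such audits can be made concrete and repeatable.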