Milan, September 2025 – If one of the biggest concerns of Italians in recent years has been “AI Anxiety”, the fear of being replaced at work by Artificial Intelligence, today the new nightmare for employees, especially those of large multinationals, is being fired because an AI algorithm has decided that their performance no longer meets production “standards”.
Examples abound, from the tens of thousands of people laid off by Californian Big Tech to the multinational oil company BP, which appears to be preparing to cut over 7,000 employees and contractors with the help of AI.
In Italy, where personnel-monitoring software is already incompatible with labor law, legal experts in labor law warn that if an algorithm managed by artificial intelligence decides a worker’s fate, any dispute would open the door to procedural problems concerning the burden of proof.
“When an employer uses an algorithm to make decisions,” explains Rita Santaniello, a lawyer at the multinational firm Rödl & Partner and one of Italy’s leading labor-law experts, “think of the case of dismissal for poor performance: the worker may have difficulty challenging the decision, since he does not have access to the so-called ‘source code’, that is, the set of instructions written in a programming language that defines how the algorithm itself works. In these situations, the burden of proof shifts to the employer, who must prove that the algorithm operated correctly. If the employer fails to provide adequate evidence of the algorithm’s correct use, the judge could declare the measure illegitimate.”
The AI Act, the European Union regulation that establishes rules for the development, use and marketing of AI systems in the EU, fits into this picture. It lays down specific requirements and obligations for artificial intelligence systems considered high-risk, such as those used to select and screen résumés or to evaluate employees and decide on promotions and dismissals, with the aim of ensuring that the use of AI is safe, ethical and respectful of rights.
“An employer who intends to use high-risk AI systems such as those described above correctly,” Santaniello clarifies, “should first refer to Article 26 of the AI Act, in particular paragraph 7, which states that before putting into service or using a high-risk AI system in the workplace, employers shall inform the workers’ representatives and the workers concerned that they will be subject to the use of the high-risk AI system, and then to Article 14 (read in conjunction with Article 26, paragraph 2), which requires adequate human oversight to minimize risks to health, safety or fundamental rights, since decisions taken with the support of AI systems always remain the responsibility of the employer.”