How Prosegur is strengthening ethical and responsible use of Artificial Intelligence

Madrid, February 6, 2024.- Prosegur, Spain's leading security group, is once again taking the initiative to protect society by publishing its first Responsible Artificial Intelligence Policy, which will apply to all the countries where it operates. The policy guarantees the rights and freedoms of anyone who could be affected by the use of AI and establishes the principles governing its use in every project that incorporates it, while the company continues to apply cutting-edge technology across its products and services, keeping it at the forefront of innovation.

"While AI brings great benefits, we must also prevent its risks," explains Miguel Soler, Chairman of the Responsible AI Committee and Legal and Compliance Director of the Prosegur Group, adding: “The ability of machines to use algorithms, learn from data and use knowledge like humans has increased substantially, and there are more and more uses in which it is employed. However, being such a powerful technology in every sense, it also brings with it a great responsibility in terms of its application and monitoring.”  

To address these potential risks, Prosegur has established three pillars: AI developed or acquired by the company must be lawful (ensuring respect for all applicable laws and regulations), ethical (ensuring compliance with ethical principles and values) and robust (from both a technical and a social point of view, since AI systems can cause unintentional harm).

Soler explains: "The use of AI is growing day by day, particularly in relation to video analytics solutions, as we have experienced at Prosegur. With this new policy of responsible use of AI we want to join other large companies that place technology at the center of their operations, but we intend to do so in an ethical, lawful and robust manner, as indicated by our regulations.”

Ethical principles, key requirements and methodology

In drawing up this policy, Prosegur has aligned itself with the ethical principles set out by the European Commission, which aim to improve individual and collective well-being. The first is respect for human autonomy, guaranteeing human control over the workings of AI systems so that they enhance and complement people's skills. Another priority is the prevention of harm, ensuring safe and robust use. Prosegur's Responsible Artificial Intelligence Policy also highlights the principle of fairness, which seeks an equitable distribution of the benefits and costs of AI systems and makes it possible to challenge decisions they take. Finally, the principle of explainability requires that all AI development processes be transparent and clearly communicated.

A series of mechanisms has also been outlined to ensure responsible AI throughout the life cycle of these systems. These include human oversight to ensure that processes are safe and based on sound techniques; robust data management that protects privacy; transparency; inclusiveness, diversity and equal access; and attention to environmental and social well-being. Finally, accountability for AI systems and their results will also be required.

At the same time, the company has designed a detailed, specific methodology to implement this policy. First, the project manager will present the business model and the purpose of the solution. The AI Committee will then review the AI solution, ensure its compliance and define the requirements to be evaluated. Once the committee has given its approval, the AI developer will design, implement and test the solution, and finally the AI operator will run and monitor it.

In this way, the company can uphold ethical and moral values and ensure compliance with applicable standards while continuing to promote the use of technology through the development and acquisition of AI solutions.