CORRESPONDENCE FROM MADRID: Kaspersky calls for more transparency on the use of Artificial Intelligence

George Marinescu
English Section / 2 July

Clement Domingo, Liliana Acosta, Marc Rivero

Kaspersky is calling for full transparency on the use of Artificial Intelligence, a technology that, while it can bring significant efficiency gains and unlock new potential for companies and individuals, can also be misused by cybercriminals to facilitate the creation of malware and more.

At the Kaspersky Horizons conference, taking place these days in Madrid, the company's specialists presented a recent survey showing that 58% of companies fear the loss of confidential data, 52% fear a loss of trust, and 52% fear financial damage if they do not improve their protection against AI-based attacks. The necessary knowledge, however, is often lacking: 41% of respondents say they do not receive enough information from external experts on the current threats posed by AI-supported attacks.

According to company representatives, AI is developing rapidly, and while its potential is largely geared towards beneficial effects for companies and individuals, less sophisticated cybercriminals are already using AI to expand their malicious capabilities.

Jochen Michels, Head of Public Affairs Europe at Kaspersky, said: "Ethical AI is the foundation of trust, compliance and sustainable business success. It enables companies to effectively mitigate the risk of data breaches and AI-based threats, while complying with legal requirements such as the EU AI Act. In an increasingly digital world, the responsible use of AI is not just a technological issue, but a matter of integrity and long-term viability. Our guidelines for the safe development and deployment of AI, as well as the principles for the ethical use of AI in cybersecurity, enable companies to use AI safely and responsibly - protecting themselves from AI-generated threats without giving up the benefits of the technology."

"Artificial intelligence is increasingly being used to strengthen cybersecurity, from threat detection to anticipating attacks. But who decides how far these tools can go? When breaches occur, the most damaging "Large organizations often suffer the most from these, while large organizations may have fewer consequences. We need systems that are not only effective, but also fair and ethical, to ensure a fair distribution of power, responsibility, and impact in the digital world,” said Liliana Acosta, founder and CEO of Thinker Soul, a consulting firm that applies ethics, critical thinking, and philosophy to guide companies in their digital transformation.

Clement Domingo, an ethical hacker, shared his findings and experience as a cybersecurity advocate on the first day of the conference: "In recent months, I have witnessed the drastic use of AI by cybercriminals. They understand how revolutionary this technology is and are already successfully using it to improve their attacks. Therefore, it is extremely important for companies to also incorporate AI into their defense strategies, to strengthen their overall security measures. To do this, it is essential to understand the methods of criminals - sometimes even putting ourselves in their shoes - in order to be able to fight them more effectively. In this sense, AI can be a powerful tool - if used wisely and responsibly - to protect valuable data, infrastructure, privacy and the business as a whole."

That is why Kaspersky has developed a guide for the safe development and implementation of AI systems, providing concrete recommendations that companies can use to deploy AI safely and responsibly. In addition, the document "Principles for the Ethical Use of AI Systems in Cybersecurity" supplements these guidelines with ethical principles for the development and use of AI in the field of cybersecurity.

According to the guide developed by Kaspersky, the essential aspects of the development, implementation and operation of AI systems, including design, security best practices and integration, are: cybersecurity awareness and training; threat modeling and risk assessment; infrastructure security, including cloud; supply chain and data security; continuous testing and validation of models; defense against attacks specific to machine-learning systems; regular updates and maintenance; compliance with international standards (e.g. GDPR, EU AI Act); and regular audits to ensure legal, ethical and privacy alignment.

The "Principles for the Ethical Use of AI Systems in Cybersecurity", for their part, promote education and clear standards focused on transparency, security, human control and data protection, in order to effectively prevent the manipulation and misuse of AI applications.
