AI fuels a new era of cybercrime, says Kaspersky

by AKANI CHAUKE
JOHANNESBURG, (CAJ News) – KASPERSKY has unveiled its cybersecurity predictions for 2026, warning that rapid advances in artificial intelligence (AI) are fundamentally reshaping digital security for consumers and enterprises alike.

According to the company’s experts, large language models are enhancing defensive capabilities while simultaneously equipping cybercriminals with scalable tools for deception, fraud, and intrusion.

One of the most visible shifts is the normalization of deepfakes. Once a niche threat, synthetic media is becoming a permanent fixture on corporate risk agendas.

Organizations are increasingly training employees to recognize manipulated content, while everyday users are encountering fake videos, images, and audio more frequently.

As awareness improves, deepfakes are moving from an emerging risk to a structural security challenge that demands formal policies, continuous education, and technical safeguards.

Kaspersky predicts that deepfake quality will continue to rise, particularly in audio. While visuals have already reached high levels of realism, voice synthesis remains the fastest-advancing frontier.

At the same time, the barrier to entry is collapsing. User-friendly tools now allow non-experts to generate convincing synthetic content in minutes, dramatically expanding the pool of potential abusers and increasing overall threat volume.

More advanced online deepfakes, such as real-time face and voice swapping, will improve but remain largely in the hands of skilled operators.

Their technical complexity limits mass adoption, yet in targeted attacks their impact will grow as virtual cameras and improved realism make impersonation significantly harder to detect and investigate.

Efforts to label AI-generated content will intensify, but Kaspersky notes that no universal or tamper-resistant standard yet exists.

Current labels are easily removed, especially in open-source environments, prompting new technical and regulatory initiatives to address the problem.

The company also warns that open-weight AI models are rapidly approaching the capabilities of closed systems in cybersecurity-relevant tasks.

With fewer safeguards and broad availability, these models blur the line between legitimate innovation and malicious exploitation.

AI will increasingly span the entire cyber kill chain. “While AI tools are being used in cyberattacks, they are also becoming a more common tool in security analysis,” said Vladislav Tushkanov, Research Development Group Manager at Kaspersky.

“Agent-based systems will continuously scan infrastructure, surface vulnerabilities, and provide context, allowing security teams to focus on decisions rather than manual data collection.”

– CAJ News