Cybersecurity Is Changing: When AI Expands Both Attack and Defense Capabilities

For years, artificial intelligence has been positioned as a tool to strengthen cybersecurity. It detects anomalies, filters spam, and helps analysts respond faster. But a shift is underway. The same technologies, especially large language models, are now being used to scale and automate attacks. More recently, they have also become capable of identifying vulnerabilities in software code. This changes not only the speed of threats but also their nature.


What's happening

Cyberattacks are becoming more automated, adaptive, and convincing. Large language models can generate highly personalized phishing emails in multiple languages, without the typical signs of fraud. They can imitate writing styles, summarize stolen data, or assist in writing malicious code.

A more recent development is that these models can also analyze code and identify vulnerabilities. For example, Anthropic has warned that its latest model is capable of identifying and then exploiting zero-day vulnerabilities in widely used software. What used to require specialized expertise in secure coding or penetration testing can now be partially automated.
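To make this concrete, here is a deliberately simple, hypothetical example of the kind of weakness an AI code reviewer can now flag: a SQL query built by string formatting, which allows injection, next to the parameterized version that avoids it. The function names and schema are invented for illustration.

    import sqlite3

    def find_user_unsafe(conn: sqlite3.Connection, username: str):
        # Vulnerable: user input is interpolated directly into the SQL string.
        # An input like "x' OR '1'='1" changes the meaning of the query.
        query = f"SELECT id, email FROM users WHERE name = '{username}'"
        return conn.execute(query).fetchall()

    def find_user_safe(conn: sqlite3.Connection, username: str):
        # Safer: a parameterized query keeps data separate from SQL code.
        return conn.execute(
            "SELECT id, email FROM users WHERE name = ?", (username,)
        ).fetchall()

Classic scanners catch this particular pattern too. What changes with language models is that similar flaws can be found across unfamiliar codebases, at scale, without a handwritten rule for each variant.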

At the same time, attackers no longer need deep technical expertise. Tools built on AI lower the barrier to entry. A small group or even an individual can orchestrate campaigns that previously required teams.

This is not entirely new. Automation has always played a role in cybersecurity. What is different now is the combination of scale and accessibility. AI systems can iterate quickly, test variations, and adjust based on responses. Attacks become more like continuous experiments than one-off attempts.

Defenders are also using AI. Security teams rely on machine learning to detect unusual patterns, prioritize alerts, and increasingly to scan their own codebases for vulnerabilities. But the asymmetry is increasing. Attackers can often move faster than organizations can adapt, especially when the same tools are available on both sides.
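As a rough illustration of the defensive side, the sketch below uses scikit-learn's IsolationForest to score login events for anomalies. The features, training data, and threshold are assumptions made for the example, not a recommendation.

    # A minimal anomaly-scoring sketch, assuming scikit-learn is installed.
    # Features per login event (all hypothetical): hour of day, failed
    # attempts in the last hour, and whether the source IP was seen before.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)

    # Stand-in training data: mostly daytime logins, few failures, known IPs.
    normal = np.column_stack([
        rng.normal(13, 3, 500),                  # hour of day
        rng.poisson(0.2, 500),                   # recent failed attempts
        rng.choice([0, 1], 500, p=[0.1, 0.9]),   # known source IP (0/1)
    ])

    model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

    # A 3 a.m. login with many failures from an unseen IP scores as anomalous.
    event = np.array([[3, 8, 0]])
    print(model.predict(event))            # -1 means flagged as an outlier
    print(model.decision_function(event))  # lower score = more anomalous

Real deployments score far richer features, but the principle is the same: the model learns what normal looks like and surfaces deviations, which is exactly the kind of signal AI-driven attacks are designed to blend into.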


Why this matters

The ability of AI to find vulnerabilities in code adds a new layer of risk. Software weaknesses can be discovered faster, at scale, and sometimes before organizations are even aware of them. This shortens the window between a vulnerability existing and being exploited.

Trust is also affected. When phishing emails are no longer easy to spot, and when software itself may contain AI-discovered weaknesses, employees and users operate in a more uncertain environment. A well-crafted message that references real projects or colleagues can bypass even cautious behavior, especially if combined with technical exploits.

From a productivity perspective, the noise level increases. Security teams already deal with alert fatigue. AI-driven attacks can generate more signals, more variations, and more false leads. At the same time, AI-generated vulnerability reports can flood teams with findings that require prioritization and verification.


How this impacts you

For organizations, this means that cybersecurity is, more than ever, not just a technical topic. Development, communication, and security teams are becoming more closely linked. If AI can identify weaknesses in code, secure development practices and regular code reviews become even more important.

Leadership teams need to assess risks that are harder to quantify, because they evolve quickly. The question is no longer only whether systems are secure today, but how quickly new vulnerabilities could be discovered and exploited.

For communication and knowledge transfer, the challenge is to explain these changes without creating fear. The goal is not to suggest that everything is unsafe, but to show that the criteria for trust have shifted. Recognizing this shift is the first step toward responsible use.


What to do next

Integrate AI into secure development practices. Use it to scan code, identify weaknesses, and support developers, but combine this with human review and clear prioritization. Not every detected issue is equally critical.
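One way to combine automated scanning with human prioritization is to run a scanner in CI and route only higher-severity findings to reviewers. The sketch below assumes Semgrep is installed and that its JSON output contains "results" entries with a severity field; exact field names vary by tool and version, so treat this as a pattern rather than a drop-in script.

    # Sketch: run a static scanner and surface only high-severity findings
    # for human review. Assumes Semgrep is installed; adapt to your tooling.
    import json
    import subprocess

    scan = subprocess.run(
        ["semgrep", "scan", "--config", "auto", "--json"],
        capture_output=True, text=True,
    )
    findings = json.loads(scan.stdout).get("results", [])

    # Triage: humans review ERROR-level findings first; the rest is logged.
    urgent = [f for f in findings
              if f.get("extra", {}).get("severity") == "ERROR"]

    for f in urgent:
        print(f["path"], f["check_id"])
    print(f"{len(urgent)} urgent of {len(findings)} total findings")

The point of the triage step is the principle from above: automation widens the funnel of findings, and a deliberate prioritization rule keeps the human review queue manageable.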

Invest in targeted awareness. Generic phishing training is no longer sufficient. Use examples that reflect current capabilities, such as well-written, context-rich messages or realistic technical scenarios.

For leadership, integrate AI risk into broader decision processes (see also our article on AI audits). This includes procurement, partnerships, and communication strategies. Ask not only how AI can improve efficiency, but also how it might introduce new vulnerabilities.

Finally, build a basic understanding of how these systems work. You do not need deep technical expertise, but a clear mental model helps. Knowing that AI can generate, analyze, and sometimes exploit code is already a useful perspective when evaluating risks.

If this topic is relevant for your organization, learn more about our executive AI advisory and hands-on workshops to build internal capabilities.