According to training firm SoSafe, one in five people clicks on AI-generated phishing emails. This statistic alone underscores the importance of digital systems that empower people to detect phishing attempts themselves. At the same time, generative AI has advanced the power of cybersecurity threat prevention to meet this challenge.
So, how has generative AI affected security in businesses? And what capabilities can we expect from it in the future?
The opportunities and threats AI brings to the industry mean that businesses working with larger entities, such as federal agencies, must meet more stringent requirements to succeed. Read on to learn:
- How to mitigate AI-driven security risks
- Methods for leveraging generative AI in cybersecurity
- Practical strategies to safeguard digital assets
- How a combination of AI and training can protect your business in the AI era
How Has Generative AI Affected Security Efforts in 2025?
With generative AI able to automate not only cybersecurity but cyberattacks themselves, attempts to gain access to individuals' accounts have become more common than ever. Not only is it easier to produce phishing emails quickly, perfectly tailored to an individual's personal habits and life, but a system can also help code and deploy malware at the touch of a button. We have seen this over the past two years, with Investopedia reporting that phishing emails have increased by 1,265% since the fourth quarter of 2022.
AI systems can also automate many processes that would previously have been human-led, engaging directly with cyberattack victims. At the same time, these systems learn from every interaction and change their tactics if they don't succeed the first time. As such, they are only getting better at:
- Simulating human behavior
- Evading detection by traditional systems
- Compromising systems in record time
- Mimicking legitimate communication channels
Generative AI is so powerful that research firm AI Multiple reports organizations cannot detect AI-driven breach attempts without also deploying AI technologies in response. However, it also states that using AI can reduce the time to detect threats by up to 90%, illustrating how quickly the "arms race" in the world of cybersecurity is accelerating.
Ethical Challenges in AI-Driven Security Innovation
Because AI can be used for both defense and attack, many groups are now discussing the need for stricter safeguards against its malicious applications. However, such safeguards will not prevent the use of this technology by national security or intelligence organizations, or by countries not beholden to treaties that would prohibit such attacks.
Addressing the misuse of AI in hacking would require the entire technology industry, including its criminal elements, to adhere to ethical standards, which is an unrealistic goal. Governments and organizations therefore need to balance regulating AI to mitigate abuse with innovating to stay ahead of malicious AI development.
AI Security Risks Emerging from Deepfakes
Deepfake technology is already used in disinformation campaigns worldwide to erode public trust in major institutions. There are already videos online of people explaining to older generations which clips are AI-generated and which are real.
Organizations will need to educate their employees on identifying such manipulated media as part of a larger cybersecurity training pivot. Realistic combinations of fabricated text and imagery may cause significant morale concerns within a company or reputational damage for C-suite members. Attackers could even use fabricated images to blackmail employees, creating problems in their daily work lives.
AI-Powered Social Engineering Concerns
Generative AI phishing, as mentioned above, has long been a concern. However, it is no longer limited to email. Malicious actors can now leverage generative AI to produce a combination of:
- Email text
- Chatbot conversation
- Deepfake imagery
- Audio messages
- Video clips
Alone, any one of these could cause a problem. Together, they could create an escalating set of circumstances, even to the extent of radicalizing one or more members of the company against the business itself.
As such, manipulation tactics are not limited to internal systems; attackers may contact an employee via their personal email address or online profiles. Because these intelligent threats can bypass traditional security measures, the company must train its employees to recognize when they are a target so that they can flag the incident to security teams.
Vulnerable AI as a Security Risk
While AI may be imperative for the future of cybersecurity protection, it may also be an attack vector itself. Just as generative AI can learn to exploit human biases, it can learn to exploit biases or flaws in another model's training. An attacker can then use it to feed misleading data to a target system, a technique known as data poisoning, to influence that AI's behavior for the hacker's benefit.
Security experts must strengthen their AI model training moving forward to reduce this risk of exposure. They must also audit their AI systems regularly to check for and mitigate potential vulnerabilities.
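To make data poisoning concrete, here is a minimal Python sketch (using scikit-learn on a synthetic dataset; the 30% label-flip rate is an illustrative assumption, not drawn from any real incident) showing how corrupted training labels degrade a model that defenders might rely on:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Generate a toy "threat classification" dataset
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline: a model trained on clean data
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("clean accuracy:", clean_model.score(X_test, y_test))

# Poisoned: an attacker flips 30% of the training labels
rng = np.random.default_rng(0)
poison_idx = rng.choice(len(y_train), size=int(0.3 * len(y_train)), replace=False)
y_poisoned = y_train.copy()
y_poisoned[poison_idx] = 1 - y_poisoned[poison_idx]

poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
print("poisoned accuracy:", poisoned_model.score(X_test, y_test))
```

In practice, poisoning is usually subtler than random label flips, which is exactly why the regular audits mentioned above matter: comparing a model's behavior against a trusted holdout set can reveal this kind of drift.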
Leveraging Generative AI for Cybersecurity Defense
Groups like Egress already report that 91% of organizations experienced incidents caused by data loss or theft in 2024. Thus, it stands to reason that defense technologies must improve to prevent even more problems across the industry.
AI Solutions for Detecting and Countering Deepfake Threats
AI tools can detect and counter deepfake threats given enough pattern-recognition training and validation, but the learning process will always be ongoing. Deepfake technology is advancing so quickly that, while many such tools will emerge in the near future, they will likely need significant time before they can report whether an image or video is fake with enough confidence to be consistently useful.
For example, real-time scanning tools can start identifying discrepancies in manipulated content, but malicious actors can splice in sections of flawless video or crop imagery to evade such checks. As such, training employees on deepfake detection, and on being more critical of the media they consume, is likely to strengthen your organization's resilience to such issues.
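As a rough illustration of how such a real-time scanning pipeline might be assembled, the Python sketch below samples frames from a video and defers judgment to a detection model. The scorer is a hypothetical placeholder for whatever classifier your tooling provides, and the sampling interval and threshold are assumptions to tune:

```python
import cv2  # pip install opencv-python
import numpy as np
from typing import Callable, List

def scan_video(path: str,
               score_frame: Callable[[np.ndarray], float],
               threshold: float = 0.8,
               sample_every: int = 30) -> List[int]:
    """Sample frames from a video and flag those the supplied detector
    scores above `threshold`. Flags are leads for human review, not verdicts."""
    capture = cv2.VideoCapture(path)
    flagged = []
    index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        # Score every Nth frame; full-rate scanning is often too slow live
        if index % sample_every == 0 and score_frame(frame) >= threshold:
            flagged.append(index)
        index += 1
    capture.release()
    return flagged

# Usage: plug in your organization's detection model (hypothetical name):
# suspicious_frames = scan_video("incoming_clip.mp4", my_detector_score)
```

Keeping the detector pluggable matters here: as the article notes, detection models go stale quickly, so the scanning harness should outlive any single model.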
Automating Cyber Threat Detection with Generative AI
Generative AI can do more than detect examples of its own work in audiovisual media. With the right information, it can start identifying unusual patterns in any other dataset.
It starts by modeling what it understands as "normal" behavior. A cybersecurity team feeds it vast amounts of an organization's typical daily data so that it can work out where discrepancies might lie. Then, by learning to model that same data with its own generative models, it identifies:
- Patterns
- Relationships
- Dependencies
- Outliers
From these, it creates a baseline of typical behavior. When it reads new data, it compares it against this learned baseline, flagging any unusual or anomalous events.
The system can then analyze these events itself or pass them on to a human to investigate. As it learns what threats look like, it can also pick up the best methods to mitigate them, taking action as approved by a system administrator to prevent more significant intrusions.
One primary advantage of this approach is that the model evaluates both individual data points and their context within the larger dataset. This allows the AI to make more accurate predictions of anomalies, reducing the number of false positives it flags.
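The generative models described above are typically proprietary, but the same baseline-and-flag workflow can be prototyped with a classical anomaly detector. This Python sketch uses scikit-learn's IsolationForest in place of a generative model, with synthetic "daily activity" features standing in for real telemetry:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Illustrative features per event: transfer size, login hour, failed
# auth attempts. Real pipelines would use far richer data.
rng = np.random.default_rng(42)
normal_activity = np.column_stack([
    rng.normal(500, 50, 5000),   # typical transfer size (KB)
    rng.normal(13, 2, 5000),     # logins cluster around midday
    rng.poisson(0.2, 5000),      # occasional failed attempts
])

# Learn a baseline of "normal", then score new events against it
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_activity)

new_events = np.array([
    [510, 12, 0],      # ordinary behavior
    [90000, 3, 14],    # huge 3 a.m. transfer after failed logins
])
flags = detector.predict(new_events)  # -1 marks an anomaly
for event, flag in zip(new_events, flags):
    print(event, "-> anomalous" if flag == -1 else "-> normal")
```

A production system would swap in a model trained on the organization's own data and route the `-1` events into the human-review or automated-mitigation flow described above.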
Uses for Generative AI Network Analysis
Generative AI can use this pattern recognition process for several different purposes, including:
- Network traffic analysis, detecting breaches or unauthorized access; the system can then use these findings to detect further patterns in the future
- Fraud detection on individual customers' accounts, triggered by sudden changes in their patterns of behavior
- Pinpointing potential future threats by watching for repeated probing of the system
- Searching for existing advanced persistent threats (APTs) that could cause significant damage further down the line
- Insider threat detection, spotting unusual access patterns or data usage stemming from employees
With all these uses, generative AI provides a comprehensive suite of responses to almost any issue that may arise today.
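As a concrete example of the insider-threat case above, this simplified sketch compares each user's daily file access against their own historical baseline; the usernames, counts, and three-standard-deviation cutoff are all illustrative assumptions:

```python
import pandas as pd

# Hypothetical access log: rows of (user, files accessed per day)
log = pd.DataFrame({
    "user": ["alice"] * 30 + ["bob"] * 30,
    "files_accessed": list(range(18, 48)) + [19, 20, 21] * 10,
})

# Per-user baseline derived from historical behavior
baseline = log.groupby("user")["files_accessed"].agg(["mean", "std"])

def is_suspicious(user: str, todays_count: int, cutoff: float = 3.0) -> bool:
    """Flag activity more than `cutoff` standard deviations above
    the user's own historical mean."""
    mean, std = baseline.loc[user, "mean"], baseline.loc[user, "std"]
    return todays_count > mean + cutoff * std

print(is_suspicious("bob", 400))   # True: far outside bob's baseline
print(is_suspicious("alice", 45))  # False: within alice's normal range
```

The key design point, per-user rather than company-wide baselines, is what lets this kind of check catch an account behaving unusually for that person, even when the raw numbers would look ordinary elsewhere in the organization.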
Generative AI's Influence on DoD Security Standards
Due to the emergence of significant AI-based threats, the CMMC requirements for working with the DoD and many other federal agencies are evolving to address such dangers. To maintain security at the level demanded by CMMC's 110 controls, it will soon be crucial to implement AI as a defensive tool across your company.
At the same time, businesses offering such protection are starting to appear. Services such as secure cloud enclaves, which can be deployed with pre-customized generative AI tools, are helping companies resolve these concerns without detracting from their core activities.
Grasping the Possibilities for Generative AI in 2025
The question, "How has generative AI affected security in 2025?" is as open-ended as the possibilities of AI itself. Businesses must remain proactive to combat the threats posed by such technology.
Hermathena Labs simplifies this compliance process with decades of experience and solutions tailored to your needs, including secure cloud enclaves to protect your data and other AI-integrated tools.
Schedule a demo with us today, and let us show you how our services can help ensure your business's continued safety.