
Top 7 negative impacts of generative A.I. on cyber security

Before you think "not another blog about A.I.", this one is worth reading, I promise…


Are we all going to be destroyed by A.I.?

It’s the question of the age, and the topic is going to get bigger.

You’ve seen the films: slick, futuristic, dystopian worlds that depict A.I. in a negative light. Often it’s a rogue A.I. destroying the world or, more importantly, humanity: VIKI, Skynet, HAL 9000, The Borg, Ultron and many more.

Johnny 5 is alive


Occasionally it’s a story about good A.I.: Baymax, Johnny 5, David, WALL-E, JARVIS, and Chappie being some of my favourites.


(I hope you are enjoying the movie references as much as I am)


The future is here

This leads us to an interesting conundrum that we are all facing now… right now… not at some point in the future.

Is A.I. Friend or Foe?

In real life, I guess that depends on your viewpoint. The 10,000 people at BT who are likely to lose their jobs probably have a different opinion to the patients who have had their cancer diagnosed months or years earlier by Sybil. If the films are to be believed, though, we are going to have both good and bad A.I. While neither of those real-life examples involved "bad A.I.", there were still negative outcomes. It is also possible for bad people to use A.I. to do bad things, and in cyber security we are starting to see that emerge.



Force for good or force for evil?

I asked ChatGPT for the top seven negative impacts it believes generative A.I. will have on cyber security…

  1. Increased sophistication of cyberattacks: AI can create highly convincing phishing emails and malware, enabling more successful attacks.

  2. Evasion of traditional defence mechanisms: AI can generate malicious code that bypasses traditional security systems.

  3. Amplification of misinformation campaigns: AI can generate large volumes of fake news, spreading confusion and undermining trust.

  4. Realistic social engineering attacks: AI-generated fake profiles or personas can manipulate victims into compromising security.

  5. Creation of weaponized AI: Malicious actors can automate the creation of AI systems for identifying vulnerabilities and launching attacks at scale.

  6. Manipulation of biometric authentication: AI can create synthetic biometric data to deceive biometric security systems.

  7. Privacy concerns and data manipulation: AI can forge realistic synthetic data, posing risks to privacy and unauthorized access.

Scary, right?

It would be if we didn’t have A.I. on our side… Fortunately, at Simoda, we use cutting-edge tools to help us tackle the A.I. used by bad people.

Like JARVIS, and ultimately Vision, taking on Ultron, we fight A.I. with A.I.

We have endpoint protection able to spot zero-day threats faster than you can blink.

Email security able to detect the most ingenious phishing attempts.

Web filtering that blocks malicious websites and phishing URLs using deep learning.

And AI-powered security awareness training, to name a few.

There’s a way to go, though; so far none of them has come up with the answer "42". We might not be at Deep Thought’s level yet, but we do have Deep Instinct.

I would like to invite you to join us on the 12th of July for Deep Instinct’s webinar, “Fight AI with AI: Going Beyond ChatGPT”.



What will be covered?

  • The generative AI tools in an attacker’s arsenal, including AutoGPT and DarkBERT

  • The impact of generative AI on existing cybersecurity solutions

  • The biggest concerns around the abuse of Generative AI

  • How deep learning-based AI is best positioned to fight against AI-created threats

  • The impact generative AI will have on cybersecurity, and how we can defend against it

If you want to find out more about how we can help protect you from Skynet (other A.I. baddies are available), then give me, Simoda’s in-house Cyborg Security Specialist, Bryn Hawkins, a nudge.



Feel free to include your favourite movie A.I., good or bad.

