OpenAI’s cutting-edge technology, ChatGPT, recently took the world by storm thanks to its ability to come up with ideas and generate conversational, contextually relevant messaging automatically. So far, it’s being used for things like chatbots, virtual assistants, content generation, code debugging, and possibly even as a replacement for Google search. It can even translate information into other languages.
Intelligence like this can empower millions and open up incredible possibilities for the future. It’s an exciting time for professionals and businesses. That is, unless you’re the victim of cybercrime.
ChatGPT and similar AI language technologies have the potential to revolutionise the way we interact with machines – and each other.
But with such powerful tools in the hands of virtually anyone, they can also pose a significant threat to cybersecurity. Especially when it comes to email.
Here are some of the ways this is currently playing out:
Email Phishing Scams
Phishing scams are a common tactic used by cybercriminals to steal sensitive data, like logins to high-value systems or financial details. Typically, these scams involve sending emails that look like they’re from a legitimate source and asking the recipient to either share their personal information, or click through to a fake website where they can steal it.
Luckily, ChatGPT itself has some restrictions in place when it comes to generating crime-related content. For instance, if you ask it outright to write a phishing email, it will refuse. But there are ways around this – like asking it to write an email or marketing message for a particular brand. It’s largely a matter of how you phrase the prompt.
Writing Automation
ChatGPT can also be used to automate the creation of phishing emails, targeting thousands of victims with unique messaging in a short time. According to CSO Online, people with more advanced technical skills “can create an infinite number of mass-produced customised communications using AIs that can learn in real time what works and what doesn’t.”
Fake Web Presence
Email scams aren’t always limited to emails either. They often include other online collateral, like fake websites, personas, or even phony profiles of real people. ChatGPT can assist in creating credible written content for all of these elements – quickly and for free.
Message Translation
ChatGPT also has a built-in translation tool that’s been tested with nearly 100 languages. This enables people to target audiences outside of their mother tongue. Until now, if scammers wanted content written in other languages, they needed to hire someone to do it for them – a step that added friction and cost to launching their scams. ChatGPT dissolves those barriers entirely.
What Is DMARC?
DMARC stands for Domain-based Message Authentication, Reporting & Conformance. It’s an email authentication protocol that lets you control how receiving servers handle emails from your domain when those emails can’t be authenticated. This helps protect your domain from being used by unauthorised parties, like cybercriminals.
DMARC isn’t used in isolation, though. It builds on two other email security protocols: SPF (Sender Policy Framework) and DKIM (DomainKeys Identified Mail).
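To make this concrete, SPF and DKIM are both published as DNS TXT records on your domain. A simplified sketch of what they might look like – the domain, selector name, and key here are placeholders, not real values:

```
; SPF: lists the servers allowed to send mail for example.com
example.com.                       IN TXT "v=spf1 include:_spf.example.com -all"

; DKIM: publishes the public key receivers use to verify message signatures
selector1._domainkey.example.com.  IN TXT "v=DKIM1; k=rsa; p=<public key>"
```

SPF answers “which servers may send as this domain?”, while DKIM answers “was this message really signed by the domain and left unaltered?”.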
DMARC essentially ties these protocols together with a consistent set of policies that all email servers can understand. Even though this may sound simple in principle, it isn’t always easy to get right.
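As a sketch of how that tying-together looks in practice, a DMARC policy is itself just another DNS TXT record, published at _dmarc.<your domain>. The domain and report address below are illustrative placeholders:

```
; DMARC: quarantine mail that fails SPF/DKIM checks, and send aggregate reports
_dmarc.example.com.  IN TXT "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com"
```

Domains typically start with a monitoring-only policy (p=none), then tighten it to p=quarantine and eventually p=reject once the reports confirm that all legitimate mail is authenticating correctly – which is where getting it right takes care.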
ChatGPT has the potential to shift how we do a lot of things in the future. But despite all its potential for good, the increased threat to email security is very real too. With an email authentication protocol like DMARC – and Sendmarc, which makes it easy for you to launch and manage – you can mitigate these dangers to your domain and reputation entirely… in 90 days, guaranteed.
Take the necessary measures to protect yourself, your stakeholders, and your organisation from today’s email security dangers. Start by understanding where your domain may be at risk. Alternatively, contact us for more info.