This week, I got an email from an ex-colleague I hadn’t spoken to in a few years. Let’s call him Jeremy. It was a lovely email, referring to a moment I remember vividly. He asked about my health, mentioning that he’d seen I recently had pneumonia. After telling me a little about his own life, he tried to sell me a consulting service I absolutely did not need.
There were three strange things about that interaction. First, I know this person’s writing style pretty well, and he’s a man of few words; this email was far more eloquent than the many others I’ve received from him. Second, it was a little odd to hear from him out of the blue. We were never quite friends, but we were good colleagues who shared coffee a handful of times per month. And finally, it was weird to be sold to when I suspect Jeremy knows better; I’m a journalist and a consultant, and I am definitely not in need of outsourced software development.
After reading the email twice, I realized what had happened. Jeremy had used some sort of AI-powered writing tool, and it was good enough that I didn’t immediately notice. Doubly so because the tool appeared to be reading my Twitter and other public data sources to build up a picture of what was happening in my life.
That’s when it fully clicked: sales emails, phishing, and spam are about to go to a whole new level.
And that makes a lot of sense.
As artificial intelligence continues to evolve, it’s becoming more adept at generating human-like text. The days of spotting spam by its awkward phrasing or blatant sales pitches are fading. Instead, we’re moving toward an era where generative AI can craft convincing, personalized emails that are difficult to distinguish from those written by a human.