Artificial intelligence, while driving tremendous advances in technology, is also now considered one of humanity’s biggest threats.
When Bill Gates, of all people, says artificial intelligence could “run out of control” and decide “humans are a threat,” and
researchers at Oxford and Google DeepMind say AI will “likely” eliminate humanity, that’s an issue.
Another looming issue, thanks to artificial intelligence: AI-generated spam email is rapidly becoming a reality.
Gone are the days of the easy-to-spot emails from Nigerian princes asking for financial assistance.
Instead, according to John Licato, Assistant Professor of Computer Science and Director of AMHR Lab at the University of South Florida, AI could allow spammers “to target individuals and make their messages more persuasive based on easily accessible information, such as social media posts.”
“Spammers and marketers alike would benefit from being able to predict more about individuals with less data. Given your LinkedIn page, a few posts and a profile image or two, LLM [large language model]-armed spammers might make reasonably accurate guesses about your political leanings, marital status or life priorities,” he explains.
“Our research showed that LLMs could be used to predict which word an individual will say next with a degree of accuracy far surpassing other AI approaches, in a word-generation task called the semantic fluency task. We also showed that LLMs can take certain types of questions from tests of reasoning abilities and predict how people will respond to that question. This suggests that LLMs already have some knowledge of what typical human reasoning ability looks like.”
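To make that next-word claim concrete, here is a toy illustration (not the researchers’ actual method, and the example data is invented): in a semantic fluency task, people list words from a category, and a model tries to guess what they will say next. This sketch uses simple bigram counts; the point of the research is that an LLM does this same job with far greater accuracy by drawing on much richer statistics of human language.

```python
from collections import Counter, defaultdict

# Hypothetical responses from a semantic fluency task
# ("name all the animals you can think of").
sequences = [
    ["dog", "cat", "mouse", "lion", "tiger"],
    ["dog", "cat", "horse", "cow", "pig"],
    ["cat", "dog", "wolf", "fox"],
]

# Count how often each word follows another across participants.
followers = defaultdict(Counter)
for seq in sequences:
    for prev, nxt in zip(seq, seq[1:]):
        followers[prev][nxt] += 1

def predict_next(word):
    """Guess the word people most often say after `word`."""
    if word not in followers:
        return None
    return followers[word].most_common(1)[0][0]

print(predict_next("dog"))  # "cat" follows "dog" most often in this toy data
```

An LLM replaces the bigram table with a model trained on vast amounts of human text, which is why its guesses track typical human responses so much more closely.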
That means spammers might find it easier to sneak past your email filters and then, with the right approach and choice of words, get you to actually open and read their messages.
“Early results suggest that LLMs can be used to argue persuasively on topics ranging from politics to public health policy,” Licato writes.
It is not all bad news, however.
“…as AI gets better at understanding spam messages, filters could get better at identifying and blocking unwanted spam – and maybe even letting through wanted spam, such as marketing email you’ve explicitly signed up for,” he explains. “Imagine a filter that predicts whether you’d want to read an email before you even read it.”
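A minimal sketch of the kind of filter Licato describes (every name and threshold here is hypothetical, and a real system would use a trained model rather than hand-picked rules): instead of a binary spam/not-spam decision, it scores how likely you are to *want* each message, letting through marketing you explicitly signed up for while blocking the rest.

```python
# Hypothetical signals a "wanted-ness" filter might use.
WANTED_SENDERS = {"newsletter@example.com"}   # lists you signed up for
SPAM_PHRASES = {"wire transfer", "act now", "prince"}

def want_score(sender: str, body: str) -> float:
    """Return a 0..1 score for how likely the user wants this email."""
    score = 0.5
    if sender in WANTED_SENDERS:
        score += 0.4  # explicitly requested mail gets a boost
    text = body.lower()
    score -= 0.2 * sum(phrase in text for phrase in SPAM_PHRASES)
    return max(0.0, min(1.0, score))

def deliver(sender: str, body: str, threshold: float = 0.5) -> bool:
    """Deliver the email only if its wanted-ness clears the threshold."""
    return want_score(sender, body) >= threshold

print(deliver("newsletter@example.com", "This week's deals"))       # True
print(deliver("unknown@spam.biz", "Act now: wire transfer needed")) # False
```

In Licato’s scenario, the hand-written rules above would be replaced by an LLM that reads the message the way you would, which is what makes “predicting whether you’d want to read an email” plausible at all.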
That would be a … good thing? Right?
Eh, none of this will probably matter anyway, since there’s a pretty good chance that either artificial intelligence has already taken over and we just don’t know it, or we’re all living in a simulation.