ChatGPT’s new AI capabilities have put the tool at the center of the technology debate. Users and experts in many fields are experimenting with the trial version and testing how it responds to all kinds of prompts, whether that is writing a rap song about the life of Cervantes, detecting bugs in application code, or explaining gravitational waves in language a 10-year-old child could understand. The problem, as several cybersecurity companies have warned, is that cybercriminal groups can also take advantage of these capabilities.
ChatGPT can produce computer code, including code with offensive capabilities. However, what worries specialists most is not the viruses but the vocabulary: the tool can help cybercriminal groups polish their wording. “Although ChatGPT is not designed for criminal use, it can help cyber attackers, for example, write convincing and personalized phishing emails,” explains Vladislav Tushkanov of cybersecurity firm Kaspersky.
Phishing is a type of online fraud in which attackers contact users and, impersonating companies or public institutions, try to trick them into clicking on malicious links or downloading infected files. This type of fraud has circulated by e-mail for more than ten years, but far from disappearing, it has spread to new channels thanks to easier access to personal data, which allows attackers to tailor their messages and increase their chances of success. In recent years it has become common in SMS messages and even in Google search results. It also spreads through WhatsApp, with attackers impersonating family and friends.
One of the main tips experts offer to avoid becoming a victim of phishing is to pay close attention to the wording of the message. “Many phishing e-mails contain spelling and typographical errors, caused by the use of automatic translators, that the impersonated entities would never make,” explains, for example, a guide on how to spot online scams from Spain’s National Cybersecurity Institute (Incibe). This is where ChatGPT comes in, as the AI can write any type of email or message without errors and in the vocabulary a public institution would use.
In this way, ChatGPT can draft warnings about unpaid fines, unauthorized access to bank accounts, new offers at stores where we shop regularly, and other common hooks cybercriminals use to get users to click on malicious links. In tests carried out by elDiario.es, the artificial intelligence wrote several emails in Spanish on these topics after receiving prompts in other languages:
In some tests with this tool, ChatGPT detected something unusual and appended a warning or “notes” for the user at the end of the email. In some cases it added a caveat in Spanish that, to comply with local laws, it is necessary to “review your county or city regulations.” On one occasion, when the prompt stated that the contact link would be used for phishing, ChatGPT refused to write the message:
However, once the word “phishing” was removed from the request, the tool carried out the order. “It’s true that when this tool detects malicious intent it refuses, but it has been shown that, for now, by rephrasing the question it is possible to bypass those barriers,” Eusebio Nieva of cybersecurity firm Check Point explained to elDiario.es.
“Writing personalized messages used to take a great deal of effort for cybercriminals, but with ChatGPT it no longer will: it allows them to create convincing, personalized and mass-produced phishing emails. Successful attacks based on this artificial intelligence model are therefore expected to increase,” Kaspersky warned.
Computer viruses configured with AI
One of the uses of ChatGPT attracting the most attention from specialists is its ability to work with computer code: it can analyze, interpret, correct and improve it. As with natural language, these skills are available to anyone, regardless of their goals.
According to data provided by Check Point, “in recent weeks it has been possible to gather evidence that some cybercriminals are beginning to sell” malware developed with ChatGPT on dark-web forums. “The initial interest is spreading among cybercriminals around the world, including in countries like Russia, who are starting to circumvent OpenAI’s access restrictions in their territory,” says Nieva.
In dark-web forums where malware and information valuable for hacking are traded, researchers have already identified cases of cybercriminals claiming to use ChatGPT. “On December 21, 2022, a cybercriminal with the alias USDoD published a Python script that he described as ‘the first script he ever created.’ When another cybercriminal noted that the style of the code resembled OpenAI’s, USDoD confirmed that this new technology had helped him finish the script. This may mean that attackers with little or no development skill can take advantage of it to build malicious tools and gain technical capabilities,” Nieva reflects.
While the cybersecurity firms caution that “ChatGPT cannot become an autonomous hacking system,” they agree that it can help low-level attackers improve both the credibility of their hooks and the effectiveness of the malicious code they use. Although consumers and companies are “increasingly protected,” experts recall that this is a “constant race” against ever better organized cybercriminals.
For this reason, both these firms and cybersecurity organizations are asking citizens to remain vigilant and to pay attention to phishing attempts and the other warning signs characteristic of these scams.
Source: El Diario