According to a recent report by Check Point Research, hackers are already using OpenAI’s new artificial intelligence chatbot, ChatGPT, to create low-level cyber tools such as malware and encryption scripts. As Sam Sabin reports for Axios, security experts have warned that ChatGPT could help cybercriminals accelerate their attacks in a short period of time.
The report describes three cases in which hackers found ways to use ChatGPT to write malicious software, create data-encryption tools and write code for new dark web marketplaces.
Hackers are always looking for ways to save time and speed up their attacks, and ChatGPT’s AI-generated responses often provide a good starting point for writing malware and phishing emails.
Check Point noted that the data-encryption tool in question could easily be turned into ransomware once a few minor issues are fixed.
OpenAI has warned on several occasions that ChatGPT is a research preview and that the organisation is constantly looking for ways to improve the product to prevent potential abuse.
The AI-enabled chatbot that has stunned the tech community can also be manipulated to help cybercriminals hone their attack strategies.
The arrival of OpenAI’s ChatGPT tool could allow the fraudsters behind email- and text-based phishing attacks, as well as malware groups, to speed up the development of their schemes.
Several cybersecurity researchers have been able to get the AI-enabled text generator to write phishing emails or even malware for them over the past few weeks.
It should be noted, however, that hackers were already becoming adept at incorporating more human-sounding, harder-to-detect tactics into their attacks before ChatGPT came on the scene.
And hackers can often gain access through simple lapses, such as logging into a former employee’s corporate account that was never deactivated.
ChatGPT arguably speeds up the hackers’ process by giving them a launching pad, although the responses are not always perfect.
Although OpenAI has implemented some content-moderation warnings in the chatbot, researchers have found it easy to circumvent the current system and sidestep its safeguards.
Users still need some basic knowledge of coding and of how attacks are launched to judge which of ChatGPT’s outputs work correctly and which need adjustment.
Organisations were already struggling to defend against even the most basic attacks, such as those in which hackers use a stolen password leaked online to log into accounts. AI-enabled tools such as ChatGPT are likely to exacerbate the problem.
Network defenders and IT teams must therefore step up their efforts to detect phishing emails and text messages in order to stop these types of attacks.