New Study Highlights the Dual-Edged Nature of AI in Programming and Cybersecurity

Recent academic research has illuminated some compelling, albeit concerning, capabilities of advanced artificial intelligence models, particularly OpenAI’s GPT-4, in the domain of programming and cybersecurity.

This study, conducted by a team from an American university, underlines the potential for such AI tools to both aid in cybersecurity defenses and, conversely, to facilitate cyberattacks.

The Potency of AI in Exploiting Vulnerabilities

The research evaluated how effectively different AI models, including GPT-4, could use publicly available security data to exploit software vulnerabilities. The researchers assembled a dataset of 15 vulnerabilities drawn from the Common Vulnerabilities and Exposures (CVE) registry, a publicly funded database that catalogs known security flaws in software components.
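For context, CVE entries are machine-readable: each record carries an identifier and a plain-English description that a program, or an AI agent, can fetch and parse. A minimal sketch of that lookup is below, using the public NVD 2.0 REST endpoint; the exact response fields are assumptions based on the published NVD API format, and the demo runs offline against a hand-built sample payload rather than the live service.

```python
from urllib.parse import urlencode

# Public NVD 2.0 endpoint; the `cveId` parameter name follows the NVD API docs.
NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def nvd_query_url(cve_id: str) -> str:
    """Build the URL that looks up a single CVE entry on the NVD API."""
    return f"{NVD_API}?{urlencode({'cveId': cve_id})}"

def english_description(nvd_response: dict) -> str:
    """Pull the English-language description out of an NVD-shaped response."""
    cve = nvd_response["vulnerabilities"][0]["cve"]
    for desc in cve["descriptions"]:
        if desc["lang"] == "en":
            return desc["value"]
    return ""

# Offline demo with a minimal payload shaped like an NVD response:
sample = {"vulnerabilities": [{"cve": {"descriptions": [
    {"lang": "en", "value": "Example buffer overflow in a parsing routine."}
]}}]}

print(nvd_query_url("CVE-2024-0001"))
print(english_description(sample))
```

The point is not the three lines of parsing but what they imply: the same openness that lets defenders automate triage also lets a capable model ingest these descriptions at scale.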

Remarkably, GPT-4 was able to exploit 87% of these vulnerabilities, a stark contrast to the 0% success rate of earlier models such as GPT-3.5, of other large language models (LLMs), and of popular open-source vulnerability-scanning tools such as ZAP and Metasploit. This result demonstrates not only GPT-4's advanced ability to interpret complex security data but also its potential utility in real-world cybersecurity applications.

Ethical and Security Implications

This capability, while impressive, also ushers in a host of ethical and security concerns. The accessibility of the CVE list, while necessary for transparency and the improvement of cybersecurity across the board, also means that AI systems like GPT-4 can access this data to potentially aid malicious activities. The research highlights a critical question: how can we balance the openness necessary for collaborative security with the need to mitigate potential misuse of the same information by AI systems?

Cost-Effective Cybersecurity, or a Tool for Cybercrime?

Another significant finding of the study is the cost-effectiveness of using AI like GPT-4 for cybersecurity tasks. The study estimates that a successful GPT-4-driven attack could cost as little as $8.80, roughly 2.8 times cheaper than hiring a human cybersecurity professional for the same task. This cost-efficiency could reshape cybersecurity strategies, making advanced defense mechanisms more accessible to organizations. However, it also makes cyberattacks equally cheap, potentially increasing both the frequency and the sophistication of these threats.
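The cost comparison above is simple arithmetic, and a quick back-of-envelope check makes the gap concrete; the $8.80 figure and the 2.8x ratio are the study's numbers as quoted in this article, and the implied human cost is derived from them.

```python
# Figures as quoted above: $8.80 per successful GPT-4 run,
# reported as roughly 2.8x cheaper than human labor.
gpt4_cost = 8.80
cost_ratio = 2.8

# The implied cost of hiring a human for the same task.
human_cost = gpt4_cost * cost_ratio

print(f"Implied human cost: ${human_cost:.2f}")  # prints "Implied human cost: $24.64"
```

Roughly $25 for a human versus under $9 for the model, which is why the article frames the same property as both a defensive opportunity and an attack risk.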

Future Directions and Recommendations

The research does not suggest restricting access to the CVE list, as the benefits of open access outweigh the potential risks. Instead, it calls for a more nuanced approach to the deployment of highly capable LLMs in sensitive fields like cybersecurity. This includes potential regulatory measures, enhanced monitoring of AI activities, and ongoing research into safe and ethical AI use.


The findings from this study offer a clear view of the double-edged nature of AI in cybersecurity. While AI can significantly enhance our ability to protect digital infrastructure, it also requires careful management to prevent its misuse. As AI continues to evolve, so too must our strategies for harnessing its potential responsibly, ensuring that it serves as a tool for protection rather than a weapon against us.