ChatGPT: The New Jarvis or the Next Ultron?


Since its release, ChatGPT — OpenAI’s large language model (LLM)-based AI text generator — has attracted a great deal of attention from the IT community. Given its potential to transform the way code is written and audited, it is critical to understand how ChatGPT may affect both organisations and customers.

ChatGPT: The Basics

ChatGPT is built on GPT-3 (Generative Pre-trained Transformer 3), a model trained on billions of words from the internet. This extensive training allows ChatGPT to generate coherent, well-structured text on a wide variety of topics, including code. Given a plain-English prompt or existing code, ChatGPT can generate code in response.
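As a sketch of that prompt-to-code workflow, the snippet below shows how one might send a plain-English request to OpenAI’s chat completions API and get code back. It assumes the `openai` Python package (v1+) and an `OPENAI_API_KEY` environment variable; the model name and system prompt are illustrative assumptions, not a recommendation.

```python
# Sketch: turning a plain-English prompt into generated code.
# Assumes the `openai` package (v1+) is installed and OPENAI_API_KEY is set.

def build_messages(task: str) -> list[dict]:
    """Assemble a chat prompt asking the model to write code for `task`."""
    return [
        {"role": "system",
         "content": "You are a helpful programming assistant. Reply with code only."},
        {"role": "user",
         "content": f"Write a Python function that {task}"},
    ]

def generate_code(task: str) -> str:
    """Send the prompt to the chat API and return the model's reply."""
    # Imported here so prompt assembly above works without the package installed.
    from openai import OpenAI
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",          # assumed model name
        messages=build_messages(task),
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    print(generate_code("parses an ISO-8601 date string"))
```

The interesting design point is that the “interface” to the model is just natural language: the quality of the generated code depends heavily on how precisely the task is phrased.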

The Impact on Code Writing and Auditing

The ability of ChatGPT to generate code in response to a prompt has significant implications for the software development process. By using ChatGPT, developers can get near-instant security audits of their code, potentially discovering vulnerabilities and exploits before their applications ever launch. This can lead to more thorough deployment processes and fewer vulnerabilities in deployed applications, which could contribute significantly to the fight against cyber threats.
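A near-instant audit of that kind could be scripted: the sketch below assembles an audit prompt for a source file and submits it to the same chat completions API. The audit instructions, model name, and prompt wording are illustrative assumptions, not a tested audit methodology — and, as the next section notes, the model’s findings would still need human review.

```python
# Sketch: asking an LLM to review source code for vulnerabilities.
# The audit prompt and model name are illustrative assumptions.

AUDIT_INSTRUCTIONS = (
    "Review the following code for security vulnerabilities "
    "(e.g. injection, unsafe deserialisation, missing input validation). "
    "List each finding with a severity and a suggested fix."
)

def build_audit_prompt(source: str, filename: str = "<snippet>") -> str:
    """Combine the audit instructions with the code under review."""
    return f"{AUDIT_INSTRUCTIONS}\n\nFile: {filename}\n\n{source}"

def audit_file(path: str) -> str:
    """Read a source file and return the model's audit report."""
    with open(path, encoding="utf-8") as f:
        source = f.read()
    from openai import OpenAI  # requires OPENAI_API_KEY in the environment
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model name
        messages=[{"role": "user",
                   "content": build_audit_prompt(source, path)}],
    )
    return resp.choices[0].message.content
```

In practice such a script would run as a pre-commit hook or CI step, flagging candidate issues for a human reviewer rather than gating the build on the model’s verdict.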

However, it’s important to note that ChatGPT is not perfect. As an algorithm built on mathematical principles, weights, and biases, ChatGPT can miss important context, domain knowledge, and subtleties that only a human reviewer would catch. This means that while ChatGPT could significantly improve the quality of coding, it’s not yet at a level where we can fully trust its output.

The Short-Term Threats

In the short term, ChatGPT’s ability to generate code could be used for malicious purposes. Bad actors could use ChatGPT to find vulnerabilities in popular coding standards, smart-contract code, or even well-known computing platforms and operating systems. This means that thousands of complex, vulnerable environments could suddenly be exposed.

Regulating ChatGPT AI

The impact of ChatGPT on the software development process highlights the need for updated regulation in this field. However, current regulation is analogue in nature and often reactive rather than proactive, making it slow to evolve and potentially out of touch with the rapidly changing AI landscape. To ensure quick reactions and a positive impact, regulators should be advised by specialists in industry and academia, and should consider creating a separate regulatory body or ethics council to govern the use of such powerful dual-use technologies.


ChatGPT has the potential to be a game-changer for code writing and cybersecurity, but it’s important to approach it with caution. While it may expose vulnerabilities in the short term, its ability to generate code in response to a prompt could significantly improve the quality of coding and reduce vulnerabilities in deployed applications. To ensure a positive impact, it’s essential to have updated regulation that is fit for purpose and shaped by specialists in the field.
