CYE Insights

Why ChatGPT is an Opportunity and a Threat to Cybersecurity


By now, we have undoubtedly all heard about the futuristic ChatGPT (Generative Pre-trained Transformer), introduced by OpenAI, which can answer queries of all levels in a conversational, human-like manner by leveraging an enormous pool of data. Much has already been written about its enormous potential. However, we have also heard about its possible dangers, including spreading misinformation and making it easier for students to cheat.

Not surprisingly, ChatGPT also raises important questions about its potential effects on cybersecurity. Here are some of the issues to consider:

The Pros

It’s a powerful cybersecurity tool.

ChatGPT can be helpful to everyone, including CISOs, by conducting research, writing reports, creating a playbook for dealing with various incidents, and examining data. Security officials have used ChatGPT to create a detailed explanation of the best ways to deal with cyber risks, increase knowledge on a security issue, produce data integration between different security domains, and more. These are just some examples of how this powerful new technology can be used to improve cybersecurity and make the CISO’s job easier.

The Cons

It’s a possible data leak vector.

Employees who use ChatGPT could unintentionally be exposing sensitive data to the public. For example, if someone submits a query with specific information about a customer, this—as well as the answer—can then be shared with anyone.

It can compromise organizational confidentiality.

By sending questions to ChatGPT, employees may unwittingly share details about what the organization is working on. For example, a query about how to deal with a cyber incident can reveal that the organization is currently dealing with a cyber incident. In addition, queries about technology issues can reveal a direction in business development that is of interest to the organization.

It can generate malware.

Much has already been written about the potential for using ChatGPT for malicious activities. Although it refuses to write malware code when asked to do so directly, some developers have succeeded in bypassing its safeguards to create mutating malware. What this means is that using this tool, hackers will be able to work faster, and even script kiddies with very little knowledge will be able to get ahead.

It can create phishing emails.

Unlike many phishing emails that contain typos and other revealing traits, ChatGPT can rapidly generate very authentic-looking emails urging the recipient to provide confidential information. In addition, it can generate variations on a prompt to produce completely unique emails. As a result, it is quite possible that business email compromise (BEC) could increase significantly.

Recommendations

1. Limit sensitive queries

Since information shared with ChatGPT can be made available to the general public, we recommend limiting queries to those that are not sensitive, on both personal and organizational levels. You should only share information that will not cause harm or compromise the organization.

2. Test AI-generated code

Recently, there have been several incidents involving programmers who used open source code containing vulnerabilities. Indeed, a significant portion of open source software was developed without a secure development process. For this reason, we expect that an AI system that learns from open source code is likely to reproduce these vulnerabilities, as well as produce code that does not meet the standards of secure development. Therefore, we recommend testing any AI-generated code before use and assimilation into a production environment.
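To illustrate the point, here is a minimal, hypothetical sketch of how such testing might look. The unsafe function below is written in a pattern an AI assistant could plausibly suggest (building SQL by string interpolation, a classic injection flaw); the function names, the table schema, and the test harness are all invented for illustration, not taken from any real incident.

```python
import sqlite3

# Hypothetical example: code in the style an AI assistant might produce.
# The unsafe version interpolates user input into SQL and is vulnerable
# to SQL injection; the safe version uses a parameterized query.

def find_user_unsafe(cursor, username):
    # DANGEROUS: user input is interpolated directly into the query
    cursor.execute(f"SELECT id FROM users WHERE name = '{username}'")
    return cursor.fetchall()

def find_user_safe(cursor, username):
    # Parameterized query: the database driver handles escaping
    cursor.execute("SELECT id FROM users WHERE name = ?", (username,))
    return cursor.fetchall()

def resists_injection(find_user):
    # Security test: a classic injection payload should match no rows
    conn = sqlite3.connect(":memory:")
    cur = conn.cursor()
    cur.execute("CREATE TABLE users (id INTEGER, name TEXT)")
    cur.executemany("INSERT INTO users VALUES (?, ?)",
                    [(1, "alice"), (2, "bob")])
    rows = find_user(cur, "' OR '1'='1")
    conn.close()
    return len(rows) == 0  # True means the payload matched nothing

print(resists_injection(find_user_unsafe))  # False: the payload returned every row
print(resists_injection(find_user_safe))    # True: the payload matched no rows
```

A simple check like this, run before code reaches production, would flag the unsafe version: the injection payload returns every row in the table instead of none.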

3. Check accuracy

If you will be using ChatGPT for research or reports, it's important to verify that the results are accurate. Much like performing research using Google, you must keep in mind that just because AI provides you with an answer does not mean that it is correct.

4. Be mindful about data

The lack of privacy with ChatGPT means that users should be mindful of the data that they are sharing. There’s a possibility that in the future, OpenAI may create a paid version that maintains privacy; this would alleviate many of the current cyber concerns with ChatGPT.

5. Manage correctly

Regardless of all the possible issues, ChatGPT still has the potential to be a powerful tool for cybersecurity if managed correctly.

Want to learn more about protecting your organization from cyber threats? Contact us.


By Shmulik Yehezkel
CISO & Chief of Critical Cyber Operations at CYE
January 25, 2023