Note: This blog post is the third in a series on AI and how to make the most of it in your Security Awareness, Culture and Human Risk efforts. This post covers the concerns, issues and limitations of AI with a focus on Generative AI. You can access previous posts below.
Limitations of AI
AI is an extremely powerful tool; in many ways we are just now discovering how it can exponentially accelerate our cybersecurity efforts. However, like any other tool, AI has its issues and limitations. Before fully utilizing AI, you must be aware of its limitations in several areas:
- Accuracy
- Bias
- Security / Privacy
- Policies / Ethics
- Intellectual Rights
Accuracy
First and foremost, remember that AI is not always correct. AI learns from vast datasets, including data from the entire Internet. If the data AI analyzes is incorrect, so too is its output. This is why you should think of AI as a knowledgeable friend: a resource to give you ideas, suggestions, and points you never thought of, but ultimately you are responsible for the final result. This can be tricky because AI outputs often appear so confident. Also, do not ask AI to verify its own output; instead, ask it to identify its sources or the logic it used to reach its conclusions.
Additionally, prompt engineering is critical. AI’s output is only as good as the original input data, i.e., the prompt. You need to be sure that your prompts are clear. Prompts that are ambiguous or confusing can lead to inaccurate or incorrect output.
Bias
Generative AI is built on machine learning, which leverages algorithms developed by people. Those algorithms are only as good as the people who built them, and that includes their biases. Be aware that the algorithm developers may have introduced their own biases without realizing it.
In addition, be careful of your own biases. AI is designed to please you, the user. If you introduce biases in your prompts, you are likely to get biased results. For example, if you enter the prompt,
“Give me ten reasons why cats are better than dogs,”
AI will respond in a way that is heavily biased towards cats, perhaps implying that cats truly are better than dogs in all ways. Instead, a less biased prompt would be to ask AI something like,
“What are the advantages and disadvantages of owning a cat versus a dog?”
Biases are something we humans cannot simply turn off, and quite often we introduce those biases without realizing it.
Security / Privacy
Public AI platforms, such as ChatGPT or Google Bard, continuously learn from user inputs. This means when you are using a tool like ChatGPT, it not only reads your inputs, but may store, process, and learn from them. For example, let’s say you upload your company’s security policies to ChatGPT for it to review and improve them (a fantastic use of AI). Perhaps you are concerned your security policies are far too complicated and difficult to understand, so you ask ChatGPT to make them easier for your workforce to follow. This not only saves you a tremendous amount of time, but also enables you to dramatically improve a key challenge in many organizations: complicated policies.
ChatGPT will happily read and can vastly improve your policies. But your security policies are now stored in ChatGPT and can potentially be shared with others as part of its future output. The same can be said for any sensitive information, such as personally identifiable information (PII). For example, you upload a spreadsheet to ChatGPT not realizing it contains the names, phone numbers, and home addresses of thousands of people. Once again, all that data is now stored in ChatGPT. There are two ways to approach this:
- Sanitize: Be sure you sanitize any information that you upload. For example, if you were to ask AI to review your security policies, you should remove any references to your organization, fully anonymizing the policies.
- Enterprise: There are enterprise versions of AI that you or your organization can purchase that do not store any information you share with them. Think of this privacy as a feature you pay for. Now you can upload all the security policies or other sensitive data you want, and that data is processed but never stored or shared by the AI solution.
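To illustrate the sanitize approach, here is a minimal sketch in Python of scrubbing text before uploading it to a public AI tool. The patterns, placeholder labels, and the `sanitize` function are illustrative assumptions, not part of any product; real PII detection requires far more robust tooling than a few regular expressions.

```python
import re

def sanitize(text: str, org_name: str) -> str:
    """Replace common PII patterns and the organization's name
    with generic placeholders (a minimal, illustrative sketch)."""
    # Redact email addresses
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)
    # Redact US-style phone numbers, e.g. 555-123-4567 or (555) 123-4567
    text = re.sub(r"\(?\d{3}\)?[-.\s]\d{3}[-.\s]\d{4}", "[PHONE]", text)
    # Anonymize references to the organization itself
    text = re.sub(re.escape(org_name), "[COMPANY]", text, flags=re.IGNORECASE)
    return text

# Hypothetical policy snippet for demonstration
policy = "Contact Jane at jane.doe@acme.com or (555) 123-4567. Acme requires MFA."
print(sanitize(policy, "Acme"))
```

Note that the email redaction runs before the organization-name replacement, so an address like `jane.doe@acme.com` is removed whole rather than partially rewritten. Even with a script like this, review the output by hand before uploading anything.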
Policies / Ethics
Organizations are just now starting to develop policies and guidelines on how their workforce can use AI. Not only are these policies in their infancy, but they will most likely change in the future. Make sure you understand what your organization’s policies are before using AI at work. For example, what data can you share with AI, what are you allowed to use it for, and what AI solutions can you use? Some additional things to consider:
- Judgement: AI does not know what is right or wrong; it just knows the data it harvests and analyzes. AI cannot replace human judgement. You are still responsible for that.
- Transparent: When you use AI to create an image, document, or other resource, be honest and transparent about it. For example, if I use AI to create an image or document or to help suggest improvements to something I created, I’ll cite AI as a resource. In some cases, I’ll even share the prompts I used to create the content.
Regardless of how AI helped you create content, you are still ultimately responsible for the final output.
Intellectual Rights
Intellectual rights surrounding AI-generated content remain a complex issue. First and foremost, check with your organization’s legal department. The reason intellectual property (IP) and ownership can be so confusing is because of how AI works: by analyzing the works and data created by others. If you created the original document and only asked AI for suggestions, such as how to improve a project plan, business case, or security policy, in most cases you likely own the rights to that document. However, things get more confusing when AI created the resource. For example, when you create an image using AI, who owns the image? The artists who created the millions of images that were used in the machine learning process, the AI algorithm that created the image, or you, the individual who created the specific prompts to generate the image? Perhaps no one owns the resulting work and it is public domain.
Unfortunately, I don’t have a good answer for you other than to read the AI provider’s documentation and its policies on content generation, and to get guidance from your organization’s AI and legal policies. In addition, expect different countries and regions to begin publishing regulations on the use of AI.
AI is an incredibly powerful tool, one that you will most likely be using more and more. However, as with any tool, be aware of its issues and limitations. In next week’s AI blog, we will begin a deeper dive into Generative AI and advanced prompt engineering.
PS: After I wrote this blog post, I asked ChatGPT to review it and provide suggestions on how I could improve it. What amazes me is not only the detailed feedback, but how positive and encouraging ChatGPT is with its feedback, showing far greater empathy than some people I know. This was the prompt I provided:
“I'm going to give you a blog post I want you to review and provide suggestions on how to improve. Provide feedback on how to improve grammar, structure and content. Do not re-write the article, simply review it and provide feedback with short, concise bullet points.”
Interested in reducing your organization’s human risk? Check out my course LDR433: Managing Human Risk and sign up for a FREE course preview here.