The power of Generative AI (GenAI) is undeniable. As you are well aware by this point, giving your workforce access to GenAI can deliver all sorts of benefits: faster strategic decision-making, data analysis that surfaces patterns, accelerated workflows through the automation of routine processes, sharper forecasting that continuously weighs sales data, economic indicators, and market signals, and help for people jump-starting new projects or tasks. The use cases are limitless.
But so are the challenges, and the main ones revolve around privacy, security, and regulation.
Let’s step back and understand why these can be a thorn in your organization’s side. You may be thinking: we already use traditional AI apps in the cloud and understand our risks. Well, GenAI apps are different.
Here is the TL;DR for those who want to skip the explanation:
GenAI thinks, and it produces creative, unique results.
These creative results are heavily influenced by how questions are asked, on a case-by-case basis. Because of this, GenAI companies need to analyze your data to see whether the model is doing what they expect it to.
Traditional AI applications are designed to perform specific, narrow tasks, like recommending products or identifying objects, using limited data inputs. In contrast, GenAI is a general-purpose assistant intended to help users comprehensively across many domains of knowledge. To achieve this, GenAI needs far more extensive and complex data access and processing capabilities. It reasons across its knowledge using neural networks and symbolic AI to deduce connections, patterns, and insights tailored to individual users’ preferences and goals (basically, it thinks). As it thinks, it can respond with a plethora of different answers, and any given answer can be exactly what the user is looking for, a hallucination, or, worse, something that can harm someone.

To continuously improve cloud-based GenAI products, the majority of providers use your data to understand how questions are being answered. In most cases, that means another human is reading your questions and responses. Though the data is anonymized to strip personally identifiable information, that human still has access to your confidential information.
Because of this need to analyze conversations to improve their service offering, cloud-based AI apps must include language in their terms of service that allows them to do so legally. So you end up with examples like this, taken from section 5 of the terms of a popular cloud GenAI app, a Zoom integration that listens to an entire meeting in order to keep and summarize your organization’s meeting notes: “By providing User Content, you grant us a worldwide, non-exclusive, royalty-free, fully paid, perpetual right and license (with the right to sublicense) to host, store, transfer, display, reproduce, modify for the purpose of formatting for display, and distribute your User Content to provide and improve the Service.”
From a third-party access perspective, you also need to consider how they process your data. This report from the Times shows that if your organization is using ChatGPT, your data is being processed by workers in Kenya making $2 an hour.
Perhaps we can extend trust to what is not explicitly stated and assume these organizations have best practices in place to vet every human who processes “User Content” for product improvement.
That aside, every organization should consider the following before rolling out this innovative technology.
What is your organizational impact from a privacy standpoint:
Data collection – Any cloud-based GenAI app will have access to a significant amount of your organizational data, including personal details, locations, preferences, and schedules, but more importantly, every conversation anyone has had with it. Does the provider collect every piece of data, or is it randomly sampled?
Data usage & third-party access – What are the limitations on how the cloud-based GenAI app uses, analyzes, or shares user data? Do they allow third-party access to your data? Can you opt out of sharing?
Data storage & retention – How does the cloud-based GenAI provider store your organizational data? Is it encrypted at rest? Who has access to the key stores and to their data centers, and how long is your data retained?
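One practical control that speaks to the data-collection and third-party-access questions above is redacting likely PII before a prompt ever leaves your network. Here is a minimal sketch in Python; the patterns and the `redact` helper are illustrative only, and a real deployment would need a dedicated PII-detection library with far broader coverage:

```python
import re

# Illustrative patterns only -- real filters need far broader coverage
# (names, addresses, account numbers, free-text identifiers, etc.)
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace likely PII with placeholder tokens before the prompt
    is sent to a cloud-based GenAI API."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

print(redact("Ask jane.doe@example.com to call 555-123-4567 re: 123-45-6789"))
# → Ask [EMAIL] to call [PHONE] re: [SSN]
```

Running a filter like this at an outbound gateway means that even if the provider’s humans review your conversations, the most sensitive tokens were never in them.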
From a cybersecurity perspective, here are some things to consider:
Account hijacking – If someone gains access to a user’s account, they could exploit every conversation and piece of data that account has ever processed. Will you be alerted if your third-party provider is hacked?
Eavesdropping – Intercepting audio commands or conversations with cloud-based GenAI apps could expose private information. Do they implement end-to-end encryption? Where does homomorphic encryption sit on their roadmap?
Insider threats – What is the cloud-based GenAI app’s policy for identifying and removing rogue employees, both in-house and at their outsourced providers?
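On the account-hijacking point, one question worth pressing providers on is whether they enforce multi-factor authentication. To make the mechanism concrete, here is a minimal sketch of the time-based one-time password (TOTP) algorithm from RFC 6238 that most authenticator apps implement, using only the Python standard library; the secret shown is the RFC’s published test value, never one to deploy:

```python
import base64
import hashlib
import hmac
import struct
import time

def hotp(secret_b32, counter, digits=6):
    """HMAC-based one-time password (RFC 4226)."""
    key = base64.b32decode(secret_b32, casefold=True)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per the RFC
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret_b32, at_time=None, step=30, digits=6):
    """Time-based one-time password (RFC 6238): HOTP over a 30 s counter."""
    t = int(time.time() if at_time is None else at_time)
    return hotp(secret_b32, t // step, digits)

# RFC 6238 test secret ("12345678901234567890" in base32) -- test value only.
SECRET = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"
print(totp(SECRET, at_time=59, digits=8))  # RFC 6238 vector: 94287082
```

Because the code is derived from a shared secret and the current time, a stolen password alone is no longer enough to replay an account’s entire conversation history.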
Finally, here are some things to consider from a compliance standpoint:
Regulatory compliance – Does the cloud-based GenAI app adhere to GDPR, CCPA, HIPAA, and other applicable regulations?
Transparency mandates – Laws may require disclosures around the cloud-based GenAI app’s data practices, algorithmic accountability, and use of sensitive personal data. Does this open your organization up to unknown agency audits?
Intellectual property laws – GenAI conversations and content generation could implicate copyright and fair-use law, so proper licensing and attribution are necessary. Does the cloud-based GenAI app grant you ownership of the content it generates?
When considering cloud-based GenAI apps, you must weigh all of these items and more to protect your organization’s future. This may seem daunting, but there are answers, and the future will be dominated by those who leverage this technology. It’s just a question of how to set yourself up for success.