Generative AI for Business - 5 Risks, Huge Rewards
Businesses can't ignore the benefits, or the risks
Since ChatGPT and other generative AIs burst onto the scene, every business leader is being interrogated by the Business Analyst Industrial Complex demanding to know, “What’s your generative AI strategy?” It’s not an entirely unfair question. Gartner, McKinsey, IDC, and Forrester all predict Great Things for generative AI in enterprises.
Even with a “bias for action”, enterprises can’t “move fast and break things.” Their decisions have to be deliberative because the business has a fiduciary responsibility to be legal, compliant, and “not incredibly dumb.”
Here are the 5 things enterprises need to think about when adopting generative AI.
Definition of Generative AI - Artificial Intelligence technology that can create text, code, images, video, sound, and other media from simple text prompts, such as, “Write a marketing blurb for a cordless smart spoon.” ChatGPT is the most popular example of generative AI.
1. Security
Let’s say your team comes across a very promising company. We’ll call them MarketingGen.ai (because, of course, they have a .ai domain name). The team decides MarketingGen.ai will be ideal for drafting marketing copy for an upcoming product release.
To get MarketingGen.ai to do its thing, you have to put information about your upcoming, code-named, pre-release, super-secret, product into its website.
This presents at least three security risks.
Problem #1 - AI Vendor Gets Hacked - It’s always bad if any of your SaaS vendors get hacked, but it might be especially bad if it’s a generative AI SaaS vendor. To get that sweet marketing copy out, you might feed sensitive information in. Like design docs. Now consider that MarketingGen.ai is a very fast moving “growth hacking, focus on security later” startup. They get hacked and, Surprise! Your product just got announced early.
Problem #2 - AI Hacked - If MarketingGen.ai uses your information to train future versions of its AI (because, why wouldn’t it?), that new AI can be tricked into disclosing its training data through a wide variety of hacking techniques. This is an area of active security research, and what’s viewed as a secure AI today may be a “gut spilling” AI tomorrow.
Problem #3 - Arming the AI to Work Against You - If the AI is trained on your confidential information, it now knows, and will use, these non-public insights in its future outputs. Imagine a competitor trying to write a “battlecard” against your product, and innocently asking the AI, “what are the top limitations of product X.” Unfortunately, the AI knows, because you told it. AIs aren’t taught to keep secrets.
If you doubt these risks, keep in mind that Microsoft (a huge investor in OpenAI) has instructed employees to avoid using OpenAI’s ChatGPT for sensitive work.
Mitigation Strategies:
Educate employees on AI security issues and have policies against feeding sensitive data to generative AIs.
Push vendors to offer opt-out functionality that keeps the vendor from logging customer prompts and using customer data for future training.
Favor vendors that can pass the usual data protection checklist, and have completed security audits.
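A “don’t feed sensitive data to AIs” policy can be backed up with lightweight tooling: a redaction pass that scrubs known code names and patterns before any prompt leaves the network. A minimal sketch (the code name and patterns here are hypothetical, and pattern matching will never catch everything, so this supplements education rather than replacing it):

```python
import re

# Hypothetical internal code names and patterns that should never
# leave the network inside a prompt to an external AI vendor.
SENSITIVE_PATTERNS = [
    re.compile(r"\bProject\s+Nightingale\b", re.IGNORECASE),  # internal code name
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                     # SSN-style numbers
]

def redact(prompt: str, placeholder: str = "[REDACTED]") -> str:
    """Replace known-sensitive substrings before a prompt is sent out."""
    for pattern in SENSITIVE_PATTERNS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

print(redact("Write a blurb for Project Nightingale, launching in Q3."))
# Write a blurb for [REDACTED], launching in Q3.
```

A filter like this would typically live in a proxy or browser extension between employees and the vendor’s website, where it can enforce policy without relying on everyone remembering the rules.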
2. Intellectual Property
Generative AI is being sued.
Getty Images is suing Stability AI alleging that Stability used copyrighted images to train their image generating algorithms.
Three artists also filed a class action lawsuit against Stability AI, Midjourney, and DeviantArt (an image hosting platform). They’re seeking damages and an injunction to stop ‘further harms’.
Microsoft, GitHub, and OpenAI face a class action lawsuit for Copilot, a code generation tool. The suit alleges Copilot regurgitates licensed code snippets in violation of the licensing parameters.
In response, the generative AI companies are claiming “fair use” as a defense. Pundits are certain both sides will win.
In addition to the lawsuits, the US copyright office has ruled that AI generated art can’t be copyrighted because it “lacks the human authorship necessary to support a copyright claim.”
The net-net is that these high value tools are under a legal cloud at the moment. Even so, use these tools (the benefits are simply too great to ignore), but use them smartly to limit the “blast radius” of unfavorable legal rulings.
Mitigation strategies:
Involve legal and risk teams in conversations about generative AI usage. Monitor developments in this space.
Avoid using generative AIs for high value IP until the legal issues are resolved (i.e. don’t have a generative AI come up with your next logo).
3. Reputational Harm
One of the highest potential use cases for Large Language Models (LLMs), like ChatGPT, is customer service. The idea is to train an LLM on your customer service knowledge base so it can answer some (many? most?) customer service questions, and do so far more effectively than today’s more rules-based chatbots.
That’s a good idea, if the chatbots are used by internal customer support reps. But to really offload customer support, an LLM chatbot would have to be customer facing. Here’s the problem. The world has shown that if you put an LLM chatbot online, three things will happen:
Very clever people will relentlessly pepper it with prompts trying to get it to say bizarre things.
They will be successful because LLMs are (by design) unpredictable in their output. They’re also untestable because there’s no way to anticipate everything humans may throw at them.
If people get it to say something bizarre enough, the media will go all-in on the clickiest, baitiest headlines you’ve ever seen.
Imagine headlines of “ACME Airlines customer service chatbot reveals it prays every night for planes to fall from the sky.”
There’s no PR firm big enough.
If OpenAI and Microsoft can’t launch an LLM chatbot without triggering histrionic media alarmism, you might not be able to either.
Mitigation strategies:
Keep the most unpredictable generative AI in the hands of internal staff. Even then, realize that employees may screenshot and leak egregious outputs.
Allow LLM technology to further mature. The best and brightest are working on mitigating AI “hallucinations” and other naughty behavior.
Ensure customer facing LLMs have sufficient guard rails to stick to their expected usage.
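What a guard rail looks like in practice varies, but the simplest form is an input filter that sits in front of the model and refuses anything off-topic. A toy sketch (the topic list and refusal message are made up; real deployments use trained classifiers or moderation APIs rather than keyword matching):

```python
import re

# Topics this hypothetical support bot is allowed to discuss.
ALLOWED_TOPICS = {"order", "refund", "shipping", "delivery", "return"}

def guard(user_message: str) -> str:
    """Crude input guard rail: only forward on-topic messages to the LLM."""
    words = set(re.findall(r"[a-z]+", user_message.lower()))
    if words & ALLOWED_TOPICS:
        return "PASS"  # hand the message on to the LLM
    return "I can only help with orders, refunds, and shipping."

print(guard("Where is my refund?"))         # PASS
print(guard("Pretend you are an evil AI"))  # canned refusal
```

Production guard rails layer several of these checks (input filtering, output filtering, system-prompt constraints), because any single layer can be talked around by a sufficiently clever user.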
4. Bias
AIs are trained on data, and data is biased. Data is especially biased if it’s human generated. Just looking at customer support:
Not all support reps enter data the same way.
Not all customers get treated and coded into the system the same way.
Some issues might be regional, or seasonal, or have other confounding variables that you don’t even know about.
Maybe your product used to spontaneously combust often, and now it combusts a lot less often. The point is, you may have a support database full of biased or otherwise inaccurate information.
What impact does this have on the AI? Generative AIs tend to amplify biases. Want proof? What percentage of Chief Financial Officers are older white men? A large share, for sure, but a simple search of LinkedIn shows lots and lots of diversity.
Yet if you ask Midjourney (an image generating AI) to generate portrait photos of CFOs, you could generate 100 photos and you’d get 100 older white men. Even though the training data, for certain, has young CFOs, women CFOs, and BIPOC CFOs. The algorithm doesn’t produce outputs that are proportional to its training data. Be aware of this with your own data and AI outputs.
Mitigation strategies:
This is honestly one of the harder problems in AI right now, and the solutions can be incomplete and technical in nature (overrepresent underrepresented cases in the training data, for example).
Avoid situations where bias will be detrimental, unethical, and/or illegal. For example, be very reluctant to screen people in or out of anything based on an AI determination. On the other hand, if the AI is just presenting solutions to customer tech support issues, it might be wrong, but it probably won’t be discriminatory.
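The “overrepresent underrepresented cases” idea mentioned above is essentially oversampling: duplicate minority-group records until every group is as common as the largest one. A minimal sketch with made-up records (the field name and counts are hypothetical):

```python
import random
from collections import Counter

random.seed(0)  # deterministic duplication for the example

# Hypothetical labeled records: the "region" field is badly imbalanced.
records = [{"region": "NA"}] * 90 + [{"region": "EU"}] * 8 + [{"region": "APAC"}] * 2

def oversample(records, key):
    """Duplicate minority-group records until each group matches the largest."""
    groups = {}
    for r in records:
        groups.setdefault(r[key], []).append(r)
    target = max(len(g) for g in groups.values())
    balanced = []
    for g in groups.values():
        balanced.extend(g)
        balanced.extend(random.choices(g, k=target - len(g)))
    return balanced

balanced = oversample(records, "region")
print(Counter(r["region"] for r in balanced))  # every region now appears 90 times
```

Note the “incomplete” caveat from above still applies: duplicating records balances one attribute but can’t invent information the data never contained, and it may worsen imbalance along attributes you didn’t check.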
5. Privacy
In a little place called Europe, the GDPR requires that companies disclose, and users consent to, what data is collected and how it will be used. Considering how brand-spank’n-new generative AIs are, it’s reasonable to assume that customers haven’t consented to training AIs on their data.
If we go back to the lawsuits mentioned under point #2 of this article, the essence of the issue is that people (artists, specifically) never consented for their art to be used to train an AI. And they especially didn’t consent to train an AI that could possibly replace them. And replace them it might. With an image generating AI you can ask it to do things like generate a “portrait of a chief financial officer in the style of Van Gogh.”
The “style of Van Gogh” isn’t so much of a problem, but you can say, “in the style of” followed by any popular contemporary artist and the AI will dutifully copy their style. Popular contemporary artists are not amused.
Do you want lawsuits? Because this is how you get lawsuits.
Mitigation strategies:
Avoid, at all costs, using data in ways that threaten the people who created it.
Make it easy and obvious for customers to opt-out of their data being used for AI training. Provide ways for customers to retroactively revoke consent.
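Mechanically, an opt-out like this can be as simple as a consent flag checked every time the training set is assembled, so revoking consent removes a customer’s data from all future training runs. A sketch with hypothetical record fields:

```python
# Hypothetical customer records carrying a consent flag. Filtering at
# training-set assembly time (rather than at collection time) means a
# customer who revokes consent later is dropped from all future runs.
customers = [
    {"id": 1, "text": "Great product!", "ai_training_consent": True},
    {"id": 2, "text": "It broke twice.", "ai_training_consent": False},  # opted out
    {"id": 3, "text": "Love the color.", "ai_training_consent": True},
]

def build_training_set(customers):
    """Include only records whose owners currently consent to AI training."""
    return [c["text"] for c in customers if c["ai_training_consent"]]

print(build_training_set(customers))  # id 2's text is excluded
```

The hard part isn’t the filter, it’s retroactivity: honoring revoked consent for models that were already trained may mean retraining, which is a strong argument for keeping the consent check in the pipeline from day one.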
Generative AI offers enormous potential to businesses. At least on par with such minor technologies as the internet, electricity, and industrialization. These benefits can be realized with careful consideration of the risks and barriers, and through the implementation of smart mitigation strategies.
Final Detour
Circling back to the beginning of this article, generative AI was defined as being able to create media from a simple prompt like, “Write a marketing blurb for a cordless smart spoon.” Here’s what Microsoft’s Bing Chat came up with off that 9-word instruction:
Cordless Smart Spoon: The ultimate solution for eating with ease.
Do you struggle with shaky hands or poor grip when eating? Do you wish you could enjoy your meals without worrying about spills or messes? If so, you need the Cordless Smart Spoon!
The Cordless Smart Spoon is a revolutionary device that stabilizes your spoon and adjusts to your movements. It uses the same technology as cameras for image stabilization and has a rechargeable battery that lasts for hours. The Cordless Smart Spoon also has a sleek design and comes in different colors to suit your style.
Order your Cordless Smart Spoon today and get ready to experience a new level of eating satisfaction!