AI for Product Teams
Everybody is talking about AI at the moment, and the question “should we be using AI?” has become the “should we be on the blockchain?” of 2024.
People know I work in Product and that I have a technical background. I've been asked for my opinion so many times that I decided to do a write-up instead!
The Promise of AI
There are a lot of repetitive, error-prone tasks in our line of work: the kind of task that requires too much context and knowledge to automate, but is mind-numbing nonetheless. That's where AI could help the most, but that's not where the focus is at the moment.
As with close-up magic, we collectively focus on the flashy movements and big gestures, and not on what's happening underneath the table. ChatGPT, Midjourney and more recently Sora are all the result of absolutely massive amounts of data being churned (often without explicit consent…) into features that anyone can use.
This makes AI seem like a great equalizer: if anyone can use it, it's for everyone, right?
Problem #1: Biases
What we currently call “AI” is basically a set of very, very, very sophisticated if/else engines. Using its own internal logic, the AI interprets a given prompt and then very quickly runs through an immense tree of possibilities to give you the best possible response from the solutions it has available. The system “knows” the correct answer because it has been trained by reinforcement, like you would train a puppy:
- On the carpet? Bad!
- Out in the gutter? Good!
But as anyone who's ever talked with a data scientist knows: “shit in means shit out”. This is true for any dataset, and especially for ones as large as those behind our AI tools. If the dataset has flaws or errors, the result will replicate those errors. If you ask an AI to use all the hiring data from the very beginning of your 120-year-old Dutch company, it will start hiring 30-year-old men named “Jan de Boer”. Statistically, they have the highest success rate.
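A toy sketch of that failure mode, with entirely made-up data: train the simplest possible model on historically skewed hiring records and it will dutifully reproduce the skew.

```python
# Minimal illustration (NOT a real hiring system): a model trained on
# biased history simply repeats that history. All numbers are invented.
from sklearn.linear_model import LogisticRegression

# Imaginary historical records: [age, is_male] -> was_hired
# The company historically hired mostly 30-year-old men, so "success"
# correlates with exactly those attributes.
X = [[30, 1], [31, 1], [29, 1], [45, 0], [38, 0], [30, 1], [52, 0], [27, 0]]
y = [1, 1, 1, 0, 0, 1, 0, 0]

model = LogisticRegression().fit(X, y)

# The model now "recommends" the profile it saw most often, regardless of
# whether that profile was ever actually the best candidate.
print(model.predict([[30, 1], [45, 0]]))  # the 30-year-old male profile gets the "hire" label
```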
Problem #2: Replication
Luckily, we can train our puppy. You can tell it to take certain factors and constraints into account, or to disregard a given trend if it skews the outcome. But the more abstract your question, the less concrete the result. And the prompt “Generate a picture of a green jeep in a field of tulips”, while providing a very specific set of inputs, leaves a lot open to interpretation:
- What model jeep?
- What colour green?
- What angle should the picture be at?
- Or the jeep for that matter?
Let's not even start about the tulips…
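If you want a reproducible result, you end up spelling all of that out yourself. Here is a rough sketch of what that could look like against the OpenAI image API; the model name and every creative detail are my own assumptions, not anything the prompt above implies:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Spelling out the details we would otherwise leave to interpretation.
result = client.images.generate(
    model="dall-e-3",  # assumed model name; use whichever image model you have access to
    prompt=(
        "A 1997 olive-green Jeep Wrangler, photographed from a low "
        "three-quarter front angle, parked in a field of red tulips"
    ),
    size="1024x1024",
    n=1,
)
print(result.data[0].url)
```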
And even if your AI has “learned” the correct path, you can't always extract that learning and add it to a different AI, since a lot of them are black-box systems. We know they give the correct answer, but they cannot show their internal workings. There is no logic in the answer; it just “feels right” to the AI because we told it so many times that it was.
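To make the black-box point concrete, here is a toy sketch using scikit-learn: the network learns a trivial task, but the “learning” it stores is just matrices of floating-point weights, not rules you could lift out and hand to another system. Task and settings are purely illustrative.

```python
from sklearn.neural_network import MLPClassifier

# Tiny toy problem (XOR). A small network can learn it, but good luck
# explaining *how* from the trained state alone.
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 0]

model = MLPClassifier(hidden_layer_sizes=(8,), activation="tanh",
                      solver="lbfgs", max_iter=2000, random_state=0).fit(X, y)

print(model.predict(X))   # usually the right answers: [0 1 1 0]
print(model.coefs_[0])    # ...but the "why" is just an array of floats
```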
Problem #3: Data Ownership
Systems like ChatGPT and Midjourney are trained on huge amounts of data. This data is often scraped from the publicly accessible Internet. But having access to something does not mean you can just use it for your own advantage.
People have been publishing summaries of popular books for a long time, basically saying “If you don't have the time to read Lord of the Rings, but you have to do the book report, I've read it for you and I will tell you what you need to know”. It's very tempting to think that AI follows the same logic, and companies like OpenAI are making that exact point. But using some creative prompting, engineers have seen complete passages from copyrighted works tumble out of ChatGPT. And artists showcasing their work online have seen their art reflected in Midjourney's output. We're still debating as a society whether this constitutes copyright infringement.
Bonus Problem: Sustainability
The final problem in this list is one that is usually forgotten: training AI systems consumes a huge amount of energy. So while we are using recycled coffee cups and installing motion-activated LED lights in our office, the tool we are most curious about might negate all that good work in just one day of training.
Be mindful that even querying an AI system consumes a lot of energy: by some estimates, asking ChatGPT a question uses roughly as much energy as charging your phone. Heavy use might even impact your company's CO2 footprint.
Remember: Sometimes, not doing something can be the best choice for the environment.
The EU is working on legislation that will make companies responsible for every step in the production of their product, including whatever they buy in from third parties. If this applies to digital products, your green certification might be null and void if you use an AI-as-a-Service tool. This is not the case today, but personally I would like to see that happen.
So what can we use AI for?
About ten years ago, most organizations started measuring every metric they could. As a result they are now sitting on huge piles of “big data”, and they would like to see a return on that investment ASAP. If you are a Product Owner in such an organization, you have most probably been using that data to inform your choices. An AI can spot trends in that data that even your best data scientist missed. This could yield some fascinating new truths for you to innovate upon.
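As a rough illustration of what that could look like in practice (the file and column names below are hypothetical), you could let a clustering model group your accounts by behaviour and then look at what the machine-found segments have in common:

```python
import pandas as pd
from sklearn.cluster import KMeans

# Hypothetical usage export: one row per customer account.
df = pd.read_csv("usage_metrics.csv")
features = df[["sessions_per_week", "avg_session_minutes", "support_tickets"]]

# Group accounts by behaviour instead of by the segments we invented ourselves.
df["segment"] = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(features)

# The interesting part: reviewing what each machine-found segment has in common.
print(df.groupby("segment")[features.columns].mean())
```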
If your team is more technical, you can ask a public AI “What tech stack would you recommend for our new feature? And do you have a deployment template for that?”. It will then use all its scraped knowledge from the internet to come up with an answer. This can help with discovery, but of course you should not blindly follow what it has to say.
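For what it's worth, a minimal sketch of that kind of discovery prompt against the OpenAI API could look like the snippet below; the model name and the wording of the prompt are placeholders, not a recommendation:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Using the model as a discovery aid, not as the decision-maker.
response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name; use whichever your organisation allows
    messages=[
        {"role": "system", "content": "You are a pragmatic software architect."},
        {"role": "user", "content": (
            "What tech stack would you recommend for a new real-time "
            "notification feature, and do you have a deployment template for it?"
        )},
    ],
)
print(response.choices[0].message.content)
```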
And despite what I recently heard in a podcast, I don't think we'll be saying “Hey Siri, here's my product goal, please give me the user stories to build it” any time soon.
My Conclusion
AI will not solve all our problems, but it can certainly help us do the grunt work that our solutions need. I'm personally very excited to have a computer do my homework for me, but there's still a lot of trial & error (or adapt & improve) to go through before I can stop double-checking its answers.