Here is another perspective I have developed recently about Data/AI projects.
Imagine a balancing scale with "Costs" on one side and "Benefits" on the other. An individual AI project is all about tipping the scale toward "Benefits" as much as possible; in fact, the best outcome is for it to topple under HEAVY "Benefits".
Sounds easy, right? Well, I'm pretty sure you expect a "Not really" from me. Let's jump right into the discussion.
AI projects actually have many risk factors that can add to the "Costs" side. To name a few: data quality, talent availability, infrastructure availability, team cohesiveness, business process set-up, and execution (acting on the AI's output).
You can see that managing AI projects is truly a balancing act, ensuring that the "Benefits" continue to outweigh the "Costs". However, there are a thousand and one ways an AI project can go very wrong.
What is my recommendation then?
Only experienced personnel can foresee potential risks (though not all of them) and devise risk-reduction techniques. The chances that a fresh graduate from a boot camp or degree program can handle this are very low.
Given the limited pool of talent, companies need their HR department (assuming it is strategic rather than merely administrative) to put knowledge-transfer mechanisms in place, from the experienced to the "green", to keep the AI momentum going!
In conclusion, there are many ways an AI project can lose its luster, with the "Costs" overtaking the "Benefits" (the project is dead!). The remedy, in my opinion, is to constantly have someone experienced evaluate and manage it.
What are your thoughts? Click on the “Chat Bubble” to leave your comments! :)
Recommended Content
Symbolic Connection podcast just released our new episode! We brought Google's Bard into the creative process as we explained what Generative AI is about. Your feedback is welcome! <Podcast>
Do you know that Stanford publishes AI trends annually? Here are my comments on the 2022-2023 trends. There are 5 parts to it; here is Part 1. <Blog>
Talking about "balancing": if you want to understand how computing chips can impact geopolitical relationships, this is a primer I recommend. <Goodreads>
Good post! It isn't easy to bootstrap internal AI talent. In fact, it is very hard. But I wonder whether it will be necessary. I believe the engineering involved will become greatly simplified over time, so latecomers may benefit. Still, there are advantages to jumping on the bandwagon now, which is where your advice is valuable.