10 Comments

I admit to being one of those who are not buying into every AI project. I especially disagree with AI as a replacement for writing and other art forms, as well as education, where it makes things too easy for students. I know the development and use of AI is growing fast, but I believe people need to be cautious and slow down.

Nov 26, 2023

I don’t disagree that to transform, you must first inform. That is, literacy precedes efficacy. That being said, if it is a colleague’s mindset you need to change, they don’t necessarily need to know how generative AI works to see its value and begin to consider its application. Give them unfettered access to an enterprise version of ChatGPT or a safe, reliable system such as Firefly to experiment with. Sometimes, that it works trumps how it works.

But aren't you suggesting we give up control? Shouldn't we know what AI is really doing, given the many reports of its potential dangers?

I do agree about not being overly focused on accuracy metrics as an end in themselves, but surely validation studies are a good starting point for getting a feel of what a model is capable of? Or are there other methods you would suggest?

author

Is it possible to give more details on what you mean by validation studies?

I do agree that in order to understand what the model is capable of, small controlled experiments should be done rather than an immediate full-blown implementation of models.
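
To make that concrete, here is a minimal sketch of one such small controlled experiment, using k-fold cross-validation to estimate a model's capability before any deployment. The file `data.csv`, its `target` column, and the assumption that all features are numeric are hypothetical placeholders:

```python
# Minimal sketch of a small controlled experiment: estimate what a model
# is capable of via 5-fold cross-validation before any deployment.
# `data.csv` and its `target` column are hypothetical placeholders.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

df = pd.read_csv("data.csv")            # hypothetical, all-numeric dataset
X = df.drop(columns=["target"])
y = df["target"]

model = LogisticRegression(max_iter=1000)
scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
print(f"Accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```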

Say, for example, we want to utilise computer vision in a real-world setting. It is definitely not possible to account for all the different lighting conditions, etc., so a practical approach would be to augment the model with design features such as user training.

Then there is some tension between how many resources to assign to small controlled studies for model validation versus real-world usage evaluation. Is there any point in chasing X% accuracy in controlled studies when real-world usage has vastly more variables? Would appreciate your thoughts!
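
One cheap way to probe the lighting problem before committing to exhaustive controlled studies is to re-score a fixed test set under synthetic brightness shifts and see how fast accuracy degrades. A minimal sketch, where the `predict` callable, test `images`, and `labels` are hypothetical stand-ins for your own pipeline:

```python
# Sketch: probe a vision model's sensitivity to lighting by re-scoring
# a fixed test set under synthetic brightness shifts. The predict
# callable, images, and labels are hypothetical placeholders.
import numpy as np
from PIL import ImageEnhance

def accuracy_under_brightness(predict, images, labels, factor):
    """Score `predict` on brightness-adjusted copies of the test images.

    predict: callable taking a numpy array, returning a label (hypothetical).
    images:  list of PIL.Image test images.
    factor:  <1.0 darkens the image, >1.0 brightens it.
    """
    correct = 0
    for img, label in zip(images, labels):
        adjusted = ImageEnhance.Brightness(img).enhance(factor)
        correct += int(predict(np.asarray(adjusted)) == label)
    return correct / len(labels)

# Example sweep from dim to bright, given your own predict/images/labels:
# for factor in (0.5, 0.75, 1.0, 1.25, 1.5):
#     print(factor, accuracy_under_brightness(predict, images, labels, factor))
```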

author

We might be referring to different things here. I meant that once a model is finalized, we should still do a small-scale deployment to get some data during implementation before moving on to the full-scale deployment, not build another model for the actual implementation. :)

We all agree not to chase accuracy metrics, but the same goes for other metrics: before trying to increase one by X%, think it through, because there will always be a cost to that increase, while we may never know the additional value it brings. :)
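
A minimal sketch of what that small-scale deployment can look like in code: route only a small fraction of live requests to the new model and log its outputs for review, keeping the old path as the default. The routing fraction, model objects, and request shape below are all hypothetical:

```python
# Sketch of a small-scale (canary) deployment: send a small fraction of
# live traffic to the new model and log everything, while the existing
# model keeps serving the rest. All names are hypothetical placeholders.
import logging
import random

logging.basicConfig(level=logging.INFO)
CANARY_FRACTION = 0.05  # start small; widen once implementation data looks good

def serve(request, old_model, new_model):
    """Route a request to the canary model a small fraction of the time."""
    if random.random() < CANARY_FRACTION:
        prediction = new_model.predict(request)
        logging.info("canary prediction=%s request=%s", prediction, request)
    else:
        prediction = old_model.predict(request)
    return prediction
```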

Ah, thanks for clarifying! Indeed, I was thinking about the stage before model finalization.

I suppose the struggle I have is knowing what sort of accuracy levels companies or teams require before small-scale deployment.

Most likely it depends on the use case, but I am not aware of any industry standards. Would you know of any?

author

Yes, it greatly depends on the use case, but I'd recommend using a quick-and-dirty model as a benchmark. For instance, use a linear regression model with all variables thrown in to get an idea of whether the accuracy, or whatever metric you've chosen, reaches the level wanted by the business before going full scale into model training, e.g. with deep learning.
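
A minimal sketch of that quick-and-dirty benchmark, assuming a hypothetical tabular dataset (`data.csv` with a numeric `target` column): fit a plain linear regression on all variables and check whether the held-out score is already near what the business wants before investing in heavier models:

```python
# Sketch of a quick-and-dirty baseline: linear regression with all
# variables thrown in, scored on a held-out split. `data.csv` and the
# `target` column are hypothetical placeholders.
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("data.csv")
X = df.drop(columns=["target"])
y = df["target"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)
baseline = LinearRegression().fit(X_train, y_train)
print(f"Baseline R^2: {r2_score(y_test, baseline.predict(X_test)):.3f}")
# If this baseline is already close to what the business needs, heavier
# models (e.g. deep learning) may not be worth the extra cost.
```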

Cool, would you be able to recommend some case studies for reference?
