I should admit at the start of this post that I am not very familiar with the literary arts: prose, poetry, novels, and so on. What I understand about writing and storytelling comes from my work presenting insights from data. But it's a topic I find interesting to share with my readers, and a bookmark for me to return to later, whenever my interest is piqued again. Here goes!
At the time of writing, I was reading a book from MIT Press that I bought a long while ago: "The Artist in the Machine" by Arthur I. Miller.
The book is mainly about how, before the GenAI era, computer scientists got machines to be creative: generating images in particular styles (both single styles and combinations of known ones), writing short stories, composing poetry, and so on.
Those who have been in artificial intelligence long enough will know about image generation before the GenAI days, such as Google's DeepDream.
There is a common "plot" thread across the many short research stories shared in the book. An individual scientist collects data, feeds it into a neural network of a certain structure, adds some randomness to the model, and the model starts to generate the respective literary art. After the first few generations, the scientist learns what the model fumbles on and what it does well, and improves from there by tweaking the model. It's pretty interesting to read, even in this day of better generative AI. Why is that? Through this research, the scientists learn the meta-structure of the literary art in question: for instance, what makes a good poem or a good story. By studying this meta-structure, they try to bring the machine's output closer to human-generated literary art.
So why this issue? Well, when ChatGPT proved the effectiveness of LLMs in late 2022, the NLP research community was thrown into disarray. Researchers had little idea what to research next, because a single LLM can do so much more than all the individual models for different functions put together. It's very likely that earlier research on building individual models to generate poetry has paused, especially given the lack of research funding in the first place; in the capitalistic society we live in, funding goes to whatever is likely to generate profit.
So have LLMs stopped humanity from exploring the literary arts? Have LLMs stopped researchers from working to understand the hidden structures of literary art, such that humans will lose creativity in this area?
To be honest, time will tell who is right or wrong. But I hope that through this issue I get to hear from anyone conducting research in this area, or in natural language processing in general, about how LLMs have impacted their research directions. Please PM me on LinkedIn or share in the comments. I really appreciate any perspectives that widen my horizons! :)
I very much enjoy discussions that present many perspectives. If you do too, consider reaching out to me on LinkedIn.
Consider supporting my work! You can make a “book” donation and drop me some wisdom! :)
Have motorcycles stopped us from running?