Fred Smith

I Asked ChatGPT to Review my Screenplay. Here’s What it Said…

Updated: May 31, 2023


As I type these words, more than 11,500 Writers Guild of America members are striking. One of their chief demands against the studios at large involves guarantees that they won’t soon be replaced by artificial intelligence.


It’s a healthy fear.

Each year, tens of thousands of mediocre to awful scripts circulate in Hollywood, which AI could produce just as well as any human hack. Why wouldn’t the studios take their chances with the robots and see what shakes out on the other side? After all, the studios incur the expense of paying script readers to pore over those scripts and weed out the mediocre from the awful in the hope that somewhere in the ooze of the slush pile lies a gem that will spawn a hit and a franchise.


Script readers have to kiss a lot of frogs, as it were. It comes with the job of providing coverage, which is essentially a formal summary and review of a script so studio executives up the chain don’t have to suffer through the dreck.


I once spoke to a former reader who covered an average of ten scripts per week, 500 per year, for three years. That’s a total of 1,500 scripts read and covered.


Asked how many scripts she ended up “recommending” or “considering” (as opposed to passing completely) for the studio, she said…two.


Two. Out of 1,500.


That’s 0.1333% for you number crunchers Out There. The math is accurate. I checked with Siri.


Suppose it takes a professional script reader, on average, about 90 minutes to two hours to read a script once and about two to four hours to write the three to five pages of coverage notes, which include a summary of the movie and an analytical breakdown of its key ingredients: premise, story, structure, characters, dialogue, and marketability.


Times may vary, but for the purpose of discussion I’ll say it takes about four hours to cover a script. Using my source as a basis, that means she spent roughly 2,000 hours a year reading junk!


Could AI make the coverage process more efficient?


I wanted to find out, and I also wanted to see just how intelligent current, publicly available AI platforms such as ChatGPT really are.


ChatGPT is an artificial intelligence chatbot, which is a software application that aims to mimic human conversation through text or voice interactions.


It was developed by OpenAI, an American AI research laboratory headquartered in San Francisco, and released to the public in November 2022. As of this writing, it is available for anyone to use as part of a free research preview.


Instead of challenging ChatGPT to write a script, I asked it to analyze one of my existing scripts. If a chatbot’s aim is to mimic human interaction, I figured I’d treat it as if it were a human script reader.


Here’s how the opening salvo went down:


We were off to a good start.


To be clear, a 90-page script is too long for this version of ChatGPT to take in at once. So I submitted a few scenes at a time and watched how, within seconds, the chatbot summarized the text and provided notes as asked.
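
For the technically curious: if I had wanted to skip the copying and pasting, a short Python script against OpenAI’s API could feed the scenes in one by one the same way. The sketch below is only an illustration of that idea, not what I actually did (I used the free web version of ChatGPT); the file name, the model name, and the crude slugline-based scene splitting are all my own placeholder assumptions.

```python
# A rough sketch of automating the scene-by-scene workflow, not what I did.
# Assumes a plain-text export of the screenplay and the openai package (v1.x);
# the file name, model name, and naive scene splitting are placeholders.
import re
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("melo_script.txt") as f:
    script = f.read()

# Split on sluglines (INT./EXT.); real screenplay parsing is messier than this.
scenes = re.split(r"\n(?=INT\.|EXT\.)", script)

for i, scene in enumerate(scenes, start=1):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model choice
        messages=[
            {"role": "system",
             "content": "You are a professional script reader. Summarize this "
                        "scene and give brief, candid notes."},
            {"role": "user", "content": scene},
        ],
    )
    print(f"--- Scene {i} ---")
    print(response.choices[0].message.content)
```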


Here is how ChatGPT responded after reading the opening scenes of my screenplay titled “‘Melo,” about a teenage boy with locked-in syndrome who is magically released from his affliction when he receives a kiss from a girl on his 18th birthday.


In the opening scenes, we meet the title character and his family, whose lives revolve around taking care of their loved one.


This is impressive.


As a [human] writer, my ultimate goal is to arrange words on the page in just the right way so another human down the line can read them, relate them to his or her own human experience, and be intrigued by the story that’s unfolding.


The chatbot appears to “get it” exactly as I intended.


Or is it just being nice?


When I asked ChatGPT for anything that related to an opinion or a feeling, it promptly responded that, as a text-based AI model, it cannot feel emotion, blah blah.


Yet look at this response it gave after reading another scene from the screenplay:

Words such as “emotional and intense” sound a lot like feelings to this sentient being. Herein lies what I found to be the most intriguing part of my little experiment.


ChatGPT can’t feel emotion, but it certainly knows how to identify writing that living humans will find emotional. And here it is providing succinct feedback, just as I asked, and doing it conversationally, just as it was designed to do.


Once my final scene was submitted, I asked ChatGPT to essentially act as if it were a professional script reader and provide coverage for my screenplay.


Here’s its response:


I have had several scripts covered by professional screenwriters in Hollywood, and I feel comfortable saying the coverage ChatGPT provided would fool any studio executive into thinking he or she was reading the work of a human.


If a living script reader were to provide favorable coverage to my script as ChatGPT had, I’d be happy and feel good about the project’s future.


But the hardened cynic in me still wondered if the chatbot was somehow telling me what I wanted to hear as if it were incapable of being anything other than nice.


Here is where I asked ChatGPT to focus on potential negatives the script may possess. Pay attention to how I posed my question:


In answering the question as it was asked, ChatGPT isn’t admitting confusion on its part. It can’t feel confusion any more than it can feel love. But it knows which elements most often confuse movie audiences, and it lists them. Notice how the chatbot prefaces the answer in a way that doesn’t accuse the script of being outright confusing. Rather, it simply points out that these types of elements can be confusing for audiences, and your script has them, so be careful.


Not bad for an AI language model.


For the record, I do NOT believe ChatGPT or any AI platform should be employed by Hollywood studios with the intention of replacing professional script readers. But I do think AI might help drastically shrink the haystack readers have to scour in search of their coveted needles.


I spent about 20 minutes copying and pasting individual scenes into ChatGPT and reading the clear, concise responses the chatbot took mere seconds to generate. If I were a script reader who valued my time, it seems AI might have some immediate worth as a first line of defense that could, at minimum, help me steer clear of dreadfully written scripts not worthy of my precious attention.
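
Here is roughly what that first line of defense might look like if it were wired up in code. Again, this is a hedged sketch of the idea rather than a working studio tool: the PASS/CONSIDER/RECOMMEND scale, the prompt wording, and the model name are my own assumptions, and a full-length script would still need to be chunked or trimmed to fit the model’s context window, as in the earlier sketch.

```python
# A sketch of the "first line of defense" idea: ask the model for a quick
# verdict before a human reader spends four hours on full coverage. The
# PASS/CONSIDER/RECOMMEND scale and prompt wording are my own assumptions,
# not a studio standard.
from openai import OpenAI

client = OpenAI()

def quick_triage(script_text: str) -> str:
    """Return the model's one-word verdict plus a short justification."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model choice
        messages=[
            {"role": "system",
             "content": "You are a studio script reader doing a first-pass "
                        "triage. Reply with PASS, CONSIDER, or RECOMMEND on "
                        "the first line, then two sentences explaining why."},
            {"role": "user", "content": script_text},
        ],
    )
    return response.choices[0].message.content

# Example: triage only the opening pages to stay within the context window.
with open("melo_script.txt") as f:
    print(quick_triage(f.read()[:12000]))
```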


It would appear, however, that ChatGPT “liked” my screenplay and thinks “‘Melo” should be recommended for production.


But as much as I’d like AI’s glowing words to carry enough weight so as to avoid involving any meddling humans who might disagree on giving my movie the green light, here is how ChatGPT responded when asked if “‘Melo” should be recommended for production:

ChatGPT seems to know its limitations and its place. Remember that the next time you ask an AI language model to do a human’s job.

 

Fred Smith is a living, sentient human being of modest yet nonetheless real intelligence. He is an author and screenwriter with several screenplays currently grasping for attention in Hollywood.




