Treegoat Introduces an AI Model That Measures How Interesting Speeches Are

Voicebot Research today published a report analyzing a new AI model introduced by Treegoat. The model analyzes the text of speeches and assigns an “interest” score between 0.0 and 1.0 to every sentence. It can then aggregate those sentence-level measurements into an overall interest score for the entire speech. The result is a new way to evaluate speeches based on their text alone.
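As a rough illustration of the per-sentence scoring and aggregation described above, here is a minimal Python sketch. Only the 0.0 – 1.0 per-sentence scale and the averaging to an overall score come from the article; the naive sentence splitter, the tiny “vivid word” lexicon, and the scoring proxy are hypothetical stand-ins, since Treegoat has not published how its model actually scores sentences.

```python
import re
from statistics import mean

def split_sentences(text):
    # Naive sentence splitter on ., !, ? followed by whitespace.
    return [s.strip() for s in re.split(r'(?<=[.!?])\s+', text) if s.strip()]

def score_sentence(sentence):
    # Hypothetical stand-in for the model's per-sentence interest score
    # (0.0-1.0): the normalized count of words from a toy lexicon.
    vivid = {"dream", "freedom", "justice", "hope"}
    words = re.findall(r"[a-z']+", sentence.lower())
    return min(1.0, sum(w in vivid for w in words) / 3) if words else 0.0

def overall_interest(text):
    # Aggregate sentence scores into one overall score, as the article
    # describes: here, a simple mean rounded to two decimal places.
    scores = [score_sentence(s) for s in split_sentences(text)]
    return round(mean(scores), 2) if scores else 0.0
```

Swapping `score_sentence` for a real model call would leave the splitting and aggregation pipeline unchanged.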

Treegoat asked Voicebot Research to assess the model output to get an independent evaluation of what we can learn from how AI evaluates speeches. You can download the full report with Voicebot’s analysis here.

What the AI Model Sees

Martin Luther King, Jr.’s “I Have a Dream” speech is a good example of what the model produces. It is worth noting that this famous speech is one of the highest-scoring we evaluated in terms of its “interest” score. You can see from the model output that the speech opens with text Treegoat’s model deems very interesting, and that holds through most of the address until the very end.

King’s most famous speech began with an interest score above 0.8 and shifted a bit throughout the address but generally stayed high until the very end. The AI model generated an overall average interest score of 0.88. We can compare this with a less well-known speech by the former First Lady of the United States, Barbara Bush. Her 1990 Wellesley College commencement address shows a lower overall interest score of 0.33 with much wider variance.

Franklin Delano Roosevelt, the 32nd President of the United States, fared even worse in arguably his most famous speech. The address, known for the line “a date that will live in infamy,” notched only a 0.18 interest score. You may note that Roosevelt’s speech appears shorter horizontally than King’s in the chart; the x-axis reflects the number of sentences in each speech evaluated. The “Pearl Harbor Address to the Nation” is only 28 sentences compared to 78 for “I Have a Dream.”

There are a couple of good reasons for Roosevelt’s low performance on the interest scale in the speech about the Pearl Harbor attack that launched the United States into World War II. The report breaks this down in detail and shows why it was consistent with the former president’s other speeches of the era.

These were different speakers in different time periods, delivering different types of speeches. However, the Treegoat model lets us directly compare how interesting the text of each speech is without opinion intruding. That’s important. To date, opinion has been the only method for assessing the quality of speeches. More on that topic below.

Download Report

The AI Model’s Origins

“The creation of the speech analysis model came out of the work Treegoat was doing in creating models to analyze and identify the most interesting moments in podcasts for our Marbyl application, which is coming to market in the iOS and Android app stores by end of the year,” said Matthew Groner, chief product officer of Treegoat. He added, “In working with various training data sets and long blocks of audio, we began investigating speeches and their similarities and differences to podcasts and then created separate models specifically to analyze speeches.”

We have seen a lot lately about the power of AI models to generate text. OpenAI’s GPT-3 is the most famous, and many new applications and entire businesses are being built on it. There are also models, such as Grammarly’s, that evaluate text for adherence to grammar rules and recommend edits. Applying AI models to evaluate text based on how interesting the language is, across an entire speech or any document for that matter, is a novel application.

The Power of Subjective Language

The analysis includes the results of more than 120 speeches. Voicebot had no role in developing or training Treegoat’s AI model; however, we were given complete freedom to submit speeches for AI model evaluation and independently assess the results.

In addition to interest, the model evaluates the level of subjective language used in each sentence and aggregates that into an overall “subjectivity” score, which is also presented on a scale of 0.0 to 1.0. That enabled us to chart the relationship between “interest” and “subjectivity” for speeches.

You can see in the chart above that there is a strong correlation between the use of subjective language and how interesting a speech is rated. While there are some outliers in the data set, the pattern is easy to identify through the visualization. The results also show a bifurcation, which has an interesting explanation highlighted in the full report, where the chart’s data callouts are addressed individually.
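The correlation the chart depicts can be quantified with a Pearson correlation coefficient over per-speech (interest, subjectivity) pairs. A minimal sketch follows; the three interest scores are the ones quoted in this article, while the paired subjectivity values are hypothetical, since the report’s actual subjectivity numbers are not reproduced here.

```python
from math import sqrt

def pearson(xs, ys):
    # Pearson correlation coefficient between two equal-length score lists:
    # covariance of the deviations divided by the product of their norms.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Interest scores quoted in this article; subjectivity values are hypothetical.
interest = [0.88, 0.33, 0.18]
subjectivity = [0.90, 0.40, 0.20]
r = pearson(interest, subjectivity)  # strongly positive for this toy data
```

A value of `r` near 1.0 corresponds to the tight upward-sloping pattern the chart shows; outliers in the real data set would pull it lower.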

Four Dimensions of Speeches

While most of our analysis focused on the model output, we also identified two important factors about speeches that go beyond what the AI model attempts to measure. First, we learned that all speech evaluations today are entirely based on opinion. There is no objective or mathematical method for measuring how interesting a speech is. Influential people in media, finance, academia, and government tell us whether a speech is good, interesting, or uninteresting. The idea that Treegoat could inject a more objective evaluation of speeches, disassociated from preconceptions and individual biases, is intriguing.

Second, the way people evaluate speeches goes well beyond the text of a speech. Their opinions are formed by four dimensions that include context, speaker, delivery, and text. The text is unique in that it has objective elements that the other dimensions lack. It strips away the influence of opinion bias. It is also the only element that is under the speaker’s complete control.

While you cannot control what preconceptions and biases speech listeners harbor about the speaker, topic, or other characteristics, you can determine what words you say in a speech and how they are arranged. It makes sense that speakers would want to write more interesting speeches. One way they can do that is to optimize the text. Treegoat’s Groner added:

There are so many possible applications of this model, including for researchers or educators who want to delve further into the analysis of political speeches, speechwriters who could use the model output to pre-test and refine speeches, and in the training of other AIs to create speeches that are engaging.

You will find more than 30 charts and 25 pages of analysis in the full report. You can download it at no cost by clicking the button below. Let me know what you think.

DOWNLOAD NOW
