Fortunately, there is a page called www.opensubtitles.org, where you can get subtitle (.SRT) files for virtually every movie. Now let's see what we can do with these. SRT files are in plain text format (human readable) and can thus be read quite easily with R.

The first thing we need is a function for reading an SRT file. This function is quite long and boring, so I won't talk you through it. Here it is. If you ever use this function, please tell people where you got it! Note that it expects the SRT files to be encoded in UTF-8.


read.srt <- function (file) {
  # read the file line by line (blank lines are skipped by default)
  scan.file <- scan(file, what = "character", sep = "\n", encoding = "UTF-8", quiet = T)
  # the timestamp line of each subtitle block contains "-->"
  arrows <- grep("-->", scan.file, fixed = T)
  subtitles <- c()
  for (arrow.i in 1:length(arrows)) {
    if (arrow.i < length(arrows)) {
      # subtitle text: everything between this timestamp line and
      # the index line of the next block
      subs <- scan.file[(arrows[arrow.i]+1):(arrows[arrow.i+1]-2)]
      subtitles <- c(subtitles, subs) }
    else {
      # last block: everything up to the end of the file
      subs <- scan.file[(arrows[arrow.i]+1):length(scan.file)]
      subtitles <- c(subtitles, subs) } }
  words <- c()
  for (sent in subtitles) {
    # strip italics markup and punctuation, then lowercase and split into words
    sent <- gsub("<i>", "", sent)
    sent <- gsub("</i>", "", sent)
    sent <- tolower(gsub("[[:punct:]]", "", sent))
    sent.spl <- strsplit(sent, " ", fixed = T)[[1]]
    words <- c(words, sent.spl) }
  # drop empty strings
  words[words != ""] }

This function returns a looong vector with all the words spoken in the respective movie. With this vector, we can do more stuff. One thing that is particularly interesting is bad language - swear words. Let's compare movies in terms of what percentage of all the words spoken in the movie are considered "bad words". So I defined a bad-word list (the f-word, several excremental expressions, expressions for female and male genitalia, expressions for the buttocks, and so on), stem all the words in the vector (using the Rstem package) and search for the bad-word stems with regular expressions (we'll come back to this later). The function for this is called swear.ratio(). It's been some time since I wrote this function, and today I would use sets in the regular expressions. But it works...


# wordStem() comes from the Rstem package
library(Rstem)

swear.ratio <- function (words, language = "english") {
  if (language == "english") {
    swear.words <- <<insert many mean words here>> }
  else {
    stop("Only language == 'english' implemented.") }
  n.words <- length(words)
  # stem all words so that inflected forms are matched, too
  stems <- wordStem(words, language = language)
  n.swear <- 0
  # count how many stems match each bad-word pattern
  for (swear.word in swear.words) {
    n.swear.word <- length(grep(swear.word, stems))
    n.swear <- n.swear + n.swear.word }
  cat(round((n.swear / n.words) * 100, 3), "percent swear words.\n")
  n.swear / n.words * 100 }
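
If you want to try this out on a single movie first, it's as easy as this (the file name is just a placeholder, of course):

swear.ratio(read.srt("some.movie.srt"))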

Now I search for all the SRT files in one directory, read those files and put them into a list. The element names in this list are the names of the SRT files without the ".srt". In my case, these are the movie titles. 

srt.files <- list.files(<<path to SRT files>>, full.names = T)
srt.list <- list()
for (srt in srt.files) {
  # the movie name is the file name without the path and the ".srt" extension
  movie.name <- strsplit(srt, "/", fixed = T)[[1]]
  movie.name <- gsub(".srt", "", movie.name[length(movie.name)], fixed = T)
  srt.list[[movie.name]] <- read.srt(srt) }

What we get for one movie is this. These are the first 9 words spoken in the third movie in the list (do you know the movie? I'm sure you do!):

> srt.list[[3]][1:9]
[1] "people" "always" "ask"    "me"     "if"     "i"      "know"   "tyler"  "durden"

So now I iterate over this list and create a data frame with movie information - for now only the swear ratio and the type-token ratio based on word stems (the number of different stems divided by the total number of stems; the maximum is 1).

type.token <- function (words, language = "english") {
  stems <- wordStem(words, language = language)
  # number of distinct stems divided by total number of stems
  unique.stems <- unique(stems)
  length(unique.stems) / length(stems) }
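
A quick sanity check on a toy vector shows what the stemming does here - inflected forms collapse into a single stem, so they don't inflate the type count (assuming the Porter stemmer maps "running" and "runs" to "run", and "jumps" to "jump"):

type.token(c("running", "runs", "jump", "jumps"))
# 2 unique stems out of 4 tokens -> 0.5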

srt.df <- data.frame()
for (movie in names(srt.list)) {
  subs <- srt.list[[movie]]
  srt.df <- rbind(srt.df,
                  data.frame(movie, swear.ratio = swear.ratio(subs), type.token = type.token(subs))) }

Finally, here is the fun part: Plotting swear ratios:
dotchart(srt.df$swear.ratio, labels = srt.df$movie, col = "blue", pch = 19)
abline(v = srt.df[srt.df$movie == "inglourious basterds", "swear.ratio"], col = "red", lwd = 2)

Click on this plot to read the labels.

"Reservoir Dogs" takes home the "Swear-a-lot Cup" with roughly 3% of all spoken words being bad words. It's kind of relieving that "Finding Nemo" being the only real movie for kids in the sample indeed is the one with the least swear word ratio. By the way, all of the hits in "Finding Nemo" are:
- butt (4 times)
- butter (1 time)
- class (3 times)
- passed (3 times)
- assure (1 time)
Here we see a problem using regular expression matching. "class", "passed" and "assure" are also found by a search for "ass". So, the 4 occurrences of "butt" seem to be the only real bad words used in "Finding Nemo". 
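
A sketch of how one could mitigate this (not what I used back then): anchor each pattern to the whole stem with "^" and "$", so "ass" no longer matches inside "class" or "passed". In swear.ratio(), the grep() line would become:

n.swear.word <- length(grep(paste0("^", swear.word, "$"), stems))

The trade-off is that compounds and creative variants are no longer caught, so the bad-word list would have to contain them explicitly.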

You might wonder what the red line in the plot indicates. As you can see in the plotting command, the red line marks the swear ratio of "Inglourious Basterds" (IG). IG is the Tarantino movie with the least amount of swearing (as measured by the swear words / total words ratio, which is roughly 1% for IG). So I call the red line the "Tarantino threshold".

Several movies get over the Tarantino threshold, for example "Shawshank Redemption" (excuse the extra space in the plot) and "Fight Club". Together with "Goodfellas", these are the only three movies in our sample getting over the Tarantino threshold that were not directed by Quentin Tarantino.

I will do more stuff with these swear-word ratios another time. For now, let's plot the type-token ratios, which can be considered a measure of "lexical diversity" throughout the movie.
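
The plotting command is analogous to the one for the swear ratios:

dotchart(srt.df$type.token, labels = srt.df$movie, col = "blue", pch = 19)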

No clear pattern for Tarantino's movies this time. Indeed, the movie with the least lexical diversity ("Jackie Brown") and the one with the second highest ("Kill Bill") are both directed by Tarantino. There is one problem with this plot: the type-token ratio is positively correlated with the length of the movie, because the longer the movie, the higher the probability that new words occur. This could be a reason for "The Godfather 2" ranking that high in this plot. So, in one of my next posts, I'll try to incorporate the length of a movie into the analyses and see what we can get out of this.
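
By the way, a quick way to check this hunch with the data we already have would be to correlate each movie's word count (a rough proxy for its length) with its type-token ratio:

# word counts per movie, in the same row order as srt.df
n.tokens <- sapply(srt.list, length)
cor(n.tokens, srt.df$type.token)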

But that's it for today... bye and see you soon.






