Alright, let's test some parallelization functionalities in R.

The machine:
MacBook Air (mid-2013) with 8 GB of RAM and an Intel Core i7-4650U (Haswell) CPU. This CPU is hyper-threaded, meaning (at least that's my understanding of it) that it has two physical cores but can run up to four threads.

The task:
Draw a number of cases from a normal distribution with a mean of 10 and a standard deviation of 30. Do this a hundred times and combine the results in one vector. The number of cases is varied from half a million to two million. The number of cores used by R is also varied (between 1 and 4). All of this is done 5 times, so we get multiple estimates for each setting. Altogether, 80 runs are made: 5 repetitions x 4 core counts x 4 case counts = 80 runs.

The results:

This is quite interesting: We clearly see that there is virtually no performance gain for the 3- and 4-core runs. I guess this is because we do not really have 4 physical cores available on the hyper-threaded CPU. So, it does not really make a difference whether we assign 2, 3, or 4 cores to a task on a hyper-threaded CPU. The performance gain from 1 to 2 cores, however, is quite clear.
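On a hyper-threaded machine, R can report both the logical and the physical core count, which is useful for choosing a sensible cluster size. A minimal check (the counts in the comments match the MacBook Air described above; on another machine they will differ):

```r
library(parallel)

# Logical cores (hyper-threads) vs. physical cores.
detectCores(logical = TRUE)   # on the MacBook Air above: 4
detectCores(logical = FALSE)  # on the MacBook Air above: 2
```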

Code (plotting code not supplied):
library(doParallel)
library(parallel)

result.df <- data.frame()

for (run in 1:5) {
  cat("Run", run, "\n")
  for (cases in c(500000, 1000000, 1500000, 2000000)) {
    cat(cases, "\n")
    for (cores in c(1, 2, 3, 4)) {
      cluster <- makeCluster(cores)
      registerDoParallel(cluster)
      t1 <- Sys.time()
      # Draw 'cases' values from N(10, 30) a hundred times and
      # combine everything into one long vector. The iterator i is
      # only there to repeat the task; it is not used in the body.
      result.vec <- foreach(i = 1:100, .combine = c) %dopar% {
        rnorm(cases, mean = 10, sd = 30)
      }
      difft <- difftime(Sys.time(), t1, units = "secs")
      result.df <- rbind(result.df,
                         data.frame(n.cores = cores, n.cases = cases,
                                    secs = as.numeric(difft)))
      stopCluster(cluster)  # release the workers before the next setting
    }
  }
}



  1. "This CPU is hyper-threaded, meaning (at least that's my understanding of it) that it has two physical cores but can run up to four threads." Not exactly; you can always run (just about) as many threads as you want, but hyperthreading reduces contention between two threads running on the same core. The key insight is that on some tasks, your code above would indeed show that four threads was faster than two, because of that reduced contention; but going to higher numbers of threads than four should never be faster than four threads, for any task, because it will always increase contention (on your machine, with two physical cores and four virtual cores). It might be interesting to look at larger numbers of threads than four, using your code; you should see performance go down, but I don't know by how much.

    Intel claims that hyperthreading can result in a speedup of 15-30% for some applications, but it is extremely dependent on details of exactly what the threads are doing, on their memory usage patterns, and a million other factors. If you want to know whether a given task will benefit from hyperthreading, you basically have to try it and see. I use hyperthreading quite often on my 8-physical-core Mac Pro desktop, but the tasks I'm running are quite heterogeneous, which would tend to make hyperthreading more beneficial. Your code is doing a task that is extremely homogeneous (probably spending almost all of its time running a tight loop inside C code called by rnorm); hyperthreading might not be able to help much there because all four threads are trying to use exactly the same processor resources, and even with hyperthreading, since there are only two physical cores, a given processor resource (such as, I might speculate, the physical circuitry that calculates an exponential, in your case) will only have two physical instantiations. It would be interesting to try a more heterogeneous task – something like fitting a linear model to a large dataset, for example.

    1. Thanks for your comments and clarifications, Ben. I will try some other tasks and let you know the results. It would be interesting to find some tasks in R which benefit from hyper-threading and others that don't.

  2. Thanks for sharing this. I understand everything except for the foreach line. You never use the "i = 1:100". What is that for?

    1. Hey Ben, thanks for the question. The text in my post corresponding to this line of code is:

      "Do this a hundred times and combine the result in one vector."

      So, the iteration variable i is not used - it is only there to do the task a hundred times.

      Best, Sascha


Hi all, this is just an announcement.

I am moving Rcrastinate to a blogdown-based solution and am therefore leaving blogger.com. If you're interested in the new setup and how you could do the same yourself, please check out the all shiny and new Rcrastinate over at

http://rcrastinate.rbind.io/

In my first post over there, I am giving a short summary on how I started the whole thing. I hope that the new Rcrastinate is also integrated into R-bloggers soon.

Thanks for being here, see you over there.

Alright, seems like this is developing into a blog where I am increasingly investigating my own music listening habits.

Recently, I've come across the analyzelastfm package by Sebastian Wolf. I used it to download my complete listening history from Last.FM for the last ten years. That's a complete dataset from 2009 to 2018 with exactly 65,356 "scrobbles" (which is the word Last.FM uses to describe one instance of a playback of a song).

Giddy up, giddy it up

Wanna move into a fool's gold room

With my pulse on the animal jewels

Of the rules that you choose to use to get loose

With the luminous moves

Bored of these limits, let me get, let me get it like

Wow!

When it comes to surreal lyrics and videos, I'm always thinking of Beck. Above, I cited the beginning of the song "Wow" from his latest album "Colors" which has received rather mixed reviews. In this post, I want to show you what I have done with Spotify's API.
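The retrieval code itself isn't shown in this excerpt, but to illustrate the kind of call involved, here is a minimal sketch using the spotifyr package. The credentials are placeholders, and treat the exact column names as assumptions about spotifyr's output:

```r
library(spotifyr)

# Placeholder credentials - register an app at developer.spotify.com first.
Sys.setenv(SPOTIFY_CLIENT_ID = "your-client-id")
Sys.setenv(SPOTIFY_CLIENT_SECRET = "your-client-secret")

# Audio features (tempo, energy, valence, ...) for all of Beck's tracks.
beck <- get_artist_audio_features("beck")
head(beck[, c("track_name", "album_name", "valence", "energy")])
```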

Click here for the interactive visualization

If you're interested in the visualisation of networks or graphs, you might've heard of the great package "visNetwork". I think it's a really great package and I love playing around with it. The scenarios of graph-based analyses are many and diverse: whenever you can describe your data in terms of "outgoing" and "receiving" entities, a graph-based analysis and/or visualisation is possible.
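As a minimal illustration of the "outgoing"/"receiving" idea (the node labels here are made up for the example): each row of the edge data frame connects an outgoing entity (from) to a receiving one (to).

```r
library(visNetwork)

# Three entities and two directed relations between them.
nodes <- data.frame(id = 1:3, label = c("Artist", "Album", "Song"))
edges <- data.frame(from = c(1, 2), to = c(2, 3), arrows = "to")

visNetwork(nodes, edges)  # opens an interactive widget in the viewer
```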

Here is some updated R code from my previous post. It doesn't throw any warnings when importing tracks with and without heart rate information. Also, it is easier to distinguish types of tracks now (e.g., when you want to plot runs and rides separately). Another thing I changed: You get very basic information on the track when you click on it (currently the name of the track and the total length).

Have fun and leave a comment if you have any questions.

So, Strava's heatmap made quite a stir over the last few weeks. I decided to give it a try myself. I wanted to create some kind of "personal heatmap" of my runs, using Strava's API. Also, combining the data with Leaflet maps allows us to make use of the beautiful map tiles supported by Leaflet and to zoom and move the maps around - with the runs on them, of course.

So, let's get started. First, you will need an access token for Strava's API.
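Once the API has returned a track's coordinates, plotting it is short. A hedged sketch, assuming you already have decoded latitude/longitude vectors (the track data here is made up):

```r
library(leaflet)

# Made-up stand-in for one decoded Strava track.
track <- data.frame(lat = c(48.780, 48.781, 48.783),
                    lon = c(9.180, 9.182, 9.185))

leaflet(track) %>%
  addTiles() %>%                              # default OpenStreetMap tiles
  addPolylines(lng = ~lon, lat = ~lat,
               color = "red", opacity = 0.4)  # low opacity stacks like a heatmap
```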

I've been using the ggplot2 package a lot recently. When creating a legend or tick marks on the axes, ggplot2 uses the levels of a character or factor vector. Most of the time, I am working with coded variables that use some abbreviation of the "true" meaning (e.g., "f" for female and "m" for male, or a single character for a location: "S" for Stuttgart and "M" for Mannheim).

In my plots, I don't want these codes but the full name of the level.
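One way to do that is to relabel the factor before plotting (or, equivalently, via scale_x_discrete(labels = ...) in ggplot2). A small example with the gender coding from above:

```r
# Coded variable as it comes in the data.
sex <- c("f", "m", "m", "f")

# Map the codes to full level names for legends and axis ticks.
sex.full <- factor(sex, levels = c("f", "m"),
                   labels = c("female", "male"))

levels(sex.full)  # "female" "male"
```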

It's been a while since I had the opportunity to post something on music. Let's get back to that.

I got my hands on some song lyrics by a range of artists. (I have an R script to download all lyrics for a given artist from a lyrics website.)

Lately, I got the chance to play around with Shiny and Leaflet a lot - and it is really fun! So I decided to catch up on an old post of mine and build a Shiny application where you can upload your own GPX files and plot them directly in the browser.

Of course, you will need some GPX file to try it out. You can get an example file here (you will need to save it as a .gpx file with a text editor, though). Also, the Shiny application will always plot the first track saved in a GPX file.
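The upload part of such an app boils down to very little Shiny code. A minimal sketch, where parse.gpx is a placeholder for whatever GPX reader you use (it is assumed to return a data frame with lat/lon columns):

```r
library(shiny)
library(leaflet)

ui <- fluidPage(
  fileInput("gpx", "Upload a GPX file", accept = ".gpx"),
  leafletOutput("map")
)

server <- function(input, output) {
  output$map <- renderLeaflet({
    req(input$gpx)  # wait until a file has been uploaded
    track <- parse.gpx(input$gpx$datapath)  # placeholder GPX parser
    leaflet() %>%
      addTiles() %>%
      addPolylines(lng = track$lon, lat = track$lat)
  })
}

shinyApp(ui, server)
```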