I reread Brühlmann and Schmid’s (2015) article evaluating the PENS scale and noticed that they found issues with the reverse-worded item (E1) and argued that the scale quality would benefit from removing or rephrasing the item. I used all the presence scale items in my embodiment analysis published in MindTrek.
Here are some more (explorative) analyses of the embodiment data used in Embodiment in Character-Based Video Games.
I also collected workload data using the raw NASA-TLX while gathering data for the EFA and CFA, but I did not use the workload data in the analyses. My assumption was that workload would correlate with embodiment, but I did not look into this.
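For what it is worth, the correlation check I never ran would be straightforward. A minimal sketch, with made-up per-participant composite scores (the variable names and numbers are mine, not from the actual data set):

```python
import numpy as np

# Hypothetical per-participant composites: raw NASA-TLX workload (0-100)
# and mean embodiment score (1-5). These values are illustrative only.
workload = np.array([55.0, 62.5, 48.3, 70.1, 66.7, 40.0, 58.2, 61.9])
embodiment = np.array([3.2, 3.8, 2.9, 4.1, 4.0, 2.5, 3.5, 3.9])

# Pearson correlation between the two composites.
r = np.corrcoef(workload, embodiment)[0, 1]
print("r = %.2f" % r)
```

With ordinal scale items, a rank-based coefficient (Spearman) might be the safer choice than Pearson.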
A lecture about how to analyse (board) games using statistics, probability theory, and simulations.
Link to the slides in case the SlideShare plugin does not work: http://www.slideshare.net/lankoski/analysis-for-design
Scripts used to analyse games and visualise data: http://www.mediafire.com/download/whucaos4v9chv40/AnalysisForDesignScripts.zip
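To give the flavour of the simulation approach, here is a minimal Monte Carlo sketch for a hypothetical dice mechanic (this example is mine, not taken from the slides or the script bundle): estimating the probability that 2d6 beats 1d12.

```python
import random

random.seed(1)  # fixed seed so the run is reproducible
trials = 100_000

# Count how often the sum of two six-sided dice exceeds one twelve-sided die.
wins = sum(
    (random.randint(1, 6) + random.randint(1, 6)) > random.randint(1, 12)
    for _ in range(trials)
)
p = wins / trials
print("P(2d6 > 1d12) ~ %.3f" % p)
```

The exact value is 0.5, which the simulation should approach; the same pattern (simulate the mechanic many times, count outcomes) works for mechanics too messy to solve analytically.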
The figures from the poster Modeling Player-Character Engagement in Single-Player Character-Driven Games, presented at ACE 2013 in the Netherlands:
Below is a link to the data file and the R code used in the final models in “Models for Story Consistency and Interestingness in Single-Player RPGs” (MindTrek 2013) and “Modeling Player-Character Engagement in Single-Player Character-Driven Games” (ACE 2013, Netherlands). The models q4 and q7 are used in the first paper, and the model q8 is used in the second paper.
A free R book:
I wrote some code to check my ordinal/clmm models against the data (and to learn to use ggplot2).
The function pred() is from the clmm2 tutorial and calculates predictions based on the model. The function plot.probabilities3() is for plotting the predictions and the distribution from the data.
Update: changed the extreme-subject visualization. An area did not seem appropriate when the average player is not always inside the area.
Update: added visualizations produced with the scripts.
Update 2: updated plot.probabilities() so that the response variable can have an arbitrary number of levels.
I have been using the ordinal package to crunch data. The tutorial for mixed models covers only clmm2, not clmm. Here is code for visualizing predicted probabilities for clmm. All the code is based on the clmm2 tutorial.
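The underlying arithmetic of those predicted probabilities is simple: with a logit link, a cumulative link model gives P(Y ≤ j) = logit⁻¹(θⱼ − η), and the category probabilities are the differences of adjacent cumulative probabilities. A small sketch in Python (the threshold values and η here are made up for illustration, not taken from my fitted models):

```python
import math

def category_probs(thetas, eta):
    """Category probabilities P(Y = j) for a cumulative logit model.

    thetas: increasing threshold parameters (len(thetas) + 1 categories).
    eta: the linear predictor for one observation.
    """
    # Cumulative probabilities P(Y <= j) = inverse logit of (theta_j - eta).
    cum = [1.0 / (1.0 + math.exp(-(t - eta))) for t in thetas]
    cum = [0.0] + cum + [1.0]
    # Per-category probabilities are differences of adjacent cumulatives.
    return [hi - lo for lo, hi in zip(cum, cum[1:])]

# Hypothetical 4-point response scale with thresholds -1.5, 0, 1.5 and eta = 0.5.
probs = category_probs([-1.5, 0.0, 1.5], eta=0.5)
print(["%.3f" % p for p in probs])
```

This is the same computation pred() performs for a fitted clmm object, just stripped down to the link function and thresholds.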
Here is code that I came up with for drawing a pie chart using rpy2. The function takes two lists: data contains the frequencies and labels the labels. The argument title sets the title text of the chart. radius sets the radius of the pie (max 1.0). A font_size of 1.0 is the default; 1.5 is 50% bigger and 0.5 is 50% smaller. If a file name is provided, the chart will be saved to a PNG file. radius, font_size, and file_name are optional parameters.
import rpy2.robjects as robjects
from rpy2.robjects.packages import importr

def draw_pie(data, labels, title, radius=0.8, font_size=1.0, file_name=None):
    # Sum the frequencies so each label can show its percentage share.
    total = 0
    for x in data:
        total = total + x
    labels_with_percentages = []
    for td, tl in zip(data, labels):
        percentage = 100.0 * float(td) / float(total)
        labels_with_percentages.append("%s %.1f%%" % (tl, percentage))
    l = robjects.StrVector(labels_with_percentages)
    d = robjects.IntVector(data)
    grdevices = importr('grDevices')
    # If a file name is given, direct the output to a PNG device.
    if file_name:
        grdevices.png(file_name)
    robjects.r['par'](cex=font_size)
    robjects.r.pie(d, l, main=title, radius=radius)
    if file_name:
        grdevices.dev_off()
To use this we can do:
draw_pie([10,8],["Male", "Female"], "Sex", font_size=1.5, file_name="chart.png")