
Using R for Nonparametric Statistics: The Kruskal-Wallis Test, Part Two

A Tutorial by Douglas M. Wiig

Before we can run the Kruskal-Wallis test we need to define which column contains the factor (independent variable) and which contains the authoritarianism scores (dependent variable). Once we define the factor column R will match the correct score to each of the 14 observations.
As set up in the study, ‘Group’ is the factor (independent variable), and ‘authscore’ is the dependent variable. Use the command:

> kruskal$Group <- factor(kruskal$Group)

This converts the ‘Group’ column into a factor with three levels, designating which group each observation belongs to. To make sure the data structure has been set up correctly use the command:

> str(kruskal)
'data.frame':   14 obs. of  2 variables:
 $ Group    : Factor w/ 3 levels "1","2","3": 1 1 1 1 1 2 2 2 2 2 ...
 $ authscore: num  96 128 83 61 101 82 124 132 135 109 ...
>

The output of this command shows a summary of the structure of the data frame created. We can now run the Kruskal-Wallis test with the command:

> kruskal.test(authscore ~ Group, data=kruskal)

The output will be:

Kruskal-Wallis rank sum test

data: authscore by Group
Kruskal-Wallis chi-squared = 6.4057, df = 2, p-value = 0.04065

>
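
Because the test compares groups on ranked scores, it can help to view the group medians alongside the test statistic. A quick sketch using the data frame created in Part One:

> # median authoritarianism score for each group
> tapply(kruskal$authscore, kruskal$Group, median)
  1   2   3
 96 124 148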

As seen in the kruskal.test output above, the analysis of authoritarianism score by group indicates that the probability of the differences in scores among the three groups being due to chance alone is less than the .05 alpha level that was set for the study (p < .05). Further post hoc analysis would be necessary to determine the exact nature of the differences among the scores of the three groups. This will be the topic of a future tutorial.
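
As a preview of that post hoc analysis, one common nonparametric approach is a set of pairwise Wilcoxon rank-sum tests with an adjustment for multiple comparisons. A minimal sketch using base R (not necessarily the method developed in Part Three):

> # pairwise Wilcoxon rank-sum tests with a Bonferroni correction
> pairwise.wilcox.test(kruskal$authscore, kruskal$Group, p.adjust.method="bonferroni")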

More to come:  Part Three will explore the use of multiple comparison techniques to analyze ranked means

Using R for Nonparametric Analysis: The Kruskal-Wallis Test, Part One

A tutorial by Douglas M. Wiig

Analysis of variance (ANOVA) is a commonly used technique for examining the effect of an independent variable with three or more levels on a dependent variable. There are several types of ANOVA, ranging from simple one-way ANOVA to the more complex multivariate analysis of variance, MANOVA. ANOVA makes several assumptions about the sample data being used, such as the assumption of normal distribution of the variables in the parent population, underlying continuous distribution of the variables, and interval or ratio level measurement of all variables. If any of these assumptions cannot be met a researcher can turn to a nonparametric counterpart to ANOVA for the analysis. This tutorial will discuss the use of the Kruskal-Wallis test, the nonparametric counterpart to analysis of variance.

In this tutorial I will explore a simple example and discuss entering the sample data into a data file using the R data editor. I will then discuss setting up the data for analysis and using the Kruskal-Wallis test.

I am going to assume that the reader has a working knowledge of ANOVA with parametric data. Since ANOVA uses sample means and variances as the basis of the statistical test, interval or ratio level measurement is necessary to insure valid results, in addition to the assumptions indicated above. With the nonparametric Kruskal-Wallis test the only assumptions to be met are ordinal or better measurement and an underlying continuous distribution. The example to be used here is taken from a book on nonparametric statistics by Sidney Siegel (Sidney Siegel, Nonparametric Statistics for the Behavioral Sciences, New York: McGraw-Hill, 1956, pp. 184-196).

A researcher wishes to test the hypothesis that school administrators are typically more authoritarian than classroom teachers. He also believes that many classroom teachers are administration-oriented in their professional aspirations, which may, in turn, have an effect on their authoritarianism. Fourteen subjects are selected and divided into three groups: teaching-oriented teachers (classroom teachers who wish to remain in a teaching position), administration-oriented teachers (classroom teachers who aspire to become administrators), and practicing administrators (Siegel, p. 186). The level of authoritarianism of each subject is measured through a survey that assigns an authoritarianism score that is considered to be at least ordinal in nature. Higher scores indicate higher levels of authoritarianism (Siegel, p. 186). The null hypothesis is that there is no difference in authoritarianism scores among the three groups. The alternative hypothesis is that the authoritarianism scores of the three groups differ. The alpha level for rejecting the null hypothesis is p = .05 (Siegel, p. 186).

Since we make no assumption about a normal distribution of scores, have a small sample size of n = 14, and have ordinal measurement, we will use the nonparametric test, which is based on medians and ranks rather than the means and variances used in parametric ANOVA. The mathematical details of how this is done are beyond the scope of this tutorial; see Siegel, pp. 187-189 for details. The authoritarianism scores for the three groups are shown below:

Authoritarianism Scores of Three Groups of Educators

Teaching-oriented     Administration-oriented     Administrators
teachers (n=5)        teachers (n=5)              (n=4)
----------------------------------------------------------------
  96                    82                          115
 128                   124                          149
  83                   132                          166
  61                   135                          147
 101                   109
----------------------------------------------------------------

(Siegel, p. 187)

The first task is to create an R data frame with the scores from the table. We will enter the scores using the R data editor. We will name the data frame ‘kruskal.’   Invoke the editor using the following commands:

> kruskal <- data.frame()
> kruskal <- edit(kruskal)

You should see the data entry editor open in a separate window. In order to process the data properly it needs to be entered into two columns. The first column will be the factors (which group the scores belong to), and the second column will contain the actual scores. Label column 1 ‘Group’ and column 2 ‘authscore.’ When the data are entered your editor should look like this:

----------------------
     Group  authscore
1        1         96
2        1        128
3        1         83
4        1         61
5        1        101
6        2         82
7        2        124
8        2        132
9        2        135
10       2        109
11       3        115
12       3        149
13       3        166
14       3        147
----------------------

Make sure that each column of numbers is of the data type “Real.” Close the data editor by clicking ‘Quit’ and the data will be saved in the working directory for access. To see what has been entered in the data editor use the command:

> kruskal
   Group authscore
1      1        96
2      1       128
3      1        83
4      1        61
5      1       101
6      2        82
7      2       124
8      2       132
9      2       135
10     2       109
11     3       115
12     3       149
13     3       166
14     3       147
>

You should see the output as above. If you need to make changes, simply invoke the editor with:

> kruskal <- edit(kruskal)

The editor will open and you can make any changes you need to. Be sure to click on ‘Quit’ to save the changes to the working directory.
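
As an alternative to the spreadsheet editor, the same data frame can also be built entirely from the command line. A minimal sketch using the scores from the table above:

> Group <- c(1,1,1,1,1,2,2,2,2,2,3,3,3,3)
> authscore <- c(96,128,83,61,101,82,124,132,135,109,115,149,166,147)
> kruskal <- data.frame(Group, authscore)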

Part Two will continue the analysis

R Tutorial: A Simple Script to Create and Analyze a Data File, Part Two


A tutorial by D.M. Wiig

In part one I discussed creating a simple data file containing the height and weight of 10 subjects. In part two I will discuss the script needed to create a simple scatter diagram of the data and perform a basic Pearson correlation. Before attempting to continue the script in this tutorial make sure that you have created and saved the data file as discussed in part one.

To conduct a correlation/regression analysis of the data we want to first view a simple scatter plot. Load the ‘car’ package into R memory (it will be used later for an enhanced scatter plot). Use the command:

> library(car)

Then issue the following command to plot the graph:

> plot(Height ~ Weight, log="xy", data=Sampledatafile)

The output is seen below:

[Figure: scatter plot of Height by Weight on log-scaled axes]

We can calculate a Pearson’s Product Moment correlation coefficient by using the command:

> # Pearson product-moment correlation between height and weight
> cor(Sampledatafile[,c("Height","Weight")], use="complete.obs", method="pearson")

Which results in:

           Height    Weight
Height  1.0000000 0.8813799
Weight  0.8813799 1.0000000
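
If a significance test for this coefficient is also wanted, the base R function cor.test supplies one along with a confidence interval. A brief sketch:

> # test the height/weight correlation for significance
> cor.test(Sampledatafile$Height, Sampledatafile$Weight, method="pearson")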

To run a simple linear regression for Height and Weight use the following code. Note that the dependent variable (Weight) is listed first:

> model <- lm(Weight ~ Height, data=Sampledatafile)

> summary(model)

Call:
lm(formula = Weight ~ Height, data = Sampledatafile)

Residuals:
     Min       1Q   Median       3Q      Max
-30.6800 -16.9749  -0.8774  19.9982  25.3200

Coefficients:
             Estimate Std. Error t value Pr(>|t|)
(Intercept) -337.986     98.403  -3.435 0.008893 **
Height         7.518      1.425   5.277 0.000749 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 21.93 on 8 degrees of freedom
Multiple R-squared: 0.7768,    Adjusted R-squared: 0.7489
F-statistic: 27.85 on 1 and 8 DF,  p-value: 0.0007489

>

To plot a regression line on the scatter diagram use the following command line. Note that we enter the y (dependent) variable first and then the x (independent) variable:

> scatterplot(Weight ~ Height, log="xy", reg.line=lm, smooth=FALSE, spread=FALSE,
+ data=Sampledatafile)

>

This will produce a graph as seen below. Note that box plots have also been included in the output:

[Figure: scatter plot of Weight by Height with fitted regression line and marginal box plots]
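
If the car package is not available, a similar plot (without the box plots or log scaling) can be produced with base R graphics alone. A minimal sketch reusing the model object created above:

> plot(Weight ~ Height, data=Sampledatafile)  #basic scatter plot
> abline(model)                               #overlay the fitted regression line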

This tutorial has hopefully demonstrated that complex tasks can be accomplished with relatively simple command line script. I will explore more of these simple scripts in future tutorials.

More to come.

R Tutorial: Using R to Work With Datasets From the NORC General Social Survey

A tutorial by D. M. Wiig

Part One:

When I teach classes in social science statistics and social science research methods I like to use “live” data as much as possible, both in classroom lectures and in homework assignments. For the social sciences one excellent and readily available source of live data is the ongoing General Social Survey (GSS) project, the National Data Program for the Sciences. This is a project of NORC, a social science research center at the University of Chicago (see www.norc.org for the project’s main web site).

There are a number of datasets available in different formats. The quick download datasets that I like to use are primarily SPSS data files. Many institutions have SPSS available for students and faculty, but the use of SPSS is by no means universal. I have found that it is easy to use R to read the .sav format files into an R data frame and then write the file out to a comma separated value (.csv) format that can be read by almost any statistics software package. As I will discuss in this and future tutorials, it is also quite effective to use R to analyze the GSS files.

To create R datasets using the GSS files we can use some of the file import/export features available in R. To begin, make sure that the R packages “Hmisc” and “foreign” are installed and loaded in your R session environment. This can be accomplished using:

> install.packages("Hmisc")   #needed for file import
> install.packages("foreign") #needed for file import

As an example, the following code will load the GSS data file “gss2010x.sav” into an R data frame using the spss.get function:

> library(Hmisc)
> gssdataframe <- spss.get("/path-to-your-file/gss2010x.sav", use.value.labels=TRUE)

The file “gss2010x.sav” contains 500 observations of 47 variables. Codebooks and other information about the data in these datasets are readily available for download from the NORC web site. After the data is loaded into the data frame it can be viewed using:

>gssdataframe

To convert and save the file to a comma separated value (.csv) format, use the write.table function:

> #write dataframe to .csv file
> write.table(gssdataframe, "/path-to-your-file/gss2010x.csv", sep=",")
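
As a quick check that the export worked, the new .csv file can be read back into R. A brief sketch, assuming the same placeholder path:

> gsscheck <- read.csv("/path-to-your-file/gss2010x.csv")
> str(gsscheck)  #should again show 500 observations of 47 variables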

The file, now in .csv format, can be accessed with virtually any statistics package or other software. In my next tutorial I will discuss working with GSS data using the various table and cross table functions available in R.

 

Using R in Nonparametric Statistics: Basic Table Analysis, Part Three, Using assocstats and collapse.table


A tutorial by D.M. Wiig

As discussed in a previous tutorial, one of the most common methods of displaying and analyzing data is through the use of tables. In this tutorial I will discuss setting up a basic table using R and explore the use of the assocstats function to generate several commonly used nonparametric measures of association. The assocstats function will generate the association measures of the Phi-coefficient, the Contingency Coefficient, and Cramer’s V, in addition to the Likelihood Ratio and Pearson’s Chi-Squared test for independence. Cramer’s V and the Contingency Coefficient are commonly applied to r x c tables, while the Phi-coefficient is used in the case of dichotomous variables in a 2 x 2 table.

To illustrate the use of assocstats I will use hypothetical data exploring the relationship between level of education and average annual income. Education will be measured using the nominal categories “High School”, “College”, and “Graduate”. Average annual income will be measured using ordinal categories and expressed in thousands:

“<25”, “25-50”, “51-100”, and “>100”

Frequency counts of individuals that fall into each category are numeric.

In the first example a 4 x 3 table is created with hypothetical frequencies as shown below:

Income                        Education
(thousands)     High School   College   Graduate
<25                      15         8          5
25-50                    12        12          8
51-100                   10        22         25
>100                      5        10         32
The first table, table1, is entered into R as a data frame using the following commands:

#create 4 x 3 data frame
#enter table1 in frequency form
table1 <- data.frame(expand.grid(income=c("<25","25-50","51-100",">100"), education=c("HS","College","Graduate")), count=c(15,12,10,5,8,12,22,10,5,8,25,32))

Check to make sure the data are in the right row and column categories. Notice that the data are entered in the ‘count’ list by columns.

> table1
   income education count
1     <25        HS    15
2   25-50        HS    12
3  51-100        HS    10
4    >100        HS     5
5     <25   College     8
6   25-50   College    12
7  51-100   College    22
8    >100   College    10
9     <25  Graduate     5
10  25-50  Graduate     8
11 51-100  Graduate    25
12   >100  Graduate    32
>

If the table structure looks correct, generate the table, tab1, using the xtabs function:

> #create table tab1 from data.frame
> tab1 <- xtabs(count ~ income + education, data=table1)

Show the table using the command:

> tab1
        education
income   HS College Graduate
  <25    15       8        5
  25-50  12      12        8
  51-100 10      22       25
  >100    5      10       32
>
Use the assocstats function to generate measures of association for the table. Make sure that you have installed and loaded the vcd and vcdExtra packages.
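
If they are not already attached, load them first; assocstats comes from vcd and collapse.table (used below) comes from vcdExtra:

> library(vcd)       #provides assocstats
> library(vcdExtra)  #provides collapse.table

Now run assocstats with the following command: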

> assocstats(tab1)
                    X^2 df   P(> X^2)
Likelihood Ratio 31.949  6 1.6689e-05
Pearson          32.279  6 1.4426e-05

Phi-Coefficient   : 0.444
Contingency Coeff.: 0.406
Cramer's V        : 0.314
>
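
To see where these figures come from, the three coefficients can be reproduced by hand from the Pearson chi-squared value and the total frequency count. A sketch using the output above:

> X2 <- 32.279                        #Pearson chi-squared from assocstats
> n <- sum(tab1)                      #total number of observations (164)
> sqrt(X2/n)                          #Phi-coefficient, approximately 0.444
> sqrt(X2/(X2 + n))                   #Contingency coefficient, approximately 0.406
> sqrt(X2/(n*(min(dim(tab1)) - 1)))   #Cramer's V, approximately 0.314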

The measures show an association between the two variables. My intent is not to provide an analysis of how to evaluate each of the measures. There are excellent sources of documentation on each measure of association in the R CRAN literature. Since the Phi-coefficient is designed primarily to measure association between dichotomous variables in a 2 x 2 table, collapse the 4 x 3 table using the collapse.table function to get a more meaningful Phi-coefficient. Since we want to go from a 4 x 3 to a 2 x 2 table we essentially collapse the table in two stages. The first stage collapses the table to a 2 x 3 table by combining the “<25” with the “25-50” and the “51-100” with the “>100” categories of income.

The resulting 2 x 3 table is seen below:

                        Education
Income      High School   College   Graduate
<50                  27        20         13
>50                  15        32         57

To collapse the table use the R function collapse.table to combine the “<25” and “25-50” categories and the “51-100” and “>100” categories as discussed above:

> #collapse table tab1 to a 2 x 3 table, table2
> table2 <- collapse.table(tab1, income=c("<50","<50",">50",">50"))

View the resulting table, table2, with:

> table2
      education
income HS College Graduate
   <50 27      20       13
   >50 15      32       57
>

Now collapse the table to a 2 x 2 table by combining the “College” and “Graduate” columns:
> #collapse 2 x 3 table2 to a 2 x 2 table, table3
> table3 <- collapse.table(table2, education=c("HS","College","College"))

View the resulting table, table3, with:

> table3
      education
income HS College
   <50 27      33
   >50 15      89
>

Use the assocstats function to evaluate the 2 x 2 table:

> #use assocstats on the 2 x 2 table, table3
> assocstats(table3)
                    X^2 df   P(> X^2)
Likelihood Ratio 18.220  1 1.9684e-05
Pearson          18.673  1 1.5519e-05

Phi-Coefficient   : 0.337
Contingency Coeff.: 0.32
Cramer's V        : 0.337
>

There are many other table manipulation functions available in the R vcd and vcdExtra packages, as well as in other packages, that provide analysis of nonparametric data. This series of tutorials hopefully serves to illustrate some of the more basic and common table functions using these packages. The next tutorial looks at the use of the ca function to perform and graph the results of a basic Correspondence Analysis.
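
As a glimpse of that tutorial, a basic correspondence analysis of tab1 takes only a few lines once the ca package is installed. A minimal sketch, not the full analysis to come:

> #simple correspondence analysis of the income by education table
> library(ca)
> fit <- ca(as.matrix(tab1))
> summary(fit)  #inertia and row/column coordinates
> plot(fit)     #two-dimensional map of row and column categories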