
BUS308 Week 4 Lecture 1

Examining Relationships

Expected Outcomes

After reading this lecture, the student should be familiar with:

1. Issues around correlation
2. The basics of Correlation analysis
3. The basics of Linear Regression
4. The basics of Multiple Regression

Overview

Often in our detective shows when the clues are not providing a clear answer – such as we are seeing with the apparent continuing contradiction between the compa-ratio and salary related results – we hear the line “maybe we need to look at this from a different viewpoint.” That is what we will be doing this week.

Our investigation changes focus a bit this week. We started the class by finding ways to describe and summarize data sets – finding measures of the center and dispersion of the data with means, medians, standard deviations, ranges, etc. As interesting as these clues were, they did not tell us all we needed to know to solve our question about equal work for equal pay. In fact, the evidence was somewhat contradictory depending upon what measure we focused on. In Weeks 2 and 3, we changed our focus to asking questions about differences and how important different sample outcomes were. We found that not all differences were important, and that many relatively small result differences could safely be ignored for decision-making purposes – they were due to simple sampling (or chance) errors. We found that this idea of sampling error could extend into work and individual performance outcomes observed over time, and that over-reacting to such differences did not make much sense.

Now, in our continuing efforts to detect and uncover what the data is hiding from us, we change focus again as we start to find out why something happened, what caused the data to act as it did; rather than merely what happened (describing the data as we have been doing). This week we move from examining differences to looking at relationships; that is, if some measure changes does another measure change as well? And, if so, can we use this information to make predictions and/or understand what underlies this common movement?

Our tools in doing this involve correlation, the measurement of how closely two variables move together; and regression, an equation showing the impact of inputs on a final output. A regression is similar to a recipe for a cake or other food dish; take a bit of this and some of that, put them together, and we get our result.

Correlation

We have seen correlations a lot, and probably have even used them (formally or informally). We know, for example, that all other things being equal, the more we eat, the more we weigh. Kids, up to the early teens, grow taller the older they get. If we consistently speed, we will get more speeding tickets than those who obey the speed limit. The more effort we put into studying, the better grades we get. All of these are examples of correlations.

Correlations exist in many forms. A somewhat specialized correlation was the Chi Square contingency test (for multi-row, multi-column tables) we looked at last week: if we find the distributions differ, then we say that the variables are related/correlated. This correlation runs from 0 (no correlation) thru positive values (the larger the value, the stronger the relationship).

Probably the most commonly used correlation is the Pearson Correlation Coefficient, symbolized by r. It measures the strength of the association – the extent to which measures change together – between interval or ratio level measures. Excel’s Fx Correl, and the Data Analysis Correlation both produce Pearson Correlations.

Most correlations that we are familiar with show both the direction (direct or inverse) and the strength of the relationship, and run from -1.0 (a perfect inverse correlation) through 0 (no correlation) to +1.0 (a perfect direct correlation). A direct correlation is positive; that is, both variables move in the same direction, such as weight and height for kids. An inverse, or negative, correlation has variables moving in different directions. For example, consider the number of hours you sleep and how tired you feel: the more hours, the less tired, while the fewer hours, the more tired.

The strength of a correlation is shown by the value (regardless of the sign). For example, a correlation of +.78 is just as strong as a correlation of -.78; the only difference is the direction of the change. If we graphed a +.78 correlation the data points would run from the lower left to the upper right and somewhat cluster around a line we could draw thru the middle of the data points. A graph of a -.78 correlation would have the data points starting in the upper left and run down to the lower right. They would also cluster around a line.
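A tiny illustration of that point in code, with made-up numbers (this course uses Excel, not Python; Python 3.10+ happens to have a built-in statistics.correlation that computes the Pearson r):

from statistics import correlation  # available in Python 3.10+

x    = [1, 2, 3, 4, 5]
up   = [2, 4, 5, 4, 6]   # tends to rise with x -> direct (positive) correlation
down = [6, 4, 5, 4, 2]   # same values reversed -> inverse (negative) correlation
print(round(correlation(x, up), 2), round(correlation(x, down), 2))  # prints 0.85 -0.85

The two correlations are equally strong; only the sign (the direction) differs.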

Correlations below an absolute value (when we ignore the plus or minus sign) of around .70 are generally not considered to be very strong. The reason for this is the coefficient of determination (CD). This equals the square of the correlation and shows the amount of shared variation between the two variables. Shared variation can be roughly considered the reason that both variables move as they do when one changes. The more the shared variation, the more one variable can be used to predict the other. If we square .70 we get .49, or about 50% of the variation being shared. Anything less is too weak a relationship to be of much help.

Students often feel that a correlation shows a “cause-and-effect” relationship; that is, changes in one thing “cause” changes in the other variable. In some cases, this is true – height and weight for pre-teens, weight and food consumption, etc. are all examples of possible cause-and-effect relationships; but we can argue that even with these there are other variables that might interfere with the outcomes. And, in research, we cannot say that one thing causes or explains another without having a strong correlation present.

However, just as our favorite detectives find what they think is a motive for someone to have committed the crime, only to find that the motive did not actually cause that person to commit the crime, a correlation does not prove cause-and-effect. An example of this is one the author heard in a statistics class: a perfect +1.00 correlation between the barrels of rum imported into the New England region of the United States between the years of 1790 and 1820 and the number of churches built each year. If this correlation showed a cause-and-effect relationship, what would it mean? Does rum drinking (the assumed result of importing rum) cause churches to be built? Does the building of churches cause the population to drink more rum?

As tempting as each of these explanations is, neither is reasonable – there is no theory or justification to assume either is true. This is a spurious correlation – one caused by some other, often unknown, factor. In this case, the culprit is population growth. During these years – many years before Carrie Nation’s crusade against Demon Rum – rum was the common drink for everyone. It was even served on the naval ships of most nations. And, as the population grew, so did the need for more rum. At the same time, churches in the region could only hold so many bodies (this was before mega-churches that held multiple services each Sunday); so, as the population got too large to fit into the existing churches, new ones were needed.

At times, when a correlation makes no sense, we can find an underlying variable fairly easily with some thought. At other times, it is harder to figure out, and some experimentation is needed. The site http://www.tylervigen.com/spurious-correlations is an interesting website devoted to spurious correlations; take a look and see if you can explain them. 😊

Regression

Linear. Even if the correlation is spurious, we can often use the data in making predictions until we understand what the correlation is really showing us. This is what regression is all about. Earlier, correlations between age, height, and even weight were mentioned. In pediatrician offices, doctors will often have charts showing typical weights and heights for children of different ages. These are the results of regressions, equations showing relationships. For example (and these values are made up for this example), a child’s height might be his/her initial height at birth plus an average growth of 3.5 inches per year. If the average height of a newborn child is about 19 inches, then the linear regression would be:

Height = 19 inches plus 3.5 inches * age in years, or in math symbols:

Y = a + b*x, where y stands for height, a is the intercept or initial value at age 0 (immediate birth), b is the rate of growth per year, and x is the age in years.

In both cases, we would read and interpret it the same way: the expected height of a child is 19 inches plus 3.5 inches times its age. For a 12-year old, this would be 19 + 3.5*12 = 19 + 42 = 61 inches or 5 feet 1 inch (assuming the made-up numbers are accurate).
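If you prefer to see the same made-up example as code rather than algebra, here is a minimal Python sketch (Python is not part of this Excel-based course; the intercept and growth rate are the lecture's illustrative values, not real growth data):

def predicted_height(age_years, intercept=19.0, growth_per_year=3.5):
    # Y = a + b*x: intercept a = 19 inches, coefficient b = 3.5 inches per year
    return intercept + growth_per_year * age_years

print(predicted_height(12))  # 19 + 3.5*12 = 61.0 inches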

Multiple. That was an example of a linear regression having one output and a single, independent variable as an input. A multiple regression equation is quite similar but has several independent input variables. It could be considered to be similar to a recipe for a cake:

Cake = cake mix + 2* eggs + 1½ * cup milk + ½ * teaspoon vanilla + 2 tablespoons* butter.

A regression equation, either linear or multiple, shows us how “much” each factor is used in or influences the outcome. The math format of the multiple regression equation is quite similar to that of the linear regression, it just includes more variables:

Y = a + b1*X1 + b2*X2 + b3*X3 + …; where a is the intercept value when all the inputs are 0, the various b’s are the coefficients that are multiplied by each variable value, and the x’s are the values of each input.

A note on how to read the math symbols in the equations. The Y is considered the output or result, and is often called the dependent variable as its value depends on the other factors. The different b’s (b1, b2, etc.) are coefficients and read b-sub-1, b-sub-2, etc. The subscripts 1, 2, etc. are used to indicate the different coefficient values that are related to each of the input variables. The X-sub-1, X-sub-2, etc., are the different variables used to influence the output, and are called independent variables. In the recipe example, Y would be the quality of the cake, a would be the cake mix (a constant as we use all of what is in the box), the other ingredients would relate to the b*X terms. The 2*eggs would relate to b1*X1, where b1 would equal 2 and X1 stands for eggs, the second input relates to the milk, etc.
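The same reading of the equation can be expressed as a short Python sketch; the intercept and coefficient values below are made up purely to show the mechanics of pairing each b with its X:

def regression_output(intercept, coefficients, inputs):
    # Y = a + b1*X1 + b2*X2 + ...: pair each coefficient b with its input X
    return intercept + sum(b * x for b, x in zip(coefficients, inputs))

# Example with made-up values: Y = 2.0 + 0.5*4 + 1.5*2 = 7.0
print(regression_output(2.0, [0.5, 1.5], [4, 2]))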

Summary

This week we changed our focus from examining differences to looking for relationships – do variables change in predictable ways. Correlation lets us see both the strength and the direction of change for two variables. Regression allows us to see how some variables “drive” or explain the change in another.

Pearson’s (for interval and ratio data variables) and Spearman’s (for rank ordered or ordinal data variables) are the two most commonly used correlation coefficients. Each looks at how a pair of variables moves in predictable patterns – either both increasing together or one increasing as the other decreases. The correlation ranges from -1.00 (moving in opposite directions) to +1.00 (moving in the same direction). These are both examples of linear correlation – how closely the variables move in a straight line (if graphed). Curvilinear correlations exist but are not covered in this class.

Regression equations show the relationship between independent (input) variables and a dependent (output) variable. Linear regression involves a pair of variables, as seen in the linear correlations. Multiple regression uses several input (independent) variables for a single output (dependent) variable.

The basic form of the regression equation is the same for both linear and multiple regression equations. The only difference is in the number of inputs used. The multiple regression equation general form is:

Y = Intercept + coefficient1 * variable1 + coefficient2 * variable2 + etc. or

Y = A + b1*X1 + b2*X2 + …; where A is the intercept value, b is a coefficient value, X is the name of a variable, and the subscripts identify different variables.

Summary

This week we changed focus from examining differences to examining relationships – how variables might move in predictable patterns. This, we found, can be done with either correlations or regression equations.

Correlations measure both the strength (the value of the correlation) and the direction (the sign) of the relationship. We looked at the Pearson Correlation (for interval and ratio level data) and the Spearman’s Rank Order Correlation (for ordinal level data). Both range from -1.00 (a perfect inverse correlation, where as one value increases the other decreases) to +1.00 (a perfect direct correlation, where both values increase together). A perfect correlation means the data points would fall on a straight line if graphed. One interesting characteristic of these correlations occurs when you square the values. This produces the Coefficient of Determination (CD), which gives us an estimate of how much variation is in common between the two variables. CD values of less than .50 are not particularly useful for practical purposes.

Regression equations provide a formula that shows us how much influence an input variable has on the output; that is, how much the output changes for a given change in an input. Regression equations are behind such commonly used information as the relationship between height and weight for children that doctors use to assess our children’s development. That would be a linear regression, Weight = constant + coefficient*height in inches, or Y = A + b*X, where Y stands for weight, A is the constant, b is the coefficient, and X is the height. A multiple regression is conceptually the same but has several inputs impacting a single output.

If you have any questions on this material, please ask your instructor.

After finishing with this lecture, please go to the first discussion for the week, and engage in a discussion with others in the class over the first couple of days before reading the second lecture.

Data

Note: Do not manipulate the data set on this page; copy it to another page to make changes.

The ongoing question that the weekly assignments will focus on is: Are males and females paid the same for equal work (under the Equal Pay Act)?

Note: to simplify the analysis, we will assume that jobs within each grade comprise equal work.

The column labels in the table mean:
ID – Employee sample number
Salary – Salary in thousands
Compa-ratio – salary divided by midpoint
Midpoint – salary grade midpoint
Age – Age in years
Performance Rating – Appraisal rating (employee evaluation score)
Service – Years of service (rounded)
Gender – 0 = male, 1 = female
Raise – percent of last raise
Degree – 0 = BS/BA, 1 = MS
Gender1 – Male or Female
Grade – job/pay grade

ID   Salary  Compa-ratio  Midpoint  Age  Perf. Rating  Service  Gender  Raise  Degree  Gender1  Grade
1    60.2    1.056        57        34   85            8        0       5.7    0       M        E
2    27.7    0.893        31        52   80            7        0       3.9    0       M        B
3    35.5    1.145        31        30   75            5        1       3.6    1       F        B
4    56.1    0.985        57        42   100           16       0       5.5    1       M        E
5    48.9    1.018        48        36   90            16       0       5.7    1       M        D
6    74.1    1.106        67        36   70            12       0       4.5    1       M        F
7    42.2    1.055        40        32   100           8        1       5.7    1       F        C
8    21.4    0.929        23        32   90            9        1       5.8    1       F        A
9    77.1    1.151        67        49   100           10       0       4.0    1       M        F
10   22.6    0.983        23        30   80            7        1       4.7    1       F        A
11   23.8    1.036        23        41   100           19       1       4.8    1       F        A
12   67.4    1.183        57        52   95            22       0       4.5    0       M        E
13   40.2    1.004        40        30   100           2        1       4.7    0       F        C
14   23.7    1.032        23        32   90            12       1       6.0    1       F        A
15   23.0    1.000        23        32   80            8        1       4.9    1       F        A
16   47.1    1.177        40        44   90            4        0       5.7    0       M        C
17   65.9    1.156        57        27   55            3        1       3.0    1       F        E
18   37.1    1.197        31        31   80            11       1       5.6    0       F        B
19   24.4    1.062        23        32   85            1        0       4.6    1       M        A
20   34.7    1.120        31        44   70            16       1       4.8    0       F        B
21   74.4    1.110        67        43   95            13       0       6.3    1       M        F
22   55.7    1.161        48        48   65            6        1       3.8    1       F        D
23   24.7    1.074        23        36   65            6        1       3.3    0       F        A
24   53.0    1.104        48        30   75            9        1       3.8    0       F        D
25   24.3    1.056        23        41   70            4        0       4.0    0       M        A
26   23.5    1.021        23        22   95            2        1       6.2    0       F        A
27   41.7    1.043        40        35   80            7        0       3.9    1       M        C
28   77.2    1.152        67        44   95            9        1       4.4    0       F        F
29   77.7    1.160        67        52   95            5        0       5.4    0       M        F
30   47.7    0.994        48        45   90            18       0       4.3    0       M        D
31   23.9    1.038        23        29   60            4        1       3.9    1       F        A
32   27.1    0.875        31        25   95            4        0       5.6    0       M        B
33   62.0    1.088        57        35   90            9        0       5.5    1       M        E
34   27.6    0.890        31        26   80            2        0       4.9    1       M        B
35   22.9    0.994        23        23   90            4        1       5.3    0       F        A
36   22.4    0.975        23        27   75            3        1       4.3    0       F        A
37   23.4    1.017        23        22   95            2        1       6.2    0       F        A
38   65.8    1.155        57        45   95            11       0       4.5    0       M        E
39   35.1    1.131        31        27   90            6        1       5.5    0       F        B
40   24.8    1.078        23        24   90            2        0       6.3    0       M        A
41   50.3    1.257        40        25   80            5        0       4.3    0       M        C
42   23.1    1.006        23        32   100           8        1       5.7    1       F        A
43   74.4    1.110        67        42   95            20       1       5.5    0       F        F
44   61.4    1.078        57        45   90            16       0       5.2    1       M        E
45   48.5    1.010        48        36   95            8        1       5.2    1       F        D
46   59.4    1.042        57        39   75            20       0       3.9    1       M        E
47   63.5    1.114        57        37   95            5        0       5.5    1       M        E
48   66.4    1.165        57        34   90            11       1       5.3    1       F        E
49   60.9    1.068        57        41   95            21       0       6.6    0       M        E
50   63.5    1.114        57        38   80            12       0       4.6    0       M        E

Week 1

Week 1: Descriptive Statistics, including Probability
While the lectures will examine our equal pay question from the compa-ratio viewpoint, our weekly assignments will focus on
examining the issue using the salary measure.
The purpose of this assignment is threefold:
1. Demonstrate mastery with Excel tools.
2. Develop descriptive statistics to help examine the question.
3. Interpret descriptive outcomes
The first issue in examining salary data to determine if we – as a company – are paying males and females equally for doing equal work is to develop some
descriptive statistics to give us something to make a preliminary decision on whether we have an issue or not.
1. Descriptive Statistics: Develop basic descriptive statistics for Salary.
The first step in analyzing data sets is to find some summary descriptive statistics for key variables.
Suggestion: Copy the gender1 and salary columns from the Data tab to columns T and U at the right.
Then use Data Sort (by gender1) to get all the male and female salary values grouped together.
a. Use the Descriptive Statistics function in the Data Analysis tab to develop the descriptive statistics summary for the overall group's salary. (Place the Excel outcome in cell K19; use K19 in the output range.)
Highlight the mean, sample standard deviation, and range.
b. Using Fx (or formula) functions, find the following salary statistics for each gender (be sure to show the formula, and not just the value, in each cell):
        Male    Female
Mean:
Sample Standard Deviation:
Range:
2. Develop a 5-number summary for the overall, male, and female SALARY variable.
For full credit, use the Excel formulas in each cell rather than simply the numerical answer.
        Overall    Males    Females
Max
3rd Q
Midpoint
1st Q
Min
3. Location Measures: comparing the Male and Female midpoints to the overall Salary data range.
For full credit, show the Excel formulas in each cell rather than simply the numerical answer.
Using the entire Salary range and the M and F midpoints found in Q2:    Male    Female
a. What would each midpoint's percentile rank be in the overall range? (Use Excel's =PERCENTRANK.EXC function.)
b. What is the normal curve z value for each midpoint within the overall range? (Use Excel's =STANDARDIZE function.)
4. Probability Measures: comparing the Male and Female midpoints to the overall Salary data range.
For full credit, show the Excel formulas in each cell rather than simply the numerical answer.
Using the entire Salary range and the M and F midpoints found in Q2, find:    Male    Female
a. The empirical probability of equaling or exceeding (=>) that value for each group. (Show the calculation formula: = value/50 or =COUNTIF(range,">="&cell)/50.)
b. The normal curve probability of => that value for each group. (Use the "=1-NORM.S.DIST" function.)
Note: be sure to use the ENTIRE salary range for part a when finding the probability.
5. Conclusions: What do you make of these results? (Be sure to include findings from this week's lectures as well.)
In comparing the overall, male, and female outcomes, what relationship(s) seem to exist between the data sets?
Your findings:
The lecture's related findings:
Overall conclusion:
What does this suggest about our equal pay for equal work question?

Week 2

Week 2: Identifying Significant Differences – part 1
To ensure full credit for each question, you need to show how you got your results. This involves either showing where the data you used is located or showing the Excel formula in each cell. Be sure to copy the appropriate data columns from the data tab to the right for your use this week.
As with our examination of compa-ratio in the lecture, the first question we have about salary between the genders involves equality – are they the same or different?
What we do, depends upon our findings.
1. As with the compa-ratio lecture example, we want to examine salary variation within the groups – are they equal? (Use cell K10 for the Excel test outcome location.)
a. What is the data input range used for this question:
b. Which is needed for this question: a one- or two-tail hypothesis statement and test?
Answer:
Why:
c. Step 1: Ho:
Ha:
Step 2: Significance (Alpha):
Step 3: Test Statistic and test:
Why this test?
Step 4: Decision rule:
Step 5: Conduct the test – place the test function in cell K10
Step 6: Conclusion and Interpretation
What is the p-value:
What is your decision: REJ or NOT reject the null?
Why?
What is your conclusion about the variance in the population for male and female salaries?
2. Once we know about variance equality, we can move on to means: Are male and female average salaries equal? (Use cell K35 for the Excel test outcome location.)
(Regardless of the outcome of the above F-test, assume equal variances for this test.)
a. What is the data input range used for this question:
b. Does this question need a one- or two-tail hypothesis statement and test?
Why:
c. Step 1: Ho:
Ha:
Step 2: Significance (Alpha):
Step 3: Test Statistic and test:
Why this test?
Step 4: Decision rule:
Step 5: Conduct the test – place the test function in cell K35
Step 6: Conclusion and Interpretation
What is the p-value:
What is your decision: REJ or NOT reject the null?
Why?
What is your conclusion about the means in the population for male and female salaries?
3. Education is often a factor in pay differences.
Do employees with an advanced degree (degree = 1) have higher average salaries? (Use cell K60 for the Excel test outcome location.)
Note: assume equal variance for the salaries in each degree group for this question.
a. What is the data input range used for this question:
b. Does this question need a one- or two-tail hypothesis statement and test?
Why:
c. Step 1: Ho:
Ha:
Step 2: Significance (Alpha):
Step 3: Test Statistic and test:
Why this test?
Step 4: Decision rule:
Step 5: Conduct the test – place the test function in cell K60
Step 6: Conclusion and Interpretation
What is the p-value:
Is the t value in the t-distribution tail indicated by the arrow in the Ha claim?
What is your decision: REJ or NOT reject the null?
Why?
What is your conclusion about the impact of education on average salaries?
4. Considering both the compa-ratio information from the lectures and your salary information, what conclusions can you reach about equal pay for equal work?
Your findings:
The lecture’s related findings:
Overall conclusion:
Why – what statistical results support this conclusion?

Week 3

Week 3: Identifying Significant Differences – part 2
Data Input Table: Salary Range Groups
Group name:    A    B    C    D    E    F
(List salaries within each grade.)
To ensure full credit for each question, you need to show how you got your results. This involves either showing where the data you used is located or showing the Excel formula in each cell. Be sure to copy the appropriate data columns from the data tab to the right for your use this week.
1. A good pay program will have different average salaries by grade. Is this the case for our company?
a. What is the data input range used for this question: (Use cell K08 for the Excel test outcome location.)
Note: assume equal variances for each grade, even though this may not be accurate, for purposes of this question.
b. Step 1: Ho:
Ha:
Step 2: Significance (Alpha):
Step 3: Test Statistic and test:
Why this test?
Step 4: Decision rule:
Step 5: Conduct the test – place the test function in cell K08
Step 6: Conclusion and Interpretation
What is the p-value:
What is your decision: REJ or NOT reject the null?
Why?
What is your conclusion about the means in the population for grade salaries?
2. If the null hypothesis in question 1 was rejected, which pairs of means differ?
(Use the values from the ANOVA table to complete the following table.)
Groups Compared    Mean Diff.    T value used    +/- Term    Low    to    High    Difference Significant?    Why?
A-B
A-C
A-D
A-E
A-F
B-C
B-D
B-E
B-F
C-D
C-E
C-F
D-E
D-F
E-F
3. One issue in salary is the grade an employee is in – higher grades have higher salaries.
This suggests that one question to ask is whether males and females are distributed in a similar pattern across the salary grades.
a. What is the data input range used for this question: (Use cell K54 for the Excel test outcome location.)
b. Step 1: Ho:
Ha:
Step 2: Significance (Alpha):
Step 3: Test Statistic and test:
Why this test?
Step 4: Decision rule:
Step 5: Conduct the test – place the test function in cell K54
Step 6: Conclusion and Interpretation
What is the p-value:
What is your decision: REJ or NOT reject the null?
Why?
What is your conclusion about how males and females are distributed across the salary grades?

Place the actual (observed) distribution in the table below:
          A    B    C    D    E    F    Sum
Male                                    0
Female                                  0
Sum:      0    0    0    0    0    0    0

Place the expected distribution in the table below:
          A    B    C    D    E    F    Sum
Male                                    0
Female                                  0
Sum:      0    0    0    0    0    0    0
4. What implications does this week's analysis have for our equal pay question?
Your findings:
The lecture’s related findings:
Overall conclusion:
Why – what statistical results support this conclusion?

Week 4

Week 4: Identifying relationships – correlations and regression
To ensure full credit for each question, you need to show how you got your results. This involves either showing where the data you used is located or showing the Excel formula in each cell. Be sure to copy the appropriate data columns from the data tab to the right for your use this week.
1. What is the correlation between and among the interval/ratio level variables with salary? (Do not include compa-ratio in this question.)
a. Create the correlation table. (Use cell K08 for the Excel test outcome location.)
i. What is the data input range used for this question:
ii. Create a correlation table in cell K08.
b. Technically, we should perform a hypothesis test on each correlation to determine if it is significant or not. However, we can be faithful to the process and save some time by finding the minimum correlation that would result in a two-tail rejection of the null. We can then compare each correlation to this value, and those exceeding it (in either a positive or negative direction) can be considered statistically significant.
i. What is the t-value we would use to cut off the two tails?    T =
ii. What is the associated correlation value related to this t-value?    r =
c. What variable(s) is (are) significantly correlated to salary?
d. Are there any surprises – correlations you thought would be significant and are not, or non-significant correlations you thought would be?
e. Does this information help answer our equal pay question? Why or why not?
2. Perform a regression analysis using salary as the dependent variable and all of the variables used in Q1. Add the two dummy variables – gender and education – to your list of independent variables. Show the result, and interpret your findings by answering the following questions.
Suggestion: Add the dummy variable values to the right of the last data columns used for Q1.
What is the multiple regression equation predicting/explaining salary using all of our possible variables except compa-ratio?
a. What is the data input range used for this question:
b. Step 1: State the appropriate hypothesis statements: (Use cell M34 for the Excel test outcome location.)
Ho:
Ha:
Step 2: Significance (Alpha):
Step 3: Test Statistic and test:
Why this test?
Step 4: Decision rule:
Step 5: Conduct the test – place the test function in cell M34
Step 6: Conclusion and Interpretation
What is the p-value:
What is your decision: REJ or NOT reject the null?
Why?
What is your conclusion about the factors influencing the population salary values?
c. If we rejected the null hypothesis, we need to test the significance of each of the variable coefficients.
Step 1: State the appropriate coefficient hypothesis statements: (Write a single pair; we will use it for each variable separately.)
Ho:
Ha:
Step 2: Significance (Alpha):
Step 3: Test Statistic and test:
Why this test?
Step 4: Decision rule:
Step 5: Conduct the test
Note: in this case the test has been performed and is part of the Regression output above.
Step 6: Conclusion and Interpretation
Place the t and p-values in the following table
Identify your decision on rejecting the null for each variable. If you reject the null, place the coefficient in the table.
    Midpoint    Age    Perf. Rat.    Seniority    Raise    Gender    Degree
t-value:
P-value:
Rejection Decision:
If Null is rejected, what is the variable’s coefficient value?
Using the intercept coefficient and only the significant variables, what is the equation?
Salary =
d. Is gender a significant factor in salary?
e. Regardless of statistical significance, who gets paid more with all other things being equal?
f. How do we know?
3. After considering the compa-ratio based results in the lectures and your salary based results, what else would you like to know before answering our question on equal pay? Why?
4. Between the lecture results and your results, what is your answer to the question of equal pay for equal work for males and females? Why?
Your findings:
The lecture’s related findings:
Overall conclusion:
5. What does regression analysis show us about analyzing complex measures?

BUS 308 Week 4 Lecture 3

Developing Relationships in Excel

Expected Outcomes

After reading this lecture, the student should be able to:

1. Calculate the t-value for a correlation coefficient
2. Calculate the minimum statistically significant correlation coefficient value
3. Set up and interpret a Linear Regression in Excel
4. Set up and interpret a Multiple Regression in Excel

Overview

Setting up correlations and regressions in Excel is fairly straightforward and follows the approaches we have seen with our previous tools. This involves setting up the data input table, selecting the tools, and inputting information into the appropriate parts of the input window.

Correlations

Question 1

Data set-up for a correlation is perhaps the simplest of any we have seen. It involves simply copying and pasting the variables from the Data tab to the Week 4 worksheet. Again, paste them to the right of the question area. The screenshot below shows the data for both the question 1 correlation and the question 2 multiple regression, pasted starting at column V. You can paste all the data at once or add the multiple regression variables later (as long as you do not sort the original data).

Specifically, for Question 1, copy the salary data to column V (for example). Then copy the Midpoint thru Service columns and paste them next to salary. Finally copy the Raise column and paste it next to the service column. Notice that our data input range for this question now includes Salary in Column V and the other interval level variables found in Columns W thru AA.

Question 1 asks for the correlation among the interval/ratio level variables with salary and says to exclude compa-ratio. For our example, we will correlate compa-ratio with the other interval/ratio level variables, with the exclusion of salary. Since compa-ratio equals the salary divided by the midpoint, it does not seem reasonable to use salary in predicting compa-ratio or compa-ratio in predicting salary.

Pearson correlations can be performed in two ways within Excel. If we have a single pair of variables we are interested in, for example compa-ratio and performance rating, we could use the fx (or Formulas) function CORREL(array1, array2) (note array means the same as range) to give us the correlation.

However, if we have several variables we want to correlate at the same time, it is more effective to use the Correlation function found in the Analysis ToolPak in the Data Analysis tab. Set up of the input data for Correlation is simple. Just ensure that all of the variables to be correlated are listed together, and only include interval or ratio level data. For our data set, this would mean we cannot include gender or degree; even though they look like numerical data the 0 and 1 are merely labels as far as correlation is concerned.

In the Correlation data input box shown below, list the entire data range, indicate if your data has labels or not (good idea to include these), select the output cell, and click OK. Here is a screen shot of the input box and some of the data.

The result will show up in K08 (in this case).
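For comparison only (not required for the assignment), the same kind of table can be produced outside Excel. Here is a minimal Python sketch using the pandas library, fed with just the first five employees from the data set for illustration:

import pandas as pd

# First five rows of the class data set, for illustration only
df = pd.DataFrame({
    "Salary":   [60.2, 27.7, 35.5, 56.1, 48.9],
    "Midpoint": [57, 31, 31, 57, 48],
    "Age":      [34, 52, 30, 42, 36],
})
# Pairwise Pearson correlations, like the Data Analysis Correlation output
print(df.corr(method="pearson"))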

Statistical Significance

Part b. Normally, we would go thru our questions about the p-value for each value. But since you are familiar with the testing logic, for this question we are going to “shortcut” the process; there is an easier way of determining which of the correlations are statistically significant. This is suggested by the question 1 part b that we skipped in Lecture 2. We noted that values smaller than the r = .20 that we tested could be assumed to all be non-significant. We could also have assumed that values larger than the tested 0.50 would all be significant. So, it seems reasonable that there is a specific value of r that exactly matches the alpha = 0.05 criterion.

If we can find this value of r, we can compare each correlation with this critical value; correlations larger (absolute values) than this are significant; while smaller correlations are not significant. Having this critical value would give us a quick decision point (much like how we use the p-value).

The issue is now, what is this critical r value?

Technical Point. If you are interested in how we obtain the formula for determining the minimum r value, the approach is shown below. If you are not interested in the math, you can safely skip this paragraph, and go to The Result paragraph below.

We know that t = r*sqrt(n-2)/sqrt(1-r^2).

Multiplying both sides by sqrt(1-r^2) gives us: t*sqrt(1-r^2) = r*sqrt(n-2)

Squaring both sides gives us: t^2*(1-r^2) = r^2*(n-2)

Multiplying each side out gives us: t^2 - t^2*r^2 = n*r^2 - 2*r^2

Adding t^2*r^2 to both sides gives us: t^2 = n*r^2 - 2*r^2 + t^2*r^2

Factoring gives us: t^2 = r^2*(n - 2 + t^2)

Dividing both sides by (n - 2 + t^2) gives us: t^2/(n - 2 + t^2) = r^2

Taking the square root gives us: r = sqrt(t^2/(n - 2 + t^2))

The Result. The formula to use in finding the minimum correlation value that is statistically significant is: r = sqrt(t^2/(t^2 + n-2)), where t is the 2-tail value for any df count.

We would find the t value associated with a two-tail p-value of 0.05 and a df value of 48 by using the T.INV.2T(alpha, df) function with alpha = 0.05 and df = n-2 = 48 (for our data set of 50 employees). Plugging these values into the function gives us a t-value of 2.0106, or 2.011 (rounded).

t = T.INV.2T(alpha, df) = T.INV.2T(0.05, 48) = 2.011

r = sqrt(t^2/(t^2 + n-2)) = sqrt(2.011^2/(2.011^2 + 50-2)) = 0.278.

Therefore, in a correlation table based on 50 pairs, any correlation greater than or equal to 0.278 would be statistically significant.
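If you want to double-check these two numbers outside Excel, here is a short Python sketch (assuming the scipy library is available; scipy's stats.t.ppf(0.975, df) matches Excel's T.INV.2T(0.05, df)):

from math import sqrt
from scipy import stats

n = 50
df = n - 2                              # 48 for our 50 employees
t_crit = stats.t.ppf(1 - 0.05 / 2, df)  # two-tail cutoff, about 2.011
r_min = sqrt(t_crit**2 / (t_crit**2 + df))
print(round(t_crit, 3), round(r_min, 3))  # prints 2.011 0.279 (the lecture's 0.278 comes from rounding earlier)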

So, what does all this mean? If we find a correlation based on 50 pairs of data (such as what our data set will produce), any correlation value that exceeds an absolute value of 0.278 would be found to be statistically significant (p-value less than 0.05) and cause us to reject the related null hypothesis of not significant.

So, when looking at a table of correlation values, we can identify the significant correlations immediately; these are any correlation above the absolute value of 0.278 (that means larger than + 0.278 (such as + .46) or less than -0.278 (such as -0.53)). Knowing how to interpret table results, we can proceed making our decisions on what is significant.

So, for part b, the first question asked is what is the T value that cuts off the two tails of the distribution with an alpha of 0.05? We calculated this above as 2.011. The second question asks for the associated correlation value for this t-value. Again, we found this above to be 0.278.

Spearman’s. Note that while the Spearman’s rank order correlation is not asked for in the assignment, you might want to use it at times. For example, some could argue that Performance Rating, since it is based mostly on human judgement, is really ordinal and requires Spearman’s. The formula for Spearman’s (which needs to be manually input into Excel) is:

Rho = 1 – 6*(Sum of d^2)/(n*(n^2 – 1)); where d is the difference in the rank score for each of the paired variables, and n is the count of paired data used. Remember that the ^ in an Excel formula means take the number to that power, so d^2 means d squared (or d times d).
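As a quick illustration of this formula, here is a Python sketch with made-up ranks for five paired observations (scipy.stats.spearmanr would give the same answer and do the ranking for you):

# Made-up ranks for five paired observations
ranks_x = [1, 2, 3, 4, 5]
ranks_y = [2, 1, 4, 3, 5]
n = len(ranks_x)
sum_d2 = sum((rx - ry) ** 2 for rx, ry in zip(ranks_x, ranks_y))  # sum of d^2
rho = 1 - 6 * sum_d2 / (n * (n ** 2 - 1))
print(rho)  # 1 - 6*4/(5*24) = 0.8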

Regression

Question 2

Linear and multiple regression are both set up in the same fashion, so we will look at only the multiple regression situation. For the data, put the dependent variable – the output, such as salary or compa-ratio – in one column and then paste the independent (input) variables in sequential columns next to it. Make sure that none of the columns contain letter characters. It is also a good idea to include the variable labels for each data column. The first screen shot above shows the data input required for this question.

The Regression function is found in the Data | Analysis block and is labeled Regression. Here is a screen shot of a complete Regression set-up for a regression equation for compa-ratio. Note that unlike the correlation input, we have two ranges to work with. The first is the output, which for this example is compa-ratio (and would be salary for the homework). The second is for the inputs, which should include all of the numeric looking variables, including the Degree and Gender variables as shown below.

Data range entry for the Y (or outcome) and the X (or input) variables is done separately, by either typing in the ranges or dragging the cursor over the data range after clicking on the up arrow at the right end of the data entry boxes. The same is done with the data entry box after clicking the circle for Output range.

There are a number of options to consider. First, of course, is the need to click the Labels box if your data ranges include labels. A second option is the Constant is Zero box. This forces the regression equation to pass thru the X = 0 and Y = 0 origin, even if this is not the best fit. Use this with caution; even though it might make sense to have Y = 0 when all the X variables are 0, using this option may not give us the equation that best fits the data.

The residuals box provides a way to see how well each of the plotted data points fits with the predicted results. This will often allow us to see outliers – cases that do not fit with the rest of the data set. Outliers are sometimes indications of data entry errors or, in the case of salary, they may reflect pay set using a different approach. One such example would be a commission salesperson being included with employees that are paid a straight salary; the basis of pay is so different that these two should not be analyzed in the same study. Other options here allow for the results to be turned into Z-scores (Standardized Residuals), plotted on a graph, or have linear plots made for the output and each separate input. Normal Probability Plots are rather complicated to discuss, and it is left to the student to explore this if desired. You are encouraged to play around with some of these options, even though they are not required for the assignment.
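For the curious (again, not required for the assignment), what the Regression tool computes is an ordinary least squares fit. Here is a Python sketch of the same idea using numpy, with the first five employees' Salary, Midpoint, and Age standing in for a full input range:

import numpy as np

# Inputs (X): Midpoint and Age; output (y): Salary - first five rows only
X = np.array([[57, 34], [31, 52], [31, 30], [57, 42], [48, 36]], dtype=float)
y = np.array([60.2, 27.7, 35.5, 56.1, 48.9])
X_design = np.column_stack([np.ones(len(y)), X])   # column of 1s gives the intercept
coefs, *_ = np.linalg.lstsq(X_design, y, rcond=None)
print(coefs)  # [intercept, b_Midpoint, b_Age]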

Here is a video on Regression: https://screencast-o-matic.com/watch/cb6jfuIk8S

Summary

Pearson Correlations are fairly easy to produce in Excel with either the Analysis ToolPak Correlation function (best used for multiple correlations or when you want the labels shown) or the Fx (or Formulas) function CORREL, best used for a single correlation outcome with no labels. Both are used for the Pearson correlation only. The Spearman’s correlation requires setting up the data in rank order and providing ranks for each variable separately and then summing these and placing them into a cell formula to obtain the correlation.

Setting up the data for a correlation is fairly simple. Just list the variables in a column and select the appropriate columns for the function being used. For the correlation table, have all the variable columns in a continuous range.

The statistical significance of either correlation is found using the t formula t = r*sqrt(n-2)/sqrt(1-r^2), where r is the correlation value and n is the number of data pairs used for the correlation. Once we have a t-value, we can use the T.DIST.2T(t, df) formula (df = n-2) to find the two-tail p-value. The lecture presents an approach for finding a minimum statistically significant value when we have a table of correlations to look at; correlations with an absolute value equal to or greater than this value would be statistically significant.

Data set-up for a regression is similar to the correlation table. Have the outcome variable at one end of the range so it can be selected alone and have all the other input variables listed in a continuous range.

The regression function (for either a linear or multiple regression) is located in the Analysis ToolPak list and is called Regression. The set-up within the data entry box is similar to the other functions we have done (t-test, ANOVA, etc.) with a data range, data output location, and a label box to fill in.

Please ask your instructor if you have any questions about this material.

When you have finished with this lecture, please respond to Discussion Thread 3 for this week with your initial response and responses to others over a couple of days.

BUS 308 – Week 4 Lecture 2

Interpreting Relationships

Expected Outcomes

After reading this lecture, the student should be able to:

1. Interpret the strength of a correlation
2. Interpret a Correlation Table
3. Interpret a Linear Regression Equation
4. Interpret a Multiple Regression Equation

Overview

As in many detective stories, we will often find that when one thing changes, we see that something else has changed as well. Moving to correlation and regression opens up new insights into our data sets, but still lets us use what we have learned about Excel tools in setting up and generating our results.

The correlation between events in the real world is mirrored in data analysis by correlation analysis. This week's focus changes from detecting and evaluating differences to looking at relationships. As students often comment, finding significant differences in gender-based measures does not explain why these differences exist. Correlation, while not always explaining why things happen, gives data detectives great clues on what to examine more closely and helps move us toward understanding why outcomes exist and what impacts them. If we see correlations in the real world, we often will spend time examining what might underlie them, finding out whether they are spurious or causal.

Regression lets us use relationships between and among our variables to predict or explain outcomes based upon inputs, factors we think might be related. In our quest to understand what impacts the compa-ratio and salary outcomes we see, we have often been frustrated due to being basically limited to examining only two variables at a time, when we felt that we needed to include many other factors. Regression, particularly multiple regression, is the tool that allows us to do this.

Linear Correlation

When two things seem to move in a somewhat predictable way, we say they are correlated. This correlation could be direct or positive, both move in the same direction, or it could be inverse or negative, where when one increases the other decreases. The Law of Supply in economics is a common example of an inverse (or negative) correlation, where the more supply we have of something, the less we typically can charge for it; the Law of Demand is an example of a direct (or positive) correlation as the more demand exists for something, the more we can charge for it. Height and weight in young children is another common example of a direct correlation, as one increases so does the other measure.

Probably the most commonly used correlation is the Pearson Correlation Coefficient, symbolized by r. It measures the strength of the association – the extent to which measures change together – between interval or ratio level measures as well as the direction of the relationship (inverse or direct). Several measures in our company data set could use the Pearson Correlation to show relationships; salary and midpoint, salary and years of service, salary and performance rating, etc. The Pearson Correlation runs from -1.0 (perfect negative or inverse correlation) thru 0 (no correlation) to +1.0 (perfect positive or direct correlation).

A perfect correlation means that if we graphed the values, they would fall exactly on a straight line, either increase from bottom left to top right (positive) or from top left to bottom right (negative). The stronger the absolute value (ignoring the sign), the stronger the correlation and the more the data points would form a straight line when plotted on a graph. The Excel Fx function Correl, and the Data Analysis function Correlation both produce Pearson Correlations.

Question 1

When we have a data set with multiple variables, we would want to see what relationships exist – a detective's sort of “who works with whom” around the result we are looking for. This question asks us to identify the statistically significant correlations between salary and the other variables.

Remember, while the assignment asks for correlations with Salary, this example will use correlations with the compa-ratio. The hypothesis testing steps are

Step 1: Ho: Correlation with compa-ratio is not significant. Ha: Correlation with compa-ratio is significant. (A two-tail test.)

Technically, we should set up the hypothesis testing steps for each of the correlations (which, we will see shortly, number 15). This is a bit tedious, so we approach the issue of statistical significance as we did with the ANOVA comparisons last week: by writing a single hypothesis test and applying it to each of the correlations.

Step 2: Alpha = 0.05

Step 3: Statistical test and statistic: Pearson's correlation (r) and the t-test used to test a correlation

The significance of the Pearson Correlation is tested with our old friend the t-test.

Step 4: Reject the null hypothesis if the p-value < 0.05.

Step 5: Conduct the test.

Below is a screen shot of a correlation table produced by the Analysis Toolpak Correlation tool.

Reading the Table. The table only shows correlations below the diagonal (which holds the 1.00 correlation of each variable with itself); values above the diagonal would simply duplicate those below it. The diagonal is a “pivot point,” so to speak. In reading the correlations, we would start with a row such as Age. The correlation of Age and Compa-ratio is 0.195 (rounded), and the correlation of Age and Midpoint is 0.567. Then we get to the diagonal. Instead of continuing horizontally, we start going vertically (down the Age column). So, the correlation of Age with Performance Rating is 0.139, with Service is 0.565, and with Raise is -0.180. All correlations, except for those of the first variable (in this case Compa-ratio), would be read this way in correlation tables.

Step 6: Conclusion and Interpretation.

To assess the statistical significance of the correlations we need to calculate a t value for each correlation, using the formula: t = r * sqrt(n-2)/sqrt(1-r^2), df = n-2;

In this formula, n equals the number of data point pairs used in the correlation, and r equals the correlation that we are testing.

The associated p-value for a t-value is found using the Fx function T.DIST.2T(t, df).

In any correlation table, the correlations are all developed with the same number of pairs of data points. (In our case, we have 50 pairs to use.) So, it seems reasonable that each of these correlations would have the same critical value that cuts off our p-value of 0.05. So, once we find a correlation value that is non-significant (that is, we fail to reject the null hypothesis), any correlations smaller than this would also be non-significant.

For this lecture, please ignore part b. We will discuss this in Lecture 3, as it is a short-cut that depends on your understanding the approach described below.

Part c asks us to use this information and identify the variables significantly correlated to salary in the homework and to compa-ratio for this example.

We need to calculate the t value and its associated p-value to determine significance. Starting with the largest correlation value with Compa-ratio, we see this is with midpoint, and is 0.50, rounded. So we have:

t = r * sqrt(n-2)/sqrt(1-r^2) = .5*sqrt(50-2)/sqrt(1-.5^2) = 4 (letting Excel do the math).

The two-tail p-value is T.DIST.2T(t,df) = T.DIST.2T(4,48) = 0.0002.
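If you want to verify these two numbers outside Excel, here is a Python sketch (assuming scipy is available; 2*stats.t.sf(t, df) matches Excel's T.DIST.2T(t, df)):

from math import sqrt
from scipy import stats

r, n = 0.50, 50
t = r * sqrt(n - 2) / sqrt(1 - r**2)   # = 4.0 for r = .50
p = 2 * stats.t.sf(t, n - 2)           # two-tail p-value, about 0.0002
print(round(t, 2), round(p, 4))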

The null hypothesis is rejected for the Compa-Ratio and Midpoint correlation.

The next largest Compa-ratio correlation is with Age at 0.20 (rounded). Using the same Excel functions, we get t= 1.41 with a p-value of 0.164 (both rounded). So, we do not reject the null hypothesis for the Compa-ratio and age correlation.

It makes sense that if a correlation of .20 is non-significant, then any smaller correlations would also be non-significant, so our testing is done.

Looking at the output table above, we can say that only Midpoint is significantly correlated to compa-ratio with a correlation of .50 (rounded).

Part d asks for any surprising results/correlations. This will depend upon your table and what you did or did not expect.

Part e asks if this information helps us answer our equal pay question. The compa-ratio correlations do not seem to be helpful as they do not shed any insight on gender based issues.

Multiple Correlation

As interesting as linear correlation is, multiple correlation is even more so. It correlates several independent (input) variables with a single dependent (output) variable. For example, it would show the shared variation (multiple R squared, or Multiple Coefficient of Determination) for compa-ratio with the other variables in the data set at the same time rather than in pairs as we did in question 1. While we can generate this value by itself, it is a bit complicated and is rarely found except in conjunction with a multiple regression equation. So, having noted that this exists, let’s move on to multiple regression.

Regression

Regression takes us the next step in the journey. We move from knowing which variables are correlated to finding out which variables can be used to actually predict outcomes or explain the influence of different variables on a result. As we might suspect, linear regression involves a single dependent (outcome) and single independent (input) variable. Linear regression uses at least interval level data for both the dependent and independent variables.

The form of a linear regression equation is:

Y = a + b*X; where Y is the output, X is the input, a is the intercept (the value of Y when X = 0 on a graph), and b is the coefficient (showing the change in Y for every 1-unit change in the value of X).

Earlier, we found that the correlation between raise and performance rating was 0.674 (rounded). While we did not make note of this in our correlation discussion, it was part of the correlation table. This correlation relates to a coefficient of determination (CD) of 0.674^2 or 0.45 (rounded). As mentioned, this is not a particularly strong correlation, and we would not expect the graph of these values to show much of a straight line. For purposes of understanding linear regression, let’s look at a graph showing performance rating as an input (an X variable) predicting raise (Y). An example of a regression equation and its graph is:

[Scatter diagram produced by Excel: Raise (Y) vs Performance Rating (X), with trend line y = 0.0512x + 0.5412 and R² = 0.4538]

This is a scatter diagram produced by Excel. The regression line, equation, and R-squared value have been added. Note that the coefficient of determination (R²) is the 45% we found earlier, and that the data points are not all that close to the regression (AKA trend) line. Note the format of the regression equation, Y = 0.5412 + 0.0512X; this is the same as saying Raise = 0.5412 + 0.0512*Performance Rating when we substitute the variable names for the algebraic letters.

Let us look at the equation. Since we know that the correlation is significant (it is larger than the 0.50 correlation we already found to be significant for compa-ratio and Midpoint), the linear regression equation is significant. The regression says that for every single-point increase in the performance rating (our X variable), the raise (the Y variable) increases, on average, by 0.0512%. If we extended the line to the y (vertical) axis, it would cross at Y = 0.5412 when X = 0; this is an example where looking at the origin point is not particularly helpful, as no one has a performance rating of 0. This graph does tend to reinforce our earlier comment that raise and performance rating, even though they form the strongest correlation, are not particularly good at predicting each other’s value. We see too much dispersion of data points around the best-fit regression line.

Most of us are probably not surprised; just as we feel compa-ratio is not determined by a single factor, we know raise is more complicated than simply the performance rating. This is where looking at multiple regression, the use of several factors, might be more insightful.

Multiple Regression

Multiple Regression is probably the most powerful tool we will look at in this course. It allows us to examine the impact of multiple inputs (AKA independent variables) on a single output or result (AKA dependent variable). It also allows us to include nominal and ordinal variables in the results when they are used as dummy coded variables.

Multiple regression has an interesting ability that we have not been able to use before. It can use nominal data variables as inputs to our outcomes, rather than using them simply as grouping labels. It does so by assigning either a 0 or 1 to the variable value depending upon whether some characteristic exists or not. For example, with degree we essentially are looking to see if a graduate degree has any impact, since everyone in the sample has at least an undergraduate degree. So, we code the existence of a graduate degree with a 1, and the “non-existence” with a 0. Similarly, with gender we are interested, essentially, in how females are being treated, so we code them 1 (existence of being female). This coding is called Dummy Coding, and involves using only a 0 or 1 in specific situations where the existence of a factor is considered important. Note that, other than some changes in the value of the coefficients, the outcomes would not differ if the codes were reversed. The significance, or non-significance, of degree or gender would remain the same regardless of the code used. We will comment on this more after we see our results.
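Here is a small Python/pandas sketch of that dummy coding (the column name GradDegree is illustrative; in the class data set the Gender and Degree columns are already coded 0/1):

import pandas as pd

df = pd.DataFrame({"Gender1": ["M", "F", "F", "M"],
                   "Degree":  ["BS", "MS", "BS", "MS"]})
df["Gender"] = (df["Gender1"] == "F").astype(int)      # 1 = female, 0 = male
df["GradDegree"] = (df["Degree"] == "MS").astype(int)  # 1 = graduate degree exists
print(df)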

Question 2

Question 2 for this week asks for a regression equation that explains the impact of various variables on our output of interest. Of course, in the homework this is salary, while in our lectures it is the compa-ratio.

Now that the data has been set up, let’s look at our hypothesis testing process for the question, first, of whether or not the regression equation is helpful in explaining what impacts compa-ratio outcomes.

Parts a and b. This part looks at the overall regression.

Step 1: Ho: The regression equation is not significant.

Ha: The regression equation is significant.

Step 2: Alpha = 0.05

Step 3: F stat and ANOVA-Regression, used to test regression significance

Step 4: Decision Rule: Reject the null hypothesis if p-value < 0.05.

Step 5: Conduct the test.

Here is a screen shot of a multiple regression analysis for the question of what factors influence compa-ratio. Note: we will split the discussion of the output into two screen shots.

Step 6: Conclusion and Interpretation.

The first table in the output provides some summary statistics. Two are important for us – the multiple correlation, shown as R, which equals 0.655, a moderate value; and, the R square or the multiple coefficient of determination showing that about 43% of the variation in compa-ratio values can be explained by the shared variation in the variables used in the analysis.

The second table shows the results of the actual statistical test of the regression. Similar to the ANOVA tables we looked at last week, it has two rows that are used to generate our F statistic (4.51) and the p-value which is labeled “Significance F.”

What is the p-value? 0.0008

Decision: Reject or not reject the null? Reject the null hypothesis.

Why? The p-value is less than (<) 0.05.

Conclusion about compa-ratio factors? The input variables are significantly related to compa-ratio outcomes; some of the compa-ratio outcomes can be explained by the selected variables. We say "some of" since the equation explains only 43% of the variance, less than half.

Part c

Once we reject the null hypothesis, our attention changes to the actual equation, the variables and their corresponding coefficients. The third table provides all the details we need to reach our conclusions.

As with the correlations in question 1, we will use the hypothesis testing process, but will write it only once and use the p-values to make decisions on each of the possible equation variables.

Step 1: Ho: The variable coefficient is not significant (b = 0).

Ha: The variable coefficient is significant (b =/= 0).

Step 2: Alpha = 0.05

Step 3: T stat and t-test for coefficients

Step 4: Decision Rule: Reject the null hypothesis if p-value < 0.05.

Step 5: Conduct the test. In this case, the test has already been performed and is part of the regression output. Here is a screen shot of the second half of the Regression output.

Step 6: Conclusions and Interpretation

As with the correlations, we will use a single statement of the 6 steps to interpret the outcomes in this part. You are asked to transfer values from the ANOVA-Reg table to a decision summary table. Here is the completed table. Note that the variable names (the X's in our regression equation) come from column L, the coefficients (the b's in our equation) are listed in column M, the t-values are shown in column O, and the p-values are in column P.

The rejection decision is the same as we have been using: looking at the p-values, we reject the null for all p-values less than 0.05. Note the unusual-looking p-value for the intercept: 2.9E-13. This format is called scientific (or exponential) notation and means 2.9 * 10^-13. The value equals 0.00000000000029; we move the decimal point 13 places to the left. Any p-value Excel shows with a two-digit negative exponent (E-xx) will be far less than our alpha value of 0.05. We do not list the intercept in our table as it is always included in the equation.
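
If you want to convince yourself about the notation, a quick Python check makes it concrete:

# 2.9E-13 is scientific notation for 2.9 * 10^-13.
p = 2.9e-13
print(format(p, ".16f"))   # 0.0000000000002900
print(p < 0.05)            # True: far below our alpha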

With our results, we reject the null hypothesis for three variables and find that they are significant factors in determining compa-ratio: Midpoint, Perf. Rating, and Gender.

The Multiple Regression equation is similar to the linear regression example given above except that it has more independent terms: Y = a + b1*X1 + b2*X2 + b3*X3 + .... The b's stand for the coefficients that are multiplied by the value of each variable (represented by the X's).

With these 3 variables and the intercept, the statistically significant regression equation is:

Compa-ratio = 0.9545 + 0.0034*midpoint - 0.0024*performance rating + 0.0562*gender.
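
To see the equation in action, here is a small sketch that plugs values into these fitted coefficients; the example inputs (a midpoint of 48 and a performance rating of 75) are invented for illustration.

# Coefficients taken from the regression output discussed above.
def predict_compa_ratio(midpoint: float, perf_rating: float, female: int) -> float:
    return 0.9545 + 0.0034 * midpoint - 0.0024 * perf_rating + 0.0562 * female

# A female employee (coded 1) in a grade with midpoint 48, rated 75:
print(round(predict_compa_ratio(48, 75, 1), 3))   # about 0.994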

So, what does this equation mean? How do we interpret it? The intercept (0.9545) is somewhat of a placeholder; it centers the line in the middle of the data points but has little other meaning for us. The three variables, however, tell us a lot. Changes in each of them impact the compa-ratio outcome independently of the others; it is as if the other factors are held constant as we examine each factor's impact. So, all other things the same, each dollar increase in midpoint increases the compa-ratio value by 0.0034. This relates to what we found last week: that compa-ratio is not independent of grade. At the same time, and possibly surprisingly, every one-point increase in an employee's performance rating causes the compa-ratio to decrease by 0.0024! Finally, the equation says that gender is an important factor. This factor alone means that the company is violating the Equal Pay Act. But what might be surprising is that for a change from male (coded 0) to female (coded 1), the compa-ratio goes up by 0.0562! Females get a higher compa-ratio (percent of midpoint) than males when all other things are equal, since the female gender adds 0.0562*1 to the compa-ratio while the male gender adds 0.0562*0 (or 0).

We did have one hint that this might be the case, when we noticed in week 1 that the female mean compa-ratio was higher than the male mean. But then some of the single-factor tests minimized this difference. This is one of multiple regression's greatest strengths: it shows us the impact of a single variable by controlling for, or holding constant, the impact of all the other variables.

Parts d, e, and f

Gender is a significant element in the compa-ratio, as females (coded 1) get a higher value when all other variables are equal. We see this from the significant positive coefficient on the variable gender. If we had switched the coding so that females were coded 0 and males 1, the sign of the gender coefficient would have flipped, showing males having part of their compa-ratio reduced due to being male.
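
A short simulation illustrates the point. The data below are synthetic, so only the pattern matters: recoding the dummy flips the coefficient's sign while leaving its p-value, and hence its significance, unchanged.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 50
female = rng.integers(0, 2, n)                  # 1 = female, 0 = male
y = 1.0 + 0.056 * female + rng.normal(0, 0.05, n)

for code in (female, 1 - female):               # original, then reversed coding
    fit = sm.OLS(y, sm.add_constant(code)).fit()
    print(fit.params[1], fit.pvalues[1])        # sign flips; p-value identical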

Question 3

This answer will depend on what other factors you would like to see.

Question 4

At this point, we have strong evidence from the compa-ratio regression equation and the t-test on average compa-ratios that females get more pay for equal work than males do. The company is violating the Equal Pay Act, in favor of women.

Question 5

What you say here reflects your understanding of regression analysis versus the power of inferential tests that examine only two variables at a time.

Summary

Correlations show the direction and strength of a relationship between two variables and are fairly straightforward to understand. The Pearson correlation and the Spearman rank-order correlation are the two most commonly used.

Excel produces the Pearson correlation as a single value or in a table showing correlations among three or more variables; each cell shows the correlation between the variable in its row and the variable in its column. The statistical significance of either the Pearson or Spearman correlation is tested with the t-statistic. Spearman's rank correlation is not produced directly by Excel.
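
Outside Excel, both correlations are a single function call. Here is a sketch using Python's scipy library; the sample values are illustrative only.

from scipy import stats

x = [1, 2, 3, 4, 5, 6]
y = [2, 1, 4, 3, 7, 8]

r, p_r = stats.pearsonr(x, y)        # Pearson correlation and its p-value
rho, p_rho = stats.spearmanr(x, y)   # Spearman rank correlation and p-value
print(r, p_r)
print(rho, p_rho)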

If we have variables that are related/correlated to an output, such as salary, we can create a regression equation. A regression equation is somewhat like a recipe for your favorite food; it tells how “much” of each ingredient (AKA variables) to add into the mix to get your result.

The regression tables have a lot of information in them, and at first glance they can appear a bit overwhelming. However, as with the ANOVA tables, only a few results are important. Start with the first table: Multiple R and R Square (the multiple coefficient of determination) give us a sense of how well the regression variables explain the outcome. Then go to the second table and look at the Significance F value at the right; this is our p-value and should be less than 0.05 for us to reject the null of no significance for the regression equation. Assuming we rejected the null, the third table gives us the details. The first two columns contain the variable names and their respective coefficients; each coefficient is multiplied by its variable and the terms are added (or subtracted, depending on the coefficient's sign) to create the regression equation. A couple of columns over is the p-value column, letting us know which of the variables are significant in the regression. If interested, we can build a confidence interval for each variable using the final columns.
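
As a recap in code form, the same checklist maps onto a fitted statsmodels result; the data here are synthetic and the attribute names are the library's, not Excel's.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
x = rng.uniform(0, 1, (50, 2))                       # two illustrative inputs
y = 1 + x @ np.array([0.5, -0.3]) + rng.normal(0, 0.1, 50)
fit = sm.OLS(y, sm.add_constant(x)).fit()

print(fit.rsquared)     # how well the variables explain the outcome
print(fit.f_pvalue)     # the "Significance F" for the overall equation
print(fit.pvalues)      # per-variable p-values: which inputs are significant
print(fit.conf_int())   # confidence intervals from the final columns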

Please ask your instructor if you have any questions about this material.

When you have finished with this lecture, please respond to Discussion Thread 2 for this week with your initial response and responses to others over a couple of days before reading the third lecture for the week.