Year 2 | Customers | Sales ($000)
January | 215 | 265
February | 259 | 388
March | 325 | 298
April | 354 | 260
May | 258 | 263
June | 199 | 402
July | 254 | 320
August | 299 | 310
September | 264 | 307
October | 198 | 302
November | 223 | 225
December | 261 | 361
Assignment
Week 2 | Roles and Responsibilities in Leadership...
Assignment Executive Summary
Due Date: Nov 11, 2018, 23:59:59
Max Points: 150
Details:
In this assignment, you will select a program, quality improvement initiative, or other project from your place of employment. Assume you are presenting this program to the board for approval of funding. Write an executive summary (850-1,000 words) to present to the board, from which they will make their decision to fund your program or project. The summary should include:
1. The purpose of the program or project.
2. The target population or audience.
3. The benefits of the program or project.
4. The cost or budget justification.
5. The basis upon which the program or project will be evaluated.
Share your written proposal with your manager, supervisor, or another colleague in a formal leadership position within a health care organization. Request their feedback using the following questions as prompts:
1. Do you believe the proposal would be approved if formally proposed?
2. What are some strengths and weaknesses of the proposal?
Submit the written proposal along with the "Executive Summary Feedback Form."
Prepare this assignment according to the APA guidelines found in the APA Style Guide, located in the Student Success Center. An abstract is not required.
This assignment uses a rubric. Please review the rubric prior to beginning the assignment to become familiar with the expectations for successful completion.
You are required to submit this assignment to LopesWrite. Please refer to the directions in the Student Success Center.
NRS451V. ExecutiveSummaryFeedbackForm_2-24-24.doc
Mod 2 Case - LR - Year 1
New Star Grocery Company (insert chart here)

Year 1 | Customers | Sales ($000) | Number | Month | Customers (x) | Sales (y) | XY | X² | Y²
January | 185 | 230 | 1 | January | | | | |
February | 241 | 301 | 2 | February | | | | |
March | 374 | 310 | 3 | March | | | | |
April | 421 | 389 | 4 | April | | | | |
May | 425 | 421 | 5 | May | | | | |
June | 259 | 300 | 6 | June | | | | |
July | 298 | 318 | 7 | July | | | | |
August | 321 | 298 | 8 | August | | | | |
September | 215 | 202 | 9 | September | | | | |
October | 282 | 265 | 10 | October | | | | |
November | 235 | 312 | 11 | November | | | | |
December | 300 | 298 | 12 | December | | | | |
Totals | | | | | 0 | 0 | 0 | 0 | 0
Mean | | | | | 0 | 0.00 | | |
 | | | | | X-bar | Y-bar | | |
b1 | #DIV/0!
b0 | #DIV/0!
Y = b0 + b1x

(The #DIV/0! cells are the blank template's formulas; they resolve once the computation columns are filled in.)
Year 2 Forecast

New Star Grocery Company
Sales

b1 | #DIV/0!
b0 | #DIV/0!
Y = b0 + b1x

Year 2 | Customers (x) | Actual Y(t) | Forecast F(t) | Variance
January | 215 | | #DIV/0! | #DIV/0!
February | 259 | | #DIV/0! | #DIV/0!
March | 325 | | #DIV/0! | #DIV/0!
April | 354 | | #DIV/0! | #DIV/0!
May | 258 | | #DIV/0! | #DIV/0!
June | 199 | | #DIV/0! | #DIV/0!
July | 254 | | #DIV/0! | #DIV/0!
August | 299 | | #DIV/0! | #DIV/0!
September | 264 | | #DIV/0! | #DIV/0!
October | 198 | | #DIV/0! | #DIV/0!
November | 223 | | #DIV/0! | #DIV/0!
December | 261 | | #DIV/0! | #DIV/0!
Totals | 259.08 | #DIV/0! | #DIV/0! | #DIV/0!
Assignment Overview
Scenario: You are a consultant who works for the Diligent Consulting Group. Your client, the New Star Grocery Company, believes that there may be a relationship between the number of customers who visit the store during any given month (“customer traffic”) and the total sales for that same month. In other words, the greater the customer traffic, the greater the sales for that month. To test this theory, the client has collected customer traffic data over the past 12-month period, and monthly sales for that same 12-month period (Year 1).
Case Assignment
Using the customer traffic data and matching sales for each month of Year 1, create a Linear Regression (LR) equation in Excel, assuming all assumptions for linear regression have been met. Use the Excel template provided (see the "Module 2 Case – LR – Year 1" spreadsheet tab), and be sure to include your LR chart (with a trend line) where noted. Also, be sure that you include the LR formula within your chart.
After you have developed the LR equation above, you will use the LR equation to forecast sales for Year 2 (see the second Excel spreadsheet tab labeled "Year 2 Forecast"). You will note that the client has collected customer traffic data for Year 2. Your role is to complete the sales forecast using the LR equation from Step 1 above.
After you have forecast Year 2 sales, your Professor will provide you with 12 months of actual sales data for Year 2. You will compare the sales forecast with the actual sales for Year 2, noting the monthly and average (total) variances from forecast to actual sales.
To complete the Module 2 Case, write a report for the client that describes the process you used above and that analyzes the results for Year 2. (What is the difference between forecast and actual sales for Year 2, by month and for the year as a whole?) Make a recommendation concerning how the LR equation might be used by the New Star Grocery Company to forecast future sales.
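For readers who want to sanity-check their spreadsheet, here is a minimal Python sketch of the same least-squares computation the Excel template performs. The assignment itself must be done in Excel; Python is used here only as an independent check, and the data are copied from the template tabs above.

import numpy as np

# Year 1 data from the "Mod 2 Case - LR - Year 1" tab
customers = np.array([185, 241, 374, 421, 425, 259, 298, 321, 215, 282, 235, 300])
sales = np.array([230, 301, 310, 389, 421, 300, 318, 298, 202, 265, 312, 298])

# Least-squares estimates: b1 = S_xy / S_xx, b0 = y-bar - b1 * x-bar
x_bar, y_bar = customers.mean(), sales.mean()
b1 = ((customers - x_bar) * (sales - y_bar)).sum() / ((customers - x_bar) ** 2).sum()
b0 = y_bar - b1 * x_bar
print(f"Y = {b0:.2f} + {b1:.4f}x")

# Forecast Year 2 sales from the Year 2 customer traffic
customers_y2 = np.array([215, 259, 325, 354, 258, 199, 254, 299, 264, 198, 223, 261])
print((b0 + b1 * customers_y2).round(1))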
Data: Download the Module 2 Case template here: Data chart for BUS520 Case 2. Use this template to complete your Excel analysis.
Assignment Expectations
Excel Analysis
Conduct accurate and complete Linear Regression analysis in Excel. Use Excel support to find information on linear regression in Excel: https://support.office.com/en-us/Search/results?query=linear+regression
Written Report
· Length requirements: 4–5 pages minimum (not including Cover and Reference pages). NOTE: You must submit 4–5 pages of written discussion and analysis. This means that you should avoid use of tables and charts as “space fillers.”
· Provide a brief introduction to/background of the problem.
· Your written (in Word) analysis should discuss the logic and rationale used to develop the LR equation and chart.
· Provide complete, meaningful, and accurate recommendation(s) concerning how the New Star Grocery Company might use the LR equation to forecast future sales. (For example, how reliable is the LR equation in predicting future sales?) What other recommendations do you have for the client?
· Write clearly, simply, and logically. Use double-spaced, black Verdana or Times Roman font in 12 pt. type size.
· Have an introduction at the beginning to introduce the topics and use keywords as headings to organize the report.
· Avoid redundancy and general statements such as "All organizations exist to make a profit." Make every sentence count.
· Paraphrase the facts using your own words and ideas, employing quotes sparingly. Quotes, if absolutely necessary, should rarely exceed five words.
· Upload both your written report and Excel file to the case 2 Dropbox.
Here are some guidelines on how to build critical thinking skills.
· Emerald Group Publishing. (n.d.). Developing Critical Thinking. Retrieved from http://www.emeraldinsight.com/learning/study_skills/skills/critical_thinking.htm
Background Readings
Forecasting Methods
There are two main types of forecasting methods: qualitative and quantitative. The former relies on judgment and the analysis of qualitative factors, using techniques such as scenario building and scenario analysis. The latter is based on numerical analysis and includes methods such as linear regression, time series analysis, and data-mining algorithms like CHAID and CART, which are especially useful in the growing world of artificial intelligence and machine learning in business. This module looks at linear regression and at time series analysis using exponential smoothing.
Linear Growth
When using any mathematical model, we have to consider which inputs are reasonable to use. Whenever we extrapolate, or make predictions into the future, we are assuming the model will continue to be valid. There are different types of mathematical models: one is the linear growth model (also called the algebraic growth model), and another is the exponential growth model (also called the geometric growth model). Constant change per time period is the defining characteristic of linear growth: plotting the values produces a straight line, the shape of linear growth.
If a quantity starts at size P0 and grows by d every time period, then the quantity after n time periods can be determined using either of these relations:
Recursive form:
Pn = Pn-1 + d
Explicit form:
Pn = P0 + d·n
In this equation, d represents the common difference – the amount that the population changes each time n increases by 1. Calculating values using the explicit form and plotting them with the original data shows how well our model fits the data. We can now use our model to make predictions about the future, assuming that the previous trend continues unchanged.
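As a quick illustration, the two forms can be checked against each other in a few lines of Python. The starting value P0 = 100 and common difference d = 15 below are made-up numbers, chosen only for the example.

# Linear growth: recursive vs. explicit form (illustrative values)
P0, d, n = 100, 15, 10

p = P0
for _ in range(n):       # recursive form: Pn = P(n-1) + d
    p += d

p_explicit = P0 + d * n  # explicit form: Pn = P0 + d*n
print(p, p_explicit)     # both print 250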
Exponential Growth
If a quantity starts at size P0 and grows by R% (written as a decimal, r) every time period, then the quantity after n time periods can be determined using either of these relations:
Recursive form:
Pn = (1+r) Pn-1
Explicit form:
Pn = (1+r)^n · P0, or equivalently, Pn = P0(1+r)^n
We call r the growth rate and the term (1+r) is called the growth multiplier, or common ratio.
In exponential growth, the population grows proportional to the size of the population, so as the population gets larger, the same percent growth will yield a larger numeric growth.
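The same check works for exponential growth; again, the values P0 = 100 and r = 0.05 are made-up for illustration.

# Exponential growth: recursive vs. explicit form (illustrative values)
P0, r, n = 100, 0.05, 10

p = P0
for _ in range(n):              # recursive form: Pn = (1 + r) * P(n-1)
    p *= 1 + r

p_explicit = P0 * (1 + r) ** n  # explicit form: Pn = P0 * (1+r)^n
print(round(p, 2), round(p_explicit, 2))  # both print 162.89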
Linear regression is a very powerful statistical technique. Many people have some familiarity with regression just from reading the news, where graphs with straight lines are overlaid on scatterplots. Linear models can be used for prediction or to evaluate whether there is a linear relationship between two numerical variables.
Figure 1 shows two variables whose relationship can be modeled perfectly with a straight line. The equation for the line is
y = 5 + 57.49x
Imagine what a perfect linear relationship would mean: you would know the exact value of y just by knowing the value of x. This is unrealistic in almost any natural process. For example, if we took family income x, this value would provide some useful information about how much financial support y a college may offer a prospective student. However, there would still be variability in financial support, even when comparing students whose families have similar financial backgrounds.
Linear regression assumes that the relationship between two variables, x and y, can be modeled by a straight line:
y = β0 + β1x
where β0 and β1 represent two model parameters (β is the Greek letter beta). These parameters are estimated using data, and we write their point estimates as b0 and b1. When we use x to predict y, we usually call x the explanatory or predictor or independent variable, and we call y the response or dependent variable.
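As a concrete illustration of the point estimates b0 and b1, here is a short Python sketch applying scipy.stats.linregress to the Year 1 grocery data from the case above; any statistics package would perform the same computation.

import numpy as np
from scipy import stats

x = np.array([185, 241, 374, 421, 425, 259, 298, 321, 215, 282, 235, 300])  # customers
y = np.array([230, 301, 310, 389, 421, 300, 318, 298, 202, 265, 312, 298])  # sales ($000)

fit = stats.linregress(x, y)  # least-squares point estimates
print(f"b0 = {fit.intercept:.2f}, b1 = {fit.slope:.4f}, r = {fit.rvalue:.3f}")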
Simple Linear Regression vs. Multiple Regression
In simple linear regression, a criterion (dependent) variable is predicted from one predictor variable. In multiple regression, the criterion is predicted by two or more independent (predictor) variables. Take the SAT case study as an example: you might want to predict a student's university grade point average on the basis of their high-school GPA (HSGPA) and their total SAT score (verbal + math). The basic idea is to find a linear combination of HSGPA and SAT that best predicts university GPA (UGPA). That is, the problem is to find the values of b1 and b2 in the equation shown below that give the best predictions of UGPA. As in the case of simple linear regression, we define the best predictions as the predictions that minimize the squared errors of prediction.
UGPA' = b1HSGPA + b2SAT + A
where UGPA' is the predicted value of University GPA and A is a constant. For these data, the best prediction equation is shown below:
UGPA' = 0.541 x HSGPA + 0.008 x SAT + 0.540
In other words, to compute the prediction of a student's University GPA, you add up (a) their High-School GPA multiplied by 0.541, (b) their SAT multiplied by 0.008, and (c) 0.540. Table 1 shows the data and predictions for the first five students in the dataset.
Table 1. Data and Predictions.
HSGPA | SAT | UGPA'
3.45 | 1232 | 3.38
2.78 | 1070 | 2.89
2.52 | 1086 | 2.76
3.67 | 1287 | 3.55
3.24 | 1130 | 3.19
The values of b (b1 and b2) are sometimes called "regression coefficients" and sometimes called "regression weights." These two terms are synonymous. The multiple correlation (R) is equal to the correlation between the predicted scores and the actual scores. In this example, it is the correlation between UGPA' and UGPA, which turns out to be 0.79. That is, R = 0.79. Note that R will never be negative since if there are negative correlations between the predictor variables and the criterion, the regression weights will be negative so that the correlation between the predicted and actual scores will be positive.
Interpretation of Regression Coefficients
A regression coefficient in multiple regression is the slope of the linear relationship between the criterion variable and the part of a predictor variable that is independent of all other predictor variables. In this example, the regression coefficient for HSGPA can be computed by first predicting HSGPA from SAT and saving the errors of prediction (the differences between HSGPA and HSGPA'). These errors of prediction are called "residuals" since they are what is left over in HSGPA after the predictions from SAT are subtracted, and represent the part of HSGPA that is independent of SAT. These residuals are referred to as HSGPA.SAT, which means they are the residuals in HSGPA after having been predicted by SAT. The correlation between HSGPA.SAT and SAT is necessarily 0.
The final step in computing the regression coefficient is to find the slope of the relationship between these residuals and UGPA. This slope is the regression coefficient for HSGPA. The following equation is used to predict HSGPA from SAT:
HSGPA' = -1.314 + 0.0036 x SAT
The residuals are then computed as:
HSGPA - HSGPA'
The linear regression equation for the prediction of UGPA by the residuals is
UGPA' = 0.541 x HSGPA.SAT + 3.173
Notice that the slope (0.541) is the same value given previously for b1 in the multiple regression equation.
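This residualization logic can be checked numerically. Below is a minimal Python sketch using synthetic data (the actual 105-student SAT dataset is not reproduced in this reading, so the coefficients will differ from 0.541): the slope from regressing UGPA on the HSGPA residuals equals the HSGPA coefficient from the full multiple regression.

import numpy as np

rng = np.random.default_rng(0)
n = 105
sat = rng.normal(1100, 100, n)                        # predictor 1
hsgpa = 0.003 * sat + rng.normal(0, 0.3, n)           # predictor 2, correlated with SAT
ugpa = 0.5 * hsgpa + 0.001 * sat + rng.normal(0, 0.2, n)

# Full multiple regression: UGPA on [1, HSGPA, SAT]
X = np.column_stack([np.ones(n), hsgpa, sat])
b = np.linalg.lstsq(X, ugpa, rcond=None)[0]           # b[1] is the HSGPA coefficient

# Residualize HSGPA on SAT to get HSGPA.SAT, the part of HSGPA independent of SAT
Z = np.column_stack([np.ones(n), sat])
hsgpa_sat = hsgpa - Z @ np.linalg.lstsq(Z, hsgpa, rcond=None)[0]

# Simple regression of UGPA on the residuals recovers the same slope
slope = (hsgpa_sat @ ugpa) / (hsgpa_sat @ hsgpa_sat)
print(round(b[1], 6), round(slope, 6))                # identical values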
This means that the regression coefficient for HSGPA is the slope of the relationship between the criterion variable and the part of HSGPA that is independent of (uncorrelated with) the other predictor variables. It represents the change in the criterion variable associated with a change of one in the predictor variable when all other predictor variables are held constant. Since the regression coefficient for HSGPA is 0.54, this means that, holding SAT constant, a change of one in HSGPA is associated with a change of 0.54 in UGPA'. If two students had the same SAT and differed in HSGPA by 2, then you would predict they would differ in UGPA by (2)(0.54) = 1.08. Similarly, if they differed by 0.5, then you would predict they would differ by (0.50)(0.54) = 0.27.
The slope of the relationship between the part of a predictor variable independent of other predictor variables and the criterion is its partial slope. Thus the regression coefficient of 0.541 for HSGPA and the regression coefficient of 0.008 for SAT are partial slopes. Each partial slope represents the relationship between the predictor variable and the criterion holding constant all of the other predictor variables.
It is difficult to compare the coefficients for different variables directly because they are measured on different scales. A difference of 1 in HSGPA is a fairly large difference, whereas a difference of 1 on the SAT is negligible. Therefore, it can be advantageous to transform the variables so that they are on the same scale. The most straightforward approach is to standardize the variables so that they each have a standard deviation of 1. A regression weight for standardized variables is called a "beta weight" and is designated by the Greek letter β. For these data, the beta weights are 0.625 and 0.198. These values represent the change in the criterion (in standard deviations) associated with a change of one standard deviation on a predictor [holding constant the value(s) on the other predictor(s)]. Clearly, a change of one standard deviation on HSGPA is associated with a larger difference than a change of one standard deviation of SAT. In practical terms, this means that if you know a student's HSGPA, knowing the student's SAT does not aid the prediction of UGPA much. However, if you do not know the student's HSGPA, his or her SAT can aid in the prediction since the β weight in the simple regression predicting UGPA from SAT is 0.68. For comparison purposes, the β weight in the simple regression predicting UGPA from HSGPA is 0.78. As is typically the case, the partial slopes are smaller than the slopes in simple regression.
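The relationship between a raw weight and its beta weight is simply beta = b × (s_predictor / s_criterion). A tiny sketch follows; the standard deviations below are hypothetical, chosen only so the result lands near the quoted beta of 0.625, since the reading does not report the actual standard deviations.

b1 = 0.541                      # raw regression weight for HSGPA (from the text)
s_hsgpa, s_ugpa = 0.52, 0.45    # hypothetical standard deviations (not from the text)

beta1 = b1 * (s_hsgpa / s_ugpa)
print(round(beta1, 3))          # ~0.625 with these illustrative values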
Partitioning the Sums of Squares
Just as in the case of simple linear regression, the sum of squares for the criterion (UGPA in this example) can be partitioned into the sum of squares predicted and the sum of squares error. That is,
SSY = SSY' + SSE
which for these data:
20.798 = 12.961 + 7.837
The sum of squares predicted is also referred to as the "sum of squares explained." Again, as in the case of simple regression,
Proportion Explained = SSY'/SSY
In simple regression, the proportion of variance explained is equal to r2; in multiple regression, the proportion of variance explained is equal to R2. In multiple regression, it is often informative to partition the sum of squares explained among the predictor variables. For example, the sum of squares explained for these data is 12.96. How is this value divided between HSGPA and SAT?
One approach that, as will be seen, does not work is to predict UGPA in separate simple regressions for HSGPA and SAT. As can be seen in Table 2, the sum of squares in these separate simple regressions is 12.64 for HSGPA and 9.75 for SAT. If we add these two sums of squares we get 22.39, a value much larger than the sum of squares explained of 12.96 in the multiple regression analysis. The explanation is that HSGPA and SAT are highly correlated (r = .78) and therefore much of the variance in UGPA is confounded between HSGPA and SAT. That is, it could be explained by either HSGPA or SAT and is counted twice if the sums of squares for HSGPA and SAT are simply added.
Table 2. Sums of Squares for Various Predictors
Predictors | Sum of Squares
HSGPA | 12.64
SAT | 9.75
HSGPA and SAT | 12.96
Table 3 shows the partitioning of the sum of squares into the sum of squares uniquely explained by each predictor variable, the sum of squares confounded between the two predictor variables, and the sum of squares error. It is clear from this table that most of the sum of squares explained is confounded between HSGPA and SAT. Note that the sum of squares uniquely explained by a predictor variable is analogous to the partial slope of the variable in that both involve the relationship between the variable and the criterion with the other variable(s) controlled.
Table 3. Partitioning the Sum of Squares
Source | Sum of Squares | Proportion
HSGPA (unique) | 3.21 | 0.15
SAT (unique) | 0.32 | 0.02
HSGPA and SAT (confounded) | 9.43 | 0.45
Error | 7.84 | 0.38
Total | 20.80 | 1.00
The sum of squares uniquely attributable to a variable is computed by comparing two regression models: the complete model and a reduced model. The complete model is the multiple regression with all the predictor variables included (HSGPA and SAT in this example). A reduced model is a model that leaves out one of the predictor variables. The sum of squares uniquely attributable to a variable is the sum of squares for the complete model minus the sum of squares for the reduced model in which the variable of interest is omitted. As shown in Table 2, the sum of squares for the complete model (HSGPA and SAT) is 12.96. The sum of squares for the reduced model in which HSGPA is omitted is simply the sum of squares explained using SAT as the predictor variable and is 9.75. Therefore, the sum of squares uniquely attributable to HSGPA is 12.96 - 9.75 = 3.21. Similarly, the sum of squares uniquely attributable to SAT is 12.96 - 12.64 = 0.32. The confounded sum of squares in this example is computed by subtracting the sum of squares uniquely attributable to the predictor variables from the sum of squares for the complete model: 12.96 - 3.21 - 0.32 = 9.43. The computation of the confounded sums of squares in analyses with more than two predictors is more complex and beyond the scope of this text.
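The arithmetic in this paragraph is simple enough to verify directly; here is a short Python sketch using the sums of squares reported in Table 2.

# Unique / confounded partitioning from the Table 2 sums of squares
ss_complete = 12.96   # HSGPA and SAT together
ss_hsgpa = 12.64      # HSGPA alone
ss_sat = 9.75         # SAT alone
ss_total = 20.80

unique_hsgpa = ss_complete - ss_sat                     # 3.21
unique_sat = ss_complete - ss_hsgpa                     # 0.32
confounded = ss_complete - unique_hsgpa - unique_sat    # 9.43
error = ss_total - ss_complete                          # 7.84
proportion_explained = ss_complete / ss_total           # ~0.62, the R-squared

print(round(unique_hsgpa, 2), round(unique_sat, 2),
      round(confounded, 2), round(error, 2), round(proportion_explained, 2))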
Since the variance is simply the sum of squares divided by the degrees of freedom, it is possible to refer to the proportion of variance explained in the same way as the proportion of the sum of squares explained. It is slightly more common to refer to the proportion of variance explained than the proportion of the sum of squares explained. When variables are highly correlated, the variance explained uniquely by the individual variables can be small even though the variance explained by the variables taken together is large. For example, although the proportions of variance explained uniquely by HSGPA and SAT are only 0.15 and 0.02 respectively, together these two variables explain 0.62 of the variance. Therefore, you could easily underestimate the importance of variables if only the variance explained uniquely by each variable is considered. Consequently, it is often useful to consider a set of related variables. For example, assume you were interested in predicting job performance from a large number of variables, some of which reflect cognitive ability. It is likely that these measures of cognitive ability would be highly correlated among themselves and therefore no one of them would explain much of the variance independently of the other variables. However, you could avoid this problem by determining the proportion of variance explained by all of the cognitive ability variables considered together as a set. The variance explained by the set would include all the variance explained uniquely by the variables in the set as well as all the variance confounded among variables in the set. It would not include variance confounded with variables outside the set. In short, you would be computing the variance explained by the set of variables that is independent of the variables not in the set.
Inferential Statistics
We begin by presenting the formula for testing the significance of the contribution of a set of variables. We will then show how special cases of this formula can be used to test the significance of R2 as well as to test the significance of the unique contribution of individual variables.
The first step is to compute two regression analyses: (1) an analysis in which all the predictor variables are included and (2) an analysis in which the variables in the set of variables being tested are excluded. The former regression model is called the "complete model" and the latter is called the "reduced model." The basic idea is that if the reduced model explains much less than the complete model, then the set of variables excluded from the reduced model is important.
The formula for testing the contribution of a group of variables is:

F = ((SSQC - SSQR) / (pC - pR)) / ((SSQT - SSQC) / (N - pC - 1))

where:
SSQC is the sum of squares for the complete model,
SSQR is the sum of squares for the reduced model,
pC is the number of predictors in the complete model,
pR is the number of predictors in the reduced model,
SSQT is the sum of squares total (the sum of squared deviations of the criterion variable from its mean), and
N is the total number of observations (here N = 105, since the denominator degrees of freedom N - pC - 1 = 102).
The degrees of freedom for the numerator is pC - pR, and the degrees of freedom for the denominator is N - pC - 1. If the F is significant, then it can be concluded that the variables excluded in the reduced set contribute to the prediction of the criterion variable independently of the other variables. This formula can be used to test the significance of R2 by defining the reduced model as having no predictor variables. In this application, SSQR = 0 and pR = 0. The formula then simplifies to:

F = (SSQC / pC) / ((SSQT - SSQC) / (N - pC - 1))
which for this example becomes:

F = (12.96 / 2) / ((20.80 - 12.96) / 102) = 6.48 / 0.0769 ≈ 84.3

The degrees of freedom are 2 and 102. The F distribution calculator shows that p < 0.001.
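SciPy's F survival function can stand in for the "F distribution calculator" referenced in the text. A short sketch using the values above (the value N = 105 is inferred from the stated denominator degrees of freedom of 102):

from scipy import stats

ssq_c, ssq_t = 12.96, 20.80   # complete-model and total sums of squares
p_c, n = 2, 105               # predictors in complete model; n - p_c - 1 = 102

f = (ssq_c / p_c) / ((ssq_t - ssq_c) / (n - p_c - 1))
p_value = stats.f.sf(f, p_c, n - p_c - 1)
print(f"F({p_c}, {n - p_c - 1}) = {f:.2f}, p = {p_value:.2g}")   # p < 0.001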
The reduced model used to test the variance explained uniquely by a single predictor consists of all the variables except the predictor variable in question. For example, the reduced model for a test of the unique contribution of HSGPA contains only the variable SAT. Therefore, the sum of squares for the reduced model is the sum of squares when UGPA is predicted by SAT. This sum of squares is 9.75. The calculations for F are shown below:

F = ((12.96 - 9.75) / 1) / ((20.80 - 12.96) / 102) = 3.21 / 0.0769 ≈ 41.8

The degrees of freedom are 1 and 102. The F distribution calculator shows that p < 0.001.
Similarly, the reduced model in the test for the unique contribution of SAT consists of HSGPA alone:

F = ((12.96 - 12.64) / 1) / ((20.80 - 12.96) / 102) = 0.32 / 0.0769 ≈ 4.2

The degrees of freedom are 1 and 102. The F distribution calculator shows that p = 0.0432.
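Both unique-contribution tests follow the same pattern, so they can be wrapped in one small helper; small differences from the quoted p-values (such as 0.0432) come from the sums of squares being rounded to two decimals.

from scipy import stats

ssq_c, ssq_t, p_c, n = 12.96, 20.80, 2, 105

def unique_f(ssq_reduced, p_reduced):
    """F test comparing the complete model against a reduced model."""
    df_num = p_c - p_reduced
    df_den = n - p_c - 1
    f = ((ssq_c - ssq_reduced) / df_num) / ((ssq_t - ssq_c) / df_den)
    return f, stats.f.sf(f, df_num, df_den)

print("HSGPA unique: F = %.2f, p = %.4g" % unique_f(9.75, 1))    # reduced model: SAT only
print("SAT unique:   F = %.2f, p = %.4g" % unique_f(12.64, 1))   # reduced model: HSGPA only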
The significance test of the variance explained uniquely by a variable is identical to a significance test of the regression coefficient for that variable. A regression coefficient and the variance explained uniquely by a variable both reflect the relationship between a variable and the criterion independent of the other variables. If the variance explained uniquely by a variable is not zero, then the regression coefficient cannot be zero. Clearly, a variable with a regression coefficient of zero would explain no variance.
Other inferential statistics associated with multiple regression are beyond the scope of this text. Two of particular importance are (1) confidence intervals on regression slopes and (2) confidence intervals on predictions for specific observations. These inferential statistics can be computed by standard statistical analysis packages such as R, SPSS, STATA, SAS, and JMP.
