Lab 4
Enter Your Name
2019-03-28
Abstract
This study investigated associations between working memory (measured by complex memory tasks) and both reading and mathematics abilities, as well as the possible mediating factors of fluid intelligence, verbal abilities, short-term memory (STM), and phonological awareness, in a sample of 6- to 11-year-olds with reading disabilities. As a whole, the sample was characterized by deficits in complex memory and visuospatial STM and by low IQ scores; language, phonological STM, and phonological awareness abilities fell in the low average range. Severity of reading difficulties within the sample was significantly associated with complex memory, language, and phonological awareness abilities, whereas poor mathematics abilities were linked with complex memory, phonological STM, and phonological awareness scores. These findings suggest that working memory skills indexed by complex memory tasks represent an important constraint on the acquisition of skill and knowledge in reading and mathematics. Possible mechanisms for the contribution of working memory to learning, and the implications for educational practice, are considered.
Citation: Gathercole, S. E., Alloway, T. P., Willis, C., & Adams, A. M. (2006). Working memory in children with reading disabilities. Journal of Experimental Child Psychology, 93(3), 265-281.
Dataset
- Dependent variable (Y): Reading - reading skills of the 6- to 11-year-olds
- Independent variables (X):
  - Verbal - a measure of verbal ability (spelling, phonetics, etc.)
  - Math - a measure of math ability
  - Work_mem - working memory score
library(foreign)   # for read.spss()

data <- read.spss("Lab4.sav")
lab4 <- data.frame(reading  = data$reading,
                   verbal   = data$verbal,
                   math     = data$math,
                   work_mem = data$work_mem)
summary(lab4)
    reading          verbal           math            work_mem
 Min.   :0.510   Min.   : 0.000   Min.   :  0.00   Min.   :-21.00
 1st Qu.:0.820   1st Qu.: 0.000   1st Qu.: 21.00   1st Qu.: 11.00
 Median :1.300   Median : 5.000   Median : 37.00   Median : 18.00
 Mean   :1.657   Mean   : 4.828   Mean   : 48.34   Mean   : 24.34
 3rd Qu.:2.200   3rd Qu.: 8.000   3rd Qu.: 64.00   3rd Qu.: 31.00
 Max.   :9.650   Max.   :17.000   Max.   :340.00   Max.   :136.00
Data screening
Accuracy
Assume the data is accurate with no missing values. You will want to screen the dataset using all the predictor variables to predict the outcome in a simultaneous multiple regression (all the variables at once). This analysis will let you screen for outliers and assumptions across all subsequent analyses/steps.
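A quick check of that assumption on the lab4 data frame built above (a minimal sketch, not part of the original output):

colSums(is.na(lab4))   # all zeros would confirm there are no missing values
sapply(lab4, range)    # scan each variable's minimum and maximum for impossible values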
Outliers
a. Leverage
   i. What is your leverage cut off score?
   ii. How many leverage outliers did you have?
library(car)   # for outlierTest(), qqPlot(), leveragePlots(), influencePlot()

fit <- lm(work_mem ~ ., data = lab4)
opar <- par(mfrow = c(2, 2), oma = c(0, 0, 1.1, 0))
plot(fit, las = 1)

outlierTest(fit)   # Bonferroni p-value for most extreme observations
      rstudent unadjusted p-value Bonferonni p
149  -6.453310         1.2705e-10   3.8370e-07
55   -6.450018         1.2980e-10   3.9201e-07
2105 -6.105427         1.1564e-09   3.4923e-06
2033  5.894520         4.1738e-09   1.2605e-05
2828 -5.413179         6.6772e-08   2.0165e-04
2044 -5.290284         1.3086e-07   3.9521e-04
2267  5.177501         2.3964e-07   7.2373e-04
354   5.084796         3.9051e-07   1.1794e-03
1664  5.011719         5.7059e-07   1.7232e-03
2270  4.969976         7.0700e-07   2.1351e-03
qqPlot(fit, main = "QQ Plot")   # qq plot for studentized residuals
[1]  55 149
leveragePlots(fit)   # leverage plots

From these plots, we can identify observations 149, 272, and 2105 as potentially problematic for our model. We can look at these observations to see their values on each variable.
lab4[c(149, 272, 2105), ]
     reading verbal math work_mem
149     1.54      0  178       -7
272     1.27      5  304       64
2105    2.13      8  184        0
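The leverage cutoff asked for in (a.i) is not stated explicitly above; a minimal sketch of how it could be computed from the screening model, assuming the conventional 2(k + 1)/n rule (some texts use 3(k + 1)/n instead):

lev <- hatvalues(fit)                    # leverage (hat) values from the screening model
k <- length(fit$coefficients) - 1        # number of predictors
lev_cutoff <- 2 * (k + 1) / nrow(lab4)   # assumed cutoff rule: 2(k + 1)/n
sum(lev > lev_cutoff)                    # count of leverage outliers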
b. Cook's
   i. What is your Cook's cut off score?
   ii. How many Cook's outliers did you have?
cutoff <- 4 / (nrow(lab4) - length(fit$coefficients) - 2)
plot(fit, which = 4, cook.levels = cutoff)

The Cook's distance plot flags the same observations (149, 272, and 2105) that the leverage plots identified as possible outliers in the dataset.
Influence Plot

influencePlot(fit, main = "Influence Plot",
              sub = "Circle size is proportional to Cook's Distance")

        StudRes         Hat       CookD
55   -6.4500182 0.003062291 0.031523370
149  -6.4533096 0.004685067 0.048355585
272  -4.7965540 0.015766025 0.091467146
273  -2.8407833 0.020186211 0.041467765
2927  0.7050794 0.017761524 0.002247769
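The Cook's cutoff 4/(n - k - 2) is computed above, but the number of observations exceeding it is not reported; a short sketch using the same fitted model and cutoff:

cooks <- cooks.distance(fit)   # Cook's distances from the screening model
sum(cooks > cutoff)            # number of Cook's outliers at the 4/(n - k - 2) cutoff
which(cooks > cutoff)          # their row indices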
c. Mahalanobis
   i. What is your Mahalanobis df?
   ii. What is your Mahalanobis cut off score?
   iii. How many outliers did you have for Mahalanobis?
m_dist <- mahalanobis(lab4[, 1:3], colMeans(lab4[, 1:3]), cov(lab4[, 1:3]))
outlier_maha <- rep(2, length(m_dist))
outlier_maha[m_dist > 40] <- 1
plot(m_dist, col = c("red", "blue")[outlier_maha])

In the above plot, we can see 5 outliers using a Mahalanobis cutoff score of 40; the distance was computed on 3 variables, so the Mahalanobis df is 3. Their indices are:
which(outlier_maha %in% c(1))
[1]  272  273 1714 1889 2927
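The cutoff of 40 used above was chosen by eye from the plot. For comparison, a common alternative is the chi-square critical value at alpha = .001 with df equal to the number of variables (3 here); this sketch shows that alternative and is not what the analysis above used:

maha_cutoff <- qchisq(0.999, df = 3)   # chi-square critical value, df = 3 (about 16.27)
sum(m_dist > maha_cutoff)              # outliers under this stricter cutoff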
d. Overall
   i. How many total outliers did you have across all variables?
   ii. Delete them.
In total we found 7 outliers in the dataset, at indices 149, 272, 273, 1714, 1889, 2105, and 2927. We now delete these observations:
index <- setdiff(1:nrow(lab4), c(149, 272, 273, 1714, 1889, 2105, 2927))
lab4 <- lab4[index, ]
Hierarchical Regression
a. In step 1, control for verbal ability of the participant predicting reading scores.
b. In step 2, test if working memory is related to reading scores.
c. In step 3, test if math score is related to reading scores.
d. Include the summaries of each step, along with the ANOVA of the change between each step.
Step 1
summary(model <- lm(reading ~ verbal, data = lab4))
Call:
lm(formula = reading ~ verbal, data = lab4)

Residuals:
    Min      1Q  Median      3Q     Max
-1.1825 -0.8273 -0.3570  0.5514  6.0975

Coefficients:
             Estimate Std. Error t value Pr(>|t|)
(Intercept)  1.692479   0.031075  54.465   <2e-16 ***
verbal      -0.008390   0.004945  -1.697   0.0899 .
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 1.091 on 3011 degrees of freedom
Multiple R-squared:  0.000955,  Adjusted R-squared:  0.0006232
F-statistic: 2.878 on 1 and 3011 DF,  p-value: 0.08988
anova(model)
Analysis of Variance Table

Response: reading
            Df Sum Sq Mean Sq F value  Pr(>F)
verbal       1    3.4  3.4256  2.8783 0.08988 .
Residuals 3011 3583.5  1.1901
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Verbal ability, controlled for in step 1, is not a strong predictor of reading score: its estimated coefficient is significant only at the 0.1 level. Even setting significance aside, the effect of verbal on reading is very small (-0.008390).

Step 2
summary(model <- lm(reading ~ work_mem, data = lab4))
Call:
lm(formula = reading ~ work_mem, data = lab4)

Residuals:
    Min      1Q  Median      3Q     Max
-1.8072 -0.8043 -0.3469  0.5286  6.2099

Coefficients:
             Estimate Std. Error t value Pr(>|t|)
(Intercept) 1.4640006  0.0311944  46.932  < 2e-16 ***
work_mem    0.0077394  0.0009964   7.768 1.09e-14 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 1.081 on 3011 degrees of freedom
Multiple R-squared:  0.01964,   Adjusted R-squared:  0.01932
F-statistic: 60.34 on 1 and 3011 DF,  p-value: 1.088e-14
anova(model)
Analysis of Variance Table

Response: reading
            Df Sum Sq Mean Sq F value    Pr(>F)
work_mem     1   70.5  70.465  60.336 1.088e-14 ***
Residuals 3011 3516.5   1.168
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Working memory is related to reading scores, as the coefficients table in the above output shows: the work_mem coefficient (0.0077) is highly significant (p = 1.09e-14).
Step 3
summary(model <- lm(reading ~ math, data = lab4))
Call:
lm(formula = reading ~ math, data = lab4)

Residuals:
    Min      1Q  Median      3Q     Max
-1.9233 -0.7760 -0.3401  0.5049  6.3064

Coefficients:
             Estimate Std. Error t value Pr(>|t|)
(Intercept) 1.3911517  0.0317716   43.79   <2e-16 ***
math        0.0054356  0.0005222   10.41   <2e-16 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 1.072 on 3011 degrees of freedom
Multiple R-squared:  0.03473,   Adjusted R-squared:  0.03441
F-statistic: 108.3 on 1 and 3011 DF,  p-value: < 2.2e-16
anova(model)
Analysis of Variance Table

Response: reading
            Df Sum Sq Mean Sq F value    Pr(>F)
math         1  124.6  124.58  108.34 < 2.2e-16 ***
Residuals 3011 3462.3    1.15
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Math score is likewise related to reading scores: the coefficient estimated for math predicting reading is highly significant (p < 2e-16). Again, though, the relationship is very small, with an estimated slope of 0.0054356.
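The three steps above fit each predictor in its own simple regression. Prompt (d) asks for the ANOVA of the change between steps, which is normally obtained by comparing nested models that add one predictor at a time; a minimal sketch of that comparison (not run above, so its coefficients and R-squared values would differ from the single-predictor outputs reported in steps 1-3):

# Nested models: one predictor added per step
step1 <- lm(reading ~ verbal, data = lab4)
step2 <- lm(reading ~ verbal + work_mem, data = lab4)
step3 <- lm(reading ~ verbal + work_mem + math, data = lab4)

# F tests for the change in variance explained at each step
anova(step1, step2)   # does work_mem add beyond verbal?
anova(step2, step3)   # does math add beyond verbal and work_mem?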
Moderation
a. Examine the interaction between verbal and math scores predicting reading scores.
b. Include the simple slopes for low, average, and high math levels (split on math) for verbal predicting reading.
c. Include a graph of the interaction.
fit <- lm(reading ~ verbal * math, data = lab4)
par(mfrow = c(2, 2))
summary(fit)
Call:
lm(formula = reading ~ verbal * math, data = lab4)

Residuals:
    Min      1Q  Median      3Q     Max
-1.9184 -0.7734 -0.3333  0.5080  6.3338

Coefficients:
              Estimate Std. Error t value Pr(>|t|)
(Intercept)  1.3293695  0.0501265  26.520  < 2e-16 ***
verbal       0.0119202  0.0077005   1.548  0.12173
math         0.0074618  0.0008255   9.039  < 2e-16 ***
verbal:math -0.0004077  0.0001276  -3.195  0.00141 **
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 1.07 on 3009 degrees of freedom
Multiple R-squared:  0.03869,   Adjusted R-squared:  0.03773
F-statistic: 40.37 on 3 and 3009 DF,  p-value: < 2.2e-16
plot(fit)

Splitting math into three levels (low, average, and high) and then refitting the model.

First, checking the quantiles:
quantile(lab4$math, probs = c(1/3, 2/3))
33.33333% 66.66667%
       26        53
math.levels <- rep("low", nrow(lab4))
math.levels[lab4$math > 26] <- "average"
math.levels[lab4$math > 53] <- "high"
lab4$math.levels <- as.factor(math.levels)
fit <- lm(reading ~ verbal * math.levels, data = lab4)
par(mfrow = c(2, 2))
summary(fit)
Call:
lm(formula = reading ~ verbal * math.levels, data = lab4)

Residuals:
    Min      1Q  Median      3Q     Max
-1.5352 -0.7698 -0.3317  0.5344  6.4306

Coefficients:
                        Estimate Std. Error t value Pr(>|t|)
(Intercept)             1.660554   0.052835  31.429  < 2e-16 ***
verbal                 -0.000618   0.008346  -0.074   0.9410
math.levelshigh         0.384610   0.074976   5.130 3.09e-07 ***
math.levelslow         -0.301126   0.074470  -4.044 5.40e-05 ***
verbal:math.levelshigh -0.029313   0.012187  -2.405   0.0162 *
verbal:math.levelslow   0.008687   0.011602   0.749   0.4541
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 1.07 on 3007 degrees of freedom
Multiple R-squared:  0.04016,   Adjusted R-squared:  0.03856
F-statistic: 25.16 on 5 and 3007 DF,  p-value: < 2.2e-16
plot(fit)
 

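Prompt (b) asks for the simple slopes and prompt (c) for a graph of the interaction. From the factor-coded model above (reference level "average"), the simple slope of verbal predicting reading is about -0.0006 in the average math group, -0.0006 - 0.0293 = -0.030 in the high group, and -0.0006 + 0.0087 = 0.008 in the low group. A minimal base-R sketch of the interaction graph, using the fitted factor model; this is an illustrative plot, not the original figure:

# Interaction graph: fitted reading-on-verbal lines for each math level
plot(lab4$verbal, lab4$reading, col = "grey70", pch = 16,
     xlab = "Verbal ability", ylab = "Reading score")
cols <- c(low = "blue", average = "black", high = "red")
for (lvl in levels(lab4$math.levels)) {
  newdat <- data.frame(verbal      = seq(min(lab4$verbal), max(lab4$verbal), length.out = 50),
                       math.levels = lvl)
  lines(newdat$verbal, predict(fit, newdata = newdat), col = cols[lvl], lwd = 2)
}
legend("topleft", legend = names(cols), col = cols, lwd = 2, title = "Math level")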