Sunday, October 2, 2011

Research Paper Writing

 Another old grad school paper I found while looking for something entirely different. 

Writing Process
Enrico
November 27, 2007

New Structures for Research Writing

    Patricia J. Collins' “Bridging the Gap” raised my interest in investigating research paper writing.  All the freshmen at Leavitt complete a big research project every spring.  For the three years we have given the assignment, no one has been satisfied.  The students find the project long, boring, and unrelated to their lives and other coursework.  Meanwhile, the teachers feel frustrated that the students merely repeat the information they found, sometimes not even in their own words, showing no determination of importance or synthesis of ideas.  This project frustrates my whole network, and I hoped to find some information that we could apply this spring when we undertake our research project again. 
    The three articles I found were all from College Teaching.  Initially, I was concerned that the information would not apply to high school freshmen, but because the research papers described were for college freshmen, the information actually fit very well.  Each article discussed the typical college freshman research paper done in a writing course that is meant to give students an overview of how to do research.  This is the same purpose as our high school freshman research project: we aim to prepare students for the research they will do in their other years of high school.  All three articles present ideas to get students to use critical thinking while researching, synthesizing information, and enlivening their writing.  Also, all three work on breaking the large assignment down into smaller, more manageable pieces. 
    James E. Foley's “The Freshmen Research Paper” (2001) details a research project with a different focus.  Tired of boring research papers, Foley assigned students people instead of topics.  However, he did not want reports on celebrity gossip and trivia, so he chose prominent persons in the students' majors.  Foley used examples of New York Times obituaries as models to show students how to focus on the person's work, not the details of his or her life.  The students' end product was an obituary for their assigned person, but they also had smaller assignments due along the way to help scaffold their work on the project.  In the end, Foley found that not only were the majority of papers no longer boring, but that the students' overall work was much better, showing growth not only in research but also in writing.
    Unlike Foley's article, the other two pieces did not focus on the project as a whole, but rather gave ideas for the structure of the assignment presented to the students.  Stephen L. Broskoske's “Prove Your Case: A New Approach to Teaching Research Papers” (2007) suggests elaborating for students the connection between a lawyer preparing a case and the students preparing their research papers.  Broskoske states, “I draw out the analogy in terms of how lawyers frame their case (as the students define their topics), search out evidence (as the students search for sources), present the evidence (as the students write the paper), and make the closing argument (as the students draw conclusions)” (2007).  Using popular trials currently in the media, Broskoske links the considerations lawyers must weigh while preparing a case to the decisions students must make as they follow the research process, such as narrowing a topic, presenting credible evidence, and using an authoritative tone.  Broskoske concludes that the majority of students who used this approach would recommend it and use it again.
    Mardi Mahaffy's “Encouraging Critical Thinking in Student Library Research: An Application of National Standards” (2006) compares to Broskoske's article in that it gives ideas on how to present the research assignment.  This piece discusses the wording of the research assignment or prompt.  Mahaffy presents the five standards for information literacy published by the Association of College and Research Libraries and explains how re-wording a traditional research assignment can help students to demonstrate those standards.  For each of the five standards, Mahaffy explains how re-writing part of an assignment can lead students toward meeting that standard.  For example, to help students “determine the nature and extent of the information needed” Mahaffy suggests asking students to build their background knowledge on a topic using an encyclopedia and then craft two possible thesis statements as opposed to a more traditional assignment which would merely state “choose an issue that you would like to explore further” (Mahaffy 2006).  Unlike the others, Mahaffy does not present any data on how well using this technique works. 
    In my research, I found only two points in these readings lacking for my own use.  First, Broskoske's ideas would probably not work for my students.  Most of them would not have the background knowledge of how a court case or a lawyer works to understand the analogy.  If we wanted to create a new project that also discussed court cases, perhaps one linked to a text with a trial (e.g., To Kill a Mockingbird) or a text that could support a mock trial, then this might merit the amount of pre-teaching that would be involved to make the analogy accessible to the students.  Secondly, since these articles were for college students, one area was overlooked: none of the articles discussed formats for or management of note taking.  I found Collins' description of taking notes without looking at the original text unrealistic.  Not only do my students use printouts and photocopies that they like to highlight, but I would never choose to take notes in that fashion myself.  Since Collins' discussion of note taking left me unsatisfied, this is an area I can investigate further in the future.
    Foley's ideas match my students the best.  Currently, my school has a research project that is based on science content.  The students do not invest in the topics that are handed to them.  Also, my freshmen are not familiar or comfortable with scientific non-fiction, and the lack of adequate background and models makes writing the paper difficult for them, which aligns with what we have discussed about modeling in class and read in Spandel's chapter on watching others write (Spandel 2005).  Foley's use of the obituary format enlivens the writing and provides an accessible model for students, both of which my students need.  The creativity necessary to write an obituary reminded me of the types of projects presented in Collins' piece.  When I read “Bridging the Gap,” I found the list of formats students could choose for presenting their final projects interesting, creative, and fun (Collins 1990).  While I want to offer my students engaging choices for how to present their information, I also see the importance of practicing the non-fiction format.  Keeping that in mind, Foley's topics and format could improve the research assignment overall, making it more accessible to my students. 
    Similarly, the way Foley breaks down the project would also aid my students.  I am particularly interested in the library scavenger hunt he mentions; Collins did not discuss locating sources because she gathered the materials for her younger students.  While both the librarian and science teachers at Leavitt gather some sources, our goal is for students to locate and evaluate their own sources.  All students get library orientation, but it isn't very gripping.  Having a scavenger hunt once or twice after the orientation would help the students apply and review the information.  The format would be competitive and fun and could be positively reinforced with prizes.  Foley's annotated bibliography and oral presentation would also work well for my students because we focus on citing and speaking skills in freshman year. 
    Lastly, just before the final assignment is due, Foley takes a week to conference individually with each student.  This connected to Collins' article as well.  She describes how she did status-of-the-class inventories and met with students during working time to help them along.  Even though Foley's conferencing is formal and Collins' informal, both styles give students direct help with their projects.  Currently, the freshman research project in my school is used as a local assessment, and for that reason I am not allowed to help the students much or else it influences the validity of the scores.  However, I want to conference with my students to help.  So often, students just need a little extra help with citing, flow, or adding analysis, and I am unable to give them that while they are working. 
    Due to my frustration with our current project and enthusiasm for Foley's ideas, I am planning to present my research to my network in hopes that we will scrap, or at least revise, our current assessment and try one which provides more opportunity for the students to connect to their topics, provides more models for the final project, and assesses not only the end product but many steps along the way, giving helpful feedback.  If my network is willing to create a new assessment, we could also use Mahaffy's ideas on how to word the prompts to get more critical thinking.  As our assessment stands at present, students are handed a topic by their science teacher and told to research it with a focus on current findings, with a minimum of one Internet, one journal, and one book source.  This assignment prompt is exactly like the traditional one Mahaffy presents in the article.  Even if we do not create a new assessment, we at least need to re-word our current one so the directions prompt the information literacy standards Mahaffy discusses (determining the scope of the project, accessing needed information, evaluating sources, accomplishing a purpose with the research, and understanding legal issues of researching such as plagiarism) (Mahaffy 2006). 
    Two of Mahaffy's ideas raise interesting points that contrast with our current research assignment.  First, Mahaffy criticizes dictating to students an exact type and number of sources.  He states, “prescribing the types of sources the student is to use sidesteps an opportunity to help the student in developing critical thinking skills” (Mahaffy 2006).  He explains that when students investigate all types of sources, they must evaluate them more fully on their own; while independence is an ultimate goal at Leavitt, our current assignment structure might be limiting the students.  Secondly, Mahaffy mentions using an annotated bibliography in place of a formal paper.  He describes students documenting all of their sources and then writing about how each source was or was not useful and which source was most valuable overall.  This assignment would work well for the goals of our current freshman research assessment.  For this assignment, the English department is concerned only with the research process, not the writing.  The annotated bibliography would be easier for the students to write and for the teachers to grade while also fostering in-depth evaluation of sources.  However, the downfall of this assignment is that the students may not ultimately synthesize the information they have gathered if all that is asked of them is a bibliography. 
    Overall, I found it enlightening that college professors struggle with the same problems I do with my high school researchers.  I am excited to present the ideas I have discovered to my network in hopes of bringing about some effective change to our assessment. 


Works Cited
Broskoske, Stephen L.  (2007).  Prove Your Case: A New Approach to Teaching Research Papers.
      College Teaching, 55, 31-32.
Collins, Patricia J.  (1990).  “Bridging the Gap,” pp. 17-31.  Coming to Know: Writing to Learn in
     the Intermediate Grades.  New York: Heinemann Educational Books. 
Foley, James E.  (2001).  The Freshmen Research Paper.  College Teaching, 49, 83-86. 
Mahaffy, Mardi.  (2006).  Encouraging Critical Thinking in Student Library Research: An Application
     of National Standards.  College Teaching, 54, 324-327. 
Spandel, Vicki.  (2005).  The Nine Rights of Every Writer.  New York: Heinemann. 

Homework Research

While looking for other old work, I came across this research paper I did on homework.  It's really more a paper about research than about homework, though. 
 
My Homework was Researching Homework:
An Interpretative Review of Primary Research on Homework


EDU 600 Research Methods and Techniques
Beaudry / Miller
University of Southern Maine
April 29, 2008


Introduction:
    Homework is an integral part of my daily life as a teacher.  I assign reading homework almost every night all year, as well as writing assignments and projects.  At my school, we recently spent an in-service day discussing struggling students and student failures, which led to a discussion of homework.  The necessity and grading of homework is highly debated, but I cannot see doing without it in my classroom.  So, I decided to use this assignment to become more informed on the issue of homework and see what some research had to say about it.  Even though none of the studies I wound up using directly discussed a secondary English language arts classroom, I found the information interesting and useful. 

Search Process:
    Back in February, I decided to search for studies that dealt with foster care and schooling.  I wrote down the titles of several articles one day in class, but I turned in that sheet and never got it back.  As I waited to get that sheet back, I focused on our current assignments and procrastinated getting my articles.  It turns out that waiting was beneficial because I lost interest in my topic by the time it came to buckle down on this assignment.  So, I came up with the idea to research homework, in which I am highly interested.  I went to Academic Search Premier to find my articles.  I clicked the box for peer-reviewed journals and the full-text box to help narrow my results.  The original search term I used was just “homework,” but when I saw this term yielded many results, I narrowed it to “reading homework,” which reduced the results considerably.  I used the article title and journal name to choose which articles' abstracts I would read.  From the abstracts, I generally could tell if an article was a primary source or not because it would mention samples and such.  In the end, I decided to use the homework study we read in class, so I got rid of my longest article to ease my workload. 

Essential Questions and Hypotheses:
    The first study I read dealing with homework was Elawar and Corno (1985), and it had a clear purpose.  In general, the study question was “What types of homework and teacher uses of homework appear most beneficial for different types of students?” (p. 162).  More specifically, the primary purpose was “to test a response-sensitive feedback treatment in actual classrooms, with specific written feedback tailored to student homework performance” (p. 163).  A second purpose was to see if the treatment would affect students of different ability levels.  Next, I read Bailey et al. (2004), which, although clearly stated, had a much different focus: to “determine if reading homework, designed to be interactive between children and parents, would increase parental involvement and improve students' abilities to draw inferences from reading material” (p. 173). 
    The third article I encountered, Murray et al. (2006), did not state its purpose quite as clearly.  Generally, it explored how depression in parents affected their ability to help their children with homework, but this was not stated explicitly anywhere in the article.  What was stated were four areas that would be investigated: “promotion of positive mastery motivation, the promotion of independent representational understanding, the provision of general emotional support, and the presence of coercive control” (p. 128).  Then many predictions, or hypotheses, were given, some of which applied directly to the four focus areas.  They included: depression would interfere with mothers' ability to support homework; parents' behavior would associate with child outcomes; promoting positive mastery motivation and independent representational thinking would associate with positive child performance; general emotional support and lack of coercion would associate with high self-esteem and good behavioral adjustment; and lastly, maternal behavior would have a greater effect than paternal (pp. 128-9).
    Alber et al. (2002) had a much more straightforward purpose.  This study included two experiments using the same three research questions: “what are the comparative effects of two types of homework assignments on students' (a) accuracy of homework assignments, (b) acquisition of social studies content as measured by next day quizzes, and (c) maintenance of social studies content as measured by unit or chapter tests?” (p. 174). 
    The last two studies I read went together.  Munk et al. (2001) had two purposes: “First, the perceptions of a nationally drawn sample of special and general education parents towards types of homework communication problems will be reported ... Second, the perceptions of both special and general education parents regarding the homework communication problems reported by Jayanthi et al. (1995a) will be described and compared” (p. 191).  The second study, Harniss et al. (2001), was very similar in many aspects, which will be described later, but its purpose worked off of the data collected in Munk et al. (2001).  The first study worked to identify the problems with communication, while Harniss et al. (2001) had the primary purpose to “validate the recommendations reported by the focus groups” and specifically to “identify what recommendations for improving homework communication they perceive as most important and whether there is a difference between” special and general education parents (p. 208). 
    The strongest connection between these studies was that they expanded on or verified other published research, and thus they all used directional hypotheses.  The experiments described in Alber et al. (2002) were “designed to replicate the study by Hippler et al. (1998)” (p. 174).  Munk et al. (2001) and Harniss et al. (2001) both worked to validate and expand on Jayanthi et al. (1995), and some of the researchers worked on all three of those studies.  Bailey et al. (2004) appears to expand on a list of studies cited before the purpose (p. 173), but does not state that it is replicating them as the previously mentioned articles did.  Even the Murray et al. (2006) study expanded on the previous findings of this longitudinal study. 
    Additionally, researchers investigated how we communicate about homework in four of the studies.  Munk et al. (2001) and Harniss et al. (2001) both explored communication problems between parents and teachers.  Meanwhile, Elawar and Corno (1985) investigated how teachers communicate feedback to students.  Another two studies, Bailey et al. (2004) and Murray et al. (2006), both looked at how parents and their children communicate about homework. 
    A final connection between the topics of study was that the studies were interested in what was most effective.  Alber et al. (2002) investigated which of two types of homework was more effective for students.  Elawar and Corno (1985) found a more effective form of teacher feedback.  Bailey et al. (2004) looked at effective ways to get parental involvement in homework.  Lastly, Harniss et al. (2001) gathered parents' views on the most effective ways to solve homework communication problems. 

Type of Research:
    Three of my studies were quasi-experiments, which makes them also inferential quantitative research.  All three used convenience samples, which prevented them from being full experiments.  Elawar and Corno (1985) and Bailey et al. (2004) both used treatment and control groups.  Elawar and Corno (1985) used a factorial design, which resulted in two treatment groups and one control group.  The full treatment group received a specific type of teacher feedback on homework while the control group received only the number of answers that were correct.  A half treatment group was also used, where half of the students in the group received the feedback and the other half received just scores.  This was a strength of the experiment because not only were the roles in the half group assigned at random, but it helped to exclude other factors, such as teaching style, from affecting the results of the experiment. 
    Bailey et al. (2004) also used a factorial design and had two treatment groups, each with a different level of treatment to compare to the control group.  The teachers assigned one treatment group only homework assignments designed to be interactive between a parent and student.  Meanwhile, the second treatment group received those homework assignments, and the parents attended a workshop about parental involvement in homework.  The control group had neither of these treatments and instead “simply continued their program of instruction and homework with no specific intervention” (p. 175).  In this case, the factorial design was not as beneficial because the roles were assigned by school instead of randomly assigned within a half group as in Elawar and Corno (1985).  Thus, even though the extra level of treatment adds to the content of the study, it does not add to its generalizability. 
    Alber et al. (2002) used a much different design than the other two quasi-experiments.  This study used an intact group design, a variation of the quasi-experiment.  It used post-tests, had no control group, and used a repeated measures design.  A single small group of students switched back and forth between two different treatments, either an SRQ or an SRWS homework assignment, and was quizzed after each one.  Alber et al. (2002) could have been improved with a control group because one cannot say that the SRWS method was better than the SRQ method without more control over the variables, such as the difference in difficulty of the material the homework covered. 
    Two studies, Munk et al. (2001) and Harniss et al. (2001), were descriptive quantitative studies; they had no control or treatment groups and did not use random sampling.  After all our work in this course, I found that the descriptive studies left something to be desired.  I now know that cause and effect cannot be determined from this type of study alone.  Yet, that does not mean that descriptive quantitative studies are all poor.  Some research needs to be a starting point that leads to topics for studies with more generalizable results.  Harniss et al. (2001) even states under limitations that the findings were “not the results of carefully controlled experimental studies” (p. 223) and recommends that type of research as a step for future research. 
    The remaining study, Murray et al. (2006), was a longitudinal correlational study.  The focus of the overall study was to see how postnatally depressed mothers versus non-depressed mothers affect their children.  Yet, in this study, the researchers also decided to pull in the fathers.  This created many factors to be compared across the four focus areas dealing with homework.  The correlational design fit the purpose of this study well because the researchers were able to compare depressed and non-depressed mothers on their children's performance in various areas. 

Sample:
    The samples of my studies were extremely varied.  I had samples as small as twelve students (Alber et al. 2002) and as large as 504 (Elawar and Corno 1985).  There were samples from Venezuela and Britain, as well as national United States samples and ones which focused on small sections of America.  The students ranged from first grade through high school, and they represented all different backgrounds in race, economics, etc. 
    All but two of my articles gave details about the participants.  Munk et al. (2001) and Harniss et al. (2001) both used the same sampling method, snowball sampling.  The researchers asked teacher participants in previous studies to participate again by recommending parents for the studies.  A major weakness is that the articles do not say whether those original participants were randomly selected and do not tell how the teachers chose which parents to give the surveys to.  Also, the features of the parents selected were not described other than the ages of the students, the percentage that had special education services, and that they were from all over the United States.  This was the weakest description of sampling strategy I encountered because readers of these studies do not know if the sample represents the population, and using a non-probability sample hinders generalizability. 
    Three of the studies used convenience samples.  Alber et al. (2002) used a convenience sample because the two modified quasi-experiments performed in this study were done in two different classrooms, but this sample was very small; one class had twelve students and the other twenty.  Bailey et al. (2004) had a larger sample of eighty-four participants and did its quasi-experiment in classrooms as well, so its control and treatment participants were not randomly assigned.  Likewise, Elawar and Corno (1985) also used several classes of students as control and treatment groups so roles could not be assigned randomly, yet this sample was the largest with 504 participants.  In all three cases, the convenience samples were appropriate to the purpose and design of the research. 
    My correlational study, Murray et al. (2006), used a characteristic, postnatal depression, as its “treatment,” which could not be random; however, the participants who formed the control group were randomly selected from those who did not have postnatal depression.  So, the researchers randomly chose participants from a large group where it was applicable, which improved the generalizability of the results.  Although the sample selection was appropriate to the study, the description of how participants were selected did not fit any of the types of samples we discussed. 

Data Collection Methods:
    Three of my studies used pre- and post-tests: Elawar and Corno (1985), Murray et al. (2006), and Bailey et al. (2004).  In contrast, Alber et al. (2002) did only post-tests after each treatment, which was either an SRQ or an SRWS homework assignment.  This design was similar to a repeated measures design.  There were no pretests or control group for the researchers to compare the scores of the post-tests against.  Instead, the post-tests of the different treatments were compared to each other to determine which was more effective.  I found that the results from the studies that used pre- and post-tests showed more clearly that the results were due to the treatments. 
    Munk et al. (2001) and Harniss et al. (2001) used surveys as their major way of collecting data.  The surveys used measurement scales to collect data.  There were questions that used nominal scales (questions about the background of the student), ordinal scales (questions asking participants to rank items), and interval scales (questions asking participants to say how serious a problem they thought each item was).  As similar as the designs of these two studies were, there was a big difference in the amount of data they were able to collect.  Both studies used snowball sampling to contact the participants.  All the participants received a survey with a cover letter explaining the directions and telling them of an incentive for completing the survey.  However, Munk et al. (2001) sent the surveys directly to the participants while Harniss et al. (2001) gave the surveys to the teachers to pass on to parents.  Then, both sets of researchers mailed an extra copy of the survey to the participants later in hopes of more returned surveys.  Munk et al. (2001) had a 63% return rate while Harniss et al. (2001) had only a 26% return rate.  I found this marked difference very interesting, but the studies offered no explanation for the return rates.  This difference left me wondering if the reasons would be clearer if the sample were described for each study. 

Data Analysis:   
    I expected to see all sorts of fancy data analysis methods and complicated charts which would confuse me, but I did not.  Various data analysis tests such as t-tests, ANOVA, chi-squares, or F-tests were mentioned, but only the discussion and charts of uni- and multi-variable regressions in the correlational study overwhelmed me.  The charts gave low p values for several factors, which indicated statistically significant relationships.  I also could determine the results of t-tests and beta weights, but I was unsure what factors were included in the R2 scores that were listed.  It appeared that the charts were labeled as “models” and that the total R2 score was determined for the factors listed in each model, but I was unsure how that showed how much the individual factors correlated.  Luckily, the text that described the findings of the statistics was clear, so I could understand the results. 
    All of my other studies discussed mean and standard deviation.  In fact, in Munk et al. (2001) and Harniss et al. (2001), mean and standard deviation were the primary data analysis.  These two studies did not even contain sections labeled “data analysis,” which surprised me.  However, when I reviewed our materials, I saw that the measurement scales they used actually lent themselves to this type of analysis.  Plus, the analysis of this data fit the purpose of the studies.  The researchers wanted to know the perspectives of parents on homework issues, and finding means for their answers to survey questions fit that purpose.  However, I was left wondering about the ends of the articles.  Under limitations, Munk et al. (2001) states that the surveys were done by “self reporters” and “the accuracy of the perceptions cannot be validated” (p. 201).  Also, they explain that the ranking method is a limitation because “the study indicated only the mean ranking for each problem; this method does not provide absolute meaning of seriousness” (p. 201).  Meanwhile, Harniss et al. (2001) states: “The fact that parents ranked a strategy as more effective than another is no guarantee that it would in fact be the most effective in practice.  Lower ranked items may be equally effective though less desirable from a parent's perspective” (p. 222).  At first I thought this was a weakness, but when I reviewed that interval scales are meant to give values that measure “central tendency,” I realized that this was why the researchers chose this type of data collection and analysis.  Yet, I did question the purpose of obtaining the parent perspective, and I wondered if the researchers could have done more analysis showing statistically the similarities and differences between the focus groups in Jayanthi et al. (1995) and the current studies, since part of the purpose of both studies was to extend Jayanthi et al. (1995). 

Results and Outcomes:
    Since all of the studies used directional hypotheses, the results of the studies were not shocking.  Results from four of the studies matched what the researchers predicted, and all studies that were meant to replicate a published study confirmed the previously found results.  It was only Munk et al. (2001) and Harniss et al. (2001) that found some differences between the original study they expanded upon and the new results.  These two studies worked to match their findings with the results of focus groups described in Jayanthi et al. (1995).  Unlike their samples, which contained only parents, the focus groups contained teachers, so it was interesting to see the differences in how parents and teachers looked at homework communication problems.  Most notable was that teachers and parents had differing perspectives on blame for homework communication problems, yet agreed on effective interventions for solving the problems.
    Only two of the studies overreached in their conclusions.  Although the experiment presented in Elawar and Corno (1985) was superior, the researchers jumped too quickly to recommending that all Venezuelan schools try this form of feedback on homework.  Alber et al. (2002) overreached more drastically by recommending that teachers start converting homework to the SRWS format when the study did not show cause and effect.  The other four studies did not overreach; in fact, they all listed limitations to their research, such as design or sample problems, or suggested findings that could be further researched. 
    Some of the information I learned from the six studies blended well together, even with such a broad topic.  The findings in Murray et al. (2006) and Bailey et al. (2004) agreed that parents influence students' achievement on homework.  This also aligned with some of the findings from Harniss et al. (2001), in which one section of the survey asked parents to rank the effectiveness of the interventions parents can take with homework.  The parents' answers show that they, too, saw a connection between their involvement and student achievement.  Yet the Munk et al. (2001) survey did not shed light on the issue of parents and homework because the parents only commented on teachers; this limitation was stated in the study. 
    I found this information on parents and homework the most personally applicable due to the amount of homework I assign.  Knowing how much parents affect homework, I am curious how involved parents are with my students' homework, since I teach older students and many of the participants in these studies were not in high school.  I am also curious about parents' thoughts on how I communicate with them about homework, because I rarely discuss daily assignments with parents since my students are teenagers.  These concerns about parents and homework are possible areas I could investigate when I take Teacher Research in the future.

Generalizability:
    The lack of discussion of validity and reliability surprised me.  The worst studies were Munk et al. (2001) and Harniss et al. (2001), which stated that “items for [the surveys] were obtained from previous explorative research that was conducted to identify communication problems related to homework among parents and teachers” (Munk et al., p. 193; Harniss et al., p. 209), but did not explain whether those items were valid or reliable.  They did describe using a “group mind process” to revise the survey twice and piloting it with a group, but the group that revised the survey was not described as a panel of experts, and no data on the pilot group's reaction to the survey were given.  Moreover, the problems with the sample, discussed above, also affected the generalizability.  Thus, this research, as good as the results sounded, has little generalizability. 
    The best study for generalizability was Elawar and Corno (1985).  This study reported reliability statistics for all eight instruments used for data collection.  These ranged from .68 to .95, all within an acceptable range.  Validity was also mentioned when instruments were described as criterion referenced or standardized, but those descriptions were not as specific as the reliability figures.  Finally, this study contained the largest sample and randomly assigned roles when possible, including the half-treatment group.  Thus, the sampling and assignment of roles also helped make this the strongest study in terms of generalizability. 

Overall Ratings:
    Elawar and Corno (1985) and Bailey et al. (2004) were both medium-high studies.  A strength of both was a clearly described sample.  Additionally, both were quasi-experiments that used treatment and control groups and then analyzed the data to see whether the treatment caused the outcomes.  This design was a strength for these two studies because cause and effect could be determined, which was absent in the other studies.  Yet Elawar and Corno (1985) is the highest rated overall because it used a half-treatment group and statistically reported the reliability of every measure, which none of the other studies I looked at did. 
    Meanwhile, I would rate Murray et al. (2006) as a medium study.  It also had a clearly described sample and covered many areas that could affect parents and how they interact with their children when working on homework.  This breadth was a strength because univariate and multivariate regressions could then be used to determine which correlations were significant and which factors were not.  The study was solid, but lacked superior elements such as information about instrument validity. 
    Alber et al. (2002) is a medium-low study.  The sample groups were very small: the experiment was conducted twice for this study, once with a sample of twelve and once with a sample of twenty.  Such a small sample, which was also a convenience sample, really hurt the generalizability of the study.  The lack of treatment and control groups was also a weakness; without controls, cause and effect cannot be determined.  If this study had contained a control group, it could have said that the SRWS was the most effective method, instead of only that there was “powerful evidence of a functional relationship between the SRWS condition and increased academic achievement” (p. 193). 
    Lastly, Munk et al. (2001) and Harniss et al. (2001) are low studies.  The lack of description of the samples hurts these studies, but it is the lack of information about the instrument that really makes them the lowest to me.  It is possible that the samples reflected the populations and that the instrument was valid and reliable, but the researchers did not discuss these areas in the articles. 


Works Cited
Alber et al.  (2002).  A comparative analysis of two homework study methods on elementary and
     secondary school students' acquisition and maintenance of social studies content.  Education
     and Treatment of Children, 25, 172-196.
Bailey et al.  (2004).  The effects of interactive reading homework and parent involvement on children's
     inference responses.  Early Childhood Education Journal, 32, 173-178.
Elawar, M. C., & Corno, L.  (1985).  A factorial experiment in teachers' written feedback on student
     homework: Changing teacher behavior a little rather than a lot.  Journal of Educational
     Psychology, 77, 162-173.
Harniss et al.  (2001).  Resolving homework-related communication problems: Recommendations of
     parents of children with and without disabilities.  Reading and Writing Quarterly, 17, 205-225.
Munk et al.  (2001).  Homework communication problems: Perspectives of special and general
     education parents.  Reading and Writing Quarterly, 17, 189-203.
Murray et al.  (2006).  Conversations around homework: Links to parental mental health, family
     characteristics and child psychological functioning.  British Journal of Developmental
     Psychology, 24, 125-149.