Now this may be obvious to some, but I gained a new perspective on research, studies, and "proof" this summer. Saying that I’m cynical about research wouldn’t be accurate, but I definitely see research in a larger context now. Major issues of reliability and validity are prevalent, and any study involving people rather than physical things like battery life is inherently less definitive.
There are frequent arguments about the findings of studies, and I was well aware of biased research done in the past, especially on the voucher issue. But I thought of the research world as a generally dependable bunch, except for a few screwballs with agendas (looking at you, Jay Greene) and self-justifying studies done by companies pushing a product. That generalization mostly holds true, but the research landscape is much more free-flowing, personality-driven, and scattershot than I had previously conceived. Individual, institutional, and regional biases, current hot topics, media, politics, and of course profit strongly affect which fields of research are heavily pursued and which are not.

Many different avenues exist to get published or “validate” your research, and the differences in credibility are not immediately apparent to the average reader, or more likely the average newspaper or magazine writer who then passes their take on the gist of the study along to their readers. Many, many different publications, both in print and online, publish research in every imaginable field. There are prestigious, peer-reviewed journals with relatively high standards, mid-level journals with varying standards, and then plenty of other journal-type publications looking for interesting writing, or just whatever is submitted (kind of like those lame flyers you’d get in high school advertising that “Who’s Who” book of students that claimed to help with college and job applications, but was actually just a catalogue of the parents gullible or desperate enough to pay to get their kid’s photo in, and then pay more for expensive books the family looked at once and the colleges not at all). And even in some of the better journals, it is possible to find holes in many studies: variables that are unaccounted for, or things left unexplained that should be explained to make the findings more credible.
This applies to all research, but especially research on people, whether educational, psychological, behavioral, or even medical. Almost nothing about people can be neatly pigeonholed into definitive rules; we're just too uniquely weird. There are prominent recent examples of medical studies that were rushed through limited population trials and botched in the name of profit, triggering huge lawsuits afterwards (I love the movie The Fugitive…). And a body of research can strongly suggest certain effects, but medicines and procedures are approved based on an arbitrary FDA standard of “safe enough,” because nothing is absolutely certain when it comes to unique individuals. Side effects vary, vaccinations sometimes don’t work, patients react differently to identical procedures, and the same drugs, even safe and common ones like acetaminophen, don’t work for everyone. Research just leads us to the most likely best treatment, not the absolute “best.”
These same limitations, and more, apply to education research because it deals with the psychological and social aspects of a student’s make-up as well as the biological. Our professors discouraged us from using the word "prove" when talking about education research. Every study has limitations. It was conducted by a specific person who is a member of a specific organization or university with a specific research culture and various funding pressures. The subjects of the study are in a specific place with certain teachers, schools, and curricula, and each has a unique background, learning style, and personality. I realize this could seem obvious, but even studies conducted with rigorous scientific standards and random samples from large, diverse populations only tell us so much. There is nothing definitive. Nothing. The results may show "strong tendencies" or be "generalizable," but that at best means the findings will hold for many kids in many contexts. Given the diversity of individual students, it cannot mean they will hold for all students in all contexts, even among a relatively homogeneous population like Utah Valley.
Viewing research as an evolving body of literature, rather than as definitive snapshots that prove certain points of view, is necessary to find defensible “best practices” that have shown repeated positive effects. In other words, you need to look at an entire body of research to begin to understand a subject. A meta-analysis is a study that gathers as many relevant studies as possible on a topic and then runs statistical analyses on the combined numbers to find the consensus. I have a book detailing effective teaching practices where each chapter is based on the results of a meta-analysis of a certain practice. Looking at the charts detailing the findings of the individual studies within each meta-analysis shows a lot of variance in the results. The chart for one particularly effective practice showed some studies with extremely high effect sizes, others with medium or small effects, still others that found gains that were barely statistically significant, and one study with a fairly large negative effect. In other words, the students in one study scored measurably worse when taught with the prescribed method that was so effective in other settings, and students in other studies showed almost no improvement. Numerous factors could be responsible for the discrepancies, and even researchers cannot always pinpoint the reasons for differing results from similar studies. That’s par for the course as researchers, educators, and parents muddle through trying to find the best ways to teach our diverse children.
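The pooling step of a meta-analysis can be sketched in a few lines. The effect sizes and standard errors below are invented for illustration (mostly positive studies with one negative outlier, like the chart described above), and the fixed-effect, inverse-variance weighting shown is just one common way of combining results, not the method from any particular book:

```python
# Toy fixed-effect meta-analysis: pool several studies' effect sizes.
# All numbers here are hypothetical illustrations, not real study results.

effects = [0.8, 0.5, 0.3, 0.1, -0.4]       # standardized effect size per study
std_errs = [0.20, 0.15, 0.10, 0.12, 0.25]  # standard error of each estimate

# Inverse-variance weighting: more precise studies get more weight.
weights = [1 / se ** 2 for se in std_errs]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)

print(f"Pooled effect size: {pooled:.2f}")
print(f"Studies with a negative effect: {sum(e < 0 for e in effects)}")
```

The pooled estimate comes out positive even though one study found a sizable negative effect, which is exactly why a single consensus number can hide a lot of variation between settings.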
Both school systems and school critics have sometimes overused “research shows” or “if you read the research” to justify practices that were speculative, and have used single studies as arguments against larger bodies of research. (I need to post about the hour I spent listening to Carolyn Eager, the home-researcher buddy of Margaret Dayton who is an “expert” on such topics as home school and the IB, being interviewed by Gayle Ruzicka on 630 AM while I drove home from the airport a few weeks ago. It had to be some sort of record for unsupported claims of “If you read the research, you’d find…”) I know I’ve heard it in professional development meetings in public school, and charter or private schools will sometimes seek out the studies justifying their founders’ opinions on the “right” curriculum or methods to use.
So I’m not saying disregard education research. I am saying that any one study needs to be approached with a critical eye and possibly a grain of salt, and that even an educational method or curriculum with a well-documented positive effect in multiple studies does not by definition have that effect on all students. The positive effect comes from statistical analysis of the net results across a large sample of students, and it may not carry the same benefit for an individual student or class.
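That gap between an average effect and an individual effect can be made concrete with a toy simulation. The distribution and its parameters are invented for illustration; the point is only that a clearly positive mean can coexist with many individuals who see no benefit, or a negative one:

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

# Hypothetical: each student draws an individual "treatment effect" from a
# distribution with a positive mean (0.3) but a wide spread (sd 0.5).
student_effects = [random.gauss(0.3, 0.5) for _ in range(1000)]

mean_effect = sum(student_effects) / len(student_effects)
worse_off = sum(1 for e in student_effects if e < 0)

print(f"Average effect: {mean_effect:.2f}")
print(f"Students who did worse: {worse_off} of {len(student_effects)}")
```

A study reporting only the average would call this method a success, while a sizable share of the simulated students were not helped by it at all.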
Thursday, August 28, 2008