Critiquing Research

Outline:

-- Reading Journal Articles
-- Critiquing Research
Theory and Conclusions
Internal Validity
Construct Validity
External Validity
Subject Population
Researcher Bias


Reading Journal Articles

First -- I am assuming that you know how to find journal articles on a topic of interest. Am I right? What do you do?

-- PsycLIT
-- SSCI (the Social Science Citation Index; used like PsycLIT, and also used to search for who has cited key articles)
-- peruse journals
-- back-search the references of key articles
-- ask faculty

Now once you have an article, what do you do with it?

What do you read first, how do you go through it, what is important?

Critiquing Research

This is one of my favorite pastimes. There are no perfect research projects -- to my mind, there are none that even approach perfection!

What things might you attack research for? Essentially, everything that you worry about in experimental design!

Theory and Conclusions

You can go at it globally and attack the theory -- the way of viewing the world. Do you agree? If not, why not? Do you like the article? Just saying that you don't like it, or that it doesn't work for you, is not enough -- it may be a good starting point, but you need to know why! Look for false assumptions (stated or unstated). Look for erroneous reasoning.

Look at the conclusions and see if they follow logically from the data. One thing to look for is whether the authors ignore part of their data. A second is whether they have over-explained the findings: in other words, are there other, simpler explanations that they passed over, failing to apply Occam's razor? This approach assumes that the data and the method are essentially OK -- or rather, that you haven't chosen to attack them yet.

Example: The penny-recognition experiment. People cannot pick out an accurate drawing of a penny from a host of alternatives. The researchers claimed the results were due to memory failure. What is another, more plausible explanation?

Example: Flashbulb memories. Brown and Kulik noted a phenomenon: memories of hearing the news of important world events seem to be detailed and long-lasting. They concluded that there is a special neural mechanism, "Now Print," for these special events. They did not really have evidence, however -- only people's reports that they were getting it all right. It turns out that people don't!

You have gone after their theory; now you should hit them on their method.

Internal Validity

These problems concern how the experiment was set up. Is there some other factor that could explain the findings? Generally, that other factor has to be something that varies with the independent variable. It may be there because the researchers let a confound into the design, or because they are missing an important control condition, which limits their ability to make the claim they want to make. All studies that use a cross-sectional variable have this problem. Example: the classic is making claims about racial differences without controlling for social-class differences. Sometimes the problem arises simply from poor design.

Example: Noting that people in counseling feel better at the end of therapy than at the beginning. Confounding variable? Time -- they might have improved anyway as time passed.

Construct Validity

Look at their operational definitions of their constructs. Make sure you think they are measuring what they claim to be measuring. Psychologists like to claim they are talking about X and then do an experiment on Y. An example is studying emotion -- making claims about how strong an emotional response is based on heart rate. Heart rate could reflect arousal, which may not be the same thing as emotional strength. Dangerous. One piece of construct validity is reliability.

External Validity

Ask whether the experiment is set up in a fashion similar to the way the world really works. This is a constant problem. People act differently when they know they are being watched. Sometimes this complaint is framed in terms of "ecological validity": the world really is like what you said in your intro, but not like what you did in your experiment.

(SIDEBAR -- there seems to be a continuum from ecologically valid studies riddled with confounds to tightly controlled experiments with high internal validity but no ecological validity. Just the way the world works.)

Subject Population

Look at the subject population. Are the subjects of the experiment the same group to whom the researchers are applying the findings? If not, then they have a problem with their subject population, often termed "representativeness of sample." Let me read you one -- a few years old now, but it is a classic.

Researcher Bias

Often the goals of the researchers interfere with the objectivity of the research design. It is possible to design research that demonstrates your point. The common examples are in political work -- pollsters want people to agree with their client's views. One can easily ask questions in such a way as to lead people to certain responses. One can also group numbers in different ways to emphasize different views. Say the question is whether you support the President: 16% strongly support, 25% somewhat support, 30% are neutral, 25% somewhat oppose, and 4% strongly oppose. One can claim that only 16% strongly support the President, or that 41% support the President, or that 71% are not opposed to the President. One can choose which numbers to report. Mean, median, or mode for home prices?
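To make the arithmetic concrete, here is a minimal sketch in Python (the poll percentages and home prices are the hypothetical figures from the example above, not real data):

    # Grouping the same poll responses to support three different headlines.
    poll = {
        "strongly support": 16,
        "somewhat support": 25,
        "neutral": 30,
        "somewhat oppose": 25,
        "strongly oppose": 4,
    }
    support = poll["strongly support"] + poll["somewhat support"]  # 41
    not_opposed = support + poll["neutral"]                        # 71
    print(f"Only {poll['strongly support']}% strongly support the President.")
    print(f"{support}% support the President.")
    print(f"{not_opposed}% are not opposed to the President.")

    # Mean vs. median vs. mode for home prices: one mansion drags the
    # mean far above what a "typical" house costs, so the choice of
    # statistic changes the story.
    from statistics import mean, median, mode
    prices = [150_000, 160_000, 160_000, 175_000, 2_000_000]
    print(mean(prices), median(prices), mode(prices))  # mean 529000, median 160000, mode 160000

Same data, three honest-sounding headlines -- the numbers never change, only the grouping and the statistic reported.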

This is a problem in psychology too. Often it shows up here as a failure to include important control conditions -- controls suggested by other theoretical views. It also shows up in investigating the most basic claims of theories (ones generally not contested) rather than the risky claims (ones that are genuinely open to question).

Conclusion

The goal of all research is to have generalizable results, but most studies have some problems. Therefore, we often settle for a series of experiments and the preponderance of the findings. Look for something called convergent validity!

These types of assaults also work when you are evaluating other types of research. Your doctor tells you to cut down on cholesterol, but you know that all the research has been conducted on fat old white men with histories of heart trouble, and you are not convinced that it is really relevant to you.