When I was in college, the infamous Evergreen State College, I took a class where the book "Studying a Study and Testing a Test" was part of the curriculum. Say what you will about Evergreen: it is very possible to get a good education there. It is also possible to slack off for four years and learn effectively nothing, though I'm pretty sure that is true of any institute of higher learning.
But getting back to "Studying a Study": not all research papers that make it to publication are created equal. Some have a sample size so small that the only conclusion one can seriously draw is that further research is needed (the "vaccines cause autism" scare was started by a case series of just twelve children). Some show correlation but no causal relationship between the data sets (such as the humorous "Global Warming and Pirates" meme). Al Gore is very quick to point out that temperature and atmospheric CO2 levels are closely correlated over a very broad time scale, but not so quick to point out that in that record temperature LEADS CO2 instead of the other way around. So, correlation does not equal causation (except when it does, and good researchers design experiments precisely to measure that relationship).
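A toy simulation makes the pirates point concrete. Everything here is made up for illustration (the series names, slopes, and noise levels are arbitrary): two quantities that have nothing to do with each other, but both happen to trend upward over time, will come out strongly "correlated" anyway.

```python
import random

random.seed(1)

def pearson(x, y):
    """Plain Pearson correlation coefficient, no libraries needed."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# Two completely independent quantities that both drift upward over
# "time" -- think global temperature and the decline of piracy.
years = range(100)
series_a = [0.5 * t + random.gauss(0, 5) for t in years]  # steady rise + noise
series_b = [0.3 * t + random.gauss(0, 5) for t in years]  # independent rise + noise

r = pearson(series_a, series_b)
print(f"correlation: {r:.2f}")  # strongly positive, despite zero causal link
```

The shared time trend does all the work; neither series "knows" about the other.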
Sometimes researchers don't start with a hypothesis at all; they simply run an observational study, track enough variables, and see if something comes up. When something DOES come up they draw a correlation where none actually exists. Imagine studying a laundromat by simply observing the patrons for eight hours a day for two months. You would see patterns, and you would have data, but unless you started with a hypothesis such as "blond women wash sexy undergarments in public to attract potential mates," your data is simply too unfocused to answer that question. Often when a researcher falls into this trap they will publish anyway for fear of losing funding. Publish or perish, and publishing bad work beats not publishing at all.
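You can watch this trap spring in a few lines of code. Here is a hypothetical fishing expedition (all the numbers are invented for the sketch): record 200 pure-noise "variables" about 50 imaginary laundromat patrons, test every one against an equally meaningless "outcome," and count how many clear a standard significance bar by luck alone.

```python
import random

random.seed(42)

n_patrons = 50       # imaginary observations
n_variables = 200    # everything we bothered to record

def pearson(x, y):
    """Plain Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# A made-up "outcome" with no real relationship to anything.
outcome = [random.random() for _ in range(n_patrons)]

# No hypothesis, just fishing: test every recorded variable.
hits = 0
for _ in range(n_variables):
    variable = [random.random() for _ in range(n_patrons)]
    # |r| > 0.28 is roughly the two-tailed p < 0.05 cutoff for n = 50.
    if abs(pearson(variable, outcome)) > 0.28:
        hits += 1

print(f"{hits} of {n_variables} pure-noise variables look 'significant'")
```

Roughly 5% of the noise variables will pass the test, and a researcher who only reports the hits can publish "findings" from data that contains nothing at all.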
Another trap is selection bias. If you were researching the differences between healthy and unhealthy people, how would you set up your experimental and control groups? Ideally you would match on as many variables as possible: age, weight, BMI, sex, height, skin color, eye color, family history, exercise habits, social status, income, eating habits, and so on. And even then you would only look at ONE particular ailment, such as bunions, foot fungus, or brain cancer. If you simply took a group A of people with brain cancer and a group B without it, it would be highly unlikely that you avoided some sort of selection bias, and that bias leads to all sorts of weird data points: people with brain cancer spend more time in doctors' offices, therefore if you want to avoid brain cancer you should avoid doctors' offices.
There are more problems with studies, and while I was specifically studying clinical trials, the same methods of analysis carry over to other fields. Gun control groups have been having fun with statistics for years now, crying foul every time someone points out that the data does not support their conclusions. Racists on both sides of the color divide have done the same thing. Using statistics as a hammer to beat down your opponent is simply an appeal to (false) authority, one of the classic logical fallacies.
So, if the data doesn't support your conclusion, what do you do? Well, you can do a "case study." I put "case study" in quotes because when it is done properly by a doctor it is designed to help OTHER DOCTORS deal with patients who may be suffering from the same illness. A case study is basically the story of one person. When the audience is no longer doctors, the "case study" becomes an "anecdote," and ANECDOTES are NOT DATA. For example, Dave Grossman tells the story of a six-year-old who killed eight people with eight shots, five of them headshots, and then blames it all on video games. Unfortunately for him, others who have done more serious research found that fewer than 1 in 8 young murderers routinely played violent video games. Annie Oakley didn't go psycho, but she sure didn't get her skill from "Gears of War."
If someone is trying to convince you with "anecdotal evidence," you can bet it is pure, unadulterated hogwash. Burt Rutan said much the same thing about global warming: theories that require highly complicated models which fail to reproduce known history deserve suspicion. The truth is that you can feed bupkis into the global warming computer simulations and all they will throw back at you is global warming, because that is what they were designed to do.
To sum up: when the news media runs an article that says "study shows" or "studies show," I internally cringe, because even among researchers not all research is equal. And even when good research IS conducted it is possible to get random outliers, such as "Vitamin C increases risk of cancer," which came from a pretty well designed study that happened to bring back weird results. With enough follow-up research those statistical outliers prove themselves to be just that. The rule of thumb falls straight out of the standard 5% significance threshold: roughly 1 in 20 studies of an effect that doesn't exist will "find" it by pure chance.
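That 1-in-20 figure is just the flip side of the usual p < 0.05 cutoff, and you can see it fall out of a quick simulation. The setup below is hypothetical (sample size, study count, and the known-variance z-test are my choices for the sketch): thousands of "studies" each measure an effect that truly isn't there, and we count how often one declares significance anyway.

```python
import random

random.seed(0)

n_studies = 2000
n_subjects = 30
false_positives = 0

for _ in range(n_studies):
    # Every "study" measures an effect that does not exist: subjects
    # are drawn from a distribution whose true mean is exactly 0.
    sample = [random.gauss(0, 1) for _ in range(n_subjects)]
    mean = sum(sample) / n_subjects
    # Standard 5% two-tailed cutoff for a known-variance z-test.
    if abs(mean) > 1.96 / n_subjects ** 0.5:
        false_positives += 1

rate = false_positives / n_studies
print(f"false positive rate: {rate:.1%}")  # hovers around 5%, i.e. ~1 in 20
```

So a shelf of twenty papers on a nonexistent effect should, on average, contain one "positive" result; only replication sorts the signal from the noise.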
Whenever possible, get your hands on the actual research. See for yourself whether the numbers add up, whether the experiment was designed to answer a specific, discrete question, whether confounding factors were minimized, and whether the statistical significance is high enough to warrant serious consideration.