This note is simply a quick reference for interpreting and understanding P values. Fuller explanations can be found at WHAT A P-VALUE TELLS YOU ABOUT STATISTICAL DATA and How to Correctly Interpret P Values
Many scientific studies perform a statistical analysis of some data, then provide a P value as a way of showing whether they found something "statistically significant". The P value is the probability of getting results at least as extreme as those observed if nothing real were going on, i.e. if only chance were at work. So P<0.05 means that, if there were no real effect, results like these would turn up less than 5% of the time; P<0.01 means less than 1% of the time. Note that this is not the same as saying there is a 5% (or 1%) chance that the result is a false positive; that common reading is a misinterpretation. So while a low P value is taken as important for a study, as it indicates a possible link, it is by no means a definitive indicator that a link exists. It is also important to keep in mind that such an analysis requires all biases to be known and measured; even a small, unknown influence on the values can completely corrupt the analysis.
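As an illustration (not part of the original note), here is a minimal sketch of how a one-sided P value can be computed for a simple coin-flip experiment, using only the Python standard library; the function name and the 15-of-20 example are mine:

```python
from math import comb

def binomial_p_value(heads, flips):
    """One-sided P value: the probability of seeing at least `heads`
    heads in `flips` tosses, assuming the coin is fair (the null
    hypothesis of "only chance at work")."""
    total = 2 ** flips
    # Sum the probabilities of every outcome at least as extreme.
    return sum(comb(flips, k) for k in range(heads, flips + 1)) / total

# 15 or more heads out of 20 flips: unlikely, but not impossible, with a fair coin.
p = binomial_p_value(15, 20)
print(f"P = {p:.4f}")  # roughly 0.0207, i.e. "significant" at the 0.05 level
```

The point of the sketch: even a perfectly fair coin produces a result this extreme about 2% of the time, which is exactly why a single "significant" P value is not proof of a real effect.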
Multiple Analyses
Because many groups are studying similar, if not the same, things, we often have multiple analyses of the same tests. We can also have single studies looking at multiple things at once to see if the test has an influence on one or more of them. Too often, people choose to focus on a single study with a statistically significant P value while ignoring a wide range of studies without one. As per usual, XKCD does a wonderful job of illustrating this, showing that too often people and the media look at P values like this
When we really should be looking at these research findings a bit more like this
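The multiple-comparisons problem the comic illustrates can be put into numbers. A short sketch (the function name is mine, not from the note): if each of 20 independent tests has a 5% chance of a false positive, the chance that at least one of them crosses the threshold by pure luck is far higher than 5%:

```python
def chance_of_any_false_positive(tests, alpha=0.05):
    """Probability that at least one of `tests` independent tests of a
    true null hypothesis crosses the `alpha` significance threshold
    purely by chance (the family-wise error rate)."""
    return 1 - (1 - alpha) ** tests

# 20 tests at P < 0.05 -- like the 20 jelly-bean colours in the XKCD comic.
print(f"{chance_of_any_false_positive(20):.0%}")  # about a 64% chance of a spurious "link"

# A common (conservative) fix is the Bonferroni correction: test each
# hypothesis at alpha divided by the number of tests instead.
print(f"{chance_of_any_false_positive(20, alpha=0.05 / 20):.1%}")  # back under 5%
```

This is why a lone significant result plucked from many comparisons, as in the comic, tells you very little on its own.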