Skip to 0 minutes and 15 seconds
So if you're going to have these two possible outcomes, you have two ways of being correct. And you also have two ways of being incorrect, and these are what are referred to as alpha and beta errors. Alpha is a false positive, which is the error that researchers are most concerned about; we'll talk about that in a second. A beta error is a false negative. I have a couple of charts that will help to explain this, because this can be a difficult concept: if you're starting with a statement of no difference and you have to either agree or disagree with that, you're stacking double negatives together, and that can often be confusing.
Skip to 0 minutes and 51 seconds
So this chart might help you to understand how we get to these endpoints and why we're concerned about them. As I mentioned, along with the two potential ways to be correct, you also have two potential ways to be incorrect. You can see that the decision from the statistical test is on the left, and what reality is, if we could really know it, is on the top. If the test found a difference and there really was a difference, then obviously that's correct. But if the test found a difference and there's really no difference, as you can see in zone B, then that's considered an alpha error.
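The chart's idea, that an alpha error means the test "finds a difference" when reality has none, can be checked with a small simulation. The sketch below is my own illustration, not from the lecture: both groups are drawn from the same distribution (a true "no difference"), so any significant result is a false positive, and the false-positive rate should land near the 5% alpha level.

```python
import random
import statistics

# Hypothetical simulation (not from the lecture): under a true null
# hypothesis, i.e. both groups share the same distribution, a test at
# alpha = 0.05 should falsely "find a difference" about 5% of the time.
random.seed(42)

def z_stat(a, b):
    """Two-sample z statistic (large-sample approximation)."""
    na, nb = len(a), len(b)
    se = (statistics.pvariance(a) / na + statistics.pvariance(b) / nb) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / se

trials, false_positives = 2000, 0
for _ in range(trials):
    group_a = [random.gauss(0, 1) for _ in range(50)]
    group_b = [random.gauss(0, 1) for _ in range(50)]  # same true mean: no real difference
    if abs(z_stat(group_a, group_b)) > 1.96:           # "significant" at alpha = 0.05
        false_positives += 1

alpha_hat = false_positives / trials
print(f"Empirical alpha (false-positive) rate: {alpha_hat:.3f}")
```

The printed rate hovers around 0.05: significant results do appear even when no difference exists, which is exactly the alpha error in zone B of the chart.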
Skip to 1 minute and 27 seconds
And this is the error that clinicians are most concerned about, because you don't want to find that a treatment works when it actually doesn't; that could cause harm to patients. Now, the other error that can occur is finding no difference when in fact there possibly was one, and this is a beta error. That's not to say this error isn't important: if you find a treatment is not effective when potentially it is, then you could be denying patients efficacious treatments because of inappropriate results. So what is the reader to do with this? We're obviously not calculating anything when we're looking at studies; we're not manipulating the data.
Skip to 2 minutes and 5 seconds
So how do I, as the reader, determine if an alpha or beta error occurred? Well, there are some very easy ways to do this. An alpha error is the concern when you're looking at a p-value in the results section of less than 0.05, right? Because if you start with a statement of no difference and you find a significant difference, you're saying that there is one.
Skip to 2 minutes and 29 seconds
So in order to make sure an alpha error doesn't occur (or at least that its chance is no more than the 5% set by your p-value threshold), you have to make sure the correct statistical tests were used, which is pretty much what this whole presentation is about: showing you how to determine whether the correct statistical test was used. Now, if a result was not significant, with a p greater than 0.05, then there's a potential for a beta error. And the only way you can rule out a beta error is with the primary outcome, because the primary outcome of a study is the only thing that's linked to power.
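The link between beta error and power can also be seen by simulation. In this sketch (my own illustrative numbers, not the lecture's) a real difference of half a standard deviation exists between the groups; a beta error is each trial that fails to detect it, and power is the fraction of trials that succeed, so power = 1 - beta.

```python
import random
import statistics

# Hypothetical sketch: a true difference of 0.5 SD exists, so every
# non-significant trial is a beta (false-negative) error.
random.seed(7)

def detected(n, true_diff):
    """One simulated trial: True if a two-sample z test at alpha = 0.05 rejects."""
    a = [random.gauss(0.0, 1.0) for _ in range(n)]
    b = [random.gauss(true_diff, 1.0) for _ in range(n)]
    se = (statistics.pvariance(a) / n + statistics.pvariance(b) / n) ** 0.5
    return abs((statistics.mean(a) - statistics.mean(b)) / se) > 1.96

trials = 1000
hits = sum(detected(n=64, true_diff=0.5) for _ in range(trials))
power = hits / trials   # chance of finding the real difference
beta = 1 - power        # chance of a beta error
print(f"power ~ {power:.2f}, beta ~ {beta:.2f}")
```

With 64 patients per group this design lands near 80% power, i.e. roughly a 20% chance of a beta error, the acceptable range discussed later in the lecture; a smaller sample would push beta higher.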
Skip to 3 minutes and 3 seconds
So as the reader you need to make sure the correct power was involved, and then you can determine, only for the primary outcome, whether a beta error occurred or not. That's why secondary outcomes are only hypothesis-generating: you can perhaps say one was significant, but when it isn't, you can't rule out that a difference exists. Now, because I know people think about things differently, I've also got a flow diagram of how you would handle this. First, as the reader, you would check your methods section to see what p-value is listed there. It goes by different names.
Skip to 3 minutes and 34 seconds
It's either called the a priori p-value, alpha, or the methods p-value, and if it's not listed, unfortunately you have to make an assumption: when you're looking at statistics in a study, you assume it's less than 0.05. A 5% chance of an error seems rather high if you think about it, but that's what's considered acceptable in clinical trials. Then you'd want to look at all the p-values in the results section and compare them to that methods alpha, to see whether they're less than or greater than that value and therefore whether they're significant or not.
Skip to 4 minutes and 8 seconds
And then you're going to check the appropriateness of the statistical test for each data point in the trial. Obviously we're going to focus on the primary outcome, but you still need to do it for each data point to draw conclusions. This covers the same ground as the box diagram, but it might be a little more helpful: some people think more in algorithms, and I think in boxes, so I'll provide it for you both ways.
Skip to 4 minutes and 32 seconds
So if you look at whether they used the appropriate statistical test, you'll see on the very left, in the purple box, that it says yes and p is less than 0.05. If they used the right test and they found a significant difference, then the results should be valid for that piece of data. But what happens if they used the appropriate test and the results are not significant? Well, remember, a beta error can only be ruled out for the primary outcome, and that's what it says in the orange box.
Skip to 5 minutes and 6 seconds
So then you'd want to check four pieces of the power analysis to see if you can rule out a beta error and truly say there was no difference. There's obviously more to it, but these four main elements can help a reader judge whether the finding is valid. First, you want to make sure the sample size in the final analysis matches what they listed in their power analysis; I'll give you an example in a minute. Second, the power calculation should be linked to the primary outcome they're investigating, and if they have more than one primary outcome, they should have a separate sample-size calculation for each.
Skip to 5 minutes and 38 seconds
Third, power should be 80 to 90 percent, which gives us a 10 to 20 percent potential for beta error, which is considered acceptable. And fourth, you want the change in the primary outcome to be clinically relevant; this is also referred to as the effect size. In other words, what do the guidelines, and clinical medicine generally, consider an acceptable change in that value for it to be clinically relevant? Then you can see the next box over, where it says yes, the appropriate statistical test was used and p was greater than 0.05, but it was greater than 0.05 in the secondary outcomes.
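Those four power-analysis checks can be written down as a simple reader's checklist. The function below is my own encoding of the four elements (the field names are hypothetical, not from the lecture): sample size achieved, linkage to the primary outcome, stated power of 80-90%, and a clinically relevant effect size.

```python
# Hypothetical reader's checklist for the four power-analysis elements
# described in the lecture; parameter names are my own.
def power_analysis_ok(enrolled_n, required_n, linked_to_primary,
                      stated_power, effect_clinically_relevant):
    checks = {
        "final analysis met the required sample size": enrolled_n >= required_n,
        "power calculation tied to the primary outcome": linked_to_primary,
        "stated power is 80-90%": 0.80 <= stated_power <= 0.90,
        "effect size is clinically relevant": effect_clinically_relevant,
    }
    for item, passed in checks.items():
        print(("PASS" if passed else "FAIL"), "-", item)
    return all(checks.values())

# Example: a trial that enrolled 420 patients against a requirement of 400,
# with 85% power on a clinically relevant primary outcome.
ok = power_analysis_ok(enrolled_n=420, required_n=400, linked_to_primary=True,
                       stated_power=0.85, effect_clinically_relevant=True)
```

Only when all four checks pass can the reader rule out a beta error for a non-significant primary outcome; a single failure (for example, losing so many patients to follow-up that the final analysis falls short of the required sample size) leaves the beta error in play.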
Skip to 6 minutes and 16 seconds
And as I mentioned, you cannot rule out a beta error there, because the study is not powered for secondary outcomes. Then what happens if they used an inappropriate statistical test, which is the last box on the right? Well, unfortunately, you then can't rule out the possibility of errors occurring. It doesn't mean you can't find any useful information in the study, but you can't draw any useful conclusions from the statistical analysis.
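The whole flow diagram can be sketched as one small decision function. This is my own encoding of the steps, not the lecturer's code; as the lecture suggests, alpha defaults to 0.05 when the methods section does not list one.

```python
# Hypothetical encoding of the lecture's flow diagram for interpreting
# a single data point in a trial.
def interpret_result(appropriate_test, p, alpha=0.05,
                     is_primary=False, adequately_powered=False):
    if not appropriate_test:
        return "cannot rule out alpha or beta errors; statistical analysis unusable"
    if p < alpha:
        return "significant result appears valid for this data point"
    if is_primary and adequately_powered:
        return "beta error ruled out; truly no difference detected"
    return "non-significant, but a beta error cannot be ruled out (hypothesis-generating)"

print(interpret_result(True, 0.03))                                           # purple box
print(interpret_result(True, 0.20, is_primary=True, adequately_powered=True)) # orange box, power OK
print(interpret_result(True, 0.20))                                           # secondary outcome
print(interpret_result(False, 0.01))                                          # wrong test
```

Note how the same p of 0.20 is read two different ways: a properly powered primary outcome supports a genuine "no difference" conclusion, while the identical result on a secondary outcome remains only hypothesis-generating.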
Hypothesis Testing: alpha and beta errors
Prof. Mary Ferrill clarifies how to assess alpha and beta errors in this video.
In hypothesis testing there are four possible outcomes: two ways of being correct and two ways of being incorrect, the latter referred to as alpha and beta errors.
An alpha error is the error that clinicians are most concerned about: no one wants to find that a treatment works when it actually doesn't, because that could harm patients.
The diagrams in this video also walk through the steps for assessing alpha and beta errors.
What is the difference between alpha and beta errors, and what steps can we take to rule them out? Please share your thoughts below.