# What Is Inter Rater Reliability


Inter-rater reliability is the degree of agreement among raters. It is usually expressed as a percentage of agreement or a correlation coefficient. An inter-rater reliability estimate tells us how consistently different people judge the same material. This is important because in many settings we rely on others to help us gather information. For example, when we conduct research using surveys, we often rely on people to give us accurate information about themselves, and on coders to score those responses consistently. If we want to be confident in the results of our research, it is important to have a high degree of inter-rater reliability.

There are a number of ways to estimate inter-rater reliability. The most common is the percentage of agreement: the number of times two raters agree, divided by the total number of ratings. For example, if two raters agree on 80 out of 100 ratings, the percentage of agreement is 80%.
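The calculation above can be sketched in a few lines of Python. The function name and the example ratings are illustrative, not from the original text; the binary ratings stand in for any categorical judgments two raters might make.

```python
def percent_agreement(ratings_a, ratings_b):
    """Fraction of items on which two raters gave the same rating."""
    if len(ratings_a) != len(ratings_b):
        raise ValueError("Both raters must rate the same set of items")
    matches = sum(a == b for a, b in zip(ratings_a, ratings_b))
    return matches / len(ratings_a)

# Hypothetical ratings: two raters classify ten items as 1 (yes) or 0 (no).
rater_1 = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
rater_2 = [1, 0, 0, 1, 0, 1, 1, 1, 1, 1]
print(percent_agreement(rater_1, rater_2))  # 0.8, i.e. 80% agreement
```

Note that this measure counts only exact matches; it does not account for agreement that would occur by chance.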

Another way to estimate inter-rater reliability is to use a correlation coefficient. This is a more sophisticated statistical measure that takes into account both the degree of agreement and the variability of the ratings. The most common correlation coefficient is the Pearson correlation coefficient, which can range from -1.0 to +1.0. A value of 0.0 indicates no correlation, a value of +1.0 indicates a perfect positive correlation, and a value of -1.0 indicates a perfect negative correlation.
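As a sketch of this approach, the Pearson coefficient can be computed directly from its definition (covariance divided by the product of the standard deviations). The function name and the example scores on a 1-5 scale are hypothetical:

```python
from statistics import mean

def pearson_r(x, y):
    """Pearson correlation between two equal-length lists of ratings."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical scores from two raters on a 1-5 scale.
rater_1 = [3, 5, 2, 4, 4, 1]
rater_2 = [3, 4, 2, 5, 4, 2]
print(round(pearson_r(rater_1, rater_2), 2))  # 0.86: strong positive correlation
```

One caveat worth keeping in mind: a high Pearson correlation shows that two raters rank items similarly, even if one rater is systematically harsher than the other, so it measures consistency rather than exact agreement.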

It is important to note that a high degree of inter-rater reliability does not require the two raters to agree on every rating. In fact, perfect agreement is often impossible to achieve. What matters is that the agreement is high enough that we can have confidence in the results of our research.