02.07 – Using 360 Degree Feedback

So at this point, I would like for us to pause and reflect on the two central ways in which companies can use 360s. One way to use 360s and the feedback they generate is for evaluation and performance appraisals. In this case, your 360-Degree Feedback report would be a part of your formal evaluation system, and that data would feed into promotion and compensation decisions. The second use of 360s is purely developmental, meaning 360s are not a part of your formal evaluation and performance appraisal process.
Companies are about three times as likely to use 360s for developmental than for appraisal purposes, and that trend is very much consistent with the perceptions of both raters and ratees, who overwhelmingly prefer that 360s be used for development and not for evaluation or compensation decisions. A stern piece of advice you would get is: if you do decide to use 360s for evaluation, don't make them the sole source of data for evaluations. Please also keep in mind that using 360s for evaluative purposes can affect the quality of the data. Research shows, not surprisingly, that when we use 360s for evaluative purposes, our self-ratings tend to be much more inflated.
Some research shows that even peer ratings can be significantly inflated, although you might think that in comparative rating environments, such as when you have a forced curve or forced peer ranking, there are incentives for your peers to give you lower scores. So what are some of the key challenges of 360s? One is inexperienced raters, and we'll talk about rater errors more systematically shortly. But here, the problem with rater errors and inexperienced raters is particularly pronounced and common, because you're reaching out, by definition, to a broader pool of evaluators. Raters are not held accountable for the quality of their ratings. Raters can miscode, or misunderstand the scale or sometimes even the target of evaluation.
They can provide inappropriate comments. There can be discrepancies in evaluations, where raters can't quite reach an agreement in their evaluations of a given person. Omitting key stakeholders can leave us with an incomplete perspective on a given employee and their performance or behaviors. Low participation rates can not only affect the quality of the data but also sometimes compromise the confidentiality of respondents. If your people are not committed to 360s, and 3 out of your 5 teammates tell you over drinks that they didn't bother filling out the 360, you know who the remaining two respondents are. I'd like first to talk a little bit about best practices in 360-Degree Feedback.
The first one is that you need to measure skills and competencies that are relevant to the success of the individual and the organization. The performance dimensions, skills, and competencies you're measuring have to be in line with the vision and strategy of the organization and your team. Continuity is really important: you need to collect feedback continuously, not once a year. The best approach here is to collect feedback from various stakeholders upon the completion of major tasks and projects, then aggregate the data and present that feedback to the employee. This is very important, because we are notoriously inaccurate in our evaluations after time delays, especially when we have to evaluate others' behaviors.
And as you can see, evaluating employees' behaviors is an absolutely key component of 360s. Validate and carefully select your scales, and pilot them. Don't use standardized, off-the-shelf solutions in 360s. For example, in evaluating behaviors, I often see companies use frequency scales: "Does this person exhibit supportive behaviors?" with a scale ranging from "not very frequently" to "very frequently." Well, one of the problems with such a scale is that it conflates evaluation with opportunities for observation. I could say that you're not exhibiting supportive behaviors very frequently not because that's really true, but because I don't interact with you very frequently at the workplace. And a final piece: make sure that your feedback data are reliable.
And by that I mean that your multiple raters have to reach some degree of agreement in their evaluation of a particular employee's skills and competencies. Research suggests that groups of nine subordinates, eight peers, and four managers produce pretty good reliabilities on a five-point scale. Now, these are pretty sizable groups; in most companies, we don't quite reach those numbers. So you have to be very careful as far as the quality of 360 data is concerned. Take a look at these two pieces of feedback for Lisa and Oscar. Which feedback would you consider more reliable?
Even a cursory look at the data reveals that Oscar's feedback is substantially more reliable than Lisa's, because Lisa's raters have a really hard time reaching an agreement in their evaluations of Lisa's competencies and behaviors. Take, for example, developing others. Oscar's raters reach pretty good agreement around four and five, giving him very high scores. Half of Lisa's raters are really enthusiastic about Lisa's skill at developing others, giving her very high scores, fives. And half of them are not enthusiastic at all, giving her very low scores of two. Or take the ability to influence others. Again, Oscar's raters reach a significant degree of consistency, with a mean of around four.
Lisa's raters, in contrast, use up the entire scale: you see ones, twos, threes, fours, and fives. So next time you get your 360-Degree Feedback, evaluate the quality of the data. Be ready to raise concerns if you feel the feedback is too inconsistent. If your sample size allows, meaning if you have large groups of raters evaluating you, you can try to use the Olympic rating system, wherein you dismiss the highest and the lowest rating on each dimension. That can improve the consistency of the data. If you're more of a quantitative geek like I am, you may want a hard statistical threshold for evaluating the reliability of feedback.
Most statistical packages out there, and some open-source software, allow you to calculate the intraclass correlation coefficient, which is a measure of interrater reliability. For this kind of feedback, your intraclass correlation should be at least 0.48 or 0.49, or higher. By this metric, by the way, Oscar's feedback comfortably passes the reliability threshold, while Lisa's feedback is so unreliable that it's actually unusable.
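Both of these checks are easy to run yourself. Below is a minimal Python sketch of the Olympic trim and of a one-way ICC(1); note these are assumptions on my part: the lecture doesn't specify which ICC variant to use, and the `oscar` and `lisa` rating matrices here are invented for illustration, not the course's actual data.

```python
from statistics import mean

def olympic_mean(scores):
    """Average after dropping the single highest and lowest rating
    (the 'Olympic rating system'); needs a reasonably large rater group."""
    if len(scores) < 3:
        raise ValueError("need at least 3 ratings to trim high and low")
    trimmed = sorted(scores)[1:-1]
    return sum(trimmed) / len(trimmed)

def icc1(ratings):
    """One-way random-effects ICC(1) for an items-by-raters matrix.

    ratings[i][j] is rater j's score on dimension i. Values near 1 mean
    raters agree; values near 0 (or below) mean they do not.
    """
    n = len(ratings)     # number of rated dimensions
    k = len(ratings[0])  # raters per dimension
    grand = mean(v for row in ratings for v in row)
    row_means = [mean(row) for row in ratings]
    # Between-dimension and within-dimension sums of squares (one-way ANOVA)
    ss_between = k * sum((m - grand) ** 2 for m in row_means)
    ss_within = sum((v - row_means[i]) ** 2
                    for i, row in enumerate(ratings) for v in row)
    ms_between = ss_between / (n - 1)
    ms_within = ss_within / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Hypothetical matrices in the spirit of Oscar and Lisa:
# four dimensions, four raters each, on a five-point scale.
oscar = [[5, 5, 4, 5], [4, 4, 4, 4], [2, 3, 2, 3], [5, 4, 5, 5]]
lisa  = [[5, 2, 5, 2], [1, 4, 2, 5], [3, 1, 5, 2], [2, 5, 1, 4]]
```

With these made-up numbers, `icc1(oscar)` lands well above the 0.48 threshold, while `icc1(lisa)` comes out negative, mirroring the pattern in the lecture's example.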
Research documents some significant positive effects of using 360s. They contribute to improvements in how we see ourselves and in how others see us. Critically, though, these effects depend on engagement in post-360 development. So in addition to filling out the 360 forms and having conversations about the results, set up a development plan and follow through: 72% of employees said that even when their manager set up a development plan, they didn't really follow up on it. Another benefit of 360s is that managers develop more positive attitudes toward upward feedback and become less concerned that such feedback may undermine their authority.
I mentioned to you earlier that there are a lot of companies using 360-Degree Feedback, and I'm singling out these two, Netflix and Google, because they're using 360s not just for developmental purposes but for evaluative purposes. Peer feedback is considered very carefully at both Netflix and Google for promotion and compensation decisions.
This article is from the free online course Managing Talent, created by FutureLearn.
