Should we all ditch 360 reviews?

The use of 360-degree feedback reviews, for development or for performance, badly needs clarification

In April, things got very heated when Basecamp CEO and Co-founder Jason Fried sent an internal memo with a load of policy changes that amounted to some distinct cultural engineering. I don’t use that term lightly, because my view is that culture is hard to engineer and largely emergent. But the mass exit of Basecamp employees can’t help but have a distinct impact on the company they leave behind.

But, setting aside Jason’s slightly confused stances (politics/no politics, work/home, ‘not paternalistic’ yet telling people what to do), I saw something else. THEY ARE STOPPING 360s.

I know, I lost my breath completely too.

OK, so maybe it’s not quite that dramatic. But maybe it is. It warranted a memo from the CEO, right? What would happen, by the way, if someone just decided they wanted feedback from their peers and went out to ask for it? What exactly would he do about it? Yep, it’s a strange thing to take a stance on.

The reason this resonated with me, as a Business Psychologist and a strong advocate of personal development, is that I think feedback is pretty sweet. I mean, we all know it’s the #1 way of exposing your blind spot in the Johari Window, right? But ‘360’ means so many different things in different companies that there’s some confusion we need to address. Before we do, here is the actual excerpt from Jason Fried’s message:

 

 

 

 


Jason Fried at the Web 2.0 conference, courtesy of Kris Krüg

 

No more 360 reviews. Employee performance reviews used to be straightforward. A meeting with your manager or team lead, direct feedback, and recommendations for improvement. Then a few years ago we made it hard. Worse, really. We introduced 360s, which required peers to provide feedback on peers. The problem is, peer feedback is often positive and reassuring, which is fun to read but not very useful. Assigning peer surveys started to feel like assigning busy work. Manager/employee feedback should be flowing pretty freely back and forth throughout the year. No need to add performative paperwork on top of that natural interaction. So we’re done with 360s, too.

What is a 360?

 

As I alluded to above, we have watered down the definition of 360. So deciding whether or not to use them is a bit like being a farmer trying to decide whether to grow ‘fruit’. Here in the UK, the decision to grow apples (native to the country and easily grown) is very different from the decision to grow lemons (leave those to the southern Italians, who have the weather for it).

So if we strip it back, ‘getting feedback from a number of people and consolidating it into a report’ sounds good. After all, businesses do this all the time to try to understand, from an external perspective, how to improve their organisation’s competitiveness. Humans are different but the inherent value of feedback is still there.

The issue is a complete lack of awareness among business leaders, managers, employees and HR of what 360 is, or can be.

It’s worth noting that Forbes is very quick to point out in article headlines that Feedback. Doesn’t. Work. But implied in many of these articles (no doubt designed to get excited clicks from pissed-off employees) is the real truth — shitty feedback doesn’t work. Even Gallup admit that employees have had enough.

The topics outlined here, therefore, cover a number of the possible configurations for 360 feedback, and hopefully show why 360 can (rightly) get a bad name and what value can come from a well-designed one.

Performance vs Development

 

Firstly, we need to address the fact that, as in Jason’s comment and in my experience of the tech sector, 360s are increasingly used in performance reviews. They were introduced with good reason — one person’s perspective can be very biased. However, as his message alludes to, if you generally have a friendly culture, the value of such feedback just brings a collective bias — being nice. And not ‘being nice’ in a sophisticated way, but in a slightly kindergarten ‘I’m just going to say nice things so you can be my fwend’ kind of way. Maybe that’s a bit harsh. But on the whole, if you aren’t saying anything to help someone improve, I would argue you have wasted their time and yours.

Don’t forget, if you are being asked for 360 feedback that feeds into a performance review, and there’s a chance you will ask the same person for feedback in return, you are incentivised to provide a ‘good’ review so you get one too. This reciprocation could be quite sophisticated, but it often isn’t — the simplest interpretation of ‘you scratch my back, I’ll scratch yours’ is chosen.

Finally, if the review is linked in any way to some other reward (e.g. pay, promotion, etc.) then why on earth would anyone start a cycle of realistic reviews, i.e. warts and all? It would take a very special culture indeed to consistently nail this.

But if we talk about development, we have different options. Yes, there is still the human urge to be nice, especially if you are scared of people being ‘mean’ in response. But we can take the heat out of the whole thing if we are clear that the 360 is for the benefit of you, the focus of the feedback.

With this different lens, and by carefully wording the briefing around it, we can help people realise that accuracy, rather than niceness, is the aim of the feedback.

The more realistic the feedback, the less likely you are to waste your development efforts. If you are going to invest in yourself, then you need to make sure you are investing in what’s important.

Finally, just to say this because I don’t want it to be lost in a 360 feedback post: if you really want to develop, or to help others develop, have a conversation for goodness’ sake. Don’t rely on a tool.

360 feedback vs 180 vs something else

 

The term 360 feedback in my view comes from having a ‘circle’ of different perspectives from which you get your feedback. The ‘validity’ of it is not about any one person’s perspective being ‘true’. It isn’t even about the collective perspective being true. The collective perspective (lots of feedback from different people) helps you to understand how you show up at work.

But a lot of the 360s I hear about are not actually a complete circle at all. Or they are, but only in one plane, i.e. peers, with maybe the occasional senior stakeholder.

Don’t get me wrong, the peer perspective is important, but take this scenario: you are a Health and Safety executive whose job (as you see it) is to ‘police’ your stakeholders and hold them to account. If you ask them for feedback, it’s likely to be quite harsh, and maybe rightly so. But the leaders of the business appreciate the scenario planning you do, the reporting and the general record on H&S. You keep your boss in everyone’s good books and that’s your main driver. If you only ask for feedback in the ‘horizontal plane’, from your peers, you run the risk of missing this important nuance.

We also have the David Brent effect. This is the type of manager who wants simultaneously to be seen as ‘good’ and as your ‘friend’, which results in lots of strange twists of logic and shirking of tough messages. If you get feedback from your peers but not your boss, it’s easy for your boss to respond with “well, I wouldn’t say that about you, but you know, maybe you should take it seriously” — in other words, to take no responsibility for the feedback.

The value of multiple perspectives should not be undervalued. The best 360s I have seen give you the option to group your feedback. Typically in these scenarios the groups would be:

  • Manager(s)
  • Peers
  • Direct reports
  • Seniors
  • Customers
  • Colleagues

The ideas of ‘peers’ and ‘colleagues’ might seem odd and, of course, if you include all of these you might end up asking too many people and/or having too few in each category. But effectively it’s just a way of grouping people together in a meaningful way. So in the H&S executive example above, ‘peers’ might be others in the same team, whereas ‘colleagues’ might be people in the business you provide a service to, or they could be ‘customers’. The aim is to group perspectives together, i.e. how do the people I serve perceive me? How do the people I work closest with perceive me?


Image from the article by Alma M. McCarthy & Thomas N. Garavan — ”360° feedback process: performance, improvement and employee career development”

 

I have never been very prescriptive about categories, or even been ‘true’ to the category names. The important thing is to group people in a meaningful way, almost testing a hypothesis — “I think all the people who work for me will see me as a taskmaster but all of my ‘customers’ will see me as a pushover”. Of course, if it’s going to be anonymous then you also need to remember how you have grouped respondents in order to make sense of the results. Some 360 tools allow you to create your own categories too.
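To make the grouping idea concrete, here’s a minimal sketch in Python. Every name, category and score in it is hypothetical, and any real 360 tool will do this for you; it simply shows how individual responses can be rolled up into one averaged score per category, ready for the kind of hypothesis-testing above.

```python
# A hypothetical example of rolling individual 360 responses up into
# per-category averages (names, categories and scores are all made up).
from collections import defaultdict
from statistics import mean

responses = [
    # (respondent category, competency, score on a 1-5 scale)
    ("manager", "communication", 4),
    ("peer", "communication", 5),
    ("peer", "communication", 4),
    ("direct report", "communication", 2),
    ("customer", "communication", 3),
]

by_category = defaultdict(list)
for category, competency, score in responses:
    by_category[(category, competency)].append(score)

# One averaged score per (category, competency) pair, so the focus can
# compare how different groups perceive them.
for (category, competency), scores in sorted(by_category.items()):
    print(f"{category:>13} | {competency}: {mean(scores):.1f}")
```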

Qualitative vs Quantitative

 

This is a huge area of discrepancy across what is meant by 360s.
Graphs are great for communicating complex information, whereas more detailed written feedback helps with in-depth understanding. Without trying to patronise you, reader, with the definitions of these terms, I just want to be really clear what I mean here.

Quantitative in 360 typically means that a number of statements (hopefully expertly crafted) are rated by the people giving feedback in the 360. These statements will generally be related to something important in the organisation, e.g. values, competencies, vision. The idea is to ensure those giving feedback are focussing on the areas that the company (and presumably, by extension, the focus of the feedback) cares about.

Qualitative in 360 typically means ‘free form’ questions where the respondent will type in some text. This might also be guided by the company’s priorities i.e. ‘Please describe how [insert name] puts the Customer First’. Or it might be really open i.e. ‘What can [insert name] do better?’.

The best 360s I have seen make the most of both. The advantages of quantitative questions are that they keep the process quick, help control for some biases, give structure, and support a visual representation of the feedback at the end (e.g. graphs and diagrams). The downside is that it can feel more like a rating, and people can obsess over their low scores.

Qualitative questions can bring clarity and specific advice to the person receiving the feedback. The downside is that you have to invest quite a lot of time, both as a respondent and in reading through the feedback to make sense of it and get a clear message.

Another issue is rater scoring bias — the tendency to score generally high or low. So a ‘3′ from one person might mean something very different from a ‘3′ from someone else. It’s why I encourage people to look for trends rather than individual scores, which is tricky for some people to do — letting go of that one ‘2’ that’s really bugging them. In reality, that ‘2’ may mean the same thing as the other ‘3’s they were given; one person just has a tendency to score lower.

Interestingly, this bias shows up not just in the raters but in the focus. That might sound odd, but remember that in many 360s the feedback from other people is averaged within groups, which helps to balance out individual tendencies. The focus, though, is only one person. I have had many conversations with people who have massively over- or underrated themselves compared to others, which can easily be read as a lack of confidence or big-headedness. I encourage focuses to look at the shape of the feedback instead. In other words, do you and your raters agree on where your strengths and weaknesses lie? This is more important in unearthing blind spots than the absolute values, which in many cases are unvalidated. Some 360s also show the range of responses or individual ratings, so you know whether you have scored similarly to any of your respondents.
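As a rough illustration of looking at shape rather than scores, here’s a small sketch (all data hypothetical): each person’s scores are centred on their own average, to strip out individual leniency or severity, before comparing the focus’s rank order of competencies with the aggregate of everyone else’s.

```python
# Hypothetical illustration: compare the *shape* of self vs rater feedback.
from statistics import mean

competencies = ["communication", "delivery", "coaching", "strategy"]

self_scores = {"communication": 5, "delivery": 4, "coaching": 2, "strategy": 3}

rater_scores = [  # one dict per rater
    {"communication": 4, "delivery": 4, "coaching": 3, "strategy": 2},
    {"communication": 3, "delivery": 2, "coaching": 2, "strategy": 1},  # a habitually 'low' rater
]

def centred(scores):
    """Subtract the scorer's own average, so a habitual '2' and a habitual '3' compare fairly."""
    m = mean(scores.values())
    return {c: s - m for c, s in scores.items()}

# Average the centred scores across raters for each competency.
others = {c: mean(centred(r)[c] for r in rater_scores) for c in competencies}

def strengths_first(scores):
    return sorted(competencies, key=scores.get, reverse=True)

print("Self sees strengths as: ", strengths_first(centred(self_scores)))
print("Others see strengths as:", strengths_first(others))
```

If the two orderings broadly agree, the absolute numbers matter much less.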

Judging by Jason Fried’s message, I’m guessing they were probably using the qualitative method, which is quite labour-intensive and often involves some of the feedback being sent to the manager for the manager to summarise. This, in my opinion, is the worst of both worlds. It does little to control for bias, it doubles down on manager bias by adding their interpretation to the feedback given, and it hands them an opportunity to use it selectively to support any axe they have to grind. Poor manager relationships are one of the top reasons people hate performance appraisals.

Language Confusion

 

A very brief note about the language used in 360s, because it varies a lot and can also cause confusion.

We have already discussed categories of respondents so we just need to focus on the generic categories…

The person receiving feedback

I have seen quite a few versions of this, often depending on the purpose of the 360 in the first place, but here are some options to consider:

  • Focus (pl. focuses, foci) — I personally like this one because it reminds people who the feedback is for.
  • Learner — could be used for any 360 where the purpose is development, but more likely if it’s part of a structured development programme.
  • Ratee — urgh. Assumes a quantitative element to the 360. Hideous.
  • Recipient — as in the recipient of the final 360 report. The problem with this may be that the manager and/or a coach may also receive a copy.

The person giving feedback

In many instances, these will mirror the terms used to describe the person receiving feedback, but in theory there isn’t a reason why you can’t mix and match — as long as there’s consistency and a bit of logic.

  • Rater — the counterpart to ‘ratee’. I don’t dislike this term as much as ‘ratee’, but it does denote quantitative assessment and, more importantly, judgement, so you need to make sure that this matches the purpose of your 360.
  • (Feedback) Giver — two words seem a bit clunky to me, and ‘giver’ alone just seems wrong, especially as I have never seen the recipient described as a ‘receiver’ (nor would I want to).
  • Respondent — because they are responding to the request/questionnaire. Maybe a little formal?
  • Supporter/Teacher — rarely seen, this is more likely to be used in a purely developmental context. What I like about this language, even though some respondents may not feel it accurately describes their role, is that it reminds them that their role is to help. It needs a clear explanation to avoid confusion.

The 360 itself

One of the sticking points can be the name ‘360’ itself. My tendency is not to overuse the term, for fear that the confusion outlined in this article will come to the fore. This study of 360 challenges offers us a list of other terms used to describe the 360:

  • stakeholder appraisal
  • full-circle appraisal
  • multi-rater feedback
  • multi-source assessment
  • subordinate and peer appraisal
  • group performance appraisal
  • multi-point assessment
  • multi-perspective ratings

 

Since I’m not a huge supporter of using 360s for appraisals, I personally would avoid ‘appraisal’ and ‘assessment’. My general suggestion is to come up with something completely unique to your culture whilst being clear on the value it brings to the focus of the feedback. Words such as ‘insight’, ‘learning’, ‘awareness’, ‘performance’ (in the sense of improvement) and ‘growth’ can work quite well.

 

Timing

 

One of the reasons 360 gets a bad name, suggested by Jason in his commentary, is that it can completely flood an organisation at a key point in the year, often because it is linked to performance appraisals.

Firstly, if you don’t link it to performance appraisals or a set time in the year for development planning: problem solved. Think about it, this should be in service of the employee, whether it’s about performance or development, so how does tying it to a set point in the year help? If it’s a performance appraisal, I would strongly encourage organisations to link it to the start date of the employee; you will get a natural distribution around the year and no bottlenecks (see the sketch below). If it’s developmental, then link the 360s to particular types of learning outcome, e.g. those that impact others, like improving assertiveness; that way only those who will get the most value are doing them. Or link it to specific learning journeys, such as those with aspirations to senior leadership positions.
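As a minimal sketch of the start-date idea (names and dates are hypothetical), the next review date can simply be derived from each employee’s anniversary, which spreads reviews naturally across the year:

```python
# Hypothetical sketch: schedule each 360/review from the employee's start-date
# anniversary, spreading the load across the year instead of one annual crunch.
from datetime import date

start_dates = {"ana": date(2019, 3, 14), "ben": date(2020, 9, 2), "cal": date(2021, 1, 25)}

def next_review(start: date, today: date) -> date:
    """Next anniversary of `start` on or after `today` (ignores the 29 Feb edge case)."""
    anniversary = start.replace(year=today.year)
    return anniversary if anniversary >= today else start.replace(year=today.year + 1)

today = date(2021, 9, 2)
for name, start in start_dates.items():
    print(f"{name}: next review on {next_review(start, today)}")
```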

If you are worried about a few key people getting lots of requests, most professional tools have ways of managing this (a maximum number of requests, key relationships only, etc.), as sketched below.
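By way of illustration only, and not a description of any particular tool’s behaviour, a request cap might work roughly like this (the names and the cap value are made up):

```python
# Hypothetical sketch of capping 360 requests per rater, while always letting
# 'key relationship' requests (e.g. the line manager) through.
from collections import defaultdict

MAX_REQUESTS = 2  # made-up cap

requests = [  # (focus, rater, is_key_relationship) -- all hypothetical
    ("ana", "sam", True),
    ("ben", "sam", False),
    ("cal", "sam", False),
    ("dia", "sam", False),
]

load = defaultdict(int)
accepted, deferred = [], []

for focus, rater, is_key in requests:
    if is_key or load[rater] < MAX_REQUESTS:
        load[rater] += 1
        accepted.append((focus, rater))
    else:
        deferred.append((focus, rater))  # ask the focus to choose someone else

print("accepted:", accepted)
print("deferred:", deferred)
```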

If you are worried about how time-consuming the 360 is, then strip it back. No one says you have to have 5 rating questions and a free-entry box for all 15 of your competencies (and why do you have 15 competencies, by the way?). Remember, the value of this feedback is often in the aggregate. You do not need to cover every single point with every single person. In triplicate. People are more likely to complete 5 x 5-minute 360 questionnaires than 1 x 25-minute questionnaire, and I’m pretty sure which one would deliver more value for the organisation. A few (say 10) quantitative questions and then a few simple qualitative ones are really fast to complete. Here are some examples of broad, open-ended questions I have used:

  • How can this person add more value?
  • What do you really appreciate about this person?
  • How could this person demonstrate more courage?
  • How can this person support their colleagues more?
  • How can this person make better decisions?

Admittedly these are based on my own leadership values of Wisdom, Courage and Love, but they demonstrate how stripping back to the core beliefs of your organisation can give you some of the most powerful questions.

 

Anonymity

 

This is a topic very close to my heart. As mentioned above, the best feedback comes from a conversation with someone who cares about your development. Honest, Candid, Forthright.

So when it comes to a 360, it isn’t surprising that I think it should follow suit. For some people, the main advantage of 360 is the opportunity for anonymous feedback.

I think the main advantage of 360 is the quantitative aspect, as discussed above. This isn’t hugely reliant on being anonymous.

The supposed value of anonymity is that it’s somehow more honest. Are you happy with that? Have you employed a load of ‘keyboard warriors’? The argument I often get when talking about more ‘named’ (what word should we use here, ‘unanonymous’?) feedback is “we aren’t ready yet” or “we want that culture but …”. If not now, when? There will not be a neon light telling you when to start being more open. Start now.

One of the most important aspects of 360, if people are to perceive it positively, is that it’s useful. I have had too many feedback-report conversations derailed by an anonymous comment the focus can’t follow up on. In these instances, I have suggested they speak to the raters who may have a perspective on the topic (not the specific comment) and ask for their input, irrespective of whether the comment was theirs. Of course, leaders without the maturity to overlook one comment rarely have the maturity to carry out these conversations effectively, and the risk of a witch hunt is high.

Let’s face it: when most people get an anonymous 360, they spend at least 15 minutes trying to work out who said what. It’s human nature. It’s only 15 minutes, and people mostly get it right. But the cost of getting it wrong is very high. In this proposed validation model for 360 from 2001, anonymity appears only once as a key factor in judging the effectiveness of 360, and not at all in the final model.

You may gain an additional benefit from removing some or all of the anonymity: you have taken away a ‘get out clause’ for avoiding feedback. The tool can then be sold as ‘this is how you see what people think of you in aggregate/visually/against key competencies/skills’. Great, and definitely a value-add for some people as the focus. What you can’t now sell it as is “here’s your chance to tell people what you really think”. So if something is bugging you about someone, or you genuinely have useful feedback, your choices are now 1) tell them in a tool and look passive, a bit pathetic and unconstructive, or 2) have a conversation like a grown-up and own it.

Not only is this about feedback specifically but also about wider accountability — this article says it best:

multisource feedback will have little impact when, a) ratees are not accountable for using the feedback, b) raters are not accountable for the accuracy or usefulness of the feedback they provide, and c) management does not accept accountability for providing resources to support behavior change.

 

So should we ditch 360 or not?

 

It may not surprise you, if you have made it through everything above, that my answer is ‘it depends’. If your definition of 360 is an anonymous, painstaking, time-consuming, low-tech way for managers to avoid their responsibilities, then yes. (Large swathes of the tech world, take note.)

If, however, the 360 you have (or want to have) is open, linked to meaningful categories, primarily for development and quick to complete, I don’t think it does any harm. If launched correctly, and as part of a wider initiative, it can be a catalyst for a feedback culture. But so few organisations have this kind of 360, where the experience is 99% positive, that virtually everyone could gain from having an expert review their tool, or from pausing its use whilst the culture of feedback (in particular conversational feedback) is reviewed.

The key for me is being clear and purposeful about what you are trying to achieve, and deriving everything from that purpose. If it’s for development, how do you maximise its value for that? How is the focus incentivised to act on the feedback? If it’s for performance assessment, how do you ensure every respondent is the ‘right’ person to assess performance? What are the clear and objective criteria? How can the focus feel involved and not ‘done to’? What weight is given to the information, and how do the focus and line manager retain accountability for performance?

We certainly shouldn’t be quoting Jason Fried as a reason never to give feedback in a tool ever again.

This article is written by Trevor E. Hudson and was first published on Medium on September 2, 2021.

Photo: jens holm on Unsplash