Today, TotalBiscuit released a video titled ‘I will now talk about negative Mad Max reviews for just over 40 minutes‘, and whatever the hell else you can say about it, the title is an accurate assessment of what he has to say. It is also quite silly (although to be fair, I agree with much of the last 15 minutes of his rant), but there’s still a lot in it that made me quite cranky. Here are my thoughts:
- I’m working on a review of the game myself – I’m currently clocking in at 60 hours and am on the home stretch. The short version is that this is a mediocre game until you get the Thunderpoon, after which it is slightly better than mediocre. The 70% on Metacritic is, if anything, generous.
- TB has had an entirely different experience with the PC port of the game than I have. My experience has been glitchy and crashy, with lots of sound bugs and a couple of blue screens. It’s the first game I’ve played in a while where I’ve checked that all my video drivers are up to date MULTIPLE TIMES. Note: my box is not a bad PC, and my other 3D games all run fine.
- TotalBiscuit seems to be shocked that a pass/fail rating system (Steam only asks whether you recommend the game – yes/no) will give a different result than one that averages a numeric score. Here’s a hint: if everyone who rated the game at 50% or above in their Metacritic review gave it a thumbs-up on Steam, you would get something very similar to Mad Max’s ridiculously high scores on Steam, and be completely consistent with the 70% Metacritic rating. We file this one under ‘basic math’.
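To make the arithmetic concrete, here’s a toy illustration (the scores below are invented for the example, not real Metacritic data) of how a set of reviews averaging 70% turns into a near-unanimous thumbs-up once you collapse it to a 50% pass/fail threshold:

```python
# Toy illustration: the same hypothetical review scores produce
# a 70% average (Metacritic-style) AND a 100% "recommended" rate
# when converted to a yes/no at a 50% threshold (Steam-style).
scores = [55, 60, 65, 70, 70, 75, 75, 80, 85, 65]  # invented scores

average = sum(scores) / len(scores)
recommended = sum(1 for s in scores if s >= 50) / len(scores)

print(f"Average score: {average:.0f}%")            # 70%
print(f"Positive (>=50%) ratings: {recommended:.0%}")  # 100%
```

Same data, two rating systems, two very different-looking numbers – and no contradiction anywhere.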
- Also, it’s long been acknowledged that Steam’s rating system is utterly useless because almost everything ends up with a positive rating. In fact, the only games I know of that have gotten below-positive reviews are the ones that are brigaded by idiots or that literally ship in a non-functional state. Metacritic’s rating system is far more useful than one that says ‘EVERYTHING IS AWESOME!’
- TB also goes into a digression about how it seems like critics don’t value or like open-world games or game mechanics. Seriously, he talks about this at length. Meanwhile, in the real world: GTA V: 97%. Far Cry 4: 85%. Arkham Knight: 87%. MGSV: 93%. Shadow of Mordor: 84%. Speaking from direct experience with all of these games except MGSV, I’m confident that all of them are much better games than Mad Max, and that Mad Max SHOULD score significantly lower than all of them.
- I’m continually stupefied by people who are shocked that different reviewers give different scores – as if a successful and healthy field of criticism would produce 200 reviews so identical that there’s no point in reading more than one. That isn’t accurate, and it isn’t true of any other category of reviewed art.
- I do admire the fact that TotalBiscuit accurately describes the sunk cost fallacy (critics can be more critical of a work because they get it for free, whereas gamers feel more inclined to defend their investment) but then suggests it’s the critics who have a brain bug in this situation. No, it’s the gamers who are being irrational.
Now then, he is quite correct that game reviewers often have different opinions and different criteria than the masses – and like I said before, you’d expect this in any healthy genre, including film. You would expect game reviewers to have a lot more context (i.e. a lot more games played) to draw on for comparisons. And you would expect game reviewers to give more kudos to games that actually bring something new to the mix. This is the system working as designed. As TotalBiscuit aptly points out, movies have the same issue, where Transformers movies get crappy reviews but earn billions of dollars.
But his assertion that the problem is ‘a score at the end’ is very silly. I’ve seen no evidence that scores for games don’t work (they certainly do for movies, for example), and scores are a very useful at-a-glance way to get the gist of what’s going on. More to the point, scores are great for review aggregator sites like Metacritic – they make it easy for consumers to scan the Metacritic page, see at a glance who gave the most extreme reviews, and read the reasons both sides gave for their scores. Because at the end of the day, the most useful thing to a consumer is to actually find reviewers whose opinion most maps to their own tastes. In fact, if I were to make a Metacritic competitor, it would DEFINITELY be score-based, but it would allow readers to include and exclude sources in the mix. Think Polygon is worthless? Remove it! There’s certainly nothing that prevents that technology from being built.
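And really, the core of that hypothetical aggregator is trivial to build. A minimal sketch (the outlet names and scores here are placeholders I made up, not real review data):

```python
# Minimal sketch of a score aggregator that lets the reader
# include/exclude sources before computing the average.
# Outlet names and scores are hypothetical placeholders.
reviews = {
    "Outlet A": 70,
    "Outlet B": 85,
    "Outlet C": 40,
    "Outlet D": 75,
}

def aggregate(reviews, excluded=()):
    """Average the scores from every outlet the reader hasn't excluded."""
    kept = [score for outlet, score in reviews.items()
            if outlet not in excluded]
    return sum(kept) / len(kept) if kept else None

print(aggregate(reviews))                         # all sources: 67.5
print(aggregate(reviews, excluded={"Outlet C"}))  # reader filtered one out
```

Everything past this – persistence, per-user profiles, a UI – is plumbing, not rocket science.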
Also, developers love scores because we’re gamers, and like all good gamers, we like to compare ePeens with our competitors.
1) In Steam’s case, I don’t really care much whether the result is positive or negative, but I do look at the approximate percentage (much as one uses RottenTomatoes, except that nobody really tabulates the score in Steam’s case), not to mention the content of the user reviews. I wish everyone made some effort to explain their rating, but many don’t.
2) Speaking of RT, let’s not forget that in the case of a few popular (as in, based on a popular franchise) films, you saw filmgoers making death threats to the few critics who hadn’t liked it. In some of the cases, those filmgoers couldn’t have seen the film, which had only been screened for critics at that time. It’s a phenomenon I compare to GamerGate in its hostility to criticism, especially criticism that is based on reasons that fail its ‘objective’ criteria, like whether the game is fun, and especially criticism which questions the legitimacy of the gamer lifestyle. I made that point in a piece on GamerGate I wrote after the entire Airplay affair: https://medium.com/@Vetarnias/gamergate-s-threat-to-journalistic-independence-9f40dc58b7cd
3) Metacritic: the site, if anything, has convinced me to distrust most of the professional game critics because of how the professional aggregate score is often completely at odds with the user reviews. Already, the oddity here is that the boosters at Metacritic are not the average users (after all, how many times have you seen a filmgoer complain that a critic *liked* a film?), but the professional critics themselves. See how in the case of Game of War — you know, the glorified browser game that could afford Kate Upton for its ads — the professional critics (four in all, from outlets I’ve never heard of) gave an aggregate score of 67%, while the users gave it an average of 2/10. Who stresses the point that the game is pay-to-win? Nary a word of that from the professionals. Instead, it’s the regular players who bring up the ugly side of the game. And not just of this game, but of several others, while the professionals often won’t risk giving something below 60%.
The only issue I take with Metacritic is that it seems inconsistent in what publications it follows, from what I’ve seen.
Unless a title is absolutely massive, like GTA V or MGSV, it runs the risk of having some sites aggregated, while a different non-massive game has a different set of sites aggregated for it.
It’s been a while since I’ve looked at the site, so perhaps that has gotten better since I last visited, or it’s possible I missed a broader picture.
I do agree with your suggestion of being able to filter reviews, though. That would definitely clear up a lot of the (perhaps perceived) issues I have with Metacritic.
Regarding the masses/critics divide, I read something interesting from moviebob a while back.
http://moviebob.blogspot.ca/2014/09/a-long-post-about-gamergate.html
Little wonder this appeared on his blog and not at The Escapist, where he still was at the time.
What he doesn’t say, however, is that while it’s true that film critics look for independent stuff because they’re tired of the weekly slop they’re relentlessly offered, see what happens when a film isn’t screened for them by the studios: At best they’ll mention in passing, somewhere (maybe at the bottom of another review, in an earlier column, or on Twitter), that the film wasn’t screened for critics. Shorthand for: even the studio knows it’s crap. At worst, they’ll treat the film as if it never happened.
What you won’t see is the critic actually bothering to review the film in question. His readers are in the know: it’s not screened, so of course it’s bad. Initially, the studios reserved this course of action (apart from cases involving major plot twists they didn’t want to risk being spoiled, like “Psycho”) for films they predicted would be not only critical but also financial duds, a waste of time and/or money for everybody. But in recent years, they’ve also done it for bad blockbuster films that they just didn’t want to risk being financially jeopardized by critics, no matter how little influence critics had by that time.
And that’s the problem I have with both film and game critics. They’re beholden to the companies producing what they’re meant to review, yet are too lazy/comfortable to try to obtain their independence — a situation GamerGate is keenly aware of and tries to exploit to its advantage.
A film isn’t screened for critics? “Dear readers, the film in question is not screened for critics; quality work, ain’t it? Wink wink.” Then nothing more. As though all films were bad in the same way, and not operating on a scale ranging from “boring” to “objectionable”. (It might even be a good film that its studio wanted buried for one reason or another — William Goldman, in his memoirs, writes of a very good film, from the seventies or early eighties I guess, as he doesn’t name it, that was swept under the rug by its studio because it was a pet project of the ousted previous CEO — or at least so-bad-it’s-good.)

If you can’t see an ethical reason why you, as a professional critic, should buy your own ticket on opening night and review the film without the studio holding your hand, maybe there’s a moral or intellectual one? As in, maybe film criticism isn’t about your status or your ego, but about serving, if not your readers (there’s an element of pandering to this that I dislike, which carries the unfortunate implication that a critic MUST reflect the tastes of his readers, opening the door wide to the likes of GamerGate and their “consumer revolt”), then the artistic medium of film itself?

Instead, by implying that a review published a few days after opening night is already too late, you not only (1) march to the Hollywood tune of the importance of opening-weekend box-office returns, as though your task were one not of criticism but of marketing, but also (2) buy into the notion that film is a disposable product with a shelf life of two days, to be trashed after consumption and not worthy of extended discussion. To be fair, #2 is true of most films, but it’s presumptuous to claim to know that beforehand, and irresponsible to abdicate your role as gatekeeper in favor of, of all people, the studio that made the film, with the reasoning (which I’ve actually seen) that “they should know, since they made it”.
And video game critics, my God, game critics. Terminal boosterism — and GamerGate wants to keep it that way. Rampant grade inflation. Just like “not screened for critics” has become shorthand for “bad film”, 7/10 has become shorthand for a bad game that in any other media would have been given a 3 or 4 on the same scale. Everyone knows this, yet this continues. The gaming press is too close to games, and too close to the game studios, and is by and large homogeneous. That’s where GamerGate’s peculiar brand of hypocrisy kicks in: denounce this homogeneity as evidence of collusion (as with the GameJournoPros affair), but invariably attack the few outlets or writers that seek something else, like Polygon (or even before GG, someone like Carolyn Petit for her review of GTAV in which she pointed out the misogyny despite a score of 9.0), and keep the others in line with threats of putting pressure on their advertisers.
Then there’s the YouTube scene, where PewDiePie is worth HOW MUCH? Dancing the fine line between “we’re reviewers” (when we need access) and “we’re not reviewers” (when it comes to ethics and such). These guys I regard as parasites, nothing more. Yeah, yeah, it’s mostly because of my bitterness at how someone could become so wealthy doing something that contributes nothing to society, then become respected not for what they do but for how much they make doing it, while people with socially essential tasks, whether it’s a university professor or a garbage collector, go on being both underpaid and denigrated. This speaks to a decadence in our society to which GamerGate just inadvertently pointed with giant flashing arrows.
You want to be a serious games reviewer? (I’ve never been keen on making a distinction between critic and reviewer — I’d call an academic a scholar.) I don’t necessarily mean Gerstmann — he’s honest, sure, and he was once famously sacked for his integrity, yes, but as a games reviewer, sorry to say, he’s middling. Well, you’ll be underpaid, and readers will still treat your stuff as ephemeral garbage, not to mention demand that you be fired if you dare to express an opinion contrary to the consensus. Meanwhile, we got that popular YouTuber, I forget his name (not the Biscuit), who wanted to be paid by a game studio to make videos about its game, because ETHICS.
Monsieur Homais is now a YouTuber.
I am not surprised he hates scores, considering the linguistic leaps he takes whenever you try to pin his job down. TB will tie himself in semantic knots denying he is a reviewer, a let’s player, a critic, a journalist, anything that would force him to have any accountability for what he says. To give a definitive score would mean giving a final, definitive “This is what I actually think about this game” opinion that he would need to stand by, rather than “These are only my first impressions and hold no water”. But then, this is the guy who decided “Dear Esther” was “not a game” because critics were using big words to describe it :S
I found the drama/clickbait surrounding this game to be excellent PR–even if not entirely intentional.
You’re right that comparing two unrelated numbers, like the up/down percentage in steam vs average of review scores, is silly and meaningless. But you see review scores as useful while they have the same problem. Comparing someone’s subjective score on their personal meter with someone else’s is just as meaningless. The problem with numbers is that people expect them to be exact and comparable, and review scores are neither.
If you want to know a review’s conclusion “at a glance”, read the last paragraph or the box with “ups and downs”. If someone won’t read even a few words and looks for a single number, they don’t want to know how good the game is, they only want to make sure the review gave the “correct” score.
Just wanted to poke my head in as someone who has written more reviews than I can remember, including games, for publications that use scores and some that don’t. Most pubs, if you review their criteria, say something like this:
10/10: Gaming perfection.
9/10: A great game. Tiny flaws.
8/10: A good game with a few issues.
7/10: A pretty good game, but not without problems. Hopefully a patch improves this.
6/10: Major bugs, flawed execution, big technical issues.
Below 7ish, you start pushing from subjective matters into objective ones. The issues *change.* You go from “I thought the missions were boring and repetitive” to “Couldn’t run for twenty minutes without crashing.”
The numbers vary depending on the scale in question, but examine most sites and you’ll find that the bottom of the scale is often reserved for games with deep technical problems as opposed to story issues or game length. It’s comparatively rare for games to score this badly because most games aren’t this terrible. The industry may not produce many diamonds, but it does typically prevent utter turds (Arkham Knight notwithstanding.)
Part 2 of above:
The reason you’ll often find that technical problems anchor the bottom of review metrics is because games have a vast array of technical hurdles to clear that movies simply lack. A movie can certainly be judged poorly shot or thematically incoherent, but there is no question of whether or not a movie will literally *run* on movie hardware. Look at the problems surrounding Arkham Knight on PCs, or some of the Assassin’s Creed Unity bugs from a year ago, and you’ll see good examples of how technical bugs can ruin games or render them unplayable for significant swathes of the gaming public. If Darth Vader had walked on screen for the first time and half the gaming public had seen David Prowse’s naked body while half saw Vader’s armor, the audience response would’ve been confused, at best.
There is, I think, general agreement that an *unplayable* game is much worse than a game that a reviewer may find boring or poorly executed. This means that scores below some point are going to be reserved for titles with game-breaking bugs or incredibly poor implementations.
That’s why you see mediocre titles propped up. It’s not because game reviewers are beholden to studios. It’s because generally speaking, the lowest scores are reserved for games that literally can’t be played out of the box or have such problems (like save game-deleting bugs or similar issues) that even a baseline mediocre experience can’t be guaranteed.
Related: The reason that film critics tend to prefer studio screenings is the same reason we hardware reviewers prefer substantial lead times for product launches. Newspapers and other print media typically have lead times of weeks, at the very least. The article you’re reading about a just-released title was prepared weeks or months in advance.
Online news tends to move faster, but I still need time — days or weeks if at all possible — before putting together a comprehensive review of something. It’s just how reviewing *works.*
‘Because at the end of the day, the most useful thing to a consumer is to actually find reviewers whose opinion most maps to their own tastes.’
Couldn’t agree more.
However, there will always be exceptions, and reviewers/critics have their quirks, just as individual gamers will have loyalties to dev studios based on previous releases.
I echo a previous comment: in this day and age of easy access to in-game footage after release, the sensible decision is to wait a week or two after launch and watch footage on Twitch or YouTube. See how hampered the game is by tech issues (or at least wait long enough for the more tech-savvy to post fixes).
In short, don’t get caught up in the pre-release purchase frenzy… um, unless it’s by your favourite dev studio 😉
I can’t stand video reviews, but I agree that validating whether you’ll like a game or not is important.
You don’t even need to agree with a reviewer. You just need a predictable mapping function.
Many years ago, there was a reviewer in Denver. He was *highly* useful to us, because his opinions were pretty much 180 degrees out of phase with ours. If he hated it, we should get tickets. If he loved it, we should skip it.
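That “predictable mapping” idea can be made concrete: even a reviewer whose ratings run strongly *opposite* to yours is informative, because you can simply invert his verdicts. A toy sketch with invented ratings (nothing here comes from any real reviewer’s data):

```python
# Toy example: a reviewer whose ratings are strongly negatively
# correlated with yours is still useful -- the mapping is
# predictable, so you just invert it. All ratings are invented.
mine     = [9, 2, 8, 3, 7, 1]  # my scores for six games/films
reviewer = [2, 9, 3, 8, 2, 9]  # the Denver-style contrarian's scores

def pearson(xs, ys):
    """Plain Pearson correlation coefficient, no external libraries."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson(mine, reviewer)
print(f"correlation: {r:.2f}")  # strongly negative => invert his verdicts
```

A correlation near -1 is every bit as useful as one near +1; it’s the reviewers hovering around 0 who tell you nothing.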
Newspaper reviewer or online guy?
Ebert in his last years was like that to me, when he phoned in his reviews just because that was what he was paid for.