Blog

How Player Ratings Work in Football Analytics

2026-04-08 · Lopez, U.

Player ratings are one of the most visible outputs of football analytics. Whether it is a post-match score from a newspaper, a seasonal ranking on a data platform, or a card rating in a video game, numbers are constantly attached to footballers. But how are these ratings actually calculated, and what do they really tell us?

The Origins of Player Ratings

The idea of rating individual footballers dates back decades. French sports newspaper L'Équipe popularized the practice of assigning each player in a match a score out of ten. These ratings were based on the subjective judgment of journalists who watched the game. While influential, they were inherently inconsistent — different observers could arrive at very different scores for the same performance.

The rise of data collection in football created an opportunity to make ratings more objective. Companies like Opta began tracking every on-ball event in a match: passes, tackles, interceptions, shots, dribbles, fouls, and more. This granular event data became the foundation for algorithmic player ratings.

How Algorithmic Ratings Work

Most modern player rating systems follow a similar structure. They begin with raw event data from a match and apply weights to each event based on its importance. Completing a pass in your own defensive third, for example, might contribute less to a rating than completing a pass in the final third. A goal carries more weight than a shot off target, and an assist carries more weight than a simple key pass.
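The weighting idea can be sketched in a few lines. The event types and weight values below are illustrative assumptions, not any provider's actual coefficients:

```python
# Hypothetical event weights -- real providers use proprietary values.
EVENT_WEIGHTS = {
    "pass_defensive_third": 0.02,
    "pass_final_third": 0.05,
    "key_pass": 0.3,
    "assist": 0.8,
    "shot_off_target": 0.1,
    "goal": 1.0,
    "tackle_won": 0.25,
}

def raw_score(events):
    """Sum the hypothetical weights over a list of event-type strings."""
    return sum(EVENT_WEIGHTS.get(e, 0.0) for e in events)

match_events = ["pass_final_third", "key_pass", "goal", "shot_off_target"]
print(round(raw_score(match_events), 2))  # 1.45
```

Notice how the goal dominates the total while the routine final-third pass barely registers, which is exactly the prioritization described above.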

The weighted events are then aggregated into a composite score. Some systems normalize the score relative to the player's position, so a centre-back is not penalized for taking fewer shots than a striker. Others compare the player's output to the average for their position in the same league and season, producing a percentile or z-score.
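A z-score comparison against positional peers can be sketched as follows; the centre-back scores here are made up for illustration:

```python
from statistics import mean, stdev

def position_z_score(player_score, peer_scores):
    """Standardize a player's composite score against same-position peers."""
    mu = mean(peer_scores)
    sigma = stdev(peer_scores)
    return (player_score - mu) / sigma

# Hypothetical composite scores for centre-backs in one league season.
cb_scores = [5.8, 6.1, 6.4, 6.7, 7.0]
print(round(position_z_score(7.0, cb_scores), 2))  # 1.26
```

A z-score of roughly +1.3 says this centre-back sits more than one standard deviation above the positional average, without ever comparing them to a striker's shot volume.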

The specific weights and formulas vary between providers. WhoScored, SofaScore, and FotMob each use proprietary algorithms that produce different ratings for the same performance. This is a feature, not a bug — each system reflects different priorities and analytical philosophies.

Match Ratings vs. Season Ratings

It is important to distinguish between match ratings and season ratings, as they serve different purposes.

A match rating summarizes a single performance. It captures what a player did over ninety minutes — or however long they were on the pitch. Match ratings are reactive and volatile; a player can score an eight in one game and a five in the next. They are useful for identifying standout performances and poor displays, but they are not reliable indicators of overall quality.

A season rating, by contrast, is an average or aggregate across many matches. It smooths out the noise of individual games and provides a more stable assessment of a player's contribution. When scouts and analysts compare players, they almost always look at season-level metrics rather than single-match scores.
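One reasonable way to aggregate match ratings into a season figure is a minutes-weighted average, so that short cameo appearances do not distort the result. The ratings and minutes below are invented for the example:

```python
def season_rating(match_ratings):
    """Minutes-weighted average over (rating, minutes_played) pairs.

    Weighting by minutes keeps brief substitute appearances from
    skewing the seasonal figure.
    """
    total_minutes = sum(m for _, m in match_ratings)
    return sum(r * m for r, m in match_ratings) / total_minutes

# Hypothetical match ratings with minutes played.
matches = [(8.1, 90), (5.2, 90), (7.0, 45), (6.6, 90)]
print(round(season_rating(matches), 2))  # 6.69
```

The volatile single-match scores (an 8.1 followed by a 5.2) smooth out into a stable mid-six rating, which is the point of season-level aggregation.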

Positional Context Matters

One of the biggest challenges in player ratings is comparing across positions. A defensive midfielder who completes ninety-two percent of their passes and makes eight recoveries has had an excellent game, but their stat line looks very different from a winger who completes three dribbles and provides two key passes.

Good rating systems account for positional context. They define expected outputs for each position and measure players against those benchmarks. On Sportree, we use percentile rankings that compare each player to others in the same position across the same competition, ensuring that a goalkeeper is evaluated against other goalkeepers, not against strikers.
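A positional percentile rank of the kind described can be computed like this; the goalkeeper progressive-pass counts are hypothetical:

```python
def percentile_rank(value, peer_values):
    """Share of same-position peers the player's value beats or ties (0-100)."""
    below_or_equal = sum(1 for v in peer_values if v <= value)
    return 100.0 * below_or_equal / len(peer_values)

# Hypothetical progressive-pass counts for goalkeepers in one competition.
gk_values = [3, 5, 6, 8, 9, 12, 14, 15, 18, 21]
print(percentile_rank(14, gk_values))  # 70.0
```

Because the peer pool is restricted to one position in one competition, a 70th-percentile goalkeeper is being measured only against other goalkeepers, never against outfield players.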

The Role of Advanced Metrics

Modern rating systems increasingly incorporate advanced metrics beyond simple event counts. Expected Threat (xT), which measures how much a player's actions increase the probability of a goal, is used to value ball progressions. Passes into the final third and progressive carries are weighted more heavily than sideways or backward passes.
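The xT logic can be illustrated with a deliberately coarse zone model; real xT implementations fit a fine pitch grid (commonly 12x8 cells) from event data, and the zone values here are assumptions:

```python
# Hypothetical xT values for four pitch zones, increasing toward goal.
XT = {"own_third": 0.01, "middle_third": 0.03,
      "final_third": 0.12, "box": 0.35}

def action_value(start_zone, end_zone):
    """Credit a pass or carry with the change in goal probability."""
    return XT[end_zone] - XT[start_zone]

print(round(action_value("middle_third", "final_third"), 2))  # 0.09
print(round(action_value("final_third", "middle_third"), 2))  # -0.09
```

A progressive pass into the final third earns positive value, while the same pass played backwards scores negatively, which is why xT-based systems reward ball progression over sideways circulation.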

Defensive metrics have also improved. Pressures, ball recoveries in the attacking third, and the success rate of defensive duels all contribute to a fuller picture of a player's off-the-ball contribution. Older rating systems that relied heavily on tackles and interceptions often undervalued players who positioned themselves so well that they rarely needed to make last-ditch challenges.

Limitations of Player Ratings

No rating system is perfect. The biggest limitation is that current data captures on-ball events far better than off-ball movement. A player who makes a brilliant decoy run that pulls defenders out of position and creates space for a teammate receives no statistical credit for that action. Similarly, a centre-back who organizes the defensive line through constant communication contributes enormously to the team but generates few measurable events.

Tracking data — which records the position and speed of every player at twenty-five frames per second — promises to address some of these gaps. As tracking data becomes more widely available, rating systems will be able to incorporate spatial control, pressing intensity, and off-ball positioning.

How Sportree Approaches Ratings

On Sportree, we take a transparent approach to player evaluation. Rather than reducing a player to a single number, we present a multi-dimensional profile using radar charts and percentile rankings across key metrics. This allows you to see not just how good a player is overall, but where their strengths and weaknesses lie.

Our AI chat also lets you ask specific rating-related questions. You might ask "Which La Liga midfielders have the best passing accuracy this season?" or "Show me the top-rated centre-backs in the Bundesliga by defensive actions." The system queries our database and returns data-driven answers in seconds.

Player ratings will continue to evolve as data collection improves and analytical methods become more sophisticated. The most important thing for fans is to understand that no single number tells the whole story — ratings are a starting point for analysis, not the final word.