Sportmonks Prediction API

July 30, 2019 10:00 in Soccer

KICKING OFF OUR FOOTBALL PREDICTION API

Woohoo! We’ve just released our brand-new Football Prediction API! We’re so proud of our Dev Team and Data Scientists, who’ve helped us fulfil this dream.

Two years ago, we started researching how to produce the most accurate and reliable probability estimates for sports events, based on our own SportMonks data. The result is the Prediction API we are introducing to you today!

In this two-course blog, we will first present the most prominent features, and then dig a little deeper into the technical aspects of our algorithm.

For starters, let me give you some of our algorithm’s key principles:

  • Timely and Substantive:
    Every day, the API updates its models with the latest data from our SportMonks Football Database.
  • Data Controlled:
    It doesn’t need human intervention. It runs on statistical analysis results of the entire historical SportMonks Football Database.
  • Precise Probabilities:
    The API offers the most precise probabilities possible, by using mathematical Probability Distribution models.
  • Predictability Performance:
    Our Prediction API’s success rate and quality are monitored, so you can track our predictions’ performance. Because even smart algorithms can fail, it is important to understand what is predictable and what is not.

Understanding the model

The model

Our main model follows Bayesian principles. This means we use probability distributions to describe our model parameters.

First, it is important to ask ourselves what we want to predict. Obviously, a football match’s result comes down to the final score. The core task, therefore, is to model the goal distribution of each team. At this point, it is important to distinguish between the goal distribution on the one hand, and the expected goals metric¹ (xG) on the other, which gives the probability that a goal attempt will result in a score. Through our Bayesian model, the goal distribution is extracted from our historical data. It tells us the expected scoring rates of two teams for their next match.

To learn those distributions, we can use all the data features available to us: events, players, commentaries, statistics… The hard part is to select the features that best describe the teams’ goal distribution.

Once we have learnt the goal distribution, we can use it to predict many matches.
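
For instance, once we have a scoring rate for each team, turning the two rates into match-outcome probabilities is mechanical. Below is a minimal sketch, assuming plain, independent Poisson score distributions (the full model adds the layers described in the next section); the rates 1.6 and 1.1 are made-up examples:

```python
import numpy as np
from scipy.stats import poisson

def outcome_probabilities(lam_home, lam_away, max_goals=10):
    """1X2 probabilities from two expected scoring rates.

    Assumes independent Poisson score distributions; max_goals
    truncates the score grid (10 is plenty for football rates).
    """
    goals = np.arange(max_goals + 1)
    p_home = poisson.pmf(goals, lam_home)   # P(home scores i)
    p_away = poisson.pmf(goals, lam_away)   # P(away scores j)
    grid = np.outer(p_home, p_away)         # joint P(i, j)
    return {
        "home win": np.tril(grid, -1).sum(),  # i > j
        "draw": np.trace(grid),               # i == j
        "away win": np.triu(grid, 1).sum(),   # i < j
    }

print(outcome_probabilities(1.6, 1.1))
```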

The technical part

In this section, I’ll be giving you more mathematical details. Please be advised that this may get a bit technical. Of course, we are not going to give away all our secrets, but enough to help you understand what it is about.

As said before, we are performing a Bayesian analysis. To extract our main variable, the goals, we need to choose a probability distribution. As a starting point, we will be using the Poisson distribution², a positive distribution for count data. This means that if $y$ represents the number of goals scored by a team, we assume the following distribution:

$$y \sim \text{Poisson}(\lambda)$$
where $\lambda$ is the distribution’s unique parameter. It can be interpreted as the team’s expected number of goals. It is often also presented as the team’s strength, combining attack and defence effects, but this is not the route we will be taking. Instead, we will treat $\lambda$ as a random variable and choose a distribution for it. We know it is a positive continuous variable. Therefore, a natural prior distribution is the Gamma distribution:
$$\lambda \sim \text{Gamma}(\alpha, \beta)$$
The Gamma distribution has two parameters, which allow it to cover a large family of shapes. The parameter $\alpha$ can be interpreted as the number of goals scored by the team over $\beta$ matches. Since $\lambda$ is now a random variable, it has an expected value, which we call $\mu = \mathbb{E}[\lambda]$.
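
To make the hierarchy concrete, here is a minimal simulation of this Gamma-Poisson layer; the values of $\alpha$ and $\beta$ are illustrative, not taken from our model:

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative prior: alpha goals observed over beta matches.
alpha, beta = 24.0, 16.0            # E[lambda] = alpha / beta = 1.5

# numpy parametrises Gamma by shape and *scale*, so scale = 1 / beta.
lam = rng.gamma(shape=alpha, scale=1.0 / beta, size=100_000)
goals = rng.poisson(lam)            # one simulated match per lambda draw

print("mean scoring rate:", lam.mean())   # ≈ 1.5
print("P(0 goals):", (goals == 0).mean())
print("P(2+ goals):", (goals >= 2).mean())
```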

Now we need to find the parameters of the Gamma distribution. We are interested in $\mu$, which in this case is the expectation of the Gamma distribution, $\mathbb{E}[\lambda] = \frac{\alpha}{\beta}$. Please note that this is also our expected goals measure. At this point, we have almost everything we need. The last step is to incorporate the set of features $x$ that will help us fit the distribution parameter. To do so, we will assume that
$$\mu = \theta^\top x$$
In this equation, $x$ is a vector of features of interest, while $\theta \sim \mathcal{N}(0, \sigma)$ is the Gaussian prior on the parameters.
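
A small sketch of this feature link, with made-up feature values and weights purely for illustration (the real features, and how a positive mean is guaranteed, are not disclosed here):

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical match features for one team (illustration only),
# e.g. recent scoring form, opponent defensive form, home advantage.
x = np.array([1.4, 0.9, 1.0])

sigma = 0.5
theta = rng.normal(0.0, sigma, size=x.shape)   # draw from the Gaussian prior

mu = theta @ x          # mu = theta^T x
mu = max(mu, 1e-3)      # assumption: clip to keep the Gamma mean positive

print("expected scoring rate mu:", mu)
```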

During the training phase, the model determines the posterior distribution of the parameters, given the data and our prior distributions. The prediction process involves many steps, from data collection to feature engineering through model training.
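
We will not say which tooling we use; as a purely illustrative sketch, here is how such a posterior could be fitted with PyMC, on toy data, with an exponential link (our assumption) to keep the Gamma mean positive and a fixed concentration:

```python
import numpy as np
import pymc as pm

# Toy training data: one feature row per team-match, observed goals.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y_obs = rng.poisson(1.4, size=200)

with pm.Model() as model:
    theta = pm.Normal("theta", mu=0.0, sigma=1.0, shape=3)   # Gaussian prior
    # Assumption: exponential link keeps the Gamma mean positive.
    mu = pm.math.exp(pm.math.dot(X, theta))
    beta = 10.0                                  # assumed fixed concentration
    lam = pm.Gamma("lam", alpha=mu * beta, beta=beta, shape=200)
    pm.Poisson("goals", mu=lam, observed=y_obs)
    idata = pm.sample(1000, tune=1000, chains=2)  # posterior samples
```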

To help you understand what it takes, we have summarised our workflow in the following chart.

[Workflow chart: a directed acyclic graph leading from Sportmonks data through pre-processing to a dataset, which feeds the home and away Bayesian models; training these models produces the predictions.]

Quantifying predictability

There are several ways to measure a prediction’s quality. For example, we could count the number of times we predict the correct result, which is known as accuracy. We could also use the ranked probability score³ or the Brier score⁴. Instead, we prefer an entropy-related measure: the log loss.

For one event, it is represented in the following equation:
$$\ell = -\sum_{i \in \Omega} y_i \ln p_i$$
$\Omega$ is the set of possible outcomes, $p_i$ represents the probability of outcome $i$, and $y_i \in \{0, 1\}$ is the event label, where the value 1 stands for a success and 0 otherwise. The label can also be interpreted as the a posteriori probability once the event result has been observed. For instance, suppose the event is "team A plays team B" and we want to predict the winner. We have $\Omega = \{\text{"A wins"}, \text{"Draw"}, \text{"B wins"}\}$ and we assume the following probabilities: $p_A = 0.4$, $p_D = 0.1$ and $p_B = 0.5$. The following table shows the log loss for the different outcomes.

| Event | A wins | Draw | B wins |
|---|---|---|---|
| Log loss $\ell$ | $-\ln 0.4 \approx 0.92$ | $-\ln 0.1 \approx 2.30$ | $-\ln 0.5 \approx 0.69$ |
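
A quick check of those numbers (a minimal sketch; the probabilities are the example values above):

```python
import math

probs = {"A wins": 0.4, "Draw": 0.1, "B wins": 0.5}

# Log loss for a single event: only the realised outcome's
# probability contributes, since its label y_i is 1.
for outcome, p in probs.items():
    print(f"{outcome}: log loss = {-math.log(p):.2f}")
```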

Finally, a league’s predictability is calculated by averaging the log loss across all the events. We have chosen the match winner as the main event to compute the league predictability. There are only three possible outcomes: home team wins, draw, away team wins.

The closer the average log loss is to zero, the better the predictability.

A purely random model would assign a probability of 1/3 (about 33%) to each outcome. In this case, the random model predictability is $\ell^{\text{rand}} = \ln 3 \approx 1.0986$.

Remark: Any league with a predictability close to or above 1.0986 should be considered unpredictable.

Another interesting model focuses on historical frequencies. On average, home teams win 45% of the time and away teams 30%, while 25% of matches end in a draw. As a result, the historical model predictability is $\ell^{\text{hist}} \approx 1.0671$.

Remark: Any model with a league predictability close to or above 1.0671 should be considered incapable of learning how to beat the historical model.
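
Both baselines are easy to reproduce (a minimal sketch using the percentages quoted above):

```python
import math

# Random model: uniform 1/3 probability on each of the 3 outcomes.
l_rand = math.log(3)                          # ≈ 1.0986

# Historical model: home 45%, draw 25%, away 30%. Assuming those
# frequencies are the true rates, its expected log loss is the
# entropy of that distribution.
hist = [0.45, 0.25, 0.30]
l_hist = -sum(p * math.log(p) for p in hist)  # ≈ 1.0671

print(f"random baseline:     {l_rand:.4f}")
print(f"historical baseline: {l_hist:.4f}")
```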

To make the measure easier to understand, League Predictability Classification is divided into four categories: poor, medium, good, high.

The Prediction API

What you get

Every day, our models compute predictions for upcoming matches two weeks ahead. The models are updated on the basis of new information coming in every day. The set of actual predictions delivered by our algorithm is described below:

  1. Winner: the probabilities of a home win, a draw and an away win.
  2. Correct score: the non-zero probabilities of the possible score lines.
  3. Over/under: the probabilities that the goals scored by the home team, the away team, and both combined fall over or under a given line.
  4. Both teams to score: the probability that both teams score.

In addition to these probabilities, we generate the League Predictability through the same model. The League Predictability will reveal the probability set’s quality.

For each league you get:

  1. The league predictability score, given by the average log loss.
  2. The league predictability classification, given by one of the four categories: poor, medium, good, high.

Last but not least, the generated probabilities are analyzed together with the odds markets available in our database. This is called the Value Bet Model.
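
To show how these pieces might fit together on the consumer side, here is a hypothetical sketch; the JSON shape and field names are illustrative inventions, not our documented response format (consult the Sportmonks docs for the real payload):

```python
import json

# Hypothetical prediction payload for one fixture (illustration only).
payload = json.loads("""
{
  "fixture_id": 12345,
  "winner": {"home": 0.48, "draw": 0.27, "away": 0.25},
  "both_teams_to_score": 0.55,
  "league_predictability": {"log_loss": 0.98, "class": "good"}
}
""")

winner = payload["winner"]
best = max(winner, key=winner.get)       # most probable 1X2 outcome
print(f"Most likely outcome: {best} ({winner[best]:.0%})")
print(f"League predictability: {payload['league_predictability']['class']}")
```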

Coming soon

The upcoming release on 30 July includes the Prediction Model and the Value Bet Model. The Player Contribution Model will follow at a later stage. As more data and more data features enter the database, the model will keep on learning and improving. It will also grow to cover extra prediction features, like corner probabilities, half-time results, or the final league table. In other words, our new API will help you stay on top of your game! Stay tuned!


  1. The expected goals metric is a shot-based measure.

  2. See for instance Dixon, M. and Coles, S. (1997), Modelling association football scores and inefficiencies in the football betting market, Applied Statistics, 46, pp. 265-280.

  3. Epstein, E. (1969), A Scoring System for Probability Forecasts of Ranked Categories, Journal of Applied Meteorology, 8(6), pp. 985-987.

  4. Brier, G. (1950), Verification of Forecasts Expressed in Terms of Probability, Monthly Weather Review, 78(1), pp. 1-3.
