Two Super Models
If you read my previous article, you will know that it was time for another attempt at data-driven betting excellence, and with another weekend of Premier League action just hours away, it is time to see how I fared.
THE SCORES ARE IN
As before, it was a mixture of gut instinct and data-driven football predictions in an attempt to get the better of the bookmakers. Before we see how the predictive models fared, let's first recap the picks that came from a combination of head and heart:
Chelsea – Y
Aston Villa – N
Tottenham/Manchester City Draw – N
Manchester United – Y
Everton – Y
West Ham – Y
Leeds/Arsenal Draw – N
Leicester – N
Crystal Palace – N
Wolves/Southampton Draw – Y
Once again a bang-average 5 out of 10, which means that my track record for the season so far is 48/88 (54.54%), so I am holding my own in the "anyone with any real sense could have guessed some of those outcomes" category.
So although it is far from a disgrace, it is little to crow about either. Now it is time to see how both my predictive models fared.
MODEL 1 – THE UNCHANGED VERSION
A slight improvement on the opening week, as this time around I picked up 6 out of 10, which means we are 11 for 20 in the data-driven football predictions part of the project, giving me a 55% hit rate after two weeks. Again, not bad, but at least this is success based more in logic than in an element of blind faith.
With that in mind, how did that match up against the other model in existence:
This one also returned 6 out of 10; the only difference in results was that it picked a Leicester win over Liverpool rather than a draw, and with Liverpool winning, both predictions were a complete bust.
However, an above-average start is not all that bad, and it gives us hope for improvement along the way.
STICK OR TWIST
I must admit that by the time West Ham got the better of Sheffield United, I was fearing the worst, as at that point I had already picked up 5 out of 6. Although not winning the lot is annoying, the last thing you want is 9 out of 10, especially with the odds being either 12,077/1 or 7,664/1.
So although you want the models to be as successful as possible, the last thing you want is for them to fall just short of the line, leaving you to burst into tears at the prospect of Aston Villa losing you a boatload of money.
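As an aside, the eye-watering prices above come from the way accumulator odds compound: each selection's decimal odds are multiplied together, and a single losing leg sinks the whole bet. The article does not give the individual match prices, so the odds in this sketch are purely hypothetical, chosen only to show how ten modest prices snowball into a five-figure return.

```python
from functools import reduce

# Hypothetical decimal odds for a ten-match accumulator. The article
# quotes overall returns of 12,077/1 and 7,664/1 but not the individual
# prices, so these values are illustrative only.
decimal_odds = [2.5, 2.6, 2.4, 2.7, 2.5, 2.6, 2.4, 2.5, 2.6, 2.5]

# An accumulator multiplies every selection's decimal odds together;
# one losing leg voids the entire bet.
accumulator = reduce(lambda a, b: a * b, decimal_odds)

stake = 1.0
print(f"Return on a {stake:.2f} stake: {stake * accumulator:.2f}")
```

Even with every leg priced around evens-to-2/1, the combined return here lands in the same ballpark as the figures quoted above, which is exactly why a nine-out-of-ten near miss stings so much.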
With all the results in, let's now look at my personal scoreboards:
Pure Gut Instinct: 48/88 (54.54%)
PremBot 1: 11/20 (55%)
PremBot 2: 6/10 (60%)
Hybrid Model (Combining First 68 matches and then PremBot only): 49/88 (55.68%)
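For anyone wanting to sanity-check the scoreboard, the hit rates are just correct picks divided by total picks. A minimal sketch (note that rounding to two decimal places can differ by a hundredth from the truncated figures quoted above):

```python
# Scoreboard from the article: model name -> (correct picks, total picks).
scoreboards = {
    "Pure Gut Instinct": (48, 88),
    "PremBot 1": (11, 20),
    "PremBot 2": (6, 10),
    "Hybrid Model": (49, 88),
}

# Hit rate = correct / total, shown as a percentage.
for name, (correct, total) in scoreboards.items():
    print(f"{name}: {correct}/{total} ({correct / total:.2%})")
```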
So on the evidence above, there is still some work to be done. Then again, this does raise an interesting debate: do you tweak the models you already have, or do you just build new versions?
To be honest, last season I was doing the latter, and that caused all manner of problems in terms of knowing what was successful and what wasn't. Therefore, this time I am only going to think up new variants once more insight is derived, and in theory increase my chances of success (albeit also my outgoings).
With that in mind, that is your lot for this article, as I now have to quickly prep the next set of stats for another round of data-driven betting predictions.
Happy punting and thanks for reading.
(THESE ARE NOT TIPS PER SE, THESE ARE JUST DATA DRIVEN FOOTBALL PREDICTIONS IN A PURE TEST AND LEARN MODELLING CAPACITY)