“Is your win rate normally distributed?”
“How much would you win in Vegas if we gave you $10k?”
“What did your poker win rate model look like?”
Problem With My Approach
First let me explain win rate, or bb/100. A bb/100 of X means we expect to win X big blinds (bb) per 100 hands, so each sample consists of 100 hands of poker. Unfortunately, the graph below does not represent my true bb/100. In my SQL database, hands are stored at the session level. Without access to individual observations, I had to construct an estimated range for my win rate. I did so by breaking sessions of over 100 hands into samples of 100 hands, assigning each sample the average bb/100 over that session. So the graphs below show a simulated bb/100, but they still paint an accurate picture of my win rate.
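As a rough illustration, here is a minimal sketch of that resampling step, assuming a hypothetical session-level data frame `sessions` with `hands` and `bb_per_100` columns (the names, the example values, and the handling of sub-100-hand sessions are my assumptions, not from the original code):

library(dplyr)

## hypothetical session-level data: total hands and average bb/100 per session
sessions <- tibble(
  hands = c(250, 480, 90),
  bb_per_100 = c(12.4, -3.1, 55)
)

## break each session of 100+ hands into floor(hands / 100) samples of
## 100 hands, each inheriting the session's average bb/100
win_rate_array_df <- sessions %>%
  filter(hands >= 100) %>%
  mutate(n_samples = floor(hands / 100)) %>%
  tidyr::uncount(n_samples) %>%
  transmute(win_rate_array = bb_per_100)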
This is what the plots of that distribution look like: roughly normal.
Focusing on a narrower range of the distribution:
library(dplyr)

## estimated average bb/100
average_win_rate <- mean(win_rate_array_df$win_rate_array)
## build 1.96-sd bounds for a rough normality check
upper_end <- sd(win_rate_array_df$win_rate_array) * 1.96 + average_win_rate
lower_end <- sd(win_rate_array_df$win_rate_array) * -1.96 + average_win_rate
## total number of samples in the distribution
total_count <- length(win_rate_array_df$win_rate_array)
## count the samples falling within ~2 sd of the mean
samples_within_2sd <- win_rate_array_df %>%
  filter(win_rate_array >= lower_end & win_rate_array <= upper_end) %>%
  summarise(length(win_rate_array))
## share of samples within ~2 sd (a normal distribution puts ~95% there)
print(samples_within_2sd / total_count)

## length(win_rate_array)
## 1 0.9406088
For our Vegas example we would want to use the true \(\sigma\) of my bb/100, which I can't calculate from this data but know is around \(\sigma = 80\) bb/100. Assuming we want to put the full $10k in play, one bb = $100. Let's assume I play 1k hands of poker in Vegas overall (about 30 hours).
\[ bb/100 = 6 \]
\[ \sigma = 80 \]
\[ hands = 1000 \]
\[ samples = 10 \]
\[ bb = \$100 \]
\[ interval = 6 \pm 1.96 \cdot \frac{80}{\sqrt{10}} \]
\[ -44\ bb/100\ \text{to}\ 56\ bb/100 \]
\[ \text{money won} = 56 \times 10 \times 100 = \$56{,}000 \]
\[ \text{money lost} = -44 \times 10 \times 100 = -\$44{,}000 \]
Overall, we are 95% confident that I would finish somewhere between -$44k and +$56k on that $10k investment over 1k hands, with average winnings of $6k for 30 hours of poker.
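The arithmetic is easy to sanity-check with a few lines of R (the variable names here are mine):

## assumed parameters from the Vegas example
win_rate <- 6    ## expected bb/100
sigma    <- 80   ## assumed true sd of bb/100
samples  <- 10   ## 1,000 hands = 10 samples of 100
bb       <- 100  ## dollars per big blind

## 95% interval for bb/100, then converted to dollars over 1,000 hands
margin   <- 1.96 * sigma / sqrt(samples)
interval <- round(c(win_rate - margin, win_rate + margin))  ## -44 56
interval * samples * bb                                     ## -44000 56000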
How Many Hands To Be 95% Confident I Will Break Even Or Profit
To make sure I don't lose, we can solve for the number of 100-hand samples \(x\) at which the lower bound of the interval is at least zero: \[ 6 - 1.96 \cdot \frac{80}{\sqrt{x}} \geq 0 \]
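Rearranging gives \( x \geq (1.96 \cdot 80 / 6)^2 \), which is presumably where the number below comes from; in R:

## smallest number of 100-hand samples where the lower bound reaches 0
(1.96 * 80 / 6)^2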
## [1] 682.9511
683 samples! That means I would need to play 68,300 hands to have 95% confidence of showing only profit, which, needless to say, would cut into work time.
Total Aggregated Statistics By Player
The results above are the aggregated player-level statistics. Attempts to run a regression model on this data were failing, in large part because of the massive variance you saw in the Vegas example. To compound the issue, players with 55 hands needed to be treated differently than players with 50k hands, as win rate only approaches the true mean win rate over a large sample. So my models were learning on garbage data. My solution was to build custom confidence intervals for win rates and discard players whose interval spanned 0. However, these per-player stats aren't distributions, so I have no samples and no \(\sigma\). Lacking hands that could be treated as individual samples, I needed a good way to estimate \(\sigma\). I looked at playing styles, assuming that more aggressive players have a higher \(\sigma\). Something like this below.
Create some intervals. VPIP is the percentage of dealt hands that a player chooses to play; a high VPIP likely means a high \(\sigma\). For example:
40 < VPIP < 100 will be assigned a \(\sigma\) of 110
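A minimal sketch of that idea, assuming a hypothetical player-level data frame `players` with `vpip`, `win_rate`, and `hands` columns; only the 40–100 VPIP bucket comes from the text, and the other cutoffs and \(\sigma\) values are illustrative placeholders:

library(dplyr)

players_ci <- players %>%
  mutate(
    ## map playing style to an assumed sigma
    sigma_est = case_when(
      vpip <= 25             ~ 80,   ## hypothetical tight bucket
      vpip > 25 & vpip <= 40 ~ 95,   ## hypothetical middle bucket
      vpip > 40 & vpip < 100 ~ 110   ## from the text
    ),
    ## treat every 100 hands as one sample
    n_samples = hands / 100,
    ## custom 95% confidence interval around each player's observed win rate
    lower = win_rate - 1.96 * sigma_est / sqrt(n_samples),
    upper = win_rate + 1.96 * sigma_est / sqrt(n_samples)
  ) %>%
  ## discard players whose interval spans 0
  filter(lower > 0 | upper < 0)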
I then labeled winning and losing players and ran a logistic regression to classify a player as either winning or losing. My hope was that eliminating some of the noise would lead to a better model. I haven't attempted to rerun, tune, or evaluate the classifier with other metrics, but overall it correctly identified 13 of 17 winners and 64 of 65 losers on a test set.
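For reference, a minimal sketch of that kind of classifier in base R, assuming hypothetical feature and label names (`vpip`, `aggression`, `hands`, `is_winner`) and pre-split `train_players` / `test_players` data frames:

## logistic regression: classify players as winning (1) or losing (0)
model <- glm(is_winner ~ vpip + aggression + hands,
             data = train_players, family = binomial)

## predicted probability of being a winner on the held-out test set
pred_prob  <- predict(model, newdata = test_players, type = "response")
pred_class <- ifelse(pred_prob > 0.5, 1, 0)

## confusion counts: how many winners and losers were identified correctly
table(predicted = pred_class, actual = test_players$is_winner)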