1. Background statistics
Variable types
- numeric
- categorical
What do we know:
- Confidence intervals (numeric variable)
- Fisher test (categorical by categorical)
- Simple linear regression (numeric by one numeric variable)
- Linear regression with dummy variables (numeric by any variable)
Today:
- Multiple linear regression (numeric by several numeric variables)
- Multiple linear regression with dummy variables (numeric by any variable)
- Logistic (logit) regression (binary dependent variable by any number of variables of any type)
2.1 How does it work?
Logistic or logit regression was developed in [Cox 1958]. It is a regression model which predicts a binary dependent variable using any number of variables of any type.
What do we need?
\[\underbrace{y_i}_{[-\infty, +\infty]}=\underbrace{\beta_0+\beta_1\cdot x_1+\beta_2\cdot x_2 + \dots +\beta_z\cdot x_z +\epsilon_i}_{[-\infty, +\infty]}\]
But in our case \(y\) is a binary variable.
- Probability?
\[P(y) = \frac{\mbox{\# successes}}{\mbox{\# failures} + \mbox{\# successes}};\quad P(y) \in [0, 1]\]
- Odds?
\[odds(y) = \frac{P(y)}{1-P(y)} = \frac{P(\mbox{success})}{P(\mbox{failure})} = \frac{\mbox{\# successes}}{\mbox{\# failures}};\quad odds(y) \in [0, +\infty]\]
- Natural logarithm of odds
\[\log(odds(y)) \in [-\infty, +\infty]\]
2.2 Reminder about logarithms
- if log(odds) is greater than 0, we have more successes than failures;
- if log(odds) is equal to 0, we have the same number of successes and failures;
- if log(odds) is less than 0, we have fewer successes than failures.
2.3 Probability and log(odds)
\[\log(odds(s)) = \log\left(\frac{\#s}{\#f}\right)\]
\[P(s) = \frac{\exp(\log(odds(s)))}{1+\exp(\log(odds(s)))}\]
Results of the logistic regression can be easily converted to probabilities.
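This conversion is easy to sketch in code. A minimal Python illustration (the helper names `logit` and `inv_logit` are our own, not from any particular library):

```python
import math

def logit(p):
    """Convert a probability to log(odds)."""
    return math.log(p / (1 - p))

def inv_logit(log_odds):
    """Convert log(odds) back to a probability."""
    return math.exp(log_odds) / (1 + math.exp(log_odds))

# 7 successes and 3 failures -> P = 0.7, odds = 7/3
p = 7 / (7 + 3)
lo = logit(p)                    # log(7/3), a positive number: more successes
print(round(inv_logit(lo), 3))   # 0.7 — the round trip recovers the probability
```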
2.4 Sigmoid
The formula for this sigmoid is the following:
\[y = \frac{1}{1+e^{-x}}\]
Fitting our logistic regression, we should be able to reverse our sigmoid:
The formula for this sigmoid is the following:
\[y = \frac{1}{1+e^{-(-x)}} = \frac{1}{1+e^{x}}\]
Fitting our logistic regression, we should be able to move the center of our sigmoid to the left or right:
The formula for this sigmoid is the following:
\[y = \frac{1}{1+e^{-(x-2)}}\]
Fitting our logistic regression, we should be able to squeeze or stretch our sigmoid:
\[y = \frac{1}{1+e^{-4x}}\]
So the more general formula will be: \[y = \frac{1}{1+e^{-k(x-z)}}\]
where
- the sign of \(k\) determines whether the sigmoid is reversed
- \(k\) is the squeeze/stretch coefficient
- \(z\) is the coefficient that moves the center of the sigmoid to the left or right
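The roles of \(k\) and \(z\) can be checked numerically. A small Python sketch of the general formula (the function name `sigmoid` is ours):

```python
import math

def sigmoid(x, k=1, z=0):
    """General logistic curve: y = 1 / (1 + e^(-k * (x - z)))."""
    return 1 / (1 + math.exp(-k * (x - z)))

print(sigmoid(0))                    # 0.5 — basic sigmoid, centred at 0
print(sigmoid(2, k=-1))              # negative k reverses the curve: equals sigmoid(-2)
print(sigmoid(2, z=2))               # 0.5 — z = 2 shifts the centre to x = 2
print(sigmoid(1, k=4) > sigmoid(1))  # True — |k| > 1 squeezes the curve around its centre
```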
3. Numeric example
It is interesting to know whether languages with ejective sounds have on average more consonants. So we collected data from the phonological database LAPSyD: http://goo.gl/0btfKa.
- Model without predictors
How do we get this estimate value?
What does this model say? This model says that if we have no predictors and take some language, it has \(\frac{e^{-0.5306283}}{1+e^{-0.5306283}} = 0.3703704\) probability of having ejectives.
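The intercept of an intercept-only logistic model is simply the log of the observed success/failure ratio. A Python sketch that reproduces the estimate above, assuming for illustration that 10 of the 27 sampled languages have ejectives (these counts are our assumption, inferred from the estimate, not reported in the source):

```python
import math

# Hypothetical counts: 10 languages with ejectives, 17 without.
yes, no = 10, 17

# Intercept-only model: log(odds) = log(#successes / #failures)
intercept = math.log(yes / no)
print(round(intercept, 7))   # -0.5306283

# Converting back to a probability recovers the raw proportion 10/27
p = math.exp(intercept) / (1 + math.exp(intercept))
print(round(p, 7))           # 0.3703704
```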
- Model with numeric predictor
What does this model say? This model says:
\[\log(odds(ej)) = \beta_0 + \beta_1 \times \mbox{n.cons.lapsyd} = -9.9204 + 0.3797 \times \mbox{n.cons.lapsyd}\]
Let's visualize our model:
So the log(odds) for a language that has 30 consonants will be \[\log(odds(ej)) = -9.9204 + 0.3797 \times 30 = 1.4706\] Thus the outcome YES (the language has ejectives) has odds of \(e^{1.4706} \approx 4.35\), i.e. it is about 4.35 times more likely than the outcome NO if the language has 30 consonants.
\[P(ej) = \frac{e^{1.4706}}{1+e^{1.4706}} = 0.8131486\]
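The same computation in Python, using the coefficients reported above:

```python
import math

# Coefficients of the fitted model log(odds(ej)) = b0 + b1 * n.cons.lapsyd
b0, b1 = -9.9204, 0.3797

log_odds = b0 + b1 * 30      # log(odds) for a language with 30 consonants
odds = math.exp(log_odds)    # ~4.35: YES is about 4.35 times more likely than NO
p = odds / (1 + odds)        # back to the probability scale

print(round(log_odds, 4))    # 1.4706
print(round(p, 4))           # 0.8131
```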
4. predict(): Evaluating the model’s performance
So we actually can create a plot with confidence intervals.
5. More variables in the model
\[\underbrace{\log(odds(y))}_{[-\infty, +\infty]}=\underbrace{\beta_0+\beta_1\cdot x_1+\beta_2\cdot x_2 + \dots +\beta_z\cdot x_z +\epsilon}_{[-\infty, +\infty]}\]
The significance of each variable (predictor) is not the same in models with different number of variables. In other words, it depends on the combination of predictors in a specific model.
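Under the hood, fitting such a model means maximising the likelihood. A minimal pure-Python sketch with two predictors (the function `fit_logistic`, the synthetic data, and the generating coefficients are all illustrative assumptions, a toy stand-in for a real fitting routine such as R's glm):

```python
import math
import random

def fit_logistic(xs, ys, lr=0.5, steps=2000):
    """Fit log(odds(y)) = b0 + b1*x1 + b2*x2 by gradient ascent
    on the log-likelihood."""
    b = [0.0, 0.0, 0.0]
    n = len(xs)
    for _ in range(steps):
        grad = [0.0, 0.0, 0.0]
        for (x1, x2), y in zip(xs, ys):
            p = 1 / (1 + math.exp(-(b[0] + b[1] * x1 + b[2] * x2)))
            err = y - p                 # residual on the probability scale
            grad[0] += err
            grad[1] += err * x1
            grad[2] += err * x2
        b = [bi + lr * gi / n for bi, gi in zip(b, grad)]
    return b

# Synthetic data: y depends positively on x1 and negatively on x2
random.seed(1)
xs = [(random.uniform(-2, 2), random.uniform(-2, 2)) for _ in range(200)]
ys = [1 if 1.5 * x1 - x2 + random.gauss(0, 0.5) > 0 else 0
      for x1, x2 in xs]

b0, b1, b2 = fit_logistic(xs, ys)
print(b1 > 0, b2 < 0)  # True True: the fitted signs match the generating process
```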
6. Model selection
AIC (Akaike Information Criterion) is a goodness-of-fit measure used to compare models with different numbers of predictors. It penalizes a model for having too many predictors. The smaller the AIC, the better.
While comparing models, we are looking for the minimal optimal model:
* optimal, as it helps to predict the output in the best way
* minimal optimal, as it uses the minimal number of predictors
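AIC can be computed by hand when the maximum likelihood is available in closed form. A Python sketch (the counts are invented for illustration): for a logistic model with a single binary predictor, the fitted probabilities are just the per-group proportions, so AIC = 2k − 2·logL needs no iterative fitting.

```python
import math

def log_lik(successes, failures):
    """Maximised Bernoulli log-likelihood for a group with fitted p = s/(s+f)."""
    n = successes + failures
    p = successes / n
    return successes * math.log(p) + failures * math.log(1 - p)

def aic(log_likelihood, n_params):
    return 2 * n_params - 2 * log_likelihood

# Hypothetical counts of the outcome by a binary predictor:
#            success  failure
# group A:        18        2
# group B:         5       15

# Model 1: no predictor (1 parameter, pooled p = 23/40)
aic1 = aic(log_lik(23, 17), 1)
# Model 2: with the predictor (2 parameters, one fitted p per group)
aic2 = aic(log_lik(18, 2) + log_lik(5, 15), 2)

print(aic2 < aic1)  # True: here the predictor is worth its extra parameter
```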
Other measures to evaluate the model include:
* accuracy
* concordance index C (the area under the ROC-curve)
* Nagelkerke pseudo-\(R^{2}\)
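The concordance index C is easy to compute directly from its definition: the share of (positive, negative) pairs in which the positive case receives the higher predicted score. A Python sketch on toy values (the function name and data are ours):

```python
def concordance_c(scores, labels):
    """C-index: fraction of (positive, negative) pairs where the positive
    case gets the higher predicted score; ties count as half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy predicted probabilities and true labels
scores = [0.9, 0.8, 0.4, 0.3, 0.2]
labels = [1,   1,   0,   1,   0]
print(concordance_c(scores, labels))  # 5 of 6 pairs concordant -> 0.833...
```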
7. Interaction of the variables
Interaction happens when the effect of one predictor on the outcome depends on the value of another predictor. Interaction of two predictors can be positive (their joint role increases the effect) or negative (their joint role decreases the effect).
Example: animacy and semantic class; animacy and the choice of syntactic construction; effect of verb transitivity in different language varieties.
8. Conclusion: Generalized linear models (GLM)
GLM is a broad class of models that includes linear regression, logistic regression, log-linear regression, Poisson regression, ANOVA, ANCOVA, etc. For a particular method to be called a GLM, it should have the following three components:
Random Component: the response variable \(y\), which needs to satisfy some distributional assumption. Examples: in linear regression the dependent variable \(y\) follows a normal distribution; in logistic regression the response variable follows a binomial distribution.
Systematic Component: the explanatory variables in the model. The systematic component helps to explain the random component.
Link Function: the link between the systematic and random components. The link function tells how the expected value of the response variable relates to the explanatory variables. The link function of linear regression is the identity, \(E[y]\); the link function of logistic regression is \(logit(P(y))\).
What was important today?
- classifiers: binary, multi-class (multinomial)
- odds
- sigmoid
- significance of the variables (predictors)
- interactions