How will prizes be awarded?
A total of ~400 individuals have signed up for the contest. Allowing for typical dropout, we expect about 200 competitors.
The minimum prize pool will be $6,000, calculated as an expected reward of $30 per participant.
If there are more than 200 participants, the prize pool will increase, up to a maximum of $12,000 at 400 participants.
Below, we describe how prizes will be allocated (the numbers below refer to the likely scenario of 200 participants).
For each question, we will calculate each participant’s Brier Accuracy and Categorical Accuracy scores. Furthermore, for 10 of the 14 questions, participants will be required to write a short explanation of their rationale for the prediction, and will receive a score based on the quality of their reasoning.
Brier Accuracy
The Brier score is the squared error between your forecast and reality. To calculate it, your probability judgment is first placed on a scale from 0 to 1, where 1 represents the event occurring and 0 represents the event not occurring. If you predicted that the event would occur with probability 0.8, and the event did occur, your Brier score is (1-0.8)^2 + (0-0.2)^2 = 0.08. If instead you predicted that the event would occur with probability 0.4, and the event did occur, your Brier score is (1-0.4)^2 + (0-0.6)^2 = 0.72. The farther your prediction is from reality, the higher your squared error; therefore, lower Brier scores represent higher accuracy.
Brier scores will be standardized per question. Overall Brier Accuracy per participant will be calculated as the average of all 14 standardized Brier scores.
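The calculation above can be sketched in Python. Note that the exact standardization method is not specified here; the sketch assumes a per-question z-score across participants, which is one common choice.

```python
# A minimal sketch of the two-component Brier calculation described above.
# `p` is the forecast probability that the event occurs; `occurred` is
# whether it actually did.

def brier_score(p: float, occurred: bool) -> float:
    outcome = 1.0 if occurred else 0.0
    # Squared error on the "occurred" component plus the "did not occur" component.
    return (outcome - p) ** 2 + ((1.0 - outcome) - (1.0 - p)) ** 2

def standardize(scores: list[float]) -> list[float]:
    # Assumption: per-question standardization as z-scores across all
    # participants' Brier scores on that question.
    mean = sum(scores) / len(scores)
    sd = (sum((s - mean) ** 2 for s in scores) / len(scores)) ** 0.5
    return [(s - mean) / sd for s in scores]
```

A participant's overall Brier Accuracy would then be the average of their 14 standardized scores.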
Categorical accuracy
The Categorical Accuracy score will not take into account the exact probability you assigned to events, only the directionality of the prediction.
For example, if you assigned a probability of 51-100% that it will rain, and it indeed rained, you will receive one point; if you assigned a probability of 0-49%, and it rained, you will lose one point; if you chose exactly 50%, you will neither win nor lose points on that question. Thus, the range of this accuracy measure across all 14 questions is -14 to +14.
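The directional scoring rule above can be sketched as:

```python
# Sketch of the Categorical Accuracy rule: only the direction of the
# forecast matters, not the exact probability.

def categorical_point(p: float, occurred: bool) -> int:
    if p == 0.5:
        return 0  # an exact 50% forecast neither wins nor loses a point
    predicted_yes = p > 0.5
    return 1 if predicted_yes == occurred else -1

def categorical_accuracy(forecasts: list[tuple[float, bool]]) -> int:
    # Summing over all 14 questions gives a total between -14 and +14.
    return sum(categorical_point(p, occurred) for p, occurred in forecasts)
```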
Reasoning quality
The quality of reasoning will be assessed by external reviewers who will rate your arguments. A well-reasoned argument is defined as one where the conclusion is derived from the premises. In this context, you will be asked to explain how your prediction follows from the factors you took into consideration.
Each of your 10 arguments will be assessed by 3 external reviewers. The argument score per question will be the average of these three reviews. The overall score per participant will be calculated as the average of all 10 argument scores (30 external reviews per forecaster).
Prizes allocation
The main measure for allocating rewards will be Brier accuracy, but we will also reward participants based on categorical accuracy and their reasoning.
There will be five rewards:
1. The top 20% of participants in terms of Brier accuracy will enter a lottery for a grand prize of $1,000 and two prizes of $500. The odds of winning this lottery will be directly proportional to Brier Accuracy (the better, i.e., lower, your Brier score, the higher your chances of winning). Specifically, each participant will receive ((1-Brier)^20) × 1000 "tickets", and three winning tickets will be drawn (one for $1,000 and two for $500 each). Assuming 200 participants, 40 will enter the lottery, with the chances of a player in the 99th percentile being about ten times those of one in the 80th percentile.
In cases of a tie, the tie-breaker will be based on the final completion time of all 14 questions (i.e., earlier completion entails a better position).
2. Additionally, the top 5% of participants in terms of Brier accuracy will share a reward of $1,000 (i.e., at least $100 per participant).
3. Additionally, the participant with the highest Categorical Accuracy will be awarded $1,000 USD. If there are several participants with the same score, again, the winner will be the one who was the earliest to complete all of the questions.
4. Additionally, for each of the 10 questions where you will be asked to explain your reasoning, a single $100 prize will be given to the best-reasoned argument. Again, ties will be decided based upon completion time.
5. Finally, one participant will be given a $1,000 prize for overall best-reasoned responses. Again, ties will be decided based upon completion time.
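The ticket lottery in item 1 can be sketched as follows. This is a hypothetical illustration: it assumes `brier` is a participant's overall Brier score on a 0-to-1 scale, and that each participant can win at most one lottery prize, neither of which is fully specified above.

```python
import random

def tickets(brier: float) -> int:
    # Ticket formula from the text: lower Brier scores earn sharply more tickets.
    return int((1 - brier) ** 20 * 1000)

def draw_winners(participants: dict[str, float], prizes=(1000, 500, 500)):
    """Draw one distinct winner per prize, weighted by ticket counts.

    `participants` maps each qualifying name to its overall Brier score.
    """
    pool = dict(participants)
    winners = []
    for prize in prizes:
        names = list(pool)
        weights = [tickets(pool[n]) for n in names]
        winner = random.choices(names, weights=weights, k=1)[0]
        winners.append((winner, prize))
        del pool[winner]  # assumption: one prize per participant
    return winners
```

With this formula, a participant with a Brier score of 0.0 holds 1,000 tickets while one at 0.1 holds about 121, so small accuracy differences translate into large differences in winning odds.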
This reward structure ensures that the best strategy for winning is simply to reflect your beliefs about the likelihood of events.
The maximum a single participant could win is: $2,000 for best-reasoned responses (the $1,000 overall prize plus ten $100 per-question prizes) + $1,000 for categorical accuracy + $100 for being in the top 5% + $1,000 for winning the probabilistic lottery. Namely, a potential total of $4,100.
Participants will also be informed of their exact rankings in the contest.
The prizes and rankings will be sent in April 2021.
We attempted to make the resolution criteria as unambiguous as possible; however, as is always the case, we may have missed some unexpected occurrence (e.g., a meteor wiping out half the planet) that would make resolution a judgement call or cause us to delete one of the questions. We will not be able to offer an appeals process. All we can do is assure you that we (the research team: Mr. Yhonatan Shemesh, Ms. Hilla Shinitzki, Prof. David Leiser, Dr. Michael Gilead) will consult at least three independent individuals, blind to our hypotheses and to the identity of participants, to make a judgement call in these contested cases. In such cases, we reserve the right to decide on resolution.