Re: Evaluation encore une fois
You've been peeking! That's exactly how my prediction-evaluation program worked when I wrote it for the USGS. I generated several thousand random predictions and scored them, which gave me a curve to use as a standard. I then compared the real predictions to the random ones using the Kolmogorov-Smirnov goodness-of-fit test to see how well the predictors were doing.

The problem is that you need a number of predictions from each person in order to have enough information to judge them. Most did poorly because they were not specific enough: the random predictions were fully detailed, whereas real ones seldom are.

Roger
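A minimal sketch of this kind of scheme in Python, not Musson's actual USGS program: the post doesn't describe his scoring rule, so the catalogue, the hit-based scoring function, and all the window parameters below are illustrative assumptions (space is dropped and predictions are reduced to a time window plus a magnitude tolerance). Only the overall idea, scoring thousands of random predictions as a standard and comparing real ones to them with a Kolmogorov-Smirnov test, comes from the post.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

# Hypothetical earthquake catalogue: event times (days over ~10 years)
# and magnitudes. A real evaluation would use an observed catalogue.
catalogue_t = rng.uniform(0, 3650, size=200)
catalogue_m = rng.uniform(4.0, 7.0, size=200)

def score(pred_t, window, pred_m, tol):
    """Toy scoring rule: a prediction scores if any catalogue event falls
    inside its time window and magnitude tolerance, with narrower
    (more specific) windows earning more credit for a hit."""
    hit = np.any((np.abs(catalogue_t - pred_t) <= window / 2) &
                 (np.abs(catalogue_m - pred_m) <= tol))
    return 1.0 / (window * tol) if hit else 0.0

def random_prediction():
    """A fully specified random prediction, mimicking the detailed
    random predictions used to build the standard."""
    return (rng.uniform(0, 3650),     # predicted time (day)
            rng.uniform(1, 30),       # time-window length (days)
            rng.uniform(4.0, 7.0),    # predicted magnitude
            rng.uniform(0.1, 1.0))    # magnitude tolerance

# Standard curve: scores of several thousand random predictions.
random_scores = np.array([score(*random_prediction()) for _ in range(5000)])

# A predictor's set of real predictions; stand-in data here. As the post
# notes, you need a fair number per person before the test is informative.
real_predictions = [random_prediction() for _ in range(30)]
real_scores = np.array([score(*p) for p in real_predictions])

# Kolmogorov-Smirnov comparison of the predictor's score distribution
# against the random standard.
stat, p_value = ks_2samp(real_scores, random_scores)
print(f"KS statistic = {stat:.3f}, p = {p_value:.3f}")
```

The two-sample form of the test is used here because both score sets are empirical samples; the original program may instead have tested real scores against a curve fitted to the random ones, which the one-sample `scipy.stats.kstest` would cover.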