Re: another issue - One approach
Posted by 2cents on March 30, 2002 at 10:43:32:

Well...you're right, Roger.

Since the goal is to help save lives by giving a useful and reliable short-term warning of an impending quake, scoring credit should reflect how accurately the warning anticipates a quake's ability to injure, kill, and destroy infrastructure.

Obviously, a mag. 1.0-2.0 quake is virtually meaningless with regard to destruction, whereas a mag. 8 can erase a lot of lives in the blink of an eye.

I suggest a (more or less arbitrary) "normalization of the predictor's scoring" along the following lines.

1) Gather historical earthquake information (for cases where it is available) as follows:

a) Magnitude
b) Area affected
c) Infrastructure damage in (year 2000?) dollars
d) Population density
e) (Lives lost & number injured)

2) Identify one of the above cases as the one against which the other cases will be normalized (collapsing the parameters into a range of 0 - 1.0, using this earthquake as the "1.0" case). This will be called the "reference case earthquake".

We will disregard e) since it may be considered a dependent variable related to d), population density (and it has a lot of variability). From the remaining data, the following metric can be formed:
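As a purely illustrative sketch, the gathered data for each historical case could be kept in a simple record like the one below. The field names and units are my own assumptions, not an existing format:

# Illustrative sketch only -- field names and units are assumptions, not an existing schema.
from dataclasses import dataclass

@dataclass
class QuakeCase:
    name: str                    # e.g. a label for the event (hypothetical)
    magnitude: float             # item a)
    area_affected_km2: float     # item b), assumed in square kilometers
    damage_dollars: float        # item c), in (year 2000?) dollars
    pop_density_per_km2: float   # item d), people per square kilometer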

For the reference quake, calculate the following number:

Ref. case raw score = Magnitude x (Damage in dollars / (Population density x Area affected))

Calculate the same "raw score" (not normalized to the 0-1 range) for all the other earthquake cases.
This has units of (magnitude x damage in dollars per affected person).
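In code, building on the record sketch above (again, just an illustration of the formula, not a finished implementation), the raw score for any case would be:

def raw_score(case: QuakeCase) -> float:
    # Magnitude x (damage in dollars per affected person)
    people_affected = case.pop_density_per_km2 * case.area_affected_km2
    return case.magnitude * (case.damage_dollars / people_affected)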

Then calculate the scale factor for any earthquake as follows:

Scale Factor = Evaluated earthquake raw score / Ref. case raw score.

The scale factor will exceed 1.0 if the parameters of the evaluated earthquake exceed those of the reference case (i.e., it was higher magnitude, caused more damage per person, etc.); otherwise it will be less than 1.0. Note that a magnitude 5.5 which caused more damage per person could scale higher than a larger quake happening out in the desert where nobody lives.
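Using the raw score sketch above, the scale factor is just the ratio:

def scale_factor(case: QuakeCase, reference: QuakeCase) -> float:
    # > 1.0 when the evaluated quake exceeds the reference case,
    # < 1.0 when it falls short of it.
    return raw_score(case) / raw_score(reference)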

Now the question remains: "How should this normalization factor be brought into the probability calculation?"

One approach is just to track it separately in its own column for all predictions. Averaging it per predictor will give some qualitative indication of how well the predictor is actually helping other people. In other words, someone who can nail the mag. 7's in the desert at the same rate as someone nailing the mag. 5.5's in a large city will have a lower "usefulness" score.
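One way to picture that "separate column" idea (the layout of the prediction record is my assumption, not anything already in place):

def usefulness_score(scale_factors):
    # Average scale factor over a predictor's successful predictions;
    # a higher average means the hits tend to matter more to people.
    return sum(scale_factors) / len(scale_factors) if scale_factors else 0.0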

Another approach is to integrate the number into the scoring by multiplying or dividing it into the calculated probability. This approach needs some further thought, since the "gain factors" could be large....
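If that second approach were tried, it might look something like the sketch below, with some cap on the gain so a single very destructive quake doesn't swamp everything else. The cap value is entirely my guess:

def weighted_probability(calculated_prob, factor, cap=5.0):
    # Multiply the calculated probability by the scale factor, but
    # limit the gain so huge quakes don't dominate the scoring.
    return calculated_prob * min(factor, cap)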

Comments are welcome.

Just my $.02 worth

