It is often necessary to estimate the mean of a uniform distribution in WoW research. The chief example is estimating the +spell damage coefficient of a spell. You know the average value with no +damage gear, from the tooltip. You put on some +damage or +healing gear, and want to find the new average value. You cast the spell 10 or 20 times, and try to estimate the average.
Now, the obvious estimator is the sample average. You assume the real average value is the average of all your samples. It turns out, however, that the estimate "Average = (Maximum + Minimum) / 2" is far more accurate!
Both these estimators are unbiased. That is, the expected value of the sample mean is the actual mean, and the expected value of (Max + Min) / 2 is also the actual mean. To compare them, we look at the typical size of the error (Actual Mean - Estimator Value). Since both estimators are unbiased, this error averages out to zero, so the right measure is its root-mean-square value, which is just the standard deviation of the estimator.
For the sample average, the standard deviation decreases with the square root of the number of samples: "error ~ <range> / sqrt(N)", where <range> is the difference between the largest and smallest possible values, and N is the number of samples. On the other hand, the standard deviation of the estimate "Average = (Maximum + Minimum) / 2" decreases like 1/N, i.e. "error ~ <range> / N", with ~ denoting proportionality. This happens because the distribution is uniform over a fixed range, so the sample Maximum is a very accurate estimator of the actual Maximum (and likewise for the Minimum).
Unfortunately I can't present a rigorous proof of these facts, but I have found them to be true experimentally, by generating thousands of samples with a program. So next time you're working out the +damage coefficient of some spell, try the (Min + Max) / 2 average out! Further, if anyone has more than my basic knowledge in this area, I'd be interested if you could derive the standard deviations, including the constants of proportionality. Also, if you could prove whether this is the best possible estimator of the mean, in terms of minimum square error, or even in an asymptotic sense, or find a better estimator, I'd be very interested!
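For anyone who wants to repeat the experiment, here's a short sketch of the kind of simulation described above. The damage range (1000 to 1200) and the cast count are made up for illustration:

```python
import math
import random

def rms_errors(lo, hi, n_casts, trials=10000):
    """Empirically estimate the RMS error of the sample mean and of the
    (Min + Max) / 2 "midrange" estimator, for a uniform distribution
    on [lo, hi] sampled n_casts times per trial."""
    true_mean = (lo + hi) / 2
    se_mean = 0.0
    se_midrange = 0.0
    for _ in range(trials):
        samples = [random.uniform(lo, hi) for _ in range(n_casts)]
        sample_mean = sum(samples) / n_casts
        midrange = (min(samples) + max(samples)) / 2
        se_mean += (sample_mean - true_mean) ** 2
        se_midrange += (midrange - true_mean) ** 2
    return math.sqrt(se_mean / trials), math.sqrt(se_midrange / trials)

# Hypothetical spell hitting for 1000-1200, measured over 20 casts:
err_mean, err_mid = rms_errors(1000, 1200, 20)
```

For what it's worth, the textbook order-statistics results give a standard deviation of <range> / sqrt(12 N) for the sample mean and <range> / sqrt(2 (N+1)(N+2)) for the midrange, so for 20 casts over a 200-damage range the simulation should come out near 12.9 and 6.6 respectively, which also pins down the constants of proportionality asked about above.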
There's a specific reason for that. As you said, the damage of a spell does not follow a Gaussian distribution, for which the sample mean would be the best estimator. It's a uniform distribution: constant over a finite interval, and 0 everywhere else.
So what you're really searching for when you experiment with a particular spell/setup are the two endpoints. No other data point actually contains any information (if you've hit for 1000 and 1200, and then you hit for 1100, you've learned nothing new). So it makes perfect sense that the optimum interpretation of the data uses only the two extreme points.
You might be able to improve on that. The width of the damage range at 0 +dmg is known from the tooltip, and flat +damage shifts the range without widening it. So if you collect data with no multiplicative damage bonuses (Piercing Ice, etc.), you can watch for the moment the observed range becomes exactly as wide as the known range. At that point, you can claim to have complete information on the location of the damage range.
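That check is easy to automate. A minimal sketch, assuming a hypothetical spell whose tooltip range is 200 wide at 0 +dmg (the function name, hit values, and tolerance are all made up for illustration):

```python
def range_pinned(observed_hits, known_width, tolerance=0.5):
    """Return True once the observed hits span the full damage range.

    known_width is the width of the damage range with no +damage gear
    (from the tooltip); with no multiplicative bonuses, flat +damage
    shifts the range but does not widen it. tolerance absorbs the
    rounding of displayed damage to whole numbers.
    """
    observed_width = max(observed_hits) - min(observed_hits)
    return known_width - observed_width <= tolerance

# Hypothetical data: base range is 200 wide; with +damage gear we saw:
hits = [1012, 1208, 1107, 1010, 1210]
print(range_pinned(hits, known_width=200))  # prints True: 1210 - 1010 = 200
```

Once this returns True, the observed Min and Max are the actual endpoints, and (Min + Max) / 2 is the exact average, with no estimation error left at all.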