One of the simplest design problems is to decide when to stop sampling. For example, suppose Z₁, Z₂, … are independently and identically distributed according to a random variable Z in the interval [0,1] with mean μ_Z and variance σ_Z². From Bernstein's inequality, we know that if N is fixed proportional to ln(1/δ)/ε² and S = (Z₁ + … + Z_N)/N, then, with probability at least 1−δ, S approximates μ_Z with absolute error ε. Often, however, μ_Z is small, and a good absolute-error estimate of μ_Z is typically a poor relative-error approximation of μ_Z. If Pr[(1−ε)μ_Z ≤ μ̂ ≤ (1+ε)μ_Z] ≥ 1−δ, then μ̂ is an (ε,δ)-approximation of μ_Z.
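To make the gap between absolute and relative error concrete, the following small numeric illustration (with hypothetical example values, not taken from the paper) shows that an estimate meeting a fixed absolute-error guarantee can still be off by an enormous relative factor when the true mean is small:

```python
mu = 0.001          # hypothetical true mean, much smaller than the target error
abs_err = 0.01      # absolute-error guarantee from a fixed sample size N

# Any estimate within [mu - abs_err, mu + abs_err] satisfies the absolute
# guarantee, yet its relative error can be huge when mu is tiny:
worst_estimate = mu + abs_err
rel_err = abs(worst_estimate - mu) / mu
print(rel_err)  # roughly 10, i.e., a 1000% relative error
```

This is why the paper targets (ε,δ)-approximations, whose error guarantee is relative to μ_Z itself.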
The authors describe an approximation algorithm based on a simple stopping rule. Using the stopping rule, their algorithm gives an (ε,δ)-approximation of μ_Z after an expected number of experiments proportional to Υ/μ_Z, where Υ = 4(e−2)ln(2/δ)/ε².
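A minimal Python sketch of a stopping rule of this form, under the assumption (reconstructed from the paper, not verified here) that sampling stops when the running sum first exceeds a threshold Υ₁ = 1 + (1+ε)Υ, at which point μ̂ = Υ₁/N:

```python
import math
import random

def stopping_rule_estimate(sample, eps, delta):
    """Sketch of a stopping-rule (eps, delta)-estimator for a [0,1] mean.

    `sample` is a zero-argument function returning one i.i.d. outcome.
    Samples are drawn until the running sum first exceeds the threshold
    Upsilon_1; the estimate is Upsilon_1 / N.  Constants follow the form
    Upsilon = 4(e-2)ln(2/delta)/eps^2 described in the review.
    """
    upsilon = 4 * (math.e - 2) * math.log(2 / delta) / eps**2
    upsilon1 = 1 + (1 + eps) * upsilon  # stopping threshold
    n, s = 0, 0.0
    while s < upsilon1:
        n += 1
        s += sample()
    return upsilon1 / n, n

# Illustrative use: Bernoulli(0.1) outcomes, so mu_Z = 0.1.
random.seed(0)
mu_hat, n = stopping_rule_estimate(
    lambda: float(random.random() < 0.1), eps=0.1, delta=0.05
)
```

Note that the number of experiments is not fixed in advance: it adapts to μ_Z, with roughly Υ/μ_Z draws expected, so rarer events automatically receive more samples.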
The authors then present a powerful algorithm, the AA algorithm, that, given ε, δ, and independently and identically distributed outcomes generated from any random variable Z distributed in [0,1], obtains an (ε,δ)-approximation of μ_Z after an expected number of experiments proportional to Υ·ρ_Z/μ_Z², where ρ_Z = max{σ_Z², εμ_Z}. They prove that, for all Z, AA runs the optimal number of experiments to within a constant factor. AA also provides substantial computational savings in applications that would otherwise employ a poor upper bound on the variance σ_Z².
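Plugging illustrative numbers into these expressions shows where the savings come from: for a low-variance Z, ρ_Z = max{σ_Z², εμ_Z} can be far smaller than the crude worst-case variance bound σ_Z² ≤ 1/4 for a [0,1] variable. The values below are hypothetical, chosen only to exercise the formulas as reconstructed above:

```python
import math

def upsilon(eps, delta):
    # Common factor Upsilon = 4(e-2)ln(2/delta)/eps^2 in both bounds
    # (constants as reconstructed in the review; treat as illustrative).
    return 4 * (math.e - 2) * math.log(2 / delta) / eps**2

eps, delta = 0.05, 0.01
mu, sigma2 = 0.2, 0.001            # hypothetical low-variance Z
rho = max(sigma2, eps * mu)        # rho_Z = max{sigma_Z^2, eps * mu_Z}

adaptive = upsilon(eps, delta) * rho / mu**2   # AA-style adaptive bound
crude = upsilon(eps, delta) * 0.25 / mu**2     # using the bound sigma^2 <= 1/4
print(crude / adaptive)  # roughly 25: factor saved over the crude bound
```

The ratio depends only on 0.25/ρ_Z, so the advantage grows as the actual variance shrinks relative to the worst case.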