Definition 1. Cost function Cost of a measurement configuration Ω is a Lebesgue measurable function Cost : ℝ^{|Ω|} → ℝ_+.
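As an illustration only (not part of the formal development), a minimal Python sketch of one possible cost model: a fixed setup time plus a per-sample acquisition time. The function name, array layout, and constants are hypothetical and chosen purely for this example.

    import numpy as np

    def measurement_cost(omega, time_per_sample=1.0, setup_time=5.0):
        """Hypothetical cost of a measurement configuration Omega.

        `omega` is an array of shape (m, 4): m measurement directions,
        each described by 4 angles. The cost is a fixed setup time plus a
        per-sample acquisition time, i.e. a simple measurable map from the
        configuration to R_+ in the spirit of Definition 1.
        """
        omega = np.asarray(omega, dtype=float)
        m = omega.shape[0]
        return setup_time + time_per_sample * m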
Let µ be a function (measurable with respect to a suitably chosen σ-algebra) such that µ : ℱ_4 × ℱ_4 → ℝ_+. For our purposes, we typically want µ to induce either a quasi-distance or a pseudo-distance on ℱ_4^0, where ℱ_4^0 ⊆ ℱ_4 is a sufficiently rich subset. As an example, a perception-based µ from Langovoy et al. (2014) was often used in our practical experiments. Standard L_p-distances are easier for theoretical comparisons.
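For intuition, a minimal sketch (not from the paper) of how a standard L_p-type discrepancy between two BRDFs could be evaluated numerically once both are tabulated on a common angular grid. The grid, the tabulation, and the function names are assumptions made for this illustration.

    import numpy as np

    def lp_discrepancy(brdf_a, brdf_b, weights=None, p=2):
        """Discrete L_p-style discrepancy between two tabulated BRDFs.

        `brdf_a`, `brdf_b`: arrays of BRDF values sampled on the same
        angular grid; `weights`: optional quadrature weights for that grid.
        With p = 2 and suitable quadrature weights this approximates an
        L_2 pseudo-distance on the chosen subset of F_4.
        """
        a = np.asarray(brdf_a, dtype=float).ravel()
        b = np.asarray(brdf_b, dtype=float).ravel()
        w = np.ones_like(a) if weights is None else np.asarray(weights, dtype=float).ravel()
        return (np.sum(w * np.abs(a - b) ** p)) ** (1.0 / p)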
Definition 2. Sampling strategy Ω is a sequence Ω = {Ω_{n_0}}_{n_0=1}^∞ where for each n_0 there exists an integer n ≥ n_0 such that Ω_{n_0} = Ω(n) for some measurement configuration Ω(n) defined according to (4), and for any integers n_2 ≥ n_1 it holds that |Ω_{n_1}| ≤ |Ω_{n_2}|.
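A toy sketch (the incremental design rule and all names are illustrative assumptions) of how a sampling strategy in the sense of Definition 2 can be generated so that configuration sizes are non-decreasing in the index.

    import numpy as np

    def sampling_strategy(n_max, rng=None):
        """Generate a toy sampling strategy: configurations Omega_1, ...,
        Omega_{n_max} with |Omega_1| <= ... <= |Omega_{n_max}|.

        Each configuration is grown by appending one random measurement
        direction (4 angles), so cardinality is monotone as required.
        """
        rng = np.random.default_rng(rng)
        strategy = []
        config = np.empty((0, 4))
        for _ in range(n_max):
            new_point = rng.uniform(0.0, np.pi / 2, size=(1, 4))  # toy angular ranges
            config = np.vstack([config, new_point])
            strategy.append(config.copy())
        return strategy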
Consider an arbitrary fixed statistical estimator of BRDFs, f̂ : ℝ^{|Ω|} → ℱ_4.
Definition 3. Let Ω = {Ω_n}_{n=1}^∞ be a sampling strategy, and let C : ℕ → ℝ_+ be a known function. We say that the strategy Ω has uniformly admissible costs with the majorant C, if for all n ≥ 1 it holds that Cost(Ω_n) < C(n). We say that Ω has asymptotically uniformly admissible costs with the majorant C, if there exists n_0 ∈ ℕ such that for all n ≥ n_0 it holds that Cost(Ω_n) < C(n).
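A small sketch, under the same toy conventions as above, of checking uniformly admissible costs against a majorant C; the linear budget in the usage comment is an arbitrary choice for illustration.

    def has_uniformly_admissible_costs(strategy, cost, majorant):
        """Check Definition 3: Cost(Omega_n) < C(n) for all n >= 1.

        `strategy`: sequence of configurations Omega_1, Omega_2, ...
        `cost`: callable configuration -> nonnegative float
        `majorant`: callable n -> nonnegative float (the majorant C)
        """
        return all(cost(omega) < majorant(n)
                   for n, omega in enumerate(strategy, start=1))

    # Example (toy): linear budget C(n) = 10 + 2 n measurement-time units.
    # strategy = sampling_strategy(50)
    # ok = has_uniformly_admissible_costs(strategy, measurement_cost, lambda n: 10 + 2 * n)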
Consider two sampling strategies Ω_1, Ω_2. Suppose that both strategies have uniformly admissible costs. The problem of generalized active learning for BRDF sampling can be stated in the following way: find a sampling strategy Ω = {Ω(n)}_{n=1}^∞ such that for all n ≥ 1

(6)    Ω(n) = arg min_{Ω : Cost(Ω) < C(n)} µ(f̂(f; Ω), f).
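A hedged sketch of the selection rule (6) over a finite candidate pool: for each budget, among candidate configurations whose cost stays below C(n), pick the one whose fitted BRDF is closest to a reference in the chosen discrepancy. The candidate pool, the estimator, and the reference are illustrative stand-ins, since in practice the true BRDF f is not available and must be replaced by a surrogate criterion.

    def select_configuration(candidates, budget, cost, estimator, mu, reference):
        """Toy instance of (6): arg min over admissible candidates of
        mu(estimator(reference, Omega), reference), subject to cost(Omega) < budget.

        `candidates`: iterable of configurations; `estimator`: callable
        (brdf, configuration) -> fitted BRDF; `mu`: discrepancy between BRDFs.
        Returns None if no candidate satisfies the cost constraint.
        """
        admissible = [omega for omega in candidates if cost(omega) < budget]
        if not admissible:
            return None
        return min(admissible,
                   key=lambda omega: mu(estimator(reference, omega), reference))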
Definition 4. Suppose f ∈ ℱ_4 is a particular (possibly unknown) BRDF. Let Ω_1, Ω_2 be sampling strategies with C-uniformly admissible costs. We say that the strategy Ω_1 is asymptotically more efficient for learning than the strategy Ω_2, and write Ω_1 ≽ Ω_2, iff

(7)    lim sup_{n→∞} µ(f̂(f; Ω_1(n)), f) / µ(f̂(f; Ω_2(n)), f) < 1.
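A finite-n proxy (an assumption, since a lim sup cannot be read off finite data) for the comparison in (7): track the ratio of the two strategies' errors as n grows and inspect its tail.

    import numpy as np

    def tail_error_ratio(errors_1, errors_2, tail=0.2):
        """Proxy for (7): maximum of the mu-errors of strategy Omega_1 over
        those of Omega_2 on the last `tail` fraction of indices (errors_2
        assumed strictly positive there). A value below 1 is informal
        evidence that Omega_1 is asymptotically more efficient.
        """
        e1 = np.asarray(errors_1, dtype=float)
        e2 = np.asarray(errors_2, dtype=float)
        start = int((1.0 - tail) * len(e1))
        return float(np.max(e1[start:] / e2[start:]))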
Notice that, for the task of evaluating sampling quality, expected errors (over classes of BRDFs) are more interesting than maximal errors (over the same classes). Indeed, maximal errors are often dominated by degenerate counterexamples, while we are interested in the typical-case behavior of our learning procedures. Therefore, we are typically interested in expected errors of the form 𝔼_{ℱ_4'}(µ(f̂(f; Ω(n)), f)) ⟶ min, where ℱ_4' ⊆ ℱ_4 is a sufficiently rich subset. Clearly, the choice of the quasi-metric µ plays a crucial role.
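A minimal Monte Carlo sketch of this expected-error criterion: draw BRDFs from a chosen subclass, fit each one from measurements at the configuration Ω(n), and average the discrepancies. The sampler, the estimator, and the discrepancy are placeholders supplied by the user.

    import numpy as np

    def expected_error(sample_brdf, estimator, mu, configuration, n_draws=100, rng=None):
        """Monte Carlo estimate of E_{F'}[ mu(f_hat(f; Omega(n)), f) ] over a
        subclass F' of BRDFs represented by the sampler `sample_brdf`.

        `sample_brdf`: callable rng -> BRDF; `estimator`: callable
        (brdf, configuration) -> fitted BRDF; `mu`: discrepancy between BRDFs.
        """
        rng = np.random.default_rng(rng)
        errors = [mu(estimator(f, configuration), f)
                  for f in (sample_brdf(rng) for _ in range(n_draws))]
        return float(np.mean(errors))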