Published January 2010
Book Section - Chapter (Open Access)

Universal ε-approximators for integrals


Let X be a space and F a family of {0, 1}-valued functions on X. Vapnik and Chervonenkis showed that if F is "simple" (finite VC dimension), then for every probability measure μ on X and ε > 0 there is a finite set S such that for all f ∊ F, Σ_{x∊S} f(x)/|S| = [∫ f(x) dμ(x)] ± ε. Think of S as a "universal ε-approximator" for integration in F. S can actually be obtained w.h.p. just by sampling a few points from μ. This is a mainstay of computational learning theory. It was later extended by other authors to families of bounded (e.g., [0, 1]-valued) real functions. In this work we establish similar "universal ε-approximators" for families of unbounded nonnegative real functions; in particular, for the families over which one optimizes when performing data classification. (In this case the ε-approximation should be multiplicative.) Specifically, let F be the family of "k-median functions" (or k-means, etc.) on ℝ^d with an arbitrary norm ϱ. That is, any set u_1, …, u_k ∊ ℝ^d determines an f by f(x) = (min_i ϱ(x − u_i))^α. (Here α ≥ 0.) Then for every measure μ on ℝ^d there exists a set S of cardinality poly(k, d, 1/ε) and a measure ν supported on S such that for every f ∊ F, Σ_{x∊S} f(x)ν(x) ∊ (1 ± ε) · ∫ f(x) dμ(x).
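The Vapnik–Chervonenkis statement in the abstract can be illustrated numerically. The sketch below (not from the paper; the family, measure, and sample size are illustrative choices) uses the threshold functions f_t(x) = 1[x ≤ t] on [0, 1], a family of VC dimension 1, with μ the uniform measure, so ∫ f_t dμ = t exactly. A random sample S then additively approximates all these integrals simultaneously:

```python
import random

# Illustrative sketch of the VC-style sampling claim: for the family of
# threshold functions f_t(x) = 1[x <= t] on [0, 1] (VC dimension 1) under the
# uniform measure mu, the integral of f_t is exactly t. A random sample S
# should satisfy |{x in S : x <= t}| / |S| = t +/- eps uniformly over all t.

random.seed(0)
n = 20000                          # sample size; poly(1/eps) suffices in theory
S = sorted(random.random() for _ in range(n))

# The sup over t of |empirical average - t| is attained at sample points,
# so it can be computed exactly from the sorted sample (a KS-type statistic).
eps_achieved = max(
    max(abs((i + 1) / n - x), abs(i / n - x))
    for i, x in enumerate(S)
)
print(eps_achieved)                # small additive error, uniform over the family
```

A single sample thus serves as a universal ε-approximator for every function in the family at once, which is the phenomenon the paper extends to the unbounded, multiplicative setting of k-median and k-means cost functions.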

Additional Information

© 2010 SIAM. Work supported in part by The Open University of Israel's Research Fund (grant no. 46109) and Cisco Collaborative Research Initiative (CCRI). Work supported in part by the NSF.

Attached Files

Published - 1_2E9781611973075_2E50.pdf


Files (506.4 kB)

Additional details

August 19, 2023
October 23, 2023