CaltechAUTHORS
  A Caltech Library Service

Some Results Regarding the Estimation of Densities and Random Variate Generation Using Neural Networks

Magdon-Ismail, Malik and Atiya, Amir (2000) Some Results Regarding the Estimation of Densities and Random Variate Generation Using Neural Networks. California Institute of Technology , Pasadena, CA. (Unpublished) https://resolver.caltech.edu/CaltechCSTR:2000.005

Available formats: Postscript (Submitted Version, 1MB); PDF (Submitted Version, 1MB). See Usage Policy.

Use this Persistent URL to link to this item: https://resolver.caltech.edu/CaltechCSTR:2000.005

Abstract

In this paper we consider two important topics: density estimation and random variate generation. We present a framework that is easily implemented using the familiar multilayer neural network. First, we develop two new methods for density estimation: a stochastic method and a related deterministic method. Both methods are based on approximating the distribution function, the density being obtained by differentiation. In the second part of the paper, we develop new random number generation methods. Our methods do not suffer from some of the restrictions of existing methods. The first method is based on an observed inverse relationship between the density estimation process and the random number generation process; we present two variants of this method, a stochastic and a deterministic version. We propose a second method that is based on formulating the task as a control problem, where a "controller network" is trained to shape a given density into the desired density. We justify the use of all the methods that we propose by providing theoretical convergence results. In particular, we prove that the L∞ convergence to the true density, for both the density estimation and random variate generation techniques, occurs at a rate O((log log N/N)^((1-ε)/2)), where N is the number of data points and ε can be made arbitrarily small for sufficiently smooth target densities. This bound is very close to the optimally achievable convergence rate under similar smoothness conditions. Also, for comparison, the L2 (RMS) convergence rate of a positive kernel density estimator is O(N^(-2/5)) when the optimal kernel width is used. We present numerical simulations to illustrate the performance of the proposed density estimation and random variate generation methods. In addition, we present an extended introduction and bibliography that serves as an overview and reference for the practitioner.
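The core idea the abstract describes, learning the distribution function with a multilayer network, differentiating it to obtain the density, and inverting it to generate variates, can be sketched as follows. This is an illustrative toy implementation under stated assumptions (a one-hidden-layer numpy network regressed onto the empirical CDF with plain gradient descent, finite-difference differentiation, and bisection for inversion), not the authors' exact algorithms; all names and hyperparameters here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.sort(rng.normal(size=500))          # observed data sample
y = np.arange(1, x.size + 1) / x.size      # empirical CDF targets at the sorted points

# One-hidden-layer network with a sigmoid output so F_hat stays in (0, 1).
H = 16
w1 = rng.normal(size=(H, 1)); b1 = np.zeros((H, 1))
w2 = rng.normal(size=(1, H)) * 0.1; b2 = np.zeros((1, 1))

def forward(xs):
    z = np.tanh(w1 @ xs[None, :] + b1)     # hidden activations, shape (H, n)
    u = w2 @ z + b2                        # pre-activation output, shape (1, n)
    return 1.0 / (1.0 + np.exp(-u)), z     # sigmoid output approximates F(x)

lr = 0.5
for _ in range(2000):                      # gradient descent on mean squared error
    f, z = forward(x)
    err = (f - y[None, :]) * f * (1 - f)   # dMSE/du (up to a constant factor)
    gw2 = err @ z.T / x.size
    gb2 = err.mean(axis=1, keepdims=True)
    dz = (w2.T @ err) * (1 - z ** 2)       # backpropagate through tanh layer
    gw1 = dz @ x[:, None] / x.size
    gb1 = dz.mean(axis=1, keepdims=True)
    w1 -= lr * gw1; b1 -= lr * gb1; w2 -= lr * gw2; b2 -= lr * gb2

def F(xs):
    """Learned distribution function estimate."""
    return forward(np.atleast_1d(np.asarray(xs, dtype=float)))[0].ravel()

def density(xs, h=1e-3):
    """Density estimate as the derivative of F (central difference)."""
    return (F(xs + h) - F(xs - h)) / (2 * h)

def sample(n):
    """Generate variates by numerically inverting F: solve F(x) = u by bisection."""
    u = rng.uniform(0.01, 0.99, size=n)
    lo, hi = np.full(n, -10.0), np.full(n, 10.0)
    for _ in range(50):
        mid = (lo + hi) / 2
        below = F(mid) < u
        lo = np.where(below, mid, lo)
        hi = np.where(below, hi, mid)
    return (lo + hi) / 2
```

The monotonicity of the learned F is not enforced here; the paper's stochastic and deterministic algorithms, and their convergence guarantees, address this more carefully.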


Item Type:Report or Paper (Technical Report)
Additional Information:© 2000 California Institute of Technology. Submission Date: September 8, 2000. The authors would like to acknowledge the helpful comments of Dr. Kurt Hornik, Yaser Abu-Mostafa, and the Caltech Learning Systems Group. The authors would like to acknowledge the support of NSF's Engineering Research Center at Caltech.
Group:Computer Science Technical Reports
Funders:
Funding Agency: NSF; Grant Number: UNSPECIFIED
Subject Keywords:density estimation; random number generation; distribution function; multilayer network; neural network; estimation error; convergence rate; stochastic algorithms
DOI:10.7907/Z9GB222G
Record Number:CaltechCSTR:2000.005
Persistent URL:https://resolver.caltech.edu/CaltechCSTR:2000.005
Usage Policy:You are granted permission for individual, educational, research and non-commercial reproduction, distribution, display and performance of this work in any format.
ID Code:26818
Collection:CaltechCSTR
Deposited By: Imported from CaltechCSTR
Deposited On:25 Apr 2001
Last Modified:03 Oct 2019 03:18
