CaltechAUTHORS
  A Caltech Library Service

Experience-weighted Attraction Learning in Normal Form Games

Camerer, Colin F. and Ho, Teck-Hua (1997) Experience-weighted Attraction Learning in Normal Form Games. Social Science Working Paper, 1003. California Institute of Technology, Pasadena, CA. (Unpublished) https://resolver.caltech.edu/CaltechAUTHORS:20170814-161157311

PDF (sswp 1003 - Dec. 1997) - Submitted Version (369Kb)
See Usage Policy.


Abstract

We describe a general model, 'experience-weighted attraction' (EWA) learning, which includes reinforcement learning and a class of weighted fictitious play belief models as special cases. In EWA, strategies have attractions which reflect prior predispositions, are updated based on payoff experience, and determine choice probabilities according to some rule (e.g., logit). A key feature is a parameter δ which weights the strength of hypothetical reinforcement of strategies which were not chosen, according to the payoff they would have yielded. When δ = 0, choice reinforcement results. When δ = 1, levels of reinforcement of strategies are proportional to expected payoffs given beliefs based on past history. Another key feature is the growth rate of attractions. The EWA model controls the growth rates by two decay parameters, φ and ρ, which depreciate attractions and the amount of experience separately. When φ = ρ, belief-based models result; when ρ = 0, choice reinforcement results. Using three data sets, parameter estimates of the model were calibrated on part of the data and used to predict the rest. Estimates of δ are generally around .50, φ around 1, and ρ varies from 0 to φ. Choice reinforcement models often outperform belief-based models in the calibration phase and underperform in out-of-sample validation. Both special cases are generally rejected in favor of EWA, though sometimes belief models do better. EWA is able to combine the best features of both approaches, allowing attractions to begin and grow flexibly as choice reinforcement does, but reinforcing unchosen strategies substantially as belief-based models implicitly do.
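The updating rule described in the abstract can be sketched in a few lines. The following is a minimal illustration, not the authors' estimation code: `ewa_update` applies one round of EWA updating, where each strategy's attraction is depreciated by φ, reinforced by its realized or foregone payoff (the latter weighted by δ), and normalized by an experience weight that itself decays at rate ρ; `logit_probs` maps attractions to choice probabilities. The function and parameter names (and the response sensitivity `lam`) are illustrative choices, not from the paper.

```python
import math

def ewa_update(attractions, n_exp, chosen, payoffs,
               delta=0.5, phi=1.0, rho=0.0):
    """One round of experience-weighted attraction (EWA) updating (a sketch).

    attractions : current attractions A_j for each strategy j
    n_exp       : experience weight N from the previous round
    chosen      : index of the strategy actually played
    payoffs     : payoff each strategy would have earned this round
    delta       : weight on hypothetical reinforcement of unchosen strategies
    phi, rho    : decay rates for attractions and experience, respectively
    """
    n_new = rho * n_exp + 1.0
    new_attractions = []
    for j, (a, pi) in enumerate(zip(attractions, payoffs)):
        # Chosen strategies are reinforced by their full payoff;
        # unchosen ones by delta times the payoff they would have yielded.
        weight = 1.0 if j == chosen else delta
        new_attractions.append((phi * n_exp * a + weight * pi) / n_new)
    return new_attractions, n_new

def logit_probs(attractions, lam=1.0):
    """Logit choice rule: probabilities proportional to exp(lam * A_j)."""
    exps = [math.exp(lam * a) for a in attractions]
    total = sum(exps)
    return [e / total for e in exps]
```

The special cases in the abstract fall out of the parameters: with δ = 0 and ρ = 0, only the chosen strategy is reinforced and attractions cumulate as in choice reinforcement; with δ = 1 and φ = ρ, all strategies are reinforced by their (foregone) payoffs, as weighted fictitious play implies.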


Item Type:Report or Paper (Working Paper)
Related URLs:
URL: http://resolver.caltech.edu/CaltechAUTHORS:20110210-093101968 (Related Item - Published Version)
ORCID:
Camerer, Colin F.: 0000-0003-4049-1871
Ho, Teck-Hua: 0000-0001-5210-4977
Additional Information:Revised version. Original dated to March 1997. This research was supported by NSF grants SBR-9511001, 9511137, and 9601236, and the hospitality of the Center for Advanced Study in Behavioral Sciences. We have had helpful discussions with Chris Anderson, Bruno Broseta, Vince Crawford, Ido Erev, Drew Fudenberg, Dave Grether, Elef Gkioulekas, Yuval Rottenstreich, Rakesh Sarin, John Van Huyck, and Robert Weber, and research assistance from Hongjai Rhee, Chris Anderson, and Juin Kuan Chong. Barry Sopher generously provided data for us to analyze. Many helpful comments were received from anonymous referees and seminar participants at the Society for Mathematical Psychology conference (July 1996), the Russell Sage Foundation Summer Institute in Behavioral Economics (July 1996), the Economic Science Association meetings (October 1996), the Marketing Science Conference (March 1997), the Bonn Conference on Theories of Bounded Rationality (May 1997), the FUR VIII Conference in Mons, Belgium (July 1997), the Gerzensee ESSET Economic Theory Conference (July 1997), and seminars at Caltech, Harvard and Washington Universities, and the Universities of Alicante, Autonoma, California (Berkeley, Los Angeles), Chicago, Pennsylvania, Pittsburgh, Pompeu Fabra, Texas (Austin) and Texas A&M. Published as Camerer, C., & Hua Ho, T. (1999). Experience-weighted attraction learning in normal form games. Econometrica, 67(4), 827-874.
Group:Social Science Working Papers
Funders:
NSF: SBR-9511001
NSF: SBR-9511137
NSF: SBR-9601236
Subject Keywords:Learning, behavioral game theory, reinforcement learning, fictitious play
Series Name:Social Science Working Paper
Issue or Number:1003
Record Number:CaltechAUTHORS:20170814-161157311
Persistent URL:https://resolver.caltech.edu/CaltechAUTHORS:20170814-161157311
Usage Policy:No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code:80389
Collection:CaltechAUTHORS
Deposited By: Jacquelyn Bussone
Deposited On:15 Aug 2017 16:20
Last Modified:03 Oct 2019 18:30
