Polarization in groups of Bayesian agents

Research output: Contribution to journal › Journal article › Research › peer-review

In this paper we present the results of a simulation study of credence developments in groups of communicating Bayesian agents as they update their beliefs about a given proposition p. Based on the empirical literature, one would assume that such groups of rational agents would converge on a view over time, or at least that they would not polarize. This paper presents and discusses surprising evidence that this is not true. Our simulation study shows that groups of Bayesian agents exhibit group polarization behavior under a broad range of circumstances. This is, we think, an unexpected result that raises deeper questions about whether the kind of polarization in question is irrational. If one accepts Bayesian agency as the hallmark of epistemic rationality, then one should infer that the polarization we find is also rational. If, on the other hand, one is inclined to think that there is something epistemically irrational about group polarization, then something must be off in the model employed in our simulation study. We discuss several possible interfering factors, including how epistemic trust is defined in the model. Ultimately, we propose that the notion of Bayesian agency is missing something in general, namely the ability to respond to higher-order evidence.
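To make the setup concrete, here is a minimal sketch of the kind of update rule such a simulation might use. It is an illustration only, not the paper's actual model: the function names, the binary-proposition setup, and the use of a single `reliability` parameter as a stand-in for epistemic trust are all assumptions for the sketch.

```python
def bayes_update(prior, lik_if_p, lik_if_not_p):
    """Posterior credence in p after evidence e, by Bayes' rule:
    P(p|e) = P(e|p)P(p) / [P(e|p)P(p) + P(e|~p)P(~p)]."""
    num = lik_if_p * prior
    return num / (num + lik_if_not_p * (1 - prior))

def update_on_testimony(prior, peer_asserts_p, reliability):
    """Treat a peer's assertion as evidence, with the peer assumed
    to report the truth with probability `reliability` (a toy
    stand-in for epistemic trust, not the paper's trust model)."""
    if peer_asserts_p:
        return bayes_update(prior, reliability, 1 - reliability)
    return bayes_update(prior, 1 - reliability, reliability)

# Example: a peer asserts p. An agent who trusts the peer (r = 0.8)
# raises its credence, while an agent who distrusts it (r = 0.2)
# lowers its credence -- the two move apart on the same evidence.
print(update_on_testimony(0.5, True, 0.8))  # ≈ 0.8
print(update_on_testimony(0.5, True, 0.2))  # ≈ 0.2
```

The example shows one mechanism by which how trust is defined can drive divergence among individually Bayesian agents: identical testimony pushes differently trusting agents in opposite directions.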

Original language: English
Pages (from-to): 1-55
Publication status: Published - Jan 2021

    Research areas

  • Bayesian updating, Epistemic rationality, Epistemic trust, Group polarization, Higher-order evidence
