Models, Inference, and Decisions

What We Do

Recent research in cognitive and behavioral sciences is increasingly illuminating the basic mechanisms of human reasoning and cognition, as well as their limitations and systematic deviations from normative theories of rational inference and decision-making. It also raises interesting questions concerning the foundations and methods of different scientific disciplines, and the analysis of scientific reasoning in general.

This line of research brings together theoretical and formal models of inference and decision-making with empirical approaches to the study of human reasoning and cognition. The aim is twofold: to better understand, and possibly improve, how people reason and make choices in different contexts, both in ordinary life and in science; and to clarify and strengthen the methodology and foundations of the cognitive, behavioral, and social sciences.

Topics we work, or plan to work, on include:

  • Formal epistemology and philosophy of science:

    • Bayesian confirmation theory, truthlikeness theory, cognitive decision theory.

    • Philosophy of cognitive, behavioral, and social sciences, including neuroscience, (behavioral) economics, medicine, forensic science, statistics, machine learning, history, textual criticism, etc.

    • Philosophical issues in the foundations, epistemology, and methodology of science: e.g., reverse inference, abduction, analogy, simplicity, replicability, and explainability.

  • Models of reasoning, inference and decision-making:

    • Heuristics and biases, ecological rationality, nudge theory.

    • Experts, science, and society: boosting scientific literacy, managing false and misleading information, assessing expert advice.

    • Legal epistemology and cognitive biases in legal reasoning.


  • Rationality as truth approximation

  • Reasoning, uncertainty, and expert judgment

    • NUTS (Nudge Unit Toscana per la Salute), a joint IMT–ARS Toscana initiative.

Who We Are

  • Principal Investigator, Associate Professor, PhD
  • Assistant Professor, PhD
  • Assistant Professor, PhD
  • Post-Doc Research Fellow, PhD
  • PhD Student
  • PhD Student
  • PhD Student
  • PhD Student
  • PhD Student

Guests and past members

Lina Lissia, Research Collaborator, PhD

What We Publish

Approaching deterministic and probabilistic truth: a unified account

Cevolani, G.; Festa, R. Synthese, 2021. DOI: 10.1007/s11229-021-03298-y
The basic problem of a theory of truth approximation is defining when a theory is “close to the truth” about some relevant domain. Existing accounts of truthlikeness or verisimilitude address this problem, but are usually limited to the problem of approaching a “deterministic” truth by means of deterministic theories. A general theory of truth approximation, however, should arguably also cover cases where either the relevant theories, or “the truth”, or both, are “probabilistic” in nature. As a step forward in this direction, we first present a general characterization of both deterministic and probabilistic truth approximation; then, we introduce a new account of verisimilitude which provides a simple formal framework to deal with this issue in a unified way. The connections of our account with some other proposals in the literature are also briefly discussed.
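To see what a formal answer to "when is a theory close to the truth?" can look like, here is a deliberately elementary toy measure (our illustration, not the account proposed in the paper): represent the complete truth and a theory as stands on n basic claims, and score the theory by the fraction of claims it gets right.

```python
# Toy verisimilitude measure (illustrative only): the truth and a theory
# are vectors of True/False stands on the same basic claims; similarity
# is the fraction of stands on which the theory agrees with the truth.
truth  = [True, True, False, True]   # the (unknown) complete truth
theory = [True, False, False, True]  # a theory that gets one claim wrong

def similarity(theory, truth):
    matches = sum(a == b for a, b in zip(theory, truth))
    return matches / len(truth)

print(similarity(theory, truth))  # 0.75
```

On this crude measure, a false theory (one wrong claim) can still be highly verisimilar, which is the intuition the paper's much more general framework is designed to capture.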

A Millian Look at the Logic of Clinical Trials

Festa, R.; Cevolani, G.; Tambolo, L. Uncertainty in Pharmacology, 2020. DOI: 10.1007/978-3-030-29179-2_9
The use of a certain drug or treatment in the cure of a disease or condition should be based on appropriate tests of the hypothesis that said drug or treatment is efficacious in the cure of the disease or condition at hand. In this paper, we aim at elucidating how such “efficacy hypotheses” are tested and evaluated, especially in clinical trials. More precisely, we shall suggest that the principles governing the assessment of efficacy hypotheses, and, more generally, hypotheses of statistical causality, are provided by an appropriate statistical version of such a venerable procedure as the method of difference put forward by John Stuart Mill.
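Mill's method of difference licenses a causal inference when two situations agree in every respect except one. A toy sketch of its statistical analogue (our illustration with made-up numbers, not the paper's analysis): two groups assumed alike in background factors differ only in receiving the drug, so a marked difference in cure frequency points to the drug as a cause of recovery.

```python
# Statistical "method of difference" (toy numbers, hypothetical trial):
# the two groups are assumed alike except for receiving the drug.
treated = {"n": 200, "cured": 130}
control = {"n": 200, "cured": 80}

rate_treated = treated["cured"] / treated["n"]   # 0.65
rate_control = control["cured"] / control["n"]   # 0.40
risk_difference = rate_treated - rate_control    # ~0.25

print(round(risk_difference, 2))
```

A real trial would of course add a significance test and randomization to back the "alike in all other respects" assumption; the point here is only the Millian comparison of outcome frequencies across otherwise-matched groups.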

Generalized information theory meets human cognition

Crupi, V.; Nelson, J.; Meder, B.; Cevolani, G.; Tentori, K. Cognitive Science, 2018. DOI: 10.1111/cogs.12613
Searching for information is critical in many situations. In medicine, for instance, careful choice of a diagnostic test can help narrow down the range of plausible diseases that the patient might have. In a probabilistic framework, test selection is often modeled by assuming that people's goal is to reduce uncertainty about possible states of the world. In cognitive science, psychology, and medical decision making, Shannon entropy is the most prominent and most widely used model to formalize probabilistic uncertainty and the reduction thereof. However, a variety of alternative entropy metrics (Hartley, Quadratic, Tsallis, Rényi, and more) are popular in the social and the natural sciences, computer science, and philosophy of science. Particular entropy measures have been predominant in particular research areas, and it is often an open issue whether these divergences emerge from different theoretical and practical goals or are merely due to historical accident. Cutting across disciplinary boundaries, we show that several entropy and entropy reduction measures arise as special cases in a unified formalism, the Sharma–Mittal framework. Using mathematical results, computer simulations, and analyses of published behavioral data, we discuss four key questions: How do various entropy models relate to each other? What insights can be obtained by considering diverse entropy models within a unified framework? What is the psychological plausibility of different entropy models? What new questions and insights for research on human information acquisition follow? Our work provides several new pathways for theoretical and empirical research, reconciling apparently conflicting approaches and empirical findings within a comprehensive and unified information‐theoretic formalism.
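As a rough numerical companion (our sketch, not material from the paper; the parameterization by an "order" r and a "degree" t follows one common convention), the following shows how Rényi, Tsallis, and Shannon entropies fall out of the Sharma–Mittal form as special or limiting cases:

```python
import math

def sharma_mittal(p, r, t):
    """Sharma-Mittal entropy (nats) of a probability vector p,
    with order r and degree t (both assumed != 1)."""
    s = sum(pi ** r for pi in p if pi > 0)
    return (s ** ((1 - t) / (1 - r)) - 1) / (1 - t)

def shannon(p):
    """Shannon entropy, in nats."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def renyi(p, r):
    """Renyi entropy of order r (r != 1), in nats."""
    return math.log(sum(pi ** r for pi in p if pi > 0)) / (1 - r)

def tsallis(p, q):
    """Tsallis entropy of order q (q != 1)."""
    return (1 - sum(pi ** q for pi in p if pi > 0)) / (q - 1)

p = [0.5, 0.25, 0.25]

# Tsallis is the special case t = r:
assert abs(sharma_mittal(p, 2.0, 2.0) - tsallis(p, 2.0)) < 1e-9
# Renyi is the limiting case t -> 1:
assert abs(sharma_mittal(p, 2.0, 1.0 + 1e-6) - renyi(p, 2.0)) < 1e-4
# Shannon is the joint limit r, t -> 1:
assert abs(sharma_mittal(p, 1.0 + 1e-6, 1.0 + 1e-6) - shannon(p)) < 1e-4
```

Uncertainty reduction for a diagnostic test can then be modeled as expected entropy before minus after the test, with the choice of (r, t) encoding different assumptions about how uncertainty should be quantified.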

Fallibilism, verisimilitude, and the Preface Paradox

Cevolani, G. Erkenntnis, 2017. DOI: 10.1007/s10670-016-9811-0
The Preface Paradox apparently shows that it is sometimes rational to believe logically incompatible propositions. In this paper, I propose a way out of the paradox based on the ideas of fallibilism and verisimilitude (or truthlikeness). More precisely, I defend the view that a rational inquirer can fallibly believe or accept a proposition which is false, or likely false, but verisimilar; and I argue that this view makes the Preface Paradox disappear. Some possible objections to my proposal, and an alternative view of fallible belief, are briefly discussed in the final part of the paper.
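The probabilistic tension behind the paradox can be seen with simple arithmetic (a toy illustration with hypothetical numbers, not taken from the paper): even if an author assigns high credence to each individual claim in a book, the probability that every claim is true collapses as the claims multiply, so believing each claim while believing the preface's "at least one of these is false" looks reasonable.

```python
# Preface-style tension (toy, hypothetical numbers): many claims, each
# highly credible, whose conjunction is nonetheless almost surely false.
n = 300        # number of independent claims in the book
p_each = 0.99  # the author's credence in each single claim

p_all_true = p_each ** n  # probability that every claim is true

assert p_all_true < 0.05  # yet each claim individually is 99% credible
```

(The independence assumption is a simplification; correlated claims change the numbers but not the qualitative point.)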

Our Collaborations

  • Vincenzo Crupi - University of Turin

  • Jan Sprenger - University of Turin

  • Carlo Martini - Università Vita Salute San Raffaele (Milan)

  • Giovanni Valente - Politecnico di Milano

Our Talks

  • D. Coraci - Reverse Inference and Bayesian Confirmation in Cognitive Neuroscience; Symposium: From Brain Structures to Cognitive Functions: Philosophical and Neuroscientific Perspectives on Reverse Inference, European Society for Philosophy and Psychology (ESPP) Conference, Leipzig (Germany), 2021

  • A. Demichelis - Vaccine Hesitancy and Trust: Lessons from the COVID-19 Pandemic. SAS Conference 2021 - Trust in Science. High Performance Computing Center, Stuttgart. 2021

  • C. Colombo, M. Fanghella, F. Guala, C. Sinigaglia - The Role of Strategic Thinking and Motor Information in Interpersonal Coordination, SIPF Conference, Palermo, 2021

  • F. Panizza, The contribution of fact-checking tips and monetary incentives to the recognition of online scientific disinformation. INEM 2021 Conference, Arizona State University, 2021.

  • E. Peruzzi, G. Cevolani - Defending De-Idealization in Economic Modeling: a Case Study. INEM 2021 Conference, Arizona State University, 2021.

  • G. Cevolani, D. Coraci - Reverse Inference, Bayesian Confirmation, and the Neuroscience of Moral Reasoning. 2020 International Neuroethics Society Conference, online, October 22-23, 2020.