Pittsburgh Formal Epistemology Workshop

Adam Bjorndahl, Krzysztof Mierzewski and I are running a formal epistemology workshop/lecture series (the Pittsburgh Formal Epistemology Workshop, aka PFEW), affiliated with the CMU Center for Formal Epistemology.

This semester, all talks will be held in a hybrid format. The schedule is below. If you have any questions or would like to be added to our mailing list, please send us an email.

Fall 2024

  • Friday, December 6, 1:00-2:30pm: Mario Günther

    Title: Legal Proof Should Be Justified Belief of Guilt

    Abstract: We argue that legal proof should be tantamount to justified belief of guilt. A defendant should be found guilty just in case it is justified to believe that the defendant is guilty. Our notion of justified belief implies a threshold view on which justified belief requires high credence, but mere statistical evidence does not give rise to justified belief.

  • Monday, November 25, 4:00-5:30pm: Jan-Willem Romeijn

    Title: Overfitting in statistics and machine learning, or The complexities of complexity (Joint work with Daniel Herrmann and Tom Sterkenburg)

    Abstract: Machine learning (ML) methods seem to defy statistical lore on the risks of overfitting. They often have many more adjustable parameters than are needed for fitting the data perfectly but, after showing predictive loss in the realm of normal overfitting, highly overparameterized ML models show surprisingly good predictive performance. In my talk I will review several attempts by statisticians and computer scientists to explain this so-called “double descent phenomenon” and distill three philosophical lessons that each derive from seeking continuity between statistics and ML. One is that our conception of model capacity needs an update, as it harbors a variety of ideas about complexity. A further lesson is that we have to flip the script on the problem of underdetermination: the use of unidentified statistical models offers predictive advantages, and this invites a fresh look at our empiricist ideals. A final lesson relies on De Finetti's representation theorem and on basic insights into the problem of induction: understanding the success of machine learning requires that we delve into processes of model and data construction.
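
    Double descent is easy to reproduce in a toy setting. The sketch below is purely illustrative (not from the talk; every modeling choice is an assumption): minimum-norm least squares on random ReLU features, with test error traced as the number of features passes the interpolation threshold.

      # Toy double descent: minimum-norm least squares on random ReLU features.
      # All settings here are illustrative assumptions, not the speaker's setup.
      import numpy as np

      rng = np.random.default_rng(0)
      n_train, n_test, d = 40, 500, 5

      def make_data(n):
          X = rng.normal(size=(n, d))
          y = X @ np.ones(d) + 0.5 * rng.normal(size=n)   # linear signal plus noise
          return X, y

      X_tr, y_tr = make_data(n_train)
      X_te, y_te = make_data(n_test)

      for p in [5, 10, 20, 40, 80, 160, 640]:             # number of random features
          W = rng.normal(size=(d, p))                     # fixed random projection
          F_tr, F_te = np.maximum(X_tr @ W, 0), np.maximum(X_te @ W, 0)
          beta = np.linalg.pinv(F_tr) @ y_tr              # minimum-norm interpolating fit
          mse = np.mean((F_te @ beta - y_te) ** 2)
          print(f"{p:4d} features: test MSE = {mse:.2f}")

    Test error typically worsens as the number of features approaches the training-set size (the interpolation threshold) and then improves again deep in the overparameterized regime, which is the shape of the double descent curve.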

  • Friday, November 22, 1:00-2:30pm: Aydin Mohseni

    Title: Why are the Social Sciences Hard? (Joint work with Daniel Herrmann and Gabe Avakian Aarona)

    Abstract: We present two novel hypotheses for why the human sciences are hard: (1) we are pre-committed to a very specific domain of prediction tasks in the human sciences, which limits a powerful strategy for making scientific progress—changing the prediction task to something more tractable; and (2) due to evolutionary pressures, human baseline performance is relatively high for many of the ‘low-hanging fruit’ prediction tasks concerning human behavior, making progress beyond this baseline challenging. We provide a formal framework for reasoning about the difficulty of disciplines and the impressiveness of their achievements in terms of their success at particular prediction tasks.

  • Thursday, November 7, 5:00-6:30pm: Branden Fitelson

    Title: Probabilities of Conditionals and Conditional Probabilities—Revisited

    Abstract: Lewis' (1976) triviality argument against The Equation (also known as Adams' thesis) rests on an implausible assumption about the nature of (epistemic) rational requirements. Interestingly, Lewis (1980) later rejected this assumption. In his discussion of the Principal Principle, Lewis makes a weaker and more reasonable assumption about the nature of rational requirements. In this paper, I explain how to apply the insights of Lewis (1980) to repair Lewis (1976). This leads to a more reasonable rendition of The Equation—one that is (a) immune to triviality and (b) a better candidate for a (bona fide) rational requirement.

  • Friday, November 1, 1:00-2:30pm: Samuli Reijula

    Title: Modeling cognitive diversity in group problem solving

    Abstract: According to the diversity-beats-ability theorem, groups of diverse problem solvers can outperform groups of high-ability problem solvers (Hong and Page 2004). This striking claim about the power of cognitive diversity is highly influential both within and outside academia, from democratic theory to management of teams in professional organizations. Our replication and analysis of the models used by Hong and Page suggests, however, that both the binary string model and its one-dimensional variant are inadequate for exploring the trade-off between cognitive diversity and ability. Diversity may sometimes beat ability, but the models fail to provide reliable evidence of if and when it does so. We suggest ways in which these important model templates can be improved.

    The talk is based on this paper: https://escholarship.org/content/qt84g365px/qt84g365px.pdf. More information about my research can be found at www.samulireijula.net.
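
    A compact way to get a feel for the models at issue is to re-implement the one-dimensional (ring) variant and compare a group of the highest-ability searchers with a randomly drawn, more diverse group. The sketch below is a simplified reconstruction under stated assumptions (a ring of 200 random values, heuristics of three distinct step sizes up to 12, relay-style group search), not the replication discussed in the talk.

      # Simplified one-dimensional Hong-Page-style search model (illustrative only).
      import itertools, random

      random.seed(1)
      N, K, L, GROUP = 200, 3, 12, 9        # ring size, heuristic length, max step, group size
      landscape = [random.uniform(0, 100) for _ in range(N)]

      def climb(start, heuristic):
          """Apply the heuristic's steps greedily until no step improves the value."""
          pos, improved = start, True
          while improved:
              improved = False
              for step in heuristic:
                  if landscape[(pos + step) % N] > landscape[pos]:
                      pos = (pos + step) % N
                      improved = True
          return pos

      def ability(heuristic):
          """Average value reached over all starting points."""
          return sum(landscape[climb(s, heuristic)] for s in range(N)) / N

      def group_performance(group):
          """Relay search: agents take turns from the current point until nobody improves."""
          total = 0.0
          for start in range(N):
              pos = start
              while True:
                  new = pos
                  for h in group:
                      new = climb(new, h)
                  if new == pos:
                      break
                  pos = new
              total += landscape[pos]
          return total / N

      agents = sorted(itertools.permutations(range(1, L + 1), K), key=ability, reverse=True)
      print("best-agent group:", round(group_performance(agents[:GROUP]), 2))
      print("random (diverse) group:", round(group_performance(random.sample(agents, GROUP)), 2))

    Whether the random group in fact beats the best-agent group is sensitive to the parameter choices, which echoes the abstract's point that these models do not settle if and when diversity beats ability.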

  • Friday, October 25, 1:00-2:30pm: Max Noichl

    Title: Towards Empirical Robustness in Network Epistemology (Joint work with Ignacio Quintana and Hein Duijf)

    Abstract: One of the central papers in simulation studies of science argues that less communication can lead to higher reliability. More generally, simulation studies have been used to explore which communication networks enhance collective reliability and speed of convergence. However, this literature has largely concentrated on relatively simple network structures (e.g., cycles, wheels, full graphs), which bear little resemblance to real social networks. Does less communication also lead to higher reliability in real social networks? In this talk, we provide the first results concerning the empirical robustness of these findings with respect to real social networks.

    We develop a novel method to perform this empirical robustness analysis. First, we use citation graphs to depict empirical networks commonly discussed in the literature as examples of lagging discovery—namely, one concerning the bacterial causes of peptic ulcers and another concerning the prolonged history of the perceptron. Second, we develop a new method to generate a sample of “similar” networks, based on the optimization of generative network models toward the sampled empirical ones. Third, we work out the collective reliability and speed of convergence of these communication networks by running simulations. Finally, we analyze the data about these networks to determine which outcomes can be expected in real networks and which network properties (e.g., degree heterogeneity, clustering, etc.) strongly affect collective reliability and speed of convergence.
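
    For readers new to this literature, the following minimal sketch (all parameters are illustrative assumptions, not the authors' model) shows the kind of simulation whose robustness is at issue: agents on a network choose between a well-understood arm with success rate 0.5 and an uncertain arm whose true rate is 0.6, hold Beta beliefs about the latter, and pool results with their neighbors, so that reliability can be compared across network structures.

      # Minimal Zollman-style network bandit simulation (illustrative assumptions only).
      import random

      def run(neighbors, rounds=500, pulls=10, p_new=0.6):
          n = len(neighbors)
          beliefs = [[random.uniform(1, 4), random.uniform(1, 4)] for _ in range(n)]  # Beta(a, b)
          for _ in range(rounds):
              data = []
              for i in range(n):
                  a, b = beliefs[i]
                  if a / (a + b) > 0.5:                   # uncertain arm looks better: test it
                      succ = sum(random.random() < p_new for _ in range(pulls))
                      data.append((i, succ))
              for i, succ in data:                        # results are shared over the network
                  for j in set(neighbors[i]) | {i}:
                      beliefs[j][0] += succ
                      beliefs[j][1] += pulls - succ
          return sum(a / (a + b) > 0.5 for a, b in beliefs) / n  # share holding the true belief

      def cycle(n):
          return [[(i - 1) % n, (i + 1) % n] for i in range(n)]

      def complete(n):
          return [[j for j in range(n) if j != i] for i in range(n)]

      random.seed(0)
      for name, g in [("cycle", cycle(10)), ("complete", complete(10))]:
          avg = sum(run(g) for _ in range(100)) / 100
          print(f"{name:8s}: average share converging to the better arm = {avg:.2f}")

    The talk's question is whether conclusions drawn from such stylized graphs survive once the cycle and complete graph are replaced by networks estimated from citation data.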

  • Friday, October 4, 1:00-2:30pm: Ahmee Christensen

    Title: Logics of Knowability

    Abstract: Knowability has a significant history in the field of epistemic logic dating back to the introduction of Fitch's paradox in 1963. More recently it has received attention from the perspectives of dynamic epistemic logic and epistemic temporal logic. However, existing treatments of knowability either address a knowability-like concept that deviates from an intuitive understanding of knowability in some important way or take knowability to be a complex notion merely definable in the language. In this paper, we take `it is knowable that' as a primitive modality and offer axiomatizations of two knowability logics—one of knowledge and knowability and one of pure knowability. We prove that these logics are sound and complete with respect to a distinguished subclass of birelational frames. These proofs are hopefully of technical interest, as well; because knowability is, we argue, best understood as a non-normal modality, the completeness proofs require novel model constructions.

  • Friday, September 20, 1:00-2:30pm: Daniel Halpern

    Title: Aggregating Preferences with Limited Queries

    Abstract: Social choice theory studies how to aggregate individual preferences into a collective decision for society. Traditionally, this assumes full access to each individual’s complete set of preferences. However, modern online platforms promoting civic participation, such as pol.is, aim to solve social choice problems that do not fit neatly into this framework. These platforms aggregate complex preferences over a vast space of alternatives, rendering it infeasible to learn any individual's preferences completely. Instead, preferences are elicited by asking each user a simple query about a small subset of their preferences.

    In this talk, I will present a simple model for analyzing what is possible in these scenarios, along with a variety of positive and negative results. The talk covers two recent papers:

    EC’24 paper on ranked preferences: https://arxiv.org/abs/2402.11104
    AAAI’23 paper on approval preferences: https://arxiv.org/abs/2211.15608

    Contributions include: 

    - Positive algorithmic results: Efficient algorithms that produce representative outcomes with limited queries.

    - Information-theoretic impossibilities: Fundamental limits on what can be learned, regardless of the number of queries.

    - Query-complexity lower bounds: Situations where, even if it is possible in theory to achieve a desired outcome, an exponential number of queries may be required, making it practically infeasible.

    No prior knowledge of social choice will be assumed.
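
    As a toy illustration of the setting (not the algorithms from the papers above; every name and parameter below is an assumption), the sketch gives each voter a hidden ranking, asks each of them only a handful of random pairwise queries, and aggregates the answers into a score-based ranking.

      # Limited-query preference aggregation: a toy sketch, not the papers' methods.
      import random
      from collections import defaultdict

      random.seed(0)
      m, n_voters, k = 8, 200, 5                      # alternatives, voters, queries per voter
      alternatives = list(range(m))
      truth = alternatives[:]                         # a common "ground truth" ranking

      def noisy_ranking():
          """A voter's hidden ranking: the ground truth with a few random swaps."""
          r = truth[:]
          for _ in range(3):
              i, j = random.sample(range(m), 2)
              r[i], r[j] = r[j], r[i]
          return r

      score = defaultdict(int)
      for _ in range(n_voters):
          pos = {alt: idx for idx, alt in enumerate(noisy_ranking())}
          for _ in range(k):                          # k random pairwise queries per voter
              a, b = random.sample(alternatives, 2)
              score[a if pos[a] < pos[b] else b] += 1

      estimate = sorted(alternatives, key=lambda alt: -score[alt])
      print("ground truth:", truth)
      print("estimate    :", estimate)

    With enough voters, even a few queries each typically recover something close to the underlying ranking; the results listed above concern when, and at what query cost, such recovery is possible at all.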

  • Friday, September 6, 1:00-2:30pm: Siddharth Namachivayam

    Title: Topological Semantics For Asynchronous Common Knowledge

    Abstract: Common knowledge as usually defined has the property that it must arise synchronously, i.e. someone cannot know P is common knowledge without everyone knowing P is common knowledge. Recent work by Gonczarowski and Moses 2024 (G&M 2024) proposes redefining common knowledge so it can arise asynchronously at different times for different agents. Why? We think of common knowledge as guiding coordination. So if we would like to guide coordination asynchronously, we ought to define common knowledge asynchronously. In this talk, we analyze a Byzantine generals-like learning game where agents must coordinate on correctly reporting that a proposition is true asynchronously. We argue that the worlds where agents can successfully coordinate in some equilibrium of this game using no retractions should be thought of as the worlds where the proposition can become asynchronous common knowledge. In the course of doing so, we develop a purely topological semantics for asynchronous common knowledge which makes no explicit reference to time. Our topological semantics correspond to detemporalizing G&M 2024’s semantics but also naturally admit a notion of asynchronous common belief. We conclude by showing that the worlds where agents can successfully coordinate in some equilibrium of our learning game using a fixed budget of retractions coincide with the worlds where the proposition is true and can become asynchronous common belief.

Spring 2024

  • Friday, April 19, 1:00-2:30pm: Caspar Oesterheld

    Title: Can de se choice be ex ante reasonable in games of imperfect recall? A complete analysis

    Abstract: In this paper, we study games of imperfect recall, such as the absent-minded driver or Sleeping Beauty. We can study such games from two perspectives. From the ex ante perspective (a.k.a. the planning stage) we can assess entire policies from the perspective of the beginning of the scenario. For example, we can assess which policies are ex ante optimal and which are Dutch books (i.e., lose money with certainty when it would be possible to walk away with a guaranteed non-negative reward). This perspective is conceptually unproblematic. The second is the de se perspective (a.k.a. the action stage), which tries to assess individual choices from any given decision point in the scenario. How this is to be done is much more controversial. Multiple different theories have been proposed, both for how to form beliefs and how to choose based on these beliefs. To resolve such disagreements, multiple authors have shown results about whether particular de se theories satisfy ex ante standards of rational choice. For example, Piccione and Rubinstein (1997) show that the ex ante optimal policy is always “modified multiself consistent”. In the terminology of the present paper (and others in this literature), they show that the ex ante optimal policy is always compatible with choosing according to causal decision theory and forming beliefs according to generalized thirding (a.k.a. the self-indication assumption). In this paper, we aim to give a complete picture of which of the proposed de se theories match the ex ante standards. Our first main novel result is that the ex ante optimal policy is always compatible with choosing according to evidential decision theory and forming beliefs according to generalized double-halfing (a.k.a. compartmentalized conditionalization and the minimal-reference-class self-sampling assumption). Second, we show that assigning beliefs according to generalized single-halfing (a.k.a. the non-minimal reference class self-sampling assumption) can avoid the Dutch book of Draper and Pust (2008). Nevertheless, we show that there are other Dutch books against agents who form beliefs according to generalized single-halfing, regardless of whether they choose according to causal or evidential decision theory.
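
    For orientation, the standard absent-minded driver example makes the ex ante perspective concrete (the payoffs below are the usual textbook ones, used here purely for illustration): exiting at the first intersection pays 0, exiting at the second pays 4, continuing past both pays 1, and the driver cannot distinguish the two intersections. A policy is a single probability p of continuing, with ex ante value

      V(p) \;=\; (1-p)\cdot 0 \;+\; p(1-p)\cdot 4 \;+\; p^{2}\cdot 1 \;=\; 4p - 3p^{2},

    which is maximized at p = 2/3 with value 4/3. The de se theories compared in the paper differ over what the driver at an intersection should believe about which intersection they are at, and over which choice those beliefs license.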

  • Friday, March 29, 1:00-2:30pm: Mariangela Zoe Cocchiaro

    Title: Fail again. Fail better. But fail how?

    Abstract: As the so-called ‘Defeat’ assumption in the epistemology debate suggests, peer disagreement often functions as a sort of litmus paper for detecting the presence of a defective attitude. In this talk, I scrutinize the exact nature of this defective attitude—and of the credal version of ‘Defeat’ stemming from it—when we operate in a fine-grained model of belief. Firstly, I show how the question as to the nature of the defectiveness of the credences in these cases falls within the scope of the epistemology debate. Then, after claiming that the fairly obvious appeal to inaccuracy comes with philosophically heavy commitments, I turn to what credences are taken to be for a principled answer.

  • Friday, March 15, 1:00-2:30pm: Jessica Collins

    Title: Imaging is Alpha + Aizerman

    Abstract: I give a non-probabilistic account of the imaging revision process. Most familiar in its various probabilistic forms, imaging was introduced by David Lewis (1976) as the form of belief revision appropriate for supposing subjunctively that a hypothesis be true. It has played a central role in the semantics of subjunctive conditionals, in causal decision theory, and, less well known to philosophers, in the computational theory of information retrieval. In the economics literature, non-probabilistic imaging functions have been called “pseudo-rationalizable choice functions”. I show that the imaging functions are precisely those which satisfy both Sen’s Alpha Principle (aka “Chernoff’s Axiom”) and the Aizerman Axiom. This result, a version of which was proved in Aizerman and Malishevsky (1981), allows us to see very clearly the relationship between non-probabilistic imaging and AGM revision: AGM revision is Alpha + Beta. Mark Aizerman (1913-1992) was a Soviet cyberneticist at the Institute for Control Sciences, Moscow.
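
    For reference, the choice-function axioms in play, stated for a choice function C and menus A ⊆ B (the notation here is an editorial gloss, not the speaker's):

      \text{(Alpha)}\quad    x \in C(B) \text{ and } x \in A \subseteq B \;\Rightarrow\; x \in C(A)
      \text{(Aizerman)}\quad C(B) \subseteq A \subseteq B \;\Rightarrow\; C(A) \subseteq C(B)
      \text{(Beta)}\quad     x, y \in C(A), \; A \subseteq B, \; y \in C(B) \;\Rightarrow\; x \in C(B)

    Informally, Alpha says that chosen options survive the removal of other alternatives, while Aizerman says that discarding only unchosen alternatives cannot make new options chosen.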

  • Friday, March 1, 1:00-2:30pm: Alexandru Baltag

    Title: The Dynamic Logic of Causality: from counterfactual dependence to causal interventions

    Abstract: Pearl's causal models have become the standard/dominant approach to representing and reasoning about causality. The setting is based on the static notion of causal graphs, but it also makes essential use of the dynamic notion of causal interventions. In particular, Halpern and Pearl used this setting to define and investigate various notions of actual causality.

    As noted by many, causal interventions have an obvious counterfactual flavour. But... their relationship with the counterfactual conditionals (a la Lewis-Stalnaker) has remained murky. A lot of confusion still surrounds this topic.

    The purpose of this talk is threefold:

    1. understand interventions as dynamic modalities (rather than conditionals);
    2. elucidate the relationship between intervention modalities and counterfactual conditionals;
    3. formalize and completely axiomatize a Causal Intervention Calculus (CIC) that is general enough to allow us to capture both interventions and causal conditionals, but also expressive enough to capture the various notions of actual causality proposed in the literature.

  • Friday, February 9, 1:00-2:30pm: Sven Neth

    Title: Against Coherence

    Abstract: Coherence demands that an agent not make sequences of choices which lead to sure loss. However, Coherence conflicts with the plausible principle that agents are allowed to be uncertain about how they will update. Therefore, we should give up Coherence.

Fall 2023

  • Friday, December 8, 1:00-2:30pm: Josiah Lopez-Wild

    Title: A Computable von Neumann-Morgenstern Representation Theorem

    Abstract: The von Neumann-Morgenstern Representation Theorem (hereafter “vNM theorem”) is a foundational result in decision theory that links rational preferences to expected utility theory. It states that whenever an agent’s preferences over lotteries satisfy a set of natural axioms, they can be represented as maximizing expected utility with respect to some function. The theorem provides us with a behavioral interpretation of utility by grounding the notion in choice behavior. This talk presents a computable version of the vNM theorem. Using techniques from computable analysis I show that a natural computability requirement on the agent’s preferences implies that there is a computable utility function that the agent maximizes in expectation, and that in some cases this computability condition is also necessary. I discuss the philosophical significance of computable representation theorems for decision theory, and finish with a discussion of other representation theorems that I suspect can be effectivized.
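
    For orientation, the classical statement being effectivized (standard formulation, included here for reference): a preference relation \succsim on lotteries over a finite outcome set satisfies completeness, transitivity, continuity, and independence if and only if there is a utility function u, unique up to positive affine transformation, such that

      L \succsim M \iff \sum_{x} L(x)\,u(x) \;\geq\; \sum_{x} M(x)\,u(x).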

  • Friday, November 17, 1:00-2:30pm: Marta Sznajder

    Title: Janina Hosiasson-Lindenbaum: Building a career in inductive logic in the 1920s and 1930s 

    Abstract: Janina Hosiasson-Lindenbaum (1899-1942) was a Polish philosopher working on inductive reasoning and the interpretation of probability. As a member of the Lvov-Warsaw School, she was an active participant in the logical empiricist movement, broadly construed. Most of her philosophical work concerned the logical aspects of inductive reasoning and the nature of probability. In this talk, I will present her academic career from both a historical and a philosophical perspective.

    In spite of her extensive publication record—spanning more than thirty articles, conference talks, texts in popular magazines, and book translations—and her wide network of academic contacts, she never held a post at a university and remained a high school teacher between obtaining her PhD in 1926 and the beginning of World War II. I will give an overview of the strategies she used to promote her work and establish herself as an internationally recognized philosopher, as well as the obstacles that she faced—culminating in the failed efforts to obtain refugee scholar funding from the Rockefeller Foundation in 1940.

    While Hosiasson-Lindenbaum has been recognised as an early adopter and developer of subjectivism, her philosophical work spans a much broader range. As it turns out, she was engaged in some ways with almost all significant developments in philosophical theories of probability and confirmation of the interwar decades: as a critic and a commentator, and as a highly original philosopher.

  • Friday, November 10, 1:00-2:30pm: Michael Cohen

    Title: Imperfect Recall from Descartes to Monty Hall

    Abstract: The overall aim of this talk is to draw interesting connections between assumptions foundational to Bayesian Epistemology and principles of dynamic epistemic logic. Various authors have observed that both Dutch book and accuracy based arguments for Bayesian conditioning require a partition assumption from the learning experience. Roughly, for such arguments to work, the agent must know there is a set of propositions that partitions the epistemic space, and that the proposition learned in the learning experience comes from that partition. Schoenfield, Bronfman, Gallow and others have connected this partition assumption to epistemic introspective principles ("KK-like" principles, applied to learning), although the exact logical formulation of those principles remains informal. In this talk, I present a general logical framework to analyze the logical properties of (Bayesian) learning experiences, using dynamic epistemic logic. I argue that Perfect-Recall is an important epistemic principle at the heart of these Bayesian matters. In this epistemic logic formulation, Perfect-Recall is not really about memory, but about the agent's general ability to know how they came to know what they know. Following the existing literature, I use Monty Hall style cases to demonstrate the connection between Perfect-Recall and the partition assumption.

  • Friday, October 27, 1:00-2:30pm: Jeff Barrett

    Title: Algorithmic randomness, probabilistic laws, and underdetermination (joint work with Eddy Chen)

    Abstract: We consider two ways one might use notions of algorithmic randomness to characterize probabilistic physical laws like those encountered in quantum mechanics. The first is as generative chance* laws. Such laws involve a nonstandard notion of chance. The second is as probabilistic* constraining laws. Such laws impose randomness constraints that every physically possible world must satisfy. This algorithmic approach to physical randomness allows one to address a number of long-standing issues regarding the metaphysics of laws. In brief, since many histories permitted by traditional probabilistic laws are ruled out as physically impossible, it provides a tighter connection between probabilistic laws and their corresponding sets of possible worlds. But while the approach avoids one variety of empirical underdetermination, it reveals other varieties of underdetermination that are typically overlooked.

  • Friday, October 6, 1:00-2:30pm: Saira Khan

    Title: Deliberation and Normativity in Decision Theory

    Abstract: The prescriptions of our two most prominent strands of decision theory, evidential and causal, differ in general classes of problems known as Newcomb problems and decision instability problems. Attempts have been made at reconciling the two theories through deliberational models (Eells 1984; Skyrms 1982; Huttegger, forthcoming; Joyce 2012; Arntzenius 2008). However, philosophers have viewed deliberation very differently. In this talk, I consider how deliberational decision theory differs in the kinds of normative prescriptions it offers when compared with our traditional decision theories from Savage (1954), Jeffrey (1965) and their intellectual predecessors. This raises questions about whether deliberation is an appropriate method for reconciling evidential and causal decision theory.

  • Friday, September 29, 1:00-2:30pm: Kevin Kelly

    Title: A General (Distribution-free) Topological Characterization of Statistical Learnability

    Abstract: In purely propositional models of learning, the possible propositional information states that the inquirer might encounter determine a topology on possible worlds that may be called the information topology. An important insight of topological learning theory is that learnability, both deductive (infallible) and inductive (fallible) can be characterized in terms of logical complexity relative to the information topology. Much follows, including a novel justification of Ockham’s razor. However, none of that applies literally to statistical inference, in which one receives propositional information about a sample related to the world only by chance. Nonetheless, there are strong intuitive analogies between propositional and statistical learning that suggest a deeper connection. My former Ph.D. student Konstantin Genin [University of Tuebingen] ingeniously discovered such a connection by proving that there is a unique topology on statistical worlds such that learnability (almost surely or in chance) is characterized exactly in terms of complexity definable in that topology. Furthermore, the topology has a natural, countable basis whose elements may be thought of as propositional information states directly about the statistical world under study for purposes of proving negative results about statistical learnability. Alas, Genin’s beautiful result is not fully general: it assumes that the chance that a sample hits exactly on the geometrical boundary of an elementary sample information state is necessarily zero, which essentially restricts the result to the discrete and continuous cases. This talk presents an extension of Genin’s seminal result to the distribution-free case. The new result depends on a generalized concept of convergence in probability that makes sense even when the propositional information received from the sample is not itself subject to chance, which is of independent interest as an antidote to “chance bloat”: the questionable frequentist tendency to assume that every sample event has a definite chance. This is very recent, unpublished work from my current book draft. For background, Genin’s theorem is summarized in:

    Konstantin Genin and Kevin T. Kelly (2017). The Topology of Statistical Verifiability. In Proceedings of TARK 2017, Liverpool.

  • Friday, September 15, 12:00-1:30pm: Zoé Christoff

    Title: Majority Illusions in Social Networks

    Abstract: The popularity of an opinion in one’s circles is not necessarily a good indicator of its popularity in one’s entire community. For instance, when confronted with a majority of opposing opinions in one’s direct circles, one might get the impression that one belongs to a minority. From this perspective, network structure makes local information about global properties of the group potentially inaccurate. However, the way a social network is wired also determines what kind of information distortion can actually occur. We discuss which classes of networks allow for a majority of agents to be under such a ‘majority illusion’. 
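
    A minimal sketch of the phenomenon (the graph and opinion assignment below are illustrative choices, not from the talk): two well-connected agents holding a globally minority opinion can make most agents see a local majority for it.

      # Majority illusion in a toy network: opinion 1 is held by a third of the agents,
      # yet most agents see it held by a majority of their neighbors.
      neighbors = {
          0: [2, 3, 4, 5], 1: [2, 3, 4, 5],   # the two agents holding opinion 1 are well connected
          2: [0, 1], 3: [0, 1], 4: [0, 1], 5: [0, 1],
      }
      opinion = {0: 1, 1: 1, 2: 0, 3: 0, 4: 0, 5: 0}

      global_share = sum(opinion.values()) / len(opinion)
      illusioned = [v for v, nbrs in neighbors.items()
                    if sum(opinion[u] for u in nbrs) > len(nbrs) / 2]
      print(f"global share of opinion 1: {global_share:.2f}")
      print("agents who see a local majority for opinion 1:", illusioned)

    Here four of the six agents perceive a majority for an opinion that only a third of the community holds; the talk characterizes the classes of networks in which such configurations are possible.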

Spring 2023

  • Friday, April 7, 1:30-3pm: Kevin Dorst

    Title: Do Gamblers Commit Fallacies?

    Abstract: The “gambler’s fallacy” is the widely-observed tendency for people to expect random processes to “switch”—for example, to think that after a string of tails a heads is more likely.  Is it irrational?  Understood narrowly, it is—but we have little evidence that people exhibit it.  Understood broadly, I’ll argue that it follows from reasonable uncertainty combined with rational management of a limited memory.

  • Friday, March 24, 1:30-3pm: Jingyi Wu

    Title: Modeling Injustice in Epistemic Networks

    Abstract: I use network models to explore how social injustice impacts learning in a community. First, I simulate situations where a dominant group devalues evidence from a marginalized group. I find that the marginalized group ends up developing better beliefs. This result uncovers a mechanism by which standpoint advantages for the marginalized group can arise because of testimonial injustice. Interestingly, this model can be reinterpreted to capture another kind of injustice—informational injustice—between industrial and academic scientists. I show that a subgroup of scientists can learn more accurately when they unilaterally withhold evidence.

  • Tuesday, March 21, 3:30-5pm: Gert de Cooman

    Title: Indifference, symmetry, and conditioning

    Abstract: I intend to discuss the representation of indifference in inference and decision making under uncertainty, in the very general context of coherent partial preference orderings, or coherent sets of desirable options.

    I’ll begin with a short discussion of the basic model (coherent sets of desirable options) and intend to show how it can capture many relevant aspects of so-called conservative probabilistic inference. In particular, I’ll explain how this model leads to coherent lower and upper previsions or expectations, conditioning, and coherent conditional (precise) previsions.

    I’ll then discuss how a notion of indifference can be introduced into this context, and what its effects are on desirability models, through representation results. In this context, the different notions of *conservative inference under indifference* and *updating under indifference* make their appearance.

    I’ll then present a number of examples: specific useful and concrete instances of the abstract notion of option space and the abstract notion of indifference, such as:

    - observing an event and conditioning a coherent set of desirable gambles on this observation;

    - observing the outcome of a measurement and Lüders’ conditionalisation in quantum mechanics;

    - exchangeability and de Finetti’s representation theorem in an imprecise probabilities context.

  • Friday, February 24, 1:30-3pm: Derek Leben

    Title: Cooperation, Maximin, and the Foundations of Ethics

     Abstract: A Social Contract view about meta-ethics proposes that normative principles can be causally and functionally explained as solutions to cooperation problems, and they can therefore be evaluated by how effectively they solve these problems. However, advocates of the Social Contract view have often not specified details about what counts as a cooperation problem, and what solutions to it would look like. I propose that we define cooperation problems as interactions where there exists at least one Strong Pareto-Improvement on every pure Nash Equilibrium (willfully ignoring mixed solutions). We will explore a range of solutions to this problem, and how these solutions correspond to various normative principles. In the past, I have advocated the Maximin principle as an optimal solution to cooperation problems, but this turns out to be incomplete at best, and mistaken at worst. I will end with a plea for help from others who are more knowledgeable and intelligent than I am.

  • Friday, February 10, 1:30-3pm: Adam Bjorndahl

    Title: Knowledge Second

    Abstract: Classical philosophical analyses seek to explain knowledge as deriving from more basic notions. The influential “knowledge first” program in epistemology reverses this tradition, taking knowledge as its starting point. From the perspective of epistemic logic, however, this is not so much a reversal as it is the default—the field arguably begins with the specialization of “necessity” to “epistemic necessity”; that is, it begins with knowledge. In this context, putting knowledge *second* would be the reversal.

    In this talk I will motivate, develop, and explore such a “knowledge second” approach in epistemic logic, founded on distinguishing what a body of evidence actually entails from what it is (merely) believed to entail. I'll import a logical framework that captures exactly this distinction and use it to define formal notions of “internal” and “external” justification; these will then be applied to yield new insights into old topics, namely the KK principle and the regress problem. I will close with some remarks about the “definition” of knowledge and/or extensions of this framework to the probabilistic setting.

  • Friday, January 27, 1:30-3pm: T. Virgil Murthy

    Title: From Borel's paradox to Popper's bridge: the Cournot principle and almost-sure convergence

    Abstract: Adherents to non-frequentist metaphysics of probability have often claimed that the axioms of frequentist theories—in particular, the existence of a limiting relative frequency in infinite iteration of chance trials—can be derived as theorems of alternate characterizations of probability by means of convergence theorems. This project constitutes a historical analysis of a representative such attempt: Karl Popper's “bridge” between propensities and frequencies as discussed in his books Logic of Scientific Discovery and Realism and the Aim of Science. I reconstruct the motivation for Popper's argument, focusing on its relationship to the so-called “Borel paradox” outlined by van Lambalgen in his doctoral dissertation. I then discuss the structure of, reception toward, and debate around the argument. Richard von Mises argued that Popper assumes an empirical reading of the law of large numbers which is circular in context. The crucial problem is that SLLN is an almost-sure theorem; it has a measure zero exclusion clause. But taking measure-zero sets to be probability-zero sets is a frequentist assumption. It presupposes von Mises' limit axiom, which is exactly what Popper is attempting to derive.

    Two features of the debate, however, have not received substantial historical consideration. First, Popper was evidently aware of his assumption about the law of large numbers, and even joined von Mises in criticizing Frechet's more heavy-handed application of it. Second, statistical convergence theorems like SLLN require that trials satisfy independence conditions (usually i.i.d.); without presupposing also von Mises' second axiom, it is not clear why this antecedent condition should be assumed. My primary contributions are (1) an explanation of why this mutual misunderstanding between von Mises and Popper arose, and (2) a charitable interpretation of Popper's assumptions that better justifies the undiscussed premise that experimental iterations that give rise to Kollektivs satisfy the antecedent of the SLLN. For the first, I discuss Popper's use of the “Cournot principle” and evaluate whether it permits a nonfrequentist to infer that null sets are “impossible events.” For the second, I introduce Popper's n-freedom criterion and his reference to Doob's theorem. Though this historical investigation paints Popper's approach as more statistically informed than is commonly thought, it does not get him entirely out of trouble. I close by evaluating Popper's metaphysics of probability as a hybrid view, in which his assumption of the Cournot principle functions as a moderate frequentist preconception that does not entail the stronger axioms used by canonical frequentists like von Mises and Reichenbach.

Fall 2022

  • Friday, December 9, 2:30-4pm: Nevin Climenhaga

    Title: Are Simpler Worlds More Probable?

    Abstract: Some philosophers have suggested that simpler worlds are more intrinsically probable than less simple worlds, and that this vindicates our ordinary inductive practices. I show that an a priori favoring of simpler worlds delivers the intuitively wrong result for worlds that include random or causally disconnected processes, such as the tosses of fair coins. I conclude that while simplicity may play a role in determining probabilities, it does not do so by making simpler worlds more probable.

  • Friday, December 2, 9-10:30am: Yanjing Wang

    Title: Knowing How to Understand Intuitionistic Logic

    Abstract: In this talk, we propose an approach to “decode” intuitionistic logic and various intermediate logics as (dynamic) epistemic logics of knowing how. Our approach is inspired by scattered ideas hidden in the vast literature of math, philosophy, CS, and linguistics about intuitionistic logic, which echoed Heyting’s initial conception of intuitionistic truth as “knowing how to prove.” This notion of truth is realized by using a bundled know-how modality based on some formalized Brouwer–Heyting–Kolmogorov interpretation. Our approach reveals the hidden, complicated epistemic information behind the innocent-looking connectives by providing intuitive epistemic readings of formulas in intermediate logics. As examples, we show how to decode inquisitive logic and some version of dependence logic as epistemic logics. If time permits, we show how similar ideas can be applied to deontic logic.

  • Tuesday, November 22, 2-3:30pm: Sander Beckers

    Title: Causal Explanations and XAI

    Abstract: Although standard Machine Learning models are optimized for making predictions about observations, more and more they are used for making predictions about the results of actions. An important goal of Explainable Artificial Intelligence (XAI) is to compensate for this mismatch by offering explanations about the predictions of an ML-model which ensure that they are reliably action-guiding. As action-guiding explanations are causal explanations, the literature on this topic is starting to embrace insights from the literature on causal models. Here I take a step further down this path by formally defining the causal notions of sufficient explanations and counterfactual explanations. I show how these notions relate to (and improve upon) existing work, and motivate their adequacy by illustrating how different explanations are action-guiding under different circumstances. Moreover, this work is the first to offer a formal definition of actual causation that is founded entirely in action-guiding explanations. Although the definitions are motivated by a focus on XAI, the analysis of causal explanation and actual causation applies in general. I also touch upon the significance of this work for fairness in AI by showing how actual causation can be used to improve the idea of path-specific counterfactual fairness.

    The full paper is available at https://arxiv.org/abs/2201.13169.

  • Friday, November 18, 1:30-3pm: Tom Sterkenburg

    Title: Machine learning and the philosophical problem of induction

    Abstract: Hume’s classical argument says that we cannot justify inductive inferences. Impossibility results like the no-free-lunch theorems underwrite Hume’s skeptical conclusion for machine learning algorithms. At the same time, the mathematical theory of machine learning gives us positive results that do appear to provide justification for standard learning algorithms. I argue that there is no conflict here: rather, there are two different conceptions of formal learning methods, which lead to two different demands on their justification. I further discuss how these different perspectives relate to prominent contemporary proposals in the philosophy of inductive inference (including Norton’s material theory and Schurz’s meta-inductive justification of induction), and how they support two broader epistemological outlooks on automated inquiry.

  • Friday, November 4, 1:30-3pm: Krzysztof (Chris) Mierzewski

    Title: Probing the qualitative-quantitative divide in probability logics

    Abstract: Several notable approaches to probability, going back at least to Keynes (1921), de Finetti (1937), and Koopman (1940), assign a special importance to qualitative, comparative judgments of probability ("event A is at least as probable as B"). The difference between qualitative and explicitly quantitative probabilistic reasoning is intuitive, and one can readily identify paradigmatic accounts of each type of inference. It is less clear, however, whether there are any natural structural features that track the difference between inference involving comparative probability judgments on the one hand, and explicitly numerical probabilistic reasoning on the other. Are there any salient dividing lines that can help us understand the relationship between the two, and classify intermediate forms of inference lying in between the two extremes?

    In this talk, I will explore this question from the perspective of probability logics. Probability logics can represent probabilistic reasoning at different levels of grain, ranging from the more 'qualitative' logic of purely comparative probability to explicitly 'quantitative' languages involving arbitrary polynomials over probability terms. I will identify a robust boundary in the space of probability logics by distinguishing systems that encode merely additive reasoning from those that encode additive and multiplicative reasoning. The latter include not only languages with explicit multiplication, but also languages expressing notions of probabilistic independence and comparative conditional probability.

    As I will explain, this distinction tracks a divide in computational complexity: for additive systems, the satisfiability problem remains NP-complete, while systems that can encode even a modicum of multiplication are robustly complete for ETR (the existential theory of the reals). I will then address some questions about axiomatisation by presenting new completeness results, as well as a proof of non-finite-axiomatisability for comparative probability. For purely additive systems, completeness proofs involve familiar methods from linear algebra, relying on Fourier-Motzkin elimination and hyperplane separation theorems; for multiplicative systems, completeness relies on results from real algebraic geometry (the Positivstellensatz for semi-algebraic sets). If time permits, I will highlight some important questions concerning the axiomatisation of comparative conditional probability.
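
    To illustrate the divide just drawn (with toy formulas that are editorial examples, not the talk's): the first constraint below involves only additive, linear reasoning about probabilities, while the second expresses probabilistic independence and so requires multiplicative structure.

      P(A) + P(B) \;\geq\; P(C)  \qquad\qquad  P(A \cap B) \;=\; P(A)\cdot P(B)

    On the classification above, satisfiability for languages confined to constraints of the first kind stays NP-complete, while languages able to express constraints of the second kind become complete for the existential theory of the reals.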

    We will see that, for the multiplicative probability logics as well as the additive ones, the paradigmatically 'qualitative' systems are neither simpler in terms of computational complexity, nor in terms of axiomatisation, while losing in expressive power to their explicitly numerical counterparts.

    This is joint work with Duligur Ibeling, Thomas Icard, and Milan Mossé.

  • Friday, October 21, 1:30-3pm: Maryam Rostamigiv

    Title: About the type of modal logic for the unification problem

    Abstract: I'll talk about the unification problem in ordinary modal logics, two-modal logic fusions, and multi-modal epistemic logics. Given a formula A and a propositional logic L, in a unification problem we should find substitutions s such that s(A) is in L. These substitutions are known as unifiers of A in L. When they exist, we investigate various methods for constructing minimal complete sets of unifiers of a given formula A, and we discuss the unification type of A based on the cardinality of these minimal complete sets. Then, I will present the unification types of several propositional logics.

  • Friday, October 7, 1:30-3pm: Brittany Gelb and Philip Sink

    Title: Modal Logic Without Possible Worlds

    Abstract: We will present a semantics for modal logic based on simplicial complexes that uses an "Agent Perspective" instead of possible worlds. Philip will explain the details of the formalism, including a novel soundness and completeness proof. Brittany will follow up with some applications of these models to a distributed setting. Additionally, she will show how we can use tools from algebraic topology to prove a variety of results, including the nonexistence of bisimulations.

  • Friday, September 23, 1:30-3pm: Mikayla Kelley

    Title: A Contextual Accuracy Dominance Argument for Probabilism

    Abstract: A central motivation for Probabilism—the principle of rationality that requires one to have credences that satisfy the axioms of probability—is the accuracy dominance argument: one should not have accuracy dominated credences, and one avoids accuracy dominance just in case one satisfies Probabilism. Up until recently, the accuracy dominance argument for Probabilism has been restricted to the finite setting. One reason for this is that it is not easy to measure the accuracy of infinitely many credences in a motivated way. In particular, as recent work has shown, the conditions often imposed in the finite setting are mutually inconsistent in the infinite setting. One response to these impossibility results—the one taken by Michael Nielsen—is to weaken the conditions on a legitimate measure of accuracy. However, this response runs the risk of offering an accuracy dominance argument using illegitimate measures of accuracy. In this paper, I offer an alternative response which concedes the possibility that not all sets of credences can be measured for accuracy. I then offer an accuracy dominance argument for Probabilism that allows for this restrictedness. The normative core of the argument is the principle that one should not have credences that would be accuracy dominated in some epistemic context one might find oneself in if there are alternative credences which do not have this defect.
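
    The dominance notion at issue, stated for an inaccuracy measure I (this is the standard formulation, not anything specific to the paper): credences c are accuracy dominated just in case there are alternative credences c* with

      I(c^{*}, w) \;\leq\; I(c, w) \text{ for every world } w, \text{ with strict inequality at some } w.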

  • Friday, September 9, 1:30-3pm: Francesca Zaffora Blando

    Title: Randomness as the stable satisfaction of minimal randomness conditions

    Abstract: What are the weakest properties you would expect an infinite sequence of zeroes and ones to possess if someone told you that that sequence is random (think: maximally irregular and patternless)? Perhaps you would expect that sequence to be uncomputable or, with a little more reflection, bi-immune. Perhaps you would expect it to satisfy the Strong Law of Large Numbers (namely, you would expect the limiting relative frequency of 0, and of 1, along that sequence to be 1/2). None of these properties is, by itself, sufficient for randomness. For instance, the sequence 01010101… satisfies the Strong Law of Large Numbers, yet it is very regular. But what if, similarly to what von Mises did when defining collectives, we instead required these properties to be satisfied stably? In other words, what if we required them to be preserved under an appropriate class of transformations? Would this suffice to obtain reasonable randomness notions? In this talk, I will discuss some work (very much) in progress that addresses this question and its connections with von Mises’ early work on randomness.

Spring 2022

  • Friday, May 6, 1:30-3pm: Philip Sink

    Title: A Logical Model of Pluralistic Ignorance

    Abstract: Much of the existing literature on pluralistic ignorance suggests that agents who find themselves in such a situation must consider themselves "special" in one way or another (Grosz 2018, Bjerring et al. 2014). Agents have to recognize their own dishonesty, but believe everyone around them is perfectly honest. This argument is taken to show that pluralistic ignorance is irrational. Modifying work from Christoff 2016, we use a simple logical model to show that these arguments for the irrationality of pluralistic ignorance depend on various introspection assumptions. We will finish by putting forth various scenarios where agents can be honest, headstrong, or something similar (generally taken to be impossible under pluralistic ignorance) but are nonetheless consistent if one relaxes introspection assumptions. This shows that agents can see themselves as no different from their friends and still be in a situation of pluralistic ignorance with sufficiently weak introspection assumptions.

  • Friday, April 29, 2-3:30pm: Marina Dubova

    Title: Against theory-motivated data collection in science

    Abstract: We study the epistemic success of data collection strategies proposed by philosophers of science or executed by scientists themselves. We develop a multi-agent model of the scientific process that jointly formalizes its core aspects: data collection, data explanation, and social learning. We find that agents who choose new experiments at random develop the most accurate accounts of the world. On the other hand, the agents following the confirmation, falsification, crucial experimentation (theoretical disagreement), or novelty-motivated strategies end up with an illusion of epistemic success: they develop promising accounts for the data they collected, while completely misrepresenting the ground truth that they intended to learn about. These results, while being methodologically surprising, reflect basic principles of statistical learning and adaptive sampling.

  • Friday, April 8, 2-3:30pm: Tomasz Wysocki

    Title: Causal Decision Theory for The Probabilistically Blind

    Abstract: If you can’t or don’t want to ascribe probabilities to the consequences of your actions, classic causal decision theory won’t let you reap the undeniable benefits of causal reasoning for decision making. I intend the following theory to fix this problem.

    First, I explain in more detail why it’s good to have a causal decision theory that applies to non-deterministic yet non-probabilistic decision problems. One of the benefits of such a theory is that it’s useful for agents under bounded uncertainty. I then introduce the underdeterministic framework, which can represent non-probabilistic causal indeterminacies. Subsequently, I use the framework to formulate underdeterministic decision theory. On this theory, a rational agent under bounded uncertainty solves a decision problem in three steps: she represents the decision problem with a causal model, uses it to infer the possible consequences of available actions, and chooses an action whose possible consequences are no worse than the possible consequences of any other action. The theory applies to decisions that have infinitely many mutually inconsistent possible consequences and to agents who can’t decide on a single causal model representing the decision problem.

  • Friday, March 25, 4:30-6pm: Alicja Kowalewska

    Title: Measuring story coherence with Bayesian networks (joint work with Rafał Urbaniak)

    Abstract: When we say that one’s views or story are more or less coherent, we seem to think of how well their individual pieces fit together. However, explicating this notion formally turns out to be tricky. In this talk, I’ll describe a Bayesian network-based coherence measure, which performs better than its purely probabilistic predecessors. The novelty is that by paying attention to the structure of the story encoded in the network, we avoid considering all possible pairs of subsets of a story. Moreover, our approach assigns special importance to the weakest links in a story, to improve on the other measures’ results for logically inconsistent scenarios. I’ll discuss the performance of the measures in relation to a few philosophically motivated examples and the real-life case of Sally Clark.

  • Friday, February 25, 2-3:30pm: Pablo Zendejas Medina

    Title: Rational Inquiry for Qualitative Reasoners

    Abstract: If you believe something, you're also committed to believing that any future evidence to the contrary would be misleading. Thus, it would seem to be irrational to inquire further into a question that you already have a belief about, when you only care about the accuracy of that belief. This is a version of the Dogmatism Paradox. In this talk, I'll show how the paradox can be solved, even granting its core assumptions, if we make the right assumptions about belief revision and about how belief licenses action. Moreover, the argument generalizes: it turns out that given these assumptions, inquiry is always rational and often even rationally required. On the resulting view, an opinionated inquirer believes that they won't encounter defeating evidence, but still inquires in case they turn out to be mistaken.

  • Friday, February 11, 1:30-3pm: Johanna Thoma

    Title: What’s wrong with pure risk paternalism?

    Abstract: A growing number of decision theorists have, in recent years, defended the view that rationality is permissive in the sense that there is rational leeway in how agents who value outcomes in the same way may choose under risk, allowing for different levels of ‘pure’ risk aversion or risk inclination. Granting such permissiveness complicates the question of what attitude to risk we should implement when choosing on behalf of another person. More specifically, my talk is concerned with the question of whether we are pro tanto required to defer to the risk attitudes of the person on whose behalf we are choosing, that is, whether what I call ‘pure risk paternalism’ is problematic. I illustrate the practical and theoretical significance of this question, before arguing that the answer depends less on one’s specific account of when and why paternalism is problematic more generally, and more on what kinds of attitudes we take pure risk attitudes to be.

  • Friday, January 28, 2-3:30pm: Johan van Benthem

    Title: Venturing Further Into Epistemic Topology

    Abstract: Epistemic topology studies key ideas and issues from epistemology with mathematical methods from topology. This talk pursues one particular issue in this style: the nature of informational dependence. We combine two major semantic views of information in logic: as 'range' (the epistemic logic view) and as 'correlation' (the situation theory view), in one topological framework for knowledge and inquiry arising out of imprecise empirical observations. Technically, we present a decidable and complete modal base logic of information-carrying dependence through continuous functions. Our treatment uncovers new connections with other areas of mathematics: topological independence matches with 'everywhere surjective functions', and stricter requirements of computability lead to a complete modal logic for Domain Theory with Scott topology. Finally, we move from topology to Analysis and offer a logical take on uniform continuity, viewed as a desirable form of epistemic know-how, modeled in metric spaces, but also in new qualitative mathematical theories of approximation in terms of entourages. Beyond concrete results, the talk is meant to convey a certain spirit: how epistemic topology can profit from venturing more deeply into mathematics.

    References

    A. Baltag & J. van Benthem, 2021, A Minimal Logic of Functional Dependence, https://arxiv.org/abs/2103.14946.

    —, 2021, Knowability and Continuity, A Topological Account of Informational Dependence, manuscript, ILLC Amsterdam.

    —, 2018, Some Thoughts on the Logic of Imprecise Measurement, https://eprints.illc.uva.nl/id/eprint/1660/1/2018.MarginError.pdf.

Fall 2021

  • Friday, December 10, 1:30-2:30pm: Xin Hui Yong

    Title: Accidentally I Learnt: On relevance and information resistance

    Abstract: While there has been a movement aiming to teach agents about their privilege by making the information about their privilege as costless as possible, Kinney & Bright argue that risk-sensitive frameworks (particularly Lara Buchak's, from Risk and Rationality) can make it rational for privileged agents to shield themselves from learning about their privilege, even if the information is costless and relevant. In response, I show that in this framework, if the agent is not certain whether the information will be relevant, they may have less reason to actively uphold ignorance. I explore what the agent's uncertainty about the relevance of the information could describe, and what the upshots may be. For example, these educational initiatives may not be as doomed as Kinney & Bright suggest, and risk-sensitive frameworks like Buchak's can lead to situations where an agent would feel better off having learned something at the same time that they may rationally decline to know it now. I aim to explore these upshots and what they say about elite group ignorance and the viability of risk-sensitive expected utility theory as a helpful explanation of elite agent ignorance.

  • Friday, November 19, 2-3:30pm: Taylor Koles

    Title: Higher-Order Sweetening Problems for Schoenfield

    Abstract: I argue against a particular motivation for adopting imprecise credences advanced by Schoenfield 2012. Schoenfield motivates imprecise credences by arguing that it is permissible to be insensitive to mild evidential sweetening. Since mild sweetening can be iterated, Schoenfield's position that our credences should be modeled by a set of precise probability functions ensures that even on her view it is at least sometimes impermissible to be insensitive to mild sweetening. Taking a lesson from the literature on higher-order vagueness, I argue that the better approach is to get off the slope at the first hill: a perfectly rational agent would not be insensitive to mild evidential sweetening.

  • Friday, November 5, 1-2:30pm: Snow Zhang

    Title: Updating Stably

    Abstract: Bayesianism appears to give counter-intuitive recommendations in cases where the agent lacks evidence about what their evidence is. I argue that, in such cases, Bayesian conditionalization is rationally defective as an updating plan. My argument relies on a new norm for rational plans: self-stability. Roughly, a plan is self-stable if it gives the same recommendation conditional on its own recommendations. The primary goal of this talk is to give a precise formulation of this norm. The secondary goal is to show how this norm relates to other norms of rationality.

  • Friday, October 22, 2-3:30pm: Kenny Easwaran

    Title: Generalizations of Risk-Weighted Expected Utility

    Abstract: I consider Lara Buchak’s (2013) “risk-weighted expected utility” (REU) and provide formal generalizations of it. I do not consider normative motivations for any of these generalizations, but just show how they work formally. I conjecture that some of these generalizations might result from very slightly modifying the assumptions that go into her representation theorems, but I don’t investigate the details of this.

    I start by reviewing the formal definition of REU for finite gambles, and two ways to calculate it by sums of weighted probabilities. Then I generalize this to continuous gambles rather than finite ones, and show two analogous ways to calculate versions of REU by integrals. Buchak uses a risk-weighting function that maps the probability interval [0,1] in a continuous and monotonic way to the weighted interval [0,1]. I show that if we choose some other closed and bounded interval, the result is formally equivalent, and I show how to generalize it to cases where the interval is unbounded. In these cases, the modified REU can provide versions of maximin (if the interval starts at −∞), maximax (if the interval ends at +∞), as well as something new if both ends of the interval are infinite. However, where maximin and maximax are typically either indifferent or silent between gambles that agree on the relevant endpoint, the decision rules formally defined here are able to consider intermediate outcomes of the gamble in a way that is lexically posterior to the endpoint(s). (A minimal computational sketch of the finite-gamble case appears after this abstract.)

    Finally, I consider the analogy between risk-sensitive decision rules for a single agent and inequality-sensitive social welfare functions for a group. I show how the formal generalizations of REU theory allow for further formal generalizations of inequality-sensitivity that might have some relevance for population ethics.
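
    For readers who want to fix ideas, here is a minimal Python sketch of the finite-gamble REU calculation in the form usually associated with Buchak (2013); the particular utility and risk-weighting functions are illustrative placeholders, not anything from the talk.

    # Minimal sketch of risk-weighted expected utility (REU) for a finite gamble:
    # rank outcomes from worst to best and weight each utility improvement by
    # r(probability of doing at least that well). Utility u and risk-weighting r
    # below are placeholder choices.

    def reu(gamble, u, r):
        """REU of a finite gamble given as (outcome, probability) pairs."""
        ranked = sorted(gamble, key=lambda pair: u(pair[0]))   # worst to best
        utils = [u(x) for x, _ in ranked]
        probs = [p for _, p in ranked]
        total = utils[0]                                       # guaranteed floor
        for j in range(1, len(ranked)):
            prob_at_least = sum(probs[j:])                     # P(doing at least this well)
            total += r(prob_at_least) * (utils[j] - utils[j - 1])
        return total

    u = lambda x: x          # placeholder utility: linear
    r = lambda p: p ** 2     # placeholder risk-weighting: risk-averse, r(p) = p^2

    coin_flip = [(0, 0.5), (100, 0.5)]
    print(reu(coin_flip, u, r))   # 25.0; with r(p) = p this would be the usual EU of 50

    With r(p) = p the rule reduces to ordinary expected utility, which is one way to see how REU generalizes it.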

  • Friday, October 15, 2-3:30pm: Francesca Zaffora Blando

    Title: Wald randomness and learning-theoretic randomness

    Abstract: The theory of algorithmic randomness has its roots in Richard von Mises’ work on the foundations of probability. Von Mises was a fervent proponent of the frequency interpretation of probability, which he supplemented with a (more or less) formal definition of randomness for infinite sequences of experimental outcomes. In a nutshell, according to von Mises’ account, the probability of an event is to be identified with its limiting relative frequency within a random sequence. Abraham Wald’s most well-known contribution to the heated debate that immediately followed von Mises’ proposal is his proof of the consistency of von Mises’ definition of randomness. In this talk, I will focus on a lesser-known contribution by Wald: a definition of randomness that he put forth to rescue von Mises’ original definition from the objection that is often regarded as having dealt the death blow to his entire approach (namely, the objection based on Ville’s Theorem). We will see that, when reframed in computability-theoretic terms, Wald’s definition of randomness coincides with a well-known algorithmic randomness notion and that his overall approach is very close, both formally and conceptually, to a recent framework for modeling algorithmic randomness that rests on learning-theoretic tools and intuitions.

  • Friday, September 24, 2-3:30pm: Kevin Zollman

    Title: Is 'scientific progress through bias' a good idea?

    Abstract: Some philosophers have argued for a paradoxical conclusion: that science advances because of the IRrationality of scientists. That is, by combining epistemically inferior behavior on the part of individual scientists with a social structure that harnesses this irrationality, science can make progress faster than it could with more rational individuals. Through the use of a simple computational model, I show how this is indeed possible: biased scientists do in fact make more scientific progress than an equivalent unbiased community. However, this model also illustrates that such communities are very fragile. Small changes in their social structure can move biased communities from being very good to being abysmal.

  • Friday, September 10, 2-3:30pm: Kevin Kelly

    Title: Ockham's Razor, Inductive Monotonicity, and Topology (joint work with Hanti Lin and Konstantin Genin)

    Abstract: What is empirical simplicity? What is Ockham's razor? How could Ockham's razor find hidden structure in nature better than alternative inductive biases unless you assume a priori that the truth is simple? We present a new explanation. The basic idea is to amplify convergence to the truth in the limit with constraints on the monotonicity (stability) of convergence. Literally monotonic convergence to the truth is hopeless for properly inductive problems, since answering the question implies the possibility of false steps en route to the truth. *Inductive monotonicity* requires, more weakly, that the method never retracts from a true conclusion to a false one, which sounds like a straightforward epistemic consideration (i.e., it is weaker than Plato's requirement in the Meno that knowledge be stable true belief). We show (very easily) that inductively monotonic convergence to the truth implies that your inductive method satisfies Ockham's razor at *every* stage of inquiry, which projects a long-run criterion of success into the short run, with no appeal to an alleged short-run notion of "inductive support" of universal conclusions.

    The main result is a basic proposition in point-set topology. Statistics, ML, and the philosophy of science have bet their respective banks on probability and measure as the "right" concepts for explaining scientific method. We respond that the central considerations of scientific method (empirical information, simplicity, fine-tuning, and relevance) are all fundamentally topological and are most elegantly understood in topological terms.

Spring 2021

  • Friday, July 2, 2–3:30pm: Cailin O'Connor

    Title: Interdisciplinarity can aid the spread of better methods between scientific communities

    Abstract: Why do bad methods persist in some academic disciplines, even when they have been clearly rejected in others? What factors allow good methodological advances to spread across disciplines? In this work, we investigate some key features determining the success and failure of methodological spread between the sciences. We introduce a model that considers factors like methodological competence and reviewer bias towards one's own methods. We show how self-preferential biases can protect poor methodology within scientific communities, and lack of reviewer competence can contribute to failures to adopt better methods. We further argue, however, that input from outside disciplines, especially in the form of peer review and other credit assignment mechanisms, can help break down barriers to methodological improvement.

  • Friday, June 18, 2-3:30pm: Jacob Neumann

    Title: DTL, Refined: Topology, Knowledge, and Nondeterminism

    Abstract: In this talk, I'll describe some of my work in dynamic topological logic, including some in-progress inquiries and some content from my master's thesis. DTL is a variety of modal logic which proves quite adept at modelling the capacity for knowledgeable agents to know and do things. In particular, DTL is rich enough to encode epistemically nondeterministic situations: ones where an agent finds themself unable to know what effect their actions will have. My goal for this talk is to explore one simple kind of nondeterministic scenario (coin-flipping scenarios) as a case study for how to utilize DTL to characterize dynamic-epistemic phenomena. Along the way, I'll introduce some of the novel mathematical content which had to be developed for this purpose, as well as point to avenues of further investigation.

  • Friday, June 4, 2-3:30pm: William Nalls

    Title: Endogenizing Epistemic Actions in Dynamic Epistemic Logic

    Abstract: Through a series of examples, we illustrate some important drawbacks that the action model logic framework suffers from in its ability to represent the dynamics of information updates. We argue that these problems stem from the fact that the action model, a central construct designed to encode agents’ uncertainty about actions, is itself effectively common knowledge amongst the agents. In response to these difficulties, we motivate and propose an alternative semantics that avoids them by (roughly speaking) endogenizing the action model; these semantics allow for the representation of an agent learning about their own epistemic abilities. We discuss the relationship between this new framework and action model logic, and provide a sound and complete axiomatization of several new logics that naturally arise. We also show how moving to these semantics corresponds to a natural weakening of the No Miracles principle in epistemic logic.

  • Friday, May 28, 2-3:30pm: Adam Bjorndahl

    Title: Generic Logic

    Abstract: Universal quantification is a cornerstone of reasoning and expression across multiple domains, from formal mathematical languages to scientific generalizations to casual conversation. But in many quantificational contexts—formal and informal—full universal quantification is too strong or unnecessary. In this talk, we develop a general logical setting in which "generic" quantification takes center stage. More precisely, we relax the foundational notion of validity in possible worlds semantics to quantify over "almost all" worlds, instead of all worlds. Of course, this depends on just what we mean by "almost all". Natural closure conditions on the corresponding notion of generic validity yield some constraints but do not determine a unique definition. Well-known topological and measure-theoretic notions of "large" sets (and, dually, "negligible" sets) provide appealing candidates for making this notion precise; each determines a corresponding class of generically valid formulas, with some surprising and familiar axiomatizations.

  • Friday, April 30, 12–1:30pm: John Norton

    Title: Non-Probabilistic Physical Chances

    Abstract: My talk will review physical systems whose chance properties are non-probabilistic. The talk will be self-contained, but will draw on the later chapters of my book

    The Material Theory of Induction

    http://www.pitt.edu/~jdnorton/homepage/cv.html#material_theory

    especially

    Ch. 13 Infinite Lottery Machines

    Ch. 15 Indeterministic Physical Systems

  • Friday, April 23, 2-3:30pm: James Woodward

    Title: Mysteries of actual causation: It’s Complicated

  • Friday, April 9, 2-3:30pm: Justin Shin

    Title: What do we tell the jury about hindsight? A work in progress.

    Abstract: When we ask a jury in a negligence case to make a judgment about whether or not some harm was foreseeable, we may worry that the jury will reason in a way that makes that harm seem more foreseeable than it actually was, because they know the harm has already occurred. In other words, the defense might worry that the jury will suffer from hindsight bias that favors the prosecution. When and how we should prepare jurors for hindsight bias, if we should at all, depends not only on when hindsight reasoning is rational but also on what the foreseeability criteria of negligence are. The discussion around hindsight reasoning concerns the activities of rational people; jury instructions in negligence cases concern the activities of reasonable people. The rational person of rational choice theory and the reasonable person of law are not obviously the same person, but the handling of hindsight reasoning in juries must appease them both.

  • Friday, April 2, 2-3:30pm: Teddy Seidenfeld

    Title: Three Degrees of Imprecise Probabilities

    Abstract: De Finetti's theory of "precise" subjective probability derives from applying a criterion of coherence (i.e., respect for dominance) when the decision maker is required to give "fair" prices (also called "2-sided" prices) for buying and selling random variables.

    I consider a progression of three kinds of imprecise/indeterminate subjective probabilities (so-called "IP" theories) that result from applying dominance while allowing the decision maker ever more radical departures from requiring "fair" prices. The three are listed below; a small illustrative sketch of the contrast between 2-sided and 1-sided prices follows the list.

    IP-1 incomplete elicitation

    IP-2 1-sided prices

    IP-3 coherent choice functions.
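
    To fix ideas on the contrast between "fair" 2-sided prices and the 1-sided prices of IP-2, here is a minimal Python sketch (not from the talk); the gamble and the set of probabilities are hypothetical.

    # Minimal sketch contrasting a precise "fair" (2-sided) price for a gamble
    # with the 1-sided buying and selling prices induced by a set of
    # probabilities. The three-state gamble and the credal set are hypothetical.

    X = {"s1": 1.0, "s2": 0.0, "s3": -1.0}          # a gamble (random variable)

    def expectation(p, X):
        """Expected value of the gamble X under the probability assignment p."""
        return sum(p[s] * X[s] for s in X)

    # Precise case: a single probability yields one fair price, acceptable for
    # both buying and selling X.
    p = {"s1": 0.5, "s2": 0.3, "s3": 0.2}
    fair_price = expectation(p, X)

    # IP-2-style case: a set of probabilities yields 1-sided prices. The agent
    # buys X at any price below the lower expectation and sells X at any price
    # above the upper expectation; in between, they do neither.
    credal_set = [
        {"s1": 0.4, "s2": 0.4, "s3": 0.2},
        {"s1": 0.6, "s2": 0.2, "s3": 0.2},
    ]
    lower_price = min(expectation(q, X) for q in credal_set)
    upper_price = max(expectation(q, X) for q in credal_set)

    print(f"fair (2-sided) price: {fair_price:.2f}")
    print(f"1-sided prices: buy below {lower_price:.2f}, sell above {upper_price:.2f}")

    When the set of probabilities shrinks to a single distribution, the two 1-sided prices collapse back into one fair price, which is the precise case.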

  • Friday, March 5, 2-3:30pm: Kevin Zollman

    Title: Homophily, polarization, and misinformation: a simple model

    Abstract: Many have suggested that our tendency to interact with those who have similar views (belief homophily) contributes to polarization. I construct a simple model of belief-based polarization that shows this and allows us to understand how information, misinformation, and homophily interact to create the conditions for polarization. In addition, I explore some of the epistemic consequences of homophily and polarization. I will connect the model to two other well-known simple models of belief change based on "opinion dynamics": DeGroot's linear pooling model and the Hegselmann-Krause bounded confidence model. (Minimal sketches of these two reference models appear after this abstract.)
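
    For reference, here are minimal Python sketches of one update step in each of the two models cited at the end of the abstract; the opinions, weight matrix, and confidence bound are illustrative placeholders, and neither sketch is the homophily model from the talk.

    # One update step of DeGroot linear pooling and of the Hegselmann-Krause
    # bounded-confidence model. All numbers below are placeholders.

    def degroot_step(opinions, weights):
        """DeGroot update: each agent adopts a weighted average of all opinions.
        Each row weights[i] is non-negative and sums to 1."""
        return [sum(w * o for w, o in zip(weights[i], opinions))
                for i in range(len(opinions))]

    def hk_step(opinions, eps):
        """Hegselmann-Krause update: each agent averages the opinions of all
        agents within distance eps of their own (including themselves)."""
        new = []
        for o_i in opinions:
            close = [o_j for o_j in opinions if abs(o_j - o_i) <= eps]
            new.append(sum(close) / len(close))
        return new

    opinions = [0.1, 0.2, 0.5, 0.9]
    weights = [[0.7, 0.1, 0.1, 0.1],
               [0.1, 0.7, 0.1, 0.1],
               [0.1, 0.1, 0.7, 0.1],
               [0.1, 0.1, 0.1, 0.7]]

    print(degroot_step(opinions, weights))
    print(hk_step(opinions, eps=0.2))

    The contrast is that DeGroot agents always average over everyone according to fixed weights, while Hegselmann-Krause agents only listen to those already close to their own view, which is one simple way homophily can enter an opinion-dynamics model.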

  • Friday, February 12, 2-3:30pm: Miriam Schoenfield

    Title: Can Bayesianism Accommodate Higher Order Defeat?

    Abstract: Sometimes we get evidence which suggests that our beliefs aren't rational. This might come from learning that people with the same evidence reach different conclusions, that we're under the influence of mind-distorting drugs, fatigued, subject to implicit biases, or that our beliefs have been impacted in potentially problematic ways by social or evolutionary forces. In this paper I'll consider whether reducing confidence substantially in response to such evidence can be accommodated in a Bayesian framework. I'll show that thinking of such revisions as Bayesian, while possible, brings with it a set of substantial and controversial commitments. For those who find these commitments unattractive, I'll sketch two non-Bayesian alternatives in the second part of the paper.