Contributed Talks 17
Chairs: Fredrik Engström, Valentin Blot
Time allocation: 20 minutes per speaker; 5 minutes for each changeover
Talks in this session
14:00
Ivan Di Liberti, Topoi with enough points
Authors: Ivan Di Liberti and Morgan Rogers

In 1964 Pierre Deligne proved a celebrated theorem in topos theory. The theorem's original motivation came from algebraic geometry, but after Joyal and Reyes developed the theory of classifying topoi, Lawvere observed that Deligne's theorem is essentially Gödel's completeness theorem for first-order logic in disguise. This realisation established Deligne's theorem as a major result in categorical logic and as a source of inspiration for finding other completeness-like results using techniques from topos theory. After Deligne's theorem, it was some time before new results emerged showing that further classes of topoi have enough points. Makkai and Reyes proved that separable topoi have enough points. This result was inspired by the Fourman-Grayson completeness theorem for the logic \(L_{\omega_1,\omega_0}\), and it is almost a translation of that theorem into topos-theoretic language through the bridge of classifying topoi. The proof by Makkai and Reyes is somewhat sketchy and model-theoretic in spirit, which makes it hard to compare their result with Deligne's. The main recent advances are due to Espíndola; most recently, together with Kanalas, he has provided a more categorical analysis that delivers a vast generalization of the results Espíndola obtained in his PhD thesis. In parallel with these developments, the topos-theoretic community has been trying to understand the limits of Deligne's original argument and its possible generalizations.

Our contribution. We clarify and extend Deligne's original argument showing that locally coherent topoi have enough points. We show that our refinement of Deligne's technique can be adapted to recover every existing result of this kind, including the most recent results about \(\kappa\)-coherent \(\kappa\)-topoi. Our presentation allows us to relax the cardinality assumptions typically imposed on the sites involved. Abstracting further, we show that a closed subtopos of a topos with enough points has enough points.

Theorem. Let \(j: \mathcal{F} \rightarrowtail \mathcal{E}\) be an inclusion of toposes. Suppose that every point \(p\) can be improved. If \(\mathcal{E}\) has enough points, then \(\mathcal{F}\) has enough points.

Theorem. A closed subtopos of a topos with enough points has enough points.
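For orientation (standard background, not part of the submission): a Grothendieck topos \(\mathcal{E}\) has enough points when its points, the geometric morphisms \(p : \mathbf{Set} \to \mathcal{E}\), jointly detect isomorphisms, and Deligne's theorem is the statement that every coherent topos has enough points. In symbols:

\[
\mathcal{E} \text{ has enough points} \quad\iff\quad \text{for every morphism } f \text{ of } \mathcal{E}:\ \bigl(\forall\, p : \mathbf{Set} \to \mathcal{E},\ p^{*}(f) \text{ is an isomorphism}\bigr) \ \Rightarrow\ f \text{ is an isomorphism}.
\]

Under the classifying-topos correspondence mentioned in the abstract, points correspond to set-valued models, so having enough points is the topos-theoretic form of a completeness theorem.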
14:25
Mariami Gamsakhurdia, Gödel logics: prenex fragments
Authors: Matthias Baaz and Mariami Gamsakhurdia

One of the first recognised characteristics of classical logic is the existence of a prenex form for each formula. These prenex forms are constructed, non-uniquely, by means of the quantifier-shift rules. The expressive power of the prenex fragment is easy to determine in classical logic, where it coincides with the whole logic, and in intuitionistic logic, where prenex formulas are very weak (validity of prenex formulas is decidable). Since Gödel logics are intermediate logics, the expressive power of their prenex fragments is a genuinely interesting question. Prenex normal forms cannot be constructed in the usual way in Gödel logics, because some of the quantifier-shift rules may fail; this, however, does not imply that no prenex normal form exists, and demonstrating that such prenex forms do not exist is more difficult. Prenexation does not work for \(G_{[0,1]}\), where \(0\) is not isolated, since the formula \(\neg \forall x A(x)\wedge \forall x \neg \neg A(x)\) does not admit a prenex normal form. To prove this fact we use a glueing argument, which is not easy to see. The result can be extended to all Gödel logics in which there is an accumulation point from above, even if it is not \(0\).

In this talk, we provide the complete classification of first-order Gödel logics with respect to the property that every formula admits a logically equivalent prenex normal form. We show that the only first-order Gödel logics admitting such prenex forms are those with finite truth value sets, since they allow all quantifier-shift rules, and the logic \(G_\uparrow\), whose only accumulation point is \(1\). In all other cases (when there is at least one accumulation point from above, when all accumulation points are from above, or when all accumulation points are from below) there are no logically equivalent prenex normal forms. We will also see that \(G_\uparrow\) is the intersection of all finite first-order Gödel logics.

The second stage of our research investigates the existence of validity-equivalent prenex normal forms. Gödel logics with a finite truth value set admit such prenex forms. Gödel logics with an uncountable truth value set have them if and only if every neighbourhood of \(0\) is uncountable or \(0\) is an isolated point. Otherwise, uncountable Gödel logics are incomplete, whereas the prenex fragment is always complete with respect to the uncountable truth value set; therefore, there is no effective translation from valid formulas to valid prenex forms. The countable case, however, is still open.
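To illustrate the role of \(0\) as an accumulation point (a sketch using standard Gödel semantics over \([0,1]\), with a domain and interpretation of my own choosing, not taken from the abstract): conjunction is minimum, the universal quantifier is infimum, and \(v(\neg A) = 1\) exactly when \(v(A) = 0\), otherwise \(v(\neg A) = 0\). Then:

\[
\begin{aligned}
&\text{domain } \{a_0, a_1, a_2, \dots\}, \qquad v(A(a_n)) = \tfrac{1}{n+1} > 0,\\
&v(\forall x\, A(x)) = \inf_n \tfrac{1}{n+1} = 0 \ \Longrightarrow\ v(\neg \forall x\, A(x)) = 1,\\
&v(\neg\neg A(a_n)) = 1 \text{ for every } n \ \Longrightarrow\ v(\forall x\, \neg\neg A(x)) = 1,\\
&v\bigl(\neg \forall x\, A(x) \wedge \forall x\, \neg\neg A(x)\bigr) = \min(1,1) = 1.
\end{aligned}
\]

The formula thus takes value \(1\) only when the values of \(A\) accumulate at \(0\) from above, which is why the behaviour of the truth value set near \(0\) matters for the classification described above.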
14:50
Andrea Sabatini, Hypersequent calculi for propositional default logics |
Default reasoning is a form of non-monotonic reasoning that allows one to draw plausible conclusions from incomplete information, in the absence of explicit evidence to the contrary: it can be formalized by extending classical logic with a collection of extra-logical axioms together with a set of default rules. Starting with Reiter's seminal work, the formalism of default logic has been investigated through Hilbert-style calculi, semantic tableaux and sequent calculi [1]. These proof-theoretic methods have undesirable features: (i) the proof-search space in Hilbert-style calculi is infinite; (ii) in tableaux-based calculi, the information flow cannot be controlled at the local level; (iii) sequent calculi rely on ad hoc extensions of the underlying language.

In this talk, we introduce a novel proof-theoretic approach to propositional default logics which overcomes these drawbacks by means of a non-standard notion of hypersequent. We redefine hypersequents as hybrid constructs, comprising sets of sequents and antisequents, and we depart from the conventional disjunctive interpretation of the separating bar, embracing a conjunctive reading: antisequents are meant to furnish contrary updates concerning the provability of the associated sequent. We employ these hybrid hypersequents to define specific extra-logical rules, which internalize within derivation trees the consistency checks involved in the application of default rules. As a result, we obtain a Gentzen-style formulation of defaults as constrained instances of Strengthening, i.e. the inverse rule of Weakening [2]. On this basis, we design hybrid hypersequent HG4 calculi for default logics: we establish admissibility of structural rules, invertibility of logical rules, and a weakened version of the subformula property for cut-free proofs. We present HG4 calculi that are sound and (weakly) complete with respect to credulous consequence based on Łukasiewicz extensions [3]. Moreover, we propose a hypersequent-based decision procedure for skeptical consequence which relies on the abductive search for counterexamples [5], thereby circumventing the need to compute all extensions in advance. We conclude with a brief discussion of HG4 calculi for credulous consequence based on Reiter extensions [6], as well as for exclusionary default reasoning [4].
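For readers unfamiliar with the setting, here is a textbook propositional Reiter default (standard background, not the authors' hypersequent formulation); its proviso is exactly the consistency check that the calculi above internalize into derivation trees:

\[
\frac{\mathit{bird} : \mathit{flies}}{\mathit{flies}}
\qquad
\text{apply only if } \mathit{bird} \text{ is derivable and } \neg\mathit{flies} \text{ is not.}
\]

From the single fact \(\mathit{bird}\) the default yields \(\mathit{flies}\); adding the explicit information \(\neg\mathit{flies}\) blocks the default, which is the non-monotonic behaviour described above.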
15:15
Heinrich Wansing, Quantifiers in connexive logic (in general and in particular) |
Authors: Heinrich Wansing and Zach Weber

Connexive logic has room for two pairs of universal and particular quantifiers: one pair, \(\forall\) and \(\exists\), are the standard quantifiers; the other pair, \({A}\) and \({E}\), are unorthodox but, we argue, well-motivated in the context of connexive logic. Both non-standard quantifiers have been introduced previously, but in the context of connexive logic they have a natural semantic and proof-theoretic place, and plausible natural-language readings. The resulting logics are negation inconsistent but non-trivial.
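As a reminder of the propositional theses that make a logic connexive (standard background, not specific to this talk):

\[
\begin{aligned}
\text{Aristotle's theses:}\quad & \neg(A \to \neg A), \qquad \neg(\neg A \to A),\\
\text{Boethius' theses:}\quad & (A \to B) \to \neg(A \to \neg B), \qquad (A \to \neg B) \to \neg(A \to B).
\end{aligned}
\]

None of these schemata is classically valid (consider a false \(A\)), so connexive logics are contra-classical rather than sublogics of classical logic.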
15:40
Hans Halvorson, Equivalence in foundations |
I survey results, recent as well as historical, about the equivalence or inequivalence of systems that could serve as the foundation of mathematics. Throughout, I focus on clarifying which notion of equivalence is operative in the various results, in the hope of arriving at a clearer sense of which notions are relevant to mathematical practice. The known results that I discuss include:
I consider precise definitions of equivalence including:
While bi-interpretability is, according to Hamkins, the “gold standard for equivalence”, there remains some unclarity in its definition, in particular in the definition of “translation” on which bi-interpretability is built. I show that two theories can be Morita equivalent without being bi-interpretable, even under the most liberal reading of the latter notion. This mismatch between Morita equivalence and bi-interpretability raises a puzzle about the classical result [2] that any theory formulated in many-sorted logic can be replaced by an unsorted theory. While the result holds up to Morita equivalence for theories with finitely many sorts, it fails for infinitely many sorts, and fails even more severely for bi-interpretability. I attempt to bring order to this situation by considering various definitions of a translation between theories, and by comparing the resulting notions of equivalence with Morita equivalence.
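To make the classical replacement result concrete, here is a minimal sketch of the usual unsorting construction for a hypothetical two-sorted signature with sorts \(\sigma_1, \sigma_2\) and a relation \(R\) of type \(\sigma_1 \times \sigma_2\) (the example and notation are mine, not taken from [2]): introduce unary sort predicates, relativize quantifiers, and add typing and non-emptiness axioms.

\[
\begin{aligned}
&\exists x\, S_1(x), \qquad \exists x\, S_2(x), \qquad \forall x\, \bigl(S_1(x) \vee S_2(x)\bigr), \qquad \forall x \forall y\, \bigl(R(x,y) \to S_1(x) \wedge S_2(y)\bigr),\\
&\text{and a sorted axiom such as } \forall x{:}\sigma_1\, \exists y{:}\sigma_2\; R(x,y) \text{ becomes } \forall x\, \bigl(S_1(x) \to \exists y\, (S_2(y) \wedge R(x,y))\bigr).
\end{aligned}
\]

With finitely many sorts this translation yields an unsorted theory equivalent to the original in the sense discussed above; the point of the abstract is that both the finiteness assumption and the choice of equivalence notion (Morita equivalence versus bi-interpretability) are essential.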