Logic Colloquium 2024

Contributed Talk

Contributed Talks 7

Chair: Gianluca Curzi

  Wednesday, 14:00, J330 (Live)

Time allocation: 20 minutes per speaker; 5 minutes for each changeover

Talks in this session

  14:00

Giulia Manara, On non-commutative logic and process calculi

Authors: Matteo Acclavio, Giovanni Bernardi, Giulia Manara and Fabrizio Montesi

The calculus of communicating systems (CCS) is a process calculus for the specification and modelling of discrete communicating systems. In this talk we define a sequent calculus, operating on formulas that encode CCS processes, which captures the operational semantics of the recursion-free fragment of CCS, and we establish a correspondence between executions of processes and (branches of) derivations in this calculus. For this purpose, we define an extension of first-order multiplicative-additive linear logic with nominal quantifiers and a non-commutative, non-associative connective. After proving cut-elimination for this system, we show that sequent rules simulating the rules of the operational semantics of CCS are derivable in our calculus. We conclude by discussing future research directions.
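
For orientation (this example is ours, not taken from the abstract): the operational semantics being simulated is the standard labelled transition semantics of CCS, whose core rules for prefixing, choice, parallel composition and communication read

\[
\frac{}{\alpha.P \xrightarrow{\;\alpha\;} P}
\qquad
\frac{P \xrightarrow{\;\alpha\;} P'}{P + Q \xrightarrow{\;\alpha\;} P'}
\qquad
\frac{P \xrightarrow{\;\alpha\;} P'}{P \mid Q \xrightarrow{\;\alpha\;} P' \mid Q}
\qquad
\frac{P \xrightarrow{\;a\;} P' \quad Q \xrightarrow{\;\bar{a}\;} Q'}{P \mid Q \xrightarrow{\;\tau\;} P' \mid Q'}
\]

(together with the symmetric variants and a rule for restriction). The sequent calculus of the talk is designed so that such transition steps correspond to applications of derivable rules on the formulas encoding the processes.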

  14:25

Sebastijan Horvat, Advantages of using weak bisimulation for the logic IL with respect to Verbrugge semantics

The basic semantics for interpretability logics is Veltman semantics. R. Verbrugge defined a new relational semantics for interpretability logics, today called Verbrugge semantics in her honor. This semantics has turned out to have various good properties (see e.g. [2]). Bisimulations are the basic equivalence relations between Veltman models. M. Vuković in [4] defined bisimulations, and their finite approximations called \(n\)-bisimulations, for Verbrugge semantics. T. Perkov and M. Vuković in [3] used bisimulations and bisimulation games to prove van Benthem's characterization theorem with respect to Veltman semantics. However, we have proved in [1] that \(n\)-modally equivalent worlds need not be \(n\)-bisimilar. Therefore, in [1] we gave a new version of bisimulations, which we call weak bisimulations.

In this talk, we will give an overview of the results that can be obtained using weak bisimulations and the corresponding weak bisimulation games. We will show that \(n\)-modally equivalent worlds are weakly \(n\)-bisimilar. We will also present results on the finite model property and van Benthem's characterization theorem with respect to Verbrugge semantics.
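
In schematic notation (ours, not the papers'): writing \(\equiv_n\) for \(n\)-modal equivalence between worlds of Verbrugge models, \(\rightleftarrows_n\) for \(n\)-bisimilarity in the sense of [4], and \(\approx_n\) for weak \(n\)-bisimilarity in the sense of [1], the situation described above reads

\[
w \equiv_n w' \;\not\Longrightarrow\; w \rightleftarrows_n w' \ \text{(in general)},
\qquad\qquad
w \equiv_n w' \;\Longrightarrow\; w \approx_n w'.
\]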

Bibliography

  1. S. Horvat, T. Perkov, M. Vuković, Bisimulations and bisimulation games for Verbrugge semantics, Mathematical Logic Quarterly, vol. 69 (2023), no. 2, pp. 231–243.
  2. L. Mikec, M. Vuković, Interpretability logics and generalized Veltman semantics, The Journal of Symbolic Logic, vol. 85 (2020), no. 2, pp. 749–772.
  3. T. Perkov, M. Vuković, A bisimulation characterization for interpretability logic, Logic Journal of the IGPL, vol. 22 (2014), no. 6, pp. 872–879.
  4. M. Vuković, Bisimulations between generalized Veltman models and Veltman models, Mathematical Logic Quarterly, vol. 54 (2008), no. 4, pp. 368–373.

  14:50

Matthias Eberl, Indefinitely extensible models of simple type theory

Starting from the idea that some mathematical concepts are indefinitely extensible [1], we present a model of simple type theory that interprets its types in an indefinitely extensible way. Indefinite extensibility is defined relative to a notion of definiteness: a concept is (extensionally) definite if there exists a fixed set of objects falling under it. If a collection of objects counts as definite when it corresponds to a set in ZF set theory, then indefinite extensibility applies to the ordinal numbers. But we may also take the simplest notion of definiteness, being finite, in which case indefinite extensibility becomes the notion of the potential infinite.

In any case, indefinitely extensible totalities of objects are formalized as families of (definite) sets, indexed over a directed set of stages. We use simple type theory as a framework in which there is a possibly open-ended set of base types \(\iota\), as well as function types \(\rho \to \sigma\). Each type \(\rho\) has its own index set \(\mathcal{I}_\rho\), which is typically itself indefinitely extensible. The interpretation of the type \(\rho\) is a family \(([\rho]^i)_{i \in \mathcal{I}_\rho}\), which is a factor system [2] equipped with an internal limit construction in order to obtain indefinitely large stages.
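
As a rough picture (ours; the factor systems of [2] carry additional structure, in particular the internal limit construction mentioned above), the interpretation of a base type \(\iota\) can be thought of as a directed family of definite sets connected by embeddings,

\[
\big([\iota]^i\big)_{i \in \mathcal{I}_\iota},
\qquad
e_{ij}\colon [\iota]^i \hookrightarrow [\iota]^j \ \ (i \le j),
\qquad
e_{ii} = \mathrm{id}, \quad e_{jk} \circ e_{ij} = e_{ik},
\]

so that each stage is a definite set, while the family as a whole is never collected into a single completed totality.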

If we add a type \(prop\) for propositions, interpreted as a finite set of truth values, e.g. \(\{true,false\}\), and restrict simple type theory to a fragment without function variables (but with variables for relations), then we are able to interpret the universal quantifier. The main idea is to use the indefinitely large stages of the factor system and to interpret the universal quantifier by a reflection principle. Thus second-order quantification, including quantification over properties, ranges over definite sets. This idea extends the approach of [3], which has its origin in [4].
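
Schematically (our paraphrase and notation, suppressing environments and the precise choice of stages), the reflection-based clause evaluates a universally quantified formula at a stage \(i\) by letting the variable range over a sufficiently (indefinitely) large later stage:

\[
[\![\forall x^{\rho}\, \varphi]\!]^{i} = \mathit{true}
\quad\text{iff}\quad
[\![\varphi]\!](a) = \mathit{true} \ \text{ for all } a \in [\rho]^{j},
\]

for an indefinitely large stage \(j \geq i\) supplied by the factor system; the essential point is that the quantifier always ranges over a definite set.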

Bibliography

  1. Stewart Shapiro and Crispin Wright, All things indefinitely extensible, Absolute Generality (Agustín Rayo and Gabriel Uzquiano, editors), Clarendon Press, 2006, pp. 255–304.
  2. Matthias Eberl, Higher-order concepts for the potential infinite, Theoretical Computer Science, vol. 945 (2023), Article 113667.
  3. Matthias Eberl, A model theory for the potential infinite, Reports on Mathematical Logic, vol. 57 (2022), pp. 3–30.
  4. Jan Mycielski, Locally finite theories, Journal of Symbolic Logic, vol. 51 (1986), no. 1, pp. 59–62.

  15:15

Piotr Błaszczyk, Finite, infinite, hyperfinite. A new interpretation of Newton’s De Analysi.

Authors: Piotr Błaszczyk and Anna Petiurenko

In De Analysi, Newton derives three primary achievements of calculus: the area under the curve \(y(x)=x^{\tfrac mn}\) equals \(\frac {n}{m+n}x^{\tfrac {m+n}n}\) (Rule I), the power series of arcsine, and the power series of sine. Two further Rules, introduced without proof, reinforce Rule I: Rules II and III state that the area under a sum of finitely/infinitely many curves equals the sum of the areas under the individual curves.

The standard interpretation of De Analysi runs through calculus: adopting the Riemann integral, it presents Rule I as the Fundamental Theorem of Calculus, \((\int_0^x f(t)\,dt)'=f(x)\). Accordingly, term-by-term integration of series explains Rule III. However, this interpretation does not match the structure of Newton's argument concerning the series of arcsine and sine. In calculus, one first expands the series of sine and then obtains the expansion of arcsine by the theorem on the derivative of the inverse function. Newton, by contrast, finds the power series of arcsine first and then the series of sine. The core of this difference is that Newton applies neither the derivative nor the limit concept.

We interpret De Analysi within the framework of nonstandard analysis, providing a coherent account of Newton's technique of indivisibles: the area under a curve is measured by rectangles, and the 'infinitely close' relation serves as a deductive tool. We represent Newton's arguments on a hyperfinite grid, that is, on a discrete rather than a continuous domain. We measure the area under a curve by a hyperfinite sum, reconstruct Newton's proof of Rule I, and prove his Rule III.
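
As a minimal sketch of the kind of computation involved (ours, not the authors' reconstruction): fixing an infinite hypernatural \(N\) and the grid \(0, \tfrac{x}{N}, \tfrac{2x}{N}, \ldots, x\), the area under \(y(t)=t^{\tfrac mn}\) on \([0,x]\) is measured by the hyperfinite sum

\[
\sum_{k=0}^{N-1} \Big(\frac{kx}{N}\Big)^{\tfrac mn} \cdot \frac{x}{N}
\;\approx\; \frac{n}{m+n}\, x^{\tfrac {m+n}n},
\]

where \(\approx\) is the 'infinitely close' relation; Rule I amounts to showing that the two sides differ by an infinitesimal.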

Bibliography

  1. Nigel J. Cutland, Loeb Measure in Practice, Springer, 2000.
  2. Tom Whiteside, editor, The Mathematical Papers of Isaac Newton: Vol. 2, Cambridge University Press, London, 1968.

  15:40

Gabriele Buriola, A parameterized family of decidable theories involving differentiable functions

Authors: Gabriele Buriola, Domenico Cantone, Gianluca Cincotti, Eugenio Omodeo and Gaetano Spartà

We build on the existing literature on decidability in elementary analysis [2,3] by introducing a new family of decidable theories (here the word 'theory' refers to a fragment of a logical language subject to strong syntactic constraints) called \(\text{RDF}^n\). These theories are parameterized by a natural number \(n\) and involve differentiable real functions of class \(C^n\). Extending the previously developed theory \(\text{RDF}^{*}\) [1], each \(\text{RDF}^n\) is a quantifier-free first-order theory that includes numerical variables, basic arithmetic operations, function variables, and operators and relators for function addition, multiplication by a scalar, differentiation, function comparison, strict and non-strict monotonicity / convexity / concavity, and comparisons between functions (and their derivatives up to \(n\)-th order) and real-valued terms. The sentences of the \(\text{RDF}^n\) language are (universally closed) propositional combinations of atomic formulas of the forms shown in the table below: therein, the prefix '\(\text{S}\_\)' indicates strictness; \(\alpha\) stands for an integer ranging from 1 to \(n\) and \(D^\alpha\) denotes the \(\alpha\)-th order derivative; \(A\) stands for an interval specification; \(\bowtie\) can be any of the relators \(=,<,>,\leq,\geq\); \(s,t\) stand for real-valued terms, and \(f,g\) for terms designating functions in \(C^n\).

\(s = t\), \(s > t\), \(f(s) = t\), \(D^{\alpha}[f](s) = t\),
\((f=g)_A\), \((f>g)_A\), \((f \bowtie t)_A\), \((D^{\alpha}[f] \bowtie t)_A\),
\(\textit{Up}(f)_A\), \(\textit{Down}(f)_A\), \(\textit{Convex}(f)_A\), \(\textit{Concave}(f)_A\),
\(\textit{S}\_\textit{Up}(f)_A\), \(\textit{S}\_\textit{Down}(f)_A\), \(\textit{S}\_\textit{Convex}(f)_A\), \(\textit{S}\_\textit{Concave}(f)_A\).
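
For instance (an illustrative example of ours, not drawn from [1]), a typical \(\text{RDF}^n\) sentence, for \(n \geq 1\) and an interval specification \(A\), is the valid implication

\[
\big(D^{1}[f] > 0\big)_A \,\wedge\, (f = g)_A \;\rightarrow\; \textit{S}\_\textit{Up}(g)_A,
\]

a propositional combination of atoms from the table above, which the decision procedure must recognize as valid.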

The decidability of these theories relies, on the one hand, on Tarski's celebrated theorem [6] concerning the decision problem for the algebra of real numbers and, on the other hand, on interpolation techniques. Our ultimate goal is to establish a uniform theory \(\text{RDF}^{\infty}\) encompassing \(C^{\infty}\) functions.

Additionally, we explore the limits of decidability in the presence of a richer endowment of constructs relating derivatives and functions, e.g., equalities of the form \(\left( D^2[f]=g \right)_{\! A}\), building upon the findings of Richardson [4] and subsequent improvements by Laczkovich [5].

Bibliography

  1. G. Buriola, D. Cantone, G. Cincotti, E. Omodeo, G. Spartà, A decidable theory involving addition of differentiable real functions, Theoretical Computer Science, vol. 940 (2023), pp. 124–148.
  2. H. Friedman, Á. Seress, Decidability in elementary analysis, I, Advances in Mathematics, vol. 76 (1989), no. 1, pp. 94–115.
  3. H. Friedman, Á. Seress, Decidability in elementary analysis, II, Advances in Mathematics, vol. 79 (1990), no. 1, pp. 1–17.
  4. D. Richardson, Some undecidable problems involving elementary functions of a real variable, Journal of Symbolic Logic, vol. 33 (1968), no. 4, pp. 514–520.
  5. M. Laczkovich, The removal of \(\pi\) from some undecidable problems involving elementary functions, Proceedings of the American Mathematical Society, vol. 131 (2002), no. 7, pp. 2235–2240.
  6. A. Tarski, A decision method for elementary algebra and geometry, 2nd revised edition, University of California Press, Berkeley and Los Angeles, 1951, iii+63 pp. (U.S. Air Force Project RAND, Report R-109, Santa Monica, CA).
