TDPhiMa2, 1–3 September 2021: Programme
Wednesday, 1st September
- Jeremy Avigad: The Design of Mathematical Language
- Michael Friedman: On the metaphor of the sea of mathematics: between Blumenberg and Grothendieck
- Fenner Tanswell: Interesting and Important Mathematics: A Corpus Study
- Silvia De Toffoli: Diagrammatic Languages in Category Theory
- Dirk Schlimm and David Waszek: Uninterpretable symbols, algebra, and instrumentalism in Boole
Thursday, 2nd September
- Mikkel Willum Johansen and Henrik Kragh Sørensen: What is an experiment in mathematics? New evidence from mining the Mathematical Reviews
- Roi Wagner: Cohen's philosophical position – a keyword analysis
- Peter Koepke: Controlled Natural Mathematics (CNM)
- Juan Luis Gastaldi: What Are Machine Learning Approaches to Mathematics and What Should They Be? The Place of Textuality in Neural Approaches to Mathematical Knowledge
- Anton Kristian Suhr and Henrik Kragh Sørensen: Explanations in different contexts
Friday, 3rd September
- Benedikt Löwe and Bernhard Schröder: Thetic Phrases in Mathematics
- Giorgio Venturi: Axioms and their Creative Role in Mathematics
- Bernhard Fisseni, Deniz Sarikaya, and Bernhard Schröder: Frames for Mathematical Texts
- Keith Weber: The use and significance of instructions, imperatives, and recipes in written proofs in set theory
- Michael Kohlhase: Symbol Grounding vs. “Semantics by Pointing” – Bridging between Representations of mathematical/technical Documents and Knowledge
The times were chosen to suit European as well as many non-European participants; please follow the link on the time slots to convert them to your time zone.
Jeremy Avigad: The Design of Mathematical Language

Formal languages provide informative descriptions of informal mathematical language, but there is a sense in which they specify too much, and a sense in which they specify too little. The fact that an ordinary mathematical text can be represented equally well in any of a number of foundational languages suggests that these languages should be viewed as alternative implementations of the mathematical vernacular. It is therefore reasonable to look for descriptions of mathematical language and patterns of inference that clarify the specifications the implementations are designed to meet.

Formal languages specify too little in the sense that features of ordinary mathematical language and inference that are essential to its function are not addressed by a formal specification. Instantiating the formal foundation is only the first step in implementing a mathematical proof assistant; the bulk of the work goes into supporting the interactions with users that make such systems usable in practice. The design of a proof assistant requires countless engineering decisions that bear on the system’s usability, and we might optimistically seek a broad view of mathematical language that can help us evaluate these choices and assess their merits.

In this talk, I will make a start on developing a design specification for mathematical language that clarifies the challenges that proof assistants have to meet, and I will describe some of the mechanisms that contemporary proof assistants use to meet those challenges.
Bernhard Fisseni, Deniz Sarikaya, and Bernhard Schröder: Frames for Mathematical Texts

We discuss the concept of frames with respect to mathematical texts. The concept of frames has been developed in the context of Artificial Intelligence since the 1970s, and more recently also in linguistics and philosophy. We defend the following theses:
- Frames can be used to model both (textual) structural properties of proofs and ontological aspects of mathematical knowledge, and to relate both aspects of proof understanding.
- Specifically, using frames it is possible to model how mathematicians understand proof texts in which patterns have not been executed in a fully explicit way.
- The inheritance hierarchy of frames allows us, among other things, to represent the generalization and creative application of proof methods.
Michael Friedman: On the metaphor of the sea of mathematics: between Blumenberg and Grothendieck

Amir Alexander (2002), in his book Geometrical Landscapes, notes that 17th-century mathematicians adopted the imagery of geographical discovery and made it their own. Alexander singles out several mathematicians who employed nautical imagery and the metaphor of the sea to account for their discoveries – Cavalieri, Torricelli, and John Davies. Almost 1500 years earlier, the story of the Delian problem of doubling the cube is told by Plutarch (in his first-century work De genio Socratis) as the story of a meeting on the shores of Caria, when Plato returns from a nautical voyage. What do these stories tell us about discovery and problems in mathematics and, more explicitly, about the role which metaphors (of the sea) play when mathematicians and philosophers discuss mathematical problems?
In order to answer this question, and following Hans Blumenberg’s (1960) Metaphorology, I claim that the metaphors that frame these discoveries were by no means chosen at random, but have an intrinsic relationship with how the mathematical object was considered. Several components of the sea metaphor – the meeting of sea and shore, the stormy sea, the unpredictability of the waves, or the ever-changing location of the shore – refer to the encounter with a frontier or a limit, and hence correspond to the way in which the transformations of mathematics were perceived by mathematicians as well as by philosophers. As I will show, the transformations of mathematics are also reflected in the transformations and the history of the sea metaphor itself, as can be seen in Alexander Grothendieck’s approach to mathematical discovery, known as “the rising sea” (“la mer qui monte”; see McLarty, 2007).
Alexander, A. (2002). Geometrical Landscapes. The Voyages of Discovery and the Transformation of Mathematical Practice, Stanford University Press.
Blumenberg, H. (1960). Paradigmen zu einer Metaphorologie. Archiv für Begriffsgeschichte, 6, 7–142.
McLarty, C. (2007). The Rising Sea: Grothendieck on Simplicity and Generality. In J. Gray, K. Parshall (Eds.), Episodes in the History of Modern Algebra (1800–1950), American Mathematical Society, 301–326.
Juan Luis Gastaldi: What Are Machine Learning Approaches to Mathematics and What Should They Be? The Place of Textuality in Neural Approaches to Mathematical Knowledge
Recent years have seen a remarkable development of deep neural network (DNN) techniques for data analysis, along with their increasing application in scientific research across different disciplines. As a side consequence, we witness a renewed interest in textual corpora and their representational capabilities. The field of mathematics has not been exempted from this general trend. Indeed, different works have suggested the relevance of possible applications of DNNs to the treatment of mathematical knowledge at multiple levels. Even more than any other field of application, these attempts in mathematics raise critical philosophical questions since the formal (i.e. non-empirical) nature generally attributed to mathematical knowledge contrasts with the radically empirical position assumed by connectionist approaches guiding the application of DNNs. In particular, a tension arises between a bottom-up approach to knowledge specific to neural techniques and a top-down perspective provided by the logico-symbolical principles traditionally associated with mathematical knowledge. Significantly, the problem of mathematical textuality lies at the center of this tension, being both the privileged form of empirical data for the computational treatment of mathematics and the key to symbolic generalization. This centrality of mathematical language is confirmed by the fact that most neural models for mathematics are borrowed from natural language processing. Based on a survey of the most relevant literature in the field, I will assess the philosophical stakes and challenges of a machine learning approach to mathematical knowledge and indicate the conceptual and technical orientations that can be drawn for a text-driven philosophy of mathematics.
Mikkel Willum Johansen and Henrik Kragh Sørensen: What is an experiment in mathematics? New evidence from mining the Mathematical Reviews
During the past three decades, interest in the role of experiments in mathematical research practice has increased, not least with the establishment of the journal Experimental Mathematics (1992) and through the seminal work of Borwein and Bailey (2004). Yet the wider phenomenon of mathematical experiments remains little explored within the philosophy of mathematical practice.
In a novel approach to the problem, we mined Mathematical Reviews for entries related to experiments, and through an iterative process comprising several rounds of qualitative and quantitative analysis we identified the typical roles experiments play in mathematical practice. Some of these are in line with the ideas of Borwein and Bailey, but others are surprising. In particular, inductive evidence obtained through experiments seems to play a large role in the epistemology of mathematical practice. This result runs counter to traditional philosophical assumptions about mathematics, where deductive proof is typically taken to be the only admissible way to verify mathematical claims (see also Weber et al. 2014).
In this paper we will give an in-depth analysis of the roles experiments play in mathematical practice, and we will discuss whether mathematicians’ use of experiments challenges traditional assumptions about the epistemology of mathematics.
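The selection step in a mining study of this kind can be sketched in miniature. The snippet below is only a hedged illustration with invented toy entries, not the authors’ actual pipeline: it filters review-like texts for experiment-related vocabulary, the kind of coarse retrieval that would precede qualitative coding.

```python
import re

# Toy stand-ins for Mathematical Reviews entries; the real corpus is far
# larger and was analysed iteratively by the authors.
entries = [
    "The authors prove the conjecture by a compactness argument.",
    "Numerical experiments suggest the bound is sharp.",
    "An experimental search over small cases motivated the main theorem.",
    "A survey of recent results on elliptic curves.",
]

# Match 'experiment', 'experiments', 'experimental', etc.
pattern = re.compile(r"\bexperiment\w*\b", re.IGNORECASE)

# Keep only the entries that mention experiment-related terms at all.
candidates = [entry for entry in entries if pattern.search(entry)]
```

Such a keyword filter deliberately over-selects; distinguishing the different roles experiments play in the retained entries is then a matter of human analysis.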
Borwein, J. and Bailey, D. (2004): Mathematics by Experiment: Plausible Reasoning in the 21st Century. Natick: A K Peters.
Weber, K., M. Inglis and J.-P. Mejía-Ramos (2014): How mathematicians obtain conviction. Educational Psychologist 49(1): 36–58.
Michael Kohlhase: Symbol Grounding vs. “Semantics by Pointing” – Bridging between Representations of mathematical/technical Documents and Knowledge
In this talk we want to understand the computation of the meaning of technical terms (and symbolic notations) in mathematical language (symbol grounding). The key observation is that the meaning of symbols is given in a different way than that of words in everyday language: whereas the meaning of everyday words is given experientially, the meaning of technical terms and symbols is given by definitions.
We will discuss the practice of representing mathematical concepts in formal languages as pairs of a name and a pointer to a definition (we tentatively call this “semantics by pointing”), see where its limitations lie, and consider what could be done to lift them.
Peter Koepke: Controlled Natural Mathematics (CNM)

Euclid's theorem "There are infinitely many prime numbers" is immediately understood by mathematicians; the statement has a strong intuitive, canonical appeal. It can be recast in a variety of strictly formal theories, be it some first-order axiomatic system or a type theory. In all these theories, and in their implementations in computer-assisted proof systems, one can readily derive the infinitude of primes based on the same mathematical ideas. On the other hand, such formalizations seem to do violence to the intuitive and flexible original formulation in natural language, and basic ideas can appear very different in different systems.
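To make the contrast concrete, here is one possible formal rendering of the statement in Lean. This is only a sketch: the lemma name `Nat.exists_infinite_primes` and its exact form are taken from one particular library (Mathlib) and may differ across versions, and many other renderings are possible.

```lean
-- "There are infinitely many primes", rendered as: for every natural
-- number n there is a prime p that is at least n.
theorem infinitely_many_primes : ∀ n : ℕ, ∃ p, n ≤ p ∧ Nat.Prime p :=
  Nat.exists_infinite_primes
```

Other proof systems would phrase the same idea quite differently, which is precisely the divergence between formalizations noted above.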
We suggest that mathematics is a "natural" activity similar to natural-language description and argumentation. In natural language there is the programme of normalized or controlled natural languages (CNL): to get rid of ambiguities by restricting oneself to sublanguages of ordinary natural language. Whereas this approach has shown only limited success in general, we hold that mathematics is much more amenable to a similar programme of controlled natural mathematics (CNM), since mathematical texts strive to avoid ambiguities anyway.
The Naproche (Natural Proof Checking) proof assistant demonstrates that one can devise and implement controlled natural languages for mathematics that are readable by every mathematician. Statements in such a language are unambiguous and are arguably more canonical, apart from natural language variants, than formalizations using artificial formal constructs. We expect that the CNM-approach can be scaled up to sophisticated modern mathematics.
Benedikt Löwe and Bernhard Schröder: Thetic Phrases in Mathematics

A thetic phrase is a sentence introducing a designator for an object. In ordinary language, these phrases are relatively rare, since we usually do not have the power of naming things; in mathematical language, they occur frequently and serve a very important role in mathematical discourse. Thetic phrases in mathematics do not just introduce designators, but also constrain the properties of the newly introduced object: “let G be a group” introduces the new designator, G, to refer to a newly labelled mathematical object and informs us exactly which properties we may assume the object to have. As with the introduction of fictional objects in literature, properties not explicitly or implicitly stated are undetermined for the newly introduced object G (e.g., G is neither Abelian nor non-Abelian). In this article, we survey the representation of thetic phrases in the mathematical vernacular of various languages, observe that they all use unusual or archaic grammatical constructions, and discuss the requirements of a logical analysis of thetic phrases.
Dirk Schlimm and David Waszek: Uninterpretable symbols, algebra, and instrumentalism in Boole [Slides]
This paper addresses the role of symbols in mathematical reasoning by offering a new reconstruction of George Boole’s views on the use of ‘uninterpretable symbols’ in algebra and logic. Boole famously wrote that, while for ordinary reasoning the intelligibility of each inference is ‘an actual condition and important safeguard’, no such restriction should be imposed on symbolical reasoning: instead, ‘the validity of a conclusion arrived at by any symbolical process of reasoning, does not depend upon our ability to interpret the formal results which have presented themselves in the different stages of the investigation’ – a principle he claims belongs to the ‘axiomatic truths which constitute, in some sense, the foundation of the possibility of general knowledge’. However, there are frequent misunderstandings about which parts of his logical calculus Boole actually took to be ‘uninterpretable’, and about what he meant by this word. First, it is often assumed that a central ‘uninterpretable’ aspect of Boole’s calculus is his use of an operation of division devoid of logical meaning; in fact, division is meaningful in Boole’s system, and uninterpretability arises instead from the need for intermediate computations to go through expressions in which some operations (be it addition, subtraction, or division) are seemingly applied beyond their legitimate domain of definition. Second, Boole’s broader attitude to symbols is sometimes reduced to a kind of instrumentalism, whereas his philosophical attitude is actually grounded in sophisticated and still interesting reflections, current among his fellow British algebraists, on the generality of algebra.
Anton Kristian Suhr and Henrik Kragh Sørensen: Explanations in different contexts
Philosophers have debated whether explanations occur in mathematical discourse, and if so whether such explanations differ from other accounts of explanation in the sciences (see e.g. Overton 2012). Recent empirically grounded studies have indeed shown that mathematicians, too, engage in explanatory talk, at least in more informal contexts such as (online) collaborations and the like (see e.g. Pease, Aberdein and Martin 2019). And text-mining has corroborated this for papers in the online preprint-server arXiv (Mejía-Ramos et al. 2019). However, the more precise nuances of mathematical explanations and their roles in different contexts remain open and interesting questions for the philosophy of mathematical practice.
In our study, we have subjected a corpus of structurally parsed articles from the arXiv to analyses based on natural language processing to explore questions about the extent and types of explanatory discourse in mathematical texts. Our method is thus based on applying both quantitative and qualitative digital tools to study mathematical corpora.
In our contribution, we will briefly present the background and technology of our approach, then report on our findings of differences in the use of explanations across mathematical contexts (such as theorem statements, proofs, and paratext) and across mathematical fields. We will then use these text-mining approaches to single out a few special case studies for closer analysis. Combining these with different experiments allows us to gauge and nuance different accounts of what an explanation in mathematics might amount to.
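The kind of context-sensitive counting described above can be illustrated in miniature. The snippet below is a hedged toy sketch with invented segments, not the authors’ actual tooling: it tallies “explain”-family words separately for each structural context of a parsed article.

```python
import re
from collections import Counter

# Toy (section-label, text) pairs standing in for a structurally parsed
# arXiv article; the actual corpus and parser are far more elaborate.
segments = [
    ("theorem", "Every finite group of prime order is cyclic."),
    ("proof", "This explains why the subgroup must be trivial."),
    ("proof", "We explain the construction before giving the estimate."),
    ("paratext", "Section 3 explains the connection to earlier work."),
]

# Match 'explain', 'explains', 'explanation', etc.
pattern = re.compile(r"\bexplain\w*\b", re.IGNORECASE)

# Tally occurrences per structural context.
counts = Counter()
for section, text in segments:
    counts[section] += len(pattern.findall(text))
```

Comparing such per-context tallies across fields is what lets differences in explanatory discourse become visible at corpus scale.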
Mejía-Ramos, Juan Pablo et al. (2019). “Using Corpus Linguistics to Investigate Mathematical Explanation”. In: Methodological Advances in Experimental Philosophy. Ed. by Eugen Fischer and Mark Curtis. London et al.: Bloomsbury Academic. Chap. 8, pp. 239–264.
Overton, James A. (Apr. 2012). “‘Explain’ in scientific discourse”. Synthese, vol. 190, no. 8, pp. 1383–1405.
Pease, Alison, Andrew Aberdein, and Ursula Martin (2019). “Explanation in Mathematical Conversations. An Empirical Investigation”. Philosophical Transactions A, vol. 377, no. 2140.
Fenner Tanswell: Interesting and Important Mathematics: A Corpus Study

Mathematical peer review (and other similar assessment exercises) is not just about deciding whether some piece of mathematics is correct, but also relies on judgments of whether it is new, interesting, and important. While the content of referee reports is relatively inaccessible to empirical studies, we propose that it is also possible to study these key evaluative properties by looking at how the language of “interesting”, “novel”, “important”, and related terms is used in mathematical research papers. Here, we carry out a corpus-linguistic study of these terms to discover how they are used, what they are applied to, and how they relate to one another.
This is joint work with Matthew Inglis.
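One basic corpus-linguistic question raised above is what evaluative terms are applied to. The snippet below is a hedged toy illustration with invented sentences, not the study’s actual method: it extracts the word immediately following “interesting” as a crude stand-in for a proper collocation analysis.

```python
import re
from collections import Counter

# Toy sentences standing in for a corpus of research papers; the actual
# study works over a much larger collection with real corpus tooling.
sentences = [
    "This leads to an interesting question about rigidity.",
    "An interesting question is whether the bound is tight.",
    "We obtain an interesting corollary of the main theorem.",
]

# Count the word immediately following 'interesting' in each sentence.
collocates = Counter(
    match.group(1).lower()
    for sentence in sentences
    for match in re.finditer(r"\binteresting\s+(\w+)", sentence, re.IGNORECASE)
)
```

Even this crude tally hints at the kind of finding such a study can yield: what mathematicians call “interesting” tends to cluster on particular kinds of objects (questions, corollaries, and so on).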
Silvia De Toffoli: Diagrammatic Languages in Category Theory

The focus of this article is the use of diagrammatic languages in category theory. This is particularly interesting because in this domain, the different notations that are widely adopted in practice bridge algebraic and geometric reasoning. In fact, category theory is at the same time an abstract mathematical field, where the reasoning is algebraic in nature, and a field in which it is possible to exploit topological intuition through a well-defined graphical language. Two kinds of diagrammatic languages, related by a duality, will be introduced: the algebraic and the graphical language. Although both languages are two-dimensional and generally considered diagrammatic, the graphical notation alone enables mathematicians to deploy topological intuition. It will be shown that these diagrammatic languages can be interpreted as cognitive instruments, used by practitioners not only as heuristic aids but as authentic reasoning tools. They are not only deployed to arrive at new conjectures and discoveries but are also used to prove results rigorously, and therefore play genuine epistemic roles. The rigor of arguments in the graphical notation is guaranteed by soundness and completeness theorems.
Giorgio Venturi: Axioms and their Creative Role in Mathematics
In this talk we will present a new approach to the study of creativity in mathematics based on a linguistic analysis of the creative component of mathematics language. We will present, as a case study, an analysis of axioms and postulates in terms of speech act theory. This will help us to identify assertive, directive, and declarative elements in axiomatic systems. The declarative component will then suggest a creative role for axioms in our mathematical theorizing.
Roi Wagner: Cohen's philosophical position – a keyword analysis

This paper will apply semiotic methodologies to understand the philosophical underpinnings of Paul Cohen’s work on set theory. The purpose is to demonstrate the use of these methodologies for interpreting mathematical practices.
Since Paul Cohen was not (nor pretended to be) a philosopher, his definitions of terms such as “intuition”, “natural”, “reality”, “model”, and “truth” are under-specified. However, the use of these terms can educate us about the views guiding his mathematical practice and help us understand the “everyday” mathematician’s conception of these terms.
Methodologically, I will apply two semiotic principles:
- The meaning of signs (in this case, words) is a function of their relations to each other and their uses, rather than of supposed references to physical or metaphysical objects.
- These relations and uses need not be stable or consistent, and these tensions play a formative role in understanding the signs.
I will apply these principles to analyze the aforementioned terms in Paul Cohen’s “The independence of the continuum hypothesis” (1963–1964), “Set theory and the continuum hypothesis” (1966), and “The discovery of forcing” (2002). In order to enrich the analysis, I will relate it to Cohen’s explicit uses of temporal adverbs, formal and informal language frameworks, and explicit markings of the determinacy or indeterminacy of mathematical objects, which endow his work with narrative-like structures. The result of the analysis will not be an argument in favor of a well-defined philosophy of mathematics, but an insight into the working philosophy of real mathematicians and the structures of mathematical discourse.
Keith Weber: The use and significance of instructions, imperatives, and recipes in written proofs in set theory [Slides]
A proof is often conceptualized as a sequence of mathematical assertions, where each assertion is an acceptable premise (e.g., a hypothesis or axiom) or is deductively derived from previous assertions in the proof. Recently, Fenner Tanswell observed that in mathematical practice, proofs often do not align with this conceptualization because they contain instructions in the form of imperatives, which direct the reader to perform mathematical actions. In this paper, I catalog and summarize the use of imperatives in Kenneth Kunen’s Set Theory: An Introduction to Independence Proofs, a canonical and widely used textbook introducing logicians to forcing and other topics in set theory. I use my analysis to argue for the following:
- imperatives are common and used in most of the proofs in the textbook;
- many proofs are centered around recipes – that is, sequences of imperatives or instructions – that describe to the reader how to construct objects (sets and models) with desirable properties;
- evaluating and understanding recipe-based proofs involves the reader imagining performing mathematical actions over (often transfinite) time and anticipating the results of applying these actions, which is a different process from checking whether assertions in a proof are logical consequences of previous assertions.
Finally, I explore the ways in which Kunen’s imperative proofs can and cannot align with formal derivations, and the relevance of this to contemporary debates on the ontological status of forcing extensions.