August 4, 2019

some claims on the philosophy of mathematics

I encountered a reference to Grosholz's book Representation and Productive Ambiguity in Mathematics and the Sciences while reading Mathematics without Apologies by Michael Harris. These are my reading notes and discussion of a synopsis on Professor Grosholz’ website.

Reading: Emily R. Grosholz, “Research in Philosophy of Mathematics” http://emilygrosholz.com/research/philofmath.html

What is Mathematical Reasoning?

Grosholz inquires about the nature of ampliative reasoning, or “reasoning that adds [mathematical] content as it solves problems.” She draws an immediate example from the field of analysis: Andrew Wiles’ proof of Fermat's Last Theorem, she writes, consists of “an argument [that] can be reconstructed” because it added new mathematical content during its original construction. This content comes from “discovering” (my word, not hers) the “conditions of the intelligibility of existing things” or the “conditions of the solvability of a problem”.

What is Wrong with the (Logicist) Philosophy of Mathematics?

Grosholz’ core argument is that the usual philosophical inquiries into mathematics treat mathematical reasoning as an analog to mathematical formalism, especially a formalism expressed by predicate logic. She writes that the above nature of ampliative reasoning is “obscured by the assumptions of logicist philosophers of mathematics, who pretend, or hope, that mathematical reasoning can be translated into predicate logic (perhaps with set theory added), and then located within the closed box of an axiomatic system, proved from first principles by deductive logic”.

Her argument draws three criticisms of the logicists’ view thus described:

  1. Mathematical research does not proceed as a formalist exercise from first principles. Classroom teaching and textbook problem-solving might proceed that way.
  2. Highly specific languages have been created for mathematical problem-solving, and one-way translation into a formalist language (e.g., predicate logic) “diminishes the expressive and explanatory power of those languages”.
  3. Translation as criticized in (2) can “enhance the explanatory and expressive power of mathematical languages . . . only when the disparate languages are retained”.

How Do These Criticisms Hold Up?

I'm very much in favor of Grosholz’ approach and her project; I do have a couple of reservations about the strengths of her points (1-3) above.

My first question is whether classroom teaching or textbook problem-solving actually proceeds effectively in the logicist manner. Maybe this question splits hairs, but I worry that Grosholz has drawn a distinction between effective mathematical research and ineffective classroom teaching. Anecdotally, effective classroom teaching progresses very much like a research project: student-driven interest and insight into teacher-proposed problems seems to increase students’ capacity to engage with and learn from existing mathematical concepts. Ultimately, however, I suspect Grosholz is correct on this point; studies suggest that the anecdotally effective, student-led, research-like classroom experience is less effective than direct instruction. So it is reasonable to assume that logical formalism can aid instruction in ways that it may not aid, and might even hinder, research.

When Grosholz begins discussing “languages” and “translation” in point two, my questions multiply. In general, I'm a bit fuzzy on what constitutes a “language” in Grosholz’ criticism, as Grosholz appears to overload the term to mean each of natural language, formal language, and mathematical notation. I'd agree, however, that it's a very dubious position to believe – as a very strict (almost straw-man) logicist might – that comprehensive reduction to a predicate logical form can be achieved. Perhaps she discusses a clearer distinction in her full-length papers and books.

My difficulty with point (3) inherits from my fuzziness about point (2). It seems Grosholz is describing a situation where a plurality of languages (natural, formal, notational) helps ampliative reasoning in a way that a mono-linguistic approach via predicate logic cannot. I wonder, then: what is ampliative reasoning? Do formal languages not allow for the creation and reconstruction of analytic tools? Is there something more, or different, happening when people explain and express mathematical problems and their solutions which cannot occur via a formalist approach? These are difficult questions… again, Grosholz probably discusses them in her books.

Basically, I need to read more of her writings to see how these arguments unfold in full.

Why Is This Interesting?

More specifically, why is this interesting to people interested in machine learning?

I am not a machine learning expert, and I've only implemented very rudimentary tutorial-esque ML solutions. When I look at the fancy demonstrations of Deep Learning – most especially GANs – I ask myself: “what does this technology and its implementation in this problem space add?” Encountering Grosholz’ project, I have a few tools to better discuss ML from a philosophical perspective and a few viewpoints to compare the technical progress in ML with the intellectual progress in mathematics as a whole.

Does a Given Machine Learning Algorithm Display Ampliative Reasoning?

I think the answer here is “yes”. I'm assuming here that the data scientist constructs an algorithm and that algorithm – following some kind of training or sandboxing or evolution or whatever – outputs a model for use. In other words, when a data scientist's ML algorithm outputs a model, it has effectively added content to the problem-solving space. That fits the minimal definition of ampliative reasoning perfectly.
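To make the “algorithm outputs a model” pattern concrete, here's a toy sketch in plain Python (my own illustration, not anything from Grosholz): the training procedure takes data and returns a model, a new artifact that was not present in the original problem statement.

```python
def fit_line(points):
    """Ordinary least squares for y = a*x + b: the 'algorithm'."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    # The returned closure is the 'model': content added to the problem space.
    return lambda x: a * x + b

model = fit_line([(0, 1), (1, 3), (2, 5)])  # data lying exactly on y = 2x + 1
print(model(10))  # 21.0
```

The point of the sketch is only that `model` did not exist before training ran; in that minimal sense the procedure is ampliative.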

Moreover, observe that the data scientist uses programming frameworks and tools that, via the Curry-Howard correspondence, can be expressed in predicate logic. Kind of. Because we also have to account for the domain knowledge, the data collection and cleaning, the algorithm selection, the model evaluation, and the general programming trial-and-error that the data scientist must undergo to reach a reproducible program. This speaks directly in favor of Grosholz’ criticism of the logicist approach.
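A minimal sketch of that surrounding workflow (again my own toy example, with invented names): only the `fit` step resembles the formally specifiable core, while cleaning and evaluation are the informal, judgment-laden work that wraps around it.

```python
def clean(raw):
    """Data cleaning: drop malformed rows -- a judgment call, not a deduction."""
    return [(x, y) for x, y in raw if y is not None]

def fit(data):
    """The formally specifiable core: here, just predict the mean of y."""
    mean_y = sum(y for _, y in data) / len(data)
    return lambda x: mean_y

def evaluate(model, held_out):
    """Model evaluation: mean absolute error on held-out data."""
    return sum(abs(model(x) - y) for x, y in held_out) / len(held_out)

raw = [(1, 2.0), (2, None), (3, 4.0)]
model = fit(clean(raw))
print(evaluate(model, [(4, 3.0)]))  # 0.0, since the mean of [2.0, 4.0] is 3.0
```

Deciding that `None` rows should be dropped, that a mean is an adequate model, or that absolute error is the right loss are all choices a predicate-logic rendering of `fit` alone would not capture.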

Does a Given Machine Learning Model Reconstruct (An) Analytic Method(s)?

I think the answer here is “mostly”. Recall Grosholz gives two basic views of “analysis”:

  • Leibniz’ “search for conditions of the intelligibility of existing things in mathematics”
  • Pappus’ “[search for] conditions of the solvability of a problem [in mathematics]”

Now, you'll notice that my catch-all usage of “Machine Learning” is far too broad. Something like feature detection might be described by Leibniz’ definition; a certain kind of optimization procedure might be described by Pappus’ definition. I should be more careful not to capture all the varieties of the data scientist's toolkit under one “ML” label. But I think my emphasis here is on reconstructability – and I think an unstated precondition for ML-guided analysis is that it be reproducible.
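In practice, the reconstructability precondition often comes down to something as mundane as pinning random seeds. A hedged sketch (the names are mine): a stochastic “training” step becomes reconstructable once its source of randomness is fixed.

```python
import random

def train(seed, data):
    """A stochastic 'training' step: the mean plus a small random perturbation.
    Fixing the seed makes the run reconstructable."""
    rng = random.Random(seed)
    mean = sum(data) / len(data)
    return mean + rng.uniform(-0.01, 0.01)

data = [1.0, 2.0, 3.0]
run_a = train(42, data)
run_b = train(42, data)
print(run_a == run_b)  # True: same seed, same model parameter
```

Without the fixed seed, two runs of the same “analysis” would disagree, and there would be nothing stable to reconstruct.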

Does It Make Sense – from a Philosophical Point-of-View – to Interpret a Machine Learning Model?

The ability to interpret what a machine learning system is doing and why is a perennial blog topic (see, for instance, Interpreting Machine Learning Models by Lars Hulstaert). But, like, why?

If Machine Learning (in my slightly irresponsible catch-all connotation) does, in Grosholz’ sense, exhibit ampliative reasoning and analytic reproducibility, then, also following Grosholz, there's no reason to need “interpretation”! Specifically, when ML solves a problem we say it has (1) added content to the problem space via its constructed model and (2) captured its analysis in a reproducible form, i.e., its model. The extra task of elucidating the model in a human-readable form repeats the logicists’ claim that mathematical knowledge can survive one-way translation into a canonical form. If I'm reading her correctly, in Grosholz’ view, we need the plurality of languages (whatever that really means) in order to fully benefit from the mathematicians’ or the machines’ reasoning process (whatever that really looks like); interpreting ML loses the full benefits.

Ironically – and amusingly – we think we need interpretable ML to avoid succumbing to a black box's decision-making process, and we think we don't need logical formalism to avoid boxing our mathematicians’ discovery process. At the root of the philosophical problem may lie a basic anxiety about human limitedness: Can human reasoning be “boxed”, or is that a debasement of the human species? Shouldn't non-human reasoning be “unboxed”, or is that a xenophobic reaction to seemingly non-human activities?

I really don't know. The practical path to take is to say that ML exists specifically to solve human problems and the only human problems that can be affordably solved right now are problems in large commercial businesses, social enterprises, or scientific endeavors. But maybe as the cost goes down and the affordable problem space widens out, we'll start to view these philosophical questions with a bit more candor.

How Is Machine Learning Different From Human Learning?

I am always fascinated by this question. Really, it comes down to investigating just what human learning involves – which is exactly what Grosholz’ project pursues. Necessarily, this requires taking a historical outlook rather than a speculative outlook. I'll close this post with Grosholz’ closing paragraph:

The philosophy of mathematics is in the process of renegotiating its relation to the history of mathematics […] this relationship must be undergirded by a philosophy of history that does not reduce the narrative aspect of history to the forms of argument used by logicians and natural scientists. History is primarily narrative because human action is, and therefore no physicalist-reductionist account of human action can succeed. So philosophy must acknowledge and retain a narrative dimension, since it concerns processes of enlightenment, the analytic search for conditions of intelligibility. Indeed, the very notion of a problem in mathematics is historical, and this claim stems from taking as central and irreducible (for example) the narrative of Andrew Wiles's analysis that led to the proof of Fermat's Last Theorem. If mathematical problems have an irreducibly historical dimension, so too do theorems (which represent solved problems), as well as methods and systems (which represent families of solved problems): the logical articulation of a theory cannot be divorced from its origins in history. This claim does not presume to pass off mathematics as a series of contingencies, but it does indicate in a critical spirit why we should not try to totalize mathematical history in a formal theory.

Content by © Jared Davis 2019