Here is their ontology for versioning and management of change; somewhat different from our group’s notion of management of change, though.
Archive for October, 2008
My third meeting with Cris …
- The Way of Discovering Math: Cris highlighted again that it is important to distinguish two aspects of mathematics: presenting mathematical results and actually discovering them. He said that if I want to really understand mathematical practice, I need to follow and analyse the second aspect. In his lectures, Cris tries to teach his students how to discover math, but this is indeed difficult, he says. For example, he intentionally presents them with several wrong definitions before finally concluding with the correct one. In one of his papers, Cris et al. even successfully published several wrong definitions of nondeterministic automata (before introducing the correct one), although the reviewers first wanted them to “remove the rubbish”: C. S. Calude, Elena Calude, B. Khoussainov. Finite nondeterministic automata: Simulation and minimality, Theoret. Comput. Sci. 242, 1-2 (2000), 219-235. He also pointed me to the books of George Pólya, in particular, Mathematics and Plausible Reasoning. Further, Cris said that if we train (young) researchers, we have to teach them “to have the guts to not be afraid to be wrong“. And then he said: “The productivity of a PhD is 5%, 95% is failure”, and the situation is almost the same for professional mathematicians (apart from a few geniuses), who do not continuously produce correct and beautiful proofs and theorems.
- An assessment system for mathematical knowledge: Cris said that my proposal is still too vague and abstract to be understood by other scientists. He gave an example to illustrate that I should work on my presentation of new ideas (freely cited, not his actual words): “When I present my view on solving a mathematical problem, I need to include two things: (1) I need to provide a number of concrete steps towards the solution, i.e. first I prove this, then I prove that, … and I have to point to the basic methods I will be using, e.g. I’ll use graph theory rather than topology. (2) I need to point to my previous accomplishments, i.e. some steps I have already addressed. Then people will more likely accept it.” Cris also said that for the existing achievements I do not necessarily have to provide my own work; I can point to existing research, and that will increase my acceptance. So what I will do for my seminar talk is put more emphasis on the methods and related-work parts, and I will use many examples to illustrate my approach. I will have a look at Cris’s Google talk again. However, in general, Cris thinks that this could be interesting. He actually believes that 90% of all published mathematical proofs are either incorrect, incomplete, or uninteresting. “With over 100,000 published a year, how can you still find your way without any guidance? You have the expertise in your small community, but as soon as you have to rely on work outside your field, you have to trust the expert.“
I also met interesting people today whom I am eager to talk to further. And I’ll get a chance tomorrow, as there is the “end of the lecture party”.
- Emilia Mendes, expert in web development.
- André Nies, mathematician.
- Brian Carpenter (haven’t met him yet, but will soon). Cris recommended that I read his A Dialogue on the Internet, published in the Bulletin of the European Association for Theoretical Computer Science (2008).
I have the feeling that the challenge of bringing mathematicians and computers closer together requires a lot of expertise in the field of pure mathematics and in the more technical field of mathematical knowledge management. The latter requires a good overview of mathematical tools, in particular computer algebra systems and theorem provers, as well as other mathematical editors (e.g. Plato). Moreover, the step-wise formalization of mathematical text seems to be a core interest and competence of the whole KWARC group rather than my own research focus.
Looking at the discussion with Cris on the confidence in, acceptance of, and trust in proofs, I am wondering whether we could use our understanding of mathematical practice to provide an assessment and reputation system for mathematics, i.e. something that we could attach to a repository of mathematical knowledge and use to facilitate the collaborative assessment of the mathematical community on different layers:
- Is the proof published in journals (considering the impact factor of these journals)?
- Was it accepted by the Zentralblatt or Mathematical Reviews (adding more confidence)?
- How long was the proof tested by the mathematical community?
- How many proofs exist for the theorem?
- Is it a nice proof/ theorem (beauty)?
- Is it correct? (later adding automatic verification methods …)
- Can it be understood (e.g. measured by the number of references and re-uses)? How hard is it to understand?
- Is it re-used by the community, i.e. is it relevant and useful for further mathematical work?
We can imagine a top-down and bottom-up approach:
- The top-down approach requires the users’ explicit assessment of the content: evaluating the users’ ratings, tags, and bookmarks.
- The bottom-up approach could implicitly derive the relevance of content by making use of the logical and narrative structure of the mathematical knowledge: computing theory interrelations, citations, …
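To make the idea less abstract, here is a minimal sketch of how the top-down and bottom-up signals could be blended into a single confidence score per proof. All field names, weights, and saturation points are purely illustrative assumptions of mine, not a settled design:

```python
# Hypothetical sketch: blending explicit (top-down) and implicit (bottom-up)
# signals into one confidence score for a published proof.
# Weights and saturation thresholds are illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class ProofRecord:
    avg_rating: float      # mean user rating in [0, 1] (top-down)
    review_accepted: bool  # accepted by Zentralblatt / Mathematical Reviews
    years_tested: int      # years the proof has been exposed to the community
    reuse_count: int       # re-uses in later proofs, via citation analysis (bottom-up)

def confidence(p: ProofRecord) -> float:
    """Weighted blend of the assessment dimensions, capped at 1.0."""
    score = 0.4 * p.avg_rating
    score += 0.2 if p.review_accepted else 0.0
    score += 0.2 * min(p.years_tested / 10.0, 1.0)  # saturates after 10 years
    score += 0.2 * min(p.reuse_count / 50.0, 1.0)   # saturates after 50 re-uses
    return round(min(score, 1.0), 3)

example = ProofRecord(avg_rating=0.8, review_accepted=True,
                      years_tested=5, reuse_count=25)
print(confidence(example))  # 0.72
```

A real system would of course have to learn or negotiate such weights with the community rather than fix them by hand.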
I will discuss with Cris, whether he thinks that this would lead to a useful application for mathematicians.
Communities of Practice in Mathematics
Christine Müller is a PhD student from Jacobs University Bremen who is working with Prof. Cristian Calude for the next 7 months. In her PhD, Christine is using the metaphor of Communities of Practice to enrich semantic mark-up formats for science, in particular mathematics, with social and collaborative components. By integrating social (Web 2.0) technologies and user modelling techniques, Christine aims at eventually improving the management, in particular the accessibility, of mathematical online materials. To illustrate and evaluate her approach, she is currently implementing a prototype eLearning application, which is used within an introductory Computer Science lecture at Jacobs University Bremen.
In the seminar, Christine will introduce the underlying representation format for mathematics and present the latest findings of her work, i.e. a framework for the context-aware adaptation of mathematical notations.
Thesis Title: Community of Practice in Mathematics
Please note, I am currently in the process of structuring and narrowing down my research interests.
Communities of Practice: Using the metaphor for all the subsequent research interests …
- e.g. for extending document mark-up formats with social and collaborative components
- e.g. for understanding mathematical practice
- e.g. for the discussion on adaptation/ user modelling/ community modelling
- Semantic mark-up of mathematical documents and mathematical practice
- Logical structure of mathematical knowledge (theories and interrelations)
- Narrative (didactic/ rhetorical) structure of mathematical documents (see mathui paper)
- Mark-up of mathematical notations and notation context (see small survey, see mathui paper, mkm 2008)
Accessibility of Mathematical Knowledge
- Adaptation of mathematical knowledge: Selection, Sequencing, and Presentation of mathematical knowledge (see ABIS, FGWM)
- Adaptation of mathematical notations
- Intelligent selection and recommendation of mathematical examples and exercises
- User modelling and community modelling (see ABIS, FGWM)
Semantic Packaging Format: Integrating Semantic Tools (WSKS paper)
Integrating Semantic Web and Social Web
- Enriching (semantically marked-up) mathematical documents with social contexts: ratings, semantic tagging
- Discussion and social interaction with (semantically enriched) mathematical contents
Case study: proof-of-concept prototype for mathematical E-Learning
- Confidence in and Acceptance of proofs: In the previous meeting, I wasn’t sure what Cris means by context. By this, Cris means that the confidence in a theorem increases with the number of applications, i.e. the number of times it is reused by others, e.g. to prove another theorem. This application can be interpreted in two ways: (1) other mathematicians understood the proof, and (2) the mathematical community assessed this contribution to be relevant. In particular, Cris again pointed out that there are several dimensions of an accepted and significant theorem: neither beauty nor correctness is enough; it also has to be understood and used by the community, i.e. it needs to be relevant and useful for further mathematical work.
- Reusing trusted and correct Theorems: Cris illustrated today how critical it is to reuse only trusted and correct theorems, and that even then mathematicians have no certainty, as there is no absolute correctness: In 1988, Yoichi Miyaoka, a number theorist at the Max Planck Institute in Bonn, claimed that he had proven Fermat’s Last Theorem (eventually proved by A. Wiles). As I was told, the flaw was not directly in his own argument; when checking his proof step by step, the professional referees identified that one of the theorems he reused was incorrect. That theorem had been published and reused by other mathematicians, but had not been carefully checked so far (cf. NY Times). This is a phenomenon I have heard of before, i.e. sometimes proofs of no central interest to the community are not extensively checked.
- Mathematics and Computers: As far as I understood, Cris is interested in integrating traditional and unconventional mathematics, i.e. axiomatic-deductive thinking and computer-supported experimenting, to finally achieve a proof. He believes that mathematical proofs will more and more be checked by computers. Of course, this will be a gradual process, from solely human referees to fully automated proof checking. In our last meeting, we identified two ways of addressing the problem of bringing mathematicians and computers closer together: (1) less formal and human-friendly systems and (2) new ways (of thinking and the respective tools) of encoding mathematical knowledge. Cris envisions a process that will support authors in converting their proofs (consisting of text and formulae) to fully formal formats that can be processed by automatic provers. In a first step, he would like to see that we can distinguish pure comments from text that is part of the proof. This requires tools and a respective representation format for mathematical documents: the tools need to (semi-)automate the annotation of mathematical text. Different technologies can be of use, from natural language parsing to semi-automated semantic annotation, ideally integrated into the publication processes of mathematicians (see sTeX as a semantic extension of LaTeX; invasive editors for MS PPT and Word; Sentido; and SWiM). The format has to support the mark-up of narrative and rhetorical, but also mathematical concepts (see OMDoc and its extension in the 2.0 version). However, apart from the format and tools, Cris’s vision also includes a social aspect that has to evolve over time: mathematicians have to trust computer-checked proofs, and they actually have to use the tools for formalizing their documents. This leads us to an ongoing discussion, also referred to as the authoring dilemma (see Andrea’s work).
- Collaboration with Cris: Potentially, Cris and I will work on a programmatic paper on what one should do to help integrate traditional and new mathematical practice, i.e. to motivate this vision and to give concrete suggestions on how to realize it step by step. However, in a next meeting I will introduce him to my work and the areas I am involved in; from there we will make further decisions. I will also give a seminar talk here in a few weeks to introduce my work to the CS/math department.
Notes on Cristian S. Calude et al. (2007) Proving and Programming (see also previous post, see slides). Please note that the authors do not necessarily share my personal opinions and summaries on this page.
There is a strong analogy between proving theorems in mathematics and writing programs in computer science. Programming gives mathematics a new form of understanding. The computer is the driving force behind these changes.
- “Theorems (in mathematics) correspond to algorithms and not programs (in computer science)”
- “Programs (in computer science) correspond to mathematical models.”
- Proving the correctness of algorithms is seldom a focus in software projects (but: even testing/ documenting is sometimes neglected)
- Do some programming languages facilitate/require putting more focus on the correctness of the algorithms (e.g. functional languages such as Scala or ML, where programs are closer to mathematical models than in imperative languages)?
- Can bugs be avoided? Can the use of rigorous mathematical proofs guarantee that software and hardware perform as expected?
- Many projects and products dedicated to automated testing of program correctness, for example, VIPER or TestEra (automated testing of Java programs).
- What do proof obligations have to do with the correctness of programs? E.g. Barthe et al. discuss the “interaction between compilation and verification condition generators (VC generators), which are used in many interactive verification environments to guarantee the correctness of source programs, and by several proof carrying code (PCC) architectures to check the correctness of compiled programs. Such VC generators operate on annotated programs that carry loop invariants and procedure specifications expressed as preconditions and postconditions, and yield a set of proof obligations that must be discharged in order to establish the correctness of the program.” (loop and procedure level; see also specification languages such as CASL for the representation of preconditions, invariants, and postconditions, i.e. proof obligations)
- But a correctness proof for a program adds very little to one’s confidence in the program: “Beware of bugs in the above code; I have only proved it correct, not tried it. (Knuth)” (p.310)
- Checking new types of proofs: probabilistic, experimental, hybrid proofs (computation plus theoretical arguments) (p.311 ff.)
- Hayashi (1998): Testing Proofs by Examples
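As a toy illustration of the annotation idea behind VC generators (my own sketch, not an example from the paper): the precondition, loop invariant, and postcondition of a small algorithm can be written as runtime assertions. A VC generator would instead turn such annotations into proof obligations to be discharged statically; here they are merely tested dynamically, which is exactly the gap Knuth’s quip points at.

```python
# Illustrative sketch: Hoare-style annotations as runtime assertions.
# A verification condition generator would extract these as proof obligations.

def isqrt(n: int) -> int:
    """Integer square root: the largest r with r*r <= n."""
    assert n >= 0                            # precondition
    r = 0
    while (r + 1) * (r + 1) <= n:
        assert r * r <= n                    # loop invariant
        r += 1
    assert r * r <= n < (r + 1) * (r + 1)    # postcondition
    return r

print(isqrt(10))  # 3
```

Testing these assertions on many inputs gives empirical confidence; proving that the invariant is preserved and implies the postcondition gives a correctness proof, and the two kinds of evidence are complementary.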
Stages of mathematics
- pre-Greek mathematics dominated by observation, intuition, experience
- Greek deductive mathematics based on theorems; Euclid’s geometry; see also Pythagoras, Thales, Aristotle, Euclid, Archimedes. (Euclid became the reference for axiomatic-deductive thinking; see more recent works in mathematics, physics, computer science, biology, and linguistics, which follow this tradition)
- Mathematical language triggered by need for precision and rigour. Previously ordinary language was used “dominated by imprecision resulting from its predominantly spontaneous use, where emotional factors and lack of care have an impact”. Galilei, Descartes, Newton, Leibniz, … contribute to a shift from ordinary to a mixed language, i.e. ordinary language supplemented by an artificial component of symbols, formulas, and equations
- The epsilon rigour; 19th century, Cauchy, Weierstrass (coping with processes with infinitely many steps such as limit, continuity, differentiability, and integrability)
- The challenge of the principle of non–contradiction and the logical crisis (Russell–Whitehead, Hilbert, Brouwer); 19th/20th century; optimistic towards the possibility to arrange the whole mathematics as a formal system and to decide for any possible statement whether it is true or false.
- Gödel’s incompleteness theorem (1931): “every formal system which is finitely specified, rich enough to include the arithmetic, and consistent (free of contradiction), is incomplete”, distinction between truth and provability (1:p.173 ff.); Chaitin (1975) suggests that complexity is a source of incompleteness
- Reconciliation of empirical–experimental mathematics with deductive mathematics (today): the Four-colour Problem (4CT; 1976; by Kenneth Appel & Wolfgang Haken; proof of 4CT by Neil Robertson, Daniel P. Sanders, Paul Seymour, and Robin Thomas; or the Kepler Conjecture by Thomas Hales), realized by the use of computer programs as pieces of mathematical proof
- Quantization: Proofs are no longer exclusively based on logic and deduction, but also empirical and experimental.
What is mathematical proof?
- “A proof is a series of logical steps based on some axioms and deduction rules which reaches a desired conclusion. Every step in a proof can be checked for correctness by examining it to ensure that it is logically sound.” (1:p. 171)
- But who checks the proof (as human and computer agents make mistakes)? Often proofs are falsified after having been published and then corrected (e.g. Appel/Haken take on Kempe’s ideas). Some proofs cannot be checked by single humans (being too long and/or accomplished with computer assistance), such as 4CT. (1:p. 171)
- “Mathematics cannot be conceived without proofs: [..] proofs and theorems go together; the object of a proof is to reach a theorem, while theorems are validated by proofs” (1:p.172).
- But: Mathematics is more than proofs. “What the mathematical community seems to value most are ideas. The most respected mathematicians are those with strong intuitions. (Harris)”. (1:p.172)
- Three dimensions of proofs:
- syntactically: the formal proof, but “proof is only one step in the direction of confidence” (1:p.174, see Edmund Landau)
- semantically: (truth value) “correctness by itself does not validate a proof, it is also necessary to understand it” (1:p.175; René Thom, Daniel Cohen, William Thurston) “the mission of mathematics is understanding”, consequently, computer-assisted proofs are harder to accept (see 4CT), but also deductive proofs are sometimes only understood by few mathematicians (see 1:p.176), “a theorem is validated if it has been accepted by a general agreement of the mathematical community” (Thom, 1:p.178), “a theorem is a statement that could be checked individually by a mathematician and confirmed also individually by at least two or three mathematicians, each of them working independently” (1:p.178) thus proposing the notion of an “agnogram … a theorem-like statement that we have verified as best we could, but whose truth is not known with the kind of assurance we attach to theorems and about which we must thus remain, to some extent, agnostic” (Swart, 1:p.178), “we believe the experts and we cannot tell for ourselves” (1:p.179), “mathematics occupies a special place [among other disciplines], where we believe anyone who claims to have proved a theorem on the say-so of just a few people – that is, until we hear otherwise” (1:p.179), “… in mathematics you can really argue that this is as close to absolute truth as you can get (Joe Spencer)” (1:p.179)
- pragmatical: (relevance & use) “truth is not where you find it, but where you put it” (Perlis). “no matter how precise the rules are, we need human consciousness to apply the rules and to understand them and their consequences” (1:p.183)
- Aspects of mathematical proofs: deduction/ syllogistic reasoning (most visible aspect), but also: observation, intuition, experiment, visual representations, induction, analogy, and examples; some belonging to the preliminary steps, whose presence is not made explicit (when finally presenting the proof), but without which proofs cannot be conceived (2-p.17; see previous post); proving is a very heterogeneous process
In real life proofs may be different …
- Proof by obviousness: “The proof is so clear that it need not be mentioned.”
- Proof by general agreement: “All in favour?”
- Proof by calculus: “This proof requires calculus, so we’ll skip it.”
- Proof by lost reference: “I know I saw it somewhere”
- Proof by necessity: “It had better be true, or the entire structure of mathematics would crumble to the ground.”
- Proof by plausibility: “It sounds good, so it must be true.”
- Proof by intimidation: “Don’t be stupid; of course it’s true.”
- Proof by terror: When intimidation fails
- Proof by lack of sufficient time: “Because of the time constraint, I’ll leave the proof to you.”
- Proof by tessellation: “This proof is the same as the last.”
- Proof by majority rule: Only to be used if general agreement is impossible
- Proof by authority: “Well, Don Knuth says it’s true, so it must be!”
- Proof by intuition: “I just have this gut feeling”
Towards Artificial Mathematics and quasi-empirical proofs
- equivalence between the logical and computational proofs
- logical/conventional proofs: traditional; reasoning of humans (see Euclid, …); the logical process, i.e. finding a finite sequence of sentences strictly obeying some axioms and inference rules
- computational/unconventional: computational process (machines are constructed by humans based on sequences of sentences) producing these sequences; but: proofs can contain steps that can never be verified by humans (based on the equivalence: development of artificial mathematicians, i.e. systems such as Mathematica, Maple, MATLAB, and theorem provers)
- classical but unconventional proofs also comprise classical proof of excessive length and complexity (e.g. the classification of finite simple groups; 1:p.176/183)
- Artificial mathematicians are far less ingenious and subtle than human mathematicians, but they surpass their human counterparts by being infinitely more patient and diligent.
- Towards quantum computational proofs: the conversion from computation into a sequence of sentences may no longer be possible; quantum automata are able to check a proof, but fail to reveal a “trace” of the proof (we don’t know why it is true). (Quasi-empirical) quantum proofs might influence how we learn/understand mathematics, leading to new ways to understand (and practice) mathematics. Although we might not fully accept/trust unconventional proofs, the computational result is “a mathematical activity because it advances our knowledge of mathematics” (1:p.184)
- Overall: “There is little difference between traditional and unconventional types of proofs as [..] i) mathematical truth cannot always be certified by proof, ii) correctness is not absolute, but almost certain, as mathematics advance by making mistakes and correcting and re-correcting them (see Lakatos), iii) non-deterministic and probabilistic proofs do not allow mistakes in the applications of rules, they are just indirect forms of checking, iv) the explanatory component, the understanding emerging from proofs [..] is subjective and has no bearing on formal correctness.” (1:p.184)
- “Experimental Mathematics – as systematic mathematical experimentation ranging from hypotheses building to assisted proofs and automated proof-checking-will play an increasingly important role and will become part of the mainstream of mathematics. There are many reasons for this trend: They range from logical (the absolute truth simply doesn’t exist), sociological (correctness is not absolute [..]), economic (powerful computers will be accessible to more and more people), and psychological (results and success inspire emulation)” (2-p.26)
Knowledge is acquired through reason and by experiment: Should proofs belong exclusively to logic? Or should we also accept empirical-experimental arguments? Towards blending logical and empirical-experimental arguments. There is hope for integration! (see also next post)
I had a chat with Christian Hirsch today. In his PhD he has developed the semantic visual wiki Thinkbase, which combines two commercial tools: Thinkmap and the semantic wiki Freebase. Thinkbase is a visual wiki in that it provides an interactive graph using the semantic data of Freebase. Users can use the graph to browse the wiki and may also access other web content, such as Wikipedia. However, being based on two commercial tools, his implementation is not open source. But he might be able to re-implement his code and provide it as open source.
Based on his implementation of Thinkbase, he has developed Thinkfree, an application of Thinkbase for the IT department of the University of Auckland, and ProcessMapper, which visualizes a business process, i.e. the enrolment at the university (both internal tools, accessible via the intranet only). The latest prototype is Thinkpedia, a visualization of Wikipedia. It will be online soon.
For his PhD thesis he is aiming at developing a meta-tool that may be used to create visual extensions to semantic wikis and other web-based applications. Moreover, he will look at other possible applications of his graph-based navigation, such as personalized adaptation (filtering or recommendation) and more.