In mathematics, we try to determine which mathematical statements (precise statements about numbers, continuous functions, vector spaces, and the like) are true and which are false. To determine that a statement is true, you must provide an argument explaining why it is true, and to determine that a statement is false, you must provide an argument explaining why it is false. Mathematical arguments can be very difficult to produce. Some even take years! So imagine your disappointment when another mathematician takes issue with the latest argument you spent such incredible effort perfecting. Perhaps she found a mistake in your reasoning. Or perhaps she disagrees with one of your basic premises.
Mathematics became more and more abstract over the course of the 1800s, and by the early 1900s, after more than a few controversies, disagreements, and paradoxes, mathematicians realized that we needed to formally fix the rules of our game. The idea was to agree on a collection of basic axioms and on a collection of reasoning rules (such as modus ponens: if 'A' and 'A implies B' are both true, then 'B' must also be true) so that the truth or falsity of any mathematical statement could be determined by starting from the axioms and reasoning according to the rules. Thus the axioms should be intuitively true, and deductions made by applying the reasoning rules to true premises should yield true conclusions.
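The reasoning rule mentioned above can be made completely precise in a proof assistant. The following sketch in Lean (one of many possible formalizations) expresses it directly: given a proof of A and a proof that A implies B, applying the second to the first yields a proof of B.

```lean
-- Modus ponens: from A and A → B, conclude B.
-- Here 'A' and 'B' are arbitrary propositions, 'ha' is a proof of A,
-- and 'hab' is a proof of A → B; applying hab to ha proves B.
example (A B : Prop) (ha : A) (hab : A → B) : B := hab ha
```

Notice that a proof of 'A → B' here is literally a function turning proofs of A into proofs of B, a reading that foreshadows the computational interpretations discussed below.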
Fixing intuitively true axioms and reasoning rules that preserve truth certainly seems like the natural and obvious way to give mathematics a solid foundation. However, to L. E. J. Brouwer, this classical foundation based on truth and falsity was far too permissive. Brouwer's complaint was, essentially, that mathematical objects (like continuous functions, vector spaces, and so on) do not necessarily correspond to anything in reality and that there is no objective, absolute notion of mathematical truth. Instead, a mathematical object is the result of some mental construction that is somehow justifiable by the mathematician's intuition. The reasoning rules for mathematics should therefore be designed to preserve these justifications instead of mere truth. Brouwer's position came to be called 'intuitionism.'
The famous mathematician Andrey Kolmogorov had many interests, including intuitionism, and he proposed an informal interpretation of intuitionism as a 'logic of problem solving' and a 'calculus of problems.' Yuri Medvedev, in the 1950s, was the first to formalize Kolmogorov's computational interpretation. Medvedev's idea was to say that a mathematical problem P (appropriately formalized) reduces to another mathematical problem Q if there is a uniform computational procedure that translates solutions to problem Q into solutions to problem P. Using this idea, Medvedev showed how to interpret atomic logical propositions (the 'A,' 'B,' and 'C' in an expression like 'A implies (B or C)') as mathematical problems. Classically, we think of 'A,' 'B,' and 'C' as each being either true or false and the expression 'A implies (B or C)' as meaning that if 'A' is true, then either 'B' or 'C' must also be true. Under Medvedev's formalization, we instead think of 'A,' 'B,' and 'C' as mathematical problems and of 'A implies (B or C)' as meaning that if problem 'A' is solvable, then either problem 'B' or problem 'C' must also be solvable. In this project, we study a similar computational interpretation of intuitionism introduced by Elena Dyment. The key difference is that Dyment's interpretation is based on computing with partial information, whereas Medvedev's interpretation is based on computing with complete information. We seek to characterize the logic that arises from Dyment's interpretation and determine whether or not it differs from the logic that arises from Medvedev's interpretation.
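To make the idea of a uniform reduction concrete, here is a toy sketch in Python. This is only an illustration of the general shape of such a reduction, not Medvedev's actual formalism (which works with Turing functionals acting on infinite objects): we represent a problem by the set of its solutions, and a reduction by a single procedure that turns any solution of Q into a solution of P, with the same code working for every instance.

```python
def solutions_Q(n):
    """Problem Q: find an ordered pair (a, b) of factors
    with a * b == n and 1 < a <= b < n."""
    return [(a, n // a) for a in range(2, n)
            if n % a == 0 and a <= n // a]

def solutions_P(n):
    """Problem P: find a single nontrivial divisor of n."""
    return [d for d in range(2, n) if n % d == 0]

def reduce_P_to_Q(solution_of_Q):
    """Uniform procedure: any solution of Q yields a solution of P
    (just take the first factor). Crucially, the same code works
    for every instance n -- that is what 'uniform' means here."""
    a, b = solution_of_Q
    return a

# Every solution of Q translates into a solution of P, uniformly in n.
for n in (12, 91, 100):
    for s in solutions_Q(n):
        assert reduce_P_to_Q(s) in solutions_P(n)
```

The interesting cases in Medvedev's (and Dyment's) setting are problems whose solutions are infinite objects, such as functions on the natural numbers, where 'uniform computational procedure' is made precise via oracle computation; the finite example above is merely meant to convey the flavor of translating solutions uniformly.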
