Natural deduction

In philosophical logic, natural deduction is an approach to proof theory that attempts to provide a formal model of logical reasoning as it "naturally" occurs.

Natural deductive logic
One version of natural deductive logic has no axioms. System L, developed by E.J. Lemmon, has only nine primitive rules that govern the syntax of a proof.

The nine primitive rules of system L are
 * 1) The Rule of Assumption (A)
 * 2) Modus Ponendo Ponens (MPP)
 * 3) The Rule of Double Negation (DN)
 * 4) The Rule of Conditional Proof (CP)
 * 5) The Rule of ∧-introduction (∧I)
 * 6) The Rule of ∧-elimination (∧E)
 * 7) The Rule of ∨-introduction (∨I)
 * 8) The Rule of ∨-elimination (∨E)
 * 9) Reductio Ad Absurdum (RAA)

In system L, a proof is defined as a sequence that satisfies the following conditions:

 * 1) it is a finite sequence of wffs (well-formed formulas);
 * 2) each line of it is justified by a rule of system L;
 * 3) the last line of the proof is the intended conclusion, and it rests only on the given premise(s), or on no premises if none are given.

If no premise is given, the sequent is called a theorem. A theorem in system L is therefore a sequent that can be proved in system L from an empty set of assumptions.

An example of the proof of a sequent (Modus Tollendo Tollens in this case):
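For instance, the sequent P → Q, ¬Q ⊢ ¬P (the particular formulas are chosen here purely for illustration) has the following Lemmon-style proof, with the assumptions on which each line rests listed at the left:

$$ \begin{array}{llll} 1 & (1) & P \to Q & \hbox{A} \\ 2 & (2) & \neg Q & \hbox{A} \\ 3 & (3) & P & \hbox{A (for RAA)} \\ 1,3 & (4) & Q & 1,3\ \hbox{MPP} \\ 1,2,3 & (5) & Q \wedge \neg Q & 4,2\ \wedge\hbox{I} \\ 1,2 & (6) & \neg P & 3,5\ \hbox{RAA} \end{array} $$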

An example of the proof of a sequent (a theorem in this case):
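For instance, the sequent ⊢ P → (Q → P), which rests on no premises and is therefore a theorem, can be proved as follows (again an illustrative reconstruction in the same style):

$$ \begin{array}{llll} 1 & (1) & P & \hbox{A} \\ 2 & (2) & Q & \hbox{A} \\ 1,2 & (3) & P \wedge Q & 1,2\ \wedge\hbox{I} \\ 1,2 & (4) & P & 3\ \wedge\hbox{E} \\ 1 & (5) & Q \to P & 2,4\ \hbox{CP} \\ & (6) & P \to (Q \to P) & 1,5\ \hbox{CP} \end{array} $$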

Each rule of system L has its own requirements for the type of input(s) or entry(ies) that it can accept, and its own way of treating and tracking the assumptions used by its inputs.

Motivation
Natural deduction grew out of a context of dissatisfaction with the sentential axiomatizations common to the systems of Hilbert, Frege, and Russell (see e.g. Hilbert-style deduction system). Such axiomatizations were most famously used by Russell and Whitehead in their mathematical treatise Principia Mathematica. Spurred on by a series of seminars in Poland in 1926 by Łukasiewicz that advocated a more natural treatment of logic, Jaśkowski made the earliest attempts at defining a more natural deduction, first in 1929 using a diagrammatic notation, and later updating his proposal in a sequence of papers in 1934 and 1935. His proposals did not prove to be popular, however. Natural deduction in its modern form was independently proposed by the German mathematician Gentzen in 1935, in a dissertation delivered to the faculty of mathematical sciences of the University of Göttingen. The term natural deduction (or rather, its German equivalent) was coined in that paper:


 * Ich wollte zunächst einmal einen Formalismus aufstellen, der dem wirklichen Schließen möglichst nahe kommt. So ergab sich ein „Kalkül des natürlichen Schließens“. (First I wished to construct a formalism that comes as close as possible to actual reasoning. Thus arose a "calculus of natural deduction".)

– Gentzen, Untersuchungen über das logische Schließen (Mathematische Zeitschrift 39, pp. 176–210, 1935)

Gentzen was motivated by a desire to establish the consistency of number theory, and he found immediate use for his natural deduction calculus. He was nevertheless dissatisfied with the complexity of his proofs, and in 1938 gave a new consistency proof using his sequent calculus. In a series of seminars in 1961 and 1962 Prawitz gave a comprehensive summary of natural deduction calculi, and transported much of Gentzen's work with sequent calculi into the natural deduction framework. His 1965 monograph Natural deduction: a proof-theoretical study was to become the definitive work on natural deduction, and included applications for modal and second-order logic.

The system presented in this article is a minor variation of Gentzen's or Prawitz's formulation, but with a closer adherence to Martin-Löf's description of logical judgments and connectives (Martin-Löf, 1996).

Formal definition
A deduction (or proof) can be defined precisely in the context of a formal system like the propositional calculus. A proposition α is deduced from a collection Σ of premises by applying inference rules repeatedly (see above section). The deduction is a record of this repeated application of inference rules.

More formally, a finite sequence β1, ..., βn of propositions is said to be a deduction of α from a collection of premises Σ if
 * βn = α, and
 * For all 1 ≤ i ≤ n, either βi is a premise (i.e. βi ∈ Σ) or βi is the result of the application of some inference rule on earlier propositions in the sequence.

Different versions of axiomatic propositional logics contain a few axioms, usually three or more, in addition to one or more inference rules. For instance, Gottlob Frege's axiomatization of propositional logic, which is also the first instance of such an attempt, has six propositional axioms and two rules. Bertrand Russell and Alfred North Whitehead also suggested a system with five axioms.

For instance, a version of axiomatic propositional logic due to Jan Łukasiewicz (1878–1956) adopts the following set A of axioms:
 * [PL1] p → (q → p)
 * [PL2] (p → (q → r)) → ((p → q) → (p → r))
 * [PL3] (¬p → ¬q) → (q → p)

and it has a set R of inference rules containing a single rule, Modus Ponendo Ponens:
 * [MP] from α and α → β, infer β.

The inference rule(s) allow one to derive new statements from the axioms or from the given wffs of the set Σ.
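As a worked illustration of these definitions (a standard derivation, included here as an example), the wff p → p can be deduced from the empty set Σ using only the axioms of A and the rule MP:

 * 1) p → ((p → p) → p)   [instance of PL1 with q := p → p]
 * 2) (p → ((p → p) → p)) → ((p → (p → p)) → (p → p))   [instance of PL2 with q := p → p, r := p]
 * 3) (p → (p → p)) → (p → p)   [from 1 and 2 by MP]
 * 4) p → (p → p)   [instance of PL1 with q := p]
 * 5) p → p   [from 4 and 3 by MP]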

Judgments and propositions
A judgment is something that is knowable, that is, an object of knowledge. It is evident if one in fact knows it. Thus "it is raining" is a judgment, which is evident for the one who knows that it is actually raining; in this case one may readily find evidence for the judgment by looking outside the window or stepping out of the house. In mathematical logic however, evidence is often not as directly observable, but rather deduced from more basic evident judgments. The process of deduction is what constitutes a proof; in other words, a judgment is evident if one has a proof for it.

The most important judgments in logic are of the form "A is true". The letter A stands for any expression representing a proposition; the truth judgments thus require a more primitive judgment: "A is a proposition". Many other judgments have been studied; for example, "A is false" (see classical logic), "A is true at time t" (see temporal logic), "A is necessarily true" or "A is possibly true" (see modal logic), "the program M has type τ" (see programming languages and type theory), "A is achievable from the available resources" (see linear logic), and many others. To start with, we shall concern ourselves with the simplest two judgments "A is a proposition" and "A is true", abbreviated as "A prop" and "A true" respectively.

The judgment "A prop" defines the structure of valid proofs of A, which in turn defines the structure of propositions. For this reason, the inference rules for this judgment are sometimes known as formation rules. To illustrate, if we have two propositions A and B (that is, the judgments "A prop" and "B prop" are evident), then we form the compound proposition A and B, written symbolically as "$$A \wedge B$$". We can write this in the form of an inference rule: $$\frac{A\hbox{ prop} \qquad B\hbox{ prop}}{A \wedge B\hbox{ prop}}\ \wedge_F$$ This inference rule is schematic: A and B can be instantiated with any expression. The general form of an inference rule is: $$\frac{J_1 \qquad J_2 \qquad \cdots \qquad J_n}{J}\ \hbox{name}$$ where each $$J_i$$ is a judgment and the inference rule is named "name". The judgments above the line are known as premises, and those below the line are conclusions. Other common logical propositions are disjunction ($$A \vee B$$), negation ($$\neg A$$), implication ($$A \supset B$$), and the logical constants truth ($$\top$$) and falsehood ($$\bot$$). Their formation rules are below. $$ \frac{A\hbox{ prop} \qquad B\hbox{ prop}}{A \vee B\hbox{ prop}}\ \vee_F \qquad \frac{A\hbox{ prop} \qquad B\hbox{ prop}}{A \supset B\hbox{ prop}}\ \supset_F $$

$$ \frac{\hbox{ }}{\top\hbox{ prop}}\ \top_F \qquad \frac{\hbox{ }}{\bot\hbox{ prop}}\ \bot_F \qquad \frac{A\hbox{ prop}}{\neg A\hbox{ prop}}\ \neg_F $$

Introduction and elimination
Now we discuss the "A true" judgment. Inference rules that introduce a logical connective in the conclusion are known as introduction rules. To introduce conjunctions, i.e., to conclude "A and B true" for propositions A and B, one requires evidence for "A true" and "B true". As an inference rule: $$ \frac{A\hbox{ true} \qquad B\hbox{ true}}{A \wedge B\hbox{ true}}\ \wedge_I $$ It must be understood that in such rules the objects are propositions. That is, the above rule is really an abbreviation for: $$ \frac{A\hbox{ prop} \qquad B\hbox{ prop} \qquad A\hbox{ true} \qquad B\hbox{ true}}{A \wedge B\hbox{ true}}\ \wedge_I $$ This can also be written: $$ \frac{A \wedge B\hbox{ prop} \qquad A\hbox{ true} \qquad B\hbox{ true}}{A \wedge B\hbox{ true}}\ \wedge_I $$ In this form, the first premise can be satisfied by the $$\wedge_F$$ formation rule, giving the first two premisses of the previous form. In this article we shall elide the "prop" judgments where they are understood. In the nullary case, one can derive truth from no premises. $$ \frac{\ }{\top\hbox{ true}}\ \top_I $$ If the truth of a proposition can be established in more than one way, the corresponding connective has multiple introduction rules. $$ \frac{A\hbox{ true}}{A \vee B\hbox{ true}}\ \vee_{I1} \qquad \frac{B\hbox{ true}}{A \vee B\hbox{ true}}\ \vee_{I2} $$ Note that in the nullary case, i.e., for falsehood, there are no introduction rules. Thus one can never infer falsehood from simpler judgments.

Dual to introduction rules are elimination rules to describe how to de-construct information about a compound proposition into information about its constituents. Thus, from "A ∧ B true", we can conclude "A true" and "B true": $$ \frac{A \wedge B\hbox{ true}}{A\hbox{ true}}\ \wedge_{E1} \qquad \frac{A \wedge B\hbox{ true}}{B\hbox{ true}}\ \wedge_{E2} $$ As an example of the use of inference rules, consider commutativity of conjunction. If A ∧ B is true, then B ∧ A is true. This derivation can be drawn by composing inference rules in such a fashion that the premisses of a lower inference match the conclusion of the next higher inference.

$$ \cfrac{\cfrac{A \wedge B\hbox{ true}}{B\hbox{ true}}\ \wedge_{E2} \qquad \cfrac{A \wedge B\hbox{ true}}{A\hbox{ true}}\ \wedge_{E1}} {B \wedge A\hbox{ true}}\ \wedge_I $$

The inference figures we have seen so far are not sufficient to state the rules of implication introduction or disjunction elimination; for these, we need a more general notion of hypothetical derivation.

Hypothetical derivations
A pervasive operation in mathematical logic is reasoning from assumptions. For example, consider the following derivation: $$ \cfrac{A \wedge \left ( B \wedge C \right ) \ true}{\cfrac{B \wedge C \ true}{B \ true} \wedge E_1} \wedge E_2 $$ This derivation does not establish the truth of B as such; rather, it establishes the following fact:
 * If A ∧ (B ∧ C) is true then B is true.

In logic, one says "assuming A ∧ (B ∧ C) is true, we show that B is true"; in other words, the judgement "B true" depends on the assumed judgement "A ∧ (B ∧ C) true". This is a hypothetical derivation, which we write as follows: $$ \begin{matrix} A \wedge \left ( B \wedge C \right ) \ true \\ \vdots \\ B \ true \end{matrix} $$ The interpretation is: "B true is derivable from A ∧ (B ∧ C) true". Of course, in this specific example we actually know the derivation of "B true" from "A ∧ (B ∧ C) true", but in general we may not know the derivation a priori. The general form of a hypothetical derivation is: $$ \begin{matrix} D_1 \quad D_2 \cdots D_n \\ \vdots \\ J \end{matrix} $$ Each hypothetical derivation has a collection of antecedent derivations (the Di) written on the top line, and a succedent judgement (J) written on the bottom line. Each of the premisses may itself be a hypothetical derivation. (For simplicity, we treat a judgement as a premiss-less derivation.)

The notion of hypothetical judgement is internalised as the connective of implication. The introduction and elimination rules are as follows. $$ \cfrac{ \begin{matrix} \cfrac{}{A \ true} u \\ \vdots \\ B \ true \end{matrix} }{A \supset B \ true} \supset I^u \qquad \cfrac{A \supset B \ true \quad A \ true}{B \ true} \supset E $$

In the introduction rule, the antecedent named u is discharged in the conclusion. This is a mechanism for delimiting the scope of the hypothesis: its sole reason for existence is to establish "B true"; it cannot be used for any other purpose, and in particular, it cannot be used below the introduction. As an example, consider the derivation of "A ⊃ (B ⊃ (A ∧ B)) true": $$ \cfrac{\cfrac{\cfrac{\cfrac{}{A \ true}\ u \qquad \cfrac{}{B \ true}\ w}{A \wedge B \ true}\ \wedge I}{B \supset \left ( A \wedge B \right ) \ true}\ \supset I^w}{A \supset \left ( B \supset \left ( A \wedge B \right ) \right ) \ true}\ \supset I^u $$ This full derivation has no unsatisfied premisses; however, sub-derivations are hypothetical. For instance, the derivation of "B ⊃ (A ∧ B) true" is hypothetical with antecedent "A true" (named u).

With hypothetical derivations, we can now write the elimination rule for disjunction: $$ \cfrac{ A \vee B \hbox{ true} \quad \begin{matrix} \cfrac{}{A \ true} u \\ \vdots \\ C \ true \end{matrix} \quad \begin{matrix} \cfrac{}{B \ true} w \\ \vdots \\ C \ true \end{matrix} }{C \ true} \vee E^{u,w} $$ In words, if A ∨ B is true, and we can derive C true both from A true and from B true, then C is indeed true. Note that this rule does not commit to either A true or B true. In the nullary case, i.e. for falsehood, we obtain the following elimination rule: $$ \frac{\perp \hbox{ true}}{C \ true}\ \perp E $$ This is read as: if falsehood is true, then any proposition C is true.

Negation is similar to implication. $$ \cfrac{ \begin{matrix} \cfrac{}{A \ true} u \\ \vdots \\ p \ true \end{matrix} }{\lnot A \ true} \lnot I^{u,p} \qquad \cfrac{\lnot A \ true \quad A \ true}{C \ true} \lnot E $$ The introduction rule discharges both the name of the hypothesis u, and the succedent p, i.e., the proposition p must not occur in the conclusion ¬A. Since these rules are schematic, the interpretation of the introduction rule is: if from "A true" we can derive for every proposition p that "p true", then A must be false, i.e., "not A true". For the elimination, if both A and not A are shown to be true, then there is a contradiction, in which case every proposition C is true. Because the rules for implication and negation are so similar, it should be fairly easy to see that ¬A and A ⊃ ⊥ are equivalent, i.e., each is derivable from the other.

Consistency, completeness, and normal forms
A theory is said to be consistent if falsehood is not provable (from no assumptions) and is complete if every theorem is provable using the inference rules of the logic. These are statements about the entire logic, and are usually tied to some notion of a model. However, there are local notions of consistency and completeness that are purely syntactic checks on the inference rules, and require no appeals to models. The first of these is local consistency, also known as local reducibility, which says that any derivation containing an introduction of a connective followed immediately by its elimination can be turned into an equivalent derivation without this detour. It is a check on the strength of elimination rules: they must not be so strong that they include knowledge not already contained in their premisses. As an example, consider conjunctions.
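A sketch of the local reduction for conjunction, in a standard presentation where π1 and π2 stand for arbitrary derivations of the premisses (the case for ∧E2 is symmetric):

$$ \cfrac{\cfrac{\begin{matrix} \pi_1 \\ A\hbox{ true}\end{matrix} \qquad \begin{matrix} \pi_2 \\ B\hbox{ true}\end{matrix}}{A \wedge B\hbox{ true}}\ \wedge_I}{A\hbox{ true}}\ \wedge_{E1} \quad \Longrightarrow \quad \begin{matrix} \pi_1 \\ A\hbox{ true}\end{matrix} $$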

Dually, local completeness says that the elimination rules are strong enough to decompose a connective into the forms suitable for its introduction rule. Again for conjunctions:
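A sketch of the corresponding local expansion, where π stands for any derivation of A ∧ B true; the derivation is expanded so that it ends with an introduction of the connective:

$$ \begin{matrix} \pi \\ A \wedge B\hbox{ true}\end{matrix} \quad \Longrightarrow \quad \cfrac{\cfrac{\begin{matrix} \pi \\ A \wedge B\hbox{ true}\end{matrix}}{A\hbox{ true}}\ \wedge_{E1} \qquad \cfrac{\begin{matrix} \pi \\ A \wedge B\hbox{ true}\end{matrix}}{B\hbox{ true}}\ \wedge_{E2}}{A \wedge B\hbox{ true}}\ \wedge_I $$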

These notions correspond exactly to β-reduction and η-expansion in the lambda calculus, using the Curry-Howard isomorphism. By local completeness, we see that every derivation can be converted to an equivalent derivation where the principal connective is introduced. In fact, if the entire derivation obeys this ordering of eliminations followed by introductions, then it is said to be normal. In a normal derivation all eliminations happen above introductions. In most logics, every derivation has an equivalent normal derivation, called a normal form. The existence of normal forms is generally hard to prove using natural deduction alone, though such accounts do exist in the literature, most notably by Dag Prawitz in 1961; see his book Natural deduction: a proof-theoretical study (Almqvist & Wiksell, Stockholm, 1965). It is much easier to show this indirectly by means of a cut-free sequent calculus presentation.

First and higher-order extensions


The logic of the earlier section is an example of a single-sorted logic, i.e., a logic with a single kind of object: propositions. Many extensions of this simple framework have been proposed; in this section we will extend it with a second sort of individuals or terms. More precisely, we will add a new kind of judgement, "t is a term" (or "t term"), where t is schematic. We fix a countable set V of variables and another countable set F of function symbols, and construct terms from variables and from function symbols applied to terms. For propositions, we consider a third countable set P of predicates, and form atomic propositions by applying predicates to terms. In addition, we add a pair of quantified propositions: universal (∀) and existential (∃). The formation, introduction, and elimination rules for these constructs are sketched below.
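A standard presentation of these rules, in the notation used above, is the following (the rule names are merely suggestive labels). Terms are variables or function symbols applied to terms, and atomic propositions are predicates applied to terms:

$$ \frac{v \in V}{v\hbox{ term}}\ \hbox{var}_F \qquad \frac{f \in F \qquad t_1\hbox{ term} \quad \cdots \quad t_n\hbox{ term}}{f(t_1, \ldots, t_n)\hbox{ term}}\ \hbox{app}_F \qquad \frac{\phi \in P \qquad t_1\hbox{ term} \quad \cdots \quad t_n\hbox{ term}}{\phi(t_1, \ldots, t_n)\hbox{ prop}}\ \hbox{pred}_F $$

$$ \frac{x \in V \qquad A\hbox{ prop}}{\forall x.\ A\hbox{ prop}}\ \forall_F \qquad \frac{x \in V \qquad A\hbox{ prop}}{\exists x.\ A\hbox{ prop}}\ \exists_F $$

The introduction and elimination rules for the quantifiers can then be sketched as follows, where a is a fresh term (an eigenvariable) and t is any term:

$$ \cfrac{\begin{matrix}\cfrac{}{a\hbox{ term}}\ u \\ \vdots \\ [a/x] A\hbox{ true}\end{matrix}}{\forall x.\ A\hbox{ true}}\ \forall_{I^{u,a}} \qquad \frac{\forall x.\ A\hbox{ true} \qquad t\hbox{ term}}{[t/x] A\hbox{ true}}\ \forall_E $$

$$ \frac{[t/x] A\hbox{ true} \qquad t\hbox{ term}}{\exists x.\ A\hbox{ true}}\ \exists_I \qquad \cfrac{\exists x.\ A\hbox{ true} \quad \begin{matrix}\cfrac{}{a\hbox{ term}}\ u \quad \cfrac{}{[a/x] A\hbox{ true}}\ v \\ \vdots \\ C\hbox{ true}\end{matrix}}{C\hbox{ true}}\ \exists_{E^{a,u,v}} $$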

In these rules, the notation [t/x] A stands for the substitution of t for every (visible) instance of x in A, avoiding capture; see the article on lambda calculus for more detail about this standard operation. As before the superscripts on the name stand for the components that are discharged: the term a cannot occur in the conclusion of ∀I (such terms are known as eigenvariables or parameters), and the hypotheses named u and v in ∃E are localised to the second premiss in a hypothetical derivation. Although the propositional logic of earlier sections was decidable, adding the quantifiers makes the logic undecidable.

So far the quantified extensions are first-order: they distinguish propositions from the kinds of objects quantified over. Higher-order logic takes a different approach and has only a single sort of propositions. The quantifiers have as the domain of quantification the very same sort of propositions, as reflected in the formation rules, sketched below. A discussion of the introduction and elimination forms for higher-order logic is beyond the scope of this article. It is possible to be in between first-order and higher-order logics. For example, second-order logic has two kinds of propositions, one kind quantifying over terms, and the second kind quantifying over propositions of the first kind.
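A sketch of these formation rules, in which the bound variable itself ranges over propositions:

$$ \frac{p \in V \qquad A\hbox{ prop}}{\forall p.\ A\hbox{ prop}}\ \forall_F \qquad \frac{p \in V \qquad A\hbox{ prop}}{\exists p.\ A\hbox{ prop}}\ \exists_F $$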

Proofs and type-theory
The presentation of natural deduction so far has concentrated on the nature of propositions without giving a formal definition of a proof. To formalise the notion of proof, we alter the presentation of hypothetical derivations slightly. We label the antecedents with proof variables (from some countable set V of variables), and decorate the succedent with the actual proof. The antecedents or hypotheses are separated from the succedent by means of a turnstile (⊢). This modification sometimes goes under the name of localised hypotheses. The collection of hypotheses will be written as Γ when their exact composition is not relevant. To make proofs explicit, we move from the proof-less judgement "A true" to a judgement "π is a proof of (A true)", which is written symbolically as "π : A true". Following the standard approach, proofs are specified with their own formation rules for the judgement "π proof". The simplest possible proof is the use of a labelled hypothesis; in this case the evidence is the label itself. For brevity, we shall leave off the judgemental label true in the rest of this article, i.e., write "Γ ⊢ π : A". Let us re-examine some of the connectives with explicit proofs. For conjunction, we look at the introduction rule ∧I to discover the form of proofs of conjunction: they must be a pair of proofs of the two conjuncts. The elimination rules ∧E1 and ∧E2 select either the left or the right conjunct; thus the proofs are a pair of projections, first (fst) and second (snd). For implication, the introduction form localises or binds the hypothesis, written using a λ; this corresponds to the discharged label. In the rule, "Γ, u:A" stands for the collection of hypotheses Γ, together with the additional hypothesis u; the proof-annotated rules are sketched below. With proofs available explicitly, one can manipulate and reason about proofs. The key operation on proofs is the substitution of one proof for an assumption used in another proof. This is commonly known as a substitution theorem, and can be proved by induction on the depth (or structure) of the second judgement.
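These localised, proof-annotated rules can be sketched as follows (a standard presentation; the rule name hyp is merely a suggestive label):

$$ \frac{u : A \in \Gamma}{\Gamma \vdash u : A}\ \hbox{hyp} \qquad \frac{\Gamma \vdash \pi_1 : A \qquad \Gamma \vdash \pi_2 : B}{\Gamma \vdash (\pi_1, \pi_2) : A \wedge B}\ \wedge_I \qquad \frac{\Gamma \vdash \pi : A \wedge B}{\Gamma \vdash \mathbf{fst}\ \pi : A}\ \wedge_{E1} \qquad \frac{\Gamma \vdash \pi : A \wedge B}{\Gamma \vdash \mathbf{snd}\ \pi : B}\ \wedge_{E2} $$

$$ \frac{\Gamma, u : A \vdash \pi : B}{\Gamma \vdash \lambda u.\ \pi : A \supset B}\ \supset_I \qquad \frac{\Gamma \vdash \pi_1 : A \supset B \qquad \Gamma \vdash \pi_2 : A}{\Gamma \vdash \pi_1\ \pi_2 : B}\ \supset_E $$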


 * Substitution theorem: If Γ ⊢ π1 : A and Γ, u:A ⊢ π2 : B, then Γ ⊢ [π1/u] π2 : B.

So far the judgement "Γ ⊢ π : A" has had a purely logical interpretation. In type theory, the logical view is exchanged for a more computational view of objects. Propositions in the logical interpretation are now viewed as types, and proofs as programs in the lambda calculus. Thus the interpretation of "π : A" is "the program π has type A". The logical connectives are also given a different reading: conjunction is viewed as product (×), implication as the function arrow (→), etc. The differences are only cosmetic, however. Type theory has a natural deduction presentation in terms of formation, introduction and elimination rules; in fact, the reader can easily reconstruct what is known as simple type theory from the previous sections.

The difference between logic and type theory is primarily a shift of focus from the types (propositions) to the programs (proofs). Type theory is chiefly interested in the convertibility or reducibility of programs. For every type, there are canonical programs of that type which are irreducible; these are known as canonical forms or values. If every program can be reduced to a canonical form, then the type theory is said to be normalising (or weakly normalising). If the canonical form is unique, then the theory is said to be strongly normalising. Normalisability is a rare feature of non-trivial type theories, which is a big departure from the logical world. (Recall that every logical derivation has an equivalent normal derivation.) To sketch the reason: in type theories that admit recursive definitions, it is possible to write programs that never reduce to a value; such looping programs can generally be given any type. In particular, the looping program has type ⊥, although there is no logical proof of "⊥ true". For this reason, the propositions-as-types, proofs-as-programs paradigm only works in one direction, if at all: interpreting a type theory as a logic generally gives an inconsistent logic.
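To make this concrete, here is a small sketch assuming a hypothetical general fixed-point operator fix, as found in type theories with general recursion. If fix can be given the type (A → A) → A for every type A, then the looping program fix (λx. x) inhabits every type, including ⊥, even though it never reduces to a canonical form:

$$ \mathbf{fix} : (A \to A) \to A \qquad\qquad \mathbf{fix}\ (\lambda x.\ x) : A \quad \hbox{for every type } A \hbox{, in particular } A = \bot $$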

Like logic, type theory has many extensions and variants, including first-order and higher-order versions. An interesting branch of type theory, known as dependent type theory, allows quantifiers to range over programs themselves. These quantified types are written as Π and Σ instead of ∀ and ∃; their formation rules are sketched below. These types are generalisations of the arrow and product types, respectively, as witnessed by their introduction and elimination rules. Dependent type theory in full generality is very powerful: it is able to express almost any conceivable property of programs directly in the types of the program. This generality comes at a steep price: checking that a given program is of a given type is undecidable. For this reason, dependent type theories in practice do not allow quantification over arbitrary programs, but rather restrict to programs of a given decidable index domain, for example integers, strings, or linear programs.
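A sketch of the dependent formation rules, in a standard presentation where the judgement "A type" plays the role of "A prop":

$$ \frac{\Gamma \vdash A\hbox{ type} \qquad \Gamma, x{:}A \vdash B\hbox{ type}}{\Gamma \vdash \Pi x{:}A.\ B\hbox{ type}}\ \Pi_F \qquad \frac{\Gamma \vdash A\hbox{ type} \qquad \Gamma, x{:}A \vdash B\hbox{ type}}{\Gamma \vdash \Sigma x{:}A.\ B\hbox{ type}}\ \Sigma_F $$

When B does not mention x, the type Πx:A. B degenerates to the arrow type A → B and Σx:A. B degenerates to the product A × B, which is the sense in which these types generalise arrows and products.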

Since dependent type theories allow types to depend on programs, a natural question to ask is whether it is possible for programs to depend on types, or any other combination. There are many kinds of answers to such questions. A popular approach in type theory is to allow programs to be quantified over types, also known as parametric polymorphism; of this there are two main kinds: if types and programs are kept separate, then one obtains a somewhat more well-behaved system called predicative polymorphism; if the distinction between program and type is blurred, one obtains the type-theoretic analogue of higher-order logic, also known as impredicative polymorphism. Various combinations of dependency and polymorphism have been considered in the literature, the most famous being the lambda cube of Henk Barendregt.

The intersection of logic and type theory is a vast and active research area. New logics are usually formalised in a general type theoretic setting, known as a logical framework. Popular modern logical frameworks such as the calculus of constructions and LF are based on higher-order dependent type theory, with various trade-offs in terms of decidability and expressive power. These logical frameworks are themselves always specified as natural deduction systems, which is a testament to the versatility of the natural deduction approach.

Classical and modal logics
For simplicity, the logics presented so far have been intuitionistic. Classical logic extends intuitionistic logic with an additional axiom or principle of excluded middle:


 * For any proposition p, the proposition p ∨ ¬p is true.

This statement is not obviously either an introduction or an elimination; indeed, it involves two distinct connectives. Gentzen's original treatment of excluded middle prescribed one of the following three (equivalent) formulations, which were already present in analogous forms in the systems of Hilbert and Heyting:
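One standard way of writing the three formulations (reconstructed here, consistent with the remark below) is: XM1 asserts excluded middle outright, XM2 concludes A from a hypothetical derivation of ⊥ from ¬A, and XM3 concludes A from a hypothetical derivation of an arbitrary proposition p from ¬A.

$$ \frac{\ }{A \vee \neg A\hbox{ true}}\ XM_1 \qquad \cfrac{\begin{matrix}\cfrac{}{\neg A\hbox{ true}}\ u \\ \vdots \\ \bot\hbox{ true}\end{matrix}}{A\hbox{ true}}\ XM_2^{u} \qquad \cfrac{\begin{matrix}\cfrac{}{\neg A\hbox{ true}}\ u \\ \vdots \\ p\hbox{ true}\end{matrix}}{A\hbox{ true}}\ XM_3^{u,p} $$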

(XM3 is merely XM2 expressed in terms of ⊥E.) This treatment of excluded middle, in addition to being objectionable from a purist's standpoint, introduces additional complications in the definition of normal forms.

A comparatively more satisfactory treatment of classical natural deduction in terms of introduction and elimination rules alone was first proposed by Parigot in 1992 in the form of a classical lambda calculus called λμ. The key insight of his approach was to replace a truth-centric judgement A true with a more classical notion: in localised form, instead of Γ ⊢ A, he used Γ ⊢ Δ, with Δ a collection of propositions similar to Γ. Γ was treated as a conjunction, and Δ as a disjunction. This structure is essentially lifted directly from classical sequent calculi, but the innovation in λμ was to give a computational meaning to classical natural deduction proofs in terms of a callcc or a throw/catch mechanism seen in LISP and its descendants. (See also: first-class control.)

Another important extension was for modal and other logics that need more than just the basic judgement of truth. These were first described in a natural deduction style by Prawitz in 1965, and have since accumulated a large body of related work. To give a simple example, the modal logic of necessity requires one new judgement, "A valid", that is categorical with respect to truth:


 * If "A true" under no assumptions of the form "B true", then "A valid".

This categorical judgement is internalised as a unary connective □A (read "necessarily A") with the following introduction and elimination rules:
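A sketch of these rules, following the standard judgemental reconstruction of necessity (in the style of Pfenning and Davies); the elimination rule shown here is the "let"-style one, and variants exist:

$$ \frac{A\hbox{ valid}}{\Box A\hbox{ true}}\ \Box_I \qquad \cfrac{\Box A\hbox{ true} \quad \begin{matrix}\cfrac{}{A\hbox{ valid}}\ u \\ \vdots \\ C\hbox{ true}\end{matrix}}{C\hbox{ true}}\ \Box_E^{u} $$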

Note that the premiss "A valid" has no defining rules; instead, the categorical definition of validity is used in its place. This mode becomes clearer in the localised form when the hypotheses are explicit. We write "Ω;Γ ⊢ A true" where Γ contains the true hypotheses as before, and Ω contains valid hypotheses. On the right there is just a single judgement "A true"; validity is not needed here since "Ω ⊢ A valid" is by definition the same as "Ω;· ⊢ A true". The introduction and elimination forms are then:
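Sketched in the same reconstructed style (with · standing for the empty collection of true hypotheses):

$$ \frac{\Omega;\cdot \vdash A\hbox{ true}}{\Omega;\Gamma \vdash \Box A\hbox{ true}}\ \Box_I \qquad \frac{\Omega;\Gamma \vdash \Box A\hbox{ true} \qquad \Omega, u{:}(A\hbox{ valid});\Gamma \vdash C\hbox{ true}}{\Omega;\Gamma \vdash C\hbox{ true}}\ \Box_E^{u} $$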

The modal hypotheses have their own version of the hypothesis rule and substitution theorem.


 * Modal substitution theorem: If Ω;· ⊢ π1 : A true and Ω, u:(A valid); Γ ⊢ π2 : C true, then Ω;Γ ⊢ [π1/u] π2 : C true.

This framework of separating judgements into distinct collections of hypotheses, also known as multi-zoned or polyadic contexts, is very powerful and extensible; it has been applied for many different modal logics, and also for linear and other substructural logics, to give a few examples.

Sequent calculus
Main article: sequent calculus

The sequent calculus is the chief alternative to natural deduction as a foundation of mathematical logic. In natural deduction the flow of information is bi-directional: elimination rules flow information downwards by deconstruction, and introduction rules flow information upwards by assembly. Thus, a natural deduction proof does not have a purely bottom-up or top-down reading, making it unsuitable for automation in proof search, or even for proof checking (or type-checking in type theory). To address this fact, Gentzen in 1935 proposed his sequent calculus, though he initially intended it as a technical device for clarifying the consistency of predicate logic. Kleene, in his seminal 1952 book Introduction to Metamathematics (ISBN 0-7204-2103-9), gave the first formulation of the sequent calculus in the modern style.

In the sequent calculus all inference rules have a purely bottom-up reading. Inference rules can apply to elements on both sides of the turnstile. (To differentiate from natural deduction, this article uses a double arrow ⇒ instead of the right tack ⊢ for sequents.) The introduction rules of natural deduction are viewed as right rules in the sequent calculus, and are structurally very similar. The elimination rules, on the other hand, turn into left rules in the sequent calculus. To give an example, consider disjunction: the right rules are familiar from the introduction rules ∨I1 and ∨I2, while the left rule works on a disjunction appearing among the hypotheses (both are sketched below, together with the ∨E rule of natural deduction in localised form). The proposition A ∨ B, which is the succedent of a premise in ∨E, turns into a hypothesis of the conclusion in the left rule ∨L; thus, left rules can be seen as a sort of inverted elimination rule. In the sequent calculus, the left and right rules are performed in lock-step until one reaches the initial sequent, which corresponds to the meeting point of elimination and introduction rules in natural deduction. These initial rules are superficially similar to the hypothesis rule of natural deduction, but in the sequent calculus they describe a transposition or a handshake of a left and a right proposition. The correspondence between the sequent calculus and natural deduction is a pair of soundness and completeness theorems, which are both provable by means of an inductive argument.
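The rules just mentioned can be sketched as follows (a standard formulation, with hypothesis labels written explicitly). The right and left rules for disjunction are:

$$ \frac{\Gamma \Rightarrow A}{\Gamma \Rightarrow A \vee B}\ \vee R_1 \qquad \frac{\Gamma \Rightarrow B}{\Gamma \Rightarrow A \vee B}\ \vee R_2 \qquad \frac{\Gamma, u{:}A \Rightarrow C \qquad \Gamma, v{:}B \Rightarrow C}{\Gamma, w{:}(A \vee B) \Rightarrow C}\ \vee L $$

For comparison, the ∨E rule of natural deduction in localised form, and the initial sequent, are:

$$ \frac{\Gamma \vdash A \vee B \qquad \Gamma, u{:}A \vdash C \qquad \Gamma, v{:}B \vdash C}{\Gamma \vdash C}\ \vee E \qquad\qquad \frac{\ }{\Gamma, u{:}A \Rightarrow A}\ \hbox{init} $$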


 * Soundness of ⇒ wrt. ⊢: If Γ ⇒ A, then Γ ⊢ A.
 * Completeness of ⇒ wrt. ⊢: If Γ ⊢ A, then Γ ⇒ A.

It is clear from these theorems that the sequent calculus does not change the notion of truth, because the same collection of propositions remains true. Thus, one can use the same proof objects as before in sequent calculus derivations. As an example, consider conjunction: the right rule is virtually identical to the introduction rule, whereas the left rule performs some additional substitutions that are not performed in the corresponding elimination rules; both are sketched below.
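A sketch of the conjunction rules with explicit proof terms (a standard formulation; fst and snd are the projections from the earlier section):

$$ \frac{\Gamma \Rightarrow \pi_1 : A \qquad \Gamma \Rightarrow \pi_2 : B}{\Gamma \Rightarrow (\pi_1, \pi_2) : A \wedge B}\ \wedge R \qquad \frac{\Gamma, u{:}A, v{:}B \Rightarrow \pi : C}{\Gamma, w{:}(A \wedge B) \Rightarrow [\mathbf{fst}\ w/u,\ \mathbf{snd}\ w/v]\ \pi : C}\ \wedge L $$

Here ∧R is just the introduction rule ∧I read bottom-up, while ∧L substitutes the projections fst w and snd w for the hypotheses u and v; these are the additional substitutions mentioned above.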

The kinds of proofs generated in the sequent calculus are therefore rather different from those of natural deduction. The sequent calculus produces proofs in what is known as the β-normal η-long form, which corresponds to a canonical representation of the normal form of the natural deduction proof. If one attempts to describe these proofs using natural deduction itself, one obtains what is called the intercalation calculus (first described by John Byrnes [3]), which can be used to formally define the notion of a normal form for natural deduction.

The substitution theorem of natural deduction takes the form of a structural rule or structural theorem known as cut in the sequent calculus.


 * Cut (substitution) : If Γ ⇒ π1 : A and Γ, u:A ⇒ π2 : C, then Γ ⇒ [π1/u] π2 : C.

In most well-behaved logics, cut is unnecessary as an inference rule, though it remains provable as a meta-theorem; the superfluousness of the cut rule is usually presented as a computational process, known as cut elimination. This has an interesting application for natural deduction; usually it is extremely tedious to prove certain properties directly in natural deduction because of an unbounded number of cases. For example, consider showing that a given proposition is not provable in natural deduction. A simple inductive argument fails because of rules like ∨E or ⊥E which can introduce arbitrary propositions. However, we know that the sequent calculus is complete with respect to natural deduction, so it is enough to show this unprovability in the sequent calculus. Now, if cut is not available as an inference rule, then all sequent rules either introduce a connective on the right or the left, so the depth of a sequent derivation is fully bounded by the connectives in the final conclusion. Thus, showing unprovability is much easier, because there are only a finite number of cases to consider, and each case is composed entirely of sub-propositions of the conclusion. A simple instance of this is the global consistency theorem: "⊥ true" is not provable. In the sequent calculus version, this is manifestly true because there is no rule that can have "· ⇒ ⊥" as a conclusion. Proof theorists often prefer to work on cut-free sequent calculus formulations because of such properties.