Incompleteness and Mathematical Complexity
I have a question for specialists in logic or mathematics: in order to determine a theorem's complexity, must it first be proven that the theorem is complete?
My reasoning is this: a Gödel alphabet can expand indefinitely until it attains completeness, at which point complexity could be determined. In computational logic this is called Kolmogorov complexity, and no string of logical primitives can determine it without first satisfying the presupposition that the theorem is complete, whereupon the analysis can proceed.
Seemingly, this question would have the shortest answer if the computation were performed by a quantum computer, I'm assuming?
I think I can help with this question. It simply means the number of steps for a Turing machine to reach a Q.E.D. result. Of course, there are no units to measure this by, which brings us back to the original question.
Quoting tim wood
Well, hypothetically, and I firmly believe that completeness isn't impossible to entertain, by expanding the Gödel alphabet up to a certain integer we might arrive at the least complex proof of a theorem.
Quoting tim wood
Yes, you did mention this previously, which is of interest to me. Yet, as mentioned, given the Second Incompleteness Theorem you cannot have consistency, which aligns with there being no way of determining complexity, even apart from the earlier issue of having no unit of measure for it at all!
What do you mean by a theorem being complete?
Quoting Shawn
What do you mean by "determine a QED result for a Turing machine"?
/
Have you read a book in mathematical logic?
There aren't many to speak of, if any, and Gödel's incompleteness theorems apply to all systems with denumerable, countable alphabets. However, some uncountable alphabets exist, which leaves open the possibility that a theorem may be complete, yet possibly incomprehensible.
Quoting TonesInDeepFreeze
That the Turing complexity is determinate or definite.
That is not an answer to my question, and it's gibberish as is the rest of what you've posted in this thread.
Quoting Shawn
Are you saying a theorem is complete if it has been proven?
Then provide your reasoning. I don't see what you're getting at. Everything written in the OP is just what it says.
Yes, jgill, that's what I mean. In simpler terms, I don't see how you can logically begin to ascertain the "complexity", or Turing complexity, of a theorem's proof without knowing it to be complete and/or consistent, and Gödel proved it impossible to satisfy both completeness and consistency.
Quoting jgill
No, I'm saying that to work on a theorem in terms of "complexity", which doesn't even have a unit of measure or a quantifiable methodology to start with, how does one ascertain this when the proof is moot in terms of both completeness and consistency?
Hope that makes some sense.
What do others think? Does this make things sound somewhat disorganized in how proofs are written?
Your OP and subsequent posts are a mishmash of several things and I can't figure out what you're saying. You could be talking about an extension of Turing machines that accept languages over an uncountable alphabet. You could be talking about Turing degree, the level of algorithmic unsolvability of sets of natural numbers; interestingly, this is not a linear order, so there are sets with incomparable Turing degree. You could be talking about extending an incomplete axiomatic system by adding either a proposition or its negation in order to form a larger system, which is still necessarily incomplete, and doing that over and over, hoping to eventually generate a complete system. That can't be done, and was in fact the subject of Turing's doctoral dissertation. You also tossed in Kolmogorov complexity and even everyone's favorite buzzphrase, quantum computing.
Is it possible for you to focus your subject? Can you perhaps give a specific example of what you're getting at?
The point in question is whether one can determine a theorem's complexity, given the fact that, according to Gödel's incompleteness theorems, a theorem cannot be both consistent and complete.
A theorem can be neither consistent nor complete, not by virtue of Gödel, but by virtue of the fact that the terms "consistency" and "completeness" apply to axiomatic systems and NOT to theorems. You seem unclear on this point.
Let me elaborate in an effort to be useful. Standard set theory, ZF, is incomplete by Gödel's first incompleteness theorem. We can identify a particular proposition that can neither be proved nor disproved, say the axiom of choice. That's incompleteness. Consistency just says that there's no proposition P such that both P and not-P can be proven from ZF. So consistency and completeness are properties of axiomatic systems.
A particular theorem, like Fermat's last theorem, the fundamental theorem of finitely generated Abelian groups, or the fundamental theorem of calculus, does not have the properties of consistency and completeness. Those properties don't apply to individual theorems.
Please forgive my lack of knowledge on the matter; but what I wanted to say is that the complexity of a theorem's proof within an axiomatic system cannot be ascertained, given that the system can be neither complete nor consistent.
If this is true, how does one ever hope to gauge a mathematical theorem's proof as more or less complex?
This sentence repeats the same misunderstanding. Individual theorems do not have the consistency or completeness attributes, in the same sense that automobile tires don't have the horsepower attribute.
Quoting Shawn
As stated, what you said is not sufficiently clear to be true or false; but if I had to guess at your meaning, it's false because consistency and completeness don't apply to individual theorems.
Quoting Shawn
For computational problems, computational complexity theory is the standard approach, as you already know. For theorems, I suppose if anyone cared, we can pick any one of the contemporary proof assistants like Coq or Agda, and define the complexity of a theorem as the shortest proof of that theorem in such a system.
But completeness and consistency of axiomatic systems seem to have little or nothing to do with your question. If a theorem has a proof and you want to find the simplest one, that has nothing to do with the axiomatic system being incomplete. The axiom of choice has no proof in ZF so it's meaningless to ask about the simplest proof. And if a system is inconsistent, then everything has a one-line proof following directly from the inconsistency. So completeness and consistency are irrelevant to your question.
Reasoning requires definitions. You have not provided them.
Quoting Shawn
What I'm getting at is exactly the question I already asked:
What is your definition of 'a theorem is complete'?
Do you know what a definition is?
So, when I say that a proof of a theorem is such that its complexity cannot be determined, does that mean anything?
Quoting fishfry
What I'm trying to determine is whether there is any possibility of determining the complexity of proofs by reasoning that a Q.E.D. would be reached by the least exhaustive method. Does that make sense? Following this logic, if you don't have a method of doing this, then how can you determine complexity in mathematical proofs?
Yes.
Quoting tim wood
No, according to how I view complexity, it's ascertained or determined by the shortest Q.E.D.
Quoting tim wood
Well, yes. Meaning that the number of primitives and recursions leads to the least exhaustive outcome for a Turing machine.
Yeah, and that's not even counterintuitive, until I bring into the discussion the completeness and consistency of a theorem, which, as I view it, are seemingly needed to determine the shortest length of a proof and thereby its complexity. And brute-forcing the result of the shortest Q.E.D. is not really that interesting to talk about.
If P is a theorem, let L(P) be defined by:
L(P) = n iff (there is a proof of P in n number of symbols & for all m, if m < n then there is no proof of P in m number of symbols)
That is, L(P) is the least length of a proof of P.
Are you asking whether L(P) is computable? That is, is your question this: is there an algorithm, for the set of theorems, that takes any theorem as input and outputs the least length from among its proofs?
For about the fifth time, "completeness of a theorem" makes no sense unless you tell us what you meant by it.
Maybe you mean "completeness of a theory", which does make sense.
I'm asking if n or m
I'll stick with my definition. I would add another term with a definition to accommodate your different notion, but it doesn't make sense: there is no such thing as P's proof, since P has many proofs, so there is no single proof to call "P's proof".
(3) Quoting Shawn
I guess you mean 'determinate' not 'determine'. What definition of 'determinate'? Do you mean 'decidable'? Something else?
Well, yes. I'm concerned with the complexity of a theorem's proof, or what I think is how you can gauge complexity in mathematics: by determining how simple the proof is for any theorem.
Does that sound about right?
Let me know if that's a word salad again. :sweat:
"XYZ", in my mind, is related to either completeness or consistency, which is where my confusion seems to arise.
And by 'complexity' do you mean 'the sum of the lengths of the formulas in the proof'?
I'm thinking again of having proven a theorem, putting into a computer algorithm, then counting symbols to ascertain "complexity." I guess that would be some sort of definition, but why one would wish to do that is not clear to me. I suppose one could say, "Oh, Gill's proof of Theorem X has complexity 32, whereas Kojita's proof has complexity 56." Then what? :roll:
I find it interesting whether there is an algorithm to compute the least length.
I take it we are talking about proofs in the first-order predicate calculus.
Not sure, but I think this is right: any theorem has only finitely many symbols from the language signature, so a shortest proof would use only (1) connectives (of which there are finitely many) and quantifiers (of which there are finitely many), (2) the signature symbols in the theorem itself, and (3) finitely many variables. Then, modulo the variables used (since swapping variables wouldn't change length), for any n there are only finitely many sequences of formulas whose sum of formula lengths is n. So, successively for each n, look for proofs of the theorem. The answer is the first n for which there is a proof.
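The brute-force idea described above can be sketched in a toy setting. This is only an illustration, not first-order logic: it uses Hofstadter's MIU string-rewriting system as a stand-in for a proof calculus, with breadth-first search finding the least number of rule applications that derives a given "theorem". All names and the length cap are my own choices.

```python
from collections import deque

def successors(s):
    """All strings derivable from s in one MIU rule application."""
    out = set()
    if s.endswith("I"):              # rule 1: xI -> xIU
        out.add(s + "U")
    if s.startswith("M"):            # rule 2: Mx -> Mxx
        out.add("M" + s[1:] * 2)
    for i in range(len(s) - 2):      # rule 3: III -> U
        if s[i:i + 3] == "III":
            out.add(s[:i] + "U" + s[i + 3:])
    for i in range(len(s) - 1):      # rule 4: UU -> (empty)
        if s[i:i + 2] == "UU":
            out.add(s[:i] + s[i + 2:])
    return out

def shortest_derivation(target, axiom="MI", max_len=12):
    # Breadth-first search: the first time we reach `target`, the number of
    # rule applications is minimal.  `max_len` prunes the infinite space of
    # strings, so the search is over a finite set and always terminates.
    seen = {axiom}
    frontier = deque([(axiom, 0)])
    while frontier:
        s, depth = frontier.popleft()
        if s == target:
            return depth
        for t in successors(s):
            if len(t) <= max_len and t not in seen:
                seen.add(t)
                frontier.append((t, depth + 1))
    return None  # no derivation within the pruned search space
```

For example, `shortest_derivation("MIU")` returns 1 (one application of rule 1), while "MU" is famously underivable, so the pruned search returns None.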
What do you mean by the complexity of a proof? I've already suggested that it could be defined as the length of a proof in some formal proof system. What do you mean by it?
Quoting Shawn
What is a QED? Do you mean a proof?
Quoting Shawn
I've already suggested a way. I don't know that it's regarded as particularly important but I could be wrong about that. For sure it's important in computer science, and I pointed you to complexity theory.
Here's the inherent problem as I see it: there's no syntactically quantifiable method, let alone a semantic, qualitative one, to make "complexity" determinate. What do you think?
Mathematicians often talk about a theorem or conjecture being qualitatively "hard", yet for "complexity", there's no syntax, or is there?
If all one wants to do is count primitives or the length of the proof, then there's still the burden of determining how to estimate its Turing complexity, yes?
Yes, and as to this, I think it's important for making mathematics more formal to be able to say that one formal proof of a theorem is less complex than another, no?
I keep on repeating myself here; but, I discussed earlier that there's some lack of language with a syntax that could ascertain this. The only example I keep on mentioning is related to Kolmogorov complexity, which is non-computable.
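A side note on the non-computability point: while Kolmogorov complexity K(s) is not computable, any real compressor gives a computable upper bound on it, which is how the notion gets used in practice. A minimal sketch (the choice of zlib and the example strings are mine, purely illustrative):

```python
import hashlib
import zlib

def compression_bound(s: str) -> int:
    # Length of the compressed form: a computable UPPER BOUND on the
    # Kolmogorov complexity of s.  The true K(s) is not computable.
    return len(zlib.compress(s.encode("utf-8"), level=9))

# A highly regular string compresses far below its raw length...
regular = "ab" * 500                      # 1000 characters

# ...while a pseudorandom-looking string of the same length does not.
# (Generated deterministically by iterating a hash, so this is repeatable.)
chunks, seed = [], b"seed"
while sum(len(c) for c in chunks) < 1000:
    seed = hashlib.sha256(seed).digest()
    chunks.append(seed.hex())
noisy = "".join(chunks)[:1000]

print(compression_bound(regular), compression_bound(noisy))
```

This also shows why Kolmogorov complexity can only be estimated from above: a compressor can certify "s is at most this complex", but no algorithm can certify a matching lower bound for arbitrary strings.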
Quoting fishfry
Yes.
Quoting fishfry
Well, this much I understand; but the point of this thread is whether there is any relation between an axiomatic system being consistent or complete and the determination of whether that system is subject to degrees of "complexity". Seemingly, that can only be entertained by providing proofs, with those proofs subject to a formal measure of complexity...
If my previous post is correct, then I take it that your full question is:
What is the complexity class of such an algorithm?
I don't know. And I'm not very much informed on complexity classes.
I find it interesting as a mathematical question in itself.
There's something in my mind that I haven't disclosed, and this is more of a sentiment, so ignore it if you don't care; but without knowing whether a theorem's proof is more or less complex, mathematics seems, to put it lightly, disorganized or uncategorized. I'm not sure if you follow me on this sentiment. Feel free to comment if not.
Yet, try and specify this syntactically/quantifiably in a formal system, if you can even begin to. Non-trivial stuff...
There was an attempt at this by Russell and Whitehead in the Principia Mathematica, where they utilized type theory extensively, to no avail. Set theory can't do this, but type theory can to some degree.
Food for thought?
If my attempt was correct, and there are only finitely many symbols in the signature, then, if I'm not mistaken, there is an algorithm to list the theorems in order of proof length (and, say, lexicographically within length).
PM addressed complexity classes?
So, it's syntactic? How do you make it both syntactic and semantic within a formal system so as to ascertain the Turing complexity of the task? As I understand it, type theory doesn't quantify this syntactically according to Turing complexity, but rather semantically.
That's my estimate. What do you think?
See the previous post to you.
What does "this" refer to there? Set theory can't do a lot of things, but what in particular are you saying set theory can't do?
Sorry, I was thinking about it dialectically with regard to type theory, which I explained in the post above.
Estimate based on what?
I don't see it in the table of contents given in the article on PM in the SEP.
A little off-topic; but type theory can resolve the set-of-all-sets paradox by hierarchies of classes, which is confusing, because doing so is not entailed within the theory; it has to be done extra-syntactically or even semantically.
I have no idea what you mean.
Quoting Shawn
I have no idea what you mean.
Are you deliberately wasting people's time with remarks you've intentionally prepared to be gibberish or are so personally esoteric that no one but you could know what you mean?
Sorry, I'm still struggling with the language on the issue.
Feel free to ignore my word salads.
Quoting TonesInDeepFreeze
I seem to have last been on the same page here, and brought in type theory in regards to determining class hierarchy when talking about...
EDIT:
[s]type theory[/s], but, resolving entailment of the set of all sets, by hierarchy classes.
True enough. Beyond my pale.
There is no set of all sets, I hope we're not going down this road.
It's off topic; but, I recall reading that Russell solved this by class hierarchy in type theory. I'll search if I'm wrong on this.
You don't need to search. You are correct that PM is one approach to avoiding the paradox.
I do not believe there is a set of all sets either in Russell's type theory or in any version of modern type theory. I'd be grateful if you could supply references and/or context to the contrary. At best I've read that type theory "avoids the paradox," but I've seen no claim that there is a set of all sets.
Why would I want to supply references to a claim that PM allows a set of all sets when I agree that PM does not allow a set of all sets?
Do you have a cat? Perhaps your cat wrote this using your handle:
Quoting TonesInDeepFreeze
It is correct that PM is one approach to avoiding the paradox. With PM there is no set of all sets.
That last sentence makes all the difference, especially in the context of @Shawn referencing the set of all sets, my telling him that there is no such thing, Shawn saying he'd look up PM, and you saying that he didn't need to, the implication being that PM confirmed his misunderstanding, when in fact it doesn't. You added to OP's confusion on this issue and I don't know why.
I was talking about type theory in general avoiding the issue of the set of all sets by assigning hierarchy classes as I last read about the issue.
He said that he thinks PM solves the problem with types but that he might have to look it up to be sure.
I said he doesn't have to look it up; he is correct that PM avoids the paradox.
In context, your remark served to amplify the OP's confusion rather than correct it. I straightened the situation out. You're welcome. I myself am singularly unconfused about this issue, which is why I challenged your technically correct but misleading-in-context claim.
You understand that there is no set of all sets, correct? Even in type theory. Yes?
Yes, sir.
I'm only pondering if class complexity can be ascertained at the moment for a proof of a theorem in a formal system, for Turing computability. That's where I left off with @TonesInDeepFreeze
No, I addressed exactly the sentence he wrote. What he wrote in that sentence was correct. I am not responsible for addressing other confusions he has that might conflict with the sentence he posted and to which I responded. He wrote something correct, and I affirmed it.
Quoting TonesInDeepFreeze
I think I'll go argue with the partisans on the Israel-Palestine war thread. It seems more peaceful.
Not 'class complexity'. The 'complexity class'.
Do you think complexity class can determine "complexity" quantifiably for a Turing computable algorithm?
OK
Perhaps Turing degree is what you're looking for. I'm not sure what you mean by quantifiable. Complexity theory puts computational problems into equivalence classes. That's perhaps more qualitative than quantitative.
https://en.wikipedia.org/wiki/Turing_degree
OK, I sat down, read it, and read it some more, and seemingly you would have to have a powerful oracle machine to make this decision with the least complexity.
Am I getting that much right?
I surmise that he's not asking about degree of unsolvability but about the complexity of the problem.
4.1 Significance of Complexity here: https://plato.stanford.edu/entries/computability/
If I'm not mistaken, this is not about unsolvable problems. Rather, it's about finding out the complexity of the algorithm for deciding this decidable problem.
Assuming you have a metric to do this by, which is why I reference the need for a way to ascertain the computability beforehand, meaning some kind of robust oracle machine.
But, how does the machine do this without a brute force method?
I'm not an authority on CS theory. The idea seems to be that we write A < B for sets of natural numbers if A can be decided with an oracle for B. What's interesting is that this is not a linear order. There are sets A and B such that neither A < B nor B < A.
The method I mentioned is brute force. I don't know whether there's a better method.
How would you define it? Is it even possible to define this? And, may I ask how is this distinct from estimating Kolmogorov complexity?
I see, so that's where I kind of made the quip in the OP about things like quantum computing taking the route of least complexity, e.g. Shor's algorithm.
It's defined in CS, you can Google around. It's a fairly technical subject, nothing I know much about.
Of course it's possible to define, the definition is in the Wiki article, although that article's not too informative. I don't know any more about it than you'd learn from Googling and reading some papers. None that I've seen are particularly accessible.
But you do acknowledge that the complexity class of an algorithmic theorem's proof can be indeterminate, so long as the machine uses non-brute-force methods to determine "complexity"?
I have no idea, I know very little of these matters.
Do you know of any way to approach this problem? I'm assuming you will reference information theory, which isn't what I think is appropriate to ascertain "complexity", or is it?
I've referred you to Turing degree, complexity theory, and proof systems like Agda and Coq. I'm all out of ideas. A good resource for complexity theory is everything on Scott Aaronson's website. https://www.scottaaronson.com/
Whether the problem of quantifying "complexity" is undecidable or decidable, I know not.
I'm assuming you can decide the Turing degree; but I have no idea how difficulty would or could be estimated.
A posteriori I assume? Not a priori.
I'm wondering about reaching out on somewhere like Stack Exchange for more information or how to answer it.
Thanks in advance!
That's a possibility, but be prepared for some confusion about what you are asking. You're in a different ball game on that forum.
I'm still not sure what your question is. Also, I'm not very informed about complexity, so my own formulation might need correction too. But here's my best try:
For first order theories, is it correct that there is a brute force algorithm that tells us the shortest proof length for any given theorem ('length' means the sum of the lengths of the formulas that appear as lines in the proof)? What is the complexity class for the algorithm? Are there algorithms better than brute force? What is the least complexity class for an algorithm to tell us the shortest proof length?
https://math.stackexchange.com/questions/4141341/problem-of-finding-shortest-proof-how-complex-is-it
Can you edit your stackexchange post?
(I think complexity class pertains to the problem not the algorithm?)
This is better:
For first order theories, is it correct that there is a brute force algorithm that tells us the shortest proof length for any given theorem ('length' means the sum of the lengths of the formulas that appear as lines in the proof)? Are there algorithms better than brute force? What is the complexity class for the problem?
Edited, let me know if the title needs to be more precise.
I would title it:
Problem of finding shortest proof - how complex is it?
But I really have to warn that, since I'm not really informed on this particular subject, my own formulation might get shot down.
A response to it:
Quoting Rob Arthan
I was going to write:
"I should have stipulated that we're talking about finitely axiomatizable theories. Even if the theory is recursively axiomatizable but not finitely axiomatizable, then the method would not work."
So I thought his argument was correct. But then I saw that he amended it to say that the brute-force method would work if 'length' is defined by symbols, not lines. (And we did define it by symbols, not lines.) So, by his amended argument, the paragraph above in which I correct myself is not correct, and I never needed to correct myself except to include that the theory is recursively axiomatizable. But I don't see why his original argument isn't still correct, whether it's symbols or lines. So I am confused.
He gave this as the answer:
https://math.stackexchange.com/a/4141376/928370
Length & Complexity
Let me know what you think to say?
However, the idea that I could take the theorem I am trying to prove right now and turn it over to an ATP is far fetched. Although appealing, I must admit. I am at the point that there might be the slightest possibility it falls into the Godel Hole, since I find that to prove the conjecture seems to require that I assume the conjecture, a circular trap. I'm not speaking of an indirect proof. All my computer experiments suggest it is true. :worry:
Can you explain this, as I'm quite interested?
Thank you.
It's an interesting topic, yeah?
If I have time later, maybe I'll reply on StackExchange to better understand some of the points.
It's interesting to me, and perhaps to @jgill: how do you discern the axioms in the theorem that are non-computable from those that are computable?
What do you think?
What do you mean by 'computable axiom'?
'recursive axiomatization' is given a rigorous definition. With a theory that is recursively axiomatizable, it is computable whether a symbol string is or is not an axiom. And we may stipulate that we are using effectivized languages. In ordinary cases, if the theory is recursively axiomatizable, then we know an algorithm for checking whether a symbol string is or is not an axiom.
I have something that addresses this here:
Is it meaningful to talk about computability of uncountable alphabets, at all and given this thread?
If the language is uncountable, then the theory is not recursively axiomatizable and is not a formal theory. And, yes, Godel's theorem pertains only to formal theories.
A book you should read:
Godel's Theorem: An Incomplete Guide To Its Use and Abuse - Franzen.
It's not a technical tome, and I bet it would answer a lot of your questions right away.
I redid it and asked the following question:
https://math.stackexchange.com/questions/4142390/completeness-without-uncountability
I asked you previously: Do you understand the difference between a theory and a theorem?
That there is a recursive axiomatization of a theory T entails that it is computable whether any given string is or is not an axiom. That follows from the fact that any recursive relation is computable.
By the way, I don't know what would be the value in telling others that I said it, since I am not in any way an authoritative expert in logic of mathematics. The only thing I really know very much about is jazz.
I got this reply, which is really interesting for me to understand:
What do you think @jgill, and @TonesInDeepFreeze?
Here, 'computable' is to be taken as 'Turing machine computable'.
"If R is recursive then R is Turing machine computable".
That has a long and complicated proof, but it's not controversial that it is proven.
Note that this does not even need to assume the Church-Turing thesis, since the claim "if R is recursive then R is computable (in an intuitive, informal sense of 'computable')" is not controversial.
That is incorrect. You have conflated "formula F is independent of axiom set S" with "formula F is an axiom in the axiom set S".
Axiomatization is a relation between a set of sentences (called 'the axioms') and the set of sentences (called 'the theorems') that are provable from those axioms. I.e.:
A set of sentences S is an axiomatization of a set of sentences T iff every member of T is provable from some finite subset of S.
A theory is a set of sentences closed under deduction (provability).
A recursive axiomatization S of a theory T is a recursive set of sentences S such that every member of T is provable from a finite subset of S.
Trivially, any axiom in S is provable from S. (Trivially, any axiom is provable from itself.)
A theory T is recursively axiomatizable iff there is a recursive set S that is an axiomatization of T.
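The definitions above can be made concrete with a toy example. Suppose a (hypothetical) theory whose axiom set is the infinite schema "t = t" for every numeral t (0, S0, SS0, ...). The set is infinite, yet recursive: a terminating check decides membership, which is exactly what recursive axiomatizability requires of the axiom set. A minimal sketch:

```python
import re

# Infinite axiom schema: every sentence "t = t" where t is a numeral S...S0.
# The backreference \1 forces both sides to be the SAME numeral.
AXIOM = re.compile(r"(S*0) = \1")

def is_axiom(sentence: str) -> bool:
    # Membership in the (infinite) axiom set is decided by a regex match,
    # i.e., by an algorithm that always halts: the set is recursive.
    return AXIOM.fullmatch(sentence) is not None
```

So `is_axiom("SS0 = SS0")` is True while `is_axiom("S0 = SS0")` is False, even though no finite list of axioms is ever written down.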
/
https://math.stackexchange.com/questions/4142390/completeness-without-uncountability
That is incorrect.
It is an easy theorem that R is recursive iff (R is recursively enumerable & the complement of R is recursively enumerable).
A recursive enumeration does not itself provide a decision procedure for R, but that does not entail that there does not exist a decision procedure for R.
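The direction "if both R and its complement are recursively enumerable, then R is decidable" has a standard constructive reading: dovetail the two enumerations and see which one produces the input first. A sketch with an illustrative choice of R (the even naturals); the function names are mine:

```python
from itertools import count

def decide(x, enum_r, enum_co_r):
    # Dovetail an enumeration of R with an enumeration of its complement.
    # Since x belongs to exactly one of the two sets, one enumeration must
    # eventually produce it, so this procedure always halts.
    in_r, out_r = enum_r(), enum_co_r()
    while True:
        if next(in_r) == x:
            return True
        if next(out_r) == x:
            return False

# Illustrative R: the even naturals, whose complement is the odd naturals.
def evens():
    return (2 * n for n in count())

def odds():
    return (2 * n + 1 for n in count())
```

Neither enumeration alone is a decision procedure (waiting on one of them could run forever for a non-member), but interleaving the two is, which is the point of the theorem.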
I would have to think it through. I am simply not caught up in the StackExchange thread.
The very first sentence in the very first reply to you in that thread:
"What does it mean for a theorem to be complete or uncountable?"
Please stop using the phrases "theorem is complete" or "theorem is countable". They make no sense, and all they do is make people have to stop you in your tracks before we can even answer you about anything else you ask or say.
In the exact sense that the set of recursive sets is a proper subset of the set of recursively enumerable sets.
Quoting tim wood
A theory is closed under deduction.
But the clause you have in mind is recursive axiomatizability. (I don't think that is usually referred to as 'closure'.)
At least for now, I've decided not to post there. There are already too many confusions in the discussions, and I'm not up to sorting through them with the participants. Also, I don't like the interface and layout of the post and thread formatting there.
Well, Shawn, you asked for it. Sorry, buddy, it's convoluted and I doubt comprehensible. I don't usually go to professional math sites like StackExchange, preferring to do the job myself, but I don't think it would make any difference for this particular problem. So I'll put it up here. And one of the math people on the forum might have a suggestion:
Start with a sequence of complex functions: [math]\left\{ {{f}_{k}} \right\}_{k=2}^{\infty }[/math],
And create a second sequence: [math]{{F}_{k-1,n}}(z)={{f}_{k-1}}\left( {{F}_{k,n}}(z) \right)[/math], [math]{{F}_{n,n}}(z)=z[/math] Which can be written as: [math]{{F}_{1,n}}(z)=z+{{\rho }_{1}}{{\varphi }_{1}}\left( {{F}_{2,n}}(z) \right)+{{\rho }_{2}}{{\varphi }_{2}}\left( {{F}_{3,n}}(z) \right)+\cdots +{{\rho }_{n-1}}{{\varphi }_{n-1}}\left( {{F}_{n,n}}(z) \right)[/math]
[math]{{\varphi }_{k}}(z)=-zQ(z)/\left( 1+{{\rho }_{k}}Q(z) \right),\quad Q(z)=x\cos (y)+iy\sin (x)[/math]
Then show [math]\underset{n\to \infty }{\mathop{\lim }}\,{{F}_{1,n}}(z)[/math] exists.
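Out of curiosity, the limit can be probed numerically. The sketch below is mine, with a hypothetical choice rho_k = 1/k^2 (the rho_k were not specified above); it builds F_{1,n}(z) from the innermost term outward, taking f_k(w) = w + rho_k * phi_k(w) so that the telescoped sum above is reproduced, and checks whether the values appear to stabilize as n grows.

```python
import math

def Q(z: complex) -> complex:
    # Q(z) = x cos(y) + i y sin(x), where z = x + iy.
    x, y = z.real, z.imag
    return x * math.cos(y) + 1j * y * math.sin(x)

def F1n(z: complex, n: int) -> complex:
    # F_{n,n}(z) = z and F_{k-1,n}(z) = f_{k-1}(F_{k,n}(z)), built from the
    # innermost term outward, with f_k(w) = w + rho_k * phi_k(w) and
    # phi_k(w) = -w Q(w) / (1 + rho_k Q(w)).
    w = z
    for k in range(n - 1, 0, -1):
        r = 1.0 / k**2            # hypothetical rho_k with a convergent sum
        w = w + r * (-w * Q(w) / (1 + r * Q(w)))
    return w

z0 = 0.5 + 0.3j
approximations = [F1n(z0, n) for n in (10, 100, 1000)]
print(approximations)
```

With this choice of rho_k the successive values change very little between n = 100 and n = 1000, which is consistent with (though of course no proof of) the limit existing at this z.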
There have been discussions on this thread about recursive objects. Here's a Lulu from actual math, not about math. :cool:
This dynamical system is associated with an image I will post.
Reproductive Universe
I've been wondering if the question posted over at StackExchange is generalizable.
What do you think?
https://math.stackexchange.com/questions/4144858/finding-the-shortest-proof-how-complex-is-it
First, for your questions, these still need to be clearly settled:
(1) Given a recursively axiomatizable theory with finitely many axioms, and with 'length of proof' defined as the sum of the lengths of the formulas that are the lines in the proof, is there an algorithm to determine, for any given theorem, the least length of a proof?
(2) Given a recursively axiomatizable theory with infinitely many axioms, and with 'length of proof' defined as the sum of the lengths of the formulas that are the lines in the proof, is there an algorithm to determine, for any given theorem, the least length of a proof?
(3) Given a recursively axiomatizable theory with finitely many axioms, and with 'length of proof' defined as the number of lines in the proof, is there an algorithm to determine, for any given theorem, the least length of a proof?
(4) Given a recursively axiomatizable theory with infinitely many axioms, and with 'length of proof' defined as the number of lines in the proof, is there an algorithm to determine, for any given theorem, the least length of a proof?
Should I post this on StackExchange?
How would you go about this?
I don't know, because I got confused by what different people said at StackExchange.
In all cases, it seems to me that, since we are concerned with finding the shortest proof, we only need to consider the finite part of the signature that occurs in the theorem, and we don't need to consider alphabetic variants.
(1) I think 'Yes'. There are only finitely many strings of a given finite length from a finite set of symbols, and only finitely many axioms to check whether a string is an axiom.
(2) I think 'Yes'. There are only finitely many strings of a given finite length from a finite set of symbols, and it is algorithmic to check whether any one of them is an axiom.
(3) I didn't check the details of the argument at
https://math.stackexchange.com/questions/1002618/how-to-find-the-shortest-proof-of-a-provable-theorem
that purports to show that the answer is 'No', but the poster seems to be knowledgeable.
(4) I didn't check the details of the argument at
https://math.stackexchange.com/questions/1002618/how-to-find-the-shortest-proof-of-a-provable-theorem
that purports to show that the answer is 'No', but the poster seems to be knowledgeable. And, aside from his argument, it does seem to make sense that if there are infinitely many axioms, then there are infinitely many different possible proof lines, so the algorithm could go on indefinitely before arriving at an answer.
It would be interesting to look into this as a general proposition.
What do you think, @TonesInDeepFreeze?