HN.zip

Peano arithmetic is enough, because Peano arithmetic encodes computation

222 points by btilly - 111 comments
btilly [3 hidden]5 mins ago
This is a stack overflow question that I turned into a blog post.

It covers both the limits of what can be proven in the Peano Axioms, and how one would begin bootstrapping Lisp in the Peano Axioms. All of the bad jokes are in the second section.

Corrections and follow-up questions are welcome.

doodlebugging [3 hidden]5 mins ago
After putting on my boots and wading through all of that I think that you have one edit to make.

In the "Why Lisp?" section there is a bit of basic logic defined. The first of those functions appears to have unbalanced parentheses.

> (defun not (x)
>   (if x
>     false
>     true)
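
(Presumably the intended, balanced form -- my reconstruction, not the post's actual text -- is:

    (defun not (x)
      (if x
          false
          true))

with one more closing paren at the end.)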

I have a compulsion that I can't control when someone starts using parentheses. I have to scan to see who does what with who.

You later say in the same section

>But it is really easy to program a computer to look for balanced parentheses

Okay. This is pretty funny. Thanks for the laugh. I realize that you weren't doing that but it still is funny that you pointed out being able to do it.
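
(And it really is easy -- a minimal sketch in Common Lisp, mine rather than the post's: scan the string, keep a running count of open parens, and require that it never goes negative and ends at zero.)

    (defun balanced-parens-p (string)
      "Return T if the parentheses in STRING are balanced."
      (let ((depth 0))
        (loop for ch across string
              do (case ch
                   (#\( (incf depth))
                   (#\) (decf depth)
                        (when (minusp depth)
                          (return-from balanced-parens-p nil)))))
        (zerop depth)))

    ;; (balanced-parens-p "(defun not (x) (if x false true)")  ;=> NIL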

This later comment in the "Basic Number Theory" section was also funny.

>; After a while, you stop noticing that stack of closing parens.

I really enjoyed reading this post. Great job. Though it has been a long time since I did anything with Lisp I was able to walk through and get the gist again.

btilly [3 hidden]5 mins ago
Thank you for the correction!

And I'm glad that someone liked some of my attempts at humor. As my wife sometimes says, "I know that you were trying to make a joke, because it wasn't funny."

doodlebugging [3 hidden]5 mins ago
It was a great read. I enjoyed how you laid it all out. It reminded me of some of my upper level math coursework. Easy to follow if you take it step by step and stop to consider the implications. Some things become obvious to the most casual observer.
whatagreatboy [3 hidden]5 mins ago
I found all the jokes funny as well. Thanks for the blog post. Extremely nice read. I love the approach.
Brian_K_White [3 hidden]5 mins ago
"I think you liked it just fine." hehe
kazinator [3 hidden]5 mins ago
> I have to scan to see who does what with who.

Are you saying that parentheses introduce the problem of having to scan to see what goes with what?

As in, if we don't have parentheses, but still have recursive/nested structure, we don't have to scan?

doodlebugging [3 hidden]5 mins ago
For myself the issue goes back to my college mathematics courses, especially differential equations. I worked those homework problems by hand on a large format tablet, roughly 24" x 36", carefully laying them out step by step so that I could walk through them in the future and make sense of the solution process. Counting and matching parentheses was pretty critical since a missed parenthesis may not pop out at you like it would in a compiler error or by walking through code.

I automatically count and match even today, 40 years later.

Retr0id [3 hidden]5 mins ago
Python block indentation is an example of nested structure that's at least easier to visually scan. You don't need to count opening/closing parens, just look down a column of text - assuming nobody mixed tabs and spaces. (but I wouldn't go as far as saying you don't need to scan it)
User23 [3 hidden]5 mins ago
Canonically indented Lisp reads an awful lot like Python. You don’t read the braces, you read the indentation.
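(A small illustration, not from the post -- the structure reads off the indentation even if you never count the parens:

    (defun fib (n)
      (if (< n 2)
          n
          (+ (fib (- n 1))
             (fib (- n 2)))))

)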
edanm [3 hidden]5 mins ago
This is fascinating! I haven't read much past the intro yet, but I love the whole premise: that you can prove, within PA, that each specific Goodstein sequence terminates at 0, but not that all sequences terminate (it might be a trivial observation, but it's still interesting).
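
(For concreteness, the step that generates a Goodstein sequence -- write n in hereditary base b, replace every b by b+1, subtract one -- fits in a few lines of Lisp. This is my own sketch, not code from the post; the names bump and goodstein are mine:

    (defun bump (n from to)
      "Write N in hereditary base FROM and replace every FROM by TO."
      (if (< n from)
          n
          (loop with result = 0
                for exponent from 0
                for m = n then (floor m from)
                while (plusp m)
                do (incf result (* (mod m from)
                                   (expt to (bump exponent from to))))
                finally (return result))))

    (defun goodstein (n &optional (steps 10))
      "Collect the first STEPS terms of the Goodstein sequence starting at N."
      (loop for base from 2
            repeat steps
            collect n
            while (plusp n)
            do (setf n (1- (bump n base (1+ base))))))

    ;; (goodstein 3)  ;=> (3 3 3 2 1 0)

)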

I also find it super weird that Peano axioms are enough to encode computation. Again, this might be trivial if you think about it, but that's one self-referential layer more than I've thought about before.

One question for you btilly - oddly enough, I just recently decided to learn more Set Theory, and actually worked through an Intro to Set Theory textbook up to Goodstein sequences just last week. I'm a bit past that now.

Do you have a good recommendation for a second, advanced Set Theory textbook? Also, any recommendation for a textbook that digs into Peano arithmetic? (My mini goal since learning the proof for Goodstein sequences is to work up to understanding the proof that Peano isn't strong enough to prove Goodstein's theorem, though I'll happily take other paths instead if they're strongly recommended.)

Cheyana [3 hidden]5 mins ago
Thanks for this. In another strange internet coincidence, I was asking ChatGPT to break down the fundamentals of the Peano axioms just yesterday and now I see this. Thumbs up!
burnt-resistor [3 hidden]5 mins ago
I suspect that in the near future (if not already) ChatGPT data will be sold to data brokers and bought by Amazon, such that writing a prompt will end up polluting your Alexa product recommendations within minutes to hours.
lmpdev [3 hidden]5 mins ago
Oh for fuck's sake

Can we not ruin every technology we develop with ads?

smoyer [3 hidden]5 mins ago
We need to make ads cost more than they are worth - here's one small way (what are others?): https://adnauseam.io/
Incipient [3 hidden]5 mins ago
Haha that's brilliant. Does it actually 'work'? I expect there is more to recording ads than simply a click?

Also, isn't this somewhat just lining Google's/the ad networks' pockets? They won't care about even a large-ish number of people using this add-on.

0_gravitas [3 hidden]5 mins ago
So, this works only for a subset of ads, specifically, ads that are CPC (cost-per-click) (sometimes called CPL -> cost-per-link), this wouldn't work for CPA campaigns (cost-per-action), which only pay out when a sign up or some other arbitrary action is performed by the user.

The larger adtech companies will have some form of bot-activity-detection, _but_ plenty will let it all just slip on by. (I've personally written a Babashka script at $job that checks performance of our publishers, and flags any fraudulent-seeming activity)

ccppurcell [3 hidden]5 mins ago
Watch Adam Curtis's Century of the Self. I don't think he set out to explain ads in tech, but to my mind it explains a lot about how ads specifically came to power.
blipvert [3 hidden]5 mins ago
brookst [3 hidden]5 mins ago
Let’s also not ruin every discussion with speculation about how someone will eventually ruin everything with ads.
chii [3 hidden]5 mins ago
When there's a need to make money, but no one wants to pay (though they still want to use the service), it naturally falls back on being ad-driven.
louthy [3 hidden]5 mins ago
> no one wants to pay

Fewer people want to pay. That's the crux: companies want mass-market penetration, rather than just accepting the natural size of the market.

tshaddox [3 hidden]5 mins ago
Turns out that it doesn’t matter whether people want to pay. Many paid services also include advertising.
Art9681 [3 hidden]5 mins ago
I've noticed this too. I will be researching a topic elsewhere and then it seems to pop up in HN. Am I just looking for patterns where there are none, or is there some trickery happening where HN tracks those activities and mixes in posts more relevant to my interests with the others?
nemomarx [3 hidden]5 mins ago
HN has a single front page for everyone, so it's a recency illusion. You pay more attention to details and skim over titles you haven't been thinking about.
gjm11 [3 hidden]5 mins ago
It may also be that there's some common cause for (1) a topic being on HN's front page and (2) someone researching it elsewhere. E.g., maybe a couple of days ago there was an interesting social-media post related to it, and (1) one person saw it and wrote up something interesting enough to get on HN and (2) another person saw it, started digging around to find out more, and then read HN.

(I am not suggesting that that very specific sequence of events happened in this case. It's just an example of how "I was looking this up for other reasons and then I saw something about it on HN" could genuinely happen more often than by chance, without any need for weird tracking-and-advertising shenanigans.)

RcouF1uZ4gsC [3 hidden]5 mins ago
This proves that ChatGPT sells your data to HN which then decides which posts to put on the front page.
billyjmc [3 hidden]5 mins ago
Incidentally, this also proves that GP is the main character.
im3w1l [3 hidden]5 mins ago
Well there was a post on mathmemes a day ago about teaching kids set theory as a foundation for math with some discussion of PA. So maybe related ideas are echoing across the intertubes in this moment?
kjellsbells [3 hidden]5 mins ago
> teaching kids set theory as a foundation for math

Very reminiscent of the New Math pedagogy of the 1960s. Built up arithmetic entirely from sets. Crashed and burned for various reasons but I always had a soft spot for it. It also served as my introduction to binary arithmetic and topology.

Timwi [3 hidden]5 mins ago
> Corrections and follow-up questions are welcome.

There are two places where you accidentally wrote “omega” instead of “\omega”.

btilly [3 hidden]5 mins ago
Thanks!

I am away from my computer for the day, but I will fux it later.

gjm11 [3 hidden]5 mins ago
> I will fux it later

I think the problem is that it's already fuxed.

ikrima [3 hidden]5 mins ago
Hey! This is fantastic and actually ties together some highly disparate parts of math. Basically, reorient & reformulate all of math/epistemology around discrete sampling of the continuum. Invert our notions of Aleph/Beth/Betti numbers as some sort of triadic Grothendieck topoi that encode our human brain's sensory instruments that nucleate discrete samples of the continuum of reality (ontology)

Then every modal logic becomes some mapping of 2^(N) to some set of statements. The only thing that matters is how predictive they are with some sort of objective function/metric/measure but you can always induce an "ultra metric" around notions of cognitive complexity classes i.e. your brain is finite and can compute finite thoughts/second. Thus for all cognition models that compute some meta-logic around some objective F, we can motivate that less complex models are "better". There comes the ultra measure to tie disparate logic systems. So I can take your Peano Axioms and induce a ternary logic (True, False, Maybe) or an indefinite-definite logic (True or something else entirely). I can even induce bayesian logics by doing power sets of T/F. So a 2x2 bayesian inference logic: (True Positive, True Negative, False Positive, False Negative)

Fun stuff!

Edit: The technical tldr that I left out is unifying all of math imho: algebraic topology + differential geometry + tropical geometry + algebraic analysis. D-modules and Microlocal Calculus from Kashiwara and the Yoneda lemma encode all of epistemology as relational: either between objects or the interaction between objects, defined as collisionless Planck hypervolumes.

basically encodes the particle-wave duality as discrete-continuum and all of epistemology is Grothendieck topoi + derived categories + functorial spaces between isometry of those dual spaces whether algebras/coalgebra (discrete modality) or homologies/cohomologies (continuous actions)

Edit 2: The thing that ties everything together is Noether's symmetry/conserved quantities which (my own wild ass hunch) are best encoded as "modular forms", arithmetic's final mystery. The continuous symmetry I think makes it easy to think about diffeomorphisms from different topoi by extracting homeomorphisms from gauge invariant symmetries (in the discrete case it's a lattice, but in the continuous we'd have to formalize some notion of liquid or fluid bases? I think Kashiwara's crystal bases has some utility there but this is so beyond my understanding )

ricardobeat [3 hidden]5 mins ago
> Invert our notions of Aleph/Beth/Betti numbers as some sort of triadic Grothendieck topoi that encode our human brain's sensory instruments that nucleate discrete samples of continuum of reality (ontology)

There’s probably ten+ years of math education encoded in this single sentence?

gjm11 [3 hidden]5 mins ago
My apologies to ikrima for being critical, but I think anyone who thinks "aleph/beth/Betti numbers" is a coherent set of things to put together is just very confused.

Aleph and beth numbers are related things, in the field of set theory. (Two sequences[1] of infinite cardinal numbers. The alephs are all the infinite cardinals, if the axiom of choice holds. The beth numbers are the specific ones you get by repeatedly taking powersets. They're only all the cardinals if the "generalized continuum hypothesis" holds, a much stronger condition.)

[1] It's not clear that this is quite the right word, but no matter.

Betti numbers are something totally different. (If you have a topological space, you can compute a sequence[2] of numbers called Betti numbers that describe some of its features. They are the ranks of its homology groups; the usual handwavy thing to say is that they describe how many d-dimensional "holes" the space has, for each d.)

[2] This time in exactly the usual sense.
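
(Concretely, b_k(X) = rank H_k(X; Z); for the torus, b_0 = 1, b_1 = 2, b_2 = 1 -- one connected component, two independent 1-dimensional holes, one enclosed 2-dimensional cavity.)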

It's not quite true that there is no connection between these things, because there are connections between any two things in pure mathematics and that's one of its delights. But so far as I can see the only connections are very indirect. (Aleph and beth numbers have to do with set theory. Betti numbers have to do with topology. There is a thing called topos theory that connects set theory and topology in interesting ways. But so far as I know this relationship doesn't produce any particular connection between infinite cardinals and the homology groups of topological spaces.)

I think ikrima's sentence is mathematically-flavoured word salad. (I think "Betti" comes after "beth" mostly because they sound similar.) You could probably take ten years to get familiar with all the individual ideas it alludes to, but having done so you wouldn't understand that sentence because there isn't anything there to understand.

BUT I am not myself a topos theorist, nor an expert in "our human brain's sensory instruments". Maybe there's more "there" there than it looks like to me and I'm just too stupid to understand. My guess would be not, but you doubtless already worked that out.

[EDITED to add:] On reflection, "word salad" is a bit much. E.g., it's reasonable to suggest that our senses are doing something like discrete sampling of a continuous world. (Or something like bandwidth-limited sampling, which is kinda only a Fourier transform away from being discrete.) But I continue to think the details look more like buzzword-slinging than like actual insight, and that "aleph/beth/Betti" thing really rings alarm bells.

ikrima [3 hidden]5 mins ago
Also, you're onto the actual quantum mechanics paper I'm working on. QM/QFT is modern-day epicycles: arbitrarily complex because it was aliasing the natural, deeper representation, which was Fourier/spectral analysis.

Reformulating our entire ontology around relational mechanics is the answer imho. So Carlo Rovelli's RQM is right, but I think it doesn't go far enough. Construct a Grothendieck topos with a spacetime cohomology around different scales of both space and time, with some sort of indefinite conservation, and you get collisionless Planck hypervolumes that map naturally to particle-wave duality interpretations of QM.

ikrima [3 hidden]5 mins ago
lol, it's a sketch of a proof covering a large swath of unexplored math. The other poster wasn't wrong when he said I smashed 10+ years of graduate math into one sentence.

Aleph numbers = these are cardinal sizes of infinity; depending on your choice of axioms, ZFC or not, you have the continuum hypothesis: aleph0 = naturals, aleph1 = 2^N = continuum

Beth numbers are transfinite ordinals => they generalize infinitesimals like the 1st, 2nd, 3rd. so you can think of them as a dual or co-algebra (I'm hand waving here, it's been twenty years since real analysis).

Betti numbers are for persistent cohomology; they track holes similar to genus

I mean, there's a lot to cover between tropical geometry, differential geometry, and algebraic analysis. So sometimes alarm bells are false alarms and your random internet commenter knows what he's talking about, but is admittedly too sloppy. But it's 5 pm on a Saturday and I wrote that in the morning while making breakfast eggs, not for submission to the Annals of Mathematics!

Thank you for coming to my TED Stand Up Talk.

More math at the GitHub: http://github.com/ikrima/topos.noether

Also, if you're really that uptight, most of this is actually to teach algebraic topology to my autistic nonverbal nephew because I'm gonna gamify it as a magic spell system

So it'll be open source, and that raises the question: if you use it to learn something, does that mean I just zero-proof zero-knowledge'd something out of you that I didn't even need to know, by making a self-referential statement across both space & time?

peace out my ninja!

gjm11 [3 hidden]5 mins ago
The comment you're replying to already explained what aleph, beth and Betti numbers are. (But a few nitpicks: 1. Beth numbers are not ordinals, they're cardinals. They're indexed by ordinals, just as the alephs are, but if that's what you care about why not use the ordinals themselves? 2. I'm not seeing how you get from "Beth numbers are indexed by ordinals" to "they generalize infinitesimals" to "you can think of them as a dual". Not saying there isn't something there, but I think you could stand to unpack it a bit if so. 3. Betti numbers are not only for persistent (co)homology; they were around long before anyone had thought of persistent (co)homology.)

It's certainly possible (as I explicitly said before) that my bad-math-alarms have hit a false positive here. You haven't convinced me yet, for what it's worth. (You need not, of course, care whether you convince me or not. It's not as if my opinion is likely to have any effect on you beyond whatever you might feel about it.)

ikrima [3 hidden]5 mins ago
I think we're vehemently in semantic agreement, but HN comment threads are too bandwidth-limited to discuss tropical geometry and speculative mathematics that require decades of abstract algebra, geometry, and Galois theory :)

For Beth numbers, the wikipedia article is plenty enough to get you started: https://en.wikipedia.org/wiki/Beth_number

ikrima [3 hidden]5 mins ago
I mean you wouldn't be wrong to assume so but how can you expect anyone to saliently condense the entirety of a 10 year long proof of Grothendieck topos to 3 or 4 sentences my guy!
ikrima [3 hidden]5 mins ago
You know what, I nerd-sniped myself; here's a more fleshed-out sketch of the Discrete Continuum Bridge:

https://github.com/ikrima/topos.noether/blob/master/discrete...

bobsh [3 hidden]5 mins ago
There are others out here thinking along similar lines (in my case, with massive help from LLMs). Proof: https://claude.ai/share/a8128fde-ea47-4dd8-a284-16a1fd76240c . Also, I have a GitHub too: https://github.com/bobshafer/PITkit/blob/main/Links.md
gjm11 [3 hidden]5 mins ago
It seems to be entirely written by an LLM.

[EDITED to add:] This is worth noting because today's LLMs really don't seem to understand mathematics very well. (This may be becoming less so with e.g. o3-pro and o4, but I'm pretty sure that document was not written by either of those.) They're not bad at pushing mathematical words around in plausible-looking ways; they can often solve fairly routine mathematical problems, even ones that aren't easy for humans who unlike the LLMs haven't read every bit of mathematical writing produced to date by the human race; but they don't really understand what they're doing, and the nature of the mistakes they make shows that.

(For the avoidance of doubt, I am not making the tired argument that of course LLMs don't understand anything, they're just pattern-matching, something something stochastic parrots something. So far as I can tell it's perfectly possible that better LLMs, or other near-future AI systems that have a lot in common with LLMs or are mostly built out of LLMs, will be as good at mathematics as the best humans are. I'm just pretty sure that they're still some way off.)

(In particular, if you want to say "humans also don't really understand mathematics, they just push words and symbols around, and some have got very good at it", I don't think that's 100% wrong. Cf. the quotation attributed to John von Neumann: "Young man, in mathematics you don't understand things, you just get used to them." I don't think it's 100% right either, and some of the ways in which some humans are good at mathematics -- e.g., geometric intuition, visualization -- match up with things LLMs aren't currently good at. Anyway, I know of no reason why AI systems couldn't be much better at mathematics than the likes of Terry Tao, never mind e.g. me, but they aren't close enough to that yet for "hey, ChatGPT, please evaluate my speculation that we should be unifying continuous and discrete mathematics via topoi in a way that links aleph, beth and Betti numbers and shows how our brains nucleate discrete samples of continuum reality" to produce output that has value for anything other than inspiration.)

ikrima [3 hidden]5 mins ago
Yup, it's 100% generated by an LLM. I thought that was intentionally clear? (I'm recovering from a TBI so I'm still adjusting to figuring out how to relearn typing; I use the LLMs as my voice mediated interface to typing out thoughts).

I'm not sure there's an argument I'm hearing here other than you seem to have triggered some internal heuristic of "this was written by an LLM" x "It contains math words I don't understand" => "this is bullshit"

which wouldn't be wrong, but I am making a specific constructionist modal logic here using infinity-groupoids from category theory. Infinite-dimensional categories are a thing, and that's what these transfinite numbers represent.

You have hyperreal constructions of the reals as well, which follow nonstandard analysis. You can also use Weil cohomology, which IIRC gets us most of calculus without the axiom of choice, but someone check me on that.

so....again, not sure what your specific critique is?

gjm11 [3 hidden]5 mins ago
No specific critique here other than "it was written by an LLM and this seems worth pointing out given that LLMs are bad at actually understanding difficult mathematics".

(In a different comment I make some actual criticisms of what you wrote. I see you replied to my comment there, and that's a more appropriate place to discuss actual ideas. I don't see much point in criticizing LLM output in a field LLMs are bad at.)

Anyway: (1) no, it wasn't clear. I wouldn't generally take "I nerd-sniped myself. Here's a more fleshed-out sketch of ..." to mean "Here's something written for me by an LLM". I'd take it to imply that the person had done the fleshing-out themself. And (2) no, the problem wasn't that you used words I don't understand. It's certainly possible that your ideas are excellent and I just don't understand them, but I'm a mathematician myself and none of the words scare me.

ikrima [3 hidden]5 mins ago
"no, it wasn't clear. I wouldn't generally take "I nerd-sniped myself. Here's a more fleshed-out sketch of ..." to mean "Here's something written for me by an LLM". I'd take it to imply that the person had done the fleshing-out themself."

aaaaaaaah, I think you finally helped me notice something subtle about the way I use LLMs compared to other people. It sounds obvious now that I think about it, but I never considered that people use LLMs like Google, whereas I use them more like a real-time thought transcriber (e.g. Dragon NaturallySpeaking, but not shite :P). Since it's trained on a RAG based off of my own polished thoughts, I've set it up as a meta-circular evaluator to do linguistic filtration (basically Fourier kernels on CLIP embedding space that map to various measures of "conceptual clarity").

So to me the LLM-ness of it is a clear flag that it was hastily dictated.

ikrima [3 hidden]5 mins ago
well, you're in luck because I'm about to make a fool of myself in trying to tease Terence Tao over at https://mathstodon.xyz/@mathemagical

Wish me luck!

ikrima [3 hidden]5 mins ago
You know what, since you put in all that work, here's my version using p-adic geometry to generalize the concept of time as a local relativistic "motive" (from category theory) notion of ordering (i.e. analogous to Grothendieck's generalization of functions as being point samples along curves of a basis of distributions to generalize notions of derivatives):

https://github.com/ikrima/topos.noether/blob/aeb55d403213089...

anthk [3 hidden]5 mins ago
Boot sector Lisp bootstraps itself.

https://justine.lol/sectorlisp2/

Also, lots of Lisp from https://t3x.org implement numerals (and the rest of stuff) from cons cells and apply/eval:

'John McCarthy discovered an elegant self-defining way to compute the above steps, more commonly known as the metacircular evaluator. Alan Kay once described this code as the "Maxwell's equations of software". Here are those equations as implemented by SectorLISP:

ASSOC EVAL EVCON APPLY EVLIS PAIRLIS '

Ditto with some Forths.

Back to T3X: the author has Zenlisp, where the meta-circular evaluation is basically about how to define eval/apply and how they call each other recursively.

http://t3x.org/zsp/index.html
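
(For readers who haven't met it, the eval/apply core being alluded to is tiny. Here is a rough toy version in Common Lisp -- a sketch of the idea, not SectorLISP's or Zenlisp's actual code; my-eval and my-apply are my own names:

    ;; A toy eval: symbols look up in an alist ENV, QUOTE/IF/LAMBDA are
    ;; special forms, everything else is a function application.
    (defun my-eval (form env)
      (cond ((symbolp form) (cdr (assoc form env)))
            ((atom form) form)
            ((eq (car form) 'quote) (cadr form))
            ((eq (car form) 'if)
             (if (my-eval (cadr form) env)
                 (my-eval (caddr form) env)
                 (my-eval (cadddr form) env)))
            ((eq (car form) 'lambda) (list 'closure form env))
            (t (my-apply (my-eval (car form) env)
                         (mapcar (lambda (a) (my-eval a env)) (cdr form))))))

    ;; A toy apply: closures re-enter MY-EVAL with the parameters bound;
    ;; anything else is assumed to be a host function.
    (defun my-apply (fn args)
      (if (and (consp fn) (eq (car fn) 'closure))
          (destructuring-bind (tag (lam params body) cenv) fn
            (declare (ignore tag lam))
            (my-eval body (append (mapcar #'cons params args) cenv)))
          (apply fn args)))

    ;; (my-eval '((lambda (x) (if x 'yes 'no)) (quote 1)) '())  ;=> YES

)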

btilly [3 hidden]5 mins ago
I knew that this sort of stuff was possible, but it is fun to see it.

When it comes to bootstrapping a programming language from nothing, the two best options are Lisp and Forth. Of the two, I find Lisp easier to understand.

dwohnitmok [3 hidden]5 mins ago
Your comment to JoJoModding in Math Stackexchange is incorrect.

"That's because there are nonstandard models of PA which contain infinite natural numbers. So PA may be able to prove that it produces a proof, but can't prove that the proof is of finite length. And therefore it might not be a valid proof."

If PA proves "PA proves X" then PA can prove X. This is because the key observation is not that there are nonstandard models, but rather that the standard natural numbers model PA.

Therefore if PA proves "PA proves X", then there is in fact a standard, finite natural number that corresponds to the encoded proof of "PA proves X". That finite natural number can be used then to construct a proof of X in PA.
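
(Schematically -- writing Prov for PA's provability predicate and Prf for its proof relation -- the argument above is:

    \mathrm{PA} \vdash \mathrm{Prov}(\ulcorner X \urcorner)
      \;\Longrightarrow\; \mathbb{N} \models \exists n\,\mathrm{Prf}(n, \ulcorner X \urcorner)
         \quad\text{(because } \mathbb{N} \models \mathrm{PA}\text{)}
      \;\Longrightarrow\; \text{some standard } n \text{ encodes a PA-proof of } X
      \;\Longrightarrow\; \mathrm{PA} \vdash X

)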

dwohnitmok [3 hidden]5 mins ago
I haven't had the time to go through your argument in more detail, but it's important to note (because the natural-language version of what you've presented is ambiguous) that you haven't shown "PA proves 'Provable(forall n, G(n))'" -- in which case it would in fact follow that "PA proves 'forall n, G(n)'" -- but rather "PA proves 'forall n, Provable(G(n))'".

My logic is very rusty at this point, but if someone could give me an argument that you cannot, in general, move the 'Provable' outside the 'forall' -- without making reference to Goodstein sequences -- I would really appreciate it. In other words, an argument that in general, for propositions 'P', proving "forall n, Provable(P(n))" does not imply you can prove "Provable(forall n, P(n))".

btilly [3 hidden]5 mins ago
> If PA proves "PA proves X" then PA can prove X.

Not true.

From PA we can construct a function that can search all possible proofs that can be constructed in PA. In fact I outlined one way to do this at the end of my answer.

With this function, we can construct a function will-return that analyzes whether a given function, with a given input, will return. This is kind of like an attempted solution to the Halting Problem. We know that it doesn't always work. But we also know that it works a lot of the time.

From will-return we can create a function opposite-return that tries to return if a given function with a given input would not, and doesn't return if that function would. This construction is identical to the one in the standard proof of the Halting Problem.

Now we consider (opposite-return opposite-return opposite-return). (Actually you need a step to expand the argument into this recursive form. I've left that out, but that is identical to the one in the standard proof of the Halting problem.)

PA can prove the following:

- PA proves that if PA can prove that opposite-return returns, then it doesn't.
- PA proves that if PA can prove that opposite-return doesn't return, then it does.
- PA proves that if it can prove everything that it proves that it can prove, then PA must have a proof of one of the two previous statements.
- Therefore PA proves that if it can prove everything that it proves that it can prove, then PA is inconsistent.

This is a form of Gödel's second incompleteness theorem.

And, therefore, there must be a distinction to be made between "PA proves" and "PA proves that it proves".

ComplexSystems [3 hidden]5 mins ago
I am not sure I follow and it is possible I may not be on the same wavelength, but here is another thought.

For any sentence S, if first-order PA (or any first-order theory) proves S, then that means that S holds true in every model of that theory, via the completeness (not incompleteness) theorem.

The statement "PA proves x" is equivalent to saying "there exists a natural number N which is a Gödel number encoding a proof of x." The letter "x", here, is a natural number that is assumed to encode some sentence we want to prove, that is, x = #S for some sentence S.

The above is a predicate with one existential quantifier: Provable(x) = there exists N such that IsProof(N, x) holds true, where IsProof says that N is the Gödel number of a valid proof ending in x.

If PA proves "Provable(x)", that means that the predicate "Provable(x)" holds in every model of PA. This means every model of PA would have some natural number N that satisfies IsProof(N, x). Of course, this number can vary from model to model.

However: the standard model, which has only standard natural numbers, is also another model of PA. So if PA proves "Provable(x)," and Provable(x) thus holds in every model, it must also hold in the standard model of PA. This means that there must be a regular, standard, finite natural number N_st that encodes a proof of x.

Since the standard model is an initial segment of every other model of PA, then every other model includes the standard model and all the standard natural numbers. Thus, if PA proves Provable(x), then the standard number N_st must exist in all models.

So we cannot have a situation where PA proves Provable(x) but all proofs of x must be nonstandard and infinite in length. This would mean no such proof exists in the standard model of PA - but then, via completeness, PA would not be able to prove "Provable(x)".

tromp [3 hidden]5 mins ago
> > If PA proves "PA proves X" then PA can prove X.

If we assume PA to be sound, then indeed everything it proves is true.

> Not true.

Now you're saying PA is unsound.

But your article wasn't about PA proves "PA proves X", it was about "forall n : PA proves G(n)".

For PA not to prove "forall n: G(n)", there is no soundness issue, only an ω-consistency issue.

btilly [3 hidden]5 mins ago
I think we are saying the same thing.

If PA proves that it proves a statement, PA cannot conclude from that fact that it proves that statement.

If PA proves that it proves a statement, and then fails to prove it, PA is unsound.

There exist collections of statements such that PA proves that it proves each statement, and PA does prove each statement, but PA does not prove the collection of statements.

Our understanding of the last is helped by understanding that "PA proves that PA proves S" is logically not the same statement as "PA proves S", even though they always have the same truth value.

ComplexSystems [3 hidden]5 mins ago
If PA proves that a number exists with some mathematical property - including being a Gödel number of a proof of something - then some number with that property must exist in every model, including the standard model. So there would have to be a standard number encoding a proof, and the proof that it encodes would have to be correct, assuming your Gödel numbering is.
dwohnitmok [3 hidden]5 mins ago
> There exist collections of statements such that PA proves that it proves each statement, and PA does prove each statement, but PA does not prove the collection of statements.

I think this is true (although I would like a significantly more succinct proof than one using Goodstein sequences as I mention here https://news.ycombinator.com/item?id=44277809), but regardless this is substantially weaker than your assertion that there are statements PA can prove that it can prove, but cannot itself prove.

In particular I can prove that for all propositions P, PA proves P if and only if PA proves "Provable(P)" (i.e. "P is provable in PA").

Given PA proves P, take the finite proof of P in PA and do the usual Gödel encoding to get a single number. That is a witness to the implicit existential statement behind "Provable(P)".

In the other direction, given PA proves "Provable(P)", take the natural-number witness of the implicit existential statement. Because the standard natural numbers satisfy PA (this is basically equivalent to the assumption that PA is consistent), we know that this natural number is in fact a standard, finite natural number. Hence we can transform this natural number back into a proof of P in PA.

This equivalence is precisely the reason why the provability relation (more accurately for our purposes stated as "provability in PA") "Provable" is valuable. If the equivalence didn't hold it would be very weird to call this relation "Provable".

In particular, this means PA proves P if and only if PA proves "Provable(Provable(P))" (and so on for as many nestings of Provable as you'd like).

> If PA proves that it proves a statement, PA cannot conclude from that fact that it proves that statement.

I kind of think I know what you mean by this, but definitely your conclusion is way too strong. In particular, it is true in a certain intuitive sense that PA cannot "use" the fact of "Provable(P)" to actually prove P. It is in a sense a coincidence that every P for which PA can prove P, PA can also prove "Provable(P)", since that equivalence is carried "outside" of PA. But that's not really anything to do with PA itself, but that's just rather the nature of any encoded representation.

Even what tromp is saying is quite strange to me (in apparently implying that it's normal for PA to have an omega-consistency issue). It would be very strange to assert PA is omega-inconsistent. It is tantamount to saying PA is inconsistent, since stating that PA is omega-inconsistent is saying that the standard natural numbers are not a valid model of PA.

dwohnitmok [3 hidden]5 mins ago
> PA proves that if it can prove everything that it proves that it can prove, then PA must have a proof of one of the two previous statements.

I don't believe this is true. I don't know what result you're using here, but I think you're mixing up "provable" and "true".

In particular, your line of reasoning violates Löb's theorem, which is a corollary of the second incompleteness theorem.
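
(For reference, Löb's theorem: for any sentence P,

    \mathrm{PA} \vdash \big(\mathrm{Prov}(\ulcorner P \urcorner) \rightarrow P\big)
      \;\Longrightarrow\; \mathrm{PA} \vdash P

so PA can prove "if P is provable then P" only when it already proves P outright.)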

Animats [3 hidden]5 mins ago
This is very like Boyer-Moore theory[1], which builds up mathematics from the Peano axiom level.

Boyer and Moore wrote an automated theorem prover that goes with this theory. I have a working copy for GNU Common LISP at [2].

"It is perhaps easiest to think of our program much as one would think of a reasonably good mathematics student: given the axioms of Peano, he could hardly be expected to prove (much less discover) the prime factorization theorem. However, he could cope quite well if given the axioms of Peano and a list of theorems to prove (e.g., “prove that addition is commutative,” . . . “prove that multiplication distributes over addition,” . . . “prove that the result returned by the GCD function divides both of its arguments,” . . . “prove that if the products over two sequences of primes are identical, then the two sequences are permutations of one another”)." - Boyer and Moore

[1] https://www.cs.utexas.edu/~boyer/acl.pdf

[2] https://github.com/John-Nagle/nqthm/tree/master

dooglius [3 hidden]5 mins ago
The interesting part to me (I have a background in both math+programming) isn't so much the encoding of computation but that one can work around the independence of Goodstein's theorem in this self-referential way. I think this implies that PA+"PA is omega-consistent" can prove Goodstein's theorem, and perhaps can more generally do transfinite induction up to epsilon_0? EDIT: I think just PA+"PA is consistent" is enough?
bubblyworld [3 hidden]5 mins ago
Unfortunately not, and apparently no other purely universally quantified formulas will do either (so this is a more general thing, not specific to Con(PA)): https://math.stackexchange.com/questions/5003237/can-goodste...

On the first question, how do you encode omega-consistency as a formula of PA? Just curious, it's not at all obvious to me.

LegionMammal978 [3 hidden]5 mins ago
> On the first question, how do you encode omega-consistency as a formula of PA? Just curious, it's not at all obvious to me.

I was also wondering about that, but going by the Wikipedia definition, it doesn't seem too complicated: you say, "For all encoded propositions P(x): If for every x there exists an encoded proof of P(x), then there does not exist an encoded proof of 'there exists an x such that ¬P(x)'." That is, if you can prove a proposition for each integer in the metatheory, then quantifiers within the target theory must respect that.
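
(Spelled out -- with sub(p, n) the assumed code of the formula p with the numeral for n substituted in, exneg(p) the assumed code of "exists x, not P(x)", and Prf the encoded proof relation -- the sentence is roughly:

    \forall p\; \Big[ \big(\forall n\, \exists c\; \mathrm{Prf}(c,\ \mathrm{sub}(p, n))\big)
      \;\rightarrow\; \neg\, \exists d\; \mathrm{Prf}(d,\ \mathrm{exneg}(p)) \Big]

)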

bubblyworld [3 hidden]5 mins ago
Thanks, I see, so you pick some Gödel numbering and then quantifying over propositions is actually just quantifying over elements of the domain (and using your encodings of Sub(...) and Proves(...) and such). I see why that might have a chance of working, because it's now much higher up the arithmetic hierarchy.
LegionMammal978 [3 hidden]5 mins ago
As far as the arithmetic hierarchy goes, ω-consistency should just be a Π₂ sentence, with a universal quantifier over the encoded propositions, and an existential quantifier for the negated antecedent. All the implementation details of the encoding should be bounded. (Cf. ordinary consistency, which is the Π₁ sentence "there does not exist an encoded proof of a contradiction".)
Kotlopou [3 hidden]5 mins ago
Asker of the SO question here: I edited the question to link to a few other answers on this kind of thing. Essentially, "PA is consistent" is not enough, but a "uniform reflection principle" that says "if PA proves something, it's true" is enough. I'm not 100% certain that this principle is equivalent to omega-consistency, but if I'm reading this correctly, it should be:

https://en.wikipedia.org/wiki/%CE%A9-consistent_theory#Relat...

The Wikipedia article says T is omega-consistent if "T + RFN_T + the set of all true sentences is consistent", which should mean the same thing as "T + RFN_T is true".

codeflo [3 hidden]5 mins ago
I also like the recursion. In essence, you're making a meta-proof about what PA proves, and given that you trust PA, you also trust this meta-proof.

> I think just PA+"PA is consistent" is enough?

It's not clear to me how. I believe PA+"PA is consistent" would allow a model where Goodstein's theorem is true for the standard natural numbers, but that also contains some nonstandard integer N for which Goodstein's theorem is false. I think that's exactly the case that's ruled out by the stronger statement of ω-consistency.

NoahZuniga [3 hidden]5 mins ago
I think so; the math exchange post mentions that PA + transfinite induction up to epsilon_0 proves the consistency of PA. It seems likely to me that PA + "PA is consistent" would be able to prove transfinite induction up to epsilon_0.
btilly [3 hidden]5 mins ago
Now we're getting a little beyond the detail that I feel comfortable making statements about.

ChatGPT tells me that PA+"PA is consistent" is not quite enough. I believe that it has digested enough logic textbooks that I'll believe that claim.

gugagore [3 hidden]5 mins ago
I was talking to someone about inductive data types, and showed them the zero/succ definition of `Nat`, e.g. in Lean or Rocq.
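
(Roughly this, in Lean 4 syntax -- renamed MyNat here so it doesn't shadow the built-in:

    inductive MyNat where
      | zero : MyNat
      | succ : MyNat → MyNat

    -- Addition by recursion on the second argument, Peano-style.
    def MyNat.add : MyNat → MyNat → MyNat
      | m, .zero   => m
      | m, .succ n => .succ (MyNat.add m n)

Injectivity of succ, zero ≠ succ n, and the induction principle are all derivable from the inductive definition's recursor, which is the sense in which nothing more primitive is needed.)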

It was interesting because they were asking "is this all you need? What about the Peano axioms? Is there anything more primitive than the inductive data type?"

I bring it up because it's good not to take for granted that the Peano axioms are inherent, instead of just one design among many.

SabrinaJewson [3 hidden]5 mins ago
> Is there anything more primitive than the inductive data type?

I believe that the natural numbers are more primitive than inductive data types, since all inductive data types may be constructed from the natural numbers alongside a small number of primitive type formers (e.g. Π, Σ, = and Ω).

gugagore [3 hidden]5 mins ago
You don't need all the natural numbers for that, though. I think you need 0 and 1 only?

I think there are two primitive sets for dependent type theory: the one with omega, and then the one with inductive types. Neither of them needs axioms like the Peano axioms.

cryptonector [3 hidden]5 mins ago
Pure lambda calculus is enough because lambda calculus encodes computation.
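(E.g. Church numerals, where the number n is "apply f to x, n times" -- a minimal Common Lisp sketch, not from the post:

    ;; Zero applies F no times; SUCC adds one more application.
    (defparameter church-zero
      (lambda (f) (declare (ignore f)) (lambda (x) x)))

    (defun church-succ (n)
      (lambda (f) (lambda (x) (funcall f (funcall (funcall n f) x)))))

    (defun church->int (n)
      "Decode a Church numeral by counting applications of 1+."
      (funcall (funcall n #'1+) 0))

    ;; (church->int (church-succ (church-succ church-zero)))  ;=> 2

)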
atsmyles [3 hidden]5 mins ago
Related to PA consistency, it can be proven in PA. https://youtu.be/6pjLmmkZnIA
robinhouston [3 hidden]5 mins ago
This definitely needs some context for the non-logicians in the house! Gödel's second incompleteness theorem shows that, if PA can prove its own consistency, then PA is inconsistent (and can therefore prove anything, including false things).

The work linked here doesn't show that PA is inconsistent, however: what it does is to define a new, weaker notion of what it means for PA to “prove its own consistency” and to show that PA can do that weaker thing.

Interesting work for sure, but it won't mean anything to you unless you already know a lot of logic.

petters [3 hidden]5 mins ago
This topic has 123 points, but the linked post on SO only has 11 upvotes.
netruk44 [3 hidden]5 mins ago
Stack Overflow requires you to get 15 points before you can upvote things.

I imagine their reputation problem (nobody wants to post things there because it just gets deleted) plus the 15 point requirement precludes many people from upvoting over there.

RossBencina [3 hidden]5 mins ago
But is computation enough? The computable reals are a subset of the reals.
A_D_E_P_T [3 hidden]5 mins ago
"Reals" (tragically poorly named) can be interpreted as physical ratios.

That is: Real numbers describe real, concrete relations. For example, saying that Jones weighs 180.255 pounds means there's a real, physical relationship -- a ratio -- between Jones' weight and the standard pound. Because both weights exist physically, their ratio also exists physically. Thus, from this viewpoint, real numbers can be viewed as ratios.

In contrast, the common philosophical stance on numbers is that they are abstract concepts, detached from the actual physical process of measurement. Numbers are seen as external representations tied to real-world features through human conventions. This "representational" approach, influenced by the idea that numbers are abstract entities, became dominant in the 20th century.

But the 20th century viewpoint is really just one interpretation (you could call it "Platonic"), and, just as it's impossible to measure ratios to infinite precision in the real world, absolutely nothing requires an incomputable continuum of reals.

Physics least of all. In 20th and 21st century physics, things are discrete (quantized) and are very rarely measured to over 50 significant digits. Infinite precision is never attainable, and precision to 2000 significant digits is likewise impossible -- and not only because quantum mechanics makes it impossible to attain great precision on very small scales. For example, imagine measuring the orbits of the planets and moons in the solar system: by the time you get to 50 significant digits, you will need to take into account the gravitational effects of the stars nearest to the sun; before you get to 100 significant digits, you'll need to model the entire Milky Way galaxy; the further you go in search of precision, the exponentially larger your mathematical canvas will need to grow, and at arbitrarily high sub-infinite precision you'd be required to model the whole of the observable universe -- which might itself be futile, as objects and phenomena outside observable space could affect your measurements, etc. So though everything is in principle simulatable, and precision has a set limit in a granular universe that can be described mathematically, measuring anything to arbitrarily high precision is beyond finite human efforts.

meroes [3 hidden]5 mins ago
> absolutely nothing requires an incomputable continuum of reals.

In a sense this is wrong. Some say the indispensability of modern math, which includes the incomputable reals, shows that such abstract objects are required for science.

bhk [3 hidden]5 mins ago
In what way does analysis require incomputable reals?
meroes [3 hidden]5 mins ago
The argument is more that our most successful sciences, like physics, use our most successful math, which is ZFC. And thus the incomputable reals are part of the package deal. Maybe it's possible to do physics without packaging in lots of these mathematical objects, but to my knowledge it hasn't been done (physics based on a pared-down math, from top to bottom). It's a de facto requirement: despite efforts like intuitionism or Science Without Numbers, nothing has replaced classic ZFC as our most successful math.
random3 [3 hidden]5 mins ago
If popularity is the measure of success, then ZFC is successful. However, I'm not sure there's anything that requires it, and I doubt there's a strong claim wrt any ZFC requirement. So saying science requires it is like saying that ARM is required vs x86, or that IEEE 754 is, because your experimental setup runs on it.
jfengel [3 hidden]5 mins ago
As far as we know, quantum mechanics does not have a granularity. You can measure any single value to arbitrary precision; you only have limits on measuring two things simultaneously.

Granularity is implied by some, but not all, post standard model physics. It's a very open question.

A_D_E_P_T [3 hidden]5 mins ago
Whether or not nature is discrete is an open question, but it's rather strongly implied, and there's absolutely nothing to suggest that the universe is incomputable.

> You can measure any value to arbitrary precision

Quantum mechanics lets you refine one observable indefinitely if you are willing to sacrifice conjugate observables.

You can measure that one observable to an arbitrarily (not infinitely!) high precision -- if you have a correspondingly arbitrary duration of time and arbitrarily powerful computational resources. That measurement of yours, if exceedingly precise, might require a block of computronium which utilizes the entire energy resources of the universe. As a practical matter, that's not permitted. I'm certainly unaware of any measurement in physics to more than 100 significant digits, let alone "arbitrary precision".

In fact, as it turns out, no experiment has ever resolved structure down to the Planck length. A fundamental spatial resolution is likely to be quite a lot smaller than the Planck length; even the diameter of the electron has been estimated at anywhere from 10^-22m to 10^-81m.

The question I was responding to asked whether reals are "necessary" -- plainly, insofar as reality is concerned, they are not.

jfengel [3 hidden]5 mins ago
We cannot measure anywhere near the Planck length or anything like 100 significant figures.

My concern with the reals goes the other direction. The reals are not closed under common operations. A lot of the work is done in complex numbers, which are. The rationals are not closed under radicals, and radicals seem pretty fundamental.

You can define a finitist model despite this. But it's ugly, and while ugliness is not physically meaningful, it tends to make progress difficult. A usable solution may yet arise; we shall (perhaps) see.

adrian_b [3 hidden]5 mins ago
The set of values of any physical quantity must have an algebraic structure that satisfies a set of axioms that include the axioms of the Archimedean group (which include the requirements that it must be possible to compare, add and subtract the values of that physical quantity).

This requirement is necessary to allow the definition of a division operation, which has as operands a pair of values of that physical quantity, and as result a scalar a.k.a. "real" number. This division operation, as you have noticed, is called "measurement" of that physical quantity. A value of some physical quantity, i.e. the dividend in the measurement operation, is specified by writing the quotient and the divisor of the measurement, e.g. in "6 inches", "6" is the quotient and "inch" is the divisor.

In principle, this kind of division operation, like any division, could have its digit-generating steps executed infinitely, producing an approximation as close as desired for the value of the measured quantity, which is supposed to be an arbitrary scalar, a.k.a. "real" number. Halting the division after a finite number of steps will produce a rational number.

In practice, as you have described, the desire to execute the division in a finite time is not the only thing that limits the precision of the measured values, but there are many more constraints, caused by the noise that could need longer and longer times to be filtered, by external influences that become harder and harder to be suppressed or accounted for, by ever greater cost of the components of the measurement apparatus, by the growing energy required to perform the measurement, and so on.

Nevertheless, despite the fact that the results of all practical measurements are rational numbers of low precision, normally representable as FP32, with only measurements done in a few laboratories around the world, which use extremely expensive equipment, requiring an FP64 or an extended precision representation, it is still preferable to model the set of scalars using the traditional axioms of the continuous straight line, i.e. of the "real" numbers.

The reason is that this mathematical model of a continuous set is actually much simpler than attempting to model the sets of values of physical quantities as discrete sets. An obvious reason why the continuous model is simpler is that you cannot find discretization steps that are good both for the side and for the diagonal of a square, which has stopped the attempts of the Ancient Greeks to describe all quantities as discrete. Already Aristotle was making a clear distinction between discrete quantities and continuous quantities. Working around the Ancient Greek paradox requires lack of isotropy of the space, i.e. discretization also of the angles, which brings a lot of complications, e.g. things like rigid squares or circles cannot exist.

The base continuous dynamical quantities are the space and time, together with a third quantity, which today is really the electric voltage (because of the convenient existence of the Josephson voltage-frequency converters), even if the documents of the International System of Units are written in an obfuscated way that hides this, in an attempt to preserve the illusion that the mass might be a base quantity, like in the older systems of units.

In any theory where some physical quantities that are now modeled as continuous, were modeled as discrete instead, the space and time would also be discrete. There have been many attempts to model the space-time as a discrete lattice, but none of them has produced any useful result. Unless something revolutionary will be discovered, all such attempts appear to be just a big waste of time.

btilly [3 hidden]5 mins ago
Your first paragraph is contradicted by the Heisenberg uncertainty principle.
adrian_b [3 hidden]5 mins ago
The Heisenberg uncertainty principle is completely irrelevant for metrology and it certainly does not have any relationship whatsoever with the algebraic structure of the set of values of a physical quantity.

The Heisenberg uncertainty principle is just a trivial consequence of the properties of the Fourier transform. It just states that there are certain pairs of physical quantities which are not independent (because their probability densities are connected by a Fourier transform relationship), so measuring both simultaneously with an arbitrary precision is not possible.

The Heisenberg uncertainty principle says absolutely nothing about the measurement of a single physical quantity or about the simultaneous measurement of a pair of independent physical quantities.

There is no such thing as a quantity that cannot be measured, i.e. one whose set of values does not have the required Archimedean group algebraic structure. If it cannot be measured, it is not a quantity. (There are also qualities, which can only be compared, not measured, so the sets of their values are only ordered sets, not Archimedean groups. An example of a physical quality, which is not a physical quantity, is the Mohs hardness, whose numeric values are just labels attached to certain values; a Mohs hardness of "3" could just as well have been labeled "Ktcwy" or any other arbitrary string. The numeric labels have been chosen only to make the order between them easy to remember.)

btilly [3 hidden]5 mins ago
The Heisenberg uncertainty principle says that basic physical quantities do not have unique values. They have ranges of values. And therefore it is not always possible to compare two physical properties. For example you can't always compare two particles to discover which is farther away from you.

Various proposals exist in which the fundamental physical structure of space form a kind of "quantum foam". And so, somewhere near the Planck length, it doesn't even make sense to talk about distance.

adrian_b [3 hidden]5 mins ago
I am sorry, but you have forgotten what the Heisenberg uncertainty principle really says.

What you have in mind about "ranges of values" has nothing to do with Heisenberg, but with the formalism of quantum mechanics as developed by Schroedinger, Dirac and others (not counting the matrix mechanics of Heisenberg, which was an inferior mathematical method, soon forgotten after superior alternatives were developed, and which is something else entirely from the Heisenberg uncertainty principle).

In the various variants of the formalism of quantum mechanics, the correspondent of a single value in classic dynamics is a function, the so-called wave function, but even those functions cannot be called "ranges of values". There are several interpretations of what the "wave" functions mean, but the most common is that they are probability densities for the values of the corresponding physical quantity.

Depending on the context, in quantum mechanics the value of a physical quantity may be unknown, when only the probability of it having various values is known, but the value may also be known with absolute certainty, usually in some stationary states.

In both cases the value of the physical quantity can be measured, but in the former only an average value can be measured, which nonetheless may have unlimited precision, while in the latter case the exact value can be measured, exactly like in classical dynamics.

btilly [3 hidden]5 mins ago
You are being unnecessarily rude. I have not forgotten what it says.

There are pairs of physical properties, connected by a Fourier transform, that cannot both be known at once. Therefore each property is, in general, only known to be in a range. And therefore we cannot always compare these properties between different particles.

As for measurement, in the Everett interpretation the values are not known after measurement either. All that is knowable are correlations due to entanglement.

But now we are far afield. You made an a priori statement about physics. Physics is not required to oblige you. My understanding of physics says that it does not. But even if I am wrong, and it currently does oblige you, it is still not required to.

A priori assertions do not have a good history. For an example, look at Kant's assertion that reality must be described by Euclidean geometry. He was not in a position to understand the ways in which he would prove to be wrong.

anthk [3 hidden]5 mins ago
You need to read about limits in calculus.
A_D_E_P_T [3 hidden]5 mins ago
lol dude, you need to read about the philosophy of mathematics. Like, what is it? Besides, go back and read the post I was responding to.

Now, are the reals necessary?

adrian_b [3 hidden]5 mins ago
As I have explained in another reply, the reals are not necessary, but using them to model the sets of values of the dynamic quantities is much simpler than any alternative that attempts to use only rational numbers.

Over the last few decades, many research papers have been published exploring the use of discreteness instead of the traditional continuity, but they cannot be considered anything but failures. The resulting mathematical models are much more complicated than the classic models, without offering any extra predictions that could be verified.

The reason for the complications is that in physics it would be pointless to try to model a single physical quantity as discrete instead of continuous. You have a system of many interrelated quantities, and trying to model all of them as discrete quickly runs into contradictions for the simpler models, because of the irrational or transcendental functions that relate some quantities, functions that appear even when modeling something as simple as a rotation or an oscillation. If, in order to avoid incommensurability, you replace a classic continuous uniform rotation with a sequence of unequal jumps in angular orientation, to match the possible directions of nodes in a discrete lattice, the resulting model becomes far more complicated than the classic continuous one.

btilly [3 hidden]5 mins ago
I disagree with calling them all failures.

Everything that is known about numerical analysis is based on discreteness, instead of continuity. Every useful predictive model that we have about the world, for example for forecasting weather, depends on numerical analysis.

I consider this a success!

adrian_b [3 hidden]5 mins ago
Nope.

Forecasting weather, designing semiconductor devices, and other such activities are all based on continuous mathematical models, which use e.g. systems of partial differential equations or systems of integral equations.

Only in the last stage of solving such a problem, in order to use a digital computer, is the continuous mathematical model approximated by a discrete mathematical model. That discrete model is constructed with a standard method, e.g. finite elements, boundary elements, or finite differences, most of which approximate an unknown function having infinitely many degrees of freedom by a function determined by a finite number of parameters, and then approximate the operations required to compute those parameters by operations on floating-point numbers.
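
To make that last stage concrete, here is a rough sketch of one explicit finite-difference step for the 1-D heat equation u_t = u_xx (a toy illustration only, with made-up names, assuming explicit Euler time stepping and fixed boundary values); Scheme is used since that is the Lisp dialect mentioned elsewhere in the thread:

    ;; One explicit Euler step of the discretized 1-D heat equation.
    ;; u is a vector of samples of the unknown function on a uniform grid;
    ;; dt and dx are the time step and grid spacing (stable when dt/dx^2 <= 1/2).
    (define (heat-step u dt dx)
      (let* ((n (vector-length u))
             (r (/ dt (* dx dx)))
             (v (make-vector n)))
        ;; Dirichlet boundaries: keep the endpoint values unchanged.
        (vector-set! v 0 (vector-ref u 0))
        (vector-set! v (- n 1) (vector-ref u (- n 1)))
        ;; Interior points: u_i + r * (u_{i-1} - 2 u_i + u_{i+1}).
        (do ((i 1 (+ i 1)))
            ((= i (- n 1)) v)
          (vector-set! v i
            (+ (vector-ref u i)
               (* r (- (+ (vector-ref u (- i 1)) (vector-ref u (+ i 1)))
                       (* 2 (vector-ref u i)))))))))

The continuous model (the PDE) is what carries the physics; the finite vector of samples is only the approximation introduced so a digital computer can handle it.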

Such an approximating method is very different from formulating a discrete mathematical model of physics that is considered the exact model.

Even the graphics for a game are based on continuous models of space, not on discrete models, in which it would be very difficult to implement something like rotating an object or rendering perspective views.

The failures to which I have referred are the attempts to create such discrete models of physics, e.g. where space and time are discrete, not continuous.

These attempts have nothing in common with the approximating techniques, which are indeed the basis for most successes in using digital computers.

gyrovagueGeist [3 hidden]5 mins ago
Yep! Optimizing (solving in infinite dimensions) and then discretizing onto a finite basis has typically led to much better and more stable methods than a discretize-then-optimize approach.

Time-scale calculus is a pretty niche theoretical field that looks at blending the analysis of difference and differential equations, but I'm not aware of any algorithmic advances based on it.

jiggawatts [3 hidden]5 mins ago
Numbers interpretable as ratios are the Rational numbers, by definition, not the Reals.

This entire discussion is about mathematical concepts, not physical ones!

Sure, yes, in physics you never "need" to go past a certain number of digits, but that has nothing to do with mathematical abstractions such as the types of numbers. They're very specifically and strictly defined, starting from certain axioms. Quantum mechanics and the measurability of particles have nothing to do with it!

It's also an open question how much precision the Universe actually has, such as whether things occur at a higher precision than can be practically measured, or whether the ultimate limit of measurement capability is the precision that the Universe "keeps" in its microscopic states.

For example, let's assume that physics occurs with some finite precision -- so not the infinite precision reals -- and that this precision is exactly the maximum possible measurable precision for any conceivable experiment. That is: Information is matter. Okay... which number space is this? Booleans? Integers? Rationals? In what space? A 3D grid? Waves in some phase space? Subdivided... how?

Figure that out, and your Nobel prize awaits!

adrian_b [3 hidden]5 mins ago
Rational numbers are ratios of integers.

There are plenty of ratios that are ratios of things other than integers, so they are not rational numbers.

jiggawatts [3 hidden]5 mins ago
Ratios of numbers that are not integers or Rationals are... the Reals. I mean sure, you could get pedantic and talk about ratios of complex integers or whatever, but that's missing the point: The Rationals are closed under division, which means the ratio of any two Rationals (with a nonzero divisor) is a Rational. To "escape" the Rationals, the next step up is Irrational numbers. Square roots, and the like. The instant you mix in Pi or anything similar, you're firmly in the Reals, and they're like a tarpit: there's no escape once you've stepped off the infinitesimal island of the Rationals.
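
For what it's worth, the closure is easy to see in any language with exact rationals built in; a quick sketch in Scheme (arbitrary example values):

    ;; Dividing two exact rationals always yields an exact rational
    ;; (provided the divisor is nonzero).
    (/ 22/7 355/113)   ; => 2486/2485, still an exact rational
    ;; Escaping the rationals takes something like a square root:
    (sqrt 2)           ; => 1.4142135623730951 in most Schemes, an inexact approximation
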
adrian_b [3 hidden]5 mins ago
There are many other kinds of ratios.

Ratios of collinear vectors are scalars a.k.a. "real" numbers, ratios of other kinds of vectors are matrices, ratios of 2D-vectors are "complex" numbers, ratios of 2 voltages are scalars a.k.a. "real" numbers, and so on.

In general, for both multiplication and division operations, the 3 sets corresponding to the 2 operands and to the result are not the same.

Only for a few kinds of multiplications and divisions are the 3 sets the same. This strongly differs from addition operations, which are normally defined on a single set to which both the operands and the result belong.

In practice, multiplications and divisions where at least one operand or the result belongs to a different set than the others are extremely frequent. Any problem of physics contains such multiplications and divisions.

jiggawatts [3 hidden]5 mins ago
> Ratios of collinear vectors are scalars a.k.a. "real" numbers, ratios of other kinds of vectors are matrices,

Wat? Vectors do not form a division algebra. You can't "divide" vectors! Perhaps you're thinking of the dot product, which returns a scalar.

Also, you're conflating physical/engineering concepts (such as units of measure) with mathematical abstractions such as number spaces, which don't have units.

Physical measurements exist in the real world, with its limitations, units, and practicalities.

Mathematical numbers exist in a pure theory space that is completely and totally independent of any physical reality. They're just axioms and rules. Definitions and logical conclusions.

Don't mix up the two!

anthk [3 hidden]5 mins ago
Your whole comment can just be TL;DR'd as: Forth and the fixed-point philosophy :)

Rationals > irrationals when it comes to computing them. You can always approximate irrationals with rationals; even Scheme (Lisp, d'oh) has functions to convert a rational to a decimal and the reverse, a decimal to a rational.
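
A small sketch of the conversions being referred to, using standard Scheme procedures (exact printed forms may differ slightly between implementations):

    (exact->inexact 1/3)                          ; rational -> decimal, => .3333333333333333
    (inexact->exact 0.5)                          ; decimal -> rational, => 1/2
    ;; rationalize picks the simplest rational within a given tolerance:
    (rationalize (inexact->exact 0.333) 1/1000)   ; => 1/3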

btilly [3 hidden]5 mins ago
Are they?

The idea that uncountable means more comes from a bad metaphor. See https://news.ycombinator.com/item?id=44271589 for my explanation of that.

Accepting that uncountable means more forces us to debatable notions of existence. See https://news.ycombinator.com/item?id=44270383 for a debate over it.

But, finally, there is this. Every chain of reasoning that we can ever come up with can be represented on a computer. So even if you wish to believe in some extension of ZFC with extremely large sets, PA is capable of proving, for each conclusion that follows from your chosen set of axioms, that it does follow. So yes, PA is enough.

If you're not convinced, I recommend reading https://www.amazon.com/G%C3%B6del-Escher-Bach-Eternal-Golden....

112233 [3 hidden]5 mins ago
I like to think about it like this: while real numbers in general are impossible to compute, write down, or otherwise manipulate, many statements about real numbers can be expressed in a useful, computable form.
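
For instance (a toy example of my own, not from anything linked here): sqrt(2) cannot be written down exactly, but the statement "sqrt(2) < q" for a positive rational q is decided by a finite computation on exact rationals:

    ;; Is sqrt(2) < q, for a positive exact rational q?
    ;; Equivalent to asking whether 2 < q*q, which needs only rational arithmetic.
    (define (sqrt2<? q)
      (< 2 (* q q)))
    (sqrt2<? 3/2)   ; => #t, since (3/2)^2 = 9/4 > 2
    (sqrt2<? 7/5)   ; => #f, since (7/5)^2 = 49/25 < 2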

It all gets mind-bendingly mind-bending really fast, especially with people like Harvey Friedman ( https://u.osu.edu/friedman.8/foundational-adventures/fom-ema... ) or the author of this post trying to break the math by constructing intractably large values with the simplest means possible (thus showing that you can encounter problems that do not fit in the universe even when working in a "simple" theory)

(i saw the username and went to check audiomulch website, but it did not resolve :( )

psychoslave [3 hidden]5 mins ago
This is an underspecified question until some observable goal is attached to "enough". Enough for what?
somewhereoutth [3 hidden]5 mins ago
> But is computation enough?

Of course not, but that would invalidate the entire project of some/many here to turn reality into something clockwork they can believe they understand. Reality is much more interesting than that.

zozbot234 [3 hidden]5 mins ago
Given my former experiences with encoding type-level computation in Haskell and Rust, I'd kinda rephrase that statement in the title, and say "peano arithmetic is enough, if you only ever need to go up to 88." https://upload.wikimedia.org/wikipedia/commons/b/bd/D274.jpg