This might be a dumb question but are there any current or foreseeable practical applications of this kind of result (like in the realm of distributed computing) or is this just pure mathematics for its own sake?
One can, in theory, construct number sets (fields) with holes in them - that's truly discrete numbers. Such number sets are at most countably infinite, though they need not be infinite at all. One useful such set might have the Planck length as its smallest number, beyond which there is a hole. The problem with using such number sets is that the ordinary rules of arithmetic break down, i.e., division has to be defined modulo the Planck length.
It's cons'es all the way down.
Finally - we can calculate infinity.
Been a long way towards it. \o/
If you’re in the Bay Area next month then The Dillinger Escape Plan are bringing the Calculating Infinity circus to town(s):
https://www.theregencyballroom.com/events/detail/?event_id=1...
This is enormously off topic but if one person sees this and ends up not missing the show then it was worth mentioning. I think there’s a reasonable crossover between math, discrete math, hacking, and mathcore :)
Took me an embarrassingly long time to realize you're talking about a music event not a math lecture
Fairly trivial, in Haskell:
let x = x in x
Completely encapsulates a countable infinity.
> Finally - we can calculate infinity.
And Beyond!
∞/1 + 1/∞
One infinitesimal step for the numerical, one giant step for mathkind.
This has indeed taken forever /s
> computer science with the finite
um... no... computer science is very concerned with the infinite. I'm surprised Quanta published this. I always think highly of their reporting.
Computer science isn't concerned with the uncountably infinite, though, which is what measure theory is mostly concerned with, since countable sets all have measure zero.
I think this is pretty common for Quanta, and it might be sticking out more because it's a field we're familiar with.
I'm really torn about this, because I think they're providing a valuable service. But their general formula doesn't diverge a whole lot from run-of-the-mill pop-science books: a vague, clickbaity title and then an article that focuses on personalities and implications of discoveries while glossing over a lot of important details (and not teaching much).
Initially I too thought, "but we try to approximate infinity in CS all the time."
But I have come to think: well, actually, "approximate" is doing some heavy lifting there. And I have never used infinity for anything except to say "look, I don't know how high this should go, so go as far as you can go, and double that" - which is really saying: you are bound by finite limits, you'll have to work within them, and the uncountable thing I was thinking about is really finite.
Edit: Think of it like this
We know that it's most likely that the universe is infinite, but we can only determine how big it is by how far we can see, which is bounded by the speed of light and the fact that we can only see matter emitting light. (I'm being careful here: if the Big Bang theory is right, and I am understanding it correctly, there is a finite amount of matter in the universe, but the universe itself is infinite.)
Asymptotics is a good example of CS using infinity practically.
Also, a small aside: there’s a finite amount of matter in the visible universe. We could have infinite matter in an infinite universe.
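To make the asymptotics point concrete (this is just the standard textbook definition, nothing specific to the article): infinity only ever shows up as a limit, while every individual quantity stays finite.

    f(n) = O(g(n)) \iff \exists\, C > 0,\ n_0 \ \text{such that}\ |f(n)| \le C\,|g(n)| \ \text{for all}\ n \ge n_0,
    \text{equivalently (when } g(n) > 0 \text{ for large } n\text{): } \limsup_{n \to \infty} |f(n)|/|g(n)| < \infty.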
Another glaring example:
> Set theorists use the language of logic, computer scientists the language of algorithms.
Computer science doesn’t use logic? Hello, Booleans.
So lazy, especially when you can ask an AI to tell you if you’re saying something stupid.
That’s nothing, ‘node_modules’ has been linking the math of infinity to my filesystem for years.
I studied math for a long time. I’m convinced math would be better without infinity. It doesn’t exist. I also think we don’t need numbers that are too big, but we can leave those.
Doing math without it is pretty ugly.
(Granted, there are other objects that may seem ugly which can only be constructed by reference to infinite things.)
You can do a lot of things without assuming that the natural numbers keep going, but it is plain awful to work with.
I haven't studied math beyond what was needed for my engineering courses.
However, I also am starting to believe that infinity doesn't exist.
Or more specifically, I want to argue that infinity is not a number, it is a process. When you say {1, 2, 3, ... } the "..." represents a process of extending the set without a halting condition.
There is no infinity at the end of a number line. There is a process that says how to extend that number line ever further.
There is no infinity'th prime number. There is a process by which you can show that a bigger prime number must always exist.
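FWIW, that "process" reading has a direct computational analogue. A minimal Haskell sketch (my own illustration, not from the article): the "..." becomes a recipe for producing the next element, and only the finite prefix you actually ask for ever gets built.

    -- {1, 2, 3, ...} as a process rather than a completed object:
    -- a recipe with no halting condition, evaluated lazily.
    naturals :: [Integer]
    naturals = 1 : map (+ 1) naturals

    main :: IO ()
    main = print (take 10 naturals)  -- forces only a finite prefix: [1,2,3,4,5,6,7,8,9,10]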
Whether you think infinity exists or not is up to you, but transfinite mathematics is very useful; it's used to prove theorems like Goodstein's theorem in a surprisingly elegant way. That theorem doesn't seem to have anything to do with infinity at first glance.
> There is no infinity at the end of a number line. There is a process that says how to extend that number line ever further.
Sure, but ordinal numbers exist and are useful. Goodstein's theorem can't be proved in Peano arithmetic alone; the standard proof runs through the transfinite ordinals.
https://en.wikipedia.org/wiki/Ordinal_number
https://en.wikipedia.org/wiki/Goodstein%27s_theorem
The statement and proof of the theorem are quite accessible and eye-opening. I think the number line with ordinals is way cooler than the one without them.
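If anyone wants to play with it, here's a small Haskell sketch of the Goodstein sequence itself (my own code, just the textbook construction: write the current value in hereditary base-b notation, reinterpret every b as b+1, then subtract 1):

    -- Rewrite n in hereditary base-b notation, then reinterpret every
    -- occurrence of the base b as b + 1.
    bumpBase :: Integer -> Integer -> Integer
    bumpBase _ 0 = 0
    bumpBase b n = (b + 1) ^ bumpBase b e * d + bumpBase b r
      where
        e = last (takeWhile (\k -> b ^ k <= n) [0 ..])  -- largest exponent with b^e <= n
        d = n `div` (b ^ e)
        r = n `mod` (b ^ e)

    -- The Goodstein sequence starting from n (base starts at 2).
    -- Goodstein's theorem says this always reaches 0, but e.g. goodstein 4
    -- takes astronomically many steps, so only look at finite prefixes.
    goodstein :: Integer -> [Integer]
    goodstein = go 2
      where
        go _ 0 = [0]
        go b m = m : go (b + 1) (bumpBase b m - 1)

    main :: IO ()
    main = do
      print (goodstein 3)           -- [3,3,3,2,1,0]
      print (take 6 (goodstein 4))  -- [4,26,41,60,83,109]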
I think we can stop at 8.
7, if the Extremely Strong Goldbach Conjecture holds. [1]
[1] https://xkcd.com/1310/
How would you handle things like probability distributions with infinite support?
Is this a joke or are you deeply interested in some ZFC variant that I'm unaware of? We absolutely need infinity to make a ton of everyday tools work; it's like saying we don't need negative numbers because those don't exist either.
A ZFC variant without infinity is basically just PA. (Because you can encode finite sets as natural numbers.) Which in practice is plenty enough to do a whole lot of interesting mathematics. OTOH by the same token, the axiom of infinity is genuinely of interest even in pure finitary terms, because it may provide much simpler proofs of at least some statements that can then be asserted to also be valid in a finitary context due to known conservation results.
In a way, the axiom of infinity seems to behave much like other axioms that assert the existence of even larger mathematical "universes": it's worth being aware of what parts of a mathematical development are inherently dependent on it as an assumption, which is ultimately a question of so-called reverse mathematics.
There are a couple of philosophies in that vein, like finitism or constructivism. Not exactly mainstream, but they've proven more than you'd expect.
https://en.wikipedia.org/wiki/Finitism
https://en.wikipedia.org/wiki/Constructivism_(philosophy_of_...
There are tons of variants of ZFC without the axiom of infinity. Constructivism has a long and deep history in mathematics, and it's probably going to become dominant in the future.
Numbers don't exist, either. Not really. Any 'one' thing you can point to, it's not just one thing in reality. There's always something leaking, something fuzzy, something not accounted for in one. One universe, even? We can't be sure other universes aren't leaking into our one universe. One planet? What about all the meteorites banging into the one, so it's not one. So, numbers don't exist in the real world, any more than infinity does. Mathematics is thus proved to be nothing but a figment of the human imagination. And no, the frequency a supercooled isolated atom vibrates at in an atomic clock isn't a number either, there's always more bits to add to that number, always an error bar on the measurement, no matter how small. Numbers aren't real.
Why is it that when I have a stack of business cards, each with a picture of a different finger on my left hand, there's only one way to arrange them in a grid, but when each card instead has a picture of a different finger from either of my hands, there are now two different ways to arrange the cards in a grid?
I claim the reason is that 5 is prime, while 10 is composite (10 = 5 times 2).
Therefore, 5 and 10, and 2, exist.
You've already assumed 5 exists in order to assert that it's prime.
In any case, existence for mathematical objects means something different than existence for physical objects. We can say a mathematical object exists just by defining it, as long as the definition doesn't lead to a contradiction.
Wouldn't it follow that those "things" we're pointing to aren't really "things", because they're all leaking and fuzzy? Which raises the question: what ends up on a list of things that do exist?
The set of all things that exist - the first question that comes to mind is, is this a finite set, or an infinite set?
Some people have this view, but I don't get why.
Me neither. David Deutsch had some interesting thoughts on why finitists are wrong in The Beginning of Infinity, but I never fully understood them.
Neither does David Deutsch himself; he stopped making sense sometime in the late '80s.
> All of modern mathematics is built on the foundation of set theory, the study of how to organize abstract collections of objects
What the hell. What about Type Theory?
Type theory is actually a stronger axiomatic system than ZFC, and is equiconsistent with ZFC plus a stronger assumption.
See this mathoverflow response here https://mathoverflow.net/a/437200/477593
To echo the sibling comment, that answer is specifically referring to the type theory behind Lean (which I’ve heard is pretty weird in a lot of ways, albeit usually in service of usability). Many type theories are weaker than ZFC, or even ZF, at least if I correctly skimmed https://proofassistants.stackexchange.com/a/1210/7.
Is there a collection of type theory axioms anywhere near as influential as ZF or ZFC?
Sure, but is discarding Type Theory and Category Theory really fair with a phrase like "All of modern mathematics"? Especially in terms of a connection with computer science.
Arguably, type theory is more influential, as it seems to me all the attempts to actually formalize the hand-wavy woo mathematicians tend to engage in are in Lean, Coq, or the like. We've pretty much given up on set theory except to prove things to ourselves. However, those informal methods are notoriously unreliable.
Regardless of the name, descriptive set theory does not seem to have all that much to do with "set theory" in a foundational sense; it can be recast in terms of types and spaces with comparative ease, and this can be quite advantageous. The article is a bit confusing in many ways; among other things, it seems quite obviously wrong to suggest that recasting a concept that seems to be conventionally related to mathematical "infinities" in more tangible computational terms is something deeply original to this particular work; if anything, it happens literally all the time when trying to understand existing math in type-theoretic or constructive terms.
"the study of how to organize abstract collections of objects" is not really a great explanation of set theory. But it is true that the usual way (surely not the only way) to formalize mathematics is starting with set-theoretic axioms and then defining everything in terms of sets.
"Usual", "most common by far" etc. are all great phrases, but not "all of mathematics", esp when we talk about math related to computer science
Math related to CS is typically formalized starting with set theory, just like other branches of math.
What's important to note is that this is just a matter of convention. An historical accident. It is by no means a law of nature that math need be formalized with ZFC or any other set theory derivative, and there are usually other options.
As a matter of fact, ZFC fits CS quite poorly.
In ZFC, everything is a set. The number 2 is a set. A function is a set of ordered pairs. An ordered pair is a set of sets.
In ZFC: It is a valid mathematical question to ask, "Is the number 3 an element of the number 5?" (In the standard definition of ordinals, the answer is yes).
In CS: This is a "type error." A programmer necessarily thinks of an integer as distinct from a string or a list. Asking if an integer is "inside" another integer is nonsense in the context of writing software.
For a computer scientist, Type Theory is a much more natural foundation than Set Theory. Type Theory enforces boundaries between different kinds of objects, just like a compiler does.
But, in any case, that ZFC is "typical" is an accident of history, and whether or not it's appropriate at all is debatable.
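To make the "3 ∈ 5" point concrete, here's a toy Haskell model of hereditarily finite sets with the von Neumann encoding of the naturals (purely illustrative; a list stands in for a set, so it's a sketch rather than ZFC):

    -- A hereditarily finite "set" is a finite collection of such sets.
    -- (Using a list, so order and duplicates aren't quotiented out.)
    newtype HFSet = HFSet [HFSet] deriving Eq

    -- von Neumann encoding: n = {0, 1, ..., n-1}, so 0 = {}, 1 = {0}, etc.
    ordinal :: Int -> HFSet
    ordinal n = HFSet (map ordinal [0 .. n - 1])

    member :: HFSet -> HFSet -> Bool
    member x (HFSet ys) = x `elem` ys

    main :: IO ()
    main = print (ordinal 3 `member` ordinal 5)  -- True: under this encoding, 3 is literally an element of 5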
In CS, types are usually a higher level of abstraction built on top of more fundamental layers. If you choose to break the abstraction, you can definitely use an integer as a string, a list, or a function. The outcome is unlikely to be useful, unless your construct was designed with such hacks in mind.
When I did a PhD in theoretical computer science, type theory felt like one niche topic among many. It was certainly of interest to some subfield, but most people didn't find it particularly relevant to the kind of TCS they were doing.
Sure, but it fits the rest of mathematics "poorly" for exactly the same reasons. No working mathematician is thinking about 3 as an element of 5.
The reason ZFC is used isn't because it's a particularly pedagogical way of describing any branch of math (whether CS or otherwise), but because the axioms are elegantly minimal and parsimonious.
Empirically, modern mathematics is built on set theory; type theory and category theory are just other possible foundations.
Most modern mathematicians are not set theorists. There are certain specialists in metamathematics and the foundations of mathematics who hold that set theory is the proper foundation -- thus that most mathematical structures are rooted in set theory, and can be expressed as extensions of set theory -- but this is by no means a unanimous view! It's quite new, and quite heavily contested.
Confused why the article author believes this is a surprise. The foundations of mathematics and computer science are basically the same subject (imho) and dualities between representations in both fields have been known for decades.
It's not a surprise that math and CS are related. It's a surprise that this particular subject in math and CS are so intimately related.
Is it?
The notions of algorithm and computer science were invented to discuss the behavior of infinite sequences: examining whether there are finite descriptions of real numbers. This was extended by connecting complicated infinite structures like the hyperreals with decision theory problems.
That descriptions of other infinite sets also corresponds to some kind of algorithm seems like a natural progression.
That it happens to be network theory algorithms rather than (e.g.) decision theory algorithms is worth noting, but hardly surprising, particularly because the sets examined arose from graph problems, i.e., a network.
I've done a bunch of theoretical PL work and I find this to be a very surprising result... historically the assumption has been that you need deeply "non-computational" classical axioms to work with the sorts of infinities described in the article. There was no fundamental reason to expect that you could give a nice computational description of measure theory just because certain kinds of much better-behaved infinities map naturally to programs. In fact, IIRC, measure theory was one of the go-to examples for a while of something that really needed classical set theory (specifically, the axiom of choice) and couldn't be handled nicely otherwise.
> He wanted to show that every efficient local algorithm can be turned into a Lebesgue-measurable way of coloring an infinite graph
To me this is quite surprising. Distributed systems were not designed to solve measure theory problems.
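For readers who haven't seen the distributed side: "local" roughly means each vertex decides its output from a bounded-radius view of the graph. A toy Haskell sketch of that flavour (my own illustration, not the construction from the paper): each vertex looks only at its neighbours' IDs and joins an independent set iff its ID is a local maximum.

    import qualified Data.Map as M

    type Graph = M.Map Int [Int]  -- vertex id -> neighbour ids

    -- Radius-1 local rule: a vertex joins iff its id beats every neighbour's id.
    -- The output is an independent set (not necessarily maximal), computed from
    -- purely local information; measurable combinatorics asks when rules of this
    -- flavour can be run on infinite graphs in a measurable way.
    localIndependentSet :: Graph -> [Int]
    localIndependentSet g = [ v | (v, nbrs) <- M.toList g, all (< v) nbrs ]

    -- Example: the path 1 - 2 - 3 - 4
    examplePath :: Graph
    examplePath = M.fromList [(1, [2]), (2, [1, 3]), (3, [2, 4]), (4, [3])]

    main :: IO ()
    main = print (localIndependentSet examplePath)  -- [4]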