A Simple Conjecture
The Collatz conjecture is an unsolved mathematical riddle posed by German mathematician Lothar Collatz in 1937. It remains a mystery to this day. Why has it tempted so many mathematicians over so many years and yet remained unsolved? It is especially exasperating because it is so simple. The question is, “Is it even computable?” If not, then mathematicians and computer scientists should stop trying.
Collatz Algorithm, f(x):
Given integer x > 1: Repeat Until (x == 1): if (odd(x)) x = 3x + 1; else x = x/2;
Collatz Conjecture:
Given integer x > 1: the Collatz algorithm always stops, with x equal to 1.
For example, when x is initially 5, the following sequence results:
5, 16, 8, 4, 2, 1
The algorithm terminates in five steps, but if x initially equals 27, termination occurs after 111 repetitions. And if x initially equals 1,025, the algorithm terminates in fewer steps: 36. One can go on like this for a long time, growing a tree of sequences, each starting at some initial x. In each of these cases, the sequence converges on 1, as shown in the next-state diagrams of Figure 1. The question is: “Does f(x) = 1 for all x > 1?” where f(x) is the result of applying the algorithm to x. If so, every sequence terminates, and the conjecture is true. The examples in Figure 1 indicate that they do.
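For readers who want to reproduce these step counts, a minimal Python sketch of the algorithm follows (the function name collatz_steps and the choice of Python are mine, not anything prescribed above):

# Count how many Collatz steps it takes x to reach 1.
def collatz_steps(x: int) -> int:
    steps = 0
    while x != 1:
        x = 3 * x + 1 if x % 2 else x // 2
        steps += 1
    return steps

# The examples above: 5 takes five steps, 27 takes 111, and 1,025 takes 36.
for start in (5, 27, 1025):
    print(start, collatz_steps(start))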
YouTube has a brilliant video that explores the conjecture and shows how this simple problem is enormously difficult. Paul Erdös offered $500 for its solution—a reward never collected, even though the Collatz algorithm has been studied for decades. Every number from x = 2 to 2^68 has been tested, with all producing a finite sequence ending in 1. Does this mean that all integers produce a finite sequence ending with 1? Wikipedia lists notable mathematicians who have tried, and failed, to prove that every starting integer greater than 1 terminates the sequence at 1.

Using AI
Mathematicians have tried and failed to prove or disprove the conjecture, so perhaps it is time for a computer scientist to try. In today’s tech world it is fashionable, if not beneficial, for computer scientists to apply machine learning (ML) techniques to difficult problems. Perhaps a machine can learn the intricacies of the conjecture and predict that it will always terminate with x = 1. Or perhaps ML can search the entire space of solutions and infer that the conjecture is true. Or maybe ML will discover a pattern leading to a counterexample, showing the conjecture is false!
Suppose we apply supervised learning, whereby thousands of sequences are used to train an AI specifically to look for a solution to the conjecture. We can feed the “most interesting” sequences of moderate lengths obtained from the algorithm into the AI and let ML use inference or various algorithms to figure out if there are any patterns in those sequences that can be generalized to produce even “more interesting” sequences. The most likely outcome will be an inference that “yes,” all sequences will end with 1, and hence the conjecture is true.
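As a purely speculative sketch of what such an experiment might look like, one could generate stopping times and train an off-the-shelf regressor on crude features of x. The feature choices, ranges, and use of scikit-learn below are my assumptions; nothing the model learns would count as evidence about the conjecture.

# A speculative supervised-learning sketch: predict Collatz stopping times
# from simple features of x. Features, ranges, and model are arbitrary choices.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def collatz_steps(x: int) -> int:
    steps = 0
    while x != 1:
        x = 3 * x + 1 if x % 2 else x // 2
        steps += 1
    return steps

def features(x: int):
    return [x.bit_length(), x % 2, x % 3, x % 4, x % 9]

xs = list(range(2, 20000))
X = np.array([features(x) for x in xs])
y = np.array([collatz_steps(x) for x in xs])

model = RandomForestRegressor(n_estimators=50, random_state=0)
model.fit(X[:15000], y[:15000])                            # train on the first 15,000 examples
print("held-out R^2:", model.score(X[15000:], y[15000:]))  # evaluate on the rest

Even a good fit here would only mean the model interpolates stopping times on the tested range; it would say nothing about numbers outside it, which is exactly the weakness discussed below.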
On the other hand, suppose we apply unsupervised learning, where various ML algorithms are used to find “interesting” patterns—especially sequences that repeat. Collatz sequences form “hailstone” patterns—rapidly rising number sequences followed by dramatic dips—on their way to 1, or maybe infinity! Our unsupervised AI is likely to discover the hailstone pattern but conclude nothing, except that sequences go up and down. For example, the ML might discover that 3x+1 always produces an even number when x is odd, and that x/2 produces both even and odd numbers, but even-number runs of length 3, on average, follow 3x+1. Statistically, the algorithm produces descending sub-sequences twice as often as increasing sub-sequences. Hence, the general trend of x is to decline. In fact, it has been shown (by a human!) that the numbers in a Collatz sequence “almost always” reach a maximum peak value and then decline.
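To make the “descending twice as often” observation concrete, here is a small empirical check (the sampling range and the name step_counts are my own choices):

# Count up-steps (3x+1, applied when x is odd) versus down-steps (x/2, applied
# when x is even) across many trajectories. The sample range is arbitrary.
def step_counts(limit: int = 10_000):
    ups = downs = 0
    for start in range(2, limit):
        x = start
        while x != 1:
            if x % 2:
                x = 3 * x + 1
                ups += 1
            else:
                x //= 2
                downs += 1
    return ups, downs

ups, downs = step_counts()
print(f"up-steps: {ups}, down-steps: {downs}, ratio: {downs / ups:.2f}")  # ratio comes out near 2 for this range

Since each up-step roughly triples x and each down-step halves it, about two halvings per tripling give a net factor of roughly 3/4, which is the heuristic reason the trend is downward.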
This leaves a third possibility: the algorithm gets trapped by a repeating sequence that loops forever. But the patterns found by an unsupervised AI most likely converge on f(x) = 1 and tell us nothing about repeating sequences, unless we get extremely lucky. Unfortunately, statistical inference is not a mathematical proof; hence the rub.
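If one wanted to hunt for such a loop directly rather than hope ML finds it, a brute-force sketch is easy to write (the step cap below is an arbitrary cutoff of mine; a trajectory that neither loops nor terminates would simply exhaust it and tell us nothing):

# Follow one trajectory, remembering every value seen; report any value that
# repeats before reaching 1, which would be a loop not containing 1.
def find_cycle(start: int, max_steps: int = 10_000):
    seen = set()
    x = start
    for _ in range(max_steps):
        if x == 1:
            return None            # reached 1: no new loop on this trajectory
        if x in seen:
            return x               # a repeated value: evidence of a nontrivial loop
        seen.add(x)
        x = 3 * x + 1 if x % 2 else x // 2
    return None                    # inconclusive: step cap exhausted

# Scan a modest range; the expected output is False (no loop found).
print(any(find_cycle(x) is not None for x in range(2, 100_000)))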
ML is likely to infer that the Collatz conjecture is true simply because it “always” produces sequences ending in 1 after millions of training examples or patterns. This illustrates a common weakness of ML applied to inference problems that are “almost always true, for almost all cases”: the AI infers that the common pattern is the “answer” pattern, a Type II error.
Does It Compute?
A computer scientist asks, “Is the conjecture even decidable?” That is, the conjecture might be neither provably true nor provably false, but undecidable! Undecidability arguments usually rely on a logical contradiction, such as “Sally says Tom always lies, and Tom says Sally always tells the truth.” We get into a circular contradiction: Sally implies Tom lies, Tom implies Sally tells the truth, which implies Tom lies, which implies Sally lies, and so on. There is no such logical contradiction in the Collatz conjecture. We simply don’t know if there exists an x for which f(x) is undefined.
A computer scientist might also try a simulation. Each f(x) generates a sequence of numbers, either finite or infinite. At each successive time step, we start up a simulation of the next f(x); thus, the sequence for f(x) starts at time x and runs in parallel with the searches for all smaller x. When a sequence stops at 1, the simulation skips over it in the future. The conjecture is false if this simulation comes to an x that never terminates. The problem is that we cannot tell for that x whether it will terminate or not. So, this simulation enumerates all the x for which f(x) = 1 and tells us nothing about any x for which the sequence is infinite. We conclude that there is no algorithmic search method that will find a counterexample. Unless someone is lucky enough to stumble on a counterexample x, we will never know.
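Rendered as code, the dovetailed simulation might look like the following sketch (the round-based scheduling and names are my own rendering of the idea described above):

# Each round starts one new trajectory and advances every unfinished trajectory
# by one step. Trajectories that reach 1 are retired; a non-terminating x would
# simply stay active forever, which is exactly the difficulty described above.
def dovetail(rounds: int = 1_000):
    active = {}                      # start value -> current value
    finished = []
    next_start = 2
    for _ in range(rounds):
        active[next_start] = next_start
        next_start += 1
        for start in list(active):
            x = active[start]
            x = 3 * x + 1 if x % 2 else x // 2
            if x == 1:
                finished.append(start)
                del active[start]
            else:
                active[start] = x
    return finished, sorted(active)

done, in_progress = dovetail()
print(len(done), "trajectories confirmed to reach 1;", len(in_progress), "still running")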
As computer scientists, we might try to avoid this indefinite search question and ask, “What complexity class does the Collatz algorithm belong to?” Given an input x, how long does it take the algorithm to converge to 1? Since we do not know whether the algorithm always terminates, it is not meaningful to talk about its running time, and it makes sense to talk about complexity only when we know what the runtime is. If we knew the complexity class, you might think that would set an upper bound on how long f(x) could take, and you could use that upper bound to stop the simulation when it is passed. But this does not work because the upper bounds also grow with x!
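A quick experiment shows why no fixed cutoff will do: printing the “record holders,” starting values whose step counts exceed those of every smaller start, makes it clear that the records keep growing (the search limit below is arbitrary):

# Print record-setting stopping times: each line is a start value whose step
# count exceeds that of every smaller start. The search limit is arbitrary.
def collatz_steps(x: int) -> int:
    steps = 0
    while x != 1:
        x = 3 * x + 1 if x % 2 else x // 2
        steps += 1
    return steps

record = -1
for x in range(2, 200_000):
    s = collatz_steps(x)
    if s > record:
        record = s
        print(x, "takes", s, "steps")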
Perhaps Erdös was right when he said about the Collatz conjecture, “Mathematics may not be ready for such problems.” Computer science and AI don’t seem to have the answer, either. We can’t reduce this to a halting problem and reach a conclusion in the usual way. Mathematicians publishing papers on the algorithm have said they can’t decide either. It is as if the decidability of the conjecture is undecidable! So, no one knows, and we don’t know of any mathematical tools to help us know. It’s a mystery.