Disclaimer: I am not a theoretical computer scientist.

That's correct for the technical reason that the Travelling Salesman Problem (TSP) isn't a decision problem. It's a function problem: we need to map an instance of the problem to a tour of minimal length (I think technically it's better described as an optimization problem, but I'm not sure the difference matters for what I'm going to say here).

As it turns out, there's a corresponding set of complexity classes for function problems, FP and FNP (more[sup]1[/sup] interesting classes can be found in the Complexity Zoo, though good intuitive explanations of most of this stuff seem hard to come by).

I'm not sure exactly how TSP fits into that group of classes (I think it's probably FNP-complete, but that's just a guess), BUT the method that Casey mentioned on stream today actually can be used to show that it is in FP[sup]2[/sup] if P = NP, sort of.

The argument was basically: binary search down from some value like the sum of all the weights until you find the minimum tour length, M. From there you can remove edges from the graph and query the oracle again: if there stops being a tour of length M, you know the edge you just removed was part of every remaining minimal-length tour, so you put it back. Repeating this for every edge, you solve the function-problem version: you're left with only edges that form a minimal-length tour.
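The argument above can be sketched in Python. The oracle here is a hypothetical name, `has_tour_at_most`, implemented as a brute-force stand-in (the real argument assumes it answers in one step); integer weights are assumed so the binary search works, and removed edges are marked with infinite weight:

```python
from itertools import permutations

INF = float("inf")

def has_tour_at_most(w, bound):
    # Decision oracle: does some tour have total weight <= bound?
    # Brute-force stand-in for the hypothetical NP oracle.
    n = len(w)
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        if sum(w[tour[i]][tour[i + 1]] for i in range(n)) <= bound:
            return True
    return False

def tsp_via_oracle(w):
    # Step 1: binary search for the minimum tour length M,
    # starting from the sum of all weights as an upper bound.
    lo, hi = 0, sum(sum(row) for row in w)
    while lo < hi:
        mid = (lo + hi) // 2
        if has_tour_at_most(w, mid):
            hi = mid
        else:
            lo = mid + 1
    m = lo
    # Step 2: tentatively delete each edge; if no tour of length M
    # survives, the edge was needed, so put it back. The edges that
    # remain form a minimal-length tour.
    n = len(w)
    w = [row[:] for row in w]  # work on a copy
    for i in range(n):
        for j in range(i + 1, n):
            saved = w[i][j]
            w[i][j] = w[j][i] = INF
            if not has_tour_at_most(w, m):
                w[i][j] = w[j][i] = saved
    edges = [(i, j) for i in range(n) for j in range(i + 1, n)
             if w[i][j] != INF]
    return m, edges
```

On a small symmetric instance like `[[0, 1, 4, 3], [1, 0, 2, 5], [4, 2, 0, 1], [3, 5, 1, 0]]` this returns the minimum length 7 and exactly the four edges of the optimal tour 0-1-2-3-0.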

Casey's problem with that argument was that you could set the weights such that they sum to 2^(2^n), and thus your binary search would take 2^n uses of the oracle to find the minimum length. This is a pretty interesting argument. My problem with it is that the difficulty in doing the binary search comes purely from the fact that the lengths of the numbers are growing exponentially. If the lengths of the numbers are growing exponentially, I don't think you can stop counting them when talking about the size of the input, since operations on the numbers then become O(2^n) instead of O(1). I can't think of any examples off the top of my head, but I suspect you could do some similar double-exponential weighting with problems "in" P and end up "showing" that they're actually not in P.
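To put a rough number on that: the query count of the binary search is about the bit length of the weight sum, so it's exponential in n only when the encoding of the weights is too. A quick sketch (counting one worst-case path where the oracle always answers "no"):

```python
def oracle_calls(total):
    # Count binary-search iterations over [0, total], assuming the
    # decision oracle answers "no" at every step (a worst-case path).
    lo, hi, calls = 0, total, 0
    while lo < hi:
        mid = (lo + hi) // 2
        calls += 1
        lo = mid + 1  # oracle: "no tour this short exists"
    return calls

# With weights summing to 2**(2**4), the search makes 2**4 = 16 queries:
# exponential in n = 4, but linear in the 2**4 bits needed to write the sum.
```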

If you restrict the weight precision to something that can only grow like 2^(P(n)), then the argument ends up working. Is that satisfactory? I'm not sure. On one hand it adds a constraint on specifying instances of the problem, so it is making things easier. On the other hand, numbers with arbitrary precision seem to make it so you're largely focusing on the size of the numbers rather than the actual problem.

That's pretty much where I am with this topic. It's quite possible that I've poorly explained or even misunderstood something in what I just said, so I welcome responses pointing out mistakes or asking for clarification.

1. A bunch fall out of setting a limit on how many of the non-deterministic paths of an NDTM lead to a correct result. For example, if more than some constant fraction over half of the paths give the right answer, you can just pick a path at random, and you get BPP, which is basically the class of problems that can be solved in polynomial time by probabilistic algorithms with bounded error.

2. Technically it would show that it's in FP^NP (meaning FP with an oracle for NP, which basically means "problems that would be in FP if we could solve NP problems in one step"). From there, if we assume P = NP, then we could solve NP problems in polynomial time, and as long as we only use the oracle a polynomial number of times that's fine, since a polynomial times a polynomial is still a polynomial; so the oracle doesn't matter if P = NP.