Chatbot finds hard things easy, simple things hard

Can someone explain to me how ChatGPT can accurately answer a question like “how far is it from Sacramento to LA”, but can’t answer “What city is as far from Paris as Sacramento is from LA”? For the latter question I get absurd results like “Paris is as far from Algiers as Sacramento is from LA”. When I point out the absurd difference in distances, I get a “sorry” and a recalculation that says Tbilisi is the same distance! And yet it can tell me the straightforward distance from Paris to anywhere.
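For what it’s worth, the geometry being asked for here is only a few lines of code. A rough haversine sketch (city coordinates are approximate, and Turin is just an illustrative candidate I picked, not anything ChatGPT suggested) shows how far off the Algiers answer is:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (lat, lon) points."""
    R = 6371.0  # mean Earth radius, km
    dlat = radians(lat2 - lat1)
    dlon = radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * R * asin(sqrt(a))

# Approximate city coordinates (lat, lon) -- rough values for illustration.
sacramento = (38.58, -121.49)
los_angeles = (34.05, -118.24)
paris = (48.86, 2.35)
algiers = (36.75, 3.06)
turin = (45.07, 7.69)   # a candidate city, chosen here just as an example

target = haversine_km(*sacramento, *los_angeles)  # roughly 580 km
d_algiers = haversine_km(*paris, *algiers)        # roughly 1340 km: a bad match
d_turin = haversine_km(*paris, *turin)            # close to the target

print(f"Sacramento-LA: {target:.0f} km")
print(f"Paris-Algiers: {d_algiers:.0f} km")
print(f"Paris-Turin:   {d_turin:.0f} km")
```

So the comparison itself is trivial arithmetic over distances the model can already state individually, which makes the failure all the stranger.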

Does the phrasing of the comparison exceed ChatGPT’s logical capabilities? And yet it can answer ostensibly much more complex questions on technical topics in philosophy, like “What does Kant’s conception of time share with Heidegger’s?” What’s the difference here? ChatGPT does a respectable job with philosophical questions, but it can’t manage a simple distance comparison. How is this possible?

submitted by /u/majxela