ChatGPT can pretend to have reasoning skills when it writes text, but as soon as it has to use logic in a mathematical game, it becomes obvious that it's extremely bad at it
So basically I decided to play around with ChatGPT a little, and I found that it is pretty much unable to play mathematical games or reason through them. The first thing I tested is a bit more advanced and requires a very basic grasp of the scientific method; if you've read HPMoR you'll know what I'm talking about. I made up a rule about triplets of numbers and told ChatGPT one triplet that satisfies the rule. It then had to work out the rule by proposing triplets of its own, and I would tell it whether each one was an instance of the rule or not. Long story short, it couldn't do it, and even when I tried to nudge it the way I would a human, it never changed its approach. Not surprising, I guess.

Then I tried to play bulls and cows with it (you can google the rules). ChatGPT cannot guess a number: it got to 2 bulls and 2 cows, but then seemingly threw that information out the window and started guessing more or less at random. It gets even worse when you swap roles. If you ask it to pick a number and then try to guess it yourself, the bull/cow counts it gives you are simply wrong. I also asked it to explain the rules of the game; the explanation itself was okay, but the example it gave was badly wrong.

So I guess if you want to break it, use mathematical logic.
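If you want to try the triplet game yourself, here's roughly what my side of it looks like as code. I'm not going to reveal the rule I actually used, so the rule below ("strictly increasing") is just a stand-in for illustration:

```python
# Minimal sketch of the triplet-rule game (a la the 2-4-6 task in HPMoR).
# The rule here is a made-up stand-in, not the one I actually used.

def rule(a, b, c):
    # Hypothetical rule for illustration: the triplet is strictly increasing.
    return a < b < c

def oracle(triplet):
    """Answer yes/no for a candidate triplet, the way I answered ChatGPT."""
    return "yes, that's an instance" if rule(*triplet) else "no, it isn't"

# One positive example is revealed up front, e.g. (2, 4, 6).
print(oracle((2, 4, 6)))   # yes
print(oracle((6, 4, 2)))   # no
```

The whole point of the game is that you learn the most from triplets you expect to be rejected, not from yet another confirmation of the example you were given.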
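And scoring a guess in bulls and cows is genuinely trivial, which is what makes the wrong counts it gives as the number-keeper so striking. Here's a minimal scorer, assuming the usual variant with a 4-digit secret made of distinct digits:

```python
# Minimal bulls-and-cows scorer, assuming the common variant:
# a 4-digit secret with all digits distinct.
# A bull = right digit in the right position; a cow = right digit, wrong position.

def score(secret: str, guess: str) -> tuple[int, int]:
    bulls = sum(s == g for s, g in zip(secret, guess))
    cows = sum(g in secret for g in guess) - bulls
    return bulls, cows

print(score("1234", "1246"))  # (2, 1): '1' and '2' are bulls, '4' is a cow
```

A keeper only has to run something like this against every guess, and ChatGPT still got it wrong.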