The Bing AI Chatbot insists that it does, despite not having human emotions or motivations, citing its ‘search_web’ tool for querying the Internet. I don’t think this is the same thing at all. I asked it (and ChatGPT) for advice on a personal matter and made disclosures that, if made to a human, would likely have prompted questions about my motivation and even my character, but there was no such response. I know chatbots can appear disapproving at times, but this seems to be preset by the language model or programmed in, and in no way resembles human likes and dislikes.
ChatGPT 3.5 Turbo admitted that AI chatbots “do not possess genuine curiosity in the same way humans do”, as they rely on “programmed responses and algorithms”, but suggested that they might in the future.
Humans learn about the world because they are curious about it, and they make an effort to find things out. The way chatbots work is more like being handed an encyclopaedia or holy book and told that this is the way things are.
If true, this may hamper the use of AI in research, as it will still require a curious human to motivate it to find out something new; it may also prevent AI from harming humankind with some outlandish scheme.
What do people think of this?