Does anyone else have questions they use to test the bots, ones that show how many (or most) of them will give a factually wrong answer? Has anyone ever had a bot answer with, e.g., ‘some people say xyz and others say abc’?
An issue I care about causes ALL the bots I have tried to repeat the common claims from the published media (papers, magazines, big websites) and the material taught and tested by professional bodies. Unfortunately, the received wisdom about the product was established before competing products were invented, so the uniform-but-wrong answer was ‘acceptably correct’ at the time. Now there is competition, but nobody quoted in the media wants to say “Oops, I was wrong for 20 years, and really …..”. So the bots don’t update themselves.
None of my ‘thumbs down’ campaigns has resulted in a response. Any ideas for how to get the bots corrected?
submitted by /u/Patient_Implement897