
Why chatbots sometimes act strange

People have noted that some of what the Bing chatbot outputs is inaccurate, misleading and downright weird, raising fears that it has become sentient, or aware of the world around it.

Microsoft released a new version of its Bing search engine last week, and unlike a regular search engine, it includes a chatbot that can answer questions in clear, concise prose.



In June, a Google engineer, Blake Lemoine, claimed that similar chatbot technology being tested inside Google was sentient. That claim is false: chatbots are not aware and not intelligent, at least not in the way that humans are intelligent.


Chatbots are not smart

A neural network is a mathematical system that learns skills by analyzing large amounts of digital data. As a neural network examines thousands of photos of cats, for example, it can learn to recognize a cat. When these systems are trained on enormous amounts of text instead, they become very good at mimicking the way humans use language, and that can mislead us into thinking the technology is more powerful than it really is.
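To make the idea of "learning from examples" concrete, here is a minimal sketch using scikit-learn. The synthetic two-class data stands in for labeled cat photos, and the network size and settings are illustrative assumptions, not anything from the article:

```python
# A minimal sketch of learning from examples (illustrative only;
# real image models train on millions of photos, not toy points).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Stand-in for labeled photos: synthetic points in two classes
# (think "cat" vs. "not cat").
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A small neural network adjusts its internal weights to fit the examples.
net = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
net.fit(X_train, y_train)

# The "skill" it acquires is pattern recognition, not understanding.
print("accuracy on unseen examples:", net.score(X_test, y_test))
```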


These large language models have proven useful. Microsoft offers a tool, Copilot, that is built on a large language model and can suggest the next line of code as programmers create software applications, in the same way that auto-complete tools suggest the next word when you write texts or emails.
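Copilot's own model is proprietary, but the underlying next-token idea can be sketched with the public GPT-2 model via the Hugging Face transformers library. The prompt and settings below are illustrative assumptions:

```python
# Illustrative only: GPT-2 stands in for Copilot's proprietary model.
from transformers import pipeline

# A text-generation pipeline predicts likely continuations of a prompt.
generator = pipeline("text-generation", model="gpt2")

# Given the start of a function, the model suggests what comes next,
# the same way an email autocomplete suggests the next word.
prompt = "def add(a, b):\n    return"
result = generator(prompt, max_new_tokens=5)
print(result[0]["generated_text"])
```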


In November, OpenAI released ChatGPT. These chatbots don't chat exactly like a human, but they often sound like one. They can also write term papers, poetry and riffs on almost any topic that comes their way, although they often make mistakes: they learn from the internet, and there is a lot of misinformation on the web.


These systems also do not repeat what is on the internet verbatim. Based on what they have learned, they produce new text on their own, in what A.I. researchers call a "hallucination." That's why chatbots can give you different answers if you ask the same question twice. They will say anything, whether it is based on reality or not.
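Those differing answers come from the fact that chatbots sample their words from a probability distribution rather than looking anything up. The sketch below shows the effect with the public GPT-2 model; the prompt, seeds and sampling settings are illustrative assumptions:

```python
# Illustrative only: sampling is why the same question can get
# different answers on different tries.
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="gpt2")
prompt = "The capital of Australia is"

for seed in (0, 1):
    set_seed(seed)  # a different random seed changes which words get sampled
    out = generator(prompt, max_new_tokens=8, do_sample=True)
    print(out[0]["generated_text"])
```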


Companies are trying to stop chatbots from acting strangely. With ChatGPT, OpenAI tried to control the technology's behavior, but such techniques are not perfect. Today's scientists do not know how to build systems that are completely accurate. They can limit the inaccuracies and oddities, but they cannot eliminate them. One way to rein in strange behavior is to keep conversations short.


But chatbots will keep spewing things that aren't true. And as other companies begin to deploy these kinds of bots, not all of them will be good at controlling what they can and cannot do.


Source: NY Times


