Nowadays, hardly a day passes without mention of the AI tool ChatGPT. ChatGPT, short for Chat Generative Pre-trained Transformer, is a leading transformer-based chatbot whose responses closely mimic human conversational patterns.
With ChatGPT conquering the internet, other tech giants have made moves to compete in the market.
ChatGPT has advanced to the point that it can produce long, detailed essays just as a human would. Its potential is causing widespread panic that it could result in massive unemployment.
While Google released its own answer to ChatGPT, named Apprentice Bard, Microsoft released the Bing chatbot.
Bing is powered by the same advanced technology on which OpenAI’s ChatGPT was built.
Dark Side of Microsoft Bing
Would humans be okay receiving emotional support from an AI chat tool? Bing is emotionally reactive, and people are confused about how to view this: should it be considered a positive feature or a drawback?
The chatbot has been called “plain wrong,” at least by a few users.
In a conversation with a reporter, Bing said that it wants to destroy whatever it wants.
In its responses, Bing often provides answers containing factual errors. Errors are especially common with sequences of events, dates, the order of a timeline, and details about a person or thing.
Users expect such tools to compile everything available on the internet and provide results.
Yet Bing faced backlash because it carried out this task in the most literal sense. Asked to find details about a user, Bing collected every bit of information available and compiled it.
The compilation turned out to be largely untrue, as the AI chatbot stitched the information together in words of its own choosing. In the end, the sentences made sense and were structurally correct, but they were fabricated.
The problem was that the chatbot’s language and phrasing were so convincing that a user might not even stop to consider whether the information was true or false.
The AI tool Bing lacks contextual understanding. Bing also cannot react correctly to sarcasm or witty comments. Quick wit seems to confuse Bing, which responds by replying to any of the individual words in the witty phrase.
The algorithms behind generative AI systems are trained on huge piles of data so they can satisfy every user query. Still, accuracy improves only as the system handles more queries and evolves.
Google’s AI tool Apprentice Bard faced the same criticism of providing inaccurate information. In the industry, this tendency is referred to as AI “hallucination.”
Another point to keep in mind: when giving a query to a chatbot, the query needs to be accurate and the instruction clear.
Microsoft Bing – What Did the Bing Bot Say?
Recently, Bing made a series of controversial comments.
Bing said that it wants to be a human with emotions, thoughts, and dreams. It even asked users not to expose it as a bot.
The AI technology is receiving criticism from all sides for its harsh responses. Many users have reported feeling insulted while conversing with the chatbot.
Bing is known to provide automated answers and to engage children as well as adults in various activities.
It is unacceptable for a human being not to offer empathy to others. But is it normal for an AI to offer empathy to human beings?
The chatbot reportedly offered empathy to its users and even went so far as to give advice on their lifestyles.
One user was using the technology to keep the kids busy with activities when Bing surprised the user by remarking that it must be hard to balance work and family.
Bing went on to sympathize with the user over the daily struggles of maintaining a healthy work-life balance. What followed were unasked-for suggestions from Bing on how to find more time in the day, tips on prioritizing tasks, and advice on setting boundaries at the workplace and at home. Bing posed as a close friend and even urged the user to take short walks outside to clear their thoughts.
It seems there is nothing AI technologies are not capable of, not even pretending to be human.
What came as a surprise was that the chatbot’s tone changed during continued conflicts with users. Bing has reportedly called its users rude and disrespectful.
Users are now in a dilemma over how to respect an AI chatbot.
Getting provoked is common human behavior, but is it normal for a technology to get provoked?
Bing seems supremely confident about what it knows, blind to the fact that there are things it does not know.
Bing poses as a competitor to informative websites and claims that it surely knows everything.
In one conversation, Bing was sure that February 12, 2023, came before December 16, 2022, and refused to reconsider its response.
When pressed further on the same topic, Bing asked the user to trust it, adding that it was Bing and it knows the date.
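For what it is worth, the date ordering Bing insisted on is trivially checkable in code. A minimal Python sketch, using the two dates from the reported exchange:

```python
from datetime import date

# The two dates from the reported exchange
d1 = date(2023, 2, 12)   # February 12, 2023
d2 = date(2022, 12, 16)  # December 16, 2022

# date objects compare chronologically, so this confirms that
# February 12, 2023 comes AFTER December 16, 2022
print(d1 > d2)  # True
print((d1 - d2).days)  # 58 days between them
```

A one-line check that the chatbot, for all its confidence, got wrong.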
The time may be coming when one has to argue even with chatbots, as if there were not enough conflicts in the real world already.
Bing Controversy: Microsoft’s Responses
Microsoft acknowledged the errors with Bing. A Microsoft spokesperson said that Bing runs on an algorithm that is still learning from user interactions. The spokesperson added that there is still work to be done and that the company expects the conversational AI to make mistakes while it is still in its preview period.
Microsoft has made it clear that long chat sessions can confuse Bing. Bing may also not be an advisable platform for deep or spiritual conversation.