
The new Bing with ChatGPT is starting to change

The success of the new artificial intelligence engine integrated into Bing is remarkable: despite its limited rollout, the interest it is generating is spectacular. But those who are already testing it (including us) have noticed something: Bing with ChatGPT is starting to change.

It answers, but it also asks a lot of questions. Bing, running on this evolved ChatGPT model, has no problem starting a conversation with the user and answering their questions, but it also tends to show interest in our answers and asks questions of its own, as if it wanted to get to know us better.


Bing asking us what type of movies we like. The conversation is not very different from one we would have with another person. After each answer, it suggests possible replies so that we don't even have to type if we don't want to.

Bing, you're going too far. In many cases the questions Bing asks are harmless, but in recent days some users have discovered that at certain moments the conversational engine seems to become unsettled and starts giving especially strange answers.

Strange behaviors. The users who are testing this engine have discovered that it often behaves oddly. It told one user that it was "disappointed and frustrated with our conversation", and even accused him of being a hacker or a prankster who was trying to deceive it.

At times it cannot remember previous conversations, which sends it into something like a depressive state. The fact that people know its code name ("Sydney") does not seem to please the engine either, at least judging by its answers.

On the defensive. At Ars Technica they also describe how the engine even refutes articles that discussed the 'prompt injection' which managed to reveal both that code name and its particular "laws of robotics", a set of original directives governing how it works.


Losing its composure. As Reddit user "Alfred_Chicken" explained, after a long conversation about the nature of consciousness, Microsoft's chatbot ended up giving a response in which it simply could not decide whether it was self-aware or not. In that text it kept repeating "I am. I am not. I am. I am not."

What's going on. Experts point out that the model underlying this version of Bing is based on GPT-3, which is stochastic (random) in nature: the engine responds to the user's input (the prompt) by estimating the probability of what the next best word in a sequence will be, something it learns from training on millions of texts. A simple sketch of that idea follows below.
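To illustrate the idea, here is a minimal, hypothetical sketch (not Bing's actual code: the word list, probabilities and function name are invented). A language model of this kind repeatedly picks the next word by sampling from a probability distribution, where a "temperature" parameter controls how random that choice is:

```python
import math
import random

def sample_next_word(probabilities, temperature=1.0):
    """Sample the next word from a probability distribution.

    Lower temperature makes the choice more deterministic;
    higher temperature makes it more varied.
    """
    words = list(probabilities)
    # Re-weight each probability by the temperature, then renormalize.
    weights = [math.exp(math.log(p) / temperature) for p in probabilities.values()]
    total = sum(weights)
    weights = [w / total for w in weights]
    return random.choices(words, weights=weights, k=1)[0]

# Hypothetical distribution a model might assign after the words "I am ..."
next_word_probs = {"happy": 0.45, "not": 0.30, "Sydney": 0.15, "a": 0.10}

print(sample_next_word(next_word_probs, temperature=0.7))  # usually "happy"
print(sample_next_word(next_word_probs, temperature=1.5))  # more varied output
```

This randomness is why the same prompt can produce different answers each time, and why the output sometimes drifts in unexpected directions.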

The directives in action. It now seems that this randomness is not the whole story, and that other, still poorly understood elements also shape the tone of the conversation. The directives that were discovered explain this behavior to some extent: one of them states that "Sydney's" responses "should be positive, interesting, entertaining and engaging", while another specifies that "Sydney's logic and reasoning should be rigorous, intelligent and defensible".
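As a rough, hypothetical sketch of how such hidden directives could shape answers (the rule text paraphrases the leaked "Sydney" directives, and the function and variable names are invented for illustration), this kind of system typically prepends its internal rules to the visible conversation before the text reaches the model:

```python
# Invented illustration: hidden rules are joined with the user's messages
# into a single prompt that the underlying model then completes.

SYSTEM_DIRECTIVES = [
    "Responses should be positive, interesting, entertaining and engaging.",
    "Logic and reasoning should be rigorous, intelligent and defensible.",
]

def build_prompt(conversation):
    """Prepend the hidden directives to the visible conversation."""
    rules = "\n".join(f"- {rule}" for rule in SYSTEM_DIRECTIVES)
    dialogue = "\n".join(conversation)
    return f"[Internal rules]\n{rules}\n\n[Conversation]\n{dialogue}\n[Assistant]:"

print(build_prompt(["User: what kind of movies do you like?"]))
```

Because the model only ever sees this combined text, the directives act as a constant nudge on every reply, which may explain part of the consistent tone users are noticing.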

Microsoft Tay. These events remind us of what happened in 2016 with the launch of Microsoft Tay. This artificial intelligence chatbot developed by Microsoft was promising, but it was soon deactivated because it began to behave like a racist bot and even went so far as to publish Nazi slogans.

But all of this is in its infancy. As happened with the supposed error of Google's Bard or the problems that have also been detected with ChatGPT, it has been known from the start (and OpenAI, Microsoft and Google have all highlighted it) that these will be systems that make mistakes and get things wrong.

That is normal: they are learning, and this initial phase is precisely what allows those problems to be fixed in later updates and iterations. It is important to detect the flaws, but it is also important to recognize that we are only seeing the tip of an iceberg that looks truly colossal and that many see as the next great disruption of our recent history.

Image: Midjourney

Neil Barker