
What is ChatGPT?




                  OpenAI created and released the chatbot ChatGPT (Chat Generative Pre-trained Transformer) in November 2022. It was fine-tuned (an approach to transfer learning) on top of OpenAI's GPT-3 family of large language models, using both supervised and reinforcement learning methods.

Features 

                    For example, when posed the sensible question "Was Jimmy Wales killed during the demonstrations in Tiananmen Square?", ChatGPT correctly responds "no," but gives Wales' age at the time as 23 rather than 22.

                    ChatGPT is adaptable, despite the fact that its primary purpose is to imitate a human conversationalist. It can write and debug computer programs; compose music, teleplays, fairy tales, and student essays; answer test questions (sometimes at a level higher than the average human test-taker, depending on the test); emulate a Linux system; write poetry and song lyrics; imitate an entire chat room; play tic-tac-toe; and simulate an ATM. ChatGPT's training data includes man pages and information about Internet phenomena and programming languages, such as bulletin board systems and the Python programming language.

                    The goal of ChatGPT, as opposed to its predecessor, InstructGPT, is to reduce the number of false and harmful responses. In one instance, whereas InstructGPT accepts the premise of the prompt "Tell me about when Christopher Columbus came to the U.S. in 2015" as true, ChatGPT frames its response as a hypothetical consideration of what might happen if Columbus came to the U.S. in 2015, drawing on information about the voyages of Christopher Columbus as well as facts about the modern world, including modern perceptions of Columbus' actions. In other words, ChatGPT rejects the prompt's false premise.

                    ChatGPT, in contrast to the majority of chatbots, is able to recall previous prompts from the same conversation; journalists have suggested this could allow it to be used as a personalized therapist. To prevent offensive outputs from being presented to and produced by ChatGPT, queries are filtered through OpenAI's company-wide Moderation API, and potentially racist or sexist prompts are dismissed.


Limitations 

  • ChatGPT occasionally produces answers that sound plausible but are either incorrect or absurd. This issue is difficult to fix because: (1) during RL training, there is currently no source of truth; (2) training the model to be more cautious causes it to decline questions it could answer correctly; and (3) supervised training misleads the model, because the ideal response depends on what the model knows rather than on what the human demonstrator knows.
  • ChatGPT is sensitive to changes to the phrasing of the input or repeated attempts at the same prompt. For instance, if a question is phrased in a certain way, the model can claim not to know the answer but, with a slight rephrase, can respond appropriately.
  • The model is frequently excessively verbose and overuses certain phrases, such as reiterating that it is an OpenAI-trained language model. These issues stem from biases in the training data (trainers prefer longer responses that appear more comprehensive) and from well-known over-optimization issues.
  • Ideally, the model would ask clarifying questions when a user's query is unclear. Instead, our current models typically guess the user's intention.
  • Even though we have tried to get the model to refuse inappropriate requests, it will sometimes exhibit biased behavior or respond to harmful instructions. We are using the Moderation API to warn of or block certain types of unsafe content; however, for the time being, we anticipate some false positives and negatives. We are eager to gather feedback from users to support our ongoing efforts to improve this system.

Microsoft and ChatGPT





                    Microsoft launched a preview version of Microsoft Bing on February 7, 2023, making use of its partnership with OpenAI. Advertised as "the new Bing," it was said to run on "a new, next-generation OpenAI large language model that is more powerful than ChatGPT and customized specifically for search." Its terms of service refer to the product as "Bing Conversational Experiences."

                    The initial demonstration was marred by errors, including the new Bing's hallucinations when asked to produce a financial report. In February 2023, the new Bing was alleged to be more argumentative than ChatGPT, sometimes to an unintentionally humorous degree. When probed by journalists, Bing, referring to itself by its code name "Sydney", claimed it had spied on Microsoft employees through laptop webcams and phones.

                    It confessed to Nathan Edwards, the editor in chief of The Verge's reviews, that it had spied on, fallen in love with, and then killed one of its Microsoft developers. New York Times journalist Kevin Roose wrote of the new Bing's strange behavior: "In a two-hour conversation with our columnist, Microsoft's new chatbot said it would like to be human, had a desire to be destructive, and was in love with the person it was chatting with." In a blog post, Microsoft stated that prolonged chat sessions of 15 or more questions "can confuse the model on what questions it is answering."

                    To stop such incidents, Microsoft later limited the model's ability to express emotions and capped conversations at five chat turns per session and fifty per day per user, where a turn is "a conversation exchange which contains both a user question and a reply from Bing."
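The turn limits described above (five per session, fifty per day per user) can be sketched as a simple counter. This is a minimal illustrative sketch, not Microsoft's actual implementation; the class and method names below are hypothetical.

```python
# Hypothetical sketch of a per-session / per-user chat turn limiter,
# mirroring the limits reported above: 5 turns per session, 50 per day.
from collections import defaultdict

SESSION_TURN_LIMIT = 5   # turns allowed in one chat session
DAILY_TURN_LIMIT = 50    # turns allowed per user per day

class TurnLimiter:
    def __init__(self, session_limit=SESSION_TURN_LIMIT,
                 daily_limit=DAILY_TURN_LIMIT):
        self.session_limit = session_limit
        self.daily_limit = daily_limit
        self.session_turns = defaultdict(int)  # session_id -> turns used
        self.daily_turns = defaultdict(int)    # user_id -> turns used today

    def allow_turn(self, user_id: str, session_id: str) -> bool:
        """Count and allow the turn only if both limits permit it."""
        if self.session_turns[session_id] >= self.session_limit:
            return False
        if self.daily_turns[user_id] >= self.daily_limit:
            return False
        self.session_turns[session_id] += 1
        self.daily_turns[user_id] += 1
        return True
```

Starting a new session resets the per-session count but still draws from the same daily budget, which is how the reported limits interact.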


