Thursday, December 7, 2023

GPT-4 was already a gigantic leap over ChatGPT. GPT-4 32K simply changes the rules of the game

The launch of GPT-4 in March 2023 showed that ChatGPT was only the beginning. The new model is much more powerful, but there is also no single version of GPT-4. The one we know is the "basic" version, but there is another, even more powerful one that offers expanded and truly impressive capabilities.

Basic version of GPT-4. As OpenAI explains, GPT-4 has a context length of 8,192 tokens (8K). What does this mean? That, at most, we can use prompts in which the total number of tokens is 8,192. A token can correspond to a word, or to part of a word, depending on the text, but in simple terms we could say that GPT-4 in that "basic" version supports about 12 pages of text.
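To get a feel for that limit, here is a minimal sketch that estimates whether a prompt fits in the 8K window. It uses the common rule of thumb that one token is roughly three quarters of an English word; the heuristic and function names are our own, and an exact count would require OpenAI's actual tokenizer.

```python
# Rough check of whether a prompt fits in GPT-4's 8K context window.
# Heuristic (an assumption, not OpenAI's tokenizer): ~4/3 tokens per word.

GPT4_8K_CONTEXT = 8_192  # tokens, as stated by OpenAI


def estimate_tokens(text: str) -> int:
    """Approximate token count: about 4/3 tokens per English word."""
    words = len(text.split())
    return round(words * 4 / 3)


def fits_in_context(text: str, limit: int = GPT4_8K_CONTEXT) -> bool:
    """True if the estimated token count fits in the given window."""
    return estimate_tokens(text) <= limit


prompt = "Summarize the following report. " * 100  # ~500 words
print(estimate_tokens(prompt), fits_in_context(prompt))
```

By this estimate, the 8K window holds roughly 6,000 words, which lines up with the article's "about 12 pages" figure.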


And what about ChatGPT? According to expert estimates, GPT-3, launched in 2020, had a context length of 2,048 tokens. For its part, ChatGPT, which is based on GPT-3.5, doubled that figure to 4,096 tokens. This has a fundamental consequence for how we use these models: we cannot pose very long questions to GPT-3 or ChatGPT, nor, of course, feed them very long documents and then ask questions about them.

GPT-4 32K. But in addition to the standard or basic version, OpenAI offers a version of GPT-4 with a context length of 32,768 tokens, which means you can introduce about 50 pages of text as input. Access to this version is even more limited than to GPT-4, and its price is double that of GPT-4 8K.

With 32K there is much more room for maneuver. Those who have tried it say that "GPT-4 32K makes the normal version of GPT-4 look like a toy". The options it offers are much greater, especially when working with long documents. The clearest example is summarizing those documents and answering questions about them, something very useful that an entrepreneur named Matt Shumer did with a research study.

More code, please. The same applies to developers. Feeding it the complete code of an application of respectable size (again, those 32K tokens give a lot of room for maneuver) lets GPT-4 analyze that code and suggest improvements or fix errors. The options in this area are also enormous.

Buying yourself time. GPT-4 and ChatGPT let you save time and dedicate it to other things, and GPT-4 32K does exactly the same, but with greater capacity. It is possible, for example, to feed it dozens of complete articles from the internet and then obtain a personalized summary of current events.

A personal AI. Shumer also talked about how, thanks to that context length, it was possible to feed in data about himself and his company, HyperWrite. From there, GPT-4 could answer questions and talk about aspects related to that data, thus becoming a more personal assistant.

It's not cheap (or maybe it is). Obviously, the usage price of GPT-4 32K means that taking advantage of these options can get somewhat expensive. Bearing in mind that the cost is $0.06 per 1,000 tokens, a prompt that uses the model's full capacity implies paying about $2 per question. That figure may seem high if one is not making very profitable use of these options, but for many people (companies, developers, professional users) the return in time and resources can be spectacular, and in those cases GPT-4 32K can be anything but expensive.
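The arithmetic behind that "about $2" figure is simple enough to write down. This is a back-of-the-envelope sketch using the $0.06 per 1,000 tokens rate cited above; the function name is ours, and real billing also counts output tokens, which this ignores.

```python
# Back-of-the-envelope cost of a maxed-out GPT-4 32K prompt.

PRICE_PER_1K_TOKENS = 0.06  # dollars, rate cited in the article
CONTEXT_32K = 32_768        # tokens in the GPT-4 32K window


def prompt_cost(tokens: int, price_per_1k: float = PRICE_PER_1K_TOKENS) -> float:
    """Dollar cost of a prompt at a per-1,000-token rate."""
    return tokens / 1_000 * price_per_1k


full_prompt = prompt_cost(CONTEXT_32K)
print(f"${full_prompt:.2f}")  # $1.97, i.e. the "about 2 dollars" in the text
```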

In Xataka | “We were wrong”: OpenAI’s AI was too open, so absolute secrecy prevailed in GPT-4

Neil Barker
Hi there! I am Neil Barker, a tech enthusiast who believes in the power of open-source software.

