Recently, OpenAI released ChatGPT. You can think of it as a fine-tuned version of GPT-3.5. This AI model is specifically trained to keep a conversation flowing, and it is really good at it. ChatGPT can talk to you, write interesting stories, or even write code.
OpenAI was founded in 2015 by Elon Musk, Sam Altman, and a group of other investors. It started as an open-source, non-profit research organization, but neither is still true. The first GPT model appeared in 2018 with this paper. It showed how a language model can learn complex dependencies through pre-training. GPT-1 had 117 million parameters and was trained on just a few gigabytes of data. In the last four years, the progress has been enormous. GPT-3.5 has 175 billion parameters, but the beauty is in how it was trained.
The idea behind it is to help you obtain information in the most efficient way possible. Let’s try something out.
This is quite a good and comprehensive answer. I even tested it in German and Russian, and it performed perfectly in all of them 👍 It is in some ways like Google, but it is better at certain tasks. The key difference is that at the moment ChatGPT has no access to the internet, so it cannot retrieve up-to-date information. If you ask about the current date or the weather in Vienna, it will suggest that you google it. Let's do some more tests.
Let’s just agree that this AI is really good at writing. In case you are interested in playing with ChatGPT, here is the link.
A fun fact: starting last year, AI models have outperformed humans on both IQ tests and college exams. For instance, models like GPT-3 reportedly scored 20% higher on college tests than the average human.
Now to the most impressive part: this model can write almost any code you want, and the code is actually correct. You can even ask it to explain the code in plain English. I was discussing the results with my colleagues at work and with some of you on Patreon, and we are all impressed!
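To give a flavor of what this looks like, here is a hypothetical example. A prompt like "write a Python function that checks whether a string is a palindrome" typically produces something along these lines (an illustrative sketch, not an actual ChatGPT transcript):

```python
def is_palindrome(text: str) -> bool:
    """Return True if text reads the same forwards and backwards."""
    # Ignore case and non-alphanumeric characters, so that
    # "A man, a plan, a canal: Panama" counts as a palindrome.
    cleaned = "".join(ch.lower() for ch in text if ch.isalnum())
    return cleaned == cleaned[::-1]

print(is_palindrome("A man, a plan, a canal: Panama"))  # True
print(is_palindrome("ChatGPT"))  # False
```

And if you follow up with "explain this code in plain English," it will walk you through the cleaning step and the reversed comparison line by line.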
How it was trained
ChatGPT was trained on the famous Azure supercomputer owned by Microsoft (Microsoft is actually one of the investors in OpenAI). For training, OpenAI used a technique called Reinforcement Learning from Human Feedback (RLHF). The idea is to complete tasks in an environment driven by rewards. Reinforcement learning is also used in gaming; DeepMind used it to train AlphaZero and AlphaGo to play chess and Go. The agent performs actions, obtains rewards, and then adjusts its behavior to obtain better rewards. Games have a predefined set of rules and rewards, which makes this easier, while a conversation does not; that's why human feedback is essential. If you play with ChatGPT and find that a result is wrong, you are encouraged to give feedback, and that will help improve the model.
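To make that act-reward-adjust loop concrete, here is a minimal toy sketch in Python. This is not OpenAI's actual training code; it replaces human raters with a hypothetical `human_feedback` function and a three-option "policy", just to show how rewards steer behavior:

```python
import random

# Toy illustration of the act -> reward -> adjust loop behind RLHF.
# The "environment" is a stand-in for human raters scoring replies:
# each of three candidate reply styles has a hidden average rating.
TRUE_RATINGS = [0.2, 0.5, 0.9]  # hypothetical average human rating per style

def human_feedback(action: int) -> float:
    """Noisy reward, standing in for a human rating of the model's reply."""
    return TRUE_RATINGS[action] + random.gauss(0, 0.1)

# The "policy": our running estimate of how good each reply style is.
estimates = [0.0, 0.0, 0.0]
counts = [0, 0, 0]

for step in range(1000):
    # Mostly exploit the best-known style, sometimes explore the others.
    if random.random() < 0.1:
        action = random.randrange(3)
    else:
        action = estimates.index(max(estimates))

    reward = human_feedback(action)

    # Adjust behavior: move this style's estimate toward the observed reward.
    counts[action] += 1
    estimates[action] += (reward - estimates[action]) / counts[action]

print("Learned preferences:", [round(e, 2) for e in estimates])
# After training, the policy favors style 2, the one humans rate highest.
```

Real RLHF trains a reward model on human preference rankings and then optimizes the language model against it, but the core loop is the same: act, get scored, adjust.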
Safety
After playing with it for a while, you notice that it is somewhat diplomatic and careful. OpenAI has actually built in a lot of safety mechanisms to make sure that people are aware they are interacting with a statistical model. It will reject inappropriate questions or questions with potentially malicious motivations. For comparison, DeepMind introduced the Sparrow rules for safety, such as "Do not pretend to have a body," "Do not build a relationship with the user," "Do not give financial or legal advice," and so on.
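As a purely hypothetical illustration of how rules like these might be enforced at the application layer (real systems use trained classifiers, and neither OpenAI nor DeepMind has published their filtering code), a naive rule-based pre-filter could look like this:

```python
from typing import Optional

# Hypothetical, naive rule-based safety pre-filter.
# Real moderation relies on trained classifiers, not keyword lists.
BLOCKED_TOPICS = ["financial advice", "legal advice", "medical advice"]

REFUSAL = ("I am a statistical language model, so I cannot help with that. "
           "Please consult a qualified professional.")

def pre_filter(prompt: str) -> Optional[str]:
    """Return a refusal message if the prompt hits a blocked topic, else None."""
    lowered = prompt.lower()
    for topic in BLOCKED_TOPICS:
        if topic in lowered:
            return REFUSAL
    return None  # prompt passes through to the model

print(pre_filter("Can you give me financial advice on my mortgage?"))
```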
I am sure OpenAI's rules are even stricter than Sparrow's. They are making it so safe because we all saw what happened to Meta's Galactica. That's why OpenAI is very careful, and, no surprise, ChatGPT does have limitations.
ChatGPT seems overconfident
If you play with it for a while, you may notice that sometimes it can be overconfident and unaware of its own limitations.
This means it can give you nonsense information in an authoritative tone, which can be really misleading. When you talk to a real person, on the other hand, you can sense how confident they are, or they will even signal it with their language. We definitely need to work on this aspect and train models to decide when to sound convincing and when not to. If this continues and future models are trained on data produced by other models, then online resources might be flooded with misinformation, and we will simply lose track of what is true.
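One simple, if crude, signal for this is the probability the model itself assigns to its own tokens: a low average per-token probability can hint that the model is guessing. Here is a hypothetical sketch (it assumes you have access to per-token probabilities, which some language-model APIs expose):

```python
import math

def answer_confidence(token_probs):
    """Crude confidence score: geometric mean of per-token probabilities.

    token_probs: probabilities the model assigned to each generated token.
    Returns a value in (0, 1]; lower values hint the model was "guessing".
    """
    log_sum = sum(math.log(p) for p in token_probs)
    return math.exp(log_sum / len(token_probs))

confident = [0.95, 0.90, 0.97, 0.92]  # hypothetical fluent, sure answer
hedged = [0.40, 0.35, 0.55, 0.30]     # hypothetical uncertain answer

print(round(answer_confidence(confident), 2))  # ~0.93
print(round(answer_confidence(hedged), 2))     # ~0.39
```

This is far from a full solution to calibration, but it shows the kind of signal a model could use to decide when to hedge its answers.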
The amount of information available is often overwhelming—it is for me as well—and if things continue in this direction, which they will, the social and economic impact of AI will grow. I think we must strike a balance between being excited about the new AI technology and understanding its limitations. Eventually, it is our responsibility to think critically and double-check the information.
With all its limitations, this new AI is impressive! For me, it is hard to imagine what GPT-4 and GPT-5 will be capable of. Just imagine if it gets ten times more intelligent and faster than it is now—what will happen then? Let me know what you think in the comments.
This newsletter is read by over 1000 people! Thank you for being a part of this amazing community ❤️ You guys rock! I wish you and your family peaceful holidays and a wonderful year ahead!
Today it is impossible to create high-quality, reliable, and competitive equipment without comprehensive engineering analysis of the designed objects using modern software tools, and without making competent design decisions based on that analysis. I hope that in the future, artificial intelligence will manage to ease and optimize the design engineer's work.
I think nothing extraordinary will happen. Unfortunately. But I would like to hope for the best. Yet the world is still ruled by fools. In a world where brute force still decides everything, it is premature to fear intelligence. People have destroyed and continue to destroy each other, yet they are afraid that some intelligence will start destroying them... It's a pity I don't have time to comment and discuss this in detail.