OpenAI recently introduced a new AI model, CriticGPT, designed to find errors in ChatGPT's output. According to the company's research, the tool outperforms human reviewers in 63 percent of cases and could therefore help make AI better.
Since the introduction of ChatGPT, artificial intelligence has become part of everyday life for many people. But the system is not error-free and can sometimes exhibit certain biases. OpenAI, the company behind the tool, recently presented a new model called CriticGPT, designed specifically to detect errors in ChatGPT's code.
The development is intended to improve the process of aligning AI systems with human requirements by supporting human reviewers and increasing the accuracy of output from large language models (LLMs). CriticGPT, based on the GPT-4 family, analyzes code and points out potential errors. This makes it easier for human reviewers to spot mistakes that might otherwise be missed.
CriticGPT: preferred over human reviewers in 63 percent of cases
In a research paper titled “LLM Critics Help Catch LLM Bugs,” OpenAI researchers showed that CriticGPT performed better than human reviewers 63 percent of the time. This was due, among other things, to the tool producing fewer unhelpful nitpicks and fewer false alarms.
OpenAI trained the model to detect a wide range of coding errors. To do this, the team trained the algorithm on a database of code examples that contained intentionally inserted bugs.
This method allows CriticGPT to detect both deliberately injected and naturally occurring errors in ChatGPT’s output. The tool can find errors not only in actual code, but also in other kinds of tasks.
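The bug-injection approach described above can be illustrated with a toy sketch: tamper with a known-good snippet, record what was changed, and check whether a critic flags it. Everything here (the `inject_bug` helper, the trivial pattern-matching `toy_critic`) is an illustrative assumption, not OpenAI's actual implementation, which uses a trained GPT-4-based model.

```python
# Toy sketch of critic evaluation via intentionally injected bugs.
# The helper names and the pattern-matching "critic" are hypothetical
# stand-ins, not OpenAI's method.

def inject_bug(code: str) -> tuple[str, str]:
    """Tamper with a correct snippet and record a description of the change."""
    buggy = code.replace("<=", "<", 1)  # weaken a comparison (off-by-one style bug)
    return buggy, "comparison operator weakened (<= -> <)"

def toy_critic(code: str) -> list[str]:
    """Stand-in critic: flags suspicious patterns with short critiques."""
    critiques = []
    if "range(len(" in code:
        critiques.append("consider iterating directly over the sequence")
    if " < n" in code and "<=" not in code:
        critiques.append("loop bound may exclude the last element")
    return critiques

correct = "for i in range(len(xs)):\n    if i <= n:\n        total += xs[i]"
buggy, description = inject_bug(correct)

found = toy_critic(buggy)
caught = any("last element" in c for c in found)
print(f"injected bug: {description}")
print(f"critic caught it: {caught}")
```

Because the injected bug is known in advance, a score such as the catch rate over many tampered samples can be computed automatically, which is what makes this kind of training and evaluation data cheap to produce.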
In experiments, the model identified errors in 24 percent of ChatGPT training samples that human reviewers had previously classified as error-free. A human team later confirmed these errors, underscoring CriticGPT’s potential for reviewing non-code tasks.
Effectiveness for more complex inputs has not yet been proven
Despite the promising results, CriticGPT, like all AI models, has its limitations. The team at OpenAI trained it on relatively short ChatGPT responses, which may not be sufficient for evaluating longer, more complex tasks. In addition, CriticGPT itself is not immune to producing incorrect output.
OpenAI plans to integrate CriticGPT-like models into its own processes to provide AI-powered support to trainers. This is intended to be a step towards better tools for evaluating outputs of LLM systems, which are difficult for humans to evaluate without additional support.
Also interesting:
- European elections 2024: ChatGPT makes the Wahl-O-Mat
- OpenAI will use Reddit content to train ChatGPT in the future
- GPT-4o: All information about the new ChatGPT version from OpenAI
- Dr. ChatGPT: “Tell me what I want to hear” – Beware of self-diagnosis with AI
The post CriticGPT: New AI model from OpenAI should detect errors in ChatGPT by Felix Baumann appeared first on BASIC thinking.