OpenAI, the company behind ChatGPT, recently introduced a new AI model. However, the company wants to proceed “carefully” when rolling out OpenAI o1, as the model could be misused for the production of bioweapons.
With the introduction of ChatGPT, OpenAI revolutionized the world of artificial intelligence. Now the company has presented a new AI model: OpenAI o1.
According to the company, OpenAI o1 takes “more time to think.” In return, the AI is able to reason through complex tasks and solve more difficult problems in the areas of science, programming and mathematics.
But this is exactly what could make the new AI model dangerous. That’s why the company has given OpenAI o1 the highest risk level it has ever assigned to one of its models.
Could OpenAI o1 be misused to produce bioweapons?
OpenAI has classified its new AI model as “medium risk” with regard to the production of chemical, biological, radiological and nuclear weapons. Speaking to the Financial Times, the company stated that OpenAI o1 has “significantly improved” the ability of experts to develop bioweapons.
AI models capable of step-by-step reasoning could pose an increased risk of misuse in the wrong hands. Mira Murati, CTO of OpenAI, told the Financial Times that the company wanted to be particularly “cautious” when introducing these new capabilities to the public. Nevertheless, the AI will be generally accessible to ChatGPT subscribers and programmers.
Red teamers and experts from various scientific fields tested the model to push it to its limits. According to Murati, however, the AI performed far better than its predecessors on the general safety criteria.
Experts are calling for laws to restrict AI models
The new capabilities of the o1 model “reinforce the importance and urgency” of laws regulating artificial intelligence, Yoshua Bengio, a professor of computer science at the University of Montreal, told the Financial Times.
Such a law is currently being discussed in California. It would require the makers of AI models to take measures to minimize, for example, the risk of their models being used to develop biological weapons.
Bengio, who is one of the world’s leading AI scientists, sees the danger in the continuous advancement of AI models. “The risks will continue to increase if the right guardrails are missing,” the researcher said. “Improving AI’s ability to think and using that ability to deceive is particularly dangerous.”
The post OpenAI o1: New AI model enables the production of bioweapons by Maria Gramsch appeared first on BASIC thinking.
As a Tech Industry expert, I am deeply concerned about the implications of OpenAI’s new AI model enabling the production of bioweapons. While AI technology has the potential to bring about tremendous advancements in various fields, including healthcare and agriculture, it also poses significant risks if not properly regulated and controlled.
The idea of using AI to create bioweapons is extremely dangerous and could have catastrophic consequences if it falls into the wrong hands. It is crucial that we have strict regulations and oversight in place to prevent the misuse of such technology for nefarious purposes.
OpenAI and other organizations developing AI must prioritize ethical considerations and ensure that their technology is used for the greater good of humanity. We must work together as a global community to establish guidelines and safeguards to prevent the misuse of AI in ways that could harm society.
Ultimately, the responsibility lies with both the developers and policymakers to ensure that AI is used responsibly and ethically to benefit humanity, rather than pose a threat to our safety and security.