Spy allegations: Did DeepSeek steal data from ChatGPT?



The Chinese AI DeepSeek is said to keep up with the large language models while being both cheaper and more efficient. ChatGPT developer OpenAI is now following up on indications of espionage and data theft.

The Chinese company DeepSeek has thrown the AI industry into turmoil: its language model "DeepSeek R1" and the app of the same name took the app stores by storm overnight. In the United States, the app even overtook ChatGPT and was temporarily number one on the iPhone download charts.

DeepSeek R1: OpenAI boss reacts calmly

DeepSeek's AI model is said not only to keep up with the large language models from Google, Meta, and OpenAI, but even to surpass them. This caused panic on the stock exchanges, especially in the United States. But the hysteria seems exaggerated.

DeepSeek does call the business models of other AI providers into question and will shake up the AI industry. However, some assessments are based on misunderstandings. OpenAI CEO Sam Altman therefore reacted calmly. In a post on X (formerly Twitter), he wrote:

DeepSeek's R1 is an impressive model, particularly for what they are able to deliver for the price. We will obviously deliver much better models, and it is also genuinely invigorating to have a new competitor! We will pull up some releases.

Data stolen from ChatGPT? Spy allegations against DeepSeek

Behind the scenes, however, things look different. According to a report by the news agency Bloomberg, OpenAI and its partner Microsoft are pursuing indications of espionage and data theft. The accusation: DeepSeek may have siphoned off data from ChatGPT.


Security experts at Microsoft had already observed extensive data transfers via an interface (API) of the OpenAI software at the end of 2024. The data thieves are said to be connected to the Chinese company DeepSeek.

David Sacks, AI representative of the US government, said in an interview with Fox News: "There is substantial evidence that DeepSeek distilled knowledge out of OpenAI's models." Microsoft, DeepSeek, and OpenAI have not yet commented.
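For context, "distillation" here means using a stronger model's answers as training material for a smaller model, typically by collecting those answers at scale through a public API. The following Python sketch is purely illustrative and rests on assumptions (the teacher model name, the prompts, and the output file are invented for the example); it shows the general pattern of harvesting prompt/answer pairs on which a "student" model could later be fine-tuned. It does not describe how DeepSeek may actually have obtained any data.

```python
# Minimal, hypothetical sketch of distillation via a public API:
# send prompts to a "teacher" model and store its answers as training
# data for a smaller "student" model. Model name, prompts, and output
# file are illustrative assumptions, not details from the report.
import json
from openai import OpenAI  # official OpenAI Python SDK (v1.x)

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompts = [
    "Explain the difference between supervised and unsupervised learning.",
    "Summarize how transformers process a sequence of tokens.",
]

with open("distillation_data.jsonl", "w", encoding="utf-8") as f:
    for prompt in prompts:
        # Query the teacher model through the regular chat completions endpoint.
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed teacher model for illustration
            messages=[{"role": "user", "content": prompt}],
        )
        answer = response.choices[0].message.content
        # Store the prompt/answer pair; a student model would later be
        # fine-tuned on records like these.
        record = {
            "messages": [
                {"role": "user", "content": prompt},
                {"role": "assistant", "content": answer},
            ]
        }
        f.write(json.dumps(record) + "\n")
```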

Also interesting:

  • AI project "Stargate": USA puts Europe under pressure despite unclear financing
  • "Free Our Feeds": New initiative aims to protect social media from billionaires
  • Fraud in the Twitter takeover? US securities regulator sues Elon Musk
  • Electricity and water consumption: The effects of AI on the environment

The post "Spy allegations: Did DeepSeek steal data from ChatGPT?" by Fabian Peters first appeared on Basic Thinking. You can also follow us on Google News and Flipboard.


As a tech industry expert, I believe spy allegations like these should be approached with caution and investigated thoroughly before anyone jumps to conclusions. In the case of DeepSeek allegedly stealing data from ChatGPT, the facts and evidence need to be gathered before any judgment is made.

If the allegations are substantiated, it would be a serious breach of trust with significant consequences for the companies involved. Data theft is a serious problem in the tech industry, with far-reaching implications for individuals and businesses.

Companies therefore need robust security measures to protect their data and prevent unauthorized access, along with careful vetting of employees and partners to reduce the risk of data theft.

Overall, the tech industry should take spy allegations seriously and work together to prevent and address breaches of data security. Transparency and accountability are essential to maintaining trust and integrity within the industry.
