All of a sudden, DeepSeek is everywhere.
Its R1 model is open source, allegedly trained for a fraction of the cost of other AI models, and is just as good as, if not better than, ChatGPT.
This lethal combination hit Wall Street hard, causing tech stocks to tumble, and making investors question how much money is needed to develop good AI models. DeepSeek engineers claim R1 was trained on 2,788 GPUs which cost around $6 million, compared to OpenAI's GPT-4 which reportedly cost $100 million to train.
DeepSeek's cost efficiency also challenges the idea that larger models and more data lead to better performance. Amidst the frenzied conversation about DeepSeek's capabilities, its threat to AI companies like OpenAI, and spooked investors, it can be hard to make sense of what's going on. But veteran AI experts have weighed in with valuable perspectives.
Hampered by trade restrictions that limit its access to Nvidia GPUs, China-based DeepSeek had to get creative in developing and training R1. That it was able to accomplish this feat for only $6 million (which isn't a lot of money in AI terms) was a revelation to investors.
But AI experts weren't surprised. "At Google, I asked why they were fixated on building THE LARGEST model. Why are you going for size? What function are you trying to achieve? Why is the thing you were upset about that you didn't have THE LARGEST model? They responded by firing me," posted Timnit Gebru, who was famously terminated from Google for calling out AI bias, on X.
Hugging Face's climate and AI lead Sasha Luccioni pointed out how AI investment is precariously built on marketing and hype. "It's wild that hinting that a single (high-performing) LLM is able to achieve that performance without brute-forcing the shit out of thousands of GPUs is enough to cause this," said Luccioni.
DeepSeek R1 performed comparably to OpenAI's o1 model on key benchmarks. It marginally surpassed, equaled, or fell just below o1 on math, coding, and general knowledge tests. That is to say, other models, like Anthropic's Claude, Google's Gemini, and Meta's open source Llama, are just as capable for the average user.
But R1 is causing such a frenzy because of how little it cost to make. "It's not smarter than earlier models, just trained more cheaply," said AI research scientist Gary Marcus.
The fact that DeepSeek was able to build a model that competes with OpenAI's models is pretty remarkable. Andrej Karpathy, who co-founded OpenAI, posted on X, "Does this mean you don't need large GPU clusters for frontier LLMs? No, but you have to ensure that you're not wasteful with what you have, and this looks like a nice demonstration that there's still a lot to get through with both data and algorithms."
Wharton AI professor Ethan Mollick said it's not about R1's capabilities so much as the models people currently have access to. "DeepSeek is a really good model, but it is not generally a better model than o1 or Claude," he said. "But since it is both free and getting a ton of attention, I think a lot of people who were using free 'mini' models are being exposed to what a early 2025 reasoner AI can do and are surprised."
DeepSeek R1's breakout is a huge win for open source proponents, who argue that democratizing access to powerful AI models ensures transparency, innovation, and healthy competition. "To people who think 'China is surpassing the U.S. in AI,' the correct thought is 'open source models are surpassing closed ones,'" said Yann LeCun, chief AI scientist at Meta, which has supported open sourcing with its own Llama models.
Computer scientist and AI expert Andrew Ng didn't explicitly mention the significance of R1 being an open source model, but highlighted how the DeepSeek disruption is a boon for developers, since it allows access that is otherwise gatekept by Big Tech.
"Today's 'DeepSeek selloff' in the stock market -- attributed to DeepSeek V3/R1 disrupting the tech ecosystem -- is another sign that the application layer is a great place to be," said Ng. "The foundation model layer being hyper-competitive is great for people building applications."
Topics: Artificial Intelligence, DeepSeek