老虎我最凶
1 Following · 0 Followers · 0 Topics · 0 Badges

Posts
老虎我最凶 · 2025-09-10
Win win win
(Sorry, the original content has been removed.)

老虎我最凶 · 2025-09-10
Win win win
(Sorry, the original content has been removed.)

老虎我最凶 · 2025-02-01
double win!
Repost: Altman's "Confession": On the Wrong Side of Open-Source AI, as DeepSeek Takes OpenAI's Advantage Away
{"i18n":{"language":"en_US"},"userPageInfo":{"id":"4198132281708462","uuid":"4198132281708462","gmtCreate":1735031377303,"gmtModify":1737015698431,"name":"老虎我最凶","pinyin":"lhwzxlaohuwozuixiong","introduction":"","introductionEn":"","signature":"","avatar":"https://community-static.tradeup.com/news/cea559f3cd03b229f43522cf0695b634","hat":null,"hatId":null,"hatName":null,"vip":1,"status":2,"fanSize":0,"headSize":1,"tweetSize":33,"questionSize":0,"limitLevel":0,"accountStatus":0,"level":{"id":0,"name":"","nameTw":"","represent":"","factor":"","iconColor":"","bgColor":""},"themeCounts":0,"badgeCounts":0,"badges":[],"moderator":false,"superModerator":false,"manageSymbols":null,"badgeLevel":null,"boolIsFan":false,"boolIsHead":false,"favoriteSize":0,"symbols":null,"coverImage":null,"realNameVerified":"success","userBadges":[{"badgeId":"972123088c9646f7b6091ae0662215be-1","templateUuid":"972123088c9646f7b6091ae0662215be","name":"Elite Trader","description":"Total number of securities or futures transactions reached 30","bigImgUrl":"https://static.tigerbbs.com/ab0f87127c854ce3191a752d57b46edc","smallImgUrl":"https://static.tigerbbs.com/c9835ce48b8c8743566d344ac7a7ba8c","grayImgUrl":"https://static.tigerbbs.com/76754b53ce7a90019f132c1d2fbc698f","redirectLinkEnabled":0,"redirectLink":null,"hasAllocated":1,"isWearing":0,"stamp":null,"stampPosition":0,"hasStamp":0,"allocationCount":1,"allocatedDate":"2025.02.04","exceedPercentage":"60.34%","individualDisplayEnabled":0,"backgroundColor":null,"fontColor":null,"individualDisplaySort":0,"categoryType":1100},{"badgeId":"a83d7582f45846ffbccbce770ce65d84-1","templateUuid":"a83d7582f45846ffbccbce770ce65d84","name":"Real Trader","description":"Completed a 
transaction","bigImgUrl":"https://static.tigerbbs.com/2e08a1cc2087a1de93402c2c290fa65b","smallImgUrl":"https://static.tigerbbs.com/4504a6397ce1137932d56e5f4ce27166","grayImgUrl":"https://static.tigerbbs.com/4b22c79415b4cd6e3d8ebc4a0fa32604","redirectLinkEnabled":0,"redirectLink":null,"hasAllocated":1,"isWearing":0,"stamp":null,"stampPosition":0,"hasStamp":0,"allocationCount":1,"allocatedDate":"2025.01.02","exceedPercentage":null,"individualDisplayEnabled":0,"backgroundColor":null,"fontColor":null,"individualDisplaySort":0,"categoryType":1100}],"userBadgeCount":2,"currentWearingBadge":null,"individualDisplayBadges":null,"crmLevel":2,"crmLevelSwitch":0,"location":null,"starInvestorFollowerNum":0,"starInvestorFlag":false,"starInvestorOrderShareNum":0,"subscribeStarInvestorNum":1,"ror":null,"winRationPercentage":null,"showRor":false,"investmentPhilosophy":null,"starInvestorSubscribeFlag":false},"baikeInfo":{},"tab":"post","tweets":[{"id":477028461048072,"gmtCreate":1757468654140,"gmtModify":1757468656545,"author":{"id":"4198132281708462","authorId":"4198132281708462","name":"老虎我最凶","avatar":"https://community-static.tradeup.com/news/cea559f3cd03b229f43522cf0695b634","crmLevel":2,"crmLevelSwitch":0,"followedFlag":false,"idStr":"4198132281708462","authorIdStr":"4198132281708462"},"themes":[],"htmlText":"Win win win ","listText":"Win win win ","text":"Win win 
win","images":[],"top":1,"highlighted":1,"essential":1,"paper":1,"likeSize":1,"commentSize":0,"repostSize":0,"link":"https://ttm.financial/post/477028461048072","repostId":"1141889500","repostType":2,"isVote":1,"tweetType":1,"viewCount":1020,"authorTweetTopStatus":1,"verified":2,"comments":[],"imageCount":0,"langContent":"EN","totalScore":0},{"id":477027578634456,"gmtCreate":1757468191951,"gmtModify":1757468195796,"author":{"id":"4198132281708462","authorId":"4198132281708462","name":"老虎我最凶","avatar":"https://community-static.tradeup.com/news/cea559f3cd03b229f43522cf0695b634","crmLevel":2,"crmLevelSwitch":0,"followedFlag":false,"idStr":"4198132281708462","authorIdStr":"4198132281708462"},"themes":[],"htmlText":"Win win win ","listText":"Win win win ","text":"Win win win","images":[],"top":1,"highlighted":1,"essential":1,"paper":1,"likeSize":1,"commentSize":0,"repostSize":0,"link":"https://ttm.financial/post/477027578634456","repostId":"1141889500","repostType":2,"isVote":1,"tweetType":1,"viewCount":960,"authorTweetTopStatus":1,"verified":2,"comments":[],"imageCount":0,"langContent":"EN","totalScore":0},{"id":398776793673920,"gmtCreate":1738382646956,"gmtModify":1738382650443,"author":{"id":"4198132281708462","authorId":"4198132281708462","name":"老虎我最凶","avatar":"https://community-static.tradeup.com/news/cea559f3cd03b229f43522cf0695b634","crmLevel":2,"crmLevelSwitch":0,"followedFlag":false,"idStr":"4198132281708462","authorIdStr":"4198132281708462"},"themes":[],"htmlText":"double win!","listText":"double win!","text":"double win!","images":[],"top":1,"highlighted":1,"essential":1,"paper":1,"likeSize":0,"commentSize":0,"repostSize":0,"link":"https://ttm.financial/post/398776793673920","repostId":"1120077738","repostType":2,"repost":{"id":"1120077738","kind":"news","pubTimestamp":1738381392,"share":"https://ttm.financial/m/news/1120077738?lang=en_US&edition=fundamental","pubTime":"2025-02-01 11:43","market":"us","language":"zh","title":"Ultraman \"Confession\": 
Standing on the Wrong Team on Open Source AI! DeepSeek takes OpenAI advantage away","url":"https://stock-news.laohu8.com/highlight/detail?id=1120077738","media":"新智元","summary":"下一个是GPT-5。","content":"<p><html><head></head><body>While everyone was still marveling at DeepSeek's amazing strength, OpenAI finally couldn't sit still.</p><p>In the early hours of last night, the o3-mini went online in an emergency, refreshing SOTA in benchmarks such as math codes and returning to the throne.</p><p>Most importantly, free users can also experience it!</p><p>The strength of o3-mini is not to be bragged. In the \"Last Human Examination\", o3-mini (high) is the best in both accuracy and Calibration Error.</p><p><p class=\"t-img-caption\"><img src=\"https://static.tigerbbs.com/4c22b9ec8914751bc5abe6021c200f9a\" title=\"\" tg-width=\"1080\" tg-height=\"1020\"/></p><p>A few hours after o3-mini went online, OpenAI officially opened Reddit AMA for about 1 hour of online Q&A.</p><p>Altman himself went online and answered all the questions from netizens.</p><p><p class=\"t-img-caption\"><img src=\"https://static.tigerbbs.com/b529a27074e998944972eca3a1e5b0b7\" title=\"\" tg-width=\"1080\" tg-height=\"832\"/></p><p>The main highlights are:</p><p><ul style=\"list-style-type: disc;\"><li>DeepSeek is really good, and we will continue to develop better models, but the lead will not be as big as before</p><p></li><li>Rather than a few years ago, I am more inclined to think that AI is likely to emerge rapidly and rapidly</p><p></li><li>On the issue of open source weighted AI models, we are on the wrong side</p><p></li><li>The advanced voice mode is coming up with an update, which we'll call GPT-5 directly instead of GPT-5o, and there's no specific timeline yet.</p><p></li></ul>In addition to Altman himself, Mark Chen, Chief Research Officer, Kevin Weil, Chief Product Officer, Srinivas Narayanan, Vice President of Engineering, Michelle Pokrass, Head of API Research, and Hongyu Ren, Head of 
Research, were also online and answered all the questions of netizens carefully.</p><p>Next, let's take a look at what they all said.</p><p><strong>Ultraman deeply repented and stood on the wrong team on open source AI</strong></p><p>DeepSeek suddenly counterattacked, perhaps something that no one expected.</p><p>In the AMA Q&A, Altman himself also deeply confessed that he was on the wrong team on open source AI, and had to admit the powerful advantages of DeepSeek.</p><p>To the amazement of many people, Altman actually said that OpenAI's lead is not as good as before.</p><p><p class=\"t-img-caption\"><img src=\"https://static.tigerbbs.com/a555c5dee2ce12ce297456a37aeb64b9\" alt=\"\" title=\"\" tg-width=\"1080\" tg-height=\"316\"/></p><p>All of the following are our compilation of Ultraman's classic answers.</p><p><strong>Q: Let's talk about the big topic of the week: Deepseek. Obviously this is a very impressive model, and I also know that it was probably trained on the output of other LLMs. How would this change your plans for future models?</strong></p><p>Ultraman: It is indeed a very good model! We will develop better models, but we won't maintain as much of a lead as we have in previous years.</p><p><p class=\"t-img-caption\"><img src=\"https://static.tigerbbs.com/30050f5f20ed6d7c0bb40a1780d02d5b\" alt=\"\" title=\"\" tg-width=\"1080\" tg-height=\"550\"/></p><p><strong>Q: Do you think recursive self-improvement will be a gradual process, or one that takes off suddenly?</strong></p><p>Altman: Personally, I think that I am more inclined to think that AI may advance rapidly than a few years ago. It may be time to write something on this topic...</p><p><p class=\"t-img-caption\"><img src=\"https://static.tigerbbs.com/f68199e96b99c46e8b299a9d90eae294\" alt=\"\" title=\"\" tg-width=\"1080\" tg-height=\"463\"/></p><p><strong>Q: Can we see all the tokens the model thinks about?</strong></p><p>Ultraman: Yes, we'll be showing a more helpful, detailed version soon. 
Thanks to R1 for the updated information.</p><p><p class=\"t-img-caption\"><img src=\"https://static.tigerbbs.com/72407471b6a5ff25832883af26985910\" alt=\"\" title=\"\" tg-width=\"1080\" tg-height=\"400\"/></p><p>Kevin Weil, Chief Product Officer: We're trying to show more content than we've got right now — and that's going to happen soon. As for showing everything is yet to be determined, showing all chains of thought (CoT) results in model distillation for competitors, but we also know that users (at least advanced users) want to see these, so we'll find a proper balance.</p><p><p class=\"t-img-caption\"><img src=\"https://static.tigerbbs.com/d7ca7bd7647c025f9ee6b93c60c35e8e\" alt=\"\" title=\"\" tg-width=\"1080\" tg-height=\"313\"/></p><p><strong>Q: When will the full-blood version of o3 go online?</strong></p><p>Altman: I estimate it will be more than a few weeks, but not more than a few months.</p><p><p class=\"t-img-caption\"><img src=\"https://static.tigerbbs.com/7c2b17cac2383cb54c6008fdf3846d7a\" alt=\"\" title=\"\" tg-width=\"1080\" tg-height=\"387\"/></p><p><strong>Q: Will there be an update to the voice mode? Is this the focus of a potential GPT-5o? What is the approximate timeline for GPT-5o?</strong></p><p>Ultraman: Yes, an update to Advanced Voice Mode is coming soon! I think we'll just call it GPT-5 instead of GPT-5o. There is no specific timeline yet.</p><p><p class=\"t-img-caption\"><img src=\"https://static.tigerbbs.com/9a299ae6009abd55614bc73665df44b7\" alt=\"\" title=\"\" tg-width=\"1080\" tg-height=\"555\"/></p><p><strong>Q: Would you consider publishing some model weights and publishing some research?</strong></p><p>Altman: Yeah, we're talking about it. 
Personally, I think we are on the wrong side on this issue and need to come up with a different open source strategy; Not everyone at OpenAI holds that view, and it's not our highest priority right now.</p><p>Another question bomb:</p><p><ul style=\"list-style-type: disc;\"><li><strong>How far are we from offering Operator on a regular Plus plan?</strong></p><p></li><li><strong>What are the top objectives of the robotics division?</strong></p><p></li><li><strong>What does OpenAI think of more specialized chips/TPUs like Trillium, Cerebras, etc? Is OpenAI looking at this?</strong></p><p></li><li><strong>What to invest in to hedge AGI and ASI's future risks?</strong></p><p></li><li><strong>What was your most memorable vacation?</strong></p><p></li></ul>Ultraman:</p><p><ul style=\"list-style-type: disc;\"><li>Several months</p><p></li><li>First, produce a really good robot on a small scale and learn from it</p><p></li><li>The GB200 is hard to surpass at the moment!</p><p></li><li>A good choice is to boost your inner state-resilience, resilience, calm, happiness, etc</p><p></li><li>It's hard to choose! But the first two that come to mind: backpacking in Southeast Asia or safari trips in Africa</p><p></li></ul><strong>Q: Do you plan to increase the price of the Plus series?</strong></p><p>ALTMAN: I actually want to taper off.</p><p><strong>Q: Let's say it's 2030 and you just created what most people would call AGI. It performs well on all test benchmarks, and it outperforms your best engineers and researchers in both speed and performance. What's next? 
Are there any other plans besides \"putting it on the site to provide the service\"?</strong></p><p>Altman: The most important impact, in my opinion, will be accelerating the speed of scientific discovery, which I think is the biggest contributor to improving quality of life.</p><p><strong>4o image generation, coming soon</strong></p><p>Next, added are responses from other OpenAI members.</p><p><strong>Q: Are you still planning to launch a 4o image generator?</strong></p><p>Chief Product Officer Kevin Weil: Yes! We're working on it. And I think the wait is worth it.</p><p><strong>Q: Great! Is there a rough timeline?</strong></p><p>Kevin Weil, Chief Product Officer: You're trying to get me into trouble. Maybe a few months.</p><p>There's a similar problem.</p><p><strong>Q: When can we see ChatGPT-5?</strong></p><p>Chief Product Officer Kevin Weil: Just shortly after o-17 micro and GPT- (π +1).</p><p>Another question bomb:</p><p><ul style=\"list-style-type: disc;\"><li><strong>What other types of agents can we expect?</strong></p><p></li><li><strong>Also provide an agent for free users, which can speed up adoption...</strong></p><p></li><li><strong>Any updates on the new version of DALL E?</strong></p><p></li><li><strong>One last question, and one that everyone asks... when will AGI be implemented?</strong></p><p></li></ul>Chief Product Officer Kevin Weil:</p><p><ul style=\"list-style-type: disc;\"><li>About More Agents: Very, very soon. I think you'll be satisfied.</p><p></li><li>4o based image generation: I can't wait for you guys to use it in about a few months. It was great.</p><p></li><li>AGI: Yes</p><p></li></ul><strong>Q: Are you planning to add file attachment functionality to the inference model?</strong></p><p>Srinivas Narayanan, Vice President of Engineering: Under development. 
Future inference models will be able to use different tools, including retrieval functions.</p><p>Kevin Weil, Chief Product Officer: Just to say, I can't wait to see inference models that can use tools:)</p><p><strong>Q: Really. When you solve this problem, some very useful AI application scenarios will be opened. Imagine it being able to understand the contents of your 500GB work document.</strong></p><p>When you're ready to respond to an email, a panel opens next to your email app that continuously analyzes all the information related to the person, including your relationship, topics discussed, past work, and more. Perhaps something from a certain document you've long forgotten will be flagged because it is so relevant to the current discussion. I want this feature so badly.</p><p>Srinivas Narayanan, Vice President of Engineering: We are working on increasing the context length. No definite date/announcement yet.</p><p><strong>Q: How important is the Stargate project for OpenAI's future?</strong></p><p>Kevin Weil, Chief Product Officer: Very important. Everything we've seen shows that the more computing power we have, the better models we can build, and the more valuable products we can make.</p><p>We are now scaling the model in two dimensions simultaneously – larger-scale pre-training, and more reinforcement learning (RL) /\"strawberry\" training – both of which require computational resources.</p><p>Serving hundreds of millions of users also requires computing resources! And as we move to more intelligent agency products that work for you consistently, that also requires computing resources. So you can think of Stargate as our factory, where power/GPUs are turned into amazing products.</p><p><strong>Q: Internally, which model are you using now? o4, o5 or o6? 
How much smarter are these internal models compared to o3?</strong></p><p>Michelle Pokrass, head of API research: We've lost count.</p><p><strong>Q: Please allow us to interact with text/canvas while using advanced voice features. I want to be able to speak into it and have it make iterative changes to the document.</strong></p><p>Chief Product Officer Kevin Weil: Yes! We have a lot of good tools that have been developed relatively independently – the goal is to get those tools into your hands as soon as possible.</p><p>The next step is to integrate all of these features so you can talk to a model that reasons while searching and generates a canvas that can run Python. All tools need to work better together. Also, by the way, all models require full tool usage capabilities (O-series models can't use all tools at present), and this will also be implemented.</p><p><strong>Q: When will the O-series models support the memory function in ChatGPT?</strong></p><p>API Research Lead Michelle Pokrass: In development! Unifying all of our functionality with the O-Series model is our top priority.</p><p><strong>Q: Will there be significant improvements to the 4o? I really like the custom GPT, it would be awesome if it could be upgraded, or even better if we were able to choose what model to use in the custom GPT (like the o3 mini).</strong></p><p>Michelle Pokrass, API Research Lead: Yes, we haven't finished the 4o series yet!</p><p></body></html></p>","source":"lsy1569730104218","collect":0,"html":"<!DOCTYPE html>\n<html>\n<head>\n<meta http-equiv=\"Content-Type\" content=\"text/html; charset=utf-8\" />\n<meta name=\"viewport\" content=\"width=device-width,initial-scale=1.0,minimum-scale=1.0,maximum-scale=1.0,user-scalable=no\"/>\n<meta name=\"format-detection\" content=\"telephone=no,email=no,address=no\" />\n<title>Ultraman \"Confession\": Standing on the Wrong Team on Open Source AI! 
DeepSeek takes OpenAI advantage away</title>\n<style type=\"text/css\">\na,abbr,acronym,address,applet,article,aside,audio,b,big,blockquote,body,canvas,caption,center,cite,code,dd,del,details,dfn,div,dl,dt,\nem,embed,fieldset,figcaption,figure,footer,form,h1,h2,h3,h4,h5,h6,header,hgroup,html,i,iframe,img,ins,kbd,label,legend,li,mark,menu,nav,\nobject,ol,output,p,pre,q,ruby,s,samp,section,small,span,strike,strong,sub,summary,sup,table,tbody,td,tfoot,th,thead,time,tr,tt,u,ul,var,video{ font:inherit;margin:0;padding:0;vertical-align:baseline;border:0 }\nbody{ font-size:16px; line-height:1.5; color:#999; background:transparent; }\n.wrapper{ overflow:hidden;word-break:break-all;padding:10px; }\nh1,h2{ font-weight:normal; line-height:1.35; margin-bottom:.6em; }\nh3,h4,h5,h6{ line-height:1.35; margin-bottom:1em; }\nh1{ font-size:24px; }\nh2{ font-size:20px; }\nh3{ font-size:18px; }\nh4{ font-size:16px; }\nh5{ font-size:14px; }\nh6{ font-size:12px; }\np,ul,ol,blockquote,dl,table{ margin:1.2em 0; }\nul,ol{ margin-left:2em; }\nul{ list-style:disc; }\nol{ list-style:decimal; }\nli,li p{ margin:10px 0;}\nimg{ max-width:100%;display:block;margin:0 auto 1em; }\nblockquote{ color:#B5B2B1; border-left:3px solid #aaa; padding:1em; }\nstrong,b{font-weight:bold;}\nem,i{font-style:italic;}\ntable{ width:100%;border-collapse:collapse;border-spacing:1px;margin:1em 0;font-size:.9em; }\nth,td{ padding:5px;text-align:left;border:1px solid #aaa; }\nth{ font-weight:bold;background:#5d5d5d; }\n.symbol-link{font-weight:bold;}\n/* header{ border-bottom:1px solid #494756; } */\n.title{ margin:0 0 8px;line-height:1.3;color:#ddd; }\n.meta {color:#5e5c6d;font-size:13px;margin:0 0 .5em; }\na{text-decoration:none; color:#2a4b87;}\n.meta .head { display: inline-block; overflow: hidden}\n.head .h-thumb { width: 30px; height: 30px; margin: 0; padding: 0; border-radius: 50%; float: left;}\n.head .h-content { margin: 0; padding: 0 0 0 9px; float: left;}\n.head .h-name {font-size: 13px; color: #eee; 
margin: 0;}\n.head .h-time {font-size: 12.5px; color: #7E829C; margin: 0;}\n.small {font-size: 12.5px; display: inline-block; transform: scale(0.9); -webkit-transform: scale(0.9); transform-origin: left; -webkit-transform-origin: left;}\n.smaller {font-size: 12.5px; display: inline-block; transform: scale(0.8); -webkit-transform: scale(0.8); transform-origin: left; -webkit-transform-origin: left;}\n.bt-text {font-size: 12px;margin: 1.5em 0 0 0}\n.bt-text p {margin: 0}\n</style>\n</head>\n<body>\n<div class=\"wrapper\">\n<header>\n<h2 class=\"title\">\nUltraman \"Confession\": Standing on the Wrong Team on Open Source AI! DeepSeek takes OpenAI advantage away\n</h2>\n<h4 class=\"meta\">\n<p class=\"head\">\n<strong class=\"h-name small\">新智元</strong><span class=\"h-time small\">2025-02-01 11:43</span>\n</p>\n</h4>\n</header>\n<article>\n<p><html><head></head><body>While everyone was still marveling at DeepSeek's amazing strength, OpenAI finally couldn't sit still.</p><p>In the early hours of last night, the o3-mini went online in an emergency, refreshing SOTA in benchmarks such as math codes and returning to the throne.</p><p>Most importantly, free users can also experience it!</p><p>The strength of o3-mini is not to be bragged. 
In the \"Last Human Examination\", o3-mini (high) is the best in both accuracy and Calibration Error.</p><p><p class=\"t-img-caption\"><img src=\"https://static.tigerbbs.com/4c22b9ec8914751bc5abe6021c200f9a\" title=\"\" tg-width=\"1080\" tg-height=\"1020\"/></p><p>A few hours after o3-mini went online, OpenAI officially opened Reddit AMA for about 1 hour of online Q&A.</p><p>Altman himself went online and answered all the questions from netizens.</p><p><p class=\"t-img-caption\"><img src=\"https://static.tigerbbs.com/b529a27074e998944972eca3a1e5b0b7\" title=\"\" tg-width=\"1080\" tg-height=\"832\"/></p><p>The main highlights are:</p><p><ul style=\"list-style-type: disc;\"><li>DeepSeek is really good, and we will continue to develop better models, but the lead will not be as big as before</p><p></li><li>Rather than a few years ago, I am more inclined to think that AI is likely to emerge rapidly and rapidly</p><p></li><li>On the issue of open source weighted AI models, we are on the wrong side</p><p></li><li>The advanced voice mode is coming up with an update, which we'll call GPT-5 directly instead of GPT-5o, and there's no specific timeline yet.</p><p></li></ul>In addition to Altman himself, Mark Chen, Chief Research Officer, Kevin Weil, Chief Product Officer, Srinivas Narayanan, Vice President of Engineering, Michelle Pokrass, Head of API Research, and Hongyu Ren, Head of Research, were also online and answered all the questions of netizens carefully.</p><p>Next, let's take a look at what they all said.</p><p><strong>Ultraman deeply repented and stood on the wrong team on open source AI</strong></p><p>DeepSeek suddenly counterattacked, perhaps something that no one expected.</p><p>In the AMA Q&A, Altman himself also deeply confessed that he was on the wrong team on open source AI, and had to admit the powerful advantages of DeepSeek.</p><p>To the amazement of many people, Altman actually said that OpenAI's lead is not as good as before.</p><p><p 
class=\"t-img-caption\"><img src=\"https://static.tigerbbs.com/a555c5dee2ce12ce297456a37aeb64b9\" alt=\"\" title=\"\" tg-width=\"1080\" tg-height=\"316\"/></p><p>All of the following are our compilation of Ultraman's classic answers.</p><p><strong>Q: Let's talk about the big topic of the week: Deepseek. Obviously this is a very impressive model, and I also know that it was probably trained on the output of other LLMs. How would this change your plans for future models?</strong></p><p>Ultraman: It is indeed a very good model! We will develop better models, but we won't maintain as much of a lead as we have in previous years.</p><p><p class=\"t-img-caption\"><img src=\"https://static.tigerbbs.com/30050f5f20ed6d7c0bb40a1780d02d5b\" alt=\"\" title=\"\" tg-width=\"1080\" tg-height=\"550\"/></p><p><strong>Q: Do you think recursive self-improvement will be a gradual process, or one that takes off suddenly?</strong></p><p>Altman: Personally, I think that I am more inclined to think that AI may advance rapidly than a few years ago. It may be time to write something on this topic...</p><p><p class=\"t-img-caption\"><img src=\"https://static.tigerbbs.com/f68199e96b99c46e8b299a9d90eae294\" alt=\"\" title=\"\" tg-width=\"1080\" tg-height=\"463\"/></p><p><strong>Q: Can we see all the tokens the model thinks about?</strong></p><p>Ultraman: Yes, we'll be showing a more helpful, detailed version soon. Thanks to R1 for the updated information.</p><p><p class=\"t-img-caption\"><img src=\"https://static.tigerbbs.com/72407471b6a5ff25832883af26985910\" alt=\"\" title=\"\" tg-width=\"1080\" tg-height=\"400\"/></p><p>Kevin Weil, Chief Product Officer: We're trying to show more content than we've got right now — and that's going to happen soon. 
As for showing everything is yet to be determined, showing all chains of thought (CoT) results in model distillation for competitors, but we also know that users (at least advanced users) want to see these, so we'll find a proper balance.</p><p><p class=\"t-img-caption\"><img src=\"https://static.tigerbbs.com/d7ca7bd7647c025f9ee6b93c60c35e8e\" alt=\"\" title=\"\" tg-width=\"1080\" tg-height=\"313\"/></p><p><strong>Q: When will the full-blood version of o3 go online?</strong></p><p>Altman: I estimate it will be more than a few weeks, but not more than a few months.</p><p><p class=\"t-img-caption\"><img src=\"https://static.tigerbbs.com/7c2b17cac2383cb54c6008fdf3846d7a\" alt=\"\" title=\"\" tg-width=\"1080\" tg-height=\"387\"/></p><p><strong>Q: Will there be an update to the voice mode? Is this the focus of a potential GPT-5o? What is the approximate timeline for GPT-5o?</strong></p><p>Ultraman: Yes, an update to Advanced Voice Mode is coming soon! I think we'll just call it GPT-5 instead of GPT-5o. There is no specific timeline yet.</p><p><p class=\"t-img-caption\"><img src=\"https://static.tigerbbs.com/9a299ae6009abd55614bc73665df44b7\" alt=\"\" title=\"\" tg-width=\"1080\" tg-height=\"555\"/></p><p><strong>Q: Would you consider publishing some model weights and publishing some research?</strong></p><p>Altman: Yeah, we're talking about it. Personally, I think we are on the wrong side on this issue and need to come up with a different open source strategy; Not everyone at OpenAI holds that view, and it's not our highest priority right now.</p><p>Another question bomb:</p><p><ul style=\"list-style-type: disc;\"><li><strong>How far are we from offering Operator on a regular Plus plan?</strong></p><p></li><li><strong>What are the top objectives of the robotics division?</strong></p><p></li><li><strong>What does OpenAI think of more specialized chips/TPUs like Trillium, Cerebras, etc? 
Is OpenAI looking at this?</strong></p><p></li><li><strong>What to invest in to hedge AGI and ASI's future risks?</strong></p><p></li><li><strong>What was your most memorable vacation?</strong></p><p></li></ul>Ultraman:</p><p><ul style=\"list-style-type: disc;\"><li>Several months</p><p></li><li>First, produce a really good robot on a small scale and learn from it</p><p></li><li>The GB200 is hard to surpass at the moment!</p><p></li><li>A good choice is to boost your inner state-resilience, resilience, calm, happiness, etc</p><p></li><li>It's hard to choose! But the first two that come to mind: backpacking in Southeast Asia or safari trips in Africa</p><p></li></ul><strong>Q: Do you plan to increase the price of the Plus series?</strong></p><p>ALTMAN: I actually want to taper off.</p><p><strong>Q: Let's say it's 2030 and you just created what most people would call AGI. It performs well on all test benchmarks, and it outperforms your best engineers and researchers in both speed and performance. What's next? Are there any other plans besides \"putting it on the site to provide the service\"?</strong></p><p>Altman: The most important impact, in my opinion, will be accelerating the speed of scientific discovery, which I think is the biggest contributor to improving quality of life.</p><p><strong>4o image generation, coming soon</strong></p><p>Next, added are responses from other OpenAI members.</p><p><strong>Q: Are you still planning to launch a 4o image generator?</strong></p><p>Chief Product Officer Kevin Weil: Yes! We're working on it. And I think the wait is worth it.</p><p><strong>Q: Great! Is there a rough timeline?</strong></p><p>Kevin Weil, Chief Product Officer: You're trying to get me into trouble. 
Maybe a few months.</p><p>There's a similar problem.</p><p><strong>Q: When can we see ChatGPT-5?</strong></p><p>Chief Product Officer Kevin Weil: Just shortly after o-17 micro and GPT- (π +1).</p><p>Another question bomb:</p><p><ul style=\"list-style-type: disc;\"><li><strong>What other types of agents can we expect?</strong></p><p></li><li><strong>Also provide an agent for free users, which can speed up adoption...</strong></p><p></li><li><strong>Any updates on the new version of DALL E?</strong></p><p></li><li><strong>One last question, and one that everyone asks... when will AGI be implemented?</strong></p><p></li></ul>Chief Product Officer Kevin Weil:</p><p><ul style=\"list-style-type: disc;\"><li>About More Agents: Very, very soon. I think you'll be satisfied.</p><p></li><li>4o based image generation: I can't wait for you guys to use it in about a few months. It was great.</p><p></li><li>AGI: Yes</p><p></li></ul><strong>Q: Are you planning to add file attachment functionality to the inference model?</strong></p><p>Srinivas Narayanan, Vice President of Engineering: Under development. Future inference models will be able to use different tools, including retrieval functions.</p><p>Kevin Weil, Chief Product Officer: Just to say, I can't wait to see inference models that can use tools:)</p><p><strong>Q: Really. When you solve this problem, some very useful AI application scenarios will be opened. Imagine it being able to understand the contents of your 500GB work document.</strong></p><p>When you're ready to respond to an email, a panel opens next to your email app that continuously analyzes all the information related to the person, including your relationship, topics discussed, past work, and more. Perhaps something from a certain document you've long forgotten will be flagged because it is so relevant to the current discussion. I want this feature so badly.</p><p>Srinivas Narayanan, Vice President of Engineering: We are working on increasing the context length. 
No definite date/announcement yet.</p><p><strong>Q: How important is the Stargate project for OpenAI's future?</strong></p><p>Kevin Weil, Chief Product Officer: Very important. Everything we've seen shows that the more computing power we have, the better models we can build, and the more valuable products we can make.</p><p>We are now scaling the model in two dimensions simultaneously – larger-scale pre-training, and more reinforcement learning (RL) /\"strawberry\" training – both of which require computational resources.</p><p>Serving hundreds of millions of users also requires computing resources! And as we move to more intelligent agency products that work for you consistently, that also requires computing resources. So you can think of Stargate as our factory, where power/GPUs are turned into amazing products.</p><p><strong>Q: Internally, which model are you using now? o4, o5 or o6? How much smarter are these internal models compared to o3?</strong></p><p>Michelle Pokrass, head of API research: We've lost count.</p><p><strong>Q: Please allow us to interact with text/canvas while using advanced voice features. I want to be able to speak into it and have it make iterative changes to the document.</strong></p><p>Chief Product Officer Kevin Weil: Yes! We have a lot of good tools that have been developed relatively independently – the goal is to get those tools into your hands as soon as possible.</p><p>The next step is to integrate all of these features so you can talk to a model that reasons while searching and generates a canvas that can run Python. All tools need to work better together. Also, by the way, all models require full tool usage capabilities (O-series models can't use all tools at present), and this will also be implemented.</p><p><strong>Q: When will the O-series models support the memory function in ChatGPT?</strong></p><p>API Research Lead Michelle Pokrass: In development! 
Unifying all of our features with the o-series models is our top priority.</p><p><strong>Q: Will 4o see significant improvements? I really like custom GPTs; it would be awesome if they could be upgraded, or even better if we could choose which model a custom GPT uses (such as o3-mini).</strong></p><p>Michelle Pokrass, API Research Lead: Yes, we haven't finished with the 4o series yet!</p><p></body></html></p>\n<div class=\"bt-text\">\n\n\n<p> source:<a href=\"https://mp.weixin.qq.com/s/No1nD8qDhLX_IHiBD-9G-A\">新智元</a></p>\n\n\n</div>\n</article>\n</div>\n</body>\n</html>\n
In the \"Last Human Examination\", o3-mini (high) is the best in both accuracy and Calibration Error.</p><p><p class=\"t-img-caption\"><img src=\"https://static.tigerbbs.com/4c22b9ec8914751bc5abe6021c200f9a\" title=\"\" tg-width=\"1080\" tg-height=\"1020\"/></p><p>A few hours after o3-mini went online, OpenAI officially opened Reddit AMA for about 1 hour of online Q&A.</p><p>Altman himself went online and answered all the questions from netizens.</p><p><p class=\"t-img-caption\"><img src=\"https://static.tigerbbs.com/b529a27074e998944972eca3a1e5b0b7\" title=\"\" tg-width=\"1080\" tg-height=\"832\"/></p><p>The main highlights are:</p><p><ul style=\"list-style-type: disc;\"><li>DeepSeek is really good, and we will continue to develop better models, but the lead will not be as big as before</p><p></li><li>Rather than a few years ago, I am more inclined to think that AI is likely to emerge rapidly and rapidly</p><p></li><li>On the issue of open source weighted AI models, we are on the wrong side</p><p></li><li>The advanced voice mode is coming up with an update, which we'll call GPT-5 directly instead of GPT-5o, and there's no specific timeline yet.</p><p></li></ul>In addition to Altman himself, Mark Chen, Chief Research Officer, Kevin Weil, Chief Product Officer, Srinivas Narayanan, Vice President of Engineering, Michelle Pokrass, Head of API Research, and Hongyu Ren, Head of Research, were also online and answered all the questions of netizens carefully.</p><p>Next, let's take a look at what they all said.</p><p><strong>Ultraman deeply repented and stood on the wrong team on open source AI</strong></p><p>DeepSeek suddenly counterattacked, perhaps something that no one expected.</p><p>In the AMA Q&A, Altman himself also deeply confessed that he was on the wrong team on open source AI, and had to admit the powerful advantages of DeepSeek.</p><p>To the amazement of many people, Altman actually said that OpenAI's lead is not as good as before.</p><p><p 
class=\"t-img-caption\"><img src=\"https://static.tigerbbs.com/a555c5dee2ce12ce297456a37aeb64b9\" alt=\"\" title=\"\" tg-width=\"1080\" tg-height=\"316\"/></p><p>All of the following are our compilation of Ultraman's classic answers.</p><p><strong>Q: Let's talk about the big topic of the week: Deepseek. Obviously this is a very impressive model, and I also know that it was probably trained on the output of other LLMs. How would this change your plans for future models?</strong></p><p>Ultraman: It is indeed a very good model! We will develop better models, but we won't maintain as much of a lead as we have in previous years.</p><p><p class=\"t-img-caption\"><img src=\"https://static.tigerbbs.com/30050f5f20ed6d7c0bb40a1780d02d5b\" alt=\"\" title=\"\" tg-width=\"1080\" tg-height=\"550\"/></p><p><strong>Q: Do you think recursive self-improvement will be a gradual process, or one that takes off suddenly?</strong></p><p>Altman: Personally, I think that I am more inclined to think that AI may advance rapidly than a few years ago. It may be time to write something on this topic...</p><p><p class=\"t-img-caption\"><img src=\"https://static.tigerbbs.com/f68199e96b99c46e8b299a9d90eae294\" alt=\"\" title=\"\" tg-width=\"1080\" tg-height=\"463\"/></p><p><strong>Q: Can we see all the tokens the model thinks about?</strong></p><p>Ultraman: Yes, we'll be showing a more helpful, detailed version soon. Thanks to R1 for the updated information.</p><p><p class=\"t-img-caption\"><img src=\"https://static.tigerbbs.com/72407471b6a5ff25832883af26985910\" alt=\"\" title=\"\" tg-width=\"1080\" tg-height=\"400\"/></p><p>Kevin Weil, Chief Product Officer: We're trying to show more content than we've got right now — and that's going to happen soon. 
As for showing everything is yet to be determined, showing all chains of thought (CoT) results in model distillation for competitors, but we also know that users (at least advanced users) want to see these, so we'll find a proper balance.</p><p><p class=\"t-img-caption\"><img src=\"https://static.tigerbbs.com/d7ca7bd7647c025f9ee6b93c60c35e8e\" alt=\"\" title=\"\" tg-width=\"1080\" tg-height=\"313\"/></p><p><strong>Q: When will the full-blood version of o3 go online?</strong></p><p>Altman: I estimate it will be more than a few weeks, but not more than a few months.</p><p><p class=\"t-img-caption\"><img src=\"https://static.tigerbbs.com/7c2b17cac2383cb54c6008fdf3846d7a\" alt=\"\" title=\"\" tg-width=\"1080\" tg-height=\"387\"/></p><p><strong>Q: Will there be an update to the voice mode? Is this the focus of a potential GPT-5o? What is the approximate timeline for GPT-5o?</strong></p><p>Ultraman: Yes, an update to Advanced Voice Mode is coming soon! I think we'll just call it GPT-5 instead of GPT-5o. There is no specific timeline yet.</p><p><p class=\"t-img-caption\"><img src=\"https://static.tigerbbs.com/9a299ae6009abd55614bc73665df44b7\" alt=\"\" title=\"\" tg-width=\"1080\" tg-height=\"555\"/></p><p><strong>Q: Would you consider publishing some model weights and publishing some research?</strong></p><p>Altman: Yeah, we're talking about it. Personally, I think we are on the wrong side on this issue and need to come up with a different open source strategy; Not everyone at OpenAI holds that view, and it's not our highest priority right now.</p><p>Another question bomb:</p><p><ul style=\"list-style-type: disc;\"><li><strong>How far are we from offering Operator on a regular Plus plan?</strong></p><p></li><li><strong>What are the top objectives of the robotics division?</strong></p><p></li><li><strong>What does OpenAI think of more specialized chips/TPUs like Trillium, Cerebras, etc? 
Is OpenAI looking at this?</strong></p><p></li><li><strong>What to invest in to hedge AGI and ASI's future risks?</strong></p><p></li><li><strong>What was your most memorable vacation?</strong></p><p></li></ul>Ultraman:</p><p><ul style=\"list-style-type: disc;\"><li>Several months</p><p></li><li>First, produce a really good robot on a small scale and learn from it</p><p></li><li>The GB200 is hard to surpass at the moment!</p><p></li><li>A good choice is to boost your inner state-resilience, resilience, calm, happiness, etc</p><p></li><li>It's hard to choose! But the first two that come to mind: backpacking in Southeast Asia or safari trips in Africa</p><p></li></ul><strong>Q: Do you plan to increase the price of the Plus series?</strong></p><p>ALTMAN: I actually want to taper off.</p><p><strong>Q: Let's say it's 2030 and you just created what most people would call AGI. It performs well on all test benchmarks, and it outperforms your best engineers and researchers in both speed and performance. What's next? Are there any other plans besides \"putting it on the site to provide the service\"?</strong></p><p>Altman: The most important impact, in my opinion, will be accelerating the speed of scientific discovery, which I think is the biggest contributor to improving quality of life.</p><p><strong>4o image generation, coming soon</strong></p><p>Next, added are responses from other OpenAI members.</p><p><strong>Q: Are you still planning to launch a 4o image generator?</strong></p><p>Chief Product Officer Kevin Weil: Yes! We're working on it. And I think the wait is worth it.</p><p><strong>Q: Great! Is there a rough timeline?</strong></p><p>Kevin Weil, Chief Product Officer: You're trying to get me into trouble. 
Maybe a few months.</p><p>There's a similar problem.</p><p><strong>Q: When can we see ChatGPT-5?</strong></p><p>Chief Product Officer Kevin Weil: Just shortly after o-17 micro and GPT- (π +1).</p><p>Another question bomb:</p><p><ul style=\"list-style-type: disc;\"><li><strong>What other types of agents can we expect?</strong></p><p></li><li><strong>Also provide an agent for free users, which can speed up adoption...</strong></p><p></li><li><strong>Any updates on the new version of DALL E?</strong></p><p></li><li><strong>One last question, and one that everyone asks... when will AGI be implemented?</strong></p><p></li></ul>Chief Product Officer Kevin Weil:</p><p><ul style=\"list-style-type: disc;\"><li>About More Agents: Very, very soon. I think you'll be satisfied.</p><p></li><li>4o based image generation: I can't wait for you guys to use it in about a few months. It was great.</p><p></li><li>AGI: Yes</p><p></li></ul><strong>Q: Are you planning to add file attachment functionality to the inference model?</strong></p><p>Srinivas Narayanan, Vice President of Engineering: Under development. Future inference models will be able to use different tools, including retrieval functions.</p><p>Kevin Weil, Chief Product Officer: Just to say, I can't wait to see inference models that can use tools:)</p><p><strong>Q: Really. When you solve this problem, some very useful AI application scenarios will be opened. Imagine it being able to understand the contents of your 500GB work document.</strong></p><p>When you're ready to respond to an email, a panel opens next to your email app that continuously analyzes all the information related to the person, including your relationship, topics discussed, past work, and more. Perhaps something from a certain document you've long forgotten will be flagged because it is so relevant to the current discussion. I want this feature so badly.</p><p>Srinivas Narayanan, Vice President of Engineering: We are working on increasing the context length. 
No definite date/announcement yet.</p><p><strong>Q: How important is the Stargate project for OpenAI's future?</strong></p><p>Kevin Weil, Chief Product Officer: Very important. Everything we've seen shows that the more computing power we have, the better models we can build, and the more valuable products we can make.</p><p>We are now scaling the model in two dimensions simultaneously – larger-scale pre-training, and more reinforcement learning (RL) /\"strawberry\" training – both of which require computational resources.</p><p>Serving hundreds of millions of users also requires computing resources! And as we move to more intelligent agency products that work for you consistently, that also requires computing resources. So you can think of Stargate as our factory, where power/GPUs are turned into amazing products.</p><p><strong>Q: Internally, which model are you using now? o4, o5 or o6? How much smarter are these internal models compared to o3?</strong></p><p>Michelle Pokrass, head of API research: We've lost count.</p><p><strong>Q: Please allow us to interact with text/canvas while using advanced voice features. I want to be able to speak into it and have it make iterative changes to the document.</strong></p><p>Chief Product Officer Kevin Weil: Yes! We have a lot of good tools that have been developed relatively independently – the goal is to get those tools into your hands as soon as possible.</p><p>The next step is to integrate all of these features so you can talk to a model that reasons while searching and generates a canvas that can run Python. All tools need to work better together. Also, by the way, all models require full tool usage capabilities (O-series models can't use all tools at present), and this will also be implemented.</p><p><strong>Q: When will the O-series models support the memory function in ChatGPT?</strong></p><p>API Research Lead Michelle Pokrass: In development! 
Unifying all of our functionality with the O-Series model is our top priority.</p><p><strong>Q: Will there be significant improvements to the 4o? I really like the custom GPT, it would be awesome if it could be upgraded, or even better if we were able to choose what model to use in the custom GPT (like the o3 mini).</strong></p><p>Michelle Pokrass, API Research Lead: Yes, we haven't finished the 4o series yet!</p><p></body></html></p>","source":"lsy1569730104218","collect":0,"html":"<!DOCTYPE html>\n<html>\n<head>\n<meta http-equiv=\"Content-Type\" content=\"text/html; charset=utf-8\" />\n<meta name=\"viewport\" content=\"width=device-width,initial-scale=1.0,minimum-scale=1.0,maximum-scale=1.0,user-scalable=no\"/>\n<meta name=\"format-detection\" content=\"telephone=no,email=no,address=no\" />\n<title>Ultraman \"Confession\": Standing on the Wrong Team on Open Source AI! DeepSeek takes OpenAI advantage away</title>\n<style type=\"text/css\">\na,abbr,acronym,address,applet,article,aside,audio,b,big,blockquote,body,canvas,caption,center,cite,code,dd,del,details,dfn,div,dl,dt,\nem,embed,fieldset,figcaption,figure,footer,form,h1,h2,h3,h4,h5,h6,header,hgroup,html,i,iframe,img,ins,kbd,label,legend,li,mark,menu,nav,\nobject,ol,output,p,pre,q,ruby,s,samp,section,small,span,strike,strong,sub,summary,sup,table,tbody,td,tfoot,th,thead,time,tr,tt,u,ul,var,video{ font:inherit;margin:0;padding:0;vertical-align:baseline;border:0 }\nbody{ font-size:16px; line-height:1.5; color:#999; background:transparent; }\n.wrapper{ overflow:hidden;word-break:break-all;padding:10px; }\nh1,h2{ font-weight:normal; line-height:1.35; margin-bottom:.6em; }\nh3,h4,h5,h6{ line-height:1.35; margin-bottom:1em; }\nh1{ font-size:24px; }\nh2{ font-size:20px; }\nh3{ font-size:18px; }\nh4{ font-size:16px; }\nh5{ font-size:14px; }\nh6{ font-size:12px; }\np,ul,ol,blockquote,dl,table{ margin:1.2em 0; }\nul,ol{ margin-left:2em; }\nul{ list-style:disc; }\nol{ list-style:decimal; }\nli,li p{ margin:10px 0;}\nimg{ 
max-width:100%;display:block;margin:0 auto 1em; }\nblockquote{ color:#B5B2B1; border-left:3px solid #aaa; padding:1em; }\nstrong,b{font-weight:bold;}\nem,i{font-style:italic;}\ntable{ width:100%;border-collapse:collapse;border-spacing:1px;margin:1em 0;font-size:.9em; }\nth,td{ padding:5px;text-align:left;border:1px solid #aaa; }\nth{ font-weight:bold;background:#5d5d5d; }\n.symbol-link{font-weight:bold;}\n/* header{ border-bottom:1px solid #494756; } */\n.title{ margin:0 0 8px;line-height:1.3;color:#ddd; }\n.meta {color:#5e5c6d;font-size:13px;margin:0 0 .5em; }\na{text-decoration:none; color:#2a4b87;}\n.meta .head { display: inline-block; overflow: hidden}\n.head .h-thumb { width: 30px; height: 30px; margin: 0; padding: 0; border-radius: 50%; float: left;}\n.head .h-content { margin: 0; padding: 0 0 0 9px; float: left;}\n.head .h-name {font-size: 13px; color: #eee; margin: 0;}\n.head .h-time {font-size: 12.5px; color: #7E829C; margin: 0;}\n.small {font-size: 12.5px; display: inline-block; transform: scale(0.9); -webkit-transform: scale(0.9); transform-origin: left; -webkit-transform-origin: left;}\n.smaller {font-size: 12.5px; display: inline-block; transform: scale(0.8); -webkit-transform: scale(0.8); transform-origin: left; -webkit-transform-origin: left;}\n.bt-text {font-size: 12px;margin: 1.5em 0 0 0}\n.bt-text p {margin: 0}\n</style>\n</head>\n<body>\n<div class=\"wrapper\">\n<header>\n<h2 class=\"title\">\nUltraman \"Confession\": Standing on the Wrong Team on Open Source AI! 
DeepSeek takes OpenAI advantage away\n</h2>\n<h4 class=\"meta\">\n<p class=\"head\">\n<strong class=\"h-name small\">新智元</strong><span class=\"h-time small\">2025-02-01 11:43</span>\n</p>\n</h4>\n</header>\n<article>\n<p><html><head></head><body>While everyone was still marveling at DeepSeek's amazing strength, OpenAI finally couldn't sit still.</p><p>In the early hours of last night, the o3-mini went online in an emergency, refreshing SOTA in benchmarks such as math codes and returning to the throne.</p><p>Most importantly, free users can also experience it!</p><p>The strength of o3-mini is not to be bragged. In the \"Last Human Examination\", o3-mini (high) is the best in both accuracy and Calibration Error.</p><p><p class=\"t-img-caption\"><img src=\"https://static.tigerbbs.com/4c22b9ec8914751bc5abe6021c200f9a\" title=\"\" tg-width=\"1080\" tg-height=\"1020\"/></p><p>A few hours after o3-mini went online, OpenAI officially opened Reddit AMA for about 1 hour of online Q&A.</p><p>Altman himself went online and answered all the questions from netizens.</p><p><p class=\"t-img-caption\"><img src=\"https://static.tigerbbs.com/b529a27074e998944972eca3a1e5b0b7\" title=\"\" tg-width=\"1080\" tg-height=\"832\"/></p><p>The main highlights are:</p><p><ul style=\"list-style-type: disc;\"><li>DeepSeek is really good, and we will continue to develop better models, but the lead will not be as big as before</p><p></li><li>Rather than a few years ago, I am more inclined to think that AI is likely to emerge rapidly and rapidly</p><p></li><li>On the issue of open source weighted AI models, we are on the wrong side</p><p></li><li>The advanced voice mode is coming up with an update, which we'll call GPT-5 directly instead of GPT-5o, and there's no specific timeline yet.</p><p></li></ul>In addition to Altman himself, Mark Chen, Chief Research Officer, Kevin Weil, Chief Product Officer, Srinivas Narayanan, Vice President of Engineering, Michelle Pokrass, Head of API Research, and 
Hongyu Ren, Head of Research, were also online and answered all the questions of netizens carefully.</p><p>Next, let's take a look at what they all said.</p><p><strong>Ultraman deeply repented and stood on the wrong team on open source AI</strong></p><p>DeepSeek suddenly counterattacked, perhaps something that no one expected.</p><p>In the AMA Q&A, Altman himself also deeply confessed that he was on the wrong team on open source AI, and had to admit the powerful advantages of DeepSeek.</p><p>To the amazement of many people, Altman actually said that OpenAI's lead is not as good as before.</p><p><p class=\"t-img-caption\"><img src=\"https://static.tigerbbs.com/a555c5dee2ce12ce297456a37aeb64b9\" alt=\"\" title=\"\" tg-width=\"1080\" tg-height=\"316\"/></p><p>All of the following are our compilation of Ultraman's classic answers.</p><p><strong>Q: Let's talk about the big topic of the week: Deepseek. Obviously this is a very impressive model, and I also know that it was probably trained on the output of other LLMs. How would this change your plans for future models?</strong></p><p>Ultraman: It is indeed a very good model! We will develop better models, but we won't maintain as much of a lead as we have in previous years.</p><p><p class=\"t-img-caption\"><img src=\"https://static.tigerbbs.com/30050f5f20ed6d7c0bb40a1780d02d5b\" alt=\"\" title=\"\" tg-width=\"1080\" tg-height=\"550\"/></p><p><strong>Q: Do you think recursive self-improvement will be a gradual process, or one that takes off suddenly?</strong></p><p>Altman: Personally, I think that I am more inclined to think that AI may advance rapidly than a few years ago. 
It may be time to write something on this topic...</p><p><p class=\"t-img-caption\"><img src=\"https://static.tigerbbs.com/f68199e96b99c46e8b299a9d90eae294\" alt=\"\" title=\"\" tg-width=\"1080\" tg-height=\"463\"/></p><p><strong>Q: Can we see all the tokens the model thinks about?</strong></p><p>Ultraman: Yes, we'll be showing a more helpful, detailed version soon. Thanks to R1 for the updated information.</p><p><p class=\"t-img-caption\"><img src=\"https://static.tigerbbs.com/72407471b6a5ff25832883af26985910\" alt=\"\" title=\"\" tg-width=\"1080\" tg-height=\"400\"/></p><p>Kevin Weil, Chief Product Officer: We're trying to show more content than we've got right now — and that's going to happen soon. As for showing everything is yet to be determined, showing all chains of thought (CoT) results in model distillation for competitors, but we also know that users (at least advanced users) want to see these, so we'll find a proper balance.</p><p><p class=\"t-img-caption\"><img src=\"https://static.tigerbbs.com/d7ca7bd7647c025f9ee6b93c60c35e8e\" alt=\"\" title=\"\" tg-width=\"1080\" tg-height=\"313\"/></p><p><strong>Q: When will the full-blood version of o3 go online?</strong></p><p>Altman: I estimate it will be more than a few weeks, but not more than a few months.</p><p><p class=\"t-img-caption\"><img src=\"https://static.tigerbbs.com/7c2b17cac2383cb54c6008fdf3846d7a\" alt=\"\" title=\"\" tg-width=\"1080\" tg-height=\"387\"/></p><p><strong>Q: Will there be an update to the voice mode? Is this the focus of a potential GPT-5o? What is the approximate timeline for GPT-5o?</strong></p><p>Ultraman: Yes, an update to Advanced Voice Mode is coming soon! I think we'll just call it GPT-5 instead of GPT-5o. 
There is no specific timeline yet.</p><p><p class=\"t-img-caption\"><img src=\"https://static.tigerbbs.com/9a299ae6009abd55614bc73665df44b7\" alt=\"\" title=\"\" tg-width=\"1080\" tg-height=\"555\"/></p><p><strong>Q: Would you consider publishing some model weights and publishing some research?</strong></p><p>Altman: Yeah, we're talking about it. Personally, I think we are on the wrong side on this issue and need to come up with a different open source strategy; Not everyone at OpenAI holds that view, and it's not our highest priority right now.</p><p>Another question bomb:</p><p><ul style=\"list-style-type: disc;\"><li><strong>How far are we from offering Operator on a regular Plus plan?</strong></p><p></li><li><strong>What are the top objectives of the robotics division?</strong></p><p></li><li><strong>What does OpenAI think of more specialized chips/TPUs like Trillium, Cerebras, etc? Is OpenAI looking at this?</strong></p><p></li><li><strong>What to invest in to hedge AGI and ASI's future risks?</strong></p><p></li><li><strong>What was your most memorable vacation?</strong></p><p></li></ul>Ultraman:</p><p><ul style=\"list-style-type: disc;\"><li>Several months</p><p></li><li>First, produce a really good robot on a small scale and learn from it</p><p></li><li>The GB200 is hard to surpass at the moment!</p><p></li><li>A good choice is to boost your inner state-resilience, resilience, calm, happiness, etc</p><p></li><li>It's hard to choose! But the first two that come to mind: backpacking in Southeast Asia or safari trips in Africa</p><p></li></ul><strong>Q: Do you plan to increase the price of the Plus series?</strong></p><p>ALTMAN: I actually want to taper off.</p><p><strong>Q: Let's say it's 2030 and you just created what most people would call AGI. It performs well on all test benchmarks, and it outperforms your best engineers and researchers in both speed and performance. What's next? 
Are there any other plans besides \"putting it on the site to provide the service\"?</strong></p><p>Altman: The most important impact, in my opinion, will be accelerating the speed of scientific discovery, which I think is the biggest contributor to improving quality of life.</p><p><strong>4o image generation, coming soon</strong></p><p>Next, added are responses from other OpenAI members.</p><p><strong>Q: Are you still planning to launch a 4o image generator?</strong></p><p>Chief Product Officer Kevin Weil: Yes! We're working on it. And I think the wait is worth it.</p><p><strong>Q: Great! Is there a rough timeline?</strong></p><p>Kevin Weil, Chief Product Officer: You're trying to get me into trouble. Maybe a few months.</p><p>There's a similar problem.</p><p><strong>Q: When can we see ChatGPT-5?</strong></p><p>Chief Product Officer Kevin Weil: Just shortly after o-17 micro and GPT- (π +1).</p><p>Another question bomb:</p><p><ul style=\"list-style-type: disc;\"><li><strong>What other types of agents can we expect?</strong></p><p></li><li><strong>Also provide an agent for free users, which can speed up adoption...</strong></p><p></li><li><strong>Any updates on the new version of DALL E?</strong></p><p></li><li><strong>One last question, and one that everyone asks... when will AGI be implemented?</strong></p><p></li></ul>Chief Product Officer Kevin Weil:</p><p><ul style=\"list-style-type: disc;\"><li>About More Agents: Very, very soon. I think you'll be satisfied.</p><p></li><li>4o based image generation: I can't wait for you guys to use it in about a few months. It was great.</p><p></li><li>AGI: Yes</p><p></li></ul><strong>Q: Are you planning to add file attachment functionality to the inference model?</strong></p><p>Srinivas Narayanan, Vice President of Engineering: Under development. 
Future inference models will be able to use different tools, including retrieval functions.</p><p>Kevin Weil, Chief Product Officer: Just to say, I can't wait to see inference models that can use tools:)</p><p><strong>Q: Really. When you solve this problem, some very useful AI application scenarios will be opened. Imagine it being able to understand the contents of your 500GB work document.</strong></p><p>When you're ready to respond to an email, a panel opens next to your email app that continuously analyzes all the information related to the person, including your relationship, topics discussed, past work, and more. Perhaps something from a certain document you've long forgotten will be flagged because it is so relevant to the current discussion. I want this feature so badly.</p><p>Srinivas Narayanan, Vice President of Engineering: We are working on increasing the context length. No definite date/announcement yet.</p><p><strong>Q: How important is the Stargate project for OpenAI's future?</strong></p><p>Kevin Weil, Chief Product Officer: Very important. Everything we've seen shows that the more computing power we have, the better models we can build, and the more valuable products we can make.</p><p>We are now scaling the model in two dimensions simultaneously – larger-scale pre-training, and more reinforcement learning (RL) /\"strawberry\" training – both of which require computational resources.</p><p>Serving hundreds of millions of users also requires computing resources! And as we move to more intelligent agency products that work for you consistently, that also requires computing resources. So you can think of Stargate as our factory, where power/GPUs are turned into amazing products.</p><p><strong>Q: Internally, which model are you using now? o4, o5 or o6? 
How much smarter are these internal models compared to o3?</strong></p><p>Michelle Pokrass, API Research Lead: We've lost count.</p><p><strong>Q: Please let us interact with text/canvas while using Advanced Voice Mode. I want to be able to speak to it and have it make iterative changes to a document.</strong></p><p>Kevin Weil, Chief Product Officer: Yes! We have a lot of good tools that were developed relatively independently – the goal is to get them into your hands as soon as possible.</p><p>The next step is to integrate all of these features, so you can talk to a model that reasons while it searches and generates a canvas that can run Python. All the tools need to work better together. And by the way, all models need full tool-use capability (the o-series models can't use all tools at present); that will be implemented as well.</p><p><strong>Q: When will the o-series models support the memory feature in ChatGPT?</strong></p><p>Michelle Pokrass, API Research Lead: In development! Unifying all of our features with the o-series models is our top priority.</p><p><strong>Q: Will there be significant improvements to 4o?
I really like custom GPTs; it would be awesome if they could be upgraded, or even better if we could choose which model a custom GPT uses (such as o3-mini).</strong></p><p>Michelle Pokrass, API Research Lead: Yes, we're not done with the 4o series yet!</p><p>Source: <a href="https://mp.weixin.qq.com/s/No1nD8qDhLX_IHiBD-9G-A">新智元</a></p>