$NVIDIA(NVDA)$
With optimized algorithms, large models running on mid-to-low-end cards can achieve results comparable to GPT. Does that mean high-end computing power is no longer needed? Why wouldn't the competing large-model players optimize their algorithms too? Running the same optimizations on high-end cards, wouldn't they still be ahead by an order of magnitude? Isn't high-end computing power still the most critical factor?
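A back-of-envelope sketch of that multiplicative argument (all numbers below are hypothetical assumptions, not benchmarks): if the same algorithmic optimization speeds up training or inference by roughly the same factor on any GPU tier, it raises everyone's baseline but leaves the raw hardware gap intact.

```python
# Hypothetical illustration: an algorithmic speedup applies to all hardware
# tiers, so it shifts the baseline without closing the hardware gap.

LOW_END_TFLOPS = 100.0    # assumed effective throughput of a mid/low-end card
HIGH_END_TFLOPS = 1000.0  # assumed ~10x throughput for a high-end card
ALGO_SPEEDUP = 5.0        # assumed gain from the optimized algorithm

low_end_optimized = LOW_END_TFLOPS * ALGO_SPEEDUP    # 500 effective TFLOPS
high_end_optimized = HIGH_END_TFLOPS * ALGO_SPEEDUP  # 5000 effective TFLOPS

print(f"Optimized low-end:  {low_end_optimized:.0f} effective TFLOPS")
print(f"Optimized high-end: {high_end_optimized:.0f} effective TFLOPS")
print(f"Remaining gap: {high_end_optimized / low_end_optimized:.0f}x")
# The ratio is still 10x: under these assumptions, software optimization
# alone does not erase an order-of-magnitude hardware advantage.
```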
![](https://community-static.tradeup.com/news/a5d77d7901188992c1510de77cd8b42d)