beerbaron Posted February 26

8 hours ago, bargainman said: Indeed, it's always "day one". No one asks about how much energy this is all consuming though, and where the new content will come from once everything is generated.

Yeah, I have been scratching my head about the energy side. Any ideas on how to play this? Data centers use fixed-price energy contracts, so you could go with a select few utilities.
mattee2264 Posted February 26

Amusing that during the pandemic Green Energy was such a massive investment theme. Now speculators have moved on to cryptos and AI, both of which are going to use a huge amount of energy and other resources.
formthirteen Posted February 27

https://www.klarna.com/international/press/klarna-ai-assistant-handles-two-thirds-of-customer-service-chats-in-its-first-month/

Quote
The AI assistant has had 2.3 million conversations, two-thirds of Klarna’s customer service chats
It is doing the equivalent work of 700 full-time agents
It is on par with human agents in regard to customer satisfaction score
It is more accurate in errand resolution, leading to a 25% drop in repeat inquiries
Customers now resolve their errands in less than 2 mins compared to 11 mins previously
It’s available in 23 markets, 24/7 and communicates in more than 35 languages
It’s estimated to drive a $40 million USD in profit improvement to Klarna in 2024

Sounds like shareholders will love ChatGPT; employees, not so much. Google and others could probably replace 80% of their staff with ChatGPT and nobody would notice. Elon Musk fired 80% of Twitter's employees and it led to a huge productivity boost for the company.
formthirteen Posted March 14

Europe trying harder than ever not to innovate. Getting better and better at hand-waving. Meanwhile in the US:
WayWardCloud Posted March 14

I watched this interview of Murati and it felt so odd that I looked her up. She's CTO yet has an education in mechanical engineering and has never coded. Meanwhile Ilya Sutskever, who built the damn thing, is apparently being sidelined and silenced since the failed putsch.
formthirteen Posted April 10

What are people here investing in, other than overheated semiconductor stocks, to capitalize on the AI boom? Here are some ideas:

Energy:
Dominion Energy (D)

Data centers:
Equinix (EQIX)
Digital Realty Trust (DLR)
Iron Mountain (IRM)
CyrusOne (CONE)

References:

https://wtop.com/business-finance/2022/01/n-virginia-still-tops-global-data-center-markets-and-whats-a-gigawatt/

Quote
Cushman & Wakefield’s 2022 rankings of the Top 10 Data Center markets in the world places Northern Virginia at the top, a title it has held for years. And it is not even close.

Quote
“The rate of increase is just mind boggling. The market is adding anywhere from 300 to 400 megawatts, a third of a gigawatt, every few months. It just keeps multiplying and multiplying,” said Kevin Imboden, senior research manager for Cushman & Wakefield’s data center advisory group. Cushman & Wakefield’s 2022 report predicts Northern Virginia will soon become the world’s first two gigawatt data center market. To put that in perspective, that is two billion watts, or enough to perpetually power 1.5 million homes.

https://www.coresite.com/blog/top-10-u-s-data-center-markets-and-why-they-are-hot

Quote
Boasting more than 250 data centers, Northern Virginia (NOVA) is widely recognized as the data center capital of the world – for good reason.

Quote
a recent report predicts Northern Virginia will be the world’s first two-gigawatt data center market. For some context, NOVA is five times larger than the next largest market (Dallas), and larger than the six closest (in terms of number of data centers) markets combined.

https://twitter.com/BurggrabenH/status/1777796620593004733

Quote
I don’t have a geographical breakdown of where these processors were mainly shipped, but it seems that Virginia in the US is one state that attracts data centers. It has 3.2m households only. Dominion Energy is the local utility and they seem to forecast a doubling of demand for power from 2022 to 2024. Wow!

Quote
But Nvidia sells millions of other graphic cards & processors in addition to the H100 and has competition too. Most “AI” product specifications I checked seem to consume 100-500W. It adds up: there are tens of millions of AI processor shipments forecast for years to come. It may mean that the AI sector alone will add a theoretical 1,000 GWh of power demand by 2027 or one Germany + France combined.
Phoenix01 Posted June 9

My perspective is that Nvidia has cornered the market on compute, and all the major AI players are scrambling to get as much of it as possible to scale their models and gain first-mover advantage on this massive opportunity. This will peak like it did for Cisco and Nortel, but I do not know the timeline and have no insight. The major tech players have no choice but to accelerate the development of their AI models, but the eventual monetization path is not clear. It will be interesting to see how this plays out, but again I have no insight.

Data is the foundation for training these models. Shutterstock is the leader in providing licensed data for training AI models. That is the part of the puzzle that has yet to be solved. I have posted more on the SSTK thread.
LC Posted June 9

I would break up the ecosystem into its components. What is needed for AI:
-Energy
-Compute
-Data
-Model(s)
-End uses

Energy strikes me as a race to the bottom. Compute is not far behind (although I am surprised Nvidia has such a large lead here; it's been a year already and nobody has put out a competing product?).

Data I see as a potential opportunity. Who has the latest/greatest data? How is it gathered/filtered/accessed/sold/used? I think there is opportunity there.

Modelling kind of goes hand in hand with data. The modelling techniques themselves are well known, so this becomes an exercise in who has the most computing power and the largest, highest-quality data.

End uses. I see a lot of opportunity here (as does the rest of the world). Video is the next step. Who will own the best AI video generator? Adobe? Google/Meta/MSFT? A bit of this circles back to data: how will these models learn from or license all the existing privately owned video (think movies, TV shows)? Are the major film and TV studios going to be big winners here as they (1) license their data for $$$ and (2) use the end product to save $$$? Is someone like Adobe or one of the tech giants going to control the ecosystem? Perhaps the tech companies get the data for free in exchange for use of the model? Would be a huge win for tech.

Of course there are other end uses as well. Curious what some ideas are for winners/losers there.
DooDiligence Posted June 9

You forgot saving the planet by destroying humanity
UK Posted September 9

https://www.bloomberg.com/opinion/articles/2024-09-09/ai-started-as-a-dream-to-save-humanity-then-big-tech-took-over?srnd=homepage-europe
UK Posted September 24

https://stratechery.com/2024/enterprise-philosophy-and-the-first-wave-of-ai/
UK Posted September 25

https://ia.samaltman.com/
UK Posted October 4

https://www.bloomberg.com/opinion/articles/2024-10-03/chatgpt-s-advanced-voice-mode-is-an-uncanny-cultural-chameleon?srnd=homepage-europe
rogermunibond Posted November 13

Latest LLM models are not advancing as fast as predicted by scaling laws/theory.

https://finance.yahoo.com/news/openai-google-anthropic-struggling-build-100020816.html
Spekulatius Posted Sunday at 12:56 AM

On 11/13/2024 at 1:45 PM, rogermunibond said: Latest LLM models are not advancing as fast as predicted by scaling laws/theory. https://finance.yahoo.com/news/openai-google-anthropic-struggling-build-100020816.html

This decreasing bang for the buck does not bode well for future spending. OpenAI in particular has been spending like a drunken sailor with not much revenue to show for it. We will probably see layoffs in this sector within 12 months if this story turns out to be true.
LC Posted Sunday at 01:14 AM

Who would've thought mimicking petabytes of 4chan, Reddit, and Yahoo comments could only take you so far?

I think this is an expectation problem rather than an execution problem. Current iterations of generative AI can save a ton of time when applied to bullshit bureaucracy. We see 20-25% time savings with reasonably high accuracy. Apply that to the red-tape layers of industries like government, healthcare, finance? That is a meaningful amount.

I think one of the challenges is implementation. How does a 50+ year old company still using COBOL jump into using Claude or Gemini? It is a big step for them.
lnofeisone Posted Sunday at 01:38 AM

On 11/13/2024 at 1:45 PM, rogermunibond said: Latest LLM models are not advancing as fast as predicted by scaling laws/theory. https://finance.yahoo.com/news/openai-google-anthropic-struggling-build-100020816.html

A lot of advances are now being made using a hybrid of ChatGPT, Gemini, whatever + an org's native data. The technical term for this is retrieval-augmented generation (RAG). Knowledge graphs are making a very strong comeback. I have a few clients who have seen an enormous ROI using this approach to solve problems that were very difficult/expensive in the past.
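For anyone unfamiliar with the term, here is a minimal sketch of what "LLM + an org's native data" looks like in practice: embed the org's own documents, retrieve the ones most relevant to a question, and pass them to the model as context. The OpenAI client, model names, and toy documents below are illustrative assumptions, not anything from the post; any embedding + chat API would do.

Code
from openai import OpenAI
import numpy as np

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Toy stand-ins for an org's internal documents (wikis, filings, tickets, ...)
documents = [
    "Q3 data-center capex guidance was raised to $2.1B.",
    "The refund policy allows returns within 30 days of purchase.",
    "Northern Virginia remains the company's largest colocation market.",
]

def embed(texts):
    # one embedding vector per input string
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

doc_vecs = embed(documents)

def answer(question, top_k=2):
    q_vec = embed([question])[0]
    # cosine similarity ranks which internal documents are most relevant to the question
    sims = doc_vecs @ q_vec / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_vec))
    context = "\n".join(documents[i] for i in np.argsort(sims)[::-1][:top_k])
    # the retrieved snippets are injected into the prompt as grounding context
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return resp.choices[0].message.content

print(answer("Which colocation market is the largest?"))

In a real deployment the document store would be far larger and sit behind a vector database, and the knowledge-graph variant mentioned above swaps the vector lookup for graph queries, but the shape of the pipeline is the same: retrieve first, then generate.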
formthirteen Posted Sunday at 07:57 AM

6 hours ago, LC said: I think this is an expectation problem rather than an execution problem. Current iterations of generative AI can save a ton of time when applied to bullshit bureaucracy.

Yes, LLMs are perfect for making sense of intended and unintended bullshit (bureaucracy). If you don't know what I mean, here's a summary from ChatGPT (sorry):

Quote
LLMs excel at interpreting and simplifying bureaucratic language, whether it’s intentionally complex (to obscure meaning or sound authoritative) or unintentionally convoluted (due to poor writing or overuse of jargon). Trained on diverse patterns, they can extract meaning, summarize key points, simplify language, and identify inconsistencies, making them perfect for cutting through the "bullshit" often associated with bureaucracy.

Here are a couple of examples:

Quote
Intended Complexity: A government notice states: "Remuneration adjustments will be enacted pursuant to the fiscal stabilization mandate under Section 45b." Translation: "Salaries will be adjusted based on budget rules."

Quote
Unintended Convolutions: A policy memo says: "The implementation of the outlined procedures is contingent upon the satisfactory completion of prerequisite reviews and subsequent authorizations." Translation: "We'll proceed once reviews and approvals are done."

Quote
Example: A manager says, "We’re optimizing cross-functional synergies for vertical integration." Translation: "We’re working across teams to streamline operations."

Quote
Example: "We are exploring opportunities in emerging markets." This sounds ambitious but commits to nothing.

Humans need this nuanced signaling system to navigate complex social structures, classify others, and maintain hierarchy or cohesion within groups. This signaling system is bullshit. Only advanced humans and AI can learn to interpret and act according to the bullshit rules. To go beyond bullshit, you need superhuman capabilities or superhuman AI.
formthirteen Posted Sunday at 08:03 AM

…Von Neumann is an example of a superhuman who was able to understand bullshit/complex language better than anyone, including AI bullshit generators:
Spekulatius Posted Sunday at 02:29 PM

6 hours ago, formthirteen said: …Von Neumann is an example of a superhuman who was able to understand bullshit/complex language better than anyone, including AI bullshit generators:

Is this even a puzzle? The fly gets one hour before the cyclists meet each other (they start 20 miles apart and the closing speed is 10+10 mph), and its speed is 15 mph, so the fly covers 15 miles in the hour it has. All the other info is irrelevant.

It's actually interesting how you can confuse people by giving them irrelevant information. It works especially well in finance and politics.
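The same arithmetic as a quick sanity check, using only the numbers already in the thread (cyclists 20 miles apart at 10 mph each, fly at 15 mph); a minimal sketch, nothing more:

Code
# Fly-between-two-cyclists puzzle, solved the short way:
# find how long the cyclists ride before they meet, then multiply by the fly's speed.
distance_apart = 20.0    # miles between the cyclists
closing_speed = 10 + 10  # mph, riding toward each other
time_to_meet = distance_apart / closing_speed  # 1.0 hour

fly_speed = 15.0         # mph
fly_distance = fly_speed * time_to_meet        # 15.0 miles

print(f"Cyclists meet after {time_to_meet} h; the fly covers {fly_distance} miles.")

The trap is trying to sum the infinite series of back-and-forth legs; the time-based shortcut makes all the extra detail irrelevant, which is the point of the post above.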
formthirteen Posted Sunday at 10:21 PM

7 hours ago, Spekulatius said: Is this even a puzzle?

I'm afraid everything in life is a puzzle to some degree: Puzzles to laugh at
nsx5200 Posted Monday at 02:47 PM

On 11/17/2024 at 2:57 AM, formthirteen said: Yes, LLMs are perfect for making sense of intended and unintended bullshit (bureaucracy). If you don't know what I mean, here's a summary from ChatGPT (sorry):

All those examples you gave reminded me of reading annual reports/SEC filings. These days, I dump them into NotebookLM and query it for summaries and answers. I heard that investment houses were experimenting with AI to try to gain an edge. If they're successful, we may start to see the edge from deep diving on SEC filings shrink, forcing some of us to evolve. Or we can all just buy bitcoin and call it done.
rogermunibond Posted Wednesday at 08:31 PM

Test-time compute - Nadella just referenced OpenAI's o1 model in his talk yesterday. Additional compute at the time of answering. Pretty cool.

https://techcrunch.com/2024/11/20/ai-scaling-laws-are-showing-diminishing-returns-forcing-ai-labs-to-change-course/