formthirteen Posted November 20, 2023 (edited)

And it's gone ($80B valuation, $10B MSFT funding, 505 out of 700 employees, etc.). Personal interest, money, the invisible hand, won again. Grand ideals such as board member Ilya Sutskever's "towards a plurality of humanity-loving AGIs" went down the drain.

Quote: Xi, Putin, and other dictators will be disappointed

I'm sure there's a lesson here for governments and voters in this debacle.

Edited November 20, 2023 by formthirteen
dwy000 Posted November 20, 2023

MSFT looking like accidental geniuses here. They get Altman, Brockman and now virtually the entire team. They just bought OpenAI for next to nothing. And given their original investment in OpenAI appears to have been mostly in cloud credits, not cash, they can even save on that spend. I can't fathom what the board meetings must have been like and how they saw this as the best outcome.
formthirteen Posted November 20, 2023 (edited)

Where will this end? Answer:

Edited November 20, 2023 by formthirteen
Parsad Posted November 20, 2023

May go down as one of the greatest board boondoggles in history...especially if MSFT ends up becoming dominant in AI with Altman leading the way! Cheers!
Parsad Posted November 20, 2023

4 hours ago, Saluki said:

What if AI had its own purpose (to get smarter, to grow, etc.) and something humans were doing stood in the way of it (contributing to climate change, bombing each other, which destroys computing power as well as people, or there simply being too many of us, with too many natural resources devoted to maintaining us that could be diverted to computing power)? If it decided that we were in its way, not only could it take us out, but there would be nothing we could do to stop it; we wouldn't even see it coming. Maybe advanced societies eventually get taken out by their own technology, which is one possible answer to Fermi's paradox.

Have you ever seen Spielberg's movie AI? After the little boy robot ends up underwater, a new ice age arrives (assume how and why at your own peril), and thousands of years later he's found under the ice by beings from another planet...or more likely AI robots that evolved over thousands of years. The little boy robot was the only remaining memory of what humanity was like...so his memories became valuable data for his rescuers.

It's likely that humanity will destroy itself one way or another. Our AI robot creations may end up being the only lasting vestige of humanity one day and well outlive our existence. If that's the case...so be it! Cheers!
tnp20 Posted November 20, 2023

From an investing perspective....this is all a fight for the future of AI and who will dominate....the nastier the cat fight, the bigger the prize at the end....

We will need enormous cloud data centers running enormous AI and AGI models in the future...what we have today is a mere fraction of what we will have in 10 years.....

Placing bets on Microsoft, Google and Amazon is a winning bet....despite the multiples, which if you look out 5 years will be well within norms.... The other winners.....AMD, Nvidia (yes, valuation is an issue, so go slow on this one), Intel and other startups like Bittorrent if they become public. Of course the semi plays....ASML, TSM, KLA and INTC are also great bets...if you have a 10+ year view. This is basically what David Tepper and Dan Loeb have been adding to their portfolios over the last few quarters...probably other smart investors too....

Chip makers may be the biggest beneficiaries....why?

(i) Just the huge growth of chips required for AI clouds globally - likely an arms race between the USA and China despite the Chinese handicap of not having US chips

(ii) The power requirement for these chips will be enormous - both for training large foundation models and for inferencing once trained....so an ever faster replacement cycle for these chips....a growing market with a 2-3 year chip replacement cycle to take advantage of the faster processing and lower power consumption of newer chips...Google, Microsoft and Amazon are already working on more power-efficient chips to make inferencing these AI models cheaper as they will need to scale this up massively.
tnp20 Posted November 20, 2023 (edited)

Much is made of how China is kneecapped in the AI space by the USA chip restrictions....yes, it is an issue, but there are ways to overcome it.....also, if you look at the research papers coming out of China...they may even be ahead of us in AI....

The reason Nvidia is ahead is their massively parallel architecture and fast data transfers between chips and memory...their GPU/Tensor chips can do floating-point matrix multiplication very fast...

Over time people realized that they don't need huge floating-point accuracy for the weights in neural networks, and even a 4-bit (instead of 16-bit) representation is sufficient to be very good...there are now many smaller models optimized for 4-bit weights that can run on ordinary home PCs.

You can achieve a massively parallel architecture with lower-performance chips if you can somehow tie them together so they can communicate faster between each other....so a Chinese lower-spec AI cloud machine may be 10 times bigger than a USA equivalent, but it can get the same job done in roughly the same time frame as the USA machine...this has implications for upfront costs and energy utilization, but the Chinese may not care and may take a brute-force approach to staying competitive on model size and complexity whilst they catch up in chip technology....also, the inferencing cost may not matter if their energy costs are substantially lower - from solar, wind and other green energy. For sure, they will be handicapped, but they can do a lot to keep that gap small whilst they catch up on the chip technology....

There are also developments in optimizing the models to run better and faster...right now most have taken the brute-force approach, but there is much to be gained just from spending time and energy on optimization. For example, some of the recent smaller models are better than much larger models from 2 years ago...etc.

China Will Be At Forefront of AI, Alphabet's Pichai Says https://archive.ph/qcXqV

Despite what they said on the BABA conference call, I think BABA and Tencent are also good AI bets...

Edited November 20, 2023 by tnp20
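To make the 4-bit weight idea above concrete, here is a minimal, self-contained sketch of symmetric low-bit quantization. It is illustrative only: the function names are made up, it uses a single scale per tensor, and real schemes (per-group scales, non-uniform formats such as NF4, etc.) are more sophisticated than this.

```python
import numpy as np

def quantize_4bit(weights: np.ndarray):
    """Symmetric 4-bit quantization: map float weights onto 16 integer levels (-8..7)."""
    scale = np.abs(weights).max() / 7.0           # one scale per tensor (illustrative; real schemes use per-group scales)
    q = np.clip(np.round(weights / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights for use in matrix multiplies."""
    return q.astype(np.float32) * scale

if __name__ == "__main__":
    w = np.random.randn(4, 4).astype(np.float32)
    q, s = quantize_4bit(w)
    w_hat = dequantize(q, s)
    print("max abs error:", np.abs(w - w_hat).max())   # small relative to typical weight magnitudes
```

The point of the trick is simply that storing weights as 4-bit integers plus a scale cuts memory and bandwidth by roughly 4x versus 16-bit floats, at the cost of the small rounding error the script prints.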
Phoenix01 Posted November 21, 2023

12 hours ago, tnp20 said:

From an investing perspective....this is all a fight for the future of AI and who will dominate....the nastier the cat fight, the bigger the prize at the end.... We will need enormous cloud data centers running enormous AI and AGI models in the future...what we have today is a mere fraction of what we will have in 10 years..... Placing bets on Microsoft, Google and Amazon is a winning bet....despite the multiples, which if you look out 5 years will be well within norms.... The other winners.....AMD, Nvidia (yes, valuation is an issue, so go slow on this one), Intel and other startups like Bittorrent if they become public. Of course the semi plays....ASML, TSM, KLA and INTC are also great bets...if you have a 10+ year view. This is basically what David Tepper and Dan Loeb have been adding to their portfolios over the last few quarters...probably other smart investors too.... Chip makers may be the biggest beneficiaries....why? (i) Just the huge growth of chips required for AI clouds globally - likely an arms race between the USA and China despite the Chinese handicap of not having US chips (ii) The power requirement for these chips will be enormous - both for training large foundation models and for inferencing once trained....so an ever faster replacement cycle for these chips....a growing market with a 2-3 year chip replacement cycle to take advantage of the faster processing and lower power consumption of newer chips...Google, Microsoft and Amazon are already working on more power-efficient chips to make inferencing these AI models cheaper as they will need to scale this up massively.

There are 3 key factors in developing AI: talent, compute and data. There has been lots of discussion about the talent that OpenAI has thrown away, and lots of discussion about the chips that are required to allow the progress to continue. However, there is very little discussion about the datasets required to train the AI. There are big opportunities in this space, and this is a bottleneck to commercializing AI models.

You might want to take a look at Shutterstock (SSTK), which has an interesting business model to leverage its huge high-quality picture, video, audio and 3D collection for AI training. The OpenAI team has a long-term relationship with SSTK to supply access to their collection to train and maintain their models. Google, Amazon, Meta, Nvidia,... have also signed up for access to the SSTK collection.
mattee2264 Posted November 21, 2023

Mag7 does seem like the no-brainer way to play the AI revolution. Deep pockets, cash flows from core businesses to invest in research and hiring the best human capital, existing capabilities in things like machine learning, automation, coding and so on. And with antitrust laws so weak they can buy out any emergent competitors.

It reminds me a little of cloud. With enough foresight an investor would have realised that cloud was a fantastic money maker that companies like Amazon, Google and Microsoft were well placed to exploit. And at the time these companies were under-priced because the potential of cloud was not reflected in their price.

As a counter-example, though, historically incumbent firms such as IBM at the dawn of the PC age and Microsoft in the Internet age had deep pockets, but new firms emerged and captured a lot of the value creation. And to some degree new technologies involved creative destruction, cannibalising the old technologies to some extent.

And clearly AI prospects are to some extent priced in, with Nvidia an extreme example, and probably to justify Tesla's crazy valuation you are also betting they make a lot of money through AI rather than selling cars. But then again, in the dot-com bubble mega-caps got up to 60-70-80x earnings or more. So if the AI bubble really takes off then Mag7 could easily quintuple over the next 5-10 years. Mag7 doubling YTD coming off a severe tech bear market is nothing, and if AI can fulfil its early promise and Mag7 are the winners then there is still a lot of money to be made.

The issues I see are that:

a) AI is going to require a lot of investment. Returns could be quite far in the future. And there might be an incentive to prioritise short-term commercial applications that are more incremental than transformative in nature. While that would reduce the investment required, it would also reduce the value creation. And because core businesses are so profitable and capital light, if the economics are inferior that is going to show up in the numbers and disappoint investors.

b) Establishing a moat in a new technology takes time, so there is vulnerability to creative destruction, and while deep pockets give an advantage they do not guarantee success. After all, most of the Mag7 emerged from nowhere, and most of the mega-cap techs of the dot-com boom are either gone or are insignificant players. Even within the Mag7 there will be winners and losers, as they are competing with each other.

c) AI is going to be something that governments are going to want to regulate. It is a threat to jobs, which can disrupt the social order and put pressure on government budgets. It can also lead to the spread of disinformation. Currently attempts to regulate Big Tech have been pretty pathetic. But AI is going to increase the incentives to do so.
Spekulatius Posted November 21, 2023

I think one thing people could be missing is that AI models become a commodity - everyone has them and uses them, nobody has an edge - similar to what happened with PCs, actually. With PCs, only a few businesses like Intel or MSFT profited, because they had a monopoly position. So maybe NVDA can benefit long term, but much seems priced in. Having just another language model offered in a cloud may just be a commodity. I do think durable advantages can be had by applying AI to specific problems using proprietary datasets.
mattee2264 Posted November 21, 2023

Trust is probably going to be a big deal, especially for businesses who are going to place a lot of reliance on the output of AI applications. And Big Tech are going to want to include AI in their ecosystems and integrate it with their other service offerings to increase the likelihood you'd pick their product.

Where markets could have got it very wrong is overestimating the short-term revenue opportunity, especially when you consider how massive the revenues of the Mag7 companies already are, and that to some extent the AI market is going to be shared between them and possibly some start-ups as well, because if AI becomes very bubbly everyone and anyone will be able to secure funding.
tnp20 Posted November 21, 2023

Talking of data......the most obvious ones to benefit from data and tools are...

(i) Oracle...Oracle is making a big push into the AI cloud and has bought a lot of NVIDIA chips...they will offer AI tools to take existing data and make it accessible via RAG (Retrieval-Augmented Generation) rather than train the models on it. So RAG is different from training the models on the data. The RAG mechanism is one where the AI is set up to know what the data is, what format it is in (not that it cares about format), where it lives, and how to go get it using natural language and present it in whatever way the user wants. This is different from the model being trained on the data, but it will be an important tool in the overall scheme of things. RAG can access different varieties of data from different sources and weave them together as needed.

(ii) Databricks....watch these guys...they are making the right moves and are pre-IPO....

(iii) Snowflake....they play in the cloud data space....this is the one that was bought by Berkshire's lieutenants.
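To make the RAG idea concrete, here is a minimal sketch of the retrieve-then-prompt loop. Everything in it is illustrative: the embed() function is a toy stand-in for whatever embedding model or service you would actually use, and the documents and question are made up.

```python
import numpy as np

# Stand-in for a real embedding model (e.g. a sentence-transformer or a hosted API);
# here we just hash words into a fixed-size vector so the example runs with no dependencies.
def embed(text: str, dim: int = 256) -> np.ndarray:
    v = np.zeros(dim)
    for word in text.lower().split():
        v[hash(word) % dim] += 1.0
    return v / (np.linalg.norm(v) + 1e-9)

documents = [
    "Q3 revenue grew 12% year over year, driven by cloud services.",
    "The board approved a $5B share buyback program.",
    "Capex guidance for next year was raised to fund AI data centers.",
]
doc_vectors = np.stack([embed(d) for d in documents])

def retrieve(query: str, k: int = 2) -> list:
    """Return the k documents most similar to the query (vectors are normalized, so dot product = cosine)."""
    scores = doc_vectors @ embed(query)
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

question = "How is the company spending on AI?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)   # this prompt would then be sent to the language model
```

The retrieved snippets are pasted into the prompt at query time, so the model answers from your data without ever being trained on it, which is the distinction the post above is drawing.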
tnp20 Posted November 21, 2023

AGI is less than 10 years away....Google is about to release its Gemini multi-modal model...5 trillion tokens....GPT has some catching up to do...

I play with Anthropic's Claude...it has a 200,000-token context window....you can load up all the Warren Buffett letters from the 1950s onwards and it becomes a Buffett expert/brain that can answer any question knowing all that context...the next step would be to use those Buffett criteria and run RAG queries into something like Koyfin, FactSet or Bloomberg, and also pull company-specific data such as press releases, analyst conferences and CCs, and it could probably get close to Buffett-level stock selection.
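A rough sketch of what that long-context workflow might look like, under the assumption that the letters have been saved locally as plain-text files; the folder name, the question and the final model call are all placeholders rather than a real API.

```python
from pathlib import Path

# Hypothetical local folder of shareholder letters saved as plain-text files.
letters_dir = Path("buffett_letters")
letters = [p.read_text() for p in sorted(letters_dir.glob("*.txt"))] if letters_dir.exists() else []

corpus = "\n\n---\n\n".join(letters)
question = "Summarize how the letters describe evaluating a business's moat."

# A long-context model can take the whole corpus in one request, so no chunking or
# retrieval step is needed as long as the corpus fits inside the context window.
prompt = (
    "You are answering questions using only the shareholder letters below.\n\n"
    f"{corpus}\n\n"
    f"Question: {question}"
)

print(f"Prompt is roughly {len(prompt) // 4:,} tokens (rule of thumb: ~4 characters per token).")
# response = long_context_model.generate(prompt)   # placeholder for whichever client/API you use
```

The practical difference from the RAG sketch above is cost: you pay for the whole corpus on every question, which is why people usually reach for retrieval once the document set no longer fits in the window.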
DooDiligence Posted November 21, 2023

GOOGL pre-split and then sub-$100 was a gift.
ValueArb Posted November 21, 2023

On 11/20/2023 at 11:13 AM, dwy000 said:

MSFT looking like accidental geniuses here. They get Altman, Brockman and now virtually the entire team. They just bought OpenAI for next to nothing. And given their original investment in OpenAI appears to have been mostly in cloud credits, not cash, they can even save on that spend. I can't fathom what the board meetings must have been like and how they saw this as the best outcome.

I posted some of Matt Levine's thoughts on the OpenAI meltdown on the AI thread, but this is the important part.

Quote: But for a moment ignore all of that and just think about OpenAI Inc., the 501(c)(3) public charity, with a mission of "building safe and beneficial artificial general intelligence for the benefit of humanity." Like any nonprofit, it has a mission that is described in its governing documents, and a board of directors who supervise the nonprofit to make sure it is pursuing that mission, and a staff that it hires to achieve the mission. The staff answers to the board, and the board answers to ... no one? Their own consciences? There are no shareholders; the board's main duties are to the mission.

This end was inevitable because of how they started OpenAI as a public charity and gave the board that mission. There was always going to be conflict between Altman and employees trying to cash in on their work, and a board that didn't have that as its goal. Every time OpenAI achieved something new, the board was probably asking Altman "great, but how safe is it, and how can we ensure it's safe for humanity?". Over time it sounds like he tired of it, and started misleading them in order to do whatever was best to monetize their work. That bifurcated structure just sets up two opposing camps pursuing very different objectives and had to have led to a lot of strife over the years.

Basically, Sam can't blame the board that he recruited, because they were willing to agree to make benefiting humanity their sole mission and to not care about maximizing revenues, profits, or valuation at all. His objectives appear to have changed over time, but theirs never could. He would have been better off with one easily corruptible board member who put enriching themselves first; then he could have always outvoted the "lame humanity first" members to do whatever he wanted.
tnp20 Posted November 22, 2023 (edited)

OpenAI is just a distracting side show.....ignore the drama and keep your eyes on the prize....

I know a few people who are using the Azure AI cloud special access program...this is early AI stuff beyond OpenAI....Microsoft may not need OpenAI...this stuff is very good....and many other models will surpass whatever OpenAI comes up with....such as Google's Gemini and Anthropic's Claude, and I am sure Amazon is throwing massive dollars at it so as not to fall too far behind.

Google is best positioned (since it makes its own AI chips), along with whoever can get access to those fast NVIDIA chips...as you need lots of silicon to train trillion-token systems....

LLMs will go the way of the dodo...multi-modal foundation models are the new thing...human-level AI is not all about text...it's about language, vision, sound, touch, motion, the physical world, mathematics, reasoning...an AI model that succeeds in all these domains as well as being very good in language will be the winner.....and gets closer to AGI....we are seeing early glimpses of that with Google's Gemini...though it won't be true AGI just yet.

Edited November 22, 2023 by tnp20
UK (Author) Posted November 22, 2023 (edited)

https://stratechery.com/2023/openais-misalignment-and-microsofts-gain/

What is clear is that Altman and Microsoft are in the driver seat of AI. Microsoft has the IP and will soon have the team to combine with its cash and infrastructure, while shedding coordination problems inherent in their partnership with OpenAI previously (and, of course, they are still partners with OpenAI!). I've also argued for a while that it made more sense for external companies to build on Azure's API rather than OpenAI's; Microsoft is a development platform by nature, whereas OpenAI is fun and exciting but likely to clone your functionality or deprecate old APIs. Now the choice is even more obvious. And, from the Microsoft side, this removes a major reason for enterprise customers, already accustomed to evaluating long-term risks, to avoid Azure because of the OpenAI dependency; Microsoft now owns the full stack.

Google, meanwhile, might need to make some significant changes; the company's latest model, Gemini, has been delayed, and its Cloud business has been slowing as spending shifts to AI, the exact opposite outcome the company had hoped for. How long will the company's founders and shareholders tolerate the perception that the company is moving too slow, particularly in comparison to the nimbleness and willingness to take risks demonstrated by Microsoft?

That leaves Anthropic, which looked like a big winner 12 hours ago, and now feels increasingly tenuous as a standalone entity. The company has struck partnership deals with both Google and Amazon, but it is now facing a competitor in Microsoft with effectively unlimited funds and GPU access; it's hard not to escape the sense that it makes sense as a part of AWS (and yes, B corps can be acquired, with considerably more ease than a non-profit).

Ultimately, though, one could make the argument that not much has changed at all: it has been apparent for a while that AI was, at least in the short to medium-term, a sustaining innovation, not a disruptive one, which is to say it would primarily benefit and be deployed by the biggest companies. The costs are so high that it's hard for anyone else to get the money, and that's even before you consider questions around channel and customer acquisition. If there were a company poised to join the ranks of the Big Five it was OpenAI, thanks to ChatGPT, but that seems less likely now (but not impossible). This, in the end, was Nadella's insight: the key to winning if you are big is not to invent like a startup, but to leverage your size to acquire or fast-follow them; all the better if you can do it for the low price of $0.

Edited November 22, 2023 by UK
Phoenix01 Posted November 22, 2023

Sam Altman: Ousted OpenAI boss to return days after being sacked https://www.bbc.co.uk/news/business-67494165
Morgan Posted November 22, 2023

This OpenAI and Altman saga was crazy to watch. I have never seen 95% of employees threaten to resign after a CEO was let go. I don't remember a CEO coming back with a new board either. Pretty nuts.
Spekulatius Posted November 22, 2023

2 minutes ago, Morgan said:

This OpenAI and Altman saga was crazy to watch. I have never seen 95% of employees threaten to resign after a CEO was let go. I don't remember a CEO coming back with a new board either. Pretty nuts.

It was either that, or MSFT takes their employees and leaves a worthless shell, which would probably have been acquired by MSFT, resulting in the same outcome. MSFT owns this thing either way, that much is clear.
Morgan Posted November 22, 2023

25 minutes ago, Spekulatius said:

It was either that, or MSFT takes their employees and leaves a worthless shell, which would probably have been acquired by MSFT, resulting in the same outcome. MSFT owns this thing either way, that much is clear.

True
UK (Author) Posted November 23, 2023

https://www.reuters.com/technology/sam-altmans-ouster-openai-was-precipitated-by-letter-board-about-ai-breakthrough-2023-11-22/

Some at OpenAI believe Q* (pronounced Q-Star) could be a breakthrough in the startup's search for what's known as artificial general intelligence (AGI), one of the people told Reuters. OpenAI defines AGI as autonomous systems that surpass humans in most economically valuable tasks. Given vast computing resources, the new model was able to solve certain mathematical problems, the person said on condition of anonymity because the individual was not authorized to speak on behalf of the company. Though only performing math on the level of grade-school students, acing such tests made researchers very optimistic about Q*'s future success, the source said.
patience_and_focus Posted November 23, 2023

On 11/21/2023 at 10:01 AM, tnp20 said:

AGI is less than 10 years away....Google is about to release its Gemini multi-modal model...5 trillion tokens....GPT has some catching up to do... I play with Anthropic's Claude...it has a 200,000-token context window....you can load up all the Warren Buffett letters from the 1950s onwards and it becomes a Buffett expert/brain that can answer any question knowing all that context...the next step would be to use those Buffett criteria and run RAG queries into something like Koyfin, FactSet or Bloomberg, and also pull company-specific data such as press releases, analyst conferences and CCs, and it could probably get close to Buffett-level stock selection.

https://www.theinformation.com/articles/google-delays-cloud-release-of-gemini-ai-that-aims-to-compete-with-openai
mattee2264 Posted November 24, 2023

https://futurism.com/economist-ai-doomed-bubble

The article above pours a bit of cold water on some of the hype. The basic criticism is that LLMs learned to write before they learned how to think, and while they can string words together in convincing ways, they have no idea what the words mean and are unable to use common sense, wisdom or logical reasoning to distinguish truth from falsehood. As a result they are unreliable, and dangerously so, because they are programmed to sound so confident and convincing.

In another essay Smith and Funk recall the "Eliza effect": a 1960s computer program that caricatured a psychiatrist convinced many users that it had human-like intelligence and emotions. We are vulnerable to this illusion because of our inclination to anthropomorphize. So in many ways ChatGPT and the like are just another example of pseudo-intelligence.

It reminds me a little of the way parents have a tendency to extrapolate, thinking that just because their kid does something semi-intelligent he will grow up to be a genius. In the same way the argument seems to be: "Well, it is 2023 and already these AI models can write college-grade essays and do high school math. So by the end of the decade AI will be capable of doing most jobs better than humans, with unimaginable productivity benefits." But the history of AI shows that what tends to happen is that a brick wall is reached and then there is an AI winter that can span decades.

And the scary thing is that because of FOMO the Big Tech companies, not to mention all the VC funds and so on, are going to invest billions and billions with very uncertain returns. Perhaps they will make money out of it, at least in the early days, because if enough consumers and companies believe in AI they will want to buy the AI products even if they aren't really that game-changing and prove to be unreliable. And the illusion is very strong. But the problem with fads is that while you might buy a fad product once, you aren't likely to be a repeat buyer, and to justify the massive investment it needs to become a recurring revenue stream.
Parsad Posted November 24, 2023

https://www.yahoo.com/news/pentagon-moving-toward-letting-ai-120645293.html

Skynet precursor?! Cheers!