
If the AI bubble is like the Internet, in what year are we now?


james22

If the AI bubble is like the Internet, in what year are we now?

49 members have voted

  1. If the AI bubble is like the Internet, in what year are we now?

    • 1995
      18
    • 1996
      5
    • 1997
      7
    • 1998
      8
    • 1999
      4
    • 2000
      7


Recommended Posts

2 hours ago, james22 said:

In a paper published in the arXiv preprint server July 18, researchers said "performance and behavior of both GPT-3.5 and GPT-4 vary significantly" and that responses on some tasks "have gotten substantially worse over time."

 

https://techxplore.com/news/2023-07-pains-chatgpt-dumber.html

This might be an opportunity for others to catch up, but this kind of research is essential for the profitability of GPT-4 and the like: even if quality goes down, you can always bring it back up. People expect LLM queries to be effectively free (ad-supported or a flat subscription), so if you can't find a way to reduce the cost of your LLM, power users can eat up a significant share of the profit on every prompt. For example, for state-of-the-art replies you would use a technique like Chain-of-Thought prompting or its descendants. That requires prompting the LLM multiple times, evaluating which response is best (checked via an outside program like a compiler, calculator, knowledge graph, etc.), and then using the best response as the basis for trying again and again; one user prompt could end up querying the LLM 20 or more times. If you just want to take share and you don't care about profitability, you don't optimize LLM size, but LLMs are expensive to run, so figuring out the smallest model that can satisfactorily answer a request is important. Accuracy can go from 10% to close to 100%, but it's really expensive: https://www.promptengineering.org/tree-of-thought-prompting-walking-the-path-of-unique-approach-to-problem-solving/#:~:text=Tree of Thought Prompting%2C thus,and focusing on promising ones.
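To make the cost point concrete, here is a rough sketch of what a Tree-of-Thought-style loop looks like. Everything in it is a placeholder of my own: call_llm and score_with_tool stand in for a real LLM API and a real outside checker (compiler, calculator, knowledge graph), not any particular vendor's interface.

import random  # used only to fake model output and scores in this sketch

def call_llm(prompt: str) -> str:
    # Placeholder for a real LLM API call.
    return f"candidate ({random.random():.2f}) for: {prompt}"

def score_with_tool(answer: str) -> float:
    # Placeholder for an external check (compiler, calculator, knowledge graph).
    return random.random()

def tree_of_thought(question: str, branches: int = 5, depth: int = 4) -> str:
    # At each level, sample several candidate continuations, keep the best one
    # according to the external checker, then expand the winner again.
    # With 5 branches and 4 levels, that is 20 LLM calls for one user question.
    best = question
    for _ in range(depth):
        candidates = [call_llm(best) for _ in range(branches)]
        best = max(candidates, key=score_with_tool)
    return best

print(tree_of_thought("How many prime numbers are there below 100?"))

The point is simply that the number of model calls per question multiplies out as branches x depth, which is why per-query cost and model size matter so much once you charge a flat fee.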

Edited by cameronfen
For some reason a lot of my post was deleted.


 

https://finance.yahoo.com/news/ammo-inc-issues-letter-shareholders-123000873.html

We are also developing and testing a new suite of AI (Artificial Intelligence) tools to enhance the user experience. AI applications include FAQs, real-time instructional/factual outdoor-related inquiries, digital buyer assistance, reviews of inventoried products, and internal use by employees for ad hoc data requests to help improve customer service.

 

This company said it will take them about 9 months to add a shopping cart to their website so you can purchase multiple items at once instead of one at a time. Now they are talking about integrating AI. Did they suddenly get so good at tech that they are leapfrogging in skills? Or are they just trying to ride the AI wave?

 

During the height of the dot-com boom, I remember companies would change their name to "Name.com" and their stock would shoot through the roof the next day. That was shortly before the crash. Once I see companies start trying this AI schtick and the market overreacting, I will think we are close to a top. But the market doesn't seem to be responding to everyone trying to cash in on the hype, so I don't know where we are.

 

 

 


  • 4 weeks later...
  • 3 weeks later...

JB. Yep. Okay. I wanna talk about tech. So you wrote this thing about the second half of the chess board, and you're talking about, I guess, exponential growth. Tell us about this analogy you're using and why you wanted to write about it over the last week.

 

NC. Yeah, so there's an old story that technologists use, and it's kind of a fable about the origin of chess in India in the 1200s or 1300s. And the story goes that the minister who invented chess goes to his local ruler and says, I have this new game. The king loves it and says, what kind of reward do you want? I love this game. You can have all the gold in my treasury, or whatever you want.

 

And so the minister says, no, all I want is this. And he points to the chess board and says, I want you to put one grain of wheat on the first square, two on the second, four on the third, eight on the fourth, and so forth, doubling every time. And whatever grain is on the chess board at the end, that's my reward. The king thinks he's getting off easy. He calls his treasurer and starts doling it out. And the first half of the chess board is pretty much okay; it's about 279 tons of wheat. The problem is the second half of the chess board, because you've doubled so much and you keep doubling, so the numbers in the second half get mammoth. There's more grain on the first square of the second half than there is in the whole first half. By the end of it, you've got something like 1,500 times the amount of wheat produced in the world today.

 

So in one version of the story the king rewards the minister for being so clever and gives him a better job. In the other, he has the guy killed on the spot, because he's sassing the king and that's not good. And the analogy to tech is Moore's Law. Gordon Moore wrote Moore's Law in 1965, with a couple of turns of improvement already behind the semiconductor cycle.

 

JB. Moore's law is that the, the speed of the chip doubles every what, two years?

 

NC. It's that the power per dollar doubles every two years, or the size shrinks by half every two years. So with the two of those things together, we have now had 32 turns of Moore's Law since 1960. So we're literally at the 33rd square of the chess board.

 

JB. Okay. Right, and now is where things start to get wild.

 

NC. Where things start to get wild, because you've already permeated society, in this case, with so much technology, and now AI comes along. And my thinking was, we don't even need another consistent doubling of Moore's Law to keep this working. AI already starts from this massive platform, and we're in the second half of the chess board with AI as the driver, not just the hardware and the cost of computing.

 

MB. Yeah. Great analogy. Love It. Okay. What are the implications for investors?

 

NC. The implications come down to thinking about which stocks work. And as much as we all like to talk about how expensive tech is and what these multiples look like, there is a rationale behind them. There's a reason why those stocks have worked so well for so long: they worked the first half of the chess board so effectively. And the second half isn't gonna change that. I hear it so many times, whether it's value investing reverting to some kind of long-term mean, or eventually things have to stop, Moore's Law has to end. And the point is, it doesn't, because we now have this huge base, and every next double, even if the doubles only happen every five years instead of every two years, is still a huge amount of change.

 

MB. Look at this chart from Goldman. This is world technology compared to the world index and the world ex-technology, and technology is just in another universe.

 

This is earnings per share, by the way. Not even price, just earnings per share.

 

So I pulled a couple of numbers from this chart. In the MSCI World, the top five names, global stocks, everything: Apple, Microsoft, Alphabet, Amazon, Nvidia. They are 14% of the market cap of all stocks in the world. Yeah. And that's why.

 

JB. So there is now an emerging way of thinking about this where we're not talking about the chip level anymore. Now we're saying one unit of compute is actually a full data center. Seriously, the data center is the computer. If that's the realm we're going into, and it increasingly feels like we are, I think that upends a lot of what we used to think about when we traditionally tried to value companies in this ecosystem. Because there aren't gonna be 50 companies running their own data centers; they're just too expensive. So we have reached a point where only a few companies are gonna be able to actually compete in this world.

 

They're gonna largely have this to themselves. So if the unit is no longer the chip, or the computer, or even the server, the unit is actually the full data center, and that's how we're starting to think about this. I think a lot of people are gonna have to get more comfortable with technology companies that trade at 30, 40 times earnings and could conceivably grow 20% plus for a decade to come.

 

Unless all of a sudden we all just decide we're not interested in making progress anymore, and everything is good how it is right now, let's just stop. That's the only thing that can stop where I think this is all going. So I don't know how that factors into Moore's Law.

 

But I do know that when I listen to technologists on podcasts, this is now the way they're talking: the GPU doesn't matter, the data center filled with GPUs is the difference maker and the thing that's gonna drive spending and capex and R&D. And that's a whole new ballgame.

 

NC. The same exact thing happened with mainframes in the seventies and eighties. Remember Ross Perot? EDS. That was the entire EDS pitch: let us literally buy your data center from you, and then we'll run it. And this is just the next level. Forget about having to have a computer, a PC or an office computer; we'll handle everything in the cloud.

 

JB. It's happening. Right. The power of any one individual computer is not terribly important; it's where the work is being done. And that's why I think Snowflake is a stock we're all gonna be forced to pay attention to for the next 10 years. Obviously Nvidia, and then you look at some of the field-programmable gate arrays. So you look at Lattice Semiconductor; that stock's going crazy out of nowhere, and people are discovering it. They're saying, wait a minute, now we're at a point where, for the people that wanna do AI, you actually can't prefab the chip for these companies. You have to just give them a chip that they can program in the field, literally program it to the specs of whatever they're building.

 

NC. That's a whole new world. So I feel like we might still be relatively early in semiconductor stocks, and in the types of networking companies that are supporting these data centers.

 

MB. Here's pushback, but not pushback. So the NASDAQ 100, and I know a lot of that is outside of these chip names, but the NASDAQ 100 has compounded at 19% a year since 2013. That's a lot of wealth creation. The counterpoint to what I just said is that Nvidia is trading, according to Goldman, at 26 times estimated 24-month forward earnings. So two years from now, for a company that's growing like this and defining what could be the biggest game-changing technology the world has ever seen, that doesn't sound that rich.

 

NC. It doesn't. It doesn't. I mean, growth investing is not so much about valuation as it is about momentum, and that's why momentum and growth together are the two most powerful factors in investing. And yeah, valuation matters to some degree, always, but ultimately it's not what decides where you're gonna be in the short term. You're right, and not on the CAPE either; you know, Shiller PEs tell you basically nothing about the next five years' returns.
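As an aside, the chessboard arithmetic in that transcript checks out if you assume a grain of wheat weighs roughly 65 mg (my assumption, not the speakers'). A quick sketch:

GRAIN_KG = 0.000065  # assumed mass of one wheat grain (~65 mg)

first_half = 2**32 - 1   # grains on squares 1-32 (1 + 2 + 4 + ... + 2**31)
square_33 = 2**32        # first square of the second half of the board
total = 2**64 - 1        # grains on all 64 squares

print(f"first half of the board: {first_half * GRAIN_KG / 1000:,.0f} tonnes")   # ~279
print(f"square 33 alone beats the whole first half: {square_33 > first_half}")  # True
print(f"whole board: {total * GRAIN_KG / 1000:.2e} tonnes")                     # ~1.2e+12

That is roughly 1.2 trillion tonnes in total, against annual world wheat production of very roughly 0.8 billion tonnes, which is where the "about 1,500 times" figure comes from. And 32 doublings at one every two years since 1960 is how you land on the 33rd square of the board in 2024.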


Interesting article. I mean, I have no idea, but I go back and forth on these arguments. I am still not convinced growth will translate to profits. I mean, what about AMD and Intel, are they just going to sit on their hands? As I understand it, AMD is close and Intel is behind, but there are still a lot of smart people there. NVIDIA's profits have to be drawing interest; I know they are. Then you have Apple, aren't they designing their own chips now? I don't think we can simply take the growth of the industry and apply it to the potentially high earnings of a particular company. Desktop computers have been getting more and more powerful for the last 20 years, and it hasn't translated to profits for Intel/AMD. For now, NVIDIA is so far ahead and the demand so substantial that they are killing it, but I question whether that's sustainable long term.


How good is AI in generating new ideas?

 

The conventional wisdom has been not very good. Identifying opportunities for new ventures, generating a solution for an unmet need, or naming a new company are unstructured tasks that seem ill-suited for algorithms. Yet recent advances in AI, and specifically the advent of large language models like ChatGPT, are challenging these assumptions.

 

https://archive.ph/e6L3N#selection-4423.0-4429.345


  • 3 weeks later...

https://www.wsj.com/tech/ai/ais-costly-buildup-could-make-early-products-a-hard-sell-bdd29b9f?mod=lead_feature_below_a_pos1

 

AI often doesn’t have the economies of scale of standard software because it can require intense new calculations for each query. The more customers use the products, the more expensive it is to cover the infrastructure bills. These running costs expose companies charging flat fees for AI to potential losses.

Microsoft used AI from its partner OpenAI to launch GitHub Copilot, a service that helps programmers create, fix and translate code. It has been popular with coders—more than 1.5 million people have used it and it is helping build nearly half of Copilot users’ code—because it slashes the time and effort needed to program. It has also been a money loser because it is so expensive to run.

Individuals pay $10 a month for the AI assistant. In the first few months of this year, the company was losing on average more than $20 a month per user, according to a person familiar with the figures, who said some users were costing the company as much as $80 a month.
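Just to make the quoted numbers concrete, the per-user math is brutal (the $10, $20 and $80 figures are from the article; the rest is my own back-of-the-envelope):

PRICE = 10.0        # $/user/month subscription
AVG_COST = 20.0     # reported average infrastructure cost per user per month
HEAVY_COST = 80.0   # reported cost of the heaviest users

print("average user margin:", PRICE - AVG_COST, "$/month")    # -10.0
print("heavy user margin:  ", PRICE - HEAVY_COST, "$/month")  # -70.0

# Break-even requires pushing average cost per user below the $10 price,
# i.e. cutting inference cost per query by more than half.

Which loops back to the earlier point in this thread about finding the smallest model that can satisfactorily answer a query.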

 


  • 5 weeks later...
4 minutes ago, james22 said:

 

 

I'm definitely a believer that we underestimate the impact long term and overestimate it short term. 

 

Just because some CEO is excited about what his company is working on doesn't necessarily change much for me. After all, Adam Neumann was excited about WeWork.

 

But we'll see. 


It is exciting for sure. But what is underappreciated is how much upfront investment is required and how long it might take to get a return on it. 

 

Currently the economics of Big Tech are incredible. Capital intensity is very low. Pricing power is very strong as everyone is hooked on their products. Even as revenues are slowing they are still posting double digit earnings growth. They are free cash flow machines and a lot of that free cash flow has been allocated towards share buybacks and reinvesting in their very profitable core businesses with fairly limited risk. 

 

If they start investing a lot in AI there is a lot more risk. Big Tech have an edge because they have incredible financial resources and are already pretty proficient in the use of non-generative AI. But IBM was a behemoth and lost its way as technology changed. And new technologies sometimes benefit users more than the providers. There was a lot of excitement as the 90s tech giants built out the internet. But it was only much later that very profitable internet companies emerged that were able to harness the technology for their benefit. 

 

 


  • 2 weeks later...

Wow, $86B in value intentionally destroyed in a few days? From Matt Levine: Bloomberg

 

Quote

This is arguably a silly way to look at the situation, because, for a few years ending last Friday, nobody really thought of OpenAI as a nonprofit. OpenAI was an $86 billion tech startup that was building artificial intelligence tools that were expected to result in huge profits for its investors (Microsoft Corp., venture capital firms) and employees (many of whom owned stock). But technically that OpenAI — OpenAI Global LLC, the $86 billion startup with employee and VC and strategic shareholders — was a subsidiary controlled by the nonprofit, OpenAI Inc., and the nonprofit asserted itself dramatically last Friday when its board of directors fired its chief executive officer, Sam Altman, and threw everything into chaos.

 

But for a moment ignore all of that and just think about OpenAI Inc., the 501(c)(3) public charity, with a mission of “building safe and beneficial artificial general intelligence for the benefit of humanity.” Like any nonprofit, it has a mission that is described in its governing documents, and a board of directors who supervise the nonprofit to make sure it is pursuing that mission, and a staff that it hires to achieve the mission. The staff answers to the board, and the board answers to … no one? Their own consciences? There are no shareholders; the board’s main duties are to the mission.

 

...

 

Also, of course, the material conditions of the OpenAI staff are pretty unusual for a nonprofit: They can get paid millions of dollars a year and they own equity in the for-profit subsidiary, equity that they were about to be able to sell at an $86 billion valuation. When the board is like “no, the mission requires us to zero your equity and cut off our own future funding,” I mean, maybe that is very noble and mission-driven of the board. But, just economically, it is rough on the staff.

 

Yesterday virtually all of OpenAI’s staff signed an open letter to the board, demanding that the board resign and bring back Altman. The letter claims that the board “informed the leadership team that allowing the company to be destroyed ‘would be consistent with the mission.’” Yes! I mean, the board might be wrong about the facts, but in principle it is absolutely possible that destroying OpenAI’s business would be consistent with its mission. If you have built an unsafe AI, you delete the code and burn down the building. The mission is conditional — build AGI if it is safe — and if the condition is not satisfied then you go ahead and destroy all of the work. That is the board’s job. It’s the board’s job because it can’t be the staff’s job, because the staff is there to do the work, and will be too conflicted to destroy it. The board is there to supervise the mission.

 

 

 


  • 1 month later...
  • 2 weeks later...

Something that has occurred to me is that in the short term it does not really matter whether AI is any good or not or the extent to which it boosts productivity and over what timeframe. So long as enough people believe it is worthwhile it will still result in a massive investment boom that will be very supportive to the US economy in the same way that mostly unproductive US government spending is. 

 

And that combination of massive US fiscal deficits and massive AI investment spending at the least could offset the recessionary pressures in the global economy and may even result in an economic boom similar to the dot com bubble. 

 

Of course if AI doesn't fulfil its promise and the investment boom turns to bust or continued deficit spending proves unsustainable then it sets us up for a massive hangover. 

 

But in the short term at least the above seems very bullish and suggests we are closer to 1995 than 1999. 

 

 

 


  • 4 weeks later...

So few actual products and customers, but sure, let's throw trillions at building more AI chips ahead of demand.

 

https://www.wsj.com/tech/ai/sam-altmans-vision-to-remake-the-chip-industry-needs-more-than-money-1dc0678a?mod=hp_lead_pos3

 

Quote

Altman has held discussions with chip makers about joining with them and using trillions of dollars to build and operate new factories, along with investments in energy and other AI infrastructure. Many of the world’s largest chip companies, including Nvidia, design their chips but outsource their production to companies such as TSMC.

 

Building a cutting-edge chip factory typically costs at least $10 billion. But even with that, the scale Altman is discussing is extreme: Stacy Rasgon, an analyst at Bernstein Research, estimates that a little more than $1 trillion has been spent on chip-manufacturing equipment in the entire history of the industry.

 

 


  • 2 weeks later...

At some stage LLMs will make their way towards on-device inference. Some of the new Google models are device friendly.

 

I'm sure Apple has designs on this as well with their Neural Engine and Core ML, as well as the shared memory architecture.
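As a loose illustration of what "device friendly" usually means in practice, shrinking the weights so they fit a phone's memory and compute budget, here is a minimal sketch using PyTorch dynamic quantization on a toy model. This is a generic example of my own, not Apple's Core ML pipeline or Google's actual approach.

import torch
import torch.nn as nn

# Toy stand-in for a model; a real on-device LLM is far larger,
# but the quantization call is the same.
model = nn.Sequential(
    nn.Linear(512, 2048),
    nn.ReLU(),
    nn.Linear(2048, 512),
)

# Dynamic quantization stores the Linear weights as int8 instead of float32,
# roughly quartering their memory footprint and speeding up CPU inference.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(1, 512)
print(quantized(x).shape)  # torch.Size([1, 512])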


Yes, still very early. If I had to equate it with the '90s tech bubble, I would say we are somewhere around '95/'96. Decent chance of a continued inflating of the bubble over the next 2-3 years, then a drop as people get disillusioned, and then another leg up once a new base is established. Just a guess though; macro isn't something I or anybody else can really predict.

