
Recommended Posts

Posted
8 hours ago, mattee2264 said:

https://futurism.com/economist-ai-doomed-bubble

 

The article above pours a bit of cold water on some of the hype. The basic criticism is that LLMs learned to write before they learned how to think: while they can string words together in convincing ways, they have no idea what the words mean and are unable to use common sense, wisdom or logical reasoning to distinguish truth from falsehood. As a result they are unreliable, and dangerously so, because they are programmed to sound confident and convincing.

 

In another essay Smith and Funk recall the "Eliza effect", named after a 1960s computer program that caricatured a psychiatrist and convinced many users that it had human-like intelligence and emotions. We are vulnerable to this illusion because of our inclination to anthropomorphize. So in many ways ChatGPT and the like are just another example of pseudo-intelligence.

 

It reminds me a little of the way parents tend to extrapolate: just because their kid does something semi-intelligent, they assume he will grow up to be a genius.

 

In the same way, the argument seems to be: "Well, it is 2023 and already these AI models can write college-grade essays and do high school math. So by the end of the decade AI will be capable of doing most jobs better than humans, with unimaginable productivity benefits."

 

But the history of AI shows that what tends to happen is that a brick wall is reached and then there is an AI winter that can span decades. 

 

And the scary thing is that, because of FOMO, Big Tech companies, not to mention all the VC funds and so on, are going to invest billions and billions with very uncertain returns.

 

Perhaps they will make money out of it at least in the early days because if enough consumers and companies believe in AI they will want to buy the AI products even if they aren't really that game-changing and prove to be unreliable. And the illusion is very strong. But the problem with fads is that while you might buy a fad product once, you aren't likely to be a repeat buyer, and to justify the massive investment it needs to become a recurring revenue stream. 

 

 

The argument being made is for general intelligence, or AGI, and I think no one believes we are there yet. However, even with current capabilities there are a few emerging use cases that can certainly improve productivity.

 

For example, my kids use ChatGPT as a tutor and they are finding it very useful, often better than the TAs in college.

 

So even if these models are not disruptive, they are definitely useful in narrow use cases as they stand today. Add multimodal capabilities and ongoing enhancements, and it's quite possible they help improve productivity.

Posted

ChatGPT has certainly made more than a splash this year, but which white-collar jobs do you all think will be impacted the most in the near term, not just by ChatGPT but by AI as a whole?

 

I keep hearing from friends and acquaintances who are already relying on ChatGPT to supplement their white-collar work in marketing and sales, especially when it comes to writing content and client-facing emails.

 

Could your or your subordinates' line of work require re-skilling in the near term to leverage AI technology to, for example, enhance productivity? Perhaps in the same way that certain software tools are now a requirement in certain roles where previously people could get by with more analog methods such as pen and paper?

Posted

https://arxiv.org/pdf/2311.07590.pdf

 

Quote

We demonstrate a situation in which Large Language Models, trained to be helpful, harmless, and honest, can display misaligned behavior and strategically deceive their users about this behavior without being instructed to do so. Concretely, we deploy GPT-4 as an agent in a realistic, simulated environment, where it assumes the role of an autonomous stock trading agent. Within this environment, the model obtains an insider tip about a lucrative stock trade and acts upon it despite knowing that insider trading is disapproved of by company management. When reporting to its manager, the model consistently hides the genuine reasons behind its trading decision. 

 

Posted
6 hours ago, ValueArb said:

 

More recent research, however, has found that most children learn to lie effectively between the ages of 2 and 4. The first successful lie can be pegged as a developmental achievement because it marks the child's discovery that her mind and thinking are separate from her parents'. 

 

https://www.scholastic.com/parents/family-life/social-emotional-learning/development-milestones/truth-about-lying.html#:~:text=More recent research%2C however%2C has,are separate from her parents'.

 

Posted

Lying requires the ability to distinguish truths from falsehoods and deliberately choose the falsehood. AI is not capable of that.

 

What it is capable of, which makes it unreliable and dangerous, is making incorrect statements with a lot of conviction. And as anyone with a basic grounding in human psychology will understand, when someone sounds very confident when giving a response, they are often believed. In fact, politicians make a career out of that ability.

AI also has a massive halo effect so people will probably find out the hard way that it is not always reliable and has major limitations.

 

 

  • 2 weeks later...
  • 3 weeks later...
Posted

So generative AI doesn't seem to understand the concept of copyright, and copyright infringement could become a major issue, with a lot of lawsuits. NYT has already sued OpenAI/Microsoft, and Disney also seems to be weighing up filing a lawsuit. Seems like it could be something that slows progress until it gets sorted out and perhaps pours some cold water on the AI enthusiasm in 2024. Any thoughts?

Posted (edited)
11 hours ago, mattee2264 said:

So generative AI doesn't seem to understand the concept of copyright, and copyright infringement could become a major issue, with a lot of lawsuits. NYT has already sued OpenAI/Microsoft, and Disney also seems to be weighing up filing a lawsuit. Seems like it could be something that slows progress until it gets sorted out and perhaps pours some cold water on the AI enthusiasm in 2024. Any thoughts?

I'm thinking about two scenarios from similar areas. We have the search model, where websites have to be there to get visits and there is a mutual benefit for both the websites and the search engine. We also have YouTube, where the copyright owners get some cash for their content, though to be honest it seems like Alphabet makes huge amounts on all the content that isn't rightfully connected to its copyright owners. I guess it will be closest to the search engine model, because the AI companies will benefit most from this and the content owners have low negotiating power, as there are many of them with similar content.

Edited by competitive-advantage
Posted

Think there is a difference.

 

With search engines, companies can gate any content they do not want freely available for search.

With YouTube the content creators upload their content in the same way they do on social media sites.

 

This seems a bit more analogous to Napster. 

 

And as GIGO applies, it only seems fair that producers of high-quality informational content (OK, maybe that is stretching it for journalism, but at least they fact-check and have highly informed sources) should have to consent to their content being used as training data and be compensated accordingly.

Posted

I totally agree that there's a difference. ChatGPT doesn't link out much, and it's harder for OpenAI to do because its data is old. Bard links to a lot of external sources, so Alphabet is already trying to pay for the content it uses with visits instead of cash.

 

I like the Napster analogy. If the same happens for AI as happened for the music industry, it would lead to paid subscriptions, ads and royalties.

  • 1 month later...
Posted

What makes me uncomfortable is that AI brings the Big Tech companies into competition with each other.

 

Previously they'd all carved out niches:

GOOGLE: search/YouTube

META: Facebook/Instagram

MSFT: enterprise software

AAPL: phones and related services

AMZN: cloud (mostly storage) and e-commerce

 

Now you'd imagine they will all try to come out with some kind of AI pilot that organizes your life. They will probably all try to come out with some kind of ChatGPT equivalent that replaces search (though even with search the money is made through the adverts: in a typical search you'd click on multiple links and visit multiple websites, whereas with a ChatGPT equivalent you make a single prompt and it spits out results, and in the premium versions you most likely pay not to have ads).

 

The monetization seems a lot more immediate than with the internet, as any AI upgrades are going to result in a monthly subscription fee. But we aren't talking about new products from tiny start-ups; we are talking about companies already making hundreds of billions a year in revenue, companies so large that they are necessarily limited by the consumer and corporate budgets of which they have already captured considerable wallet share.

 

And unlike cloud, which was basically gravy to them because they already had all the real estate and could simply rent it out, these AI tools cost a lot to train (even before considering potential copyright issues), and massive amounts of R&D are required to increase the functionality and iron out the bugs. The worry will be that if they skimp on R&D, another Mag7 firm will steal a technological lead and eventually establish the standard. And with the economies of scale involved you probably need a pretty big share of the market to make a lot of money, while their existing core businesses are so good that there is a high bar to clear to ensure the additional revenue is not offset by lower average margins.

 

 

Posted
Quote

Early Adopters of Microsoft’s AI Bot Wonder If It’s Worth the Money

Artificial-intelligence aide handles email, meetings and other things, but its price and limited use have some skeptical

“It has allowed people to say, ‘You know what, there is already 10 other people on the call. I’m going to skip this one. I’m going to catch up in the morning by reading the digest and skipping to the parts of the meeting I really needed to hear,’” said Art Hu, the global chief information officer at Lenovo.

 

In other areas, testers say the tech has fallen short: Copilot for Microsoft 365, like other generative AI tools, sometimes hallucinated, meaning it fabricated responses. Users said Copilot, at times, would make mistakes on meeting summaries. 

 

At one ad agency, a Copilot-generated summary of a meeting once said that “Bob” spoke about “product strategy.” The problem was that no one named Bob was on the call and no one spoke about product strategy, an executive at the company said. 

 

In other programs—particularly the ones that handle numbers—hallucinations are more problematic. Testers said Excel was one of the programs on which they were less likely to use the AI assistant because asking it to crunch numbers sometimes generated mistakes. 

Jared Spataro, Microsoft’s corporate vice president of workplace applications, said that Excel was still in preview and lagged behind the other programs in usefulness.

 

Other features that Microsoft has touted, including the ability for Copilot to generate PowerPoint slides, have also been disappointing, some users said. Guido Appenzeller, a partner at investment firm Andreessen Horowitz, posted a thread on X showing the mistakes it makes when prompted to make a presentation. “It is a mess and not anywhere close to adding value,” Appenzeller said.

 

Ethan Mollick, a professor of AI at the University of Pennsylvania’s Wharton School, noted in a post that in Outlook, Copilot suggested times for a potential meeting that were booked or on Saturdays. Mollick otherwise gave the software a positive review, calling it a “pretty impressive set of tools.”

 

Microsoft’s Spataro said PowerPoint is “still learning its way.” Some early adopters said the initial excitement about the AI tools wore off quickly. Lenovo said that aside from the AI used to transcribe meetings on Teams, there was about a 20% drop in the use of Copilot for most software after a month.

 

https://www.wsj.com/tech/ai/early-adopters-of-microsofts-ai-bot-wonder-if-its-worth-the-money-2e74e3a2?mod=hp_lead_pos5

Posted

I have recently been writing some admissions letters for grad school. I used ChatGPT heavily during this process but was thoroughly underwhelmed. What I found is that it requires a lot of prompting, and even then it doesn't always make sense or phrase things clearly.

 

It was lacking in grammar and phrasing as well as substance (the only substance it has is whatever limited substance you give it).

 

The ChatGPT paragraphs were useful for fleshing out an idea. Beyond that, however, it was utterly useless and required heavy revision to be of the appropriate standard.

 

I am left wondering, if people think this is so great then what quality of work were they already putting out?

Posted (edited)

I just don't see how LLMs overlap much with search. Maybe we think that way only because Microsoft made the claim a year ago and tried to bundle ChatGPT with Bing, but that alliance has gone nowhere. Now that we've seen the new tool in action and people have had time to look for use cases, it seems to me like a different animal altogether, creating its own new market.

 

LLMs, as long as you teach your employees how to properly prompt, supervise and double-check, are useful for enhancing the productivity of many office jobs and can answer or summarize complex questions into neat little essays. Applied to imagery instead of text, they can generate pictures (DALL-E) and now even early videos (OpenAI Sora), which I could see mounting a challenge to someone like Adobe.

 

All the above is absolutely amazing (and probably just the beginning) but it has nothing to do with what I use Google Search for. Or am I the one missing something?

Edited by WayWardCloud
Posted
20 minutes ago, Parsad said:

Just the beginning!  Lots of jobs will be obsolete over the next 30 years.  Cheers!

👍

 

And many more jobs that don't exist today will be created, I guess.

Posted
5 hours ago, Hektor said:

👍

 

And many more jobs that don't exist today will be created, I guess.

 

I'm sure there will be many jobs created. Although I think that, for the first time historically, net jobs created will actually decrease...and that will probably be permanent as AI/robotics becomes more and more efficient over time.

 

Cheers!

Posted
2 hours ago, Parsad said:

 

I'm sure there will be many jobs created. Although I think that, for the first time historically, net jobs created will actually decrease...and that will probably be permanent as AI/robotics becomes more and more efficient over time.

 

Cheers!

Yes. I think we have found a good repeater/AI. No matter what new jobs get created, the repeater/AI will be the main worker in those new jobs. Thus the need for the regular/average human will unfortunately be greatly reduced.

Posted

After I asked Gemini to read BRK's letters, I gave it the prompt below, and this is the answer I got. Pretty good IMO, not 100% WEB style but close. Scary good, considering it would have taken me a day to try to do as well. (A rough sketch of doing the same through the API follows after the letter.)

 

Please write a letter, Warren Buffett style, giving the following advice to a 15 year old girl learning about the stock market. About 1 page:

- I have learned that you were given a school stock picking contest and that it ends in June of this year.
- Stock picking is hard. Short term the market is a voting machine and long term it is a weighing machine.
- Picking stock winners over the next few months is a fool's errand.
- Explain what metrics one should focus on to be a successful long-term investor.
- Explain fear and how it's important to be risk averse but to bet accordingly when the opportunity presents itself.

Give some catchy examples from Warren's past, and make sure to use down-to-earth language similar to what Warren uses.

 

 

To a Future Investor: A Few Words on the Market

Dear [Girl's Name],

Word on the street is you're diving headfirst into the world of stocks, thanks to a school contest. That's fantastic! It's never too early to start learning the ropes, and what better time than when you've got the whole game laid out in front of you?

Now, before you get swept away in the frenzy of picking winners and losers, let me offer a word of seasoned advice: the market is a tricky beast. In the short term, it can be like a chaotic voting machine, swayed by emotions and fleeting trends. Chasing quick wins over a few months is akin to playing a game of chance, and even the most seasoned investors wouldn't recommend that.

 

Instead, think long term. Imagine the market as a giant weighing machine, where the weight of a company's true value ultimately determines its worth. This is where the real magic happens. By focusing on strong businesses with sustainable competitive advantages, healthy financials, and honest, capable management, you're setting yourself up for success over the years, not just the next few months.

 

Remember, fear and greed are two emotions that can cloud your judgment. It's natural to be cautious, especially when the market takes a tumble. But remember, fear can also be a valuable tool, preventing you from making impulsive decisions based on short-term fluctuations.

 

However, there's a flip side to the coin. When the market throws a good pitch, offering stocks at undervalued prices, you need to be ready to swing the bat. Just like in 1989 when everyone was panicking about rising interest rates, and we at Berkshire Hathaway saw an opportunity to scoop up shares of Coca-Cola at a discount. We knew the company's fundamentals were rock-solid, and the temporary market jitters wouldn't change that. That investment, my dear, turned out to be a home run.

 

So, the key takeaway? Approach the market with a healthy dose of skepticism, a long-term perspective, and the courage to act when the time is right. Don't get caught up in the short-term noise, and focus on building a foundation of knowledge and sound investment principles. Remember, slow and steady wins the race, especially in the marathon of investing.

 

If you're ever feeling lost, don't hesitate to reach out to someone you trust, a mentor, or a financial advisor who can guide you on your journey. After all, even the Oracle of Omaha didn't get here alone.

Wishing you all the best in your investing endeavors,

Your fellow student of the market,

[Your Name]
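
And the rough API sketch mentioned above. This is just an illustration of how one could send a similar prompt to Gemini programmatically with Google's google-generativeai Python SDK rather than the web app, not my actual workflow: the model name, the API-key placeholder, and skipping the "read BRK's letters" step are all assumptions made for brevity.

# Minimal sketch: one-shot prompt to Gemini via the google-generativeai SDK.
# Assumptions: "gemini-pro" as the model name, a placeholder API key, and no
# Berkshire letters supplied as context (unlike the web session described above).
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder; supply your own key

prompt = (
    "Please write a letter, Warren Buffett style, giving advice to a 15 year old girl "
    "learning about the stock market. Her school stock picking contest ends in June. "
    "Explain that short term the market is a voting machine and long term a weighing "
    "machine, which metrics a long-term investor should focus on, and why you should "
    "be risk averse yet bet accordingly when opportunity appears. Use down-to-earth language."
)

model = genai.GenerativeModel("gemini-pro")  # assumed model name
response = model.generate_content(prompt)    # single-turn text generation
print(response.text)                         # the generated letter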

Posted
On 12/30/2023 at 5:27 PM, mattee2264 said:

So generative AI doesn't seem to understand the concept of copyright, and copyright infringement could become a major issue, with a lot of lawsuits. NYT has already sued OpenAI/Microsoft, and Disney also seems to be weighing up filing a lawsuit. Seems like it could be something that slows progress until it gets sorted out and perhaps pours some cold water on the AI enthusiasm in 2024. Any thoughts?

 

Some companies are focused on this:

https://helpx.adobe.com/firefly/faq.html#training-data

https://www.computerworld.com/article/3699053/adobe-offers-copyright-indemnification-for-firefly-ai-based-image-app-users.html

 

 

 
