
ChatGPT



ChatGPT is terrible, and it's based on a type of "AI" that is a dead end and will never work for the vast majority of serious applications.

 

To the first point, I'm a software engineer and I've seen the code it generates; it's basically a templating engine. It is entirely useless for the work I do. Anyone who has used it for other tasks realizes it's "confidently wrong" in far too many use cases.

 

To the second point, Benedict Evans said it best.


Quote

2m people have now signed up to use ChatGPT, and a lot of people in tech are more than excited, and somehow much more excited than they were about using the same tech to make images a few weeks ago. How does this generalise? What kinds of things might turn into a generative ML problem? What does it mean for search (and why didn’t Google ship this)? Can it write code? Journalism? Analysis? And yet, conversely, it’s very easy to break it - to get it to say stuff that’s clearly wrong. The last wave of enthusiasm around chat bots largely fizzled out as people realised their limitations, with Amazon slashing the Alexa team last month. What can we think about this? 


The conceptual breakthrough of machine learning, it seems to me, was to take a class of problem that is ‘easy for people to do, but hard for people to describe’ and turn that from logic problems into statistics problems. Instead of trying to write a series of logical tests to tell a photo of a cat from a photo of a dog, which sounded easy but never really worked, we give the computer a million samples of each and let it do the work to infer patterns in each set. This works tremendously well, but comes with the inherent limitation that such systems have no structural understanding of the question - they don’t necessarily have any concept of eyes or legs, let alone ’cats’. 
To simplify hugely, generative networks run this in reverse - once you’ve identified a pattern, you can make something new that seems to fit that pattern. So you can make more pictures of ‘cats’ or ‘dogs’. To begin with, these tended to have ten legs and fifteen eyes, but as the models have got better the images have got very very convincing. But they’re still not working from a canonical concept of ‘dog’ as we do (or at least, as we think we do) - they’re matching or recreating or remixing a pattern. 
I think this is why, when I asked ChatGPT to ‘write a bio of Benedict Evans’, it says I work at Andreessen Horowitz (I left), went to Oxford (no), founded a company (no), and am a published author (not yet). Lots of people have posted similar examples of ‘false facts’ asserted by ChatGPT. It often looks like an undergraduate confidently answering a question for which it didn’t attend any lectures. It looks like a confident bullshitter. 


But I don’t think that’s quite right. Looking at that bio again, it’s an extremely accurate depiction of the kind of thing that bios of people like me tend to say. It’s matching a pattern very well. This is a probabilistic model, but we perceive the accuracy of probabilistic answers differently depending on the domain. If I ask for ‘the chest burster scheme in Alien as directed by Wes Anderson’ and get a 92% accurate output, no-one will complain that Sigourney Weaver had a different hair style. But if I ask for some JavaScript, or a contract, I might get a 98% accurate result that looks a LOT like the JavaScript I asked for, but that 2% might break the whole thing. To put this another way, some kinds of request don’t really have wrong answers, some can be roughly right, and some can only be precisely right or wrong.  


So, the basic use-case question for machine learning was “what can we turn into image recognition?” or “what can we turn into pattern recognition?” The equivalent question for generative ML might be “what can we turn into pattern generation?” and “what use cases have what kinds of tolerance for the error range or artefacts that come with this?” How many Google queries are searches for something specific, and how many are actually requests for an answer that could be generated dynamically, and with what kinds of precision? 


There’s a second set of questions, though: how much can this create, as opposed to, well, remix? 


It seems to be inherent that these systems make things based on patterns that they already have. They can be used to create something original, but the originality is in the prompt, just as a camera takes the photo you choose. But if the advance from chatbots to ChatGPT is in automating the answers, can we automate the questions as well? Can we automate the prompt engineering? 


It might be useful here to contrast AlphaGo with the old saying that a million monkeys with typewriters would, in time, generate the complete works of Shakespeare. AlphaGo generated moves and strategies that Go experts found original and valuable, and it did that by generating huge numbers of moves and seeing which ones worked - which ones were good. This was possible because it could play Go and see what was good. It had feedback - automated, scalable feedback. Conversely, the monkeys could create a billion plays, some gibberish and some better than Shakespeare, but they would have no way to know which was which, and we could never read them all to see. Borges’s library is full of masterpieces no human has ever seen, but how can you find them?


Hence, a generative ML system could make lots more ‘like disco’ music, and it could make punk if you described it specifically enough (again, prompt engineering), but it wouldn’t know it was time for a change and it wouldn’t know that punk would express that need. So, can you automate that? Or add humans to the loop? Where, at what point of leverage, and in what domains? This is really a much more general machine learning question - what are domains that are deep enough that machines can find or create things that people could never see, but narrow enough that we can tell a machine what to look for?
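The monkeys-versus-AlphaGo contrast in the essay above comes down to one thing: automated, scalable feedback. A toy illustration of the same idea is Dawkins's "weasel" program (this has nothing to do with AlphaGo's actual training; the hard-coded target string simply stands in for "being able to tell which output is good"):

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def score(text):
    """Automated, scalable feedback: count characters matching the target."""
    return sum(a == b for a, b in zip(text, TARGET))

def evolve(seed=0, mutation_rate=0.05, offspring=100):
    """Generate variants blindly, keep whichever scores best, repeat."""
    rng = random.Random(seed)

    def mutate(text):
        # Each character has a small chance of being replaced at random.
        return "".join(
            rng.choice(ALPHABET) if rng.random() < mutation_rate else c
            for c in text
        )

    current = "".join(rng.choice(ALPHABET) for _ in TARGET)
    generations = 0
    while score(current) < len(TARGET):
        best = max([mutate(current) for _ in range(offspring)], key=score)
        if score(best) > score(current):
            current = best
        generations += 1
    return current, generations
```

The generator is as blind as the monkeys; purely random typing would need on the order of 27^28 attempts to hit a 28-character target. With the `score` function in the loop, a few hundred generations suffice. Remove the scorer and the whole thing collapses, which is exactly the monkeys' problem.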

 


11 hours ago, ValueArb said:

ChatGPT is terrible, and it's based on a type of "AI" that is a dead end and will never work for the vast majority of serious applications.

 

To the first point, I'm a software engineer and I've seen the code it generates; it's basically a templating engine. It is entirely useless for the work I do. Anyone who has used it for other tasks realizes it's "confidently wrong" in far too many use cases.

 

To the second point, Benedict Evans said it best.


 

I am in DevOps, and it is perfect as a templating engine: exactly what I need most of the time. It especially takes away the work I hate most (setting up the basics).


  • 2 weeks later...
49 minutes ago, Castanza said:

Not specific to ChatGPT, but this is absolutely mind-blowing. Worth a listen, even if you only listen for 5 minutes.

It's a Potemkin village: a litany of factual errors and odd segues from the beginning. It says NeXT made three groundbreaking applications that never existed. It has Rogan ask Jobs about the Newton, which he had nothing to do with. It called his wife "Liz". It attributes Apple's "Digital Hub" strategy to Adobe and says their head of research was John Lasseter, who never worked at Adobe; he's the founder of Pixar!

 

Aside from the factual errors, it doesn't even express Jobs's thoughts well.

 

Also, the guy who introduces it is doing the usual overblown AI promotional schtick.

 

Edited by ValueArb

5 minutes ago, ValueArb said:

 

It's a Potemkin village: a litany of factual errors and odd segues from the beginning. The guy who introduces it is doing the usual overblown AI promotional schtick.

Sure, I can't speak to how factual the conversation is. I'm more impressed with the language articulation and the conversational give-and-take.

 

I mean, if you think about the thought process, how humans use logic and come to conclusions, I don't think it's too crazy to expect AI in the future to be able to mimic humans in this regard. Do we truly have original thoughts? Or are they simply the sum of specific decisions and interactions, inputs and outputs?
 

No clue, and I'm starting to sound like someone who has smoked too much pot. But think how crazy it would be if, 100 years from now, AI allows us to tap (I say that loosely) into the minds of past brilliant thinkers.


Could all be smoke and mirrors too. 


36 minutes ago, Castanza said:

Sure, I can't speak to how factual the conversation is. I'm more impressed with the language articulation and the conversational give-and-take.

 

I mean, if you think about the thought process, how humans use logic and come to conclusions, I don't think it's too crazy to expect AI in the future to be able to mimic humans in this regard. Do we truly have original thoughts? Or are they simply the sum of specific decisions and interactions, inputs and outputs?
 

No clue, and I'm starting to sound like someone who has smoked too much pot. But think how crazy it would be if, 100 years from now, AI allows us to tap (I say that loosely) into the minds of past brilliant thinkers.


Could all be smoke and mirrors too. 

 

In 100 years, I agree AI will be amazing. The question is how soon that will occur, and I'm firmly a skeptic so far (though ironically I created AI-generated art today to discuss its implications with my daughters, who both plan careers in the arts). I believe AI will be a great tool for very focused applications. For example, my current company uses this same sort of pattern-matching AI to predict periods and pregnancies for women who wear our sleep ring. Creating a model from tens of thousands of women, with their body-temperature graphs, ages, weights, heart rates, etc., allows that model to make pretty accurate correlations about when one woman's personal graph fits the model for a pregnant woman or one starting her period. But for general-purpose uses it falls apart.
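The pattern-matching approach described above can be sketched as a toy nearest-centroid classifier: average the labelled historical graphs into "profiles", then ask which profile a new user's graph sits closest to. All numbers, labels, and feature choices here are invented for illustration; a real model would use far more data and far richer features.

```python
def centroid(samples):
    """Element-wise mean of equal-length feature vectors."""
    return [sum(vals) / len(vals) for vals in zip(*samples)]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def classify(reading, profiles):
    """Return the label whose averaged profile is closest to the reading."""
    return min(profiles, key=lambda label: distance(reading, profiles[label]))

# Hypothetical nightly skin-temperature deviations (degrees C) over four nights,
# averaged from labelled historical wearers into one profile per label.
profiles = {
    "baseline": centroid([[0.0, 0.1, -0.1, 0.0], [0.1, 0.0, 0.0, -0.1]]),
    "early_pregnancy": centroid([[0.3, 0.4, 0.5, 0.4], [0.4, 0.3, 0.4, 0.5]]),
}

label = classify([0.35, 0.4, 0.45, 0.4], profiles)
```

The point is the same one made above: the model never "understands" pregnancy; it only measures how well a new graph fits a pattern built from many prior graphs, which is why it works in this narrow domain and falls apart in general-purpose use.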

 

I'd have to replay the video to better explain why I think this, but what AI should be good at (relating facts), this is very poor at. And what you want it to do, relate the actual thoughts and philosophy of a great person, I think it's deceptively poor at. The Jobs bot said some things that didn't seem reflective of things he would actually say; it was mimicking him without grasping him. But again, I need to play it again to give you more coherent examples.


2 hours ago, Castanza said:

Not specific to ChatGPT, but this is absolutely mind-blowing. Worth a listen, even if you only listen for 5 minutes.

There is this idea that AI could disrupt some current content creation, especially on social media or in a future metaverse:

 

https://stratechery.com/2022/dall-e-the-metaverse-and-zero-marginal-content/

 

 


https://www.wsj.com/articles/chatgpt-creator-openai-is-in-talks-for-tender-offer-that-would-value-it-at-29-billion-11672949279?mod=hp_lead_pos6

 

OpenAI, the research lab behind the viral ChatGPT chatbot, is in talks to sell existing shares in a tender offer that would value the company at around $29 billion, according to people familiar with the matter, making it one of the most valuable U.S. startups on paper despite generating little revenue. Venture-capital firms Thrive Capital and Founders Fund are in talks to invest in the deal, which would total at least $300 million in share sales, the people said. The deal is structured as a tender offer, with the investors buying shares from existing shareholders such as employees, the people said. The new deal would roughly double OpenAI’s valuation from a prior tender offer completed in 2021, when OpenAI was valued at about $14 billion, The Wall Street Journal reported. OpenAI has generated tens of millions of dollars in revenue, in part from selling its AI software to developers, but some investors have expressed skepticism that the company can generate meaningful revenue from the technology.


8 hours ago, UK said:

https://www.wsj.com/articles/chatgpt-creator-openai-is-in-talks-for-tender-offer-that-would-value-it-at-29-billion-11672949279?mod=hp_lead_pos6

 

OpenAI, the research lab behind the viral ChatGPT chatbot, is in talks to sell existing shares in a tender offer that would value the company at around $29 billion, according to people familiar with the matter, making it one of the most valuable U.S. startups on paper despite generating little revenue. Venture-capital firms Thrive Capital and Founders Fund are in talks to invest in the deal, which would total at least $300 million in share sales, the people said. The deal is structured as a tender offer, with the investors buying shares from existing shareholders such as employees, the people said. The new deal would roughly double OpenAI’s valuation from a prior tender offer completed in 2021, when OpenAI was valued at about $14 billion, The Wall Street Journal reported. OpenAI has generated tens of millions of dollars in revenue, in part from selling its AI software to developers, but some investors have expressed skepticism that the company can generate meaningful revenue from the technology.

 

I question how legit a $29B valuation is when it's only supported by a $300M stock sale. It sounds like OpenAI is in the eye of the current hype hurricane, and some insiders are taking advantage to reap the highest possible price by keeping supply well under demand.


4 hours ago, Spekulatius said:

Tried this out for the first time and I am sort of impressed. It's better than some essays I have seen. My son also really had piano lessons, even though I didn't really put this in my input.

 

I’ll be impressed when it can write an essay like that… but also as a palindrome.

 

Regarding the piano… I asked it to make up a story about a friend of mine (a rotational-encoder engineer whose wife wanted him to clean out the garage)… and ChatGPT correctly guessed that he loved garage sales and flea markets.


2 hours ago, ValueArb said:

I have a friend who is a professional developer who is using ChatGPT to help him generate code and swears by it. I'm setting up a lunch to get the full scoop.

I do. It works exceptionally well for a particular narrow class of well-defined problems (e.g., optimization is one area where it saved me a few hours). It doesn't work well for AI-centered (really, statistics-centered) problems.


8 hours ago, UK said:

Yay, the paper clip is back. This makes sense. The OpenAI chatbot is actually pretty good at creating outlines for some standardized stuff. I can find these online as well, but having the AI chatbot directly in Word or Excel may be more convenient.


Edited by Spekulatius

19 minutes ago, Spekulatius said:

Yay, the paper clip is back. This makes sense. The OpenAI chatbot is actually pretty good at creating outlines for some standardized stuff. I can find these online as well, but having the AI chatbot directly in Word or Excel may be more convenient.


 

🙂 or:

https://en.m.wikipedia.org/wiki/BonziBuddy

 


Edited by UK

15 minutes ago, Spekulatius said:

The OpenAI chatbot is actually pretty good at creating outlines

 

For software development, it's the opposite. The human comes up with the roadmap ("I'm going to make a stock trading app") and uses AI to perform the mundane details ("write a function that sorts integers").
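The "mundane detail" in that example is exactly the kind of thing worth delegating. A purely illustrative sketch of what one might get back (the input-validation choice is an assumption, not anything a particular tool produced):

```python
def sort_integers(values):
    """Return a new list of the given integers in ascending order.

    Deliberately boring: the kind of utility one delegates to a
    code-generation tool rather than typing by hand.
    """
    if not all(isinstance(v, int) for v in values):
        raise TypeError("sort_integers expects integers only")
    return sorted(values)
```

The point of the division of labor above is that the human still decides *that* this function should exist and where it fits in the app; the AI only fills in the routine body.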


So I had lunch with my developer friend who is using ChatGPT and it was pretty enlightening.

 

He has a chat set up where he shares his code with the AI so it can model how he does things. Then he asks it to generate boilerplate code, which would take him hours to write for dozens of classes, and it spits it right out based on the attributes of the classes he's shared with it. Essentially, he's using the tool to automate production of the most time-consuming but least intellectually challenging work. He still has to review it and fix occasional errors, but it's saving him a lot of time.
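To make the workflow concrete, here is a rough sketch of the kind of attribute-driven boilerplate being generated. The class name and attribute list are hypothetical, and this generator is only a stand-in for what the AI produces from his shared code:

```python
def make_boilerplate(class_name, attributes):
    """Emit Python source for a class: __init__ and __repr__ from a list
    of attribute names -- the repetitive part nobody enjoys typing."""
    args = ", ".join(attributes)
    init_body = "\n".join(f"        self.{a} = {a}" for a in attributes)
    repr_parts = ", ".join(f"{a}={{self.{a}!r}}" for a in attributes)
    return (
        f"class {class_name}:\n"
        f"    def __init__(self, {args}):\n"
        f"{init_body}\n"
        f"\n"
        f"    def __repr__(self):\n"
        f"        return f'{class_name}({repr_parts})'\n"
    )

# Hypothetical example: one of "dozens of classes" described by its attributes.
source = make_boilerplate("Invoice", ["customer", "amount", "due_date"])
```

Multiply this by dozens of classes and the hours saved add up, which is the whole appeal; the AI version differs in that it also mimics his personal style from the code he shared.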

 

So what I learned is that you can customize ChatGPT within your own chat by giving it your data, and as it learns you can make it more powerful and useful for your problem domains. He's also set up a separate marketing chat where he's training it on all his product's marketing documents and asking it to spit out marketing copy for his new release, with emphasis on specific features he tells it. He said it reads great.


He also said he'd be happy paying $500 a month for it. Its main problem is performance: it's slow at times, and sometimes he gets cut off for the rest of the day because he's made too many queries.

 

So this makes sense to me. It's a pattern-matching tool with enough intelligence that you can train it to do tasks for you. The ideas that it's going to write code on its own, or that the code it writes is going to be of perfect quality, are both ludicrous. I'm betting people are creating startups right now to leverage it across very narrow domains (customer support for plumbers, traffic-court advisor, etc.) where they can polish it by training it on specific, better-curated data to make it far more trustworthy in those domains.


3 hours ago, ValueArb said:

So I had lunch with my developer friend who is using ChatGPT and it was pretty enlightening.

 

He has a chat set up where he shares his code with the AI so it can model how he does things. Then he asks it to generate boilerplate code, which would take him hours to write for dozens of classes, and it spits it right out based on the attributes of the classes he's shared with it. Essentially, he's using the tool to automate production of the most time-consuming but least intellectually challenging work. He still has to review it and fix occasional errors, but it's saving him a lot of time.

 

So what I learned is that you can customize ChatGPT within your own chat by giving it your data, and as it learns you can make it more powerful and useful for your problem domains. He's also set up a separate marketing chat where he's training it on all his product's marketing documents and asking it to spit out marketing copy for his new release, with emphasis on specific features he tells it. He said it reads great.


He also said he'd be happy paying $500 a month for it. Its main problem is performance: it's slow at times, and sometimes he gets cut off for the rest of the day because he's made too many queries.

 

So this makes sense to me. It's a pattern-matching tool with enough intelligence that you can train it to do tasks for you. The ideas that it's going to write code on its own, or that the code it writes is going to be of perfect quality, are both ludicrous. I'm betting people are creating startups right now to leverage it across very narrow domains (customer support for plumbers, traffic-court advisor, etc.) where they can polish it by training it on specific, better-curated data to make it far more trustworthy in those domains.

This is exactly why it is interesting to Microsoft and will be integrated with GitHub Copilot.


On 1/19/2023 at 9:41 PM, ValueArb said:

So I had lunch with my developer friend who is using ChatGPT and it was pretty enlightening.

 

He has a chat set up where he shares his code with the AI so it can model how he does things. Then he asks it to generate boilerplate code, which would take him hours to write for dozens of classes, and it spits it right out based on the attributes of the classes he's shared with it. Essentially, he's using the tool to automate production of the most time-consuming but least intellectually challenging work. He still has to review it and fix occasional errors, but it's saving him a lot of time.

 

So what I learned is that you can customize ChatGPT within your own chat by giving it your data, and as it learns you can make it more powerful and useful for your problem domains. He's also set up a separate marketing chat where he's training it on all his product's marketing documents and asking it to spit out marketing copy for his new release, with emphasis on specific features he tells it. He said it reads great.


He also said he'd be happy paying $500 a month for it. Its main problem is performance: it's slow at times, and sometimes he gets cut off for the rest of the day because he's made too many queries.

 

So this makes sense to me. It's a pattern-matching tool with enough intelligence that you can train it to do tasks for you. The ideas that it's going to write code on its own, or that the code it writes is going to be of perfect quality, are both ludicrous. I'm betting people are creating startups right now to leverage it across very narrow domains (customer support for plumbers, traffic-court advisor, etc.) where they can polish it by training it on specific, better-curated data to make it far more trustworthy in those domains.

Thanks for this, some good insights.

 

I tried to get it to work on 10-K info, and it struggled with the size, the complexity of the tables, and then summarizing at the level I was after. It also would at times just plain make stuff up. I have no idea what I'm doing, so it could just be me, but I'm still skeptical. I think, as you said, it needs to target very specific use cases. For now.

Edited by no_free_lunch
