
If the AI bubble is like the Internet, in what year are we now?


james22

If the AI bubble is like the Internet, in what year are we now?

55 members have voted

  1. If the AI bubble is like the Internet, in what year are we now?

    • 1995: 19
    • 1996: 6
    • 1997: 7
    • 1998: 10
    • 1999: 4
    • 2000: 9


Recommended Posts

46 minutes ago, Gmthebeau said:

 

There will be some applications, but it's all basically incremental, nothing game changing.  It won't be anywhere near as big as the internet.

 

Prior to the AI frenzy, we heard how the Metaverse was going to be the next big thing.  It hasn't caught on, so they moved on to AI - basically to get a rally started.

 

I would bet the XLU (utilities) vastly outperforms the AIQ (AI ETF) over the next 20 years.

Do you code?  Have you tried Code Interpreter?  🤯 GPT-4 Code Interpreter is as good as a junior software engineer but is able to generate code faster than any human.  AlphaFold is a big improvement in protein folding, which has huge implications for biotech, not to mention other, older techniques adopted by companies like Schrodinger.  Self-driving cars are progressing slowly but will get there.  Medical diagnosis via computer vision is a relatively important field too, but more niche than these other applications.  Improving classical statistical modeling has already changed how 50+% of businesses work.  Without machine learning, there would be no need for many data scientists, data engineers, and business analysts.  

Edited by cameronfen


9 minutes ago, cameronfen said:

Do you code?  Have you tried Code Interpreter?  🤯 GPT-4 Code Interpreter is as good as a junior software engineer but is able to generate code faster than any human.  AlphaFold is a big improvement in protein folding, which has huge implications for biotech, not to mention other, older techniques adopted by companies like Schrodinger.  Self-driving cars are progressing slowly but will get there.  Medical diagnosis via computer vision is a relatively important field too, but more niche than these other applications.  Improving classical statistical modeling has already changed how 50+% of businesses work.  Without machine learning, there would be no need for many data scientists, data engineers, and business analysts.  

 

No, I don't code.  I just saw some guy on CNBC talking about his AI model to pick stocks.  I remember this same guy on there a couple of years ago recommending UPST at like $300 per share before it fell to $15 (now it's like $50), so I suppose AI can't do any worse than him.  Can't believe people are still paying him fees.

 


Just now, Gmthebeau said:

 

No, I don't code.  I just saw some guy on CNBC talking about his AI model to pick stocks.  I remember this same guy on there a couple of years ago recommending UPST at like $300 per share before it fell to $15 (now it's like $50), so I suppose AI can't do any worse than him.  Can't believe people are still paying him fees.

 

Yeah, just because AI is for real doesn't mean there aren't a lot of grifters selling vaporware.  


2 hours ago, Gmthebeau said:

There will be some applications, but it's all basically incremental, nothing game changing.  It won't be anywhere near as big as the internet.

 

It's easy to get confused in this space, so let me summarize.

AI = deep learning + machine learning.

 

Deep learning is the neural network type of AI; it simulates brain neurons.

Machine learning is the statistical/mathematical model type of AI: decision trees, linear regression, optimization problems.
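For concreteness, here is a minimal sketch of that statistical kind of machine learning: a decision tree and a linear regression fit with scikit-learn. All numbers and features below are invented for illustration.

```python
# Minimal sketch of "classical" machine learning: a decision tree and a
# linear regression fit on synthetic data (all numbers here are made up).
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))          # 200 samples, 3 features
y = 2.0 * X[:, 0] - X[:, 1] + rng.normal(scale=0.1, size=200)

tree = DecisionTreeRegressor(max_depth=3).fit(X, y)
linear = LinearRegression().fit(X, y)

print(tree.predict(X[:2]))             # tree's piecewise-constant predictions
print(linear.coef_)                    # recovers roughly [2, -1, 0]
```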

 

AI has been around since the 1970s, including deep learning neural networks and machine learning mathematical approaches... it made progress at a glacial pace and is often difficult to use. When, in 2010-2020, data warehouses, data lakes, and massive amounts of data became available, the statistical type of machine learning took off, showed good results, and is being used today... that sort of got incorporated into a lot of applications without a lot of hype - you needed data scientists and operations research type folks to implement it using various machine learning tools... and it was done successfully... but this is not the big breakthrough.

 

Neural network AI has been difficult to use. CNNs used for face recognition made some progress, but you had to train these models manually, using structured data, to make them work, which made them difficult to develop - but once developed they were very useful, so the results were mixed, with some good successes.

 

But then in 2017 a paper came out from Google called "Attention Is All You Need". It was a breakthrough in how neural networks were built. Of course there were some earlier breakthroughs that led to the 2017 moment, but as an approximation, 2017 is when the NEW AI revolution started. This new AI - you could give it the entire world corpus plus the internet in unstructured form and it could self-learn; it trains itself over many iterations. The end result is in two parts: (i) a brain with lots of knowledge and (ii) a sponge that can learn new things very quickly.  What you see with ChatGPT is (i) - it spits out things it has learned. But what's most valuable is (ii). Imagine something that is easy to train with new data of any form - you no longer have to mess around with the number of neurons, number of layers, number of inputs/outputs, or type of network to make it work. It just works out of the box with any kind of data - any - structured or unstructured, text, images, sound, etc. ... and this ability to learn fast from any kind of data is what makes it powerful.
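To make the 2017 breakthrough concrete, here is a toy numpy sketch of the scaled dot-product attention at the heart of that paper. The dimensions and inputs are made up; real models stack many of these layers with learned projections.

```python
# Toy sketch of scaled dot-product attention from "Attention Is All You Need"
# (2017). Random inputs and tiny dimensions, purely for illustration.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # token-to-token affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ V                               # weighted mix of values

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8                              # 4 tokens, 8-dim embeddings
Q = rng.normal(size=(seq_len, d_model))
K = rng.normal(size=(seq_len, d_model))
V = rng.normal(size=(seq_len, d_model))
print(scaled_dot_product_attention(Q, K, V).shape)   # (4, 8)
```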

 

Find interviews with leaders in this space - start with the famous Bill Gates interview and the interviews with Geoffrey Hinton... if you understand the technology like they do (and me too), it's not difficult to think this is Industrial Revolution 4.0 and it will be bigger (and longer) than the internet... it will develop in fits and starts like the internet, but at some point the S-curve takes off. 

 

I promise you that you won't waste time and effort if you get up to speed on this stuff... but looking from the outside at a black box, it's easy to conclude it's all hype. Just that this one is different.

 

 

[Image: Apple Newton] AI before 2017 (the Apple Newton failed because it did not have network connectivity to do useful things)

[Image: first iPhone] AI in 2017 (the first iPhone: network connectivity, limited apps). BREAKTHROUGH EVENT ==> IMPLICATIONS

[Image from t-mobile.com: iPhone 13] AI of the future (the iPhone 13, with a plethora of apps and hardware features)

 

 

Edited by tnp20

CAUTION: Anyone who wants to get up to speed on the AI revolution must focus on the AI breakthrough of 2017 and onwards.

 

Machine learning - the mathematical/statistical/decision tree/linear regression kind - has been used successfully for a while by data scientists - THIS IS NOT IT !!!!

 

The old neural network AI - RNNs, CNNs, LSTMs, etc. - THIS IS NOT IT !!!!

 

The new AI is built on top of RNN, CNN, and LSTM ideas - it's got new features like the attention mechanism, encoder/decoder architecture, etc. - THIS IS IT !!!!

 

This new AI is referred to by various acronyms. At its heart it's a TRANSFORMER MODEL.  These TRANSFORMER MODELS are also called FOUNDATION MODELS.

LLMs (Large Language Models) are the text version of these FOUNDATION MODELS... other TRANSFORMER MODELS work with pictures, videos, voice, sound, numbers, telemetry - any kind of data, really. This is what you want to learn about !!!!
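As a hedged starting point, here is a minimal sketch of loading a small pretrained transformer/foundation model with the Hugging Face transformers library (assumes `pip install transformers torch`; GPT-2 is just a small public example model).

```python
# Minimal sketch: generating text with a small pretrained transformer via the
# Hugging Face `transformers` library (assumes `pip install transformers torch`).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")   # small public LLM
print(generator("The AI revolution started in 2017 because",
                max_new_tokens=30)[0]["generated_text"])
```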

 

I am not saying machine learning is not important, but that's not the new breakthrough - machine learning is awesome and you should understand it in the broader picture, but it's not the breakthrough that will fundamentally transform business and society the way this other one will. A lot of the early stuff has wow factor, like ChatGPT and DALL-E 2 (try it, it's awesome)... and like the internet - the internet took off when the porn industry got hold of it - a lot of the early stuff will seem non-serious, but that's because people haven't thought deeply about how to use this new tool to deeply transform their business. This stuff does not belong in the IT department (yes, it's bizarre when I say this); it belongs on the front lines of business, where the operations people need to figure out how to use it and which business "people, process, technology" need to be completely transformed.

 

So when someone says they use "AI", one must absolutely clarify which one they mean - this is where some of the hype comes in, as a CEO's IT department tells him or her "yeah, we use AI... been using it for a decade"... and the CEO goes on a conference call and says "AI" 10 times.  Now the picture is blurred a bit, as you could combine this new AI with machine learning AI, so they could be using both together in some cases... but Industrial Revolution 4.0 is about this new AI (the 2017 version).

 

This 2017 version of neural network AI is called Generative AI.

Edited by tnp20

Yeah, I understood that from reading about ChatGPT: basically they take existing data on the internet and apply a transformer model to it in order to get the output.

 

It has limitations, as the model is entirely dependent on the input data, which can be false and can also be flooded with AI-generated data. In time this can make models wildly unreliable.

 

I will look more into generative AI, see what answers people are coming up with.

Edited by Paarslaars

17 hours ago, cameronfen said:

I wrote this on Seeking Alpha: https://seekingalpha.com/article/4608680-a-data-scientist-explains-large-language-models-and-implications-for-businesses. Let me know if you are looking for something else.  

 

Awesome primer on this. Thanks for sharing. I followed you on SA as well.

 

 

Somewhat unrelated: I still struggle with how to use both Bard and ChatGPT and get useful results. For one, they are both absolutely terrible at math. I tried to do some history research using Bard/ChatGPT on things I know a bit about that can't be found all that readily in history books. In my case, it was about the liberation of the Mosel valley in March 1945 by US forces in WW2.

 

I don't know where both get their info from, but pretty much every detail is incorrect - some partly and much of it completely. Bard and ChatGPT contradict each other, and depending on how you ask questions, they even contradict themselves.

 

The answers are not totally incorrect, so I think these chatbots are making stuff up based on similar events that were playing out at about the same time and/or extrapolation. So I think students who use this to write their papers as a shortcut for real work are in for a rude awakening.

 

(FWIW, the liberation in question occurred from March 14 to March 18, 1945).



2 hours ago, Spekulatius said:

Awesome primer on this. Thanks for sharing. I followed you on SA as well.

 

 

Somewhat unrelated: I still struggle with how to use both Bard and ChatGPT and get useful results. For one, they are both absolutely terrible at math. I tried to do some history research using Bard/ChatGPT on things I know a bit about that can't be found all that readily in history books. In my case, it was about the liberation of the Mosel valley in March 1945 by US forces in WW2.

 

I don't know where both get their info from, but pretty much every detail is incorrect - some partly and much of it completely. Bard and ChatGPT contradict each other, and depending on how you ask questions, they even contradict themselves.

 

The answers are not totally incorrect, so I think these chatbots are making stuff up based on similar events that were playing out at about the same time and/or extrapolation. So I think students who use this to write their papers as a shortcut for real work are in for a rude awakening.

 

(FWIW, the liberation in question occurred from March 14 to March 18, 1945).


One thing you can use is plugins. For arithmetic, use the calculator plugin that comes with GPT-4.  Basically, the way this works is that GPT produces an expression and then runs the calculator to verify the result is correct. This is also how Code Interpreter works.  
 

History is more difficult because there is no black-box program that can evaluate whether a given historical output is correct.  You can try using a knowledge graph plugin, if one has been developed for GPT.  A knowledge graph basically scrapes Wikipedia etc. to get a graph of relationships (i.e., this caused that, or this leader started that event, etc.).
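As a toy illustration of that verify-with-a-tool loop, here is a sketch where the model's proposed arithmetic is checked by a deterministic calculator. The function names are hypothetical stand-ins, not the actual plugin API.

```python
# Hypothetical sketch of the tool-use loop described above: the model proposes
# an arithmetic expression, a deterministic "calculator" evaluates it, and the
# verified result is returned. Names are invented for illustration.
import ast
import operator

OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def calculator(expr: str) -> float:
    """Safely evaluate a purely arithmetic expression."""
    def ev(node):
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.Constant):
            return node.value
        raise ValueError("only arithmetic allowed")
    return ev(ast.parse(expr, mode="eval").body)

def answer_with_tool(model_expression: str) -> str:
    # In a real plugin the model emits the expression; here it's a stand-in.
    result = calculator(model_expression)
    return f"{model_expression} = {result}"

print(answer_with_tool("17 * 23 + 5"))   # 17 * 23 + 5 = 396
```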

 

You can also try chain-of-thought or more advanced multi-step prompting.  
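For what it's worth, here is a minimal illustration of what a chain-of-thought prompt looks like; the wording is invented and not tied to any particular product.

```python
# Invented example of a chain-of-thought prompt: the model is asked to show
# its intermediate steps before committing to a final answer.
prompt = (
    "Q: A train leaves at 9:40 and arrives at 13:05. How long is the trip?\n"
    "Let's think step by step before giving the final answer."
)
# A well-behaved model then reasons: 9:40 -> 12:40 is 3 hours,
# 12:40 -> 13:05 is 25 minutes, so the answer is 3 hours 25 minutes.
```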


(i) Right now the hallucination problem is big. It just makes facts up. But it will resolve over time... early teething troubles, in my view... better quality data, better prompting methods, and better questions. You may find it gives different answers if you ask questions differently, so this is a bit of "user" training as well, on how to ask the questions.

 

(ii) Maths and logic - Bard is better than ChatGPT, but these are general-purpose models. Google is working on a switch model, where it has multiple models within one large model and a router sends each query to the model best able to answer that topic (i.e., a maths model). Another approach is the plugin approach mentioned above: have plugins for different things.
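A toy sketch of that routing idea: a softmax gate scores each expert model for a query and the best-scoring expert handles it. The experts and weights below are made up; real switch/mixture-of-experts models learn the gate.

```python
# Toy sketch of a "switch"/mixture-of-experts router: a gate scores each
# expert for a query embedding and the best-scoring expert handles it.
# Everything here (experts, weights) is invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
d = 16                                        # query embedding dimension
experts = ["maths", "code", "general"]
gate_W = rng.normal(size=(len(experts), d))   # learned in a real system

def route(query_embedding: np.ndarray) -> str:
    scores = gate_W @ query_embedding
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()                      # softmax over experts
    return experts[int(np.argmax(probs))]

print(route(rng.normal(size=d)))              # picks one of the experts
```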

 

Google has an AI model called Minerva - look it up.

 

Early days. Things are not perfect and will never be perfect, because humans aren't perfect and these neurons model humans to some extent. These things don't store hard-coded data and facts... it's more vector-space driven, and what matters is what lies in proximity within those vector spaces... akin to approximation... so, like humans, the answer accuracy will always be probabilistic. If you want certainty, you hard-code it - but then you have to deal with human-introduced bugs...
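A small sketch of what "proximity in vector space" means: words as vectors, similarity as the cosine of the angle between them. The embeddings here are hand-made toys; real models learn high-dimensional versions.

```python
# Toy illustration of vector-space proximity: related concepts end up close
# together, unrelated ones far apart. Embeddings below are invented.
import numpy as np

emb = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.7, 0.2]),
    "apple": np.array([0.1, 0.2, 0.9]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(emb["king"], emb["queen"]))  # high: near in vector space
print(cosine(emb["king"], emb["apple"]))  # low: far apart
```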

 

 

 

 

 


8 hours ago, cameronfen said:

One thing you can use is plugins. For arithmetic, use the calculator plugin that comes with GPT-4.  Basically, the way this works is that GPT produces an expression and then runs the calculator to verify the result is correct. This is also how Code Interpreter works.  
 

History is more difficult because there is no black-box program that can evaluate whether a given historical output is correct.  You can try using a knowledge graph plugin, if one has been developed for GPT.  A knowledge graph basically scrapes Wikipedia etc. to get a graph of relationships (i.e., this caused that, or this leader started that event, etc.).

 

You can also try chain-of-thought or more advanced multi-step prompting.  

Yes, I heard about a Wolfram browser plugin that can be used with ChatGPT, which sounds cool. Unfortunately, my work does not allow browser plugin downloads, and they have also blocked ChatGPT now. I think most of the value will be created with additional applications running on top of ChatGPT and other AI chatbots that allow for a more tailored experience.

 

For the history issue, I was sort of responding to someone I follow on Twitter who claimed that his son wrote a history essay using ChatGPT, and I just wonder if he ever checked the work, because when I tried it, it produced junk and half-truth output that looked good at first glance, and then when you double-check it, it is almost all wrong. That sort of output is, imo, the most dangerous kind. It's like an all-knowing idiot who seems to know everything, but upon closer look it's all gibberish and incorrect.


I can see a coding assistant having quite a bit of value. I don't code, but I do a lot of work in Excel, and I think an AI assistant done right could add enormous value. I think that's what MSFT is after with their $30 add-on subscription offering.

 

 

Edited by Spekulatius

Question for those with deep knowledge in this area.

 

Does the technology have the ability to detect and scrub corrupt data/connections from its learning? If I intentionally introduce a small cloud of disguised garbage data to generate erroneous connections, is the technology able to detect and erase it? And if I repeat this over an extended time frame, is there a practical limit to my stealth corruption?

 

Example: Assume that I am pretty sure a competitor's bots are continuously sniffing cyberspace for real-time changes in a range of securities with deep markets. I buy a number of puts on said securities, flood cyberspace with a large cloud of garbage negative data, and trigger a flash crash that I close out against. However, we've all paid our lobbyists; the exchange rescinds the trades, and it is as though nothing ever happened. But did the technology remove the erroneous learning from that corrupt data?

 

If not, I have an opportunity!

 

SD


^ @SharperDingaan So when these models are trained, you have code that attempts to filter "bad" data. Keep in mind that this attack is really difficult, as there are 100 billion to 1 trillion tokens in the training set after filtering.  Additionally, once the model is trained, you use reinforcement learning with human feedback. This also helps to scrub bad "habits" the model has picked up.  Basically, you have people rate model outputs, and you modify the model to produce outputs that humans rate as desirable (however desirable is defined).  So, long story short, there are multiple stages of training, making this adversarial attack on training data very hard to pull off.  
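A heavily simplified sketch of that rate-outputs-then-adjust idea follows. Real RLHF fits a reward model and optimizes the policy with methods like PPO; this toy just reweights canned outputs, and all data and ratings are invented.

```python
# Heavily simplified sketch of "rate outputs, then nudge the model toward the
# highly rated ones". Real RLHF trains a reward model and uses PPO; this toy
# just reweights a fixed set of outputs. All ratings below are invented.
import random

outputs = ["helpful answer", "evasive answer", "toxic answer"]
human_ratings = {"helpful answer": 1.0, "evasive answer": 0.0, "toxic answer": -1.0}

weights = {o: 1.0 for o in outputs}          # start from a uniform "policy"
for o in outputs:
    weights[o] *= 2.0 ** human_ratings[o]    # boost good, suppress bad

total = sum(weights.values())
policy = {o: w / total for o, w in weights.items()}
print(policy)                                # "helpful answer" now dominates

print(random.choices(outputs, weights=[policy[o] for o in outputs]))
```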


5 hours ago, SharperDingaan said:

Does the technology have the ability to detect and scrub corrupt data/connections from its learning? If I intentionally introduce a small cloud of disguised garbage data to generate erroneous connections, is the technology able to detect and erase it? And if I repeat this over an extended time frame, is there a practical limit to my stealth corruption?


There is a whole complex debate going on about how to solve this, even including injecting fake data to balance out profiling bias.

 

Anthropic, backed by Google I think, has the purest model for cleansing the bad stuff.

 

Folks are too focused on the knowledge embedded in the model. The real power lies not in the knowledge but in the learning sponge that was created using the vast amount of data. The sponge can learn anything new, no matter what the data type. This means you can feed it your pure data, whatever that may be, and train it to avoid bad results being spit out. If you feed it your proprietary data that has nothing to do with internet garbage, you can solve mission-specific problems (see the sketch below). Who cares what the capital of Botswana is when you are training it to do voice recognition?

 

This learning sponge was an indirect result of it being trained on vast quantities of unstructured data... eventually it figured out how to make sense of it all and encode it appropriately internally, which made it ready to learn almost anything.
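A hedged sketch of what feeding proprietary data into that sponge looks like with off-the-shelf tooling: fine-tuning a small pretrained transformer with Hugging Face (assumes `pip install transformers datasets torch`; the example texts and labels are invented).

```python
# Hedged sketch: fine-tuning a small pretrained transformer on proprietary
# text. The two-row dataset and its labels are invented stand-ins.
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)
from datasets import Dataset

tok = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)

data = Dataset.from_dict({
    "text": ["ship the order", "cancel the order"],   # stand-in proprietary data
    "label": [1, 0],
}).map(lambda r: tok(r["text"], truncation=True, padding="max_length",
                     max_length=32),
       batched=True, remove_columns=["text"])

Trainer(model=model,
        args=TrainingArguments(output_dir="out", num_train_epochs=1,
                               per_device_train_batch_size=2),
        train_dataset=data).train()
```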

Edited by tnp20

Interesting in that it is the same as in the mining world. Toxins (cyanide used in gold leaching) never go away; they are just managed via 'dilution'. Sample a lake near the discharge point and you die (too little dilution); sample the same lake far from the discharge point and the toxin is barely detectable. The toxin is not evenly distributed, and you can kill a population via a slow accumulation of toxicity, the same as yeast in a sugar solution. Corrupted data remains where it is, and you can slowly keep adding to it without setting things off, against the day you would like to use it.

 

Interesting in that there is a human algorithm override. The AI learning simply supports preconceived notions; if it aligns, it stays... if not, it is overridden. Best case, the human 'editor' edits lightly and allows the AI learning to 'speak in its own voice'. Worst case, the human editor is bribed to override the AI learning... to whatever you want it to be, whilst the AI takes the blame. Lots of possibilities.

 

Interesting in what happens if the human algorithm override is removed. Example: good data, and AI learning, say that humans are similar to a virus, and that this virus causes destruction of the environment. The learned solution is that viruses can be managed via a 'vaccine'.  The AI learns how to produce a vaccine... that neutralises humans. Oops.

 

A material takeaway from the development of the nuclear bomb is that you cannot just delegate away the responsibility for it (Vishnu's 'Now I am become death'). The 'I just make bombs, I am not responsible for the decades-long radiation damage after they have been set off' thing doesn't work. It would appear to be a very similar thing with AI, hence the debate.

 

We live in interesting times.

 

SD

 

  

 

 

Edited by SharperDingaan

There are clear (and not so clear) dangers with this new AI. It's powerful, and it's also very scary in terms of some of the negative possibilities. Hence the rush to legislate it so early (since when has an industry been regulated this early in the cycle????). Not just legislation, but agreements among national governments... sort of like a nuclear arms control treaty, or the Geneva Convention on war crimes or the use of chemical weapons.

 

My post was in response to "this is hype". This stuff is powerful. In the wrong hands, it's deadly serious.  The genie is out of the bottle... no way to put it back... and the revolution is coming... and we must ensure it stays within the safe channels.

 

This is a very accurate piece... though not complete... there are many dangers we have not even thought about...

 

 

 

 

Edited by tnp20

21 hours ago, SharperDingaan said:

The AI learning simply supports preconceived notions; if it aligns, it stays... if not, it is overridden. Best case, the human 'editor' edits lightly and allows the AI learning to 'speak in its own voice'. Worst case, the human editor is bribed to override the AI learning... to whatever you want it to be, whilst the AI takes the blame. Lots of possibilities.

 

This is a problem with ChatGPT/Bard-like AI, where it has gathered data from uncontrolled data sources (the internet and world corpus). Where you have control over the data you use to train it, you have much greater control over the output, both through further fine-tuning and through appropriate "prompting", which is sort of like guiding it to give better answers. This is an issue with any data source - even the best data could have underlying hidden quality issues - but then the problem would arise in any model, not just AI... the old saying: garbage in, garbage out.

 

In this case, you would start off with controlled, high-quality data to train the model. If you discover weaknesses in the data or the model, you adjust and retrain. You also need fake data to avoid biases based on accurate historic data. For example, there haven't been many women composers in the history books. This is accurate data. But if the AI somehow makes a biased decision off of that and identifies only male candidates for a composer job, then this becomes a problem, so clearly labelled fake data is injected to cure the bias problem. The model also knows it's fake data, so other kinds of decisions are adjusted for the fake data to avoid giving wrong answers (a toy sketch follows below)... it's a complex subject.
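A toy sketch of that labelled-fake-data idea: rebalancing a training set with flagged synthetic rows so downstream code can still tell real from fake. All rows below are invented.

```python
# Toy sketch of curing a historical-bias problem by injecting clearly labeled
# synthetic rows: the model trains on a balanced picture, while evaluation
# can still filter on the `synthetic` flag. All data here is invented.
real_data = [
    {"candidate": "A", "gender": "m", "hired": 1, "synthetic": False},
    {"candidate": "B", "gender": "m", "hired": 1, "synthetic": False},
    {"candidate": "C", "gender": "f", "hired": 0, "synthetic": False},
]

# Historical data underrepresents hired women, so add flagged fake examples.
synthetic_rows = [
    {"candidate": "S1", "gender": "f", "hired": 1, "synthetic": True},
    {"candidate": "S2", "gender": "f", "hired": 1, "synthetic": True},
]

train_set = real_data + synthetic_rows
eval_set = [row for row in train_set if not row["synthetic"]]
print(len(train_set), len(eval_set))   # 5 3
```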

 

You can adjust models for complex discrepancies that crop up in the results... as you would with other systems that are non-AI.

 

 

Edited by tnp20

Great CHT video.

 

Notable is that, at the practical level, TechFin is just one more face of AI: competing across regulated business silos via the use of technology that is not regulated. An Airbnb using third-party AI to match buyer and seller directly via an unregulated digital marketplace, arguing that it is not in the regulated short-term rental accommodation business but in an entirely different industry: "We compete upstream of the stream of buyers seeking regulated short-term rental accommodation" - and all quite true.

 

The obvious solution is a version of nuclear deterrence via mutually assured destruction: global regulation/restriction on how AI can be used, subject to external control. No matter how good or smart you are, there is no value in manipulation if your body ends up swinging below Blackfriars Bridge as a message to others, as Roberto Calvi discovered.

 

https://www.forbes.com/sites/sofialottopersio/2019/08/23/when-the-apparent-suicide-of-gods-banker-roberto-calvi-was-ruled-a-murder/?sh=4d37be431cd4

 

SD

 

Edited by SharperDingaan
