Perplexity AI


10 hours ago, mattee2264 said:

Isn't it a little worrying for Mag7 investors that a start-up founded only a few years ago can release LLMs that compare favourably to the ones put out by OpenAI/Microsoft and Google? Reinforces the point that in AI no one really has a moat at this point. Mag7's ability to stockpile chips gives them an edge and they can outspend everyone on R&D. But in the internet age it was Google who ended up with the dominant search engine not AOL/Netscape/Microsoft. 

Any company with a good idea/product will have little problem attracting funding and users aren't locked in to a specific LLM at this point so will switch if something better comes along. 

 

And so far it is LLMs that are generating all the buzz and drawing in all the users and seeming to have the most practical use as a lot of people are using LLMs to write emails, assignments, research papers, marketing copy etc. 

From what I see, there is now a trend to tap into AWS/GCP to accelerate the journey to LLMs and other analytics. Lots of orgs have proprietary data but have no clue how to get it into an LLM.

 

As far as the moat for Google/MSFT/AWS goes, I'd argue the moat has actually gotten bigger. There are only a few start-ups that are really thriving here. Anthropic is one. Perplexity is another. Many fold up shop before they get to market. The cost to train these models is north of $1M per run. There are a lot of great ideas but few companies with the funding to try them, so AWS/GCP/Azure have plenty to pick from and fund if they so desire.


  • 2 weeks later...
On 3/4/2024 at 11:10 AM, rogermunibond said:

Isn't prompt engineering like Google-fu?

It's more than that. There's user-level stuff, but then there are things like https://www.langchain.com/langchain that create components that get chained together into different flows. It's usually not just a single statement. When you message ChatGPT, for example, it doesn't just take what you wrote and hand it to the LLM. It goes through layers that try to prevent attacks and uses what are called "system"-level prompts to wrap your user-level prompt. Plus there's no memory, so it actually sends your entire conversation through the LLM every time. So there's a lot to it.
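To make that last point concrete, here's a rough sketch of what a chat wrapper does under the hood: a system-level prompt sits on top of your message, and the whole conversation gets re-sent on every turn because the model itself keeps no memory. This is just a sketch assuming the OpenAI Python client; the model name, system prompt, and messages are illustrative placeholders, not how any particular service actually does it.

```python
# Minimal sketch of "there's no memory": every turn re-sends the whole
# conversation, wrapped with a system-level prompt, to the model.
# Assumes the openai Python client (v1+); model name and prompts are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The system prompt wraps every user prompt; the app, not the model, keeps history.
history = [{"role": "system",
            "content": "You are a helpful assistant. Refuse unsafe requests."}]

def chat(user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    resp = client.chat.completions.create(
        model="gpt-4o-mini",   # placeholder model name
        messages=history,      # the ENTIRE conversation goes out on each call
    )
    reply = resp.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

print(chat("What is prompt engineering?"))
print(chat("Give me a one-line example."))  # only coherent because history was re-sent
```

Real services layer a lot more on top of this (the attack-prevention filtering mentioned above, for one), but the basic shape is the same.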


On 3/6/2024 at 1:54 AM, mattee2264 said:

Isn't it a little worrying for Mag7 investors that a start-up founded only a few years ago can release LLMs that compare favourably to the ones put out by OpenAI/Microsoft and Google? Reinforces the point that in AI no one really has a moat at this point. Mag7's ability to stockpile chips gives them an edge and they can outspend everyone on R&D. But in the internet age it was Google who ended up with the dominant search engine not AOL/Netscape/Microsoft. 

Any company with a good idea/product will have little problem attracting funding and users aren't locked in to a specific LLM at this point so will switch if something better comes along. 

 

And so far it is LLMs that are generating all the buzz and drawing in all the users and seeming to have the most practical use as a lot of people are using LLMs to write emails, assignments, research papers, marketing copy etc. 

 

Made me LOL. Sorry, but this is the nature of tech; it always has been. Gates always said he was more worried about a couple of folks in a garage cooking something up that would turn the entire industry upside down. These days a LOT of stuff is out in the open. Take a look at Hugging Face, which hosts many, many open-source community models. This is academia-driven, sort of: academics have always had to publish or perish and release their work to get it reviewed.

 

These days, though, the limits are starting to show with training. It's crazy how expensive it is to train an LLM, and how many GPUs and how much energy it takes. It's really only very large companies with huge budgets, GPUs, and data that can afford it. Plus the data annotation process is difficult and error-prone, never mind ethically challenging.

 

Still, once models are out, a lot of them are open source, so you can fine-tune for less money, or just try prompt engineering to get what you want/need.
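As a rough sketch of the cheap route (prompt engineering on an open-weights model rather than any retraining), assuming the Hugging Face transformers library; the model name here is a placeholder, so swap in whichever open model your hardware can actually run:

```python
# Rough sketch of the "prompt engineering first" route: pull an open-weights
# model from Hugging Face and shape the prompt instead of touching the weights.
# Assumes the transformers library; "gpt2" is a small placeholder model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # placeholder open model

# Prompt engineering: give the model a role, constraints, and a worked example
# instead of changing its weights.
prompt = (
    "You are a marketing copywriter.\n"
    "Write one short, upbeat sentence announcing a new coffee blend.\n"
    "Example: 'Meet Sunrise Roast: your morning, upgraded.'\n"
    "Now write one for a dark chocolate bar:"
)

out = generator(prompt, max_new_tokens=40, do_sample=True, temperature=0.8)
print(out[0]["generated_text"])
```

If prompting alone doesn't get you there, fine-tuning the same open checkpoint (typically with parameter-efficient methods) is the next step up in cost, and it's still far cheaper than training from scratch.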

 

It's still day one... early, early days, which is shocking but true.

