
Posted

I've been using it quite a bit recently. It's great for the creative process, like a rough-out in woodworking: you still gotta be a carpenter to get something quality out of it, but it sure as hell can make some strong rough-outs when you're trying to generate ideas. It's far from perfect, but often I'm not looking to google something and get an answer from one source. Most times I just want more info and more opinions, and I can go do more research to determine the truth.

 

One more tool in the quiver. 

 

Also, to ValueArb's point: I've been playing with it for simple script writing and it's very powerful for that stuff. Still gotta tweak the code, but hell, I'm very impressed with what it can spit out quickly.
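For a sense of what I mean, here's the sort of one-shot utility script it drafts in seconds (a made-up illustration, not the actual code from my sessions); it usually runs, but you still end up tweaking edge cases like encodings or empty files:

```python
# Hypothetical example of the kind of one-shot script ChatGPT drafts:
# count the lines in every .txt file in a folder and print a summary.
import os

def count_lines(folder="."):
    totals = {}
    for name in sorted(os.listdir(folder)):
        if name.endswith(".txt"):
            path = os.path.join(folder, name)
            with open(path, encoding="utf-8") as f:
                totals[name] = sum(1 for _ in f)
    return totals

if __name__ == "__main__":
    for name, lines in count_lines().items():
        print(f"{name}: {lines} lines")
```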

 

Also, it's more about how good you are at directing it to what you want it to generate.

Posted (edited)

https://www.businessinsider.com/amazon-chatgpt-openai-warns-employees-not-share-confidential-information-microsoft-2023-1

 

Last month, an internal Slack channel at Amazon bustled with employee questions about ChatGPT, the artificial intelligence tool that's taken the tech world by storm since its late-November release. Some asked whether Amazon had official guidance on using ChatGPT on work devices. Others wondered if they were even allowed to use the AI tool for work. One person urged the Amazon Web Services cloud unit to publish its position on "acceptable usage of generative AI tools," like ChatGPT.

Soon, an Amazon corporate lawyer chimed in. She warned employees not to provide ChatGPT with "any Amazon confidential information (including Amazon code you are working on)," according to a screenshot of the message seen by Insider. The attorney, a senior corporate counsel at Amazon, suggested employees follow the company's existing conflict of interest and confidentiality policies because there have been "instances" of ChatGPT responses looking similar to internal Amazon data. "This is important because your inputs may be used as training data for a further iteration of ChatGPT, and we wouldn't want its output to include or resemble our confidential information (and I've already seen instances where its output closely matches existing material)," the lawyer wrote.

The exchange reflects one of the many new ethical issues arising as a result of the sudden emergence of ChatGPT, the conversational AI tool that can respond to prompts with markedly articulate and intelligent answers. Its rapid proliferation has the potential to upend a number of industries, across media, academics, and healthcare, precipitating a frenzied effort to grapple with the chatbot's use-cases and the consequences.

The question of how confidential company information is shared with ChatGPT and what OpenAI, the creator of the AI tool, does with it could become a thorny issue going forward. It's particularly important for Amazon as its main competitor Microsoft has invested heavily in OpenAI, including a fresh round of funding this week that reportedly totals $10 billion. "OpenAI is far from transparent about how they use the data, but if it's being folded into training data, I would expect corporations to wonder: After a few months of widespread use of ChatGPT, will it become possible to extract private corporate information with cleverly crafted prompts?" said Emily Bender, who teaches computational linguistics at University of Washington.

 

"I'm both scared and excited to see what impact this will have on the way that we conduct coding interviews," this staffer wrote on Slack. Overall, Amazon employees in the Slack channel were excited about the potential of ChatGPT, and wondered if Amazon was working on a competing product. The corporate lawyer who warned employees about using ChatGPT said Amazon was broadly developing "similar technology," citing the voice-assistant Alexa and the code recommendation service CodeWhisperer.  One AWS employee wrote that the Enterprise Support team recently started a small working group internally to "understand the impact of advanced chat AI on our business," according to the Slack messages. The study revealed that ChatGPT "does a very good job" at answering AWS support questions, including difficult ones like troubleshooting Aurora database problems. It's also "great" at creating training material for AWS Certified Cloud Architect exams and "very good" at coming up with a customer's company goals, the employee Slack messages stated.

 

The increased use of ChatGPT at work raises serious questions about how OpenAI plans to use the material shared with the AI tool, according to Bender from the University of Washington. OpenAI's terms of service require users to agree that the company can use all input and output generated by the users and ChatGPT. It also says it removes all personally identifiable information (PII) from the data it uses. Bender said it's hard to see how OpenAI is "thoroughly" identifying and removing personal information, given ChatGPT's rapidly growing scale -- it crossed 1 million users within a week of launching. More importantly, intellectual property of corporations is likely not part of what is defined under PII, Bender said.

For Amazon employees, data privacy seems to be the least of their concerns. They said using the chatbot at work has led to "10x in productivity," and many expressed a desire to join internal teams developing similar services.

 

Edited by UK
Posted

https://www.bloomberg.com/news/articles/2023-01-26/microsoft-openai-investment-will-help-keep-chatgpt-online?srnd=premium-europe

 

“There’s somewhat of a proxy war going on between the big cloud companies,” says Matt McIlwain, managing director at Seattle’s Madrona Venture Group LLC, which invests in AI startups. “They are really the only ones that can afford to build the really big ones with gazillions of parameters.” After an extended period of technological innovation during which a handful of companies consolidated their dominance of the internet, some people see AI developing in a way that will only strengthen their grip.

Posted
On 1/25/2023 at 1:24 AM, UK said:

https://www.businessinsider.com/amazon-chatgpt-openai-warns-employees-not-share-confidential-information-microsoft-2023-1

 


I demoed using ChatGPT to write code to our US mobile team at my company (a late-stage unicorn), and my manager's concern mirrored Amazon's: we don't want our proprietary code to get scooped up in its knowledge base, so for now he told us not to use it.

  • 4 weeks later...
Posted (edited)

Thought this was a really solid discussion with Jim Keller. He does about 90% of the talking, although he can be kind of annoying when he talks over others.
 

Ignore the title, and I’d also recommend ignoring the Pageau guy as he doesn’t really add anything relevant to the conversation.  
 

But Jim goes into great depth on AI: how it's trained, issues they have, what the timeline looks like, how it will coexist with humanity, etc. It has made me rethink some of my positions on AI. He has some really solid comparisons to technology booms in the past.

 

https://podcasts.apple.com/us/podcast/the-jordan-b-peterson-podcast/id1184022695?i=1000587404781

 

Edit: Also did not know Jim was Jordan Peterson's brother-in-law.

Edited by Castanza
Posted

AI is a gold rush now, but everything will take much longer than we think. Think self-driving-car longer; that is also partly an AI problem, but partly a sensor-system problem too.
 

I do think an AI tool, a Paperclip 2.0, that helps me with Excel and can write/improve/structure a Word document or PowerPoint presentation will be quite useful. There are probably many other things where AI can be a useful tool, programming for example. I am not sure this is all that much of a life-changing tech for the next 10 years or so.

  • 2 months later...
Posted
Quote

A man widely seen as the godfather of artificial intelligence (AI) has quit his job, warning about the growing dangers from developments in the field.

 

Geoffrey Hinton, aged 75, announced his resignation from Google in a statement to the New York Times, saying he now regretted his work.

 

https://www.bbc.com/news/world-us-canada-65452940

 

The bad news:

 

Image

 

The good news:

 

Quote

Dr Hinton also accepted that his age had played into his decision to leave the tech giant, telling the BBC: "I'm 75, so it's time to retire."

 

Posted (edited)

I think he was holding Google back. There is a story that says Google followed his advice to hold back until Microsoft-backed OpenAI came out with ChatGPT. Then I think there was an internal disagreement as to which way to proceed.

 

From what I am reading, Google is way ahead of OpenAI or Microsoft. The issue is whether they can take that lead from the lab to the marketplace, both to fend off Microsoft and to capture new markets.

 

I personally believe this space will be so big that it will keep the top players busy: MSFT, GOOG, Amazon, IBM... but over time I think a MSFT/GOOG duopoly will emerge, though not for 5-10 years. IBM, the old dinosaur, is probably the cheapest here.

Edited by tnp20
Posted
7 hours ago, formthirteen said:

 


Geoff Hinton is one of the most ethical celebrities (if he can be called that) that I know.  To protest US military funding for AI, he moved from CMU (the center of ML research) to the University of Toronto (a no-name school in ML at the time).  Ethically I respect his convictions.  

  • 6 months later...
Posted

Sounds like ChatGPT had an outage. Could be a huge problem in the future when AI stops working and the whole world goes dumb:

Articles won’t get written, news sources go down. Schools go down because students can’t write their assignments and teachers can’t grade them anyway. Office work slows to a crawl. Elections may need to be postponed if they happen to be around the corner.

 

Luckily, the government is expected to continue working unimpeded.


Posted (edited)
9 hours ago, Spekulatius said:

Articles won’t get written, news sources go down. Schools go down because students can’t write their assignments and teachers can’t grade them anyway. Office work slows to a crawl.

 

That’s pretty funny … in the future there might be backup GPT generators in the parking lot that fire up in case of a critical outage.

 

I heard (water-cooler talk) about a ChatGPT “detector” that teachers etc. can use to see if content was generated by AI. OpenAI can get paid by students to do the work and also by teachers to detect plagiarism!

 

 


Edited by crs223
Posted

https://www.ft.com/content/dd9ba2f6-f509-42f0-8e97-4271c7b84ded

 

Despite its consumer success, OpenAI seeks to make progress towards building artificial general intelligence, Altman said. Large language models (LLMs), which underpin ChatGPT, are “one of the core pieces . . . for how to build AGI, but there’ll be a lot of other pieces on top of it”.

While OpenAI has focused primarily on LLMs, its competitors have been pursuing alternative research strategies to advance AI. Altman said his team believed that language was a “great way to compress information” and therefore developing intelligence, a factor he thought that the likes of Google DeepMind had missed. “[Other companies] have a lot of smart people. But they did not do it. They did not do it even after I thought we kind of had proved it with GPT-3,” he said.

Ultimately, Altman said “the biggest missing piece” in the race to develop AGI is what is required for such systems to make fundamental leaps of understanding. “There was a long period of time where the right thing for [Isaac] Newton to do was to read more math textbooks, and talk to professors and practice problems . . . that’s what our current models do,” said Altman, using an example a colleague had previously used. But he added that Newton was never going to invent calculus by simply reading about geometry or algebra. “And neither are our models,” Altman said. “And so the question is, what is the missing idea to go generate net new . . . knowledge for humanity? I think that’s the biggest thing to go work on.”
 

Posted

Been messing around with ChatGPT to do some data aggregation and calculations based on the aggregated data. It's going to be a long while before it starts stealing people's jobs if you're dealing with important numbers or have to tie to some type of reconciliation. I finally just took the aggregation piece, which was maybe 80% correct, and did the calculations myself.
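Next time I'd probably just have it draft the grouping logic and run the numbers deterministically myself, with a reconciliation check so the pieces have to tie out. A rough sketch of what I mean (illustrative data and pandas, not my actual workflow):

```python
# Do the aggregation deterministically and reconcile it, instead of
# trusting the model's arithmetic (illustrative data only).
import pandas as pd

df = pd.DataFrame({
    "region": ["East", "East", "West", "West"],
    "sales":  [100.0, 250.0, 175.0, 75.0],
})

# Deterministic aggregation.
by_region = df.groupby("region")["sales"].sum()

# Reconciliation: grouped totals must tie back to the control total.
assert by_region.sum() == df["sales"].sum(), "aggregation does not reconcile"

print(by_region)
```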

Posted
49 minutes ago, Gamecock-YT said:

Been messing around with ChatGPT to do some data aggregation and calculations based on the aggregated data. It's going to be a long while before it starts stealing people's jobs if you're dealing with important numbers or have to tie to some type of reconciliation. I finally just took the aggregation piece, which was maybe 80% correct, and did the calculations myself.

ChatGPT sucks at math. Totally unusable unless you use it in combination with a math engine like Wolfram, but then why not just use the math engine by itself?

Posted

I wonder why they fired him. It can't have been without reason.

 

One Twitter poster showed a clip of him telling Congress he had no equity in OpenAI, then linked a tweet of him saying he had shares. I doubt this is what got him fired, but he potentially lied to Congress.

Posted (edited)
2 hours ago, Sweet said:

I wonder why they fired him. It can't have been without reason.

 

One Twitter poster showed a clip of him telling Congress he had no equity in OpenAI, then linked a tweet of him saying he had shares. I doubt this is what got him fired, but he potentially lied to Congress.

 

I've been reading about this power struggle with fascination.

 

Until we know the details, we can only guess. My bad interpretation is that it became a "techno-optimism" (Sam & Greg) vs. "socialist & AI safety" (Ilya & Helen) power struggle. Board members like Elon Musk have also been pushed out in a similar way.

Helen Toner, who is on the board, is specifically there to ensure AI benefits all of humanity:

 

Quote

This appointment advances our dedication to the safe and responsible deployment of technology as a part of our mission to ensure general-purpose AI benefits all of humanity.

 

https://openai.com/blog/helen-toner-joins

 

It looks like Microsoft and the employees have the power now, and the board will have to learn to behave according to for-profit principles and what it really means to work towards the "greater benefit of humanity".

 

Sorry for my bad take and reductionist view on this important matter.

Edited by formthirteen
Posted
6 minutes ago, Parsad said:

I think Altman's view is that this technology will be developed by someone, and the intent may or may not be beneficial to humanity...like nuclear fission.  He would rather be the U.S. and Oppenheimer than Germany and Hitler.  Cheers!

 

https://www.cnn.com/2023/10/31/tech/sam-altman-ai-risk-taker/index.html

 

Sam Harris had an interesting take on this. There is no reason that human (meat-based) intelligence is unique vs. machine intelligence. Machines can already do math better than meat-based intelligence, and they keep getting better, while meat-based intelligence has stayed the same since Cro-Magnon man. Even if you assume that machine intelligence grows only 1 or 2% better each year, it's outpacing the rise in meat-based intelligence and will eventually surpass us. It's when, not if.
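To put my own rough numbers on that compounding point (a back-of-envelope illustration, not Harris's math): even a steady 2% a year closes a fixed gap faster than it feels like it should.

```python
# Back-of-envelope for the compounding argument (illustrative only):
# machine capability growing 2% per year against a static human baseline.
growth = 1.02
level, years = 1.0, 0
while level < 2.0:   # years until relative capability doubles
    level *= growth
    years += 1
print(years)         # 36, close to the rule-of-72 estimate of 72/2 = 36
```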

 

Consider how wolves decided to align with us (not the other way around) because it helped them. We got smarter and they stayed the same, and it worked out for them for a long time. They are now dogs and we take care of them, but if some COVID-like disease spread from dogs to people and started killing human babies, we would wipe out the dogs. Not only would they be powerless to stop it, they wouldn't even see it coming.

 

What if AI had its own purpose (to get smarter, to grow, etc.) and something humans were doing got in the way: contributing to climate change, bombing each other (which destroys computing power as well as people), or simply there being too many of us, with too many natural resources devoted to maintaining us that could be diverted to computing power? If it decided we were in its way, not only could it take us out, but there would be nothing we could do to stop it; we wouldn't even see it coming. Maybe advanced societies eventually get taken out by their own technology, which is one possible answer to Fermi's paradox.
