AI - Artificial Intelligence


Jurgis


We got an email from the CEO of the hospital system I work for saying that they are implementing an AI program to scan all charts and report if anyone is accessing medical records they shouldn't. Right now they use a random audit system, but supposedly the AI will monitor every person on the system in real time.

I wouldn't read too much into the announcement.

For AI (however you define it) to work, connectivity and a certain level of transparency are required.

This evolution makes the data processor more accountable for confidentiality, access, and privacy standards.

 


  • 1 month later...

https://www.nature.com/articles/d41586-019-02156-9 - AI Poker Bot Is First to Beat Professionals at Multiplayer Game

 

 

I fold for AI.  8)

 

I met the PI of the Carnegie Mellon team at a conference while he was discussing Libratus.

 

There was a joint talk on Libratus and DeepStack at AAAI 2017: http://www.aaai.org/Conferences/AAAI/2017/aaai17speakers.php (search for Poker panel)


  • 2 months later...

I haven't read it, but this book looks like it might be good on the subject of AI, and it's for sale today at AMZN:

 

https://www.amazon.com/Life-3-0-Being-Artificial-Intelligence-ebook/dp/B06WGNPM7V?_bbid=12998103&tag=ebookdealspagesite-20

 

Thanks. Bought it.

 

I did too, started reading it this morning. I have seen Max Tegmark on shows about cosmology, math, etc.



 

Yeah, I've heard some of his stuff.

 

I also get his "Future of Life Institute" emails that discuss AI futures, ethics, etc. (They also touch on topics like autonomous weapons and nuclear weapons that belong more to the Politics section.)


  • 2 weeks later...

 

Interesting fact: I think most Boston Dynamics robots are trained with a model-based system (i.e., "this is how the world works, therefore do this") rather than with deep learning. See here: https://www.alexirpan.com/2018/02/14/rl-hard.html
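The distinction can be shown with a toy sketch (a hypothetical illustration, not how Boston Dynamics actually works): a model-based controller exploits known dynamics directly, while model-free RL has to learn values from reward samples alone.

```python
import random

random.seed(0)  # deterministic toy run

# Model-based: the dynamics x' = x + a are known, so the controller
# simply inverts the model -- no learning needed.
def model_based_action(x):
    return -x  # one step straight to the target at 0

# Model-free: tabular Q-learning recovers a similar policy from
# reward samples alone, never seeing the dynamics equation.
def q_learning(episodes=2000, alpha=0.5, gamma=0.9, actions=(-1, 0, 1)):
    q = {}  # (state, action) -> estimated value
    for _ in range(episodes):
        x = random.randint(-3, 3)
        for _ in range(10):
            a = random.choice(actions)
            nxt = max(-3, min(3, x + a))   # clipped dynamics
            r = -abs(nxt)                  # reward: closeness to 0
            best = max(q.get((nxt, b), 0.0) for b in actions)
            old = q.get((x, a), 0.0)
            q[(x, a)] = old + alpha * (r + gamma * best - old)
            x = nxt
    return q

q = q_learning()
# The greedy learned action at x=2 points toward 0, the same direction
# the model-based controller computes in one shot.
greedy = max((-1, 0, 1), key=lambda a: q.get((2, a), 0.0))
```

The linked post's point is roughly this: the model-free version needs thousands of trial interactions to recover a policy that the model-based version gets for free from the known equations.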

 


  • 2 weeks later...
  • 1 month later...

AI Superpowers: China, Silicon Valley, and the New World Order, by Kai-Fu Lee

 

$2.99 on sale today at Amazon:

 

https://smile.amazon.com/AI-Superpowers-China-Silicon-Valley/dp/132854639X/ref=sr_1_2?crid=EN8UTCBQOB4I&keywords=ai+superpowers&qid=1577393419&smid=A1KUURLZXZKET0&sprefix=ai+super%2Caps%2C157&sr=8-2

 

I browsed, but I did not buy. Seems like there's not much new for a decently informed person.


  • 1 month later...

For a change something investing related:

 

a16z thinks that AI startups have worse economics than SaaS startups:

https://a16z.com/2020/02/16/the-new-business-of-ai-and-how-its-different-from-traditional-software/

 

There's also a TechCrunch article that's a follow-up: "Do AI startups have worse economics than SaaS shops?"

However, it's behind their premium paywall. If anyone can get the full article, shoot me a message. I'm interested.

 

(OT: I searched for the article using the title. OMFG, there's like a huge industry of sites that just copy shit from TC and post it on their sites. I did not realize this was a thing.)


Yeah, it seems pretty clear to me that AI companies have worse economics than SaaS, but SaaS has better economics than any other business. That said, I think there's a chance that an AI company evolves that has a stronger moat and better economics than any SaaS business, but I'd expect the average SaaS business to have way better economics than the average AI business.


  • 6 months later...

Reviving this thread. This is an unfortunate use of algorithms/machine learning/AI.

 

When Algorithms Give Real Students Imaginary Grades

https://www.nytimes.com/2020/09/08/opinion/international-baccalaureate-algorithm-grades.html

Thanks for sharing.

 

Hard to tell if this was an issue of taking an algo out of the lab and running it against live data vs. truly shoddy data science work. This is a truly hard problem with so many variables and technological complexities (e.g., some of the tests are free form), so I'm leaning toward the latter. I'm skeptical that they had the time to really do a proper evaluation. Without IB releasing info, I doubt anyone will successfully reverse engineer their algo. But it would be nice if all IB participants banded together and provided their scores for some bias analysis.
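That pooled bias check could be as simple as comparing the gap between predicted and algorithm-assigned grades by school group. A toy sketch with made-up numbers (the real IB inputs and school categories are not public, so the field names here are hypothetical):

```python
# Toy bias check: compare algorithm-assigned grades against
# teacher-predicted grades, grouped by school type. All records are
# invented for illustration -- the real IB data is not public.
from statistics import mean

records = [
    # (school_type, teacher_predicted, algorithm_assigned)
    ("large", 6, 6), ("large", 5, 5), ("large", 7, 6),
    ("small", 6, 4), ("small", 5, 4), ("small", 7, 5),
]

def mean_gap(rows, school_type):
    # Negative gap = algorithm graded below the teacher's prediction.
    gaps = [alg - pred for s, pred, alg in rows if s == school_type]
    return mean(gaps)

for kind in ("large", "small"):
    print(kind, mean_gap(records, kind))
```

A consistently larger negative gap for one group (here, small schools) would be exactly the kind of signal worth investigating.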



 

I'm not gonna defend the results.

 

However, what in your (and the author's) opinion should have been done? The simple answer is not having exams at all. Would that have worked better for poor kids? Colleges would have been forced to use the same (or similar) info for their admissions that the computer used for the exam: "an array of student information, including teacher-estimated grades and past performance by students in each school".

 

The author also offers a conclusion: "Algorithms should not be used to assign student grades."

This is bullcrap. First of all, they already are, with pretty much zero opposition: https://www.ets.org/gre/revised_general/scores/how/ (yeah, a human is in the loop, but still).

Second, the answer is to improve algorithms rather than discard them.

 

The author is also wrong on a number of other counts: they disagree that "Computers make neutral decisions". Yeah, computers can have bias, but human graders definitely have bias too, and are susceptible to fatigue, misunderstandings, and even fraud. I'd guess that's one of the reasons ETS uses an algorithmic scorer in addition to a human one.

 

The author also tries to score points with claims like "Algorithms can't monitor or detect hate speech, ... they can't predict crime, they can't determine which job applicants are more suited than others, they can't do effective facial recognition" - except that algorithms can do all of these, they do all of these, and they are getting better at all of these. Yeah, you can prohibit using AI for facial recognition by law, but that does not mean algorithms aren't, or won't be, better at recognizing people than people are.

 

Anyway, it sucks to be caught up in this, but the way to go is to improve the algorithms rather than giving up and going back to the warm and fuzzy human-graded default.

