
Tech that replaces jobs


SmallCap


Makes me think of mobile devices run amok.  Suddenly my lawn mower takes out the neighbour's thousand-dollar garden.  I can see myself chasing it around the yard trying to stop it.  Then the forklift in my warehouse suddenly decides to take out a whole row of stacked iPads and run over them until they are crushed into dust.  Several self-driving trucks carrying hundreds of tons of steel careen into an office building downtown, killing hundreds.

 

Because human-driven forklifts never kill anyone?

 

  "OSHA statistics indicate that there are roughly 85 forklift fatalities and 34,900 serious injuries each year"

 

 

Or human-driven vehicles never kill anyone?

 

  "There were 1.25 million road traffic deaths globally in 2013"

 

AI won't be perfect. It doesn't have to be perfect, just better.
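The "just better" standard is quantitative, so here is a back-of-envelope sketch of what it would mean. It uses the road-death figure quoted above; the AI improvement factor is a purely made-up assumption for illustration, not a real estimate:

```python
# Back-of-envelope: "AI doesn't have to be perfect, just better."
# The human baseline is the WHO figure quoted above (1.25M road deaths, 2013).
# The 40% improvement factor is a hypothetical assumption, chosen only
# to illustrate the arithmetic.

human_road_deaths_per_year = 1_250_000   # global road deaths, 2013

assumed_ai_improvement = 0.40            # hypothetical: 40% fewer fatal mistakes

ai_road_deaths = human_road_deaths_per_year * (1 - assumed_ai_improvement)
lives_saved = human_road_deaths_per_year - ai_road_deaths

print(f"AI road deaths/year: {ai_road_deaths:,.0f}")   # 750,000
print(f"Lives saved/year:    {lives_saved:,.0f}")      # 500,000
```

Even under that made-up 40% assumption, "better but not perfect" AI would still be involved in 750,000 deaths a year while quietly preventing 500,000. Each of those 750,000 would be an "AI killed someone" headline.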

 



The problem is that humans and AI are judged by different standards. If a human kills someone by accident, it's OK-ish, since, you know, they're human, what can you do... If an AI kills someone by accident, it's clearly broken, nefarious, and should never be used again.

 

At least regulatory agencies subscribe to this fallacy less than regular people do. Otherwise we would never have seen cruise control, autopilot, or industrial robots at all.  ::)



Of course you are correct.  Humans are not rational, which is just one more reason why they shouldn't be running things.  When either a human or an AI makes a mistake, you could say it is, in a way, broken.  But when a human makes a mistake, (s)he might learn from it, get better, and never make it again; when an AI makes a mistake, it might learn from it and upload the new info to millions of other similar AIs, who all learn from it and never make it again.  AIs are superior in every way.  Humans should never touch heavy machinery.

I'm reading a fascinating book ("The Righteous Mind" by Jonathan Haidt) about how humans react emotionally to just about everything and use reason only later, in an attempt to justify their emotional reaction.  Reason is just a post hoc tool we use to explain what we already want to think.

 


 

Reminds me of a sci-fi book I read in college: "Tom Paine Maru" by L. Neil Smith.  People had implants in their heads, and whenever they heard a voice they heard it in their own language, regardless of what language it was actually spoken in.  Similarly, whenever they saw text of any type, they saw it in their own language.

 



You could be that old blind guy with the super long soul patch on Kung Fu.



I can only point the way, Grasshopper. You must walk the path yourself.

 



Just jack into the cloud, feed the data into your daemon ML, and the path will appear.  8)


Expert predictions: https://www.technologyreview.com/s/607970/experts-predict-when-artificial-intelligence-will-exceed-human-performance/?set=607997

 

Link to original paper: https://arxiv.org/pdf/1705.08807.pdf

 

Note though that the prediction for Go was 2027 and it actually happened in 2016-17.

 

So "predictions are hard, especially the ones about the future".  8)


  • 2 months later...

Potentially relevant links.

Info available in the context of a so-so US jobs report and perhaps more worker unease than some headline numbers suggest.

 

https://www.brookings.edu/blog/the-avenue/2017/08/14/where-the-robots-are/

 

https://pilotonline.com/opinion/editorial/cartoons/michael-ramirez-unemployment/image_c60edd85-908d-53f7-9a5d-4ce9e767809d.html


Airline pilot - Tech-wise, airplanes could already fly with 1 pilot instead of 2, with the autopilot doing pretty much all the work. So a 50% reduction immediately. But policy-wise it's not gonna be accepted by humans. There's an irrational fear of flying already, so there's not gonna be a change until humans somehow accept 1 pilot + auto. Maybe 7+ years.

 

The next generation of aircraft is already being developed for the mid-2020s, and the designs require 2 pilots. It will take 7+ years even after regulation changes for manufacturers to produce a single-pilot airliner. Could you fly an airliner single-pilot in an emergency? Absolutely. But the current designs won't fly (haaah!) for everyday single-pilot operations.

 

Labor unions will fight tooth and nail to prevent this (because 50% less safe), and the traveling public will be on their side. I can't see this happening for 15-20+ years, and even at that point it will be cargo carriers like FedEx/UPS first.

Fly, by your handle you may know more about this than me. But as I understand it, a plane can basically already fly itself today, including ILS landings etc. Pilots are there more as fail-safes if something bad happens, i.e. you need a brain in an emergency. Given the cost/reward situation this is worth it, since the cost of the pilots is quite small compared to the cost of the overall flight. Also, in an emergency it appears that you need two people: one to fly the plane and one to handle comms and checklists. So I don't see that one going away anytime soon.

 

 

The plane does already fly itself, except for the first/last ~500 ft of flight when the flying pilot switches the autopilot on/off, and yes, modern aircraft are equipped for auto-landing. Here's a video of an A320 auto-land in the sim:

 

 

Seems easy enough, right?

 

Well, factor in crowded airspace, having to deviate around weather in the area, and working with flight ops to come up with an alternate landing site, and things start to get a bit more difficult. Especially as all of this would, in theory, be occurring closer to the ground, where any flying mistake is magnified. Hence why, for now, one pilot focuses solely on flying and the other on handling the comms. But you can see that with some more tech upgrades (I'm thinking air traffic control primarily) to simplify the pilot's job in the first/final stage of flight, it is certainly possible.

 

But as was mentioned, the unions would/will throw an unholy fit. So more likely it would be one of the lesser-known non-unionized cargo airlines being the guinea pig. There is one thing on the horizon that could accelerate the development: a pilot shortage. Lots of the low-cost carriers and especially the regional carriers are struggling to attract pilots with their low pay structures. With a dearth of new pilots coming into the industry since the recession, the prohibitive costs involved in training, and more regulation regarding minimum flight hours to qualify for a professional license, something will eventually hit a breaking point.

