Old 01-20-2015, 10:48 AM   #1
jeffro3000
Registered User
 
Join Date: Feb 2008
Location: Huntsville, AL
Posts: 2,812
My Ride: 2000 328i
Artificial Intelligence

Breakthrough of the century or Terminator-style destruction to humanity?

Came across this article on Medium yesterday that lays out the timeline for a lot of the recent breakthroughs in neural nets and deep learning, and explains it in an understandable way. Sci-fi movie stuff is quickly turning into legit science. Computers can learn things now. On their own.

https://medium.com/backchannel/googl...n-5207c26e4523
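
For anyone wondering what "computers learning on their own" actually means mechanically, here's a rough toy sketch (mine, not from the article, and nowhere near DeepMind's scale): a tiny neural net that teaches itself XOR from four examples by repeatedly nudging its weights with gradient descent. That's the same basic loop the big systems run, just with millions of weights and mountains of data.

Code:
# Toy sketch, not DeepMind: a 2-4-1 neural net learning XOR with plain numpy.
import numpy as np

rng = np.random.default_rng(0)

# Training data: the XOR truth table.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Random weights and zero biases for a 2 -> 4 -> 1 network.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(20000):
    # Forward pass: inputs -> hidden layer -> prediction.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: gradients of squared error via the chain rule.
    grad_out = (out - y) * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)

    # Nudge every weight a little against its gradient.
    W2 -= lr * (h.T @ grad_out); b2 -= lr * grad_out.sum(axis=0)
    W1 -= lr * (X.T @ grad_h);   b1 -= lr * grad_h.sum(axis=0)

print(out.round(3))  # should end up near [[0], [1], [1], [0]] (try another seed or more steps if it gets stuck)

Scale that loop up to huge datasets and GPU clusters and you get the kind of stuff the article is talking about.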

Elon Musk has also been vocal about his concerns with AI, and just donated $10 million to an institute aimed at keeping AI beneficial to humanity.

http://www.wired.com/2015/01/elon-musk-ai-safety/

jeffro3000 is offline   Reply With Quote
Old 01-20-2015, 10:55 AM   #2
Zell
Registered User
 
Zell's Avatar
 
Join Date: Jun 2009
Location: such united many state
Posts: 5,883
My Ride: so turbo wow
Neural Networks are the tits.
__________________
Zell is offline   Reply With Quote
Old 01-20-2015, 11:16 AM   #3
Iceman00
Banned User
 
Join Date: Jul 2008
Location: FLA
Posts: 2,894
My Ride: E90 6MT
So don't outfit them with powerful hydraulics and metal frames, and we might have a chance to beat them with CQC (close-quarters combat).
Iceman00 is offline   Reply With Quote
Old 01-20-2015, 11:25 AM   #4
SPDSKTR
Registered User
 
Join Date: Dec 2010
Location: Birmingham, AL, USA
Posts: 2,180
My Ride: E46 ZHP TS2+ Coupe
My company was a subcontractor on a project called Cyberdyne with our local power company. I think it's some kind of data center, because the majority of the building is for servers and computers.

Yep.
__________________
Quote:
Originally Posted by Brucifer325 View Post
Thread sucks so bad they moved it to the Feedback Forum.

Last edited by SPDSKTR; 01-20-2015 at 11:35 AM.
SPDSKTR is offline   Reply With Quote
Old 01-20-2015, 11:26 AM   #5
ImPulSe
Registered User
 
Join Date: Nov 2003
Location: NYC & Long Island
Posts: 755
My Ride: Bentley GT
ImPulSe is offline   Reply With Quote
Old 01-20-2015, 11:35 AM   #6
SamDoe1
Registered User
 
Join Date: Oct 2010
Location: Minnesnowta
Posts: 3,603
My Ride: 4cyl of fury
Quote:
Originally Posted by Iceman00 View Post
So don't outfit them with powerful hydraulics and metal frames, and we might have a chance to beat them with CQC (close-quarters combat).
The issue with AI is that it can learn and make decisions for itself, so denying it mechanical means of its own is largely beside the point. What if it learns to hack a UAV and use that? What if it learns how to hack missile guidance systems?

I think AI is fascinating but ultimately not a good idea. Computers are already much faster than humans at nearly everything, so why create a life form (using the term loosely) that can effectively replace us at the top of the chain?

Semi-AI is a good idea, though. Computers that can make decisions and learn within a given space would certainly help humanity get further, especially with long-term space travel.
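
Something like this is what I mean by "learn in a given space": a toy reinforcement-learning agent (tabular Q-learning; the corridor, rewards and constants are all made up for illustration) that only ever knows about five cells and works out on its own that stepping right reaches the goal.

Code:
# Toy "semi-AI": tabular Q-learning on a five-cell corridor.
# The environment, rewards and constants are invented for illustration.
import random

N_STATES = 5                     # cells 0..4, the goal is cell 4
ACTIONS = [-1, +1]               # step left or step right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.3

# Q-table: estimated future reward for each (state, action), starts at zero.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # Epsilon-greedy: mostly exploit what it knows, sometimes explore.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])

        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0

        # Standard Q-learning update.
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state

# After training, the greedy choice in every non-goal cell should be +1 (go right).
print([max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)])

It can get very good inside that box, but the box is all it knows about, which is the appeal of the semi-AI idea.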
SamDoe1 is online now   Reply With Quote
Old 01-20-2015, 11:48 AM   #7
cowmoo32
.--. . -. .. ...
 
cowmoo32's Avatar
 
Join Date: Jul 2003
Location: FL
Posts: 5,539
My Ride: Yukon
What if you keep it isolated and have a manual switch to kill the power? I fully appreciate the trepidation when dealing with something so powerful, but as long as we have a manual override I don't see it as being that dangerous. Now if it were able to migrate and clone itself across different networks, then we could have a problem. We will see this happen in our lifetimes; it should be interesting, to say the least.
__________________

flickher

Want something 3D printed? PM me


What's this about a brownie in motion?
cowmoo32 is online now   Reply With Quote
Old 01-20-2015, 11:52 AM   #8
SamDoe1
Registered User
 
Join Date: Oct 2010
Location: Minnesnowta
Posts: 3,603
My Ride: 4cyl of fury
Quote:
Originally Posted by cowmoo32 View Post
What if you keep it isolated and have a manual switch to kill the power? I fully appreciate the trepidation when dealing with something so powerful, but as long as we have a manual override I don't see it as being that dangerous. Now if it were able to migrate and clone itself across different networks, then we could have a problem. We will see this happen in our lifetimes; it should be interesting, to say the least.
Part of intelligence is the preservation of life, no? So as an intelligent "being", the computer would have the desire or motive to preserve its own life and, in theory, would take measures to prevent its own "death". That's the scary part: even though there's an off switch, who's to say the computer wouldn't take measures to mitigate that risk to itself?
SamDoe1 is online now   Reply With Quote
Old 01-20-2015, 11:55 AM   #9
Rif Raf
Registered User
 
Join Date: Aug 2008
Location: utah
Posts: 335
My Ride: 2001 330i w/ Sport
If we do create AI, will we be its god? ...Maybe that's a different topic.
__________________
The Absence of Nothing = Everything
Rif Raf is offline   Reply With Quote
Old 01-20-2015, 11:59 AM   #10
jeffro3000
Registered User
 
Join Date: Feb 2008
Location: Huntsville, AL
Posts: 2,812
My Ride: 2000 328i
A couple of interesting bits with near-term relevance:

Quote:
...he acknowledges that the advanced techniques his own group is pioneering may lead to a problem where AI gets out of human control, or at least becomes so powerful that its uses might best be constrained. (Hassabis’ DeepMind co-founder Shane Legg is even more emphatic: he considers a human extinction due to artificial intelligence the top threat in this century. And DeepMind investor Elon Musk has just dropped $10 million to study AI dangers.) That’s why, as a condition of the DeepMind purchase, Hassabis and his co-founders demanded that Google set up an outside board of advisors to monitor the progress of the company’s AI efforts. DeepMind had already decided that it would never license its technology to the military or spy agencies, and it got Google to agree to that as well.
^I wonder if the NSA will hack in and steal it anyways?

Quote:
Dean shows a head-to-head comparison between the neural [language translation] model and Google’s current system - and his deep learning newcomer one is superior in picking up nuances in diction that are key to conveying meaning. “I think it’s indicative that if we scale this up, it’s going to do pretty powerful things,” says Dean.

DeepMind is also ready for production. Hassabis says that within six months or so, its technology will find its way into Google products. His organization is broken up into divisions, and one — headed by his co-founder Mustafa Suleyman — is devoted to applied uses of the AI, working closely with Google to see what might be of use.
~6 more months and we'll see what they can pull off in real-world usage.
jeffro3000 is offline   Reply With Quote
Old 01-20-2015, 12:06 PM   #11
NOVAbimmer
Registered User
 
Join Date: Aug 2006
Location: VA
Posts: 13,073
My Ride: E60M
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

Problem solved?
__________________
NOVAbimmer is offline   Reply With Quote
Old 01-20-2015, 12:11 PM   #12
cowmoo32
.--. . -. .. ...
 
cowmoo32's Avatar
 
Join Date: Jul 2003
Location: FL
Posts: 5,539
My Ride: Yukon
Quote:
Originally Posted by SamDoe1 View Post
Part of intelligence is the preservation of life, no? So as an intelligent "being", the computer would have the desire or motive to preserve its own life and, in theory, would take measures to prevent its own "death". That's the scary part: even though there's an off switch, who's to say the computer wouldn't take measures to mitigate that risk to itself?
Yes but if it's in an isolated network there's only so much it could do. If I put you in a room with no doors or windows you can try all day but I'm still in control.
__________________

flickher

Want something 3D printed? PM me


What's this about a brownie in motion?
cowmoo32 is online now   Reply With Quote
Old 01-20-2015, 12:13 PM   #13
jeffro3000
Registered User
 
Join Date: Feb 2008
Location: Huntsville, AL
Posts: 2,812
My Ride: 2000 328i
Quote:
Originally Posted by NOVAbimmer View Post
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

Problem solved?
Yeah, but if AI is anything like humans at following rules, we're screwed lol
jeffro3000 is offline   Reply With Quote
Old 01-20-2015, 12:15 PM   #14
Iceman00
Banned User
 
Join Date: Jul 2008
Location: FLA
Posts: 2,894
My Ride: E90 6MT
I still think making them completely incompetent in CQC and not giving them access to Bruce Lee or '80s/'90s action flicks will give us the best chance of survival. Once they learn those, we are screwed.
Iceman00 is offline   Reply With Quote
Old 01-20-2015, 12:17 PM   #15
Lair
Registered User
 
Join Date: Mar 2008
Location: Liberal Paradise
Posts: 346
My Ride: e90,e90, $5k Boxster
I thought this was going to be about Audi drivers.
__________________
The Hunter S Thompson of e46f.

Lair is offline   Reply With Quote
Old 01-20-2015, 12:18 PM   #16
bagher
Registered User
 
bagher's Avatar
 
Join Date: Aug 2005
Location: Vienna, VA
Posts: 17,917
My Ride: Neocon outrage
Send a message via AIM to bagher
Quote:
Originally Posted by NOVAbimmer View Post
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

Problem solved?
Quote:
"I honestly don't find any inspiration in the three laws of robotics," said Helm. "The consensus in machine ethics is that they're an unsatisfactory basis for machine ethics." The Three Laws may be widely known, he says, but they're not really being used to guide or inform actual AI safety researchers or even machine ethicists.

"One reason is that rule-abiding systems of ethics — referred to as 'deontology' — are known to be a broken foundation for ethics. There are still a few philosophers trying to fix systems of deontology — but these are mostly the same people trying to shore up 'intelligent design' and 'divine command theory'," says Helm. "No one takes them seriously."

He summarizes the inadequacy of the Three Laws accordingly:

Inherently adversarial
Based on a known flawed ethical framework (deontology)
Rejected by researchers
Fails even in fiction
Goertzel agrees. "The point of the Three Laws was to fail in interesting ways; that's what made most of the stories involving them interesting," he says. "So the Three Laws were instructive in terms of teaching us how any attempt to legislate ethics in terms of specific rules is bound to fall apart and have various loopholes."

Goertzel doesn't believe they would work in reality, arguing that the terms involved are ambiguous and subject to interpretation — meaning that they're dependent on the mind doing the interpreting in various obvious and subtle ways.
http://io9.com/why-asimovs-three-law...-us-1553665410
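
To make the "ambiguous terms" point concrete: the priority ordering of the Laws is trivial to write down, and that's not the hard part. Here's a throwaway toy (the class, checks and names are invented for illustration, obviously not a real ethics engine):

Code:
# Throwaway toy: the priority ordering of the Laws is easy to encode;
# every predicate below is a stub, and that's exactly where it breaks down.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool        # who decides what counts as "harm"?
    harms_humanity: bool
    ordered_by_human: bool
    self_destructive: bool

def permitted(action: Action) -> bool:
    # Zeroth/First Laws: never harm humanity or a human.
    if action.harms_humanity or action.harms_human:
        return False
    # Second Law: obey human orders that don't conflict with the above.
    # Third Law: avoid self-destruction unless a (lawful) order requires it.
    if action.self_destructive and not action.ordered_by_human:
        return False
    return True

print(permitted(Action("shut yourself down", False, False, True, True)))              # True
print(permitted(Action("quietly disable the off switch", False, False, False, False)))  # also True...

"Quietly disable the off switch" sails straight through, because none of the stubbed-out checks know that counts as harm. That's basically Goertzel's loophole point in twenty lines.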
__________________

Pheasants
bagher is offline   Reply With Quote
Old 01-20-2015, 12:31 PM   #17
BoogetyBoogety
Registered User
 
Join Date: Feb 2007
Location: Dallas!
Posts: 814
My Ride: 2010 SL550
Quote:
Originally Posted by NOVAbimmer View Post
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

Problem solved?
Came to post this, leaving very happy and relieved. Such a huge fan of the man... I had the incredible privilege of meeting Dr. Asimov in New York City in the early '80s, at some art reception. I wish I could say I had a scintillating conversation with him, but I just shook his hand and said "Dr. Asimov, I am such a fan of your work" and he replied "Thank you" and moved on.

I wanted to tell him that over the years, I acquired and subscribed to every single issue of the Magazine of Fantasy & Science Fiction with his articles in it, from '58 to just before his death in '92, but I didn't get a chance to...

AI will be amazing in a few decades. I regret I won't get to see it (unless I can wire my brain into some computer jar that allows me to experience it), but you young 'uns will...
__________________
Quote:
Originally Posted by jdc336
BE QUIET STAY IN SILENT YOU ARE NOT ALLOWED TO TALK ANY MORE. YOU ARE SO MADD AND IM HAPPY . DUMB YOUR MOM
BoogetyBoogety is online now   Reply With Quote
Old 01-20-2015, 01:18 PM   #18
SamDoe1
Registered User
 
Join Date: Oct 2010
Location: Minnesnowta
Posts: 3,603
My Ride: 4cyl of fury
Quote:
Originally Posted by NOVAbimmer View Post
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

Problem solved?
Sure. This is just like saying "ban guns, no more murders". How'd that go?

Quote:
Originally Posted by cowmoo32 View Post
Yes but if it's in an isolated network there's only so much it could do. If I put you in a room with no doors or windows you can try all day but I'm still in control.
Then what's the point of a contained system like that? Seems like a waste of time and existence.
SamDoe1 is online now   Reply With Quote
Old 01-20-2015, 02:05 PM   #19
NOVAbimmer
Registered User
 
Join Date: Aug 2006
Location: VA
Posts: 13,073
My Ride: E60M
Quote:
Originally Posted by SamDoe1 View Post
Sure. This is just like saying "ban guns, no more murders". How'd that go?
To be fair, the "Three Laws" came about from Asimov's desire to write a robot story that didn't follow the Faustian "guy builds robot, robot kills guy" trope.
__________________
NOVAbimmer is offline   Reply With Quote
Old 01-20-2015, 02:14 PM   #20
SamDoe1
Registered User
 
Join Date: Oct 2010
Location: Minnesnowta
Posts: 3,603
My Ride: 4cyl of fury
Quote:
Originally Posted by NOVAbimmer View Post
to be fair, the "three rules" came about from Asimov's desire to write a story with robots that didn't involve "guy builds robot, robot kills guy in Faustian trope"
I know, I was just trying to make a point. Intelligence knows no bounds, but the same applies to stupidity.
SamDoe1 is online now   Reply With Quote