Thursday, 16 August 2012


Q&A with Luke Muehlhauser, CEO of Singularity Institute

An interesting Reddit Q&A session with Luke Muehlhauser, the CEO of the Singularity Institute. He answers questions ranging from "What if we make something more intelligent than us?" to questions on morality and friendly AI. Both the questions and the answers are worth reading throughout.

Singularity Institute

The Singularity Institute is a research organisation that focuses on developing AI in such a way that we don't destroy ourselves as a civilisation. The idea of friendly AI and machine ethics is an interesting one. As an algorithm-focused researcher, my attention is rarely on the ethics of an algorithm but on its efficiency.

This area of AI is known as AI risk, and a term you'll see a lot is "intelligence explosion": the point at which learning machines become capable of exceeding their original configuration and programming, improving themselves faster than their designers anticipated.
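As a rough illustration (this is my own toy sketch, not anything from the Q&A), the intelligence-explosion idea is often described as a feedback loop: a system's ability to improve itself grows with its current capability, so growth compounds. All the numbers below are made up purely to show the shape of the curve:

```python
# Toy model of an intelligence explosion as a feedback loop.
# Each generation, capability improves in proportion to current
# capability, so growth compounds (illustrative numbers only).

def self_improve(capability: float, rate: float, generations: int) -> list[float]:
    """Return capability at each generation under recursive self-improvement."""
    history = [capability]
    for _ in range(generations):
        capability += rate * capability  # more capable systems improve themselves faster
        history.append(capability)
    return history

# With a 50% improvement per generation, capability grows ~57x in 10 steps.
growth = self_improve(capability=1.0, rate=0.5, generations=10)
```

The point of the sketch is only that compounding self-improvement is exponential, not linear, which is why the word "explosion" is used.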

Should I stop doing AI?

No. However, it's definitely worth bearing these ideas in mind before starting a simulation.
