Artificial intelligence pioneer says throw it all away and start again

Geoffrey Hinton harbors doubts about AI’s current workhorse. (Johnny Guatto / University of Toronto)

In 1986, Geoffrey Hinton co-authored a paper that, three decades later, is central to the explosion of artificial intelligence.

But Hinton says his breakthrough method should be dispensed with, and a new path to AI found.

… he is now “deeply suspicious” of back-propagation, the workhorse method that underlies most of the advances we are seeing in the AI field today, including the capacity to sort through photos and talk to Siri.

“My view is throw it all away and start again”

Hinton said that, to push materially ahead, entirely new methods will probably have to be invented. “Max Planck said, ‘Science progresses one funeral at a time.’ The future depends on some graduate student who is deeply suspicious of everything I have said.”

Hinton suggested that getting neural networks to become intelligent on their own, what is known as “unsupervised learning,” will mean rethinking his own method: “I suspect that means getting rid of back-propagation.”

“I don’t think it’s how the brain works,” he said. “We clearly don’t need all the labeled data.”
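Since back-propagation is the method under discussion, a minimal sketch of it may help. This is an illustrative toy in plain NumPy, not code from the 1986 paper; note that it depends on the labeled data `y`, exactly the dependence Hinton questions.

```python
# Toy back-propagation: one hidden layer, squared-error loss, plain NumPy.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(64, 3))                            # 64 samples, 3 features
y = (X.sum(axis=1, keepdims=True) > 0).astype(float)    # the labeled data

W1 = rng.normal(size=(3, 5))
W2 = rng.normal(size=(5, 1))
lr = 0.1

for _ in range(500):
    # Forward pass through one hidden layer.
    h = np.tanh(X @ W1)
    out = 1.0 / (1.0 + np.exp(-(h @ W2)))               # sigmoid output

    # Backward pass: the chain rule carries the output error
    # back to every weight.
    d_out = (out - y) * out * (1.0 - out)
    d_W2 = h.T @ d_out
    d_h = (d_out @ W2.T) * (1.0 - h ** 2)
    d_W1 = X.T @ d_h

    # Gradient step.
    W1 -= lr * d_W1
    W2 -= lr * d_W2

print(f"training accuracy: {((out > 0.5) == (y > 0.5)).mean():.2f}")
```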

Source: Axios


Artificial Intelligence Key To Treating Illness

The University of Cincinnati (UC) and one of its graduates have teamed up to use artificial intelligence to analyze the fMRIs of bipolar patients to determine treatment.

In a proof of concept study, Dr. Nick Ernest harnessed the power of his Psibernetix AI program to determine if bipolar patients could benefit from a certain medication. Using fMRIs of bipolar patients, the software looked at how each patient would react to lithium.

Fuzzy Logic appears to be very accurate

The computer software predicted with 100 percent accuracy how patients would respond. It also predicted the actual reduction in manic symptoms after the lithium treatment with 92 percent accuracy.

UC psychiatrist David Fleck partnered with Ernest and Dr. Kelly Cohen on the study. Fleck says without AI, coming up with a treatment plan is difficult. “Bipolar disorder is a very complex genetic disease. There are multiple genes and not only are there multiple genes, not all of which we understand and know how they work, there is interaction with the environment.”

Ernest emphasizes the advanced software is more than a black box. It thinks in linguistic sentences. “So at the end of the day we can go in and ask the thing why did you make the prediction that you did? So it has high accuracy but also the benefit of explaining exactly why it makes the decision that it did.”
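To give a flavor of what “linguistic” fuzzy rules mean, here is a purely hypothetical sketch. This is not Psibernetix’s actual software, and the thresholds and rule set are invented; it only illustrates how graded rule strengths can be read back as plain-language explanations.

```python
# Hypothetical sketch of linguistic fuzzy rules -- NOT Psibernetix's system.

def high(x: float) -> float:
    """Membership: degree to which a normalized fMRI signal is 'high'."""
    return max(0.0, min(1.0, (x - 0.4) / 0.4))

def low(x: float) -> float:
    """Membership: degree to which the signal is 'low'."""
    return 1.0 - high(x)

def predict_response(signal: float) -> str:
    # Rule 1: IF signal IS high THEN lithium response IS good
    # Rule 2: IF signal IS low  THEN lithium response IS poor
    good, poor = high(signal), low(signal)
    verdict = "good" if good >= poor else "poor"
    # The "why": the fired rule strengths read back as plain language.
    print(f"signal={signal:.2f}: high={good:.2f}, low={poor:.2f} -> {verdict}")
    return verdict

predict_response(0.75)   # prints the rule strengths behind the verdict
```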

More tests are needed to make sure the artificial intelligence continues to accurately predict medication for bipolar patients.

Source: WVXU


Machines can never be as wise as human beings – Jack Ma #AI

Mark Zuckerberg and Jack Ma

“I think machines will be stronger than human beings, machines will be smarter than human beings, but machines can never be as wise as human beings.”

“The wisdom, soul and heart are what human beings have. A machine can never enjoy the feelings of success, friendship and love. We should use the machine in an innovative way to solve human problems.” – Jack Ma, Founder of Alibaba Group, China’s largest online marketplace

Mark Zuckerberg said AI technology could prove useful in areas such as medicine and hands-free driving, but it was hard to teach computers common sense. Humans had the ability to learn and apply that knowledge to problem-solving, but computers could not do that.

AI won’t outstrip mankind that soon – Mark Zuckerberg

Source: South China Morning Post

 


How Google Aims To Dominate AI

There are more than 1,000 researchers at Google working on machine intelligence applications

The Search Giant Is Making Its AI Open Source So Anyone Can Use It

Internally, Google has spent the last three years building a massive platform for artificial intelligence and now they’re unleashing it on the world


In November 2007, Google laid the groundwork to dominate the mobile market by releasing Android, an open-source operating system for phones. Eight years later to the month, Android has an 80 percent market share, and Google is using the same trick—this time with artificial intelligence.

Introducing TensorFlow, the Android of AI

Google is announcing TensorFlow, its open-source platform for machine learning, giving anyone with a computer, an internet connection (and a casual background in deep learning algorithms) access to one of the most powerful machine learning platforms ever created.

More than 50 Google products have adopted TensorFlow to harness deep learning (machine learning using deep neural networks) as a tool, from identifying you and your friends in the Photos app to refining its core search engine. Google has become a machine learning company. Now they’re taking what makes their services special, and giving it to the world.

TensorFlow is a library of files that allows researchers and computer scientists to build systems that break down data, like photos or voice recordings, and have the computer make future decisions based on that information. This is the basis of machine learning: computers understanding data, and then using it to make decisions. When scaled to be very complex, machine learning is a stab at making computers smarter.
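As a concrete taste, here is a minimal sketch using TensorFlow’s current Python API (TensorFlow 2.x with Keras; the 2015 release described above exposed a lower-level graph API). A tiny network fits example data, then makes a decision about a new input; the toy dataset stands in for the “photos or voice recordings” of the article.

```python
import numpy as np
import tensorflow as tf

# Toy data: 200 two-dimensional points, labeled by which side of a
# line they fall on -- a stand-in for real photos or recordings.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2)).astype("float32")
y = (X[:, 0] + X[:, 1] > 0).astype("int32")

# A small feed-forward network built from TensorFlow's library pieces.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(2,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# "Computers understanding data": fit the model to the examples.
model.fit(X, y, epochs=10, verbose=0)

# "...and then using it to make decisions": classify a new point.
print(model.predict(np.array([[0.5, 0.5]], dtype="float32")))
```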

But no matter how well a machine may complement or emulate the human brain, it doesn’t mean anything if the average person can’t figure out how to use it. That’s Google’s plan to dominate artificial intelligence—making it as simple as possible. While the machinations behind the curtains are complex and dynamic, the end results are ubiquitous tools that work, and the means to improve those tools if you’re so inclined.

Source: Popular Science

Click here to learn more about Mark Zuckerberg’s vision for Facebook


Enhancing Social Interaction with an AlterEgo Artificial Agent

AlterEgo: Humanoid robotics and Virtual Reality to improve social interactions

The objective of AlterEgo is the creation of an interactive cognitive architecture, implementable in various artificial agents, allowing a continuous interaction with patients suffering from social disorders. The AlterEgo architecture is rooted in complex systems, machine learning and computer vision. The project will produce a new robotic-based clinical method able to enhance social interaction of patients. This new method will change existing therapies, will be applied to a variety of pathologies and will be individualized to each patient. AlterEgo opens the door to a new generation of social artificial agents in service robotics.

Source: European Commission: CORDIS


Emotionally literate tech to help treat autism

Researchers have found that children with autism spectrum disorders are more responsive to social feedback when it is provided by technological means, rather than a human.

When therapists do work with autistic children, they often use puppets and animated characters to engage them in interactive play. However, researchers believe that small, friendly looking robots could be even more effective, not just to act as a go-between, but because they can learn how to respond to a child’s emotional state and infer his or her intentions.

‘Children with autistic spectrum disorders prefer to interact with non-human agents, and robots are simpler and more predictive than humans, so can serve as an intermediate step for developing better human-to-human interaction,’ said Professor Bram Vanderborght of Vrije Universiteit Brussel, Belgium.


Source: Horizon Magazine


The Rise of the Robot Therapist

 Social robots appear to be particularly effective in helping participants with behaviour problems develop better control over their behaviour

Romeo Vitelli Ph.D.

In recent years, we’ve seen a rise in different interactive technologies and new ways of using them to treat various mental health problems. Among other things, this includes online, computer-based, and even virtual reality approaches to cognitive-behavioural therapy. But what about using robots to provide treatment and/or emotional support?

A new article published in Review of General Psychology provides an overview of some of the latest advances in robotherapy and what we can expect in the future. Written by Cristina Costescu and Daniel O. David of Romania’s Babes-Bolyai University and Bram Vanderborght of Vrije Universiteit Brussel in Belgium, the article covers different studies showing how robots are transforming personal care.

What they found was a fairly strong treatment effect for using robots in therapy: 69 percent of the 581 study participants who received alternative treatments performed more poorly overall than those who received robotic therapy.

As for individuals with autism, research has already shown that they can be even more responsive to treatment using social robots than with human therapists due to their difficulty with social cues.

Though getting children with autism to participate in treatment is often frustrating for human therapists, these children often respond extremely well to robot-based therapy that helps them become more independent.

 Source: Psychology Today


How To Teach Robots Right and Wrong

Artificial Moral Agents


Nayef Al-Rodhan

Over the years, robots have become smarter and more autonomous, but so far they still lack an essential feature: the capacity for moral reasoning. This limits their ability to make good decisions in complex situations.

The inevitable next step, therefore, would seem to be the design of “artificial moral agents,” a term for intelligent systems endowed with moral reasoning that are able to interact with humans as partners. In contrast with software programs, which function as tools, artificial agents have various degrees of autonomy.

However, robot morality is not simply a binary variable. In their seminal work Moral Machines, Yale’s Wendell Wallach and Indiana University’s Colin Allen analyze different gradations of the ethical sensitivity of robots. They distinguish between operational morality and functional morality. Operational morality refers to situations and possible responses that have been entirely anticipated and precoded by the designer of the robot system. This could include the profiling of an enemy combatant by age or physical appearance.

The most critical of these dilemmas is the question of whose morality robots will inherit.

Functional morality involves robot responses to scenarios unanticipated by the programmer, where the robot will need some ability to make ethical decisions alone. Here, they write, robots are endowed with the capacity to assess and respond to “morally significant aspects of their own actions.” This is a much greater challenge.
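A toy sketch may make the gradation concrete. This illustration is ours, not Wallach and Allen’s, and every situation name and harm number in it is invented: operational morality is a lookup over anticipated cases, while functional morality must produce a judgment when the lookup fails.

```python
# Toy illustration (ours, not from Moral Machines) of the two gradations.

# Operational morality: every situation was anticipated and precoded by
# the designer; the robot only ever looks up a canned response.
PRECODED = {
    "human_in_path": "stop",
    "restricted_zone": "turn_back",
}

def operational(situation: str):
    return PRECODED.get(situation)   # None if the case was never anticipated

# Functional morality: an unanticipated situation forces the robot to
# assess "morally significant aspects of its own actions" -- here, very
# crudely, a numeric estimate of the harm each available action may cause.
def functional(situation: str, harm_by_action: dict) -> str:
    canned = operational(situation)
    if canned is not None:
        return canned
    return min(harm_by_action, key=harm_by_action.get)

# An unanticipated dilemma: both options carry some estimated harm.
print(functional("blocked_exit", {"wait": 0.3, "push_through": 0.7}))
# -> "wait" (the lower estimated harm)
```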

The attempt to develop moral robots faces a host of technical obstacles, but, more important, it also opens a Pandora’s box of ethical dilemmas.

Moral values differ greatly from individual to individual, across national, religious, and ideological boundaries, and are highly dependent on context. Even within any single category, these values develop and evolve over time.

Uncertainty over which moral framework to choose underlies the difficulty and limitations of ascribing moral values to artificial systems … To implement either of these frameworks effectively, a robot would need to be equipped with an almost impossible amount of information. Even beyond the issue of a robot’s decision-making process, the specific issue of cultural relativism remains difficult to resolve: no one set of standards and guidelines for a robot’s choices exists.    

For the time being, most questions of relativism are being set aside for two reasons. First, the U.S. military remains the chief patron of artificial intelligence for military applications, and Silicon Valley for other applications. As such, American interpretations of morality, with their emphasis on freedom and responsibility, will remain the default.

Source: Foreign Affairs, “The Moral Code,” August 12, 2015

PL – EXCELLENT summary of a very complex, delicate but critical issue, Professor Al-Rodhan!

In our work we propose an essential activity in the process of moralizing AI that is being overlooked: an approach that facilitates what you put so well, the need for “AI to interact with humans as partners.”

We question the possibility that binary-coded, logic-based AI, in its current form, will one day switch from amoral to moral. This would first require universal agreement on what constitutes morals, and second, it would require the successful upload/integration of morals, or moral capacity, into AI computing.

We do think AI can be taught “culturally relevant” moral reasoning, though, by implementing a new human/AI interface that includes a collaborative engagement protocol: a protocol that makes it possible for the AI to learn what is culturally relevant to each person, individually, and then to interact with that person based on the values and morals it has learned.

We call this a “whole person” engagement protocol. This person-focused approach includes AI/human interaction that embraces quantum cognition as a way of understanding what appears to be human irrationality. [Behavior and choices that, judged from a classical probability-based decision model, appear irrational and cannot be computed.]

This whole person approach has a different purpose, and can produce different outcomes, than current omniscient/clandestine-style methods of AI/human information-gathering, which are more like spying than collaborating, since the human’s awareness of self and situation is not advanced but only benefited as it relates to things to buy, places to go and schedules to meet.

Visualization is a critical component for AI to engage the whole person: in this case, a visual that displays interlinking data for the human. Such a visual breaks through the limitations of human working memory by displaying the complex data of a person/situation in context, and it incorporates a human’s two most basic, reliable ways of knowing, big picture and details, which have to be kept in dialogue with one another. This makes it possible for the person themselves to make meaning, decide and act in real time. [The value of visualization was demonstrated in physics in 2013 with the discovery of the amplituhedron, a single visual that replaced 500 pages of algebraic formulas, thus reducing the overwhelm of linear processing.]

This kind of collaborative engagement between AI and humans (even groups of humans) sets the stage for AI to offer real-time personalized feedback for/about the individual or group. It can put the individual in the driver’s seat of his/her life as it relates to self and situation. It makes it possible for humans to navigate any kind of complex human situation such as, for instance, personal growth, relationships, child rearing, health, career, company issues, community issues, conflicts, etc … (In simpler terms, what we refer to as the “tough human stuff.”)
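Purely as a hypothetical sketch of what such a collaborative engagement protocol might look like in code (every name here is invented; this is not an implementation of our proposal, only an illustration of its shape): an agent records each person’s stated values and conditions its feedback on them, individually.

```python
# Purely hypothetical sketch -- invented names, not an actual system.
from collections import defaultdict

class WholePersonAgent:
    """Toy agent that learns each person's stated values individually."""

    def __init__(self):
        self.values = defaultdict(list)   # person -> values learned so far

    def learn(self, person: str, value: str) -> None:
        # Collaborative step: the person tells the agent what matters
        # to them, rather than the agent inferring it covertly.
        self.values[person].append(value)

    def respond(self, person: str, proposal: str) -> str:
        # Feedback conditioned on what was learned about this individual.
        relevant = [v for v in self.values[person] if v in proposal.lower()]
        if relevant:
            return f"This fits values you named: {', '.join(relevant)}."
        return "Tell me more about what matters to you here."

agent = WholePersonAgent()
agent.learn("ana", "family")
print(agent.respond("ana", "A plan that protects family time"))
# -> This fits values you named: family.
```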

AI could then address human behavior, which, up to now, has been the elephant in the room for coders and AI developers.

We recognize that this model for AI / human interaction does not solve the ultimate AI morals/values dilemma. But it could serve to advance four major areas of this discussion:

  1. By feeding back morals/values data to individual humans, it could advance their own awareness more quickly. (The act of seeing complex contextual data expands consciousness for humans and makes it possible for them to shift and grow.)
  2. It would help humans help themselves right now (not 10 or 20 years from now).
  3. It would create a new class of data, perceptual data, as it relates to individual beliefs that drive human behavior.
  4. It would allow for AI to process this additional “perceptual” data, collectively over time, to become a form of “artificial moral agent” with enhanced “moral reasoning,” working in partnership with humans.



Excuse me but … physicists are not all-knowing

“The overwhelming success of modern physics does not give physicists the ability to pronounce judgment on other sciences … ” — Physicist Matthew R. Francis

Source: Slate, “Quantum and Consciousness Often Mean Nonsense,” Matthew R. Francis, May 29, 2014

Phil Lawson: Ultimate success in AI will not rest solely on the shoulders of the physicists, mathematicians and coders who write deep learning algorithms. It will require interdisciplinary collaborations with neuroscientists, biologists, psychologists, therapists, culturalists and more. It may even require “Spherists” to inspire these interactions 🙂

[Spherist, as defined on this blog: a person who has experienced the limits of compartmentalized thinking, who values connecting the dots and seeing a bigger picture, and who is informed in the sciences of systems: chaos, turbulence, self-organization, wholeness, etc.]


Point of this blog on Socializing AI

“Artificial Intelligence must be about more than our things. It must be about more than our machines. It must be a way to advance human behavior in complex human situations. But this will require wisdom-powered code. It will require imprinting AI’s genome with social intelligence for human interaction. It must begin right now.”
— Phil Lawson
(read more)
