Tuesday, June 10

The Rise of Intention and Preference Machines

Yesterday, I mentioned Eric Horvitz's presentation, "Machine Learning, Reasoning, and Intelligence in Daily Life: Directions and Challenges."

He spends a good deal of his presentation talking about "preference machines," which include recommendation systems. "Intention machines" are services that use models to predict a user's activities and goals. In short, both use past history to predict future behavior.
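To make the distinction concrete, here is a toy sketch of an intention machine: a first-order transition model that predicts a user's next activity from a history of past ones. The activities and the sample history are made up for illustration; this is not how Horvitz's systems are actually built.

```python
from collections import Counter, defaultdict

class IntentionModel:
    """Toy intention machine: predict the next activity from history."""

    def __init__(self):
        # transitions[prev][next] = how often `next` followed `prev`
        self.transitions = defaultdict(Counter)

    def train(self, history):
        for prev, nxt in zip(history, history[1:]):
            self.transitions[prev][nxt] += 1

    def predict(self, current):
        counts = self.transitions[current]
        if not counts:
            return None
        # Most frequent follow-on activity given the current one.
        return counts.most_common(1)[0][0]

model = IntentionModel()
model.train(["email", "calendar", "email", "commute", "email", "calendar"])
print(model.predict("email"))  # -> "calendar"
```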

First, an excerpt from the mobile arena: the "Predestination" project, which predicts driver destinations.
We have been exploring the uses of the data in learning and reasoning systems, including the construction of a system that can predict and then harness drivers’ likely destinations, given initial driving trajectories [Krumm and Horvitz, 2006]. Beyond geocentric intention machines, we have been exploring the feasibility of building geocentric preference machines, that perform geocentric collaborative filtering: Given sets of sensed destinations of multiple people and the sensed destinations of a particular driver, what places, unvisited previously by that driver, might be of interest, and how and when might the driver be best informed (e.g., by hearing a paid advertisement when he or she is approaching such destinations).
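The geocentric collaborative filtering idea can be sketched in a few lines: score each place a driver has not visited by how much their destination history overlaps with the histories of drivers who have been there. The Jaccard similarity and the toy data below are my own illustrative choices, not the method from the Krumm and Horvitz paper.

```python
from collections import Counter

def jaccard(a, b):
    """Overlap between two destination sets, 0.0 to 1.0."""
    return len(a & b) / len(a | b) if a | b else 0.0

def recommend(driver, all_drivers, top_n=3):
    """Suggest unvisited places, weighted by similarity to other drivers."""
    visited = all_drivers[driver]
    scores = Counter()
    for other, places in all_drivers.items():
        if other == driver:
            continue
        sim = jaccard(visited, places)
        for place in places - visited:  # only destinations the driver hasn't seen
            scores[place] += sim
    return [place for place, _ in scores.most_common(top_n)]

drivers = {
    "alice": {"cafe", "gym", "library"},
    "bob":   {"cafe", "gym", "bakery"},
    "carol": {"library", "museum"},
}
print(recommend("alice", drivers))  # -> ['bakery', 'museum']
```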
Challenges in Learning and Supervision
Priorities research explored a middle ground of allowing users to become more involved with in-stream supervision. In versions of Priorities, users could inspect and modify in-stream supervision policies. Such awareness and potential modification allows the in-stream supervision to become a grounded collaboration between the machine and user...

Challenging areas of research include developing a better understanding of the best approaches to constructing generic models that can provide valuable, usable initial experiences with intelligent applications and services, but that allow for efficient adaptation downstream with a user’s explicit training efforts or in-stream supervision. Research may lead to deeper insights about setting up systems for “ideal adaptability” given expectations about the nature of different kinds of environments, and adaptations, given the users and uses.
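Here is a rough sketch of what in-stream supervision with an inspectable policy might look like: a priority model seeded with a generic prior so it is usable from day one, which then adapts as the user acts on messages. The mapping from user actions to labels is an explicit table the user could inspect and change. The actions, labels, and update rule are illustrative assumptions, not the actual Priorities system.

```python
# Supervision policy: which in-stream user actions count as labels.
# Because it is plain data, a user could inspect or modify it.
SUPERVISION_POLICY = {
    "replied_quickly": "high_priority",
    "deleted_unread": "low_priority",
}

class PriorityModel:
    def __init__(self):
        # Per-sender label counts; each sender starts with a uniform
        # prior so the generic model gives sensible initial behavior.
        self.counts = {}

    def observe(self, sender, action):
        label = SUPERVISION_POLICY.get(action)
        if label is None:
            return  # this action carries no supervisory signal
        bucket = self.counts.setdefault(
            sender, {"high_priority": 1, "low_priority": 1})
        bucket[label] += 1

    def p_high(self, sender):
        bucket = self.counts.get(
            sender, {"high_priority": 1, "low_priority": 1})
        total = bucket["high_priority"] + bucket["low_priority"]
        return bucket["high_priority"] / total

model = PriorityModel()
model.observe("boss@example.com", "replied_quickly")
model.observe("newsletter@example.com", "deleted_unread")
print(model.p_high("boss@example.com"))        # > 0.5
print(model.p_high("newsletter@example.com"))  # < 0.5
```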
Machines and humans need to learn to work together. Machines can sometimes help us make decisions, but a key challenge is translating a machine's recommendation into a rationale that humans can understand, so that a "dialogue" can begin in which the user corrects mistakes and the system produces more accurate predictions.
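One simple way to start such a dialogue is to make the model's reasoning legible. The sketch below scores an item with a linear model and reports each feature's contribution as a rationale; the features and weights are invented for illustration.

```python
# Illustrative feature weights for a toy linear recommender.
WEIGHTS = {"matches_past_purchases": 2.0,
           "trending": 0.5,
           "sponsored": 1.5}

def score_with_rationale(features):
    """Return a score plus per-feature contributions as a rationale."""
    contributions = {name: WEIGHTS[name] * value
                     for name, value in features.items()}
    total = sum(contributions.values())
    # Rationale: features sorted by how much they drove the score.
    rationale = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return total, rationale

total, rationale = score_with_rationale(
    {"matches_past_purchases": 1, "trending": 1, "sponsored": 1})
print(f"score={total}")
for name, contribution in rationale:
    print(f"  {name}: {contribution:+.1f}")
```

A dialogue step could then let the user push back on a contribution they disagree with, say by down-weighting "sponsored", before the system re-scores.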

This barrier is one reason Google does not use machine learning for its core ranking algorithm; see the recent post "Are Machine-Learned Models Prone to Catastrophic Errors?" for an enlightening interview with Google's Peter Norvig. Anand relates,
Peter tells me that their best machine-learned model is now as good as, and sometimes better than, the hand-tuned formula on the results quality metrics that Google uses... Google's search team worries that machine-learned models may be susceptible to catastrophic errors on searches that look very different from the training data. They believe the manually crafted model is less susceptible to such catastrophic errors on unforeseen query types.
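The worry about unfamiliar queries suggests a simple guardrail: measure how far an input is from the training data, and fall back to the hand-tuned formula when the input looks novel. This sketch is my own illustration of that idea; the distance measure, threshold, and both scorers are placeholders, not anything Google has described.

```python
import math

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def learned(f):
    return 10 * f[0] - 3 * f[1]   # stand-in for a machine-learned model

def manual(f):
    return 5 * f[0]               # stand-in for a hand-tuned formula

def rank_score(features, training_features, threshold=2.0):
    # Distance to the nearest training example as a crude novelty measure.
    novelty = min(distance(features, t) for t in training_features)
    if novelty > threshold:
        return manual(features)   # unfamiliar input: trusted formula
    return learned(features)      # familiar input: learned model

training = [(0.9, 0.1), (0.8, 0.3), (0.7, 0.2)]
print(rank_score((0.85, 0.2), training))  # near training data: learned path
print(rank_score((9.0, 9.0), training))   # far from training data: fallback
```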
