Wednesday, August 15

SIGIR 2012 Best Paper Awards

Last night at the SIGIR 2012 banquet, James Allan presented the best paper awards.  This year there were two awards, plus an additional honorable mention!

On with the papers:

Honorable Mention
Robust Ranking Models via Risk-Sensitive Optimization  
Lidan Wang (UMd) Paul N Bennett (Microsoft) Kevyn Collins-Thompson (MSR)
This paper tackles the issue of robustness, and examines how systems, despite achieving gains overall, may still significantly hurt many individual queries.  They present a framework for optimizing both effectiveness and robustness, and the tradeoff between the two.

Best Student Paper

Top-k Learning to Rank: Labeling, Ranking and Evaluation
Shuzi Niu (Institute of Computing Technology, CAS) Jiafeng Guo Yanyan Lan (Chinese Academy of Sciences) Xueqi Cheng (Institute of Computing Technology, CAS)
Best Paper Award

Time-Based Calibration of Effectiveness Measures 
Mark D Smucker (University of Waterloo), Charles L. A. Clarke (University of Waterloo)
In this paper, we introduce a time-biased gain measure, which explicitly accommodates such aspects of the search process... As its primary benefit, the measure allows us to evaluate system performance in human terms, while maintaining the simplicity and repeatability of system-oriented tests. Overall, we aim to achieve a clearer connection between user-oriented studies and system-oriented tests, allowing us to better transfer insights and outcomes from one to the other.

Monday, August 13

Norbert Fuhr SIGIR 2012 Salton Keynote Speech

Norbert Fuhr presented the Salton Award keynote speech.

James Allan presented Norbert Fuhr with the 10th Salton award.

He published his first IR paper in 1984, in Cambridge, England.  The paper was 19 pages long.  Since then, he has authored over 200 papers.
 - foreshadowing learned ranking functions
 - probabilistic retrieval models
 - retrieval models for interactive retrieval

"Information Retrieval as Engineering Science"

[We have to listen to the old guys, but we don't have to accept it, but this doesn't hold for my talk today]

What is IR?
 - IR is about vagueness and imprecision in information systems

 - User is not able to precisely specify the object he is looking for
   --> "I am looking for a high-end Android smartphone at a reasonable price"
 - Typically, an iterative retrieval process.
- IR is not restricted to unstructured media

 - the person's knowledge about the objects in the database is incomplete / imprecise
 -> limitations in the representation
 -> imprecise object attributes (unreliable metadata, e.g. availability)

IR vs Databases
 -> DB: given a query, find objects o with o->q
 -> IR: given a query q, find documents d with high values of P(d->q)
 -> DB is a special case of IR! (in a certain sense)
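To make the contrast concrete, here is a small sketch (my own illustration, not from the talk): a DB query is a crisp boolean test, while an IR query can only rank by an estimated degree of match. All data and the scoring function are made up for illustration.

```python
# DB-style exact match vs. IR-style ranking (illustrative sketch).

def db_query(objects, predicate):
    # DB: return exactly the objects o for which o -> q holds (a boolean test)
    return [o for o in objects if predicate(o)]

def ir_query(documents, score, k=3):
    # IR: rank documents d by an estimate of P(d -> q) and return the top k
    return sorted(documents, key=score, reverse=True)[:k]

phones = [
    {"name": "A", "price": 300, "rating": 4.5},
    {"name": "B", "price": 700, "rating": 4.8},
    {"name": "C", "price": 150, "rating": 3.9},
]

# DB view: "price < 400" has a crisp, exact answer set
cheap = db_query(phones, lambda o: o["price"] < 400)

# IR view: "high-end at a reasonable price" is vague -- we can only rank
# (the scoring formula here is an arbitrary stand-in for P(d -> q))
ranked = ir_query(phones, lambda d: d["rating"] * 2 - d["price"] / 500)
```

The DB predicate partitions the objects; the IR scorer merely orders them, which is exactly the sense in which DB is a special case of IR.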

Foundation of DBs
 -> Codd's paper on a relational model was classified as Information Retrieval
 -> The concept of transactions separated the fields.

Fundamental differences between IR and DB is handling the pragmatic level.
DB:  User interacts with the application --> DBMS --> DB
IR: User interacts with the IR system -> over the collection
(separation between the management system and the application)

What IR could learn from DB systems
 Multiple steps of inference  "joins" a->b, b->c
 -> join, links over documents
 -> combine the knowledge across multiple documents

Expressive query language
 -> specify the inference scheme
 -> specify documents parts/aggregates to be retrieved

Data types and vague predicates
 -> not every string is text: times, dates, locations, amounts, person/product names.  [entities]
 -> provide data type-specific comparison predicates (<, set comparison, etc..)
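A hedged sketch of what a "vague predicate" might look like (my illustration, not Fuhr's formulation): instead of a boolean comparison, a data-type-specific predicate returns a match degree in [0, 1]. The linear decay and the tolerance parameter are assumptions for illustration.

```python
# A date predicate softened into a vague predicate (illustrative sketch).
from datetime import date

def vaguely_before(doc_date, target, tolerance_days=30):
    """Crisp '<' for dates, softened: documents slightly after the target
    still match, with a score decaying linearly over `tolerance_days`."""
    delta = (doc_date - target).days
    if delta <= 0:
        return 1.0                       # strictly before: full match
    if delta >= tolerance_days:
        return 0.0                       # far after: no match
    return 1.0 - delta / tolerance_days  # in between: partial match

target = date(2012, 8, 1)
print(vaguely_before(date(2012, 7, 15), target))  # 1.0
print(vaguely_before(date(2012, 8, 16), target))  # 0.5
print(vaguely_before(date(2012, 9, 15), target))  # 0.0
```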

IR as Engineering Science
 - Most of us feel that we are engineers.  But, things are not as simple as they might seem.
 -> Example:  An IR person in civil engineering.

4 or 5 types of bridges - Boolean bridge, vector space, language modeling, etc..
 -- build all 5 and see which one stands up.
 -- Test the variants in live search
 -- Users in IR blame themselves when they drive over a collapsing (non-optimal) system
 -- There could be serious implications by choosing a non-optimal system.

Why we need IR engineering
 -> IR is more than web search
Institutions and companies have
 - large varieties of collections
 - a broad range of search tasks
[example: searching in the medical domain.  A doctor performs a search, and then waits 30 minutes for an answer.  As engineers, we could work on getting this down to 10 min.]

Limitations of current knowledge
 - Moving in Terra Incognita
 - example: Africa.  The western world's knowledge of African geography several hundred years ago; the map was very inaccurate and incomplete.
- At best, interpolation is reasonable.
- Extrapolation lacks theoretic foundation
-> But how to define the boundaries of current knowledge?

Theoretic Models
 -> Probability Ranking Principle
 -> Relevance oriented probabilistic models
 -> IR as uncertain inference
 -> Language Models

Value of Theoretic Models
 -> Deeper insight (scientific interest)
 -> General validity as basis for broad range of applications
 -> Make better predictions (engineer's view)

We should put more focus on the development of theoretic models.
 -> each theory is applicable within a well-defined application range

But, what is the application range?
 -> defined by the underlying assumptions
 -> Are the underlying assumptions of the model valid? For this, we need experiments to validate them.

 - Why vs How experiments

Why -> based on a solid theoretical model.
 -> performed for validating the model assumptions

How -> based on some ad-hoc model
 -> focus on the outcome
 -> no interest in the underlying assumptions

The trouble with 'how' experiments
 -> "Improvements That Don't Add Up: Ad-Hoc Retrieval Results Since 1998"
 -> TREC-8 ad hoc collection, MAP
 -> It's easy to show improvements, but few beat the best official TREC result.
 -> Over 90% of the papers claim improvements that exist only due to poor baselines, and do not beat the best TREC results.
-> Improvements don't add up.

Limitations of Empirical Approaches
 -> Is computer science truly scientific? CACM 7/2010

Theoretical vs Experimental

Theoretical (why)
 - explanatory power
 - basis for a variety of approaches
 - long standing

Experimental (how)
 - good results on some collections
 - potential for some further improvements (in limited settings)
 - short lived

Why experimentation
 - Ex:  Binary Independence Retrieval model
 -> terms are distributed independently in the relevant and irrelevant documents
 -> did anyone ever check this?
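As a reminder of what that independence assumption buys, here is a minimal sketch (my illustration, not from the talk) of the Binary Independence Model's term weight: under independence, each term t contributes log[ p(1-q) / (q(1-p)) ], where p = P(t | relevant) and q = P(t | non-relevant). The add-0.5 smoothing is the standard RSJ-style correction.

```python
# Binary Independence Model term weight (illustrative sketch).
import math

def bim_weight(df_rel, n_rel, df_nonrel, n_nonrel):
    # Smoothed estimates of P(t|relevant) and P(t|non-relevant)
    p = (df_rel + 0.5) / (n_rel + 1)
    q = (df_nonrel + 0.5) / (n_nonrel + 1)
    # Log-odds contribution of the term, valid only if terms really are
    # distributed independently in the relevant and irrelevant documents
    return math.log(p * (1 - q) / (q * (1 - p)))

# Term appearing in 8 of 10 relevant docs but only 50 of 1000 others
# gets a strongly positive weight:
w = bim_weight(8, 10, 50, 1000)
```

The whole derivation of this ranking formula rests on the independence assumption, which is exactly Fuhr's point: the model is used everywhere, but the assumption behind it is rarely validated.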

Looking under the hood
 -> TF-IDF term weights in probabilistic notation.  P(R) for a class of terms.
 -> Plots of relevance vs TF and IDF for TREC ad hoc and INEX IEEE

Systematic Experimentation
Towards evidence based IR
 -> A large variety of test collections
 -> large number of controlled variables

IR Engineering:  How are results affected by these components?

Controlled variables
 -> language, length, collection size, vocabulary size, domain, genre, structure
 -> length, linguistic structure, application domain, user expertise

What other variables are also important?

Even assuming these are the important variables, we have a high parameter search space.

A plug for supporting benchmarking and meta-studies.

Grand IR Theory  vs Empirical Science
 -> Theory alone will not do.

Foundations of IR Engineering
-> Base layer: theory.  Then evidence, and we build the bridge on top of that.

IR Research Supporting Engineering
 1) Theory.  Proofs instead of empirics + heuristics
   - Experiments for validating underlying assumptions

2) Broad Empirical Evidence
 - Strict controlled experimental conditions
 - Repeat experiments with other collections / tasks
 - variables affecting performance
 - metastudies

New IR Applications
 - Dozens of IR applications (see the SWIRL 2012 workshop)
 - Heuristic approaches are valuable for starting and comparison, but they are limited in their generality.
 -> We don't know how far we can generalize the method.

Conclusion: Possible Steps
 -> Encourage theoretic research of the 'why' type, e.g. having a separate conference track for these papers.
 -> Define and enforce rigid evaluation standards to be able to perform metastudies
 -> Set up repositories for standardized benchmarks.


Nick Belkin -> Where do the assumptions underlying the theory come from?  Where do we get evidence?    How would you approach that?
 -> A: Without any hypothesis, the observations are useless.  We need a back and forth between theory and experimentation.

DB and IR
-> Can they be united?  DB is part of IR.  IR is part of DB.  [the issue is bringing the people together]

David Hawking
 -> Have we hit the limit of our engineering capability?  What are the biggest opportunities for significant progress?
A:  We perhaps cannot improve the classical ad hoc setting.  We need to know more about the user and their task.  Smartphone example: your phone knows where you are and what time it is when you are looking for a Chinese restaurant (including opening hours).  We need to study the hard tasks for knowledge workers and integrate more deeply into their applications.

Friday, August 10

Food and Drink Guide to Portland for SIGIR 2012

I'm preparing to leave for SIGIR 2012 in Portland.

If you're planning some sightseeing, or a place to catch up over a beer or meal with friends and colleagues, here are some ideas.

Portland has been made famous by the "Portlandia" series.  It's a quirky, young, outdoorsy, hipster, organic, crunchy kind of town, where "young people go to retire".  It's ground zero for the burgeoning craft beer, coffee, and micro-distilling movement in America.  It has been called "beervana" because of its plethora of outstanding breweries, bars, and pubs.

To cut to the chase, here is my Food Map of Portland on Google maps.  Below are some of my sources and raw research notes.

Note: If you arrive in Portland early and you like food, be sure to check out the Bite of Oregon food festival taking place on Saturday and Sunday.

Portland food links:

Go for a hike

Fine dining Restaurants:
Beast  (think meat!) 
Le Pigeon (think Paris!) and their new place (foie gras profiteroles!) Little Bird (another french bistro)

Coffee Shops
Portland is known for having some of the best coffee in the country.  Here are some of the best places to try a cup.
Stumptown (several locations)
Ristretto Roasters
Coava Coffee
(the business area near the conference is a bit of a coffee & restaurant wasteland, so plan on venturing north into the heart of downtown)

Portland is big into food trucks
Nong's Khao Man Gai, Koi Fusion, Wolf and Bear, Big A$ sandwich, Lardo,  Good Food Here
Many are only open for lunch in downtown, so check their hours.  For late night nosh, check the east side of town (great after a night imbibing at one of the east side bars/pubs)
Mai Pho (lemongrass tofu over rice), Pyro Pizza and Give Pizza a Chance (same owner), and Sugar Cube for dessert.

Top places to eat.
Ken and Zuke's Deli (think Portland's version of Katz's in NY)
People's Pig [temporarily closed] - a food truck serving lunch, famous for its porchetta sandwich (see the Serious Eats coverage)
Pok Pok - Thai street food (get the drinking vinegar), famous for its fish sauce wings.  Be prepared to wait; there is always a long line (think 1 to 1.5 hours; go across the street and wait at the Whiskey Soda Lounge).  There is a new restaurant, Ping, in downtown from the same owners, which was just named one of GQ's ten best new restaurants.  It's reasonably priced, casual food without crazy lines.
Beaker and Flask (restaurant of the year by Willamette Week)
Ken's Artisan Pizza
Olympic Provisions - great lunch places known for salumi
The Meadow - Artisan chocolates and food
Bakeshop - artisan croissants and bakery
Salt and Straw - ice cream
Coco Donut -- the locals prefer it to the more touristy Voodoo Donuts

Fine dining
Le Pigeon
Little Bird Bistro
Toro Bravo 
Beast (James Beard nominated (winner?) chef Naomi Pomeroy makes killer meat.  I have a cooking crush on her)

PSU Portland farmer's market (on Saturday)

Beer & Spirits

Distillery Row

Clear Creek Distillery

Top pubs / Beer Bars
Bailey's Taproom (an icon)
Horse Brass (an icon)
Belmont Station
Beer Mongers
Green Dragon

Brewpubs to visit
Hopworks urban brewery  (classic portland - eco-brewpub -- bikes, beer, and great food)
Roots Organic brewpub

See the Portland entries in 2012 top breweries in the world:
Hair of the dog (the gold standard of portland breweries - best brewery in portland -- also has great food.  A must visit!)
Upright Brewing Company (hot new up-and-comer - just beer, in a hard-to-find location; hardcore beer geeks need apply)
Deschutes (large, popular standby)
Cascade Brewing (known for crazy sour beer!)
Breakside Brewery
Hopworks urban brewery
Gigantic Brewing
 - brand new brewery just opened in May.
Rogue brewery is a hometown favorite --> their current Voodoo Donut maple bacon beer is really unique

Must try beers
BridgePort IPA
Deschutes Bachelor Bitter
DOA, Hopworks
Rogue Maple Bacon Doughnut beer

Consider a beer walking tour:  cascade brewing -->  green dragon --> hair of the dog

Beaker and Flask (restaurant of the year by Willamette Week, awesome cocktails!)
Clyde Common (great food + drinks, casual, great for a group in downtown)
Rum Club

Thursday, April 12

Amazon CloudSearch, Elastic Search as a Service

The search division of Amazon, A9, today announced the release of CloudSearch.  Amazon CTO Werner Vogels announced it on his blog, All Things Distributed.  The AWS blog also has a new post on the announcement.

For the details and pricing, there is also the official CloudSearch details page.

CloudSearch is a fully managed search service based on Amazon's search infrastructure that provides near-realtime, faceted, scalable search.  The index is stored in memory for fast search and updates.

Dynamic Scaling
What makes the A9 offering particularly interesting is its ability to dynamically scale.  The architecture of A9's search system, with shards and replicas, is a common and well-understood model; what is unique is how easily you can scale your search cluster.  A9 will automatically add (and remove) search instances and index partitions as the index grows and shrinks.  It will also dynamically add and remove replicas to respond to changes in search request traffic.  The exact technical details are not yet clearly described.

Right now, there is a limit of 50 search instances.  An extra-large search instance can handle approximately 8 million 1K documents, so the assumption appears to be that documents are quite small (e.g. product records).  To put it in perspective, a rough rule of thumb for web documents is approximately 10K.  Given this, it translates into roughly 800k web documents per server * 50 servers = 40 million web documents.  This is not for building large-scale web search, yet.  However, it should be more than enough for most enterprise e-commerce and site-search applications.
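The back-of-the-envelope arithmetic above, using only the figures from the post (50-instance limit, ~8 million 1K documents per extra-large instance, ~10K per average web document):

```python
# Capacity estimate for CloudSearch under the stated assumptions.
docs_per_instance_1k = 8_000_000   # ~8M documents of ~1 KB each per instance
web_doc_kb = 10                    # rough rule of thumb for a web page size
max_instances = 50                 # current search instance limit

# A 10K web document takes ~10x the space of a 1K product record
web_docs_per_instance = docs_per_instance_1k // web_doc_kb   # 800,000
total_web_docs = web_docs_per_instance * max_instances       # 40,000,000
```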

The real value added by the search engine is in the ranking of results.

The control over search index ranking is rudimentary, with a few basic knobs: you can add stopwords, perform stemming, and add synonyms.  This is very basic stuff.  How you might make more interesting (and important) IR ranking changes is vague.  From the article,
Rank expressions are mathematical functions that you can use to change how search results are ranked. By default, documents are ranked by a text relevance score that takes into account the proximity of the search terms and the frequency of those terms within a document. You can use rank expressions to include other factors in the ranking. For example, if you have a numeric field in your domain called 'popularity,' you can define a rank expression that combines popularity with the default text relevance score to rank relevant popular documents higher in your search results.
This indicates that it is possible to boost documents.  However, it is unclear how the underlying text search works in order to boost individual important fields (e.g. name, description).
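Conceptually, a rank expression as quoted above does something like the following sketch. To be clear, this is not CloudSearch syntax; the blend weight and field names are my own assumptions, purely to illustrate combining a numeric field with the default text relevance score.

```python
# Illustrative sketch of blending text relevance with a popularity field.
def combined_rank(text_relevance, popularity, w=0.3):
    # Weighted blend of the engine's text score and a normalized
    # popularity signal; w is an arbitrary illustrative weight
    return (1 - w) * text_relevance + w * popularity

results = [
    {"id": "doc1", "text_relevance": 0.90, "popularity": 0.20},
    {"id": "doc2", "text_relevance": 0.75, "popularity": 0.95},
]
# The popular-but-slightly-less-relevant document rises to the top
results.sort(key=lambda r: combined_rank(r["text_relevance"], r["popularity"]),
             reverse=True)
```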

For more details on the more advanced query processing needed to make search work in practice, read the post Query Rewriting in Search Engines by Hugh Williams of eBay.  To employ these methods, you need log data, which brings me to my next point.

Missing Pieces
A key missing component is a usage-driven framework for improving ranking that uses queries, clicks, and other user behavior signals: a feedback mechanism to change ranking based on analysis of this data (ideally automatically).

Overall, the most compelling aspect of this is the dynamic scaling.  It gives people a simple platform that scales transparently for many enterprise search and e-commerce applications.