These days, it’s difficult to imagine the Internet without search engines. As Google and Yahoo proudly proclaim, a PPC advertisement on one of their networks reaches “98% of active Internet users,” with well over half of that reach coming straight from a search engine. So . . . with such astounding success, there’s no better time than now to predict the inevitable demise of the search engine.
Search engine death is inevitable for a simple reason: search engines, in and of themselves, aren’t necessary. A search engine is an agent. Search engines are the glue that brings Internet users and content sites together.
The best offline examples of such agency relationships are travel agents and real estate agents. These agents are necessary because of asymmetric access to information. In plain English, you need an agent because the agent has information that you don’t.
In the olden days (like 1990), when you wanted to book a flight, you had two choices. First, you could call every airline’s toll-free number, wait on hold for 20 minutes at a time, hope that the telesales rep who answered the phone did a thorough job for you, then compare prices, call back the winning airline, and hope your seat and price were still available, all the while waiting on hold for another 20 minutes. Not too efficient.
Enter the travel agent. For a nominal fee (often paid by the airline) you could call up a friendly voice, describe what you wanted, and wait for the agent to get back to you with the best price and schedule. For obvious reasons, many people were willing to pay for such a service.
The Internet, of course, has all but eliminated travel agents. Sophisticated comparison algorithms like Kayak.com or Orbitz.com enable consumers to get better information, more quickly, and at a lower cost. The Internet, in short, has disintermediated the agent. There are plenty of other examples too – dating sites have disintermediated the local bar, Craigslist has disintermediated classified ads, and Skype is disintermediating phone companies.
Search engines in particular have had a disintermediating impact on offline information agents. For example, the yellow pages (for local services) and the library (for research) have lost their once lofty positions as the repositories of particular types of information, ever since search algorithms gained popular acceptance and technical refinement.
But search engines are really pretty inefficient when you think about it. Search engines rely on the user to describe what he wants. And most users aren’t very articulate when it comes to expressing their needs. Thus, the search engine ends up showing a lot of different results and hoping that one of the listings on the search engine results page (SERP) is the right one.
Moreover, because the search engine usually shows at least 10 listings per page, the information it can provide about each individual listing is limited, making it hard for the user to figure out what each listing is really about. Combine this with additional page space being usurped by keyword advertising, and the value of the natural search results to the end user steadily diminishes.
The end result is often one of two things: the user clicks on multiple ads/organic results in the hope of finding the right site for him, or the user types in a different search to better refine his query. This process is not unlike the 1990s traveler calling multiple airlines, multiple times, until he triangulated on the best result.
Thus, if a search engine is analogous to the ‘old world’ travel booking process of calling multiple airlines, it just makes sense that – like travel agents – this is an agency relationship that will eventually be replaced by a more efficient solution.
But with what, you ask? An ultra-smart computer that knows what you are thinking the moment you put your hands on the keyboard? A system that sends you to the right result (the result that you really wanted) instead of sending you to a page with 10 to 15 snippets of site information and forcing you to decide which one is right for you?
Well, actually, yes. Consider this scenario. You download a program to your desktop. The program begins by asking you a lot of questions about yourself – from basic demographic questions (age, sex, geographic location), to highly detailed psychographic questions (your personality, your pastimes, your fears, your future goals, your risk quotient).
Moreover, this program follows everything you do on the Internet (and you allow it to do so). It tracks the time you spend on each page, which sites you return to frequently, which sites you leave immediately. You can even add reviews of sites you like and don’t like. And finally, if you want, the program compares your behavior to that of millions of other Internet surfers across the universe. It finds surfers similar to you in terms of interests, life stage, or basic demographics.
After a few weeks (or maybe months), the program starts to combine all of this disparate information into a pretty darn accurate profile of your wants and needs. It combines your responses to surveys, your Internet searching habits, your ratings of Web sites, and the activity of similarly-situated Internet users into one big algorithm. And here’s the great part. Instead of using this algorithm to show you “search results”, it simply takes you to the “right” page.
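The “one big algorithm” idea above can be sketched as a simple weighted blend of signals. Everything here is hypothetical: the weights, the field names, and the example sites are illustrative, not a real system’s design.

```python
# A minimal sketch of an infomediary's scoring step, assuming four
# made-up signal types, each already normalized to a 0-1 scale.

def profile_score(site, profile):
    """Weighted blend of explicit and behavioral signals (weights are made up)."""
    return (
        0.4 * profile["survey_match"].get(site, 0.0)   # fit with survey answers
        + 0.3 * profile["dwell_time"].get(site, 0.0)   # normalized time on site
        + 0.2 * profile["rating"].get(site, 0.0)       # explicit user rating
        + 0.1 * profile["peer_score"].get(site, 0.0)   # behavior of similar users
    )

def right_page(candidates, profile):
    """Instead of ranking ten listings, return only the single best page."""
    return max(candidates, key=lambda site: profile_score(site, profile))

# Hypothetical profile data for two candidate travel sites.
profile = {
    "survey_match": {"la-deals.example": 0.9, "generic-travel.example": 0.5},
    "dwell_time":   {"la-deals.example": 0.7, "generic-travel.example": 0.8},
    "rating":       {"la-deals.example": 1.0},
    "peer_score":   {"generic-travel.example": 0.6},
}

print(right_page(["la-deals.example", "generic-travel.example"], profile))
# → la-deals.example
```

The key design difference from a search engine is the last line: the program collapses the whole results page into a single answer, betting that the profile is rich enough to make that bet safely.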
So now, when you type in “los angeles travel”, it takes you directly to the Web site with the best travel deals for you to Los Angeles. Heck, depending on the information you have provided or the system has gleaned from you, it might even know the dates of your travel, your departure location, your preferred method of traveling, frequent flyer numbers, travel companions, credit card information, and whether you need a car, hotel, a kennel for your dog, and some new luggage. All you need to do is review the price of your trip, click submit, and voila: you’re off to LA!
Is this Ray Bradbury science fiction? I don’t think so. In fact, most of the tools needed to create such a system already exist. Collaborative filtering can be used to predict user behavior based on the actions of similar consumers (already used by Amazon’s “members like you also liked . . .” and Yahoo’s LaunchCast). Cookies help sites understand past user behavior (and imagine the depth of information that could be gleaned from a persistent cookie that follows a user from one site to another. This is part of the theory behind Root Markets). And consumers clearly know how to fill out surveys online, which in turn can be used to serve user-targeted advertising (just ask Tickle.com).
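Collaborative filtering of the kind mentioned above is simple enough to sketch: measure how similar two users’ site ratings are, then predict an unseen rating as a similarity-weighted average of what similar users thought. The users, sites, and ratings below are entirely made up for illustration.

```python
import math

# Hypothetical ratings: user -> {site: rating on a 1-5 scale}
ratings = {
    "alice": {"kayak.com": 5, "orbitz.com": 3, "launchcast": 4},
    "bob":   {"kayak.com": 4, "orbitz.com": 2, "tickle.com": 5},
    "carol": {"orbitz.com": 5, "tickle.com": 1, "launchcast": 2},
}

def cosine_similarity(a, b):
    """Cosine similarity over the sites two users have both rated."""
    common = set(a) & set(b)
    if not common:
        return 0.0
    dot = sum(a[s] * b[s] for s in common)
    norm_a = math.sqrt(sum(a[s] ** 2 for s in common))
    norm_b = math.sqrt(sum(b[s] ** 2 for s in common))
    return dot / (norm_a * norm_b)

def predict_rating(user, site):
    """Similarity-weighted average of other users' ratings for `site`."""
    num = den = 0.0
    for other, other_ratings in ratings.items():
        if other == user or site not in other_ratings:
            continue
        sim = cosine_similarity(ratings[user], other_ratings)
        num += sim * other_ratings[site]
        den += sim
    return num / den if den else None

# How would alice feel about a site she has never rated?
print(predict_rating("alice", "tickle.com"))
```

Real deployments add a lot on top of this (rating normalization, scale), but the core “members like you also liked” mechanic is just this weighted-neighbor average.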
Perhaps the biggest question is whether consumers would be willing to part with so much information in exchange for a more personalized Internet experience. Again, I think that – in the right context – this wouldn’t be a problem. Way back in 1999, John Hagel III and Marc Singer published the book Net Worth, which suggested this very scenario. The concept is really simple: consumers would actually pay for a program that aggregated all of their personal wants and needs and used this information to direct them to the right site, rather than a list of potentially right Web sites.
And again, if you don’t believe me, look around the Internet: it’s already happening. Users pay $50 a year for Yahoo’s LaunchCast radio, in large part because it customizes an Internet radio station based on a user’s personal preferences (there are plenty of free Internet radio stations). eHarmony asks users to go through literally hundreds of psychographic and demographic questions (at least a 45-minute commitment) before the user gets matched with a single potential mate. And compensation analysis companies like Payscale.com also require users to enter tons of personal data (and I mean personal, like compensation information), and then pay a fee to get information about salaries in their field.
Someday, Internet users will have a choice – between a typical search engine experience (free but inefficient) and an infomediary service (it may cost money, but it delivers highly personalized results). Of course, “free” search engines aren’t really free when you consider the amount of extra time you spend digging through the results, and the ads that are ever-encroaching on the actual results (even on Google!). This is sort of similar to regular TV versus TiVo. Yes, you can still get TV for free, but more and more people will pay money for a system that eliminates commercials and ‘learns’ their viewing preferences.
I love the story about the Blockbuster executives bragging on a conference call about how they had impenetrable barriers to entry in the video rental industry. ‘We have a store within five miles of 80% of Americans’ was the mantra. The economies of scale required to equal such a massive number of stores would be huge – the game was over, Blockbuster had won! As we now know, Blockbuster apparently forgot about the concept of “disruptive technology.” Netflix and on-demand video through cable have made the local store, if anything, a competitive disadvantage. And the not-too-distant future of video downloads via the Internet will only lessen any real estate benefit.
In that vein, it’s clear that Google has “won” the search engine battle (just ask Yahoo CFO Susan Decker). That, however, may be a Pyrrhic victory. OK, so I’m being more than a little melodramatic. There’s no question that search engines are going to be a big part of the Internet – and American – culture for some time to come, and Google is going to make a lot of money being the #1 search engine. But 20 years from now, or maybe even five years from now, I predict the disruptive technology that is infomediaries will arrive.
Maybe someday I’ll be telling my grandchildren about my younger years – back when I had to use travel agents, brokers, and search engines. Boy will they laugh.