There are (at least) two major shortcomings to today’s forecasting approaches. First, forecasters are using increasingly narrow definitions, so they aren’t seeing things in the broader context. Second (and relatedly), forecasters largely fail to look beyond what is already visible – many of today’s forecasts are just projections of the current state of affairs. Taken together, today’s forecasters are missing the broader landscape of innovation that is taking place.

It is important to remember that adoption of innovation – whether it is consumer adoption or enterprise adoption – follows s-shaped (logistic) curves. Growth isn’t linear. Adoption typically starts very slowly and remains low for an extended period (i.e., several years). After several years of low to no adoption, growth accelerates at an accelerating rate (positive 2nd derivative) for an extended period. Eventually adoption slows – the 2nd derivative turns negative, followed eventually by the 1st derivative turning negative as the market declines.
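The shape described above can be sketched numerically. This is a minimal illustration using an idealized logistic curve with made-up parameters (a 12-year horizon and a year-6 midpoint are assumptions for illustration, not market data):

```python
import numpy as np

def adoption(t, saturation=100.0, midpoint=6.0, steepness=1.0):
    """Cumulative adoption at time t (years since launch) - illustrative logistic."""
    return saturation / (1.0 + np.exp(-steepness * (t - midpoint)))

years = np.arange(0, 13)
curve = adoption(years)

growth = np.diff(curve)          # 1st derivative: units added per year
acceleration = np.diff(growth)   # 2nd derivative: change in the growth rate

# Early on, adoption is low but the 2nd derivative is positive (growth is
# accelerating); past the midpoint the 2nd derivative turns negative as
# the curve approaches saturation.
```

Plotting `curve` against `years` reproduces the familiar s-shape; the sign changes in `acceleration` mark the inflection the post describes.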

This is where today’s analysts tend to stop. The drive toward specialization forces forecasters to focus on very narrow definitions. Analysts tend to take a micro approach to forecasts and focus only on single s-shaped adoption curves. They look at things in isolation. They see markets in one of two states – growing or shrinking.

In reality, developments materialize over long stretches of time and typically overlap other trends. A single s-shaped adoption curve will beget dozens or even hundreds of other s-shaped adoption curves. In this way, innovation is much larger in aggregate than forecasters convey with their narrowly-defined forecasts. We think in narrow terms. In reality, innovation follows k-waves (long, overlapping Kondratiev-style waves) rather than standard normal curves. More, k-waves overlap. Because these developments occur over long periods of time, most companies are able to adjust to the evolutionary process of adoption. They are able to move to areas with positive 2nd derivatives while abandoning areas with negative 1st derivatives. This transition doesn’t always happen smoothly, but it does happen naturally. The desired immediate applicability of forecasts, however, fails to account for this evolutionary process.
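The aggregation point can be made concrete: sum a few staggered s-curves and the total keeps climbing long after any single curve has saturated. The three waves below (their start years and ceilings) are arbitrary illustrative assumptions:

```python
import numpy as np

def s_curve(t, start, saturation):
    """One logistic adoption curve launching at `start` (illustrative parameters)."""
    return saturation / (1.0 + np.exp(-(t - start - 5.0)))

t = np.arange(0, 30)

# Three hypothetical, overlapping waves: each innovation begets the next.
waves = [s_curve(t, start, sat) for start, sat in [(0, 50), (7, 80), (14, 120)]]
aggregate = np.sum(waves, axis=0)

# Each individual wave flattens out, but the aggregate keeps rising as new
# curves take over from maturing ones - the "larger in aggregate" effect.
```

A forecaster tracking only the first wave would call the market mature just as the aggregate enters its steepest growth.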

Universal search – the ability to search for content across sources – is one of the holy grails of consumer content management.  Over the last 5-6 years search has improved significantly and become more ubiquitous.  Google Desktop is a great search tool for locally stored files.  I’ve written in the past about Xobni and the ability to search and organize email content (your inbox and archive files). Adding the link in the previous sentence was easily done using the search capabilities in the WordPress link tool.

Google – through Google TV – is making (or trying to make) a strong push into video content search.  Crestron has been active here as well. @juliejacobson writes about a Crestron patent published a few weeks ago.  The patent abstract describes the patent as:

a method for obtaining a single set of media search results from a search of media sources. The method includes providing a search query, executing a search of each of the media sources for media based on the provided search query, generating results of the searches, and consolidating the results of the searches into the single set of search results that include a list of media items with associated metadata.
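The abstract's method – query each source, gather results, consolidate into one list with metadata – can be sketched in a few lines. Everything here (the source names, the `MediaItem` shape, the in-memory catalogs) is hypothetical; a real implementation would query local storage, a DVR, streaming services, and so on:

```python
from dataclasses import dataclass, field

@dataclass
class MediaItem:
    title: str
    source: str
    metadata: dict = field(default_factory=dict)

# Hypothetical in-memory "media sources" standing in for real devices/services.
SOURCES = {
    "local": [MediaItem("Vacation 2010", "local", {"type": "video"})],
    "dvr":   [MediaItem("Vacation Planning Show", "dvr", {"type": "recording"})],
    "web":   [MediaItem("Vacation Ideas", "web", {"type": "stream"})],
}

def universal_search(query):
    """Run one query against every source, then consolidate the results
    into a single list of media items with their metadata attached."""
    query = query.lower()
    results = []
    for items in SOURCES.values():
        results.extend(item for item in items if query in item.title.lower())
    # Consolidation step: one merged, ordered list regardless of origin.
    return sorted(results, key=lambda item: item.title)

hits = universal_search("vacation")  # one gesture, matches from all three sources
```

The interesting engineering is hidden in the consolidation step – deduplication, ranking, and normalizing metadata across sources – but the single-query, single-result-set shape is the essence of the claim.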

This is exactly what is needed in content management.  Content is exploding across a myriad of sources, and the ability to search across those sources with a single gesture is extremely limited.  One of the greater obstacles thus far has been that the approaches are hardware-centric. In order to gain wide acceptance, I think universal search will need to take place across a number of devices.  Services like Netflix have gained ubiquity because they are available across content-oriented devices. Universal content search will not reach a similar ubiquity until it is hardware-agnostic.

Best Buy recently announced it would launch a connected TV under the Insignia brand using the TiVo user interface.  This is a great example of the Innovator’s Dilemma in action.

The Insignia brand is one of Best Buy’s house brands.  Like other private-label brands, it is frequently used as the opening price point for devices.  House brands tend to work best in maturing categories – where consumers have become comfortable with how they use their devices and are largely looking for replacements.  They also work for late adopters who are making their first purchase in a given category (and may be more price sensitive).  In both cases, these consumer segments are looking for low-priced options.

The Insignia brand has grown to represent some 10 percent of the television market.  Moving into the connected TV space is an interesting move for the brand.  It is certainly a growing segment of the declining TV market. In the first half of 2010, connected TVs represented about 8 percent of total shipments. Just a year later, their share had grown to about 20 percent of total TV shipments.  While they’ve grown significantly in terms of shipment share, they still represent a relatively low share of the installed base.
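A quick back-of-the-envelope calculation shows why shipment share runs so far ahead of installed-base share. The numbers below are illustrative assumptions (not figures from any report): a base of 100 units, with roughly 10 percent of it replaced each year:

```python
# Illustrative assumptions only - not actual market data.
installed_base = 100.0            # total TVs in use (arbitrary units)
annual_shipments = 10.0           # ~10% of the base ships/replaces per year
connected_shipment_share = 0.20   # connected TVs' share of shipments

connected_base = 0.0
for year in range(3):
    # Each year, only the connected slice of that year's shipments is added.
    connected_base += annual_shipments * connected_shipment_share

connected_base_share = connected_base / installed_base
# Even after three straight years at a 20% shipment share, connected TVs
# are only ~6% of the installed base in this simple model.
```

This ignores base growth and retirements, but the point survives: installed-base share lags shipment share by years, which is exactly the gap the Insignia move exploits.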

The move will allow the Insignia models to enter a growing segment of the TV market.

I’ve written about curation in the past. One of the keys to curation – one of the driving features of why curation matters – is discovery.  Curation drives discovery, which drives more curation. The battle within curation today – and in the years to come – is over how curation is done.  There are a variety of web services that empower the individual to become a curator. But machine curation will also play an important role.  Here are two recent TechCrunch stories covering app discovery and content discovery. In both you can see how machine and individual will each play an important role in curation and discovery.

I think a lot about how technology diffuses through a society. The implications of technology diffusion are more pronounced today than ever before.  Not enough diffusion-of-innovation modeling goes into current thinking on device ownership and unit volume. More, large research vendors need to sell reports, so they frequently release reports touting large estimates. It is all too common to see press releases that read like this:

“Double-digit growth will continue through the following four years and by year X product Y will reach Z million units a year.”

The problem, of course, is that no one ever goes back and compares these early estimates to what actually happened.  That is part of the rationale for publishing estimates that are three or four years out.  Netbook estimates made in late 2009 and early 2010 are the poster child for excessive estimates and misguided adoption expectations. While looking for a link for another post, I found the following as a reminder:

ABI forecasts annual netbook shipments will reach 139 million in 2013

IDC forecasts double-digit growth for netbooks in 2010

and the list goes on.