Monday, March 30, 2009

Mobile Beyond Carriers: VoIP and the Web are Here, But Are Content Companies Ready?


As CNET and others report, the launch of voice over Internet Protocol (VoIP) phone service via an iPhone application from Skype has not shifted the world on its axis, yet it's a significant step towards a new era in mobile communications. The introduction of Web capabilities on Apple's iPhone was already the cornerstone of the device's widespread appeal among mobile content enthusiasts, fueled by a Web browser that works pretty much as any Web browser should and by AppStore content applications that make use of Web communications. Now that VoIP is available on the iPhone - presumably with Apple's tacit blessing - one wonders who is eating whose lunch in the battle for the future of mobile services.


Certainly in the U.S. you can give the opening round of mobile battles to AT&T, which inked an exclusive deal with Apple for the iPhone's launch similar to exclusives that Apple negotiated elsewhere in the world. But as part of that deal Apple got AT&T and other global carriers to subsidize a huge chunk of the iPhone's retail price, enabling it to spread like wildfire amongst the "gotta have" gadget crowd. Now that VoIP services are beginning to make their debut on the iPhone - followed soon by emerging voice services from Google via its Web sites and, before long, via its Android mobile platform - the question becomes: have the mobile phone carriers been subsidizing the emergence of cross-network voice communications that will break their voice and content pricing strategies?

If this is the case, then I can't say that I can offer them much sympathy. AT&T happens to be the carrier for my local telephone service, which still has me guessing virtually every time whether a call to the towns next to us will be a local or "long distance" call. If I didn't need a copper-wire circuit for my home's burglar alarm I'd be done with them altogether. While VoIP is hardly a perfect voice medium, for eighty percent of our communications it's just fine. Moreover, for many younger people spending more time texting than speaking on their phones it may be more than enough most of the time. In the meantime we're stuck with voice and data plans on most mobile carriers based on the premise that voice is a doggone hard service to provide. Well, that's good for the tech players such as Apple who want someone to subsidize their push into mobile services, but it is at the expense of a broader iPhone-less marketplace that is in effect subsidizing the upper end of mobile content consumption.

This is not a scenario that is likely to change gracefully for the communications carriers. Web and VoIP-based services are going to start dominating mobile communications far more quickly than many imagine, especially as Google Android's cross-platform mobile operating system offers struggling communications companies some alternative pricing and marketing strategies. In the U.S., for example, Sprint is moving to lease out some of its underutilized mobile bandwidth to device and services vendors other than smart phone makers. How long is it before a number three or number four mobile carrier begins to smell the coffee and begins to offer pricing that reserves traditional mobile voice communications as a "just in case" or high-quality backup to a predominantly Web-based communications plan? Not long, by my estimate.

Content companies should bear in mind this more-rapid-than-expected shift towards Web-biased mobile carrier pricing when contemplating their own mobile pricing and marketing strategies. Already with the iPhone AppStore publishers and other content suppliers see the outlines of a premium content strategy that emphasizes functionality as a key part of their services as much as information. But as the Web and VoIP push in at an accelerated rate to drive more competitive carrier pricing, it's likely that revenues from these applications are going to be the icing on a much bigger cake of content services for the greater marketplace - one that will resemble more the Web as it is today than any return to a CompuServe-like "walled garden" era of application-enabled services.

The iPhone AppStore is a model for the emerging electronic newsstand era, but the content that will power the most successful of those applications will oftentimes not come from a particular branded content producer. The subsidization of iPhones by the carriers makes the prospect of endless premium content revenues enticing, yet as those subsidies fall by the wayside and the broader marketplace turns to mobile Web content it's doubtful that the novelty of mobile content applications alone will be enough to power publishers' mobile revenues. In short, the mobile Web has arrived and is going to force publishers to confront the same issues of commoditization and increased competition that they face today via desktop and laptop content consumption. Most publishers may look at Skype's move as little more than a bird flying by, but for those in the know it's the canary in the coal mine of old-era mobile communications content pricing.

Thursday, March 19, 2009

Time MINE: Online Lessons Creep Into Print Content

With newspapers and magazines folding virtually every week now in the face of a global economic crisis, Clay Shirky is comparing the scope of change sparked by online publishing's challenge to newspapers to the tumultuous change sparked by the rise of printing presses nearly five hundred years ago. From my perspective the scope is actually far broader than that. As I outline in the Content Nation book, the change fomented by the rise of online publishing is likely historical on an even grander scale - a scale perhaps not seen since the rise of centralized publishing by the world's first recorded civilizations thousands of years ago.

Whatever the ultimate breadth of the challenges facing traditional publishers, one thing is for certain: timidity in addressing the challenges presented by online publishing has not served them well. This timidity is reflected not just in the online portals offered by most traditional media companies but in their print strategies as well. You'd think that some of the lessons learned from online publishing would have worked their way into print offerings a long time ago. Yet more than two years after Wired Magazine offered its users the ability to put their own photo on a customized cover of their magazine (part of a promotion by Xerox), the mass customization of print remains largely a novelty in the eyes of most mass media publishers. But there are glimmers of hopeful signs that publishers may be getting ready to push further into print customization.

One recent sign of hope for mass customization is a new offering from Time, Inc.'s consumer media group called MINE, a service that allows people to build their own custom magazines from articles found in eight of its leading consumer publications. The actual customization seems to be quite limited at this point - you may specify your address, your age, up to five Time-owned magazines that you'd like to have content from, and answers to four questions that indicate your presumed tastes (Like sushi or pizza? Sing in the shower? Would you like to learn juggling or celebrity impersonation? Would you like to have dinner with Leonardo da Vinci or Socrates?). From these choices Time will pop out articles tailored to your profile in five issues of your MINE magazine, in print or digital form, all for free (Lexus appears to be the major sponsor for this effort).

On the scale of today's print offerings this is a fairly bold experiment, enabling Time brands normally built up separately through their various flagship publications to comingle in a common publication. It echoes in some ways the use of The Wall Street Journal's branded business content in some local newspaper editions, but with a level of customization not seen heretofore on the editorial side of a magazine cover. Silicon Valley entrepreneur Guy Kawasaki notes tongue in cheek in a recent Twitter message that perhaps it's even a copy of his Alltop's "online magazine rack" of popular topics concept. While I wouldn't discount that self-flattering comparison of Guy's entirely, I think that it's far more likely that Time has finally started to consider a broader range of lessons from online publications - albeit a bit late in the game - and how they may apply to its traditional strengths as direct marketing mavens.

The truth is that Time has been customizing both editorial and ad copy for years based on zip codes and other key demographic groupings. It may not be apparent to the typical person flipping through Sports Illustrated or whatever, but oftentimes they're highly tailored publications. With the technology in place already to do this type of customization on a per-title basis, it's a relatively small step to stage content on a more granular level from multiple titles into MINE issues. So in most respects MINE is an evolutionary step towards enabling multi-branded content in one delivery package. In a way MINE is akin to a "my [name of portal]" type of customization that has been part of online offerings for more than a decade - not only just evolutionary from a print perspective but old, old news from an online perspective.

So while MINE is a positive development, why is it taking traditional publishers so long to develop business models that make more efficient use of print technology as a content delivery system? I for one don't believe that print is at all a dead medium: it's just a horribly neglected medium that has been allowed to die in the hands of very inefficient business models as all of the publishing efficiencies flow to online venues. Reprint services demonstrate every day that print can be a highly effective and profitable targeted communications medium. Yet most publishers derive single-digit percentages of their revenues from custom printing. Hmm, tiny slivers of highly profitable printing versus huge swaths of increasingly unprofitable printing...what's wrong with this picture?

It's great that Time is trying out the market for custom aggregations of its own content, but let's be honest - publishers need to be far, far more aggressive in packaging their content in personalized publications tailored for individuals. Unfortunately for some publishers, the greatest opportunities in custom printing lie with those who are willing to let other business models drive the aggregation technologies that make that possible. Some of those business models may yet wind up in the hands of major publishers, but it's far more likely that after years of whining and wrestling, newspaper and magazine publishers will finally surrender to the notion that licensing their content through whatever print or print-like electronic vehicle serves their audience most effectively is going to be the most profitable and effective way for their print-formatted content to gain exposure. Applying the lessons of the Web to print must be a priority for print publications to survive and to thrive.

While I agree with Clay Shirky that the triviality of making electronic copies of content has changed the economics of the publishing business fundamentally, until some electronic medium has the simplicity, ease and readability of print publications there will be a highly exploitable market for print. In many instances people love to curl up in a time of relaxation to catch up with a print publication, oftentimes on a weekend or during travel. It's a luxury to spend time reading "unplugged" content - a luxury that will only be spent on a handful of print publications. Why not enable people to put whatever content will be of interest to them into that luxury experience? Branded portals for publishers are becoming less and less of a driver for building online revenues: why shouldn't publishers become more aggressive in putting their audiences in the driver's seat for aggregating the content that's of interest to them in print as well?

So kudos to Time for testing the waters with its MINE publication, but I do hope that major publishers will finally begin to see the light and start enabling the printing of massively customized print and print-formatted publications that aggregate content from whatever sources interest their audiences the most. The result will be far higher ad rates, far higher returns on investment and a much healthier print publishing business in the long run. Let's stop allowing printing presses to go dark in major cities just because the one publishing company running them cannot build a business model to support them. Let those printing presses roll with whatever content will command the highest interest from audiences, from whatever sources produce it, and the money will follow with due haste.

Thursday, March 12, 2009

Closing the Online Revenue Gap: Attributor Powers Automated Monetization Solutions for Distributed Content

A fundamental problem that the publishing industry faces in getting revenues from online content is that most of the value that can be created from their content lies beyond their own Web sites and portals. With billions of Web publications vying for people's attention and a relative handful of professionally produced publications competing for it, it's no wonder many media executives are humming the now-familiar "content in context" meme as they ponder how to make use of the Web's ocean of content to promote their own wares. The sad truth, though, is that most publishers are ill-equipped to get any money from their content beyond their own online publications. Most media organizations have tiny content licensing business development teams that typically trudge through protracted deals with a handful of publishing partners, leaving the lion's share of potential revenues from partners on the table.

Attributor Corporation has been hot on the trail of how to close the gap between potential revenues from content used across the Web and the ability to extract those revenues. The Attributor system works by listening to feeds of content from participating publishers. Attributor captures what they've published and then compares it to content that's been published on the Web. When Attributor finds content that's a full or partial match it compiles content usage reports for clients, who can then use automated tools from Attributor or their own methods to pursue the reuse of their content from a business and legal perspective.
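To make the matching step concrete, here is a minimal sketch of how full or partial reuse of an article might be detected. Attributor's actual algorithm is proprietary and not described here; this illustration assumes a simple "shingling" approach, comparing overlapping word windows between a source article and a candidate page, with the function names `shingles` and `reuse_ratio` being my own hypothetical labels.

```python
def shingles(text, n=8):
    """Return the set of overlapping n-word windows ("shingles") in a text."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def reuse_ratio(source_text, candidate_text, n=8):
    """Fraction of the source article's shingles that appear in the candidate page.

    1.0 means the candidate contains the full article; values in between
    indicate partial copying, such as the "at least half" cases cited below.
    """
    src = shingles(source_text, n)
    if not src:
        return 0.0
    return len(src & shingles(candidate_text, n)) / len(src)

# Example: flag a page that copies at least half of an article's text.
article = "full text of a publisher's article goes here"
page = "text scraped from a Web page found by a crawler"
if reuse_ratio(article, page) >= 0.5:
    print("substantial reuse detected")
```

A production system would need to be far more robust - normalizing markup and punctuation, hashing shingles for scale, and tolerating paraphrase - but the core idea of scoring how much of an article reappears elsewhere is what turns a Web crawl into the per-article usage reports described above.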

How big is the opportunity for monetizing reused content? Recently Attributor shared with me some research based on content from prominent publishers' Web sites fed into its system along with usage data that surfaced some profound statistics. The key thought-provoker emerging from this research is that the audience for people viewing content on sites that were not active syndication or licensing partners was more than five times larger than the audience on the publishers' own sites. Almost half of these largely "passive syndicators" were copying 90 percent or more of the content from publishers' articles and more than 70 percent of the copied articles were using at least half of the available content from articles. Before the publishers reading this post slip on their hair shirts and moan in protest, please consider this first: what publisher wouldn't want to have a 5X increase in potentially monetizable content inventory with no additional overhead?

The research also indicated that two-thirds of the sites using content from these leading publishers were providing links back to the publisher's sites, indicating that they were at least nominally cooperative in building traffic to their sites. Armed with data from Attributor, publishers can pursue on a more highly automated basis Web sites that use their content and turn passive syndicators into active publishing partners - and in the process of doing so shift the balance of traffic back into sites that will feed revenues to the publisher. Attributor projects that using their technologies could help to reduce non-cooperative passive syndicators significantly, potentially doubling traffic captured at publishers' own sites and nearly tripling the traffic visiting cooperative syndication partners. No doubt it would also help content reusers pressing the boundaries of fair use policy to understand what individual publishers considered to be fair use more quickly and effectively.

Attributor sees its data gathering and analysis tools as a key to unlocking significant new online revenues for publishers. It sees at least two basic options that publishers using its data can undertake to establish revenue streams rapidly. Option one: Attributor helps publishers reclaim their fair share of ad revenues from ads served up by existing ad networks on sites using their content. This could in theory help for managing both active and passive syndication partners. Option two: enable Attributor to funnel ads from existing networks and publishers' own direct ad sales to syndication partners. Obviously there are other steps that publishers could take based on Attributor data, but either of these options suggested by Attributor help both to reclaim ad revenues for legitimate publishers and syndicators efficiently and to reduce the revenues fed out by ad networks to non-legitimate syndicators.

To make it easier for publishers large and small to get an idea of the potential for Attributor to help them monetize content, the company has launched FairShare, a no-fee service that enables people to get data on sites using their content from Attributor analytics provided in an RSS feed. FairShare will pump out stats on individual articles and how they've been reused on specific Web sites, including data on what percentage of an article has been used, whether the reuser is running ads on the page on which it appears and whether there are linkbacks to the original content. As an option FairShare makes it easier for people using Creative Commons licensing to map their license terms to the patterns of use found in Attributor's Web site analysis. Although launched just a few days ago FairShare is already tracking more than 150,000 articles and has found more than 3.3 million shared copies of content. As seen in the example to the right, FairShare is finding sites that use just fair use snippets of ContentBlogger's content as well as sites that seem to take more than their fair share. If ContentBlogger were ad-supported and Attributor were funneling this data to the ad networks that support content clippers I could be seeing some automatic revenues from these sites. A nice thought in a slow ad economy, no?

Attributor technology has been launched recently as an underpinning for FreeWheel, a service that enables videos from YouTube and other outlets that are embedded on other Web sites to be served up with the ads that benefit the original video publisher the most. FreeWheel calls this concept "Monetization Rights Management," as opposed to the Digital Rights Management packaging that tries to keep others from distributing content themselves. FreeWheel notes - quite rightly, I believe - that legitimate viral distribution of content needs to be encouraged so that content can find its most valuable contexts. Once content is in a valuable context it can be monetized with ads and other marketing mechanisms that benefit both the creator of the content and the publisher that found a valuable context for their content.

As major publishers mull over the capabilities of Attributor technologies, hopefully they will begin to see that these offer a key solution to the dilemma of how to make money on content in an era in which controlling distribution is not only less feasible but also less desirable. To borrow from the language of my book Content Nation, the world is now a nation of publishers, a nation whose value cannot be ignored by traditional publishers as a source of monetizable contexts. Since most non-subscription Web content relies on search engines to maximize its ad revenues, Attributor's search-based technologies can enable publishers to understand who's using their content with the same tools that those publishers use to drive monetizable traffic to their sites. Using Attributor data and tools can enable a highly automated and efficient approach to revenue generation from viral distribution that would eliminate friction with those outlets that use a publisher's content fairly and that can allow publishers to keep on top of "bad apples" on a daily basis.

As major publishers such as The New York Times and The Guardian begin to set their content loose via sophisticated programming interfaces, the Attributor concept of using searching and content identification to establish commercial relationships automatically with publishers using their content can open up an era in which reused content creates higher value and revenues rapidly for publishers with lower audience acquisition costs. With revenue acquisition schemes such as Attributor's in place publishers can concentrate more on making their content as useful and as accurate as possible - and leave the inventiveness of where it's going to be most useful to the world at large.

Certainly publishers will continue to compete to make their own publications a destination of choice, but with only thousands of traditional publishers and billions of self-empowered Web and mobile publishers the time has come to use technology to harvest the value of content in as many publishing contexts as possible, as efficiently as possible. Most especially in the news industry, where getting people's attention in fleeting moments is increasingly difficult, the ability to harvest revenues from content reuse and linking more automatically is an absolute necessity.

This need to chase the contexts of content use in order to make money in online media does not mean that copyright is a dead concept. Far from it: copyright ensures that the creators of original works of authorship have the ability to claim ownership of the intellectual property that is rightfully theirs, especially when it is used in contexts where its use is harder to verify, such as in enterprises and in private communications such as emails, photocopying and reprints. But it's important to remember that the concepts of copyright were introduced into law when publishing was still a relatively fledgling industry, with few commercial outlets available and a crying need to get information and ideas out to the public via a still-young technology. The "printing press" of today is not any particular Web site or service but the Web as a whole: every person has the potential to play a role in the mechanism of publishing. As such, copy rights, while still relevant, have become less important than context rights - the ability to say how participants in a global peer publishing and aggregation process should recognize the value of a creative work. Nearly three years ago I introduced this concept at a presentation at BookExpo in Washington, DC, using a square logo as a symbol for context rights.

Today in the work of Attributor we see the beginnings of the effective monetization of context rights taking form. I am hopeful that publishers will finally begin to see the outlines of how to use technologies such as Attributor to forge more effective relationships with the global publishing mechanism of Content Nation to benefit the creative forces behind their content and to create new ways to define the value of their brands. It's a far different methodology than most publishers are used to, but in a world in which the fundamental nature of publishing has changed far more radically than most traditional publishers have dared to acknowledge, it is time for publishers to embrace context rights and to define their value propositions more effectively in a world whose very survival may depend upon the power of ubiquitous publishing to solve problems facing humanity rapidly.

(Full disclosure statement: I really have nothing to disclose; I have had no past or present commercial relationship with Attributor. I just believe that they are pursuing one of the most effective routes to content monetization available today and I hope that publishers pay close attention to their efforts.)

Thursday, March 5, 2009

Amazon Kindle on iPhone: eBooks go Mass Market. Kind Of. Almost.


The New York Times, The Wall Street Journal's Walt Mossberg and other prominent lights are weighing in on the launch of an application on Apple's iPhone that enables reading e-books compatible with Amazon's Kindle mobile device, with many analysts cooing about this as a huge event. There's no doubt that Kindle e-books have everything to gain from leapfrogging out of a pond of half a million Kindle devices into a lake of thirteen million-plus iPhone owners (just in time for "Content Nation," which is now available on Kindle). Better yet, since the Kindle application does not tie down Amazon to any exclusive marketing deal with Apple, the doorway is open for Amazon to march onto Nokias, Blackberries and phones equipped with Google's Android platform. As people owning Kindle-compatible book titles move from one mobile device to another, the Kindle Store on the Web will make it possible for them to use their e-book on any equipped device, "closing" their book on one gizmo and being able to "open" it on another one at the same spot. Think of it as an iTunes for books that's not tied down to any particular player. Not much to complain about here at first glance: it's the creation of the first true mass market platform for electronic books from major publishers. Kudos to Amazon and to the publishers that are playing with them to advance Kindle sales.

But let's look past the first glance and get to what this really means for book publishing. The good news is that Kindle books can now reach the relatively affluent and educated audience that has enough money to buy iPhones - many of whom may have the money for both an iPhone and a Kindle reader but not necessarily the desire to lug around two book-reading gizmos all of the time. Now e-books get to take a major step towards the "nearly everywhere" profile that Web content has on both Internet and mobile-based devices. The bad news, though, is that the book industry, already beholden to Amazon almost as much as music companies are beholden to iTunes for electronic sales, appears to be repeating the mistakes that are likely to prevent their revenues from growing quickly enough to sustain their business models. Put simply, book publishers have turned over the keys to their electronic printing presses to Jeff Bezos and said, "Knock yourself out, you know what to do more than we do." E-books will progress only as quickly as it suits Amazon - and only on those platforms that suit it.

A benevolent monopoly of this kind for electronic book distribution might be beneficial for publishers if it had global reach, but those 13 million iPhones represent only about half of the greater New York City metropolitan market. A good chunk, to be sure, but a far step away from, say, the 1.6 billion people using the Web or the billions of mobile phone users around the world. And even within that universe of 13 million iPhone users, a fair number of those people fall into the category of folks whom Steve Jobs believed would never really read much of anything. In the meantime the audience for books continues to get grayer and grayer. To put it another way, I don't see all that many people in book stores toting around iPhones. The Kindle packaging for iPhone solves a key licensing and distribution problem for book publishers that's likely to improve their profits in the short term, but it does not come even close to building marketable exposure for books on a scale that is likely to draw attention away from other forms of electronic content.

This brings us back to those music publishing companies which had such high hopes for the DRM-enabled iPhone agreements that they signed only a few years ago. This "magic bullet" seemed great at the time - and it certainly has been great for Apple's profits. But it did little to slow the rapid erosion of profits from music sales at most of the major music publishers. Put simply, the insistence on having packaging that seemed to protect their existing business models only delayed the point at which music publishers had to face that their models were going to miss the lion's share of revenues that could be generated online from music. What they saw in the Web was the world's largest music store. What they should have seen was the world's largest theatre and radio station rolled into one.

Book publishers in general don't suffer from the electronic piracy problems that plagued the music industry, so no doubt it seemed like a logical step to move into rights-protected distribution that enabled book publishers to manage industry metrics in much the same way that they have managed metrics on print book sales. But in focusing on protecting its existing business model, the book industry, like the music industry before it, is largely delaying the more troubling question of how it can make the most money possible from the global audience of billions who engage the Web and mobile devices daily.

Kindle book packaging is useful for traditional reading, but how, for example, can it facilitate even the most basic collaborative use of books? Basic uses of books such as discussions via book clubs, classroom discussion, fair-use excerpting, note-sharing and other value-add services are nowhere near the surface of the stack of potential Kindle developments. Beyond replicating basic uses of print books there is little if any thought given as to how multimedia can be integrated into Kindle books effectively. For example, the online version of the "Content Nation" book has about a dozen video clips embedded in the text. Even still photos of most of these clips did not make their way into the print edition because of traditional print publishing standards. Yet these same clips would be great to have in an electronic, Web-enabled version of the book.

While it's possible that an aggressive roll-out of Kindle readers on most major mobile devices could help to stave off some of the worst problems that are looming for book publishers, the truth is that they are years behind in developing the real opportunities for books in electronic format. Book publishers are facing the same revenue gaps that confront music, newspaper and magazine publishers that waited far too long to build robust online revenue models that could sustain them as their traditional revenue sources moved into legacy status. In the meantime the Google e-books initiative that builds on its book-scanning program promises to put millions of book titles on electronic devices that are no longer controlled by book publishers. In other words, Kindle may just turn out to be the "eight-track tape" solution for books - a technology that seemed to be extremely popular at first with the public for listening to tape-recorded music but that turned out to be a dead end for early adopters when more flexible and higher-quality technologies came along.

Every time publishers resist the fundamental dynamics of the Web, they usually come to regret it. Traditional book publishers still have an opportunity to redefine their future independent of the Kindle, but it's more likely that the explosion of alternative online book publishing services will begin to overtake Kindle-based books over the next few years as sources of content that are more flexible, more shareable and more attuned to the needs of new generations of readers to whom the term "cracking the books" is largely a metaphor. Traditional books and book publishers will live on, and Kindle will help them to live on for many years to come. But in the meantime a new book industry is being defined that will be the true future of books - with or without Kindles.
Reblog this post [with Zemanta]

Tuesday, March 3, 2009

Microsoft's Kumo Prototype Wrestles Powerset Search Features into Live Search

While the concept of the content organization features found in the Powerset search application was always compelling, the original content in the demo application set up for the early version of Powerset was not the most powerful presentation of its strengths. Now in the hands of its acquirer Microsoft, the Powerset features appear to be ready to take on a much-improved content set and interface in the guise of an internal project at Microsoft labeled "Kumo." As revealed by Kara Swisher at All Things Digital, an internal Microsoft memo is encouraging staff to play with the prototype search engine to get some initial feedback.

In spite of some scathing negative reviews from the search engine intelligentsia, the screen grabs provided by ATD of the Kumo interface look to be pretty competent. Gone is the over-busy Powerset interface, replaced by an interface that is at once Google-esque and yet unique. The top five web results are followed by results that match different facets of a search term. For example, results for the recording artist Taylor Swift return groupings of content available for her songs, lyrics, bio, music downloads and albums. On the left are possible searches by related artists and categories, as well as the ability to initiate new searches in video collections, bios and so on.
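The faceted grouping described above is easy to picture in miniature. The sketch below is purely illustrative - the result records, facet names and `group_by_facet` helper are my own invention, not anything from Microsoft's actual implementation - but it shows the basic idea of bucketing a flat result list into facets for display:

```python
from collections import defaultdict

# Hypothetical result records: (title, facet) pairs, as a faceted engine
# like Kumo might classify matches for a query such as "Taylor Swift".
results = [
    ("Love Story - single", "songs"),
    ("Love Story lyrics", "lyrics"),
    ("Taylor Swift biography", "bio"),
    ("Fearless (album)", "albums"),
    ("Our Song - single", "songs"),
]

def group_by_facet(results):
    """Group a flat list of search results into facet buckets for display."""
    buckets = defaultdict(list)
    for title, facet in results:
        buckets[facet].append(title)
    return dict(buckets)

grouped = group_by_facet(results)
```

A results page would then render the top Web hits first, followed by one section per bucket - songs, lyrics, bio and so on - much as the Kumo screen grabs suggest.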

It's unclear at this point whether Kumo will become a product name - it's apparently a word that means both "cloud" and "spider" in Japanese - or whether it's just an internal marker that may disappear as its features get absorbed into Microsoft's Live Search engine. For that matter, it's unclear that the features will make their way into production at all, though they are certainly useful enough. What is clear, though, is that Microsoft is going to continue to search for new ways to make alternatives to Google palatable in a way that might appeal to both enterprise and media audiences. I don't think that too many people harbor illusions about the ability to crack Google's dominant market share in search any time soon, but competition is good for the breed, as they say.

I suppose the most intriguing aspect of Google's success that challenges the challengers such as Kumo is how Google has attained its success without explicit content categorization features. One can go to dozens of knowledge management and search conferences every year and hear about how important good content categorization features are for the success of search engines - and then look at the nearly naked search results on Google to contemplate just how true that may be. Categorization specialists assume that having categories makes it easier to browse content collections. That may very well be true if you are in fact interested in browsing relatively finite and well-organized collections of content, but in general search engines have become less about browsing and more about delivering specific answers for most people. The average searcher now seems trained to refine their own searches via the "white box" rather than to traverse browsing categories.

This isn't to say that content categorization isn't useful: it's more a matter of where it turns out to be most useful. Where it does seem to help most is in portal solutions where someone has come to a specific page of content and may want to explore that site or database from different facets. Where people understand that there's a finite, well-curated collection at their disposal, categorization seems to do quite well. Where it's a matter of sifting through billions of pages for the needle in the haystack, most folks are getting used to typing in the best search string that they can think of. With that said, the features in Kumo do provide an interesting and engaging alternative to Google search results, but they'd probably be better off either enriching specific content portals or creating on-demand portals from result sets, so that those results become a more browsable set of content in their own right - and then, perhaps, attract a higher breed of advertising, if that's the goal. Instead of trying to out-Google Google, perhaps challengers such as Kumo need to think about how to out-aggregate the aggregators to build better revenue margins for smaller search operations. Something to wrestle with, perhaps.


Sunday, March 1, 2009

Rich Content for Everyone: Zemanta Brings Semantic Analysis to Blogs and CMS Platforms

Image representing Zemanta as depicted in Crun...

Last week's Social Media Club meeting was great for any number of reasons that I covered in my Content Nation blog post, but it was capped by one of those moments of serendipity that come along only so often. As I settled into my train seat on the way home, I noticed that my friend Jim Hirshfield was sitting in the seat behind me. Jim and I had last seen one another at last year's Cluetrain@10 celebration in New York City, just as he was looking to re-enter the startup space. Today Jim is VP of Business Development at Zemanta, a European startup with development offices in Slovenia that has built a nifty platform enabling publishers to enrich their online content via its semantic language processing tools.

Zemanta technology operates via a plugin for popular blogging and Web CMS platforms and with popular browser-based email services such as Yahoo! Mail and Gmail. As with other semantic processing services that parse documents to suggest related links, tags and content, Zemanta's technology pumps text being typed by a document author through its semantic filters to come up with relevant rich content that can be inserted into those documents. This in and of itself is not terribly revolutionary: publishing platforms have had similar tools for years to facilitate the development of rich content that can attract search engine traffic and keep audiences engaged. What's highly interesting about Zemanta's approach is that it is a free download that can be integrated within seconds into platforms that are popular with both bloggers and professional publishers. A "pro" model is available that can be tailored for a publisher's own content on their own platforms.

Best of all, the stuff just plain works. As you type along, Zemanta's suggestions for images, links, tagging and related content pop up in convenient spots near a page's editing window. This real-time analysis is quite impressive and remarkably effective: it seems to take only a few sentences to get going and it only gets better as you type more. A quick click or drag of the mouse and rich content is integrated easily into a blog post or article. It's giddily easy to enrich your articles: virtually every link, image and tag in this article was implemented with Zemanta. Zemanta's free download links into 10 million-plus items of content from free sources, including rights-cleared images from sources such as Flickr and Google Maps, articles from key bloggers and Wikipedia, information posted on social networking services, and content from CrunchBase, Amazon, YouTube and other popular sources. "Reblogging" content to other sites with trace linking to the original source is applied automatically to each post.
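To make the flow concrete: the editor plugin watches the draft and matches it against an index of enrichable content. The toy sketch below is only a rough analogy - Zemanta's actual semantic pipeline is proprietary and far more sophisticated than keyword matching - and the `CONTENT_INDEX` entries and `suggest` function are hypothetical names of my own:

```python
# Hypothetical index mapping key phrases to enrichment suggestions
# (links, images, tags) the way a plugin like Zemanta's might surface them.
CONTENT_INDEX = {
    "semantic web": {"type": "link", "target": "wikipedia:Semantic_Web"},
    "slovenia": {"type": "image", "target": "flickr:slovenia-photos"},
    "blogging": {"type": "tag", "target": "tag:blogging"},
}

def suggest(draft_text):
    """Return enrichment suggestions whose key phrases appear in the draft."""
    text = draft_text.lower()
    return [item for phrase, item in CONTENT_INDEX.items() if phrase in text]

# As the author types more text, more phrases match and more suggestions appear.
suggestions = suggest("A startup in Slovenia is changing blogging for everyone")
```

In the real product this matching runs continuously as you type, which is why the suggestions improve as the draft grows: a longer draft simply gives the analyzer more phrases to work with.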

High-end services may provide more features, content and functionality for semantic content integration, but for publishers that don't have the time, money or project bandwidth for such solutions and that need enriched content quickly, Zemanta offers remarkable power in its free version - as well as the ability to upgrade to a premium version that easily integrates publisher-specific sources. This can be particularly important for a publisher running blogging or open-source CMS platforms that will not be so easily integrated into some of the high-end semantic services. Zemanta allows these publishers to make rapid integration of content from their existing sources a very short project. In a world in which publishing platforms with 80 percent of what one would expect from a professional package now dominate the bulk of content being generated on the Web, Zemanta gives those platforms yet another "pretty-darn-good" asset that can help their content to compete effectively in online content markets. My thanks to Jim for being in the right place at the right time with a great tool for publishers of all sizes.
