Thursday, September 24, 2009

Google Sidewiki: Infrastructure for the Next Generation of Conversations

You are reading a blog post that started as a comment. That in and of itself is hardly unusual for people who leave detailed comments on one blog and then expand on them in their own, but the way that I did it was through Google Sidewiki, a new feature of the Google Toolbar for the Firefox Web browser. Once installed, an icon on the Toolbar enables you to attach a comment-like bit of information to a blog entry or other Web page that you're viewing, either about the whole page or about a selected section of text. Once you've had your say, your text (and it's only text - no links, images or other enhanced items are allowed) is saved in Sidewiki and can at the same time be pushed to an entry in one of your Blogger weblogs (finally, a small side benefit of using Blogger). You can also easily share a comment with someone via email, Twitter or Facebook.

Tools like Sidewiki have been around for many years, but none of them have found much of an audience. One reason seems to be that comment editing systems floating at the side of a page tend not to draw your attention as you scroll down it. Sidewiki may suffer this same fate in the short run, though its ability to be relevant throughout a page and contextual to very specific parts of a page makes it an interesting companion tool that may escape the disinterest shown to other annotation tools. Its presence only in the Firefox and Internet Explorer browsers also seems to limit the potential community of users, though versions for Chrome and other browsers such as Safari are likely soon. What is most likely to save Sidewiki from lack of interest is the fact that it's, well, a Google tool, of course. Google has lacked a reasonable entry point into social media communities for some time outside of lackluster experiments such as Orkut. The voting, abuse control and integrated features that make it easy to share Sidewiki content in lifestreaming services are ways for Google to play to its greatest strength - putting all of the Web in context - alongside the strengths of other social media services. So while it's still a somewhat iffy play, it does offer some solid thinking that may accelerate Google as a destination for valuable comment content extended out to all of the Web alongside its own Blogger blogs.

One place where you can see how this builds Google's destination content is in a feature that doesn't get much attention at first. After a bit of use I noticed a link in Sidewiki that says "view my Google profile." When you click on this link, you discover that your Google Profile page now has a tab that displays your Sidewiki comments along with links to the content that you were commenting on. This is an interesting feature, enabling Sidewiki content to act as a seeding mechanism for a Facebook-like stream of links and information. In typical Google fashion this is a subtle tool that builds content in places that you may not expect, integrating it both into the experience of visiting a Web site and visiting a friend's Google profile. It cries out for a widget-oriented implementation that would let Sidewiki integrate more closely with destination content, much as Facebook Connect does through sites like the Huffington Post.

All of this points to the elephant not yet in the room but waiting in the hallway: Google Wave. It's clear that Sidewiki and its integration with Google Profiles is custom-tuned for Wave technology, which would enable highly sophisticated real-time content sharing with trusted peers. That's a relatively long-term strategy, though, leaving lots of room for other comment sharing tools to gain market momentum. Sidewiki is yet another interesting piece of the Google puzzle, a puzzle that encompasses so many individual little pieces popping out of the Googleplex one at a time that it's hard to appreciate at times what it is that Google is trying to do. Perhaps that's the way that they want it - a charging elephant might be a little more alarming to people. But in the meantime, a lot of people have a hard time seeing even pieces of Google's social media strategy making sense.

I found Michael Arrington's comments on the new Google Sidewiki feature to be an oddly neutral and superficial analysis, albeit one with a bit of inside scoop. While this, like many other Google projects, may not seem like much at first, it has the potential for major impact. First, it comes at a time when comment spam is becoming a major problem. Technologies such as "captcha" character graphics that weed out automated comment spam are failing, as spammers are hiring people who work cheaply enough to defeat these mechanisms cost-effectively with manual entry of spam. Sidewiki's Digg-like voting and ranking will help to push such garbage to the bottom of the comment pile.
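Google hasn't published how Sidewiki actually ranks entries, but the general idea of letting community votes sink spam is easy to sketch. Below is a minimal, hypothetical Python illustration using a Wilson score lower bound, a common way to rank voted items so that a heavily downvoted comment falls below sparsely voted but well-regarded ones; the function name and sample data are invented for the example.

```python
import math

def wilson_score(upvotes: int, downvotes: int, z: float = 1.96) -> float:
    """Lower bound of the Wilson score confidence interval for the
    fraction of positive votes; it ranks low-vote spam below
    well-rated comments even when raw counts are small."""
    n = upvotes + downvotes
    if n == 0:
        return 0.0
    p = upvotes / n
    return (p + z*z/(2*n) - z * math.sqrt((p*(1-p) + z*z/(4*n)) / n)) / (1 + z*z/n)

comments = [
    {"text": "Useful context on the study cited here.", "up": 14, "down": 1},
    {"text": "BUY CHEAP WATCHES", "up": 0, "down": 9},
    {"text": "A link to the original dataset would help.", "up": 3, "down": 0},
]

# Spam with heavy downvoting sinks to the bottom of the pile.
for c in sorted(comments, key=lambda c: wilson_score(c["up"], c["down"]), reverse=True):
    print(f'{wilson_score(c["up"], c["down"]):.3f}  {c["text"]}')
```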

Second, comments are becoming a major source of content unto themselves, as seen in platforms such as Facebook and FriendFeed. Sidewiki is an ingenious play to get that kind of community content embedded almost anywhere, while at the same time enabling the community to develop a personality of its own. This is a unique kind of platform play, one that defines a "between the raindrops" approach to these competitors.

This all points to one key factor - most technology platforms have done very little to improve the value of comments or to address long-standing technical issues. Comments aren't a sexy feature by most techie standards, so the glory goes elsewhere. Google sees them as a major opportunity, and may have a major play as a result. I feel somewhat uncomfortable about the disintermediation factors, but the ability to post a comment as a blog entry on your Blogger weblog (finally, a reward for having stuck with it!) enables you to shift the conversation to focus on your own content fairly handily. The key weakness in this feature: you can't post links or graphics in your Sidewiki content, so your entries won't be very rich. I am sure that this will be addressed in time, perhaps as a part of Wave technology being introduced.

At the end of the day, if it makes your core content more valuable and it's better technology than what you can get yourself, it's probably a good thing. I welcome better comment solutions that can compete with this, but right now we all need a little relief from comment fatigue - especially if you're trying to keep the spammers away.

Sunday, September 20, 2009

Unleashing the New Ether: Likely FCC Net Neutrality Ruling on Wireless Resets Content Stage

In the early days of radio, the signals that shot out from station transmitters went into what was then termed the "ether," the invisible and universal medium that carried radio waves through the air to whatever device could receive them. While we don't talk about the "ether" of radio much these days, it's clear that the concept of a universal and transparent transmission medium has not worn out its appeal. The infrastructure that carries much of today's hard-wired Web, for example, is based on Ethernet networking technology, a term that deliberately echoes radio's universal medium as an analogy for the Web's capabilities. Better than radio, the Web offers virtually limitless "frequencies" - network addresses - that can broadcast on relatively clear channels on a global basis, accommodating hundreds of millions of broadcasters simultaneously.

The better-than-radio nature of the Web is a fairly constant source of frustration to telecommunications carriers, which are used to fee structures developed in the 20th century around scarce transmission and connection resources. For these companies, the flat-rate nature of most Web access fees, based on total available bandwidth, limits their ability to charge for access to content under whatever scheme suits their goals. The so-called "Net Neutrality" concept is therefore the target of much lobbying and jockeying by telecom carriers interested in upping their profits from the Web. The debate over Net Neutrality is particularly keen in the United States because of proposed regulations by the U.S. Federal Communications Commission to support Net Neutrality concepts, and it is about to get keener as the FCC begins to roll out its proposed Net Neutrality stance. The Wall Street Journal, along with others, reports that FCC Chairman Julius Genachowski will announce in a speech on Monday that the FCC will target not only hard-wired connections to the Web for Net Neutrality governance but will put Web connections provided by wireless carriers under the same policy as well.

This is unhappy news for telecommunications companies, especially those such as AT&T that are already struggling to make advanced Web-browsing mobile devices such as the iPhone work on their overburdened mobile wireless networks. To many of these companies, the concept of treating the Web as an infinite ether seems to run contrary to their ability to deliver services effectively. Yet here I sit, in the boarding lounge of an airline terminal, typing away happily on a high-quality broadband Web connection provided by a major telecommunications carrier. Moreover, if I were in an airport far from home, I might use my mobile Web connection to place a call on Skype, now the world's largest carrier of international telephone calls, to avoid the stiff fees charged by traditional telecommunications companies. The mobile Web may still be a little shaky, but it's a consistent enough medium in enough places that the FCC's argument for flat-fee network access is likely to hold water easily as a long-term policy for governing the growth of Web-based content and communications.

At the end of the day, though, this will be great news for publishers, who are struggling with an increasingly complex array of technology and marketing partners interested in taking their own share of the mobile pie from publishers' efforts to get content to their audiences. As both consumer and business-oriented content suppliers get more adept at mobile Web distribution, it becomes clearer that while telecommunications carriers were necessary partners in the early days of mobile Web distribution, their role as gatekeepers will become increasingly onerous as the mobile Web comes into its own as a neutral carrier for publishers' own sophisticated services. This doesn't leave much room for sympathy when it comes to the carriers, though: they already get pretty hefty fees from mobile Web services and can expect that the shift from hard-wired connections to the increasingly mobile Web will take care of them well in many ways.

Skype and the looming presence of Google Voice, though, may tend to undercut telecommunications carriers' profits from the traditional phone services that have helped to underwrite the growth of sophisticated mobile technologies. But by the time that happens, most devices carrying mobile Web services will be affordable enough that today's premium prices will no longer be necessary, making it far more likely that we will enter an era in which Web-based phone calls are the standard rather than the exception. When that starts to happen, mobile carriers will likely be making enough from Web access that they won't care much that many people have foregone traditional phone service in favor of Web-only mobile access that also carries their phone calls.

I do think that the timing of the FCC's policies is just right, given the rapid development of Web services via mobile channels. It comes at a time that will help to accelerate both competition and useful services while still allowing carriers an important piece of the action as they ease their way into the Web-first mobile world. Good luck to Chairman Genachowski with his speech on Monday - and may the best ether win.

Thursday, September 17, 2009

Filling the On-Demand Pipeline: Will Google Books Perk up Espresso?

On-demand book publishing has been a quiet reality behind the scenes for several years now, with outlets such as Amazon and a handful of major universities and bookstores generating some books on demand rather than shelving inventory. On the retail side of the equation, however, on-demand publishing is almost a total cipher, in spite of technologies such as the Espresso Book Machine from On-Demand Books. The EBM carries a still-hefty price tag and has kind of funky marketing (does anyone really name products with acronyms any more?), but it nevertheless represents a great opportunity for many new business models to surface around print media. Yet most publishers have failed to commit any significant resources to delivering their titles via consumer-demanded printing.

A new alliance between Google and On-Demand Books, though, may help to prime the on-demand business model with an abundance of content. Google has agreed to grant On-Demand Books access to two million public-domain titles available via its Google Books service. According to eWeek, Google is suggesting an $8 price tag for these on-demand books, with $1 of the proceeds going to On-Demand Books and $1 to Google, which intends to donate its share to charities. While there are already about 1.6 million titles available via Espresso machines, the highly affordable price tag for public-domain books and the online cachet of Google Books (not to mention the millions more titles waiting in the wings for a settlement of Google's rights to out-of-print copyrighted works) may prime the pipeline for wider distribution of on-demand books.

When computerized laser printers first came to the marketplace, they were huge, hulking machines found in major computer centers that had to handle high-volume printing. Today, of course, anyone can park a high-quality, high-speed color laser printer in their home for a few hundred dollars. The Espresso Book Machine seems to be caught between these two extremes, affordable enough that some larger retail outlets are willing to give it a try but not built in enough volume that your average neighborhood coffee shop, print shop or book store could afford to pop one in the corner somewhere for on-demand books. With the Google Books deal, highly affordable printed books from a wealth of titles may help to push the volume of on-demand printing at the consumer outlet level to the point that more affordable versions of EBM technology could be deployed.

This may be just what Google has in mind, as it yet again takes content that most publishers considered unmonetizable and seeks ways to make money with it. A buck a book for high-quality free content that costs almost nothing to store online is not a bad deal. Add in Google's expanding footprint in eBooks via deals with retailers and ePub-compatible reading device makers and the unmonetizable starts to look like a pretty good deal. In this era in which many publishers are still focused largely on incremental gains for their cash cows, it's nice to see Google and On-Demand Books turning cow flops into blue sky markets that may transform on-demand books into a lush pasture for new profits.

Tuesday, September 15, 2009

The Physics of Publishing: AIP UniPHY Creates a Template for Expert Communities

I had the pleasure of hearing two presentations recently by executives from the American Institute of Physics, the first by AIP Executive Director and CEO Fred Dylla at the recent ALPSP International Conference in Oxford, UK. Fred's presentation was an eloquent evaluation of the past, present and future of the scholarly publishing industry, in which he noted that indexing of scholarly content can be traced back to at least the 11th century. As much as we see scholarly publishing through the lens of print-oriented technologies, scholarly debates in fact preceded the widespread use of print publishing, and they will outlast print as those debates move into new media. I really appreciated Dylla's far-sighted view of the industry, as well as the very immediate and concrete steps that AIP is undertaking to transform its place in that industry.

The more here-and-now aspects of AIP's efforts to advance scholarly publishing were outlined in greater detail by Tim Ingoldsby, AIP's Director of Strategic Initiatives and Publisher Relations, at the recent Fall Meeting of ASIDIC, as part of a panel on social media that I was moderating. Tim's presentation focused on the details of the new AIP UniPHY online service, which combines content sources and features to power a new online community for locating and building relationships with experts in physics and related sciences. In many ways AIP UniPHY is leveraging key leading practices that can help scholarly publishers define highly effective models for their content and the community that creates and consumes it.

In short, UniPHY enables professionals to explore the topical and personal relationships that bind experts together through scholarly publishing and other channels of communication such as conferences. Organizations needing to locate experts in a particular field are often limited to online search engines, social networking services and subscription database services to filter through who is working on a specific topic, or must call upon consultants and peer contacts for recommendations. Being able to find experts efficiently and to understand their relationships to one another is a critical factor for many organizations trying to deliver timely innovations in their products, services and research efforts, so AIP is addressing a key "pain point" in its marketplace.

AIP UniPHY is a free online service that enables registrants to search for scientists who have published materials via AIP on topics that have been mapped to AIP's very detailed PACS topic categorization scheme. Using semantic analysis and visualization technologies from Collexis, similar to those used in the Collexis BiomedExperts portal, the result is a detailed map of the content produced by specific authors on very specific topics and of the people and institutions related to those authors. The very well-designed interface includes "six-degrees"-style mapping of relationships discovered through the analysis of people's publishing, as well as the ability for registrants to build out their own profiles for professional networking (a la LinkedIn) and to see which people in their professional networks are involved in specific lines of research.
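Collexis's relationship-mapping technology is proprietary, but the underlying "six-degrees" idea is straightforward: treat co-authorship as edges in a graph and search for the shortest chain between two people. Here is a minimal Python sketch under that assumption; the papers and author names are invented, whereas in UniPHY the edges would come from AIP's publication records.

```python
from collections import deque

# Hypothetical co-authorship data: paper title -> list of authors.
papers = {
    "Plasma Confinement Modes": ["A. Ndiaye", "B. Okafor"],
    "Tokamak Edge Turbulence": ["B. Okafor", "C. Liang", "D. Petrov"],
    "Laser-Plasma Diagnostics": ["C. Liang", "E. Moreau"],
}

# Build an undirected co-author graph: two people are linked
# if they appear on the same paper.
graph: dict[str, set[str]] = {}
for authors in papers.values():
    for a in authors:
        for b in authors:
            if a != b:
                graph.setdefault(a, set()).add(b)

def degrees_of_separation(start: str, goal: str) -> list[str] | None:
    """Breadth-first search for the shortest co-authorship chain."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for peer in graph.get(path[-1], ()):
            if peer not in seen:
                seen.add(peer)
                queue.append(path + [peer])
    return None

print(degrees_of_separation("A. Ndiaye", "E. Moreau"))
# ['A. Ndiaye', 'B. Okafor', 'C. Liang', 'E. Moreau']
```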

The beauty of combining scholarly publishing, a strong topic index and powerful semantic analysis of both content and expert relationships is that you wind up with a portal that is already very attractive to people who may be interested in interacting with one another in an online community. The use of Collexis technology to process AIP's content through the PACS categorization provides day-one content organization that helps people see the value of using the service in a more social fashion. The more than 180,000 scientists who contribute content to AIP publications and events get tools on AIP UniPHY that help them understand better who is doing what with whom and where, as well as tools that help them keep track of the closer relationships in their own networks. This provides a strong motivation for AIP members and authors to register for the service, and it will also attract people who do not publish themselves but who seek the expertise of those who do.

I was struck in general by the receptivity that society publishers at the ALPSP conference had to social media and very pleased to see that AIP was advancing into a platform that is a fine demonstration of what scholarly publishers can do to build a new core to their ongoing value propositions. The "how" and the "how much" of paying for scholarly publications is still up for grabs in many ways, but the plain picture is that scholarly publishers need new revenue streams and value points other than simply providing paid access to easily reproduced content. AIP UniPHY sidesteps the entire Open Access/traditional payment model question (it presents only abstracts of premium content) and instead provides a potentially vibrant online community environment that will be very hard for others to duplicate with technology alone.

Once professionals have a commitment to a publishing platform that draws them together with other professionals who are important to their work and their lives, they will tend to stick with that platform indefinitely. Clearly, printed scholarly journals and their electronic derivatives are waning as a center of community commitment, even if they are acknowledged as necessary to one's work and career. By focusing on the benefits of membership in an online community - and, after all, managing communities is what professional societies do best - AIP is setting the stage for future premium products that add value to that community of experts and expert-seekers in ways that will provide better value points for all concerned.

Most importantly, this model is highly reproducible; any publishing sector that has a detailed categorization scheme and lots of community-generated content at its disposal - in this instance, high-value scholarly content generated by a scientific community - can provide a platform that locks in reader interest and participation and that puts its premium content and services in their most valuable light. Society publishers need not be the only ones benefiting from this approach, but since they work on a "membership has its privileges" basis anyway, a platform such as AIP UniPHY that makes members accessible in powerful ways showcases the benefits of society publishing and membership clearly.

As Fred Dylla pointed out in his talk, there is a long history of learned professionals and scholars sharing their knowledge, and a potentially exciting future for societies that can move toward new models of publishing to support those experts. Here's hoping for all who are concerned about the future of scholarly publishing that AIP UniPHY can serve as an important model for drawing together experts effectively in ways that will create both highly valued content and effective research.

Monday, September 14, 2009

An Adless Recovery? The Rise of Social Media as a Major Marketing Investment

With many forecasts beginning to predict a bottom of sorts in the ad-supported content market, can an ad recovery be far behind? It's a question that is probably harder to answer than ever, given the rise of social media tools as an increasingly important platform for marketing influence and insight. Yes, we're bound to see increases in ad spending as the economy improves, but while the ads were away, companies have been learning to listen to their clients more effectively - through public social media channels and through their own online forums and customer support platforms - and to influence markets cost-effectively. One of the leaders in helping organizations listen and respond to their markets is Lithium Technologies, which provides both community forum tools and social media monitoring tools that integrate with popular CRM platforms such as Salesforce.com. To some, tools such as Lithium's may seem like stuff down in the bowels of product management rather than marketing. But in fact, it turns out that investments in social media gathering and monitoring are having measurable effects on marketing efforts.

As noted in a recent Lithium white paper, a Harvard Business Review study recorded a 56 percent increase in sales to members of an online auction site who participated in the site's community features. Similar results were seen at one Lithium customer, which reported $41 million in increased sales from its online community members along with $8 million in reduced support costs. In other words, companies are learning that customers who generate millions of page views on company Web sites and social media portals, learning from other customers and from company staff, are becoming powerful channels for revenue generation and brand management, as well as for reducing support overhead. Of equal importance, though, is the ability of tools such as Lithium's "Social CRM" suite to monitor feedback and discussions in forums and social media outlets, channeling them to support staff and sales and marketing teams so that they can respond to market opportunities and threats expressed in social media even as they emerge online.

With capabilities such as these, advertising becomes less critical as a tool for formulating messages that can be spread widely and effectively to the most important and influential market participants. Instead of focusing on "spinning" markets through ad campaigns, companies can engage markets through social media tools, empowering peers and product specialists whose influence on sales processes is more direct and immediate than that of ads placed in online content of general interest. Why bother paying a prominent media figure such as a sports hero to get people charged up about a new product or service via ads when influential peers whose opinions are trusted by others can do it for you for free?

So while advertising will play an important role in marketing for some time, the nature of how influence spreads through markets has changed fundamentally via social media, helping people gravitate toward content generated by the markets themselves and by companies and organizations able to communicate with markets on a peer level. To put it another way, when your clients and prospects generate more content, and more engaging content, than traditional publishers, you're going to put your marketing monies down on the channel that produces most cost-effectively. I believe that we're just in the very early days of publishers beginning to understand the likely impact of social media on their own organizations - even as their clients are already well down the path of exploiting it directly for their own purposes. So much for intellectual property rights when you can have intellectual influence rights.

Monday, September 7, 2009

It Depends on What "Semantic" Is: NetBase And Natural Language Processing Hit Hiccups with HealthBase

The word "semantic" is bandied about quite a bit these days in online publishing, a term that is used to label everything from systems that automatically categorize content based on the presence of key terms in its body to more human-assisted forms of content organization. Whatever the particular technology or methodology, though, using the language structure of content and queries to infer more than merely the presence of key terms or concepts can get a little tricky with content on the open Web. An example of the challenges found in implying meaning from both search queries and related online content surfaced recently with the launch of the new HealthBase online portal.

HealthBase is a showcase for the technologies of NetBase, a Mountain View, CA-based company specializing in using semantic language processing to unearth relationships in content collections not easily revealed by traditional keyword technologies. NetBase claims that HealthBase can help people sort through Web content to find solutions to medical problems, parsing their queries through natural language semantic filters and then using semantic processing to find content organized by specific aspects of the possible causes of and solutions to medical problems. While HealthBase attracted some kind words from Search Engine Land, some test queries by Technorati delivered less flattering results. For the query "aids," for example, the list of possible causes identified in Web content by HealthBase included "Jews," because HealthBase interpreted "aids" as the verb describing assisting people rather than as the disease's acronym. The possible cures for this possible cause of "aids" included "salt" and "alcohol."

There can be little doubt that NetBase took an enormous risk by exposing its cutting-edge technology in an open Web service focused on something as critical as healthcare, a field in which services from many well-funded providers have been established online for several years. With many people doubting the reliability of the Web as a source of medical information, glitches in a new service are not likely to make people feel more comfortable using online content from unvetted sources to consider courses of treatment. But the real problem is not the NetBase technology so much as the expectations of how well some technologies can deal with a wide array of semantic issues found in subject domains only tangentially related to a field of science.

The idea of exploring sources of content using semantic tools to parse out possible causal relationships can be made to work, but these technologies need a lot of pre-defined context to guide their efforts. For example, semantic analysis tools tend to work well on documents that are highly structured - say, a research paper abstract, or a news article in which a lede paragraph contains key information in a fairly predictable pattern. Getting semantic processing to work on less structured sources of content such as emails, Web pages and other open-ended formats requires a lot of "training data" - documents that are typical of successful matches for a given domain of information. Similarly, search engines or databases that use natural language processing to infer a topic from a query entered in a text interface may have too few words from which to infer the right context for a specific subject.
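To make the training-data point concrete, here is a toy, hypothetical sketch in Python: with even a few labeled snippets per sense, simple smoothed word counts can separate "AIDS" the disease from "aids" the verb - exactly the kind of domain context HealthBase appeared to lack for open Web queries. The snippets and labels are invented for illustration, not drawn from NetBase's system.

```python
from collections import Counter

# Tiny labeled training set: snippet text -> which sense of "aids" it reflects.
training = [
    ("hiv epidemic treatment research", "disease"),
    ("hiv infection immune therapy", "disease"),
    ("salt aids digestion naturally", "verb"),
    ("device aids hearing recovery", "verb"),
]

counts = {"disease": Counter(), "verb": Counter()}
for text, label in training:
    counts[label].update(text.split())

def classify(query: str) -> str:
    """Score each sense by overlap with its training vocabulary;
    add-one smoothing keeps unseen words from zeroing out a sense."""
    scores = {}
    for label, vocab in counts.items():
        total = sum(vocab.values())
        score = 1.0
        for word in query.split():
            score *= (vocab[word] + 1) / (total + len(vocab))
        scores[label] = score
    return max(scores, key=scores.get)

print(classify("hiv treatment"))            # disease
print(classify("alcohol aids relaxation"))  # verb
```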

Keyword-oriented search engines such as Google remain popular in part because they don't try to infer too much semantic knowledge from a given query. Instead, they rely on human understanding of the semantic context of a given keyword - for example, looking at the number of people visiting or linking to a page that appears to be a match - to help select possible matches. Type "aids" into Google, for example, and you get a lot of documents relevant to the disease AIDS. If you had this type of collection as a starting point and then applied semantic filters to look at causal relationships, you'd be in a much better position to apply domain-specific semantic processing tools.
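Google's actual ranking blends a great many signals and isn't public, but the basic point - let human behavior rather than semantic inference pick the context - can be sketched in a few lines. In this hypothetical Python example, pages matching a keyword are weighted by inbound links, so the popular disease-related pages surface above the stray "aids digestion" match; all URLs, counts and the log weighting are invented for illustration.

```python
import math

# Invented toy corpus: each page has some text and an inbound-link count.
pages = [
    {"url": "who.int/hiv-aids",     "text": "aids hiv epidemic treatment", "inlinks": 5200},
    {"url": "kitchen-tips.example", "text": "salt aids digestion recipes", "inlinks": 12},
    {"url": "cdc.gov/hiv",          "text": "hiv aids prevention data",    "inlinks": 4100},
]

def score(page: dict, query: str) -> float:
    """Keyword match count, damped by a human popularity signal."""
    matches = sum(page["text"].split().count(w) for w in query.split())
    return matches * math.log1p(page["inlinks"])

# The heavily linked disease pages rank first; the recipe page sinks.
for p in sorted(pages, key=lambda p: score(p, "aids"), reverse=True):
    print(f'{score(p, "aids"):7.2f}  {p["url"]}')
```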

Semantic processing applied in the manner of HealthBase can help to expose exciting possible relationships between different sets of content that might otherwise never have surfaced, making its potential worthy of being taken very seriously. But like someone trying to learn a foreign language just by walking down the streets of an unfamiliar country, applying the assumptions of one subject domain to any number of generally unrelated domains is not always the most efficient or reliable way to discover the most obvious causal relationships. Being able to learn and to apply lessons rapidly from a wide range of experiences is key to making such semantic processing work effectively. To some degree these kinds of services must offer "self-learning" - the ability of the semantic technology to recognize automatically, based on human input, when it has made mistakes, and to be tuned rapidly by humans, who will understand complex semantic relationships more quickly than most software.

No doubt HealthBase will benefit from such tuning over time. The expectations of people looking for concrete causal relationships, though, may take more time to win back. HealthBase is an exciting experiment in technology that will benefit from further experiments in how to apply these technologies effectively to specific market needs.

Saturday, September 5, 2009

Life with Ariel: The World of Real-Time Content Defines Today's PCs

After a day or so of tweaking, software downloading and restoring files from my JungleDisk network backup drive, Ariel has come to life in full. The fourth in a series of Dell Latitude laptop PCs that I have used (we'll forget the Compaq that I had for a corporate job), Ariel is the third unit I've owned to be named after an archangel, a small but welcome comfort when I need a machine that can deliver some assurance to a hard-working road warrior. The processing power of this E6400 model and its solid-state drive certainly help Ariel to deliver those assurances. Having been out of the PC purchasing loop for several years now, though, I must say that Ariel occupies a different place in the content hierarchy than the former units that I have owned - more a waystation than a destination in the stream of real-time content coming and going from a myriad of inputs and outputs.

The edges and guts of Ariel bristle with interfaces to all kinds of content sources and outputs: an SD card slot on the front for camera and mobile media, a Firewire port and four USB ports for high-speed serial connections (one of which doubles as an eSATA port for high-volume storage units), a high-definition video output port and a plain old LAN connector. Inside are wireless cards for WiFi, mobile broadband, GPS and Bluetooth-enabled devices. A CD/DVD drive is there for legacy media and storage, while the slot for the analog modem has finally said goodbye. In other words, this machine is more like a switchboard for the galaxy of content sources and output devices surrounding it than a little walled garden unto itself. The fact that I have oodles of disk space is not as important as the peta-oodles of storage and processing available in the networks surrounding Ariel.

The notion of PCs as switchboards and waystations for content is underscored by the main reason that I finally decided to spring for a new unit. My old unit was fine for browsing the Web and office automation tasks, but it groaned at the memory and processing required to produce video content. A new webcam that I purchased, able to produce high-definition video, was just not up to the task, complicated by a USB interface that was underpowered for processing video. Ariel is more than up to these tasks, equipped with its own tiny webcam to boot and a screen proportioned perfectly for video presentations. In a world in which video and other multimedia are becoming the focus of more mobile content than ever before - wait for a new generation of powerful mobile phones next year that will accelerate this trend significantly - PCs are becoming more of a filtering and production platform for sophisticated content that is oftentimes consumed on other platforms.

The other key factor that Ariel's power underscores is the depth and breadth of real-time information sources that it's able to handle. Dozens of browser tabs are no sweat for Ariel to manage, with streams from Twitter, email and videos humming along while I chug along on word processing, spreadsheets, graphics and slide presentations. Its dual-core processor is designed to maximize the efficiency of multi-process computing, a capability that's underused by the Windows XP operating system loaded onto Ariel but a help nevertheless. This is power that used to be available only in the trading rooms of investment banks, which consume hundreds of real-time information resources to make split-second decisions on securities.

With affordable multiple-screen displays and larger displays becoming more common in both office and home computing to consume all of this information, our desktop and laptop computing capabilities are starting to deliver the types of benefits that used to be the province of only a handful of securities traders. Integration of multiple content sources to help people attain the benefits of real-time computing power is going to become only more important as machines like Ariel begin to dominate the PC end of content production and consumption. With video and multimedia sources an increasingly important part of this real-time stream, the winners in publishing will be those who are able to understand the integration and collaboration requirements of people consuming information in ever more immediate decision-making cycles.

The other factor highlighted by Ariel's strengths is the constancy of content consumption in today's online environment. I settled for batteries that can keep Ariel going for about ten or twelve hours without recharging, but I could have opted for an even larger add-on unit that would have extended its off-cord power to eighteen hours. High-powered mobile smartphones and smartbooks are about to enter this realm also, with the ability to power video, Web browsing and other content-intensive applications for days between recharges. This "always on" culture of content production and consumption is leaving fewer and fewer gaps for people to consider alternative forms of publishing.

As emerging technologies such as Google Wave make instant content sharing and collaboration more immediate and global than ever before, the world of real-time content is going to place even more emphasis on instant awareness and consensus-building through publishing services. While the world has not become Wall Street, in some ways the content marketing concepts - and challenges - that shaped financial markets through new generations of technologies in previous decades are becoming the baseline for how most enterprise and consumer publishers will have to adjust to content markets in the years ahead.

Immediacy is not just important but essential to the process of making good decisions. Sophisticated analytics are needed to help people make sense of a myriad of real-time inputs and related archives. Sophisticated networks are needed to help people collaborate rapidly on high-value opportunities and to execute on those opportunities cost-effectively. All of this requires sophisticated and affordable cloud infrastructure that will enable these services to scale cost-effectively and to minimize technology investments in markets that reward rapid adoption of new technology advantages. Look no further than companies like Bloomberg and Thomson Reuters to understand the full cycle of changes that will be required if you plan to survive and thrive in the years ahead as real-time information changes your own markets.

So here I go, off to a new era of slugging it out with my keyboard, mouse and webcam to produce and consume content in real-time more productively than ever before. I am glad to have Ariel as my new road warrior compadre. My travel bag will be a lot lighter thanks to all of its built-ins and my life will be more content-centric and real-time than ever. I hope that's a good thing.