Watson and The Future of ECM

In the past, I have whipped out my ECM powered crystal ball to pontificate about the future of Enterprise Content Management.  These are always fun to write and share (see Top 10 ECM Pet Peeve Predictions for 2011  and Crystal Ball Gazing … Enterprise Content Management 2020).  This one is a little different though …  on the eve of the AIIM International Conference and Expo at info360, I find myself wondering … what are we going to do with all this new social content … all of these content based conversations in all of their various forms?

We’ve seen the rise of the Systems of Engagement concept and a number of new systems that enable social business.  We’re adopting new ways to work together, leveraging technologies like collaborative content, wikis, communities, RSS and much more.  All of this new content being generated is text based and expressed in natural language.  I suggest you read AIIM’s report Systems of Engagement and the Future of Enterprise IT: A Sea Change in Enterprise IT for a perspective on the management aspects of the future of ECM.  It lays out how organizations must think about information management, control, and governance in order to deal with social technologies.

Social business is not just inside the firewall though.  Blogs, wikis and social network conversations are giving consumers and businesses a voice and power they’ve never had before … again based in text and expressed in natural language.  This is a big deal.  770 million people worldwide visited a social networking site last year (according to a comScore report titled Social Networking Phenomenon) … and amazingly, over 500 billion impressions about products and services are being made annually (according to a new book, Empowered, written by Josh Bernoff and Ted Schadler).

But what is buried in these text based natural language conversations?  There is an amazing amount of information trapped inside.  With all these conversations happening between colleagues, customers and partners … what can we learn from our customers about product quality, customer experience, price, value, service and more?  What can we learn from our internal conversations as well?  What is locked in these threads and related documents about strategy, projects, issues, risks and business outcomes?

We have to find out!  We have to put this information to work for us.

But guess what?  The old tools don’t work.  Data analysis is a powerful thing but don’t expect today’s business intelligence tools to understand language and threaded conversations.  When you analyze data … a 5 is always a 5.  You don’t have to understand what a 5 is or figure out what it means.  You just have to calculate it against other numeric indicators and metrics.

Content … and all of the related conversations aren’t numeric.  You must start by understanding what it all means, which is why understanding natural language is key.  Historically, computers have failed at this.  New tools and techniques are needed because content is a whole different challenge.  A very big challenge.  Think about it … a “5” represents a value, the same value, every single time.  There is no ambiguity.  In natural language, the word “premiere” could be a noun, verb or adjective.  It could be the title of a person (a premier), an action or the first night of a theatre play.  Natural language is full of ambiguity … it is nuanced and filled with contextual references.  Subtle meaning, irony, riddles, acronyms, idioms, abbreviations and other language complexities all present unique computing challenges not found with structured data.  This is precisely why IBM chose Jeopardy! as a way to showcase the Watson breakthrough.
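To make the ambiguity concrete, here is a quick illustration using the open-source NLTK toolkit … my choice for illustration only, not the technology inside Watson.  The same surface word gets a different part-of-speech tag depending on context, and even those tags are just a statistical guess:

```python
# Illustration only: NLTK is an open-source toolkit, not the NLP engine inside Watson.
# It shows how the same surface word takes different part-of-speech tags in context.
import nltk

nltk.download("punkt", quiet=True)                       # tokenizer model
nltk.download("averaged_perceptron_tagger", quiet=True)  # POS tagger model

sentences = [
    "The premiere of the play is tonight.",   # noun: the first night
    "The theatre will premiere the play.",    # verb: an action
    "She met the premier of Ontario.",        # noun: the title of a person
]

for s in sentences:
    tokens = nltk.word_tokenize(s)  # lexical analysis: characters -> tokens
    print(nltk.pos_tag(tokens))     # each token tagged in context
```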

IBM Watson (DeepQA) is the world’s most advanced question answering machine that uncovers answers by understanding the meaning buried in the context of a natural language question.  By combining advanced Natural Language Processing (NLP) and DeepQA automatic question answering technology, Watson represents the future of content and data management, analytics, and systems design.  IBM Watson leverages core content analysis, along with a number of other advanced technologies, to arrive at a single, precise answer within a very short period of time.  The business applications for this technology are limitless starting with clinical healthcare, customer care, government intelligence and beyond.

You can read some of my other blog postings on Watson (see “What is Content Analytics?, Alex”, 10 Things You Need to Know About the Technology Behind Watson and Goodbye Search … It’s About Finding Answers … Enter Watson vs. Jeopardy!) … or better yet … if you want to know how Watson actually works, hear it live at my AIIM / info360 main stage session IBM Watson and the Impact on ECM this coming Wednesday 3/23 at 9:30 am.

BLOG UPDATE:  Here is a link to the slides used at the AIIM / info360 keynote.

Back to my crystal ball … my prediction is that natural language based computing and related analysis is the next big wave of computing and will shape the future of ECM.  Watson is an enabling breakthrough and the start of something big.  With all this new information, we’ll want to understand what is being said, and why, in all of these conversations.  Most of all, we’ll want to leverage this newfound insight for business advantage.  One compelling and obvious example is being able to answer age-old customer questions like “Are our customers happy with us?”  “How happy?”  “Are they so happy, we should try to sell them something else?” … or … “Are our customers unhappy?”  “Are they so unhappy, we should offer them something to prevent churn?”  Understanding customer trends and emerging opportunities across a large set of text based conversations (letters, calls, emails, web postings and more) is now possible.

Who wouldn’t want to understand their customers, partners, constituents and employees better?  Beyond this, Watson will be applied to industries like healthcare to help doctors diagnose diseases more effectively, and this is just the beginning.  Organizations everywhere will want to unlock the insights trapped in their enterprise content and leverage all of these conversations … in ways we haven’t even thought of yet … but I’ll save that for the next time I use my ECM crystal ball.

As always … leave me your thoughts and ideas here, and I hope to see you Wednesday at The AIIM International Conference and Expo at info360 http://www.aiimexpo.com/.

IBM at 100: UPC … The Transformation of Retail

In my continuing series of IBM at 100 achievements … this is one of my favorites of all the ones I plan to republish here. The humble Universal Product Code (UPC), also known as the bar code, along with the related deployment of scanners, fundamentally changed many of the practices of retailers and all organizations that buy and move things, from large industrial equipment to pencils purchased in stationery stores. These two technologies led to the use of in-store information processing systems in almost every industry around the world, applied to millions of types of goods and items. UPC is planet Earth’s most pervasive inventory tracking tool.

N. Joseph Woodland, later an IBMer but then working at Drexel Institute of Technology, applied for the first patent on bar code technology on October 20, 1949, and along with Bernard Silver, received the patent on October 7, 1952. And there it sat for more than two decades. In those days there was no way to read the codes, until the laser became a practical tool. About 1970 at IBM Research Triangle Park, George Laurer went to work on how to scan labels and to develop a digitally readable code. Soon a team formed to address the issue, including Woodland. Their first try was a bull’s-eye bar code; nobody was happy with it because it took up too much space on a carton.

Meanwhile, the grocery industry in post-war America was adapting to the boom in suburban supermarkets–seeking to automate checkout at stores to increase speed, drive down the cost of hiring so many checkout clerks and systematize in-store inventory management. Beginning in the 1960s, various industry task forces went to work defining requirements and technical specifications. In time the industry issued a request to computer companies to submit proposals.

IBM’s team had also reworked its design, moving to the now familiar rows of bars, each containing multiple copies of the data.  Woodland, who had helped create the original bull’s-eye design, later worked on the bar code and wrote IBM’s response to the industry’s request for proposals.  Another group of IBMers at the Rochester, Minnesota Laboratory built a prototype scanner using optics and lasers.  In 1973, the grocery industry’s task force settled on a standard that very closely paralleled IBM’s approach.  The industry wanted a standard that all grocers and their suppliers could use.

IBM was well positioned and became one of the earliest suppliers of scanning equipment to the supermarket world.  On October 11, 1973, it brought to market a system called the IBM 3660, which in time became a workhorse in the industry.  It included a point-of-sale terminal (digital cash register) and a checkout scanner that could read the UPC symbol.  The grocery industry compelled its suppliers of products in boxes and cans to start using the code, and IBM helped suppliers acquire the technology to work with the UPC.

On June 26, 1974, the first swipe was done at a Marsh supermarket in Troy, Ohio, which the industry had designated as a test facility.  The first product swiped was a pack of Wrigley’s Juicy Fruit chewing gum, now on display at the Smithsonian’s National Museum of American History in Washington, D.C.  Soon, grocery stores began adopting the new scanners, while customers were slowly educated on their accuracy in quoting prices.

If there had been any doubts about the new system’s prospects, they were gone by the end of the 1970s. The costs of checking out customers went down; the accuracy of transactions went up; checkouts sped up by some 40 percent; and in-store inventory systems dramatically improved management of goods on hand, on order or in need of replenishment. And that was just the beginning. An immediate byproduct was the ability of stores to start tracking the buying habits of customers in general and, later, down to the individual, scanning bar coded coupons and frequent shopper cards. In the four years between 1976 and 1980, the number of grocery stores using this technology jumped from 104 to 2,207, and they were spreading to other countries.

In the 1980s, IBM and its competitors introduced the new technology to other industries (including variations of the American standard bar codes that were adopted in Western Europe). And IBM Raleigh kept improving the technology. In December 1980, IBM introduced the 3687 scanner that used holographic technologies—one of the first commercial applications of this technology. In October 1987, the IBM 7636 Bar Code Scanner was introduced–and as a result, throughout the 1980s factories adopted the IBM bar code to track in-process inventory. Libraries used it to do the same with books. In the 1990s, hand-held scanners made it easier to apply bar codes to things beyond cartons and cans and to scan them, eventually using wireless technology. Meanwhile innovation expanded in the ability of a bar code to hold more information.

These technologies make it possible for all kinds of organizations, schools, universities and companies in all industries to leverage the power of computers to manage their inventories.  In many countries, almost every item now purchased in a retail store has a UPC printed on it, and is scanned.  UPC led to the retirement of the manual and electro-mechanical cash registers which, as a technology, had been around since the 1880s.  By the early 2000s, bar code technologies had become a $17 billion business, with codes scanned billions of times each day.

The full text of this article can be found on IBM at 100: http://www.ibm.com/ibm100/us/en/icons/upc/

Humans vs. Watson (Programmed by Humans): Who Has The Advantage?

DAY 3 UPDATE:  If you are a technology person, you had to be impressed.  We all know who won by now so I won’t belabor it.  Ken Jennings played better and made a game of it … at least for a while.  He seemed to anticipate the buzz a little bit better and got on a roll.

You may have noticed that Watson struggled in certain categories last night.  “Actors Who Direct” gave very short clues (or questions) like “The Great Debaters” for which the correct answer was “Who is Denzel Washington”.  For Watson, the longer the question, the better.  If it takes a longer time for Alex to read the question, Watson has more time to consider candidate answers, evidence scores and confidence rankings.  This is another reason why Watson does better in certain categories.  In an attempt to remain competitive in this situation, Watson has multiple ways to process clues or questions.  There is what is called the “short path” (to an answer).  This is used for shorter questions when Watson has less time to decide whether to buzz in or not.  Watson is more inconsistent when it has to answer faster.  As seen last night, he either chose not to answer or Ken and Brad beat him to it.

In the end, the margin of victory was decisive for Watson.  In total, $1.25 million was donated to charity, and Ken and Brad took home parting gifts of $150,000 and $100,000 respectively … pretty good for all involved.  The real winners are science and technology.  This is a major advance in computing that could revolutionize the way we interact with computers … especially with questions and answers.  The commercial applications seem endless.

DAY 2 UPDATE:  Last night was compelling to watch.  I was at the Washington, DC viewing event with several hundred customers, partners and IBMers.  The atmosphere in the briefing center was electric.  When the game started with Watson taking command, the room erupted in cheers.  After Watson got on a roll, and steamrolled Brad and Ken for most of Double Jeopardy, the room began to grow silent in awe of what was happening. 

Erik Mueller (IBM Research) was our featured speaker.  He was bombarded … before, during and after the match … with questions like “How does he know what to bet?”, “How does Watson process text?”, “How would this be used in medical research?”, “What books were in Watson’s knowledge base?”, “Can Watson hear?”, “Does he have to press a button like the human contestants?” and many more.

I was there as a subject matter expert, and even though the spotlight was rightfully on Erik, I did get to answer a question on how some of Watson’s technology is being used today.  I explained how our IBM Content Analytics product is used and how it is helping to power Watson’s natural language prowess.

When Watson incorrectly answered “What is Toronto????” in Final Jeopardy, the room audibly gasped (myself included).  As everyone seemed to hold their breath, I looked at Erik and he was smiling like a Cheshire cat … brimming with confidence.  The room cheered and applauded when Watson’s small bet was revealed … a seeming acknowledgement of the technological brilliance.  Applause for a wrong answer!

Afterwards, there were many ideas on how Watson could be applied.  My favorite was from a legal industry colleague who had a number of suggestions for how Watson could optimize document review and analysis that is currently a problem for judges and litigators.

Yesterday (below) I said the humans have a slight advantage.  And while Watson has built an impressive lead, I still feel that way.  Many of yesterday’s categories played to Watson’s fact based strengths.  It could go the other way tonight, and Brad and Ken could get right back into the match.  The second game will air tonight in its entirety, and the scores from both games will be combined to determine the $1 million prize winner.  Watson enters tonight with a lead of more than $25,000.  IBM is donating all of its prize winnings to charity, and Ken Jennings and Brad Rutter are donating 50% of theirs.

DAY 1 POST:  After Day 1, Watson is tied with Brad Rutter at $5,000 going into Double Jeopardy – which is pretty impressive.  Ken Jennings has yet to catch his stride.  Brad and Ken seemed a little shell-shocked at first, but Brad rebounded right when Watson was faltering towards the end of the first round.  This got me thinking I should go into a little more detail about who really has the advantage … Watson or the humans?

If you watched it last night, you may have observed that Watson does very well with factual questions.  He did very well in the Beatles song category – the clues were mostly facts with contextual references to lyrics.  Answers that involve multiple facts, all of which are required to arrive at the correct response but are unlikely to be found in the same place, are much harder for Watson.  This is why Watson missed the Harry Potter question involving Lord Voldemort.  Watson also switched categories frequently, which is part of his game strategy.  You may have also noticed that Watson can’t see or hear.  He answered a question wrong even though Ken gave the same wrong answer seconds before.  More on this later in the post.

Here goes … my take on who has the advantage …

Question Understanding:  Advantage Humans

Humans:  Seemingly Effortless.  Almost instantly knows what is being asked, what is important and how it applies – very naturally gets focus, references, hints, puns, implications, etc.

Watson:  Hugely Challenging.  Has to be programmed to analyze enormous numbers of possibilities to get just a hint of the relevant meaning.  Very difficult due to variability, implicit context, and ambiguity of structure and meaning in language.

Language Understanding:  Advantage Humans

Humans:  Seemingly Effortless.  Powerful, general, deep and fast in understanding language – reading, experiencing, summarizing, storing knowledge in natural language.  This information is written for human consumption so reading and understanding what it says is natural for humans.

Watson:  Hugely Challenging.  Answers need to be determined and justified in natural language sources like news articles, reference texts, plays, novels, etc.  Watson must be carefully programmed and automatically trained to deeply analyze even just tiny subsets of language effectively.  Very different from web search, must find a precise answer and understand enough of what it read to know if and why a possible answer may be correct.

Self‐Knowledge (Confidence):  Advantage Humans

Humans:  Seemingly Effortless.  Most often, and almost instantly, humans know if they know the answer.

Watson:  Hugely Challenging.  1000’s of algorithms run in parallel to find and analyze 1000’s of written texts for many different types of evidence.  The results are combined, scored and weighed for their relative importance – how much they justify a candidate answer.  This has to happen in 3 seconds to compute a confidence and decide whether or not to ring-in before it is too late.
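To give a feel for the shape of that computation … and only the shape, since this is a toy sketch with made-up scorers and weights, not Watson’s actual algorithms … here is what “run evidence scorers in parallel, combine the weighted results into a confidence, and only ring in above a threshold” might look like:

```python
# Toy sketch of the *shape* of Watson's evidence scoring, not its actual code.
# Hypothetical scorers run in parallel; their scores are combined with weights
# into a confidence, and we "buzz" only if the confidence clears a threshold.
from concurrent.futures import ThreadPoolExecutor

def keyword_overlap_score(question, candidate):
    # Hypothetical evidence scorer: fraction of question words found in the passage.
    q, c = set(question.lower().split()), set(candidate.lower().split())
    return len(q & c) / max(len(q), 1)

def source_reliability_score(question, candidate):
    # Hypothetical evidence scorer: stand-in for a real source-quality model.
    return 0.8

SCORERS = [(keyword_overlap_score, 0.6), (source_reliability_score, 0.4)]
BUZZ_THRESHOLD = 0.5  # illustrative confidence cutoff

def confidence(question, candidate):
    # Run all scorers in parallel, then combine the weighted evidence scores.
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(fn, question, candidate) for fn, _ in SCORERS]
        scores = [f.result() for f in futures]
    return sum(w * s for (_, w), s in zip(SCORERS, scores))

conf = confidence("First U.S. president?",
                  "George Washington was the first U.S. president")
print(conf, "-> buzz" if conf >= BUZZ_THRESHOLD else "-> stay silent")
```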

Breadth of Knowledge:  Advantage Humans

Humans:  Limited by self-contained memory.  Estimates of human memory capacity run to thousands of terabytes … all much higher than Watson’s memory capacity.  The ability to flexibly understand and summarize what is humanly relevant means that humans’ raw input capacity is even higher.

Watson:  Limited by self-contained memory.  Roughly 1 million books’ worth of content stored and processed in 15 terabytes of working memory.  Weaker ability to meaningfully understand, relate and summarize human-relevant content.  Must look at lots of data to compute statistical relevance.

Processing Speed:  Advantage Humans

Humans:  Fast, Accurate Language Processing.  Native, strong, fast language abilities.  Highly associative, highly flexible memory and speedy recall.  Very fast to speed-read the clue, accurately grasp the question, determine confidence and answer – all in just seconds.

Watson:  Hugely Challenging.  On 1 CPU, Watson can take over 2 hours to answer a typical Jeopardy! question.  Watson must be parallelized, perhaps in ways similar to the brain, to simultaneously use 1000’s of compute cores to compete against humans in the 3-5 second range.
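A quick back-of-the-envelope calculation shows why that degree of parallelism matters.  Assuming perfectly linear speedup (which no real system achieves, so treat this as illustrative only):

```python
# Back-of-the-envelope only: assumes perfectly linear speedup, which no real
# system achieves, but it shows why ~3,000 cores puts 3-5 seconds within reach.
single_cpu_seconds = 2 * 60 * 60   # "over 2 hours" on one CPU
cores = 2880                       # Watson's reported core count

ideal_seconds = single_cpu_seconds / cores
print(f"{ideal_seconds:.1f} seconds")  # ~2.5 seconds, inside the target window
```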

Reaction Speed:  Toss-up

Humans:  Times the Buzz.  Slower raw reaction speed but potentially faster to the buzz.  Listens to the clue and anticipates when to buzz in.  “Timing the buzz” like this gives humans the fastest possible response time.

Watson:  Fast Hand.  More consistently delivers a fast reaction time, but ONLY IF and WHEN it can determine high enough confidence in time to buzz in.  Not able to anticipate when to buzz in based on listening to the clue, which gives the fastest possible response time to the humans.  Also has to press the same mechanical button as the humans do.

Compute Power:  Won’t Impact Outcome

Humans:  Requires 1 brain that fits in a shoebox, can run on a tuna‐fish sandwich and be cooled with a hand‐held paper fan.

Watson:  Hugely Challenging.  Needs 2,880 compute cores (10 refrigerators’ worth of size and space) requiring about 80 kW of power and 20 tons of cooling.

Betting and Strategy:  Advantage Watson

Humans:  Slower, typically less precise.  Uses strategy and adjusts based on situation and game position.

Watson: Faster, more accurate calculations.  Uses strategy and adjusts based on situation and game position.

Emotions:  Advantage Watson

Humans:  Yes.  Emotions can slow down and/or confuse processing.

Watson:  No. Does NOT get nervous, tired, upset or psyched out (but the Watson programming team does!).

In-Game Learning:  Advantage Humans

Humans:  Learn very quickly from context, voice expression and (most importantly) right and wrong answers.

Watson:  Watson does not have the ability to hear (no speech-to-text).  It is my understanding that Watson is “fed” the correct answer (in text) after each question so he can learn about the category even if he gets it wrong or does not answer.  However, I don’t believe he is “fed” the wrong answers.  This is a disadvantage for Watson.  As seen last night, it is not uncommon for him to answer with the same wrong answer as another contestant.  This also happened in the sparring rounds leading up to the taping of last night’s show.

As you can see, things are closely matched, but a slight advantage has to go to Ken and Brad.

And what about Watson’s face?

Another observation I made was how cool Watson’s avatar is.  It actually expresses what he is thinking (or processing).  The Watson avatar shares the graphic structure and tonality of the IBM Smarter Planet marketing campaign: a global map projection with a halo of “thought rays.”  The avatar features dozens of differentiated animation states that mirror the many stages of Jeopardy! gameplay – from choosing categories and answering clues, to winning and losing, to making Daily Double wagers and playing Final Jeopardy!.  Even Watson’s level of confidence – the numeric threshold that determines whether or not Watson will buzz in to answer – is made visible.  Watson’s stage presence is designed to depict the interior processes of the advanced computing system that powers it.  A significant portion of the avatar consists of colored threads orbiting around a central core.  The threads and thought rays that make up Watson’s avatar change color and speed depending on what happens during the game.  For example, when Watson is confident in an answer, the rays on the avatar turn green; they turn orange when Watson gets the answer wrong.  You will see the avatar speed up and activate when Watson’s algorithms are working hard to answer a clue.

I’ll be glued to the TV tonight and tomorrow.  Regardless of the outcome, this whole experience has been fascinating to me … so much so that I just published a new podcast on ECM, Content Analytics and Watson.

You can also visit my previous blog postings on Watson at: IBM at 100: A Computer Called Watson, “What is Content Analytics?, Alex”, 10 Things You Need to Know About the Technology Behind Watson and Goodbye Search … It’s About Finding Answers … Enter Watson vs. Jeopardy!

Introducing IBM at 100: Patents and Innovation

With the Jeopardy! challenge involving IBM Watson looming, I am feeling proud of my association with IBM, in part because IBM is an icon of business.  As a tribute, I plan to re-post a few of the notable achievements by IBM and IBMers from the past 100 years in an attempt to put the company’s contributions over those years into perspective.  Has IBM made a difference on our world … our planet?  What kind of impact has IBM had?  Is it really a smarter planet as a result of the past 100 years?

I hope to answer these and other questions through these posts.  A dedicated website has these postings and much more about IBM’s past 100 years.   There is also a great overview video.  Check back often.  New stories will be added throughout the centennial year.  Let’s start with Patents and Innovation … a cornerstone of IBM’s heritage and reputation.

IBM’s 100 Icons of Progress

In the span of a century, IBM has evolved from a small business that made scales, time clocks and tabulating machines to a globally integrated enterprise with 400,000 employees and a strong vision for the future. The stories that have emerged throughout our history are complex tales of big risks, lessons learned and discoveries that have transformed the way we work and live. These 100 iconic moments—these Icons of Progress—demonstrate our faith in science, our pursuit of knowledge and our belief that together we can make the world work better.

Patents and Innovation

By hiring engineer and inventor James W. Bryce in 1917, Thomas Watson Sr. showed his commitment to pure inventing. Bryce and his team established IBM as a long-term leader in the development and protection of intellectual property. By 1929, 90 percent of IBM’s products were the result of Watson’s investments in R&D. In 1940, the team invented a method for adding and subtracting using vacuum tubes—a basic building block of the fully electronic computers that transformed business in the 1950s. This pattern—using innovation to create intellectual property—shaped IBM’s history.

On January 26, 1939, James W. Bryce, IBM’s chief engineer, dictated a two-page letter to Thomas J. Watson, Sr., the company’s president. It was an update on the research and patents he had been working on. Today, the remarkable letter serves as a window into IBM’s long-held role as a leader in the development and protection of intellectual property.

Bryce was one of the most prolific inventors in American history, racking up more than 500 U.S. and foreign patents by the end of his career. In his letter to Watson, he described six projects, each of which would be considered a signature life achievement for the average person. They included research into magnetic recording of data, an investigation into the use of light rays in computing and plans with Harvard University for what would become one of the first digital computers. But another project was perhaps most significant. Wrote Bryce: “We have been carrying on an investigation in connection with the development of computing devices which do not employ the usual adding wheels, but instead use electronic effects and employ tubes similar to those used in radio work.”

The investigation bore fruit. On January 15, 1940, Arthur H. Dickinson, Bryce’s top associate and a world-beating inventor in his own right, submitted an application for a patent for “certain improvements in accounting apparatus.” In fact, the patent represented a turning point in computing history. Dickinson, under Bryce’s supervision, had invented a method for adding and subtracting using vacuum tubes—a basic building block of the fully electronic computers that began to appear in the 1940s and transformed the world of business in the 1950s.

This pattern—using innovation to create intellectual property—is evident throughout IBM’s history. Indeed, intellectual property has been strategically important at IBM since before it was IBM.

The full text of this article can be found on IBM at 100: http://www.ibm.com/ibm100/us/en/icons/patents/

“What is Content Analytics?, Alex”

“The technology behind Watson represents the future of data management and analytics.  In the real world, this technology will help us uncover insights in everything from traffic to healthcare.”

– John Cohn, IBM Fellow, IBM Systems and Technology Group

How can the same technology used to play Jeopardy! give you better business insight?

Why Watson matters

You have to start by understanding that IBM Watson DeepQA is the world’s most advanced question answering machine.  It uncovers answers by understanding the meaning buried in the context of a natural language question.  By combining advanced Natural Language Processing (NLP) and DeepQA automatic question answering technology, Watson represents the future of content and data management, analytics, and systems design.  IBM Watson leverages core content analysis, along with a number of other advanced technologies, to arrive at a single, precise answer within a very short period of time.  The business applications for this technology are limitless, starting with clinical healthcare, customer care, government intelligence and beyond.  I covered the technology side of Watson in my previous posting 10 Things You Need to Know About the Technology Behind Watson.

Amazingly, Watson works like the human brain to analyze the content of a Jeopardy! question.  First, it tries to understand the question to determine what is being asked; to do so, it must analyze the natural language text.  Next, it tries to find reasoned answers by analyzing a wide variety of disparate content, mostly in the form of natural language documents.  Finally, Watson assesses the relative likelihood that the answers it found are correct, based on a confidence rating.

A great example of the challenge is described by Stephen Baker in his book Final Jeopardy: Man vs. Machine and the Quest to Know Everything: “When 60 Minutes premiered, this man was U.S. President.”  Traditionally, it’s been difficult for a computer to understand what “premiered” means and that it’s associated with a date.  To a computer, “premiere” could also mean “premier”.  Is the question about a person’s title or a production opening?  Then it has to figure out the date when an entity called “60 Minutes” premiered, and then find out who was the U.S. President at that time.  In short, it requires a ton of contextual understanding.
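Here is a deliberately tiny sketch of the three stages described above … question analysis, candidate generation and confidence scoring … applied to that 60 Minutes clue.  Every helper below is a hypothetical stand-in, not one of Watson’s actual components:

```python
# Toy outline of the three stages: question analysis, candidate generation,
# confidence scoring. All helpers are hypothetical stand-ins for illustration.

def analyze_question(text):
    # Stage 1: determine what is being asked (here: a crude answer-type guess).
    answer_type = "person" if "this man" in text.lower() else "thing"
    return {"text": text, "answer_type": answer_type}

def generate_candidates(question, corpus):
    # Stage 2: scan natural-language sources for possible answers of that type.
    return [doc["answer"] for doc in corpus
            if doc["type"] == question["answer_type"]]

def score_candidates(question, candidates):
    # Stage 3: attach a confidence to each candidate and rank them.
    # (A real system scores evidence; this stand-in just ranks by position.)
    scored = [(cand, 1.0 / (rank + 1)) for rank, cand in enumerate(candidates)]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

corpus = [{"answer": "Lyndon B. Johnson", "type": "person"},
          {"answer": "CBS", "type": "thing"}]
question = analyze_question("When 60 Minutes premiered, this man was U.S. President.")
print(score_candidates(question, generate_candidates(question, corpus)))
```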

I am not talking about search here.  This is far beyond what search tools can do.  A recent Forrester report, Take Control Of Your Content, states that 45% of the US workforce spends three or more hours a week just searching for information.  This is completely inefficient.  See my previous posting Goodbye Search … It’s About Finding Answers … Enter Watson vs. Jeopardy! for more on this topic.

Natural Language Processing (NLP) can be leveraged in any situation where text is involved. Besides answering questions, it can help improve enterprise search results or even develop an understanding of the insight hidden in the content itself.  Watson leverages the power of NLP as the cornerstone to translate interactions between computers and human (natural) languages.

NLP involves a series of steps that make text understandable (or computable).  A critical step, lexical analysis, is the process of converting a sequence of characters into a set of tokens.  Subsequent steps leverage these tokens to perform entity extraction (people, places, things), concept identification (person A belongs to organization B) and the annotation of documents with this and other information.  A component of IBM Content Analytics (known as LanguageWare) performs the lexical analysis function in Watson as part of natural language processing.
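As a concrete (and hedged) illustration of those steps, here is tokenization followed by entity extraction using the open-source spaCy library … a stand-in chosen purely for illustration, not LanguageWare or IBM Content Analytics:

```python
# Illustration with the open-source spaCy library (not LanguageWare / ICA):
# lexical analysis (tokenization), then entity extraction over the result.
import spacy

nlp = spacy.load("en_core_web_sm")  # small English pipeline; installed separately
doc = nlp("John Smith joined IBM in New York in 2009.")

print([token.text for token in doc])                 # lexical analysis -> tokens
print([(ent.text, ent.label_) for ent in doc.ents])  # entities: people, places, things
```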

Why this matters to your business

Jeopardy! poses a similar set of contextual information challenges as those found in the business world today:

  • Over 80 percent of the information being stored is unstructured (i.e., text based).
  • Understanding that 80-plus percent isn’t simple.  As in Jeopardy! … subtle meaning, irony, riddles, acronyms, abbreviations and other complexities all present unique computing challenges, not found with structured data, when trying to derive meaning and insight.  This is where natural language processing (NLP) comes in.

The same core NLP technology used in Watson is available now to deliver business value today by unlocking the insights trapped in the massive amounts of unstructured information in the many systems and formats you have today.  Understanding the content, context and value of this unstructured information presents an enormous opportunity for your business.  This is already being done today in a number of industries by leveraging IBM Content Analytics.

IBM Content Analytics (ICA) itself is a platform to derive rapid insight.  It can transform raw information into business insight quickly without building models or deploying complex systems.  Enabling all knowledge workers to derive insight in hours or days … not weeks or months.  It helps address industry specific problems such as healthcare treatment effectiveness, fraud detection, product defect detection, public safety concerns, customer satisfaction and churn, crime and terrorism prevention and more.  Here are some actual customer examples:

Healthcare Research – Like most healthcare providers, BJC HealthCare had a treasure trove of historical information trapped in unstructured clinical notes and diagnostic reports containing essential information for the study of disease progression, treatment effectiveness and long-term outcomes.  Their existing Biomedical Informatics (BMI) resources were disjointed and non-interoperable, available only to a small fraction of researchers, and frequently redundant, with no capability to tap into the wealth of research information trapped in unstructured clinical notes, diagnostic reports and the like.

With IBM Content Analytics, BJC and university researchers are now able to analyze unstructured information to answer key questions that were previously unanswerable.  Questions like: Does the patient smoke?  How often and for how long?  If smoke-free, for how long?  What home medications is the patient taking?  What is the patient sent home with?  What was the diagnosis and what procedures were performed on the patient?  BJC now has deeper insight into their medical information and can uncover trends and patterns within their content, to provide better healthcare to their patients.
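To make the smoking-status example concrete, here is a minimal rule-based sketch.  Real content analytics relies on trained linguistic annotators rather than a couple of regular expressions … the patterns below are invented purely for illustration:

```python
# Minimal rule-based sketch of pulling smoking status from free-text clinical
# notes. Real systems use trained NLP annotators; these patterns are invented.
import re

SMOKER = re.compile(r"\b(smokes|smoker|tobacco use)\b", re.I)
NON_SMOKER = re.compile(r"\b(never smoked|non-?smoker|quit smoking)\b", re.I)

def smoking_status(note: str) -> str:
    # Check negated forms first so "quit smoking" doesn't read as "smoker".
    if NON_SMOKER.search(note):
        return "non-smoker"
    if SMOKER.search(note):
        return "smoker"
    return "unknown"

notes = [
    "Patient is a smoker, 1 pack/day for 10 years.",
    "Denies tobacco use; quit smoking in 2005.",
    "No relevant history recorded.",
]
for note in notes:
    print(smoking_status(note), "<-", note)
```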

Customer Satisfaction – Identifying customer satisfaction trends about products, services and personnel is critical to most businesses.  The Hertz Corporation and Mindshare Technologies, a leading provider of enterprise feedback solutions, are using IBM Content Analytics software to examine customer survey data, including text messages, to better identify car and equipment rental performance levels for pinpointing and making the necessary adjustments to improve customer satisfaction levels.

By using IBM Content Analytics, companies like Hertz can drive new marketing campaigns or modify their products and services to meet the demands of their customers. “Hertz gathers an amazing amount of customer insight daily, including thousands of comments from web surveys, emails and text messages. We wanted to leverage this insight at both the strategic level and the local level to drive operational improvements,” said Joe Eckroth, Chief Information Officer, the Hertz Corporation.

For more information about ICA at Hertz: http://www-03.ibm.com/press/us/en/pressrelease/32859.wss

Research Analytics – To North Carolina State University, the essence of a university is more than education – it is the advancement and dissemination of knowledge in all its forms.  One of the main issues faced by NC State was dealing with the vast number of data sources available to them.  The university sought a solution to efficiently mine and analyze vast quantities of data to better identify companies that could bring NC State’s research to the public.  The objective was a solution designed to parse the content of thousands of unstructured information sources, perform data and text analytics and produce a focused set of useful results.

Using IBM Content Analytics, NC State was able to reduce the time needed to find target companies from months to days.  The result is the identification of new commercialization opportunities, with tests yielding a 300 percent increase in the number of candidates.  By obtaining insight into their extensive content sources, NC State’s Office of Technology Transfer was able to find more effective ways to license technologies created through research conducted at the university. “What makes the solution so powerful is its ability to go beyond conventional online search methods by factoring context into its results.” – Billy Houghteling, executive director, NC State Office of Technology Transfer.

For more information about ICA at NC State: http://www-01.ibm.com/software/success/cssdb.nsf/CS/SSAO-8DFLBX?OpenDocument&Site=software&cty=en_us

You can put the technology of tomorrow to work for you today, by leveraging the same IBM Content Analytics capability helping to power Watson.  To learn more about all the IBM ECM products utilizing Watson technology, please visit these sites:

IBM Content Analytics: http://www-01.ibm.com/software/data/content-management/analytics/

IBM Classification Module: http://www-01.ibm.com/software/data/content-management/classification/

IBM eDiscovery Analyzer: http://www-01.ibm.com/software/data/content-management/products/ediscovery-analyzer/

IBM OmniFind Enterprise Edition: http://www-01.ibm.com/software/data/enterprise-search/omnifind-enterprise/

You can also check out the IBM Content Analytics Resource Center or watch the “what it is and why it matters” video.

I’ll be at the Jeopardy! viewing party in Washington, DC on February 15th and 16th … hope to see you there.  In the meantime, leave me your thoughts and questions below.

WikiLeaks Disclosures … A Wakeup Call for Records Management

Earlier in my professional career, I used to hit the snooze button 4 or 5 times every morning when the alarm went off. I did this for years until I realized it was the root cause of being late to work and getting my wrists slapped far too often. It seems simple, but we all hit the snooze button even though we know the repercussions. Guess what … the repercussions are getting worse.

For years, the federal government has been hitting the snooze button on electronic records management. The GAO has been critical of the federal government’s ability to manage records and information, saying there is “little assurance that [federal] agencies are effectively managing records, including e-mail records, throughout their life cycle.” During the past few administrations, similar GAO reports and/or embarrassing public information mismanagement incidents have reminded us (and not in a good way) of the importance of good recordkeeping and document control. You may recall incidents over missing emails involving both the Bush and Clinton administrations. Now we have WikiLeaks blabbing to the world with embarrassing disclosures of State Department and military documents. This takes the impact of information mismanagement to a whole new level of public embarrassment, exposure and risk. Although it should not surprise anyone that this is happening, considering the previous incidents and GAO warnings, it has still caused quite a stir and had a measurable impact. Corporations should see this as a cautionary tale and a sign of things to come … so start preparing now.

Start by asking yourself: what would happen if your sensitive business records were made publicly available and the entire world was talking, blogging and tweeting about them? For most organizations, this is a very scary thought. Fortunately, there are solutions and best practices available today to protect enterprises from these scenarios.

Implement Electronic Records Management: Update your document control policies to include the handling of sensitive information, including official records. Do you even have an Information Lifecycle Governance strategy today? Start by getting the key stakeholders from Legal, Records and IT involved, at a minimum, and ensure you have top-down executive support. Implement an electronic records program and system based on an ECM repository you can trust (see my two earlier blogs on trusting repositories). This will put the proper controls, security and policy enforcement in place to govern information over its lifespan, including defensible disposition. Getting rid of things when you are supposed to dramatically reduces the risk of improper disclosure. Although implementing a records management system has many benefits, including reducing eDiscovery costs and risks, it is also the cornerstone of preventing information from falling into the wrong hands. Standards (DoD 5015.02-STD, ISO 15489), best practices (ARMA GARP) and communities (CGOC) exist to guide and accelerate the process. Records management can be complemented by Information Rights Management and/or Data Loss Prevention (DLP) technology for enhanced security and control options.
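To make “defensible disposition” concrete, here is a minimal sketch of schedule-driven disposition … the record classes and retention periods below are invented for illustration, not taken from any standard or product:

```python
# Toy sketch of schedule-driven disposition: each record class carries a
# retention period, and anything past its disposal date is flagged for review.
# Class names and retention periods are invented for illustration only.
from datetime import date, timedelta

RETENTION_SCHEDULE = {        # record class -> retention period in days
    "correspondence": 3 * 365,
    "contract": 7 * 365,
    "hr-file": 10 * 365,
}

def disposal_date(record_class: str, declared: date) -> date:
    # Disposal is due once the retention period has elapsed since declaration.
    return declared + timedelta(days=RETENTION_SCHEDULE[record_class])

records = [("memo-001", "correspondence", date(2007, 5, 1)),
           ("ctr-042", "contract", date(2009, 3, 15))]

today = date(2011, 2, 1)
for rec_id, rec_class, declared in records:
    due = disposal_date(rec_class, declared)
    status = "eligible for disposition" if due <= today else f"retain until {due}"
    print(rec_id, "->", status)
```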

Leverage Content Analytics: Use content analytics to understand employee sentiment as well as to detect patterns of behavior that could lead to intentional disclosure of information. These technologies leverage text and content analytics to identify disgruntled employees before an incident occurs, enabling proactive investigation and management of potentially troublesome situations. They can also serve as background for any investigation that may happen in the event of an incident. Enterprises should proactively monitor for these risks and situations … as an ounce of prevention is worth a pound of cure. Content analytics can also be extended with predictive analytics to evaluate the probability of an incident and the associated exposure.
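As a hedged illustration of the idea (not how any IBM product actually works), here is a deliberately tiny lexicon-based scorer that flags messages crossing a negativity threshold for human review … the word list and threshold are invented:

```python
# Deliberately tiny lexicon-based scorer that flags messages crossing a
# negativity threshold for human review. The word list and threshold are
# invented; real content analytics uses far richer linguistic models.
NEGATIVE_WORDS = {"unfair", "furious", "leak", "betrayed", "quit", "revenge"}
ALERT_THRESHOLD = 2  # flag a message containing two or more hits

def negativity(message: str) -> int:
    # Count lexicon hits, ignoring trailing punctuation on each word.
    return sum(1 for word in message.lower().split()
               if word.strip(".,!?") in NEGATIVE_WORDS)

messages = [
    "This review process is unfair and I am furious.",
    "Meeting moved to 3pm, see agenda.",
]
for msg in messages:
    if negativity(msg) >= ALERT_THRESHOLD:
        print("FLAG FOR REVIEW:", msg)
```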

Leverage Advanced Case Management: Investigating and remediating any risk or fraud scenario requires advanced case management. These case centric investigations are almost always ad-hoc processes with unpredictable twists and turns. You need the ad-hoc and collaborative nature of advanced case management to serve as a process backbone as the case proceeds and ultimately concludes. Having built-in audit trails, records management and governance ensures transparency into the process and minimizes the chance of any hanky-panky. Enterprises should consider advanced case management solutions that integrate with ECM repositories and records management for any content-centric investigation.

This adds up to one simple call to action … stop hitting the snooze button and take action. Any enterprise could be a target and ultimately a victim. The stakes are higher than ever before. Leverage solutions like records management, content analytics and advanced case management to improve your organization’s ability to secure, control and retain documents while monitoring for and remediating potentially risky disclosure situations.

Leave me your thoughts and ideas. I’ll read and respond later … after I am done hitting the snooze button a few times (kidding of course).