Goodbye Search … It’s About Finding Answers … Enter Watson vs. Jeopardy!

Does anyone really like searching for stuff?  It conjures up images of looking through old boxes in the attic to find that one thing you can never seem to lay your hands on.  Recently, I went looking for my junior high school yearbook when someone “friended” me on Facebook and I couldn’t remember them.  The experience was exasperating: I looked through at least 20 boxes of stuff, started sneezing from the dust, and never found the darn yearbook.  As a result, I am still not sure I was actually in the same science class as this person.  The experience reminded me of today’s enterprise search limitations.  I blogged about this recently as part of my Top 10 Pet Peeves for 2010.

If you think about it … no one actually likes the searching part.  It’s no fun, nor is it intuitive.  You have to figure out a “query” or “search string” and hope for the best.  Maybe you’ll get lucky and maybe not.  It’s what I call the “search and hope” model, and it can be even more frustrating than my attic experience (I feel a sneeze coming on).

In an AIIM Industry Watch survey earlier this year, one of the key findings was that 72% of the people surveyed say it’s harder, or much harder, to find information and documents held on their own internal systems compared to the Web.  That makes you scratch your head for sure.

In the end, no one “wants” to search anyway … it’s the thing we seek that we care about, not the searching process.  All I wanted was an answer to my question … did I actually know this former classmate?

IBM has been working on systems to find answers since the 1950s, when the first steps were taken with research on machine-based learning.  Fifty-plus years (and many millions of dollars) later, history is being made.  An IBM computing system (Watson) will play Jeopardy! on television against Ken Jennings and Brad Rutter, the two all-time most successful contestants, in a series of battles to be aired February 14-16.  The series will feature two matches to see if a machine can compete by interpreting natural-language questions in the Jeopardy! format, using text analysis (natural language processing), automated classification and other technologies to find the correct answers.  Here is a brief overview of Watson.

Watson must find the answers in the same timeframe as the two former champs: processing and understanding each question, researching possible answers, determining its response, and buzzing in faster than the competition … plus it has to be right. WOW!
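For the technically curious, the general shape of this approach can be sketched in a few lines of toy Python. To be clear, this is emphatically not IBM’s DeepQA, just an illustration of the “generate candidate answers, then score their supporting evidence” pattern; every candidate and passage below is made up.

```python
# Toy illustration of the "generate candidates, score the evidence" idea
# behind open-domain question answering. NOT IBM's DeepQA -- a sketch only.

def score_candidate(clue_keywords, evidence_passages):
    """Count how often clue keywords co-occur with a candidate's evidence."""
    return sum(
        1
        for passage in evidence_passages
        for word in clue_keywords
        if word in passage.lower()
    )

def answer(clue, candidates):
    """Pick the candidate whose evidence best matches the clue."""
    keywords = [w for w in clue.lower().split() if len(w) > 3]
    scored = {
        name: score_candidate(keywords, passages)
        for name, passages in candidates.items()
    }
    return max(scored, key=scored.get)

# Hypothetical evidence snippets keyed by candidate answer.
candidates = {
    "Toronto": ["toronto is the capital of ontario"],
    "Chicago": ["chicago has two airports named for world war ii heroes",
                "o'hare and midway serve the chicago area"],
}
print(answer("Its largest airport is named for a World War II hero", candidates))
# -> Chicago
```

A real system does vastly more (parsing the clue, generating hundreds of candidates, weighing thousands of evidence sources, estimating its own confidence before buzzing in), but the keep-the-best-supported-answer skeleton is the same.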

Jeopardy! is the No. 1-rated quiz show in syndication, with more than 9 million daily viewers. Watson has already passed the test that Jeopardy! contestants take to make it on the show and has been warming up by competing against other former Jeopardy! players.  The top prize for the contest is $1 million, with $300,000 for second and $200,000 for third. Jennings and Rutter plan to donate half their winnings to charity; IBM will donate all of its winnings.

I can’t wait to see this. I suspect my fascination has to do with my being involved with content analytics as part of my job at IBM.  Or maybe it’s just about the coolest thing ever.

Either way, finding answers sure beats searching and hoping … and this ought to be very, very interesting.

Here is a deeper explanation of the DeepQA technology behind Watson for those who are as fascinated by this as I am.

WikiLeaks Disclosures … A Wakeup Call for Records Management

Earlier in my professional career, I used to hit the snooze button 4 or 5 times every morning when the alarm went off. I did this for years until I realized it was the root cause of being late to work and getting my wrists slapped far too often. It seems simple, but we all hit the snooze button even though we know the repercussions. Guess what … the repercussions are getting worse.

For years, the federal government has been hitting the snooze button on electronic records management. The GAO has been critical of the federal government’s ability to manage records and information, saying there’s “little assurance that [federal] agencies are effectively managing records, including e-mail records, throughout their life cycle.” During the past few administrations, similar GAO reports and/or embarrassing public information-mismanagement incidents have reminded us (and not in a good way) of the importance of good recordkeeping and document control. You may recall incidents over missing emails involving both the Bush and Clinton administrations. Now we have WikiLeaks blabbing to the world with embarrassing disclosures of State Department and military documents, taking the impact of information mismanagement to a whole new level of public embarrassment, exposure and risk. Although it should not surprise anyone that this is happening, considering the previous incidents and GAO warnings, it has still caused quite a stir and had a measurable impact. Corporations should see this as a cautionary tale and a sign of things to come … so start preparing now.

Start by asking yourself: what would happen if your sensitive business records were made publicly available and the entire world was talking, blogging and tweeting about them? For most organizations, this is a very scary thought. Fortunately, there are solutions and best practices available today to protect enterprises from these scenarios.

Implement Electronic Records Management: Update your document control policies to include the handling of sensitive information, including official records. Do you even have an Information Lifecycle Governance strategy today? Start by getting the key stakeholders from Legal, Records and IT involved, at a minimum, and ensure you have top-down executive support. Implement an electronic records program and system based on an ECM repository you can trust (see my two earlier blogs on trusting repositories). This will put the proper controls, security and policy enforcement in place to govern information over its lifespan, including defensible disposition. Getting rid of things when you are supposed to dramatically reduces the risk of improper disclosure. Although implementing a records management system has many benefits, including reducing eDiscovery costs and risks, it is also the cornerstone of preventing information from falling into the wrong hands. Standards (DoD 5015.02-STD, ISO 15489), best practices (ARMA GARP) and communities (CGOC) exist to guide and accelerate the process. Records management can be complemented by Information Rights Management and/or Data Loss Prevention (DLP) technology for enhanced security and control options.
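To make “defensible disposition” a bit more concrete, here is a minimal Python sketch of the core retention check. The record classes and retention periods are hypothetical; a real records system drives this from a formal file plan and retention schedule, with legal holds enforced by the repository itself.

```python
from datetime import date, timedelta

# Hypothetical retention schedule: record class -> retention period.
RETENTION_SCHEDULE = {
    "invoice": timedelta(days=7 * 365),       # e.g., keep 7 years
    "meeting_notes": timedelta(days=2 * 365), # e.g., keep 2 years
}

def is_eligible_for_disposition(record_class, declared_on, on_legal_hold, today=None):
    """A record may be disposed of only when its retention period has
    expired AND it is not subject to a legal hold."""
    today = today or date.today()
    expires = declared_on + RETENTION_SCHEDULE[record_class]
    return today >= expires and not on_legal_hold

# Example: a 2003 invoice with no hold is past its 7-year retention.
print(is_eligible_for_disposition("invoice", date(2003, 5, 1), False,
                                  today=date(2010, 12, 15)))  # True
```

The point of the two-part test is the “defensible” part: you can show an auditor or a court exactly why each item was deleted, and that nothing under hold ever was.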

Leverage Content Analytics: Use content analytics to understand employee sentiment as well as detect any patterns of behavior that could lead to intentional disclosure of information. These technologies leverage text and content analytics to identify disgruntled employees before an incident occurs, enabling proactive investigation and management of potentially troublesome situations. They can also serve as background for any investigation that may happen in the event of an incident. Enterprises should proactively monitor for these risks and situations … as an ounce of prevention is worth a pound of cure. Content analytics can also be extended with predictive analytics to evaluate the probability of an incident and the associated exposure.
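As a deliberately crude illustration (and not any particular vendor’s product), the kind of signal these tools surface can be sketched as keyword- and pattern-based risk scoring over messages. Real content analytics uses trained NLP models, entity extraction and behavioral baselines rather than a word list.

```python
import re

# Hypothetical risk indicators, for illustration only.
NEGATIVE_TERMS = {"unfair", "furious", "quit", "payback", "leak"}
EXFIL_PATTERNS = [r"\busb\b", r"personal e?mail", r"\bdropbox\b"]

def risk_score(message: str) -> int:
    """Crude score: count negative-sentiment terms and exfiltration hints."""
    text = message.lower()
    score = sum(term in text for term in NEGATIVE_TERMS)
    score += sum(bool(re.search(p, text)) for p in EXFIL_PATTERNS)
    return score

msg = "This review was unfair. I'm copying the files to a USB drive tonight."
print(risk_score(msg))  # 2 -> flag for proactive review
```

Even this toy version shows the shape of the idea: combine sentiment signals with behavioral ones, and escalate when both spike for the same person.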

Leverage Advanced Case Management: Investigating and remediating any risk or fraud scenario requires advanced case management. These case-centric investigations are almost always ad hoc processes with unpredictable twists and turns. You need the ad hoc and collaborative nature of advanced case management to serve as a process backbone as the case proceeds and ultimately concludes. Having built-in audit trails, records management and governance ensures transparency into the process and minimizes the chance of any hanky-panky. Enterprises should consider advanced case management solutions that integrate with ECM repositories and records management for any content-centric investigation.
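The built-in audit trail idea can be sketched very simply: every action on a case is appended to a log that is never edited in place, so the investigation’s history stays transparent after the fact. The class and field names here are invented for illustration.

```python
from datetime import datetime

class InvestigationCase:
    """Toy case object with an append-only audit trail."""

    def __init__(self, case_id, opened_by):
        self.case_id = case_id
        self.documents = []
        self._audit_log = []          # append-only; never edited in place
        self._record("case_opened", opened_by)

    def _record(self, action, actor):
        self._audit_log.append((datetime.utcnow().isoformat(), actor, action))

    def add_document(self, doc_ref, actor):
        self.documents.append(doc_ref)
        self._record(f"added_document:{doc_ref}", actor)

    def audit_trail(self):
        return list(self._audit_log)  # callers get a copy, not the log

case = InvestigationCase("CASE-042", "j.doe")
case.add_document("ecm://repo/doc/123", "j.doe")
for entry in case.audit_trail():
    print(entry)
```

A production system would of course persist this in a trusted repository with its own access controls, rather than in memory.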

This adds up to one simple call to action … stop hitting the snooze button and take action. Any enterprise could be a target and ultimately a victim. The stakes are higher than ever before. Leverage solutions like records management, content analytics and advanced case management to improve your organization’s ability to secure, control and retain documents while monitoring for, and remediating, potentially risky disclosure situations.

Leave me your thoughts and ideas. I’ll read and respond later … after I am done hitting the snooze button a few times (kidding of course).

Top 10 ECM Pet Peeve Predictions for 2011

It’s that time of the year when all of the prognosticators, futurists and analysts break out the crystal balls and announce their predictions for the coming year.  Not wanting to miss the fun, I am taking a whack at it myself, but with a slightly more irreverent approach … a Top 10 of my own.  I hope this goes over as well as the last time I pontificated about the future with Crystal Ball Gazing … Enterprise Content Management 2020.

I don’t feel the need to cover all of the cool or obvious technology areas that my analyst friends would.  A number of social media, mobile computing and cloud computing topics would be on any normal ECM predictions list for 2011.  I do believe that social media, combined with mobile computing and delivered from the cloud, will forever change the way we interact with content, but this list is more my own technology pet peeve list.  I’ve decided to avoid those topics as there is plenty being written about all three already.  I’ve also avoided the emerging fringe ECM technology topics such as video search, content recommendation engines, sentiment analysis and many more.  There is plenty of time to write about those in the future.  Getting this list to just 10 items wasn’t easy … I really wanted to write something more specific on how lousy most ECM metadata is, but decided to keep the list to these 10 items.  As such, ECM metadata quality is on the cutting room floor.  So without further ado … Craig’s Top 10 Pet Peeve Predictions for 2011:

 
Number 10:  Enterprise Search Results Will Still Suck
Despite a continuing increase in software sales and an overall growing market, many enterprises haven’t figured out that search is the ultimate garbage in, garbage out model.  Most end users are frustrated at their continued inability to find what they need when they need it.  Just ask any room full of people.  Too many organizations simply decide to index everything, thinking that’s all you need to do … bad idea.  There is no magic pill here; search results will ultimately improve when organizations (1) eliminate the unnecessary junk that keeps cluttering up search results and (2) consistently classify information, based on good metadata, to improve findability.  Ultimately, enterprise search deployments with custom relevance models can deliver high-quality results, but that’s a pipedream for most organizations today.  The basics need to be done first, and there is a lot of ignorance on this topic.  Unfortunately, very little changes in 2011, but we can hope.
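A minimal sketch of those two fixes, with hypothetical document fields and rules: drop the junk before it ever reaches the index, and classify what remains consistently from its metadata.

```python
# Hypothetical pre-indexing pipeline: (1) keep junk out of the index,
# (2) assign consistent categories from metadata before indexing.

JUNK_EXTENSIONS = {".tmp", ".bak", ".log"}

def should_index(doc):
    """Fix (1): never let junk into the index in the first place."""
    return (doc["size_bytes"] > 0
            and doc["extension"] not in JUNK_EXTENSIONS
            and not doc.get("is_duplicate", False))

def classify(doc):
    """Fix (2): derive a consistent category from metadata."""
    dept = doc.get("department", "unknown").lower()
    return {"finance": "financial-record",
            "legal": "legal-document"}.get(dept, "general")

docs = [
    {"name": "q3.xlsx", "size_bytes": 4096, "extension": ".xlsx", "department": "Finance"},
    {"name": "old.bak", "size_bytes": 1024, "extension": ".bak"},
]
index = [(d["name"], classify(d)) for d in docs if should_index(d)]
print(index)  # [('q3.xlsx', 'financial-record')]
```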
 
Number 9:  Meaning Based Technologies Are Not That Meaningful
Meaningful to whom?  It’s the user, business or situation context that determines what is meaningful.  Any vendor with a machine-based technology claiming it can figure out meaning without understanding the context of the situation is stretching the truth.  Don’t be fooled by this brand of snake oil.  Without the ability to customize to specific business and industry situations, these “meaning”-based approaches don’t work … or are of limited value.  Vendors currently making these claims will “tone down” their rhetoric in 2011 as the market becomes more educated and sophisticated on this topic.  People will realize that the emperor has no clothes in 2011.
 
Number 8:  Intergalactic Content Federation Is Exposed As A Myth
The ability to federate every ECM repository for every use case is wishful thinking.  Federation works very well when trying to access, identify, extract and re-use content for applications like search, content analytics or LOB application access.  It works poorly or inconsistently when trying to directly control content in foreign repositories for records management and especially eDiscovery.  There are too many technology hurdles, such as security models, administrator access, lack of API support and incompatible data models, that make this very hard.  For use cases like eDiscovery, many repositories don’t even support placing a legal hold.  Trying to do unlimited full records federation or managing enterprise legal holds in place isn’t realistic yet … and may never be.  It works well in certain situations only.  I suppose all of this could be solved with enough time and money, but you could say that about anything; it’s simply not practical to try to use content federation for every conceivable use case, and that won’t change in 2011.  This is another reason why we need the Content Management Interoperability Services (CMIS) standard.
 
Number 7:  CMIS Adoption Grows, Will Be Demanded From All Content, Discovery and Archive Vendors
Good segue, huh?  If federation is the right approach (it is), but current technology prevents it from becoming a reality, then we need a standard we can all invest in and rely on.  CMIS already has significant market momentum and adoption.  Originally introduced and sponsored by IBM, EMC, Alfresco, OpenText, SAP and Oracle, it is now an OASIS standard, and the list of members has expanded to many other vendors.  IBM is already shipping CMIS-enabled solutions and repositories, as are many others.  However, some vendors still need encouragement.  None of the archiving or eDiscovery point-solution vendors has announced support for CMIS yet.  I expect to see market pressure in 2011 on any content-related vendor not supporting CMIS … so get ready, Autonomy, Symantec, Guidance Software and others.  The days of closed proprietary interfaces are over.
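For a taste of what vendor-neutral repository access looks like, here is a short sketch using Apache Chemistry’s open-source cmislib client for Python. The endpoint URL and credentials are placeholders; the point is that the same CMIS Query Language works against any compliant repository, regardless of vendor.

```python
# Querying a CMIS-compliant repository with Apache Chemistry's cmislib
# (pip install cmislib). URL and credentials below are placeholders --
# substitute your repository's AtomPub binding endpoint.
from cmislib import CmisClient

client = CmisClient("http://cms.example.com/cmis/atom", "admin", "admin")
repo = client.defaultRepository

# Standard CMIS Query Language, independent of the backing repository.
results = repo.query(
    "SELECT cmis:name, cmis:objectId FROM cmis:document "
    "WHERE cmis:lastModificationDate > TIMESTAMP '2010-01-01T00:00:00.000Z'"
)
for doc in results:
    print(doc.getName())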
 
Number 6:  ACM Blows Up BPM (in a good way)
Advanced Case Management will forever change the way we build, deploy and interact with process- and content-centric applications (or workflow applications, if you are stuck in the ’90s).  Whether you call it Advanced Case Management, Adaptive Case Management or something else, it’s only a matter of time before the old “wait for months for your application” model is dead.  Applications will be deployed in days and customized in hours or even minutes.  IT and business will have a shared success model in the adoption and use of these applications.  This one is a no-brainer.  ACM takes off in a big way in 2011.
 
Number 5:  Viral ECM Technologies without Adequate Governance Models Get Squeezed
In general, convenience seems to trump governance, but not this year.  The viral deployment model is both a blessing and a curse.  IT needs to play a stronger role in governing how these collaborative sites get deployed, used and eventually decommissioned.  There is far too much cost associated with eDiscovery and the inability to produce documents when needed for this not to happen.  There are way too many unknown collaborative sites containing important documents and records, and many have been abandoned, causing increased infrastructure costs and risk.  The headaches associated with viral deployments force IT to put its foot down in 2011.  The lack of governance around these viral collaborative sites becomes a major blocker to their deployment starting in 2011.
 
Number 4:  Scalable and Trusted Content Repositories Become Essential
Despite my criticism of AIIM’s labeling of the “Systems of Engagement” concept in my last blog, they’ve nailed the basic idea.  “Systems or Repositories of Record” will be recognized as essential starting in 2011.  We expect information to grow 44-fold over the next 10 years, with 85% of it being unstructured … yikes!  We’re going to need professional, highly scalable, trusted, defensible repositories of record to support the expected volume and governance requirements, especially as ECM applications embrace content outside the firewall.  Check out my two postings earlier this year on Trusted Content Repositories for more on this topic (Learning How To Trust … and Step 1 – Can You Trust Your Repository?).
 
Number 3:  Classification Technology Is Recognized As Superior To Human Based Approaches
For years, I’ve listened to many, many debates on human classification versus machine-based classification.  Information is growing so out of control that it’s simply not possible to even read it all … much less decide how it should be classified and actually do it correctly.  The facts are simple: studies show humans are 92% accurate at best.  The problem is that humans opt out sometimes.  We get busy, get sick, have to go home or simply refuse to do certain things.  When it comes to classification, we participate about 33% of the time on average.  Overall, this makes our effective accuracy more like 30%, not 92%.  By contrast, technology-based approaches have consistently hit 70-80% over the years, and recently we’ve seen accuracy levels as high as 98.7%.  Technology approaches cost less too.  2011 is the year of auto-classification.
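The arithmetic behind that effective-accuracy claim is worth spelling out, using the figures cited above:

```python
# Effective accuracy = participation rate x accuracy when participating.
# Figures are the ones cited in the paragraph above.
human_accuracy = 0.92   # best-case accuracy when a human does classify
participation = 0.33    # share of items a human actually gets to

effective_accuracy = human_accuracy * participation
print(f"{effective_accuracy:.0%}")  # ~30% -- far below the headline 92%
```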
 
Number 2:  Business Intelligence Wakes Up – The Other 85% Does Matter
It’s a well-known fact that ~85% of the information being stored today is unstructured, yet most BI and data warehouse deployments focus on structured data (only 15% of the available information to analyze).  What about the rest of it?  The explosion of content analysis tools over the last few years has made the other 85% more understandable and easier to analyze than ever before, and that will continue into 2011.  BI, data warehouse and analytics solutions will increasingly include all forms of enterprise content, whether inside or outside the firewall.
 
Number 1:  IT Waste Management Becomes a Top Priority
The “keep everything forever” model has failed.  Too many digital dumpsters litter the enterprise.  It’s estimated that over 90% of the information being stored today is duplicated at least once and 70% is already past its retention date.  It turns out buying more storage isn’t cheaper once you add in the management staff, admin costs, training, power and so forth.  One customer told me they’d have to build a new data center every 18 months just to keep storing everything.  In 2011, I expect every organization to more aggressively start assessing and decommissioning unnecessary content, as well as the associated systems.  The new model is: keep what you need to keep, for only as long as you need to keep it based on value and/or obligation, and defensibly dispose of the rest.
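To illustrate the duplication point, exact duplicates can be found with nothing more than a hash of each file’s bytes; a real assessment tool layers near-duplicate detection and retention metadata on top of this. The path below is a placeholder.

```python
import hashlib
from pathlib import Path

def find_duplicates(root):
    """Group files by a hash of their content; any group with more than
    one member is storing the same bytes more than once."""
    by_hash = {}
    for path in Path(root).rglob("*"):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            by_hash.setdefault(digest, []).append(path)
    return [paths for paths in by_hash.values() if len(paths) > 1]

for group in find_duplicates("/shared/drive"):  # placeholder path
    print(f"{len(group)} copies: {group}")
```

Run something like this against a typical shared drive and the digital-dumpster problem stops being abstract very quickly.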
 
I hope you enjoyed reading this as much as I enjoyed writing it, and that you agree with me on most of these.  If not, let me know where you think I am wrong, or list a few predictions or technology pet peeves of your own.